| id | title | categories | abstract |
|---|---|---|---|
1310.5430 | Validating Network Value of Influencers by means of Explanations | cs.SI physics.soc-ph | Recently, there has been significant interest in social influence analysis.
One of the central problems in this area is the problem of identifying
influencers, such that by convincing these users to perform a certain action
(like buying a new product), a large number of other users get influenced to
follow the action. The client of such an application is a marketer who would
target these influencers for marketing a given new product, say by providing
free samples or discounts. It is natural that before committing resources for
targeting an influencer the marketer would be interested in validating the
influence (or network value) of influencers returned. This requires digging
deeper into analytical questions such as: who their followers are, and on
which actions (or products) they are influential. However, the current
approaches to identifying influencers largely work as a black box in this
respect. The goal of this paper is to open up the black box, address these
questions and provide informative and crisp explanations for validating the
network value of influencers.
We formulate the problem of providing explanations (called PROXI) as a
discrete optimization problem of feature selection. We show that PROXI is not
only NP-hard to solve exactly, it is NP-hard to approximate within any
reasonable factor. Nevertheless, we show interesting properties of the
objective function and develop an intuitive greedy heuristic. We perform
detailed experimental analysis on two real-world datasets, Twitter and
Flixster, and show that our approach is useful in generating concise and
insightful explanations of the influence distribution of users and that our
greedy algorithm is effective and efficient with respect to several baselines.
|
1310.5463 | Engineering Crowdsourced Stream Processing Systems | cs.DB cs.AI cs.SE | A crowdsourced stream processing system (CSP) is a system that incorporates
crowdsourced tasks in the processing of a data stream. This can be seen as
enabling crowdsourcing work to be applied to a sample of large-scale data at
high speed, or equivalently, enabling stream processing to employ human
intelligence. It also leads to a substantial expansion of the capabilities of
data processing systems. Engineering a CSP system requires the combination of
human and machine computation elements. From a general systems theory
perspective, this means taking into account inherited as well as emerging
properties from both these elements. In this paper, we position CSP systems
within a broader taxonomy, outline a series of design principles and evaluation
metrics, present an extensible framework for their design, and describe several
design patterns. We showcase the capabilities of CSP systems by performing a
case study that applies our proposed framework to the design and analysis of a
real system (AIDR) that classifies social media messages during time-critical
crisis events. Results show that compared to a pure stream processing system,
AIDR can achieve a higher data classification accuracy, while compared to a
pure crowdsourcing solution, the system makes better use of human workers by
requiring much less manual work effort.
|
1310.5468 | Message-Passing Algorithms for Optimal Utilization of Cognitive Radio
Networks | cs.IT cond-mat.stat-mech cs.NI math.IT | Cognitive Radio has been proposed as a key technology to significantly
improve spectrum usage in wireless networks by enabling unlicensed users to
access unused resources. We present new algorithms that are needed for the
implementation of opportunistic scheduling policies that maximize the
throughput utilization of resources by secondary users, under maximum
interference constraints imposed by existing primary users. Our approach is
based on the Belief Propagation (BP) algorithm, which is advantageous due to
its simplicity and potential for distributed implementation. We examine
convergence properties and evaluate the performance of the proposed BP
algorithms via simulations and demonstrate that the results compare favorably
with a benchmark greedy strategy.
|
1310.5479 | Applications of Large Random Matrices in Communications Engineering | cs.IT math.IT | This work gives an overview of analytic tools for the design, analysis, and
modelling of communication systems which can be described by linear vector
channels such as y = Hx+z where the number of components in each vector is
large. Tools from probability theory, operator algebra, and statistical physics
are reviewed. The survey of analytical tools is complemented by examples of
applications in communications engineering. Asymptotic eigenvalue distributions
of many classes of random matrices are given. The treatment includes the
problem of moments and the introduction of the Stieltjes transform. Free
probability theory, which evolved from non-commutative operator algebras, is
explained from a probabilistic point of view in order to better fit the
engineering community. For that purpose freeness is defined without reference
to non-commutative algebras. The treatment includes additive and multiplicative
free convolution, the R-transform, the S-transform, and the free central limit
theorem. The replica method developed in statistical physics for the purpose of
analyzing spin glasses is reviewed from the viewpoint of its applications in
communications engineering. Correspondences between free energy and mutual
information as well as energy functions and detector metrics are established.
These analytic tools are applied to the design and the analysis of linear
multiuser detectors, the modelling of scattering in communication channels with
dual antenna arrays, and the analysis of optimal detection for communication
via code-division multiple-access and/or dual antenna array channels.
|
1310.5488 | A practical approach to ontology-enabled control systems for
astronomical instrumentation | astro-ph.IM cs.AI cs.SE | Even though modern service-oriented and data-oriented architectures promise
to deliver loosely coupled control systems, they are inherently brittle as they
commonly depend on a priori agreed interfaces and data models. At the same
time, the Semantic Web and a whole set of accompanying standards and tools are
emerging, advocating ontologies as the basis for knowledge exchange. In this
paper we aim to identify a number of key ideas from the myriad of
knowledge-based practices that can readily be implemented by control systems
today. We demonstrate with a practical example (a three-channel imager for the
Mercator Telescope) how ontologies developed in the Web Ontology Language (OWL)
can serve as a meta-model for our instrument, covering as many engineering
aspects of the project as needed. We show how a concrete system model can be
built on top of this meta-model via a set of Domain Specific Languages (DSLs),
supporting both formal verification and the generation of software and
documentation artifacts. Finally we reason how the available semantics can be
exposed at run-time by adding a "semantic layer" that can be browsed, queried,
monitored etc. by any OPC UA-enabled client.
|
1310.5515 | Perfect Permutation Codes with the Kendall's $\tau$-Metric | cs.IT math.IT | The rank modulation scheme has been proposed for efficient writing and
storing data in non-volatile memory storage. Error-correction in the rank
modulation scheme is done by considering permutation codes. In this paper we
consider codes in the set of all permutations on $n$ elements, $S_n$, using the
Kendall's $\tau$-metric. We prove that there are no perfect
single-error-correcting codes in $S_n$, where $n>4$ is a prime or $4\leq n\leq
10$. We also prove that if such a code exists for $n$ which is not a prime then
the code should have some uniform structure. We define some variations of the
Kendall's $\tau$-metric and consider the related codes and specifically we
prove the existence of a perfect single-error-correcting code in $S_5$.
Finally, we examine the existence problem of diameter perfect codes in $S_n$
and obtain a new upper bound on the size of a code in $S_n$ with even minimum
Kendall's $\tau$-distance.
|
1310.5534 | A Study of Truck Platooning Incentives Using a Congestion Game | cs.GT cs.SY math.OC | We introduce an atomic congestion game with two types of agents, cars and
trucks, to model the traffic flow on a road over various time intervals of the
day. Cars maximize their utility by finding a trade-off between the time they
choose to use the road, the average velocity of the flow at that time, and the
dynamic congestion tax that they pay for using the road. In addition to these
terms, the trucks have an incentive for using the road at the same time as
their peers because they have platooning capabilities, which allow them to save
fuel. The dynamics and equilibria of this game-theoretic model for the
interaction between car traffic and truck platooning incentives are
investigated. We use traffic data from Stockholm to validate parts of the
modeling assumptions and extract reasonable parameters for the simulations. We
use joint strategy fictitious play and average strategy fictitious play to
learn a pure strategy Nash equilibrium of this game. We perform a comprehensive
simulation study to understand the influence of various factors, such as the
drivers' value of time and the percentage of the trucks that are equipped with
platooning devices, on the properties of the Nash equilibrium.
|
1310.5540 | Frequency Effects on Predictability of Stock Returns | q-fin.ST cs.IT math.IT | We propose that predictability is a prerequisite for profitability on
financial markets. We look at ways to measure predictability of price changes
using an information-theoretic approach and employ them on all historical data
available for NYSE 100 stocks. This allows us to determine whether the
frequency of sampling price changes affects their predictability. We also study
relations between the predictability of price changes and the deviation of the
price formation processes from i.i.d., as well as the stock's sector. We also
briefly comment on
the complicated relationship between predictability of price changes and the
profitability of algorithmic trading.
|
1310.5542 | Ship Detection and Segmentation using Image Correlation | cs.CV | There have been intensive research interests in ship detection and
segmentation due to high demands on a wide range of civil applications in the
last two decades. However, existing approaches, which are mainly based on
statistical properties of images, fail to detect smaller ships and boats.
Specifically, known techniques are not robust enough against the inevitable
small geometric and photometric changes in images containing ships. In this
paper a novel approach for ship detection is proposed based on correlation of
maritime images. The idea comes from the observation that a fine pattern of the
sea surface changes considerably from time to time, whereas a ship's
appearance remains basically unchanged. We want to examine whether the images
have a common unaltered part, a ship in this case. To this end, we developed a
method, Focused Correlation (FC), to achieve robustness to geometric
distortions of the
image content. Various experiments have been conducted to evaluate the
effectiveness of the proposed approach.
|
1310.5553 | Hypothesis Testing on Invariant Subspaces of the Symmetric Group, Part I
- Quantum Sanov's Theorem and Arbitrarily Varying Sources | quant-ph cs.IT math-ph math.IT math.MP math.RT | We report a proof of the quantum Sanov Theorem by elementary application of
basic facts about representations of the symmetric group, together with a
complete characterization of the optimal error exponent in a situation where
the null hypothesis is given by an arbitrarily varying quantum source instead.
Our approach differs from previous ones in two points: First, it supports a
reasoning inspired by the method of types. Second, the measurement scheme we
propose to distinguish the two alternatives not only does that job
asymptotically perfectly, but also yields additional information about the null
hypothesis. An example of that is given. The measurement is composed of
projections onto permutation-invariant subspaces, thus providing a direct link
between one of the most basic tasks in quantum information on the one hand
and fundamental objects in representation theory on the other. We additionally
connect to representation theory by proving a relation between Kostka numbers
and quantum states, and to state estimation via a generalization of a
well-known spectral estimation theorem to non-i.i.d. sequences.
|
1310.5568 | Towards Application of the RBNK Model | cs.CE cs.NE | The computational modeling of genetic regulatory networks is now common
place, either by fitting a system to experimental data or by exploring the
behaviour of abstract systems with the aim of identifying underlying
principles. This paper presents an approach to the latter, considering the
response to environmental changes of a well-known model placed upon tunable
fitness landscapes. The effects on genome size and gene connectivity are
explored.
|
1310.5597 | CIDS country rankings: comparing documents and citations of USA, UK and
China top researchers | cs.DL cs.IR | This technical report presents a bibliometric analysis of the top 30 cited
researchers from USA, UK and China. The analysis is based on Google Scholar
data using CIDS. The researchers were identified using their email suffix: edu,
uk and cn. This naïve approach was able to produce rankings consistent with
the SCImago country rankings using minimal resources in a fully automated way.
|
1310.5619 | Devnagari Handwritten Numeral Recognition using Geometric Features and
Statistical Combination Classifier | cs.CV | This paper presents a Devnagari numeral recognition method based on
statistical discriminant functions. 17 geometric features based on pixel
connectivity, lines, line directions, holes, image area, perimeter,
eccentricity, solidity, orientation etc. are used for representing the
numerals. Five discriminant functions viz. Linear, Quadratic, Diaglinear,
Diagquadratic and Mahalanobis distance are used for classification. 1500
handwritten numerals are used for training. Another 1500 handwritten numerals
are used for testing. Experimental results show that Linear, Quadratic and
Mahalanobis discriminant functions provide better results. Results of these
three discriminants are fed to a majority-voting combination classifier. It is
found that the combination classifier offers better results than the individual
classifiers.
|
1310.5620 | Towards Energy Efficiency: Forecasting Indoor Temperature via
Multivariate Analysis | cs.SY | The small medium large system (SMLSystem) is a house built at the Universidad
CEU Cardenal Herrera (CEU-UCH) for participation in the Solar Decathlon 2013
competition. Several technologies have been integrated to reduce power
consumption. One of these is a forecasting system based on artificial neural
networks (ANNs), which is able to predict indoor temperature in the near future
using data captured by a complex monitoring system as input. A study of the
impact on forecasting performance of different covariate combinations is
presented in this paper. Additionally, a comparison of ANNs with the standard
statistical forecasting methods is shown. The research in this paper has been
focused on forecasting the indoor temperature of a house, as it is directly
related to HVAC---heating, ventilation and air conditioning---system
consumption. HVAC systems at the SMLSystem house represent 53.9% of the overall
power consumption. The energy used to maintain temperature was measured to be
30--38.9% of the energy needed to lower it. Hence, these forecasting measures
allow the house to adapt itself to future temperature conditions by using home
automation in an energy-efficient manner. Experimental results show a high
forecasting accuracy and therefore, they might be used to efficiently control
an HVAC system.
|
1310.5624 | Google matrix of the citation network of Physical Review | physics.soc-ph cs.DL cs.SI | We study the statistical properties of spectrum and eigenstates of the Google
matrix of the citation network of Physical Review for the period 1893 - 2009.
The main fraction of complex eigenvalues with largest modulus is determined
numerically by different methods based on high precision computations with up
to $p=16384$ binary digits, which allows us to resolve hard numerical problems
for small eigenvalues. The nearly nilpotent matrix structure allows us to obtain a
semi-analytical computation of eigenvalues. We find that the spectrum is
characterized by the fractal Weyl law with a fractal dimension $d_f \approx 1$.
It is found that the majority of eigenvectors are located in a localized phase.
The statistical distribution of articles in the PageRank-CheiRank plane is
established providing a better understanding of information flows on the
network. The concept of ImpactRank is proposed to determine an influence domain
of a given article. We also discuss the properties of random matrix models of
Perron-Frobenius operators.
|
1310.5665 | Learning Theory and Algorithms for Revenue Optimization in Second-Price
Auctions with Reserve | cs.LG | Second-price auctions with reserve play a critical role for modern search
engines and popular online sites since the revenue of these companies often
directly depends on the outcome of such auctions. The choice of the reserve
price is the main mechanism through which the auction revenue can be influenced
in these electronic markets. We cast the problem of selecting the reserve price
to optimize revenue as a learning problem and present a full theoretical
analysis dealing with the complex properties of the corresponding loss
function. We further give novel algorithms for solving this problem and report
the results of several experiments in both synthetic and real data
demonstrating their effectiveness.
|
1310.5684 | Linear tree codes and the problem of explicit constructions | cs.IT math.IT | We reduce the problem of constructing asymptotically good tree codes to the
construction of triangular totally nonsingular matrices over fields with
polynomially many elements. We show a connection of this problem to Birkhoff
interpolation in finite fields.
|
1310.5698 | Massive Query Expansion by Exploiting Graph Knowledge Bases | cs.IR | Keyword based search engines have problems with term ambiguity and vocabulary
mismatch. In this paper, we propose a query expansion technique that enriches
queries expressed as keywords and short natural language descriptions. We
present a new massive query expansion strategy that enriches queries using a
knowledge base by identifying the query concepts, and adding relevant synonyms
and semantically related terms. We propose two approaches: (i) lexical
expansion that locates the relevant concepts in the knowledge base; and, (ii)
topological expansion that analyzes the network of relations among the
concepts, and suggests semantically related terms by path and community
analysis of the knowledge graph. We perform our expansions by using two
versions of Wikipedia as the knowledge base, concluding that the combination of
both lexical and topological expansion improves the system's precision by more
than 27%.
|
1310.5715 | Stochastic Gradient Descent, Weighted Sampling, and the Randomized
Kaczmarz algorithm | math.NA cs.CV cs.LG math.OC stat.ML | We obtain an improved finite-sample guarantee on the linear convergence of
stochastic gradient descent for smooth and strongly convex objectives,
improving from a quadratic dependence on the conditioning $(L/\mu)^2$ (where
$L$ is a bound on the smoothness and $\mu$ on the strong convexity) to a linear
dependence on $L/\mu$. Furthermore, we show how reweighting the sampling
distribution (i.e. importance sampling) is necessary in order to further
improve convergence, and obtain a linear dependence in the average smoothness,
dominating previous results. We also discuss importance sampling for SGD more
broadly and show how it can improve convergence also in other scenarios. Our
results are based on a connection we make between SGD and the randomized
Kaczmarz algorithm, which allows us to transfer ideas between the separate
bodies of literature studying each of the two methods. In particular, we recast
the randomized Kaczmarz algorithm as an instance of SGD, and apply our results
to prove its exponential convergence, but to the solution of a weighted least
squares problem rather than the original least squares problem. We then present
a modified Kaczmarz algorithm with partially biased sampling which does
converge to the original least squares solution with the same exponential
convergence rate.
|
1310.5720 | Cascading Failures in Networks with Proximate Dependent Nodes | physics.soc-ph cs.SI physics.data-an | We study the mutual percolation of a system composed of two interdependent
random regular networks. We introduce a notion of distance to explore the
effects of the proximity of interdependent nodes on the cascade of failures
after an initial attack. We find a non-trivial relation between the nature of
the transition through which the networks disintegrate and the parameters of
the system, which are the degree of the nodes and the maximum distance between
interdependent nodes. We explain this relation by solving the problem
analytically for the relevant set of cases.
|
1310.5738 | A Kernel for Hierarchical Parameter Spaces | stat.ML cs.LG | We define a family of kernels for mixed continuous/discrete hierarchical
parameter spaces and show that they are positive definite.
|
1310.5748 | Optimal Distributed Control of Reactive Power via the Alternating
Direction Method of Multipliers | math.OC cs.SY | We formulate the control of reactive power generation by photovoltaic
inverters in a power distribution circuit as a constrained optimization that
aims to minimize reactive power losses subject to finite inverter capacity and
upper and lower voltage limits at all nodes in the circuit. When voltage
variations along the circuit are small and losses of both real and reactive
powers are small compared to the respective flows, the resulting optimization
problem is convex. Moreover, the cost function is separable enabling a
distributed, on-line implementation with node-local computations using only
local measurements augmented with limited information from the neighboring
nodes communicated over cyber channels. Such an approach lies between the fully
centralized and local policy approaches previously considered. We explore
protocols based on the dual ascent method and on the Alternating Direction
Method of Multipliers (ADMM) and find that the ADMM protocol performs
significantly better.
|
1310.5755 | Determination, Calculation and Representation of the Upper and Lower
Sealing Zones During Virtual Stenting of Aneurysms | cs.CV physics.med-ph q-bio.TO | In this contribution, a novel method for stent simulation in preoperative
computed tomography angiography (CTA) acquisitions of patients is presented
where the sealing zones are automatically calculated and visualized. The method
is eligible for non-bifurcated and bifurcated stents (Y-stents). Results of the
proposed stent simulation with an automatic calculation of the sealing zones
for specific diseases (abdominal aortic aneurysms (AAA), thoracic aortic
aneurysms (TAA), iliac aneurysms) are presented. The contribution is organized
as follows. Section 2 presents the proposed approach. In Section 3,
experimental results are discussed. Section 4 concludes the contribution and
outlines areas for future work.
|
1310.5767 | Contextual Hypergraph Modelling for Salient Object Detection | cs.CV | Salient object detection aims to locate objects that capture human attention
within images. Previous approaches often pose this as a problem of image
contrast analysis. In this work, we model an image as a hypergraph that
utilizes a set of hyperedges to capture the contextual properties of image
pixels or regions. As a result, the problem of salient object detection becomes
one of finding salient vertices and hyperedges in the hypergraph. The main
advantage of hypergraph modeling is that it takes into account each pixel's (or
region's) affinity with its neighborhood as well as its separation from image
background. Furthermore, we propose an alternative approach based on
center-versus-surround contextual contrast analysis, which performs salient
object detection by optimizing a cost-sensitive support vector machine (SVM)
objective function. Experimental results on four challenging datasets
demonstrate the effectiveness of the proposed approaches against the
state-of-the-art approaches to salient object detection.
|
1310.5770 | Quantized Stationary Control Policies in Markov Decision Processes | math.OC cs.SY | For a large class of Markov Decision Processes, stationary (possibly
randomized) policies are globally optimal. However, in Borel state and action
spaces, the computation and implementation of even such stationary policies are
known to be prohibitive. In addition, networked control applications require
remote controllers to transmit action commands to an actuator with low
information rate. These two problems motivate the study of approximating
optimal policies by quantized (discretized) policies. To this end, we introduce
deterministic stationary quantizer policies and show that such policies can
approximate optimal deterministic stationary policies with arbitrary precision
under mild technical conditions, thus demonstrating that one can search for
$\varepsilon$-optimal policies within the class of quantized control policies.
We also derive explicit bounds on the approximation error in terms of the rate
of the approximating quantizers. We extend all these approximation results to
randomized policies. These findings pave the way toward applications in optimal
design of networked control systems where controller actions need to be
quantized, as well as for new computational methods for generating
approximately optimal decision policies in general (Polish) state and action
spaces for both discounted cost and average cost.
|
1310.5777 | Exploring Scientists' Working Timetable: A Global Survey | cs.DL cs.IR physics.soc-ph | In our previous study (Wang et al., 2012), we analyzed scientists' working
timetables in 3 countries, using real-time download data of scientific
literature. In this paper, we make a thorough analysis of global scientists'
working habits. Top 30 countries/territories from Europe, Asia, Australia,
North America, Latin America and Africa are selected as representatives and
analyzed in detail. Regional differences in scientists' working habits exist
across countries. Besides different working cultures, social factors
could affect scientists' research activities and working patterns.
Nevertheless, a common conclusion is that scientists today are often working
overtime. Although scientists may feel engaged and fulfilled by their hard
work, working too much should still prompt us to reconsider the work-life balance.
|
1310.5781 | RANSAC: Identification of Higher-Order Geometric Features and
Applications in Humanoid Robot Soccer | cs.RO cs.AI cs.CV | The ability for an autonomous agent to self-localise is directly proportional
to the accuracy and precision with which it can perceive salient features
within its local environment. The identification of such features by
recognising geometric profile allows robustness against lighting variations,
which is necessary in most industrial robotics applications. This paper details
a framework by which the random sample consensus (RANSAC) algorithm, often
applied to parameter fitting in linear models, can be extended to identify
higher-order geometric features. Goalpost identification within humanoid robot
soccer is investigated as an application, with the developed system yielding an
order-of-magnitude improvement in classification performance relative to a
traditional histogramming methodology.
|
1310.5791 | ROP: Matrix recovery via rank-one projections | math.ST cs.IT math.IT stat.ME stat.ML stat.TH | Estimation of low-rank matrices is of significant interest in a range of
contemporary applications. In this paper, we introduce a rank-one projection
model for low-rank matrix recovery and propose a constrained nuclear norm
minimization method for stable recovery of low-rank matrices in the noisy case.
The procedure is adaptive to the rank and robust against small perturbations.
Both upper and lower bounds for the estimation accuracy under the Frobenius
norm loss are obtained. The proposed estimator is shown to be rate-optimal
under certain conditions. The estimator is easy to implement via convex
programming and performs well numerically. The techniques and main results
developed in the paper also have implications to other related statistical
problems. An application to estimation of spiked covariance matrices from
one-dimensional random projections is considered. The results demonstrate that
it is still possible to accurately estimate the covariance matrix of a
high-dimensional distribution based only on one-dimensional projections.
|
1310.5793 | Intelligent City Traffic Management and Public Transportation System | cs.AI cs.CY | In cities, an Intelligent Transportation System controls traffic
congestion and regulates the traffic flow. This paper presents three modules
that will help in managing city traffic issues and ultimately advance the
transportation system. The first module, Congestion Detection and Management,
will provide the user with real-time information about congestion on the road
towards his destination; the second module, Intelligent Public Transport
System, will provide the user with real-time public transport information,
i.e., local buses; and the third module, Signal Synchronization, will help in
controlling congestion at signals, with real-time adjustments of signal timers
according to the congestion. All the information that the user gets about
traffic or public transportation will be provided on the user's everyday
device, the mobile phone, through an Android application or SMS. Moreover,
communication can also be done via a website for clients with internet access.
All these modules will be fully automated without any human intervention at
the server side.
|
1310.5796 | Relative Deviation Learning Bounds and Generalization with Unbounded
Loss Functions | cs.LG | We present an extensive analysis of relative deviation bounds, including
detailed proofs of two-sided inequalities and their implications. We also give
detailed proofs of two-sided generalization bounds that hold in the general
case of unbounded loss functions, under the assumption that a moment of the
loss is bounded. These bounds are useful in the analysis of importance
weighting and other learning tasks such as unbounded regression.
|
1310.5806 | Exact Controllability of Complex Networks | physics.soc-ph cond-mat.dis-nn cs.SI | Controlling complex networks is of paramount importance in science and
engineering. Despite the recent development of structural-controllability
theory, we continue to lack a framework to control undirected complex networks,
especially given link weights. Here we introduce an exact-controllability
paradigm based on the maximum multiplicity to identify the minimum set of
driver nodes required to achieve full control of networks with arbitrary
structures and link-weight distributions. The framework reproduces the
structural controllability of directed networks characterized by structural
matrices. We explore the controllability of a large number of real and model
networks, finding that dense networks with identical weights are difficult to
control. An efficient and accurate tool is offered to assess the
controllability of large sparse and dense networks. The exact-controllability
framework enables a comprehensive understanding of the impact of network
properties on controllability, a fundamental problem towards our ultimate
control of complex systems.
|
1310.5815 | Selective linking from social platforms to university websites: a case
study of the Spanish academic system | cs.DL cs.SI physics.soc-ph | Mention indicators have frequently been used in Webometric studies because
they provide a powerful tool for determining the degree of visibility and
impact of web resources. Among mention indicators, hypertextual links were a
central part of many studies until Yahoo discontinued the linkdomain command in
2011. Selective links constitute a variant of external links where both the
source and target of the link can be selected. This paper intends to study the
influence of social platforms (measured through the number of selective
external links) on academic environments, in order to ascertain both the
percentage that they constitute and whether some of them can be used as
substitutes for total external links. For this purpose, 141 URLs belonging to 76
Spanish universities were compiled in 2010 (before Yahoo! stopped its link
services), and the number of links from 13 selected social platforms to these
universities were calculated. Results confirm a good correlation between total
external links and links that come from social platforms, with the exception of
some applications (such as Digg and Technorati). For those universities with a
higher number of total external links, the high correlation is only maintained
on Delicious and Wikipedia, which can be utilized as substitutes for total
external links in the context analyzed. Nevertheless, the global percentage
of links from social platforms constitutes only a small fraction of total links,
although a positive trend is detected, especially in services such as Twitter,
YouTube, and Facebook.
|
1310.5828 | Priority-based intersection management with kinodynamic constraints | cs.RO | We consider the problem of coordinating a collection of robots at an
intersection area taking into account dynamical constraints due to actuator
limitations. We adopt the coordination space approach, which is standard in
multiple robot motion planning. Assuming the priorities between robots are
assigned in advance and the existence of a collision-free trajectory respecting
those priorities, we propose a provably safe trajectory planner satisfying
kinodynamic constraints. The algorithm is shown to run in real time and to
return safe (collision-free) trajectories. Simulation results on synthetic data
illustrate the benefits of the approach.
|
1310.5841 | Ontology based data warehouses federation management system | cs.DB | Data warehouses are nowadays an important component in every competitive
system; they are one of the main components on which business intelligence is
based. We can even say that many companies are climbing to the next level,
using a set of data warehouses to provide complete information, generally due
to the fusion of two or more companies. These data warehouses can be
heterogeneous and geographically separated; this structure is what we call a
federation, and even if the components are physically separated, they are
logically seen as a single component. Generally, these items are heterogeneous,
which makes it difficult to create the logical federation schema and makes the
execution of user queries a complicated mission. In this paper, we fill this
gap by proposing an extension of an existing algorithm in order to treat
different schema types (star, snowflake), including the treatment of dimension
hierarchies using ontology.
|
1310.5884 | The optimality of attaching unlinked labels to unlinked meanings | cs.CL physics.data-an physics.soc-ph | Vocabulary learning by children can be characterized by many biases. When
encountering a new word, children as well as adults, are biased towards
assuming that it means something totally different from the words that they
already know. To the best of our knowledge, the first mathematical proof of the
optimality of this bias is presented here. First, it is shown that this bias is
a particular case of the maximization of mutual information between words and
meanings. Second, the optimality is proven within a more general information
theoretic framework where mutual information maximization competes with other
information theoretic principles. The bias is a prediction from modern
information theory. The relationship between information theoretic principles
and the principles of contrast and mutual exclusivity is also shown.
|
1310.5895 | Stable Recovery from the Magnitude of Symmetrized Fourier Measurements | cs.IT math.IT | In this note we show that stable recovery of complex-valued signals
$x\in\mathbb{C}^n$ up to global sign can be achieved from the magnitudes of
$4n-1$ Fourier measurements when a certain "symmetrization and zero-padding" is
performed before measurement ($4n-3$ is possible in certain cases). For real
signals, symmetrization itself is linear and therefore our result is in this
case a statement on uniform phase retrieval. Since complex conjugation is
involved, such a measurement procedure is not complex-linear, but recovery is
still possible from magnitudes of linear measurements on, for example,
$(\Re(x),\Im(x))$.
|
1310.5930 | A Unifying Model for External Noise Sources and ISI in Diffusive
Molecular Communication | cs.IT math.IT | This paper considers the impact of external noise sources, including
interfering transmitters, on a diffusive molecular communication system, where
the impact is measured as the number of noise molecules expected to be observed
at a passive receiver. A unifying model for noise, multiuser interference, and
intersymbol interference is presented, where, under certain circumstances,
interference can be approximated as a noise source that is emitting
continuously. The model includes the presence of advection and molecule
degradation. The time-varying and asymptotic impact is derived for a series of
special cases, some of which facilitate closed-form solutions. Simulation
results show the accuracy of the expressions derived for the impact of a
continuously-emitting noise source, and show how approximating intersymbol
interference as a noise source can simplify the calculation of the expected bit
error probability of a weighted sum detector.
|
1310.5957 | Entropy region and convolution | cs.IT math.IT math.PR | The entropy region is constructed from vectors of random variables by
collecting Shannon entropies of all subvectors. Its shape is studied here by
means of polymatroidal constructions, notably by convolution. The closure of
the region is decomposed into the direct sum of tight and modular parts,
reducing the study to the tight part. The relative interior of the reduction
belongs to the entropy region. Behavior of the decomposition under
selfadhesivity is clarified. Results are specialized to and completed for the
region of four random variables. This and computer experiments help to
visualize approximations of a symmetrized part of the entropy region. The
four-atom conjecture on the minimization of the Ingleton score is refuted.
|
1310.5963 | Improving the methods of email classification based on words ontology | cs.IR cs.CL | The Internet has dramatically changed people's relationships with one another
and made valuable information available to users. Email is one of the services
the Internet provides today; this service has attracted most users' attention
due to its low cost. Along with the numerous benefits of email, one of the
weaknesses of this service is that the number of received emails is continually
increasing, so methods are needed to automatically filter these disturbing
letters. Most of these filters utilize a combination of several techniques,
such as black or white lists and keywords, in order to identify spam more
accurately. In this paper, we introduce a new method to classify spam. We seek
to increase the accuracy of email classification by combining the output of
several decision trees with the concept of ontology.
|
1310.5965 | Fusion of Hyperspectral and Panchromatic Images using Spectral Unmixing
Results | cs.CV | Hyperspectral imaging, due to providing high spectral resolution images, is
one of the most important tools in the remote sensing field. Because of
technological restrictions, hyperspectral sensors have a limited spatial
resolution. On the other hand, panchromatic images have a better spatial
resolution. Combining this information can provide a better understanding of
the target scene. Spectral unmixing of mixed pixels in hyperspectral images
results in the spectral signatures and abundance fractions of endmembers but
gives no information about their location within a mixed pixel. In this paper
we have used the spectral unmixing results of hyperspectral images and the
segmentation results of a panchromatic image for data fusion. The proposed
method has been applied to simulated data using the AVIRIS Indian Pines
dataset. Results show that this method can effectively combine information in
hyperspectral and panchromatic images.
|
1310.5985 | Adaptive Push-Then-Pull Gossip Algorithm for Scale-free Networks | cs.NI cs.DC cs.SI | Real life networks are generally modelled as scale free networks. Information
diffusion in such networks in decentralised environment is a difficult and
resource consuming affair. Gossip algorithms have come up as a good solution to
this problem. In this paper, we have proposed Adaptive First Push Then Pull
gossip algorithm. We show that algorithm works with minimum cost when the
transition round to switch from Adaptive Push to Adaptive Pull is close to
Round(log(N)). Furthermore, we compare our algorithm with Push, Pull and First
Push Then Pull, and show that the proposed algorithm is the most cost-efficient
in scale-free networks.
|
1310.5999 | Improvement of Automatic Hemorrhages Detection Methods Using Shapes
Recognition | cs.CV | Diabetic Retinopathy is a medical condition where the retina is damaged
because fluid leaks from blood vessels into the retina. The presence of
hemorrhages in the retina is the earliest symptom of diabetic retinopathy. The
number and shape of hemorrhages is used to indicate the severity of the
disease. Early automated hemorrhage detection can help reduce the incidence of
blindness. This paper introduces a new method based on hemorrhage shape to
detect dot hemorrhages (DH), their number, and their size at an early stage;
this is achieved by reducing the retinal image details. The DH are detected and
recognized in three sequential steps: removing the fovea, removing the
vasculature, and recognizing the DH by determining the circularity of all
objects in the image. Finally, a shape factor related to DH recognition is
determined; this stage strengthens the recognition process. The proposed
method recognizes and separates all the DH.
|
1310.6007 | Efficient Optimization for Sparse Gaussian Process Regression | cs.LG | We propose an efficient optimization algorithm for selecting a subset of
training data to induce sparsity for Gaussian process regression. The algorithm
estimates an inducing set and the hyperparameters using a single objective,
either the marginal likelihood or a variational free energy. The space and time
complexity are linear in training set size, and the algorithm can be applied to
large regression problems on discrete or continuous domains. Empirical
evaluation shows state-of-the-art performance in discrete cases and competitive
results in the continuous case.
|
1310.6011 | On Sparse Representation in Fourier and Local Bases | cs.IT math.IT | We consider the classical problem of finding the sparse representation of a
signal in a pair of bases. When both bases are orthogonal, it is known that the
sparse representation is unique when the sparsity $K$ of the signal satisfies
$K<1/\mu(D)$, where $\mu(D)$ is the mutual coherence of the dictionary.
Furthermore, the sparse representation can be obtained in polynomial time by
Basis Pursuit (BP), when $K<0.91/\mu(D)$. Therefore, there is a gap between the
unicity condition and the one required to use the polynomial-complexity BP
formulation. For the case of general dictionaries, it is also well known that
finding the sparse representation under the only constraint of unicity is
NP-hard.
In this paper, we introduce, for the case of Fourier and canonical bases, a
polynomial complexity algorithm that finds all the possible $K$-sparse
representations of a signal under the weaker condition that $K<\sqrt{2}
/\mu(D)$. Consequently, when $K<1/\mu(D)$, the proposed algorithm solves the
unique sparse representation problem for this structured dictionary in
polynomial time. We further show that the same method can be extended to many
other pairs of bases, one of which must have local atoms. Examples include the
union of Fourier and local Fourier bases, the union of discrete cosine
transform and canonical bases, and the union of random Gaussian and canonical
bases.
|
1310.6012 | Evolution of swarming behavior is shaped by how predators attack | q-bio.PE cs.NE | Animal grouping behaviors have been widely studied due to their implications
for understanding social intelligence, collective cognition, and potential
applications in engineering, artificial intelligence, and robotics. An
important biological aspect of these studies is discerning which selection
pressures favor the evolution of grouping behavior. In the past decade,
researchers have begun using evolutionary computation to study the evolutionary
effects of these selection pressures in predator-prey models. The selfish herd
hypothesis states that concentrated groups arise because prey selfishly attempt
to place their conspecifics between themselves and the predator, thus causing
an endless cycle of movement toward the center of the group. Using an
evolutionary model of a predator-prey system, we show that how predators attack
is critical to the evolution of the selfish herd. Following this discovery, we
show that density-dependent predation provides an abstraction of Hamilton's
original formulation of ``domains of danger.'' Finally, we verify that
density-dependent predation provides a sufficient selective advantage for prey
to evolve the selfish herd in response to predation by coevolving predators.
Thus, our work corroborates Hamilton's selfish herd hypothesis in a digital
evolutionary model, refines the assumptions of the selfish herd hypothesis, and
generalizes the domain of danger concept to density-dependent predation.
|
1310.6063 | Word Spotting in Cursive Handwritten Documents using Modified Character
Shape Codes | cs.CV | There is a large collection of handwritten English paper documents of
historical and scientific importance. However, paper documents are not
recognized directly by computers, so the closest way of indexing these
documents is by storing their digital images. A large database of document
images can thus replace the paper documents, but the text corresponding to
each image still cannot be directly recognized by the computer.
This paper applies the technique of word spotting using a Modified Character
Shape Code to handwritten English document images for quick and efficient query
search of words in a database of document images. It differs from other
word spotting techniques in that it implements two levels of selection for word
segments to match the search query: first based on word size, and then based on
the character shape code of the query. This makes the process faster and more
efficient and reduces the need for multiple pre-processing steps.
|
1310.6066 | Skin Segmentation based Elastic Bunch Graph Matching for efficient
multiple Face Recognition | cs.CV | This paper is aimed at developing and combining different algorithms for face
detection and face recognition to generate an efficient mechanism that can
detect and recognize the facial regions of input image. For the detection of
face from complex region, skin segmentation isolates the face-like regions in a
complex image and following operations of morphology and template matching
rejects false matches to extract facial region. For the recognition of the
face, the image database is now converted into a database of facial segments.
Hence, implementing the technique of Elastic Bunch Graph Matching (EBGM) after
skin segmentation generates Face Bunch Graphs that accurately represent the
features of an individual face and enhance the quality of the training set.
This increases the matching probability significantly.
|
1310.6092 | A Ray-based Approach for Boundary Estimation of Fiber Bundles Derived
from Diffusion Tensor Imaging | cs.CV | Diffusion Tensor Imaging (DTI) is a non-invasive imaging technique that
allows estimation of the location of white matter tracts in-vivo, based on the
measurement of water diffusion properties. For each voxel, a second-order
tensor can be calculated by using diffusion-weighted sequences (DWI) that are
sensitive to the random motion of water molecules. Given at least 6
diffusion-weighted images with different gradients and one unweighted image,
the coefficients of the symmetric diffusion tensor matrix can be calculated.
Deriving the eigensystem of the tensor, the eigenvectors and eigenvalues can be
calculated to describe the three main directions of diffusion and its
magnitude. Using DTI data, fiber bundles can be determined, to gain information
about eloquent brain structures. Especially in neurosurgery, information about
location and dimension of eloquent structures like the corticospinal tract or
the visual pathways is of major interest. Therefore, the fiber bundle boundary
has to be determined. In this paper, a novel ray-based approach for boundary
estimation of tubular structures is presented.
|
1310.6110 | A two-step model and the algorithm for recalling in recommender systems | cs.IR | When a user finds an interesting recommendation in a recommender system, the
user may want to recall related items recommended in the past to reconsider or
to enjoy them again. If the system can pick up such "recalled" items at each
user's request, it can deepen the user experience.
We propose a model and an algorithm for such personalized "recalling" in
conventional recommender systems, which is an application of neural networks
for associative memory. In our model, the "recalled" items can reflect each
user's personality beyond naive similarities between items.
|
1310.6119 | Asynchronous Rumour Spreading in Social and Signed Topologies | cs.SI physics.soc-ph | In this paper, we present an experimental analysis of the asynchronous push &
pull rumour spreading protocol. This protocol is, to date, the best-performing
rumour spreading protocol for simple, scalable, and robust information
dissemination in distributed systems. We analyse the effect that multiple
parameters have on the protocol's performance, such as using memory to avoid
contacting the same neighbor twice in a row, varying the stopping criteria used
by nodes to decide when to stop spreading the rumour, employing more
sophisticated neighbor selection policies instead of the standard uniform
random choice, and others. Prior work has focused on either providing
theoretical upper bounds regarding the number of rounds needed to spread the
rumour to all nodes, or, proposes improvements by adjusting isolated
parameters. To our knowledge, our work is the first to study how multiple
parameters affect system behaviour both in isolation and combination and under
a wide range of values. Our analysis is based on experimental simulations using
real-world social network datasets, thus complementing prior theoretical work
to shed light on how the protocol behaves in practical, real-world systems. We
also study the behaviour of the protocol on a special type of social graph,
called signed networks (e.g., Slashdot and Epinions), whose links indicate
stronger trust relationships. Finally, through our detailed analysis, we
demonstrate how a few simple additions to the protocol can improve the total
time required to inform 100% of the nodes by a maximum of 99.69% and an average
of 82.37%.
|
1310.6132 | Time varying ISI model for nonlinear interference noise | physics.optics cs.IT math.IT | We show that the effect of nonlinear interference in WDM systems is
equivalent to slowly varying inter-symbol-interference (ISI), and hence its
cancellation can be carried out by means of adaptive linear filtering. We
characterize the ISI coefficients and discuss the potential gain following from
their cancellation.
|
1310.6139 | Practical Full Duplex Physical Layer Network Coding | cs.IT math.IT | We propose a practical network code for the wireless two-way relay channel
where all nodes communicate in full duplex (FD) mode. The physical layer
network coding (PNC) operation is applied with the FD operating nodes, reducing
the transmission time to a single time slot, hence doubling the spectral
efficiency when compared to classical PNC systems. In our system model, binary
phase shift keying modulated signals are transmitted over Rayleigh fading
channels. We derive the theoretical error rates at relay and end nodes
according to the maximum likelihood detection rule, in case of non-ideal
self-interference cancellation. Theoretical results are also verified via
simulations.
|
1310.6173 | Self-Organizing Mobility Robustness Optimization in LTE Networks with
eICIC | cs.NI cs.PF cs.SY | We address the problem of Mobility Robustness Optimization (MRO) and describe
centralized Self Organizing Network (SON) solutions that can optimize
connected-mode mobility Key Performance Indicators (KPIs). Our solution extends
the earlier work on eICIC parameter optimization [7] to heterogeneous networks
with mobility, and outlines methods of progressive complexity that optimize the
Retaining/Offloading Bias, which gives macro/pico views of the Cell Individual Offset
parameters. Simulation results under real LTE network deployment assumptions of
a US metropolitan area demonstrate the effects of such solutions on the
mobility KPIs. To our knowledge, this solution is the first that demonstrates
the joint optimization of eICIC and MRO.
|
1310.6257 | Dissociation and Propagation for Approximate Lifted Inference with
Standard Relational Database Management Systems | cs.DB cs.AI | Probabilistic inference over large data sets is a challenging data management
problem since exact inference is generally #P-hard and is most often solved
approximately with sampling-based methods today. This paper proposes an
alternative approach for approximate evaluation of conjunctive queries with
standard relational databases: In our approach, every query is evaluated
entirely in the database engine by evaluating a fixed number of query plans,
each providing an upper bound on the true probability, then taking their
minimum. We provide an algorithm that takes into account important schema
information to enumerate only the minimal necessary plans among all possible
plans. Importantly, this algorithm is a strict generalization of all known
PTIME self-join-free conjunctive queries: A query is in PTIME if and only if
our algorithm returns one single plan. Furthermore, our approach is a
generalization of a family of efficient ranking methods from graphs to
hypergraphs. We also adapt three relational query optimization techniques to
evaluate all necessary plans very fast. We give a detailed experimental
evaluation of our approach and, in the process, provide a new way of thinking
about the value of probabilistic methods over non-probabilistic methods for
ranking query answers. We also note that the techniques developed in this paper
apply immediately to lifted inference from statistical relational models since
lifted inference corresponds to PTIME plans in probabilistic databases.
|
1310.6265 | Optimal Transmit Filters for ISI Channels under Channel Shortening
Detection | cs.IT math.IT | We consider channels affected by intersymbol interference with
reduced-complexity, mutual information optimized, channel-shortening detection.
For such settings, we optimize the transmit filter, taking into consideration
the reduced receiver complexity constraint. As a figure of merit, we consider the
achievable information rate of the entire system and with functional analysis,
we establish a general form of the optimal transmit filter, which can then be
optimized by standard numerical methods. As a corollary to our main result, we
obtain some insight into the behavior of the standard waterfilling algorithm for
intersymbol interference channels. With only some minor changes, the general
form we derive can be applied to multiple-input multiple-output channels with
intersymbol interference. To illuminate the practical use of our results, we
provide applications of our theoretical results by deriving the optimal shaping
pulse of a linear modulation transmitted over a bandlimited additive white
Gaussian noise channel which has possible applications in the
faster-than-Nyquist/time packing technique.
|
1310.6288 | Spatial-Spectral Boosting Analysis for Stroke Patients' Motor Imagery
EEG in Rehabilitation Training | stat.ML cs.AI cs.LG | Current studies about motor imagery based rehabilitation training systems for
stroke subjects lack an appropriate analytic method, which can achieve a
considerable classification accuracy while at the same time detecting gradual
changes of imagery patterns during the rehabilitation process and uncovering
potential mechanisms of motor function recovery. In this study, we propose an adaptive
boosting algorithm based on the cortex plasticity and spectral band shifts.
This approach models the usually predetermined spatial-spectral configurations
in EEG study into variable preconditions, and introduces a new heuristic of
stochastic gradient boost for training base learners under these preconditions.
We compare our proposed algorithm with commonly used methods on datasets
collected from 2 months' clinical experiments. The simulation results
demonstrate the effectiveness of the method in detecting the variations of
stroke patients' EEG patterns. By chronologically reorganizing the weight
parameters of the learned additive model, we verify the spatial compensatory
mechanism on impaired cortex and detect the changes of accentuation bands in
spectral domain, which may contribute important prior knowledge for
rehabilitation practice.
|
1310.6304 | Combining Structured and Unstructured Randomness in Large Scale PCA | cs.LG | Principal Component Analysis (PCA) is a ubiquitous tool with many
applications in machine learning including feature construction, subspace
embedding, and outlier detection. In this paper, we present an algorithm for
computing the top principal components of a dataset with a large number of rows
(examples) and columns (features). Our algorithm leverages both structured and
unstructured random projections to retain good accuracy while being
computationally efficient. We demonstrate the technique on the winning
submission to the KDD 2010 Cup.
|
1310.6323 | Logic in the Lab | cs.AI cs.GT cs.LO | This file summarizes the plenary talk on laboratory experiments on logic given
at TARK 2013, the 14th Conference on Theoretical Aspects of Rationality and
Knowledge.
|
1310.6338 | Risk aversion as an evolutionary adaptation | q-bio.PE cs.GT cs.NE | Risk aversion is a common behavior universal to humans and animals alike.
Economists have traditionally defined risk preferences by the curvature of the
utility function. Psychologists and behavioral economists also make use of
concepts such as loss aversion and probability weighting to model risk
aversion. Neurophysiological evidence suggests that loss aversion has its
origins in relatively ancient neural circuitries (e.g., ventral striatum).
Could there thus be an evolutionary origin to risk avoidance? We study this
question by evolving strategies that adapt to play the equivalent mean payoff
gamble. We hypothesize that risk aversion in the equivalent mean payoff gamble
is beneficial as an adaptation to living in small groups, and find that a
preference for risk averse strategies only evolves in small populations of less
than 1,000 individuals, while agents exhibit no such strategy preference in
larger populations. Further, we discover that risk aversion can also evolve in
larger populations, but only when the population is segmented into small groups
of around 150 individuals. Finally, we observe that risk aversion only evolves
when the gamble is a rare event that has a large impact on the individual's
fitness. These findings align with earlier reports that humans lived in small
groups for a large portion of their evolutionary history. As such, we suggest
that rare, high-risk, high-payoff events such as mating and mate competition
could have driven the evolution of risk averse behavior in humans living in
small groups.
|
1310.6342 | Cultural Evolution as Distributed Computation | cs.MA nlin.AO | The speed and transformative power of human cultural evolution is evident
from the change it has wrought on our planet. This chapter proposes a human
computation program aimed at (1) distinguishing algorithmic from
non-algorithmic components of cultural evolution, (2) computationally modeling
the algorithmic components, and amassing human solutions to the non-algorithmic
(generally, creative) components, and (3) combining them to develop
human-machine hybrids with previously unforeseen computational power that can
be used to solve real problems. Drawing on recent insights into the origins of
evolutionary processes from biology and complexity theory, human minds are
modeled as self-organizing, interacting, autopoietic networks that evolve
through a Lamarckian (non-Darwinian) process of communal exchange. Existing
computational models as well as directions for future research are discussed.
|
1310.6343 | Provable Bounds for Learning Some Deep Representations | cs.LG cs.AI stat.ML | We give algorithms with provable guarantees that learn a class of deep nets
in the generative model view popularized by Hinton and others. Our generative
model is an $n$ node multilayer neural net that has degree at most $n^{\gamma}$
for some $\gamma <1$ and each edge has a random edge weight in $[-1,1]$. Our
algorithm learns {\em almost all} networks in this class with polynomial
running time. The sample complexity is quadratic or cubic depending upon the
details of the model.
The algorithm uses layerwise learning. It is based upon a novel idea of
observing correlations among features and using these to infer the underlying
edge structure via a global graph recovery procedure. The analysis of the
algorithm reveals interesting structure of neural networks with random edge
weights.
|
1310.6376 | Can Facial Uniqueness be Inferred from Impostor Scores? | cs.CV | In Biometrics, facial uniqueness is commonly inferred from impostor
similarity scores. In this paper, we show that such uniqueness measures are
highly unstable in the presence of image quality variations like pose, noise
and blur. We also experimentally demonstrate the instability of a recently
introduced impostor-based uniqueness measure of [Klare and Jain 2013] when
subject to poor quality facial images.
|
1310.6405 | Utility-based Decision-making in Distributed Systems Modelling | cs.LO cs.MA | We consider a calculus of resources and processes as a basis for modelling
decision-making in multi-agent systems. The calculus represents the regulation
of agents' choices using utility functions that take account of context.
Associated with the calculus is a (Hennessy Milner-style) context sensitive
modal logic of state. As an application, we show how a notion of `trust domain'
can be defined for multi-agent systems.
|
1310.6427 | Estimating Channel Parameters from the Syndrome of a Linear Code | cs.IT math.IT | In this letter, we analyse the properties of a maximum likelihood channel
estimator based on the syndrome of a linear code. For the two examples of a
binary symmetric channel and a binary input additive white Gaussian noise
channel, we derive expressions for the bias and the mean squared error and
compare them to the Cram\'er-Rao bound. The analytical expressions show the
relationship between the estimator properties and the parameters of the linear
code, i.e., the number of check nodes and the check node degree.
|
1310.6429 | Knowledge-Based Programs as Plans: Succinctness and the Complexity of
Plan Existence | cs.AI cs.LO | Knowledge-based programs (KBPs) are high-level protocols describing the
course of action an agent should perform as a function of its knowledge. The
use of KBPs for expressing action policies in AI planning has been surprisingly
overlooked. Although each KBP corresponds to an equivalent standard plan and
vice versa, KBPs are typically more succinct, at the price of more on-line
computation time. Here we make this argument formal, and prove that
there exists an exponential succinctness gap between knowledge-based programs
and standard plans. Then we address the complexity of plan existence. Some
results trivially follow from results already known from the literature on
planning under incomplete knowledge, but many were unknown so far.
|
1310.6432 | When is an Example a Counterexample? | cs.AI | In this extended abstract, we carefully examine a purported counterexample to
a postulate of iterated belief revision. We suggest that the example is better
seen as a failure to apply the theory of belief revision in sufficient detail.
The main contribution is conceptual aiming at the literature on the
philosophical foundations of the AGM theory of belief revision [1]. Our
discussion is centered around the observation that it is often unclear whether
a specific example is a "genuine" counterexample to an abstract theory or a
misapplication of that theory to a concrete case.
|
1310.6440 | Facebook and the Epistemic Logic of Friendship | cs.LO cs.SI | This paper presents a two-dimensional modal logic for reasoning about the
changing patterns of knowledge and social relationships in networks organised
on the basis of a symmetric 'friendship' relation, providing a precise language
for exploring 'logic in the community' [11]. Agents are placed in the model,
allowing us to express such indexical facts as 'I am your friend' and 'You, my
friends, are in danger'.
The technical framework for this work is general dynamic dynamic logic (GDDL)
[4], which provides a general method for extending modal logics with dynamic
operators for reasoning about a wide range of model-transformations, starting
with those definable in propositional dynamic logic (PDL) and extended to allow
for the more subtle operators involved in, for example, private communication,
as represented in dynamic epistemic logic (DEL) and related systems. We provide
a hands-on introduction to GDDL, introducing elements of the formalism as we
go, but leave the reader to consult [4] for technical details.
Instead, the purpose of this paper is to investigate a number of conceptual
issues that arise when considering communication between agents in such
networks, both from one agent to another, and broadcasts to socially-defined
groups of agents, such as the group of my friends.
|
1310.6443 | Leveraging Physical Layer Capabilities: Distributed Scheduling in
Interference Networks with Local Views | cs.NI cs.IT math.IT | In most wireless networks, nodes have only limited local information about
the state of the network, which includes connectivity and channel state
information. With limited local information about the network, the nodes'
knowledge is mismatched; therefore, they must make distributed decisions. In
this paper, we pose the following question - if every node has network state
information only about a small neighborhood, how and when should nodes choose
to transmit? While link scheduling answers the above question for
point-to-point physical layers which are designed for an interference-avoidance
paradigm, we look for answers in cases when interference can be embraced by
advanced PHY layer design, as suggested by results in network information
theory.
To make progress on this challenging problem, we propose a constructive
distributed algorithm that achieves rates higher than link scheduling based on
interference avoidance, especially if each node knows more than one hop of
network state information. We compare our new aggressive algorithm to a
conservative algorithm we have presented in [1]. Both algorithms schedule
sub-networks such that each sub-network can employ advanced
interference-embracing coding schemes to achieve higher rates. Our innovation
is in the identification, selection and scheduling of sub-networks, especially
when sub-networks are larger than a single link.
|
1310.6481 | Barrier Certificates Revisited | cs.SY | A barrier certificate can separate the state space of a considered hybrid
system (HS) into safe and unsafe parts according to the safety property to be
verified. Therefore this notion has been widely used in the verification of
HSs. A stronger condition on barrier certificates means that less expressive
barrier certificates can be synthesized. On the other hand, synthesizing more
expressive barrier certificates often means high complexity. In [9], Kong et
al. considered how to relax the condition of barrier certificates while still
keeping their convexity, so that one can synthesize more expressive barrier
certificates efficiently using semi-definite programming (SDP). In this paper,
we first discuss how to relax the condition of barrier certificates in a
general way, while still keeping their convexity. In particular, one can then
utilize different weaker conditions flexibly to synthesize different kinds of
barrier certificates with more expressiveness efficiently using SDP. These
barriers give more opportunities to verify the considered system. We also
show how to combine two functions together to form a combined barrier
certificate in order to prove a safety property under consideration, whereas
neither of them can be used as a barrier certificate separately, even according
to any relaxed condition. Another contribution of this paper is that we
discuss how to discover certificates from the general relaxed condition by
SDP. In particular, we focus on how to avoid unsoundness due to numerical
errors caused by SDP with symbolic checking.
|
1310.6485 | Secret Key Cryptosystem based on Non-Systematic Polar Codes | cs.CR cs.IT math.IT | Polar codes are a new class of error correcting linear block codes, whose
generator matrix is specified by the knowledge of transmission channel
parameters, code length and code dimension. Moreover, regarding computational
security, it is assumed that an attacker with a restricted processing power has
unlimited access to the transmission media. Therefore, the attacker can
construct the generator matrix of polar codes, especially in the case of Binary
Erasure Channels, on which this matrix can be easily constructed. In this
paper, we introduce a novel method to keep the generator matrix of polar codes
secret, so that the attacker cannot access the information required to
decode the intended polar code. With the help of this method, a secret key
cryptosystem is proposed based on non-systematic polar codes. In fact, the main
objective of this study is to achieve an acceptable level of security and
reliability through taking advantage of the special properties of polar codes.
The analyses revealed that our scheme resists the typical attacks on the secret
key cryptosystems based on linear block codes. In addition, by employing some
efficient methods, the key length of the proposed scheme is decreased compared
to that of the previous cryptosystems. Moreover, this scheme enjoys other
advantages, including a high code rate and good error performance.
|
1310.6486 | Systemic Risk Identification, Modelling, Analysis, and Monitoring: An
Integrated Approach | cs.CE q-fin.GN | Research capacity is critical in understanding systemic risk and informing
new regulation. Banking regulation has not kept pace with all the complexities
of financial innovation. The academic literature on systemic risk is rapidly
expanding. The majority of papers analyse a single source or a consolidated
source of risk and its effect. A fraction of publications quantify systemic
risk measures or formulate penalties for systemically important financial
institutions that are of practical regulatory relevance. The challenges facing
systemic risk evaluation and regulation still persist, as the definition of
systemic risk is somewhat unsettled and that affects attempts to provide
solutions. Our understanding of systemic risk is evolving and the awareness of
data relevance is rising gradually; this challenge is reflected in the focus of
major international research initiatives. There is a consensus that the direct
and indirect costs of a systemic crisis are enormous compared with the cost of
preventing it, and that without regulation the externalities will not be
prevented; but
there is no consensus yet on the extent and detail of regulation, and research
expectations are to facilitate the regulatory process. This report outlines an
integrated approach for systemic risk evaluation based on multiple types of
interbank exposures through innovative modelling approaches such as tensorial
multilayer networks, suggests how to relate underlying economic data and how to
extend the network to cover financial market information. We reason about data
requirements and time scale effects, and outline a multi-model hypernetwork of
systemic risk knowledge as a scenario analysis and policy support tool. The
argument is that logical steps forward would incorporate the range of risk
sources and their interrelated effects as contributions towards an overall
systemic risk indicator, would perform an integral analysis of ...
|
1310.6511 | Simultaneous Information and Energy Transfer in Large-Scale Networks
with/without Relaying | cs.IT math.IT | Energy harvesting (EH) from ambient radio-frequency (RF) electromagnetic
waves is an efficient solution for fully autonomous and sustainable
communication networks. Most of the related works presented in the literature
are based on specific (and small-scale) network structures, which although give
useful insights on the potential benefits of the RF-EH technology, cannot
characterize the performance of general networks. In this paper, we adopt a
large-scale approach of the RF-EH technology and we characterize the
performance of a network with a random number of transmitter-receiver pairs by
using stochastic-geometry tools. Specifically, we analyze the outage
probability performance and the average harvested energy, when receivers employ
power splitting (PS) technique for "simultaneous" information and energy
transfer. A non-cooperative scheme, where information/energy are conveyed only
via direct links, is firstly considered and the outage performance of the
system as well as the average harvested energy are derived in closed form as a
function of the power splitting. For this protocol, an interesting optimization
problem which minimizes the transmitted power under outage probability and
harvesting constraints, is formulated and solved in closed form. In addition,
we study a cooperative protocol where sources' transmissions are supported by a
random number of potential relays that are randomly distributed into the
network. In this case, information/energy can be received at each destination
via two independent and orthogonal paths (in case of relaying). We characterize
both performance metrics, when a selection combining scheme is applied at the
receivers and a single relay is randomly selected for cooperative diversity.
|
1310.6516 | Simulating the Influence of Collaborative Networks on the Structure of
Networks of Organizations, Employment Structure, and Organization Value | cs.SI cs.CY physics.soc-ph | From the perspective of reindustrialization, it is important to understand
the evolution of the structure of the network of organizations, employment
structure, and organization value. Understanding the potential influence of
collaborative networks (CNs) on these aspects may lead to the development of
appropriate economic policies. In this paper, we propose a theoretical approach
to analyse this potential influence, based on a model of a dynamic networked
ecosystem of organizations encompassing collaboration relations among
organizations, employment mobility, and organization value. A large number of
simulations has been performed to identify factors influencing the structure of
the network of organizations, employment structure, and organization value. The
main findings are that 1) the higher the number of members of CNs, the better
the clustering and the shorter the average path length among organizations; and
2) the constitution of CNs affects neither the structure of the network of
organizations, nor the employment structure, nor the organization value.
|
1310.6536 | Randomized co-training: from cortical neurons to machine learning and
back again | cs.LG q-bio.NC stat.ML | Despite its size and complexity, the human cortex exhibits striking
anatomical regularities, suggesting there may be simple meta-algorithms underlying
cortical learning and computation. We expect such meta-algorithms to be of
interest since they need to operate quickly, scalably and effectively with
little-to-no specialized assumptions.
This note focuses on a specific question: How can neurons use vast quantities
of unlabeled data to speed up learning from the comparatively rare labels
provided by reward systems? As a partial answer, we propose randomized
co-training as a biologically plausible meta-algorithm satisfying the above
requirements. As evidence, we describe a biologically-inspired algorithm,
Correlated Nystrom Views (XNV) that achieves state-of-the-art performance in
semi-supervised learning, and sketch work in progress on a neuronal
implementation.
|
1310.6555 | Web Annotation as a First Class Object | cs.DL cs.IR | Scholars have made handwritten notes and comments in books and manuscripts
for centuries. Today's blogs and news sites typically invite users to express
their opinions on the published content; URLs allow web resources to be shared
with accompanying annotations and comments using third-party services like
Twitter or Facebook. These contributions have until recently been constrained
within specific services, making them second-class citizens of the Web.
Web Annotations are now emerging as fully independent Linked Data in their
own right, no longer restricted to plain textual comments in application silos.
Annotations can now range from bookmarks and comments, to fine-grained
annotations of a selection of, for example, a section of a frame within a video
stream. Technologies and standards now exist to create, publish, syndicate,
mash-up and consume finely targeted, semantically rich digital annotations on
practically any content, as first-class Web citizens. This development is being
driven by the need for collaboration and annotation reuse amongst domain
researchers, computer scientists, scientific publishers, and scholarly content
databases.
|
1310.6592 | Revealing travel patterns and city structure with taxi trip data | physics.soc-ph cs.SI | Detecting regional spatial structures based on spatial interactions is
crucial in applications ranging from urban planning to traffic control. In the
big data era, various movement trajectories are available for studying spatial
structures. This research uses large scale Shanghai taxi trip data extracted
from GPS-enabled taxi trajectories to reveal traffic flow patterns and urban
structure of the city. Using network science methods, 15 temporally stable
regions reflecting the scope of people's daily travels are found by applying a
community detection method to the network built from short trips, which
represent residents' daily intra-urban travels and exhibit a clear pattern. In
each region, taxi traffic flows are dominated by a few 'hubs' and 'hubs' in
suburbs impact more trips than 'hubs' in urban areas. Land use conditions in
urban regions are different from those in suburban areas. Additionally, 'hubs'
in urban area associate with office buildings and commercial areas more,
whereas residential land use is more common in suburban hubs. The taxi flow
structures and land uses reveal the polycentric and layered concentric
structure of Shanghai. Finally, according to the temporal variations of taxi
flows and the diversity levels of taxi trip lengths, we explore the overall taxi
traffic properties of each region and confirm the city structure we find.
External trips across regions also take a large proportion of the total traffic
in each region, especially in suburbs. The results could help transportation
policy making and shed light on the way to reveal urban structures with big
data.
|
1310.6637 | A language independent web data extraction using vision based page
segmentation algorithm | cs.IR | Web usage mining is a process of extracting useful information from server
logs, i.e. users' history; it is a process of finding out what users are
looking for on the internet. Some users might be looking only at textual data,
whereas others might be interested in multimedia data. One could retrieve the
data by copying it and pasting it into the relevant document, but this is
tedious and time consuming, and difficult when the data to be retrieved is
plentiful. Extracting structured data from a web page is a challenging problem
due to complicated page structures. Earlier approaches were dependent on the
web page's programming language, since the main problem is to analyze the HTML
source code; they also had to consider scripts, such as JavaScript, and
cascading styles in the HTML files, which makes it difficult for existing
solutions to infer the regularity of the structure of web pages only by
analyzing the tag structures. To overcome this problem we use the VIPS
algorithm, which is language independent. This approach primarily utilizes the
visual features of the webpage to implement web data extraction.
|
1310.6650 | Polar Coded HARQ Scheme with Chase Combining | cs.IT math.IT | A hybrid automatic repeat request scheme with Chase combining (HARQ-CC) of
polar codes is proposed. The existing analysis tools of the underlying
rate-compatible punctured polar (RCPP) codes for additive white Gaussian noise
(AWGN) channels are extended to Rayleigh fading channels. Then, an
approximation bound of the throughput efficiency for the polar coded HARQ-CC
scheme is derived. Utilizing this bound, the parameter configurations of the
proposed scheme can be optimized. Simulation results show that the proposed
HARQ-CC scheme under low-complexity SC decoding is only about $1.0$dB away
from the existing schemes with incremental redundancy (\mbox{HARQ-IR}).
Compared with the polar coded \mbox{HARQ-IR} scheme, the proposed HARQ-CC
scheme requires fewer retransmissions and has the advantage of good
compatibility to other communication techniques.
|
1310.6654 | Pseudo vs. True Defect Classification in Printed Circuit Boards using
Wavelet Features | cs.CV | In recent years, Printed Circuit Boards (PCB) have become the backbone of a
large number of consumer electronic devices leading to a surge in their
production. This has made it imperative to employ automatic inspection systems
to identify manufacturing defects in PCB before they are installed in the
respective systems. An important task in this regard is the classification of
defects as either true or pseudo defects, which decides if the PCB is to be
re-manufactured or not. This work proposes a novel approach to detect most
common defects in the PCBs. The problem has been approached by employing highly
discriminative features based on multi-scale wavelet transform, which are
further boosted by using a kernelized version of the support vector machines
(SVM). A real world printed circuit board dataset has been used for
quantitative analysis. Experimental results demonstrated the efficacy of the
proposed method.
|
1310.6657 | MISO Broadcast Channel with Imperfect and (Un)matched CSIT in the
Frequency Domain: DoF Region and Transmission Strategies | cs.IT math.IT | In this contribution, we focus on a frequency domain two-user
Multiple-Input-Single-Output Broadcast Channel (MISO BC) where the transmitter
has imperfect and (un)matched Channel State Information (CSI) of the two users
in two subbands. We provide an upper-bound to the Degrees-of-Freedom (DoF)
region, which is tight compared to the state of the art. By decomposing the
subbands into subchannels according to the CSI feedback qualities, we interpret
the DoF region as the weighted-sum of that in each subchannel. Moreover, we
study the sum \emph{DoF} loss when employing sub-optimal schemes, namely
Frequency Division Multiple Access (FDMA), Zero-Forcing Beamforming (ZFBF) and
the $S_3^{3/2}$ scheme proposed by Tandon et al. The results show that by
switching among the sub-optimal strategies, we can obtain at least 80% and
66.7% of the optimal sum \emph{DoF} performance for the unmatched and matched
CSIT scenario respectively.
|
1310.6669 | Degrees-of-Freedom Region of MISO-OFDMA Broadcast Channel with Imperfect
CSIT | cs.IT math.IT | This contribution investigates the Degrees-of-Freedom region of a two-user
frequency correlated Multiple-Input-Single-Output (MISO) Broadcast Channel (BC)
with imperfect Channel State Information at the transmitter (CSIT). We assume
that the system consists of an arbitrary number of subbands, denoted as $L$.
Besides, the CSIT state varies across users and subbands. A tight outer-bound
is found as a function of the minimum average CSIT quality between the two
users. Based on the CSIT states across the subbands, the DoF region is
interpreted as a weighted sum of the optimal DoF regions in the scenarios where
the CSIT of both users are perfect, alternatively perfect and not known.
Inspired by the weighted-sum interpretation and identifying the benefit of the
optimal scheme for the unmatched CSIT proposed by Chen et al., we also design a
scheme achieving the upper-bound for the general $L$-subband scenario in
frequency domain BC, thus showing the optimality of the DoF region.
|
1310.6674 | Dealing with Interference in Distributed Large-scale MIMO Systems: A
Statistical Approach | cs.IT math.IT | This paper considers the problem of interference control through the use of
second-order statistics in massive MIMO multi-cell networks. We consider both
the cases of co-located massive arrays and large-scale distributed antenna
settings. We are interested in characterizing the low-rankness of users'
channel covariance matrices, as such a property can be exploited towards
improved channel estimation (so-called pilot decontamination) as well as
interference rejection via spatial filtering. In previous work, it was shown
that massive MIMO channel covariance matrices exhibit a useful finite rank
property that can be modeled via the angular spread of multipath at a MIMO
uniform linear array. This paper extends this result to more general settings
including certain non-uniform arrays and, more surprisingly, to two-dimensional
distributed large-scale arrays. In particular, our model exhibits the dependence
of the signal subspace's richness on the scattering radius around the user
terminal, through a closed form expression. The applications of the
low-rankness covariance property to channel estimation's denoising and
low-complexity interference filtering are highlighted.
|
1310.6675 | Optimization-based Islanding of Power Networks using Piecewise Linear AC
Power Flow | math.OC cs.SY | In this paper, a flexible optimization-based framework for intentional
islanding is presented. The framework decides which transmission lines to
switch in order to split the network while minimizing disruption, minimizing
the amount of load shed, or grouping coherent generators.
linear model of AC power flow, which allows the voltage and reactive power to
be considered directly when designing the islands. Demonstrations on standard
test networks show that solution of the problem provides islands that are
balanced in real and reactive power, satisfy AC power flow laws, and have a
healthy voltage profile.
|
1310.6704 | A Hierarchical Dynamic Programming Algorithm for Optimal Coalition
Structure Generation | cs.MA | We present a new Dynamic Programming (DP) formulation of the Coalition
Structure Generation (CSG) problem based on imposing a hierarchical
organizational structure over the agents. We show the efficiency of this
formulation by deriving DyPE, a new optimal DP algorithm which significantly
outperforms current DP approaches in speed and memory usage. In the classic
case, in which all coalitions are feasible, DyPE has half the memory
requirements of other DP approaches. On graph-restricted CSG, in which
feasibility is restricted by a (synergy) graph, DyPE has either the same or
lower computational complexity depending on the underlying graph structure of
the problem. Our empirical evaluation shows that DyPE outperforms the
state-of-the-art DP approaches by several orders of magnitude in a large range
of graph structures (e.g. for certain scalefree graphs DyPE reduces the memory
requirements by $10^6$ and solves problems that previously needed hours in
minutes).
|
1310.6719 | Two Dimensional Array Imaging with Beam Steered Data | cs.CV cs.IT math.IT stat.AP | This paper discusses different approaches used for millimeter wave imaging of
two-dimensional objects. Imaging of a two dimensional object requires reflected
wave data to be collected across two distinct dimensions. In this paper, we
propose a reconstruction method that uses narrowband waveforms along with two
dimensional beam steering. The beam is steered in azimuthal and elevation
direction, which forms the two distinct dimensions required for the
reconstruction. The reconstruction technique uses an inverse Fourier transform
along with amplitude and phase correction factors. In addition, this
reconstruction technique does not require interpolation of the data in either
wavenumber or spatial domain. Use of the two dimensional beam steering offers
better performance in the presence of noise compared with the existing methods,
such as switched array imaging system. Effects of RF impairments such as
quantization of the phase of beam steering weights and timing jitter which add
to phase noise, are analyzed.
|
1310.6736 | Fast 3D Salient Region Detection in Medical Images using GPUs | cs.CV | Automated detection of visually salient regions is an active area of research
in computer vision. Salient regions can serve as inputs for object detectors as
well as inputs for region based registration algorithms. In this paper we
consider the problem of speeding up computationally intensive bottom-up salient
region detection in 3D medical volumes. The method uses the Kadir-Brady
formulation of saliency. We show that in the vicinity of a salient region,
entropy is a monotonically increasing function of the degree of overlap of a
candidate window with the salient region. This allows us to initialize a sparse
seed-point grid as the set of tentative salient region centers and iteratively
converge to the local entropy maxima, thereby reducing the computation
complexity compared to the Kadir-Brady approach of performing this computation
at every point in the image. We propose two different approaches for achieving
this. The first approach involves evaluating entropy in the four quadrants
around the seed point and iteratively moving in the direction that increases
entropy. The second approach we propose makes use of the mean shift tracking
framework to effect entropy-maximizing moves. Specifically, we propose the use
of a uniform pmf as the target distribution to seek high-entropy regions. We
demonstrate the use of our algorithm on medical volumes for left ventricle
detection in PET images and tumor localization in brain MR sequences.
|
1310.6740 | Active Learning of Linear Embeddings for Gaussian Processes | stat.ML cs.LG | We propose an active learning method for discovering low-dimensional
structure in high-dimensional Gaussian process (GP) tasks. Such problems are
increasingly frequent and important, but have hitherto presented severe
practical difficulties. We further introduce a novel technique for
approximately marginalizing GP hyperparameters, yielding marginal predictions
robust to hyperparameter mis-specification. Our method offers an efficient
means of performing GP regression, quadrature, or Bayesian optimization in
high-dimensional spaces.
|
1310.6753 | Romantic Partnerships and the Dispersion of Social Ties: A Network
Analysis of Relationship Status on Facebook | cs.SI physics.soc-ph | A crucial task in the analysis of on-line social-networking systems is to
identify important people --- those linked by strong social ties --- within an
individual's network neighborhood. Here we investigate this question for a
particular category of strong ties, those involving spouses or romantic
partners. We organize our analysis around a basic question: given all the
connections among a person's friends, can you recognize his or her romantic
partner from the network structure alone? Using data from a large sample of
Facebook users, we find that this task can be accomplished with high accuracy,
but doing so requires the development of a new measure of tie strength that we
term `dispersion' --- the extent to which two people's mutual friends are not
themselves well-connected. The results offer methods for identifying types of
structurally significant people in on-line applications, and suggest a
potential expansion of existing theories of tie strength.
|
1310.6767 | Curiosity Based Exploration for Learning Terrain Models | cs.RO | We present a robotic exploration technique in which the goal is to learn a
visual model and be able to distinguish between different terrains and other
visual components in an unknown environment. We use ROST, a realtime online
spatiotemporal topic modeling framework to model these terrains using the
observations made by the robot, and then use an information theoretic path
planning technique to define the exploration path. We conduct experiments with
aerial view and underwater datasets with millions of observations and varying
path lengths, and find that paths that are biased towards locations with high
topic perplexity produce better terrain models with high discriminative power,
especially with paths of length close to the diameter of the world.
|
1310.6772 | Sockpuppet Detection in Wikipedia: A Corpus of Real-World Deceptive
Writing for Linking Identities | cs.CL cs.CR cs.CY | This paper describes the corpus of sockpuppet cases we gathered from
Wikipedia. A sockpuppet is an online user account created with a fake identity
for the purpose of covering abusive behavior and/or subverting the editing
regulation process. We used a semi-automated method for crawling and curating a
dataset of real sockpuppet investigation cases. To the best of our knowledge,
this is the first corpus available on real-world deceptive writing. We describe
the process for crawling the data and some preliminary results that can be used
as baseline for benchmarking research. The dataset will be released under a
Creative Commons license from our project website: http://docsig.cis.uab.edu.
|
1310.6775 | Durkheim Project Data Analysis Report | cs.AI cs.CL cs.LG | This report describes the suicidality prediction models created under the
DARPA DCAPS program in association with the Durkheim Project
[http://durkheimproject.org/]. The models were built primarily from
unstructured text (free-format clinician notes) for several hundred patient
records obtained from the Veterans Health Administration (VHA). The models were
constructed using a genetic programming algorithm applied to bag-of-words and
bag-of-phrases datasets. The influence of additional structured data was
explored but was found to be minor. Given the small dataset size,
classification between cohorts was high fidelity (98%). Cross-validation
suggests these models are reasonably predictive, with an accuracy of 50% to 69%
on five rotating folds, with ensemble averages of 58% to 67%. One particularly
noteworthy result is that word-pairs can dramatically improve classification
accuracy; but this is the case only when one of the words in the pair is
already known to have a high predictive value. By contrast, the set of all
possible word-pairs does not improve on a simple bag-of-words model.
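One plausible reading of the anchored word-pair idea can be sketched as follows; the pairing scope (all co-occurring pairs within a note) and the anchor set are assumptions, not the report's exact feature definition:

```python
def anchored_pairs(tokens, anchors):
    """Bag of word-pairs restricted to pairs containing an 'anchor' word,
    i.e. a word already known to be individually predictive."""
    pairs = set()
    for i, w in enumerate(tokens):
        for x in tokens[i + 1:]:
            if w in anchors or x in anchors:
                pairs.add((w, x))
    return pairs

# Hypothetical clinician-note fragment with one known high-value word.
note = "patient reports hopeless mood stable".split()
print(sorted(anchored_pairs(note, anchors={"hopeless"})))
```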
|
1310.6780 | Mining Maximal Cliques from an Uncertain Graph | cs.DS cs.DB | We consider mining dense substructures (maximal cliques) from an uncertain
graph, which is a probability distribution on a set of deterministic graphs.
For parameter 0 < {\alpha} < 1, we present a precise definition of an
{\alpha}-maximal clique in an uncertain graph. We present matching upper and
lower bounds on the number of {\alpha}-maximal cliques possible within an
uncertain graph. We present an algorithm to enumerate {\alpha}-maximal cliques
in an uncertain graph whose worst-case runtime is near-optimal, and an
experimental evaluation showing the practical utility of the algorithm.
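Under the usual edge-independence assumption, the probability that a vertex set forms a clique is the product of its edge-existence probabilities; an α-clique check can then be sketched as below (α-maximality additionally requires that no proper superset passes this check):

```python
from itertools import combinations

def clique_probability(edge_prob, nodes):
    """Probability that `nodes` form a clique in the uncertain graph,
    assuming edges exist independently (a standard modeling assumption)."""
    p = 1.0
    for u, v in combinations(sorted(nodes), 2):
        p *= edge_prob.get(frozenset((u, v)), 0.0)
    return p

def is_alpha_clique(edge_prob, nodes, alpha):
    """An alpha-clique: the set is a clique with probability >= alpha."""
    return clique_probability(edge_prob, nodes) >= alpha

# Toy uncertain triangle with per-edge existence probabilities.
edge_prob = {frozenset(("a", "b")): 0.9,
             frozenset(("a", "c")): 0.8,
             frozenset(("b", "c")): 0.9}
print(clique_probability(edge_prob, {"a", "b", "c"}))  # product of the three edge probabilities (~0.648)
```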
|
1310.6795 | Downlink Multi-Antenna Heterogeneous Cellular Network with Load
Balancing | cs.IT cs.NI math.IT | We model and analyze heterogeneous cellular networks with multiple antenna
BSs (multi-antenna HetNets) with K classes or tiers of base stations (BSs),
which may differ in terms of transmit power, deployment density, number of
transmit antennas, number of users served, transmission scheme, and path loss
exponent. We show that the cell selection rules in multi-antenna HetNets may
differ significantly from the single-antenna HetNets due to the possible
differences in multi-antenna transmission schemes across tiers. While it is
challenging to derive exact cell selection rules even for maximizing
signal-to-interference-plus-noise ratio (SINR) at the receiver, we show that
adding an appropriately chosen tier-dependent cell selection bias in the
received power yields a close approximation. Assuming arbitrary selection bias
for each tier, simple expressions for downlink coverage and rate are derived.
For coverage maximization, the required selection bias for each tier is given
in closed form. Due to this connection with biasing, multi-antenna HetNets may
balance load more naturally across tiers in certain regimes compared to
single-antenna HetNets, where a large cell selection bias is often needed to
offload traffic to small cells.
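Biased cell selection can be sketched with hypothetical tier parameters: the serving tier maximizes bias-weighted average received power, and a large enough small-cell bias offloads the user from the macro tier (the numbers below are illustrative, not from the paper):

```python
def select_tier(tiers):
    """Pick the serving tier by maximizing biased average received power:
    bias * transmit power * distance^(-path-loss exponent)."""
    def biased_rx_power(t):
        return t["bias"] * t["power"] * t["dist"] ** (-t["alpha"])
    return max(tiers, key=biased_rx_power)["name"]

# Hypothetical macro vs. small-cell tiers; the bias offloads to the small cell.
tiers = [
    {"name": "macro", "power": 40.0, "dist": 200.0, "alpha": 3.5, "bias": 1.0},
    {"name": "small", "power": 1.0,  "dist": 60.0,  "alpha": 4.0, "bias": 8.0},
]
print(select_tier(tiers))  # -> small (the macro would win without the bias)
```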
|
1310.6808 | Gender Classification Using Gradient Direction Pattern | cs.CV | A novel methodology for gender classification is presented in this paper. It
extracts features from local regions of a face using gray-level intensity
differences. The facial area is divided into sub-regions, and the GDP
histograms extracted from those regions are concatenated into a single vector
to represent the face. The classification accuracy obtained using a support
vector machine outperforms that of traditional feature descriptors for gender
classification. The method is evaluated on images collected from the FERET
database and achieves very high accuracy.
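The regional-histogram-concatenation step can be sketched as follows; the GDP code computation itself is omitted, and the bin count and region layout here are assumptions:

```python
def region_histogram(region, n_bins, max_val=255):
    """Histogram of pattern codes within one sub-region of the face."""
    hist = [0] * n_bins
    for v in region:
        hist[min(v * n_bins // (max_val + 1), n_bins - 1)] += 1
    return hist

def concatenated_descriptor(regions, n_bins=8):
    """Face descriptor: per-region histograms joined into a single vector."""
    vec = []
    for r in regions:
        vec.extend(region_histogram(r, n_bins))
    return vec

# Two hypothetical sub-regions of pattern codes (0-255 range assumed).
regions = [[0, 10, 200], [255, 128, 64]]
print(len(concatenated_descriptor(regions)))  # 2 regions x 8 bins -> 16
```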
|
1310.6817 | Systematic Error-Correcting Codes for Rank Modulation | cs.IT math.IT | The rank-modulation scheme has been recently proposed for efficiently storing
data in nonvolatile memories. Error-correcting codes are essential for rank
modulation; however, existing results have been limited. In this work we
explore a new approach, \emph{systematic error-correcting codes for rank
modulation}. Systematic codes have the benefits of enabling efficient
information retrieval and potentially supporting more efficient encoding and
decoding procedures. We study systematic codes for rank modulation under
Kendall's $\tau$-metric as well as under the $\ell_\infty$-metric.
In Kendall's $\tau$-metric we present $[k+2,k,3]$-systematic codes for
correcting one error, which have optimal rates, unless systematic perfect codes
exist. We also study the design of multi-error-correcting codes, and provide
two explicit constructions, one resulting in $[n+1,k+1,2t+2]$ systematic codes
with redundancy at most $2t+1$. We use non-constructive arguments to show the
existence of $[n,k,n-k]$-systematic codes for general parameters. Furthermore,
we prove that for rank modulation, systematic codes achieve the same capacity
as general error-correcting codes.
Finally, in the $\ell_\infty$-metric we construct two $[n,k,d]$ systematic
multi-error-correcting codes, the first for the case of $d=O(1)$, and the
second for $d=\Theta(n)$. In the latter case, the codes have the same
asymptotic rate as the best codes currently known in this metric.
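For reference, Kendall's τ distance between two permutations (the number of pairwise disagreements, equal to the minimum number of adjacent transpositions turning one into the other) can be computed as:

```python
def kendall_tau_distance(p, q):
    """Kendall's tau distance between permutations p and q: the number of
    pairwise disagreements, i.e. the minimum number of adjacent
    transpositions needed to turn p into q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]  # p re-expressed in q's coordinate system
    return sum(1 for i in range(len(r))
               for j in range(i + 1, len(r)) if r[i] > r[j])

print(kendall_tau_distance([2, 1, 0], [0, 1, 2]))  # -> 3
```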
|
1310.6833 | New Proximity Estimate for Incremental Update of Non-uniformly
Distributed Clusters | cs.DB | The conventional clustering algorithms mine static databases and generate a
set of patterns in the form of clusters. Many real life databases keep growing
incrementally. For such dynamic databases, the patterns extracted from the
original database become obsolete. Thus the conventional clustering algorithms
are not suitable for incremental databases due to lack of capability to modify
the clustering results in accordance with recent updates. In this paper, the
author proposes a new incremental clustering algorithm called CFICA (Cluster
Feature-Based Incremental Clustering Approach) to handle numerical data and
suggests a new proximity metric called Inverse Proximity
Estimate (IPE) which considers the proximity of a data point to a cluster
representative as well as its proximity to a farthest point in its vicinity.
CFICA makes use of the proposed proximity metric to determine the membership of
a data point into a cluster.
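A membership decision combining both distances might be sketched as below; the additive combination is an illustrative stand-in, since the abstract does not give the exact IPE formula:

```python
import math

def assign_cluster(point, reps, farthest_of):
    """Assign `point` to the cluster minimizing a combined score of
    (a) its distance to the cluster representative and (b) its distance to
    the farthest point in its vicinity within that cluster. The additive
    combination here is hypothetical, not the paper's exact IPE metric."""
    def score(name):
        return math.dist(point, reps[name]) + math.dist(point, farthest_of[name])
    return min(reps, key=score)

reps = {"A": (0.0, 0.0), "B": (10.0, 0.0)}          # cluster representatives
farthest_of = {"A": (1.0, 1.0), "B": (12.0, 0.0)}   # farthest point per vicinity
print(assign_cluster((1.0, 0.0), reps, farthest_of))  # -> A
```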
|
1310.6870 | Joint Wireless Information and Energy Transfer in a K-User MIMO
Interference Channel | cs.IT math.IT | Recently, joint wireless information and energy transfer (JWIET) methods have
been proposed to relieve the battery limitation of wireless devices. However,
the JWIET in a general K-user MIMO interference channel (IFC) has been
unexplored so far. In this paper, we investigate for the first time the JWIET
in K-user MIMO IFC, in which receivers either decode the incoming information
data (information decoding, ID) or harvest the RF energy (energy harvesting,
EH). In the K-user IFC, we consider three different scenarios according to the
receiver mode -- i) multiple EH receivers and a single ID receiver, ii)
multiple IDs and a single EH, and iii) multiple IDs and multiple EHs. For all
scenarios, we have found a common necessary condition of the optimal
transmission strategy and, accordingly, developed the transmission strategy
that satisfies the common necessary condition, in which all the transmitters
transferring energy exploit a rank-one energy beamforming. Furthermore, we have
also proposed an iterative algorithm to optimize the covariance matrices of the
transmitters that transfer information and the powers of the energy beamforming
transmitters simultaneously, and identified the corresponding achievable
rate-energy tradeoff region. Finally, we have shown that by selecting EH
receivers according to their signal-to-leakage-and-harvested-energy ratio
(SLER), we can improve the achievable rate-energy region further.
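A rank-one energy beam points along the channel's dominant (right) singular direction; a real-valued toy sketch via power iteration is below (the paper's channels are complex MIMO, and this is not its optimization algorithm):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def rank_one_energy_beam(H, iters=200):
    """Direction of a rank-one energy beam: the dominant right singular
    vector of the channel matrix H, found by power iteration on H^T H."""
    n = len(H[0])
    G = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    for _ in range(iters):
        w = matvec(G, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

print(rank_one_energy_beam([[2.0, 0.0], [0.0, 1.0]]))  # ~ [1.0, 0.0]
```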
|
1310.6876 | Application of Fourier and Wavelet Transform for analysing 300 years
Sunspot numbers to Explain the Solar Cycles | cs.CE | In this paper the Fourier transform and the wavelet transform are applied to
the most recent 300 years of sunspot numbers to explain the solar cycles. A
parallel study of Fourier and wavelet analysis is carried out, and we observe
that the wavelet analysis yields better results for sunspot-number analysis.
With this tool we are able to show the various minima and maxima of the solar
cycles in recent ages. The exact periodicity, as well as other possible
periodicities in the cyclic phenomenon of sunspot activity, is determined.
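Extracting a dominant period from a yearly series via a Fourier periodogram can be sketched as follows, on a synthetic record with the well-known ~11-year solar cycle built in (this brute-force DFT is illustrative, not the paper's toolchain):

```python
import math

def dominant_period(x, dt=1.0):
    """Dominant cycle length via a brute-force DFT periodogram: return the
    period whose Fourier magnitude is largest (mean removed, DC excluded)."""
    n = len(x)
    mean = sum(x) / n
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum((x[t] - mean) * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum((x[t] - mean) * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return n * dt / best_k

# Synthetic 'sunspot' record: an 11-year cycle sampled yearly for 297 years.
series = [50 + 40 * math.sin(2 * math.pi * t / 11) for t in range(297)]
print(dominant_period(series))  # -> 11.0
```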
|
1310.6925 | Electric Vehicle Charging Station Placement: Formulation, Complexity,
and Solutions | cs.SY math.OC | To enhance environmental sustainability, many countries will electrify their
transportation systems in their future smart city plans, so the number of
electric vehicles (EVs) running in a city will grow significantly. Among the
many ways to recharge EV batteries, charging stations are expected to be the
main source of energy. The locations of charging stations are critical;
they should not only be pervasive enough such that an EV anywhere can easily
access a charging station within its driving range, but also widely spread so
that EVs can cruise around the whole city upon being re-charged. Based on these
new perspectives, we formulate the Electric Vehicle Charging Station Placement
Problem (EVCSPP) in this paper. We prove that the problem is non-deterministic
polynomial-time hard. We also propose four solution methods to tackle EVCSPP
and evaluate their performance on various artificial and practical cases. As
verified by the simulation results, the methods have their own characteristics
and they are suitable for different situations depending on the requirements
for solution quality, algorithmic efficiency, problem size, nature of the
algorithm, and existence of system prerequisites.
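One simple baseline for such placement problems is a greedy set-cover-style heuristic, sketched below on a hypothetical 1-D city (this is a generic baseline, not necessarily one of the paper's four methods):

```python
def greedy_placement(sites, demand_points, radius, dist, budget):
    """Greedy heuristic: repeatedly open the candidate site that covers the
    most still-uncovered demand points within the EV driving-range radius."""
    uncovered = set(demand_points)
    chosen = []
    for _ in range(budget):
        best = max(sites,
                   key=lambda s: sum(1 for d in uncovered if dist(s, d) <= radius))
        chosen.append(best)
        uncovered -= {d for d in uncovered if dist(best, d) <= radius}
        if not uncovered:
            break
    return chosen, uncovered

# Hypothetical 1-D city: candidate sites and demand points on a line.
chosen, left = greedy_placement([0, 5, 10], [0, 1, 4, 5, 6, 9, 10],
                                radius=1, dist=lambda a, b: abs(a - b), budget=3)
print(chosen, left)  # site 5 covers the most demand and is opened first
```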
|
1310.6938 | Optimal Asymmetric Binary Quantization for Estimation Under
Symmetrically Distributed Noise | cs.IT math.IT | Estimation of a location parameter based on noisy and binary quantized
measurements is considered in this letter. We study the behavior of the
Cramer-Rao bound as a function of the quantizer threshold for different
symmetric unimodal noise distributions. We show that, in some cases, the
intuitive choice of threshold position given by the symmetry of the problem,
placing the threshold on the true parameter value, can lead to locally worst
estimation performance.
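The Cramer-Rao bound for one binary-quantized sample has the standard closed form F(1-F)/f^2 at the offset τ-θ; a sketch under Gaussian noise, where the symmetric threshold τ = θ is in fact best (the letter's point is that for some other unimodal noises it can be locally worst):

```python
import math

def gauss_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def gauss_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def binary_crb(tau, theta, pdf=gauss_pdf, cdf=gauss_cdf):
    """Cramer-Rao bound (per sample) for estimating theta from the binary
    measurement 1{theta + noise > tau}: F(1-F)/f^2 evaluated at tau - theta."""
    z = tau - theta
    F = cdf(z)
    return F * (1 - F) / pdf(z) ** 2

# Under standard Gaussian noise, placing the threshold on theta minimizes the CRB.
print(binary_crb(0.0, 0.0) < binary_crb(0.5, 0.0))  # -> True
```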
|