id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1312.1904 | The PageRank Problem, Multi-Agent Consensus and Web Aggregation -- A
Systems and Control Viewpoint | cs.SY | PageRank is an algorithm introduced in 1998 and used by the Google Internet
search engine. It assigns a numerical value to each element of a set of
hyperlinked documents (that is, web pages) within the World Wide Web with the
purpose of measuring the relative importance of the page. The key idea in the
algorithm is to give a higher PageRank value to web pages which are visited
often by web surfers. On its website, Google describes PageRank as follows:
``PageRank reflects our view of the importance of web pages by considering more
than 500 million variables and 2 billion terms. Pages that are considered
important receive a higher PageRank and are more likely to appear at the top of
the search results." Today PageRank is a paradigmatic problem of great interest
in various areas, such as information technology, bibliometrics, biology, and
e-commerce, where objects are often ranked in order of importance. This article
considers a distributed randomized approach based on techniques from the area
of Markov chains using a graph representation consisting of nodes and links. We
also outline connections with other problems of current interest to the systems
and control community, which include ranking of control journals, consensus of
multi-agent systems, and aggregation-based techniques.
|
1312.1909 | From Maxout to Channel-Out: Encoding Information on Sparse Pathways | cs.NE cs.CV cs.LG stat.ML | Motivated by an important insight from neural science, we propose a new
framework for understanding the success of the recently proposed "maxout"
networks. The framework is based on encoding information on sparse pathways and
recognizing the correct pathway at inference time. Elaborating further on this
insight, we propose a novel deep network architecture, called "channel-out"
network, which takes a much better advantage of sparse pathway encoding. In
channel-out networks, pathways are not only formed a posteriori, but they are
also actively selected according to the inference outputs from the lower
layers. From a mathematical perspective, channel-out networks can represent a
wider class of piece-wise continuous functions, thereby endowing the network
with more expressive power than that of maxout networks. We test our
channel-out networks on several well-known image classification benchmarks,
setting new state-of-the-art performance on CIFAR-100 and STL-10, which
represent some of the "harder" image classification benchmarks.
|
1312.1913 | Adapting Binary Information Retrieval Evaluation Metrics for
Segment-based Retrieval Tasks | cs.IR | This report describes metrics for the evaluation of the effectiveness of
segment-based retrieval based on existing binary information retrieval metrics.
This metrics are described in the context of a task for the hyperlinking of
video segments. This evaluation approach re-uses existing evaluation measures
from the standard Cranfield evaluation paradigm. Our adaptation approach can in
principle be used with any kind of effectiveness measure that uses binary
relevance, and for other segment-baed retrieval tasks. In our video
hyperlinking setting, we use precision at a cut-off rank n and mean average
precision.
|
1312.1915 | Long-Lived Distributed Relative Localization of Robot Swarms | cs.RO | This paper studies the problem of having mobile robots in a multi-robot
system maintain an estimate of the relative position and relative orientation
of near-by robots in the environment. This problem is studied in the context of
large swarms of simple robots which are capable of measuring only the distance
to near-by robots.
We present two distributed localization algorithms with different trade-offs
between their computational complexity and their coordination requirements. The
first algorithm does not require the robots to coordinate their motion. It
relies on a non-linear least squares based strategy to allow robots to compute
the relative pose of near-by robots. The second algorithm borrows tools from
distributed computing theory to coordinate which robots must remain stationary
and which robots are allowed to move. This coordination allows the robots to
use standard trilateration techniques to compute the relative pose of near-by
robots. Both algorithms are analyzed theoretically and validated through
simulations.
|
1312.1918 | Cut-Set Bounds for Networks with Zero-Delay Nodes | cs.IT math.IT | In a network, a node is said to incur a delay if its encoding of each
transmitted symbol involves only its received symbols obtained before the time
slot in which the transmitted symbol is sent (hence the transmitted symbol sent
in a time slot cannot depend on the received symbol obtained in the same time
slot). A node is said to incur no delay if its received symbol obtained in a
time slot is available for encoding its transmitted symbol sent in the same
time slot. Under the classical model, every node in a discrete memoryless
network (DMN) incurs a unit delay, and the capacity region of the DMN satisfies
the well-known cut-set outer bound. In this paper, we propose a generalized
model for the DMN where some nodes may incur no delay. Under our generalized
model, we obtain a new cut-set outer bound, which is proved to be tight for
some two-node DMN and is shown to subsume an existing cut-set bound for the
causal relay network. In addition, we establish under the generalized model
another cut-set outer bound on the positive-delay region -- the set of
achievable rate tuples under the constraint that every node incurs a delay. We
use the cut-set bound on the positive-delay region to show that for some
two-node DMN under the generalized model, the positive-delay region is strictly
smaller than the capacity region.
|
1312.1931 | Multi-frame denoising of high speed optical coherence tomography data
using inter-frame and intra-frame priors | cs.CV | Optical coherence tomography (OCT) is an important interferometric diagnostic
technique which provides cross-sectional views of the subsurface microstructure
of biological tissues. However, the imaging quality of high-speed OCT is
limited due to the large speckle noise. To address this problem, this paper
proposes a multi-frame algorithmic method to denoise OCT volume.
Mathematically, we build an optimization model which forces the temporally
registered frames to be low rank, and the gradient in each frame to be sparse,
under logarithmic image formation and noise variance constraints. Besides, a
convex optimization algorithm based on the augmented Lagrangian method is
derived to solve the above model. The results reveal that our approach
outperforms the other methods in terms of both speckle noise suppression and
crucial detail preservation.
|
1312.1957 | Uplink Interference Analysis for Two-tier Cellular Networks with Diverse
Users under Random Spatial Patterns | cs.NI cs.IT math.IT | Multi-tier architecture improves the spatial reuse of radio spectrum in
cellular networks, but it introduces complicated heterogeneity in the spatial
distribution of transmitters, which brings new challenges in interference
analysis. In this work, we present a stochastic geometric model to evaluate the
uplink interference in a two-tier network considering multi-type users and base
stations. Each type of tier-1 users and tier-2 base stations are modeled as
independent homogeneous Poisson point processes, and tier-2 users are modeled
as locally non-homogeneous clustered Poisson point processes centered at tier-2
base stations. By applying a superposition-aggregation-superposition approach,
we quantify the interference at both tiers. Our model is also able to capture
the impact of two types of exclusion regions, where either tier-2 base stations
or tier-2 users are restricted in order to avoid cross-tier interference. As an
important application of this analytical model, an intensity planning scenario
is investigated, in which we aim to maximize the total income of the network
operator with respect to the intensities of tier-2 cells, under constraints on
the outage probabilities of tier-1 and tier-2 users. The result of our
interference analysis suggests that this maximization can be converted to a
standard convex optimization problem. Finally, numerical studies further
demonstrate the correctness of our analysis.
|
1312.1969 | PSN: Portfolio Social Network | cs.SI | In this paper we present a web-based information system which is a portfolio
social network (PSN) that provides solutions to recruiters and job seekers. The
proposed system enables users to create portfolios so that he/she can add his
specializations with piece of code, if any, specifically for software
engineers, which is accessible online. The unique feature of the system is to
enable the recruiters to quickly view the prominent skills of the users. A
comparative analysis of the proposed system with the state of the art systems
is presented. The comparative study reveals that the proposed system has
advanced functionalities.
|
1312.1971 | Modeling Suspicious Email Detection using Enhanced Feature Selection | cs.AI | The paper presents a suspicious email detection model which incorporates
enhanced feature selection. In the paper we proposed the use of feature
selection strategies along with classification technique for terrorists email
detection. The presented model focuses on the evaluation of machine learning
algorithms such as decision tree (ID3), logistic regression, Na\"ive Bayes
(NB), and Support Vector Machine (SVM) for detecting emails containing
suspicious content. In the literature, various algorithms achieved good
accuracy for the desired task. However, the results achieved by those
algorithms can be further improved by using appropriate feature selection
mechanisms. We have identified the use of a specific feature selection scheme
that improves the performance of the existing algorithms.
|
1312.1986 | Approximating the Stationary Probability of a Single State in a Markov
chain | cs.DS cs.SI | In this paper, we present a novel iterative Monte Carlo method for
approximating the stationary probability of a single state of a positive
recurrent Markov chain. We utilize the characterization that the stationary
probability of a state $i$ is inversely proportional to the expected return
time of a random walk beginning at $i$. Our method obtains an
$\epsilon$-multiplicative close estimate with probability greater than $1 -
\alpha$ using at most $\tilde{O}\left(t_{\text{mix}} \ln(1/\alpha) / \pi_i
\epsilon^2 \right)$ simulated random walk steps on the Markov chain across all
iterations, where $t_{\text{mix}}$ is the standard mixing time and $\pi_i$ is
the stationary probability. In addition, the estimate at each iteration is
guaranteed to be an upper bound with high probability, and is decreasing in
expectation with the iteration count, allowing us to monitor the progress of
the algorithm and design effective termination criteria. We propose a
termination criteria which guarantees a $\epsilon (1 + 4 \ln(2)
t_{\text{mix}})$ multiplicative error performance for states with stationary
probability larger than $\Delta$, while providing an additive error for states
with stationary probability less than $\Delta \in (0,1)$. The algorithm along
with this termination criteria uses at most
$\tilde{O}\left(\frac{\ln(1/\alpha)}{\epsilon^2}
\min\left(\frac{t_{\text{mix}}}{\pi_i}, \frac{1}{\epsilon
\Delta}\right)\right)$ simulated random walk steps, which is bounded by a
constant with respect to the Markov Chain. We provide a tight analysis of our
algorithm based on a locally weighted variant of the mixing time. Our results
naturally extend for countably infinite state space Markov chains via Lyapunov
function analysis.
|
1312.1993 | Enhancing resilience of interdependent networks by healing | physics.soc-ph cond-mat.stat-mech cs.SI | Interdependent networks are characterized by two kinds of interactions: The
usual connectivity links within each network and the dependency links coupling
nodes of different networks. Due to the latter links such networks are known to
suffer from cascading failures and catastrophic breakdowns. When modeling these
phenomena, usually one assumes that a fraction of nodes gets damaged in one of
the networks, which is followed possibly by a cascade of failures. In real life
the initiating failures do not occur at once and effort is made replace the
ties eliminated due to the failing nodes. Here we study a dynamic extension of
the model of interdependent networks and introduce the possibility of link
formation with a probability w, called healing, to bridge non-functioning nodes
and enhance network resilience. A single random node is removed, which may
initiate an avalanche. After each removal step healing sets in resulting in a
new topology. Then a new node fails and the process continues until the giant
component disappears either in a catastrophic breakdown or in a smooth
transition. Simulation results are presented for square lattices as starting
networks under random attacks of constant intensity. We find that the shift in
the position of the breakdown has a power-law scaling as a function of the
healing probability with an exponent close to 1. Below a critical healing
probability, catastrophic cascades form and the average degree of surviving
nodes decreases monotonically, while above this value there are no macroscopic
cascades and the average degree has first an increasing character and decreases
only at the very late stage of the process. These findings facilitate to plan
intervention in case of crisis situation by describing the efficiency of
healing efforts needed to suppress cascading failures.
|
1312.2039 | Active Classification for POMDPs: a Kalman-like State Estimator | cs.SY math.OC | The problem of state tracking with active observation control is considered
for a system modeled by a discrete-time, finite-state Markov chain observed
through conditionally Gaussian measurement vectors. The measurement model
statistics are shaped by the underlying state and an exogenous control input,
which influence the observations' quality. Exploiting an innovations approach,
an approximate minimum mean-squared error (MMSE) filter is derived to estimate
the Markov chain system state. To optimize the control strategy, the associated
mean-squared error is used as an optimization criterion in a partially
observable Markov decision process formulation. A stochastic dynamic
programming algorithm is proposed to solve for the optimal solution. To enhance
the quality of system state estimates, approximate MMSE smoothing estimators
are also derived. Finally, the performance of the proposed framework is
illustrated on the problem of physical activity detection in wireless body
sensing networks. The power of the proposed framework lies within its ability
to accommodate a broad spectrum of active classification applications including
sensor management for object classification and tracking, estimation of sparse
signals and radar scheduling.
|
1312.2045 | Joint Spatial Division and Multiplexing for mm-Wave Channels | cs.IT math.IT | Massive MIMO systems are well-suited for mm-Wave communications, as large
arrays can be built with reasonable form factors, and the high array gains
enable reasonable coverage even for outdoor communications. One of the main
obstacles for using such systems in frequency-division duplex mode, namely the
high overhead for the feedback of channel state information (CSI) to the
transmitter, can be mitigated by the recently proposed JSDM (Joint Spatial
Division and Multiplexing) algorithm. In this paper we analyze the performance
of this algorithm in some realistic propagation channels that take into account
the partial overlap of the angular spectra from different users, as well as the
sparsity of mm-Wave channels. We formulate the problem of user grouping for two
different objectives, namely maximizing spatial multiplexing, and maximizing
total received power, in a graph-theoretic framework. As the resulting problems
are numerically difficult, we proposed (sub optimum) greedy algorithms as
efficient solution methods. Numerical examples show that the different
algorithms may be superior in different settings.We furthermore develop a new,
"degenerate" version of JSDM that only requires average CSI at the transmitter,
and thus greatly reduces the computational burden. Evaluations in propagation
channels obtained from ray tracing results, as well as in measured outdoor
channels show that this low-complexity version performs surprisingly well in
mm-Wave channels.
|
1312.2047 | Diagnosis of Switching Systems using Hybrid Bond Graph | cs.SY | Hybrid Bond Graph (HBG) is a Bond Graph-based modelling approach which
provides an effective tool not only for dynamic modeling but also for fault
detection and isolation (FDI) of switching systems. Bond graph (BG) has been
proven useful for FDI for continuous systems. In addition, BG provides the
causal relations between systems variables which allow FDI algorithms to be
developed systematically from the graph. There are many methods that exploit
structural relations and functional redundancy in the system model to find
efficient solutions for the residual generation and residual evaluation steps
in FDI of switching systems. This paper describes two different techniques,
quantitative and qualitative, based on common modelling approach that employs
HBG. In quantitative approach, global analytical redundancy relationships
(GARRs) are derived from the HBG model with a specified causality assignment
procedure. GARRs describe the system behaviour at all of its operating modes.
In qualitative approach, functional redundancy can be captured by a Temporal
Causal Graph (TCG), a directed graph that may include temporal information
|
1312.2048 | The False Premises and Promises of Bitcoin | cs.CE q-fin.GN | Designed to compete with fiat currencies, bitcoin proposes it is a
crypto-currency alternative. Bitcoin makes a number of false claims, including:
solving the double-spending problem is a good thing; bitcoin can be a reserve
currency for banking; hoarding equals saving, and that we should believe
bitcoin can expand by deflation to become a global transactional currency
supply. Bitcoin's developers combine technical implementation proficiency with
ignorance of currency and banking fundamentals. This has resulted in a failed
attempt to change finance. A set of recommendations to change finance are
provided in the Afterword: Investment/venture banking for the masses; Venture
banking to bring back what investment banks once were; Open-outcry exchange for
all CDS contracts; Attempting to develop CDS type contracts on investments in
startup and existing enterprises; and Improving the connection between startup
tech/ideas, business organization and investment.
|
1312.2060 | Blind Identification via Lifting | cs.SY | Blind system identification is known to be an ill-posed problem and without
further assumptions, no unique solution is at hand. In this contribution, we
are concerned with the task of identifying an ARX model from only output
measurements. We phrase this as a constrained rank minimization problem and
present a relaxed convex formulation to approximate its solution. To make the
problem well posed we assume that the sought input lies in some known linear
subspace.
|
1312.2061 | Region and Location Based Indexing and Retrieval of MR-T2 Brain Tumor
Images | cs.CV cs.IR | In this paper, region based and location based retrieval systems have been
implemented for retrieval of MR-T2 axial 2-D brain images. This is done by
extracting and characterizing the tumor portion of 2-D brain slices by use of a
suitable threshold computed over the entire image. Indexing and retrieval is
then performed by computing texture features based on gray-tone
spatial-dependence matrix of segmented regions. A Hash structure is used to
index all images. A combined index is adopted to point to all similar images in
terms of the texture features. At query time, only those images that are in the
same hash bucket as those of the queried image are compared for similarity,
thus reducing the search space and time.
|
1312.2062 | A Novel Hierarchical Ant based QoS aware Intelligent Routing Scheme for
MANETS | cs.NI cs.AI | MANET is a collection of mobile devices with no centralized control and no
pre-existing infrastructures. Due to the nodal mobility, supporting QoS during
routing in this type of networks is a very challenging task. To tackle this
type of overhead many routing algorithms with clustering approach have been
proposed. Clustering is an effective method for resource management regarding
network performance, routing protocol design, QoS etc. Most of the flat network
architecture contains homogeneous capacity of nodes but in real time nodes are
with heterogeneous capacity and transmission power. Hierarchical routing
provides routing through this kind of heterogeneous nodes. Here, routes can be
recorded hierarchically, across clusters to increase routing flexibility.
Besides this, it increases scalability and robustness of routes. In this paper,
a novel ant based QoS aware routing is proposed on a three level hierarchical
cluster based topology in MANET which will be more scalable and efficient
compared to flat architecture and will give better throughput.
|
1312.2063 | The Minimal Compression Rate for Similarity Identification | cs.IT cs.DB cs.IR math.IT | Traditionally, data compression deals with the problem of concisely
representing a data source, e.g. a sequence of letters, for the purpose of
eventual reproduction (either exact or approximate). In this work we are
interested in the case where the goal is to answer similarity queries about the
compressed sequence, i.e. to identify whether or not the original sequence is
similar to a given query sequence. We study the fundamental tradeoff between
the compression rate and the reliability of the queries performed on compressed
data. For i.i.d. sequences, we characterize the minimal compression rate that
allows query answers, that are reliable in the sense of having a vanishing
false-positive probability, when false negatives are not allowed. The result is
partially based on a previous work by Ahlswede et al., and the inherently
typical subset lemma plays a key role in the converse proof. We then
characterize the compression rate achievable by schemes that use lossy source
codes as a building block, and show that such schemes are, in general,
suboptimal. Finally, we tackle the problem of evaluating the minimal
compression rate, by converting the problem to a sequence of convex programs
that can be solved efficiently.
|
1312.2065 | Implementation of CRISP Methodology for ERP Systems | cs.DB | ERP systems contain huge amounts of data related to the actual execution of
business processes. These systems have a particular way of recording activities
which results in an unclear display of business processes in event logs.
Several works have been conducted on ERP systems, most of them focusing on the
development of new algorithms for the automatic discovery of business
processes. We focused on addressing issues like, how can organizations with ERP
systems apply process mining for analyzing their business processes in order to
improve them. The data handling aspect of ERP systems contrasts with those of
BPMS or workflow based systems, whose systematical storage of events
facilitates the application of process mining techniques. CRISP-DM has emerged
as the de facto standard for developing data mining and knowledge discovery
projects. Successful data mining requires three families of analytical
capabilities namely reporting, classification and forecasting. A data miner
uses more than one analytical method to get the best results. The objective of
this paper is to improve the usability and understandability of process mining
techniques, by implementing CRISP-DM methodology for their application in ERP
contexts, detailed in terms of specific implementation tools and step by step
coordination. Our study confirms that data discovery from ERP system improves
strategic and operational decision making.
|
1312.2069 | Applying the Apriori algorithm for investigating the relationships
between demographic characteristics of Iranian top 100 enterprises and the
strcture of their commercial website | cs.DB cs.CY | This study was conducted with the main aim to investigate the relationships
between demographic characteristics of companies and the facilities required
for their commercial websites. The research samples are the top 100 Iranian
companies as ranked by the Iranian Industrial Management Institute; the method
applied is datamining, using Association Rules throught the Apriori algorithms.
To collect the data, an aithor-modified check list has been utilized, coverig
the three areas of faclities within commercial websites, i.e. fundamental,
information-providing, and service-delivering facilities. having extracted the
association rules between the mentioned two sets of variables, 68 rules with a
confidence rate of 90% and above were obtained, and based on their significance
were classified into two groups of must-have and should-have requirements; a
recommended package of facilities is hitherto offered to other companies which
intend to enter e-commerce through their commerical websites with regards to
each company's unique demographic characteristics.
|
1312.2070 | A message-passing approach for threshold models of behavior in networks | physics.soc-ph cs.SI | We study a simple model of how social behaviors, like trends and opinions,
propagate in networks where individuals adopt the trend when they are informed
by threshold $T$ neighbors who are adopters. Using a dynamic message-passing
algorithm, we develop a tractable and computationally efficient method that
provides complete time evolution of each individual's probability of adopting
the trend or of the frequency of adopters and non-adopters in any arbitrary
networks. We validate the method by comparing it with Monte Carlo based agent
simulation in real and synthetic networks and provide an exact analytic scheme
for large random networks, where simulation results match well. Our approach is
general enough to incorporate non-Markovian processes and to include
heterogeneous thresholds and thus can be applied to explore rich sets of
complex heterogeneous agent-based models.
|
1312.2074 | Load Balancing using Ant Colony in Cloud Computing | cs.DC cs.CY cs.SY | Ants are very small insects.They are capable to find food even they are
complete blind. The ants lives in their nest and their job is to search food
while they get hungry. We are not interested in their living style, such as how
they live, how they sleep. But we are interested in how they search for food,
and how they find the shortest path. The technique for finding the shortest
path are now applying in cloud computing. The Ant Colony approach towards Cloud
Computing gives better performance.
|
1312.2087 | Towards Structural Natural Language Formalization: Mapping Discourse to
Controlled Natural Language | cs.CL | The author describes a conceptual study towards mapping grounded natural
language discourse representation structures to instances of controlled
language statements. This can be achieved via a pipeline of preexisting state
of the art technologies, namely natural language syntax to semantic discourse
mapping, and a reduction of the latter to controlled language discourse, given
a set of previously learnt reduction rules. Concludingly a description on
evaluation, potential and limitations for ontology-based reasoning is
presented.
|
1312.2094 | Parallelization in Extracting Fresh Information from Online Social
Network | cs.SI | Online Social Network (OSN) is one of the most hottest services in the past
years. It preserves the life of users and provides great potential for
journalists, sociologists and business analysts. Crawling data from social
network is a basic step for social network information analysis and processing.
As the network becomes huge and information on the network updates faster than
web pages, crawling is more difficult because of the limitations of band-width,
politeness etiquette and computation power. To extract fresh information from
social network efficiently and effectively, this paper presents a novel
crawling method and discusses parallelization architecture of social network.
To discover the feature of social network, we gather data from real social
network, analyze them and build a model to describe the discipline of users'
behavior. With the modeled behavior, we propose methods to predict users'
behavior. According to the prediction, we schedule our crawler more reasonably
and extract more fresh information with parallelization technologies.
Experimental results demonstrate that our strategies could obtain information
from OSN efficiently and effectively.
|
1312.2096 | Harbinger: An Analyzing and Predicting System for Online Social Network
Users' Behavior | cs.SI physics.soc-ph | Online Social Network (OSN) is one of the hottest innovations in the past
years, and the active users are more than a billion. For OSN, users' behavior
is one of the important factors to study. This demonstration proposal presents
Harbinger, an analyzing and predicting system for OSN users' behavior. In
Harbinger, we focus on tweets' timestamps (when users post or share messages),
visualize users' post behavior as well as message retweet number and build
adjustable models to predict users' behavior. Predictions of users' behavior
can be performed with the discovered behavior models and the results can be
applied to many applications such as tweet crawler and advertisement.
|
1312.2121 | Engineering Cooperative JADE Agents with the AMCIS Methodology: The
Transportation Management Case Study | cs.SE cs.MA | This paper discusses in detail important analysis and design issues emerged
during the development of an agent-based transportation e-market. This
discussion is based on concepts coming from the AMCIS methodology and the JADE
framework. The AMCIS methodology is specifically tailored to the analysis and
design of cooperative information agent-based systems, while it supports both
the levels of the individual agent structure and the agent society in the
Multi-Agents Systems (MAS) development process. According to AMCIS, MAS are
viewed as being composed of a number of autonomous cooperative agents that live
in an organized society, in which each agent plays one or more specific roles,
while their plans and interaction protocols are well defined. On the other hand
JADE is a FIPA specifications compliant agent development environment that
gives several facilities for an easy and fast implementation. Our aim is to
reveal the mapping that may exists between the basic concepts proposed by AMCIS
for agents specification and agents interactions and those provided by JADE for
agents implementation, and therefore to propose a kind of roadmap for agents
developers.
|
1312.2132 | Robust Subspace System Identification via Weighted Nuclear Norm
Optimization | cs.SY cs.LG stat.ML | Subspace identification is a classical and very well studied problem in
system identification. The problem was recently posed as a convex optimization
problem via the nuclear norm relaxation. Inspired by robust PCA, we extend this
framework to handle outliers. The proposed framework takes the form of a convex
optimization problem with an objective that trades off fit, rank and sparsity.
As in robust PCA, it can be problematic to find a suitable regularization
parameter. We show how the space in which a suitable parameter should be sought
can be limited to a bounded open set of the two dimensional parameter space. In
practice, this is very useful since it restricts the parameter space that is
needed to be surveyed.
|
1312.2135 | A Repair Framework for Scalar MDS Codes | cs.IT math.IT | Several works have developed vector-linear maximum-distance separable (MDS)
storage codes that min- imize the total communication cost required to repair a
single coded symbol after an erasure, referred to as repair bandwidth (BW).
Vector codes allow communicating fewer sub-symbols per node, instead of the
entire content. This allows non trivial savings in repair BW. In sharp
contrast, classic codes, like Reed- Solomon (RS), used in current storage
systems, are deemed to suffer from naive repair, i.e. downloading the entire
stored message to repair one failed node. This mainly happens because they are
scalar-linear. In this work, we present a simple framework that treats scalar
codes as vector-linear. In some cases, this allows significant savings in
repair BW. We show that vectorized scalar codes exhibit properties that
simplify the design of repair schemes. Our framework can be seen as a finite
field analogue of real interference alignment. Using our simplified framework,
we design a scheme that we call clique-repair which provably identifies the
best linear repair strategy for any scalar 2-parity MDS code, under some
conditions on the sub-field chosen for vectorization. We specify optimal repair
schemes for specific (5,3)- and (6,4)-Reed- Solomon (RS) codes. Further, we
present a repair strategy for the RS code currently deployed in the Facebook
Analytics Hadoop cluster that leads to 20% of repair BW savings over naive
repair which is the repair scheme currently used for this code.
|
1312.2137 | End-to-end Phoneme Sequence Recognition using Convolutional Neural
Networks | cs.LG cs.CL cs.NE | Most phoneme recognition state-of-the-art systems rely on a classical neural
network classifiers, fed with highly tuned features, such as MFCC or PLP
features. Recent advances in ``deep learning'' approaches questioned such
systems, but while some attempts were made with simpler features such as
spectrograms, state-of-the-art systems still rely on MFCCs. This might be
viewed as a kind of failure from deep learning approaches, which are often
claimed to have the ability to train with raw signals, alleviating the need of
hand-crafted features. In this paper, we investigate a convolutional neural
network approach for raw speech signals. While convolutional architectures got
tremendous success in computer vision or text processing, they seem to have
been let down in the past recent years in the speech processing field. We show
that it is possible to learn an end-to-end phoneme sequence classifier system
directly from raw signal, with similar performance on the TIMIT and WSJ
datasets than existing systems based on MFCC, questioning the need of complex
hand-crafted features on large datasets.
|
1312.2139 | Optimal rates for zero-order convex optimization: the power of two
function evaluations | math.OC cs.IT math.IT stat.ML | We consider derivative-free algorithms for stochastic and non-stochastic
convex optimization problems that use only function values rather than
gradients. Focusing on non-asymptotic bounds on convergence rates, we show that
if pairs of function values are available, algorithms for $d$-dimensional
optimization that use gradient estimates based on random perturbations suffer a
factor of at most $\sqrt{d}$ in convergence rate over traditional stochastic
gradient methods. We establish such results for both smooth and non-smooth
cases, sharpening previous analyses that suggested a worse dimension
dependence, and extend our results to the case of multiple ($m \ge 2$)
evaluations. We complement our algorithmic development with
information-theoretic lower bounds on the minimax convergence rate of such
problems, establishing the sharpness of our achievable results up to constant
(sometimes logarithmic) factors.
|
1312.2140 | A Comparative Study on Remote Tracking of Parkinsons Disease Progression
Using Data Mining Methods | cs.CE cs.DB | In recent years, applications of data mining methods are become more popular
in many fields of medical diagnosis and evaluations. The data mining methods
are appropriate tools for discovering and extracting of available knowledge in
medical databases. In this study, we divided 11 data mining algorithms into
five groups which are applied to a data set of patients clinical variables data
with Parkinsons Disease (PD) to study the disease progression. The data set
includes 22 properties of 42 people that all of our algorithms are applied to
this data set. The Decision Table with 0.9985 correlation coefficients has the
best accuracy and Decision Stump with 0.7919 correlation coefficients has the
lowest accuracy.
|
1312.2154 | Sequential Monte Carlo Inference of Mixed Membership Stochastic
Blockmodels for Dynamic Social Networks | cs.SI cs.LG stat.ML | Many kinds of data can be represented as a network or graph. It is crucial to
infer the latent structure underlying such a network and to predict unobserved
links in the network. Mixed Membership Stochastic Blockmodel (MMSB) is a
promising model for network data. Latent variables and unknown parameters in
MMSB have been estimated through Bayesian inference with the entire network;
however, it is important to estimate them online for evolving networks. In this
paper, we first develop online inference methods for MMSB through sequential
Monte Carlo methods, also known as particle filters. We then extend them for
time-evolving networks, taking into account the temporal dependency of the
network structure. We demonstrate through experiments that the time-dependent
particle filter outperformed several baselines in terms of prediction
performance in an online condition.
|
1312.2159 | Learning about social learning in MOOCs: From statistical analysis to
generative model | cs.SI | We study user behavior in the courses offered by a major Massive Online Open
Course (MOOC) provider during the summer of 2013. Since social learning is a
key element of scalable education in MOOCs and is done via online discussion
forums, our main focus is in understanding forum activities. Two salient
features of MOOC forum activities drive our research: 1. High decline rate: for
all courses studied, the volume of discussions in the forum declines
continuously throughout the duration of the course. 2. High-volume, noisy
discussions: at least 30% of the courses produce new discussion threads at
rates that are infeasible for students or teaching staff to read through.
Furthermore, a substantial portion of the discussions are not directly
course-related.
We investigate factors that correlate with the decline of activity in the
online discussion forums and find effective strategies to classify threads and
rank their relevance. Specifically, we use linear regression models to analyze
the time series of the count data for the forum activities and make a number of
observations, e.g., the teaching staff's active participation in the discussion
increases the discussion volume but does not slow down the decline rate. We
then propose a unified generative model for the discussion threads, which
allows us both to choose efficient thread classifiers and design an effective
algorithm for ranking thread relevance. Our ranking algorithm is further
compared against two baseline algorithms, using human evaluation from Amazon
Mechanical Turk.
The authors on this paper are listed in alphabetical order. For media and
press coverage, please refer to us collectively, as "researchers from the EDGE
Lab at Princeton University, together with collaborators at Boston University
and Microsoft Corporation."
|
1312.2163 | Multipermutation Codes in the Ulam Metric for Nonvolatile Memories | cs.IT math.IT | We address the problem of multipermutation code design in the Ulam metric for
novel storage applications. Multipermutation codes are suitable for flash
memory where cell charges may share the same rank. Changes in the charges of
cells manifest themselves as errors whose effects on the retrieved signal may
be measured via the Ulam distance. As part of our analysis, we study
multipermutation codes in the Hamming metric, known as constant composition
codes. We then present bounds on the size of multipermutation codes and their
capacity, for both the Ulam and the Hamming metrics. Finally, we present
constructions and accompanying decoders for multipermutation codes in the Ulam
metric.
|
1312.2164 | Budgeted Influence Maximization for Multiple Products | cs.LG cs.SI stat.ML | The typical algorithmic problem in viral marketing aims to identify a set of
influential users in a social network, who, when convinced to adopt a product,
shall influence other users in the network and trigger a large cascade of
adoptions. However, the host (the owner of an online social platform) often
faces more constraints than a single product, endless user attentions,
unlimited budget and unbounded time; in reality, multiple products need to be
advertised, each user can tolerate only a small number of recommendations,
influencing user has a cost and advertisers have only limited budgets, and the
adoptions need to be maximized within a short time window.
Given theses myriads of user, monetary, and timing constraints, it is
extremely challenging for the host to design principled and efficient viral
market algorithms with provable guarantees. In this paper, we provide a novel
solution by formulating the problem as a submodular maximization in a
continuous-time diffusion model under an intersection of a matroid and multiple
knapsack constraints. We also propose an adaptive threshold greedy algorithm
which can be faster than the traditional greedy algorithm with lazy evaluation,
and scalable to networks with million of nodes. Furthermore, our mathematical
formulation allows us to prove that the algorithm can achieve an approximation
factor of $k_a/(2+2 k)$ when $k_a$ out of the $k$ knapsack constraints are
active, which also improves over previous guarantees from combinatorial
optimization literature. In the case when influencing each user has uniform
cost, the approximation becomes even better to a factor of $1/3$. Extensive
synthetic and real world experiments demonstrate that our budgeted influence
maximization algorithm achieves the-state-of-the-art in terms of both
effectiveness and scalability, often beating the next best by significant
margins.
|
1312.2169 | Spectral Efficiency and Outage Performance for Hybrid D2D-Infrastructure
Uplink Cooperation | cs.IT math.IT | We propose a time-division uplink transmission scheme that is applicable to
future cellular systems by introducing hybrid device-to-device (D2D) and
infrastructure cooperation. We analyze its spectral efficiency and outage
performance and show that compared to existing frequency-division schemes, the
proposed scheme achieves the same or better spectral efficiency and outage
performance while having simpler signaling and shorter decoding delay. Using
time-division, the proposed scheme divides each transmission frame into three
phases with variable durations. The two user equipments (UEs) partially
exchange their information in the first two phases, then cooperatively transmit
to the base station (BS) in the third phase. We further formulate its common
and individual outage probabilities, taking into account outages at both UEs
and the BS. We analyze this outage performance in Rayleigh fading environment
assuming full channel state information (CSI) at the receivers and limited CSI
at the transmitters. Results show that comparing to non-cooperative
transmission, the proposed cooperation always improves the instantaneous
achievable rate region even under half-duplex transmission. Moreover, as the
received signal-to-noise ratio increases, this uplink cooperation significantly
reduces overall outage probabilities and achieves the full diversity order in
spite of additional outages at the UEs. These characteristics of the proposed
uplink cooperation make it appealing for deployment in future cellular
networks.
|
1312.2171 | bartMachine: Machine Learning with Bayesian Additive Regression Trees | stat.ML cs.LG | We present a new package in R implementing Bayesian additive regression trees
(BART). The package introduces many new features for data analysis using BART
such as variable selection, interaction detection, model diagnostic plots,
incorporation of missing data and the ability to save trees for future
prediction. It is significantly faster than the current R implementation,
parallelized, and capable of handling both large sample sizes and
high-dimensional data.
|
1312.2177 | Machine Learning Techniques for Intrusion Detection | cs.CR cs.LG cs.NI | An Intrusion Detection System (IDS) is a software that monitors a single or a
network of computers for malicious activities (attacks) that are aimed at
stealing or censoring information or corrupting network protocols. Most
techniques used in today's IDS are not able to deal with the dynamic and
complex nature of cyber attacks on computer networks. Hence, efficient adaptive
methods like various techniques of machine learning can result in higher
detection rates, lower false alarm rates and reasonable computation and
communication costs. In this paper, we study several such schemes and compare
their performance. We divide the schemes into methods based on classical
artificial intelligence (AI) and methods based on computational intelligence
(CI). We explain how various characteristics of CI techniques can be used to
build efficient IDS.
|
1312.2183 | Maximum Likelihood Estimation from Sign Measurements with Sensing Matrix
Perturbation | cs.IT math.IT | The problem of estimating an unknown deterministic parameter vector from sign
measurements with a perturbed sensing matrix is studied in this paper. We
analyze the best achievable mean square error (MSE) performance by exploring
the corresponding Cram\'{e}r-Rao Lower Bound (CRLB). To estimate the parameter,
the maximum likelihood (ML) estimator is utilized and its consistency is
proved. We show that the perturbation on the sensing matrix exacerbates the
performance of ML estimator in most cases. However, suitable perturbation may
improve the performance in some special cases. Then we reformulate the original
ML estimation problem as a convex optimization problem, which can be solved
efficiently. Furthermore, theoretical analysis implies that the
perturbation-ignored estimation is a scaled version with the same direction of
the ML estimation. Finally, numerical simulations are performed to validate our
theoretical analysis.
|
1312.2203 | Research on fresh agriculture product based on overconfidence of the
retailer under options and spot markets dominated | cs.CE q-fin.GN | In this article, we analyze the application of options contract in special
commodity supply chain such as fresh agricultural products. This problem is
discussed in the point of the retailer. When spot market and future market are
both available, we discuss how the retailer chooses the optimal production.
Furthermore, overconfidence is introduced to the supply chain of the fresh
agricultural products, which has not happened before. Then,based on the
overconfidence of the retailer, we explore how overconfidence affects the
supply chain system under different circumstances. At last, we get the
conclusion that different overconfidence level has different affection on
retailer's optimal ordering quantity and profit.
|
1312.2222 | A Stability Result for Sparse Convolutions | cs.DM cs.IT math.CO math.IT | We will establish in this note a stability result for sparse convolutions on
torsion-free additive (discrete) abelian groups. Sparse convolutions on
torsion-free groups are free of cancellations and hence admit stability, i.e.
injectivity with a universal lower bound $\alpha=\alpha(s,f)$, only depending
on the cardinality $s$ and $f$ of the supports of both input sequences. More
precisely, we show that $\alpha$ depends only on $s$ and $f$ and not on the
ambient dimension. This statement follows from a reduction argument which
involves a compression into a small set preserving the additive structure of
the supports.
|
1312.2227 | Decision Fusion with Unknown Sensor Detection Probability | cs.IT math.IT | In this correspondence we study the problem of channel-aware decision fusion
when the sensor detection probability is not known at the decision fusion
center. Several alternatives proposed in the literature are compared and new
fusion rules (namely 'ideal sensors' and 'locally-optimum detection') are
proposed, showing attractive performance and linear complexity. Simulations are
provided to compare the performance of the aforementioned rules.
|
1312.2232 | Algorithms for Joint Phase Estimation and Decoding for MIMO Systems in
the Presence of Phase Noise | cs.IT math.IT | In this work, we derive the maximum a posteriori (MAP) symbol detector for a
multiple-input multiple-output system in the presence of Wiener phase noise due
to noisy local oscillators. As in single-antenna systems, the computation of
the optimal receiver is an infinite dimensional problem and is thus
unimplementable in practice. In this purview, we propose three suboptimal,
low-complexity algorithms for approximately implementing the MAP symbol
detector, which involve joint phase noise estimation and data detection. Our
first algorithm is obtained by means of the sum-product algorithm, where we use
the multivariate Tikhonov canonical distribution approach. In our next
algorithm, we derive an approximate MAP symbol detector based on the
smoother-detector framework, wherein the detector is properly designed by
incorporating the phase noise statistics from the smoother. The third algorithm
is derived based on the variational Bayesian framework. By simulations, we
evaluate the performance of the proposed algorithms for both uncoded and coded
data transmissions, and we observe that the proposed techniques significantly
outperform the other algorithms proposed in the literature.
|
1312.2237 | Clustering online social network communities using genetic algorithms | cs.SI physics.soc-ph | To analyze the activities in an Online Social network (OSN), we introduce the
concept of "Node of Attraction" (NoA) which represents the most active node in
a network community. This NoA is identified as the origin/initiator of a
post/communication which attracted other nodes and formed a cluster at any
point in time. In this research, a genetic algorithm (GA) is used as a data
mining method where the main objective is to determine clusters of network
communities in a given OSN dataset. This approach is efficient in handling
different type of discussion topics in our studied OSN - comments, emails, chat
expressions, etc. and can form clusters according to one or more topics. We
believe that this work can be useful in finding the source for spread of this
GA-based clustering of online interactions and reports some results of
experiments with real-world data and demonstrates the performance of proposed
approach.
|
1312.2242 | CLIC: A Framework for Distributed, On-Demand, Human-Machine Cognitive
Systems | cs.AI | Traditional Artificial Cognitive Systems (for example, intelligent robots)
share a number of limitations. First, they are usually made up only of machine
components; humans are only playing the role of user or supervisor. And yet,
there are tasks in which the current state of the art of AI has much worse
performance or is more expensive than humans: thus, it would be highly
beneficial to have a systematic way of creating systems with both human and
machine components, possibly with remote non-expert humans providing
short-duration real-time services. Second, their components are often dedicated
to only one system, and underutilized for a big part of their lifetime. Third,
there is no inherent support for robust operation, and if a new better
component becomes available, one cannot easily replace the old component.
Fourth, they are viewed as a resource to be developed and owned, not as a
utility. Thus, we are presenting CLIC: a framework for constructing cognitive
systems that overcome the above limitations. The architecture of CLIC provides
specific mechanisms for creating and operating cognitive systems that fulfill a
set of desiderata: First, that are distributed yet situated, interacting with
the physical world though sensing and actuation services, and that are also
combining human as well as machine services. Second, that are made up of
components that are time-shared and re-usable. Third, that provide increased
robustness through self-repair. Fourth, that are constructed and reconstructed
on the fly, with components that dynamically enter and exit the system during
operation, on the basis of availability, pricing, and need. Importantly, fifth,
the cognitive systems created and operated by CLIC do not need to be owned and
can be provided on demand, as a utility; thus transforming human-machine
situated intelligence to a service, and opening up many interesting
opportunities.
|
1312.2244 | Time-dependent Hierarchical Dirichlet Model for Timeline Generation | cs.CL cs.IR | Timeline Generation aims at summarizing news from different epochs and
telling readers how an event evolves. It is a new challenge that combines
salience ranking with novelty detection. For long-term public events, the main
topic usually includes various aspects across different epochs and each aspect
has its own evolving pattern. Existing approaches neglect such hierarchical
topic structure involved in the news corpus in timeline generation. In this
paper, we develop a novel time-dependent Hierarchical Dirichlet Model (HDM) for
timeline generation. Our model can aptly detect different levels of topic
information across corpus and such structure is further used for sentence
selection. Based on the topic mined fro HDM, sentences are selected by
considering different aspects such as relevance, coherence and coverage. We
develop experimental systems to evaluate 8 long-term events that public
concern. Performance comparison between different systems demonstrates the
effectiveness of our model in terms of ROUGE metrics.
|
1312.2249 | Scalable Object Detection using Deep Neural Networks | cs.CV stat.ML | Deep convolutional neural networks have recently achieved state-of-the-art
performance on a number of image recognition benchmarks, including the ImageNet
Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on
the localization sub-task was a network that predicts a single bounding box and
a confidence score for each object category in the image. Such a model captures
the whole-image context around the objects but cannot handle multiple instances
of the same object in the image without naively replicating the number of
outputs for each instance. In this work, we propose a saliency-inspired neural
network model for detection, which predicts a set of class-agnostic bounding
boxes along with a single score for each box, corresponding to its likelihood
of containing any object of interest. The model naturally handles a variable
number of instances for each class and allows for cross-class generalization at
the highest levels of the network. We are able to obtain competitive
recognition performance on VOC2007 and ILSVRC2012, while using only the top few
predicted locations in each image and a small number of neural network
evaluations.
|
1312.2267 | IRCI Free Range Reconstruction for SAR Imaging with Arbitrary Length
OFDM Pulse | cs.IT math.IT | Our previously proposed OFDM with sufficient cyclic prefix (CP) synthetic
aperture radar (SAR) imaging algorithm is inter-range-cell interference (IRCI)
free and achieves ideally zero range sidelobes for range reconstruction. In
this OFDM SAR imaging algorithm, the minimum required CP length is almost equal
to the number of range cells in a swath, while the number of subcarriers of an
OFDM signal needs to be more than the CP length. This makes the length of a
transmitted OFDM sequence at least almost twice of the number of range cells in
a swath and for a wide swath imaging, the transmitted OFDM pulse length becomes
long, which may cause problems in some radar applications. In this paper, we
propose a CP based OFDM SAR imaging with arbitrary pulse length, which has IRCI
free range reconstruction and its pulse length is independent of a swath width.
We then present a novel design method for our proposed arbitrary length OFDM
pulses. Simulation results are presented to illustrate the performances of the
OFDM pulse design and the arbitrary pulse length CP based OFDM SAR imaging.
|
1312.2287 | Quickest Search over Multiple Sequences with Mixed Observation | cs.IT math.IT | The problem of sequentially finding an independent and identically
distributed (i.i.d.) sequence that is drawn from a probability distribution
$f_1$ by searching over multiple sequences, some of which are drawn from $f_1$
and the others of which are drawn from a different distribution $f_0$, is
considered. The observer is allowed to take one observation at a time. It has
been shown in a recent work that if each observation comes from one sequence,
the cumulative sum test is optimal. In this paper, we propose a new approach in
which each observation can be a linear combination of samples from multiple
sequences. The test has two stages. In the first stage, namely scanning stage,
one takes a linear combination of a pair of sequences with the hope of scanning
through sequences that are unlikely to be generated from $f_1$ and quickly
identifying a pair of sequences such that at least one of them is highly likely
to be generated by $f_1$. In the second stage, namely refinement stage, one
examines the pair identified from the first stage more closely and picks one
sequence to be the final sequence. The problem under this setup belongs to a
class of multiple stopping time problems. In particular, it is an ordered two
concatenated Markov stopping time problem. We obtain the optimal solution using
the tools from the multiple stopping time theory. The optimal solution has a
rather complex structure. For implementation purpose, a low complexity
algorithm is proposed, in which the observer adopts the cumulative sum test in
the scanning stage and adopts the sequential probability ratio test in the
refinement stage. The performance of this low complexity algorithm is analyzed
when the prior probability of $f_{1}$ occurring is small. Both analytical and
numerical simulation results show that this search strategy can significantly
reduce the searching time when $f_{1}$ is rare.
|
1312.2315 | Noisy Bayesian Active Learning | cs.IT math.IT math.OC math.ST stat.TH | We consider the problem of noisy Bayesian active learning, where we are given
a finite set of functions $\mathcal{H}$, a sample space $\mathcal{X}$, and a
label set $\mathcal{L}$. One of the functions in $\mathcal{H}$ assigns labels
to samples in $\mathcal{X}$. The goal is to identify the function that
generates the labels even though the result of a label query on a sample is
corrupted by independent noise. More precisely, the objective is to declare one
of the functions in $\mathcal{H}$ as the true label generating function with
high confidence using as few label queries as possible, by selecting the
queries adaptively and in a strategic manner.
Previous work in Bayesian active learning considers Generalized Binary
Search, and its variants for the noisy case, and analyzes the number of queries
required by these sampling strategies. In this paper, we show that these
schemes are, in general, suboptimal. Instead we propose and analyze an
alternative strategy for sample collection. Our sampling strategy is motivated
by a connection between Bayesian active learning and active hypothesis testing,
and is based on querying the label of a sample which maximizes the Extrinsic
Jensen-Shannon divergence at each step. We provide upper and lower bounds on
the performance of this sampling strategy, and show that these bounds are
better than previous bounds.
|
1312.2338 | Practical Design for Multiple-Antenna Cognitive Radio Networks with
Coexistence Constraint | cs.IT math.IT | In this paper we investigate the practical design for the multiple-antenna
cognitive radio (CR) networks sharing the geographically used or unused
spectrum. We consider a single cell network formed by the primary users (PU),
which are half-duplex two-hop relay channels and the secondary users (SU) are
single user additive white Gaussian noise channels. In addition, the
coexistence constraint which requires PUs' coding schemes and rates unchanged
with the emergence of SU, should be satisfied. The contribution of this paper
are twofold. First, we explicitly design the scheme to pair the SUs to the
existing PUs in a single cell network. Second, we jointly design the nonlinear
precoder, relay beamformer, and the transmitter and receiver beamformers to
minimize the sum mean square error of the SU system. In the first part, we
derive an approximate relation between the relay ratio, chordal distance and
strengths of the vector channels, and the transmit powers. Based on this
relation, we are able to solve the optimal pairing between SUs and PUs
efficiently. In the second part, considering the feasibility of implementation,
we exploit the Tomlinson-Harashima precoding instead of the dirty paper coding
to mitigate the interference at the SU receiver, which is known side
information at the SU transmitter. To complete the design, we first approximate
the optimization problem as a convex one. Then we propose an iterative
algorithm to solve it with CVX. This joint design exploits all the degrees of
design. To the best of our knowledge, both the two parts have never been
considered in the literature. Numerical results show that the proposed pairing
scheme outperforms the greedy and random pairing with low complexity. Numerical
results also show that even if all the channel matrices are full rank, under
which the simple zero forcing scheme is infeasible, the proposed scheme can
still work well.
|
1312.2353 | On the difference between checking integrity constraints before or after
updates | cs.DB | Integrity checking is a crucial issue, as databases change their instance all
the time and therefore need to be checked continuously and rapidly. Decades of
research have produced a plethora of methods for checking integrity constraints
of a database in an incremental manner. However, not much has been said about
when to check integrity. In this paper, we study the differences and
similarities between checking integrity before an update (a.k.a. pre-test) or
after (a.k.a. post-test) in order to assess the respective convenience and
properties.
|
1312.2355 | On the dependency on the size of the data when chasing under conceptual
dependencies | cs.DB | Conceptual dependencies (CDs) are particular kinds of key dependencies (KDs)
and inclusion dependencies (IDs) that precisely characterize relational
schemata modeled according to the main features of the Entity-Relationship (ER)
model. An instance for such a schema may be inconsistent (data violate the
dependencies) and incomplete (data constitute a piece of correct information,
but not necessarily all the relevant information). While undecidable under
general KDs and IDs, query answering under incomplete data is known to be
decidable for CDs. The known techniques are based on the chase -- a special
instance, organized in levels of depth, that is a representative of all the
instances that satisfy the dependencies and that include the initial instance.
Although the chase generally has infinite size, query answering can be
addressed by posing the query (or a rewriting thereof) on a finite, initial
part of the chase. Contrary to previous claims, we show that the maximum level
of such an initial part cannot be bounded by a constant that does not depend on
the size of the initial instance.
|
1312.2358 | Exact Recovery for Sparse Signal via Weighted $l_1$ Minimization | cs.IT math.IT | Numerical experiments in literature on compressed sensing have indicated that
the reweighted $l_1$ minimization performs exceptionally well in recovering
sparse signal. In this paper, we develop exact recovery conditions and
algorithm for sparse signal via weighted $l_1$ minimization from the insight of
the classical NSP (null space property) and RIC (restricted isometry constant)
bound. We first introduce the concept of WNSP (weighted null space property)
and reveal that it is a necessary and sufficient condition for exact recovery.
We then prove that the RIC bound by weighted $l_1$ minimization is
$\delta_{ak}<\sqrt{\frac{a-1}{a-1+\gamma^2}}$, where $a>1$, $0<\gamma\leq1$ is
determined by an optimization problem over the null space. When $\gamma< 1$
this bound is greater than $\sqrt{\frac{a-1}{a}}$ from $l_1$ minimization. In
addition, we also establish the bound on $\delta_k$ and show that it can be
larger than the sharp one 1/3 via $l_1$ minimization and also greater than
0.4343 via weighted $l_1$ minimization under some mild cases. Finally, we
achieve a modified iterative reweighted $l_1$ minimization (MIRL1) algorithm
based on our selection principle of weight, and the numerical experiments
demonstrate that our algorithm behaves much better than $l_1$ minimization and
iterative reweighted $l_1$ minimization (IRL1) algorithm.
|
1312.2366 | A preliminary survey on optimized multiobjective metaheuristic methods
for data clustering using evolutionary approaches | cs.NE | The present survey provides the state-of-the-art of research, copiously
devoted to Evolutionary Approach (EAs) for clustering exemplified with a
diversity of evolutionary computations. The Survey provides a nomenclature that
highlights some aspects that are very important in the context of evolutionary
data clustering. The paper missions the clustering trade-offs branched out with
wide-ranging Multi Objective Evolutionary Approaches (MOEAs) methods. Finally,
this study addresses the potential challenges of MOEA design and data
clustering, along with conclusions and recommendations for novice and
researchers by positioning most promising paths of future research. MOEAs have
substantial success across a variety of MOP applications, from pedagogical
multifunction optimization to real-world engineering design. The survey paper
noticeably organizes the developments witnessed in the past three decades for
EAs based metaheuristics to solve multiobjective optimization problems (MOP)
and to derive significant progression in ruling high quality elucidations in a
single run. Data clustering is an exigent task, whose intricacy is caused by a
lack of unique and precise definition of a cluster. The discrete optimization
problem uses the cluster space to derive a solution for Multiobjective data
clustering. Discovery of a majority or all of the clusters (of illogical
shapes) present in the data is a long-standing goal of unsupervised predictive
learning problems or exploratory pattern analysis.
|
1312.2368 | A Unified Markov Chain Approach to Analysing Randomised Search
Heuristics | math.OC cs.NE | The convergence, convergence rate and expected hitting time play fundamental
roles in the analysis of randomised search heuristics. This paper presents a
unified Markov chain approach to studying them. Using the approach, the
sufficient and necessary conditions of convergence in distribution are
established. Then the average convergence rate is introduced to randomised
search heuristics and its lower and upper bounds are derived. Finally, novel
average drift analysis and backward drift analysis are proposed for bounding
the expected hitting time. A computational study is also conducted to
investigate the convergence, convergence rate and expected hitting time. The
theoretical study belongs to a prior and general study while the computational
study belongs to a posterior and case study.
|
1312.2375 | Novel text categorization by amalgamation of augmented k-nearest
neighborhood classification and k-medoids clustering | cs.IR | Machine learning for text classification is the underpinning of document
cataloging, news filtering, document steering and exemplification. In text
mining realm, effective feature selection is significant to make the learning
task more accurate and competent. One of the traditional lazy text classifier
k-Nearest Neighborhood (kNN) has a major pitfall in calculating the similarity
between all the objects in training and testing sets, there by leads to
exaggeration of both computational complexity of the algorithm and massive
consumption of main memory. To diminish these shortcomings in viewpoint of a
data-mining practitioner an amalgamative technique is proposed in this paper
using a novel restructured version of kNN called AugmentedkNN(AkNN) and
k-Medoids(kMdd) clustering.The proposed work comprises preprocesses on the
initial training set by imposing attribute feature selection for reduction of
high dimensionality, also it detects and excludes the high-fliers samples in
the initial training set and restructures a constrictedtraining set. The kMdd
clustering algorithm generates the cluster centers (as interior objects) for
each category and restructures the constricted training set with centroids.
This technique is amalgamated with AkNNclassifier that was prearranged with
text mining similarity measures. Eventually, significantweights and ranks were
assigned to each object in the new training set based upon their accessory
towards the object in testing set. Experiments conducted on Reuters-21578 a UCI
benchmark text mining data set, and comparisons with traditional kNNclassifier
designates the referredmethod yieldspreeminentrecitalin both clustering and
classification.
|
1312.2378 | Unsupervised classification of uncertain data objects in spatial
databases using computational geometry and indexing techniques | cs.DB | Unsupervised classification called clustering is a process of organizing
objects into groups whose members are similar in some way. Clustering of
uncertain data objects is a challenge in spatial data bases. In this paper we
use Probability Density Functions (PDF) to represent these uncertain data
objects, and apply Uncertain K-Means algorithm to generate the clusters. This
clustering algorithm uses the Expected Distance (ED) to compute the distance
between objects and cluster representatives. To further improve the performance
of UK-Means we propose a novel technique called Voronoi Diagrams from
Computational Geometry to prune the number of computations of ED. This
technique works efficiently but results pruning overheads. In order to reduce
these in pruning overhead we introduce R*-tree indexing over these uncertain
data objects, so that it reduces the computational cost and pruning overheads.
Our novel approach of integrating UK-Means with voronoi diagrams and R* Tree
applied over uncertain data objects generates imposing outcome when compared
with the accessible methods.
|
1312.2383 | On the Performance of Filters for Reduction of Speckle Noise in SAR
Images off the Coast of the Gulf of Guinea | cs.CV | Synthetic Aperture Radar (SAR) imagery to monitor oil spills are some methods
that have been proposed for the West African sub-region. With the increase in
the number of oil exploration companies in Ghana (and her neighbors) and the
rise in the coastal activities in the sub-region, there is the need for proper
monitoring of the environmental impact of these socio-economic activities on
the environment. Detection and near real-time information about oil spills are
fundamental in reducing oil spill environmental impact. SAR images are prone to
some noise, which is predominantly speckle noise around the coastal areas. This
paper evaluates the performance of the mean and median filters used in the
preprocessing filtering to reduce speckle noise in SAR images for most image
processing algorithms.
|
1312.2390 | Stochastic Stability of Event-triggered Anytime Control | math.OC cs.SY | We investigate control of a non-linear process when communication and
processing capabilities are limited. The sensor communicates with a controller
node through an erasure channel which introduces i.i.d. packet dropouts.
Processor availability for control is random and, at times, insufficient to
calculate plant inputs. To make efficient use of communication and processing
resources, the sensor only transmits when the plant state lies outside a
bounded target set. Control calculations are triggered by the received data. If
a plant state measurement is successfully received and while the processor is
available for control, the algorithm recursively calculates a sequence of
tentative plant inputs, which are stored in a buffer for potential future use.
This safeguards for time-steps when the processor is unavailable for control.
We derive sufficient conditions on system parameters for stochastic stability
of the closed loop and illustrate performance gains through numerical studies.
|
1312.2451 | CEAI: CCM based Email Authorship Identification Model | cs.LG | In this paper we present a model for email authorship identification (EAI) by
employing a Cluster-based Classification (CCM) technique. Traditionally,
stylometric features have been successfully employed in various authorship
analysis tasks; we extend the traditional feature-set to include some more
interesting and effective features for email authorship identification (e.g.
the last punctuation mark used in an email, the tendency of an author to use
capitalization at the start of an email, or the punctuation after a greeting or
farewell). We also included Info Gain feature selection based content features.
It is observed that the use of such features in the authorship identification
process has a positive impact on the accuracy of the authorship identification
task. We performed experiments to justify our arguments and compared the
results with other base line models. Experimental results reveal that the
proposed CCM-based email authorship identification model, along with the
proposed feature set, outperforms the state-of-the-art support vector machine
(SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The
proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25
authors, and 81% for 50 authors, respectively on Enron dataset, while 89.5%
accuracy has been achieved on authors' constructed real email dataset. The
results on Enron dataset have been achieved on quite a large number of authors
as compared to the models proposed by Iqbal et al. [1, 2].
|
1312.2457 | Compressed Quantitative MRI: Bloch Response Recovery through Iterated
Projection | cs.IT math.IT | Inspired by the recently proposed Magnetic Resonance Fingerprinting
technique, we develop a principled compressed sensing framework for
quantitative MRI. The three key components are: a random pulse excitation
sequence following the MRF technique; a random EPI subsampling strategy and an
iterative projection algorithm that imposes consistency with the Bloch
equations. We show that, as long as the excitation sequence possesses an
appropriate form of persistent excitation, we are able to achieve accurate
recovery of the proton density, $T_1$, $T_2$ and off-resonance maps
simultaneously from a limited number of samples.
|
1312.2459 | Distance Closures on Complex Networks | cs.SI cond-mat.dis-nn cs.IR nlin.CG physics.soc-ph | To expand the toolbox available to network science, we study the isomorphism
between distance and Fuzzy (proximity or strength) graphs. Distinct transitive
closures in Fuzzy graphs lead to closures of their isomorphic distance graphs
with widely different structural properties. For instance, the All Pairs
Shortest Paths (APSP) problem, based on the Dijkstra algorithm, is equivalent
to a metric closure, which is only one of the possible ways to calculate
shortest paths. Understanding and mapping this isomorphism is necessary to
analyse models of complex networks based on weighted graphs. Any conclusions
derived from such models should take into account the distortions imposed on
graph topology when converting proximity/strength into distance graphs, to
subsequently compute path length and shortest path measures. We characterise
the isomorphism using the max-min and Dombi disjunction/conjunction pairs. This
allows us to: (1) study alternative distance closures, such as those based on
diffusion, metric, and ultra-metric distances; (2) identify the operators
closest to the metric closure of distance graphs (the APSP), but which are
logically consistent; and (3) propose a simple method to compute alternative
distance closures using existing algorithms for the APSP. In particular, we
show that a specific diffusion distance is promising for community detection in
complex networks, and is based on desirable axioms for logical inference or
approximate reasoning on networks; it also provides a simple algebraic means to
compute diffusion processes on networks. Based on these results, we argue that
choosing different distance closures can lead to different conclusions about
indirect associations on network data, as well as the structure of complex
networks, and are thus important to consider.
|
1312.2465 | A Compressed Sensing Framework for Magnetic Resonance Fingerprinting | cs.IT math.IT | Inspired by the recently proposed Magnetic Resonance Fingerprinting (MRF)
technique, we develop a principled compressed sensing framework for
quantitative MRI. The three key components are: a random pulse excitation
sequence following the MRF technique; a random EPI subsampling strategy and an
iterative projection algorithm that imposes consistency with the Bloch
equations. We show that theoretically, as long as the excitation sequence
possesses an appropriate form of persistent excitation, we are able to
accurately recover the proton density, T1, T2 and off-resonance maps
simultaneously from a limited number of samples. These results are further
supported through extensive simulations using a brain phantom.
|
1312.2482 | Automatic recognition and tagging of topologically different regimes in
dynamical systems | cs.CG cs.LG math.DS nlin.CD physics.data-an | Complex systems are commonly modeled using nonlinear dynamical systems. These
models are often high-dimensional and chaotic. An important goal in studying
physical systems through the lens of mathematical models is to determine when
the system undergoes changes in qualitative behavior. A detailed description of
the dynamics can be difficult or impossible to obtain for high-dimensional and
chaotic systems. Therefore, a more sensible goal is to recognize and mark
transitions of a system between qualitatively different regimes of behavior. In
practice, one is interested in developing techniques for detection of such
transitions from sparse observations, possibly contaminated by noise. In this
paper we develop a framework to accurately tag different regimes of complex
systems based on topological features. In particular, our framework works with
a high degree of success in picking out a cyclically orbiting regime from a
stationary equilibrium regime in high-dimensional stochastic dynamical systems.
|
1312.2506 | An Application of Answer Set Programming to the Field of Second Language
Acquisition | cs.AI | This paper explores the contributions of Answer Set Programming (ASP) to the
study of an established theory from the field of Second Language Acquisition:
Input Processing. The theory describes default strategies that learners of a
second language use in extracting meaning out of a text, based on their
knowledge of the second language and their background knowledge about the
world. We formalized this theory in ASP, and as a result we were able to
determine opportunities for refining its natural language description, as well
as directions for future theory development. We applied our model to automating
the prediction of how learners of English would interpret sentences containing
the passive voice. We present a system, PIas, that uses these predictions to
assist language instructors in designing teaching materials. To appear in
Theory and Practice of Logic Programming (TPLP).
|
1312.2526 | Connectivity maintenance by robotic Mobile Ad-hoc NETwork | cs.RO | The problem of maintaining a wireless communication link between a fixed base
station and an autonomous agent by means of a team of mobile robots is
addressed in this work. Such problem can be of interest for search and rescue
missions in post disaster scenario where the autonomous agent can be used for
remote monitoring and first hand knowledge of the aftermath, while the mobile
robots can be used to provide the agent the possibility to dynamically send its
collected information to an external base station. To study the problem, a
distributed multi-robot system with wifi communication capabilities has been
developed and used to implement a Mobile Ad-hoc NETwork (MANET) to guarantee
the required multi-hop communication. None of the robots of the team possess
the knowledge of agent's movement, neither they hold a pre-assigned position in
the ad-hoc network but they adapt with respect to the dynamic environmental
situations. This adaptation only requires the robots to have the knowledge of
their position and the possibility to exchange such information with their
one-hop neighbours. Robots' motion is achieved by implementing a behavioural
control, namely the Null-Space based Behavioural control, embedding the
collective mission to achieve the required self-configuration. Validation of
the approach is performed by means of demanding experimental tests involving
five ground mobile robots capable of self localization and dynamic obstacle
avoidance.
|
1312.2544 | Time-Switching Uplink Network-Coded Cooperative Communication with
Downlink Energy Transfer | cs.IT math.IT | In this work, we consider a multiuser cooperative wireless network where the
energy-constrained sources have independent information to transmit to a common
destination, which is assumed to be externally powered and responsible for
transferring energy wirelessly to the sources. The source nodes may cooperate,
under either decode-and-forward or network coding-based protocols. Taking into
account the fact that the energy harvested by the source nodes is a function of
the fading realization of inter-user channels and user-destination channels, we
obtain a closed-form approximation for the system outage probability, as well
as an approximation for the optimal energy transfer period that minimizes such
outage probability. It is also shown that, even though the achievable diversity
order is reduced due to wireless energy transfer process, it is very close to
the one achieved for a network without energy constraints. Numerical results
are also presented to validate the theoretical results.
|
1312.2551 | A state vector algebra for algorithmic implementation of second-order
logic | cs.AI cs.LO | We present a mathematical framework for mapping second-order logic relations
onto a simple state vector algebra. Using this algebra, basic theorems of set
theory can be proven in an algorithmic way, hence by an expert system. We
illustrate the use of the algebra with simple examples and show that, in
principle, all theorems of basic set theory can be recovered in an elementary
way. The developed technique can be used for an automated theorem proving in
the 1st and 2nd order logic.
|
1312.2574 | Backing off from Infinity: Performance Bounds via Concentration of
Spectral Measure for Random MIMO Channels | cs.IT math.IT math.ST stat.TH | The performance analysis of random vector channels, particularly
multiple-input-multiple-output (MIMO) channels, has largely been established in
the asymptotic regime of large channel dimensions, due to the analytical
intractability of characterizing the exact distribution of the objective
performance metrics. This paper exposes a new non-asymptotic framework that
allows the characterization of many canonical MIMO system performance metrics
to within a narrow interval under moderate-to-large channel dimensionality,
provided that these metrics can be expressed as a separable function of the
singular values of the matrix. The effectiveness of our framework is
illustrated through two canonical examples. Specifically, we characterize the
mutual information and power offset of random MIMO channels, as well as the
minimum mean squared estimation error of MIMO channel inputs from the channel
outputs. Our results lead to simple, informative, and reasonably accurate
control of various performance metrics in the finite-dimensional regime, as
corroborated by the numerical simulations. Our analysis framework is
established via the concentration of spectral measure phenomenon for random
matrices uncovered by Guionnet and Zeitouni, which arises in a variety of
random matrix ensembles irrespective of the precise distributions of the matrix
entries.
|
1312.2578 | Kernel-based Distance Metric Learning in the Output Space | cs.LG | In this paper we present two related, kernel-based Distance Metric Learning
(DML) methods. Their respective models non-linearly map data from their
original space to an output space, and subsequent distance measurements are
performed in the output space via a Mahalanobis metric. The dimensionality of
the output space can be directly controlled to facilitate the learning of a
low-rank metric. Both methods allow for simultaneous inference of the
associated metric and the mapping to the output space, which can be used to
visualize the data, when the output space is 2- or 3-dimensional. Experimental
results for a collection of classification tasks illustrate the advantages of
the proposed methods over other traditional and kernel-based DML approaches.
|
1312.2598 | Monitoring voltage collapse margin by measuring the area voltage across
several transmission lines with synchrophasors | cs.SY | We consider the fast monitoring of voltage collapse margin using
synchrophasor measurements at both ends of transmission lines that transfer
power from two generators to two loads. This shows a way to extend the
monitoring of a radial transmission line to multiple transmission lines. The
synchrophasor voltages are combined into a single complex voltage difference
across an area containing the transmission lines that can be monitored in the
same way as a single transmission line. We identify ideal conditions under
which this reduction to the single line case perfectly preserves the margin to
voltage collapse, and give an example that shows that the error under practical
non-ideal conditions is reasonably small.
|
1312.2606 | Multi-Task Classification Hypothesis Space with Improved Generalization
Bounds | cs.LG | This paper presents a RKHS, in general, of vector-valued functions intended
to be used as hypothesis space for multi-task classification. It extends
similar hypothesis spaces that have previously considered in the literature.
Assuming this space, an improved Empirical Rademacher Complexity-based
generalization bound is derived. The analysis is itself extended to an MKL
setting. The connection between the proposed hypothesis space and a Group-Lasso
type regularizer is discussed. Finally, experimental results, with some
SVM-based Multi-Task Learning problems, underline the quality of the derived
bounds and validate the paper's analysis.
|
1312.2627 | Multipoint Volterra Series Interpolation and H2 Optimal Model Reduction
of Bilinear Systems | math.NA cs.SY | In this paper, we focus on model reduction of large-scale bilinear systems.
The main contributions are threefold. First, we introduce a new framework for
interpolatory model reduction of bilinear systems. In contrast to the existing
methods where interpolation is forced on some of the leading subsystem transfer
functions, the new framework shows how to enforce multipoint interpolation of
the underlying Volterra series. Then, we show that the first-order conditions
for optimal H2 model reduction of bilinear systems require multivariate Hermite
interpolation in terms of the new Volterra series interpolation framework; and
thus we extend the interpolation-based first-order necessary conditions for H2
optimality of LTI systems to the bilinear case. Finally, we show that
multipoint interpolation on the truncated Volterra series representation of a
bilinear system leads to an asymptotically optimal approach to H2 optimal model
reduction, leading to an efficient model reduction algorithm. Several numerical
examples illustrate the effectiveness of the proposed approach.
|
1312.2629 | Sense, Model and Identify the Load Signatures of HVAC Systems in Metro
Stations | cs.SY | The HVAC systems in subway stations are energy consuming giants, each of
which may consume over 10, 000 Kilowatts per day for cooling and ventilation.
To save energy for the HVAC systems, it is critically important to firstly know
the "load signatures" of the HVAC system, i.e., the quantity of heat imported
from the outdoor environments and by the passengers respectively in different
periods of a day, which will significantly benefit the design of control
policies. In this paper, we present a novel sensing and learning approach to
identify the load signature of the HVAC system in the subway stations. In
particular, sensors and smart meters were deployed to monitor the indoor,
outdoor temperatures, and the energy consumptions of the HVAC system in
real-time. The number of passengers was counted by the ticket checking system.
At the same time, the cooling supply provided by the HVAC system was inferred
via the energy consumption logs of the HVAC system. Since the indoor
temperature variations are driven by the difference of the loads and the
cooling supply, linear regression model was proposed for the load signature,
whose coefficients are derived via a proposed algorithm . We collected real
sensing data and energy log data from HaiDianHuangZhuang Subway station, which
is in line 4 of Beijing from the duration of July 2012 to Sept. 2012. The data
was used to evaluate the coefficients of the regression model. The experiment
results show typical variation signatures of the loads from the passengers and
from the outdoor environments respectively, which provide important contexts
for smart control policies.
|
1312.2631 | Kernel representation approach to persistence of behavior | math.OC cs.SY | The optimal control problem of connecting any two trajectories in a behavior
B with maximal persistence of that behavior is put forth and a compact solution
is obtained for a general class of behaviors. The behavior B is understood in
the context of Willems's behavioral theory and its representation is given by
the kernel of some operator. In general the solution to the problem will not
lie in the same behavior and so a maximally persistent solution is defined as
one that will be as close as possible to the behavior. A vast number of
behaviors can be treated in this framework such as stationary solutions, limit
cycles etc. The problem is linked to the ideas of controllability presented by
Willems. It draws its roots from quasi-static transitions in thermodynamics and
bears connections to morphing theory. The problem has practical applications in
finite time thermodynamics, deployment of tensigrity structures and legged
locomotion.
|
1312.2632 | SEED: Public Energy and Environment Dataset for Optimizing HVAC
Operation in Subway Stations | cs.SY | For sustainability and energy saving, the problem to optimize the control of
heating, ventilating, and air-conditioning (HVAC) systems has attracted great
attentions, but analyzing the signatures of thermal environments and HVAC
systems and the evaluation of the optimization policies has encountered
inefficiency and inconvenient problems due to the lack of public dataset. In
this paper, we present the Subway station Energy and Environment Dataset
(SEED), which was collected from a line of Beijing subway stations, providing
minute-resolution data regarding the environment dynamics (temperature,
humidity, CO2, etc.) working states and energy consumptions of the HVAC systems
(ventilators, refrigerators, pumps), and hour-resolution data of passenger
flows. We describe the sensor deployments and the HVAC systems for data
collection and for environment control, and also present initial investigation
for the energy disaggregation of HVAC system, the signatures of the thermal
load, cooling supply, and the passenger flow using the dataset.
|
1312.2637 | The Throughput-Outage Tradeoff of Wireless One-Hop Caching Networks | cs.IT math.IT | We consider a wireless device-to-device (D2D) network where the nodes have
pre-cached information from a library of available files. Nodes request files
at random. If the requested file is not in the on-board cache, then it is
downloaded from some neighboring node via one-hop "local" communication. An
outage event occurs when a requested file is not found in the neighborhood of
the requesting node, or if the network admission control policy decides not to
serve the request. We characterize the optimal throughput-outage tradeoff in
terms of tight scaling laws for various regimes of the system parameters, when
both the number of nodes and the number of files in the library grow to
infinity. Our analysis is based on Gupta and Kumar {\em protocol model} for the
underlying D2D wireless network, widely used in the literature on capacity
scaling laws of wireless networks without caching. Our results show that the
combination of D2D spectrum reuse and caching at the user nodes yields a
per-user throughput independent of the number of users, for any fixed outage
probability in $(0,1)$. This implies that the D2D caching network is
"scalable": even though the number of users increases, each user achieves
constant throughput. This behavior is very different from the classical Gupta
and Kumar result on ad-hoc wireless networks, for which the per-user throughput
vanishes as the number of users increases. Furthermore, we show that the user
throughput is directly proportional to the fraction of cached information over
the whole file library size. Therefore, we can conclude that D2D caching
networks can turn "memory" into "bandwidth" (i.e., doubling the on-board cache
memory on the user devices yields a 100\% increase of the user throughout).
|
1312.2642 | Cellular Automata based Feedback Mechanism in Strengthening biological
Sequence Analysis Approach to Robotic Soccer | cs.MA cs.RO | This paper reports on the application of sequence analysis algorithms for
agents in robotic soccer and a suitable representation is proposed to achieve
this mapping. The objective of this research is to generate novel better
in-game strategies with the aim of faster adaptation to the changing
environment. A homogeneous non-communicating multi-agent architecture using the
representation is presented. To achieve real-time learning during a game, a
bucket brigade algorithm is used to reinforce Cellular Automata Based
Classifier. A technique for selecting strategies based on sequence analysis is
adopted.
|
1312.2668 | Optimal compression in natural gas networks: a geometric programming
approach | cs.SY | Natural gas transmission pipelines are complex systems whose flow
characteristics are governed by challenging non-linear physical behavior. These
pipelines extend over hundreds and even thousands of miles. Gas is typically
injected into the system at a constant rate, and a series of compressors are
distributed along the pipeline to boost the gas pressure to maintain system
pressure and throughput. These compressors consume a portion of the gas, and
one goal of the operator is to control the compressor operation to minimize
this consumption while satisfying pressure constraints at the gas load points.
The optimization of these operations is computationally challenging. Many
pipelines simply rely on the intuition and prior experience of operators to
make these decisions. Here, we present a new geometric programming approach for
optimizing compressor operation in natural gas pipelines. Using models of real
natural gas pipelines, we show that the geometric programming algorithm
consistently outperforms approaches that mimic existing state of practice.
|
1312.2669 | DRSP : Dimension Reduction For Similarity Matching And Pruning Of Time
Series Data Streams | cs.DB | Similarity matching and join of time series data streams has gained a lot of
relevance in today's world that has large streaming data. This process finds
wide scale application in the areas of location tracking, sensor networks,
object positioning and monitoring to name a few. However, as the size of the
data stream increases, the cost involved to retain all the data in order to aid
the process of similarity matching also increases. We develop a novel framework
to addresses the following objectives. Firstly, Dimension reduction is
performed in the preprocessing stage, where large stream data is segmented and
reduced into a compact representation such that it retains all the crucial
information by a technique called Multi-level Segment Means (MSM). This reduces
the space complexity associated with the storage of large time-series data
streams. Secondly, it incorporates effective Similarity Matching technique to
analyze if the new data objects are symmetric to the existing data stream. And
finally, the Pruning Technique that filters out the pseudo data object pairs
and join only the relevant pairs. The computational cost for MSM is O(l*ni) and
the cost for pruning is O(DRF*wsize*d), where DRF is the Dimension Reduction
Factor. We have performed exhaustive experimental trials to show that the
proposed framework is both efficient and competent in comparison with earlier
works.
|
1312.2678 | Analysis & Prediction of Sales Data in SAP-ERP System using Clustering
Algorithms | cs.DB | Clustering is an important data mining technique where we will be interested
in maximizing intracluster distance and also minimizing intercluster distance.
We have utilized clustering techniques for detecting deviation in product sales
and also to identify and compare sales over a particular period of time.
Clustering is suited to group items that seem to fall naturally together, when
there is no specified class for any new item. We have utilizedannual sales data
of a steel major to analyze Sales Volume & Value with respect to dependent
attributes like products, customers and quantities sold. The demand for steel
products is cyclical and depends on many factors like customer profile,
price,Discounts and tax issues. In this paper, we have analyzed sales data with
clustering algorithms like K-Means&EMwhichrevealed many interesting
patternsuseful for improving sales revenue and achieving higher sales volume.
Our study confirms that partition methods like K-Means & EM algorithms are
better suited to analyze our sales data in comparison to Density based methods
like DBSCAN & OPTICS or Hierarchical methods like COBWEB.
|
1312.2681 | Degrees of Freedom of MIMO Cellular Networks: Decomposition and Linear
Beamforming Design | cs.IT math.IT | This paper investigates the symmetric degrees of freedom (DoF) of MIMO
cellular networks with G cells and K users per cell, having N antennas at each
base station and M antennas at each user. In particular, we investigate
achievability techniques based on either decomposition with asymptotic
interference alignment (IA) or linear beamforming schemes, and show that there
are distinct regimes of (G,K,M,N) where one outperforms the other. We first
note that both one-sided and two-sided decomposition with asymptotic IA achieve
the same degrees of freedom. We then establish specific antenna configurations
under which the DoF achieved using decomposition based schemes is optimal by
deriving a set of outer bounds on the symmetric DoF. For linear beamforming
schemes, we first focus on small networks and propose a structured approach to
linear beamforming based on a notion called packing ratios. Packing ratio
describes the interference footprint or shadow cast by a set of transmit
beamformers and enables us to identify the underlying structures for aligning
interference. Such a structured beamforming design can be shown to achieve the
optimal spatially normalized DoF (sDoF) of two-cell two-user/cell network and
the two-cell three-user/cell network. For larger networks, we develop an
unstructured approach to linear interference alignment, where transmit
beamformers are designed to satisfy conditions for IA without explicitly
identifying the underlying structures for IA. The main numerical insight of
this paper is that such an approach appears to be capable of achieving the
optimal sDoF for MIMO cellular networks in regimes where linear beamforming
dominates asymptotic decomposition, and a significant portion of sDoF
elsewhere. Remarkably, polynomial identity test appears to play a key role in
identifying the boundary of the achievable sDoF region in the former case.
|
1312.2688 | Spatial Throughput Characterization in Cognitive Radio Networks with
Threshold-Based Opportunistic Spectrum Access | cs.IT math.IT | This paper studies the opportunistic spectrum access (OSA) of the secondary
users in a large-scale overlay cognitive radio (CR) network. Two
threshold-based OSA schemes, namely the primary receiver assisted (PRA)
protocol and the primary transmitter assisted (PTA) protocol, are investigated.
Under the PRA/PTA protocol, a secondary transmitter (ST) is allowed to access
the spectrum only when the maximum signal power of the received beacons/pilots
sent from the active primary receivers/transmitters (PRs/PTs) is lower than a
certain threshold. To measure the resulting transmission opportunity for the
secondary users by the proposed OSA protocols, the concept of spatial
opportunity, which is defined as the probability that an arbitrary location in
the primary network is detected as a spatial spectrum hole, is introduced and
then evaluated by applying tools from stochastic geometry. Based on spatial
opportunity, the coverage (non-outage transmission) performance in the overlay
CR network is analyzed. With the obtained results of spatial opportunity and
coverage probability, we finally characterize the spatial throughput, which is
defined as the average spatial density of successful transmissions in the
primary/secondary network, under the PRA and PTA protocols, respectively.
|
1312.2709 | Phishing Detection by determining reliability factor using rough set
theory | cs.AI | Phishing is a common online weapon, used against users, by Phishers for
acquiring a confidential information through deception. Since the inception of
internet, nearly everything, ranging from money transaction to sharing
information, is done online in most parts of the world. This has also given
rise to malicious activities such as Phishing. Detecting Phishing is an
intricate process due to complexity, ambiguity and copious amount of
possibilities of factors responsible for phishing . Rough sets can be a
powerful tool, when working on such kind of Applications containing vague or
imprecise data. This paper proposes an approach towards Phishing Detection
Using Rough Set Theory. The Thirteen basic factors, directly responsible
towards Phishing, are grouped into four Strata. Reliability Factor is
determined on the basis of the outcome of these strata, using Rough Set Theory
. Reliability Factor determines the possibility of a suspected site to be Valid
or Fake. Using Rough set Theory most and the least influential factors towards
Phishing are also determined.
|
1312.2710 | Improving circuit miniaturization and its efficiency using Rough Set
Theory | cs.LG cs.AI | High-speed, accuracy, meticulousness and quick response are notion of the
vital necessities for modern digital world. An efficient electronic circuit
unswervingly affects the maneuver of the whole system. Different tools are
required to unravel different types of engineering tribulations. Improving the
efficiency, accuracy and low power consumption in an electronic circuit is
always been a bottle neck problem. So the need of circuit miniaturization is
always there. It saves a lot of time and power that is wasted in switching of
gates, the wiring-crises is reduced, cross-sectional area of chip is reduced,
the number of transistors that can implemented in chip is multiplied many
folds. Therefore to trounce with this problem we have proposed an Artificial
intelligence (AI) based approach that make use of Rough Set Theory for its
implementation. Theory of rough set has been proposed by Z Pawlak in the year
1982. Rough set theory is a new mathematical tool which deals with uncertainty
and vagueness. Decisions can be generated using rough set theory by reducing
the unwanted and superfluous data. We have condensed the number of gates
without upsetting the productivity of the given circuit. This paper proposes an
approach with the help of rough set theory which basically lessens the number
of gates in the circuit, based on decision rules.
|
1312.2738 | Shortest Unique Substring Query Revisited | cs.DS cs.DB | We revisit the problem of finding shortest unique substring (SUS) proposed
recently by [6]. We propose an optimal $O(n)$ time and space algorithm that can
find an SUS for every location of a string of size $n$. Our algorithm
significantly improves the $O(n^2)$ time complexity needed by [6]. We also
support finding all the SUSes covering every location, whereas the solution in
[6] can find only one SUS for every location. Further, our solution is simpler
and easier to implement and can also be more space efficient in practice, since
we only use the inverse suffix array and longest common prefix array of the
string, while the algorithm in [6] uses the suffix tree of the string and other
auxiliary data structures. Our theoretical results are validated by an
empirical study that shows our algorithm is much faster and more space-saving
than the one in [6].
|
1312.2785 | An efficient length- and rate-preserving concatenation of polar and
repetition codes | cs.IT math.IT | We improve the method in \cite{Seidl:10} for increasing the finite-lengh
performance of polar codes by protecting specific, less reliable symbols with
simple outer repetition codes. Decoding of the scheme integrates easily in the
known successive decoding algorithms for polar codes. Overall rate and block
length remain unchanged, the decoding complexity is at most doubled. A
comparison to related methods for performance improvement of polar codes is
drawn.
|
1312.2789 | Performance Analysis Of Regularized Linear Regression Models For
Oxazolines And Oxazoles Derivitive Descriptor Dataset | cs.LG | Regularized regression techniques for linear regression have been created the
last few ten years to reduce the flaws of ordinary least squares regression
with regard to prediction accuracy. In this paper, new methods for using
regularized regression in model choice are introduced, and we distinguish the
conditions in which regularized regression develops our ability to discriminate
models. We applied all the five methods that use penalty-based (regularization)
shrinkage to handle Oxazolines and Oxazoles derivatives descriptor dataset with
far more predictors than observations. The lasso, ridge, elasticnet, lars and
relaxed lasso further possess the desirable property that they simultaneously
select relevant predictive descriptors and optimally estimate their effects.
Here, we comparatively evaluate the performance of five regularized linear
regression methods The assessment of the performance of each model by means of
benchmark experiments is an established exercise. Cross-validation and
resampling methods are generally used to arrive point evaluates the
efficiencies which are compared to recognize methods with acceptable features.
Predictive accuracy was evaluated using the root mean squared error (RMSE) and
Square of usual correlation between predictors and observed mean inhibitory
concentration of antitubercular activity (R square). We found that all five
regularized regression models were able to produce feasible models and
efficient capturing the linearity in the data. The elastic net and lars had
similar accuracies as well as lasso and relaxed lasso had similar accuracies
but outperformed ridge regression in terms of the RMSE and R square metrics.
|
1312.2798 | OntoVerbal: a Generic Tool and Practical Application to SNOMED CT | cs.AI | Ontology development is a non-trivial task requiring expertise in the chosen
ontological language. We propose a method for making the content of ontologies
more transparent by presenting, through the use of natural language generation,
naturalistic descriptions of ontology classes as textual paragraphs. The method
has been implemented in a proof-of- concept system, OntoVerbal, that
automatically generates paragraph-sized textual descriptions of ontological
classes expressed in OWL. OntoVerbal has been applied to ontologies that can be
loaded into Prot\'eg\'e and been evaluated with SNOMED CT, showing that it
provides coherent, well-structured and accurate textual descriptions of
ontology classes.
|
1312.2818 | Can electoral popularity be predicted using socially generated big data? | physics.soc-ph cs.CY cs.SI physics.data-an | Today, our more-than-ever digital lives leave significant footprints in
cyberspace. Large scale collections of these socially generated footprints,
often known as big data, could help us to re-investigate different aspects of
our social collective behaviour in a quantitative framework. In this
contribution we discuss one such possibility: the monitoring and predicting of
popularity dynamics of candidates and parties through the analysis of socially
generated data on the web during electoral campaigns. Such data offer
considerable possibility for improving our awareness of popularity dynamics.
However they also suffer from significant drawbacks in terms of
representativeness and generalisability. In this paper we discuss potential
ways around such problems, suggesting the nature of different political systems
and contexts might lend differing levels of predictive power to certain types
of data source. We offer an initial exploratory test of these ideas, focussing
on two data streams, Wikipedia page views and Google search queries. On the
basis of this data, we present popularity dynamics from real case examples of
recent elections in three different countries.
|
1312.2822 | 3D Maps Registration and Path Planning for Autonomous Robot Navigation | cs.RO | Mobile robots dedicated in security tasks should be capable of clearly
perceiving their environment to competently navigate within cluttered areas, so
as to accomplish their assigned mission. The paper in hand describes such an
autonomous agent designed to deploy competently in hazardous environments
equipped with a laser scanner sensor. During the robot's motion, consecutive
scans are obtained to produce dense 3D maps of the area. A 3D point cloud
registration technique is exploited to merge the successively created maps
during the robot's motion followed by an ICP refinement step. The reconstructed
3D area is then top-down projected with great resolution, to be fed in a path
planning algorithm suitable to trace obstacle-free trajectories in the explored
area. The main characteristic of the path planner is that the robot's
embodiment is considered for producing detailed and safe trajectories of $1$
$cm$ resolution. The proposed method has been evaluated with our mobile robot
in several outdoor scenarios revealing remarkable performance.
|
1312.2841 | Predictive Comparative QSAR Analysis Of As 5-Nitofuran-2-YL Derivatives
Myco bacterium tuberculosis H37RV Inhibitors Bacterium Tuberculosis H37RV
Inhibitors | cs.CE | Antitubercular activity of 5-nitrofuran-2-yl Derivatives series were
subjected to Quantitative Structure Activity Relationship (QSAR) Analysis with
an effort to derive and understand a correlation between the biological
activity as response variable and different molecular descriptors as
independent variables. QSAR models are built using 40 molecular descriptor
dataset. Different statistical regression expressions were got using Partial
Least Squares (PLS),Multiple Linear Regression (MLR) and Principal Component
Regression (PCR) techniques. The among these technique, Partial Least Square
Regression (PLS) technique has shown very promising result as compared to MLR
technique A QSAR model was build by a training set of 30 molecules with
correlation coefficient ($r^2$) of 0.8484, significant cross validated
correlation coefficient ($q^2$) is 0.0939, F test is 48.5187, ($r^2$) for
external test set (pred$_r^2$) is -0.5604, coefficient of correlation of
predicted data set (pred$_r^2se$) is 0.7252 and degree of freedom is 26 by
Partial Least Squares Regression technique.
|
1312.2844 | mARC: Memory by Association and Reinforcement of Contexts | cs.IR cs.CL nlin.AO nlin.CD | This paper introduces the memory by Association and Reinforcement of Contexts
(mARC). mARC is a novel data modeling technology rooted in the second
quantization formulation of quantum mechanics. It is an all-purpose incremental
and unsupervised data storage and retrieval system which can be applied to all
types of signal or data, structured or unstructured, textual or not. mARC can
be applied to a wide range of information clas-sification and retrieval
problems like e-Discovery or contextual navigation. It can also for-mulated in
the artificial life framework a.k.a Conway "Game Of Life" Theory. In contrast
to Conway approach, the objects evolve in a massively multidimensional space.
In order to start evaluating the potential of mARC we have built a mARC-based
Internet search en-gine demonstrator with contextual functionality. We compare
the behavior of the mARC demonstrator with Google search both in terms of
performance and relevance. In the study we find that the mARC search engine
demonstrator outperforms Google search by an order of magnitude in response
time while providing more relevant results for some classes of queries.
|
1312.2853 | Performance Analysis Of Neural Network Models For Oxazolines And
Oxazoles Derivatives Descriptor Dataset | cs.CE cs.NE | Neural networks have been used successfully to a broad range of areas such as
business, data mining, drug discovery and biology. In medicine, neural networks
have been applied widely in medical diagnosis, detection and evaluation of new
drugs and treatment cost estimation. In addition, neural networks have begin
practice in data mining strategies for the aim of prediction, knowledge
discovery. This paper will present the application of neural networks for the
prediction and analysis of antitubercular activity of Oxazolines and Oxazoles
derivatives. This study presents techniques based on the development of Single
hidden layer neural network (SHLFFNN), Gradient Descent Back propagation neural
network (GDBPNN), Gradient Descent Back propagation with momentum neural
network (GDBPMNN), Back propagation with Weight decay neural network (BPWDNN)
and Quantile regression neural network (QRNN) of artificial neural network
(ANN) models Here, we comparatively evaluate the performance of five neural
network techniques. The evaluation of the efficiency of each model by ways of
benchmark experiments is an accepted application. Cross-validation and
resampling techniques are commonly used to derive point estimates of the
performances which are compared to identify methods with good properties.
Predictive accuracy was evaluated using the root mean squared error (RMSE),
Coefficient determination(???), mean absolute error(MAE), mean percentage
error(MPE) and relative square error(RSE). We found that all five neural
network models were able to produce feasible models. QRNN model is outperforms
with all statistical tests amongst other four models.
|
1312.2859 | A Robust Missing Value Imputation Method MifImpute For Incomplete
Molecular Descriptor Data And Comparative Analysis With Other Missing Value
Imputation Methods | cs.CE | Missing data imputation is an important research topic in data mining.
Large-scale Molecular descriptor data may contains missing values (MVs).
However, some methods for downstream analyses, including some prediction tools,
require a complete descriptor data matrix. We propose and evaluate an iterative
imputation method MiFoImpute based on a random forest. By averaging over many
unpruned regression trees, random forest intrinsically constitutes a multiple
imputation scheme. Using the NRMSE and NMAE estimates of random forest, we are
able to estimate the imputation error. Evaluation is performed on two molecular
descriptor datasets generated from a diverse selection of pharmaceutical fields
with artificially introduced missing values ranging from 10% to 30%. The
experimental result demonstrates that missing values has a great impact on the
effectiveness of imputation techniques and our method MiFoImpute is more robust
to missing value than the other ten imputation methods used as benchmark.
Additionally, MiFoImpute exhibits attractive computational efficiency and can
cope with high-dimensional data.
|
1312.2861 | Identification Of Outliers In Oxazolines AND Oxazoles High Dimension
Molecular Descriptor Dataset Using Principal Component Outlier Detection
Algorithm And Comparative Numerical Study Of Other Robust Estimators | cs.CE | From the past decade outlier detection has been in use. Detection of outliers
is an emerging topic and is having robust applications in medical sciences and
pharmaceutical sciences. Outlier detection is used to detect anomalous
behaviour of data. Typical problems in Bioinformatics can be addressed by
outlier detection. A computationally fast method for detecting outliers is
shown, that is particularly effective in high dimensions. PrCmpOut algorithm
make use of simple properties of principal components to detect outliers in the
transformed space, leading to significant computational advantages for high
dimensional data. This procedure requires considerably less computational time
than existing methods for outlier detection. The properties of this estimator
(Outlier error rate (FN), Non-Outlier error rate(FP) and computational costs)
are analyzed and compared with those of other robust estimators described in
the literature through simulation studies. Numerical evidence based Oxazolines
and Oxazoles molecular descriptor dataset shows that the proposed method
performs well in a variety of situations of practical interest. It is thus a
valuable companion to the existing outlier detection methods.
|
1312.2867 | Study Of E-Smooth Support Vector Regression And Comparison With E-
Support Vector Regression And Potential Support Vector Machines For
Prediction For The Antitubercular Activity Of Oxazolines And Oxazoles
Derivatives | cs.CE cs.LO | A new smoothing method for solving ? -support vector regression (?-SVR),
tolerating a small error in fitting a given data sets nonlinearly is proposed
in this study. Which is a smooth unconstrained optimization reformulation of
the traditional linear programming associated with a ?-insensitive support
vector regression. We term this redeveloped problem as ?-smooth support vector
regression (?-SSVR). The performance and predictive ability of ?-SSVR are
investigated and compared with other methods such as LIBSVM (?-SVR) and P-SVM
methods. In the present study, two Oxazolines and Oxazoles molecular descriptor
data sets were evaluated. We demonstrate the merits of our algorithm in a
series of experiments. Primary experimental results illustrate that our
proposed approach improves the regression performance and the learning
efficiency. In both studied cases, the predictive ability of the ?- SSVR model
is comparable or superior to those obtained by LIBSVM and P-SVM. The results
indicate that ?-SSVR can be used as an alternative powerful modeling method for
regression studies. The experimental results show that the presented algorithm
?-SSVR, plays better precisely and effectively than LIBSVMand P-SVM in
predicting antitubercular activity.
|
1312.2877 | Automated Classification of L/R Hand Movement EEG Signals using Advanced
Feature Extraction and Machine Learning | cs.NE cs.CV cs.HC | In this paper, we propose an automated computer platform for the purpose of
classifying Electroencephalography (EEG) signals associated with left and right
hand movements using a hybrid system that uses advanced feature extraction
techniques and machine learning algorithms. It is known that EEG represents the
brain activity by the electrical voltage fluctuations along the scalp, and
Brain-Computer Interface (BCI) is a device that enables the use of the brain
neural activity to communicate with others or to control machines, artificial
limbs, or robots without direct physical movements. In our research work, we
aspired to find the best feature extraction method that enables the
differentiation between left and right executed fist movements through various
classification algorithms. The EEG dataset used in this research was created
and contributed to PhysioNet by the developers of the BCI2000 instrumentation
system. Data was preprocessed using the EEGLAB MATLAB toolbox and artifacts
removal was done using AAR. Data was epoched on the basis of Event-Related (De)
Synchronization (ERD/ERS) and movement-related cortical potentials (MRCP)
features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta
rhythms were isolated for the MRCP analysis. The Independent Component Analysis
(ICA) spatial filter was applied on related channels for noise reduction and
isolation of both artifactually and neutrally generated EEG sources. The final
feature vector included the ERD, ERS, and MRCP features in addition to the
mean, power and energy of the activations of the resulting independent
components of the epoched feature datasets. The datasets were inputted into two
machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines
(SVMs). Intensive experiments were carried out and optimum classification
performances of 89.8 and 97.1 were obtained using NN and SVM, respectively.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.