id | title | categories | abstract
|---|---|---|---|
1302.7069 | Learning Theory in the Arithmetic Hierarchy | math.LO cs.LG cs.LO | We consider the arithmetic complexity of index sets of uniformly computably
enumerable families learnable under different learning criteria. We determine
the exact complexity of these sets for the standard notions of finite learning,
learning in the limit, behaviorally correct learning and anomalous learning in
the limit. In proving the $\Sigma_5^0$-completeness result for behaviorally
correct learning we prove a result of independent interest: if a uniformly
computably enumerable family is not learnable, then for any computable learner
there is a $\Delta_2^0$ enumeration witnessing failure.
|
1302.7070 | Sound localization using compressive sensing | cs.SD cs.IT math.IT | In a sensor network with remote sensor devices, it is important to have a
method that can accurately localize a sound event with a small amount of data
transmitted from the sensors. In this paper, we propose a novel method for
localization of a sound source using compressive sensing. Instead of sampling a
large amount of data at the Nyquist sampling rate in the time domain, the acoustic
sensors take compressive measurements integrated in time. The compressive
measurements can be used to accurately compute the location of a sound source.
|
1302.7074 | A null space property approach to compressed sensing with frames | cs.IT math.FA math.IT | An interesting topic in compressive sensing concerns problems of sensing and
recovering signals with sparse representations in a dictionary. In this note,
we study conditions of sensing matrices A for the L1-synthesis method to
accurately recover sparse, or nearly sparse signals in a given dictionary D. In
particular, we propose a dictionary based null space property (D-NSP) which, to
the best of our knowledge, is the first sufficient and necessary condition for
the success of the L1 recovery. This new property is then utilized to detect
some of those dictionaries whose sparse families cannot be compressed
universally. Moreover, when the dictionary is full spark, we show that AD being
NSP, which is well known to be only sufficient for stable recovery via the
L1-synthesis method, is indeed necessary as well.
|
1302.7080 | Parameter Identification of Induction Motor Using Modified Particle
Swarm Optimization Algorithm | cs.NE | This paper presents a new technique for induction motor parameter
identification. The proposed technique is based on a simple startup test using
a standard V/F inverter. The recorded startup currents are compared to those
obtained by simulating an induction motor model. A modified PSO algorithm
is used to find the model parameters that minimize the sum-squared
error between the measured and the simulated currents. The performance of the
modified PSO is compared with other optimization methods including line search,
conventional PSO and Genetic Algorithms. Simulation results demonstrate the
ability of the proposed technique to capture the true values of the machine
parameters and the superiority of the results obtained using the modified PSO
over other optimization techniques.
|
1302.7082 | K Means Segmentation of Alzheimers Disease in PET scan datasets: An
implementation | cs.CV cs.NE | The Positron Emission Tomography (PET) scan image requires expertise for
segmentation, where clustering algorithms play an important role in the
automation process. Algorithm choice is assessed based on the performance,
quality, and number of clusters extracted. This paper studies the commonly
used K-Means clustering algorithm and discusses a brief list of toolboxes
for reproducing and extending works presented in medical image analysis.
This work is implemented using the AForge.NET framework in a Windows
environment and MATrix LABoratory (MATLAB 7.0.1).
|
1302.7088 | Continuous-time Infinite Dynamic Topic Models | cs.IR stat.AP stat.ML | Topic models are probabilistic models for discovering topical themes in
collections of documents. In real world applications, these models provide us
with the means of organizing what would otherwise be unstructured collections.
They can help us cluster a huge collection into different topics or find a
subset of the collection that resembles the topical theme found in an article
at hand.
The first wave of topic models developed was able to discover the prevailing
topics in a big collection of documents spanning a period of time. It was later
realized that these time-invariant models were not capable of modeling 1) the
time-varying number of topics they discover and 2) the time-changing structure
of these topics. A few models were developed to address these two deficiencies.
The online-hierarchical Dirichlet process models the documents with a time
varying number of topics. It varies the structure of the topics over time as
well. However, it relies on document order, not timestamps to evolve the model
over time. The continuous-time dynamic topic model evolves topic structure in
continuous-time. However, it uses a fixed number of topics over time.
In this dissertation, I present a model, the continuous-time infinite dynamic
topic model, that combines the advantages of these two models 1) the
online-hierarchical Dirichlet process, and 2) the continuous-time dynamic topic
model. More specifically, the model I present is a probabilistic topic model
that does the following: 1) it changes the number of topics over continuous
time, and 2) it changes the topic structure over continuous-time.
I compared the model I developed with the two other models under different
settings. The results obtained were favorable to my model and showed the
need for having a model that has a continuous-time varying number of topics and
topic structure.
|
1302.7090 | Adaptive Control in Swarm Robotics | cs.SY | Swarm robotic systems are mainly inspired by swarms of social insects and
the collective emergent behavior that arises from their cooperation at the
lower level. Despite the limited sensory ability, computational power, and
communication means of each swarm member, the swarm as a group manages to
achieve difficult tasks such as searching for food in terrains with obstacles
that individual robots cannot achieve in isolation of the other group members.
Moreover, such tasks are usually achieved without having information sharing
capabilities at the swarm level or having a centralized decision making system.
In this report, I survey the state of the field of applying adaptive control
methods to increase swarm robotic systems' robustness to the failure of
individual robots and to increase their efficiency in performing their task. A few
techniques for the division of labor problem are briefly presented while one of
them is given in more detail. A discussion of the advantages and disadvantages
of this system is given and suggestions of potential improvements that can be
made to the system are presented.
|
1302.7096 | Using Artificial Intelligence Models in System Identification | cs.NE cs.SY | Artificial Intelligence (AI) techniques are known for their ability to tackle
problems found to be unyielding to traditional mathematical methods. A recent
addition to these techniques is the Computational Intelligence (CI) techniques
which, in most cases, are nature or biologically inspired techniques. Different
CI techniques found their way to many control engineering applications,
including system identification, and the results obtained by many researchers
were encouraging. However, most control engineers and researchers used the
basic CI models as is or slightly modified them to match their needs.
Hence, the merits of one model over another were not clear, and the full
potential of these models was not exploited.
In this research, Genetic Algorithm (GA) and Particle Swarm Optimization
(PSO) methods, which are different CI techniques, are modified to best suit the
multimodal problem of system identification. In the first case of GA, an
extension to the basic algorithm, which is inspired from nature as well, was
deployed by introducing redundant genetic material. This extension, which comes
in handy in living organisms, did not result in significant performance
improvement to the basic algorithm. In the second case, the Clubs-based PSO
(C-PSO) dynamic neighborhood structure was introduced to replace the basic
static structure used in canonical PSO algorithms. This modification of the
neighborhood structure resulted in a significant performance improvement of
the algorithm in convergence speed, and equipped it with a tool to handle multimodal
problems.
To understand the suitability of different GA and PSO techniques in the
problem of system identification, they were used in an induction motor's
parameter identification problem. The results reinforced previous conclusions and
showed the superiority of PSO in general over the GA in such a multimodal
problem.
|
1302.7126 | Growing multiplex networks | physics.soc-ph cond-mat.dis-nn cs.SI | We propose a modeling framework for growing multiplexes where a node can
belong to different networks. We define new measures for multiplexes and we
identify a number of relevant ingredients for modeling their evolution such as
the coupling between the different layers and the arrival time distribution of
nodes. The topology of the multiplex changes significantly in the different
cases under consideration, with effects of the arrival time of nodes on the
degree distribution, average shortest paths and interdependence.
|
1302.7131 | Presence Factor-Oriented Blog Summarization | cs.IR | The research that has been carried out on blogs focused on blog posts only,
ignoring the title of the blog page. Also, in summarization only a set of
representative sentences is extracted. Some analysis has been done, and it has
been found that the blog post contains the content that is likely to be related
to the topic of the blog post. Thus, the proposed system of summarization makes use
of title contained in a blog page. The approach makes use of the Presence
factor that indicates the presence of each term of the title in each sentence
of the blog post. This is a key feature because it considers those sentences as
more relevant for summarization that contain each of the terms present in the
title. The system has been implemented and evaluated experimentally. The system
has shown promising results.
|
1302.7172 | Should {\Delta}{\Sigma} Modulators Used in AC Motor Drives be Adapted to
the Mechanical Load of the Motor? | cs.SY | We consider the use of {\Delta}{\Sigma} modulators in ac motor drives,
focusing on the many additional degrees of freedom that this option offers over
Pulse Width Modulation (PWM). Following some recent results, we show that it is
possible to fully adapt the {\Delta}{\Sigma} modulator Noise Transfer Function
(NTF) to the rest of the drive chain and that the approach can be pushed even
to a fine adaptation of the NTF to the specific motor loading condition. We
investigate whether and to what extent the adaptation should be pursued. Using
a representative test case and extensive simulation, we conclude that a mild
adaptation can be beneficial, leading to Signal to Noise Ratio (SNR)
improvements on the order of a few dB, while the advantage of pushing the
adaptation to load tracking is likely to be minimal.
|
1302.7175 | Estimating the Maximum Expected Value: An Analysis of (Nested) Cross
Validation and the Maximum Sample Average | stat.ML cs.AI cs.LG stat.ME | We investigate the accuracy of the two most common estimators for the maximum
expected value of a general set of random variables: a generalization of the
maximum sample average, and cross validation. No unbiased estimator exists and
we show that it is non-trivial to select a good estimator without knowledge
about the distributions of the random variables. We investigate and bound the
bias and variance of the aforementioned estimators and prove consistency. The
variance of cross validation can be significantly reduced, but not without
risking a large bias. The bias and variance of different variants of cross
validation are shown to be very problem-dependent, and a wrong choice can lead
to very inaccurate estimates.
|
1302.7180 | Fast Matching by 2 Lines of Code for Large Scale Face Recognition
Systems | cs.CV | In this paper, we propose a method to apply the popular cascade classifier
to face recognition to improve the computational efficiency while keeping a
high recognition rate. In large scale face recognition systems, because the
probability of feature templates coming from different subjects is very high,
most of the matching pairs will be rejected by the early stages of the cascade.
Therefore, the cascade can improve the matching speed significantly. On the
other hand, using the nested structure of the cascade, we could drop some
stages at the end to reduce the memory and bandwidth usage in some
resource-intensive systems while not sacrificing the performance too much. The
cascade is learned in two steps. First, prepared features are
grouped into several nested stages. Then, the threshold of each stage is
learned to achieve a user-defined verification rate (VR). In the paper, we take a
landmark based Gabor+LDA face recognition system as baseline to illustrate the
process and advantages of the proposed method. However, the method itself
is very generic and not limited to face recognition; it can be easily
generalized to other biometrics as a post-processing module. Experiments on the
FERET database show the good performance of our baseline and an experiment on a
self-collected large scale database illustrates that the cascade can improve
the matching speed significantly.
|
1302.7191 | The Rise and Fall of a Central Contributor: Dynamics of Social
Organization and Performance in the Gentoo Community | cs.SE cs.SI nlin.AO physics.soc-ph | Social organization and division of labor crucially influence the performance
of collaborative software engineering efforts. In this paper, we provide a
quantitative analysis of the relation between social organization and
performance in Gentoo, an Open Source community developing a Linux
distribution. We study the structure and dynamics of collaborations as recorded
in the project's bug tracking system over a period of ten years. We identify a
period of increasing centralization after which most interactions in the
community were mediated by a single central contributor. In this period of
maximum centralization, the central contributor unexpectedly left the project,
thus posing a significant challenge for the community. We quantify how the
rise, the activity as well as the subsequent sudden dropout of this central
contributor affected both the social organization and the bug handling
performance of the Gentoo community. We analyze social organization from the
perspective of network theory and augment our quantitative findings by
interviews with prominent members of the Gentoo community who shared their
personal insights.
|
1302.7251 | Modeling Stable Matching Problems with Answer Set Programming | cs.AI cs.LO | The Stable Marriage Problem (SMP) is a well-known matching problem first
introduced and solved by Gale and Shapley (1962). Several variants and
extensions to this problem have since been investigated to cover a wider set of
applications. Each time a new variant is considered, however, a new algorithm
needs to be developed and implemented. As an alternative, in this paper we
propose an encoding of the SMP using Answer Set Programming (ASP). Our encoding
can easily be extended and adapted to the needs of specific applications. As an
illustration we show how stable matchings can be found when individuals may
designate unacceptable partners and ties between preferences are allowed.
Subsequently, we show how our ASP based encoding naturally allows us to select
specific stable matchings which are optimal according to a given criterion.
Each time, we can rely on generic and efficient off-the-shelf answer set
solvers to find (optimal) stable matchings.
|
1302.7263 | Online Similarity Prediction of Networked Data from Known and Unknown
Graphs | cs.LG | We consider online similarity prediction problems over networked data. We
begin by relating this task to the more standard class prediction problem,
showing that, given an arbitrary algorithm for class prediction, we can
construct an algorithm for similarity prediction with "nearly" the same mistake
bound, and vice versa. After noticing that this general construction is
computationally infeasible, we target our study to {\em feasible} similarity
prediction algorithms on networked data. We initially assume that the network
structure is {\em known} to the learner. Here we observe that Matrix Winnow
\cite{w07} has a near-optimal mistake guarantee, at the price of cubic
prediction time per round. This motivates our effort for an efficient
implementation of a Perceptron algorithm with a weaker mistake guarantee but
with only poly-logarithmic prediction time. Our focus then turns to the
challenging case of networks whose structure is initially {\em unknown} to the
learner. In this novel setting, where the network structure is only
incrementally revealed, we obtain a mistake-bounded algorithm with a quadratic
prediction time per round.
|
1302.7264 | A Practical Cooperative Multicell MIMO-OFDMA Network Based on Rank
Coordination | cs.IT math.IT | An important challenge of wireless networks is to boost the cell edge
performance and enable multi-stream transmissions to cell edge users.
Interference mitigation techniques relying on multiple antennas and
coordination among cells are nowadays heavily studied in the literature.
Typical strategies in OFDMA networks include coordinated scheduling,
beamforming and power control. In this paper, we propose a novel and practical
type of coordination for OFDMA downlink networks relying on multiple antennas
at the transmitter and the receiver. The transmission ranks, i.e.\ the number
of transmitted streams, and the user scheduling in all cells are jointly
optimized in order to maximize a network utility function accounting for
fairness among users. A distributed coordinated scheduler motivated by an
interference pricing mechanism and relying on a master-slave architecture is
introduced. The proposed scheme is operated based on the user report of a
recommended rank for the interfering cells accounting for the receiver
interference suppression capability. It incurs a very low feedback and backhaul
overhead and enables efficient link adaptation. It is moreover robust to
channel measurement errors and applicable to both open-loop and closed-loop
MIMO operations. A 20% cell edge performance gain over an uncoordinated LTE-A
system is shown through system-level simulations.
|
1302.7280 | Bayesian Consensus Clustering | stat.ML cs.LG | The task of clustering a set of objects based on multiple sources of data
arises in several modern applications. We propose an integrative statistical
model that permits a separate clustering of the objects for each data source.
These separate clusterings adhere loosely to an overall consensus clustering,
and hence they are not independent. We describe a computationally scalable
Bayesian framework for simultaneous estimation of both the consensus clustering
and the source-specific clusterings. We demonstrate that this flexible approach
is more robust than joint clustering of all data sources, and is more powerful
than clustering each data source separately. This work is motivated by the
integrated analysis of heterogeneous biomedical data, and we present an
application to subtype identification of breast cancer tumor samples using
publicly available data from The Cancer Genome Atlas. Software is available at
http://people.duke.edu/~el113/software.html.
|
1302.7283 | Source Separation using Regularized NMF with MMSE Estimates under GMM
Priors with Online Learning for The Uncertainties | cs.LG cs.NA | We propose a new method to enforce priors on the solution of the nonnegative
matrix factorization (NMF). The proposed algorithm can be used for denoising or
single-channel source separation (SCSS) applications. The NMF solution is
guided to follow the Minimum Mean Square Error (MMSE) estimates under Gaussian
mixture prior models (GMM) for the source signal. In SCSS applications, the
spectra of the observed mixed signal are decomposed as a weighted linear
combination of trained basis vectors for each source using NMF. In this work,
the NMF decomposition weight matrices are treated as a distorted image by a
distortion operator, which is learned directly from the observed signals. The
MMSE estimate of the weight matrix under the GMM prior and a log-normal
distribution for the distortion is then found to improve the NMF decomposition
results. The
MMSE estimate is embedded within the optimization objective to form a novel
regularized NMF cost function. The corresponding update rules for the new
objectives are derived in this paper. Experimental results show that the
proposed regularized NMF algorithm improves the source separation performance
compared with using NMF without prior or with other prior models.
|
1302.7314 | Torque Saturation in Bipedal Robotic Walking through Control Lyapunov
Function Based Quadratic Programs | cs.SY cs.RO math.OC | This paper presents a novel method for directly incorporating user-defined
control input saturations into the calculation of a control Lyapunov function
(CLF)-based walking controller for a biped robot. Previous work by the authors
has demonstrated the effectiveness of CLF controllers for stabilizing periodic
gaits for biped walkers, and the current work expands on those results by
providing a more effective means for handling control saturations. The new
approach, based on a convex optimization routine running at a 1 kHz control
update rate, is useful not only for handling torque saturations but also for
incorporating a whole family of user-defined constraints into the online
computation of a CLF controller. The paper concludes with an experimental
implementation of the main results on the bipedal robot MABEL.
|
1303.0004 | Constructions of transitive latin hypercubes | cs.IT math.CO math.IT | A function $f:\{0,...,q-1\}^n\to\{0,...,q-1\}$ invertible in each argument is
called a latin hypercube. A collection $(\pi_0,\pi_1,...,\pi_n)$ of
permutations of $\{0,...,q-1\}$ is called an autotopism of a latin hypercube
$f$ if $\pi_0f(x_1,...,x_n)=f(\pi_1x_1,...,\pi_n x_n)$ for all $x_1$, ...,
$x_n$. We call a latin hypercube isotopically transitive (topolinear) if its
group of autotopisms acts transitively (regularly) on all $q^n$ collections of
argument values. We prove that the number of nonequivalent topolinear latin
hypercubes grows exponentially with respect to $\sqrt{n}$ if $q$ is even and
exponentially with respect to $n^2$ if $q$ is divisible by a square. We show a
connection of the class of isotopically transitive latin squares with the class
of G-loops, known in noncommutative algebra, and establish the existence of a
topolinear latin square that is not a group isotope. We characterize the class
of isotopically transitive latin hypercubes of orders $q=4$ and $q=5$.
Keywords: transitive code, propelinear code, latin square, latin hypercube,
autotopism, G-loop.
|
1303.0018 | Sparse Shape Reconstruction | math.FA cs.CV math-ph math.DG math.MP | This paper introduces a new shape-based image reconstruction technique
applicable to a large class of imaging problems formulated in a variational
sense. Given a collection of shape priors (a shape dictionary), we define our
problem as choosing the right elements and geometrically composing them through
basic set operations to characterize desired regions in the image. This
combinatorial problem can be relaxed and then solved using classical descent
methods. The main component of this relaxation is forming certain compactly
supported functions which we call "knolls", and reformulating the shape
representation as a basis expansion in terms of such functions. To select
suitable elements of the dictionary, our problem ultimately reduces to solving
a nonlinear program with sparsity constraints. We provide a new sparse
nonlinear reconstruction technique to approach this problem. The performance of
the proposed technique is demonstrated on some standard imaging problems
including image segmentation, X-ray tomography and diffusive tomography.
|
1303.0031 | Time Scales in Probabilistic Models of Wireless Sensor Networks | math.PR cs.DC cs.MA math-ph math.MP | We consider a stochastic model of clock synchronization in a wireless network
consisting of N sensors interacting with one dedicated accurate time server.
For large N we find an estimate of the final time synchronization error for
global and relative synchronization. The main results concern the behavior of
the network on different time scales $t=t_N \to \infty$, $N \to \infty$. We
discuss the existence of phase transitions and find the exact time scales on
which an effective clock synchronization of the system takes place.
|
1303.0045 | The Mesh of Civilizations and International Email Flows | cs.SI physics.soc-ph | In The Clash of Civilizations, Samuel Huntington argued that the primary axis
of global conflict was no longer ideological or economic but cultural and
religious, and that this division would characterize the "battle lines of the
future." In contrast to the "top down" approach in previous research focused on
the relations among nation states, we focused on the flows of interpersonal
communication as a bottom-up view of international alignments. To that end, we
mapped the locations of the world's countries in global email networks to see
if we could detect cultural fault lines. Using IP-geolocation on a worldwide
anonymized dataset obtained from a large Internet company, we constructed a
global email network. In computing email flows we employ a novel rescaling
procedure to account for differences due to uneven adoption of a particular
Internet service across the world. Our analysis shows that email flows are
consistent with Huntington's thesis. In addition to location in Huntington's
"civilizations," our results also attest to the importance of both cultural and
economic factors in the patterning of inter-country communication ties.
|
1303.0050 | Tracking the Empirical Distribution of a Markov-modulated
Duplication-Deletion Random Graph | cs.IT math.IT | This paper considers a Markov-modulated duplication-deletion random graph
where at each time instant, one node can either join or leave the network; the
probabilities of joining or leaving evolve according to the realization of a
finite state Markov chain. The paper comprises two results. First, motivated
by social network applications, we analyze the asymptotic behavior of the
degree distribution of the Markov-modulated random graph. Using the asymptotic
degree distribution, an expression is obtained for the delay in searching such
graphs. Second, a stochastic approximation algorithm is presented to track
empirical degree distribution as it evolves over time. The tracking performance
of the algorithm is analyzed in terms of mean square error and a functional
central limit theorem is presented for the asymptotic tracking error.
|
1303.0058 | A Cooperative MARC Scheme Using Analogue Network Coding to Achieve
Second-Order Diversity | cs.IT cs.NI math.IT | A multiple access relay channel (MARC) is considered in which an
analogue-like network coding is implemented in the relay node. This analogue
coding is a simple addition of the received signals at the relay node. Using
"nulling detection" structure employed in V-BLAST receiver, we propose a
detection scheme in the destination which is able to provide a diversity order
of two for all users. We analytically evaluate the performance of our proposed
scheme for the MARC with two users where tight upper bounds for both uncoded
and convolutionally coded transmission blocks are provided. We verify our
analytical evaluations by simulations and compare the results with those of
noncooperative transmission and Alamouti's scheme for the same power and
transmission rate. Our results indicate that while our proposed scheme shows
performance comparable to Alamouti's scheme, it substantially
outperforms noncooperative transmission.
|
1303.0066 | Pure Coordination using the Coordinator--Configurator Pattern | cs.RO | This work-in-progress paper reports on our efforts to improve different
aspects of coordination in complex, component-based robotic systems.
Coordination is a system level aspect concerned with commanding, configuring
and monitoring functional computations such that the system as a whole behaves
as desired. To that end a variety of models such as Petri-nets or Finite State
Machines may be utilized. These models specify actions to be executed, such as
invoking operations or configuring components to achieve a certain goal.
This traditional approach has several disadvantages related to loss of
reusability of coordination models due to coupling with platform-specific
functionality, non-deterministic temporal behavior and limited robustness as a
result of executing platform operations within the context of the coordinator.
To avoid these shortcomings, we propose to split this "rich" coordinator into
a Pure Coordinator and a Configurator. Although the coordinator remains in
charge of commanding and reacting, the execution of actions is deferred to the
Configurator. This pattern, called "Coordinator-Configurator", is implemented
as a novel Configurator domain specific language that can be used together with
any model of coordination. We illustrate the approach by refactoring an
existing application that realizes a safe haptic coupling of two youBot mobile
manipulators.
|
1303.0070 | Entropy Distance | cs.IT math.CO math.IT | Motivated by the approach of random linear codes, a new distance in the
vector space over a finite field is defined as the logarithm of the "surface
area" of a Hamming ball with radius being the corresponding Hamming distance.
It is named entropy distance because of its close relation with entropy
function. It is shown that entropy distance is a metric for a non-binary field
and a pseudometric for the binary field. The entropy distance of a linear code
is defined to be the smallest entropy distance between distinct codewords of
the code. Analogues of the Gilbert bound, the Hamming bound, and the Singleton
bound are derived for the largest size of a linear code given the length and
entropy distance of the code. Furthermore, as an important property related to
lossless joint source-channel coding, the entropy distance of a linear encoder
is defined. Very tight upper and lower bounds are obtained for the largest
entropy distance of a linear encoder with given dimensions of input and output
vector spaces.
|
1303.0071 | Proceedings 1st International Workshop on Strategic Reasoning | cs.GT cs.LO cs.MA | This volume contains the proceedings of the 1st International Workshop on
Strategic Reasoning 2013 (SR 2013), held in Rome (Italy), March 16-17, 2013. The
SR workshop aims to bring together researchers, possibly with different
backgrounds, working on various aspects of strategic reasoning in computer
science, both from a theoretical and a practical point of view. This year SR
has hosted four outstanding invited talks by Krishnendu Chatterjee, Alessio R.
Lomuscio, Jean-Francois Raskin, and Michael Wooldridge. Moreover, the program
committee selected 13 papers among the 23 contributions submitted. Almost all
of them received three reviews, and the contributions were selected according
to quality and relevance to the topics of the workshop.
|
1303.0073 | A Method for Comparing Hedge Funds | q-fin.ST cs.IR cs.LG stat.ML | The paper presents new machine learning methods: signal composition, which
classifies time-series regardless of length, type, and quantity; and
self-labeling, a supervised-learning enhancement. The paper further describes
the implementation of the methods on a financial search engine system to
identify behavioral similarities among time-series representing monthly returns
of 11,312 hedge funds operated during approximately one decade (2000 - 2010).
The presented approach of cross-category and cross-location classification
assists the investor to identify alternative investments.
|
1303.0076 | Bio-Signals-based Situation Comparison Approach to Predict Pain | stat.AP cs.LG stat.ML | This paper describes a time-series-based classification approach to identify
similarities between bio-medical-based situations. The proposed approach allows
classifying collections of time-series representing bio-medical measurements,
i.e., situations, regardless of the type, the length, and the quantity of the
time-series a situation comprises.
|
1303.0088 | Half-Duplex or Full-Duplex Relaying: A Capacity Analysis under
Self-Interference | cs.IT math.IT | In this paper multi-antenna half-duplex and full-duplex relaying are compared
from the perspective of achievable rates. Full-duplexing operation requires
additional resources at the relay such as antennas and RF chains for
self-interference cancellation. Using a practical model for the residual
self-interference, full-duplex achievable rates and degrees of freedom are
computed for the cases for which the relay has the same number of antennas or
the same number of RF chains as in the half-duplex case, and compared with
their half-duplex counterparts. It is shown that power scaling at the relay is
necessary to maximize the degrees of freedom in the full-duplex mode.
|
1303.0089 | Estimating Thematic Similarity of Scholarly Papers with Their Resistance
Distance in an Electric Network Model | cs.DL cs.SI physics.soc-ph | We calculate resistance distances between papers in a nearly bipartite
citation network of 492 papers and the sources cited by them. We validate that
this is a realistic measure of thematic distance if each citation link has an
electric resistance equal to the geometric mean of the number of the paper's
references and the citation number of the cited source.
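A minimal sketch of how such an effective-resistance computation could look (the toy citation network and its numbers below are illustrative, not the paper's 492-paper data): build the weighted Laplacian with conductances 1/R, ground one terminal, inject unit current at the other, and read off the potential at the injection node.

```python
from math import sqrt, isclose

def effective_resistance(nodes, edges, a, b):
    """Effective resistance between a and b in a resistor network.
    edges maps (u, v) -> resistance.  Grounds b, injects unit current
    at a, and solves the Kirchhoff node equations by Gaussian elimination;
    the potential at a then equals the effective resistance."""
    idx = {u: i for i, u in enumerate(nodes)}
    n = len(nodes)
    L = [[0.0] * n for _ in range(n)]            # weighted graph Laplacian
    for (u, v), r in edges.items():
        g, i, j = 1.0 / r, idx[u], idx[v]        # conductance = 1/R
        L[i][i] += g; L[j][j] += g
        L[i][j] -= g; L[j][i] -= g
    keep = [i for i in range(n) if i != idx[b]]  # ground node b
    A = [[L[i][j] for j in keep] for i in keep]
    rhs = [1.0 if i == idx[a] else 0.0 for i in keep]
    m = len(A)
    for c in range(m):                           # elimination, partial pivoting
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p], rhs[c], rhs[p] = A[p], A[c], rhs[p], rhs[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            A[r] = [A[r][k] - f * A[c][k] for k in range(m)]
            rhs[r] -= f * rhs[c]
    v = [0.0] * m
    for c in reversed(range(m)):                 # back substitution
        v[c] = (rhs[c] - sum(A[c][k] * v[k] for k in range(c + 1, m))) / A[c][c]
    return v[keep.index(idx[a])]

# Toy citation network: papers P1, P2 each have 2 references; source S2 is
# cited by both, S1 and S3 once each; R = geometric mean of the two counts.
edges = {("P1", "S1"): sqrt(2 * 1), ("P1", "S2"): sqrt(2 * 2),
         ("P2", "S2"): sqrt(2 * 2), ("P2", "S3"): sqrt(2 * 1)}
```

In this toy network the thematic distance between P1 and P2 is 4.0: current flows only through the shared source S2, over two resistors of value 2 in series, while the dangling sources S1 and S3 carry no current.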
|
1303.0093 | Multidimensional Social Network in the Social Recommender System | cs.SI cs.IR physics.soc-ph | All online sharing systems gather data that reflects users' collective
behaviour and their shared activities. This data can be used to extract
different kinds of relationships, which can be grouped into layers, and which
are basic components of the multidimensional social network proposed in the
paper. The layers are created on the basis of two types of relations between
humans, i.e. direct and object-based ones which respectively correspond to
either social or semantic links between individuals. For better understanding
of the complexity of the social network structure, layers and their profiles
were identified and studied on two snapshots, separated in time, of the Flickr
population. Additionally, for each layer, a separate strength measure was
proposed. The experiments on the Flickr photo sharing system revealed that the
relationships between users result either from semantic links between objects
they operate on or from social connections of these users. Moreover, the
density of the social network increases in time. The second part of the study
is devoted to building a social recommender system that supports the creation
of new relations between users in a multimedia sharing system. Its main goal is
to generate personalized suggestions that are continuously adapted to users'
needs depending on the personal weights assigned to each layer in the
multidimensional social network. The conducted experiments confirmed the
usefulness of the proposed model.
|
1303.0095 | Label-dependent Feature Extraction in Social Networks for Node
Classification | cs.SI cs.LG | A new method of feature extraction in social networks for within-network
classification is proposed in the paper. The method provides new features
calculated by combining network structure information with the class
labels assigned to nodes. The influence of various features on classification
performance has also been studied. The experiments on real-world data have
shown that features created owing to the proposed method can lead to
significant improvement of classification accuracy.
|
1303.0140 | Second-Order Non-Stationary Online Learning for Regression | cs.LG stat.ML | The goal of a learner, in standard online learning, is to keep the cumulative
loss not much larger than that of the best-performing function from some fixed
class. Numerous algorithms were shown to bring this gap arbitrarily close to
zero, compared with the best function chosen off-line. Nevertheless,
many real-world applications, such as adaptive filtering, are non-stationary in
nature, and the best prediction function may drift over time. We introduce two
novel algorithms for online regression, designed to work well in non-stationary
environments. Our first algorithm performs adaptive resets to forget the
history, while the second is last-step min-max optimal in the context of drift.
We analyze both algorithms in the worst-case regret framework and show that
they maintain an average loss close to that of the best slowly changing
sequence of linear functions, as long as the cumulative drift is sublinear. In
addition, in the stationary case, when no drift occurs, our algorithms suffer
logarithmic regret, as for previous algorithms. Our bounds improve over the
existing ones, and simulations demonstrate the usefulness of these algorithms
compared with other state-of-the-art approaches.
|
1303.0141 | Routing for Security in Networks with Adversarial Nodes | cs.IT math.IT | We consider the problem of secure unicast transmission between two nodes in a
directed graph, where an adversary eavesdrops/jams a subset of nodes. This
adversarial setting is in contrast to traditional ones where the adversary
controls a subset of links. In particular, we mainly study the class of
routing-only schemes (as opposed to those allowing coding inside the network).
Routing-only schemes usually have low implementation complexity, yet a
characterization of the rates achievable by such schemes was open prior to this
work. We first propose an LP based solution for secure communication against
eavesdropping, and show that it is information-theoretically rate-optimal among
all routing-only schemes. The idea behind our design is to balance information
flow in the network so that no subset of nodes observe "too much" information.
Interestingly, we show that the rates achieved by our routing-only scheme are
always at least as good as, and sometimes better than, those achieved by
"na\"ive" network coding schemes (i.e., the rate-optimal scheme designed for the
traditional scenario where the adversary controls links in a network rather
than nodes). We also demonstrate non-trivial network coding schemes that
achieve rates at least as high as (and again sometimes better than) those
achieved by our routing schemes, but leave open the question of characterizing
the optimal rate-region of the problem under all possible coding schemes. We
then extend these routing-only schemes to the adversarial node-jamming
scenarios and show similar results. During the journey of our investigation, we
also develop a new technique that has the potential to derive non-trivial
bounds for general secure-communication schemes.
|
1303.0152 | Designing Unimodular Codes via Quadratic Optimization is not Always Hard | cs.SY cs.IT math.IT | The NP-hard problem of optimizing a quadratic form over the unimodular vector
set arises in radar code design scenarios as well as other active sensing and
communication applications. To tackle this problem (which we call unimodular
quadratic programming (UQP)), several computational approaches are devised and
studied. A specialized local optimization scheme for UQP is introduced and
shown to yield superior results compared to general local optimization methods.
Furthermore, a \textbf{m}onotonically \textbf{er}ror-bound \textbf{i}mproving
\textbf{t}echnique (MERIT) is proposed to obtain the global optimum or a local
optimum of UQP with good sub-optimality guarantees. The provided sub-optimality
guarantees are case-dependent and generally outperform the $\pi/4$
approximation guarantee of semi-definite relaxation. Several numerical examples
are presented to illustrate the performance of the proposed method. The
examples show that for cases including several matrix structures used in radar
code design, MERIT can solve UQP efficiently in the sense of sub-optimality
guarantee and computational time.
|
1303.0154 | Robust Estimation of Optical Phase Varying as a Continuous Resonant
Process | math.OC cs.SY quant-ph | It is well-known that adaptive homodyne estimation of continuously varying
optical phase provides superior accuracy in the phase estimate as compared to
adaptive or non-adaptive static estimation. However, most phase estimation
schemes rely on precise knowledge of the underlying parameters of the system
under measurement, and performance deteriorates significantly with changes in
these parameters; hence it is desired to develop robust estimation techniques
immune to such uncertainties. In related works, we have already shown how
adaptive homodyne estimation can be made robust to uncertainty in an underlying
parameter of the phase varying as a simplistic Ornstein-Uhlenbeck stochastic
noise process. Here, we demonstrate robust phase estimation for a more
complicated resonant noise process using a guaranteed cost robust filter.
|
1303.0156 | Exploiting the Accumulated Evidence for Gene Selection in Microarray
Gene Expression Data | cs.CE cs.LG q-bio.QM | Machine Learning methods have of late made significant progress in solving
multidisciplinary problems in the field of cancer classification using
microarray gene expression data. Feature subset selection methods can play an
important role in the modeling process, since these tasks are characterized by
a large number of features and a few observations, making the modeling a
non-trivial undertaking. In this particular scenario, it is extremely important
to select genes by taking into account the possible interactions with other
gene subsets. This paper shows that, by accumulating the evidence in favour (or
against) each gene along the search process, the obtained gene subsets may
constitute better solutions, in terms of predictive accuracy, gene subset
size, or both. The proposed technique is extremely simple and applicable at
a negligible overhead in cost.
|
1303.0157 | Scalable Cost-Aware Multi-Way Influence Maximization | cs.DS cs.SI physics.soc-ph | Viral marketing is different from other marketing strategies since it
leverages the power of influence in intimate relationships, e.g., close friends,
family members, couples. Due to the development and popularity of social
networking services, such as Facebook, Twitter, and Pinterest, the new notion
of "social media marketing" has appeared in recent years and presents new
opportunities for enabling large-scale and prevalent viral marketing online. To
boost the growth of their sales, businesses are embracing social media in a big
way. According to USA Today, the sales of software to run corporate social
networks will grow 61\% a year and be a $6.4$ billion business by 2016.
|
1303.0183 | Successful strategies for competing networks | physics.soc-ph cs.SI nlin.AO q-bio.MN q-bio.PE | Competitive interactions represent one of the driving forces behind evolution
and natural selection in biological and sociological systems. For example,
animals in an ecosystem may vie for food or mates; in a market economy, firms
may compete over the same group of customers; sensory stimuli may compete for
limited neural resources in order to enter the focus of attention. Here, we
derive rules based on the spectral properties of the network governing the
competitive interactions between groups of agents organized in networks. In the
scenario studied here the winner of the competition, and the time needed to
prevail, essentially depend on the way a given network connects to its
competitors and on its internal structure. Our results allow one to assess the
extent to which real networks optimize the outcome of their interactions, and
also provide strategies through which competing networks can improve on their
situation. The proposed approach is applicable to a wide range of systems that
can be modeled as networks.
|
1303.0198 | On sparse sensing and sparse sampling of coded signals at sub-Landau
rates | cs.IT math.IT | Advances of information-theoretic understanding of sparse sampling of
continuous uncoded signals at sampling rates exceeding the Landau rate were
reported in recent works. This work examines sparse sampling of coded signals
at sub-Landau sampling rates. It is shown that with coded signals the Landau
condition may be relaxed and the sampling rate required for signal
reconstruction and for support detection can be lower than the effective
bandwidth. Equivalently, the number of measurements in the corresponding sparse
sensing problem can be smaller than the support size. Tight bounds on
information rates and on signal and support detection performance are derived
for the Gaussian sparsely sampled channel and for the frequency-sparse channel
using the context of state dependent channels. Support detection results are
verified by a simulation. When the system is high-dimensional, the required SNR
is shown to be finite but high, rising as the sampling rate decreases; in
some practical applications it can be lowered by reducing the a priori
uncertainty about the support, e.g., by concentrating the frequency support into
a finite number of subbands.
|
1303.0213 | The Semantic Web takes Wing: Programming Ontologies with Tawny-OWL | cs.AI cs.DL | The Tawny-OWL library provides a fully-programmatic environment for ontology
building; it enables the use of a rich set of tools for ontology development,
by recasting development as a form of programming. It is built in Clojure - a
modern Lisp dialect, and is backed by the OWL API. Used simply, it has a
similar syntax to OWL Manchester syntax, but it provides arbitrary
extensibility and abstraction. It builds on existing facilities for Clojure,
which provides a rich and modern programming tool chain, for versioning,
distributed development, build, testing and continuous integration. In this
paper, we describe the library, this environment, and its potential
implications for the ontology development process.
|
1303.0229 | Wireless Network-Coded Multi-Way Relaying Using Latin Hyper-Cubes | cs.IT math.IT | Physical layer network-coding for the $n$-way wireless relaying scenario is
dealt with, where each of the $n$ user nodes $X_1,$ $X_2,...,X_n$ wishes to
communicate its messages to all the other $(n-1)$ nodes with the help of the
relay node R. The given scheme, based on the denoise-and-forward scheme
proposed for two-way relaying by Popovski et al. in \cite{PoY1}, employs two
phases: Multiple Access (MA) phase and Broadcast (BC) phase with each phase
utilizing one channel use and hence totally two channel uses. Physical layer
network-coding using the denoise-and-forward scheme was done for the two-way
relaying scenario in \cite{KPT}, for the three-way relaying scenario in \cite{SVR},
and for the four-way relaying scenario in \cite{ShR}. This paper employs
the denoise-and-forward scheme for physical layer network coding in the $n$-way
relaying scenario, illustrated with the case $n = 5$, which has not been dealt with
so far. It is observed that adaptively changing the network coding map used at
the relay according to the channel conditions reduces the impact of multiple
access interference which occurs at the relay during the MA phase. These
network coding maps are chosen so that they satisfy a requirement called
\textit{exclusive law}. We show that when the $n$ users transmit points from
the same $M$-PSK $(M=2^{\lambda})$ constellation, every such network coding map
that satisfies the exclusive law can be represented by an $n$-fold Latin
Hyper-Cube of side $M$. The singular fade subspaces resulting from the scheme
are described and enumerated for general values of $n$ and $M$ and are
classified based on their removability in the given scenario. A network code
map to be used by the relay for the BC phase aiming at reducing the effect of
interference at the MA stage is obtained.
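For intuition only (the paper's adaptive maps are more elaborate): the exclusive law requires that, fixing all but one user's symbol, the relay's map stays injective in the free symbol, which is exactly the defining property of an $n$-fold Latin hyper-cube of side $M$. A brute-force checker, with the mod-$M$ sum as the canonical map satisfying the law:

```python
from itertools import product

def satisfies_exclusive_law(f, n, M):
    """Return True iff f: {0..M-1}^n -> labels is injective in each
    argument when the other n-1 arguments are held fixed, i.e. iff the
    value array of f is an n-fold Latin hyper-cube of side M."""
    for pos in range(n):                        # which argument varies
        for rest in product(range(M), repeat=n - 1):
            outputs = {f(*(rest[:pos] + (s,) + rest[pos:])) for s in range(M)}
            if len(outputs) != M:               # a collision violates the law
                return False
    return True

mod_sum = lambda *xs: sum(xs) % 4               # canonical exclusive-law map
```

For example, `satisfies_exclusive_law(mod_sum, 3, 4)` holds, while a map with collisions, such as `(a, b) -> (a * b) % 4`, already fails for n = 2 (fix a = 0 and every output is 0).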
|
1303.0247 | A Coding-Theoretic Application of Baranyai's Theorem | cs.IT math.IT | Baranyai's theorem is a well-known theorem in the theory of hypergraphs. A
corollary of this theorem says that one can partition the family of all
$u$-subsets of an $n$-element set into ${n-1\choose u-1}$ sub-families such
that each sub-family form a partition of the $n$-element set, where $n$ is
divisible by $u$. In this paper, we present a coding-theoretic application of
Baranyai's theorem (or equivalently, the corollary). More precisely, we propose
the first purely combinatorial construction of locally decodable codes. Locally
decodable codes are error-correcting codes that allow the recovery of any
message bit by looking at only a few bits of the codeword. Such codes have
attracted a lot of attention in recent years. We stress that our construction
does not improve the parameters of known constructions. What makes it
interesting is the underlying combinatorial techniques and their potential in
future applications.
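General constructions behind Baranyai's theorem are involved, but the $u = 2$ case of the corollary is the classical one-factorization of $K_n$, which the round-robin "circle method" produces explicitly; a sketch:

```python
from itertools import combinations

def one_factorization(n):
    """Circle method: partition all 2-subsets of {0,...,n-1} (n even)
    into n-1 perfect matchings -- the u = 2 case of the corollary,
    since C(n-1, u-1) = n-1."""
    assert n % 2 == 0
    others = list(range(n - 1))     # node n-1 stays fixed; the rest rotate
    rounds = []
    for _ in range(n - 1):
        matching = [frozenset({n - 1, others[0]})]
        matching += [frozenset({others[i], others[-i]})
                     for i in range(1, n // 2)]
        rounds.append(matching)
        others = others[1:] + others[:1]
    return rounds
```

Each of the n-1 rounds is a perfect matching, hence a partition of the n-element set, and every 2-subset occurs in exactly one round.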
|
1303.0283 | Inverse Signal Classification for Financial Instruments | cs.LG cs.IR q-fin.ST stat.ML | The paper presents new machine learning methods: signal composition, which
classifies time-series regardless of length, type, and quantity; and
self-labeling, a supervised-learning enhancement. The paper further describes
the implementation of the methods on a financial search engine system using a
collection of 7,881 financial instruments traded during 2011 to identify
inverse behavior among the time-series.
|
1303.0284 | Social Recommendations within the Multimedia Sharing Systems | cs.SI cs.IR physics.soc-ph | The social recommender system that supports the creation of new relations
between users in the multimedia sharing system is presented in the paper. To
generate suggestions, the new concept of a multirelational social network was
introduced. It covers both direct and object-based relationships that
reflect social and semantic links between users. The main goal of the new
method is to create the personalized suggestions that are continuously adapted
to users' needs depending on the personal weights assigned to each layer from
the social network. The conducted experiments confirmed the usefulness of the
proposed model.
|
1303.0296 | Performance of Spatially-Coupled LDPC Codes and Threshold Saturation
over BICM Channels | cs.IT math.IT | We study the performance of binary spatially-coupled low-density parity-check
codes (SC-LDPC) when used with bit-interleaved coded-modulation (BICM) schemes.
This paper considers the cases when transmission takes place over additive
white Gaussian noise (AWGN) channels and Rayleigh fast-fading channels. The
technique of upper bounding the maximum-a-posteriori (MAP) decoding performance
of LDPC codes using an area theorem is extended for BICM schemes. The upper
bound is computed for both the optimal MAP demapper and the suboptimal
max-log-MAP (MLM) demapper. It is observed that this bound approaches the noise
threshold of BICM channels for regular LDPC codes with large degrees. The rest
of the paper extends these techniques to SC-LDPC codes and the phenomenon of
threshold saturation is demonstrated numerically. Based on numerical evidence,
we conjecture that the belief-propagation (BP) decoding threshold of SC-LDPC
codes approaches the MAP decoding threshold of the underlying LDPC ensemble on
BICM channels. Numerical results also show that SC-LDPC codes approach the BICM
capacity over different channels and modulation schemes.
|
1303.0309 | One-Class Support Measure Machines for Group Anomaly Detection | stat.ML cs.LG | We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on Sloan Digital Sky
Survey dataset and High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
|
1303.0323 | Clubs-based Particle Swarm Optimization | cs.NE | This paper introduces a new dynamic neighborhood network for particle swarm
optimization. In the proposed Clubs-based Particle Swarm Optimization (C-PSO)
algorithm, each particle initially joins a default number of what we call
'clubs'. Each particle is affected by its own experience and the experience of
the best performing member of the clubs it is a member of. Clubs membership is
dynamic, where the worst performing particles socialize more by joining more
clubs to learn from other particles and the best performing particles are made
to socialize less by leaving clubs to reduce their strong influence on other
members. Particles return gradually to default membership level when they stop
showing extreme performance. Inertia weights of swarm members are made random
within a predefined range. This proposed dynamic neighborhood algorithm is
compared with two other algorithms having static neighborhood topologies on a
set of classic benchmark problems. The results showed superior performance for
C-PSO in terms of escaping local optima and convergence speed.
|
1303.0339 | Learning Hash Functions Using Column Generation | cs.LG | Fast nearest neighbor searching is becoming an increasingly important tool in
solving many large-scale problems. Recently a number of approaches to learning
data-dependent hash functions have been developed. In this work, we propose a
column generation based method for learning data-dependent hash functions on
the basis of proximity comparison information. Given a set of triplets that
encode the pairwise proximity comparison information, our method learns hash
functions that preserve the relative comparison relationships in the data as
well as possible within the large-margin learning framework. The learning
procedure is implemented using column generation and hence is named CGHash. At
each iteration of the column generation procedure, the best hash function is
selected. Unlike most other hashing methods, our method generalizes to new data
points naturally; and has a training objective which is convex, thus ensuring
that the global optimum can be identified. Experiments demonstrate that the
proposed method learns compact binary codes and that its retrieval performance
compares favorably with state-of-the-art methods when tested on a few benchmark
datasets.
|
1303.0341 | Matrix Completion via Max-Norm Constrained Optimization | cs.LG cs.IT math.IT stat.ML | Matrix completion has been well studied under the uniform sampling model and
the trace-norm regularized methods perform well both theoretically and
numerically in such a setting. However, the uniform sampling model is
unrealistic for a range of applications and the standard trace-norm relaxation
can behave very poorly when the underlying sampling scheme is non-uniform.
In this paper we propose and analyze a max-norm constrained empirical risk
minimization method for noisy matrix completion under a general sampling model.
The optimal rate of convergence is established under the Frobenius norm loss in
the context of approximately low-rank matrix reconstruction. It is shown that
the max-norm constrained method is minimax rate-optimal and yields a unified
and robust approximate recovery guarantee, with respect to the sampling
distributions. The computational effectiveness of this method is also
discussed, based on first-order algorithms for solving convex optimizations
involving max-norm regularization.
|
1303.0344 | Network-based stochastic competitive learning approach to disambiguation
in collaborative networks | cs.SI physics.soc-ph | Many patterns have been uncovered in complex systems through the application
of concepts and methodologies of complex networks. Unfortunately, the validity
and accuracy of the unveiled patterns are strongly dependent on the amount of
unavoidable noise pervading the data, such as the presence of homonymous
individuals in social networks. In the current paper, we investigate the
problem of name disambiguation in collaborative networks, a task that plays a
fundamental role in a myriad of scientific contexts. In particular, we use an
unsupervised technique which relies on a particle competition mechanism in a
networked environment to detect the clusters. It has been shown that, in this
kind of environment, the learning process can be improved because the network
representation of data can capture topological features of the input data set.
Specifically, in the proposed disambiguating model, a set of particles is
randomly spawned into the nodes constituting the network. As time progresses,
the particles employ a movement strategy composed of a probabilistic convex
mixture of random and preferential walking policies. In the former, the walking
rule exclusively depends on the topology of the network and is responsible for
the exploratory behavior of the particles. In the latter, the walking rule
depends both on the topology and the domination levels that the particles
impose on the neighboring nodes. This type of behavior compels the particles to
perform a defensive strategy, because it will force them to revisit nodes that
are already dominated by them, rather than exploring rival territories.
Computer simulations conducted on the networks extracted from the arXiv
repository of preprint papers and also from other databases reveal the
effectiveness of the model, which turned out to be more accurate than
traditional clustering methods.
|
1303.0346 | Secure Distance Bounding Verification using Physical-Channel Properties | cs.CR cs.IT math.IT | We consider the problem of distance bounding verification (DBV), where a
proving party claims a distance and a verifying party ensures that the prover
is within the claimed distance. Current approaches to "secure" distance
estimation use the signal's time of flight, which requires the verifier to have an
accurate clock. We study secure DBV using physical channel properties as an
alternative to time measurement. We consider a signal propagation environment
that attenuates the signal as a function of distance and then corrupts it with
additive noise.
We consider three attacking scenarios against DBV, namely distance fraud
(DFA), mafia fraud (MFA) and terrorist fraud (TFA) attacks. We show it is
possible to construct efficient DBV protocols with DFA and MFA security, even
against an unbounded adversary; on the other hand, it is impossible to design
TFA-secure protocols without time measurement, even with a
computationally-bounded adversary. We however provide a TFA-secure construction
under the condition that the adversary's communication capability is limited to
the bounded retrieval model (BRM). We use numerical analysis to examine the
communication complexity of the introduced DBV protocols. We discuss our
results and give directions for future research.
|
1303.0347 | Probing the statistical properties of unknown texts: application to the
Voynich Manuscript | physics.soc-ph cs.CL physics.data-an | While the use of statistical physics methods to analyze large corpora has
been useful to unveil many patterns in texts, no comprehensive investigation of
the properties of statistical measurements across different languages and texts
has been performed. In this study we propose a framework that
aims at determining if a text is compatible with a natural language and which
languages are closest to it, without any knowledge of the meaning of the words.
The approach is based on three types of statistical measurements, i.e. obtained
from first-order statistics of word properties in a text, from the topology of
complex networks representing text, and from intermittency concepts where text
is treated as a time series. Comparative experiments were performed with the
New Testament in 15 different languages and with distinct books in English and
Portuguese in order to quantify the dependency of the different measurements on
the language and on the story being told in the book. The metrics found to be
informative in distinguishing real texts from their shuffled versions include
assortativity, degree and selectivity of words. As an illustration, we analyze
an undeciphered medieval manuscript known as the Voynich Manuscript. We show
that it is mostly compatible with natural languages and incompatible with
random texts. We also obtain candidates for key-words of the Voynich Manuscript
which could be helpful in the effort of deciphering it. Because we were able to
identify statistical measurements that are more dependent on the syntax than on
the semantics, the framework may also serve for text analysis in
language-dependent applications.
|
1303.0350 | Structure-semantics interplay in complex networks and its effects on the
predictability of similarity in texts | cs.CL physics.soc-ph | There are different ways to define similarity for grouping similar texts into
clusters, as the concept of similarity may depend on the purpose of the task.
For instance, in topic extraction similar texts mean those within the same
semantic field, whereas in author recognition stylistic features should be
considered. In this study, we introduce ways to classify texts employing
concepts of complex networks, which may be able to capture syntactic, semantic
and even pragmatic features. The interplay between the various metrics of the
complex networks is analyzed with three applications, namely identification of
machine translation (MT) systems, evaluation of quality of machine translated
texts and authorship recognition. We shall show that topological features of
the networks representing texts can enhance the ability to identify MT systems
in particular cases. For evaluating the quality of MT texts, on the other hand,
high correlation was obtained with methods capable of capturing the semantics.
This was expected because the golden standards used are themselves based on
word co-occurrence. Notwithstanding, the Katz similarity, which involves
semantic and structure in the comparison of texts, achieved the highest
correlation with the NIST measurement, indicating that in some cases the
combination of both approaches can improve the ability to quantify quality in
MT. In authorship recognition, again the topological features were relevant in
some contexts, though for the books and authors analyzed good results were
obtained with semantic features as well. Because hybrid approaches encompassing
semantic and topological features have not been extensively used, we believe
that the methodology proposed here may be useful to enhance text classification
considerably, as it combines well-established strategies.
|
1303.0362 | Inductive Sparse Subspace Clustering | cs.LG | Sparse Subspace Clustering (SSC) has achieved state-of-the-art clustering
quality by performing spectral clustering over a $\ell^{1}$-norm based
similarity graph. However, SSC is a transductive method which does not handle
data not used to construct the graph (out-of-sample data). For each
new datum, SSC requires solving $n$ optimization problems in O(n) variables for
performing the algorithm over the whole data set, where $n$ is the number of
data points. Therefore, it is inefficient to apply SSC in fast online
clustering and scalable graphing. In this letter, we propose an inductive
spectral clustering algorithm, called inductive Sparse Subspace Clustering
(iSSC), which makes SSC feasible to cluster out-of-sample data. iSSC adopts the
assumption that high-dimensional data actually lie on a low-dimensional
manifold, such that out-of-sample data can be grouped in the embedding space
learned from in-sample data. Experimental results show that iSSC is promising
in clustering out-of-sample data.
|
1303.0381 | Spectral Efficient Optimization in OFDM Systems with Wireless
Information and Power Transfer | cs.IT math.IT | This paper considers an orthogonal frequency division multiplexing (OFDM)
point-to-point wireless communication system with simultaneous wireless
information and power transfer. We study a receiver which is able to harvest
energy from the desired signal, noise, and interference. In particular, we
consider a power splitting receiver which dynamically splits the received power
into two power streams for information decoding and energy harvesting. We
design power allocation algorithms maximizing the spectral efficiency
(bit/s/Hz) of data transmission. In particular, the algorithm design is
formulated as a nonconvex optimization problem which takes into account the
constraint on the minimum power delivered to the receiver. The problem is
solved by using convex optimization techniques and a one-dimensional search.
The optimal power allocation algorithm serves as a system benchmark scheme due
to its high complexity. To strike a balance between system performance and
computational complexity, we also propose two suboptimal algorithms which
require a low computational complexity. Simulation results demonstrate the
excellent performance of the proposed suboptimal algorithms.
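The one-dimensional search mentioned above can be illustrated for a single subcarrier: sweep the power-splitting ratio and maximize the rate subject to a minimum harvested power. This is an illustrative single-carrier sketch under local assumptions (all names and the linear energy-harvesting model are assumptions), not the paper's full OFDM algorithm.

```python
import numpy as np

def best_power_split(P, h2, sigma2, Pmin_harv, eta=1.0, grid=10001):
    """Grid search over the power-splitting ratio rho: maximize
    log2(1 + (1-rho)*P*h2/sigma2) subject to the harvested power
    eta*rho*P*h2 meeting the minimum requirement Pmin_harv."""
    best = None
    for rho in np.linspace(0.0, 1.0, grid):
        if eta * rho * P * h2 < Pmin_harv:
            continue                      # harvesting constraint violated
        rate = np.log2(1.0 + (1.0 - rho) * P * h2 / sigma2)
        if best is None or rate > best[1]:
            best = (rho, rate)
    return best                           # (rho*, rate), or None if infeasible
```

Since the rate is decreasing in rho, the search settles on the smallest feasible splitting ratio, i.e. the receiver harvests just enough power to satisfy the constraint.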
|
1303.0388 | On computation of the total set of robust discrete-time PID controllers | cs.SY | The problem of finding the set of all multi-model robust PID and three-term
stabilizers for discrete-time systems is solved in this paper. The method uses
the fact that decoupling of parameter space at singular frequencies is
invariant under a linear transformation. The resulting stable regions are
composed by convex polygonal slices. The design problem includes the assertion
of intervals with stable polygons and the detection of stable polygons. This
paper completes the solutions to both problems.
|
1303.0407 | IRS for Computer Character Sequences Filtration: a new software tool and
algorithm to support the IRS at tokenization process | cs.IR | Tokenization is the task of chopping a character sequence up into pieces,
called tokens, perhaps at the same time throwing away certain characters, such
as punctuation. A token is an instance of a sequence of characters in some
particular document that are grouped together as a useful semantic unit for
processing. A new software tool and algorithm to support the IRS in the
tokenization process are presented. Our proposed tool filters out four computer
character sequences: IP addresses, Web URLs, dates, and email addresses. The
tool uses pattern matching algorithms and filtration methods. After this
process, the IRS can start a new tokenization process on the retrieved text,
which will be free of these sequences.
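A minimal sketch of such pattern-based filtration is shown below. The regular expressions are illustrative approximations of the four sequence types; the paper's actual patterns are not given in the abstract.

```python
import re

# Hypothetical patterns approximating the four sequence types the tool filters.
PATTERNS = [
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),        # IP addresses
    re.compile(r"\bhttps?://\S+|\bwww\.\S+"),          # Web URLs
    re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),  # dates (numeric forms)
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),       # email addresses
]

def filter_sequences(text: str) -> str:
    """Remove the special character sequences before tokenization."""
    for pat in PATTERNS:
        text = pat.sub(" ", text)
    return " ".join(text.split())

print(filter_sequences(
    "Mail me@example.com at 12/05/2013 via http://x.org from 10.0.0.1 today"))
# → Mail at via from today
```

The remaining text can then be passed to an ordinary tokenizer.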
|
1303.0415 | Distributed Power Allocation for Coordinated Multipoint Transmissions in
Distributed Antenna Systems | cs.IT math.IT | This paper investigates the distributed power allocation problem for
coordinated multipoint (CoMP) transmissions in distributed antenna systems
(DAS). Traditional duality based optimization techniques cannot be directly
applied to this problem, because the non-strict concavity of the CoMP
transmission's achievable rate with respect to the transmission power induces
that the local power allocation subproblems have non-unique optimum solutions.
We propose a distributed power allocation algorithm to resolve this non-strict
concavity difficulty. This algorithm only requires local information exchange
among neighboring base stations serving the same user, and is thus scalable as
the network size grows. The step-size parameters of this algorithm are
determined by only local user access relationship (i.e., the number of users
served by each antenna), but do not rely on channel coefficients. Therefore,
the convergence speed of this algorithm is quite robust to different channel
fading coefficients. We rigorously prove that this algorithm converges to an
optimum solution of the power allocation problem. Simulation results are
presented to demonstrate the effectiveness of the proposed power allocation
algorithm.
|
1303.0417 | On the convergence of the IRLS algorithm in Non-Local Patch Regression | cs.CV stat.ML | Recently, it was demonstrated in [CS2012,CS2013] that the robustness of the
classical Non-Local Means (NLM) algorithm [BCM2005] can be improved by
incorporating $\ell^p (0 < p \leq 2)$ regression into the NLM framework. This
general optimization framework, called Non-Local Patch Regression (NLPR),
contains NLM as a special case. Denoising results on synthetic and natural
images show that NLPR consistently performs better than NLM beyond a moderate
noise level, and significantly so when $p$ is close to zero. An iteratively
reweighted least-squares (IRLS) algorithm was proposed for solving the
regression problem in NLPR, where the NLM output was used to initialize the
iterations. Based on exhaustive numerical experiments, we observe that the IRLS
algorithm is globally convergent (for arbitrary initialization) in the convex
regime $1 \leq p \leq 2$, and locally convergent (fails very rarely using NLM
initialization) in the non-convex regime $0 < p < 1$. In this letter, we adapt
the "majorize-minimize" framework introduced in [Voss1980] to explain these
observations.
[CS2012] Chaudhury et al. (2012), "Non-local Euclidean medians," IEEE Signal
Processing Letters.
[CS2013] Chaudhury et al. (2013), "Non-local patch regression: Robust image
denoising in patch space," IEEE ICASSP.
[BCM2005] Buades et al. (2005), "A review of image denoising algorithms, with
a new one," Multiscale Modeling and Simulation.
[Voss1980] Voss et al. (1980), "Linear convergence of generalized Weiszfeld's
method," Computing.
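The core IRLS iteration for the $\ell^p$ regression step can be sketched as a generalized Weiszfeld update. This is a generic sketch under assumptions (function and parameter names are illustrative; patches are flattened vectors of equal length), not the authors' full NLPR denoiser.

```python
import numpy as np

def irls_patch_center(patches, w, p=1.0, iters=50, eps=1e-8):
    """Minimize sum_i w_i * ||x - patches[i]||^p over x by iteratively
    reweighted least squares (generalized Weiszfeld iteration)."""
    x = np.average(patches, axis=0, weights=w)   # p = 2 (least-squares) init
    for _ in range(iters):
        d = np.linalg.norm(patches - x, axis=1)
        r = w * (d + eps) ** (p - 2)             # IRLS reweighting
        x = np.average(patches, axis=0, weights=r)
    return x
```

For p = 1 this converges to the weighted geometric median of the patches; in the non-convex regime 0 < p < 1, the NLM output would serve as the initialization, as in the letter.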
|
1303.0418 | Transparent Data Encryption -- Solution for Security of Database
Contents | cs.DB cs.CR | The present study deals with Transparent Data Encryption which is a
technology used to solve the problems of security of data. Transparent Data
Encryption means encrypting databases on hard disk and on any backup media.
Present day global business environment presents numerous security threats and
compliance challenges. To protect against data thefts and frauds we require
security solutions that are transparent by design.
|
1303.0425 | Methods for robust PID control | cs.SY | A comprehensive theory for robust PID control in continuous-time and
discrete-time domain is reviewed in this paper. For a given finite set of
linear time invariant plants, algorithms for fast computation of robustly
stabilizing regions in the ($k_P, k_I, k_D$)-parameter space are introduced.
The main impetus is given by the fact that non-convex stable regions in the PID
parameter space can be built up by convex polygonal slices. A simple and
elegant theory has evolved over the last few years to a quite mature level.
|
1303.0444 | Reconciliation between the Tsallis maximum entropy principle and large
deviation theory | cond-mat.stat-mech cs.IT math.IT | The necessary conditions (NC) that reconcile canonical probability
distributions obtained from the q-maximum entropy principle, subjected to both
i) the additive duality of generalized statistics and ii) normal averages
expectations with the large deviation theory, are derived. The validity of
these necessary conditions is established on the basis of a result concerning
large deviation properties of conditional measures. The NC for normal averages
expectations are advantageous because they avoid the excessively prohibitive
conditions obtained by previous studies when employing other forms for defining
q-expectations. Numerical examples for an exemplary case are provided.
|
1303.0445 | Detecting and resolving spatial ambiguity in text using named entity
extraction and self learning fuzzy logic techniques | cs.IR cs.CL | Information extraction identifies useful and relevant text in a document and
converts unstructured text into a form that can be loaded into a database
table. Named entity extraction is a main task in the process of information
extraction and is a classification problem in which words are assigned to one
or more semantic classes or to a default non-entity class. A word which can
belong to one or more classes and which has a level of uncertainty in it can be
best handled by a self learning Fuzzy Logic Technique. This paper proposes a
method for detecting the presence of spatial uncertainty in the text and
dealing with spatial ambiguity using named entity extraction techniques coupled
with self learning fuzzy logic techniques.
|
1303.0446 | Statistical sentiment analysis performance in Opinum | cs.CL | The classification of opinion texts in positive and negative is becoming a
subject of great interest in sentiment analysis. The existence of many labeled
opinions motivates the use of statistical and machine-learning methods.
First-order statistics have proven to be very limited in this field. The Opinum
approach is based on the order of the words without using any syntactic and
semantic information. It consists of building one probabilistic model for the
positive and another one for the negative opinions. Then the test opinions are
compared to both models and a decision and confidence measure are calculated.
In order to reduce the complexity of the training corpus we first lemmatize the
texts and we replace most named-entities with wildcards. Opinum presents an
accuracy above 81% for Spanish opinions in the financial products domain. In
this work we discuss the most important factors that affect the classification
performance.
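The Opinum idea of one probabilistic model per polarity with a log-likelihood-ratio decision can be sketched as follows. The class name, add-one smoothing, and word-bigram choice are assumptions for illustration, not the authors' exact model.

```python
from collections import Counter
import math

class BigramModel:
    """Word-bigram language model with add-one smoothing."""
    def __init__(self, texts):
        self.bi, self.uni = Counter(), Counter()
        for t in texts:
            words = ["<s>"] + t.split()
            self.uni.update(words)
            self.bi.update(zip(words, words[1:]))
        self.V = len(self.uni) + 1          # smoothing denominator size

    def logprob(self, text):
        words = ["<s>"] + text.split()
        return sum(math.log((self.bi[(a, b)] + 1) / (self.uni[a] + self.V))
                   for a, b in zip(words, words[1:]))

def classify(pos, neg, text):
    score = pos.logprob(text) - neg.logprob(text)   # |score| acts as confidence
    return "positive" if score >= 0 else "negative"
```

The test opinion is scored against both models; the sign of the log-likelihood ratio gives the decision and its magnitude a confidence measure, mirroring the approach in the abstract.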
|
1303.0447 | A Study on Application of Spatial Data Mining Techniques for Rural
Progress | cs.DB cs.CY | This paper focuses on the application of Spatial Data mining Techniques to
efficiently manage the challenges faced by peripheral rural areas in analyzing
and predicting market scenario and better manage their economy. Spatial data
mining is the task of unfolding the implicit knowledge hidden in the spatial
databases. The spatial Databases contain both spatial and non-spatial
attributes of the areas under study. Finding implicit regularities, rules or
patterns hidden in spatial databases is an important task, e.g. for
geo-marketing, traffic control or environmental studies. In this paper the
focus is on the effective use of Spatial Data Mining Techniques in the field of
Economic Geography, constrained to rural areas.
|
1303.0448 | Learning Stable Multilevel Dictionaries for Sparse Representations | cs.CV stat.ML | Sparse representations using learned dictionaries are being increasingly used
with success in several data processing and machine learning applications. The
availability of abundant training data necessitates the development of
efficient, robust and provably good dictionary learning algorithms. Algorithmic
stability and generalization are desirable characteristics for dictionary
learning algorithms that aim to build global dictionaries which can efficiently
model any test data similar to the training samples. In this paper, we propose
an algorithm to learn dictionaries for sparse representations from large scale
data, and prove that the proposed learning algorithm is stable and
generalizable asymptotically. The algorithm employs a 1-D subspace clustering
procedure, the K-hyperline clustering, in order to learn a hierarchical
dictionary with multiple levels. We also propose an information-theoretic
scheme to estimate the number of atoms needed in each level of learning and
develop an ensemble approach to learn robust dictionaries. Using the proposed
dictionaries, the sparse code for novel test data can be computed using a
low-complexity pursuit procedure. We demonstrate the stability and
generalization characteristics of the proposed algorithm using simulations. We
also evaluate the utility of the multilevel dictionaries in compressed recovery
and subspace learning applications.
|
1303.0460 | Genetic Programming for Document Segmentation and Region Classification
Using Discipulus | cs.CV cs.NE | Document segmentation is a method of dividing a document into distinct
regions. A document is an assortment of information and a standard means of
conveying information to others. Extracting data from documents involves a
great deal of human effort and time, which may severely limit the use of data
systems, so automatic information extraction from documents has become an
important problem. It has been shown that document segmentation helps to
overcome such problems. This paper proposes a new approach to segment and
classify document regions as text, image, drawing or table. The document image
is divided into blocks using the run-length smearing rule, and features are
extracted from every block. The Discipulus tool has been used to construct the
genetic-programming-based classifier model, which achieved 97.5% classification
accuracy.
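The run-length smearing step mentioned above can be sketched along one image row as follows. The function name and threshold are illustrative; a full implementation applies the rule horizontally and vertically and combines the results.

```python
import numpy as np

def rlsa_1d(row, threshold):
    """Run-length smearing along one row of a binary image: background (0)
    runs shorter than `threshold` between foreground pixels (1) are filled,
    merging nearby characters into solid blocks."""
    out = row.copy()
    prev = None                      # index of the previous foreground pixel
    for i, v in enumerate(row):
        if v == 1:
            if prev is not None and i - prev - 1 < threshold:
                out[prev + 1:i] = 1  # smear the short background run
            prev = i
    return out

row = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1])
print(rlsa_1d(row, threshold=3))  # → [1 1 1 1 0 0 0 0 1]
```

Features for the genetic-programming classifier are then extracted from the connected blocks that result.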
|
1303.0462 | Distributed Evolutionary Computation: A New Technique for Solving Large
Number of Equations | cs.NE | Evolutionary computation techniques have been used successfully to solve
various optimization and learning problems. Evolutionary algorithms are more
effective than traditional methods at finding optimal solutions to complex
problems. In the case of problems with a large set of parameters, an
evolutionary computation technique incurs a huge computational burden for a
single processing unit. Taking this limitation into account, this paper
presents a new distributed evolutionary computation technique, which decomposes
decision vectors into smaller components and achieves optimal solution in a
short time. In this technique, a Jacobi-based Time Variant Adaptive (JBTVA)
Hybrid Evolutionary Algorithm is distributed incorporating cluster computation.
Moreover, two new selection methods named Best All Selection (BAS) and Twin
Selection (TS) are introduced for selecting the best-fit solution vector.
Experimental results show that optimal solution is achieved for different kinds
of problems having huge parameters and a considerable speedup is obtained in
proposed distributed system.
|
1303.0463 | Mobile Jammers for Secrecy Rate Maximization in Cooperative Networks | cs.IT math.IT | We consider a source (Alice) trying to communicate with a destination (Bob),
in a way that an unauthorized node (Eve) cannot infer, based on her
observations, the information that is being transmitted. The communication is
assisted by multiple multi-antenna cooperating nodes (helpers) who have the
ability to move. While Alice transmits, the helpers transmit noise that is
designed to affect the entire space except Bob. We consider the problem of
selecting the helper weights and positions that maximize the system secrecy
rate. It turns out that this optimization problem can be efficiently solved,
leading to a novel decentralized helper motion control scheme. Simulations
indicate that introducing helper mobility leads to considerable savings in
terms of helper transmit power, as well as total number of helpers required for
secrecy communications.
|
1303.0479 | Scale Selection of Adaptive Kernel Regression by Joint Saliency Map for
Nonrigid Image Registration | cs.CV | Joint saliency map (JSM) [1] was developed to assign high joint saliency
values to the corresponding saliency structures (called Joint Saliency
Structures, JSSs) but zero or low joint saliency values to the outliers (or
mismatches) that are introduced by missing correspondence or local large
deformations between the reference and moving images to be registered. JSM
guides the local structure matching in nonrigid registration by emphasizing
these JSSs' sparse deformation vectors in adaptive kernel regression of
hierarchical sparse deformation vectors for iterative dense deformation
reconstruction. By designing an effective superpixel-based local structure
scale estimator to compute the reference structure's structure scale, we
further propose to determine the scale (the width) of kernels in the adaptive
kernel regression by combining the structure scales with the JSM-based scales of
mismatch between the local saliency structures. Therefore, we can adaptively
select the sample size of sparse deformation vectors to reconstruct the dense
deformation vectors for accurately matching every local structure in the
two images. The experimental results demonstrate better accuracy of our method
in aligning two images with missing correspondence and local large deformation
than the state-of-the-art methods.
|
1303.0481 | Situation-Aware Approach to Improve Context-based Recommender System | cs.IR | In this paper, we introduce a novel situation aware approach to improve a
context based recommender system. To build situation aware user profiles, we
rely on evidence issued from retrieval situations. A retrieval situation refers
to the social spatio temporal context of the user when he interacts with the
recommender system. A situation is represented as a combination of social
spatio temporal concepts inferred from ontological knowledge given social
group, location and time information. The user's interests are inferred from
the user's past interactions with the recommender system in the identified
situations. They are represented using concepts from a domain ontology. We
also propose a method to dynamically adapt the system to the evolution of the
user's interests.
|
1303.0484 | Onomastics 2.0 - The Power of Social Co-Occurrences | cs.IR cs.SI physics.soc-ph | Onomastics is "the science or study of the origin and forms of proper names
of persons or places." ["Onomastics". Merriam-Webster.com, 2013.
http://www.merriam-webster.com (11 February 2013)]. Especially personal names
play an important role in daily life, as all over the world future parents are
facing the task of finding a suitable given name for their child. This choice
is influenced by different factors, such as the social context, language,
cultural background and, in particular, personal taste.
With the rise of the Social Web and its applications, users more and more
interact digitally and participate in the creation of heterogeneous,
distributed, collaborative data collections. These sources of data also reflect
current and new naming trends as well as new emerging interrelations among
names.
The present work shows, how basic approaches from the field of social network
analysis and information retrieval can be applied for discovering relations
among names, thus extending Onomastics by data mining techniques. The
considered approach starts with building co-occurrence graphs relative to data
from the Social Web, respectively for given names and city names. As a main
result, correlations between semantically grounded similarities among names
(e.g., geographical distance for city names) and structural graph based
similarities are observed.
The discovered relations among given names are the foundation of "nameling"
[http://nameling.net], a search engine and academic research platform for given
names which attracted more than 30,000 users within four months,
underpinning the relevance of the proposed methodology.
|
1303.0485 | Optimizing an Utility Function for Exploration / Exploitation Trade-off
in Context-Aware Recommender System | cs.IR | In this paper, we develop a dynamic exploration/ exploitation (exr/exp)
strategy for contextual recommender systems (CRS). Specifically, our methods
can adaptively balance the two aspects of exr/exp by automatically learning the
optimal tradeoff. This consists of optimizing a utility function represented by
a linearized form of the probability distributions of the rewards of the
clicked and the non-clicked documents already recommended. Within an offline
simulation framework we apply our algorithms to a CRS and conduct an evaluation
with real event log data. The experimental results and detailed analysis
demonstrate that our algorithms outperform existing algorithms in terms of
click-through-rate (CTR).
|
1303.0489 | A Semantic approach for effective document clustering using WordNet | cs.CL cs.IR | Nowadays, the number of text documents is growing rapidly over the internet,
in e-mail and on web pages, and they are stored in electronic database formats.
It becomes difficult to organize and browse these documents. To overcome this
problem, document preprocessing, term selection, attribute reduction and the
maintenance of relationships between important terms using background
knowledge, WordNet, become important parameters in data mining. In this paper
different stages are formed. First, document preprocessing is done by removing
stop words and stemming with the Porter stemmer algorithm; the WordNet
thesaurus is applied to maintain relationships between important terms, and
global unique words and frequent word sets are generated. Second, a data
matrix is formed. Third, terms are extracted from the documents using the term
selection approaches tf-idf, tf-df and tf^2, based on a minimum threshold
value. Further, every document's terms are preprocessed, and the frequency of
each term within the document is counted for the representation. The purpose
of this approach is to reduce the attributes and find an effective term
selection method using WordNet for better clustering accuracy. Experiments are
evaluated on Reuters transcription subsets (wheat, trade, money grain and
ship), Reuters 21578, Classic 30, 20 Newsgroups (atheism), 20 Newsgroups
(hardware), 20 Newsgroups (computer graphics), etc.
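A minimal sketch of the tf-idf-based term selection with a minimum threshold follows. The function name and threshold value are illustrative; the tf-df and tf^2 schemes follow the same pattern with different scoring formulas.

```python
import math
from collections import Counter

def tfidf_select(docs, threshold=0.1):
    """Score each term by tf-idf within each document and keep terms whose
    score reaches the minimum threshold anywhere in the corpus.
    `docs` is a list of token lists."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    selected = set()
    for d in docs:
        tf = Counter(d)
        for t, f in tf.items():
            score = (f / len(d)) * math.log(n / df[t])
            if score >= threshold:
                selected.add(t)
    return selected
```

Terms appearing in every document score zero (log of 1) and are dropped, which is the attribute-reduction effect the abstract aims for.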
|
1303.0503 | The Weight Distributions of a Class of Cyclic Codes with Three Nonzeros
over F3 | cs.IT math.IT | Cyclic codes have efficient encoding and decoding algorithms. The decoding
error probability and the undetected error probability are usually bounded by
or derived from the weight distributions of the codes. Most research concerns
the determination of the weight distributions of cyclic codes with few
nonzeros, using quadratic forms and exponential sums, but is limited to low
moments. In this paper, we focus on the application of higher moments of the
exponential sum to determine the weight distributions of a class of ternary
cyclic codes with three nonzeros, combining with not only quadratic form but
also MacWilliams' identities. Another contribution of this paper is to
demonstrate the use of the computer algebra system Magma for the investigation
of the higher moments.
In the end, the result is verified by one example using Matlab.
|
1303.0529 | Average Rate of Downlink Heterogeneous Cellular Networks over
Generalized Fading Channels - A Stochastic Geometry Approach | cs.IT math.IT | In this paper, we introduce an analytical framework to compute the average
rate of downlink heterogeneous cellular networks. The framework leverages
recent application of stochastic geometry to other-cell interference modeling
and analysis. The heterogeneous cellular network is modeled as the
superposition of many tiers of Base Stations (BSs) having different transmit
power, density, path-loss exponent, fading parameters and distribution, and
unequal biasing for flexible tier association. A long-term averaged maximum
biased-received-power tier association is considered. The positions of the BSs
in each tier are modeled as points of an independent Poisson Point Process
(PPP). Under these assumptions, we introduce a new analytical methodology to
evaluate the average rate, which avoids the computation of the Coverage
Probability (Pcov) and needs only the Moment Generating Function (MGF) of the
aggregate interference at the probe mobile terminal. The distinguishable
characteristic of our analytical methodology consists in providing a tractable
and numerically efficient framework that is applicable to general fading
distributions, including composite fading channels with small- and mid-scale
fluctuations. In addition, our method can efficiently handle correlated
Log-Normal shadowing with little increase of the computational complexity. The
proposed MGF-based approach needs the computation of either a single or a
two-fold numerical integral, thus reducing the complexity of Pcov-based
frameworks, which require, for general fading distributions, the computation of
a four-fold integral.
|
1303.0539 | Novel Method for Mutational Disease Prediction using Bioinformatics
Techniques and Backpropagation Algorithm | cs.CE | Cancer is one of the most feared diseases in the world and its incidence has
increased disturbingly; breast cancer occurs in one out of eight women. The
prediction of malignancies plays an essential role not only in revealing the
human genome, but also in discovering effective prevention and treatment of
cancers. Generally, cancer is driven by somatic mutations in an individual's
DNA sequence, or genome, that accumulate during the person's lifetime. This
paper proposes a novel method that can predict the disease from mutations:
since the presence of a mutation in a gene sequence does not necessarily mean
it is malignant, the patient's protein is compared with the protein of the
disease gene, and if there is a difference between these two proteins we can
say there are malignant mutations. The method uses bioinformatics techniques
such as FASTA and CLUSTALW to show whether mutations are malignant or not,
then trains a backpropagation network using all expected malignant mutations
for certain disease genes (e.g. BRCA1 and BRCA2), and uses it to test whether
the patient carries the disease. Implementing this novel method as a first way
to predict the disease based on mutations in the sequence of the gene that
causes it, two decisions are achieved successfully: the first diagnoses
whether the patient has cancer mutations using bioinformatics techniques; the
second classifies whether these mutations are related to breast cancer (e.g.
BRCA1 and BRCA2) using backpropagation with a mean square error of 0.0000001.
Keywords: gene sequence; protein; deoxyribonucleic acid (DNA); malignant
mutation; bioinformatics; back-propagation algorithm.
|
1303.0540 | The Space of Solutions of Coupled XORSAT Formulae | cond-mat.dis-nn cs.DM cs.IT math.IT | The XOR-satisfiability (XORSAT) problem deals with a system of $n$ Boolean
variables and $m$ clauses. Each clause is a linear Boolean equation (XOR) of a
subset of the variables. A $K$-clause is a clause involving $K$ distinct
variables. In the random $K$-XORSAT problem a formula is created by choosing
$m$ $K$-clauses uniformly at random from the set of all possible clauses on $n$
variables. The set of solutions of a random formula exhibits various
geometrical transitions as the ratio $\frac{m}{n}$ varies.
We consider a {\em coupled} $K$-XORSAT ensemble, consisting of a chain of
random XORSAT models that are spatially coupled across a finite window along
the chain direction. We observe that the threshold saturation phenomenon takes
place for this ensemble and we characterize various properties of the space of
solutions of such coupled formulae.
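Since each XORSAT clause is a linear equation over GF(2), the basic object behind the solution-space geometry is the rank of the system. A sketch (the function name is illustrative) that counts solutions by Gaussian elimination over GF(2) using bit masks:

```python
def xorsat_log2_solutions(n, clauses):
    """Gauss-eliminate the XOR system over GF(2); a satisfiable formula of
    rank r has exactly 2^(n - r) solutions. Each clause is (vars, parity);
    the parity bit is stored at position n of the row mask. Returns the
    log2 of the number of solutions, or None if unsatisfiable."""
    basis = {}                        # pivot bit -> reduced row
    for vars_, parity in clauses:
        row = parity << n
        for v in vars_:
            row ^= 1 << v
        x = row & ((1 << n) - 1)      # variable part of the row
        while x:
            b = x.bit_length() - 1
            if b in basis:
                row ^= basis[b]       # eliminate against the existing pivot
                x = row & ((1 << n) - 1)
            else:
                basis[b] = row        # new pivot: rank grows by one
                break
        else:
            if row >> n:              # row reduced to 0 = 1: inconsistent
                return None
    return n - len(basis)
```

For a random K-XORSAT formula this rank computation is what underlies the transitions as m/n varies: below the satisfiability threshold the count 2^(n - r) is typically positive, while inconsistent rows appear above it.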
|
1303.0542 | A multidimensional tropical optimization problem with nonlinear
objective function and linear constraints | math.OC cs.SY | We examine a multidimensional optimisation problem in the tropical
mathematics setting. The problem involves the minimisation of a nonlinear
function defined on a finite-dimensional semimodule over an idempotent
semifield subject to linear inequality constraints. We start with an overview
of known tropical optimisation problems with linear and nonlinear objective
functions. A short introduction to tropical algebra is provided to offer a
formal framework for solving the problem under study. As a preliminary result,
a solution to a linear inequality with an arbitrary matrix is presented. We
describe an example optimisation problem drawn from project scheduling and then
offer a general representation of the problem. To solve the problem, we
introduce an additional variable and reduce the problem to the solving of a
linear inequality, in which the variable plays the role of a parameter. A
necessary and sufficient condition for the inequality to hold is used to
evaluate the parameter, whereas the solution to the inequality is considered a
solution to the problem. Based on this approach, a complete direct solution in
a compact vector form is derived for the optimisation problem under fairly
general conditions. Numerical and graphical examples for two-dimensional
problems are given to illustrate the obtained results.
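For readers unfamiliar with the idempotent setting, a tiny sketch of the max-plus semifield operation underlying the linear constraints (names are illustrative; the paper works over a general idempotent semifield, of which max-plus is the standard example):

```python
import numpy as np

NEG_INF = -np.inf   # the max-plus zero element; the max-plus one is 0

def maxplus_matvec(A, x):
    """(A (x) x)_i = max_j (A_ij + x_j): matrix-vector product in the
    max-plus semifield, where addition is max and multiplication is +."""
    return np.max(A + x[None, :], axis=1)

A = np.array([[0.0, 1.0], [2.0, NEG_INF]])
x = np.array([0.0, 3.0])
print(maxplus_matvec(A, x))  # → [4. 2.]
```

A linear inequality A (x) x <= d in this algebra is exactly the kind of parameterized constraint that the paper reduces the optimisation problem to.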
|
1303.0551 | Sparse PCA through Low-rank Approximations | stat.ML cs.IT cs.LG math.IT | We introduce a novel algorithm that computes the $k$-sparse principal
component of a positive semidefinite matrix $A$. Our algorithm is combinatorial
and operates by examining a discrete set of special vectors lying in a
low-dimensional eigen-subspace of $A$. We obtain provable approximation
guarantees that depend on the spectral decay profile of the matrix: the faster
the eigenvalue decay, the better the quality of our approximation. For example,
if the eigenvalues of $A$ follow a power-law decay, we obtain a polynomial-time
approximation algorithm for any desired accuracy.
A key algorithmic component of our scheme is a combinatorial feature
elimination step that is provably safe and in practice significantly reduces
the running complexity of our algorithm. We implement our algorithm and test it
on multiple artificial and real data sets. Due to the feature elimination step,
it is possible to perform sparse PCA on data sets consisting of millions of
entries in a few minutes. Our experimental evaluation shows that our scheme is
nearly optimal while finding very sparse vectors. We compare to the prior state
of the art and show that our scheme matches or outperforms previous algorithms
in all tested data sets.
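The simplest member of the candidate set described above corresponds to a rank-1 (d = 1) approximation: truncate the leading eigenvector to its k largest-magnitude entries. A sketch of this special case only (the full algorithm searches a discrete set of vectors in a d-dimensional eigen-subspace and includes the feature-elimination step):

```python
import numpy as np

def sparse_pc_rank1(A, k):
    """k-sparse principal component from the rank-1 approximation of A:
    keep the k largest-magnitude entries of the leading eigenvector
    and renormalize."""
    _, V = np.linalg.eigh(A)               # eigenvalues in ascending order
    v = V[:, -1]                           # leading eigenvector
    support = np.argsort(-np.abs(v))[:k]   # top-k coordinates by magnitude
    x = np.zeros_like(v)
    x[support] = v[support]
    return x / np.linalg.norm(x)
```

The quality of this rank-1 candidate already depends on the eigenvalue decay of A, which is the intuition behind the spectral-decay-dependent guarantees in the abstract.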
|
1303.0556 | A Joint Localization and Clock Bias Estimation Technique Using
Time-of-Arrival at Multiple Antenna Receivers | cs.SY cs.IT math.IT | In this work, a system scheme is proposed for tracking a radio emitting
target moving in two-dimensional space. The localization is based on the use of
biased time-of-arrival (TOA) measurements obtained at two asynchronous
receivers, each equipped with two closely spaced antennas. By exploiting the
multi-antenna configuration and using all the TOA measurements up to current
time step, the relative clock bias at each receiver and the target position are
jointly estimated by solving a nonlinear least-squares (NLS) problem. To this
end, a novel time recursive algorithm is proposed which fully takes advantage
of the problem structure to achieve computational efficiency while using
orthogonal transformations to ensure numerical reliability. Simulations show
that the mean-squared error (MSE) of the proposed method is much smaller than
that of existing methods with the same antenna scheme, and approaches the
Cramer-Rao lower bound (CRLB) closely.
|
1303.0557 | Security Analysis on "An Authentication Code Against Pollution Attacks
in Network Coding" | cs.CR cs.IT math.IT | We analyze the security of the authentication code against pollution attacks
in network coding proposed by Oggier and Fathi, and show how to remove one
very strong condition they required. In fact, we find a way to attack their
authentication scheme. They argued that if some malicious nodes in the network
collude to pollute the network flow or mount substitution attacks against
other nodes, these malicious nodes must solve a system of linear equations to
recover the secret parameters, and on this basis they concluded that their
scheme is unconditionally secure. However, the authentication tag in the
scheme of Oggier and Fathi is nearly linear in the messages, so it is very
easy for any malicious node to mount a pollution attack on the network flow by
replacing the vector of any incoming edge with a linear combination of its
incoming vectors whose coefficients sum to 1. Furthermore, if the coalition of
malicious nodes can decode the network code, it can easily mount a
substitution attack against any other node without knowing any information
about that node's private key. Moreover, even if their scheme worked as
intended, the condition $H\leqslant M$ can be removed, where $H$ is the total
number of incoming edges at the adversaries. Under this condition $H$ may be
large, forcing a large parameter $M$ and thereby greatly increasing the
computational cost; on the other hand, $M$ cannot be made very large, as it
cannot exceed the length of the original messages.
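The linearity attack described above can be illustrated with a toy example. The affine tag below is a stand-in invented for illustration (not Oggier and Fathi's actual construction): any tag of the form $t(m)=\langle a,m\rangle+b$ over a finite field is preserved by linear combinations whose coefficients sum to 1, so a malicious node can forge a valid tag for a polluted vector without knowing the key $(a, b)$.

```python
import random

P = 2_147_483_647  # a prime field size (toy choice)

def tag(msg, key_a, key_b):
    """Illustrative affine tag t(m) = <a, m> + b mod p."""
    return (sum(a * m for a, m in zip(key_a, msg)) + key_b) % P

random.seed(1)
n = 8
key_a = [random.randrange(P) for _ in range(n)]
key_b = random.randrange(P)

# Two legitimately tagged incoming vectors.
m1 = [random.randrange(P) for _ in range(n)]
m2 = [random.randrange(P) for _ in range(n)]
t1, t2 = tag(m1, key_a, key_b), tag(m2, key_a, key_b)

# Attacker picks coefficients with c1 + c2 = 1 (mod p); the polluted vector and
# its forged tag are computed without any knowledge of (key_a, key_b).
c1 = random.randrange(P)
c2 = (1 - c1) % P
polluted = [(c1 * x + c2 * y) % P for x, y in zip(m1, m2)]
forged_tag = (c1 * t1 + c2 * t2) % P
```

The forgery verifies because the key offset $b$ survives the combination: $c_1 t(m_1)+c_2 t(m_2)=\langle a, c_1 m_1+c_2 m_2\rangle+(c_1+c_2)b$, and $c_1+c_2\equiv 1$.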
|
1303.0561 | Top-down particle filtering for Bayesian decision trees | stat.ML cs.LG | Decision tree learning is a popular approach for classification and
regression in machine learning and statistics, and Bayesian
formulations---which introduce a prior distribution over decision trees, and
formulate learning as posterior inference given data---have been shown to
produce competitive performance. Unlike classic decision tree learning
algorithms like ID3, C4.5 and CART, which work in a top-down manner, existing
Bayesian algorithms produce an approximation to the posterior distribution by
evolving a complete tree (or collection thereof) iteratively via local Monte
Carlo modifications to the structure of the tree, e.g., using Markov chain
Monte Carlo (MCMC). We present a sequential Monte Carlo (SMC) algorithm that
instead works in a top-down manner, mimicking the behavior and speed of classic
algorithms. We demonstrate empirically that our approach delivers accuracy
comparable to the most popular MCMC method, but operates more than an order of
magnitude faster, and thus represents a better computation-accuracy tradeoff.
|
1303.0566 | Arabic documents classification using fuzzy R.B.F. classifier with
sliding window | cs.IR | In this paper, we propose a system for contextual and semantic
classification of Arabic documents that improves on the standard fuzzy model.
Specifically, it promotes neighboring semantic terms, which are absent from
that model, by means of radial basis function modeling, in order to identify
the documents relevant to a query. The approach computes the similarity
between related terms by determining the relevance of each term to the
documents (the NEAR operator), based on a kernel function. The use of a
sliding window further improves the classification process. The results
obtained on an Arabic press dataset show very good performance compared with
the literature.
|
1303.0567 | Adjacent-Channel Interference in Frequency-Hopping Ad Hoc Networks | cs.IT math.IT | This paper considers ad hoc networks that use the combination of coded
continuous-phase frequency-shift keying (CPFSK) and frequency-hopping multiple
access. Although CPFSK has a compact spectrum, some of the signal power
inevitably splatters into adjacent frequency channels, thereby causing
adjacent-channel interference (ACI). The amount of ACI is controlled by setting
the fractional in-band power; i.e., the fraction of the signal power that lies
within the band of each frequency channel. While this quantity is often
selected arbitrarily, a tradeoff is involved in the choice. This paper presents
a new analysis of frequency-hopping ad hoc networks that carefully incorporates
the effect of ACI. The analysis accounts for the shadowing, Nakagami fading,
CPFSK modulation index, code rate, number of frequency channels, fractional
in-band power, and spatial distribution of the interfering mobiles. Expressions
are presented for both outage probability and transmission capacity. With the
objective of maximizing the transmission capacity, the optimal fractional
in-band power that should be contained in each frequency channel is identified.
|
1303.0572 | New Non-asymptotic Random Channel Coding Theorems | cs.IT math.IT | New non-asymptotic random coding theorems (with error probability $\epsilon$
and finite block length $n$) based on Gallager parity check ensemble and
Shannon random code ensemble with a fixed codeword type are established for
discrete input arbitrary output channels. The resulting non-asymptotic
achievability bounds, when combined with non-asymptotic equipartition
properties developed in the paper, can be easily computed. Analytically, these
non-asymptotic achievability bounds are shown to be asymptotically tight up to
the second order of the coding rate as $n$ goes to infinity with either
constant or sub-exponentially decreasing $\epsilon$. Numerically, they also
compare favourably, for finite $n$ and $\epsilon$ of practical interest, with
existing non-asymptotic achievability bounds in the literature.
|
1303.0582 | Multiple Kernel Sparse Representations for Supervised and Unsupervised
Learning | cs.CV | In complex visual recognition tasks it is typical to adopt multiple
descriptors, that describe different aspects of the images, for obtaining an
improved recognition performance. Descriptors that have diverse forms can be
fused into a unified feature space in a principled manner using kernel methods.
Sparse models that generalize well to the test data can be learned in the
unified kernel space, and appropriate constraints can be incorporated for
application in supervised and unsupervised learning. In this paper, we propose
to perform sparse coding and dictionary learning in the multiple kernel space,
where the weights of the ensemble kernel are tuned based on graph-embedding
principles such that class discrimination is maximized. In our proposed
algorithm, dictionaries are inferred using multiple levels of 1-D subspace
clustering in the kernel space, and the sparse codes are obtained using a
simple levelwise pursuit scheme. Empirical results for object recognition and
image clustering show that our algorithm outperforms existing sparse coding
based approaches, and compares favorably to other state-of-the-art methods.
|
1303.0592 | Random Beamforming with Heterogeneous Users and Selective Feedback:
Individual Sum Rate and Individual Scaling Laws | cs.IT math.IT | This paper investigates three open problems in random beamforming based
communication systems: the scheduling policy with heterogeneous users, the
closed form sum rate, and the randomness of multiuser diversity with selective
feedback. By employing the cumulative distribution function based scheduling
policy, we guarantee fairness among users as well as obtain multiuser diversity
gain in the heterogeneous scenario. Under this scheduling framework, the
individual sum rate, namely the average rate for a given user multiplied by the
number of users, is of interest and analyzed under different feedback schemes.
Firstly, under the full feedback scheme, we derive the closed form individual
sum rate by employing a decomposition of the probability density function of
the selected user's signal-to-interference-plus-noise ratio. This technique is
employed to further obtain a closed form rate approximation with selective
feedback in the spatial dimension. The analysis is also extended to random
beamforming in a wideband OFDMA system with additional selective feedback in
the spectral dimension wherein only the best beams for the best-L resource
blocks are fed back. We utilize extreme value theory to examine the randomness
of multiuser diversity incurred by selective feedback. Finally, leveraging the
tail equivalence method, we establish the individual rate scaling by
characterizing the multiplicative effect of selective feedback and random
observations.
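The fairness property of CDF-based scheduling can be checked with a quick simulation. In the sketch below (illustrative, not the paper's system model), three users have exponentially distributed rates with very different means; the scheduler transforms each user's realized rate through that user's own CDF and picks the largest transformed value. Since each transformed rate is Uniform(0,1), every user is selected with probability 1/3 despite the heterogeneity.

```python
import math
import random
from collections import Counter

random.seed(0)

# Hypothetical heterogeneous users: exponential rates with different means.
means = [0.5, 2.0, 8.0]

def cdf(r, mu):
    """Exponential CDF F_i(r) for a user with mean rate mu."""
    return 1.0 - math.exp(-r / mu)

wins = Counter()
trials = 20000
for _ in range(trials):
    rates = [random.expovariate(1.0 / mu) for mu in means]
    # CDF-based scheduling: compare F_i(r_i), not the raw rates r_i.
    chosen = max(range(len(means)), key=lambda i: cdf(rates[i], means[i]))
    wins[chosen] += 1

shares = [wins[i] / trials for i in range(len(means))]
```

Under a greedy max-rate rule the user with mean 8.0 would win almost every slot; the CDF transform restores fairness while still scheduling each user in its own relatively favorable fading states.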
|
1303.0594 | On the Coherence Properties of Random Euclidean Distance Matrices | cs.IT math.IT | In the present paper we focus on the coherence properties of general random
Euclidean distance matrices, which are very closely related to the respective
matrix completion problem. This problem is of great interest in several
applications such as node localization in sensor networks with limited
connectivity. Our results directly provide sufficient conditions under which
an EDM can be successfully recovered with high probability from a limited
number of measurements.
|
1303.0597 | The Velocity of Censorship: High-Fidelity Detection of Microblog Post
Deletions | cs.CY cs.IR cs.SI | Weibo and other popular Chinese microblogging sites are well known for
exercising internal censorship, to comply with Chinese government requirements.
This research seeks to quantify the mechanisms of this censorship: how fast and
how comprehensively posts are deleted. Our analysis considered 2.38 million
posts gathered over roughly two months in 2012, with our attention focused on
repeatedly visiting "sensitive" users. This gives us a view of censorship
events within minutes of their occurrence, albeit at a cost of our data no
longer representing a random sample of the general Weibo population. We also
have a larger 470 million post sampling from Weibo's public timeline, taken
over a longer time period, that is more representative of a random sample.
We found that deletions happen most heavily in the first hour after a post
has been submitted. Focusing on original posts, not reposts/retweets, we
observed that nearly 30% of the total deletion events occur within 5-30
minutes. Nearly 90% of the deletions happen within the first 24 hours.
Leveraging our data, we also considered a variety of hypotheses about the
mechanisms used by Weibo for censorship, such as the extent to which Weibo's
censors use retrospective keyword-based censorship, and how repost/retweet
popularity interacts with censorship. We also used natural language processing
techniques to analyze which topics were more likely to be censored.
|
1303.0606 | Quantum Information Transmission over a Partially Degradable Channel | quant-ph cs.IT math.IT | We investigate quantum coding for quantum communication over a partially
degradable (PD) quantum channel. For a PD channel, the degraded environment
state can be expressed from the channel output state up to a degrading map. PD
channels can be restricted to the set of optical channels, which allows the
parties to exploit their benefits in experimental quantum communication. We
show that for a PD channel, the partial degradability property leads to higher
quantum data rates than those of a degradable channel. The PD property is
particularly convenient for quantum communication and allows one to implement
experimental quantum protocols with higher performance. We define a coding
scheme for PD channels and give the achievable rates of quantum communication.
|
1303.0618 | Convergence of The Relative Value Iteration for the Ergodic Control
Problem of Nondegenerate Diffusions under Near-Monotone Costs | math.OC cs.SY math.AP | We study the relative value iteration for the ergodic control problem under a
near-monotone running cost structure for a nondegenerate diffusion controlled
through its drift. This algorithm takes the form of a quasilinear parabolic
Cauchy initial value problem in $\RR^{d}$. We show that this Cauchy problem
stabilizes, or in other words, that the solution of the quasilinear parabolic
equation converges for every bounded initial condition in $\Cc^{2}(\RR^{d})$ to
the solution of the Hamilton--Jacobi--Bellman (HJB) equation associated with
the ergodic control problem.
|
1303.0631 | Modeling for the Dynamics of Human Innovative Behaviors | physics.soc-ph cond-mat.stat-mech cs.SI | How to promote innovative activities is an important problem for modern
society. In this paper, combining evolutionary games with information
spreading, we propose a lattice model to investigate the dynamics of human
innovative behaviors under a benefit-driven assumption. Simulations show
several properties in agreement with people's everyday understanding of
innovative behaviors, such as the slow diffusion of innovative behaviors, the
gathering of the innovative strategy at "innovation centers", and
quasi-localized dynamics. Furthermore, our model also exhibits rich
non-Poisson properties in the temporal-spatial patterns of the innovation
status, including a scaling law in the interval times between innovation
releases and bimodal distributions of the spreading range of innovations,
which may be universal in human innovative behaviors. Our model provides a
basic framework for studying the evolution of human innovative behaviors and
the promotion of innovative activities.
|
1303.0633 | Omega Model for Human Detection and Counting for application in Smart
Surveillance System | cs.CV | Driven by the significant advancements in technology and social issues such
as security management, there is a strong need for Smart Surveillance System in
our society today. One of the key features of a Smart Surveillance System is
efficient human detection and counting such that the system can decide and
label events on its own. In this paper we propose a novel and robust model,
the Omega Model, for detecting and counting the human beings present in a
scene. The proposed model employs a set of four distinct descriptors to
identify the unique features of the head, neck and shoulder regions of a
person. This unique head-neck-shoulder signature given by the Omega Model
handles challenges such as inter-person variations in the size and shape of
people's head, neck and shoulder regions, achieving robust detection of human
beings even under partial occlusion, dynamically changing backgrounds and
varying illumination conditions. From our experiments we observe and analyze
the influence of each of the four descriptors on system performance and
computation speed, and conclude that a weight-based decision making system
produces the best results. Evaluation results on a number of images validate
our method in real situations.
|
1303.0634 | Indian Sign Language Recognition Using Eigen Value Weighted Euclidean
Distance Based Classification Technique | cs.CV | Sign language recognition is one of the fastest growing fields of research
today, and many new techniques have been developed in it recently. In this
paper, we propose a system that uses eigenvalue-weighted Euclidean distance as
a classification technique for recognizing various sign languages of India.
The system comprises four parts: skin filtering, hand cropping, feature
extraction and classification. Twenty-four signs were considered, each with
ten samples, for a total of two hundred forty images, on which a recognition
rate of 97 percent was obtained.
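The classification step, minimum-(weighted-)Euclidean-distance matching against stored class templates, can be sketched as below. The feature vectors, weights and sign labels are invented for illustration; the paper's actual features come from its skin-filtering, hand-cropping and eigenvalue-based extraction stages, with eigenvalues supplying the weights.

```python
import math

def classify(features, templates, weights=None):
    """Return the label whose template has minimum (weighted) Euclidean distance."""
    w = weights or [1.0] * len(features)
    def dist(t):
        return math.sqrt(sum(wi * (f - ti) ** 2
                             for wi, f, ti in zip(w, features, t)))
    return min(templates, key=lambda label: dist(templates[label]))

# Toy templates: one stored feature vector per sign (hypothetical values).
templates = {"A": [0.9, 0.1, 0.3],
             "B": [0.2, 0.8, 0.5],
             "C": [0.4, 0.4, 0.9]}
label = classify([0.25, 0.75, 0.55], templates)
```

Passing the per-dimension eigenvalues as `weights` turns the plain nearest-template rule into the eigenvalue-weighted variant: dimensions carrying more variance contribute more to the distance.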
|
1303.0635 | Recognition of Facial Expression Using Eigenvector Based Distributed
Features and Euclidean Distance Based Decision Making Technique | cs.CV | In this paper, an eigenvector-based system is presented for recognizing
facial expressions from digital facial images. In this approach, the images
are first acquired, and five significant portions are cropped from each image
to extract and store the eigenvectors specific to the expressions. The
eigenvectors for the test images are also computed, and the input facial image
is finally recognized by finding the expression with the minimum Euclidean
distance to the test image.
|