| id | title | categories | abstract |
|---|---|---|---|
1204.1240
|
Optimal Save-Then-Transmit Protocol for Energy Harvesting Wireless
Transmitters
|
cs.IT math.IT
|
In this paper, the design of a wireless communication device relying
exclusively on energy harvesting is considered. Due to the inability of
rechargeable energy sources to charge and discharge at the same time, a
constraint we term the energy half-duplex constraint, two rechargeable energy
storage devices (ESDs) are assumed so that at any given time, there is always
one ESD being recharged. The energy harvesting rate is assumed to be a random
variable that is constant over the time interval of interest. A
save-then-transmit (ST) protocol is introduced, in which a fraction of time
{\rho} (dubbed the save-ratio) is devoted exclusively to energy harvesting,
with the remaining fraction 1 - {\rho} used for data transmission. The ratio of
the energy obtainable from an ESD to the energy harvested is termed the energy
storage efficiency, {\eta}. We address the practical case of the secondary ESD
being a battery with {\eta} < 1, and the main ESD being a super-capacitor with
{\eta} = 1. The optimal save-ratio that minimizes outage probability is
derived, from which some useful design guidelines are drawn. In addition, we
compare the outage performance of random power supply to that of constant power
supply over the Rayleigh fading channel. The diversity order with random power
is shown to be the same as that of constant power, but the performance gap can
be large. Furthermore, we extend the proposed ST protocol to wireless networks
with multiple transmitters. It is shown that the system-level outage
performance is critically dependent on the relationship between the number of
transmitters and the optimal save-ratio for single-channel outage minimization.
Numerical results are provided to validate our analysis.
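
As a rough illustration of the trade-off the optimal save-ratio resolves, the following Monte Carlo sketch sweeps {\rho} under simplifying assumptions: a unit-length slot, a fixed harvesting rate X, a super-capacitor ESD with {\eta} = 1, and illustrative rate target R and noise power; none of these values come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(rho, X=1.0, R=1.0, N0=1.0, eta=1.0, trials=100_000):
    """Outage probability of save-then-transmit for save-ratio rho.
    Energy eta * X * rho harvested while saving is spent uniformly over
    the remaining 1 - rho of the slot; outage occurs when the Rayleigh
    channel cannot support rate R in that fraction of the slot."""
    power = eta * X * rho / (1.0 - rho)
    h = rng.exponential(scale=1.0, size=trials)   # |h|^2 under Rayleigh fading
    achievable = (1.0 - rho) * np.log2(1.0 + h * power / N0)
    return float(np.mean(achievable < R))

rhos = np.linspace(0.05, 0.95, 19)
best = min(rhos, key=outage_prob)
print(f"empirically best save-ratio ~ {best:.2f}")
```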
|
1204.1241
|
Reducing Total Power Consumption Method in Cloud Computing Environments
|
cs.NI cs.SY
|
The widespread use of cloud computing services is expected to rapidly increase the power consumed by ICT equipment in cloud computing environments. This paper first identifies the need for collaboration among servers, the communication network and the power network in order to reduce the total power consumption of the ICT equipment in cloud computing environments. Five fundamental policies for this collaboration are proposed, and the algorithm realizing each policy is outlined. Next, this paper proposes possible signaling sequences for exchanging information on power consumption between the network and servers, in order to realize the proposed collaboration policies. Then, in order to reduce the power consumed by the network, this paper proposes a simple method of estimating the power consumption of all network devices and assigning it to individual users.
|
1204.1243
|
Proposed congestion control method for cloud computing environments
|
cs.NI cs.SY
|
As cloud computing services rapidly expand their customer base, it has become important to share cloud resources so as to provide them economically. In cloud computing services, multiple types of resources, such as processing ability, bandwidth and storage, need to be allocated simultaneously. If there is a surge of requests, competition arises among these requests for the use of cloud resources, disrupting the service; it is therefore necessary to consider measures to avoid or relieve congestion in cloud computing environments.
This paper proposes a new congestion control method for cloud computing environments which reduces the size of the resource required for the congested resource type, instead of restricting all service requests as in existing networks. Next, this paper proposes user service specifications for the proposed congestion control method, and clarifies the algorithm that decides the optimal size of the resource reduction based on the load offered to the system. Simulation evaluations demonstrate that the proposed method can handle more requests than conventional methods and relieve congestion. Finally, this paper proposes an enhancement of the method that enables fair resource allocation among users in congested situations.
|
1204.1245
|
Proposed optimal LSP selection method in MPLS networks
|
cs.NI cs.SY
|
Multi-Protocol Label Switching (MPLS) has been deployed by many data networking service providers, including next-generation mobile backhaul networks, because of its undeniable potential in terms of virtual private network (VPN) management, traffic engineering, etc. In MPLS networks, IP packets are transmitted along a Label Switched Path (LSP) established between edge nodes. To improve the efficiency of resource use in MPLS networks, it is essential to utilize the LSPs efficiently.
This paper proposes a method of selecting the optimal LSP pair from among multiple LSP pairs established between the same pair of edge nodes, on the assumption that the upward and downward LSPs are established as a pair (both-way operation). Both upward and downward bandwidths are assumed to be allocated simultaneously in the selected LSP pair for each service request. Simulation evaluations demonstrate that the proposed method can reduce the total amount of bandwidth required by up to 15% compared with the conventional selection method. The proposed method can also reuse the know-how and management tools of the many existing networks based on both-way operation.
|
1204.1259
|
Fast ALS-based tensor factorization for context-aware recommendation
from implicit feedback
|
cs.LG cs.IR cs.NA
|
Although the implicit feedback based recommendation problem - when only the user history is available but there are no ratings - is the most typical setting in real-world applications, it is much less researched than the explicit feedback case. State-of-the-art algorithms that are efficient in the explicit case cannot be straightforwardly transferred to the implicit case if scalability is to be maintained. There are few, if any, implicit feedback benchmark datasets; therefore, new ideas are usually evaluated on explicit benchmarks. In this paper, we propose a generic context-aware implicit feedback recommender algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization learning method that scales linearly with the number of non-zero elements in the tensor. The method also allows us to incorporate diverse context information into the model while maintaining its computational efficiency. In particular, we present two context-aware implementation variants of iTALS. The first incorporates seasonality and distinguishes user behavior in different time intervals. The second views the user history as sequential information and can recognize usage patterns typical of certain groups of items, e.g., automatically telling apart product types or categories that are typically purchased repetitively (collectibles, grocery goods) or once (household appliances). Experiments performed on three implicit datasets (two proprietary ones and an implicit variant of the Netflix dataset) show that integrating context-aware information into the factorization framework significantly improves recommendation quality over the state-of-the-art implicit recommender algorithm.
|
1204.1276
|
Distribution-Dependent Sample Complexity of Large Margin Learning
|
stat.ML cs.LG
|
We obtain a tight distribution-specific characterization of the sample
complexity of large-margin classification with L2 regularization: We introduce
the margin-adapted dimension, which is a simple function of the second order
statistics of the data distribution, and show distribution-specific upper and
lower bounds on the sample complexity, both governed by the margin-adapted
dimension of the data distribution. The upper bounds are universal, and the
lower bounds hold for the rich family of sub-Gaussian distributions with
independent features. We conclude that this new quantity tightly characterizes
the true sample complexity of large-margin classification. To prove the lower
bound, we develop several new tools of independent interest. These include new
connections between shattering and hardness of learning, new properties of
shattering with linear classifiers, and a new lower bound on the smallest
eigenvalue of a random Gram matrix generated by sub-Gaussian variables. Our
results can be used to quantitatively compare large margin learning to other
learning rules, and to improve the effectiveness of methods that use sample
complexity bounds, such as active learning.
|
1204.1277
|
Mouse Simulation Using Two Coloured Tapes
|
cs.AI cs.CV
|
In this paper, we present a novel approach to Human Computer Interaction (HCI) in which cursor movement is controlled using a real-time camera. Current methods involve changing mouse parts, such as adding more buttons or changing the position of the tracking ball. Instead, our method uses a camera and computer vision techniques, such as image segmentation and gesture recognition, to control mouse tasks (left and right clicking, double-clicking, and scrolling), and we show that it can perform everything current mouse devices can. The software is developed in the Java language. Recognition and pose estimation in this system are user independent and robust, as coloured tapes on the fingers are used to perform actions. The software can be used as an intuitive input interface for applications that require multi-dimensional control, e.g., computer games.
|
1204.1290
|
A Sliding Mode Control for a Sensorless Tracker: Application on a
Photovoltaic System
|
cs.SY
|
A photovoltaic sun tracker allows us to increase energy production. The sun tracker considered in this study has two degrees of freedom (2-DOF) and is notably characterized by the absence of sensors. The tracker takes as its set point the sun position at every second during the day, for a period of five years. After sunset, the tracker returns to its initial position (that of sunrise). Sliding mode control (SMC) is applied to ensure the best possible tracking, while a sliding mode observer replaces the velocity sensor, which suffers from considerable measurement disturbance. Experimental measurements show that this autonomous dual-axis sun tracker increases power production by over 40%.
|
1204.1336
|
An Implementation of Intrusion Detection System Using Genetic Algorithm
|
cs.CR cs.NE cs.NI
|
Nowadays it is very important to maintain a high level of security to ensure safe and trusted communication of information between organizations. Secured data communication over the internet and any other network is always under threat of intrusion and misuse, so Intrusion Detection Systems have become an essential component of computer and network security. Various approaches are being utilized in intrusion detection, but unfortunately none of the systems so far is completely flawless, and the quest for improvement continues. In this progression, we present an Intrusion Detection System (IDS) that applies a genetic algorithm (GA) to efficiently detect various types of network intrusions. The parameters and evolution processes of the GA are discussed in detail and implemented. The approach applies evolutionary theory to information evolution in order to filter the traffic data and thus reduce the complexity. To implement and measure the performance of our system, we used the KDD99 benchmark dataset and obtained a reasonable detection rate.
|
1204.1369
|
An approximation algorithm for the link building problem
|
cs.DS cs.SI
|
In this work we consider the problem of maximizing the PageRank of a given
target node in a graph by adding $k$ new links. We consider the case that the
new links must point to the given target node (backlinks). Previous work shows
that this problem has no fully polynomial time approximation schemes unless
$P=NP$. We present a polynomial time algorithm yielding a PageRank value within
a constant factor from the optimal. We also consider the naive algorithm where
we choose backlinks from nodes with high PageRank values compared to the
outdegree and show that the naive algorithm performs much worse on certain
graphs compared to the constant factor approximation scheme.
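
A small sketch of the naive baseline described above (not the paper's constant-factor algorithm), using networkx; the graph, target node and k are illustrative.

```python
import networkx as nx

def naive_backlinks(G, target, k):
    """Naive heuristic: add backlinks from nodes whose PageRank is high
    relative to their out-degree."""
    pr = nx.pagerank(G)
    candidates = [v for v in G if v != target and not G.has_edge(v, target)]
    # +1 in the denominator accounts for the new link being added.
    candidates.sort(key=lambda v: pr[v] / (G.out_degree(v) + 1), reverse=True)
    chosen = candidates[:k]
    G.add_edges_from((v, target) for v in chosen)
    return chosen, nx.pagerank(G)[target]

G = nx.gnp_random_graph(200, 0.05, directed=True, seed=1)
sources, new_pr = naive_backlinks(G, target=0, k=5)
print(sources, round(new_pr, 4))
```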
|
1204.1393
|
Continuous Markov Random Fields for Robust Stereo Estimation
|
cs.CV
|
In this paper we present a novel slanted-plane MRF model which reasons jointly about occlusion boundaries as well as depth. We formulate the problem as one of inference in a hybrid MRF composed of both continuous (i.e., slanted 3D planes) and discrete (i.e., occlusion boundaries) random variables. This allows us to define potentials encoding the ownership of the pixels that compose the boundary between segments, as well as potentials encoding which junctions are physically possible. Our approach outperforms the state-of-the-art on Middlebury high-resolution imagery as well as on the more challenging KITTI dataset, while being more efficient than existing slanted-plane MRF-based methods, taking on average two minutes to perform inference on high-resolution imagery.
|
1204.1398
|
Partial LLL Reduction
|
cs.IT math.IT
|
The Lenstra-Lenstra-Lovasz (LLL) reduction has wide applications in digital communications. It can greatly improve the speed of sphere decoding (SD) algorithms for solving an integer least squares (ILS) problem, and the performance of the Babai integer point, a suboptimal solution to the ILS problem. Recently, Ling and Howgrave-Graham proposed the so-called effective LLL (ELLL) reduction, which has lower computational complexity than LLL while having the same effect as LLL on the performance of the Babai integer point. In this paper we propose a partial LLL (PLLL) reduction. PLLL avoids the numerical stability problem of ELLL, which may result in very poor performance of the Babai integer point. Furthermore, numerical simulations indicate that it is faster than ELLL. We also show that, in theory, PLLL and ELLL have the same effect as LLL on the search speed of a typical SD algorithm.
|
1204.1400
|
Connectivity of Large Wireless Networks under A Generic Connection Model
|
cs.NI cs.IT math.IT
|
This paper provides a necessary and sufficient condition for a random network, with nodes distributed on a unit square according to a Poisson process and pairs of nodes directly connected following a generic random connection model, to be asymptotically almost surely connected. The results established in this paper extend recent results on the connectivity of random geometric graphs under the unit disk model, and the fewer results under the log-normal model, to the more generic and more practical random connection model.
|
1204.1406
|
An Effective Information Retrieval for Ambiguous Query
|
cs.IR
|
A search engine returns thousands of web pages for a single user query, most of which are not relevant. In this context, effective information retrieval from the expanding web is a challenging task, in particular if the query is ambiguous. The major question is how to obtain relevant pages for an ambiguous query. We propose an approach to the effective handling of an ambiguous query that forms community vectors based on the association concept of data mining, using the vector space model and the free dictionary. We build clusters by computing the similarity between the community vectors and document vectors formed from the web pages extracted by the search engine. We use the Gensim package to implement the algorithm because of its simplicity and robust nature. Analysis shows that our approach is an effective way to form clusters for an ambiguous query.
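
A minimal sketch of the similarity step using Gensim's vector space tools; the community and page token lists are hypothetical placeholders, and the TF-IDF pipeline is a generic stand-in for the paper's community-vector construction.

```python
from gensim import corpora, models, similarities

# Hypothetical token lists: one "community" per sense of an ambiguous
# query (e.g. "jaguar"), plus snippets of pages returned by the engine.
communities = [["car", "speed", "engine"], ["cat", "wild", "animal"]]
pages = [["fast", "car", "engine", "test"], ["wild", "cat", "habitat"]]

dictionary = corpora.Dictionary(communities + pages)
bow = lambda text: dictionary.doc2bow(text)
tfidf = models.TfidfModel([bow(t) for t in communities + pages])

# Index the community vectors, then assign each page to the closest one.
index = similarities.MatrixSimilarity(tfidf[[bow(c) for c in communities]],
                                      num_features=len(dictionary))
for p, page in enumerate(pages):
    sims = index[tfidf[bow(page)]]
    print(f"page {p} -> community {int(sims.argmax())}")
```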
|
1204.1407
|
Column Reordering for Box-Constrained Integer Least Squares Problems
|
cs.IT math.IT
|
The box-constrained integer least squares problem (BILS) arises in MIMO
wireless communications applications. Typically a sphere decoding algorithm (a
tree search algorithm) is used to solve the problem. In order to make the
search algorithm more efficient, the columns of the channel matrix in the BILS
problem have to be reordered. To our knowledge, there are currently two
algorithms for column reordering that provide the best known results. Both use
all available information, but they were derived respectively from geometric
and algebraic points of view and look different. In this paper we modify one to
make it more computationally efficient and easier to comprehend. Then we prove
the modified one and the other actually give the same column reordering in
theory. Finally we propose a new mathematically equivalent algorithm, which is
more computationally efficient and is still easy to understand.
|
1204.1413
|
An integrated ranking algorithm for efficient information computing in
social networks
|
cs.SI
|
Social networks have widened the gap between the face of the WWW stored traditionally in search engine repositories and the actual, ever-changing face of the Web. The exponential growth of web users, and the ease with which they can upload content, highlight the need for content controls on material published on the web. As the definition of search changes, socially-enhanced interactive search methodologies are the need of the hour. Ranking is pivotal for efficient web search, as search performance depends mainly on the ranking results. In this paper, a new integrated ranking model is proposed, based on the fused rank of a web object derived from the popularity earned over valid interlinks from multiple social forums. The model identifies relationships between web objects in separate social networks based on the object inheritance graph. An experimental study indicates the effectiveness of the proposed fusion-based ranking algorithm in terms of better search results.
|
1204.1414
|
Improved Spatial Modulation for High Spectral Efficiency
|
cs.SY cs.IT math.IT
|
Spatial Modulation (SM) is a technique that can enhance the capacity of MIMO schemes by exploiting the index of the transmit antenna to convey information bits. In this paper, we describe this technique and present a new MIMO transmission scheme that combines SM and spatial multiplexing. In the basic form of SM, only one out of M_T available antennas is selected for transmission in any given symbol interval. We propose to use more than one antenna to transmit several symbols simultaneously, which increases the spectral efficiency. At the receiver, an optimal detector is employed to jointly estimate the transmitted symbols as well as the indices of the active transmit antennas. We evaluate the performance of this scheme in an uncorrelated Rayleigh fading channel. Simulation results show that the proposed scheme outperforms optimal SM and V-BLAST (Vertical Bell Laboratories Layered Space-Time) at high signal-to-noise ratio (SNR). For example, for a spectral efficiency of 8 bits/s/Hz at a bit error rate (BER) of 10^-5, the proposed scheme provides 5 dB and 7 dB improvements over SM and V-BLAST, respectively.
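
A toy mapping for basic SM (single active antenna) that shows how antenna-index bits ride alongside symbol bits; M_T = 4 and QPSK are illustrative choices, and the proposed multi-antenna extension is not reproduced here.

```python
import numpy as np

M_T = 4                                   # antennas -> log2(4) = 2 index bits
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)  # 2 bits

def sm_map(bits):
    """Basic SM: the first two bits select the active antenna, the next
    two select the QPSK symbol; all other antennas stay silent, so four
    bits are carried per channel use."""
    antenna = bits[0] * 2 + bits[1]
    symbol = QPSK[bits[2] * 2 + bits[3]]
    x = np.zeros(M_T, dtype=complex)
    x[antenna] = symbol
    return x

print(sm_map([1, 0, 1, 1]))               # antenna 2 active, symbol QPSK[3]
```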
|
1204.1433
|
Relay selection for multiple access relay channel with decode-forward
and analog network coding
|
cs.IT math.IT
|
This paper presents relay selection for decode-and-forward with network coding (DF-NC) and analog network coding (analog-NC) protocols in a general cellular network setting. In the proposed scheme, the two source nodes simultaneously transmit their own information to all relays as well as to the destination node; then a single best relay, i.e., the one with the minimum symbol error rate (SER), is selected to forward a new version of the received signal. Simulation results show that the DF-NC scheme performs considerably better than the analog-NC scheme. To improve the system performance, the optimal power allocation between the two sources and the best relay is determined based on the asymptotic SER. As the number of relay nodes increases, the optimal power allocation achieves better performance than that predicted by the asymptotic SER.
|
1204.1437
|
Fast projections onto mixed-norm balls with applications
|
stat.ML cs.LG math.OC
|
Joint sparsity offers powerful structural cues for feature selection,
especially for variables that are expected to demonstrate a "grouped" behavior.
Such behavior is commonly modeled via group-lasso, multitask lasso, and related
methods where feature selection is effected via mixed-norms. Several mixed-norm
based sparse models have received substantial attention, and for some cases
efficient algorithms are also available. Surprisingly, several constrained
sparse models seem to be lacking scalable algorithms. We address this
deficiency by presenting batch and online (stochastic-gradient) optimization
methods, both of which rely on efficient projections onto mixed-norm balls. We
illustrate our methods by applying them to the multitask lasso. We conclude by
mentioning some open problems.
|
1204.1467
|
Learning Fuzzy {\beta}-Certain and {\beta}-Possible rules from
incomplete quantitative data by rough sets
|
cs.DS cs.LG
|
The rough-set theory proposed by Pawlak has been widely used in dealing with data classification problems. The original rough-set model is, however, quite sensitive to noisy data. Tzung thus proposed a model that produces a set of fuzzy certain and fuzzy possible rules from quantitative data with a predefined tolerance degree of uncertainty and misclassification; it combines the variable precision rough-set model with fuzzy set theory. This paper deals with the problem of producing a set of fuzzy certain and fuzzy possible rules from incomplete quantitative data with a predefined tolerance degree of uncertainty and misclassification. A new method, combining a rough-set model for incomplete quantitative data with fuzzy set theory, is proposed to solve this problem. It first transforms each quantitative value into a fuzzy set of linguistic terms using membership functions, and then finds the fuzzy lower and upper approximations of the incomplete quantitative data. It then calculates the fuzzy {\beta}-lower and fuzzy {\beta}-upper approximations. The certain and possible rules are then generated based on these fuzzy approximations. These rules can then be used to classify unknown objects.
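
A minimal sketch of the first step only (fuzzifying a quantitative value into linguistic terms); the triangular membership functions and term ranges are invented for illustration.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative linguistic terms for an attribute ranging over [0, 100].
TERMS = {"Low": (0, 0, 50), "Middle": (0, 50, 100), "High": (50, 100, 100)}

def fuzzify(value):
    """Transform a quantitative value into a fuzzy set of linguistic terms."""
    return {t: round(triangular(value, *p), 2)
            for t, p in TERMS.items() if triangular(value, *p) > 0}

print(fuzzify(65))  # e.g. {'Middle': 0.7, 'High': 0.3}
```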
|
1204.1528
|
Extracting Geospatial Preferences Using Relational Neighbors
|
cs.IR
|
With the increasing popularity of location-based social media applications
and devices that automatically tag generated content with locations, large
repositories of collaborative geo-referenced data are appearing on-line.
Efficiently extracting user preferences from these data to determine what
information to recommend is challenging because of the sheer volume of data as
well as the frequency of updates. Traditional recommender systems focus on the
interplay between users and items, but ignore contextual parameters such as
location. In this paper we take a geospatial approach to determine locational
preferences and similarities between users. We propose to capture the
geographic context of user preferences for items using a relational graph,
through which we are able to derive many new and state-of-the-art
recommendation algorithms, including combinations of them, requiring changes
only in the definition of the edge weights. Furthermore, we discuss several
solutions for cold-start scenarios. Finally, we conduct experiments using two
real-world datasets and provide empirical evidence that many of the proposed
algorithms outperform existing location-aware recommender algorithms.
|
1204.1548
|
On Cascade Source Coding with A Side Information "Vending Machine"
|
cs.IT math.IT
|
The model of a side information "vending machine" accounts for scenarios in
which acquiring side information is costly and thus should be done efficiently.
In this paper, the three-node cascade source coding problem is studied under
the assumption that a side information vending machine is available either at
the intermediate or at the end node. In both cases, a single-letter characterization of the available trade-offs among the rate, the distortions in the reconstructions at the intermediate and end nodes, and the cost of acquiring the side information is derived under given conditions.
|
1204.1559
|
Goppa geometry codes via elementary methods (In Portuguese)
|
cs.IT math.AG math.IT
|
The central objective of this dissertation is to present the Goppa geometry codes via the elementary methods introduced by J.H. van Lint, R. Pellikaan and T. Høholdt around 1998. The first part presents the fundamental concepts of function fields of an algebraic curve, leading up to the classical definition of Goppa codes; this part is based mainly on the book "Algebraic Function Fields and Codes" by H. Stichtenoth. The second part begins with an introduction to the weight, degree and order functions, which are fundamental for the study of Goppa codes through elementary methods of linear algebra and semigroups; this study is based on "Algebraic Geometry Codes" by J.H. van Lint, R. Pellikaan and T. Høholdt.
|
1204.1563
|
Generalized Error Exponents For Small Sample Universal Hypothesis
Testing
|
math.ST cs.IT math.IT stat.TH
|
The small sample universal hypothesis testing problem is investigated in this
paper, in which the number of samples $n$ is smaller than the number of
possible outcomes $m$. The goal of this work is to find an appropriate
criterion to analyze statistical tests in this setting. A suitable model for
analysis is the high-dimensional model in which both $n$ and $m$ increase to
infinity, and $n=o(m)$. A new performance criterion based on large deviations
analysis is proposed and it generalizes the classical error exponent applicable
for large sample problems (in which $m=O(n)$). This generalized error exponent
criterion provides insights that are not available from asymptotic consistency
or central limit theorem analysis. The following results are established for
the uniform null distribution:
(i) The best achievable probability of error $P_e$ decays as
$P_e=\exp\{-(n^2/m) J (1+o(1))\}$ for some $J>0$.
(ii) A class of tests based on separable statistics, including the
coincidence-based test, attains the optimal generalized error exponents.
(iii) Pearson's chi-square test has a zero generalized error exponent and
thus its probability of error is asymptotically larger than the optimal test.
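
A toy version of the coincidence-based test from item (ii), counting colliding sample pairs; the rejection threshold here is an illustrative choice, not the optimal test from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def coincidences(samples, m):
    """Number of colliding pairs: sum over bins of C(count, 2)."""
    counts = np.bincount(samples, minlength=m)
    return int((counts * (counts - 1) // 2).sum())

def coincidence_test(samples, m, factor=2.0):
    """Under the uniform null the expected number of colliding pairs is
    about n^2 / (2m); flag non-uniformity when we see far more. The
    threshold `factor` is an illustrative choice."""
    n = len(samples)
    return coincidences(samples, m) > factor * n * n / (2 * m)

m, n = 10_000, 300                              # small-sample regime: n = o(m)
print(coincidence_test(rng.integers(0, m, size=n), m))   # uniform -> False
print(coincidence_test(rng.integers(0, m // 100, size=n), m))  # skewed -> True
```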
|
1204.1564
|
Minimal model of associative learning for cross-situational lexicon
acquisition
|
q-bio.NC cs.LG
|
One explanation for the acquisition of word-object mappings is associative learning in a cross-situational scenario. Here we present analytical results on the performance of a simple associative learning algorithm for acquiring a one-to-one mapping between $N$ objects and $N$ words based solely on the co-occurrence between objects and words. In particular, a learning trial in our scenario consists of the presentation of $C + 1 < N$ objects together with a target word, which refers to one of the objects in the context. We find that the learning times are distributed exponentially and that the learning rates are given by $\ln{[\frac{N(N-1)}{C + (N-1)^{2}}]}$ in the case where the $N$ target words are sampled randomly, and by $\frac{1}{N} \ln [\frac{N-1}{C}]$ in the case where they follow a deterministic presentation sequence. This learning performance is much superior to that exhibited by humans and by more realistic learning algorithms in cross-situational experiments. We show that introducing discrimination limitations via Weber's law, together with forgetting, reduces the performance of the associative algorithm to the human level.
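
A toy simulation of the cross-situational co-occurrence learner, under assumed specifics: the learner tracks a single word, and "learned" means its referent is the unique most frequent co-occurring object; N, C and the stopping criterion are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def trials_to_learn(N=50, C=5, max_trials=100_000):
    """Trials until a pure co-occurrence learner maps word 0 to object 0.
    Each trial shows the word with its referent plus C random confounders;
    learning succeeds when the referent uniquely dominates the counts."""
    counts = np.zeros(N)
    others = np.arange(1, N)
    for t in range(1, max_trials + 1):
        context = np.concatenate(([0], rng.choice(others, C, replace=False)))
        counts[context] += 1
        top = counts.max()
        if counts[0] == top and (counts == top).sum() == 1:
            return t
    return max_trials

times = [trials_to_learn() for _ in range(200)]
print(f"mean learning time: {np.mean(times):.1f} trials")
```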
|
1204.1576
|
Development of knowledge Base Expert System for Natural treatment of
Diabetes disease
|
cs.AI
|
The development of an expert system for the treatment of diabetes using natural methods is a new information technology application derived from Artificial Intelligence research, built with the ESTA (Expert System shell for Text Animation) system. The proposed expert system contains knowledge about various natural treatment methods (massage, herbal/proper nutrition, acupuncture, gems) for diabetes in human beings. The system is developed in ESTA, which is a Visual Prolog 7.3 application. The knowledge for the system is acquired from domain experts, texts and other related sources.
|
1204.1580
|
Certifying the restricted isometry property is hard
|
math.FA cs.CC cs.IT math.IT
|
This paper is concerned with an important matrix condition in compressed
sensing known as the restricted isometry property (RIP). We demonstrate that
testing whether a matrix satisfies RIP is NP-hard. As a consequence of our
result, it is impossible to efficiently test for RIP provided P \neq NP.
|
1204.1581
|
A new approach of designing Multi-Agent Systems
|
cs.MA cs.AI
|
Agent technology is a software paradigm that makes it possible to implement large and complex distributed applications. In order to assist the analysis, design and implementation phases of multi-agent systems, we present a practical application of a generic and scalable method for building a MAS with a component-oriented architecture and an agent-based approach that allows MDA to generate source code from a given model. We designed in AUML the class diagrams as a class meta-model of the different agents of a MAS, and then generated the source code of the developed models using an open source tool called AndroMDA. This agent-based and evolutive approach enhances the modularity and genericity of developments and promotes their reusability in future developments. This property distinguishes our design methodology from existing methodologies in that it is not constrained by any particular agent-based model, while providing a library of generic models.
|
1204.1595
|
Femtocaching and Device-to-Device Collaboration: A New Architecture for
Wireless Video Distribution
|
cs.NI cs.IT math.IT
|
We present a new architecture to handle the ongoing explosive increase in the
demand for video content in wireless networks. It is based on distributed
caching of the content in femto-basestations with small or non-existing
backhaul capacity but with considerable storage space, called helper nodes. We
also consider using the mobile terminals themselves as caching helpers, which
can distribute video through device-to-device communications. This approach
allows an improvement in the video throughput without deployment of any
additional infrastructure. The new architecture can improve video throughput by
one to two orders-of-magnitude.
|
1204.1596
|
An Intelligent Location Management Approach in GSM Mobile Networks
|
cs.NI cs.AI
|
Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of the update costs of the location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit knowledge of user mobility patterns in the system so as to achieve this minimization. This study presents an intelligent location management approach that combines intelligent information systems with knowledge-base technologies, so that user patterns can be updated dynamically and transitions between the VLR and HLR reduced. The study provides algorithms able to handle location registration and call delivery.
|
1204.1598
|
Improving Seek Time for Column Store Using MMH Algorithm
|
cs.DB cs.PF
|
Hash-based search has proven excellent for large data warehouses stored in column stores. Data distribution has a significant impact on hash-based search. To reduce the impact of data distribution, we propose the Memory Managed Hash (MMH) algorithm, which uses a shift-XOR group for queries and transactions in a column store. Our experiments show that MMH improves read and write throughput by 22% for the TPC-H distribution.
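
The abstract does not specify MMH's shift-XOR construction, so the following is a generic shift-XOR (xorshift-style) bucket hash, purely illustrative; the shift/multiply constants follow the well-known MurmurHash3 finalizer.

```python
MASK = (1 << 64) - 1

def shift_xor_hash(key: int, buckets: int) -> int:
    """Generic 64-bit shift-XOR mixer (illustrative only -- the MMH
    paper's exact construction is not given in the abstract)."""
    h = key & MASK
    h ^= h >> 33
    h = (h * 0xFF51AFD7ED558CCD) & MASK
    h ^= h >> 33
    h = (h * 0xC4CEB9FE1A85EC53) & MASK
    h ^= h >> 33
    return h % buckets

print([shift_xor_hash(k, 16) for k in range(8)])  # spread of nearby keys
```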
|
1204.1611
|
Vision-based Human Gender Recognition: A Survey
|
cs.CV
|
Gender is an important demographic attribute of people. This paper provides a
survey of human gender recognition in computer vision. A review of approaches
exploiting information from face and whole body (either from a still image or
gait sequence) is presented. We highlight the challenges faced and survey the
representative methods of these approaches. Based on the results, good
performance has been achieved for datasets captured under controlled environments, but there is still much work to be done to improve the robustness of gender recognition in real-life environments.
|
1204.1615
|
Discrimination between Arabic and Latin from bilingual documents
|
cs.CV cs.CL cs.IR
|
2011 International Conference on Communications, Computing and Control
Applications (CCCA)
|
1204.1624
|
UCB Algorithm for Exponential Distributions
|
stat.ML cs.LG
|
We introduce in this paper a new algorithm for Multi-Armed Bandit (MAB) problems, a machine learning paradigm popular within Cognitive Network related topics (e.g., Spectrum Sensing and Allocation). We focus on the case where the rewards are exponentially distributed, which is common when dealing with Rayleigh fading channels. The strategy, named Multiplicative Upper Confidence Bound (MUCB), associates a utility index with every available arm and then selects the arm with the highest index. For every arm, the associated index is equal to the product of a multiplicative factor and the sample mean of the rewards collected from this arm. We show that the MUCB policy has low complexity and is order optimal.
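
A sketch of the MUCB idea (index = multiplicative factor x sample mean) on exponential rewards; the factor's exact form is not given in the abstract, so the 1/(1 - sqrt(...)) inflation used here is an assumption.

```python
import math
import random

random.seed(0)

def mucb(means, horizon, alpha=0.5):
    """Play each arm once, then pull the arm maximizing
    factor(t, n_k) * sample_mean_k (assumed illustrative factor)."""
    n = [0] * len(means)
    mean = [0.0] * len(means)
    for t in range(1, horizon + 1):
        if t <= len(means):
            k = t - 1                      # initialization: one pull per arm
        else:
            def index(j):
                c = math.sqrt(alpha * math.log(t) / n[j])
                return float("inf") if c >= 1 else mean[j] / (1.0 - c)
            k = max(range(len(means)), key=index)
        reward = random.expovariate(1.0 / means[k])  # exponential reward
        n[k] += 1
        mean[k] += (reward - mean[k]) / n[k]
    return n

print(mucb([1.0, 1.5, 2.0], horizon=5000))  # pulls should favour the last arm
```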
|
1204.1629
|
Image segmentation by adaptive distance based on EM algorithm
|
cs.CV
|
This paper introduces a Bayesian image segmentation algorithm based on finite mixtures. An EM algorithm is developed to estimate the parameters of the Gaussian mixtures. The finite mixture is a flexible and powerful probabilistic modeling tool that can be used to provide model-based clustering in the field of pattern recognition. However, the application of finite mixtures to image segmentation presents some difficulties; in particular, it is sensitive to noise. In this paper we propose a variant of this method which aims to resolve this problem. Our approach characterizes each pixel by two features: the first describes the intrinsic properties of the pixel, and the second characterizes its neighborhood. Classification is then performed on the basis of an adaptive distance which favours one feature or the other according to the spatial position of the pixel in the image. The results obtained show a significant improvement of our approach over the standard version of the EM algorithm.
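
A minimal sketch of the two-feature pixel characterization with a standard Gaussian-mixture EM (via scikit-learn) as a stand-in; the paper's adaptive distance itself is not reproduced, and the image, noise level and smoothing radius are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic noisy two-region image.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
img += rng.normal(0, 0.3, img.shape)

# Two features per pixel: its own intensity (intrinsic property) and
# its neighbourhood mean (local context).
feats = np.stack([img.ravel(), uniform_filter(img, size=5).ravel()], axis=1)

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(feats)
print("segment sizes:", np.bincount(labels))
```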
|
1204.1631
|
New approach using Bayesian Network to improve content based image
classification systems
|
cs.CV cs.IR
|
This paper proposes a new approach based on augmented naive Bayes for image classification. Initially, each image is cut into a set of blocks, and for each block we compute a vector of descriptors. We then classify the vectors of descriptors to build a vector of labels for each image. Finally, we apply three variants of Bayesian networks, namely Naive Bayes (NB), Tree Augmented Naive Bayes (TAN) and Forest Augmented Naive Bayes (FAN), to classify the image using the vector of labels. The results show a marked improvement of FAN over NB and TAN.
|
1204.1634
|
Automatic liver segmentation method in CT images
|
cs.CV
|
The aim of this work is to develop a method for automatic segmentation of the
liver based on a priori knowledge of the image, such as location and shape of
the liver.
|
1204.1637
|
Characterization of Dynamic Bayesian Network
|
cs.AI
|
In this report, we are interested in Dynamic Bayesian Networks (DBNs) as a model that tries to incorporate the temporal dimension together with uncertainty. We start with the basics of DBNs, focusing especially on inference and learning concepts and algorithms. We then present different levels and methods of creating DBNs, as well as approaches to incorporating the temporal dimension into static Bayesian networks.
|
1204.1649
|
Design and Engineering of a Chess-Robotic Arm
|
cs.RO math.AG
|
Within the scope of the "Chess-Bot" project, this study's goal is to choose the right model for the robotic arm that the "Chess-Bot" will use to move a pawn from one cell to another. This paper defines the structure of a robot arm and details the engineering and kinematics fundamentals of the robot and its components. Furthermore, different structures of robotic arms are presented and compared based on different criteria. Finally, a model for the "Chess-Bot" arm is synthesized based on accurate algorithms and equations.
|
1204.1650
|
The Lego Mindstorms Robotics Invention Systems 2.0 Toolkit: A Study Case
|
cs.RO
|
This paper reviews aspects of the LEGO® Mindstorms™ Robotics Invention System 2.0 (RIS) by presenting the different elements of the kit and relating them to actual robot components and norms. Furthermore, a comparison between the LCS and Java is made, as well as a comparison of the RCX board with other technologies, specifically LEGO® NXT and MIT's "Handy Board". Concrete examples of applications using the RIS are also presented.
|
1204.1653
|
Machine Cognition Models: EPAM and GPS
|
cs.AI
|
Throughout history, human beings have tried to delegate their daily tasks to other creatures, which was a main driver behind the rise of civilizations. It started with deploying animals to automate tasks in agriculture (bulls), transportation (e.g., horses and donkeys), and even communication (pigeons). Millennia later came the Golden Age with "Al-Jazari" and other Muslim inventors, the pioneers of automation, which centuries afterwards gave birth to the Industrial Revolution in Europe. At the end of the nineteenth century a new era began: the computational era, the most advanced technological and scientific development driving mankind and the reason behind the evolution of sciences such as medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to model machines that behave as they do, which pushed us to think about designing and implementing "Things-that-Think"; thus artificial intelligence was born. In this work we cover two of the major discoveries and studies in the field of machine cognition: the "Elementary Perceiver and Memorizer" (EPAM) and "The General Problem Solver" (GPS). The first focuses mainly on implementing human verbal-learning behavior, while the second tries to model an architecture able to solve problems in general (e.g., theorem proving, chess playing, and arithmetic). We cover the major goals and main ideas of each model, compare their strengths and weaknesses, and give their fields of application. Finally, we suggest a real-life implementation of a cognitive machine.
|
1204.1677
|
Space-Time MIMO Multicasting
|
cs.IT math.IT
|
Multicasting is the general method of conveying the same information to
multiple users over a broadcast channel. In this work, the Gaussian MIMO
broadcast channel is considered, with multiple users and any number of antennas
at each node. A "closed loop" scenario is assumed, for which a practical
capacity-achieving multicast scheme is constructed. In the proposed scheme,
linear modulation is carried over time and space together, which allows the problem to be transformed into that of transmission over parallel scalar
sub-channels, the gains of which are equal, except for a fraction of
sub-channels that vanishes with the number of time slots used. Over these
sub-channels, off-the-shelf fixed-rate AWGN codes can be used to approach
capacity.
|
1204.1678
|
A New Approach for Arabic Handwritten Postal Addresses Recognition
|
cs.CV
|
In this paper, we propose an automatic analysis system for the recognition of Arabic handwritten postal addresses using the beta-elliptical model. Our system is divided into several steps: analysis, pre-processing and classification. The first operation is the filtering of the image. In the second, we remove the printed border, stamps and graphics. After locating the address on the envelope, address segmentation allows the extraction of the postal code and the city name separately. The pre-processing system and the modeling approach are based on two basic steps. The first step is the extraction of the temporal order in the image of the handwritten trajectory. The second step is based on the use of the beta-elliptical model for the representation of handwritten script. The recognition system is based on a graph-matching algorithm. Our modeling and recognition approaches were validated using the postal codes and city names extracted from Tunisian postal envelope data. The recognition rate obtained is about 98%.
|
1204.1679
|
Clustering and Bayesian network for image of faces classification
|
cs.CV cs.AI
|
In a content-based image classification system, target images are sorted by feature similarity with respect to the query (CBIR). In this paper, we propose a new approach combining tangent distance, the k-means algorithm and Bayesian networks for image classification. First, we use the tangent distance technique to calculate several tangent spaces representing the same image, with the objective of reducing the error in the classification phase. Second, we cut the image into a set of blocks and compute a vector of descriptors for each block. Then, we use k-means to cluster the low-level features, including color and texture information, to build a vector of labels for each image. Finally, we apply five variants of Bayesian network classifiers (Na\"ive Bayes (NB), Global Tree Augmented Na\"ive Bayes (GTAN), Global Forest Augmented Na\"ive Bayes (GFAN), Tree Augmented Na\"ive Bayes for each class (TAN), and Forest Augmented Na\"ive Bayes for each class (FAN)) to classify the images of faces using the vector of labels. In order to validate the feasibility and effectiveness, we compare the results of GFAN to FAN and to the other classifiers (NB, GTAN, TAN). The results demonstrate that FAN outperforms GFAN, NB, GTAN and TAN in overall classification accuracy.
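
A compact stand-in for the block/label pipeline: cluster block descriptors with k-means, then classify the label vectors with a naive Bayes model; scikit-learn has no TAN/FAN, so plain CategoricalNB substitutes for the Bayesian-network variants, and all data shapes are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(0)

# Invented data: 100 face images, each cut into 16 blocks with
# 8-dimensional colour/texture descriptors, and binary class labels.
descriptors = rng.normal(size=(100, 16, 8))
y = rng.integers(0, 2, size=100)

# k-means quantizes every block descriptor into one of 32 labels,
# giving a 16-component label vector per image.
km = KMeans(n_clusters=32, n_init=10, random_state=0)
km.fit(descriptors.reshape(-1, 8))
label_vectors = km.predict(descriptors.reshape(-1, 8)).reshape(100, 16)

# CategoricalNB stands in for the paper's Bayesian-network variants
# (TAN/FAN would add dependency edges between the label variables).
clf = CategoricalNB(min_categories=32).fit(label_vectors, y)
print("train accuracy:", clf.score(label_vectors, y))
```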
|
1204.1681
|
The threshold EM algorithm for parameter learning in bayesian network
with incomplete data
|
cs.AI cs.LG stat.ML
|
Bayesian networks (BN) are used in a wide range of applications, but they present one issue concerning parameter learning. In real applications, training data are often incomplete or some nodes are hidden. To deal with this problem, many parameter-learning algorithms have been suggested, foremost among them the EM, Gibbs sampling and RBE algorithms. In order to limit the search space and escape the local maxima produced by executing the EM algorithm, this paper presents a parameter-learning algorithm that is a fusion of the EM and RBE algorithms. This algorithm incorporates the range of each parameter into the EM algorithm; the range is calculated by the first step of the RBE algorithm, allowing a regularization of each parameter in the Bayesian network after the maximization step of the EM algorithm. The threshold EM algorithm is applied to brain tumor diagnosis and shows some advantages and disadvantages over the EM algorithm.
|
1204.1685
|
Density-sensitive semisupervised inference
|
math.ST cs.LG stat.ML stat.TH
|
Semisupervised methods are techniques for using labeled data
$(X_1,Y_1),\ldots,(X_n,Y_n)$ together with unlabeled data $X_{n+1},\ldots,X_N$
to make predictions. These methods invoke some assumptions that link the
marginal distribution $P_X$ of X to the regression function f(x). For example,
it is common to assume that f is very smooth over high density regions of
$P_X$. Many of the methods are ad-hoc and have been shown to work in specific
examples but are lacking a theoretical foundation. We provide a minimax
framework for analyzing semisupervised methods. In particular, we study methods
based on metrics that are sensitive to the distribution $P_X$. Our model
includes a parameter $\alpha$ that controls the strength of the semisupervised
assumption. We then use the data to adapt to $\alpha$.
|
1204.1688
|
The asymptotics of ranking algorithms
|
math.ST cs.LG stat.ML stat.TH
|
We consider the predictive problem of supervised ranking, where the task is
to rank sets of candidate items returned in response to queries. Although there
exist statistical procedures that come with guarantees of consistency in this
setting, these procedures require that individuals provide a complete ranking
of all items, which is rarely feasible in practice. Instead, individuals
routinely provide partial preference information, such as pairwise comparisons
of items, and more practical approaches to ranking have aimed at modeling this
partial preference data directly. As we show, however, such an approach raises
serious theoretical challenges. Indeed, we demonstrate that many commonly used
surrogate losses for pairwise comparison data do not yield consistency;
surprisingly, we show inconsistency even in low-noise settings. With these
negative results as motivation, we present a new approach to supervised ranking
based on aggregation of partial preferences, and we develop $U$-statistic-based
empirical risk minimization procedures. We present an asymptotic analysis of
these new procedures, showing that they yield consistency results that parallel
those available for classification. We complement our theoretical results with
an experiment studying the new procedures in a large-scale web-ranking task.
|
1204.1704
|
Multi-Level Coding Efficiency with Improved Quality for Image
Compression based on AMBTC
|
cs.CV
|
In this paper, we propose an extended version of Absolute Moment Block Truncation Coding (AMBTC) to compress images. Generally, the elements of the bitplane used in variants of Block Truncation Coding (BTC) are of size 1 bit, but this has been extended to two bits in the proposed method. The number of statistical moments preserved to reconstruct the compressed image has also been raised from 2 to 4. Hence, the quality of the reconstructed images improves significantly, from 33.62 to 38.12, with a 1-bit increase in bpp. The increased bpp (3) is further reduced to 1.75 in multiple levels: in level one, by dropping 4 elements of the bitplane in such a way that the pixel values of the dropped elements can easily be interpolated without much loss in quality; in level two, eight elements are dropped and reconstructed later; and in level three, the size of the statistical moments is reduced. The experiments were carried out on standard images of varying intensities. In all cases, the proposed method outperforms the existing AMBTC technique in terms of both PSNR and bpp.
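
For context, a sketch of the standard two-moment, 1-bit AMBTC baseline that the paper extends; the proposed 2-bit plane, four preserved moments and level-wise element dropping are not reproduced here.

```python
import numpy as np

def ambtc_block(block):
    """Standard AMBTC on one block: keep a 1-bit plane plus the means of
    the pixels above/below the block mean (two preserved moments)."""
    mean = block.mean()
    plane = block >= mean
    hi = block[plane].mean() if plane.any() else mean
    lo = block[~plane].mean() if (~plane).any() else mean
    return plane, hi, lo

def ambtc_decode(plane, hi, lo):
    return np.where(plane, hi, lo)

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(4, 4)).astype(float)
rec = ambtc_decode(*ambtc_block(block))
mse = np.mean((block - rec) ** 2)
print(f"PSNR: {10 * np.log10(255**2 / mse):.2f} dB")
```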
|
1204.1706
|
Efficient Design of Triplet Based Spike-Timing Dependent Plasticity
|
cs.NE
|
Spike-Timing Dependent Plasticity (STDP) is believed to play an important
role in learning and the formation of computational function in the brain. The
classical model of STDP which considers the timing between pairs of
pre-synaptic and post-synaptic spikes (p-STDP) is incapable of reproducing
synaptic weight changes similar to those seen in biological experiments which
investigate the effect of either higher order spike trains (e.g. triplet and
quadruplet of spikes), or, simultaneous effect of the rate and timing of spike
pairs on synaptic plasticity. In this paper, we first investigate synaptic weight changes using a p-STDP circuit and show how it fails to reproduce the mentioned complex biological experiments. We then present a new STDP VLSI circuit which acts based on the timing among triplets of spikes (t-STDP) and is able to reproduce all the mentioned experimental results. We believe that our new STDP VLSI circuit improves upon previous circuits, as its learning capacity exceeds that of current designs due to its ability to mimic the outcomes of biological experiments more closely; it can thus play a significant role in future VLSI implementations of neuromorphic systems.
|
1204.1710
|
Hiding Sensitive Association Rules without Altering the Support of
Sensitive Item(s)
|
cs.DB cs.DC
|
Association rule mining is an important data-mining technique that finds interesting associations among a large set of data items. Since it may disclose patterns and various kinds of sensitive knowledge that are difficult to find otherwise, it may pose a threat to the privacy of discovered confidential information. Such information is to be protected against unauthorized access. Many strategies have been proposed to hide the information; some use distributed databases over several sites, data perturbation, clustering, and data distortion techniques. The problem of hiding sensitive rules, still not sufficiently investigated, is the requirement to balance the confidentiality of the disclosed data with the legitimate needs of the data user. The proposed approach uses the data distortion technique, where the position of the sensitive items is altered but their support is never changed; the size of the database remains the same. It uses the idea of representative rules to prune the rules first and then hide the sensitive rules. The advantage of this approach is that it hides the maximum number of rules in a minimum number of passes, whereas existing approaches fail to hide all the desired rules. The paper also compares the proposed approach with existing ones.
|
1204.1739
|
Relay Placement for Physical Layer Security: A Secure Connection
Perspective
|
cs.IT math.IT
|
This work studies the problem of secure connection in cooperative wireless
communication with two relay strategies, decode-and-forward (DF) and
randomize-and-forward (RF). The four-node scenario and cellular scenario are
considered. For the typical four-node (source, destination, relay, and
eavesdropper) scenario, we derive the optimal power allocation for the DF
strategy and find that the RF strategy is always better than the DF to enhance
secure connection. In cellular networks, we show that without relay, it is
difficult to establish secure connections from the base station to the cell
edge users. The effect of relay placement for the cell edge users is
demonstrated by simulation. For both scenarios, we find that the benefit of relay transmission increases as path loss becomes more severe.
|
1204.1751
|
Automated Feedback Generation for Introductory Programming Assignments
|
cs.PL cs.AI
|
We present a new method for automatically providing feedback for introductory
programming problems. In order to use this method, we need a reference
implementation of the assignment, and an error model consisting of potential
corrections to errors that students might make. Using this information, the
system automatically derives minimal corrections to students' incorrect
solutions, providing them with a quantifiable measure of exactly how incorrect
a given solution was, as well as feedback about what they did wrong.
We introduce a simple language for describing error models in terms of
correction rules, and formally define a rule-directed translation strategy that
reduces the problem of finding minimal corrections in an incorrect program to
the problem of synthesizing a correct program from a sketch. We have evaluated
our system on thousands of real student attempts obtained from 6.00 and 6.00x.
Our results show that relatively simple error models can correct on average 65%
of all incorrect submissions.
|
1204.1754
|
Vision Paper: Towards an Understanding of the Limits of Map-Reduce
Computation
|
cs.DB cs.DC
|
A significant amount of recent research work has addressed the problem of
solving various data management problems in the cloud. The major algorithmic
challenges in map-reduce computations involve balancing a multitude of factors
such as the number of machines available for mappers/reducers, their memory
requirements, and communication cost (total amount of data sent from mappers to
reducers). Most past work provides custom solutions to specific problems, e.g.,
performing fuzzy joins in map-reduce, clustering, graph analyses, and so on.
While some problems are amenable to very efficient map-reduce algorithms, some
other problems do not lend themselves to a natural distribution, and have
provable lower bounds. Clearly, the ease of "map-reducability" is closely
related to whether the problem can be partitioned into independent pieces,
which are distributed across mappers/reducers. What makes a problem
distributable? Can we characterize general properties of problems that
determine how easy or hard it is to find efficient map-reduce algorithms?
This is a vision paper that attempts to answer the questions described above.
|
1204.1756
|
Human Muscle Fatigue Model in Dynamic Motions
|
cs.RO
|
Human muscle fatigue is considered to be one of the main causes of Musculoskeletal Disorders (MSD). Models have recently been introduced to define muscle fatigue for static postures, but their main drawback is that the dynamics of the human body and the external load are not taken into account. In this paper, each human joint is assumed to be controlled by two muscle groups to generate motions such as push/pull. The joint torques are computed using Lagrange's formulation in order to evaluate the dynamic factors of the muscle fatigue model. An experiment is designed to validate this assumption, and the result for one subject confirms its feasibility. Evaluation of this model can quickly predict fatigue and MSD risk in industrial production.
|
1204.1757
|
Compensation of compliance errors in parallel manipulators composed of
non-perfect kinematic chains
|
cs.RO
|
The paper is devoted to compliance error compensation for parallel manipulators under external loading. The proposed approach is based on non-linear stiffness modeling and reduces to a proper adjustment of the target trajectory. In contrast to previous works, in addition to compliance errors caused by machining forces, the problem of assembly errors caused by inaccuracy in the kinematic chains is considered. The advantages and practical significance of the proposed approach are illustrated by examples dealing with groove milling with the Orthoglide manipulator.
|
1204.1800
|
On Power-law Kernels, corresponding Reproducing Kernel Hilbert Space and
Applications
|
cs.LG cs.IT math.IT stat.ML
|
The role of kernels is central to machine learning. Motivated by the
importance of power-law distributions in statistical modeling, in this paper,
we propose the notion of power-law kernels to investigate power-laws in
learning problems. We propose two power-law kernels by generalizing Gaussian and
Laplacian kernels. This generalization is based on distributions, arising out
of maximization of a generalized information measure known as nonextensive
entropy that is very well studied in statistical mechanics. We prove that the
proposed kernels are positive definite, and provide some insights regarding the
corresponding Reproducing Kernel Hilbert Space (RKHS). We also study practical
significance of both kernels in classification and regression, and present some
simulation results.
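
For reference, the q-exponential that arises from maximizing nonextensive (Tsallis) entropy, and the power-law analogue of the Gaussian kernel it suggests; the paper's exact parameterization may differ from this sketch.

```latex
% q-exponential from maximizing nonextensive (Tsallis) entropy;
% it recovers the ordinary exponential as q -> 1:
%   \exp_q(x) = [1 + (1 - q) x]_+^{1/(1-q)}
% A power-law analogue of the Gaussian kernel replaces exp by exp_q
% (for q > 1 the tail decays polynomially instead of exponentially):
k_q(x, y) = \exp_q\!\left(-\frac{\|x - y\|^2}{2\sigma^2}\right)
          = \left[\, 1 + (q - 1)\,\frac{\|x - y\|^2}{2\sigma^2} \right]^{-\frac{1}{q-1}},
\qquad q > 1 .
```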
|
1204.1811
|
Skin-color based videos categorization
|
cs.CV cs.AI
|
On dedicated websites, people can upload videos and share them with the rest of the world. Currently these videos are categorized manually with the help of the user community. In this paper, we propose a combination of color spaces with the Bayesian network approach for robust detection of skin color, followed by automated video categorization. Experimental results show that our method achieves satisfactory performance for categorizing videos based on skin color.
|
1204.1815
|
Using Nyquist or Nyquist-Like Plot to Predict Three Typical
Instabilities in DC-DC Converters
|
cs.SY math.DS nlin.CD
|
By transforming an exact stability condition, a new Nyquist-like plot is
proposed to predict occurrences of three typical instabilities in DC-DC
converters. The three instabilities are saddle-node bifurcation (coexistence of
multiple solutions), period-doubling bifurcation (subharmonic oscillation), and
Neimark bifurcation (quasi-periodic oscillation). In a single plot, it
accurately predicts whether an instability occurs and what type the instability
is. The plot is equivalent to the Nyquist plot, and it is a useful design tool
to avoid these instabilities. Nine examples are used to illustrate the accuracy
of this new plot to predict instabilities in the buck or boost converter with
fixed or variable switching frequency.
|
1204.1821
|
Permutation Complexity and Coupling Measures in Hidden Markov Models
|
nlin.CD cs.IT math.IT physics.data-an
|
In [Haruna, T. and Nakajima, K., 2011. Physica D 240, 1370-1377], the authors
introduced the duality between values (words) and orderings (permutations) as a
basis to discuss the relationship between information theoretic measures for
finite-alphabet stationary stochastic processes and their permutation
analogues. It has been used to give a simple proof of the equality between the
entropy rate and the permutation entropy rate for any finite-alphabet
stationary stochastic process and show some results on the excess entropy and
the transfer entropy for finite-alphabet stationary ergodic Markov processes.
In this paper, we extend our previous results to hidden Markov models and show
the equalities between various information theoretic complexity and coupling
measures and their permutation analogues. In particular, we show the following
two results within the realm of hidden Markov models with ergodic internal
processes: the two permutation analogues of the transfer entropy, the symbolic
transfer entropy and the transfer entropy on rank vectors, are both equivalent
to the transfer entropy if they are considered as the rates, and the directed
information theory can be captured by the permutation entropy approach.
|
1204.1832
|
Mathematical Modeling of Competitive Group Recommendation Systems with
Application to Peer Review Systems
|
cs.IR cs.PF
|
In this paper, we present a mathematical model to capture various factors
which may influence the accuracy of a competitive group recommendation system.
We apply this model to peer review systems, i.e., conference or research grants
review, which is an essential component in our scientific community. We explore a number of important questions, e.g., how will the number of reviews per paper affect the accuracy of the overall recommendation? Will the score aggregation policy influence the final recommendation? How might reviewers' preferences affect the accuracy of the final recommendation? To answer these important questions,
we formally analyze our model. Through this analysis, we obtain the insight on
how to design a randomized algorithm which is both computationally efficient
and asymptotically accurate in evaluating the accuracy of a competitive group
recommendation system. We obtain a number of interesting observations: e.g., for a medium-tier conference, three reviews per paper are sufficient for a high
accuracy recommendation. For prestigious conferences, one may need at least
seven reviews per paper to achieve high accuracy. We also propose a
heterogeneous review strategy which requires equal or less reviewing workload,
but can improve over a homogeneous review strategy in recommendation accuracy
by as much as 30% . We believe our models and methodology are important
building blocks to study competitive group recommendation systems.
|
1204.1851
|
A Probabilistic Logic Programming Event Calculus
|
cs.AI
|
We present a system for recognising human activity given a symbolic
representation of video content. The input of our system is a set of
time-stamped short-term activities (STA) detected on video frames. The output
is a set of recognised long-term activities (LTA), which are pre-defined
temporal combinations of STA. The constraints on the STA that, if satisfied,
lead to the recognition of a LTA, have been expressed using a dialect of the
Event Calculus. In order to handle the uncertainty that naturally occurs in
human activity recognition, we adapted this dialect to a state-of-the-art
probabilistic logic programming framework. We present a detailed evaluation and
comparison of the crisp and probabilistic approaches through experimentation on
a benchmark dataset of human surveillance videos.
|
1204.1868
|
User-based key frame detection in social web video
|
cs.MM cs.HC cs.IR
|
Video search results and suggested videos on web sites are represented with a video thumbnail, which is manually selected by the video uploader among three randomly generated ones (e.g., on YouTube). In contrast, we present a grounded
user-based approach for automatically detecting interesting key-frames within a
video through aggregated users' replay interactions with the video player.
Previous research has focused on content-based systems that have the benefit of
analyzing a video without user interactions, but they are monolithic, because
the resulting video thumbnails are the same regardless of the user preferences.
We constructed a user interest function, which is based on aggregate video
replays, and analyzed hundreds of user interactions. We found that the local
maximum of the replaying activity stands for the semantics of information-rich videos, such as lectures and how-tos. The concept of user-based key-frame
detection could be applied to any video on the web, in order to generate a
user-based and dynamic video thumbnail in search results.
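As a rough illustrative sketch (not the paper's method; the per-second binning, the smoothing window, the minimum peak spacing and all names are assumptions), aggregated replay starts can be turned into an interest curve whose local maxima suggest key frames:

    import numpy as np

    def user_interest(replay_starts, video_len, smooth=5):
        # Aggregate replay start times (in seconds) into a smoothed per-second interest curve
        hist = np.bincount(np.asarray(replay_starts, dtype=int), minlength=video_len)
        kernel = np.ones(smooth) / smooth
        return np.convolve(hist, kernel, mode="same")

    def key_frames(interest, min_gap=10):
        # Pick local peaks of the interest curve, keeping them at least min_gap seconds apart
        order = np.argsort(interest)[::-1]
        picked = []
        for t in order:
            if interest[t] > 0 and all(abs(int(t) - p) >= min_gap for p in picked):
                picked.append(int(t))
        return sorted(picked)

    replays = [12, 13, 13, 14, 60, 61, 61, 62, 62, 120]
    curve = user_interest(replays, video_len=180)
    print(key_frames(curve))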
|
1204.1880
|
Scalable Frames
|
math.NA cs.IT cs.NA math.FA math.IT
|
Tight frames can be characterized as those frames which possess optimal
numerical stability properties. In this paper, we consider the question of
modifying a general frame to generate a tight frame by rescaling its frame
vectors; a process which can also be regarded as perfect preconditioning of a
frame by a diagonal operator. A frame is called scalable, if such a diagonal
operator exists. We derive various characterizations of scalable frames,
thereby including the infinite-dimensional situation. Finally, we provide a
geometric interpretation of scalability in terms of conical surfaces.
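As a compact restatement (a sketch assuming the real finite-dimensional case, which is only one of the settings mentioned above), scalability can be phrased as the existence of nonnegative weights that make the rescaled frame tight:

    \{f_k\}_{k=1}^{N}\subset\mathbb{R}^{n}\ \text{is scalable}
    \iff \exists\, c_1,\dots,c_N \ge 0,\ A>0:\quad
    \sum_{k=1}^{N} c_k^{2}\, f_k f_k^{\top} = A\, I_n ,

i.e., the diagonal operator D = diag(c_1, ..., c_N) acts as a perfect preconditioner for the frame.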
|
1204.1909
|
Knapsack based Optimal Policies for Budget-Limited Multi-Armed Bandits
|
cs.AI cs.LG
|
In budget-limited multi-armed bandit (MAB) problems, the learner's actions
are costly and constrained by a fixed budget. Consequently, an optimal
exploitation policy may not be to pull the optimal arm repeatedly, as is the
case in other variants of MAB, but rather to pull the sequence of different
arms that maximises the agent's total reward within the budget. This difference
from existing MABs means that new approaches to maximising the total reward are
required. Given this, we develop two pulling policies, namely: (i) KUBE; and
(ii) fractional KUBE. Whereas the former provides up to 40% better performance in our experimental settings, the latter is computationally less expensive. We
also prove logarithmic upper bounds for the regret of both policies, and show
that these bounds are asymptotically optimal (i.e. they only differ from the
best possible regret by a constant factor).
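For illustration only (a simplified, density-greedy sketch in the spirit of a fractional knapsack rule, not the paper's KUBE algorithms; the confidence width and all names are assumptions):

    import math

    def density_greedy_pull(means, pulls, costs, budget, t):
        # UCB-style index per arm, then pick the affordable arm with the best
        # index-to-cost density (a fractional-knapsack-style greedy choice).
        best, best_density = None, -1.0
        for i, (mu, n, c) in enumerate(zip(means, pulls, costs)):
            if c > budget:
                continue  # this arm no longer fits the remaining budget
            ucb = mu + math.sqrt(2.0 * math.log(max(t, 2)) / max(n, 1))
            if ucb / c > best_density:
                best, best_density = i, ucb / c
        return best  # None once no arm is affordable

    print(density_greedy_pull([0.4, 0.7], [3, 3], [1.0, 2.5], budget=5.0, t=7))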
|
1204.1910
|
Multi-intersection Traffic Light Control Using Infinitesimal
Perturbation Analysis
|
cs.SY
|
We address the traffic light control problem for multiple intersections in
tandem by viewing it as a stochastic hybrid system and developing a Stochastic
Flow Model (SFM) for it. Using Infinitesimal Perturbation Analysis (IPA), we
derive on-line gradient estimates of a cost metric with respect to the
controllable green and red cycle lengths. The IPA estimators obtained require
counting traffic light switchings and estimating car flow rates only when
specific events occur. The estimators are used to iteratively adjust light
cycle lengths to improve performance and, in conjunction with a standard
gradient-based algorithm, to obtain optimal values which adapt to changing
traffic conditions. Simulation results are included to illustrate the approach.
|
1204.1912
|
Reference Based Genome Compression
|
cs.IT math.IT
|
DNA sequencing technology has advanced to a point where storage is becoming
the central bottleneck in the acquisition and mining of more data. Large
amounts of data are vital for genomics research, and generic compression tools,
while viable, cannot offer the same savings as approaches tuned to inherent
biological properties. We propose an algorithm to compress a target genome
given a known reference genome. The proposed algorithm first generates a
mapping from the reference to the target genome, and then compresses this
mapping with an entropy coder. As an illustration of the performance: applying
our algorithm to James Watson's genome with hg18 as a reference, we are able to
reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it
to 834.8 MB.
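As a toy sketch of the map-then-entropy-code idea (not the paper's algorithm; the greedy k-mer matching, the token format and the use of zlib as a stand-in coder are assumptions):

    import zlib

    def diff_encode(reference, target, k=8):
        # Toy mapping: greedily copy k-mers of the target that occur in the reference,
        # emitting (copy offset, length) tokens, and fall back to literals otherwise.
        index = {reference[i:i + k]: i for i in range(len(reference) - k + 1)}
        tokens, i = [], 0
        while i < len(target):
            key = target[i:i + k]
            if len(key) == k and key in index:
                tokens.append(("C", index[key], k))
                i += k
            else:
                tokens.append(("L", target[i]))
                i += 1
        return tokens

    def compress(tokens):
        # Stand-in for the entropy coder: serialize the token stream and deflate it
        blob = ";".join("C%d,%d" % t[1:] if t[0] == "C" else "L" + t[1] for t in tokens)
        return zlib.compress(blob.encode())

    reference = "ACGTACGGTTACGGATTACA" * 5
    target = reference[3:60] + "GATTACA" + reference[60:]
    print(len(target), "->", len(compress(diff_encode(reference, target))))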
|
1204.1924
|
Two-Way Communication with Energy Exchange
|
cs.IT math.IT
|
The conventional assumption made in the design of communication systems is
that the energy used to transfer information between a sender and a recipient
cannot be reused for future communication tasks. A notable exception to this
norm is given by passive RFID systems, in which a reader can transfer both
information and energy via the transmitted radio signal. Conceivably, any
system that exchanges information via the transfer of given physical resources
(radio waves, particles, qubits) can potentially reuse, at least part, of the
received resources for communication later on. In this paper, a two-way
communication system is considered that operates with a given initial number of
physical resources, referred to as energy units. The energy units are not
replenished from outside the system, and are assumed, for simplicity, to be
constant over time. A node can either send an "on" symbol (or "1"), which costs
one unit of energy, or an "off" signal (or "0"), which does not require any
energy expenditure. Upon reception of a "1" signal, the recipient node
"harvests" the energy contained in the signal and stores it for future
communication tasks. Inner and outer bounds on the achievable rates are
derived, and shown via numerical results to coincide if the number of energy
units is large enough.
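A toy bookkeeping sketch of the energy-unit dynamics described above (the alternating schedule, the lossless harvesting and the send probability are assumptions made only for illustration):

    import random

    def simulate(total_units=10, steps=20, p_send=0.5, seed=0):
        # Toy bookkeeping: sending a "1" costs the sender one unit, and the peer harvests it;
        # sending a "0" is free. Harvesting is assumed lossless, so total energy is conserved.
        random.seed(seed)
        energy = {"A": total_units, "B": 0}
        turn = "A"
        for _ in range(steps):
            peer = "B" if turn == "A" else "A"
            if energy[turn] > 0 and random.random() < p_send:
                energy[turn] -= 1   # transmit a "1"
                energy[peer] += 1   # the peer stores the harvested unit
            turn = peer             # two-way link: directions alternate
        return energy

    print(simulate())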
|
1204.1933
|
A Lattice-Theoretic Characterization of Optimal Minimum-Distance Linear
Precoders
|
cs.IT math.IT
|
This work investigates linear precoding over non-singular linear channels
with additive white Gaussian noise, with lattice-type inputs. The aim is to
maximize the minimum distance of the received lattice points, where the
precoder is subject to an energy constraint. It is shown that the optimal
precoder only produces a finite number of different lattices, namely perfect
lattices, at the receiver. The well-known densest lattice packings are
instances of perfect lattices; however, it is analytically shown that the
densest lattices are not always the solution. This is a counter-intuitive
result at first sight, since previous work in the area showed a tight
connection between densest lattices and minimum distance. Since there are only
finitely many different perfect lattices, they can theoretically be enumerated
off-line. A new upper bound on the optimal minimum distance is derived, which
significantly improves upon a previously reported bound. Based on this bound,
we propose an enumeration algorithm that produces a finite codebook of optimal
precoders.
|
1204.1949
|
Social Recommender Systems Based on Coupling Network Structure Analysis
|
cs.IR cs.SI physics.soc-ph
|
The past few years has witnessed the great success of recommender systems,
which can significantly help users find relevant and interesting items for them
in the information era. However, a large body of research in this area mainly focuses on predicting missing links in bipartite user-item networks (represented as behavioral networks). Comparatively, the social impact, especially the network-structure-based properties, has received relatively little study. In this paper, we first obtain five corresponding network-based features, including user
activity, average neighbors' degree, clustering coefficient, assortative
coefficient and discrimination, from social and behavioral networks,
respectively. A hybrid algorithm is proposed to integrate those features from
two respective networks. Subsequently, we employ a machine learning process to
use those features to provide recommendation results in a binary classifier
method. Experimental results on a real dataset, Flixster, suggest that the
proposed method can significantly enhance the algorithmic accuracy. In addition, since the network-based properties consider not only social activities but also user preferences in the behavioral networks, the hybrid method performs much better than using either the social or the behavioral network alone. Furthermore, since the features based on the behavioral network contain more diverse and meaningful structural information, they play a vital role in uncovering users' potential preferences, which might shed light on a deeper understanding of the structure and function of the social and behavioral networks.
|
1204.1956
|
Learning Topic Models - Going beyond SVD
|
cs.LG cs.DS cs.IR
|
Topic Modeling is an approach used for automatic comprehension and
classification of data in a variety of settings, and perhaps the canonical
application is in uncovering thematic structure in a corpus of documents. A
number of foundational works both in machine learning and in theory have
suggested a probabilistic model for documents, whereby documents arise as a
convex combination of (i.e. distribution on) a small number of topic vectors,
each topic vector being a distribution on words (i.e. a vector of
word-frequencies). Similar models have since been used in a variety of
application areas; the Latent Dirichlet Allocation or LDA model of Blei et al.
is especially popular.
Theoretical studies of topic modeling focus on learning the model's
parameters assuming the data is actually generated from it. Existing approaches
for the most part rely on Singular Value Decomposition (SVD), and consequently
have one of two limitations: these works need to either assume that each
document contains only one topic, or else can only recover the span of the
topic vectors instead of the topic vectors themselves.
This paper formally justifies Nonnegative Matrix Factorization (NMF) as a main
tool in this context, which is an analog of SVD where all vectors are
nonnegative. Using this tool we give the first polynomial-time algorithm for
learning topic models without the above two limitations. The algorithm uses a
fairly mild assumption about the underlying topic matrix called separability,
which is usually found to hold in real-life data. A compelling feature of our
algorithm is that it generalizes to models that incorporate topic-topic
correlations, such as the Correlated Topic Model and the Pachinko Allocation
Model.
We hope that this paper will motivate further theoretical results that use
NMF as a replacement for SVD - just as NMF has come to replace SVD in many
applications.
|
1204.1995
|
Attribute Exploration of Gene Regulatory Processes
|
q-bio.MN cs.CE cs.LO math.LO
|
This thesis aims at the logical analysis of discrete processes, in particular of those generated by gene regulatory networks. States, transitions and
operators from temporal logics are expressed in the language of Formal Concept
Analysis. By the attribute exploration algorithm, an expert or a computer
program is enabled to validate a minimal and complete set of implications, e.g.
by comparison of predictions derived from literature with observed data. Here,
these rules represent temporal dependencies within gene regulatory networks
including coexpression of genes, reachability of states, invariants or possible
causal relationships. This new approach is embedded into the theory of
universal coalgebras, particularly automata, Kripke structures and Labelled
Transition Systems. A comparison with the temporal expressivity of Description
Logics is made. The main theoretical results concern the integration of
background knowledge into the successive exploration of the defined data
structures (formal contexts). Applying the method, a Boolean network from the
literature modelling sporulation of Bacillus subtilis is examined. Finally, we
developed an asynchronous Boolean network for extracellular matrix formation
and destruction in the context of rheumatoid arthritis.
|
1204.2003
|
Directed Information Graphs
|
cs.IT cs.AI math.IT stat.ML
|
We propose a graphical model for representing networks of stochastic
processes, the minimal generative model graph. It is based on reduced
factorizations of the joint distribution over time. We show that under
appropriate conditions, it is unique and consistent with another type of
graphical model, the directed information graph, which is based on a
generalization of Granger causality. We demonstrate how directed information
quantifies Granger causality in a particular sequential prediction setting. We
also develop efficient methods to estimate the topological structure from data
that obviate estimating the joint statistics. One algorithm assumes
upper-bounds on the degrees and uses the minimal dimension statistics
necessary. In the event that the upper-bounds are not valid, the resulting
graph is nonetheless an optimal approximation. Another algorithm uses
near-minimal dimension statistics when no bounds are known but the distribution
satisfies a certain criterion. Analogous to how structure learning algorithms
for undirected graphical models use mutual information estimates, these
algorithms use directed information estimates. We characterize the
sample-complexity of two plug-in directed information estimators and obtain
confidence intervals. For the setting when point estimates are unreliable, we
propose an algorithm that uses confidence intervals to identify the best
approximation that is robust to estimation error. Lastly, we demonstrate the
effectiveness of the proposed algorithms through analysis of both synthetic
data and real data from the Twitter network. In the latter case, we identify
which news sources influence users in the network by merely analyzing tweet
times.
|
1204.2009
|
Effects of the LLL reduction on the success probability of the Babai
point and on the complexity of sphere decoding
|
cs.IT math.IT
|
The common method to estimate an unknown integer parameter vector in a linear
model is to solve an integer least squares (ILS) problem. A typical approach to
solving an ILS problem is sphere decoding. To make a sphere decoder faster, the
well-known LLL reduction is often used as preprocessing. The Babai point
produced by the Babai nearest plane algorithm is a suboptimal solution of the
ILS problem. First we prove that the success probability of the Babai point as
a lower bound on the success probability of the ILS estimator is sharper than
the lower bound given by Hassibi and Boyd [1]. Then we show rigorously that
applying the LLL reduction algorithm will increase the success probability of
the Babai point. Finally we show rigorously that applying the LLL reduction
algorithm will also reduce the computational complexity of sphere decoders,
which is measured approximately by the number of nodes in the search tree in
the literature.
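For reference, a minimal sketch of the classical Babai nearest plane step (with plain Gram-Schmidt and without the LLL preprocessing discussed above; not code from the paper):

    import numpy as np

    def gram_schmidt(B):
        # Classical Gram-Schmidt orthogonalization on the columns of B (no normalization)
        n = B.shape[1]
        Bs = np.zeros(B.shape, dtype=float)
        for j in range(n):
            v = B[:, j].astype(float)
            for i in range(j):
                v -= (B[:, j] @ Bs[:, i]) / (Bs[:, i] @ Bs[:, i]) * Bs[:, i]
            Bs[:, j] = v
        return Bs

    def babai_nearest_plane(B, t):
        # Returns integer coefficients z such that B @ z is a (generally suboptimal)
        # lattice point close to the target t -- the "Babai point".
        Bs = gram_schmidt(B)
        b = np.array(t, dtype=float)
        z = np.zeros(B.shape[1], dtype=int)
        for j in range(B.shape[1] - 1, -1, -1):
            c = round((b @ Bs[:, j]) / (Bs[:, j] @ Bs[:, j]))
            z[j] = c
            b -= c * B[:, j].astype(float)
        return z

    B = np.array([[1, 0], [1, 2]])      # lattice basis as columns
    print(babai_nearest_plane(B, [2.4, 3.3]))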
|
1204.2018
|
Applications of fuzzy logic to Case-Based Reasoning
|
cs.AI
|
The article discusses some applications of fuzzy logic ideas to the formalization of the Case-Based Reasoning (CBR) process and to measuring the effectiveness of CBR systems.
|
1204.2032
|
Multi-Output Recommender: Items, Groups and Friends, and Their Mutual
Contributing Effects
|
cs.IR
|
Due to the development of social media technology, it has become easier for users to gather together to form groups. Take Last.fm for example: users can join groups they may be interested in, where they can share the songs they love and discuss topics about songs and singers. However, as the number of groups grows over time, users need effective group recommendations in order to meet more like-minded users.
|
1204.2033
|
Computing Constrained Cramer Rao Bounds
|
cs.IT cs.DS math.IT
|
We revisit the problem of computing submatrices of the Cram\'er-Rao bound
(CRB), which lower bounds the variance of any unbiased estimator of a vector
parameter $\vth$. We explore iterative methods that avoid direct inversion of
the Fisher information matrix, which can be computationally expensive when the
dimension of $\vth$ is large. The computation of the bound is related to the
quadratic matrix program, for which highly efficient solution methods exist. We present several methods, and show that algorithms in prior work are
special instances of existing optimization algorithms. Some of these methods
converge to the bound monotonically, but in particular, algorithms converging
non-monotonically are much faster. We then extend the work to encompass the
computation of the CRB when the Fisher information matrix is singular and when
the parameter $\vth$ is subject to constraints. As an application, we consider
the design of a data streaming algorithm for network measurement.
|
1204.2035
|
Wireless Information Transfer with Opportunistic Energy Harvesting
|
cs.IT math.IT
|
Energy harvesting is a promising solution to prolong the operation of
energy-constrained wireless networks. In particular, scavenging energy from
ambient radio signals, namely wireless energy harvesting (WEH), has recently
drawn significant attention. In this paper, we consider a point-to-point
wireless link over the narrowband flat-fading channel subject to time-varying
co-channel interference. It is assumed that the receiver has no fixed power
supplies and thus needs to replenish energy opportunistically via WEH from the
unintended interference and/or the intended signal sent by the transmitter. We
further assume a single-antenna receiver that can only decode information or
harvest energy at any time due to the practical circuit limitation. Therefore,
it is important to investigate when the receiver should switch between the two
modes of information decoding (ID) and energy harvesting (EH), based on the
instantaneous channel and interference condition. In this paper, we derive the
optimal mode switching rule at the receiver to achieve various trade-offs
between wireless information transfer and energy harvesting. Specifically, we
determine the minimum transmission outage probability for delay-limited
information transfer and the maximum ergodic capacity for no-delay-limited
information transfer versus the maximum average energy harvested at the
receiver, which are characterized by the boundary of so-called "outage-energy"
region and "rate-energy" region, respectively. Moreover, for the case when the
channel state information (CSI) is known at the transmitter, we investigate the
joint optimization of transmit power control, information and energy transfer
scheduling, and the receiver's mode switching. Our results provide useful
guidelines for the efficient design of emerging wireless communication systems
powered by opportunistic WEH.
|
1204.2058
|
A technical study and analysis on fuzzy similarity based models for text
classification
|
cs.IR cs.LG
|
In the current era of technological advancements and new techniques, efficient and effective text document classification, i.e., capably categorizing text documents into mutually exclusive categories, is becoming a challenging and highly demanded task. Fuzzy similarity provides a way to find the similarity of
features among various documents. In this paper, a technical review on various
fuzzy similarity based models is given. These models are discussed and compared
to frame out their use and necessity. A tour of different methodologies is
provided which is based upon fuzzy-similarity-related concerns. It shows how text and web documents are categorized efficiently into different
categories. Various experimental results of these models are also discussed.
The technical comparisons among each model's parameters are shown in the form
of a 3-D chart. Such a study and technical review provides a strong basis for research work done on fuzzy-similarity-based text document categorization.
|
1204.2061
|
A Fuzzy Similarity Based Concept Mining Model for Text Classification
|
cs.IR cs.LG
|
Text classification is a challenging and red-hot field in the current scenario and has great importance in text categorization applications. A lot of
research work has been done in this field but there is a need to categorize a
collection of text documents into mutually exclusive categories by extracting
the concepts or features using supervised learning paradigm and different
classification algorithms. In this paper, a new Fuzzy Similarity Based Concept
Mining Model (FSCMM) is proposed to classify a set of text documents into pre -
defined Category Groups (CG) by providing them training and preparing on the
sentence, document and integrated corpora levels along with feature reduction,
ambiguity removal on each level to achieve high system performance. Fuzzy
Feature Category Similarity Analyzer (FFCSA) is used to analyze each extracted
feature of Integrated Corpora Feature Vector (ICFV) with the corresponding
categories or classes. This model uses Support Vector Machine Classifier (SVMC)
to correctly classify the training data patterns into two groups, i.e., +1 and -1, thereby producing accurate and correct results. The proposed model works efficiently and effectively with great performance and high-accuracy results.
|
1204.2062
|
SVD-EBP Algorithm for Iris Pattern Recognition
|
cs.CV
|
This paper proposes a neural network approach based on Error Back Propagation (EBP) for the classification of different eye images. To reduce the complexity of the layered neural network, the dimensions of the input vectors are optimized using Singular Value Decomposition (SVD). The main aim of this work is to provide the best method for feature extraction and classification. The details of this combined system, named the SVD-EBP system, and the results thereof are presented in this paper.
Keywords: Singular Value Decomposition (SVD), Error Back Propagation (EBP).
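For illustration only (not the paper's pipeline; the centering, the choice of k and the synthetic data are assumptions), the SVD step that shrinks the input dimension before the back-propagation classifier might look like:

    import numpy as np

    def svd_reduce(X, k):
        # Project feature vectors (rows of X) onto the top-k right singular vectors,
        # shrinking the input dimension fed to the back-propagation (EBP) network.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T, Vt[:k]

    X = np.random.default_rng(0).normal(size=(100, 64))   # stand-in for flattened iris features
    Z, components = svd_reduce(X, k=8)
    print(Z.shape)   # reduced vectors, ready to train an EBP classifier on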
|
1204.2069
|
Asymptotic Accuracy of Distribution-Based Estimation for Latent
Variables
|
stat.ML cs.LG
|
Hierarchical statistical models are widely employed in information science
and data engineering. The models consist of two types of variables: observable
variables that represent the given data and latent variables for the
unobservable labels. An asymptotic analysis of the models plays an important
role in evaluating the learning process; the result of the analysis is applied
not only to theoretical but also to practical situations, such as optimal model
selection and active learning. There are many studies of generalization errors,
which measure the prediction accuracy of the observable variables. However, the
accuracy of estimating the latent variables has not yet been elucidated. For a
quantitative evaluation of this, the present paper formulates
distribution-based functions for the errors in the estimation of the latent
variables. The asymptotic behavior is analyzed for both the maximum likelihood
and the Bayes methods.
|
1204.2073
|
Automatic facial feature extraction and expression recognition based on
neural network
|
cs.CV
|
In this paper, an approach to the problem of automatic facial feature
extraction from a still frontal posed image and classification and recognition
of facial expression, and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier to classify the expression of the supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry and edge projection analysis. Experiments are carried out on the JAFFE facial expression database and give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.
|
1204.2079
|
A Theoretical and Empirical Evaluation of Software Component Search
Engines, Semantic Search Engines and Google Search Engine in the Context of
COTS-Based Development
|
cs.IR cs.SE
|
COTS-based development is a component reuse approach promising to reduce
costs and risks, and ensure higher quality. The growing availability of COTS
components on the Web has concretized the possibility of achieving these
objectives. In this multitude, a recurrent problem is the identification of the
COTS components that best satisfy the user requirements. Finding an adequate
COTS component implies searching among heterogeneous descriptions of the
components within a broad search space. Thus, the use of search engines is required to make COTS component identification more efficient. In this
paper, we investigate, theoretically and empirically, the COTS component search
performance of eight software component search engines, nine semantic search
engines and a conventional search engine (Google). Our empirical evaluation is
conducted with respect to precision and normalized recall. We defined ten
queries for the assessed search engines. These queries were carefully selected
to evaluate the capability of each search engine for handling COTS component
identification.
|
1204.2080
|
Ergodic Capacity of Cognitive Radio under Imperfect Channel State
Information
|
cs.IT math.IT
|
A spectrum-sharing communication system where the secondary user is aware of
the instantaneous channel state information (CSI) of the secondary link, but
knows only the statistics and an estimated version of the secondary
transmitter-primary receiver (ST-PR) link, is investigated. The optimum power
profile and the ergodic capacity of the secondary link are derived for general
fading channels (with continuous probability density function) under average
and peak transmit-power constraints and with respect to two different
interference constraints: an interference outage constraint and a
signal-to-interference outage constraint. When applied to Rayleigh fading
channels, our results show, for instance, that the interference constraint is harmful in the high-power regime, in the sense that the capacity does not increase with the power, whereas in the low-power regime it has a marginal impact, and the no-interference performance, corresponding to the ergodic capacity under an average or peak transmit-power constraint in the absence of the primary user, may be achieved.
|
1204.2083
|
Primary Rate-Splitting Achieves Capacity for the Gaussian Cognitive
Interference Channel
|
cs.IT math.IT
|
The cognitive interference channel models cognitive overlay radio systems,
where cognitive radios overhear the transmission of neighboring nodes. Capacity
for this channel is not known in general. For the Gaussian case capacity is
known in three regimes, usually denoted as the "weak interference", "very
strong interference" and "primary decodes cognitive". This paper provides a new
capacity result, based on rate-splitting of the primary user's message into a
public and private part and that generalizes the capacity results in the "very
strong interference" and "primary decodes cognitive" regimes. This result
indicates that capacity of the cognitive interference channel not only depends
on channel conditions but also the level of cooperation with the primary user.
|
1204.2114
|
Image-based Vehicle Classification System
|
cs.CV
|
Electronic toll collection (ETC) systems have become a common approach to toll collection on toll roads nowadays. The implementation of electronic toll collection allows vehicles to travel at low or full speed during toll payment, which helps to avoid traffic delays at the toll road. One of the major components of an electronic toll collection system is the automatic vehicle detection and classification (AVDC) system, which is important for classifying the vehicle so that the toll is charged according to the vehicle class. A vision-based vehicle classification system is one type of vehicle classification system which adopts a camera as the input sensing device. This type of system has an advantage over the rest in that it is cost efficient, as a low-cost camera is used. The implementation of a vision-based vehicle classification system requires a lower initial investment cost and is very suitable for the toll collection trend migration in Malaysia from a single ETC system to full-scale multi-lane free flow (MLFF). This project includes the development of an image-based vehicle classification system as an effort to seek a robust vision-based vehicle classification system. The techniques used in the system include the scale-invariant feature transform (SIFT) technique, Canny's edge detector, K-means clustering as well as Euclidean distance matching. In this project, a unique way of using image descriptions as the matching medium is proposed. The distinctiveness of this method is analogous to the concept of human DNA, which is highly unique. The system is evaluated on open datasets and returns promising results.
|
1204.2134
|
The steepest watershed: from graphs to images
|
cs.CV
|
The watershed is a powerful tool for segmenting objects whose contours appear
as crest lines on a gradient image. The watershed transform associates to a
topographic surface a partition into catchment basins, defined as attraction
zones of a drop of water falling on the relief and following a line of steepest
descent. Unfortunately, catchment basins may overlap and do not form a
partition. Moreover, current watershed algorithms, being shortsighted, do not
correctly estimate the steepness of the downwards trajectories and overestimate
the overlapping zones of catchment basins. An arbitrary division of these zones
between adjacent catchment basins results in a poor localization of the
contours. We propose an algorithm without myopia, which considers the total
length of a trajectory for estimating its steepness. We first consider
topographic surfaces defined on node weighted graphs. The graphs are pruned in
order to eliminate all downwards trajectories which are not the steepest. An
iterative algorithm with simple neighborhood operations performs the pruning
and constructs the catchment basins. The algorithm is then adapted to gray tone
images. The graph structure itself is encoded as an image thanks to the fixed
neighborhood structure of grids. A pair of adaptive erosions and dilations
prune the graph and extend the catchment basins. As a result one obtains a
precise detection of the catchment basins and a graph of the steepest
trajectories. A final iterative algorithm allows selected downward trajectories to be followed in order to detect particular structures such as rivers or thalweg
lines of the topographic surface.
|
1204.2139
|
Affine Image Registration Transformation Estimation Using a Real Coded
Genetic Algorithm with SBX
|
cs.NE
|
This paper describes the application of a real coded genetic algorithm (GA)
to align two or more 2-D images by means of image registration. The proposed
search strategy is a transformation parameters-based approach involving the
affine transform. The real coded GA uses Simulated Binary Crossover (SBX), a
parent-centric recombination operator that has shown to deliver a good
performance in many optimization problems in the continuous domain. In
addition, we propose a new technique for matching points between warped and static images by using a randomized ordering when visiting the points during
the matching procedure. This new technique makes the evaluation of the
objective function somewhat noisy, but GAs and other population-based search
algorithms have been shown to cope well with noisy fitness evaluations. The
results obtained are competitive to those obtained by state-of-the-art
classical methods in image registration, confirming the usefulness of the
proposed noisy objective function and the suitability of SBX as a recombination
operator for this type of problem.
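For illustration only (a generic SBX sketch, not the paper's implementation; the distribution index eta, the absence of variable bounds and the example parameter layout are assumptions):

    import numpy as np

    def sbx_crossover(p1, p2, eta=15.0, rng=None):
        # Simulated Binary Crossover (SBX): parent-centric, real-coded recombination.
        rng = rng if rng is not None else np.random.default_rng()
        u = rng.random(len(p1))
        beta = np.where(u <= 0.5,
                        (2.0 * u) ** (1.0 / (eta + 1.0)),
                        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2

    # Example: recombine two candidate affine-parameter vectors (tx, ty, angle, scale)
    a = np.array([1.0, -2.0, 0.10, 1.05])
    b = np.array([3.0,  0.5, 0.05, 0.95])
    print(sbx_crossover(a, b, rng=np.random.default_rng(1)))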
|
1204.2150
|
Analog Network Coding in General SNR Regime: Performance of Network
Simplification
|
cs.IT math.IT
|
We consider a communication scenario where a source communicates with a
destination over a directed layered relay network. Each relay performs analog
network coding where it scales and forwards the signals received at its input.
In this scenario, we address the question: What portion of the maximum
end-to-end achievable rate can be maintained if only a fraction of relay nodes
available at each layer are used?
We consider, in particular, the Gaussian diamond network (layered network
with a single layer of relay nodes) and a class of symmetric layered networks.
For these networks we show that each relay layer increases the additive gap
between the optimal analog network coding performance with and without network
simplification (using k instead of N relays in each layer, k < N) by no more
than log(N/k)^2 bits and the corresponding multiplicative gap by no more than a
factor of (N/k)^2, asymptotically (in source power). To the best of our
knowledge, this work offers the first characterization of the performance of
network simplification in general layered amplify-and-forward relay networks.
Further, unlike most of the current approximation results that attempt to bound
optimal rates either within an additive gap or a multiplicative gap, our
results suggest a new rate approximation scheme that allows for the
simultaneous computation of additive and multiplicative gaps.
|
1204.2218
|
Decoder for Nonbinary CWS Quantum Codes
|
cs.IT math.IT quant-ph
|
We present a decoder for nonbinary CWS quantum codes using the structure of
union codes. The decoder runs in two steps: first we use a union of stabilizer
codes to detect a sequence of errors, and second we build a new code, called
union code, that allows the errors to be corrected.
|
1204.2231
|
Investigating Keyphrase Indexing with Text Denoising
|
cs.DL cs.IR
|
In this paper, we report on indexing performance by a state-of-the-art
keyphrase indexer, Maui, when paired with a text extraction procedure called
text denoising. Text denoising is a method that extracts the denoised text,
comprising the content-rich sentences, from full texts. The performance of the
keyphrase indexer is demonstrated on three standard corpora collected from
three domains, namely food and agriculture, high energy physics, and biomedical
science. Maui is trained using the full texts and denoised texts. The indexer,
using its trained models, then extracts keyphrases from test sets comprising
full texts, and their denoised and noise parts (i.e., the part of texts that
remains after denoising). Experimental findings show that against a gold
standard, the denoised-text-trained indexer indexing full texts performs either better than or as well as the benchmark performance produced by a full-text-trained indexer indexing full texts.
|
1204.2235
|
Publishing Identifiable Experiment Code And Configuration Is Important,
Good and Easy
|
cs.RO cs.AI cs.DL
|
We argue for the value of publishing the exact code, configuration and data
processing scripts used to produce empirical work in robotics. In particular,
we recommend publishing a unique identifier for the code package in the paper
itself, as a promise to the reader that this is the relevant code. We review
some recent discussion of best practice for reproducibility in various
professional organisations and journals, and discuss the current reward
structure for publishing code in robotics, along with some ideas for
improvement.
|
1204.2240
|
Interdependent binary choices under social influence: phase diagram for
homogeneous unbiased populations
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Coupled Ising models are studied in a discrete choice theory framework, where
they can be understood to represent interdependent choice making processes for
homogeneous populations under social influence. Two different coupling schemes
are considered. The nonlocal or group interdependence model is used to study
two interrelated groups making the same binary choice. The local or individual
interdependence model represents a single group where agents make two binary
choices which depend on each other. For both models, phase diagrams, and their
implications in socioeconomic contexts, are described and compared in the
absence of private deterministic utilities (zero opinion fields).
|
1204.2245
|
Development of a Conceptual Structure for a Domain-Specific Corpus
|
cs.IR
|
The corpus reported in this paper was developed for the evaluation of a
domain-specific Text to Knowledge Mapping (TKM) prototype. The TKM prototype
operates on the basis of both a combinatory categorical grammar (CCG)
linguistic model and a knowledge model that consists of three layers: ontology,
qualitative and quantitative layers. In the course of this evaluation it was
necessary to populate these initial models with lexical items and semantic
relations. Both elements, the lexicon and semantic relations, are meant to
reflect the domain of the prototype; hence both had to be extracted from the
corpus. While dealing with the lexicon was straightforward, the identification
and extraction of appropriate semantic relations was much more involved. It was
necessary, therefore, to manually develop a conceptual structure for the domain
which was then used to formulate a domain-specific framework of semantic
relations. The conceptual structure was developed using the Cmap tool of IHMC.
The framework of semantic relations that resulted from this study consists of 55 relations, of which 42 have inverse relations.
|
1204.2248
|
Robust Spatio-Temporal Signal Recovery from Noisy Counts in Social Media
|
cs.AI cs.SI
|
Many real-world phenomena can be represented by a spatio-temporal signal:
where, when, and how much. Social media is a tantalizing data source for those
who wish to monitor such signals. Unlike most prior work, we assume that the
target phenomenon is known and we are given a method to count its occurrences
in social media. However, counting is plagued by sample bias, incomplete data,
and, paradoxically, data scarcity -- issues inadequately addressed by prior
work. We formulate signal recovery as a Poisson point process estimation
problem. We explicitly incorporate human population bias, time delays and
spatial distortions, and spatio-temporal regularization into the model to
address the noisy count issues. We present an efficient optimization algorithm
and discuss its theoretical properties. We show that our model is more accurate
than commonly-used baselines. Finally, we present a case study on wildlife
roadkill monitoring, where our model produces qualitatively convincing results.
|
1204.2252
|
Coordinated Home Energy Management for Real-Time Power Balancing
|
cs.SY cs.ET
|
This paper proposes a coordinated home energy management system (HEMS)
architecture where the distributed residential units cooperate with each other
to achieve real-time power balancing. The economic benefits for the retailer
and incentives for the customers to participate in the proposed coordinated
HEMS program are given. We formulate the coordinated HEMS design problem as a
dynamic programming (DP) and use approximate DP approaches to efficiently
handle the design problem. A distributed implementation algorithm based on the
convex optimization based dual decomposition technique is also presented. Our
focus in the current paper is on the deferrable appliances, such as Plug-in
(Hybrid) Electric Vehicles (PHEV), in view of their higher impact on the grid
stability. Simulation results show that the proposed coordinated HEMS architecture can efficiently improve real-time power balancing.
|
1204.2255
|
Identifying edge clusters in networks via edge graphlet degree vectors
(edge-GDVs) and edge-GDV-similarities
|
q-bio.MN cs.DM cs.SI
|
Inference of new biological knowledge, e.g., prediction of protein function,
from protein-protein interaction (PPI) networks has received attention in the
post-genomic era. A popular strategy has been to cluster the network into
functionally coherent groups of proteins and predict protein function from the
clusters. Traditionally, network research has focused on clustering of nodes.
However, why favor nodes over edges, when clustering of edges may be preferred?
For example, nodes belong to multiple functional groups, but clustering of
nodes typically cannot capture the group overlap, while clustering of edges
can. Clustering of adjacent edges that share many neighbors was proposed
recently, outperforming different node clustering methods. However, since some
biological processes can have characteristic "signatures" throughout the
network, not just locally, it may be of interest to consider edges that are not
necessarily adjacent. Hence, we design a sensitive measure of the "topological
similarity" of edges that can deal with edges that are not necessarily
adjacent. We cluster edges that are similar according to our measure in
different baker's yeast PPI networks, outperforming existing node and edge
clustering approaches.
|
1204.2274
|
Beamforming in Two-Way Fixed Gain Amplify-and-Forward Relay Systems with
CCI
|
cs.IT math.IT
|
We analyze the outage performance of a two-way fixed gain amplify-and-forward
(AF) relay system with beamforming, arbitrary antenna correlation, and
co-channel interference (CCI). Assuming CCI at the relay, we derive the exact
individual user outage probability in closed-form. Additionally, while
neglecting CCI, we also investigate the system outage probability of the
considered network, where a system outage is declared if either of the two users is in transmission outage. Our results indicate that in this system, the position of
the relay plays an important role in determining the user as well as the system
outage probability via such parameters as signal-to-noise imbalance, antenna
configuration, spatial correlation, and CCI power. To render further insights
into the effect of antenna correlation and CCI on the diversity and array
gains, an asymptotic expression which tightly converges to exact results is
also derived.
|
1204.2294
|
Ubiquitous WLAN/Camera Positioning using Inverse Intensity Chromaticity
Space-based Feature Detection and Matching: A Preliminary Result
|
cs.CV
|
This paper presents our new intensity chromaticity space-based feature detection and matching algorithm. This approach utilizes a hybridization of a wireless local area network and a camera's internal sensor, which receives the signal strength from an access point and at the same time retrieves interest point information from hallways. This information is combined by a model fitting approach in order to find the absolute position of the user target. No conventional searching algorithm is required; thus it is expected to reduce the computational complexity. Finally, we present preliminary experimental results to illustrate the performance of the localization system for an indoor environment set-up.
|
1204.2310
|
A Novel Latin Square Image Cipher
|
cs.CR cs.IT math.IT
|
In this paper, we introduce a symmetric-key Latin square image cipher (LSIC)
for grayscale and color images. Our contributions to the image encryption
community include 1) we develop new Latin square image encryption primitives
including Latin Square Whitening, Latin Square S-box and Latin Square P-box ;
2) we provide a new way of integrating probabilistic encryption in image
encryption by embedding random noise in the least significant image bit-plane;
and 3) we construct LSIC with these Latin square image encryption primitives
all on one keyed Latin square in a new loom-like substitution-permutation
network. Consequently, the proposed LSIC achieves many desired properties of a secure cipher, including a large key space, high key sensitivity, uniformly distributed ciphertext, excellent confusion and diffusion properties, semantic security, and robustness against channel noise. Theoretical analysis shows that the LSIC has good resistance to many attack models, including brute-force attacks, ciphertext-only attacks, known-plaintext attacks and chosen-plaintext attacks. Experimental analysis under extensive simulations using the complete USC-SIPI Miscellaneous image dataset demonstrates that LSIC outperforms or matches the state of the art achieved by many peer algorithms. All these analyses and results demonstrate that the LSIC is very suitable for digital image encryption. Finally, we open-source the LSIC MATLAB code at https://sites.google.com/site/tuftsyuewu/source-code.
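For illustration only (a toy construction, not the LSIC primitives from the paper; the key-to-seed mapping, the cyclic-shift Latin square and the position-indexed whitening are assumptions):

    import numpy as np

    def keyed_latin_square(key, n=256):
        # Toy keyed Latin square: row i is a cyclic shift of a key-derived permutation,
        # so every row and every column contains each byte value exactly once.
        rng = np.random.default_rng(int.from_bytes(key, "big"))
        perm = rng.permutation(n)
        return np.stack([np.roll(perm, i) for i in range(n)])

    def whiten(image, L):
        # Toy whitening pass: substitute each pixel through the Latin square,
        # selecting the row by the pixel's raster position (mod 256).
        idx = np.arange(image.size).reshape(image.shape) % L.shape[0]
        return L[idx, image]

    key = b"example-key"
    L = keyed_latin_square(key)
    img = np.random.default_rng(0).integers(0, 256, size=(8, 8))
    print(whiten(img, L))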
|