id | title | categories | abstract |
|---|---|---|---|
1311.5933 | Network Strategies in Election Campaigns | physics.soc-ph cs.SI nlin.AO | This study considers a simple variation of the voter model with two competing
parties. In particular, we represent the case of political elections, where
people can choose to support one of the two candidates or to remain neutral.
People operate within a social network and their opinions depend on those of
the people with whom they interact. Therefore, they may change their opinions
over time, which may mean supporting one particular candidate or none.
Candidates attempt to gain people's support by interacting with them, whether
they are in the same social circle (i.e. neighbors) or not. In particular,
candidates follow a strategy of interacting for a time with people they do not
know (that is, people who are not their neighbors). Our analysis of the
proposed model sought to establish which network strategies are the most
effective for candidates to gain popular support. We found that the most
suitable strategy depends on the topology of the social network. Finally, we
investigated the role of charisma in these dynamics. Charisma is relevant in
several social contexts, since charismatic people usually exercise a strong
influence over others. Our results showed that candidates' charisma is an
important contributory factor to a successful network strategy in election
campaigns.
|
1311.5947 | Fast Training of Effective Multi-class Boosting Using Coordinate Descent
Optimization | cs.CV cs.LG stat.CO | We present a novel column generation based boosting method for multi-class
classification. Our multi-class boosting is formulated in a single optimization
problem as in Shen and Hao (2011). Different from most existing multi-class
boosting methods, which use the same set of weak learners for all the classes,
we train class specified weak learners (i.e., each class has a different set of
weak learners). We show that using separate weak learner sets for each class
leads to fast convergence, without introducing additional computational
overhead in the training procedure. To further make the training more efficient
and scalable, we also propose a fast coordinate descent method for solving
the optimization problem at each boosting iteration. The proposed coordinate
descent method is conceptually simple and easy to implement in that it is a
closed-form solution for each coordinate update. Experimental results on a
variety of datasets show that, compared to a range of existing multi-class
boosting methods, the proposed method has a much faster convergence rate and
better generalization performance in most cases. We also empirically show that
the proposed fast coordinate descent algorithm needs less training time than
the MultiBoost algorithm in Shen and Hao (2011).
|
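The closed-form per-coordinate update described in the abstract above is specific to the paper's boosting objective; as a generic illustration of the idea (the quadratic objective, matrix, and sizes below are invented, not the paper's), coordinate descent admits an exact per-coordinate minimizer:

```python
import numpy as np

# Coordinate descent with a closed-form update, sketched on the quadratic
# f(x) = 0.5 * x^T A x - b^T x  (A symmetric positive definite).
rng = np.random.default_rng(0)
M = rng.random((5, 5))
A = M @ M.T + 5 * np.eye(5)   # symmetric positive definite by construction
b = rng.random(5)

x = np.zeros(5)
for _ in range(200):          # full sweeps over the coordinates
    for i in range(5):
        # Exact minimizer of f over x_i with the other coordinates fixed:
        # x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

# Converges to the global minimizer A^{-1} b
assert np.allclose(x, np.linalg.solve(A, b), atol=1e-8)
```

Each coordinate update is a one-line closed form, which is what makes such schemes cheap per iteration.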
1311.5978 | Event Evolution Tracking from Streaming Social Posts | cs.SI physics.soc-ph | Online social post streams such as Twitter timelines and forum discussions
have emerged as important channels for information dissemination. They are
noisy, informal, and surge quickly. Real life events, which may happen and
evolve every minute, are perceived and circulated in post streams by social
users. Intuitively, an event can be viewed as a dense cluster of posts with a
life cycle sharing the same descriptive words. There are many previous works on
event detection from social streams. However, there has been surprisingly
little work on tracking the evolution patterns of events, e.g., birth/death,
growth/decay, merge/split, which we address in this paper. To define a tracking
scope, we use a sliding time window, where old posts disappear and new posts
appear at each moment. Following that, we model a social post stream as an
evolving network, where each social post is a node, and edges between posts are
constructed when the post similarity is above a threshold. We propose a
framework which summarizes the information in the stream within the current
time window as a ``sketch graph'' composed of ``core'' posts. We develop
incremental update algorithms to handle highly dynamic social streams and track
event evolution patterns in real time. Moreover, we visualize events as word
clouds to aid human perception. Our evaluation on a real data set consisting of
5.2 million posts demonstrates that our method can effectively track event
dynamics in the whole life cycle from very large volumes of social streams on
the fly.
|
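The post-graph construction in the abstract above can be sketched in miniature (using Jaccard word overlap as the post similarity and connected components as event clusters, both simplifications of the paper's sketch-graph framework; the threshold and example posts are invented):

```python
from itertools import combinations

def jaccard(a, b):
    """Word-overlap similarity between two posts."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def build_edges(posts, threshold=0.3):
    """Connect posts whose similarity exceeds the threshold."""
    return [(i, j) for i, j in combinations(range(len(posts)), 2)
            if jaccard(posts[i], posts[j]) >= threshold]

def components(n, edges):
    """Connected components of the post graph = candidate event clusters."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

posts = ["earthquake hits city center",
         "major earthquake city center damage",
         "new phone released today",
         "phone released today with new camera"]
events = components(len(posts), build_edges(posts))
print(events)  # [[0, 1], [2, 3]]
```

A real-time version would maintain these components incrementally as the sliding window adds and drops posts, which is the hard part the paper addresses.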
1311.5989 | Robust Cosparse Greedy Signal Reconstruction for Compressive Sensing
with Multiplicative and Additive Noise | cs.IT cs.DS math.IT stat.AP | Greedy algorithms are popular in compressive sensing for their high
computational efficiency. But the performance of current greedy algorithms can
be severely degraded by noise (both multiplicative and additive). A robust
version of the greedy cosparse algorithm (greedy analysis pursuit) is presented
in this paper. In contrast to previous methods, the proposed robust greedy
analysis pursuit algorithm is based on an optimization model that allows both
multiplicative and additive noise in the data fitting constraint. In addition,
a new stopping criterion is derived. The new
algorithm is applied to compressive sensing of ECG signals. Numerical
experiments based on real-life ECG signals demonstrate the performance
improvement of the proposed greedy algorithms.
|
1311.5998 | A brief network analysis of Artificial Intelligence publication | cs.AI cs.DL | In this paper, we present an illustration of the history of Artificial
Intelligence (AI) with a statistical analysis of publications since 1940. We
collected and mined the IEEE publication database to analyze the geographical
and chronological variation in the activeness of AI research. The connections
between different institutes are shown. The results show that the leading
communities of AI research are mainly in the USA, China, Europe and Japan. The
key institutes, authors and research hotspots are revealed. It is found that
the research institutes in fields like Data Mining, Computer Vision, Pattern
Recognition and some other fields of Machine Learning are quite consistent,
implying a strong interaction within the community of each field. It is also
shown that research on Electronic Engineering and industrial or commercial
applications is very active in California, and that Japan publishes many papers
in robotics. Due to the limitation of the data source, the result might be
overly influenced by the number of published articles, which we mitigate by
applying network key-node analysis on the research community instead of merely
counting the number of publications.
|
1311.6005 | Modeling and Simulation of the EV Charging in a Residential Distribution
Power Grid | cs.SY | There are numerous advantages of using Electric Vehicles (EVs) as an
alternative method of transportation. However, an increase in EV usage in the
existing residential distribution grid poses problems such as overloading the
existing infrastructure. In this paper, we have modeled and simulated a
residential distribution grid in GridLAB-D (an open-source software tool used
to model, simulate, and analyze power distribution systems) to illustrate the
problems associated with higher EV market penetration rates in the
residential domain. Power grid upgrades or control algorithms at the
transformer level are required to overcome issues such as transformer
overloading. We demonstrate the method of coordinating EV charging in a
residential distribution grid so as to overcome the overloading problem without
any upgrades in the distribution grid.
|
1311.6007 | Dynamic Model of Facial Expression Recognition based on Eigen-face
Approach | cs.CV | Emotions are among the best ways of communicating information, and sometimes
they carry more information than words. Recently, there has been huge interest
in automatic recognition of human emotion because of its widespread application
in security, surveillance, marketing, advertisement, and human-computer
interaction. To communicate with a computer in a natural way, it will be
desirable to use more natural modes of human communication based on voice,
gestures and facial expressions. In this paper, a holistic approach for facial
expression recognition is proposed which captures the variation in facial
features in temporal domain and classifies the sequence of images in different
emotions. The proposed method uses Haar-like features to detect face in an
image. The dimensionality of the eigenspace is reduced using Principal
Component Analysis (PCA). By projecting the subsequent face images into
principal eigen directions, the variation pattern of the obtained weight vector
is modeled to classify it into different emotions. Owing to the variations of
expressions for different people and its intensity, a person specific method
for emotion recognition is followed. Using the gray scale images of the frontal
face, the system is able to classify four basic emotions such as happiness,
sadness, surprise, and anger.
|
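The eigenspace projection step described above can be sketched as follows (random arrays stand in for flattened grayscale face frames detected by the Haar stage; the frame size, number of frames, and component count are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 64))            # 20 frames x 64 pixels (stand-in data)

mean_face = faces.mean(axis=0)
centered = faces - mean_face
# PCA via SVD: rows of Vt are the principal eigen-directions (eigenfaces)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5
eigenfaces = Vt[:k]                     # (k x 64), reduced eigenspace

# Weight vector of a new frame = its projection onto the eigenfaces;
# the temporal pattern of these weights is what gets classified into emotions.
new_frame = rng.random(64)
weights = eigenfaces @ (new_frame - mean_face)
assert weights.shape == (k,)
```

The classifier then operates on the sequence of such weight vectors over time rather than on raw pixels.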
1311.6009 | Design of Fast Response Smart Electric Vehicle Charging Infrastructure | cs.SY | The response time of the smart electrical vehicle (EV) charging
infrastructure is the key index of the system performance. The traffic between
the smart EV charging station and the control center dominates the response
time of the smart charging stations. To accelerate the response of the smart EV
charging station, there is a need for a technology that collects the
information locally and relays it to the control center periodically. To reduce
the traffic between the smart EV charger and the control center, a Power
Information Collector (PIC), capable of collecting all the meters' power
information in the charging station, is proposed and implemented in this paper.
The response time is further reduced by pushing the power information to the
control center. Thus, a fast response smart EV charging infrastructure is
achieved to handle the shortage of energy in the local grid.
|
1311.6010 | Derivative of Rotation Matrix Direct Matrix Derivation of Well Known
Formula | cs.SY | In motion kinematics, it is well known that the time derivative of a
3x3 rotation matrix equals a skew-symmetric matrix multiplied by the rotation
matrix where the skew symmetric matrix is a linear (matrix valued) function of
the angular velocity and the rotation matrix represents the rotating motion of
a frame with respect to a reference frame. The equation is widely used in
engineering, e.g., robotics, control, air/spacecraft modeling, etc. However,
the derivations found in the literature are indirect. Motivated by the fact
that the set of 3x3 rotation matrices, i.e., SO(3), is a Lie group, forming a
smooth (differentiable) manifold, we describe the infinitesimal increment of
the rotation matrix in terms of rotation matrices and show that the above
equation immediately follows.
|
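The identity in the abstract above can be checked numerically. Assuming a rotation about the z-axis at constant angular velocity (the axis, speed, and time instant below are invented), a central finite difference of R(t) matches [ω]× R(t):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x with [w]x v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Rz(theta):
    """Rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

w = np.array([0.0, 0.0, 2.0])           # angular velocity (rad/s) about z
t, h = 0.7, 1e-6
# Central finite difference of R(t) = Rz(|w| t)
dR = (Rz(w[2] * (t + h)) - Rz(w[2] * (t - h))) / (2 * h)
# The well-known identity: dR/dt = [w]x R(t)
assert np.allclose(dR, skew(w) @ Rz(w[2] * t), atol=1e-6)
```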
1311.6012 | On a Flywheel-Based Regenerative Braking System for Regenerative Energy
Recovery | cs.SY | This paper presents a unique flywheel-based regenerative energy recovery,
storage and release system developed at the author's laboratory. It can recover
and store regenerative energy produced by braking a motion generator with
intermittent rotary velocity such as the rotor of a wind turbogenerator subject
to intermittent intake wind and the axles of electric and hybrid gas-electric
vehicles during frequent coasting and braking. The stored regenerative energy
in the flywheel is released and converted to electricity by the attached
alternator. A proof-of-concept prototype called the SJSU-RBS was designed,
built and tested by the author's students with the able assistance of technical
staff at his school.
|
1311.6015 | On the Sustainability of Electrical Vehicles | cs.SY | Many perceive electric vehicles (EVs) to be eco-environmentally sustainable
because they are free of emissions of toxic and greenhouse gases to the
environment. However, few have questioned the sustainability of the electric
power required to drive these vehicles. This paper presents an in-depth study
that indicates that massive infusion of EVs to our society in a short time span
will likely create a colossal demand for additional electric power generation
much beyond what the US electric power generating industry can provide with its
current generating capacity. Additionally, such demand would result in many
adverse environmental consequences if the current technology of electric power
generation by predominantly fossil fuels continues. Other rarely accounted-for
environmental impacts of EVs are the substantial electric energy required to
produce the batteries that drive EVs, and the negative consequences relating to
the recycling of spent batteries.
|
1311.6020 | Security versus Reliability Analysis of Opportunistic Relaying | cs.IT cs.CR math.IT | Physical-layer security is emerging as a promising paradigm of securing
wireless communications against eavesdropping between legitimate users, when
the main link spanning from source to destination has better propagation
conditions than the wiretap link from source to eavesdropper. In this paper, we
identify and analyze the tradeoffs between the security and reliability of
wireless communications in the presence of eavesdropping attacks. Typically,
the reliability of the main link can be improved by increasing the source's
transmit power (or decreasing its data rate) to reduce the outage probability,
which unfortunately increases the risk that an eavesdropper succeeds in
intercepting the source message through the wiretap link, since the outage
probability of the wiretap link also decreases when a higher transmit power (or
lower data rate) is used. We characterize the security-reliability tradeoffs
(SRT) of conventional direct transmission from source to destination in the
presence of an eavesdropper, where the security and reliability are quantified
in terms of the intercept probability by an eavesdropper and the outage
probability experienced at the destination, respectively. In order to improve
the SRT, we then propose opportunistic relay selection (ORS) and quantify the
attainable SRT improvement upon increasing the number of relays. It is shown
that given the maximum tolerable intercept probability, the outage probability
of our ORS scheme approaches zero for $N \to \infty$, where $N$ is the number
of relays. Conversely, given the maximum tolerable outage probability, the
intercept probability of our ORS scheme tends to zero for $N \to \infty$.
|
1311.6023 | Third Order Intermodulation Power Estimation for N Sinusoidal Channels | cs.SY | In this paper, an analysis is given to find the third-order intermodulation
power when N sinusoids are fed into a nonlinear device. A simple expression of the
third order intermodulation power is given for the case that the center
frequencies of the input sinusoids are equally spaced. Further, if the powers
of the signals are equal, the expression becomes a closed form expression. The
analysis will be helpful for communication system engineering in estimating the
adjacent channel interference due to nonlinearity. Numerical results are
presented for various values of N (the number of input channels). Though the analysis
assumes the input signals to be sinusoids without phase modulation, the third
order intermodulation power estimate serves as a good estimate for link budget
computation purpose. For the case that the center frequencies of the input
sinusoids are not spaced equally, the analysis can still highly likely be
applied if we insert pseudo channels in between the real channels so that all
(real and pseudo) channels are spaced equally (or approximately equally for
approximation). In this case, the pseudo channel powers are set to zero so that
the interference powers due to the pseudo channels will not be included in the
analysis. In other words, the analysis is highly likely applicable without the
constraint of the input channel center frequencies being equally spaced.
Simulations are also provided for the case that the input sinusoids are QPSK
modulated.
|
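The frequency bookkeeping behind the analysis above can be sketched as follows. This only enumerates which third-order products 2f_i − f_j and f_i + f_j − f_k land back on the channel grid (the channel frequencies are invented and no power levels are modeled, so it is not the paper's power expression):

```python
from collections import Counter

def im3_frequencies(freqs):
    """All third-order intermodulation products 2*fi - fj and fi + fj - fk."""
    out = []
    n = len(freqs)
    for i in range(n):
        for j in range(n):
            if j != i:
                out.append(2 * freqs[i] - freqs[j])
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(n):
                if k != i and k != j:
                    out.append(freqs[i] + freqs[j] - freqs[k])
    return out

channels = [100, 110, 120, 130]          # equally spaced center frequencies
hits = Counter(f for f in im3_frequencies(channels) if f in channels)
print(hits[110], hits[100])  # 3 2 -> inner channels collect more IM3 products
```

Setting the powers of inserted pseudo channels to zero, as the abstract suggests, simply removes their products from the tally.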
1311.6026 | Research and innovative design of a zero-emissions vehicle by
multidisciplinary student teams in multi-years | cs.SY | This paper presents a unique learning and research experience for students
from mechanical and electrical engineering majors in a course on senior design
projects involving research and development, design and production of a
proof-of-concept electric vehicle, the ZEM (Zero EMissions) vehicle. The ZEM
vehicle combined positive aspects and latest technologies in electric vehicle
design, solar-electric power conversions, and ergonomic human power into one
affordable and environmentally sustainable vehicle for urban transportation.
The 43 mechanical and 10 electrical engineering majors, plus 7 business
students, participated in this multidisciplinary project spanning two academic
years. The students involved in this multiyear endeavor gained valuable
experience in a real-world working environment with multifunctional and
multi-year sub-groups. The success of this new approach to conducting senior
design project classes has set a model for faculty members at the authors'
university in conducting similar courses.
|
1311.6041 | No Free Lunch Theorem and Bayesian probability theory: two sides of the
same coin. Some implications for black-box optimization and metaheuristics | cs.LG | Challenging optimization problems, which elude acceptable solution via
conventional calculus methods, arise commonly in different areas of industrial
design and practice. Hard optimization problems are those which manifest the
following behavior: a) a high number of independent input variables; b) very
complex or irregular multi-modal fitness; c) computationally expensive fitness
evaluation. This paper will focus on some theoretical issues that have strong
implications for practice. I will stress how an interpretation of the No Free
Lunch theorem leads naturally to a general Bayesian optimization framework. The
choice of a prior over the space of functions is a critical and inevitable step
in every black-box optimization.
|
1311.6045 | Build Electronic Arabic Lexicon | cs.CL | There are many known Arabic lexicons organized in different ways, each of
which has a different number of Arabic words according to how it is organized.
This paper uses mathematical relations to count the number of Arabic words,
which proves the number of Arabic words presented by Al Farahidy. The paper
also presents a new way to build an electronic Arabic lexicon by using a hash
function that converts each word (as input) to a corresponding unique integer
number (as output); these integer numbers are then used as indices into the
lexicon entries.
|
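The word-to-index idea above can be sketched with a hash-backed lexicon (here Python's built-in dict hashing stands in for the paper's hash function, and the transliterated entry is a hypothetical example):

```python
# Minimal hash-indexed lexicon: each word maps to a unique integer that
# serves as the index of its lexicon entry.
class Lexicon:
    def __init__(self):
        self._index = {}        # word -> entry id (unique integer)
        self._entries = []      # entry id -> entry data

    def add(self, word, definition):
        """Insert a word; return its unique integer index."""
        if word not in self._index:
            self._index[word] = len(self._entries)
            self._entries.append({"word": word, "definition": definition})
        return self._index[word]

    def lookup(self, word):
        """Constant-time lookup via the hash index."""
        i = self._index.get(word)
        return None if i is None else self._entries[i]

lex = Lexicon()
i = lex.add("kitab", "book")           # hypothetical transliterated entry
assert lex.lookup("kitab")["definition"] == "book"
assert lex.add("kitab", "book") == i   # same word -> same unique index
```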
1311.6048 | On the Design and Analysis of Multiple View Descriptors | cs.CV | We propose an extension of popular descriptors based on gradient orientation
histograms (HOG, computed in a single image) to multiple views. It hinges on
interpreting HOG as a conditional density in the space of sampled images, where
the effects of nuisance factors such as viewpoint and illumination are
marginalized. However, such marginalization is performed with respect to a very
coarse approximation of the underlying distribution. Our extension leverages
the fact that multiple views of the same scene allow separating intrinsic from
nuisance variability, and thus afford better marginalization of the latter. The
result is a descriptor that has the same complexity as single-view HOG, and can
be compared in the same manner, but exploits multiple views to better trade off
insensitivity to nuisance variability with specificity to intrinsic
variability. We also introduce a novel multi-view wide-baseline matching
dataset, consisting of a mixture of real and synthetic objects with
ground-truthed camera motion and dense three-dimensional geometry.
|
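The basic building block behind HOG-style descriptors discussed above, a magnitude-weighted histogram of gradient orientations, can be sketched as follows (a single global histogram rather than HOG's cell/block layout; the image and bin count are invented):

```python
import numpy as np

def orientation_histogram(img, bins=9):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist        # L1-normalize

# A vertical step edge: all gradients point horizontally,
# so the mass falls in the bin around orientation 0.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
h = orientation_histogram(img)
assert h[0] > 0.9
```

The multi-view extension in the paper aggregates such statistics across views of the same scene instead of a single image.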
1311.6049 | Skin Texture Recognition Using Neural Networks | cs.CV | Skin recognition is used in many applications ranging from algorithms for
face detection, hand gesture analysis, and to objectionable image filtering. In
this work, a skin recognition system was developed and tested. While many skin
segmentation algorithms rely on skin color alone, our work relies on both skin
color and texture features (features derived from the GLCM) to give better and
more efficient recognition of skin textures. We used feed-forward neural
networks to classify input texture images as skin or non-skin textures. The
system gave very encouraging results during the neural network generalization
phase.
|
1311.6054 | Q-learning optimization in a multi-agents system for image segmentation | cs.AI | Knowing which operators to apply and in which order, as well as assigning
good values to their parameters, is a challenge for users of computer vision.
This paper proposes a solution to this problem as a multi-agent system modeled
according to the Vowel approach and using the Q-learning algorithm to optimize
its choice. An implementation is given to test and validate this method.
|
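The Q-learning update at the heart of the approach above can be sketched on a tiny chain MDP (the multi-agent Vowel decomposition and the image-segmentation operators are not modeled here; the states, actions, and rewards are invented):

```python
import random

random.seed(0)

n_states, n_actions = 4, 2          # action 1 moves right, action 0 stays
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(s, a):
    """Toy dynamics: reward 1 when the agent is at the last state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else s
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(2000):               # episodes
    s = 0
    for _ in range(10):             # steps per episode
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # Core Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy should prefer moving right in every non-terminal state.
assert all(Q[s][1] > Q[s][0] for s in range(n_states - 1))
```

In the paper's setting, states and actions would instead encode segmentation-operator choices and parameter values per agent.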
1311.6062 | Wigner function description of entanglement swapping using parametric
down conversion: the role of vacuum fluctuations in teleportation | quant-ph cs.IT math.IT | We apply the Wigner formalism of quantum optics to study the role of the
zero-point field fluctuations in entanglement swapping produced via parametric
down conversion. It is shown that the generation of mode entanglement between
two initially non-interacting photons is related to the quadruple correlation
properties of the electromagnetic field, through the stochastic properties of
the vacuum. The relationship between the process of transferring entanglement
and the different zero-point inputs at the nonlinear crystal and the Bell-state
analyser is emphasized.
|
1311.6063 | NILE: Fast Natural Language Processing for Electronic Health Records | cs.CL | Objective: Narrative text in electronic health records (EHR) contains rich
information for medical and data science studies. This paper introduces the
design and performance of Narrative Information Linear Extraction (NILE), a
natural language processing (NLP) package for EHR analysis that we share with
the medical informatics community. Methods: NILE uses a modified prefix-tree
search algorithm for named entity recognition, which can detect prefix and
suffix sharing. The semantic analyses are implemented as rule-based finite
state machines. Analyses include negation, location, modification, family
history, and ignoring. Result: The processing speed of NILE is hundreds to
thousands of times faster than existing NLP software for medical text. The
accuracy of presence analysis of NILE is on par with the best performing models
on the 2010 i2b2/VA NLP challenge data. Conclusion: The speed, accuracy, and
the ability to operate via an API make NILE a valuable addition to the NLP
software for medical informatics and data science.
|
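The flavor of trie-based named entity recognition described above can be sketched with a minimal prefix-tree dictionary matcher (a simplification: NILE's actual algorithm also exploits prefix and suffix sharing between terms, and the terms and labels below are hypothetical):

```python
def build_trie(terms):
    """Build a token-level trie; '$' marks the end of a dictionary term."""
    root = {}
    for term, label in terms:
        node = root
        for tok in term.split():
            node = node.setdefault(tok, {})
        node["$"] = label
    return root

def find_entities(tokens, trie):
    """Scan the token sequence; report (start, end, label) dictionary hits."""
    hits = []
    for start in range(len(tokens)):
        node = trie
        for end in range(start, len(tokens)):
            node = node.get(tokens[end])
            if node is None:
                break
            if "$" in node:
                hits.append((start, end + 1, node["$"]))
    return hits

terms = [("myocardial infarction", "DIAGNOSIS"),
         ("chest pain", "SYMPTOM")]
trie = build_trie(terms)
text = "patient denies chest pain after myocardial infarction".split()
print(find_entities(text, trie))
# [(2, 4, 'SYMPTOM'), (5, 7, 'DIAGNOSIS')]
```

NILE's semantic layer (negation, family history, etc.) would then run as finite state machines over such hits, e.g. flagging that "denies" negates the SYMPTOM here.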
1311.6079 | Local Similarities, Global Coding: An Algorithm for Feature Coding and
its Applications | cs.CV cs.AI | Data coding, as a building block of several image processing algorithms, has
received great attention recently. Indeed, the importance of the locality
assumption in coding approaches is studied in numerous works and several
methods are proposed based on this concept. We probe this assumption and claim
that taking the similarity between a data point and a more global set of anchor
points does not necessarily weaken the coding method as long as the underlying
structure of the anchor points is taken into account. Based on this fact, we
propose to capture this underlying structure by assuming a random walker over
the anchor points. We show that our method is a fast approximate learning
algorithm based on the diffusion map kernel. The experiments on various
datasets show that making different state-of-the-art coding algorithms aware of
this structure boosts them in different learning tasks.
|
1311.6091 | A Primal-Dual Method for Training Recurrent Neural Networks Constrained
by the Echo-State Property | cs.LG cs.NE | We present an architecture of a recurrent neural network (RNN) with a
fully-connected deep neural network (DNN) as its feature extractor. The RNN is
equipped with both causal temporal prediction and non-causal look-ahead, via
auto-regression (AR) and moving-average (MA), respectively. The focus of this
paper is a primal-dual training method that formulates the learning of the RNN
as a formal optimization problem with an inequality constraint that provides a
sufficient condition for the stability of the network dynamics. Experimental
results demonstrate the effectiveness of this new method, which achieves 18.86%
phone recognition error on the TIMIT benchmark for the core test set. The
result approaches the best result of 17.7%, which was obtained by using RNN
with long short-term memory (LSTM). The results also show that the proposed
primal-dual training method produces lower recognition errors than the popular
RNN methods developed earlier based on the carefully tuned threshold parameter
that heuristically prevents the gradient from exploding.
|
1311.6092 | Platform-Based Design Methodology and Modeling for Aircraft Electric
Power Systems | cs.SY cs.SE | In an aircraft electric power system (EPS), a supervisory control unit must
actuate a set of switches to distribute power from generators to loads, while
satisfying safety, reliability and real-time performance requirements. To
reduce expensive re-design steps in current design methodologies, such a
control problem is generally addressed based on minor incremental changes on
top of consolidated solutions, since it is difficult to estimate the impact of
earlier design decisions on the final implementation. In this paper, we
introduce a methodology for the design space exploration and virtual
prototyping of EPS supervisory control protocols, following the platform-based
design (PBD) paradigm. Moreover, we describe the modeling infrastructure that
supports the methodology. In PBD, design space exploration is carried out as a
sequence of refinement steps from the initial specification towards a final
implementation, by mapping higher-level behavioral models into a set of library
components at a lower level of abstraction. In our flow, the system
specification is captured using SysML requirement and structure diagrams.
State-machine diagrams enable verification of the control protocol at a high
level of abstraction, while lower-level hybrid models, implemented in Simulink,
are used to verify properties related to physical quantities, such as time,
voltage and current values. The effectiveness of our approach is illustrated on
a prototype EPS control protocol design.
|
1311.6094 | Flexibility of Commercial Building HVAC Fan as Ancillary Service for
Smart Grid | cs.SY | In this paper, we model energy use in commercial buildings using empirical
data captured through sMAP, a campus building data portal at UC Berkeley. We
conduct at-scale experiments in a newly constructed building on campus. By
modulating the supply duct static pressure (SDSP) for the main supply air duct,
we induce a response on the main supply fan and determine how much ancillary
power flexibility can be provided by a typical commercial building. We show
that the consequent intermittent fluctuations in the air mass flow into the
building do not influence the building climate in a human-noticeable way. We
estimate that at least 4 GW of regulation reserve is readily available through
commercial buildings in the US alone. Based on predictions, this value will
reach 5.6 GW by 2035. We also show how thermal slack can be leveraged to
provide an ancillary service to deal with transient frequency fluctuations in
the grid. We consider a simplified model of the grid power system with time
varying demand and generation and present a simple control scheme to direct the
ancillary service power flow from buildings to improve on the classical
automatic generation control (AGC)-based approach. Simulation results are
provided to show the effectiveness of the proposed methodology for enhancing
grid frequency regulation.
|
1311.6107 | Off-policy reinforcement learning for $ H_\infty $ control design | cs.SY cs.LG math.OC stat.ML | The $H_\infty$ control design problem is considered for nonlinear systems
with unknown internal system model. It is known that the nonlinear $ H_\infty $
control problem can be transformed into solving the so-called
Hamilton-Jacobi-Isaacs (HJI) equation, which is a nonlinear partial
differential equation that is generally impossible to solve analytically.
Even worse, model-based approaches cannot be used for approximately solving HJI
equation, when the accurate system model is unavailable or costly to obtain in
practice. To overcome these difficulties, an off-policy reinforcement learning
(RL) method is introduced to learn the solution of the HJI equation from real
system data instead of mathematical system model, and its convergence is
proved. In the off-policy RL method, the system data can be generated with
arbitrary policies rather than the evaluating policy, which is extremely
important and promising for practical systems. For implementation purpose, a
neural network (NN) based actor-critic structure is employed and a least-square
NN weight update algorithm is derived based on the method of weighted
residuals. Finally, the developed NN-based off-policy RL method is tested on a
linear F16 aircraft plant, and further applied to a rotational/translational
actuator system.
|
1311.6149 | Agent Approach in Support of Enterprise Application Integration | cs.MA | The present approach highlights the synergies between application integration
and interaction protocols. Since both fields have advanced in different
directions, a number of important technical problems can be addressed by their
proper synthesis. In our previous work, we proposed a methodological approach
based on Interaction Protocols for Enterprise Applica tion Integration (EAI).
This approach permits to specify MAS (Multi-Agent System) interaction
protocols, verify their behavior and use them to integrate multiple business
applications. The result of the proposed approach is a validated interaction
protocol. Based on this protocol, we define in this paper, an agent- based
architecture for the EAI. It includes all the concepts nec- essary to support
communication and coordination mechanisms such as inter-agent and agent-Web
services communication.
|
1311.6163 | Analytical Studies of Quasi Steady-State Model in Power System Long-Term
Stability Analysis | cs.SY | In this paper, a theoretical foundation for the Quasi Steady-State (QSS)
model in power system long-term stability analysis is developed. Sufficient
conditions under which the QSS model gives accurate approximations of the
long-term stability model in terms of trajectory and ω-limit set are derived.
These sufficient conditions provide some physical insights regarding the reason
for the failure of the QSS model. Additionally, several numerical examples are
presented to illustrate the analytical results derived.
|
1311.6165 | Automated identification and characterization of parcels (AICP) with
OpenStreetMap and Points of Interest | cs.CY cs.DB | Given the paucity of urban parcel data in China, this paper proposes a method
to automatically identify and characterize parcels (AICP) with OpenStreetMap
(OSM) and Points of Interest (POI) data. Parcels are the basic spatial units
for fine-scale urban modeling, urban studies, as well as spatial planning.
Conventional ways of identification and characterization of parcels rely on
remote sensing and field surveys, which are labor intensive and
resource-consuming. Poorly developed digital infrastructure, limited resources,
and institutional barriers have all hampered the gathering and application of
parcel data in developing countries. Against this backdrop, we employ OSM road
networks to identify parcel geometries and POI data to infer parcel
characteristics. A vector-based CA model is adopted to select urban parcels.
The method is applied to the entire state of China and identifies 82,645 urban
parcels in 297 cities. Notwithstanding all the caveats of open and/or
crowd-sourced data, our approach could produce a reasonably good approximation of
parcels identified from conventional methods, thus having the potential to
become a useful supplement.
|
1311.6178 | Minimum Delay Huffman Code in Backward Decoding Procedure | cs.IT math.IT | For some applications where the speed of decoding and fault tolerance are
important, such as video storage, one successful answer is fix-free codes.
These codes have been applied in standards such as H.263+ and MPEG-4.
The cost of using fix-free codes is increased code redundancy, which means
more bits are needed to represent any piece of information. Thus we
investigated the use of Huffman Codes with low and
negligible backward decoding delay. We showed that for almost all cases there
is always a Minimum Delay Huffman Code for a given length vector. The average
delay of this code for anti-uniform sources is calculated, agrees with the
simulations, and is shown to be one bit for large
alphabet sources. Also, an algorithm with good performance is proposed to
find the minimum delay code.
|
1311.6184 | Bounding the Test Log-Likelihood of Generative Models | cs.LG | Several interesting generative learning algorithms involve a complex
probability distribution over many random variables, involving intractable
normalization constants or latent variable normalization. Some of them may
not even have an analytic expression for the unnormalized probability function and
no tractable approximation. This makes it difficult to estimate the quality of
these models, once they have been trained, or to monitor their quality (e.g.
for early stopping) while training. A previously proposed method is based on
constructing a non-parametric density estimator of the model's probability
function from samples generated by the model. We revisit this idea, propose a
more efficient estimator, and prove that it provides a lower bound on the true
test log-likelihood, and an unbiased estimator as the number of generated
samples goes to infinity, although one that incorporates the effect of poor
mixing. We further propose a biased variant of the estimator that can be used
reliably with a finite number of samples for the purpose of model comparison.
|
1311.6199 | Battery Placement on Performance of VAR Controls | cs.SY | The battery is gaining greater attention in the development of the smart grid
as an energy storage device that can be integrated with a Photovoltaic (PV)
cell in the distribution circuit. As more PVs are connected to the system,
real power injection into the distribution circuit can cause fluctuations in the voltage. Due
to the rapid fluctuation of the voltage, a more advanced volt-ampere reactive
(VAR) power control scheme on a fast time scale is used to minimize the voltage
deviation on the distribution. Employing both global and local dynamic VAR
control schemes in our previous work, we show the effects of battery placement
on the performance of VAR controls in the example of a single branch radial
distribution circuit. Simulations verify that having battery placement at the
rear in the distribution circuit can provide smaller voltage variations and
higher energy savings than front battery placement when used with dynamic VAR
control algorithms.
|
1311.6211 | Novelty Detection Under Multi-Instance Multi-Label Framework | cs.LG | Novelty detection plays an important role in machine learning and signal
processing. This paper studies novelty detection in a new setting where the
data object is represented as a bag of instances and associated with multiple
class labels, referred to as multi-instance multi-label (MIML) learning.
Contrary to the common assumption in MIML that each instance in a bag belongs
to one of the known classes, in novelty detection, we focus on the scenario
where bags may contain novel-class instances. The goal is to determine, for any
given instance in a new bag, whether it belongs to a known class or a novel
class. Detecting novelty in the MIML setting captures many real-world phenomena
and has many potential applications. For example, in a collection of tagged
images, the tag may only cover a subset of objects existing in the images.
Discovering an object whose class has not been previously tagged can be useful
for the purpose of soliciting a label for the new object class. To address this
novel problem, we present a discriminative framework for detecting new class
instances. Experiments demonstrate the effectiveness of our proposed method,
and reveal that the presence of unlabeled novel instances in training bags is
helpful to the detection of such instances in the testing stage.
|
1311.6215 | Using virtual parts to optimize the metrology process | cs.CE | In the measurement process, there are many parameters affecting the
measurement results: the influence of the probe system, the material
stiffness of the measured workpiece, the calibration of the probe with a
reference sphere, and thermal effects. We want to obtain the limits of a measurement methodology to
be able to validate a result. The study is applied to a simple part. We observe
the dispersion of the position of different drilled holes (XYZ values in a
coordinate system) when we change the quality of the part and the method of
calculation. We use the Design of Experiment (Taguchi method) to realize our
study. We study the influence of the part quality on the measurement results. We
consider two parameters to define the part quality (flatness and
perpendicularity). We will also study the influence of different methods of
calculation to determine the coordinate system. We can use two options in
Metrolog XG software (tangent plane with or without orientation constraint).
The originality of this paper is that we present a method for the design of
experiment that uses CATIA (CAD system) to generate the measured parts. In this
way we can realize a design of experiment with a larger number of experimental
results. This is a positive point for a statistical analysis. We are also free
to define the parts we want to study without manufacturing difficulties.
|
1311.6227 | Experience of Developing a Meta-Semantic Search Engine | cs.IR | Today's web search scenario, which is mainly keyword based, leads to the need
for the effective and meaningful search provided by the Semantic Web.
Existing search engines often fail to provide relevant answers to user
queries due to their dependency on the simple data available in web pages. On
the other hand, semantic search engines provide efficient and relevant
results, as the semantic web manages information with well-defined meaning
using ontologies. A Meta-Search engine is a search tool that forwards a
user's query to several existing search engines and provides combined results using its own page
ranking algorithm. SemanTelli is a meta semantic search engine that fetches
results from different semantic search engines such as Hakia, DuckDuckGo,
SenseBot through intelligent agents. This paper proposes enhancement of
SemanTelli with improved snippet analysis based page ranking algorithm and
support for image and news search.
|
1311.6229 | Intelligent Agent for Prediction in E- Negotiation: An Approach | cs.MA | With the proliferation of web technologies it becomes more and more important
to make the traditional negotiation pricing mechanism automated and
intelligent. The behaviour of software agents which negotiate on behalf of
humans is determined by their tactics in the form of decision functions.
Prediction of a partner's behaviour in negotiation has been an active research
direction in recent years, as it improves the utility gain for the adaptive
negotiation agent and helps reach an agreement more quickly or secure
higher benefits. In this paper we review the various negotiation methods
and the existing architecture. Although negotiation is a very complex
activity to automate without human intervention, we propose an architecture
for predicting the opponent's behaviour that takes into consideration
various factors which affect the process of negotiation. The basic concept is
that the information about negotiators, their individual actions and dynamics
can be used by software agents equipped with adaptive capabilities to learn
from past negotiations and assist in selecting appropriate negotiation tactics.
|
1311.6233 | Agent Based Negotiation using Cloud - an Approach in E-Commerce | cs.MA | Cloud computing allows subscription based access to computing. It also allows
storage services over Internet. Automated Negotiation is becoming an emerging,
and important area in the field of Multi Agent Systems in E-Commerce. Multi
Agent based negotiation system is necessary to increase the efficiency of
E-negotiation process. Cloud computing provides security and privacy to the
user data and low maintenance costs. We propose a Negotiation system using
cloud. In this system, all product information and multiple agent details are
stored on cloud. Both parties select their agents through cloud for
negotiation. An agent acts as a negotiator. Agents hold users' details and their
requirements for a particular product. Using the user's requirements, agents
negotiate on issues such as price, volume, duration, quality and so on.
After completing negotiation process, agents give feedback to the user about
whether negotiation is successful or not. This negotiation system is dynamic
in nature: the number of agents grows with the number of participating users.
|
1311.6239 | Fundamental performance limits for ideal decoders in high-dimensional
linear inverse problems | cs.IT math.IT | This paper focuses on characterizing the fundamental performance limits that
can be expected from an ideal decoder given a general model, i.e., a general
subset of "simple" vectors of interest. First, we extend the so-called notion
of instance optimality of a decoder to settings where one only wishes to
reconstruct some part of the original high dimensional vector from a
low-dimensional observation. This covers practical settings such as medical
imaging of a region of interest, or audio source separation when one is only
interested in estimating the contribution of a specific instrument to a musical
recording. We define instance optimality relatively to a model much beyond the
traditional framework of sparse recovery, and characterize the existence of an
instance optimal decoder in terms of joint properties of the model and the
considered linear operator. Noiseless and noise-robust settings are both
considered. We show somewhat surprisingly that the existence of noise-aware
instance optimal decoders for all noise levels implies the existence of a
noise-blind decoder. A consequence of our results is that for models that are
rich enough to contain an orthonormal basis, the existence of an L2/L2 instance
optimal decoder is only possible when the linear operator is not substantially
dimension-reducing. This covers well-known cases (sparse vectors, low-rank
matrices) as well as a number of seemingly new situations (structured sparsity
and sparse inverse covariance matrices for instance). We exhibit an
operator-dependent norm which, under a model-specific generalization of the
Restricted Isometry Property (RIP), always yields a feasible instance
optimality property. This norm can be upper bounded by an atomic norm relative
to the considered model.
|
1311.6240 | A Decision Tree Approach to Classify Web Services using Quality
Parameters | cs.IR | With the increase in the number of web services, many web services are
available on the internet providing the same functionality, making it difficult
to choose the best one that fulfils all of a user's requirements. This problem can be
solved by considering the quality of web services to distinguish functionally
similar web services. Nine different quality parameters are considered. Web
services can be classified and ranked using a decision tree approach, since
decision trees do not require a long training period and can be easily interpreted. Various
decision tree and rules approaches available are applied and tested to find the
optimal decision method to correctly classify functionally similar web services
considering their quality parameters.
|
1311.6243 | Web-page Indexing based on the Prioritize Ontology Terms | cs.IR | Globalization has become a basic and popular human trend. To globalize
information, people publish their documents on the internet. As a result, the
information volume of the internet has become huge. To handle that huge
volume of information, Web searchers use search engines. The
Webpage indexing mechanism of a search engine plays a big role in retrieving
Web search results quickly from the huge volume of Web resources. Web
researchers have introduced various types of Web-page indexing mechanism to
retrieve Webpages from Webpage repository. In this paper, we have illustrated a
new approach to the design and development of Webpage indexing. The proposed
Webpage indexing mechanism is applied to domain-specific Webpages, and we
identify the Webpage domain based on an Ontology. In our approach, first we
prioritize the Ontology terms that exist in the Webpage content then apply our
own indexing mechanism to index that Webpage. The main advantage of storing an
index is to optimize the speed and performance while finding relevant documents
from the domain specific search engine storage area for a user given search
query.
|
1311.6245 | A Model Approach to Build Basic Ontology | cs.IR | As today's world grows with technology, it also seems to shrink through the
World Wide Web. With the use of the Internet, more and more information can
be searched for on the web. When users fire a query, they want relevant
results. In general, search engines perform the ranking of web pages in an
offline mode, that is, after the web pages have been retrieved and stored in
the database. But most of the time this method does not provide relevant
results, as most search engines use ranking algorithms such as PageRank,
HITS, SALSA and Hilltop. These algorithms do not always provide results based
on the semantic web. So the concept of Ontology has been introduced in search
engines to get more meaningful and relevant results with respect to the
user's query. Ontologies are used to capture
knowledge about some domain of interest. Ontology describes the concepts in the
domain and also the relationships that hold between those concepts. Different
ontology languages provide different facilities. The most recent development in
standard ontology languages is OWL (Ontology Web Language) from the World Wide
Web Consortium. OWL makes it possible to describe concept to its full extent
and enables the search engines to provide accurate results to the user.
|
1311.6247 | Full-Duplex Relaying with Half-Duplex Relays | cs.IT math.IT | We consider "virtual" full-duplex relaying by means of half-duplex relays. In
this configuration, each relay stage in a multi-hop relaying network is formed
by at least two relays, used alternatively in transmit and receive modes, such
that while one relay transmits its signal to the next stage, the other relay
receives a signal from the previous stage. With such a pipelined scheme, the
source is active and sends a new information message in each time slot. We
consider the achievable rates for different coding schemes and compare them
with a cut-set upper bound, which is tight in certain conditions. In
particular, we show that both lattice-based Compute and Forward (CoF) and
Quantize reMap and Forward (QMF) yield attractive performance and can be easily
implemented. In particular, QMF in this context does not require "long"
messages and joint (non-unique) decoding, if the quantization mean-square
distortion at the relays is chosen appropriately. Also, in the multi-hop case
the gap of QMF from the cut-set upper bound grows logarithmically with the
number of stages, and not linearly as in the case of "noise level"
quantization. Furthermore, we show that CoF is particularly attractive in the
case of multi-hop relaying, when the channel gains have fluctuations not larger
than 3dB, yielding a rate that does not depend on the number of relaying
stages. In particular, we argue that such architecture may be useful for a
wireless backhaul with line-of-sight propagation between the relays.
|
1311.6272 | Service based high-speed railway base station arrangement | cs.IT math.IT | To provide stable and high data rate wireless access for passengers in the
train, it is necessary to properly deploy base stations along the railway. We
consider this issue from the perspective of service, which is defined as the
integral of the time-varying instantaneous channel capacity. With large-scale
fading assumption, it will be shown that the total service of each base station
is inversely proportional to the velocity of the train. Besides, we find that
if the ratio of the service provided by a base station in its service region to
its total service is given, the base station interval (i.e. the distance
between two adjacent base stations) is a constant regardless of the velocity of
the train. On the other hand, if a certain amount of service is required, the
interval will increase with the velocity of the train. The above results apply
not only to simple curve rails, like line rail and arc rail, but also to any
irregular curve rail, provided that the train is travelling at a constant
velocity. Furthermore, the new developed results are applied to analyze the
on-off transmission strategy of base stations.
|
1311.6275 | Channel Service Based High Speed Railway Base Station Arrangement | cs.IT math.IT | With the rapid development of high-speed railways, demands on high mobility
wireless communication increase greatly. To provide stable and high data rate
wireless access for users in the train, it is necessary to properly deploy base
stations along the railway. In this paper, we consider this issue from the
perspective of channel service which is defined as the integral of the
time-varying instantaneous channel capacity. We show that the total service
quantity of each base station is a constant. In order to keep high
service efficiency of the railway communication system with multiple base
stations along the railway, we need to use the time division to schedule the
multiple stations and allow one base station to work when the train is running
close to it. In this way, we find that if the ratio of the service quantity
provided by each station to its total service quantity is given, the base
station interval (i.e. the distance between two adjacent base stations) is a
constant, regardless of the speed of the train. On the other hand, the interval
between two neighboring base stations will increase with the speed of the
train. Furthermore, using the concept of channel service, we also analyze the
transmission strategy of base stations.
|
1311.6334 | Learning Reputation in an Authorship Network | cs.SI cs.IR cs.LG stat.ML | The problem of searching for experts in a given academic field is hugely
important in both industry and academia. We study exactly this issue with
respect to a database of authors and their publications. The idea is to use
Latent Semantic Indexing (LSI) and Latent Dirichlet Allocation (LDA) to perform
topic modelling in order to find authors who have worked in a query field. We
then construct a coauthorship graph and motivate the use of influence
maximisation and a variety of graph centrality measures to obtain a ranked list
of experts. The ranked lists are further improved using a Markov Chain-based
rank aggregation approach. The complete method is readily scalable to large
datasets. To demonstrate the efficacy of the approach we report on an extensive
set of computational simulations using the Arnetminer dataset. An improvement
in mean average precision is demonstrated over the baseline case of simply
using the order of authors found by the topic models.
|
1311.6335 | SOFA: An Extensible Logical Optimizer for UDF-heavy Dataflows | cs.DB | Recent years have seen an increased interest in large-scale analytical
dataflows on non-relational data. These dataflows are compiled into execution
graphs scheduled on large compute clusters. In many novel application areas the
predominant building blocks of such dataflows are user-defined predicates or
functions (UDFs). However, the heavy use of UDFs is not well taken into account
for dataflow optimization in current systems.
SOFA is a novel and extensible optimizer for UDF-heavy dataflows. It builds
on a concise set of properties for describing the semantics of Map/Reduce-style
UDFs and a small set of rewrite rules, which use these properties to find a
much larger number of semantically equivalent plan rewrites than possible with
traditional techniques. A salient feature of our approach is extensibility: We
arrange user-defined operators and their properties into a subsumption
hierarchy, which considerably eases integration and optimization of new
operators. We evaluate SOFA on a selection of UDF-heavy dataflows from
different domains and compare its performance to three other algorithms for
dataflow optimization. Our experiments reveal that SOFA finds efficient plans,
outperforming the best plans found by its competitors by a factor of up to 6.
|
1311.6355 | Exploration in Interactive Personalized Music Recommendation: A
Reinforcement Learning Approach | cs.MM cs.IR cs.LG | Current music recommender systems typically act in a greedy fashion by
recommending songs with the highest user ratings. Greedy recommendation,
however, is suboptimal over the long term: it does not actively gather
information on user preferences and fails to recommend novel songs that are
potentially interesting. A successful recommender system must balance the needs
to explore user preferences and to exploit this information for recommendation.
This paper presents a new approach to music recommendation by formulating this
exploration-exploitation trade-off as a reinforcement learning task called the
multi-armed bandit. To learn user preferences, it uses a Bayesian model, which
accounts for both audio content and the novelty of recommendations. A
piecewise-linear approximation to the model and a variational inference
algorithm are employed to speed up Bayesian inference. One additional benefit
of our approach is a single unified model for both music recommendation and
playlist generation. Both simulation results and a user study indicate strong
potential for the new approach.
|
1311.6360 | Performance Guarantees for Adaptive Estimation of Sparse Signals | cs.IT math.IT stat.ME | This paper studies adaptive sensing for estimating the nonzero amplitudes of
a sparse signal with the aim of providing analytical guarantees on the
performance gain due to adaptive resource allocation. We consider a previously
proposed optimal two-stage policy for allocating sensing resources. For
positive powers q, we derive tight upper bounds on the mean qth-power error
resulting from the optimal two-stage policy and corresponding lower bounds on
the improvement over non-adaptive uniform sensing. It is shown that the
adaptation gain is related to the detectability of nonzero signal components as
characterized by Chernoff coefficients, thus quantifying analytically the
dependence on the sparsity level of the signal, the signal-to-noise ratio, and
the sensing resource budget. For fixed sparsity levels and increasing
signal-to-noise ratio or sensing budget, we obtain the rate of convergence to
oracle performance and the rate at which the fraction of resources spent on the
first exploratory stage decreases to zero. For a vanishing fraction of nonzero
components, the gain increases without bound as a function of signal-to-noise
ratio and sensing budget. Numerical simulations demonstrate that the bounds on
adaptation gain are quite tight in non-asymptotic regimes as well.
|
1311.6371 | On Approximate Inference for Generalized Gaussian Process Models | stat.ML cs.CV cs.LG | A generalized Gaussian process model (GGPM) is a unifying framework that
encompasses many existing Gaussian process (GP) models, such as GP regression,
classification, and counting. In the GGPM framework, the observation likelihood
of the GP model is itself parameterized using the exponential family
distribution (EFD). In this paper, we consider efficient algorithms for
approximate inference on GGPMs using the general form of the EFD. A particular
GP model and its associated inference algorithms can then be formed by changing
the parameters of the EFD, thus greatly simplifying its creation for
task-specific output domains. We demonstrate the efficacy of this framework by
creating several new GP models for regressing to non-negative reals and to real
intervals. We also consider a closed-form Taylor approximation for efficient
inference on GGPMs, and elaborate on its connections with other model-specific
heuristic closed-form approximations. Finally, we present a comprehensive set
of experiments to compare approximate inference algorithms on a wide variety of
GGPMs.
|
1311.6372 | Analysis of block-preconditioners for models of coupled magma/mantle
dynamics | math.NA cs.CE physics.geo-ph | This article considers the iterative solution of a finite element
discretisation of the magma dynamics equations. In simplified form, the magma
dynamics equations share some features of the Stokes equations. We therefore
formulate, analyse and numerically test an Elman, Silvester and Wathen-type
block preconditioner for magma dynamics. We prove analytically and demonstrate
numerically the optimality of the preconditioner. The presented analysis
highlights the dependence of the preconditioner on parameters in the magma
dynamics equations that can affect convergence of iterative linear solvers. The
analysis is verified through a range of two- and three-dimensional numerical
examples on unstructured grids, from simple illustrative problems through to
large problems on subduction zone-like geometries. The computer code to
reproduce all numerical examples is freely available as supporting material.
|
1311.6392 | A Comprehensive Approach to Universal Piecewise Nonlinear Regression
Based on Trees | cs.LG stat.ML | In this paper, we investigate adaptive nonlinear regression and introduce
tree based piecewise linear regression algorithms that are highly efficient and
provide significantly improved performance with guaranteed upper bounds in an
individual sequence manner. We use a tree notion in order to partition the
space of regressors in a nested structure. The introduced algorithms adapt not
only their regression functions but also the complete tree structure while
achieving the performance of the "best" linear mixture of a doubly exponential
number of partitions, with a computational complexity only polynomial in the
number of nodes of the tree. While constructing these algorithms, we also avoid
using any artificial "weighting" of models (with highly data dependent
parameters) and, instead, directly minimize the final regression error, which
is the ultimate performance goal. The introduced methods are generic such that
they can readily incorporate different tree construction methods such as random
trees in their framework and can use different regressor or partitioning
functions as demonstrated in the paper.
|
1311.6396 | A Unified Approach to Universal Prediction: Generalized Upper and Lower
Bounds | cs.LG | We study sequential prediction of real-valued, arbitrary and unknown
sequences under the squared error loss as well as the best parametric predictor
out of a large, continuous class of predictors. Inspired by recent results from
computational learning theory, we refrain from any statistical assumptions and
define the performance with respect to the class of general parametric
predictors. In particular, we present generic lower and upper bounds on this
relative performance by transforming the prediction task into a parameter
learning problem. We first introduce the lower bounds on this relative
performance in the mixture of experts framework, where we show that for any
sequential algorithm, there always exists a sequence for which the performance
of the sequential algorithm is lower bounded by zero. We then introduce a
sequential learning algorithm to predict such arbitrary and unknown sequences,
and calculate upper bounds on its total squared prediction error for every
bounded sequence. We further show that in some scenarios we achieve matching
lower and upper bounds demonstrating that our algorithms are optimal in a
strong minimax sense such that their performances cannot be improved further.
As an interesting result, we also prove that in the worst-case scenario the
performance of randomized algorithms can be achieved by sequential
algorithms, so that randomization does not improve the performance.
|
1311.6401 | A model for generating tunable clustering coefficients independent of
the number of nodes in scale free and random networks | physics.soc-ph cs.SI | Probabilistic networks display a wide range of high average clustering
coefficients independent of the number of nodes in the network. In particular,
the local clustering coefficient decreases with the degree of the subtending
node in a complicated manner not explained by any current models. While a
number of hypotheses have been proposed to explain some of these observed
properties, there are no solvable models that explain them all. We propose a
novel growth model for both random and scale free networks that is capable of
predicting both tunable clustering coefficients independent of the network
size, and the inverse relationship between the local clustering coefficient and
node degree observed in most networks.
|
1311.6402 | Robust Least Squares Methods Under Bounded Data Uncertainties | cs.SY | We study the problem of estimating an unknown deterministic signal that is
observed through an unknown deterministic data matrix under additive noise. In
particular, we present a minimax optimization framework to the least squares
problems, where the estimator has imperfect data matrix and output vector
information. We define the performance of an estimator relative to the
performance of the optimal least squares (LS) estimator tuned to the underlying
unknown data matrix and output vector, which is defined as the regret of the
estimator. We then introduce an efficient robust LS estimation approach that
minimizes this regret for the worst possible data matrix and output vector,
where we refrain from any structural assumptions on the data. We demonstrate
that minimizing this worst-case regret can be cast as a semi-definite
programming (SDP) problem. We then consider the regularized and structured LS
problems and present novel robust estimation methods by demonstrating that
these problems can also be cast as SDP problems. We illustrate the merits of
the proposed algorithms with respect to the well-known alternatives in the
literature through our simulations.
|
1311.6421 | Synchronous Context-Free Grammars and Optimal Linear Parsing Strategies | cs.FL cs.CL | Synchronous Context-Free Grammars (SCFGs), also known as syntax-directed
translation schemata, are unlike context-free grammars in that they do not have
a binary normal form. In general, parsing with SCFGs takes space and time
polynomial in the length of the input strings, but with the degree of the
polynomial depending on the permutations of the SCFG rules. We consider linear
parsing strategies, which add one nonterminal at a time. We show that for a
given input permutation, the problems of finding the linear parsing strategy
with the minimum space and time complexity are both NP-hard.
|
1311.6425 | Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching | math.OC cs.LG stat.ML | Graph matching is a challenging problem with very important applications in a
wide range of fields, from image and video analysis to biological and
biomedical problems. We propose a robust graph matching algorithm inspired by
sparsity-related techniques. We cast the problem, resembling group or
collaborative sparsity formulations, as a non-smooth convex optimization
problem that can be efficiently solved using augmented Lagrangian techniques.
The method can deal with weighted or unweighted graphs, as well as multimodal
data, where different graphs represent different types of data. The proposed
approach is also naturally integrated with collaborative graph inference
techniques, solving general network inference problems where the observed
variables, possibly coming from different modalities, are not in
correspondence. The algorithm is tested and compared with state-of-the-art
graph matching techniques in both synthetic and real graphs. We also present
results on multimodal graphs and applications to collaborative inference of
brain connectivity from alignment-free functional magnetic resonance imaging
(fMRI) data. The code is publicly available.
|
1311.6460 | Wavelet Transform-Based Analysis of QRS complex in ECG Signals | cs.CE | In the present paper we report a wavelet-based time-frequency
multiresolution analysis of an ECG signal. The ECG (electrocardiogram), which
records the heart's electrical activity, can provide useful information about
the type of cardiac disorder suffered by the patient, depending upon the
deviations from the normal ECG signal pattern. We plotted the coefficients of
the continuous wavelet transform using the Morlet wavelet. We used different
ECG signals available in the MIT-BIH database and performed a comparative
study. We demonstrated that the coefficient at a particular scale represents
the presence of the QRS complex very efficiently, irrespective of the type or
intensity of noise, the presence of unusually high-amplitude peaks other than
QRS peaks, and baseline drift errors. We believe that the current studies can
pave the way towards the development of lucid and time-efficient algorithms
for identifying and representing QRS complexes on ordinary computers and
processors.
|
1311.6492 | Performance Evaluation of Multiterminal Backhaul Compression for Cloud
Radio Access Networks | cs.IT math.IT | In cloud radio access networks (C-RANs), the baseband processing of the
available macro- or pico/femto-base stations (BSs) is migrated to control
units, each of which manages a subset of BS antennas. The centralized
information processing at the control units enables effective interference
management. The main roadblock to the implementation of C-RANs is the
effective integration of the radio units, i.e., the BSs, with the backhaul
network. This work first reviews in a unified way recent results on the
application of advanced multiterminal, as opposed to standard point-to-point,
backhaul compression techniques. The gains provided by multiterminal backhaul
compression are then confirmed via extensive simulations based on standard
cellular models. As an example, it is observed that multiterminal compression
strategies provide performance gains of more than 60% for both the uplink and
the downlink in terms of the cell-edge throughput.
|
1311.6500 | Stitched Panoramas from Toy Airborne Video Cameras | cs.CV | Effective panoramic photographs are taken from vantage points that are high.
High vantage points have recently become easier to reach as the cost of
quadrotor helicopters has dropped to nearly disposable levels. Although cameras
carried by such aircraft weigh only a few grams, their low-quality video can be
converted into panoramas of high quality and high resolution. Also, the small
size of these aircraft vastly reduces the risks inherent to flight.
|
1311.6510 | Are all training examples equally valuable? | cs.CV cs.LG stat.ML | When learning a new concept, not all training examples may prove equally
useful for training: some may have higher or lower training value than others.
The goal of this paper is to bring to the attention of the vision community the
following considerations: (1) some examples are better than others for training
detectors or classifiers, and (2) in the presence of better examples, some
examples may negatively impact performance and removing them may be beneficial.
In this paper, we propose an approach for measuring the training value of an
example, and use it for ranking and greedily sorting examples. We test our
methods on different vision tasks, models, datasets and classifiers. Our
experiments show that the performance of current state-of-the-art detectors and
classifiers can be improved when training on a subset, rather than the whole
training set.
|
1311.6526 | The Untold Story of the Clones: Content-agnostic Factors that Impact
YouTube Video Popularity | cs.SI cs.CY | Video dissemination through sites such as YouTube can have widespread impacts
on opinions, thoughts, and cultures. Not all videos will reach the same
popularity and have the same impact. Popularity differences arise not only
because of differences in video content, but also because of other
"content-agnostic" factors. The latter factors are of considerable interest but
it has been difficult to accurately study them. For example, videos uploaded by
users with large social networks may tend to be more popular because they tend
to have more interesting content, not because social network size has a
substantial direct impact on popularity. In this paper, we develop and apply a
methodology that is able to accurately assess, both qualitatively and
quantitatively, the impacts of various content-agnostic factors on video
popularity. When controlling for video content, we observe a strong linear
"rich-get-richer" behavior, with the total number of previous views as the most
important factor except for very young videos. The second most important factor
is found to be video age. We analyze a number of phenomena that may contribute
to rich-get-richer, including the first-mover advantage, and search bias
towards popular videos. For young videos we find that factors other than the
total number of previous views, such as uploader characteristics and number of
keywords, become relatively more important. Our findings also confirm that
inaccurate conclusions can be reached when not controlling for content.
|
1311.6531 | Brains and pseudorandom generators | math.DS cs.CR cs.NE math.NA | In a pioneering classic, Warren McCulloch and Walter Pitts proposed a model
of the central nervous system; motivated by EEG recordings of normal brain
activity, Chvátal and Goldsmith asked whether or not this model can be
engineered to provide pseudorandom number generators. We supply evidence
suggesting that the answer is negative.
|
1311.6536 | Universal Codes from Switching Strategies | cs.IT cs.LG math.IT | We discuss algorithms for combining sequential prediction strategies, a task
which can be viewed as a natural generalisation of the concept of universal
coding. We describe a graphical language based on Hidden Markov Models for
defining prediction strategies, and we provide both existing and new models as
examples. The models include efficient, parameterless models for switching
between the input strategies over time, including a model for the case where
switches tend to occur in clusters, and finally a new model for the scenario
where the prediction strategies have a known relationship, and where jumps are
typically between strongly related ones. This last model is relevant for coding
time series data where parameter drift is expected. As theoretical contributions
we introduce an interpolation construction that is useful in the development
and analysis of new algorithms, and we establish a new sophisticated lemma for
analysing the individual sequence regret of parameterised models.
|
1311.6543 | ReputationPro: The Efficient Approaches to Contextual Transaction Trust
Computation in E-Commerce Environments | cs.DS cs.DB | In e-commerce environments, the trustworthiness of a seller is critically
important to potential buyers, especially when the seller is unknown to them.
Most existing trust evaluation models compute a single value to reflect the
general trust level of a seller without taking any transaction context
information into account. In this paper, we first present a trust vector
consisting of three values for Contextual Transaction Trust (CTT). In the
computation of the three CTT values, three identified important context
dimensions, namely product category, transaction amount and transaction
time, are taken into account. In particular, with different parameters
regarding context dimensions that are specified by a buyer, different sets of
CTT values can be calculated. As a result, all these values can outline the
reputation profile of a seller that indicates the dynamic trust levels of a
seller in different product categories, price ranges, time periods, and any
necessary combination of them. We term this new model ReputationPro.
However, in ReputationPro, the computation of the reputation profile requires novel
algorithms for the precomputation of aggregates over large-scale ratings and
transaction data of three context dimensions as well as new data structures for
appropriately indexing aggregation results to promptly answer buyers' CTT
requests. To solve these challenging problems, we then propose a new index
scheme CMK-tree. After that, we further extend CMK-tree and propose a
CMK-treeRS approach to reduce the storage space allocated to each seller.
Finally, the experimental results illustrate that the CMK-tree is superior in
efficiency to all three existing approaches in the literature for computing
CTT values. In addition, even with reduced storage space, the CMK-treeRS
approach can further improve the performance in answering buyers' CTT queries.
|
1311.6547 | Practical Inexact Proximal Quasi-Newton Method with Global Complexity
Analysis | cs.LG math.OC stat.ML | Recently several methods were proposed for sparse optimization which make
careful use of second-order information [10, 28, 16, 3] to improve local
convergence rates. These methods construct a composite quadratic approximation
using Hessian information, optimize this approximation using a first-order
method such as coordinate descent, and employ a line search to ensure
sufficient descent. Here we propose a general framework, which includes
slightly modified versions of existing algorithms and also a new algorithm,
which uses limited memory BFGS Hessian approximations, and provide a novel
global convergence rate analysis, which covers methods that solve subproblems
via coordinate descent.
|
1311.6556 | Double Ramp Loss Based Reject Option Classifier | cs.LG | We consider the problem of learning reject option classifiers. The goodness
of a reject option classifier is quantified using the $0-d-1$ loss function,
wherein a loss $d \in (0, 0.5)$ is assigned for rejection. In this paper, we
propose the {\em double ramp loss} function, which gives a continuous upper
bound for the $0-d-1$ loss. Our approach is based on minimizing the
regularized risk under the double
ramp loss using {\em difference of convex (DC) programming}. We show the
effectiveness of our approach through experiments on synthetic and benchmark
datasets. Our approach performs better than the state-of-the-art reject option
classification approaches.
|
1311.6570 | XQuery Streaming by Forest Transducers | cs.DB | Streaming of XML transformations is a challenging task and only very few
systems support streaming. Research approaches generally define custom
fragments of XQuery and XPath that are amenable to streaming, and then design
custom algorithms for each fragment. These languages have several shortcomings.
Here we take a more principled approach to the problem of streaming
XQuery-based transformations. We start with an elegant transducer model for
which many static analysis problems are well-understood: the Macro Forest
Transducer (MFT). We show that a large fragment of XQuery can be translated
into MFTs --- indeed, a fragment of XQuery that can express important features
that are missing from other XQuery stream engines, such as GCX: our fragment of
XQuery supports XPath predicates and let-statements. We then rely on a
streaming execution engine for MFTs, one which uses a well-founded set of
optimizations from functional programming, such as strictness analysis and
deforestation. Our prototype achieves time and memory efficiency comparable to
the fastest known engine for XQuery streaming, GCX. This is surprising because
our engine relies on OCaml's built-in garbage collector and does not use any
specialized buffer management, while GCX's efficiency is due to clever and
explicit buffer management.
|
1311.6578 | Reverse Proxy Framework using Sanitization Technique for Intrusion
Prevention in Database | cs.DB cs.CR | With the increasing importance of the Internet in our day-to-day life, data
security in web applications has become very crucial. Ever-increasing online
and real-time transaction services have led to a manifold rise in the problems
associated with database security. Attackers use illegal and unauthorized
approaches to hijack confidential information like usernames, passwords and
other vital details. Hence, real-time transactions require security against
web-based attacks. SQL injection and cross-site scripting are the most common
application-layer attacks. The SQL injection attacker passes SQL statements
through a web application's input fields, URL or hidden parameters and gets
access to the database or updates it. The attacker takes advantage of
user-provided data in such a way that the user's input is handled as SQL code.
Using this vulnerability, an attacker can execute SQL commands directly on
the database. SQL injection attacks are among the most serious threats, as
they take user input and integrate it into an SQL query. Reverse Proxy is a
technique used to sanitize user inputs that may transform into a database
attack. In this technique a data redirector program redirects the user's
input to the proxy server before it is sent to the application server. At the
proxy server, a data cleaning algorithm is triggered using a sanitizing
application. In this framework we include detection and sanitization of the
tainted information being sent to the database and propose a new prototype.
|
1311.6591 | On the Complexity and Approximation of Binary Evidence in Lifted
Inference | cs.AI | Lifted inference algorithms exploit symmetries in probabilistic models to
speed up inference. They show impressive performance when calculating
unconditional probabilities in relational models, but often resort to
non-lifted inference when computing conditional probabilities. The reason is
that conditioning on evidence breaks many of the model's symmetries, which can
preempt standard lifting techniques. Recent theoretical results show, for
example, that conditioning on evidence which corresponds to binary relations is
#P-hard, suggesting that no lifting is to be expected in the worst case. In
this paper, we balance this negative result by identifying the Boolean rank of
the evidence as a key parameter for characterizing the complexity of
conditioning in lifted inference. In particular, we show that conditioning on
binary evidence with bounded Boolean rank is efficient. This opens up the
possibility of approximating evidence by a low-rank Boolean matrix
factorization, which we investigate both theoretically and empirically.
|
1311.6594 | Auto-adaptative Laplacian Pyramids for High-dimensional Data Analysis | cs.AI cs.LG stat.ML | Non-linear dimensionality reduction techniques such as manifold learning
algorithms have become a common way for processing and analyzing
high-dimensional patterns that often have an attached target corresponding to
the value of an unknown function. Their application to new points consists of
two steps: first, embedding the new data point into the low-dimensional space
and then, estimating the function value on the test point from its neighbors in
the embedded space.
However, finding the low dimension representation of a test point, while easy
for simple but often not powerful enough procedures such as PCA, can be much
more complicated for methods that rely on some kind of eigenanalysis, such as
Spectral Clustering (SC) or Diffusion Maps (DM). Similarly, when a target
function is to be evaluated, averaging methods like nearest neighbors may give
unstable results if the function is noisy. Thus, the smoothing of the target
function with respect to the intrinsic, low-dimensional representation that
describes the geometric structure of the examined data is a challenging task.
In this paper we propose Auto-adaptive Laplacian Pyramids (ALP), an extension
of the standard Laplacian Pyramids model that incorporates a modified LOOCV
procedure that avoids the large cost of the standard one and offers the
following advantages: (i) it selects automatically the optimal function
resolution (stopping time) adapted to the data and its noise, (ii) it is easy
to apply as it does not require parameterization, (iii) it does not overfit the
training set and (iv) it adds no extra cost compared to other classical
interpolation methods. We illustrate numerically ALP's behavior on a synthetic
problem and apply it to the computation of the DM projection of new patterns
and to the extension to them of target function values on a radiation
forecasting problem over very high dimensional patterns.
|
1311.6609 | Choreography In Inter-Organizational Innovation Networks | cs.SI physics.soc-ph | This paper introduces the concept of choreography with respect to
inter-organizational innovation networks, as they constitute an attractive
environment to create innovation in different sectors. We argue that
choreography governs behaviours by shaping the level of connectivity and
cohesion among network members. It represents a valid organizational system
able to sustain certain activities and to achieve effects that generate
innovation outcomes. This issue is tackled by introducing a new framework in
which we propose a network model as a prerequisite for our hypothesis. The
analysis is focused on
inter-organizational innovation networks characterized by the presence of hubs,
semi-peripheral and peripheral members lacking hierarchical authority. We
maintain that the features of a network that bring about synchronization
phenomena are extremely similar to those existing in innovation networks
characterized by the emergence of choreography. The effectiveness of our model
is verified by
providing a real case study that gives preliminary empirical hints on the
network aptitude to perform choreography. Indeed, the innovation network
analysed in the case study reveals characteristics causing synchronization and
consequently the establishment of choreography.
|
1311.6635 | Multiuser Random Coding Techniques for Mismatched Decoding | cs.IT math.IT | This paper studies multiuser random coding techniques for channel coding with
a given (possibly suboptimal) decoding rule. For the mismatched discrete
memoryless multiple-access channel, an error exponent is obtained that is tight
with respect to the ensemble average, and positive within the interior of
Lapidoth's achievable rate region. This exponent proves the ensemble tightness
of the exponent of Liu and Hughes in the case of maximum-likelihood decoding.
An equivalent dual form of Lapidoth's achievable rate region is given, and the
latter is shown to extend immediately to channels with infinite and continuous
alphabets. In the setting of single-user mismatched decoding, similar analysis
techniques are applied to a refined version of superposition coding, which is
shown to achieve rates at least as high as standard superposition coding for
any set of random-coding parameters.
|
1311.6647 | DoF Analysis of the K-user MISO Broadcast Channel with Alternating CSIT | cs.IT math.IT | We consider a $K$-user multiple-input single-output (MISO) broadcast channel
(BC) where the channel state information (CSI) of user $i(i=1,2,\ldots,K)$ may
be either perfect (P), delayed (D) or not known (N) at the transmitter with
probabilities $\lambda_P^i$, $\lambda_D^i$ and $\lambda_N^i$, respectively. In
this channel, according to the three possible CSIT for each user, joint CSIT of
the $K$ users could have at most $3^K$ realizations. Although the results by
Tandon et al. show that the Degrees of Freedom (DoF) region for the two user
MISO BC with symmetric marginal probabilities (i.e., $\lambda_Q^i=\lambda_Q
\forall i\in \{1,2,\ldots,K\}, Q\in \{P,D,N\}$) depends only on the marginal
probabilities, we show that this interesting result does not hold in general
when the number of users is more than two. In other words, the DoF region is a
function of the \textit{CSIT pattern}, or equivalently, all the joint
probabilities. In this paper, given the marginal probabilities of CSIT, we
derive an outer bound for the DoF region of the $K$-user MISO BC. Subsequently,
the achievability of these outer bounds is considered in certain scenarios.
Finally, we show the dependence of the DoF region on the joint probabilities.
|
1311.6658 | Efficiency Improvement of Measurement Pose Selection Techniques in Robot
Calibration | cs.RO | The paper deals with the design of experiments for manipulator geometric and
elastostatic calibration based on the test-pose approach. The main attention is
paid to the efficiency improvement of numerical techniques employed in the
selection of optimal measurement poses for calibration experiments. The
advantages of the developed technique are illustrated by simulation examples
that deal with the geometric calibration of the industrial robot of serial
architecture.
|
1311.6674 | Modelling of the gravity compensators in robotic manufacturing cells | cs.RO | The paper deals with the modeling and identification of the gravity
compensators used in heavy industrial robots. The main attention is paid to
the identification of geometrical parameters and to calibration accuracy. To
reduce the impact of measurement errors, the design of calibration experiments
is used. The advantages of the developed technique are illustrated by
experimental results.
|
1311.6676 | Robust algorithm for calibration of robotic manipulator model | cs.RO | The paper focuses on the robust identification of the geometrical and
elastostatic parameters of a robotic manipulator. The main attention is paid
to the efficiency improvement of the identification algorithm. To increase
the identification accuracy, it is proposed to apply the weighted least
squares technique, which employs a new algorithm for assigning the weighting
coefficients. The latter allows taking into account the variation of the
measurement system's precision in different directions and throughout the
robot workspace. The advantages of the proposed approach are illustrated by
an application example that deals with the elastostatic calibration of an
industrial robot.
|
1311.6677 | Advanced robot calibration using partial pose measurements | cs.RO | The paper focuses on the calibration of serial industrial robots using
partial pose measurements. In contrast to other works, the developed advanced
robot calibration technique is suitable for geometrical and elastostatic
calibration. The main attention is paid to the model parameters identification
accuracy. To reduce the impact of measurement errors, it is proposed to
directly use position measurements of several points instead of computing the
orientation of the end-effector. The proposed approach allows us to avoid the
problem of non-homogeneity of the least-square objective, which arises in the
classical identification technique with the full-pose information. The
developed technique does not require any normalization and can be efficiently
applied both for geometric and elastostatic identification. The advantages of
the new approach are confirmed by a comparative analysis that deals with the
efficiency evaluation of different identification strategies. The obtained
results have been successfully applied to the elastostatic parameters
identification of the industrial robot employed in a machining work-cell for
the aerospace industry.
|
1311.6685 | CAD-based approach for identification of elasto-static parameters of
robotic manipulators | cs.RO | The paper presents an approach for the identification of elasto-static
parameters of a robotic manipulator using the virtual experiments in a CAD
environment. It is based on the numerical processing of the data extracted from
the finite element analysis results, which are obtained for isolated
manipulator links. This approach allows us to obtain the desired stiffness
matrices taking into account the complex shape of the links, couplings between
rotational/translational deflections and particularities of the joints
connecting adjacent links. These matrices are integral parts of the manipulator
lumped stiffness model, which is widely used in robotics due to its high
computational efficiency. To improve the identification accuracy,
recommendations for optimal settings of the virtual experiments are given and
relevant statistical processing techniques are proposed. The efficiency of
the developed approach is confirmed by a simulation study that shows that the
accuracy in evaluating the stiffness matrix elements is about 0.1%.
|
1311.6709 | A Framework for Semi-automated Web Service Composition in Semantic Web | cs.AI | The number of web services available on the Internet, and their usage, is
increasing very fast. In many cases, one service is not enough to complete a
business requirement, so composition of web services is carried out.
Autonomous composition of web services to achieve new functionality is
generating considerable attention in the semantic web domain. Development
time and effort for new applications can be reduced with service composition.
Various approaches to carrying out automated composition of web services are
discussed in the literature. Web service composition using ontologies is one
of the effective approaches. In this paper we demonstrate how ontology-based
composition can be made faster for each customer. We propose a framework to
provide precomposed web services to fulfil user requirements. We detail how
ontology merging can be used for composition, which expedites the whole
process. We discuss how the framework provides customer-specific ontology
merging and a repository. We also elaborate on how the merging of ontologies
is carried out.
|
1311.6714 | Efficient XML Keyword Search based on DAG-Compression | cs.DB | In contrast to XML query languages such as XPath, which require knowledge of
the query language as well as of the document structure, keyword search is open
to anybody. As the size of XML sources grows rapidly, the need for efficient
search indices on XML data that support keyword search increases. In this
paper, we present an approach of XML keyword search which is based on the DAG
of the XML data, where repeated substructures are considered only once, and
therefore, have to be searched only once. As our performance evaluation shows,
this DAG-based extension of the set intersection search algorithm [1], [2],
can lead to search times on large documents that are more than twice as fast
as those of the XML-based approach. Additionally, we utilize a smaller
index, i.e., we consume less main memory to compute the results.
|
1311.6728 | Numerical Investigations on Quasi Steady-State Model for Voltage
Stability: Limitations and Nonlinear Analysis | cs.SY | In this paper, several numerical examples to illustrate limitations of Quasi
Steady-State (QSS) model in long-term voltage stability analysis are presented.
In those cases, the QSS model provided incorrect stability assessment. Causes
of failure of the QSS model are explained and analyzed in nonlinear system
framework. Sufficient conditions for the QSS model to yield a correct
approximation are suggested.
|
1311.6740 | Hilditch's Algorithm Based Tamil Character Recognition | cs.CV | Character identification plays a vital role in the contemporary world of
image processing. It can solve many composite problems and makes humans' work
easier. An instance is handwritten character detection. Handwritten
recognition is not a novel technology, but it has not gained community notice
until now. The eventual aim of designing a handwritten character recognition
system with an accuracy rate of 100% is rather illusory. The Tamil handwritten
character recognition system uses neural networks to distinguish characters.
Neural networks and structural characteristics are used to train and
recognize written characters. After training and testing, the accuracy rate
reached 99%, which is extremely high. In this paper we explore image
processing through the Hilditch algorithm and the structural characteristics
of a character in the image. We recognized some characters of the Tamil
language, and we will try to identify all the characters of Tamil in our
future work.
|
1311.6751 | Stiffness modeling of robotic manipulator with gravity compensator | cs.RO | The paper focuses on the stiffness modeling of robotic manipulators with
gravity compensators. The main attention is paid to the development of the
stiffness model of a spring-based compensator located between sequential links
of a serial structure. The derived model allows us to describe the compensator
as an equivalent non-linear virtual spring integrated in the corresponding
actuated joint. The obtained results have been efficiently applied to the
stiffness modeling of a heavy industrial robot of the Kuka family.
|
1311.6758 | Detection of Partially Visible Objects | cs.CV | An "elephant in the room" for most current object detection and localization
methods is the lack of explicit modelling of partial visibility due to
occlusion by other objects or truncation by the image boundary. Based on a
sliding window approach, we propose a detection method which explicitly models
partial visibility by treating it as a latent variable. A novel non-maximum
suppression scheme is proposed which takes into account the inferred partial
visibility of objects while providing a globally optimal solution. The method
gives more detailed scene interpretations than conventional detectors in that
we are able to identify the visible parts of an object. We report improved
average precision on the PASCAL VOC 2010 dataset compared to a baseline
detector.
|
1311.6785 | Interest communities and flow roles in directed networks: the Twitter
network of the UK riots | physics.soc-ph cs.SI | Directionality is a crucial ingredient in many complex networks in which
information, energy or influence are transmitted. In such directed networks,
analysing flows (and not only the strength of connections) is crucial to reveal
important features of the network that might go undetected if the orientation
of connections is ignored. We showcase here a flow-based approach for community
detection in networks through the study of the network of the most influential
Twitter users during the 2011 riots in England. Firstly, we use directed Markov
Stability to extract descriptions of the network at different levels of
coarseness in terms of interest communities, i.e., groups of nodes within which
flows of information are contained and reinforced. Such interest communities
reveal user groupings according to location, profession, employer, and topic.
The study of flows also allows us to generate an interest distance, which
affords a personalised view of the attention in the network as viewed from the
vantage point of any given user. Secondly, we analyse the profiles of incoming
and outgoing long-range flows with a combined approach of role-based similarity
and the novel relaxed minimum spanning tree algorithm to reveal that the users
in the network can be classified into five roles. These flow roles go beyond
the standard leader/follower dichotomy and differ from classifications based on
regular/structural equivalence. We then show that the interest communities fall
into distinct informational organigrams characterised by a different mix of
user roles reflecting the quality of dialogue within them. Our generic
framework can be used to provide insight into how flows are generated,
distributed, preserved and consumed in directed networks.
|
1311.6799 | Wavelet and Fast Fourier Transform based analysis of Solar Image | cs.CV cs.CE | Both the wavelet transform and the fast Fourier transform (FFT) are powerful
signal processing tools for data analysis. In this paper, the FFT and the
wavelet transform are employed to examine important features of a solar image
(December 2004). We attempt to determine the periodicity and coherence of
different sections of the solar image, and we plot the distribution of energy
over the solar surface by analysing the image with scalograms and
3D coefficient plots.
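The paper includes no code; as a hedged illustration of the FFT side of such an analysis, the sketch below estimates the dominant period of a 1-D slice of pixel intensities with NumPy (the 25-sample oscillation is synthetic, not solar data):

```python
import numpy as np

def dominant_period(signal, dt=1.0):
    """Period of the strongest non-DC frequency component of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    k = np.argmax(spectrum[1:]) + 1   # skip the DC bin
    return 1.0 / freqs[k]

# synthetic row of pixel intensities with a known 25-sample periodicity
t = np.arange(500)
row = 3.0 + np.sin(2 * np.pi * t / 25)
period = dominant_period(row)         # close to 25
```

The same slice could be fed to a wavelet scalogram to localise where in the image that periodicity holds.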
|
1311.6802 | Recommending with an Agenda: Active Learning of Private Attributes using
Matrix Factorization | cs.LG cs.CY | Recommender systems leverage user demographic information, such as age,
gender, etc., to personalize recommendations and better place their targeted
ads. Oftentimes, users do not volunteer this information due to privacy
concerns, or due to a lack of initiative in filling out their online profiles.
We illustrate a new threat in which a recommender learns private attributes of
users who do not voluntarily disclose them. We design both passive and active
attacks that solicit ratings for strategically selected items, and could thus
be used by a recommender system to pursue this hidden agenda. Our methods are
based on a novel usage of Bayesian matrix factorization in an active learning
setting. Evaluations on multiple datasets illustrate that such attacks are
indeed feasible and use significantly fewer rated items than static inference
methods. Importantly, they succeed without sacrificing the quality of
recommendations to users.
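This is not the authors' algorithm, but a minimal uncertainty-sampling sketch in the same spirit: a Bayesian linear-regression posterior over the user's latent vector, with the next rating solicited for the item whose prediction is most uncertain (the item factors `V`, the noise level, and the prior are all made up here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, k = 50, 4
V = rng.normal(size=(n_items, k))   # item factors from some trained MF model
mu = np.zeros(k)                    # posterior mean of the user's latent vector
Sigma = np.eye(k)                   # posterior covariance (standard-normal prior)

def pick_item(V, Sigma, asked):
    """Active step: solicit a rating for the item with the most uncertain prediction."""
    var = np.einsum('ik,kl,il->i', V, Sigma, V)
    var[list(asked)] = -np.inf
    return int(np.argmax(var))

def update(mu, Sigma, v, rating, noise=1.0):
    """Bayesian linear-regression posterior update after observing one rating."""
    prec = np.linalg.inv(Sigma) + np.outer(v, v) / noise
    Sigma_new = np.linalg.inv(prec)
    mu_new = Sigma_new @ (np.linalg.solve(Sigma, mu) + v * rating / noise)
    return mu_new, Sigma_new
```

Each solicited rating shrinks the posterior, which is what lets an active attack use far fewer items than static inference.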
|
1311.6809 | A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic
Cost | cs.LG | We introduce a novel family of adaptive filtering algorithms based on a
relative logarithmic cost. The new family intrinsically combines the higher and
lower order measures of the error into a single continuous update based on the
error amount. We introduce important members of this family of algorithms such
as the least mean logarithmic square (LMLS) and least logarithmic absolute
difference (LLAD) algorithms that improve the convergence performance of the
conventional algorithms. However, our approach and analysis are generic such
that they cover other well-known cost functions as described in the paper. The
LMLS algorithm achieves comparable convergence performance with the least mean
fourth (LMF) algorithm and extends the stability bound on the step size. The
LLAD and least mean square (LMS) algorithms demonstrate similar convergence
performance in impulse-free noise environments while the LLAD algorithm is
robust against impulsive interferences and outperforms the sign algorithm (SA).
We analyze the transient, steady state and tracking performance of the
introduced algorithms and demonstrate the match between the theoretical analyses and
simulation results. We show the extended stability bound of the LMLS algorithm
and analyze the robustness of the LLAD algorithm against impulsive
interferences. Finally, we demonstrate the performance of our algorithms in
different scenarios through numerical examples.
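A hedged sketch of the two named updates, as we read the relative-logarithmic-cost construction (the step sizes, the parameter alpha, and the toy system-identification setup below are our choices, not the paper's):

```python
import numpy as np

def lmls_step(w, x, d, mu=0.1, alpha=1.0):
    # LMLS: an LMS-like update damped by a*e^2/(1+a*e^2), so it behaves
    # like LMF for small errors and like LMS for large ones
    e = d - w @ x
    return w + mu * e * (alpha * e**2 / (1.0 + alpha * e**2)) * x

def llad_step(w, x, d, mu=0.1, alpha=1.0):
    # LLAD: a sign-algorithm-like update scaled by a*|e|/(1+a*|e|),
    # which keeps it robust to impulsive interference
    e = d - w @ x
    return w + mu * np.sign(e) * (alpha * abs(e) / (1.0 + alpha * abs(e))) * x

# toy system identification: recover an unknown 4-tap filter
rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
for _ in range(50000):
    x = rng.normal(size=4)
    d = w_true @ x + 0.01 * rng.normal()
    w = lmls_step(w, x, d)
```

The damping factor is bounded by 1, which is one way to see the extended stability bound relative to plain LMF.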
|
1311.6810 | Identification of geometrical and elastostatic parameters of heavy
industrial robots | cs.RO | The paper focuses on the stiffness modeling of heavy industrial robots with
gravity compensators. The main attention is paid to the identification of
geometrical and elastostatic parameters and calibration accuracy. To reduce
impact of the measurement errors, the set of manipulator configurations for
calibration experiments is optimized with respect to the proposed performance
measure related to the end-effector position accuracy. Experimental results are
presented that illustrate the advantages of the developed technique.
|
1311.6834 | Semi-Supervised Sparse Coding | stat.ML cs.LG | Sparse coding approximates the data sample as a sparse linear combination of
some basic codewords and uses the sparse codes as new representations. In this
paper, we investigate learning discriminative sparse codes by sparse coding in
a semi-supervised manner, where only a few training samples are labeled. By
using the manifold structure spanned by the data set of both labeled and
unlabeled samples and the constraints provided by the labels of the labeled
samples, we learn the variable class labels for all the samples. Furthermore,
to improve the discriminative ability of the learned sparse codes, we assume
that the class labels could be predicted from the sparse codes directly using a
linear classifier. By solving the codebook, sparse codes, class labels and
classifier parameters simultaneously in a unified objective function, we
develop a semi-supervised sparse coding algorithm. Experiments on two
real-world pattern recognition problems demonstrate the advantage of the
proposed methods over supervised sparse coding methods on partially labeled
data sets.
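The semi-supervised objective couples codes with labels and a classifier; the unsupervised core it builds on can be sketched with a plain ISTA solver for the sparse codes (the dictionary size, lambda, and iteration count below are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, x, lam=0.01, n_iter=500):
    """ISTA for min_s 0.5*||x - D s||^2 + lam*||s||_1 (one sample, fixed codebook D)."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth part
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        s = soft_threshold(s + D.T @ (x - D @ s) / L, lam / L)
    return s

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)         # unit-norm codewords
s_true = np.zeros(50)
s_true[[3, 17]] = [1.0, -0.5]
x = D @ s_true
s = sparse_code(D, x)                  # sparse code; D @ s reconstructs x closely
```

The semi-supervised method would alternate steps like this with updates to the codebook, the label estimates, and the linear classifier in one objective.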
|
1311.6838 | Learning Prices for Repeated Auctions with Strategic Buyers | cs.LG cs.GT | Inspired by real-time ad exchanges for online display advertising, we
consider the problem of inferring a buyer's value distribution for a good when
the buyer is repeatedly interacting with a seller through a posted-price
mechanism. We model the buyer as a strategic agent, whose goal is to maximize
her long-term surplus, and we are interested in mechanisms that maximize the
seller's long-term revenue. We define the natural notion of strategic regret
--- the lost revenue as measured against a truthful (non-strategic) buyer. We
present seller algorithms that are no-(strategic)-regret when the buyer
discounts her future surplus --- i.e. the buyer prefers showing advertisements
to users sooner rather than later. We also give a lower bound on strategic
regret that increases as the buyer's discounting weakens and shows, in
particular, that any seller algorithm will suffer linear strategic regret if
there is no discounting.
|
1311.6853 | Channel, Phase Noise, and Frequency Offset in OFDM Systems: Joint
Estimation, Data Detection, and Hybrid Cramer-Rao Lower Bound | cs.IT math.IT | Oscillator phase noise (PHN) and carrier frequency offset (CFO) can adversely
impact the performance of orthogonal frequency division multiplexing (OFDM)
systems, since they can result in inter-carrier interference and rotation of
the signal constellation. In this paper, we propose an expectation conditional
maximization (ECM) based algorithm for joint estimation of channel, PHN, and
CFO in OFDM systems. We present the signal model for the estimation problem and
derive the hybrid Cramer-Rao lower bound (HCRB) for the joint estimation
problem. Next, we propose an iterative receiver based on an extended Kalman
filter for joint data detection and PHN tracking. Numerical results show that,
compared to existing algorithms, the performance of the proposed ECM-based
estimator is closer to the derived HCRB and outperforms the existing estimation
algorithms at moderate-to-high signal-to-noise ratio (SNR). In addition, the
combined estimation algorithm and iterative receiver are more computationally
efficient than existing algorithms and result in improved average uncoded and
coded bit error rate (BER) performance.
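This is not the proposed ECM/EKF receiver, but a short numerical illustration of the impairment it targets: a residual CFO rotates the whole received constellation by a common phase (and leaks inter-carrier interference between subcarriers):

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
X = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)  # QPSK on N subcarriers
x = np.fft.ifft(X)                                  # OFDM time-domain block

eps = 0.05                                          # CFO, fraction of subcarrier spacing
n = np.arange(N)
y = x * np.exp(2j * np.pi * eps * n / N)            # channel applies the offset
Y = np.fft.fft(y)

# every subcarrier is rotated by the common phase pi*eps*(N-1)/N, plus ICI
rotation = np.angle(np.mean(Y / X))
expected = np.pi * eps * (N - 1) / N
```

Left unestimated, this rotation alone moves QPSK decisions toward the boundaries, which is why joint channel/PHN/CFO estimation matters.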
|
1311.6868 | Dimension Reduction of Large AND-NOT Network Models | q-bio.MN cs.CE cs.SI q-bio.QM | Boolean networks have been used successfully in modeling biological networks
and provide a good framework for theoretical analysis. However, the analysis of
large networks is not trivial. In order to simplify the analysis of such
networks, several model reduction algorithms have been proposed; however, it is
not clear if such algorithms scale well with respect to the number of nodes.
The goal of this paper is to propose and implement an algorithm for the
reduction of AND-NOT network models for the purpose of steady state
computation. Our method of network reduction is the use of "steady state
approximations" that do not change the number of steady states. Our algorithm
is designed to work at the wiring diagram level without the need to evaluate or
simplify Boolean functions. Also, our implementation of the algorithm takes
advantage of the sparsity typical of discrete models of biological systems. The
main features of our algorithm are that it works at the wiring diagram level,
it runs in polynomial time, and it preserves the number of steady states. We
used our results to study AND-NOT network models of gene networks and showed
that our algorithm greatly simplifies steady state analysis. Furthermore, our
algorithm can handle sparse AND-NOT networks with up to 1000000 nodes.
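The paper's algorithm is more general; as a toy sketch of one steady-state-preserving, wiring-diagram-level step, the code below absorbs nodes with a single regulator and composes edge signs (the network encoding and the single rule are our simplification, not the full method):

```python
def reduce_chains(net):
    """Absorb non-self-regulated nodes with a single regulator, composing signs.
    net maps node -> list of (regulator, sign), sign +1 (plain) or -1 (NOT)."""
    changed = True
    while changed:
        changed = False
        for y, inputs in list(net.items()):
            if len(inputs) == 1 and inputs[0][0] != y:
                x, s = inputs[0]
                del net[y]
                for z in net:       # rewire every edge from y to come from x
                    net[z] = [(x, s * t) if u == y else (u, t) for u, t in net[z]]
                changed = True
                break
    return net

# chain a --NOT--> b --NOT--> c, plus a second regulator d of c
net = {"a": [], "b": [("a", -1)], "c": [("b", -1), ("d", 1)], "d": []}
reduced = reduce_chains(net)       # b is absorbed; the two NOTs compose to +1
```

A node with one regulator is determined at steady state by that regulator, so removing it preserves the projected set of steady states while never evaluating a Boolean function.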
|
1311.6870 | Multi-agent based protection system for distribution system with DG | cs.MA | This paper introduces the basic structure of multi-agent based protection
system for distribution system with DGs. The entire system consists of
intelligent agents and communication system. Intelligent agents can be divided
into three layers, the bottom layer, the middle layer and the upper layer. The
design of the agents in each layer is analyzed in detail. The communication
system is the bridge of the multi-agent system (MAS). The transmission mode,
selective communication and other principles are discussed to improve
transmission efficiency. Finally, evaluation criteria are proposed that serve
as a reference for the design of the MAS.
|
1311.6876 | Want a Good Answer? Ask a Good Question First! | cs.DB cs.AI cs.IR cs.SE | Community Question Answering (CQA) websites have become valuable repositories
which host a massive volume of human knowledge. To maximize the utility of such
knowledge, it is essential to evaluate the quality of an existing question or
answer, especially soon after it is posted on the CQA website.
In this paper, we study the problem of inferring the quality of questions and
answers through a case study of a software CQA (Stack Overflow). Our key
finding is that the quality of an answer is strongly positively correlated with
that of its question. Armed with this observation, we propose a family of
algorithms to jointly predict the quality of questions and answers, for both
quantifying numerical quality scores and differentiating the high-quality
questions/answers from those of low quality. We conduct extensive experimental
evaluations to demonstrate the effectiveness and efficiency of our methods.
|
1311.6877 | A Survey: Various Techniques of Image Compression | cs.IT cs.MM math.IT | This paper surveys various image compression techniques. On the basis of an
analysis of these techniques, it reviews the existing research literature.
Compressing an image differs significantly from compressing raw binary data,
so dedicated techniques are used. Two natural questions arise: how is an
image compressed, and which techniques are used? Broadly, two families of
methods exist: lossless and lossy image compression. More recent work extends
these basic methods; in some areas, neural networks and genetic algorithms
are applied to image compression.
Keywords-Image Compression; Lossless; Lossy; Redundancy; Benefits of
Compression.
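As a minimal example of the lossless family, run-length encoding compresses repeated values and reconstructs the input exactly (lossy methods such as JPEG instead discard information the eye tolerates):

```python
def rle_encode(data: bytes):
    """Run-length encode a byte string into (value, count) pairs."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([value]) * count for value, count in runs)

row = b"\x00" * 100 + b"\xff" * 50      # a flat image row compresses well
runs = rle_encode(row)                  # [(0, 100), (255, 50)]
```

Flat regions shrink dramatically, while noisy rows can even grow, which is why practical codecs exploit spatial redundancy before entropy coding.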
|
1311.6880 | The Degrees of Freedom of the $K$-pair-user Full-Duplex Two-way
Interference Channel with and without a MIMO Relay | cs.IT math.IT | In a $K$-pair-user two-way interference channel (TWIC), $2K$ messages and
$2K$ transmitters/receivers form a $K$-user IC in the forward direction ($K$
messages) and another $K$-user IC in the backward direction which operate in
full-duplex mode. All nodes may interact, or adapt inputs to past received
signals. We derive a new outer bound to demonstrate that the optimal degrees of
freedom (DoF, also known as the multiplexing gain) is $K$: full-duplex
operation doubles the DoF, but interaction does not further increase the DoF.
We next characterize the DoF of the $K$-pair-user TWIC with a MIMO, full-duplex
relay. If the relay is non-causal/instantaneous (at time $k$ forwards a
function of its received signals up to time $k$) and has $2K$ antennas, we
demonstrate a one-shot scheme where the relay mitigates all interference to
achieve the interference-free $2K$ DoF. In contrast, if the relay is causal (at
time $k$ forwards a function of its received signals up to time $k-1$), we show
that a full-duplex MIMO relay cannot increase the DoF of the $K$-pair-user TWIC
beyond $K$, as if no relay or interaction is present. We comment on reducing
the number of antennas at the instantaneous relay.
|
1311.6881 | Color and Shape Content Based Image Classification using RBF Network and
PSO Technique: A Survey | cs.CV cs.LG cs.NE | Image classification, a well-known supervised learning technique, is used to
improve the accuracy of image query retrieval: a better classifier increases
the efficiency of retrieval. To improve the classification stage, we use a
radial basis function (RBF) neural network for better prediction of the
features used in image retrieval. Colour content is represented by pixel
values in the RBF-based classification, an approach that gives better results
than an SVM for image representation. An image is represented as a matrix of
colour-intensity values passed through the RBF; we first use the RGB colour
model, in which the matrix holds red, green and blue intensity values. SVM
with particle swarm optimization for image classification is also implemented
on image content. Results based on the proposed approach are found
encouraging in terms of colour image classification accuracy.
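The surveyed RBF pipeline can be sketched, under our own toy setup (two synthetic colour clusters standing in for pixel features; the centres and gamma are chosen arbitrarily), as Gaussian basis activations followed by a least-squares output layer:

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    # Gaussian radial basis activations exp(-gamma * ||x - c||^2)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.3, size=(50, 2))   # toy "colour" cluster, class 0
X1 = rng.normal([2.0, 2.0], 0.3, size=(50, 2))   # toy "colour" cluster, class 1
X = np.vstack([X0, X1])
y = np.r_[np.zeros(50), np.ones(50)]

centers = X[rng.choice(len(X), 10, replace=False)]
Phi = rbf_features(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # output-layer weights
pred = (Phi @ w > 0.5).astype(float)
```

A PSO variant would search over the centres and gamma instead of sampling them from the data as done here.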
|
1311.6887 | Modeling Radiometric Uncertainty for Vision with Tone-mapped Color
Images | cs.CV | To produce images that are suitable for display, tone-mapping is widely used
in digital cameras to map linear color measurements into narrow gamuts with
limited dynamic range. This introduces non-linear distortion that must be
undone, through a radiometric calibration process, before computer vision
systems can analyze such photographs radiometrically. This paper considers the
inherent uncertainty of undoing the effects of tone-mapping. We observe that
this uncertainty varies substantially across color space, making some pixels
more reliable than others. We introduce a model for this uncertainty and a
method for fitting it to a given camera or imaging pipeline. Once fit, the
model provides for each pixel in a tone-mapped digital photograph a probability
distribution over linear scene colors that could have induced it. We
demonstrate how these distributions can be useful for visual inference by
incorporating them into estimation algorithms for a representative set of
vision tasks.
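Not the paper's fitted model, but a hedged first-order sketch under an assumed pure-gamma tone-map, showing why the uncertainty of the recovered linear value varies across the intensity range:

```python
def inverse_tonemap(y, gamma=2.2):
    """Undo an assumed gamma tone-map y = x**(1/gamma) (not the paper's model)."""
    return y ** gamma

def linear_sigma(y, sigma_y=1.0 / 255, gamma=2.2):
    # first-order error propagation: sigma_x ~ |d(y**gamma)/dy| * sigma_y
    return gamma * y ** (gamma - 1) * sigma_y

# under this model, bright tone-mapped pixels decode with more uncertainty
low, high = linear_sigma(0.1), linear_sigma(0.9)
```

The paper replaces the single point estimate `inverse_tonemap(y)` with a full per-pixel distribution over linear colors, which downstream estimators can weight accordingly.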
|
1311.6907 | A Constraint Programming Approach for Mining Sequential Patterns in a
Sequence Database | cs.AI cs.DB | Constraint-based pattern discovery is at the core of numerous data mining
tasks. Patterns are extracted with respect to a given set of constraints
(frequency, closedness, size, etc). In the context of sequential pattern
mining, a large number of devoted techniques have been developed for solving
particular classes of constraints. The aim of this paper is to investigate the
use of Constraint Programming (CP) to model and mine sequential patterns in a
sequence database. Our CP approach offers a natural way to simultaneously
combine in a same framework a large set of constraints coming from various
origins. Experiments show the feasibility and the interest of our approach.
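The CP model itself is not reproduced here; the sketch below just pins down the two primitives any sequential-pattern miner constrains, subsequence matching and support counting (the tiny database is illustrative):

```python
def is_subsequence(pat, seq):
    """True if pat occurs in seq in order (not necessarily contiguously)."""
    it = iter(seq)
    return all(item in it for item in pat)

def support(pat, db):
    """Number of database sequences containing the pattern."""
    return sum(is_subsequence(pat, s) for s in db)

db = [list("abcab"), list("acb"), list("bca")]
count = support(["a", "b"], db)   # "a...b" occurs in the first two sequences
```

Frequency, size, and closedness constraints are then conditions over `support(pat, db)` and `len(pat)` that a CP solver can combine in one model.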
|