| id | title | categories | abstract |
|---|---|---|---|
1212.4920 | Automatic landmark annotation and dense correspondence registration for
3D human facial images | cs.CV q-bio.QM | Dense surface registration of three-dimensional (3D) human facial images
holds great potential for studies of human trait diversity, disease genetics,
and forensics. Non-rigid registration is particularly useful for establishing
dense anatomical correspondences between faces. Here we describe a novel
non-rigid registration method for fully automatic 3D facial image mapping. This
method comprises two steps: first, seventeen facial landmarks are automatically
annotated, mainly via PCA-based feature recognition following 3D-to-2D data
transformation. Second, an efficient thin-plate spline (TPS) protocol is used
to establish the dense anatomical correspondence between facial images, under
the guidance of the predefined landmarks. We demonstrate that this method is
robust and highly accurate, even across different ethnicities. The average face
is calculated for individuals of Han Chinese and Uyghur origin. Fully
automatic and computationally efficient, this method enables high-throughput
analysis of human facial feature variation.
|
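The thin-plate spline (TPS) step described above can be illustrated with a minimal 2-D sketch (illustrative only; the paper's seventeen-landmark 3D pipeline is not reproduced). Given matched source and target landmarks, the standard TPS linear system is solved for the spline weights and affine part; by construction the resulting warp maps each source landmark exactly onto its target.

```python
import numpy as np

def tps_warp(src, dst, pts):
    """Warp pts with a 2-D thin-plate spline pinned at src -> dst landmarks."""
    def U(r):
        # TPS radial basis U(r) = r^2 log r, with U(0) = 0 by continuity
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r == 0.0, 0.0, r ** 2 * np.log(r))

    n = len(src)
    K = U(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coef = np.linalg.solve(A, b)              # spline weights + affine part
    Kp = U(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
    Pp = np.hstack([np.ones((len(pts), 1)), pts])
    return Kp @ coef[:n] + Pp @ coef[n:]

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [0.0, 0.0], [-0.1, 0.0]])
warped = tps_warp(src, dst, src)              # landmarks map exactly onto dst
```

Dense correspondence then follows by evaluating the fitted spline at every surface vertex rather than only at the landmarks.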
1212.4940 | Fourier Domain Beamforming for Medical Ultrasound | cs.IT math.IT | Sonography techniques use multiple transducer elements for tissue
visualization. Signals detected at each element are sampled prior to digital
beamforming. The required sampling rates are up to 4 times the Nyquist rate of
the signal and result in considerable amount of data, that needs to be stored
and processed. A developed technique, based on the finite rate of innovation
model, compressed sensing (CS) and Xampling ideas, allows to reduce the number
of samples needed to reconstruct an image comprised of strong reflectors. A
significant drawback of this method is its inability to treat speckle, which is
of significant importance in medical imaging. Here we build on previous work
and show explicitly how to perform beamforming in the Fourier domain.
Beamforming in frequency exploits the low bandwidth of the beamformed signal
and bypasses the oversampling dictated by digital implementation of
beamforming in time. We show that this allows us to obtain the same beamformed
image as in standard beamforming but from far fewer samples. Finally, we
present an analysis-based CS technique that further reduces the sampling rate
by using only a portion of the beamformed signal's bandwidth, that is, by
sampling the signal at sub-Nyquist rates. We demonstrate our methods on
in vivo cardiac ultrasound data and show that rate reductions of up to 25-fold
relative to standard beamforming rates are possible.
|
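The core operation behind frequency-domain beamforming — applying a steering delay as a phase ramp on the spectrum — can be sketched in a toy example (this is not the authors' Xampling pipeline; signal and delays are made up for illustration). Aligning the per-element signals with inverse phase ramps and summing recovers the reference echo.

```python
import numpy as np

def frac_delay(x, tau, fs):
    """Apply a (fractional) delay of tau seconds via a frequency-domain phase ramp."""
    f = np.fft.fftfreq(len(x), d=1.0 / fs)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * tau)))

fs = 1000.0                                   # toy sampling rate, Hz
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 50 * t)                # echo as seen by a reference element
# A second element receives the same echo 3.5 samples later; align both
# in the Fourier domain before summing (the essence of delay-and-sum).
delays = [0.0, 3.5 / fs]
received = [frac_delay(x, d, fs) for d in delays]
beamformed = sum(frac_delay(r, -d, fs) for r, d in zip(received, delays)) / 2
```

Because the delay is a pure phase factor in frequency, non-integer (sub-sample) delays need no time-domain oversampling, which is the property the abstract exploits.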
1212.4950 | Data Mapping for Unreliable Memories | cs.IT math.IT | Future digital signal processing (DSP) systems must provide robustness at the
algorithm and application level against the reliability issues that accompany
implementations in modern semiconductor process
technologies. In this paper, we address this issue by investigating the impact
of unreliable memories on general DSP systems. In particular, we propose a
novel framework to characterize the effects of unreliable memories, which
enables us to devise novel methods to mitigate the associated performance loss.
We propose to deploy specifically designed data representations, which can
substantially improve system reliability compared to the
conventional data representations used in digital integrated
circuits, such as two's complement or sign-magnitude number formats. To
demonstrate the efficacy of the proposed framework, we analyze the impact of
unreliable memories on coded communication systems, and we show that the
deployment of optimized data representations substantially improves the
error-rate performance of such systems.
|
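The sensitivity of a number format to memory bit flips can be probed with a small Monte-Carlo sketch (hypothetical 8-bit setting, not the paper's framework): decode a word before and after one random bit flip and average the absolute value error. Both conventional formats suffer comparably large average errors, which is exactly what motivates optimized data representations.

```python
import random

WIDTH = 8

def twos_to_int(u):
    # Interpret an unsigned WIDTH-bit pattern as two's complement
    return u - (1 << WIDTH) if u >= (1 << (WIDTH - 1)) else u

def signmag_to_int(u):
    # Interpret the MSB as sign, remaining bits as magnitude
    mag = u & ((1 << (WIDTH - 1)) - 1)
    return -mag if u >> (WIDTH - 1) else mag

def mean_flip_error(decode, trials=20000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        u = rng.randrange(1 << WIDTH)
        flipped = u ^ (1 << rng.randrange(WIDTH))   # one random bit flip
        total += abs(decode(flipped) - decode(u))
    return total / trials

err_2c = mean_flip_error(twos_to_int)
err_sm = mean_flip_error(signmag_to_int)
```

Analytically, flipping bit $k$ of a two's-complement word changes the value by $2^k$, so the average error over the eight bit positions is $255/8 \approx 31.9$; sign-magnitude fares almost identically on average, differing mainly in the error distribution.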
1212.4968 | Dual-Polarized Ricean MIMO Channels: Modeling and Performance Assessment | cs.IT math.IT | In wireless communication systems, dual-polarized (DP) instead of
single-polarized (SP) multiple-input multiple-output (MIMO) transmission is
used to improve the spectral efficiency under certain conditions on the channel
and the signal-to-noise ratio (SNR). In order to identify these conditions, we
first propose a novel channel model for DP mobile Ricean MIMO channels for
which statistical channel parameters are readily obtained from a moment-based
channel decomposition. Second, we derive an approximation of the mutual
information (MI), which can be expressed as a function of those statistical
channel parameters. Based on this approximation, we characterize the required
SNR for a DP MIMO system to outperform an SP MIMO system in terms of the MI.
Finally, we apply our results to channel measurements at 2.53 GHz. We find
that, using the proposed channel decomposition and the approximation of the MI,
we are able to reproduce the (practically relevant) SNR values above which DP
MIMO systems outperform SP MIMO systems.
|
1212.4989 | Towards Trustworthy Mobile Social Networking Services for Disaster
Response | cs.SI cs.CR | Situational awareness is crucial for effective disaster management. However,
obtaining information about the actual situation is usually difficult and
time-consuming. While there has been some effort in terms of incorporating the
affected population as a source of information, the issue of obtaining
trustworthy information has not yet received much attention. Therefore, we
introduce the concept of witness-based report verification, which enables users
from the affected population to evaluate reports issued by other users. We
present an extensive overview of the objectives to be fulfilled by such a
scheme and provide a first approach considering security and privacy. Finally,
we evaluate the performance of our approach in a simulation study. Our results
highlight synergistic effects of the group mobility patterns that are likely in
disaster situations.
|
1212.4991 | A Physical Layer Secured Key Distribution Technique for IEEE 802.11g
Wireless Networks | cs.IT cs.CR math.IT | Key distribution and renewing in wireless local area networks is a crucial
issue to guarantee that unauthorized users are prevented from accessing the
network. In this paper, we propose a technique for allowing an automatic
bootstrap and periodic renewing of the network key by exploiting physical layer
security principles, that is, the inherent differences among transmission
channels. The proposed technique is based on scrambling groups of
consecutive packets and requires neither initial authentication nor
automatic repeat request protocols. We present a modification of the scrambling
circuits included in the IEEE 802.11g standard which allows for a suitable
error propagation at the unauthorized receiver, thus achieving physical layer
security.
|
1212.4999 | Hamiltonian Perspective on Compartmental Reaction-Diffusion Networks | cs.SY math.OC nlin.PS | Inspired by the recent developments in modeling and analysis of reaction
networks, we provide a geometric formulation of the reversible reaction
networks under the influence of diffusion. Using the graph structure of the
underlying reaction network, the obtained reaction-diffusion system is a
distributed-parameter port-Hamiltonian system on a compact spatial domain.
Motivated by the need for computer-based design, we offer a spatially
consistent discretization of the PDE system and, in a systematic manner,
recover a compartmental ODE model on a simplicial triangulation of the spatial
domain. Exploring the properties of a balanced weighted Laplacian matrix of the
reaction network and the Laplacian of the simplicial complex, we characterize
the space of equilibrium points and provide a simple stability analysis on the
state space modulo the space of equilibrium points. The paper rules out the
possibility of the persistence of spatial patterns for the compartmental
balanced reaction-diffusion networks.
|
1212.5024 | On the Complexity of Joint Subcarrier and Power Allocation for
Multi-User OFDMA Systems | cs.IT math.CO math.IT | Consider a multi-user Orthogonal Frequency Division Multiple Access (OFDMA)
system where multiple users share multiple discrete subcarriers, but at most
one user is allowed to transmit power on each subcarrier. To adapt to fast
traffic and channel fluctuations and improve spectrum efficiency, the system should
have the ability to dynamically allocate subcarriers and power resources to
users. Assuming perfect channel knowledge, two formulations for the joint
subcarrier and power allocation problem are considered in this paper: the first
is to minimize the total transmission power subject to quality of service
constraints and the OFDMA constraint, and the second is to maximize some system
utility function (including the sum-rate utility, the proportional fairness
utility, the harmonic mean utility, and the min-rate utility) subject to the
total transmission power constraint per user and the OFDMA constraint. Despite
the existence of various heuristic approaches, little is known about the
computational complexity status of the above problems. This paper aims to fill
this theoretical gap, i.e., characterizing the complexity of the joint
subcarrier and power allocation problem for the multi-user OFDMA system. It is
shown in this paper that both formulations of the joint subcarrier and power
allocation problem are strongly NP-hard. The proof is based on a polynomial
time transformation from the so-called 3-dimensional matching problem. Several
subclasses of the problem which can be solved to global optimality or
$\epsilon$-global optimality in polynomial time are also identified. These
complexity results suggest that no polynomial-time algorithm can solve the
general joint subcarrier and power allocation problem to
global optimality (unless P$=$NP), and that determining an approximately optimal
subcarrier and power allocation strategy is more realistic in practice.
|
1212.5032 | Distributed Rate Allocation in Inter-Session Network Coding | cs.NI cs.IT math.IT | In this work, we propose a distributed rate allocation algorithm that
minimizes the average decoding delay for multimedia clients in inter-session
network coding systems. We consider a scenario where the users are organized in
a mesh network and each user requests the content of one of the available
sources. We propose a novel distributed algorithm where network users determine
the coding operations and the packet rates to be requested from the parent
nodes, such that the decoding delay is minimized for all the clients. A rate
allocation problem is solved by every user, which seeks the rates that minimize
the average decoding delay for its children and for itself. Since the
optimization problem is a priori non-convex, we introduce the concept of
equivalent packet flows, which permits estimation of the expected number of
packets that every user needs to collect for decoding. We then decompose our
original rate allocation problem into a set of convex subproblems, which are
eventually combined to obtain an effective approximate solution to the delay
minimization problem. The results demonstrate that the proposed scheme
eliminates the bottlenecks and reduces the decoding delay experienced by users
with limited bandwidth resources. We validate the performance of our
distributed rate allocation algorithm in different video streaming scenarios
using the NS-3 network simulator. We show that our system is able to
benefit from inter-session network coding for the simultaneous delivery of video
sessions in networks with path diversity.
|
1212.5035 | Online Myopic Network Covering | cs.SI physics.soc-ph | Efficient marketing or awareness-raising campaigns seek to recruit $n$
influential individuals -- where $n$ is the campaign budget -- that are able to
cover a large target audience through their social connections. So far most of
the related literature on maximizing this network cover assumes that the social
network topology is known. Even in that case, finding the optimal solution is
NP-hard.
In practice, however, the network topology is generally unknown and needs to be
discovered on-the-fly. In this work we consider an unknown topology where
recruited individuals disclose their social connections (a feature known as
{\em one-hop lookahead}). The goal of this work is to provide an efficient
greedy online algorithm that recruits individuals so as to maximize the size of
the target audience covered by the campaign.
We propose a new greedy online algorithm, Maximum Expected $d$-Excess Degree
(MEED), and provide, to the best of our knowledge, the first detailed
theoretical analysis of the cover size of a variety of well known network
sampling algorithms on finite networks. Our proposed algorithm greedily
maximizes the expected size of the cover. For a class of random power law
networks we show that MEED simplifies into a straightforward procedure, which
we denote MOD (Maximum Observed Degree). We substantiate our analytical results
with extensive simulations and show that MOD significantly outperforms all
analyzed myopic algorithms. We note that performance may be further improved if
the node degree distribution is known or can be estimated online during the
campaign.
|
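The greedy MOD rule is simple to sketch: recruit the disclosed-but-unrecruited node with the largest observed degree, where recruiting a node reveals its neighbours (one-hop lookahead). A toy illustration on a hand-built adjacency list follows (this is not the paper's evaluation setup; the graph is made up).

```python
def mod_cover(adj, budget, start):
    """Greedy online covering with one-hop lookahead (MOD-style sketch)."""
    recruited, covered = set(), set()
    observed = {}                 # known-but-unrecruited node -> observed degree

    def recruit(v):
        recruited.add(v)
        covered.add(v)
        observed.pop(v, None)
        for u in adj[v]:          # recruiting v discloses its social connections
            covered.add(u)
            if u not in recruited:
                observed[u] = observed.get(u, 0) + 1

    recruit(start)
    while len(recruited) < budget and observed:
        # MOD rule: recruit the node of maximum observed degree
        # (sorted() makes tie-breaking deterministic for this demo)
        recruit(max(sorted(observed), key=observed.get))
    return recruited, covered

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
recruited, covered = mod_cover(adj, budget=4, start=1)
```

Note that the algorithm only ever consults edges that have already been disclosed, which is what makes it an online procedure on an unknown topology.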
1212.5091 | Maximally Informative Observables and Categorical Perception | cs.LG cs.SD | We formulate the problem of perception in the framework of information
theory, and prove that categorical perception is equivalent to the existence of
an observable that has the maximum possible information on the target of
perception. We call such an observable maximally informative. Regardless of
whether categorical perception is real, maximally informative observables can
form the basis of a theory of perception. We conclude with the implications of
such a theory for the problem of speech perception.
|
1212.5095 | Modelling of Optimal Design of Manufacturing Cell Layout Considering
Material Flow and Closeness Rating Factors | cs.CE | Developing a group of machine cells and their corresponding part families to
minimize the inter-cell and intra-cell material flow is the basic objective of
the design of a cellular manufacturing system (CMS). Subsequently, achieving a
competent cell layout is essential in order to minimize the total inter-cell
part travel. The CMS literature contains plentiful articles on cell
formation problems; however, the cell layout topic has rarely been addressed.
Therefore, this research focuses on an adapted mathematical model of the
layout design problem considering material handling cost and closeness
ratings of manufacturing cells. Owing to the combinatorial (NP-hard) nature
of the problem, an efficient technique based on the Simulated Annealing
metaheuristic is proposed. Some test
problems are solved using the proposed technique. Computational results show
that the proposed metaheuristic approach is extremely effective and efficient
in terms of solution quality and computational complexity.
|
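A minimal version of such a Simulated Annealing search for a one-dimensional cell arrangement might look as follows (the cost function, flow data and cooling parameters are illustrative, not the paper's model): swap two cells, accept the move with the Metropolis rule, and cool geometrically.

```python
import math
import random

def layout_cost(order, flow):
    """Material-handling cost: inter-cell flow weighted by layout distance."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(f * abs(pos[a] - pos[b])
               for a, dests in flow.items() for b, f in dests.items())

def anneal(flow, t0=10.0, cooling=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    order = sorted(flow)
    rng.shuffle(order)
    cur = best = layout_cost(order, flow)
    best_order = order[:]
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]        # propose a swap
        new = layout_cost(order, flow)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                                   # accept (Metropolis)
            if cur < best:
                best, best_order = cur, order[:]
        else:
            order[i], order[j] = order[j], order[i]    # revert the swap
        t *= cooling                                    # geometric cooling
    return best_order, best

# Toy inter-cell flow: heavy flow A->B and C->D, so those pairs should end up adjacent
flow = {"A": {"B": 10}, "B": {}, "C": {"D": 10}, "D": {}}
order, cost = anneal(flow)
```

On this toy instance the optimum places each heavy-flow pair at adjacent positions, giving a cost of 20.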
1212.5101 | Hybrid Fuzzy-ART based K-Means Clustering Methodology to Cellular
Manufacturing Using Operational Time | cs.LG | This paper presents a new hybrid Fuzzy-ART based K-Means Clustering technique
to solve the part machine grouping problem in cellular manufacturing systems
considering operational time. The performance of the proposed technique is
tested with problems from open literature and the results are compared to the
existing clustering models such as simple K-means algorithm and modified ART1
algorithm using an efficient modified performance measure known as modified
grouping efficiency (MGE), as found in the literature. The results support the
better performance of the proposed algorithm. The novelty of this study lies in
its simple and efficient methodology, which produces quick solutions for shop floor
managers with minimal computational effort and time.
|
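Only the K-means half of the hybrid is compact enough to sketch; here machines (rows of a binary machine-part incidence matrix) are grouped into two cells, with deterministic initial prototypes chosen for illustration (in the paper that role is played by the Fuzzy-ART stage, omitted here).

```python
import numpy as np

def kmeans(X, init, iters=20):
    """Plain K-means on the rows of X, initialised at the rows listed in init."""
    centers = X[list(init)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each machine to the nearest cell prototype
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(len(init)):             # update the prototypes
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Rows = machines, columns = parts; 1 means the machine processes the part.
incidence = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
])
cells = kmeans(incidence, init=(0, 2))
```

Machines 0, 1 and 4 share parts and land in one cell, machines 2 and 3 in the other; an operational-time-aware variant would replace the binary entries with processing times.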
1212.5108 | Rewrite Closure and CF Hedge Automata | cs.LO cs.DB cs.FL | We introduce an extension of hedge automata called bidimensional context-free
hedge automata. The class of unranked ordered tree languages they recognize is
shown to be preserved by rewrite closure with inverse-monadic rules. We also
extend the parameterized rewriting rules used for modeling the W3C XQuery
Update Facility in previous works with the possibility of inserting a new parent
node above a given node. We show that the rewrite closures of hedge automata
languages under these extended rewriting systems are context-free hedge
languages.
|
1212.5156 | Nonparametric ridge estimation | math.ST cs.LG stat.ML stat.TH | We study the problem of estimating the ridges of a density function. Ridge
estimation is an extension of mode finding and is useful for understanding the
structure of a density. It can also be used to find hidden structure in point
cloud data. We show that, under mild regularity conditions, the ridges of the
kernel density estimator consistently estimate the ridges of the true density.
When the data are noisy measurements of a manifold, we show that the ridges are
close and topologically similar to the hidden manifold. To find the estimated
ridges in practice, we adapt the modified mean-shift algorithm proposed by
Ozertem and Erdogmus [J. Mach. Learn. Res. 12 (2011) 1249-1286]. Some numerical
experiments verify that the algorithm is accurate.
|
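The mode-seeking core of the mean-shift iteration can be sketched in one dimension (the paper's modified algorithm, SCMS, additionally projects each step onto an eigen-subspace of the local Hessian so that it follows ridges rather than modes; that projection is omitted here and the data are synthetic).

```python
import numpy as np

def mean_shift(x0, data, h=0.5, iters=200):
    """Fixed-point iteration toward a local mode of a Gaussian KDE."""
    x = float(x0)
    for _ in range(iters):
        w = np.exp(-0.5 * ((data - x) / h) ** 2)   # Gaussian kernel weights
        x = np.sum(w * data) / np.sum(w)           # shift to the weighted mean
    return x

rng = np.random.default_rng(0)
# Two well-separated clusters -> the KDE has modes near 0 and near 5
data = np.concatenate([rng.normal(0.0, 0.3, 200), rng.normal(5.0, 0.3, 200)])
m0 = mean_shift(0.8, data)
m1 = mean_shift(4.2, data)
```

Each starting point converges to the mode of its own basin, which is the behaviour ridge-following generalises to higher dimensions.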
1212.5182 | Performance Evaluation of an Orthogonal Frequency Division Multiplexing
based Wireless Communication System with implementation of Least Mean Square
Equalization technique | cs.IT math.IT | Orthogonal Frequency Division Multiplexing (OFDM) has recently been applied
in wireless communication systems due to its high data rate transmission
capability with high bandwidth efficiency and its robustness to multi-path
delay. Fading is one of the major impairments that must be addressed at the
receiver. To cancel the effect of fading, channel estimation and equalization
must be performed at the receiver before data demodulation. This paper
mainly deals with pilot based channel estimation techniques for OFDM
communication over frequency selective fading channels. This paper proposes a
specific approach to channel equalization for Orthogonal Frequency Division
Multiplex (OFDM) systems. By inserting an equalizer, realized as an adaptive
system, before the FFT processing, the influence of variable delay and
multi-path can be mitigated, allowing the guard interval to be removed or
considerably reduced and some spectral efficiency to be gained. The adaptive
algorithm is based on adaptive
filtering with averaging (AFA) for parameter update. Based on the development
of a model of the OFDM system, through extensive computer simulations, we
investigate the performance of the channel-equalized system. The results show a
much higher convergence and adaptation rate than that of one of the most
frequently used algorithms, Least Mean Squares (LMS).
|
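The baseline LMS update that AFA is compared against takes only a few lines. This sketch identifies a toy 3-tap channel from its input and noiseless output (illustrative signals and step size, not the paper's OFDM setup); equalization uses the same update with the roles of the signals arranged accordingly.

```python
import numpy as np

def lms_identify(x, d, taps, mu):
    """LMS adaptive filter: w <- w + mu * error * input window."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ window                  # a priori error
        w += mu * e * window                   # stochastic-gradient step
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                  # white training input
h = np.array([0.9, 0.4, 0.2])                  # "unknown" channel taps
d = np.convolve(x, h)[:len(x)]                 # noiseless channel output
w = lms_identify(x, d, taps=3, mu=0.01)        # w converges toward h
```

Averaging variants such as AFA modify how the gradient estimate is smoothed across iterations, which is where the reported convergence-rate gains come from.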
1212.5188 | Combinatorial neural codes from a mathematical coding theory perspective | q-bio.NC cs.IT math.IT | Shannon's seminal 1948 work gave rise to two distinct areas of research:
information theory and mathematical coding theory. While information theory has
had a strong influence on theoretical neuroscience, ideas from mathematical
coding theory have received considerably less attention. Here we take a new
look at combinatorial neural codes from a mathematical coding theory
perspective, examining the error correction capabilities of familiar receptive
field codes (RF codes). We find, perhaps surprisingly, that the high levels of
redundancy present in these codes do not support accurate error correction,
although the error-correcting performance of RF codes "catches up" to that of
random comparison codes when a small tolerance to error is introduced. On the
other hand, RF codes are good at reflecting distances between represented
stimuli, while the random comparison codes are not. We suggest that a
compromise in error-correcting capability may be a necessary price to pay for a
neural code whose structure serves not only error correction, but must also
reflect relationships between stimuli.
|
1212.5197 | Seven new champion linear codes | math.CO cs.IT math.IT | We exhibit seven linear codes exceeding the current best known minimum
distance d for their dimension k and block length n. Each code is defined over
F_8, and their invariants [n,k,d] are given by [49,13,27], [49,14,26],
[49,16,24], [49,17,23], [49,19,21], [49,25,16] and [49,26,15]. Our method
includes an exhaustive search of all monomial evaluation codes generated by
points in the [0,5]x[0,5] lattice square.
|
1212.5211 | Bibliometric Networks | cs.DL cs.SI physics.soc-ph | This text is based on a translation of a chapter in a handbook about network
analysis (published in German) where we tried to make beginners familiar with
some basic notions and recent developments of network analysis applied to
bibliometric issues (Havemann and Scharnhorst 2010). We have added some recent
references.
|
1212.5217 | A Neural Network Approach to ECG Denoising | cs.CE cs.NE | We propose an ECG denoising method based on a feed-forward neural network
with three hidden layers. Particularly useful for very noisy signals, this
approach uses the available ECG channels to reconstruct a noisy channel. We
tested the method on all the records from the Physionet MIT-BIH Arrhythmia
Database, adding electrode motion artifact noise. This denoising method
improved the performance of publicly available ECG analysis programs on noisy
ECG signals. This is an offline method that can be used to remove noise from
very corrupted Holter records.
|
1212.5238 | The Twitter of Babel: Mapping World Languages through Microblogging
Platforms | physics.soc-ph cs.CL cs.SI | Large-scale analysis and statistics of socio-technical systems that just a
few short years ago would have required considerable economic and
human resources can nowadays be conveniently performed by mining the enormous
amount of digital data produced by human activities. Although a
characterization of several aspects of our societies is emerging from the data
revolution, a number of questions concerning the reliability and the biases
inherent to the big data "proxies" of social life are still open. Here, we
survey worldwide linguistic indicators and trends through the analysis of a
large-scale dataset of microblogging posts. We show that available data allow
for the study of language geography at scales ranging from country-level
aggregation to specific city neighborhoods. The high resolution and coverage of
the data allow us to investigate different indicators such as the linguistic
homogeneity of different countries, the touristic seasonal patterns within
countries and the geographical distribution of different languages in
multilingual regions. This work highlights the potential of geolocalized
studies of open data sources to improve current analysis and develop indicators
for major social phenomena in specific communities.
|
1212.5250 | A genetic algorithm applied to the validation of building thermal models | cs.NE | This paper presents the coupling of a building thermal simulation code with
genetic algorithms (GAs). GAs are randomized search algorithms that are based
on the mechanisms of natural selection and genetics. We show that this coupling
allows the location of defective sub-models of a building thermal model, i.e.,
parts of the model that are responsible for disagreements between measurements
and model predictions. The method is first checked and validated against a
numerical model of a building taken as reference. It is then applied
to a real building case. The results show that the method could constitute an
efficient tool when checking the model validity.
|
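The GA ingredient can be sketched generically: a population of candidate parameter vectors evolves by elitist selection, averaging crossover and Gaussian mutation until a measurement/prediction discrepancy is minimised. The fitness below is a hypothetical quadratic stand-in, not the thermal code's actual residual.

```python
import random

def ga_minimize(fitness, bounds, pop_size=40, gens=60, seed=0):
    """Elitist GA: keep the best half, breed children by crossover + mutation."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]                      # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            i = rng.randrange(dim)
            child[i] += rng.gauss(0, 0.1)                # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Hypothetical "defective sub-model" search: recover the target parameter values
target = [2.0, -1.0]
fit = lambda p: sum((x - t) ** 2 for x, t in zip(p, target))
best = ga_minimize(fit, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

In the validation setting, parameters that the GA must push far from their nominal values to fit the measurements point to the defective sub-models.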
1212.5252 | Bringing simulation to implementation: Presentation of a global approach
in the design of passive solar buildings under humid tropical climates | cs.CE physics.class-ph | In early 1995, a DSM pilot initiative was launched in the French islands
of Guadeloupe and Reunion through a partnership between several public and
private partners (the French Public Utility EDF, the University of Reunion
Island, low-cost housing companies, architects, energy consultants, etc.) to
set up standards to improve thermal design of new residential buildings in
tropical climates. This partnership led to defining optimized bio-climatic
urban planning and architectural designs featuring the use of passive cooling
architectural principles (solar shading, natural ventilation) and components,
as well as energy efficient systems and technologies. The design and sizing of
each architectural component on internal thermal comfort in building has been
assessed with a validated thermal and airflow building simulation software
(CODYRUN). These technical specifications have been edited in a reference
document which has been used to build over 300 new pilot dwellings through the
years 1996-1998 in Reunion Island and in Guadeloupe. Experimental monitoring
was carried out in these first ECODOM dwellings in 1998 and 1999, resulting
in experimental validation of the impact of the passive cooling strategies on
the thermal comfort of occupants and leading to modified specifications where
necessary. The paper presents the methodology used for the elaboration of
ECODOM, from the simulations to the experimental results. This follow-up is
important, as the setting up of the ECODOM standard will be the first step
towards the setting up of thermal regulations in the French overseas
territories, by the year 2002.
|
1212.5253 | Development of a new model to predict indoor daylighting: Integration in
CODYRUN software and validation | cs.CE | Many models exist in the scientific literature for determining indoor
daylighting values. They are classified in three categories: numerical,
simplified and empirical models. Nevertheless, none of these categories of
models is convenient for every application. Indeed, the numerical model
requires long calculation times; the conditions of use of the simplified models
are limited; and experimental models require not only substantial financial
resources but also perfect control of the experimental devices (e.g. scale
models), as well as of the climatic characteristics of the location (e.g. in
situ experiments). In this
article, a new model based on a combination of multiple simplified models is
established. The objective is to improve this category of model. The
originality of our paper lies in the coupling of several simplified models of
indoor daylighting calculation. The accuracy of the simulation code,
introduced into CODYRUN software to simulate correctly indoor illuminance, is
then verified. Besides, the software consists of a numerical building
simulation code, developed in the Physics and Mathematical Engineering
Laboratory for Energy and Environment (P.I.M.E.N.T) at the University of
Reunion. Initially dedicated to the thermal, airflow and hydrous phenomena in
the buildings, the software has been completed for the calculation of indoor
daylighting. New models and algorithms - which rely on a semi-detailed approach
- will be presented in this paper. In order to validate the accuracy of the
integrated models, many test cases have been considered as analytical,
inter-software comparisons and experimental comparisons. In order to prove the
accuracy of the new model - which can properly simulate the illuminance - a
comparison between the results obtained from the software developed in this
research and measurements made at a given place is described in detail. A
new statistical indicator to assess the margins of error - named RSD
(Reliability of Software Degrees) - is also defined.
|
1212.5255 | A Comparison between CODYRUN and TRNSYS, simulation models for thermal
buildings behaviour | cs.CE | Simulation codes of thermal behaviour could significantly improve housing
construction design. Among the existing software, CODYRUN and TRNSYS are
calculation codes of different conception. CODYRUN is exclusively dedicated
to housing thermal behaviour, whereas TRNSYS is more generally used on any
thermal system. The purpose of this article is to compare these two tools
under two different conditions. We first model a mono-zone test cell and
analyse the results by means of signal processing methods. Then, we
model a real case of multi-zone housing, representative of housing in wet
tropical climates. We can thus evaluate the influence of meteorological and
building description data on model errors.
|
1212.5256 | Thermal Building Simulation and Computer Generation of Nodal Models | cs.CE | The designer's preoccupation with reducing energy needs and achieving better
thermal quality of ambiances has helped in the development of several packages
simulating the dynamic behaviour of buildings. This paper shows the adaptation
of a method of thermal analysis, nodal analysis, to the case of a
building's thermal behaviour. We successively consider the case of
conduction in a wall, the coupling with surface exchanges and finally
the constitution of thermal state models of the building. Since large
variations exist from one building to another, it is necessary to build the
thermal model from the building description. This article presents the method
chosen in our thermal simulation program for buildings, CODYRUN.
|
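The nodal idea can be sketched as a one-dimensional RC network for conduction through a wall (illustrative resistances and capacitances, not a model generated by CODYRUN): each node balances the fluxes exchanged with its neighbours, and the state is integrated with an explicit Euler scheme.

```python
import numpy as np

def simulate_wall(n_nodes, R, C, t_in, t_out, dt, steps):
    """Explicit-Euler integration of a 1-D RC nodal model of a wall.
    Node balance: C dT/dt = (T_left - T)/R + (T_right - T)/R."""
    T = np.full(n_nodes, t_out, dtype=float)
    for _ in range(steps):
        Tpad = np.concatenate(([t_in], T, [t_out]))   # boundary temperatures
        flux = (Tpad[:-2] - T) / R + (Tpad[2:] - T) / R
        T = T + dt * flux / C
    return T

# 5 nodes; R in K/W between nodes and C in J/K per node (illustrative values)
T = simulate_wall(n_nodes=5, R=0.5, C=1.0e4,
                  t_in=25.0, t_out=5.0, dt=60.0, steps=20000)
```

At steady state the temperature profile is linear between the two boundary temperatures, which the integration reproduces; generating such node equations automatically from the building description is the point of the article.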
1212.5260 | Heat transfer in buildings : application to air solar heating and Trombe
wall design | cs.CE | The aim of this paper is to briefly recall heat transfer modes and explain
their integration within software dedicated to building simulation (CODYRUN).
Detailed elements of the validation of this software are presented and two
applications are finally discussed. One concerns the modeling of a flat plate
air collector and the second focuses on the modeling of Trombe solar walls. In
each case, detailed modeling of heat transfer allows precise understanding of
thermal and energetic behavior of the studied structures. Recent decades have
seen a proliferation of tools for building thermal simulation. These
applications cover a wide spectrum from very simplified steady state models to
dynamic simulation ones, including computational fluid dynamics modules
(Clarke, 2001). These tools are widely available in design offices and
engineering firms. They are often used for the design of HVAC systems and are
still the subject of detailed research, particularly with respect to the integration of
new fields (specific insulation materials, lighting, pollutants transport,
etc.). Available from:
http://www.intechopen.com/books/evaporation-condensation-and-heat-transfer/heat-transfer-in-buildings-application-to-solar-air-collector-and-trombe-wall-design
|
1212.5262 | A multimodel approach to building thermal simulation for design and
research purposes | cs.CE | Designers' preoccupation with reducing energy consumption and achieving
better thermal ambience levels has favoured the development of numerous
building thermal dynamic simulation programs. Progress in the modelling of
phenomena and its transfer into the professional field has resulted in various
numerical approaches, ranging from software dedicated to architects for design
use to tools for laboratory use by expert thermal researchers. This analysis
shows that each approach tends to fulfil the specific needs of only a certain
kind of user in the building conception process. Our objective is
notably different, as we aim at a tool which can be used from the very initial
stage of a construction project through to the energy audit of an existing
building. In each of these cases, the expected results, the precision required
and the turnaround time are different parameters which call for a multiple
model approach to the building system.
|
1212.5263 | Use of BESTEST procedure to improve a building thermal simulation
program | cs.CE | Validation of building energy simulation programs is of major interest to
both users and modellers. To achieve such a task, it is essential to apply a
methodology based on a priori test and empirical validation. A priori test
consists in verifying that models embedded in a program and their
implementation are correct; this should be achieved before carrying out
experiments. The aim of this report is to present results from the
application of the BESTEST procedure to our code. We emphasise how it allows
us to find bugs in our program and also how it permits us to qualify models
of heat transfer by conduction.
|
1212.5264 | Statistical Traffic State Analysis in Large-scale Transportation
Networks Using Locality-Preserving Non-negative Matrix Factorization | cs.CE | Statistical traffic data analysis is a hot topic in traffic management and
control. In this field, current research focuses on analyzing traffic
flows of individual links or local regions in a transportation network. Less
attention is paid to the global view of traffic states over the entire
network, which is important for modeling large-scale traffic scenes. Our aim is
precisely to propose a new methodology for extracting spatio-temporal traffic
patterns, ultimately for modeling large-scale traffic dynamics, and long-term
traffic forecasting. We attack this issue by utilizing Locality-Preserving
Non-negative Matrix Factorization (LPNMF) to derive low-dimensional
representation of network-level traffic states. Clustering is performed on the
compact LPNMF projections to unveil typical spatial patterns and temporal
dynamics of network-level traffic states. We have tested the proposed method on
simulated traffic data generated for a large-scale road network, and reported
experimental results validate the ability of our approach for extracting
meaningful large-scale space-time traffic patterns. Furthermore, the derived
clustering results provide an intuitive understanding of spatial-temporal
characteristics of traffic flows in the large-scale network, and a basis for
potential long-term forecasting.
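The dimensionality-reduction step can be sketched with plain non-negative
matrix factorization via multiplicative updates; note this omits the
locality-preserving regularization that distinguishes LPNMF, and the toy
"traffic" matrix below is random data, not the paper's simulated network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "traffic" matrix: rows = links, columns = time slots (non-negative).
V = rng.random((30, 20))

def nmf(V, r, iters=200):
    """Plain NMF via multiplicative updates (Lee & Seung).
    LPNMF adds a locality-preserving graph term, omitted here."""
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis
    return W, H

W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H)
# The columns of H are the low-dimensional traffic-state representations;
# clustering them would reveal typical network-level patterns.
print(round(err, 3))
```

Clustering the columns of `H` (e.g. with k-means) then plays the role of the
pattern-discovery step described in the abstract.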
|
1212.5265 | An Effective Machine-Part Grouping Algorithm to Construct Manufacturing
Cells | cs.CE | The machine-part cell formation problem consists of creating machine cells
and their corresponding part families with the objective of minimizing the
inter-cell and intra-cell movement while maximizing the machine utilization.
This article demonstrates a hybrid clustering approach for the cell formation
problem in cellular manufacturing that uses Sorenson's similarity coefficient
to form the production cells. Computational results are shown on test
datasets obtained from the literature. The hybrid technique is shown to
outperform other methods proposed in the literature, including powerful soft
computing approaches such as genetic algorithms and genetic programming, by
exceeding their solution quality on the test problems.
|
1212.5271 | Towards the Evolution of Novel Vertical-Axis Wind Turbines | cs.NE cs.AI cs.CE | Renewable and sustainable energy is one of the most important challenges
currently facing mankind. Wind has made an increasing contribution to the
world's energy supply mix, but still remains a long way from reaching its full
potential. In this paper, we investigate the use of artificial evolution to
design vertical-axis wind turbine prototypes that are physically instantiated
and evaluated under approximated wind tunnel conditions. An artificial neural
network is used as a surrogate model to assist learning and found to reduce the
number of fabrications required to reach a higher aerodynamic efficiency,
resulting in an important cost reduction. Unlike in other approaches, such as
computational fluid dynamics simulations, no mathematical formulations are used
and no model assumptions are made.
|
1212.5275 | A Picard Newton method to solve non linear airflow networks | cs.CE | In detailed building simulation models, airflow modelling and solving are
still open and crucial problems, especially in the case of open buildings as
encountered in tropical climates. As a consequence, the wind speed
conditioning indoor thermal comfort, or the energy needs in the case of air
conditioning, are difficult to predict. One part of the problem is the lack
of reliable and usable elementary models for large openings; another concerns
the numerical solving of the airflow network. This non-linear pressure system
is solved by numerous methods, mainly based on the Newton-Raphson (NR)
method. This paper addresses this part of the difficulty in our software
CODYRUN. After model checks, we propose to use the Picard method (also known
as the fixed-point method) to initialise zone pressures. A linear system
(extracted from the non-linear set of equations) is solved around 10 times at
each time step, and NR uses this result for its initial values. Although
known to be uniformly but slowly convergent, this method turns out to be
really powerful for the building pressure system. The comparison of the
methods in terms of number of iterations is illustrated using a real test
case experiment.
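The two-phase strategy can be sketched on a toy one-zone balance with two
large openings obeying the usual Q = C*sqrt(|dP|) law; the coefficients and
boundary pressures are illustrative values, not taken from CODYRUN:

```python
import math

# Toy one-zone airflow balance: flow through each opening is Q = C*sqrt(|dP|).
# Steady state requires inflow = outflow:
#   f(p) = C1*sqrt(P1 - p) - C2*sqrt(p - P2) = 0   with P2 < p < P1
C1, C2, P1, P2 = 2.0, 1.0, 10.0, 0.0   # invented values

def residual(p):
    return C1 * math.sqrt(P1 - p) - C2 * math.sqrt(p - P2)

def picard_init(p, n=10):
    """Picard (fixed-point) step: linearize each branch as Q ~ K*(Pi - p)
    with K = C/sqrt(|Pi - p|), then solve the resulting *linear* balance."""
    for _ in range(n):
        k1 = C1 / math.sqrt(abs(P1 - p) + 1e-12)
        k2 = C2 / math.sqrt(abs(p - P2) + 1e-12)
        p = (k1 * P1 + k2 * P2) / (k1 + k2)
    return p

def newton(p, tol=1e-10):
    """Newton-Raphson on the true nonlinear residual."""
    for it in range(1, 100):
        fp = -C1 / (2 * math.sqrt(P1 - p)) - C2 / (2 * math.sqrt(p - P2))
        p_new = p - residual(p) / fp
        if abs(p_new - p) < tol:
            return p_new, it
        p = p_new
    return p, it

p0 = picard_init(5.0)           # cheap, robust initial guess
p_star, iters = newton(p0)      # fast local convergence from there
print(round(p_star, 6), iters)  # exact solution of this toy case is p = 8
```

As in the paper, the slow-but-robust Picard iterations supply a good starting
point so that NR only needs a handful of iterations.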
|
1212.5276 | Multi-Objective AI Planning: Evaluating DAE-YAHSP on a Tunable Benchmark | cs.AI | All standard AI planners to date can only handle a single objective, and the
only way for them to take multiple objectives into account is by aggregating
the objectives. Furthermore, and in deep contrast with the single-objective
case, there exist no benchmark problems on which to test algorithms for
multi-objective planning. Divide and Evolve (DAE) is an evolutionary planner
that won the (single-objective) deterministic temporal satisficing track at
the last International Planning Competition. Even though it makes intensive
use of the classical (and hence single-objective) planner YAHSP, it is possible to turn
DAE-YAHSP into a multi-objective evolutionary planner. A tunable benchmark
suite for multi-objective planning is first proposed, and the performances of
several variants of multi-objective DAE-YAHSP are compared on different
instances of this benchmark, hopefully paving the road to further
multi-objective competitions in AI planning.
|
1212.5284 | Dual-Based Bounds for Resource Allocation in Zero-forcing Beamforming
OFDMA-SDMA Systems | cs.IT math.IT | We consider multi-antenna base stations using orthogonal frequency division
multiple access and space division multiple access techniques to serve
single-antenna users. Some users, called real-time users, have minimum rate
requirements and must be served in the current time slot while others, called
non real-time users, do not have strict timing constraints and are served on a
best-effort basis. The resource allocation problem is to find the assignment of
users to subcarriers and the transmit beamforming vectors that maximize the
total user rates subject to power and minimum rate constraints. In general,
this is a nonlinear and non-convex program, and the zero-forcing technique used
here makes it integer as well, so exact optimal solutions cannot be computed in
reasonable time for realistic cases. For this reason, we present a technique to
compute both upper and lower bounds and show that these are quite close for
some realistic cases.
First, we formulate the dual problem whose optimum provides an upper bound to
all feasible solutions. We then use a simple method to get a primal-feasible
point starting from the dual optimal solution, which is a lower bound on the
primal optimal solution. Numerical results for several cases show that the two
bounds are close so that the dual method can be used to benchmark any heuristic
used to solve this problem. As an example, we provide numerical results showing
the performance gap of the well-known weight adjustment method and show that
there is considerable room for improvement.
|
1212.5288 | Quantized Network Coding for Correlated Sources | cs.IT math.IT | Non-adaptive joint source network coding of correlated sources is discussed
in this paper. By studying the information flow in the network, we propose
quantized network coding as an alternative for packet forwarding. This
technique has both network coding and distributed source coding advantages,
simultaneously. Quantized network coding is a combination of random linear
network coding in the (infinite) field of real numbers and quantization to cope
with the limited capacity of links. With the aid of the results in the
literature of compressed sensing, we discuss theoretical and practical
feasibility of quantized network coding in lossless networks. We show that, due
to the nature of the field it operates on, quantized network coding can provide
good quality decoding at a sink node with the reception of a reduced number of
packets. Specifically, we discuss the required conditions on local network
coding coefficients by using the restricted isometry property, and suggest a
design which yields appropriate linear measurements. Finally, our
simulation results show the achieved gain in terms of delivery delay, compared
to conventional routing based packet forwarding.
|
1212.5289 | Modeling and performance evaluation of computer systems security
operation | cs.CR cs.SY eess.SY math.OC | A model of computer system security operation is developed based on the
fork-join queueing network formalism. We introduce a security operation
performance measure, and show how it may be used for the performance
evaluation of actual systems.
|
1212.5291 | Products of random matrices and queueing system performance evaluation | math.OC cs.SY | We consider (max,+)-algebra products of random matrices, which arise from
performance evaluation of acyclic fork-join queueing networks. A new algebraic
technique to examine properties of the product and investigate its limiting
behaviour is proposed based on an extension of the standard matrix
(max,+)-algebra by endowing it with the ordinary matrix addition as an external
operation. As an application, we derive bounds on the (max,+)-algebra maximal
Lyapunov exponent which can be considered as the cycle time of the networks.
|
1212.5300 | Distributed Full-duplex via Wireless Side Channels: Bounds and Protocols | cs.IT math.IT | In this paper, we study a three-node full-duplex network, where a base
station is engaged in simultaneous up- and downlink communication in the same
frequency band with two half-duplex mobile nodes. To reduce the impact of
inter-node interference between the two mobile nodes on the system capacity,
we study how an orthogonal side-channel between the two mobile nodes can be
leveraged to achieve full-duplex-like multiplexing gains. We propose and
characterize the achievable rates of four distributed full-duplex schemes,
labeled bin-and-cancel, compress-and-cancel, estimate-and-cancel and
decode-and-cancel. Of the four, bin-and-cancel is shown to achieve within 1
bit/s/Hz
of the capacity region for all values of channel parameters. In contrast, the
other three schemes achieve the near-optimal performance only in certain
regimes of channel values. Asymptotic multiplexing gains of all proposed
schemes are derived to show that the side-channel is extremely effective in
regimes where inter-node interference has the highest impact.
|
1212.5303 | Relational Foundations For Functorial Data Migration | cs.DB math.CT math.LO | We study the data transformation capabilities associated with schemas that
are presented by directed multi-graphs and path equations. Unlike most
approaches which treat graph-based schemas as abbreviations for relational
schemas, we treat graph-based schemas as categories. A schema $S$ is a
finitely-presented category, and the collection of all $S$-instances forms a
category, $S$-inst. A functor $F$ between schemas $S$ and $T$, which can be
generated from a visual mapping between graphs, induces three adjoint data
migration functors, $\Sigma_F:S$-inst$\to T$-inst, $\Pi_F: S$-inst $\to
T$-inst, and $\Delta_F:T$-inst $\to S$-inst. We present an algebraic query
language FQL based on these functors, prove that FQL is closed under
composition, prove that FQL can be implemented with the
select-project-product-union relational algebra (SPCU) extended with a
key-generation operation, and prove that SPCU can be implemented with FQL.
|
1212.5315 | A hybrid FD-FV method for first-order hyperbolic conservation laws on
Cartesian grids: The smooth problem case | math.NA cs.CE cs.NA | We present a class of hybrid FD-FV (finite difference and finite volume)
methods for solving general hyperbolic conservation laws written in first-order
form. The presentation focuses on one- and two-dimensional Cartesian grids;
however, the generalization to higher dimensions is straightforward. These
methods use both cell-averaged values and nodal values as dependent variables
to discretize the governing partial differential equation (PDE) in space, and
they are combined with the method of lines for integration in time. This
framework requires no Riemann solvers, yet achieves numerical conservation
naturally. This paper focuses on the accuracy and linear stability of the
proposed FD-FV methods, thus we suppose in addition that the solutions are
sufficiently smooth. In particular, we prove that the spatial-order of the
FD-FV method is typically one-order higher than that of the discrete
differential operator, which is involved in the construction of the method. In
addition, the methods are linearly stable subject to a Courant-Friedrichs-Lewy
condition when appropriate time-integrators are used. The numerical performance
of the methods is assessed by a number of benchmark problems in one and two
dimensions. These examples include the linear advection equation, nonlinear
Euler equations, the solid dynamics problem for linear elastic orthotropic
materials, and the Buckley-Leverett equation.
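The conservation and CFL-stability properties discussed above can be
illustrated with a much simpler scheme than the hybrid FD-FV method itself: a
first-order finite-volume upwind discretization of linear advection on a
periodic 1-D Cartesian grid (all parameters below are invented):

```python
# First-order finite-volume upwind scheme for linear advection
# u_t + a*u_x = 0 on a periodic 1-D grid: cell averages are updated from
# interface fluxes, so conservation holds by construction (fluxes telescope),
# and stability requires the CFL number a*dt/dx <= 1.
a, nx, L = 1.0, 100, 1.0
dx = L / nx
cfl = 0.8                  # stable only for cfl <= 1
dt = cfl * dx / a

# Square-pulse initial condition on cell averages.
u = [1.0 if 0.25 <= (i + 0.5) * dx <= 0.5 else 0.0 for i in range(nx)]
mass0 = sum(u) * dx

for _ in range(int(1.0 / dt)):
    # Upwind flux at the left face of cell i (a > 0: take the left value).
    flux = [a * u[i - 1] for i in range(nx)]
    u = [u[i] - dt / dx * (flux[(i + 1) % nx] - flux[i]) for i in range(nx)]

mass = sum(u) * dx
print(abs(mass - mass0) < 1e-9)   # mass is conserved (fluxes telescope)
```

Raising `cfl` above 1 makes the same loop blow up, which is the discrete
counterpart of the CFL condition quoted in the abstract.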
|
1212.5316 | Quantum rate distortion coding with auxiliary resources | quant-ph cs.IT math.IT | We extend quantum rate distortion theory by considering auxiliary resources
that might be available to a sender and receiver performing lossy quantum data
compression. The first setting we consider is that of quantum rate distortion
coding with the help of a classical side channel. Our result here is that the
regularized entanglement of formation characterizes the quantum rate distortion
function, extending earlier work of Devetak and Berger. We also combine this
bound with the entanglement-assisted bound from our prior work to obtain the
best known bounds on the quantum rate distortion function for an isotropic
qubit source. The second setting we consider is that of quantum rate distortion
coding with quantum side information (QSI) available to the receiver. In order
to prove results in this setting, we first state and prove a quantum reverse
Shannon theorem with QSI (for tensor-power states), which extends the known
tensor-power quantum reverse Shannon theorem. The achievability part of this
theorem relies on the quantum state redistribution protocol, while the converse
relies on the fact that the protocol can cause only a negligible disturbance to
the joint state of the reference and the receiver's QSI. This quantum reverse
Shannon theorem with QSI naturally leads to quantum rate-distortion theorems
with QSI, with or without entanglement assistance.
|
1212.5331 | Adapting Voting Techniques for Online Forum Thread Retrieval | cs.IR | Online forums or message boards are rich knowledge-based communities. In
these communities, thread retrieval is an essential tool facilitating
information access. However, a key issue in thread search is how to combine
evidence from text units (messages) to estimate thread relevance. In this
paper, we first rank a list of messages, and then score threads by
aggregating their ranked messages' scores. To aggregate the message scores,
we adopt several voting techniques that have been applied in aggregate
ranking tasks such as blog distillation and expert finding. Experimental
results show that many voting techniques are preferable to a baseline that
treats a thread as a concatenation of its message texts.
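The aggregation step can be sketched with two classic voting techniques
(CombSUM and CombMNZ); the abstract does not specify which techniques were
adopted, and the message scores below are invented:

```python
from collections import defaultdict

# Fuse per-message retrieval scores into thread scores. Each tuple is
# (message id, parent thread, retrieval score); all values are invented.
ranked_messages = [
    ("m1", "threadA", 0.9), ("m2", "threadB", 0.8),
    ("m3", "threadA", 0.5), ("m4", "threadC", 0.4),
    ("m5", "threadB", 0.3), ("m6", "threadA", 0.2),
]

def comb_sum(msgs):
    """CombSUM: a thread's score is the sum of its messages' scores."""
    scores = defaultdict(float)
    for _, thread, s in msgs:
        scores[thread] += s
    return dict(scores)

def comb_mnz(msgs):
    """CombMNZ: CombSUM multiplied by the number of voting messages."""
    sums, counts = defaultdict(float), defaultdict(int)
    for _, thread, s in msgs:
        sums[thread] += s
        counts[thread] += 1
    return {t: sums[t] * counts[t] for t in sums}

scores = comb_sum(ranked_messages)
print(max(scores, key=scores.get))   # threadA wins under CombSUM
```

CombMNZ additionally rewards threads with many retrieved messages, which is
the kind of evidence combination the abstract contrasts with simple text
concatenation.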
|
1212.5352 | On the Adaptability of Neural Network Image Super-Resolution | cs.CV | In this paper, we describe and develop a framework in which a Multilayer
Perceptron (MLP) performs low-level image processing, namely image
super-resolution. MLPs are trained with different types of images from
various categories, and we analyse the behaviour and performance of the
neural network. The tests are carried out using quantitative measures: Mean
Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and the Structural
Similarity Index (SSIM). The results show that an MLP trained on a single
image category can perform reasonably well compared to methods proposed by
other researchers.
|
1212.5359 | Fuzzy soft rough K-Means clustering approach for gene expression data | cs.LG cs.CE | Clustering is one of the widely used data mining techniques for medical
diagnosis. Clustering can be considered as the most important unsupervised
learning technique. Most of the clustering methods group data based on distance
and few methods cluster data based on similarity. The clustering algorithms
classify gene expression data into clusters and the functionally related genes
are grouped together in an efficient manner. The groupings are constructed such
that the degree of relationship is strong among members of the same cluster and
weak among members of different clusters. In this work, we focus on a
similarity relationship among genes with similar expression patterns so that a
consequential and simple analytical decision can be made from the proposed
Fuzzy Soft Rough K-Means algorithm. The algorithm is developed based on Fuzzy
Soft sets and Rough sets. Comparative analysis of the proposed work is made
with benchmark algorithms such as K-Means and Rough K-Means, and the
efficiency of the proposed algorithm is illustrated using various cluster
validity measures such as the DB index and the Xie-Beni index.
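The Davies-Bouldin (DB) validity measure used in the evaluation can be
sketched for 1-D data with fixed cluster assignments; the toy points below
are invented, not gene expression values, and lower DB values indicate more
compact, well-separated clusters:

```python
def davies_bouldin(clusters):
    """Davies-Bouldin index for clusters given as lists of 1-D points."""
    centroids = [sum(c) / len(c) for c in clusters]
    # Average within-cluster distance to the centroid (scatter S_i).
    scatter = [sum(abs(x - m) for x in c) / len(c)
               for c, m in zip(clusters, centroids)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        # R_ij = (S_i + S_j) / d(c_i, c_j); take the worst pairing for i.
        total += max((scatter[i] + scatter[j])
                     / abs(centroids[i] - centroids[j])
                     for j in range(k) if j != i)
    return total / k

tight = [[0.0, 0.1, 0.2], [5.0, 5.1, 5.2]]   # compact, far apart
loose = [[0.0, 2.0, 4.0], [3.0, 5.0, 7.0]]   # spread out, overlapping
print(davies_bouldin(tight) < davies_bouldin(loose))  # True
```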
|
1212.5374 | A Blind Time-Reversal Detector in the Presence of Channel Correlation | cs.IT cs.PF math.IT | A blind target detector using the time reversal transmission is proposed in
the presence of channel correlation. We calculate the exact moments of the test
statistics involved. The derived moments are used to construct an accurate
approximative Likelihood Ratio Test (LRT) based on multivariate Edgeworth
expansion. Performance gain over an existing detector is observed in scenarios
with channel correlation and relatively strong target signal.
|
1212.5389 | Relationship-aware sequential pattern mining | cs.DB stat.AP | Relationship-aware sequential pattern mining is the problem of mining
frequent patterns in sequences in which the events of a sequence are mutually
related by one or more concepts from some respective hierarchical taxonomies,
based on the type of the events. Additionally, events themselves are also
described by a certain number of taxonomical concepts. We present RaSP, an
algorithm that is able to mine relationship-aware patterns over such
sequences. RaSP follows a two-stage approach. In the first stage it mines for
frequent type patterns and all their occurrences within the different
sequences. In the second stage it performs hierarchical mining where, for
each frequent type pattern and its occurrences, it mines for more specific
frequent patterns in the lower levels of the taxonomies. We test RaSP on a
real-world medical application, which provided the inspiration for its
development and in which we mine for frequent patterns of medical behavior in
the antibiotic treatment of microbes, and show that it has very good
computational performance given the complexity of the relationship-aware
sequential pattern mining problem.
|
1212.5391 | Soft Set Based Feature Selection Approach for Lung Cancer Images | cs.LG cs.CE | Lung cancer is the deadliest type of cancer for both men and women. Feature
selection plays a vital role in cancer classification. This paper investigates
the feature selection process in Computed Tomographic (CT) lung cancer images
using soft set theory. We propose a new soft set based unsupervised feature
selection algorithm. Nineteen features are extracted from the segmented lung
images using the gray level co-occurrence matrix (GLCM) and the gray level
difference matrix (GLDM). In this paper, an efficient Unsupervised Soft Set
based Quick
Reduct (SSUSQR) algorithm is presented. This method is used to select features
from the data set and compared with existing rough set based unsupervised
feature selection methods. Then K-Means and Self Organizing Map (SOM)
clustering algorithms are used to cluster the data. The performance of the
feature selection algorithms is evaluated based on performance of clustering
techniques. The results show that the proposed method effectively removes
redundant features.
|
1212.5394 | Optimal Scheduling and Power Allocation for Two-Hop Energy Harvesting
Communication Systems | cs.IT math.IT | Energy harvesting (EH) has recently emerged as a promising technique for
green communications. To realize its potential, communication protocols need to
be redesigned to combat the randomness of the harvested energy. In this paper,
we investigate how to apply relaying to improve the short-term performance of
EH communication systems. With an EH source and a non-EH half-duplex relay, we
consider two different design objectives: 1) short-term throughput
maximization; and 2) transmission completion time minimization. Both problems
are joint scheduling and power allocation problems, rendered quite challenging
by the half-duplex constraint at the relay. A key finding is that directional
water-filling (DWF), which is the optimal power allocation algorithm for the
single-hop EH system, can serve as guideline for the design of two-hop
communication systems, as it not only determines the value of the optimal
performance, but also forms the basis to derive optimal solutions for both
design problems. Based on a relaxed energy profile along with the DWF
algorithm, we derive key properties of the optimal solutions for both problems
and thereafter propose efficient algorithms. Simulation results show that
both scheduling and power allocation optimizations are necessary in two-hop EH
communication systems.
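For the single-hop case that guides the design, directional water-filling
reduces to a staircase construction: harvested energy may only flow forward
in time, so the optimal power profile repeatedly takes the prefix with the
smallest average cumulative energy. A minimal sketch with invented arrival
values and unit-length epochs:

```python
def directional_water_filling(arrivals):
    """Optimal transmit powers for a single-hop EH link with unit epochs.

    Energy causality: power used up to any epoch cannot exceed the energy
    harvested up to that epoch. The optimal profile is nondecreasing and is
    built by repeatedly taking the prefix with minimum average energy."""
    powers = []
    start = 0
    while start < len(arrivals):
        best_avg, best_end = None, start
        total = 0.0
        for end in range(start, len(arrivals)):
            total += arrivals[end]
            avg = total / (end - start + 1)
            if best_avg is None or avg <= best_avg:   # ties -> longer prefix
                best_avg, best_end = avg, end
        powers += [best_avg] * (best_end - start + 1)
        start = best_end + 1
    return powers

E = [4.0, 1.0, 1.0, 6.0]   # invented harvested-energy arrivals
p = directional_water_filling(E)
print(p)   # [2.0, 2.0, 2.0, 6.0]: water spreads forward, never backward
```

Note how the large first arrival is spread over the lean middle epochs, while
the final arrival cannot flow backward, which is the "directional" part.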
|
1212.5404 | Edge Union of Networks on the Same Vertex Set | physics.soc-ph cond-mat.stat-mech cs.SI | Random network generators such as the Erdos-Renyi, Watts-Strogatz and
Barabasi-Albert models are used to study real-world networks. Let G^1(V,E_1)
and G^2(V,E_2) be two such networks on the same vertex set V. This paper
studies the degree distribution and clustering coefficient of the resultant
network, G(V, E_1 U E_2).
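The edge-union construction and the two quantities studied can be sketched
directly; the two tiny graphs below are invented examples, not samples from
the ER/WS/BA models:

```python
from itertools import combinations

# Two graphs on the same vertex set, merged by taking the union of their
# edge sets; then compute degrees and the average clustering coefficient
# of the resulting graph G(V, E1 U E2).
V = [0, 1, 2, 3]
E1 = {(0, 1), (1, 2)}
E2 = {(0, 2), (2, 3)}
E = E1 | E2                      # edge union

adj = {v: set() for v in V}
for u, w in E:
    adj[u].add(w)
    adj[w].add(u)

def local_clustering(v):
    """Fraction of pairs of v's neighbours that are themselves connected."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * links / (k * (k - 1))

degrees = {v: len(adj[v]) for v in V}
avg_cc = sum(local_clustering(v) for v in V) / len(V)
print(degrees, round(avg_cc, 3))
```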
|
1212.5406 | Relaying Protocols for Wireless Energy Harvesting and Information
Processing | cs.IT math.IT | An emerging solution for prolonging the lifetime of energy-constrained relay
nodes in wireless networks is to use the ambient radio-frequency (RF) signal
to simultaneously harvest energy and process information. In this paper, an
amplify-and-forward (AF) relaying network is considered, where an energy
constrained relay node harvests energy from the received RF signal and uses
that harvested energy to forward the source information to the destination.
Based on the time switching and power splitting receiver architectures, two
relaying protocols, namely, i) time switching-based relaying (TSR) protocol and
ii) power splitting-based relaying (PSR) protocol are proposed to enable energy
harvesting and information processing at the relay. In order to determine the
throughput, analytical expressions for the outage probability and the ergodic
capacity are derived for delay-limited and delay-tolerant transmission modes,
respectively. The numerical analysis provides practical insights into the
effect of various system parameters, such as energy harvesting time, power
splitting ratio, source transmission rate, source to relay distance, noise
power, and energy harvesting efficiency, on the performance of wireless energy
harvesting and information processing using AF relay nodes. In particular, the
TSR protocol outperforms the PSR protocol in terms of throughput at relatively
low signal-to-noise-ratios and high transmission rate.
|
1212.5421 | Design of a Smart Embedded Uninterrupted Power Supply System for
Personal Computers | cs.SY | Digital equipment such as computers, telecommunication systems and
instruments use microprocessors that operate at high frequencies allowing them
to carry out millions or even billions of operations per second. A disturbance
in the electrical supply lasting just a few milliseconds can affect thousands
or millions of basic operations. The result may be malfunctioning and loss of
data with dangerous or costly consequences (e.g. loss of production). That is
why many loads, called sensitive or critical loads, require a supply that is
protected. Many manufacturers of sensitive equipment specify very strict
tolerances, much stricter than those in the distribution system for the supply
of their equipment, one example being Computer Business Equipment Manufacturers
Association for computer equipment against distribution system disturbances.
The design of this uninterrupted power supply (UPS) for personal computer (PC)
is necessitated due to a need for enhanced portability in the design of
personal computer desktop workstations. Apart from its original functionality
as a backup source of power, this design incorporates the unit within the
system unit casing, thereby reducing the number of separate system
components. Embedding the unit also removes the untidiness of connecting
wires and makes the whole computer act like a laptop. An important part of
the circuitry is a microcontroller, which eliminates the heavy and
space-consuming components of a conventional design. The use of this
microcontroller places the UPS in the class of advanced technology devices.
|
1212.5423 | Topic Extraction and Bundling of Related Scientific Articles | cs.IR cs.DL stat.ML | Automatic classification of scientific articles based on common
characteristics is an interesting problem with many applications in digital
library and information retrieval systems. Properly organized articles can be
useful for automatic generation of taxonomies in scientific writings, textual
summarization, efficient information retrieval etc. Generating article bundles
from a large number of input articles, based on their associated features, is
a tedious and computationally expensive task. In this report we
propose an automatic two-step approach for topic extraction and bundling of
related articles from a set of scientific articles in real-time. For topic
extraction, we make use of Latent Dirichlet Allocation (LDA) topic modeling
techniques and for bundling, we make use of hierarchical agglomerative
clustering techniques.
We run experiments to validate our bundling semantics and compare it with
existing models in use. We make use of an online crowdsourcing marketplace
provided by Amazon called Amazon Mechanical Turk to carry out experiments. We
explain our experimental setup and empirical results in detail and show that
our method is advantageous over existing ones.
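The bundling step can be sketched with single-linkage agglomerative
clustering over per-article topic vectors; the LDA topic-extraction step is
omitted here, and the 2-D "topic proportion" vectors below are invented:

```python
# Hierarchical agglomerative clustering (single linkage) over article
# topic vectors. Article names and vectors are invented placeholders.
articles = {"a1": (0.9, 0.1), "a2": (0.85, 0.15),
            "a3": (0.1, 0.9), "a4": (0.2, 0.8)}

def dist(p, q):
    return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

def single_link(c1, c2):
    """Single linkage: distance between the closest pair of members."""
    return min(dist(articles[a], articles[b]) for a in c1 for b in c2)

def agglomerate(names, n_bundles):
    clusters = [[n] for n in names]
    while len(clusters) > n_bundles:
        # Merge the closest pair of clusters under single linkage.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_link(clusters[ij[0]],
                                              clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

bundles = agglomerate(list(articles), 2)
print(sorted(bundles))   # [['a1', 'a2'], ['a3', 'a4']]
```

Articles dominated by the same topic end up in the same bundle, which is the
intended bundling semantics.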
|
1212.5440 | Development of an Anti-collision Model for Vehicles | cs.SY | The Anti Collision device is a detection device meant to be incorporated into
cars for the purpose of safety. As opposed to the anti collision devices
present in the market today, this system is not designed to control the
vehicle. Instead, it serves as an alert in the face of imminent collision. The
device is intended to find a way to implement a minimum spacing for cars in
traffic in an affordable way. It would also achieve safety for the passengers
of a moving car. The device is made up of an infrared transmitter and receiver.
Also incorporated into it is an audio visual alarm to work in with the receiver
and effectively alert the driver and/or the passengers. To achieve this design,
555 timers, configured as both astable and monostable circuits, were used
along with a 38 kHz square-pulse generator. The device works by sending out
streams of infrared radiation; when these rays are detected by another
equipped vehicle, both vehicles are meant to take the necessary precautions
to avert a collision. The device will still sound an alarm even when it is
not receiving infrared beams from an oncoming vehicle, due to reflection of
its own infrared beams. At the end of the design and testing process, the
overall system was constructed, tested, and found to be fully functional.
|
1212.5442 | \'Etude compar\'ee de quatre logiciels de gestion de r\'ef\'erences
bibliographiques libres ou gratuits | cs.IR | This article is the result of the analysis of various bibliographic reference
management tools, especially those that are free. The use of editorial tools by
bibliographic editors has evolved rapidly since 2007. But, until recently, free
software has fallen short when it comes to ergonomics or use. The functional
and technical panorama offered by free software is the result of the comparison
of JabRef, Mendeley Desktop, BibDesk and Zotero software undertaken in January
2012 by two research professors affiliated with the Institut national
fran\c{c}ais des techniques de la documentation (INTD).
|
1212.5449 | Characterizing Multivariate Information Flows | cs.IT math.DS math.IT stat.ME | One of the crucial steps in scientific studies is to specify dependent
relationships among factors in a system of interest. Given little knowledge of
a system, can we characterize the underlying dependent relationships through
observation of its temporal behaviors? In multivariate systems, there are
potentially many possible dependent structures confusable with each other,
which may cause false detection of illusory dependencies between unrelated
factors.
The present study proposes a new information-theoretic measure with
consideration to such potential multivariate relationships. The proposed
measure, called multivariate transfer entropy, is an extension of transfer
entropy, a measure of temporal predictability. In the simulations and empirical
studies, we demonstrated that the proposed measure characterized the latent
dependent relationships in unknown dynamical systems more accurately than its
alternative measure.
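Ordinary (bivariate) transfer entropy, which the proposed multivariate
measure extends, can be sketched for binary sequences with history length 1;
the coupled series below are synthetic:

```python
import math
import random
from collections import Counter

def transfer_entropy(src, dst):
    """Transfer entropy (bits) from src to dst for discrete sequences,
    history length 1: T = sum p(x+,x,y) * log2[ p(x+|x,y) / p(x+|x) ]."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(dst[:-1], src[:-1]))           # (x_t, y_t)
    pairs_xx = Counter(zip(dst[1:], dst[:-1]))            # (x_{t+1}, x_t)
    singles = Counter(dst[:-1])                           # x_t
    n = len(dst) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * math.log2(p_cond_xy / p_cond_x)
    return te

random.seed(1)
y = [random.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]                       # x copies y with a one-step delay
print(transfer_entropy(y, x))          # close to 1 bit: y drives x
print(transfer_entropy(x, y))          # close to 0: no flow back
```

The asymmetry of the two values is what makes transfer entropy a measure of
directed temporal predictability; the multivariate extension conditions on
additional factors to rule out illusory dependencies.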
|
1212.5454 | In Vivo Quantification of Clot Formation in Extracorporeal Circuits | cs.CV physics.med-ph | Clot formation is a common complication in extracorporeal circuits. In this
paper we describe a novel method for clot formation analysis using image
processing. We assembled a closed extracorporeal circuit and circulated blood
at varying speeds. Blood filters were placed downstream of the flow, and
clotting agents were added to the circuit. Digital images of the filter were
subsequently taken, and image analysis was applied to calculate the density of
the clot. Our results show a significant correlation between the cumulative
size of the clots, the density measure of the clot based on image analysis, and
flow duration in the system.
|
1212.5461 | Interactive Ant Colony Optimisation (iACO) for Early Lifecycle Software
Design | cs.SE cs.AI | Software design is crucial to successful software development, yet is a
demanding multi-objective problem for software engineers. In an attempt to
assist the software designer, interactive (i.e. human in-the-loop)
meta-heuristic search techniques such as evolutionary computing have been
applied and show promising results. Recent investigations have also shown that
Ant Colony Optimization (ACO) can outperform evolutionary computing as a
potential search engine for interactive software design. With a limited
computational budget, ACO produces superior candidate design solutions in a
smaller number of iterations. Building on these findings, we propose a novel
interactive ACO (iACO) approach to assist the designer in early lifecycle
software design, in which the search is steered jointly by subjective designer
evaluation and by machine fitness functions relating to the structural
integrity and surrogate elegance of software designs. Results show that iACO is
speedy, responsive and highly effective in enabling interactive, dynamic
multi-objective search in early lifecycle software design. Study participants
rate the iACO search experience as compelling. Results of machine learning of
fitness measure weightings indicate that software design elegance does indeed
play a significant role in designer evaluation of candidate software designs. We
conclude that the evenness of the number of attributes and methods among
classes (NAC) is a significant surrogate elegance measure, which in turn
suggests that this evenness of distribution, when combined with structural
integrity, is an implicit but crucial component of effective early lifecycle
software design.
|
1212.5462 | On the Impact of Phase Noise on Active Cancellation in Wireless
Full-Duplex | cs.IT math.IT | Recent experimental results have shown that full-duplex communication is
possible for short-range communications. However, extending full-duplex to
long-range communication remains a challenge, primarily due to residual
self-interference even with a combination of passive suppression and active
cancellation methods. In this paper, we investigate the root cause of
performance bottlenecks in current full-duplex systems. We first classify all
known full-duplex architectures based on how they compute their cancelling
signal and where the cancelling signal is injected to cancel self-interference.
Based on the classification, we analytically explain several published
experimental results. The key bottleneck in current systems turns out to be the
phase noise in the local oscillators in the transmit and receive chain of the
full-duplex node. As a key by-product of our analysis, we propose signal models
for wideband and MIMO full-duplex systems, capturing all the salient design
parameters, and thus allowing future analytical development of advanced coding
and signal design for full-duplex systems.
|
1212.5473 | Spin foam with topologically encoded tetrad on trivalent spin networks | cs.IT math.IT | We explore discrete approaches in LQG where all fields, the gravitational
tetrad, and the matter and energy fields, are encoded implicitly in a graph
instead of being additional data. Our graph should therefore be richer than a
simple simplicial decomposition. It has to embed geometrical information and
the standard model. We start from Lisi's model. We build a trivalent graph
which is an F4 lattice of 48-valent supernodes, reduced as trivalent subgraphs,
and topologically encoding data. We show it is a solution for EFE with no
matter. We define bosons and half-fermions in two dual bases. They are encoded
by bit exchanges in supernodes, operated by the Pachner 2-2 move, and the rest
state can be restored thanks to information redundancy. Despite its
4-dimensional nature,
our graph is a trivalent spin network, and its history is a pentavalent spin
foam.
|
1212.5524 | Reinforcement learning for port-Hamiltonian systems | cs.SY cs.LG | Passivity-based control (PBC) for port-Hamiltonian systems provides an
intuitive way of achieving stabilization by rendering a system passive with
respect to a desired storage function. However, in most instances the control
law is obtained without any performance considerations and it has to be
calculated by solving a complex partial differential equation (PDE). In order
to address these issues we introduce a reinforcement learning approach into the
energy-balancing passivity-based control (EB-PBC) method, which is a form of
PBC in which the closed-loop energy is equal to the difference between the
stored and supplied energies. We propose a technique to parameterize EB-PBC
that preserves the system's PDE matching conditions, does not require the
specification of a global desired Hamiltonian, includes performance criteria,
and is robust to extra non-linearities such as control input saturation. The
parameters of the control law are found using actor-critic reinforcement
learning, enabling the learning of near-optimal control policies that satisfy a
desired closed-loop energy landscape. The advantages are that near-optimal
controllers
can be generated using standard energy shaping techniques and that the
solutions learned can be interpreted in terms of energy shaping and damping
injection, which makes it possible to numerically assess stability using
passivity theory. From the reinforcement learning perspective, our proposal
allows for the class of port-Hamiltonian systems to be incorporated in the
actor-critic framework, speeding up the learning thanks to the resulting
parameterization of the policy. The method has been successfully applied to the
pendulum swing-up problem in simulations and real-life experiments.
|
1212.5525 | Synchronization of a class of cyclic discrete-event systems describing
legged locomotion | cs.SY | It has been shown that max-plus linear systems are well suited for
applications in synchronization and scheduling, such as the generation of train
timetables, manufacturing, or traffic. In this paper we show that the same is
true for multi-legged locomotion. In this framework, the max-plus eigenvalue of
the system matrix represents the total cycle time, whereas the max-plus
eigenvector dictates the steady-state behavior. Uniqueness of the
eigenstructure also indicates uniqueness of the resulting behavior. For the
particular case of legged locomotion, the movement of each leg is abstracted to
two-state circuits: swing and stance (leg in flight and on the ground,
respectively). The generation of a gait (a manner of walking) for a multiple
legged robot is then achieved by synchronizing the multiple discrete-event
cycles via the max-plus framework. By construction, different gaits and gait
parameters can be safely interleaved by using different system matrices. In
this paper we address both the transient and steady-state behavior for a class
of gaits by presenting closed-form expressions for the max-plus eigenvalue and
max-plus eigenvector of the system matrix and the coupling time. The
significance of this result is in showing guaranteed robustness to
perturbations and gait switching, and also a systematic methodology for
synthesizing controllers that allow legged robots to change rhythms quickly.
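The role of the max-plus eigenvalue described above can be illustrated with a minimal sketch; the two-state (swing/stance) circuit and the durations below are illustrative assumptions, not the paper's full gait matrices:

```python
NEG_INF = float("-inf")  # the max-plus "zero" element

def maxplus_matvec(A, x):
    # Max-plus matrix-vector product: (A (x) x)_i = max_j (A[i][j] + x[j])
    return [max(a + v for a, v in zip(row, x)) for row in A]

def maxplus_eigenvalue(A, iters=100, tol=1e-9):
    """Estimate the max-plus eigenvalue (maximum cycle mean) by power iteration:
    once x(k) = x(j) + (k - j) * lam componentwise, lam is the eigenvalue."""
    x = [0.0] * len(A)
    history = [x[:]]
    for k in range(1, iters + 1):
        x = maxplus_matvec(A, x)
        for j, past in enumerate(history):
            diffs = [xi - pi for xi, pi in zip(x, past)]
            if max(diffs) - min(diffs) < tol:  # periodic regime reached
                return diffs[0] / (k - j)
        history.append(x[:])
    raise RuntimeError("no periodic regime reached")

# Illustrative one-leg circuit: swing takes 0.2 s, stance takes 0.8 s.
A = [[NEG_INF, 0.8],
     [0.2, NEG_INF]]
lam = maxplus_eigenvalue(A)  # 0.5: two events per 1.0 s gait cycle
```

Here the eigenvalue is the average time per discrete event, so the full swing-plus-stance cycle time is twice the eigenvalue for this two-state circuit.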
|
1212.5554 | Re-encoding reformulation and application to Welch-Berlekamp algorithm | cs.IT math.IT | The main decoding algorithms for Reed-Solomon codes are based on a bivariate
interpolation step, which is expensive in time complexity. Many interpolation
methods have been proposed in order to decrease the complexity of this
procedure, but they remain expensive. Koetter, Ma and Vardy therefore proposed
in 2010 a technique, called re-encoding, which reduces the practical running
time. However, this trick applies only to the Koetter interpolation algorithm.
We propose a reformulation of re-encoding for any interpolation method. The
assumptions behind this reformulation, however, limit its application to the
Welch-Berlekamp algorithm.
|
1212.5577 | A Structured Construction of Optimal Measurement Matrix for Noiseless
Compressed Sensing via Analog Polarization | cs.IT math.IT | In this paper, we propose a structured construction of the optimal
measurement matrix for noiseless compressed sensing (CS), one that achieves the
minimum number of measurements: for sufficiently large dimension, the number of
measurements need only be as large as the sparsity of the signal to be
recovered in order to guarantee almost error-free recovery. To arrive at the
results, we employ a duality
between noiseless CS and analog coding across a sparse additive noisy channel
(SANC). Extending Renyi Information Dimension to Mutual Information Dimension
(MID), we show the operational meaning of MID to be the fundamental limit of
asymptotically error-free analog transmission across SANC under linear analog
encoding constraint. We prove that MID polarizes after analog polar
transformation and obeys the same recursive relationship as BEC. We further
prove that analog polar encoding can achieve the fundamental limit of
achievable dimension rate with vanishing Pe across SANC. From the duality, a
structured construction scheme is proposed for the linear measurement matrix
which achieves the minimum measurement requirement for noiseless CS.
|
1212.5589 | CODYRUN, outil de simulation et d'aide \`a la conception
thermo-a\'eraulique de b\^atiments | cs.CE | This article presents the CODYRUN software developed by the University of La
R\'eunion. It is a multizone thermal software package with detailed airflow and
humidity transfer calculations. One of its specific aspects is that it
constitutes at once a research tool, a design tool used by the lab and by
professionals, and a teaching tool. After a presentation of the multiple-model
aspect, some details of the three modules associated with physical phenomena
are given. Elements of validation are then presented, followed by a few details
of the front end.
|
1212.5590 | Online Forum Thread Retrieval using Pseudo Cluster Selection and Voting
Techniques | cs.IR | Online forums facilitate knowledge seeking and sharing on the Web. However,
the shared knowledge is not fully utilized due to information overload. Thread
retrieval is one method to overcome information overload. In this paper, we
propose a model that combines two existing approaches: the Pseudo Cluster
Selection and the Voting Techniques. In both, a retrieval system first scores a
list of messages and then ranks threads by aggregating their scored messages.
They differ on what and how to aggregate. The pseudo cluster selection focuses
on input, while voting techniques focus on the aggregation method. Our combined
models focus on both the input and the aggregation method. The results show
that some combined models are statistically superior to baseline methods.
|
1212.5592 | Multiple model software for airflow and thermal building simulation. A
case study under tropical humid climate, in R\'eunion Island | cs.CE | The first purpose of our work has been to allow, as far as heat transfer
modes, airflow calculation and meteorological data reconstitution are
concerned, the integration of diverse interchangeable physical models in a
single software tool for professional use, CODYRUN. The designer's objectives,
the precision requested and calculation-time considerations led us to design a
structure accepting selective use of models, taking into account multizone
description and airflow patterns. With a building case study in Reunion Island,
we first analyse the sensitivity of the thermal model to diffuse radiation
reconstitution on tilted surfaces. Then, a realistic balance between precision
required and calculation time leads us to select detailed models for the zone
of main interest, but to choose simplified models for the other zones.
|
1212.5593 | Time-variant Linear reduction model approximation : application to
thermal and airflow building simulation | cs.CE | Considering the natural ventilation, the thermal behavior of buildings can be
described by a linear time varying model. In this paper, we describe an
implementation of model reduction of linear time varying systems. We show the
consequences of the model reduction on computing time and accuracy. Finally, we
compare experimental measurements with simulation results using either the
initial or the reduced model. The reduced model shows a negligible difference
in accuracy, and the computing time is substantially shortened.
|
1212.5594 | Black box modelling of HVAC system : improving the performances of
neural networks | cs.NE cs.CE | This paper deals with neural networks modelling of HVAC systems. In order to
increase the neural networks performances, a method based on sensitivity
analysis is applied. The same technique is also used to compute the relevance
of each input. To avoid the prediction errors in dry coil conditions, a
metamodel for each capacity is derived from the neural networks. The regression
coefficients of the polynomial forms are identified through the use of spectral
analysis. These methods based on sensitivity and spectral analysis lead to an
optimized neural network model with regard to both its architecture and its predictions.
|
1212.5599 | Elaboration of a new tool for weather data sequences generation | cs.CE | This paper presents a new software tool, RUNEOLE, used to provide weather
data in building physics. RUNEOLE associates three modules for the
description, modelling and generation of weather data.
The first module is dedicated to the description of each climatic variable
included in the database. Graphic representation is possible (with histograms
for example). Mathematical tools for comparing statistical distributions,
determining characteristic daily evolutions, finding typical days, and
computing the correlations between the different climatic variables have been
developed in the second module. Artificial weather data files adapted to
different simulation codes are produced as the output of the third module. This
tool can then be
used in HVAC system evaluation, or in the study of thermal comfort. The studied
buildings can then be tested under different thermal, aeraulic, and radiative
conditions, leading to a better understanding of their behaviour, for example
in humid climates.
|
1212.5620 | Topological Analysis and Mitigation Strategies for Cascading Failures in
Power Grid Networks | physics.soc-ph cs.SI physics.comp-ph | Recently, there has been a growing concern about the overload status of the
power grid networks, and the increasing possibility of cascading failures. Many
researchers have studied these networks to provide design guidelines for more
robust power grids. Topological analysis is one component of assessing a
system's robustness. This paper presents a complex-systems analysis of
power grid networks. First, the cascading effect has been simulated on three
well known networks: the IEEE 300 bus test system, the IEEE 118 bus test
system, and the WSCC 179 bus equivalent model. To extend the analysis to a
larger set of networks, we develop a network generator and generate multiple
graphs with characteristics similar to the IEEE test networks but with
different topologies. The generated graphs are then compared to the test
networks to show the effect of topology in determining their robustness with
respect to cascading failures. The generated graphs turn out to be more robust
than the test graphs, showing the importance of topology in the robust design
of power grids. The second part of this paper concerns the discussion of two
novel mitigation strategies for cascading failures: Targeted Load Reduction and
Islanding using Distributed Sources. These new mitigation strategies are
compared with the Homogeneous Load Reduction strategy. Even though the
Homogeneous Load Reduction is simpler to implement, the Targeted Load Reduction
is much more effective. Additionally, an algorithm is presented for the
partitioning of the network for islanding as an effort towards fault isolation
to prevent cascading failures. The results for island formation are better if
the sources are well distributed; otherwise, the algorithm leads to the
formation of superislands.
|
1212.5633 | Design, implementation and experiment of a YeSQL Web Crawler | cs.IR | We describe a novel, "focusable", scalable, distributed web crawler based on
GNU/Linux and PostgreSQL that we designed to be easily extensible and which we
have released under a GNU public licence. We also report a first use case
related to an analysis of Twitter streams about the French 2012 presidential
elections and the URLs they contain.
|
1212.5636 | Partout: A Distributed Engine for Efficient RDF Processing | cs.DB | The increasing interest in Semantic Web technologies has led not only to a
rapid growth of semantic data on the Web but also to an increasing number of
backend applications with already more than a trillion triples in some cases.
Confronted with such huge amounts of data and the future growth, existing
state-of-the-art systems for storing RDF and processing SPARQL queries are no
longer sufficient. In this paper, we introduce Partout, a distributed engine
for efficient RDF processing in a cluster of machines. We propose an effective
approach for fragmenting RDF data sets based on a query log, allocating the
fragments to nodes in a cluster, and finding the optimal configuration. Partout
can efficiently handle updates and its query optimizer produces efficient query
execution plans for ad-hoc SPARQL queries. Our experiments show the superiority
of our approach to state-of-the-art approaches for partitioning and distributed
SPARQL query processing.
|
1212.5637 | Random Spanning Trees and the Prediction of Weighted Graphs | cs.LG stat.ML | We investigate the problem of sequentially predicting the binary labels on
the nodes of an arbitrary weighted graph. We show that, under a suitable
parametrization of the problem, the optimal number of prediction mistakes can
be characterized (up to logarithmic factors) by the cutsize of a random
spanning tree of the graph. The cutsize is induced by the unknown adversarial
labeling of the graph nodes. In deriving our characterization, we obtain a
simple randomized algorithm achieving in expectation the optimal mistake bound
on any polynomially connected weighted graph. Our algorithm draws a random
spanning tree of the original graph and then predicts the nodes of this tree in
constant expected amortized time and linear space. Experiments on real-world
datasets show that our method compares well to both global (Perceptron) and
local (label propagation) methods, while being generally faster in practice.
|
1212.5650 | Learning the Gain Values and Discount Factors of DCG | cs.IR | Evaluation metrics are an essential part of a ranking system, and in the past
many evaluation metrics have been proposed in information retrieval and Web
search. Discounted Cumulated Gains (DCG) has emerged as one of the evaluation
metrics widely adopted for evaluating the performance of ranking functions used
in Web search. However, the two sets of parameters, gain values and discount
factors, used in DCG are determined in a rather ad-hoc way. In this paper we
first show that DCG is generally not coherent, meaning that comparing the
performance of ranking functions using DCG very much depends on the particular
gain values and discount factors used. We then propose a novel methodology that
can learn the gain values and discount factors from user preferences over
rankings. Numerical simulations illustrate the effectiveness of our proposed
methods. Please contact the authors for the full version of this work.
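The incoherence of DCG noted above is easy to reproduce with a toy computation; the grade-to-gain mappings below are hypothetical choices, not values learned by the paper's method:

```python
from math import log2

def dcg(rels, gains, discounts=None):
    """DCG with explicit gain values and per-rank discount factors.
    rels: relevance grade of the document at each rank (best-first);
    gains: mapping grade -> gain value;
    discounts: per-rank factors, defaulting to the common 1 / log2(rank + 1)."""
    if discounts is None:
        discounts = [1.0 / log2(i + 2) for i in range(len(rels))]
    return sum(gains[r] * d for r, d in zip(rels, discounts))

# Two rankings whose comparison flips with the gain assigned to grade 2:
a, b = [1, 1, 0], [2, 0, 0]
better_with_high_gain = dcg(b, {0: 0, 1: 1, 2: 3}) > dcg(a, {0: 0, 1: 1, 2: 3})
better_with_low_gain = dcg(b, {0: 0, 1: 1, 2: 1.5}) > dcg(a, {0: 0, 1: 1, 2: 1.5})
```

With gain 3 for grade 2, ranking b wins; with gain 1.5 it loses to a, so the comparison depends entirely on the chosen gain values, which is exactly the incoherence the paper addresses.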
|
1212.5656 | High-precision camera distortion measurements with a "calibration harp" | cs.CV | This paper addresses the high precision measurement of the distortion of a
digital camera from photographs. Traditionally, this distortion is measured
from photographs of a flat pattern which contains aligned elements.
Nevertheless, it is nearly impossible to fabricate a very flat pattern and to
validate its flatness. This fact limits the attainable precision.
In contrast, it is much easier to obtain physically very precise straight lines
by tightly stretching good quality strings on a frame. Taking literally
"plumb-line methods", we built a "calibration harp" instead of the classic flat
patterns to obtain a high precision measurement tool, demonstrably reaching
2/100 pixel precision. The harp is complemented with algorithms that
automatically compute two different and complementary lens distortion
measurements from harp photographs. The precision of the method is evaluated on
images
corrected by state-of-the-art distortion correction algorithms, and by popular
software. Three applications are shown: first an objective and reliable
measurement of the result of any distortion correction. Second, the harp
permits checking state-of-the-art global camera calibration algorithms: it
permits selecting the right distortion model, thus avoiding the internal
compensation errors inherent to these methods. Third, the method replaces
manual procedures in other distortion correction methods, makes them fully
automatic, and increases their reliability and precision.
|
1212.5663 | On the decoding of quasi-BCH codes | cs.IT math.IT | In this paper we investigate the structure of quasi-BCH codes. In the first
part of this paper we show that quasi-BCH codes can be derived from
Reed-Solomon codes over square matrices extending the known relation about
classical BCH and Reed-Solomon codes. This allows us to adapt the
Welch-Berlekamp algorithm to quasi-BCH codes. In the second part of this paper
we show that quasi-BCH codes can be seen as subcodes of interleaved
Reed-Solomon codes over finite fields. This provides another approach for
decoding quasi-BCH codes.
|
1212.5664 | Weather sequences for predicting HVAC system behaviour in residential
units located in tropical climates | cs.CE | Our research describes a methodology for the definition of specific weather
sequences and their influence on the energy needs of HVAC systems. We apply
the method to the tropical Reunion Island. The methodological approach, based
on a detailed analysis of weather sequences,
leads to a classification of climatic situations that can be applied to the
site. These sequences have been used to simulate buildings and air handling
systems thanks to a thermal simulation code, CODYRUN. The results bring to
light how necessary it is to have coherent meteorological data for this kind of
simulation.
|
1212.5665 | Multiple model approach and experimental validation of a residential
air-to-air heat pump | cs.CE | The starting point of this work is the achievement of a design tool, a
multiple-model software package called "CODYRUN", suitable for professionals
and usable by researchers. The original aspect of this software is that the
designer has at his disposal a wide panel of choices between different heat
transfer models. More precisely, it is a multizone software tool integrating
both natural ventilation and moisture transfers. It is developed on PC
microcomputers and takes advantage of the Microsoft WINDOWS front end. Most of
the time, HVAC systems, and especially domestic air conditioners, are taken
into account either in a very simplified way or in an elaborate one. On one
side, they are simply assumed to supply the demanded cooling loads with an
ideal control loop (no delay between the solicitations and the time response of
the system), and the available outputs are initially the hourly cooling and
heating consumptions, without integrating the real characteristics of the HVAC
system. This paper follows the same multiple-model approach as for the building
modelling, by defining different modelling levels for the air-conditioning
systems, from a very simplified one to a detailed one. An experimental
validation is carried out in order to compare the sensitivity of each defined
model and to point out the interaction between the thermal behaviour of the
envelope and the electrical consumption of the system. For validation purposes,
we describe the data acquisition system and the real-size test cell used,
located at the University of Reunion Island, Indian Ocean.
|
1212.5667 | Efficient Incremental Relaying | cs.IT math.IT | We propose a novel relaying scheme which improves the spectral efficiency of
cooperative diversity systems by utilizing limited feedback from the
destination. Our scheme capitalizes on the fact that relaying is only required
when the direct transmission suffers deep fading. We calculate the packet error
rate for the proposed efficient incremental relaying scheme with both
amplify-and-forward and decode-and-forward relaying. Numerical results are also
presented to verify
their analytical counterparts.
|
1212.5679 | Cumulative Distance Enumerators of Random Codes and their Thresholds | cs.IT math.IT | Cumulative weight enumerators of random linear codes are introduced, their
asymptotic properties are studied, and very sharp thresholds are exhibited; as
a consequence, it is shown that the asymptotic Gilbert-Varshamov bound is a
very sharp threshold point for the density of the linear codes whose relative
distance is greater than a given positive number. For arbitrary random codes,
similar settings and results are exhibited; in particular, the very sharp
threshold point for the density of the codes whose relative distance is greater
than a given positive number is located at half the asymptotic
Gilbert-Varshamov bound.
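For reference, the asymptotic Gilbert-Varshamov bound that serves as the threshold point here can be computed directly; the sketch below assumes the binary case for illustration:

```python
from math import log2

def h2(p):
    """Binary entropy function h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def gv_rate(delta):
    """Asymptotic binary Gilbert-Varshamov bound: R = 1 - h(delta),
    for relative distance 0 <= delta <= 1/2."""
    return 1.0 - h2(delta)
```

For example, relative distance delta close to 0.11 gives rate about 1/2, and the rate vanishes as delta approaches 1/2.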
|
1212.5687 | On the Construction of Nonbinary Quantum BCH Codes | quant-ph cs.IT math.IT | Four quantum code constructions generating several new families of good
nonbinary quantum nonprimitive non-narrow-sense Bose-Chaudhuri-Hocquenghem
(BCH) codes are presented in this paper. The first two are based on the
Calderbank-Shor-Steane (CSS) construction derived from two nonprimitive BCH
codes, not necessarily self-orthogonal. The third one is based on nonbinary
Steane's enlargement of CSS codes applied to suitable sub-families of
nonprimitive non-narrow-sense BCH codes. The fourth construction is derived
from suitable sub-families of Hermitian self-orthogonal nonprimitive
non-narrow-sense BCH codes. These constructions generate new families of
quantum BCH codes whose parameters are better than the ones available in the
literature.
|
1212.5701 | ADADELTA: An Adaptive Learning Rate Method | cs.LG | We present a novel per-dimension learning rate method for gradient descent
called ADADELTA. The method dynamically adapts over time using only first order
information and has minimal computational overhead beyond vanilla stochastic
gradient descent. The method requires no manual tuning of a learning rate and
appears robust to noisy gradient information, different model architecture
choices, various data modalities and selection of hyperparameters. We show
promising results compared to other methods on the MNIST digit classification
task using a single machine and on a large scale voice dataset in a distributed
cluster environment.
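The per-dimension update the abstract describes can be sketched in a few lines; the scalar toy problem and the hyperparameter values below (rho = 0.95, eps = 1e-6, values commonly used with ADADELTA) are illustrative assumptions:

```python
import math

class Adadelta:
    """Scalar sketch of the ADADELTA update: running RMS of gradients in the
    denominator, running RMS of past updates in the numerator, so no learning
    rate needs to be tuned by hand."""
    def __init__(self, rho=0.95, eps=1e-6):
        self.rho, self.eps = rho, eps
        self.acc_g2 = 0.0   # running average of squared gradients E[g^2]
        self.acc_dx2 = 0.0  # running average of squared updates  E[dx^2]

    def step(self, grad):
        self.acc_g2 = self.rho * self.acc_g2 + (1 - self.rho) * grad ** 2
        dx = -(math.sqrt(self.acc_dx2 + self.eps)
               / math.sqrt(self.acc_g2 + self.eps)) * grad
        self.acc_dx2 = self.rho * self.acc_dx2 + (1 - self.rho) * dx ** 2
        return dx

# Minimize f(x) = x^2 with no hand-tuned learning rate.
opt, x = Adadelta(), 5.0
for _ in range(5000):
    x += opt.step(2 * x)  # gradient of x^2 is 2x
```

The update starts cautiously (the numerator is seeded only by eps) and then adapts its effective step size from the history of its own updates, which is the first-order, low-overhead behavior the abstract claims.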
|
1212.5711 | Normalized Compression Distance of Multisets with Applications | cs.CV cs.IT math.IT physics.data-an | Normalized compression distance (NCD) is a parameter-free, feature-free,
alignment-free, similarity measure between a pair of finite objects based on
compression. However, it is not sufficient for all applications. We propose an
NCD of finite multisets (a.k.a. multiples) of finite objects that is also a
metric. Previously, attempts to obtain such an NCD failed. We cover the entire
trajectory from theoretical underpinning to feasible practice. The new NCD for
multisets is applied to retinal progenitor cell classification questions and to
related synthetically generated data that were earlier treated with the
pairwise NCD. With the new method we achieved significantly better results.
Similarly for questions about axonal organelle transport. We also applied the
new NCD to handwritten digit recognition and improved classification accuracy
significantly over that of pairwise NCD by incorporating both the pairwise and
NCD for multisets. In the analysis we use the incomputable Kolmogorov
complexity that for practical purposes is approximated from above by the length
of the compressed version of the file involved, using a real-world compression
program.
Index Terms--- Normalized compression distance, multisets or multiples,
pattern recognition, data mining, similarity, classification, Kolmogorov
complexity, retinal progenitor cells, synthetic data, organelle transport,
handwritten character recognition
|
1212.5720 | Hierarchical Graphical Models for Multigroup Shape Analysis using
Expectation Maximization with Sampling in Kendall's Shape Space | cs.CV | This paper proposes a novel framework for multi-group shape analysis relying
on a hierarchical graphical statistical model on shapes within a population.
The framework represents individual shapes as point sets modulo translation,
rotation, and scale, following the notion of Kendall shape space. While
individual shapes are derived from their group shape model, each group shape
model is derived from a single population shape model. The hierarchical model
follows the natural organization of population data and the top level in the
hierarchy provides a common frame of reference for multigroup shape analysis,
e.g. classification and hypothesis testing. Unlike typical shape-modeling
approaches, the proposed model is a generative model that defines a joint
distribution of object-boundary data and the shape-model variables.
Furthermore, it naturally enforces optimal correspondences during the process
of model fitting and thereby subsumes the so-called correspondence problem. The
proposed inference scheme employs an expectation maximization (EM) algorithm
that treats the individual and group shape variables as hidden random variables
and integrates them out before estimating the parameters (population mean and
variance and the group variances). The underpinning of the EM algorithm is the
sampling of pointsets, in Kendall shape space, from their posterior
distribution, for which we exploit a highly-efficient scheme based on
Hamiltonian Monte Carlo simulation. Experiments in this paper use the fitted
hierarchical model to perform (1) hypothesis testing for comparison between
pairs of groups using permutation testing and (2) classification for image
retrieval. The paper validates the proposed framework on simulated data and
demonstrates results on real data.
|
1212.5764 | Strategy-Proof Prediction Markets | cs.GT cs.MA | Prediction markets aggregate agents' beliefs regarding a future event, where
each agent is paid based on the accuracy of its reported belief when compared
to the realized outcome. Agents may strategically manipulate the market (e.g.,
delay reporting, make false reports) aiming for higher expected payments, and
hence the accuracy of the market's aggregated information will be in question.
In this study, we present a general belief model that captures how agents
influence each other's beliefs, and show that there are three necessary and
sufficient conditions for agents to behave truthfully in scoring rule based
markets (SRMs). Given that these conditions are restrictive and difficult to
satisfy in real life, we present novel strategy-proof SRMs where agents are
truthful while dismissing all these conditions. Although achieving such a
strong form of truthfulness increases the worst-case loss in the new markets,
we show that this is the minimum loss required to dismiss these conditions.
|
1212.5765 | Stochastic Subspace Identification: Valid Model, Asymptotics and Model
Error Bounds | cs.SY math.OC | This paper investigates the ability of the stochastic subspace identification
technique to return a valid model from finite measurement data, its asymptotic
properties as the data set becomes large, and asymptotic error bounds of the
identified model (in terms of $\mathcal{H}_2$ and $\mathcal{H}_{\infty}$
norms). First, a new and straightforward LMI-based approach is proposed, which
returns a valid identified model even in cases where the system poles are very
close to the unit circle and there is insufficient data to accurately estimate
the covariance matrices. The approach, which is demonstrated by numerical
examples, provides an alternative to other techniques, which often fail under
these circumstances. Then, explicit expressions for the variances of the
asymptotically normally distributed sample output covariance matrices and
block-Hankel matrix are derived. From this result, together with perturbation
techniques, error bounds for the state-space matrices in the innovations model
are derived, for a given confidence level. This result is in turn used to
derive several error bounds for the identified transfer functions, for a given
confidence level. One is an explicit $\mathcal{H}_2$ bound. Additionally, two
$\mathcal{H}_{\infty}$ error bounds are derived, one via perturbation analysis,
and the other via an LMI-based technique.
|
1212.5768 | Consensus with Ternary Messages | math.OC cs.SY | We provide a protocol for real-valued average consensus by networks of agents
which exchange only a single message from the ternary alphabet {-1,0,1} between
neighbors at each step. Our protocol works on time-varying undirected graphs
subject to a connectivity condition, has a worst-case convergence time which is
polynomial in the number of agents and the initial values, and requires no
global knowledge about the graph topologies on the part of each node to
implement except for knowing an upper bound on the degrees of its neighbors.
|
1212.5776 | Improving problem solving by exploiting the concept of symmetry | cs.AI | We investigate the concept of symmetry and its role in problem solving. This
paper first defines precisely the elements that constitute a "problem" and its
"solution," and gives several examples to illustrate these definitions. Given
precise definitions of problems, it is relatively straightforward to construct
a search process for finding solutions. Finally, this paper attempts to exploit
the concept of symmetry to improve problem solving.
|
1212.5777 | Collaborating Robotics Using Nature-Inspired Meta-Heuristics | cs.NE cs.RO | This paper introduces collaborating robots, which offer the possibility of
enhanced task performance, high reliability, and decreased cost. Collaborating-bots
are a collection of mobile robots able to self-assemble and to self-organize in
order to solve problems that cannot be solved by a single robot. These robots
combine the power of swarm intelligence with the flexibility of
self-reconfiguration, as aggregate Collaborating-bots can dynamically change
their structure to match environmental variations. Collaborating robots are
more than just networks of independent agents, they are potentially
reconfigurable networks of communicating agents capable of coordinated sensing
and interaction with the environment. Robots are going to be an important part
of the future. Collaborating robots are limited in individual capability, but
robots deployed in large numbers can represent a strong force similar to a
colony of ants or a swarm of bees. We present a mechanism for collaborating
robots based on swarm intelligence techniques such as Ant Colony Optimization
and Particle Swarm Optimization.
|
1212.5782 | Random Access with Physical-layer Network Coding | cs.IT math.IT | Leveraging recent progress in physical-layer network coding we propose a new
approach to random access: When packets collide, it is possible to recover a
linear combination of the packets at the receiver. Over many rounds of
transmission, the receiver can thus obtain many linear combinations and
eventually recover all original packets. This is in contrast to slotted ALOHA,
where packet collisions lead to complete erasures. The throughput of the
proposed strategy is derived and shown to be significantly superior to the best
known strategies, including multipacket reception.
|
1212.5789 | Self-embeddings of Hamming Steiner triple systems of small order and APN
permutations | cs.IT math.IT | The classification, up to isomorphism, of all self-embedding monomial power
permutations of Hamming Steiner triple systems of order n=2^m-1 for small m, m
< 23, is given. As far as we know, for m in {5,7,11,13,17,19}, all given
self-embeddings in closed surfaces are new. Moreover, they are cyclic for all m
and nonorientable at least for all m < 21. For any non-prime m, the
nonexistence of such self-embeddings in a closed surface is proven.
|
1212.5791 | Carrier Frequency Offset Estimation Approach for Multicarrier
Transmission on Hexagonal Time-Frequency Lattice | cs.IT math.IT | In this paper, a novel carrier frequency offset estimation approach,
comprising a preamble structure and a carrier frequency offset estimation
algorithm, is proposed for the hexagonal multi-carrier transmission (HMCT)
system. The closed-form Cramer-Rao lower bound (CRLB) of the proposed carrier
frequency offset estimation scheme is given. Theoretical analyses and
simulation results show that the proposed preamble structure and estimation
algorithm for the HMCT system achieve mean square error (MSE) performance
approaching the CRLB over the doubly dispersive (DD) propagation channel.
|
1212.5792 | On Max-SINR Receiver for Hexagonal Multicarrier Transmission Over Doubly
Dispersive Channel | cs.IT math.IT | In this paper, a novel receiver for Hexagonal Multicarrier Transmission (HMT)
system based on the maximizing Signal-to-Interference-plus-Noise Ratio
(Max-SINR) criterion is proposed. Theoretical analysis shows that the prototype
pulse of the proposed Max-SINR receiver should adapt to the root mean square
(RMS) delay spread of the doubly dispersive (DD) channel with exponential power
delay profile and U-shape Doppler spectrum. Simulation results show that the
proposed Max-SINR receiver outperforms the traditional projection scheme and
achieves SINR performance close to the theoretical upper bound over the full
range of channel spread factors. Meanwhile, the SINR performance of the
proposed prototype pulse is robust to estimation errors in the time delay
spread.
|
1212.5815 | Classical Model Predictive Control of a Permanent Magnet Synchronous
Motor | cs.SY math.OC | A model predictive control (MPC) scheme for a permanent-magnet synchronous
motor (PMSM) is presented. The torque controller repeatedly optimizes a
quadratic cost consisting of control error and machine losses, accounting for
the voltage and current limitations. The scheme relies extensively on
optimization; to meet the runtime limitation, a suboptimal algorithm based on
differential flatness, continuous parameterization, and linear programming is
introduced.
The multivariable controller exploits cross-coupling effects in the
long-range constrained predictive control strategy. The optimization results in
fast and smooth torque dynamics while inherently using field weakening to
improve the power efficiency and the current dynamics in high-speed operation.
As a distinctive MPC feature, constraint handling is improved: instead of just
saturating the control input, field weakening is applied dynamically to bypass
the voltage limitation. The performance of the scheme is demonstrated by
experimental and numerical results.
|
1212.5829 | Modeling Non-Uniform UE Distributions in Downlink Cellular Networks | cs.IT math.IT stat.AP | A recent way to model and analyze downlink cellular networks is by using
random spatial models. Assuming the user equipment (UE) distribution to be
uniform, the analysis is performed at a typical UE located at the origin. While this
method of sampling UEs provides statistics averaged over the UE locations, it
is not possible to sample cell interior and cell edge UEs separately. This
complicates the problem of analyzing deployment scenarios involving non-uniform
distribution of UEs, especially when the locations of the UEs and the base
stations (BSs) are dependent. To facilitate this separation, we propose a new
tractable method of sampling UEs by conditionally thinning the BS point process
and show that the resulting framework can be used as a tractable generative
model to study cellular networks with non-uniform UE distribution.
|
1212.5841 | Data complexity measured by principal graphs | cs.LG cs.IT math.IT | How to measure the complexity of a finite set of vectors embedded in a
multidimensional space? This is a non-trivial question which can be approached
in many different ways. Here we suggest a set of data complexity measures using
universal approximators, principal cubic complexes. Principal cubic complexes
generalise the notion of principal manifolds for datasets with non-trivial
topologies. The type of the principal cubic complex is determined by its
dimension and a grammar of elementary graph transformations. The simplest
grammar produces principal trees.
We introduce three natural types of data complexity: 1) geometric (deviation
of the data's approximator from some "idealized" configuration, such as
deviation from harmonicity); 2) structural (how many elements of a principal
graph are needed to approximate the data), and 3) construction complexity (how
many applications of elementary graph transformations are needed to construct
the principal object starting from the simplest one).
We compute these measures for several simulated and real-life data
distributions and show them in the "accuracy-complexity" plots, helping to
optimize the accuracy/complexity ratio. We discuss various issues connected
with measuring data complexity. Software for computing data complexity measures
from principal cubic complexes is provided as well.
|
1212.5855 | Keep Ballots Secret: On the Futility of Social Learning in Decision
Making by Voting | cs.IT math.IT | We show that social learning is not useful in a model of team binary decision
making by voting, where each vote carries equal weight. Specifically, we
consider Bayesian binary hypothesis testing where agents have any
conditionally-independent observation distribution and their local decisions
are fused by any L-out-of-N fusion rule. The agents make local decisions
sequentially, with each allowed to use its own private signal and all precedent
local decisions. Though social learning generally occurs in that precedent
local decisions affect an agent's belief, optimal team performance is obtained
when all precedent local decisions are ignored. Thus, social learning is
futile, and secret ballots are optimal. This contrasts with typical studies of
social learning because we include a fusion center rather than concentrating on
the performance of the latest-acting agents.
|
1212.5860 | A short note on the tail bound of Wishart distribution | math.ST cs.LG stat.TH | We study the tail bound of the empirical covariance of the multivariate normal
distribution. Following the work of Gittens & Tropp (2011), we provide a tail
bound with a small constant.
|
1212.5863 | Influence Analysis in the Blogosphere | cs.SI physics.soc-ph | In this paper we analyze influence in the blogosphere. Recently, influence
analysis has become an increasingly important research topic, as online
communities, such as social networks and e-commerce sites, play a more and
more significant role in our daily life. However, so far few studies have
succeeded in extracting influence from online communities in a satisfactory
way. One of the challenges that has limited previous research is that it is
difficult to capture user behaviors. Consequently, the influence among users
could only be inferred in an indirect and heuristic way, which is inaccurate
and noise-prone. In this study, we conduct an extensive investigation in regard
to influence among bloggers at a Japanese blog web site, BIGLOBE. By processing
the log files of the web servers, we are able to accurately extract the
activities of BIGLOBE members in terms of writing their blog posts and reading
other member's posts. Based on these activities, we propose a principled
framework to detect influence among the members with a high confidence level.
From the extracted influence, we conduct in-depth analysis on how influence
varies over different topics and how influence varies over different members.
We also show the potentials of leveraging the extracted influence to make
personalized recommendation in BIGLOBE. To the best of our knowledge, this is
one of the first studies that captures and analyzes influence in the
blogosphere at such a large scale.
|