id stringlengths 9 10 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 20.7k | title stringlengths 4 246 | comments stringlengths 1 523 ⌀ | journal-ref stringlengths 4 404 ⌀ | doi stringlengths 11 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 98 | license stringclasses 9 values | orig_abstract stringlengths 14 3.35k | versions listlengths 1 60 | update_date stringlengths 10 10 | authors_parsed listlengths 1 1.35k | abstract stringlengths 11 3.34k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2201.12673 | Marco Rasetto | Marco Rasetto, Qingzhou Wan, Himanshu Akolkar, Feng Xiong, Bertram Shi
and Ryad Benosman | Building time-surfaces by exploiting the complex volatility of an ECRAM
memristor | null | null | 10.1109/JETCAS.2023.3330832 | null | cs.ET | http://creativecommons.org/licenses/by/4.0/ | Memristors have emerged as a promising technology for efficient neuromorphic
architectures owing to their ability to act as programmable synapses, combining
processing and memory into a single device. Although they are most commonly
used for static encoding of synaptic weights, recent work has begun to
investigate the use of their dynamical properties, such as Short Term
Plasticity (STP), to integrate events over time in event-based architectures.
However, we are still far from completely understanding the range of possible
behaviors and how they might be exploited in neuromorphic computation. This
work focuses on a newly developed Li$_\textbf{x}$WO$_\textbf{3}$-based
three-terminal memristor that exhibits tunable STP and a conductance response
modeled by a double exponential decay. We derive a stochastic model of the
device from experimental data and investigate how device stochasticity, STP,
and the double exponential decay affect accuracy in a hierarchy of
time-surfaces (HOTS) architecture. We found that the device's stochasticity
does not affect accuracy, that STP can reduce the effect of salt and pepper
noise in signals from event-based sensors, and that the double exponential
decay improves accuracy by integrating temporal information over multiple time
scales. Our approach can be generalized to study other memristive devices to
build a better understanding of how control over temporal dynamics can enable
neuromorphic engineers to fine-tune devices and architectures to fit their
problems at hand.
| [
{
"created": "Sat, 29 Jan 2022 22:18:55 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2024 09:21:39 GMT",
"version": "v2"
}
] | 2024-04-16 | [
[
"Rasetto",
"Marco",
""
],
[
"Wan",
"Qingzhou",
""
],
[
"Akolkar",
"Himanshu",
""
],
[
"Xiong",
"Feng",
""
],
[
"Shi",
"Bertram",
""
],
[
"Benosman",
"Ryad",
""
]
] | Memristors have emerged as a promising technology for efficient neuromorphic architectures owing to their ability to act as programmable synapses, combining processing and memory into a single device. Although they are most commonly used for static encoding of synaptic weights, recent work has begun to investigate the use of their dynamical properties, such as Short Term Plasticity (STP), to integrate events over time in event-based architectures. However, we are still far from completely understanding the range of possible behaviors and how they might be exploited in neuromorphic computation. This work focuses on a newly developed Li$_\textbf{x}$WO$_\textbf{3}$-based three-terminal memristor that exhibits tunable STP and a conductance response modeled by a double exponential decay. We derive a stochastic model of the device from experimental data and investigate how device stochasticity, STP, and the double exponential decay affect accuracy in a hierarchy of time-surfaces (HOTS) architecture. We found that the device's stochasticity does not affect accuracy, that STP can reduce the effect of salt and pepper noise in signals from event-based sensors, and that the double exponential decay improves accuracy by integrating temporal information over multiple time scales. Our approach can be generalized to study other memristive devices to build a better understanding of how control over temporal dynamics can enable neuromorphic engineers to fine-tune devices and architectures to fit their problems at hand. |
1803.08601 | Carl Yang | Carl Yang, Aydin Buluc and John D. Owens | Design Principles for Sparse Matrix Multiplication on the GPU | 16 pages, 7 figures, International European Conference on Parallel
and Distributed Computing (Euro-Par) 2018 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We implement two novel algorithms for sparse-matrix dense-matrix
multiplication (SpMM) on the GPU. Our algorithms expect the sparse input in the
popular compressed-sparse-row (CSR) format and thus do not require expensive
format conversion. While previous SpMM work concentrates on thread-level
parallelism, we additionally focus on latency hiding with instruction-level
parallelism and load-balancing. We show, both theoretically and experimentally,
that the proposed SpMM is a better fit for the GPU than previous approaches. We
identify a key memory access pattern that allows efficient access into both
input and output matrices that is crucial to getting excellent performance on
SpMM. By combining these two ingredients---(i) merge-based load-balancing and
(ii) row-major coalesced memory access---we demonstrate a 4.1x peak speedup and
a 31.7% geomean speedup over state-of-the-art SpMM implementations on
real-world datasets.
| [
{
"created": "Thu, 22 Mar 2018 22:31:17 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jun 2018 06:30:45 GMT",
"version": "v2"
}
] | 2018-06-13 | [
[
"Yang",
"Carl",
""
],
[
"Buluc",
"Aydin",
""
],
[
"Owens",
"John D.",
""
]
] | We implement two novel algorithms for sparse-matrix dense-matrix multiplication (SpMM) on the GPU. Our algorithms expect the sparse input in the popular compressed-sparse-row (CSR) format and thus do not require expensive format conversion. While previous SpMM work concentrates on thread-level parallelism, we additionally focus on latency hiding with instruction-level parallelism and load-balancing. We show, both theoretically and experimentally, that the proposed SpMM is a better fit for the GPU than previous approaches. We identify a key memory access pattern that allows efficient access into both input and output matrices that is crucial to getting excellent performance on SpMM. By combining these two ingredients---(i) merge-based load-balancing and (ii) row-major coalesced memory access---we demonstrate a 4.1x peak speedup and a 31.7% geomean speedup over state-of-the-art SpMM implementations on real-world datasets. |
1807.09224 | Pierre Augier | Pierre Augier, Ashwin Vishnu Mohanan, Cyrille Bonamy | FluidDyn: a Python open-source framework for research and teaching in
fluid dynamics | null | null | 10.5334/jors.237 | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FluidDyn is a project to foster open-science and open-source in the fluid
dynamics community. It is thought of as a research project to channel
open-source dynamics, methods and tools to do science. We propose a set of
Python packages forming a framework to study fluid dynamics with different
methods, in particular laboratory experiments (package fluidlab), simulations
(packages fluidfft, fluidsim and fluidfoam) and data processing (package
fluidimage). In the present article, we give an overview of the specialized
packages of the project and then focus on the base package called fluiddyn,
which contains common code used in the specialized packages. Packages fluidfft
and fluidsim are described with greater detail in two companion papers, Mohanan
et al. (2018a,b). With the project FluidDyn, we demonstrate that specialized
scientific code can be written with methods and good practices of the
open-source community. The Mercurial repositories are available in Bitbucket
(https://bitbucket.org/fluiddyn/). All codes are documented using Sphinx and
Read the Docs, and tested with continuous integration run on Bitbucket,
Pipelines and Travis. To improve the reuse potential, the codes are as modular
as possible, leveraging the simple object-oriented programming model of Python.
All codes are also written to be highly efficient, using C++, Cython and
Pythran to speed up the performance of critical functions.
| [
{
"created": "Tue, 3 Jul 2018 10:04:39 GMT",
"version": "v1"
}
] | 2019-04-10 | [
[
"Augier",
"Pierre",
""
],
[
"Mohanan",
"Ashwin Vishnu",
""
],
[
"Bonamy",
"Cyrille",
""
]
] | FluidDyn is a project to foster open-science and open-source in the fluid dynamics community. It is thought of as a research project to channel open-source dynamics, methods and tools to do science. We propose a set of Python packages forming a framework to study fluid dynamics with different methods, in particular laboratory experiments (package fluidlab), simulations (packages fluidfft, fluidsim and fluidfoam) and data processing (package fluidimage). In the present article, we give an overview of the specialized packages of the project and then focus on the base package called fluiddyn, which contains common code used in the specialized packages. Packages fluidfft and fluidsim are described with greater detail in two companion papers, Mohanan et al. (2018a,b). With the project FluidDyn, we demonstrate that specialized scientific code can be written with methods and good practices of the open-source community. The Mercurial repositories are available in Bitbucket (https://bitbucket.org/fluiddyn/). All codes are documented using Sphinx and Read the Docs, and tested with continuous integration run on Bitbucket, Pipelines and Travis. To improve the reuse potential, the codes are as modular as possible, leveraging the simple object-oriented programming model of Python. All codes are also written to be highly efficient, using C++, Cython and Pythran to speed up the performance of critical functions. |
2302.11458 | Manuel Stoiber | Manuel Stoiber, Mariam Elsayed, Anne E. Reichert, Florian Steidle,
Dongheui Lee, Rudolph Triebel | Fusing Visual Appearance and Geometry for Multi-modality 6DoF Object
Tracking | Submitted to IEEE/RSJ International Conference on Intelligent Robots | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In many applications of advanced robotic manipulation, six degrees of freedom
(6DoF) object pose estimates are continuously required. In this work, we
develop a multi-modality tracker that fuses information from visual appearance
and geometry to estimate object poses. The algorithm extends our previous
method ICG, which uses geometry, to additionally consider surface appearance.
In general, object surfaces contain local characteristics from text, graphics,
and patterns, as well as global differences from distinct materials and colors.
To incorporate this visual information, two modalities are developed. For local
characteristics, keypoint features are used to minimize distances between
points from keyframes and the current image. For global differences, a novel
region approach is developed that considers multiple regions on the object
surface. In addition, it allows the modeling of external geometries.
Experiments on the YCB-Video and OPT datasets demonstrate that our approach
ICG+ performs best on both datasets, outperforming both conventional and deep
learning-based methods. At the same time, the algorithm is highly efficient and
runs at more than 300 Hz. The source code of our tracker is publicly available.
| [
{
"created": "Wed, 22 Feb 2023 15:53:00 GMT",
"version": "v1"
}
] | 2023-02-23 | [
[
"Stoiber",
"Manuel",
""
],
[
"Elsayed",
"Mariam",
""
],
[
"Reichert",
"Anne E.",
""
],
[
"Steidle",
"Florian",
""
],
[
"Lee",
"Dongheui",
""
],
[
"Triebel",
"Rudolph",
""
]
] | In many applications of advanced robotic manipulation, six degrees of freedom (6DoF) object pose estimates are continuously required. In this work, we develop a multi-modality tracker that fuses information from visual appearance and geometry to estimate object poses. The algorithm extends our previous method ICG, which uses geometry, to additionally consider surface appearance. In general, object surfaces contain local characteristics from text, graphics, and patterns, as well as global differences from distinct materials and colors. To incorporate this visual information, two modalities are developed. For local characteristics, keypoint features are used to minimize distances between points from keyframes and the current image. For global differences, a novel region approach is developed that considers multiple regions on the object surface. In addition, it allows the modeling of external geometries. Experiments on the YCB-Video and OPT datasets demonstrate that our approach ICG+ performs best on both datasets, outperforming both conventional and deep learning-based methods. At the same time, the algorithm is highly efficient and runs at more than 300 Hz. The source code of our tracker is publicly available. |
2301.11313 | Ola Shorinwa | Ola Shorinwa, Trevor Halsted, Javier Yu, Mac Schwager | Distributed Optimization Methods for Multi-Robot Systems: Part I -- A
Tutorial | null | null | null | null | cs.RO cs.MA | http://creativecommons.org/licenses/by/4.0/ | Distributed optimization provides a framework for deriving distributed
algorithms for a variety of multi-robot problems. This tutorial constitutes the
first part of a two-part series on distributed optimization applied to
multi-robot problems, which seeks to advance the application of distributed
optimization in robotics. In this tutorial, we demonstrate that many canonical
multi-robot problems can be cast within the distributed optimization framework,
such as multi-robot simultaneous localization and mapping (SLAM), multi-robot
target tracking, and multi-robot task assignment problems. We identify three
broad categories of distributed optimization algorithms: distributed
first-order methods, distributed sequential convex programming, and the
alternating direction method of multipliers (ADMM). We describe the basic
structure of each category and provide representative algorithms within each
category. We then work through a simulation case study of multiple drones
collaboratively tracking a ground vehicle. We compare solutions to this problem
using a number of different distributed optimization algorithms. In addition,
we implement a distributed optimization algorithm in hardware on a network of
Raspberry Pis communicating with XBee modules to illustrate robustness to the
challenges of real-world communication networks.
| [
{
"created": "Thu, 26 Jan 2023 18:52:07 GMT",
"version": "v1"
}
] | 2023-01-27 | [
[
"Shorinwa",
"Ola",
""
],
[
"Halsted",
"Trevor",
""
],
[
"Yu",
"Javier",
""
],
[
"Schwager",
"Mac",
""
]
] | Distributed optimization provides a framework for deriving distributed algorithms for a variety of multi-robot problems. This tutorial constitutes the first part of a two-part series on distributed optimization applied to multi-robot problems, which seeks to advance the application of distributed optimization in robotics. In this tutorial, we demonstrate that many canonical multi-robot problems can be cast within the distributed optimization framework, such as multi-robot simultaneous localization and mapping (SLAM), multi-robot target tracking, and multi-robot task assignment problems. We identify three broad categories of distributed optimization algorithms: distributed first-order methods, distributed sequential convex programming, and the alternating direction method of multipliers (ADMM). We describe the basic structure of each category and provide representative algorithms within each category. We then work through a simulation case study of multiple drones collaboratively tracking a ground vehicle. We compare solutions to this problem using a number of different distributed optimization algorithms. In addition, we implement a distributed optimization algorithm in hardware on a network of Raspberry Pis communicating with XBee modules to illustrate robustness to the challenges of real-world communication networks. |
1902.09941 | Jian Zhang | Runsheng Zhang, Jian Zhang, Yaping Huang and Qi Zou | Unsupervised Part Mining for Fine-grained Image Classification | 10 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained image classification remains challenging due to the large
intra-class variance and small inter-class variance. Since the subtle visual
differences are only in local regions of discriminative parts among
subcategories, part localization is a key issue for fine-grained image
classification. Most existing approaches localize object or parts in an image
with object or part annotations, which are expensive and labor-consuming. To
tackle this issue, we propose a fully unsupervised part mining (UPM) approach
to localize the discriminative parts without even image-level annotations,
which largely improves the fine-grained classification performance. We first
utilize pattern mining techniques to discover frequent patterns, i.e.,
co-occurrence highlighted regions, in the feature maps extracted from a
pre-trained convolutional neural network (CNN) model. Inspired by the fact that
these relevant meaningful patterns typically hold appearance and spatial
consistency, we then cluster the mined regions to obtain the cluster centers
and the discriminative parts surrounding the cluster centers are generated.
Importantly, any annotations and sophisticated training procedures are not used
in our proposed part localization approach. Finally, a multi-stream
classification network is built for aggregating the original, object-level and
part-level features simultaneously. Compared with other state-of-the-art
approaches, our UPM approach achieves competitive performance.
| [
{
"created": "Tue, 26 Feb 2019 14:04:58 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Mar 2022 13:20:38 GMT",
"version": "v2"
}
] | 2022-04-01 | [
[
"Zhang",
"Runsheng",
""
],
[
"zhang",
"jian",
""
],
[
"Huang",
"Yaping",
""
],
[
"Zou",
"Qi",
""
]
] | Fine-grained image classification remains challenging due to the large intra-class variance and small inter-class variance. Since the subtle visual differences are only in local regions of discriminative parts among subcategories, part localization is a key issue for fine-grained image classification. Most existing approaches localize object or parts in an image with object or part annotations, which are expensive and labor-consuming. To tackle this issue, we propose a fully unsupervised part mining (UPM) approach to localize the discriminative parts without even image-level annotations, which largely improves the fine-grained classification performance. We first utilize pattern mining techniques to discover frequent patterns, i.e., co-occurrence highlighted regions, in the feature maps extracted from a pre-trained convolutional neural network (CNN) model. Inspired by the fact that these relevant meaningful patterns typically hold appearance and spatial consistency, we then cluster the mined regions to obtain the cluster centers and the discriminative parts surrounding the cluster centers are generated. Importantly, any annotations and sophisticated training procedures are not used in our proposed part localization approach. Finally, a multi-stream classification network is built for aggregating the original, object-level and part-level features simultaneously. Compared with other state-of-the-art approaches, our UPM approach achieves competitive performance. |
2206.04667 | Zhirong Wu | Zhirong Wu, Zihang Lai, Xiao Sun, Stephen Lin | Extreme Masking for Learning Instance and Distributed Visual
Representations | Accepted in TMLR | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The paper presents a scalable approach for learning spatially distributed
visual representations over individual tokens and a holistic instance
representation simultaneously. We use self-attention blocks to represent
spatially distributed tokens, followed by cross-attention blocks to aggregate
the holistic image instance. The core of the approach is the use of extremely
large token masking (75\%-90\%) as the data augmentation for supervision. Our
model, named ExtreMA, follows the plain BYOL approach where the instance
representation from the unmasked subset is trained to predict that from the
intact input. Instead of encouraging invariance across inputs, the model is
required to capture informative variations in an image. The paper makes three
contributions: 1) It presents random masking as a strong and computationally
efficient data augmentation for siamese representation learning. 2) With
multiple sampling per instance, extreme masking greatly speeds up learning and
improves performance with more data. 3) ExtreMA obtains stronger linear probing
performance than masked modeling methods, and better transfer performance than
prior contrastive models.
| [
{
"created": "Thu, 9 Jun 2022 17:59:43 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Mar 2023 09:51:25 GMT",
"version": "v2"
}
] | 2023-03-09 | [
[
"Wu",
"Zhirong",
""
],
[
"Lai",
"Zihang",
""
],
[
"Sun",
"Xiao",
""
],
[
"Lin",
"Stephen",
""
]
] | The paper presents a scalable approach for learning spatially distributed visual representations over individual tokens and a holistic instance representation simultaneously. We use self-attention blocks to represent spatially distributed tokens, followed by cross-attention blocks to aggregate the holistic image instance. The core of the approach is the use of extremely large token masking (75\%-90\%) as the data augmentation for supervision. Our model, named ExtreMA, follows the plain BYOL approach where the instance representation from the unmasked subset is trained to predict that from the intact input. Instead of encouraging invariance across inputs, the model is required to capture informative variations in an image. The paper makes three contributions: 1) It presents random masking as a strong and computationally efficient data augmentation for siamese representation learning. 2) With multiple sampling per instance, extreme masking greatly speeds up learning and improves performance with more data. 3) ExtreMA obtains stronger linear probing performance than masked modeling methods, and better transfer performance than prior contrastive models. |
2212.05420 | Mike Thelwall Prof | Mike Thelwall, Kayvan Kousha, Mahshid Abdoli, Emma Stuart, Meiko
Makita, Paul Wilson, Jonathan Levitt | Terms in journal articles associating with high quality: Can qualitative
research be world-leading? | null | null | null | null | cs.DL | http://creativecommons.org/licenses/by/4.0/ | Purpose: Scholars often aim to conduct high quality research and their
success is judged primarily by peer reviewers. Research quality is difficult
for either group to identify, however, and misunderstandings can reduce the
efficiency of the scientific enterprise. In response, we use a novel term
association strategy to seek quantitative evidence of aspects of research that
associate with high or low quality. Design/methodology/approach: We extracted
the words and 2-5-word phrases most strongly associating with different quality
scores in each of 34 Units of Assessment (UoAs) in the Research Excellence
Framework (REF) 2021. We extracted the terms from 122,331 journal articles
2014-2020 with individual REF2021 quality scores. Findings: The terms
associating with high- or low-quality scores vary between fields but relate to
writing styles, methods, and topics. We show that the first-person writing
style strongly associates with higher quality research in many areas because it
is the norm for a set of large prestigious journals. We found methods and
topics that associate with both high- and low-quality scores. Worryingly, terms
associating with educational and qualitative research attract lower quality
scores in multiple areas. REF experts may rarely give high scores to
qualitative or educational research because the authors tend to be less
competent, because it is harder to make world leading research with these
themes, or because they do not value them. Originality: This is the first
investigation of journal article terms associating with research quality.
| [
{
"created": "Sun, 11 Dec 2022 06:12:31 GMT",
"version": "v1"
}
] | 2022-12-13 | [
[
"Thelwall",
"Mike",
""
],
[
"Kousha",
"Kayvan",
""
],
[
"Abdoli",
"Mahshid",
""
],
[
"Stuart",
"Emma",
""
],
[
"Makita",
"Meiko",
""
],
[
"Wilson",
"Paul",
""
],
[
"Levitt",
"Jonathan",
""
]
] | Purpose: Scholars often aim to conduct high quality research and their success is judged primarily by peer reviewers. Research quality is difficult for either group to identify, however, and misunderstandings can reduce the efficiency of the scientific enterprise. In response, we use a novel term association strategy to seek quantitative evidence of aspects of research that associate with high or low quality. Design/methodology/approach: We extracted the words and 2-5-word phrases most strongly associating with different quality scores in each of 34 Units of Assessment (UoAs) in the Research Excellence Framework (REF) 2021. We extracted the terms from 122,331 journal articles 2014-2020 with individual REF2021 quality scores. Findings: The terms associating with high- or low-quality scores vary between fields but relate to writing styles, methods, and topics. We show that the first-person writing style strongly associates with higher quality research in many areas because it is the norm for a set of large prestigious journals. We found methods and topics that associate with both high- and low-quality scores. Worryingly, terms associating with educational and qualitative research attract lower quality scores in multiple areas. REF experts may rarely give high scores to qualitative or educational research because the authors tend to be less competent, because it is harder to make world leading research with these themes, or because they do not value them. Originality: This is the first investigation of journal article terms associating with research quality. |
2405.05853 | Zheming Zuo | Zheming Zuo, Joseph Smith, Jonathan Stonehouse, Boguslaw Obara | Robust and Explainable Fine-Grained Visual Classification with Transfer
Learning: A Dual-Carriageway Framework | Accepted in the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR) 2024 workshop | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the realm of practical fine-grained visual classification applications
rooted in deep learning, a common scenario involves training a model using a
pre-existing dataset. Subsequently, a new dataset becomes available, prompting
the desire to make a pivotal decision for achieving enhanced and leveraged
inference performance on both sides: Should one opt to train datasets from
scratch or fine-tune the model trained on the initial dataset using the newly
released dataset? The existing literature reveals a lack of methods to
systematically determine the optimal training strategy, necessitating
explainability. To this end, we present an automatic best-suit training
solution searching framework, the Dual-Carriageway Framework (DCF), to fill
this gap. DCF benefits from the design of a dual-direction search (starting
from the pre-existing or the newly released dataset) where five different
training settings are enforced. In addition, DCF is not only capable of
figuring out the optimal training strategy with the capability of avoiding
overfitting but also yields built-in quantitative and visual explanations
derived from the actual input and weights of the trained model. We validated
DCF's effectiveness through experiments with three convolutional neural
networks (ResNet18, ResNet34 and Inception-v3) on two temporally continued
commercial product datasets. Results showed fine-tuning pathways outperformed
training-from-scratch ones by up to 2.13% and 1.23% on the pre-existing and new
datasets, respectively, in terms of mean accuracy. Furthermore, DCF identified
reflection padding as the superior padding method, enhancing testing accuracy
by 3.72% on average. This framework stands out for its potential to guide the
development of robust and explainable AI solutions in fine-grained visual
classification tasks.
| [
{
"created": "Thu, 9 May 2024 15:41:10 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Zuo",
"Zheming",
""
],
[
"Smith",
"Joseph",
""
],
[
"Stonehouse",
"Jonathan",
""
],
[
"Obara",
"Boguslaw",
""
]
] | In the realm of practical fine-grained visual classification applications rooted in deep learning, a common scenario involves training a model using a pre-existing dataset. Subsequently, a new dataset becomes available, prompting the desire to make a pivotal decision for achieving enhanced and leveraged inference performance on both sides: Should one opt to train datasets from scratch or fine-tune the model trained on the initial dataset using the newly released dataset? The existing literature reveals a lack of methods to systematically determine the optimal training strategy, necessitating explainability. To this end, we present an automatic best-suit training solution searching framework, the Dual-Carriageway Framework (DCF), to fill this gap. DCF benefits from the design of a dual-direction search (starting from the pre-existing or the newly released dataset) where five different training settings are enforced. In addition, DCF is not only capable of figuring out the optimal training strategy with the capability of avoiding overfitting but also yields built-in quantitative and visual explanations derived from the actual input and weights of the trained model. We validated DCF's effectiveness through experiments with three convolutional neural networks (ResNet18, ResNet34 and Inception-v3) on two temporally continued commercial product datasets. Results showed fine-tuning pathways outperformed training-from-scratch ones by up to 2.13% and 1.23% on the pre-existing and new datasets, respectively, in terms of mean accuracy. Furthermore, DCF identified reflection padding as the superior padding method, enhancing testing accuracy by 3.72% on average. This framework stands out for its potential to guide the development of robust and explainable AI solutions in fine-grained visual classification tasks. |
2210.09197 | Zhixue Zhao | Zhixue Zhao, George Chrysostomou, Kalina Bontcheva, Nikolaos Aletras | On the Impact of Temporal Concept Drift on Model Explanations | Accepted at EMNLP Findings 2022 | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explanation faithfulness of model predictions in natural language processing
is typically evaluated on held-out data from the same temporal distribution as
the training data (i.e. synchronous settings). While model performance often
deteriorates due to temporal variation (i.e. temporal concept drift), it is
currently unknown how explanation faithfulness is impacted when the time span
of the target data is different from the data used to train the model (i.e.
asynchronous settings). For this purpose, we examine the impact of temporal
variation on model explanations extracted by eight feature attribution methods
and three select-then-predict models across six text classification tasks. Our
experiments show that (i) faithfulness is not consistent under temporal
variations across feature attribution methods (e.g. it decreases or increases
depending on the method), with an attention-based method demonstrating the most
robust faithfulness scores across datasets; and (ii) select-then-predict models
are mostly robust in asynchronous settings with only small degradation in
predictive performance. Finally, feature attribution methods show conflicting
behavior when used in FRESH (i.e. a select-and-predict model) and for measuring
sufficiency/comprehensiveness (i.e. as post-hoc methods), suggesting that we
need more robust metrics to evaluate post-hoc explanation faithfulness.
| [
{
"created": "Mon, 17 Oct 2022 15:53:09 GMT",
"version": "v1"
}
] | 2022-10-18 | [
[
"Zhao",
"Zhixue",
""
],
[
"Chrysostomou",
"George",
""
],
[
"Bontcheva",
"Kalina",
""
],
[
"Aletras",
"Nikolaos",
""
]
] | Explanation faithfulness of model predictions in natural language processing is typically evaluated on held-out data from the same temporal distribution as the training data (i.e. synchronous settings). While model performance often deteriorates due to temporal variation (i.e. temporal concept drift), it is currently unknown how explanation faithfulness is impacted when the time span of the target data is different from the data used to train the model (i.e. asynchronous settings). For this purpose, we examine the impact of temporal variation on model explanations extracted by eight feature attribution methods and three select-then-predict models across six text classification tasks. Our experiments show that (i) faithfulness is not consistent under temporal variations across feature attribution methods (e.g. it decreases or increases depending on the method), with an attention-based method demonstrating the most robust faithfulness scores across datasets; and (ii) select-then-predict models are mostly robust in asynchronous settings with only small degradation in predictive performance. Finally, feature attribution methods show conflicting behavior when used in FRESH (i.e. a select-and-predict model) and for measuring sufficiency/comprehensiveness (i.e. as post-hoc methods), suggesting that we need more robust metrics to evaluate post-hoc explanation faithfulness. |
2303.02601 | Maria Lymperaiou | Theodoti Stoikou, Maria Lymperaiou, Giorgos Stamou | Knowledge-Based Counterfactual Queries for Visual Question Answering | null | AAAI MAKE 2023 | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Visual Question Answering (VQA) has been a popular task that combines vision
and language, with numerous relevant implementations in literature. Even though
there are some attempts that approach explainability and robustness issues in
VQA models, very few of them employ counterfactuals as a means of probing such
challenges in a model-agnostic way. In this work, we propose a systematic
method for explaining the behavior and investigating the robustness of VQA
models through counterfactual perturbations. For this reason, we exploit
structured knowledge bases to perform deterministic, optimal and controllable
word-level replacements targeting the linguistic modality, and we then evaluate
the model's response against such counterfactual inputs. Finally, we
qualitatively extract local and global explanations based on counterfactual
responses, which are ultimately proven insightful towards interpreting VQA
model behaviors. By performing a variety of perturbation types, targeting
different parts of speech of the input question, we gain insights to the
reasoning of the model, through the comparison of its responses in different
adversarial circumstances. Overall, we reveal possible biases in the
decision-making process of the model, as well as expected and unexpected
patterns, which impact its performance quantitatively and qualitatively, as
indicated by our analysis.
| [
{
"created": "Sun, 5 Mar 2023 08:00:30 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Stoikou",
"Theodoti",
""
],
[
"Lymperaiou",
"Maria",
""
],
[
"Stamou",
"Giorgos",
""
]
] | Visual Question Answering (VQA) has been a popular task that combines vision and language, with numerous relevant implementations in literature. Even though there are some attempts that approach explainability and robustness issues in VQA models, very few of them employ counterfactuals as a means of probing such challenges in a model-agnostic way. In this work, we propose a systematic method for explaining the behavior and investigating the robustness of VQA models through counterfactual perturbations. For this reason, we exploit structured knowledge bases to perform deterministic, optimal and controllable word-level replacements targeting the linguistic modality, and we then evaluate the model's response against such counterfactual inputs. Finally, we qualitatively extract local and global explanations based on counterfactual responses, which are ultimately proven insightful towards interpreting VQA model behaviors. By performing a variety of perturbation types, targeting different parts of speech of the input question, we gain insights to the reasoning of the model, through the comparison of its responses in different adversarial circumstances. Overall, we reveal possible biases in the decision-making process of the model, as well as expected and unexpected patterns, which impact its performance quantitatively and qualitatively, as indicated by our analysis. |
2406.08101 | Qianli Wang | Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian
M\"oller | CoXQL: A Dataset for Parsing Explanation Requests in Conversational XAI
Systems | 4 pages, short paper | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Conversational explainable artificial intelligence (ConvXAI) systems based on
large language models (LLMs) have garnered significant interest from the
research community in natural language processing (NLP) and human-computer
interaction (HCI). Such systems can provide answers to user questions about
explanations in dialogues, have the potential to enhance users' comprehension
and offer more information about the decision-making and generation processes
of LLMs. Currently available ConvXAI systems are based on intent recognition
rather than free chat, as this has been found to be more precise and reliable
in identifying users' intentions. However, the recognition of intents still
presents a challenge in the case of ConvXAI, since little training data exist
and the domain is highly specific, as there is a broad range of XAI methods to
map requests onto. In order to bridge this gap, we present CoXQL, the first
dataset for user intent recognition in ConvXAI, covering 31 intents, seven of
which require filling multiple slots. Subsequently, we enhance an existing
parsing approach by incorporating template validations, and conduct an
evaluation of several LLMs on CoXQL using different parsing strategies. We
conclude that the improved parsing approach (MP+) surpasses the performance of
previous approaches. We also discover that intents with multiple slots remain
highly challenging for LLMs.
| [
{
"created": "Wed, 12 Jun 2024 11:27:10 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2024 03:16:47 GMT",
"version": "v2"
}
] | 2024-06-14 | [
[
"Wang",
"Qianli",
""
],
[
"Anikina",
"Tatiana",
""
],
[
"Feldhus",
"Nils",
""
],
[
"Ostermann",
"Simon",
""
],
[
"Möller",
"Sebastian",
""
]
] | Conversational explainable artificial intelligence (ConvXAI) systems based on large language models (LLMs) have garnered significant interest from the research community in natural language processing (NLP) and human-computer interaction (HCI). Such systems can provide answers to user questions about explanations in dialogues, have the potential to enhance users' comprehension and offer more information about the decision-making and generation processes of LLMs. Currently available ConvXAI systems are based on intent recognition rather than free chat, as this has been found to be more precise and reliable in identifying users' intentions. However, the recognition of intents still presents a challenge in the case of ConvXAI, since little training data exist and the domain is highly specific, as there is a broad range of XAI methods to map requests onto. In order to bridge this gap, we present CoXQL, the first dataset for user intent recognition in ConvXAI, covering 31 intents, seven of which require filling multiple slots. Subsequently, we enhance an existing parsing approach by incorporating template validations, and conduct an evaluation of several LLMs on CoXQL using different parsing strategies. We conclude that the improved parsing approach (MP+) surpasses the performance of previous approaches. We also discover that intents with multiple slots remain highly challenging for LLMs. |
2209.07088 | Zhengming Zhou | Zhengming Zhou and Qiulei Dong | Self-distilled Feature Aggregation for Self-supervised Monocular Depth
Estimation | Accepted to ECCV 2022 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Self-supervised monocular depth estimation has received much attention
recently in computer vision. Most of the existing works in literature aggregate
multi-scale features for depth prediction via either straightforward
concatenation or element-wise addition, however, such feature aggregation
operations generally neglect the contextual consistency between multi-scale
features. Addressing this problem, we propose the Self-Distilled Feature
Aggregation (SDFA) module for simultaneously aggregating a pair of low-scale
and high-scale features and maintaining their contextual consistency. The SDFA
employs three branches to learn three feature offset maps respectively: one
offset map for refining the input low-scale feature and the other two for
refining the input high-scale feature under a designed self-distillation
manner. Then, we propose an SDFA-based network for self-supervised monocular
depth estimation, and design a self-distilled training strategy to train the
proposed network with the SDFA module. Experimental results on the KITTI
dataset demonstrate that the proposed method outperforms the comparative
state-of-the-art methods in most cases. The code is available at
https://github.com/ZM-Zhou/SDFA-Net_pytorch.
| [
{
"created": "Thu, 15 Sep 2022 07:00:52 GMT",
"version": "v1"
}
] | 2022-09-16 | [
[
"Zhou",
"Zhengming",
""
],
[
"Dong",
"Qiulei",
""
]
] | Self-supervised monocular depth estimation has received much attention recently in computer vision. Most of the existing works in literature aggregate multi-scale features for depth prediction via either straightforward concatenation or element-wise addition, however, such feature aggregation operations generally neglect the contextual consistency between multi-scale features. Addressing this problem, we propose the Self-Distilled Feature Aggregation (SDFA) module for simultaneously aggregating a pair of low-scale and high-scale features and maintaining their contextual consistency. The SDFA employs three branches to learn three feature offset maps respectively: one offset map for refining the input low-scale feature and the other two for refining the input high-scale feature under a designed self-distillation manner. Then, we propose an SDFA-based network for self-supervised monocular depth estimation, and design a self-distilled training strategy to train the proposed network with the SDFA module. Experimental results on the KITTI dataset demonstrate that the proposed method outperforms the comparative state-of-the-art methods in most cases. The code is available at https://github.com/ZM-Zhou/SDFA-Net_pytorch. |
2311.13681 | Joo Chan Lee | Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, Eunbyung Park | Compact 3D Gaussian Representation for Radiance Field | Project page: http://maincold2.github.io/c3dgs/ | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Radiance Fields (NeRFs) have demonstrated remarkable potential in
capturing complex 3D scenes with high fidelity. However, one persistent
challenge that hinders the widespread adoption of NeRFs is the computational
bottleneck due to the volumetric rendering. On the other hand, 3D Gaussian
splatting (3DGS) has recently emerged as an alternative representation that
leverages a 3D Gaussian-based representation and adopts the rasterization
pipeline to render the images rather than volumetric rendering, achieving very
fast rendering speed and promising image quality. However, a significant
drawback arises as 3DGS entails a substantial number of 3D Gaussians to
maintain the high fidelity of the rendered images, which requires a large
amount of memory and storage. To address this critical issue, we place a
specific emphasis on two key objectives: reducing the number of Gaussian points
without sacrificing performance and compressing the Gaussian attributes, such
as view-dependent color and covariance. To this end, we propose a learnable
mask strategy that significantly reduces the number of Gaussians while
preserving high performance. In addition, we propose a compact but effective
representation of view-dependent color by employing a grid-based neural field
rather than relying on spherical harmonics. Finally, we learn codebooks to
compactly represent the geometric attributes of Gaussian by vector
quantization. With model compression techniques such as quantization and
entropy coding, we consistently show over 25$\times$ reduced storage and
enhanced rendering speed, while maintaining the quality of the scene
representation, compared to 3DGS. Our work provides a comprehensive framework
for 3D scene representation, achieving high performance, fast training,
compactness, and real-time rendering. Our project page is available at
https://maincold2.github.io/c3dgs/.
| [
{
"created": "Wed, 22 Nov 2023 20:31:16 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Feb 2024 13:52:53 GMT",
"version": "v2"
}
] | 2024-02-16 | [
[
"Lee",
"Joo Chan",
""
],
[
"Rho",
"Daniel",
""
],
[
"Sun",
"Xiangyu",
""
],
[
"Ko",
"Jong Hwan",
""
],
[
"Park",
"Eunbyung",
""
]
] | Neural Radiance Fields (NeRFs) have demonstrated remarkable potential in capturing complex 3D scenes with high fidelity. However, one persistent challenge that hinders the widespread adoption of NeRFs is the computational bottleneck due to the volumetric rendering. On the other hand, 3D Gaussian splatting (3DGS) has recently emerged as an alternative representation that leverages a 3D Gaussian-based representation and adopts the rasterization pipeline to render the images rather than volumetric rendering, achieving very fast rendering speed and promising image quality. However, a significant drawback arises as 3DGS entails a substantial number of 3D Gaussians to maintain the high fidelity of the rendered images, which requires a large amount of memory and storage. To address this critical issue, we place a specific emphasis on two key objectives: reducing the number of Gaussian points without sacrificing performance and compressing the Gaussian attributes, such as view-dependent color and covariance. To this end, we propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance. In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field rather than relying on spherical harmonics. Finally, we learn codebooks to compactly represent the geometric attributes of Gaussian by vector quantization. With model compression techniques such as quantization and entropy coding, we consistently show over 25$\times$ reduced storage and enhanced rendering speed, while maintaining the quality of the scene representation, compared to 3DGS. Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering. Our project page is available at https://maincold2.github.io/c3dgs/. |
2006.02903 | Pengzhen Ren | Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li,
Xiaojiang Chen, and Xin Wang | A Comprehensive Survey of Neural Architecture Search: Challenges and
Solutions | Accepted by ACM Computing Surveys 2021 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has made substantial breakthroughs in many fields due to
its powerful automatic representation capabilities. It has been proven that
neural architecture design is crucial to the feature representation of data and
the final performance. However, the design of the neural architecture heavily
relies on the researchers' prior knowledge and experience. And due to the
limitations of humans' inherent knowledge, it is difficult for people to jump
out of their original thinking paradigm and design an optimal model. Therefore,
an intuitive idea would be to reduce human intervention as much as possible and
let the algorithm automatically design the neural architecture. Neural
Architecture Search (NAS) is just such a revolutionary algorithm, and the
related research work is complicated and rich. Therefore, a comprehensive and
systematic survey on the NAS is essential. Previously related surveys have
begun to classify existing work mainly based on the key components of NAS:
search space, search strategy, and evaluation strategy. While this
classification method is more intuitive, it is difficult for readers to grasp
the challenges and the landmark work involved. Therefore, in this survey, we
provide a new perspective: beginning with an overview of the characteristics of
the earliest NAS algorithms, summarizing the problems in these early NAS
algorithms, and then providing solutions for subsequent related research work.
Besides, we conduct a detailed and comprehensive analysis, comparison, and
summary of these works. Finally, we provide some possible future research
directions.
| [
{
"created": "Mon, 1 Jun 2020 13:08:03 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jan 2021 03:56:19 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Mar 2021 08:35:02 GMT",
"version": "v3"
}
] | 2021-03-03 | [
[
"Ren",
"Pengzhen",
""
],
[
"Xiao",
"Yun",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Huang",
"Po-Yao",
""
],
[
"Li",
"Zhihui",
""
],
[
"Chen",
"Xiaojiang",
""
],
[
"Wang",
"Xin",
""
]
] | Deep learning has made substantial breakthroughs in many fields due to its powerful automatic representation capabilities. It has been proven that neural architecture design is crucial to the feature representation of data and the final performance. However, the design of the neural architecture heavily relies on the researchers' prior knowledge and experience. And due to the limitations of humans' inherent knowledge, it is difficult for people to jump out of their original thinking paradigm and design an optimal model. Therefore, an intuitive idea would be to reduce human intervention as much as possible and let the algorithm automatically design the neural architecture. Neural Architecture Search (NAS) is just such a revolutionary algorithm, and the related research work is complicated and rich. Therefore, a comprehensive and systematic survey on the NAS is essential. Previously related surveys have begun to classify existing work mainly based on the key components of NAS: search space, search strategy, and evaluation strategy. While this classification method is more intuitive, it is difficult for readers to grasp the challenges and the landmark work involved. Therefore, in this survey, we provide a new perspective: beginning with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early NAS algorithms, and then providing solutions for subsequent related research work. Besides, we conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we provide some possible future research directions. |
2406.17714 | Purva Pruthi | Purva Pruthi and David Jensen | Compositional Models for Estimating Causal Effects | null | null | null | null | cs.AI cs.LG stat.ME | http://creativecommons.org/licenses/by/4.0/ | Many real-world systems can be represented as sets of interacting components.
Examples of such systems include computational systems such as query
processors, natural systems such as cells, and social systems such as families.
Many approaches have been proposed in traditional (associational) machine
learning to model such structured systems, including statistical relational
models and graph neural networks. Despite this prior work, existing approaches
to estimating causal effects typically treat such systems as single units,
represent them with a fixed set of variables and assume a homogeneous
data-generating process. We study a compositional approach for estimating
individual treatment effects (ITE) in structured systems, where each unit is
represented by the composition of multiple heterogeneous components. This
approach uses a modular architecture to model potential outcomes at each
component and aggregates component-level potential outcomes to obtain the
unit-level potential outcomes. We discover novel benefits of the compositional
approach in causal inference - systematic generalization to estimate
counterfactual outcomes of unseen combinations of components and improved
overlap guarantees between treatment and control groups compared to the
classical methods for causal effect estimation. We also introduce a set of
novel environments for empirically evaluating the compositional approach and
demonstrate the effectiveness of our approach using both simulated and
real-world data.
| [
{
"created": "Tue, 25 Jun 2024 16:56:17 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Pruthi",
"Purva",
""
],
[
"Jensen",
"David",
""
]
] | Many real-world systems can be represented as sets of interacting components. Examples of such systems include computational systems such as query processors, natural systems such as cells, and social systems such as families. Many approaches have been proposed in traditional (associational) machine learning to model such structured systems, including statistical relational models and graph neural networks. Despite this prior work, existing approaches to estimating causal effects typically treat such systems as single units, represent them with a fixed set of variables and assume a homogeneous data-generating process. We study a compositional approach for estimating individual treatment effects (ITE) in structured systems, where each unit is represented by the composition of multiple heterogeneous components. This approach uses a modular architecture to model potential outcomes at each component and aggregates component-level potential outcomes to obtain the unit-level potential outcomes. We discover novel benefits of the compositional approach in causal inference - systematic generalization to estimate counterfactual outcomes of unseen combinations of components and improved overlap guarantees between treatment and control groups compared to the classical methods for causal effect estimation. We also introduce a set of novel environments for empirically evaluating the compositional approach and demonstrate the effectiveness of our approach using both simulated and real-world data. |
1711.08475 | Aleksander Cis{\l}ak | Aleksander Cis{\l}ak, Szymon Grabowski | Lightweight Fingerprints for Fast Approximate Keyword Matching Using
Bitwise Operations | 16 pages, 1 figure, 4 tables | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We aim to speed up approximate keyword matching by storing a lightweight,
fixed-size block of data for each string, called a fingerprint. These work in a
similar way to hash values; however, they can also be used for matching with
errors. They store information regarding symbol occurrences using individual
bits, and they can be compared against each other with a constant number of
bitwise operations. In this way, certain strings can be deduced to be at least
within the distance $k$ from each other (using Hamming or Levenshtein distance)
without performing an explicit verification. We show experimentally that for a
preprocessed collection of strings, fingerprints can provide substantial
speedups for $k = 1$, namely over $2.5$ times for the Hamming distance and over
$10$ times for the Levenshtein distance. Tests were conducted on synthetic and
real-world English and URL data.
| [
{
"created": "Wed, 22 Nov 2017 19:13:08 GMT",
"version": "v1"
}
] | 2017-11-27 | [
[
"Cisłak",
"Aleksander",
""
],
[
"Grabowski",
"Szymon",
""
]
] | We aim to speed up approximate keyword matching by storing a lightweight, fixed-size block of data for each string, called a fingerprint. These work in a similar way to hash values; however, they can also be used for matching with errors. They store information regarding symbol occurrences using individual bits, and they can be compared against each other with a constant number of bitwise operations. In this way, certain strings can be deduced to be at least within the distance $k$ from each other (using Hamming or Levenshtein distance) without performing an explicit verification. We show experimentally that for a preprocessed collection of strings, fingerprints can provide substantial speedups for $k = 1$, namely over $2.5$ times for the Hamming distance and over $10$ times for the Levenshtein distance. Tests were conducted on synthetic and real-world English and URL data. |
2206.12540 | David Munechika | David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade,
Krishnaram Kenthapadi, Duen Horng Chau | Visual Auditor: Interactive Visualization for Detection and
Summarization of Model Biases | null | null | null | null | cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As machine learning (ML) systems become increasingly widespread, it is
necessary to audit these systems for biases prior to their deployment. Recent
research has developed algorithms for effectively identifying intersectional
bias in the form of interpretable, underperforming subsets (or slices) of the
data. However, these solutions and their insights are limited without a tool
for visually understanding and interacting with the results of these
algorithms. We propose Visual Auditor, an interactive visualization tool for
auditing and summarizing model biases. Visual Auditor assists model validation
by providing an interpretable overview of intersectional bias (bias that is
present when examining populations defined by multiple features), details about
relationships between problematic data slices, and a comparison between
underperforming and overperforming data slices in a model. Our open-source tool
runs directly in both computational notebooks and web browsers, making model
auditing accessible and easily integrated into current ML development
workflows. An observational user study in collaboration with domain experts at
Fiddler AI highlights that our tool can help ML practitioners identify and
understand model biases.
| [
{
"created": "Sat, 25 Jun 2022 02:48:27 GMT",
"version": "v1"
}
] | 2022-06-28 | [
[
"Munechika",
"David",
""
],
[
"Wang",
"Zijie J.",
""
],
[
"Reidy",
"Jack",
""
],
[
"Rubin",
"Josh",
""
],
[
"Gade",
"Krishna",
""
],
[
"Kenthapadi",
"Krishnaram",
""
],
[
"Chau",
"Duen Horng",
""
]
] | As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment. Recent research has developed algorithms for effectively identifying intersectional bias in the form of interpretable, underperforming subsets (or slices) of the data. However, these solutions and their insights are limited without a tool for visually understanding and interacting with the results of these algorithms. We propose Visual Auditor, an interactive visualization tool for auditing and summarizing model biases. Visual Auditor assists model validation by providing an interpretable overview of intersectional bias (bias that is present when examining populations defined by multiple features), details about relationships between problematic data slices, and a comparison between underperforming and overperforming data slices in a model. Our open-source tool runs directly in both computational notebooks and web browsers, making model auditing accessible and easily integrated into current ML development workflows. An observational user study in collaboration with domain experts at Fiddler AI highlights that our tool can help ML practitioners identify and understand model biases. |
2008.04411 | Giuseppe Patane' | Giuseppe Patan\`e | Meshless Approximation and Helmholtz-Hodge Decomposition of Vector
Fields | null | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of vector fields is crucial for the understanding of several
physical phenomena, such as natural events (e.g., analysis of waves), diffusive
processes, electric and electromagnetic fields. While previous work has been
focused mainly on the analysis of 2D or 3D vector fields on volumes or
surfaces, we address the meshless analysis of a vector field defined on an
arbitrary domain, without assumptions on its dimension and discretisation. The
meshless approximation of the Helmholtz-Hodge decomposition of a vector field
is achieved by expressing the potential of its components as a linear
combination of radial basis functions and by computing the corresponding
conservative, irrotational, and harmonic components as solution to a
least-squares or to a differential problem. To this end, we identify the
conditions on the kernel of the radial basis functions that guarantee the
existence of their derivatives. Finally, we demonstrate our approach on 2D and
3D vector fields measured by sensors or generated through simulation.
| [
{
"created": "Mon, 10 Aug 2020 20:58:47 GMT",
"version": "v1"
}
] | 2020-08-12 | [
[
"Patanè",
"Giuseppe",
""
]
] | The analysis of vector fields is crucial for the understanding of several physical phenomena, such as natural events (e.g., analysis of waves), diffusive processes, electric and electromagnetic fields. While previous work has been focused mainly on the analysis of 2D or 3D vector fields on volumes or surfaces, we address the meshless analysis of a vector field defined on an arbitrary domain, without assumptions on its dimension and discretisation. The meshless approximation of the Helmholtz-Hodge decomposition of a vector field is achieved by expressing the potential of its components as a linear combination of radial basis functions and by computing the corresponding conservative, irrotational, and harmonic components as solution to a least-squares or to a differential problem. To this end, we identify the conditions on the kernel of the radial basis functions that guarantee the existence of their derivatives. Finally, we demonstrate our approach on 2D and 3D vector fields measured by sensors or generated through simulation. |
2402.09760 | Hongjin Qian | Hongjin Qian, Zheng Liu, Kelong Mao, Yujia Zhou, Zhicheng Dou | Grounding Language Model with Chunking-Free In-Context Retrieval | null | null | null | null | cs.CL cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel Chunking-Free In-Context (CFIC) retrieval
approach, specifically tailored for Retrieval-Augmented Generation (RAG)
systems. Traditional RAG systems often struggle with grounding responses using
precise evidence text due to the challenges of processing lengthy documents and
filtering out irrelevant content. Commonly employed solutions, such as document
chunking and adapting language models to handle longer contexts, have their
limitations. These methods either disrupt the semantic coherence of the text or
fail to effectively address the issues of noise and inaccuracy in evidence
retrieval.
CFIC addresses these challenges by circumventing the conventional chunking
process. It utilizes the encoded hidden states of documents for in-context
retrieval, employing auto-regressive decoding to accurately identify the
specific evidence text required for user queries, eliminating the need for
chunking. CFIC is further enhanced by incorporating two decoding strategies,
namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies
not only improve the efficiency of the retrieval process but also ensure that
the fidelity of the generated grounding text evidence is maintained. Our
evaluations of CFIC on a range of open QA datasets demonstrate its superiority
in retrieving relevant and accurate evidence, offering a significant
improvement over traditional methods. By doing away with the need for document
chunking, CFIC presents a more streamlined, effective, and efficient retrieval
solution, making it a valuable advancement in the field of RAG systems.
| [
{
"created": "Thu, 15 Feb 2024 07:22:04 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Qian",
"Hongjin",
""
],
[
"Liu",
"Zheng",
""
],
[
"Mao",
"Kelong",
""
],
[
"Zhou",
"Yujia",
""
],
[
"Dou",
"Zhicheng",
""
]
] | This paper presents a novel Chunking-Free In-Context (CFIC) retrieval approach, specifically tailored for Retrieval-Augmented Generation (RAG) systems. Traditional RAG systems often struggle with grounding responses using precise evidence text due to the challenges of processing lengthy documents and filtering out irrelevant content. Commonly employed solutions, such as document chunking and adapting language models to handle longer contexts, have their limitations. These methods either disrupt the semantic coherence of the text or fail to effectively address the issues of noise and inaccuracy in evidence retrieval. CFIC addresses these challenges by circumventing the conventional chunking process. It utilizes the encoded hidden states of documents for in-context retrieval, employing auto-regressive decoding to accurately identify the specific evidence text required for user queries, eliminating the need for chunking. CFIC is further enhanced by incorporating two decoding strategies, namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies not only improve the efficiency of the retrieval process but also ensure that the fidelity of the generated grounding text evidence is maintained. Our evaluations of CFIC on a range of open QA datasets demonstrate its superiority in retrieving relevant and accurate evidence, offering a significant improvement over traditional methods. By doing away with the need for document chunking, CFIC presents a more streamlined, effective, and efficient retrieval solution, making it a valuable advancement in the field of RAG systems. |
1903.09030 | Juan Maro\~nas | Juan Maro\~nas, Roberto Paredes, Daniel Ramos | Generative Models For Deep Learning with Very Scarce Data | null | null | 10.1007/978-3-030-13469-3_3 | null | cs.LG stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | The goal of this paper is to deal with a data scarcity scenario where deep
learning techniques tend to fail. We compare the use of two well-established
techniques, Restricted Boltzmann Machines and Variational Auto-encoders, as
generative models in order to increase the training set in a classification
framework. Essentially, we rely on Markov Chain Monte Carlo (MCMC) algorithms
for generating new samples. We show that generalization can be improved
with this methodology compared to other state-of-the-art techniques, e.g.
semi-supervised learning with ladder networks. Furthermore, we show that RBM is
better than VAE at generating new samples for training a classifier with good
generalization capabilities.
| [
{
"created": "Thu, 21 Mar 2019 14:38:45 GMT",
"version": "v1"
}
] | 2020-03-02 | [
[
"Maroñas",
"Juan",
""
],
[
"Paredes",
"Roberto",
""
],
[
"Ramos",
"Daniel",
""
]
] | The goal of this paper is to deal with a data scarcity scenario where deep learning techniques tend to fail. We compare the use of two well-established techniques, Restricted Boltzmann Machines and Variational Auto-encoders, as generative models in order to increase the training set in a classification framework. Essentially, we rely on Markov Chain Monte Carlo (MCMC) algorithms for generating new samples. We show that generalization can be improved with this methodology compared to other state-of-the-art techniques, e.g. semi-supervised learning with ladder networks. Furthermore, we show that RBM is better than VAE at generating new samples for training a classifier with good generalization capabilities. |
cs/0006007 | Stephen Marsand | Stephen Marsland, Ulrich Nehmzow and Jonathan Shapiro | Novelty Detection on a Mobile Robot Using Habituation | 10 pages, 6 figures. In From Animals to Animats, The Sixth
International Conference on Simulation of Adaptive Behaviour, Paris, 2000 | null | null | null | cs.RO cs.NE nlin.AO | null | In this paper a novelty filter is introduced which allows a robot operating
in an unstructured environment to produce a self-organised model of its
surroundings and to detect deviations from the learned model. The environment
is perceived using the robot's 16 sonar sensors. The algorithm produces a
novelty measure for each sensor scan relative to the model it has learned. This
means that it highlights stimuli which have not been previously experienced.
The novelty filter proposed uses a model of habituation. Habituation is a
decrement in behavioural response when a stimulus is presented repeatedly.
Robot experiments are presented which demonstrate the reliable operation of
the filter in a number of environments.
| [
{
"created": "Fri, 2 Jun 2000 12:33:13 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Marsland",
"Stephen",
""
],
[
"Nehmzow",
"Ulrich",
""
],
[
"Shapiro",
"Jonathan",
""
]
] | In this paper a novelty filter is introduced which allows a robot operating in an unstructured environment to produce a self-organised model of its surroundings and to detect deviations from the learned model. The environment is perceived using the robot's 16 sonar sensors. The algorithm produces a novelty measure for each sensor scan relative to the model it has learned. This means that it highlights stimuli which have not been previously experienced. The novelty filter proposed uses a model of habituation. Habituation is a decrement in behavioural response when a stimulus is presented repeatedly. Robot experiments are presented which demonstrate the reliable operation of the filter in a number of environments. |
2208.12125 | Yuci Han | Yuci Han, Jianli Wei, Alper Yilmaz | UAS Navigation in the Real World Using Visual Observation | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel end-to-end Unmanned Aerial System (UAS)
navigation approach for long-range visual navigation in the real world.
Inspired by the dual-process visual navigation system of human instinct:
environment understanding and landmark recognition, we formulate the UAS
navigation task as the same two phases. Our system combines reinforcement
learning (RL) and image matching approaches. First, the agent learns the
navigation policy using RL in the specified environment. To achieve this, we
design an interactive UASNAV environment for the training process. Once the
agent learns the navigation policy, which means 'familiarized themselves with
the environment', we let the UAS fly in the real world to recognize the
landmarks using the image matching method and take action according to the learned
policy. During the navigation process, the UAS is equipped with a single camera
as its only visual sensor. We demonstrate that the UAS can learn to navigate to
a destination hundreds of meters away from the starting point along the shortest
path in a real-world scenario.
| [
{
"created": "Thu, 25 Aug 2022 14:40:53 GMT",
"version": "v1"
}
] | 2022-08-26 | [
[
"Han",
"Yuci",
""
],
[
"Wei",
"Jianli",
""
],
[
"Yilmaz",
"Alper",
""
]
] | This paper presents a novel end-to-end Unmanned Aerial System (UAS) navigation approach for long-range visual navigation in the real world. Inspired by the dual-process visual navigation system of human instinct: environment understanding and landmark recognition, we formulate the UAS navigation task as the same two phases. Our system combines reinforcement learning (RL) and image matching approaches. First, the agent learns the navigation policy using RL in the specified environment. To achieve this, we design an interactive UASNAV environment for the training process. Once the agent learns the navigation policy, which means 'familiarized themselves with the environment', we let the UAS fly in the real world to recognize the landmarks using the image matching method and take action according to the learned policy. During the navigation process, the UAS is equipped with a single camera as its only visual sensor. We demonstrate that the UAS can learn to navigate to a destination hundreds of meters away from the starting point along the shortest path in a real-world scenario. |
2112.01479 | Subarna Tripathi | Sourya Roy, Kyle Min, Subarna Tripathi, Tanaya Guha and Somdeb
Majumdar | Learning Spatial-Temporal Graphs for Active Speaker Detection | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of active speaker detection through a new framework,
called SPELL, that learns long-range multimodal graphs to encode the
inter-modal relationship between audio and visual data. We cast active speaker
detection as a node classification task that is aware of longer-term
dependencies. We first construct a graph from a video so that each node
corresponds to one person. Nodes representing the same identity share edges
between them within a defined temporal window. Nodes within the same video
frame are also connected to encode inter-person interactions. Through extensive
experiments on the Ava-ActiveSpeaker dataset, we demonstrate that learning
graph-based representation, owing to its explicit spatial and temporal
structure, significantly improves the overall performance. SPELL outperforms
several relevant baselines and performs at par with state of the art models
while requiring an order of magnitude lower computation cost.
| [
{
"created": "Thu, 2 Dec 2021 18:29:07 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Dec 2021 19:41:06 GMT",
"version": "v2"
}
] | 2021-12-07 | [
[
"Roy",
"Sourya",
""
],
[
"Min",
"Kyle",
""
],
[
"Tripathi",
"Subarna",
""
],
[
"Guha",
"Tanaya",
""
],
[
"Majumdar",
"Somdeb",
""
]
] | We address the problem of active speaker detection through a new framework, called SPELL, that learns long-range multimodal graphs to encode the inter-modal relationship between audio and visual data. We cast active speaker detection as a node classification task that is aware of longer-term dependencies. We first construct a graph from a video so that each node corresponds to one person. Nodes representing the same identity share edges between them within a defined temporal window. Nodes within the same video frame are also connected to encode inter-person interactions. Through extensive experiments on the Ava-ActiveSpeaker dataset, we demonstrate that learning graph-based representation, owing to its explicit spatial and temporal structure, significantly improves the overall performance. SPELL outperforms several relevant baselines and performs at par with state of the art models while requiring an order of magnitude lower computation cost. |
2010.14244 | Dr. Vinita Jindal | Vinita Jindal and Punam Bedi | GMACO-P: GPU assisted Preemptive MACO algorithm for enabling Smart
Transportation | 13 pages | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicular Ad-hoc NETworks (VANETs) are developing at a very fast pace to
enable smart transportation in urban cities, by designing some mechanisms for
decreasing travel time for commuters by reducing congestion. Inefficient
traffic signals and routing mechanisms are the major factors that contribute to
the increase of road congestion. For smoother traffic movement and reduced
congestion on the roads, the waiting time at intersections must be reduced and
an optimal path should be chosen simultaneously. In this paper, a GPU-assisted
Preemptive MACO (GMACO-P) algorithm is proposed to minimize the total
travel time of commuters. GMACO-P improves on the MACO-P algorithm by
harnessing the power of the GPU to provide faster computations that further
minimize travel time. The MACO-P algorithm is based on an existing MACO
algorithm that avoids congested paths. The MACO-P algorithm reduces the
average queue length at intersections by incorporating preemption, which
ensures less waiting time. The GMACO-P algorithm was implemented with CUDA
toolkit 7.5 using the C language, and the obtained results were compared with
the existing Dijkstra, ACO, MACO, and MACO-P algorithms, as well as parallel
implementations of the Dijkstra, ACO, and MACO algorithms. The obtained
results show a significant reduction in travel time when using the proposed
GMACO-P algorithm.
| [
{
"created": "Tue, 27 Oct 2020 12:38:10 GMT",
"version": "v1"
}
] | 2020-10-28 | [
[
"Jindal",
"Vinita",
""
],
[
"Bedi",
"Punam",
""
]
] | Vehicular Ad-hoc NETworks (VANETs) are developing at a very fast pace to enable smart transportation in urban cities, by designing some mechanisms for decreasing travel time for commuters by reducing congestion. Inefficient traffic signals and routing mechanisms are the major factors that contribute to the increase of road congestion. For smoother traffic movement and reduced congestion on the roads, the waiting time at intersections must be reduced and an optimal path should be chosen simultaneously. In this paper, a GPU-assisted Preemptive MACO (GMACO-P) algorithm is proposed to minimize the total travel time of commuters. GMACO-P improves on the MACO-P algorithm by harnessing the power of the GPU to provide faster computations that further minimize travel time. The MACO-P algorithm is based on an existing MACO algorithm that avoids congested paths. The MACO-P algorithm reduces the average queue length at intersections by incorporating preemption, which ensures less waiting time. The GMACO-P algorithm was implemented with CUDA toolkit 7.5 using the C language, and the obtained results were compared with the existing Dijkstra, ACO, MACO, and MACO-P algorithms, as well as parallel implementations of the Dijkstra, ACO, and MACO algorithms. The obtained results show a significant reduction in travel time when using the proposed GMACO-P algorithm. |
2003.07383 | Jacopo Mauro | Michael Lienhardt, Ferruccio Damiani, Einar Broch Johnsen, Jacopo
Mauro | Lazy Product Discovery in Huge Configuration Spaces | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Highly-configurable software systems can have thousands of interdependent
configuration options across different subsystems. In the resulting
configuration space, discovering a valid product configuration for some
selected options can be complex and error prone. The configuration space can be
organized using a feature model, fragmented into smaller interdependent feature
models reflecting the configuration options of each subsystem.
We propose a method for lazy product discovery in large fragmented feature
models with interdependent features. We formalize the method and prove its
soundness and completeness. The evaluation explores an industrial-size
configuration space. The results show that lazy product discovery has
significant performance benefits compared to standard product discovery, which
in contrast to our method requires all fragments to be composed to analyze the
feature model. Furthermore, the method succeeds when more efficient,
heuristics-based engines fail to find a valid configuration.
| [
{
"created": "Mon, 16 Mar 2020 18:06:26 GMT",
"version": "v1"
}
] | 2020-03-18 | [
[
"Lienhardt",
"Michael",
""
],
[
"Damiani",
"Ferruccio",
""
],
[
"Johnsen",
"Einar Broch",
""
],
[
"Mauro",
"Jacopo",
""
]
] | Highly-configurable software systems can have thousands of interdependent configuration options across different subsystems. In the resulting configuration space, discovering a valid product configuration for some selected options can be complex and error prone. The configuration space can be organized using a feature model, fragmented into smaller interdependent feature models reflecting the configuration options of each subsystem. We propose a method for lazy product discovery in large fragmented feature models with interdependent features. We formalize the method and prove its soundness and completeness. The evaluation explores an industrial-size configuration space. The results show that lazy product discovery has significant performance benefits compared to standard product discovery, which in contrast to our method requires all fragments to be composed to analyze the feature model. Furthermore, the method succeeds when more efficient, heuristics-based engines fail to find a valid configuration. |
1812.06300 | Patrick Spettel | Patrick Spettel and Hans-Georg Beyer | Analysis of the $(\mu/\mu_I,\lambda)$-$\sigma$-Self-Adaptation Evolution
Strategy with Repair by Projection Applied to a Conically Constrained Problem | This is a PREPRINT of an article submitted to IEEE Transactions On
Evolutionary Computation. It is currently under review. Due to size
limitations, this manuscript comprises figures with reduced resolution. 10
pages + supplementary material. Copyright 2018 IEEE. The work was supported
by the Austrian Science Fund FWF under grant P29651-N32 | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A theoretical performance analysis of the
$(\mu/\mu_I,\lambda)$-$\sigma$-Self-Adaptation Evolution Strategy
($\sigma$SA-ES) is presented considering a conically constrained problem.
Infeasible offspring are repaired using projection onto the boundary of the
feasibility region. Closed-form approximations are used for the one-generation
progress of the evolution strategy. Approximate deterministic evolution
equations are formulated for analyzing the strategy's dynamics. By iterating
the evolution equations with the approximate one-generation expressions, the
evolution strategy's dynamics can be predicted. The derived theoretical results
are compared to experiments for assessing the approximation quality. It is
shown that in the steady state the $(\mu/\mu_I,\lambda)$-$\sigma$SA-ES exhibits
a performance as if the ES were optimizing a sphere model. Unlike the
non-recombinative $(1,\lambda)$-ES, the parental steady state behavior does not
evolve on the cone boundary but stays away from the boundary to a certain
extent.
| [
{
"created": "Sat, 15 Dec 2018 14:48:40 GMT",
"version": "v1"
}
] | 2018-12-18 | [
[
"Spettel",
"Patrick",
""
],
[
"Beyer",
"Hans-Georg",
""
]
] | A theoretical performance analysis of the $(\mu/\mu_I,\lambda)$-$\sigma$-Self-Adaptation Evolution Strategy ($\sigma$SA-ES) is presented considering a conically constrained problem. Infeasible offspring are repaired using projection onto the boundary of the feasibility region. Closed-form approximations are used for the one-generation progress of the evolution strategy. Approximate deterministic evolution equations are formulated for analyzing the strategy's dynamics. By iterating the evolution equations with the approximate one-generation expressions, the evolution strategy's dynamics can be predicted. The derived theoretical results are compared to experiments for assessing the approximation quality. It is shown that in the steady state the $(\mu/\mu_I,\lambda)$-$\sigma$SA-ES exhibits a performance as if the ES were optimizing a sphere model. Unlike the non-recombinative $(1,\lambda)$-ES, the parental steady state behavior does not evolve on the cone boundary but stays away from the boundary to a certain extent. |
1511.03524 | Buddhadeb Sau | B. Sau and S. Mukhopadhyaya and K. Mukhopadhyaya | MAINT: Localization of Mobile Sensors with Energy Control | null | null | null | null | cs.DC cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Localization is an important issue for Wireless Sensor Networks (WSN). A
mobile sensor may change its position rapidly and thus require localization
calls frequently. A localization may require network wide information and
increase traffic over the network. It dissipates valuable energy for message
communication. Thus localization is very costly. The control of the number of
localization calls may reduce energy consumption, as localization is rather expensive. To
reduce the frequency of localization calls for a mobile sensor, we propose a
technique that involves \textit{Mobility Aware Interpolation} (MAINT) for
position estimation. It controls the number of localizations which gives much
better results than the existing localization control schemes using mobility
aware extrapolation. The proposed method involves very low arithmetic
computation overheads. We find analytical expressions for the expected error in
position estimation. A parameter, the time interval, has been introduced to
externally control the energy dissipation. Simulation studies are carried out
to compare the performances of the proposed method with some existing
localization control schemes as well as the theoretical results. The simulation
results show that the expected error at any point in time may be computed from
this expression. We have seen that a constant error limit can be maintained by
increasing the time period of localization in proportion to the rate of change
of the direction of its motion. By increasing the time period, energy may be
saved while maintaining a stable error limit.
| [
{
"created": "Tue, 10 Nov 2015 07:22:10 GMT",
"version": "v1"
}
] | 2015-11-12 | [
[
"Sau",
"B.",
""
],
[
"Mukhopadhyaya",
"S.",
""
],
[
"Mukhopadhyaya",
"K.",
""
]
] | Localization is an important issue for Wireless Sensor Networks (WSN). A mobile sensor may change its position rapidly and thus require localization calls frequently. A localization may require network wide information and increase traffic over the network. It dissipates valuable energy for message communication. Thus localization is very costly. The control of the number of localization calls may reduce energy consumption, as localization is rather expensive. To reduce the frequency of localization calls for a mobile sensor, we propose a technique that involves \textit{Mobility Aware Interpolation} (MAINT) for position estimation. It controls the number of localizations which gives much better results than the existing localization control schemes using mobility aware extrapolation. The proposed method involves very low arithmetic computation overheads. We find analytical expressions for the expected error in position estimation. A parameter, the time interval, has been introduced to externally control the energy dissipation. Simulation studies are carried out to compare the performances of the proposed method with some existing localization control schemes as well as the theoretical results. The simulation results show that the expected error at any point in time may be computed from this expression. We have seen that a constant error limit can be maintained by increasing the time period of localization in proportion to the rate of change of the direction of its motion. By increasing the time period, energy may be saved while maintaining a stable error limit. |
2405.15863 | Chang Li | Chang Li, Ruoyu Wang, Lijuan Liu, Jun Du, Yixuan Sun, Zilu Guo,
Zhenrong Zhang, Yuan Jiang | Quality-aware Masked Diffusion Transformer for Enhanced Music Generation | null | null | null | null | cs.SD cs.AI eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, diffusion-based text-to-music (TTM) generation has gained
prominence, offering a novel approach to synthesizing musical content from
textual descriptions. Achieving high accuracy and diversity in this generation
process requires extensive, high-quality data, which often constitutes only a
fraction of available datasets. Within open-source datasets, the prevalence of
issues like mislabeling, weak labeling, unlabeled data, and low-quality music
waveform significantly hampers the development of music generation models. To
overcome these challenges, we introduce a novel quality-aware masked diffusion
transformer (QA-MDT) approach that enables generative models to discern the
quality of input music waveform during training. Building on the unique
properties of musical signals, we have adapted and implemented an MDT model for
the TTM task, while further unveiling its distinct capacity for quality control.
Moreover, we address the issue of low-quality captions with a caption
refinement data processing approach. Our demo page is shown in
https://qa-mdt.github.io/. Code on https://github.com/ivcylc/qa-mdt
| [
{
"created": "Fri, 24 May 2024 18:09:27 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Li",
"Chang",
""
],
[
"Wang",
"Ruoyu",
""
],
[
"Liu",
"Lijuan",
""
],
[
"Du",
"Jun",
""
],
[
"Sun",
"Yixuan",
""
],
[
"Guo",
"Zilu",
""
],
[
"Zhang",
"Zhenrong",
""
],
[
"Jiang",
"Yuan",
""
]
] | In recent years, diffusion-based text-to-music (TTM) generation has gained prominence, offering a novel approach to synthesizing musical content from textual descriptions. Achieving high accuracy and diversity in this generation process requires extensive, high-quality data, which often constitutes only a fraction of available datasets. Within open-source datasets, the prevalence of issues like mislabeling, weak labeling, unlabeled data, and low-quality music waveform significantly hampers the development of music generation models. To overcome these challenges, we introduce a novel quality-aware masked diffusion transformer (QA-MDT) approach that enables generative models to discern the quality of input music waveform during training. Building on the unique properties of musical signals, we have adapted and implemented an MDT model for the TTM task, while further unveiling its distinct capacity for quality control. Moreover, we address the issue of low-quality captions with a caption refinement data processing approach. Our demo page is shown in https://qa-mdt.github.io/. Code on https://github.com/ivcylc/qa-mdt |
1310.1693 | Borhan Sanandaji | Borhan M. Sanandaji, He Hao, Kameshwar Poolla, and Tyrone L. Vincent | Improved Battery Models of an Aggregation of Thermostatically Controlled
Loads for Frequency Regulation | to appear in the 2014 American Control Conference - ACC | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently it has been shown that an aggregation of Thermostatically Controlled
Loads (TCLs) can be utilized to provide fast regulating reserve service for
power grids and the behavior of the aggregation can be captured by a stochastic
battery with dissipation. In this paper, we address two practical issues
associated with the proposed battery model. First, we address clustering of a
heterogeneous collection and show that by finding the optimal dissipation
parameter for a given collection, one can divide these units into few clusters
and improve the overall battery model. Second, we analytically characterize the
impact of imposing a no-short-cycling requirement on TCLs as constraints on the
ramping rate of the regulation signal. We support our theorems by providing
simulation results.
| [
{
"created": "Mon, 7 Oct 2013 07:44:38 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Apr 2014 01:31:35 GMT",
"version": "v2"
}
] | 2014-04-09 | [
[
"Sanandaji",
"Borhan M.",
""
],
[
"Hao",
"He",
""
],
[
"Poolla",
"Kameshwar",
""
],
[
"Vincent",
"Tyrone L.",
""
]
] | Recently it has been shown that an aggregation of Thermostatically Controlled Loads (TCLs) can be utilized to provide fast regulating reserve service for power grids and the behavior of the aggregation can be captured by a stochastic battery with dissipation. In this paper, we address two practical issues associated with the proposed battery model. First, we address clustering of a heterogeneous collection and show that by finding the optimal dissipation parameter for a given collection, one can divide these units into few clusters and improve the overall battery model. Second, we analytically characterize the impact of imposing a no-short-cycling requirement on TCLs as constraints on the ramping rate of the regulation signal. We support our theorems by providing simulation results. |
1509.06321 | Wojciech Samek | Wojciech Samek, Alexander Binder, Gr\'egoire Montavon, Sebastian Bach,
Klaus-Robert M\"uller | Evaluating the visualization of what a Deep Neural Network has learned | 13 pages, 8 Figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Neural Networks (DNNs) have demonstrated impressive performance in
complex machine learning tasks such as image classification or speech
recognition. However, due to their multi-layer nonlinear structure, they are
not transparent, i.e., it is hard to grasp what makes them arrive at a
particular classification or recognition decision given a new unseen data
sample. Recently, several approaches have been proposed enabling one to
understand and interpret the reasoning embodied in a DNN for a single test
image. These methods quantify the ''importance'' of individual pixels wrt the
classification decision and allow a visualization in terms of a heatmap in
pixel/input space. While the usefulness of heatmaps can be judged subjectively
by a human, an objective quality measure is missing. In this paper we present a
general methodology based on region perturbation for evaluating ordered
collections of pixels such as heatmaps. We compare heatmaps computed by three
different methods on the SUN397, ILSVRC2012 and MIT Places data sets. Our main
result is that the recently proposed Layer-wise Relevance Propagation (LRP)
algorithm qualitatively and quantitatively provides a better explanation of
what made a DNN arrive at a particular classification decision than the
sensitivity-based approach or the deconvolution method. We provide theoretical
arguments to explain this result and discuss its practical implications.
Finally, we investigate the use of heatmaps for unsupervised assessment of
neural network performance.
| [
{
"created": "Mon, 21 Sep 2015 17:36:22 GMT",
"version": "v1"
}
] | 2015-09-22 | [
[
"Samek",
"Wojciech",
""
],
[
"Binder",
"Alexander",
""
],
[
"Montavon",
"Grégoire",
""
],
[
"Bach",
"Sebastian",
""
],
[
"Müller",
"Klaus-Robert",
""
]
] | Deep Neural Networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multi-layer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the ''importance'' of individual pixels wrt the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012 and MIT Places data sets. Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of neural network performance. |
2209.13792 | Gustavo Lacerda | Gustavo Cunha Lacerda, Raimundo Claudio da Silva Vasconcelos | A Machine Learning Approach for DeepFake Detection | 4 pages, accepted for presentation at the SIBGRAPI 2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the spread of DeepFake techniques, this technology has become quite
accessible and good enough that there is concern about its malicious use. Faced
with this problem, detecting forged faces is of utmost importance to ensure
security and avoid socio-political problems, both on a global and private
scale. This paper presents a solution for the detection of DeepFakes using
convolutional neural networks and a dataset developed for this purpose -
Celeb-DF. The results show that, with an overall accuracy of 95% in the
classification of these images, the proposed model is close to what exists in
the state of the art with the possibility of adjustment for better results in
the manipulation techniques that arise in the future.
| [
{
"created": "Wed, 28 Sep 2022 02:46:04 GMT",
"version": "v1"
}
] | 2022-09-29 | [
[
"Lacerda",
"Gustavo Cunha",
""
],
[
"Vasconcelos",
"Raimundo Claudio da Silva",
""
]
] | With the spread of DeepFake techniques, this technology has become quite accessible and good enough that there is concern about its malicious use. Faced with this problem, detecting forged faces is of utmost importance to ensure security and avoid socio-political problems, both on a global and private scale. This paper presents a solution for the detection of DeepFakes using convolutional neural networks and a dataset developed for this purpose - Celeb-DF. The results show that, with an overall accuracy of 95% in the classification of these images, the proposed model is close to what exists in the state of the art with the possibility of adjustment for better results in the manipulation techniques that arise in the future.
2305.03953 | Xu Chen | Xu Chen and Zida Cheng and Shuai Xiao and Xiaoyi Zeng and Weilin Huang | Cross-domain Augmentation Networks for Click-Through Rate Prediction | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data sparsity is an important issue for click-through rate (CTR) prediction,
particularly when user-item interactions are too sparse to learn a reliable
model. Recently, many works on cross-domain CTR (CDCTR) prediction have been
developed in an effort to leverage meaningful data from a related domain.
However, most existing CDCTR works have an impractical limitation that requires
homogeneous inputs (\textit{i.e.} shared feature fields) across domains, and
CDCTR with heterogeneous inputs (\textit{i.e.} varying feature fields) across
domains has not been widely explored but is an urgent and important research
problem. In this work, we propose a cross-domain augmentation network (CDAnet)
being able to perform knowledge transfer between two domains with
\textit{heterogeneous inputs}. Specifically, CDAnet contains a designed
translation network and an augmentation network which are trained sequentially.
The translation network is able to compute features from two domains with
heterogeneous inputs separately by designing two independent branches, and then
learn meaningful cross-domain knowledge using a designed cross-supervised
feature translator. Later the augmentation network encodes the learned
cross-domain knowledge via feature translation performed in the latent space
and fine-tunes the model for final CTR prediction. Through extensive experiments
on two public benchmarks and one industrial production dataset, we show CDAnet
can learn meaningful translated features and largely improve the performance of
CTR prediction. CDAnet has been evaluated in an online A/B test in image2product
retrieval in the Taobao app over 20 days, bringing an absolute \textbf{0.11 point}
CTR improvement and a relative \textbf{1.26\%} GMV increase.
| [
{
"created": "Sat, 6 May 2023 06:37:52 GMT",
"version": "v1"
},
{
"created": "Tue, 9 May 2023 08:43:29 GMT",
"version": "v2"
}
] | 2023-05-10 | [
[
"Chen",
"Xu",
""
],
[
"Cheng",
"Zida",
""
],
[
"Xiao",
"Shuai",
""
],
[
"Zeng",
"Xiaoyi",
""
],
[
"Huang",
"Weilin",
""
]
] | Data sparsity is an important issue for click-through rate (CTR) prediction, particularly when user-item interactions are too sparse to learn a reliable model. Recently, many works on cross-domain CTR (CDCTR) prediction have been developed in an effort to leverage meaningful data from a related domain. However, most existing CDCTR works have an impractical limitation that requires homogeneous inputs (\textit{i.e.} shared feature fields) across domains, and CDCTR with heterogeneous inputs (\textit{i.e.} varying feature fields) across domains has not been widely explored but is an urgent and important research problem. In this work, we propose a cross-domain augmentation network (CDAnet) being able to perform knowledge transfer between two domains with \textit{heterogeneous inputs}. Specifically, CDAnet contains a designed translation network and an augmentation network which are trained sequentially. The translation network is able to compute features from two domains with heterogeneous inputs separately by designing two independent branches, and then learn meaningful cross-domain knowledge using a designed cross-supervised feature translator. Later the augmentation network encodes the learned cross-domain knowledge via feature translation performed in the latent space and fine-tunes the model for final CTR prediction. Through extensive experiments on two public benchmarks and one industrial production dataset, we show CDAnet can learn meaningful translated features and largely improve the performance of CTR prediction. CDAnet has been evaluated in an online A/B test in image2product retrieval in the Taobao app over 20 days, bringing an absolute \textbf{0.11 point} CTR improvement and a relative \textbf{1.26\%} GMV increase.
1406.2395 | Ines Dutra | Ezilda Almeida, Pedro Ferreira, Tiago Vinhoza, In\^es Dutra, Jingwei
Li, Yirong Wu, Elizabeth Burnside | ExpertBayes: Automatically refining manually built Bayesian networks | 14 pages | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian network structures are usually built using only the data and
starting from an empty network or from a naive Bayes structure. Very often, in
some domains, like medicine, prior structural knowledge is already available.
This structure can be automatically or manually refined in search of
better-performing models. In this work, we take Bayesian networks built by
specialists and show that minor perturbations to this original network can
yield better classifiers with a very small computational cost, while
maintaining most of the intended meaning of the original model.
| [
{
"created": "Tue, 10 Jun 2014 00:50:05 GMT",
"version": "v1"
}
] | 2014-06-11 | [
[
"Almeida",
"Ezilda",
""
],
[
"Ferreira",
"Pedro",
""
],
[
"Vinhoza",
"Tiago",
""
],
[
"Dutra",
"Inês",
""
],
[
"Li",
"Jingwei",
""
],
[
"Wu",
"Yirong",
""
],
[
"Burnside",
"Elizabeth",
""
]
] | Bayesian network structures are usually built using only the data and starting from an empty network or from a naive Bayes structure. Very often, in some domains, like medicine, prior structural knowledge is already available. This structure can be automatically or manually refined in search of better-performing models. In this work, we take Bayesian networks built by specialists and show that minor perturbations to this original network can yield better classifiers with a very small computational cost, while maintaining most of the intended meaning of the original model.
2011.05119 | Mostafa Khalaji | Mostafa Khalaji | TRSM-RS: A Movie Recommender System Based on Users' Gender and New
Weighted Similarity Measure | 11 pages, 17th Iran Media Technology Exhibition and Conference, At:
Tehran, Iran, November 2020 | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | With the growing data on the Internet, recommender systems have been able to
predict users' preferences and offer related movies. Collaborative filtering is
one of the most popular algorithms in these systems. The main purpose of
collaborative filtering is to find similar users or items using the rating
matrix. As the number of users and items increases, this algorithm suffers
from a scalability problem. On the other hand, due to the unavailability of a
large number of user preferences for different items, there is a cold start
problem for a new user or item that has a significant impact on system
performance. The purpose of this paper is to design a movie recommender system
named TRSM-RS using users' demographic information (just users' gender) along
with the new weighted similarity measure. By segmenting users based on their
gender, the scalability problem is improved, and by considering the reliability
of the users' similarity as the weight in the new similarity measure (Tanimoto
Reliability Similarity Measure, TRSM), the effect of the cold-start problem is
mitigated and the performance of the system is improved. Experiments were
performed on the MovieLens dataset and the system was evaluated using mean
absolute error (MAE), Accuracy, Precision, and Recall metrics. The results of
the experiments indicate improved performance (accuracy and precision) and a
reduced system error rate compared to other researchers' methods. The
maximum improved MAE rate of the system for men and women is 5.5% and 13.8%,
respectively.
| [
{
"created": "Tue, 10 Nov 2020 14:41:40 GMT",
"version": "v1"
}
] | 2020-11-11 | [
[
"Khalaji",
"Mostafa",
""
]
] | With the growing data on the Internet, recommender systems have been able to predict users' preferences and offer related movies. Collaborative filtering is one of the most popular algorithms in these systems. The main purpose of collaborative filtering is to find similar users or items using the rating matrix. As the number of users and items increases, this algorithm suffers from a scalability problem. On the other hand, due to the unavailability of a large number of user preferences for different items, there is a cold start problem for a new user or item that has a significant impact on system performance. The purpose of this paper is to design a movie recommender system named TRSM-RS using users' demographic information (just users' gender) along with the new weighted similarity measure. By segmenting users based on their gender, the scalability problem is improved, and by considering the reliability of the users' similarity as the weight in the new similarity measure (Tanimoto Reliability Similarity Measure, TRSM), the effect of the cold-start problem is mitigated and the performance of the system is improved. Experiments were performed on the MovieLens dataset and the system was evaluated using mean absolute error (MAE), Accuracy, Precision, and Recall metrics. The results of the experiments indicate improved performance (accuracy and precision) and a reduced system error rate compared to other researchers' methods. The maximum improved MAE rate of the system for men and women is 5.5% and 13.8%, respectively.
2403.08650 | Wentao Jiang | Wentao Jiang, Yige Zhang, Shaozhong Zheng, Si Liu, Shuicheng Yan | Data Augmentation in Human-Centric Vision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This survey presents a comprehensive analysis of data augmentation techniques
in human-centric vision tasks, a first of its kind in the field. It delves into
a wide range of research areas including person ReID, human parsing, human pose
estimation, and pedestrian detection, addressing the significant challenges
posed by overfitting and limited training data in these domains. Our work
categorizes data augmentation methods into two main types: data generation and
data perturbation. Data generation covers techniques like graphic engine-based
generation, generative model-based generation, and data recombination, while
data perturbation is divided into image-level and human-level perturbations.
Each method is tailored to the unique requirements of human-centric tasks, with
some applicable across multiple areas. Our contributions include an extensive
literature review, providing deep insights into the influence of these
augmentation techniques in human-centric vision and highlighting the nuances of
each method. We also discuss open issues and future directions, such as the
integration of advanced generative models like Latent Diffusion Models, for
creating more realistic and diverse training data. This survey not only
encapsulates the current state of data augmentation in human-centric vision but
also charts a course for future research, aiming to develop more robust,
accurate, and efficient human-centric vision systems.
| [
{
"created": "Wed, 13 Mar 2024 16:05:18 GMT",
"version": "v1"
}
] | 2024-03-14 | [
[
"Jiang",
"Wentao",
""
],
[
"Zhang",
"Yige",
""
],
[
"Zheng",
"Shaozhong",
""
],
[
"Liu",
"Si",
""
],
[
"Yan",
"Shuicheng",
""
]
] | This survey presents a comprehensive analysis of data augmentation techniques in human-centric vision tasks, a first of its kind in the field. It delves into a wide range of research areas including person ReID, human parsing, human pose estimation, and pedestrian detection, addressing the significant challenges posed by overfitting and limited training data in these domains. Our work categorizes data augmentation methods into two main types: data generation and data perturbation. Data generation covers techniques like graphic engine-based generation, generative model-based generation, and data recombination, while data perturbation is divided into image-level and human-level perturbations. Each method is tailored to the unique requirements of human-centric tasks, with some applicable across multiple areas. Our contributions include an extensive literature review, providing deep insights into the influence of these augmentation techniques in human-centric vision and highlighting the nuances of each method. We also discuss open issues and future directions, such as the integration of advanced generative models like Latent Diffusion Models, for creating more realistic and diverse training data. This survey not only encapsulates the current state of data augmentation in human-centric vision but also charts a course for future research, aiming to develop more robust, accurate, and efficient human-centric vision systems. |
0904.3711 | Florentina Pintea | Mihai Timis | Output Width Signal Control In Asynchronous Digital Systems Using
External Clock Signal | 6 pages,exposed on 1st "European Conference on Computer Sciences &
Applications" - XA2006, Timisoara, Romania | Ann. Univ. Tibiscus Comp. Sci. Series IV (2006), 237-242 | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the present paper, I propose a method for resolving the timing delays of
output signals from an asynchronous sequential system. As an example, I use an
asynchronous sequential system that sets up an output signal when an input
signal is set up. The width of the output signal depends on the width of the
input signal, and in this case it is very short. There are many synthesis
methods, such as using an RC group, a monostable circuit in the design of the
asynchronous digital system, or using an external clock signal, CK. In this
paper, an external clock signal, CK, is used.
| [
{
"created": "Thu, 23 Apr 2009 14:45:42 GMT",
"version": "v1"
}
] | 2009-04-24 | [
[
"Timis",
"Mihai",
""
]
] | In the present paper, I propose a method for resolving the timing delays of output signals from an asynchronous sequential system. As an example, I use an asynchronous sequential system that sets up an output signal when an input signal is set up. The width of the output signal depends on the width of the input signal, and in this case it is very short. There are many synthesis methods, such as using an RC group, a monostable circuit in the design of the asynchronous digital system, or using an external clock signal, CK. In this paper, an external clock signal, CK, is used.
2307.02103 | Abdelhadi Soudi | Abdelhadi Soudi, Manal El Hakkaoui, Kristof Van Laerhoven | Do predictability factors towards signing avatars hold across cultures? | 5 pages, Proceedings of the ICASSP 2023 8th Workshop on Sign Language
Translation and Avatar Technology, Rhodes Island, Greece, June 10, 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Avatar technology can offer accessibility possibilities and improve Deaf and
Hard-of-Hearing sign language users' access to communication, education, and
services such as the healthcare system. However, sign language users'
acceptance of signing avatars, as well as their attitudes towards them, varies
and depends on many factors. Furthermore, research on avatar technology is
mostly done by researchers who are not Deaf. The study examines the extent to
which intrinsic or extrinsic factors contribute to predicting the attitude
towards avatars across cultures. Intrinsic factors include the characteristics
of the avatar, such as appearance, movements, and facial expressions. Extrinsic
factors include users' technology experience, their hearing status, age, and
their sign language fluency. This work attempts to answer questions such as: if
lower attitude ratings are related to poor technology experience among ASL
users, for example, is that also true for Moroccan Sign Language (MSL) users?
For the purposes of the study, we designed a questionnaire to understand MSL
users' attitudes towards avatars. Three groups of participants were surveyed:
Deaf (57), Hearing (20), and Hard-of-Hearing (3). The results of our study were
then compared with those reported in other relevant studies.
| [
{
"created": "Wed, 5 Jul 2023 08:22:46 GMT",
"version": "v1"
}
] | 2023-07-20 | [
[
"Soudi",
"Abdelhadi",
""
],
[
"Hakkaoui",
"Manal El",
""
],
[
"Van Laerhoven",
"Kristof",
""
]
] | Avatar technology can offer accessibility possibilities and improve Deaf and Hard-of-Hearing sign language users' access to communication, education, and services such as the healthcare system. However, sign language users' acceptance of signing avatars, as well as their attitudes towards them, varies and depends on many factors. Furthermore, research on avatar technology is mostly done by researchers who are not Deaf. The study examines the extent to which intrinsic or extrinsic factors contribute to predicting the attitude towards avatars across cultures. Intrinsic factors include the characteristics of the avatar, such as appearance, movements, and facial expressions. Extrinsic factors include users' technology experience, their hearing status, age, and their sign language fluency. This work attempts to answer questions such as: if lower attitude ratings are related to poor technology experience among ASL users, for example, is that also true for Moroccan Sign Language (MSL) users? For the purposes of the study, we designed a questionnaire to understand MSL users' attitudes towards avatars. Three groups of participants were surveyed: Deaf (57), Hearing (20), and Hard-of-Hearing (3). The results of our study were then compared with those reported in other relevant studies.
1202.2759 | Alyson Fletcher | Alyson K. Fletcher and Sundeep Rangan | Iterative Reconstruction of Rank-One Matrices in Noise | 28 pages, 2 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of estimating a rank-one matrix in Gaussian noise
under a probabilistic model for the left and right factors of the matrix. The
probabilistic model can impose constraints on the factors including sparsity
and positivity that arise commonly in learning problems. We propose a family of
algorithms that reduce the problem to a sequence of scalar estimation
computations. These algorithms are similar to approximate message passing
techniques based on Gaussian approximations of loopy belief propagation that
have been used recently in compressed sensing. Leveraging analysis methods by
Bayati and Montanari, we show that the asymptotic behavior of the algorithm is
described by a simple scalar equivalent model, where the distribution of the
estimates at each iteration is identical to certain scalar estimates of the
variables in Gaussian noise. Moreover, the effective Gaussian noise level is
described by a set of state evolution equations. The proposed approach to
deriving algorithms thus provides a computationally simple and general method
for rank-one estimation problems with a precise analysis in certain
high-dimensional settings.
| [
{
"created": "Mon, 13 Feb 2012 15:18:02 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Apr 2012 17:54:21 GMT",
"version": "v2"
},
{
"created": "Tue, 8 May 2012 04:06:04 GMT",
"version": "v3"
},
{
"created": "Tue, 15 Sep 2015 19:50:18 GMT",
"version": "v4"
}
] | 2015-09-16 | [
[
"Fletcher",
"Alyson K.",
""
],
[
"Rangan",
"Sundeep",
""
]
] | We consider the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix. The probabilistic model can impose constraints on the factors including sparsity and positivity that arise commonly in learning problems. We propose a family of algorithms that reduce the problem to a sequence of scalar estimation computations. These algorithms are similar to approximate message passing techniques based on Gaussian approximations of loopy belief propagation that have been used recently in compressed sensing. Leveraging analysis methods by Bayati and Montanari, we show that the asymptotic behavior of the algorithm is described by a simple scalar equivalent model, where the distribution of the estimates at each iteration is identical to certain scalar estimates of the variables in Gaussian noise. Moreover, the effective Gaussian noise level is described by a set of state evolution equations. The proposed approach to deriving algorithms thus provides a computationally simple and general method for rank-one estimation problems with a precise analysis in certain high-dimensional settings. |
2312.15313 | Haiwei Dong | Zijian Long, Haiwei Dong, and Abdulmotaleb El Saddik | Human-Centric Resource Allocation for the Metaverse With Multiaccess
Edge Computing | null | IEEE Internet of Things Journal, vol. 10, no. 22, pp. 19993-20005,
2023 | 10.1109/JIOT.2023.3283335 | null | cs.MM cs.AI cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-access edge computing (MEC) is a promising solution to the
computation-intensive, low-latency rendering tasks of the metaverse. However,
how to optimally allocate limited communication and computation resources at
the edge to a large number of users in the metaverse is quite challenging. In
this paper, we propose an adaptive edge resource allocation method based on
multi-agent soft actor-critic with graph convolutional networks (SAC-GCN).
Specifically, SAC-GCN models the multi-user metaverse environment as a graph
where each agent is denoted by a node. Each agent learns the interplay between
agents by graph convolutional networks with self-attention mechanism to further
determine the resource usage for one user in the metaverse. The effectiveness
of SAC-GCN is demonstrated through the analysis of user experience, balance of
resource allocation, and resource utilization rate by taking a virtual city
park metaverse as an example. Experimental results indicate that SAC-GCN
outperforms other resource allocation methods in improving overall user
experience, balancing resource allocation, and increasing resource utilization
rate by at least 27%, 11%, and 8%, respectively.
| [
{
"created": "Sat, 23 Dec 2023 18:07:46 GMT",
"version": "v1"
}
] | 2023-12-27 | [
[
"Long",
"Zijian",
""
],
[
"Dong",
"Haiwei",
""
],
[
"Saddik",
"Abdulmotaleb El",
""
]
] | Multi-access edge computing (MEC) is a promising solution to the computation-intensive, low-latency rendering tasks of the metaverse. However, how to optimally allocate limited communication and computation resources at the edge to a large number of users in the metaverse is quite challenging. In this paper, we propose an adaptive edge resource allocation method based on multi-agent soft actor-critic with graph convolutional networks (SAC-GCN). Specifically, SAC-GCN models the multi-user metaverse environment as a graph where each agent is denoted by a node. Each agent learns the interplay between agents by graph convolutional networks with self-attention mechanism to further determine the resource usage for one user in the metaverse. The effectiveness of SAC-GCN is demonstrated through the analysis of user experience, balance of resource allocation, and resource utilization rate by taking a virtual city park metaverse as an example. Experimental results indicate that SAC-GCN outperforms other resource allocation methods in improving overall user experience, balancing resource allocation, and increasing resource utilization rate by at least 27%, 11%, and 8%, respectively. |
1501.04797 | Sven Puchinger | Wenhui Li, Johan S. R. Nielsen, Sven Puchinger, Vladimir Sidorenko | Solving Shift Register Problems over Skew Polynomial Rings using Module
Minimisation | 10 pages, submitted to WCC 2015 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For many algebraic codes the main part of decoding can be reduced to a shift
register synthesis problem. In this paper we present an approach for solving
generalised shift register problems over skew polynomial rings which occur in
error and erasure decoding of $\ell$-Interleaved Gabidulin codes. The algorithm
is based on module minimisation and has time complexity $O(\ell \mu^2)$ where
$\mu$ measures the size of the input problem.
| [
{
"created": "Tue, 20 Jan 2015 13:07:59 GMT",
"version": "v1"
}
] | 2015-01-21 | [
[
"Li",
"Wenhui",
""
],
[
"Nielsen",
"Johan S. R.",
""
],
[
"Puchinger",
"Sven",
""
],
[
"Sidorenko",
"Vladimir",
""
]
] | For many algebraic codes the main part of decoding can be reduced to a shift register synthesis problem. In this paper we present an approach for solving generalised shift register problems over skew polynomial rings which occur in error and erasure decoding of $\ell$-Interleaved Gabidulin codes. The algorithm is based on module minimisation and has time complexity $O(\ell \mu^2)$ where $\mu$ measures the size of the input problem. |
2210.14208 | Khasa Gillani | Khasa Gillani, Jorge Mart\'in P\'erez, Milan Groshev, Antonio de la
Oliva, Robert Gazda | Don't Let Me Down! Offloading Robot VFs Up to the Cloud | 5 Pages, 6 figures, submitted to 2023 IEEE 9th International
Conference on Network Softwarization (NetSoft) | null | null | null | cs.RO cs.NI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent trends in robotic services propose offloading robot functionalities to
the Edge to meet the strict latency requirements of networked robotics.
However, the Edge is typically an expensive resource and sometimes the Cloud is
also an option, thus, decreasing the cost. Following this idea, we propose
Don't Let Me Down! (DLMD), an algorithm that promotes offloading robot
functions to the Cloud when possible to minimize the consumption of Edge
resources. Additionally, DLMD takes the appropriate migration, traffic
steering, and radio handover decisions to meet robotic service requirements
such as strict latency constraints. In the paper, we formulate the optimization
problem that DLMD aims to solve, compare DLMD performance against the state of
the art, and
perform stress tests to assess DLMD performance in small & large networks.
Results show that DLMD (i) always finds solutions in less than 30ms; (ii) is
optimal in a local warehousing use case, and (iii) consumes only 5% of the Edge
resources upon network stress.
| [
{
"created": "Tue, 25 Oct 2022 17:53:16 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Feb 2023 10:14:06 GMT",
"version": "v2"
},
{
"created": "Tue, 14 Feb 2023 15:19:03 GMT",
"version": "v3"
}
] | 2023-02-15 | [
[
"Gillani",
"Khasa",
""
],
[
"Pérez",
"Jorge Martín",
""
],
[
"Groshev",
"Milan",
""
],
[
"de la Oliva",
"Antonio",
""
],
[
"Gazda",
"Robert",
""
]
] | Recent trends in robotic services propose offloading robot functionalities to the Edge to meet the strict latency requirements of networked robotics. However, the Edge is typically an expensive resource and sometimes the Cloud is also an option, thus, decreasing the cost. Following this idea, we propose Don't Let Me Down! (DLMD), an algorithm that promotes offloading robot functions to the Cloud when possible to minimize the consumption of Edge resources. Additionally, DLMD takes the appropriate migration, traffic steering, and radio handover decisions to meet robotic service requirements such as strict latency constraints. In the paper, we formulate the optimization problem that DLMD aims to solve, compare DLMD performance against the state of the art, and perform stress tests to assess DLMD performance in small & large networks. Results show that DLMD (i) always finds solutions in less than 30ms; (ii) is optimal in a local warehousing use case, and (iii) consumes only 5% of the Edge resources upon network stress.
2102.07889 | Pei Wang | Arash Givchi, Pei Wang, Junqi Wang, Patrick Shafto | Distributionally-Constrained Policy Optimization via Unbalanced Optimal
Transport | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider constrained policy optimization in Reinforcement Learning, where
the constraints are in the form of marginals on state visitations and global action
executions. Given these distributions, we formulate policy optimization as
unbalanced optimal transport over the space of occupancy measures. We propose a
general purpose RL objective based on Bregman divergence and optimize it using
Dykstra's algorithm. The approach admits an actor-critic algorithm for when the
state or action space is large, and only samples from the marginals are
available. We discuss applications of our approach and provide demonstrations
to show the effectiveness of our algorithm.
| [
{
"created": "Mon, 15 Feb 2021 23:04:37 GMT",
"version": "v1"
}
] | 2021-02-17 | [
[
"Givchi",
"Arash",
""
],
[
"Wang",
"Pei",
""
],
[
"Wang",
"Junqi",
""
],
[
"Shafto",
"Patrick",
""
]
] | We consider constrained policy optimization in Reinforcement Learning, where the constraints are in the form of marginals on state visitations and global action executions. Given these distributions, we formulate policy optimization as unbalanced optimal transport over the space of occupancy measures. We propose a general purpose RL objective based on Bregman divergence and optimize it using Dykstra's algorithm. The approach admits an actor-critic algorithm for when the state or action space is large, and only samples from the marginals are available. We discuss applications of our approach and provide demonstrations to show the effectiveness of our algorithm.
2402.07872 | Brian Ichter | Soroush Nasiriany, Fei Xia, Wenhao Yu, Ted Xiao, Jacky Liang, Ishita
Dasgupta, Annie Xie, Danny Driess, Ayzaan Wahid, Zhuo Xu, Quan Vuong, Tingnan
Zhang, Tsang-Wei Edward Lee, Kuang-Huei Lee, Peng Xu, Sean Kirmani, Yuke Zhu,
Andy Zeng, Karol Hausman, Nicolas Heess, Chelsea Finn, Sergey Levine, Brian
Ichter | PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs | null | null | null | null | cs.RO cs.CL cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision language models (VLMs) have shown impressive capabilities across a
variety of tasks, from logical reasoning to visual understanding. This opens
the door to richer interaction with the world, for example robotic control.
However, VLMs produce only textual outputs, while robotic control and other
spatial tasks require outputting continuous coordinates, actions, or
trajectories. How can we enable VLMs to handle such settings without
fine-tuning on task-specific data?
In this paper, we propose a novel visual prompting approach for VLMs that we
call Prompting with Iterative Visual Optimization (PIVOT), which casts tasks as
iterative visual question answering. In each iteration, the image is annotated
with a visual representation of proposals that the VLM can refer to (e.g.,
candidate robot actions, localizations, or trajectories). The VLM then selects
the best ones for the task. These proposals are iteratively refined, allowing
the VLM to eventually zero in on the best available answer. We investigate
PIVOT on real-world robotic navigation, real-world manipulation from images,
instruction following in simulation, and additional spatial inference tasks
such as localization. We find, perhaps surprisingly, that our approach enables
zero-shot control of robotic systems without any robot training data,
navigation in a variety of environments, and other capabilities. Although
current performance is far from perfect, our work highlights potentials and
limitations of this new regime and shows a promising approach for
Internet-Scale VLMs in robotic and spatial reasoning domains. Website:
pivot-prompt.github.io and HuggingFace:
https://huggingface.co/spaces/pivot-prompt/pivot-prompt-demo.
| [
{
"created": "Mon, 12 Feb 2024 18:33:47 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Nasiriany",
"Soroush",
""
],
[
"Xia",
"Fei",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Xiao",
"Ted",
""
],
[
"Liang",
"Jacky",
""
],
[
"Dasgupta",
"Ishita",
""
],
[
"Xie",
"Annie",
""
],
[
"Driess",
"Danny",
""
],
[
"Wahid",
"Ayzaan",
""
],
[
"Xu",
"Zhuo",
""
],
[
"Vuong",
"Quan",
""
],
[
"Zhang",
"Tingnan",
""
],
[
"Lee",
"Tsang-Wei Edward",
""
],
[
"Lee",
"Kuang-Huei",
""
],
[
"Xu",
"Peng",
""
],
[
"Kirmani",
"Sean",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Zeng",
"Andy",
""
],
[
"Hausman",
"Karol",
""
],
[
"Heess",
"Nicolas",
""
],
[
"Finn",
"Chelsea",
""
],
[
"Levine",
"Sergey",
""
],
[
"Ichter",
"Brian",
""
]
] | Vision language models (VLMs) have shown impressive capabilities across a variety of tasks, from logical reasoning to visual understanding. This opens the door to richer interaction with the world, for example robotic control. However, VLMs produce only textual outputs, while robotic control and other spatial tasks require outputting continuous coordinates, actions, or trajectories. How can we enable VLMs to handle such settings without fine-tuning on task-specific data? In this paper, we propose a novel visual prompting approach for VLMs that we call Prompting with Iterative Visual Optimization (PIVOT), which casts tasks as iterative visual question answering. In each iteration, the image is annotated with a visual representation of proposals that the VLM can refer to (e.g., candidate robot actions, localizations, or trajectories). The VLM then selects the best ones for the task. These proposals are iteratively refined, allowing the VLM to eventually zero in on the best available answer. We investigate PIVOT on real-world robotic navigation, real-world manipulation from images, instruction following in simulation, and additional spatial inference tasks such as localization. We find, perhaps surprisingly, that our approach enables zero-shot control of robotic systems without any robot training data, navigation in a variety of environments, and other capabilities. Although current performance is far from perfect, our work highlights potentials and limitations of this new regime and shows a promising approach for Internet-Scale VLMs in robotic and spatial reasoning domains. Website: pivot-prompt.github.io and HuggingFace: https://huggingface.co/spaces/pivot-prompt/pivot-prompt-demo. |
2005.04986 | Yunjin Tong | Yunjin Tong, Shiying Xiong, Xingzhe He, Guanghan Pan, Bo Zhu | Symplectic Neural Networks in Taylor Series Form for Hamiltonian Systems | null | Journal of Computational Physics, p.110325 (2021) | 10.1016/j.jcp.2021.110325 | null | cs.LG math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an effective and lightweight learning algorithm, Symplectic Taylor
Neural Networks (Taylor-nets), to conduct continuous, long-term predictions of
a complex Hamiltonian dynamic system based on sparse, short-term observations.
At the heart of our algorithm is a novel neural network architecture consisting
of two sub-networks. Both are embedded with terms in the form of Taylor series
expansion designed with symmetric structure. The key mechanism underpinning our
infrastructure is the strong expressiveness and special symmetric property of
the Taylor series expansion, which naturally accommodate the numerical fitting
process of the gradients of the Hamiltonian with respect to the generalized
coordinates as well as preserve its symplectic structure. We further
incorporate a fourth-order symplectic integrator in conjunction with neural
ODEs' framework into our Taylor-net architecture to learn the continuous-time
evolution of the target systems while simultaneously preserving their
symplectic structures. We demonstrated the efficacy of our Taylor-net in
predicting a broad spectrum of Hamiltonian dynamic systems, including the
pendulum, the Lotka--Volterra, the Kepler, and the H\'enon--Heiles systems. Our
model exhibits unique computational merits by outperforming previous methods to
a great extent regarding the prediction accuracy, the convergence rate, and the
robustness despite using extremely small training data with a short training
period (6000 times shorter than the predicting period), small sample sizes, and
no intermediate data to train the networks.
| [
{
"created": "Mon, 11 May 2020 10:32:29 GMT",
"version": "v1"
},
{
"created": "Wed, 13 May 2020 05:10:17 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Apr 2021 18:49:23 GMT",
"version": "v3"
},
{
"created": "Sun, 20 Feb 2022 01:20:28 GMT",
"version": "v4"
}
] | 2022-02-22 | [
[
"Tong",
"Yunjin",
""
],
[
"Xiong",
"Shiying",
""
],
[
"He",
"Xingzhe",
""
],
[
"Pan",
"Guanghan",
""
],
[
"Zhu",
"Bo",
""
]
] | We propose an effective and lightweight learning algorithm, Symplectic Taylor Neural Networks (Taylor-nets), to conduct continuous, long-term predictions of a complex Hamiltonian dynamic system based on sparse, short-term observations. At the heart of our algorithm is a novel neural network architecture consisting of two sub-networks. Both are embedded with terms in the form of Taylor series expansion designed with symmetric structure. The key mechanism underpinning our infrastructure is the strong expressiveness and special symmetric property of the Taylor series expansion, which naturally accommodate the numerical fitting process of the gradients of the Hamiltonian with respect to the generalized coordinates as well as preserve its symplectic structure. We further incorporate a fourth-order symplectic integrator in conjunction with neural ODEs' framework into our Taylor-net architecture to learn the continuous-time evolution of the target systems while simultaneously preserving their symplectic structures. We demonstrated the efficacy of our Taylor-net in predicting a broad spectrum of Hamiltonian dynamic systems, including the pendulum, the Lotka--Volterra, the Kepler, and the H\'enon--Heiles systems. Our model exhibits unique computational merits by outperforming previous methods to a great extent regarding the prediction accuracy, the convergence rate, and the robustness despite using extremely small training data with a short training period (6000 times shorter than the predicting period), small sample sizes, and no intermediate data to train the networks. |
2302.04899 | Dmitry Kazhdan | Dmitry Kazhdan, Botty Dimanov, Lucie Charlotte Magister, Pietro
Barbiero, Mateja Jamnik, Pietro Lio | GCI: A (G)raph (C)oncept (I)nterpretation Framework | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Explainable AI (XAI) underwent a recent surge in research on concept
extraction, focusing on extracting human-interpretable concepts from Deep
Neural Networks. An important challenge facing concept extraction approaches is
the difficulty of interpreting and evaluating discovered concepts, especially
for complex tasks such as molecular property prediction. We address this
challenge by presenting GCI: a (G)raph (C)oncept (I)nterpretation framework,
used for quantitatively measuring alignment between concepts discovered from
Graph Neural Networks (GNNs) and their corresponding human interpretations. GCI
encodes concept interpretations as functions, which can be used to
quantitatively measure the alignment between a given interpretation and concept
definition. We demonstrate four applications of GCI: (i) quantitatively
evaluating concept extractors, (ii) measuring alignment between concept
extractors and human interpretations, (iii) measuring the completeness of
interpretations with respect to an end task and (iv) a practical application of
GCI to molecular property prediction, in which we demonstrate how to use
chemical functional groups to explain GNNs trained on molecular property
prediction tasks, and implement interpretations with a 0.76 AUCROC completeness
score.
| [
{
"created": "Thu, 9 Feb 2023 19:02:45 GMT",
"version": "v1"
}
] | 2023-02-13 | [
[
"Kazhdan",
"Dmitry",
""
],
[
"Dimanov",
"Botty",
""
],
[
"Magister",
"Lucie Charlotte",
""
],
[
"Barbiero",
"Pietro",
""
],
[
"Jamnik",
"Mateja",
""
],
[
"Lio",
"Pietro",
""
]
] | Explainable AI (XAI) underwent a recent surge in research on concept extraction, focusing on extracting human-interpretable concepts from Deep Neural Networks. An important challenge facing concept extraction approaches is the difficulty of interpreting and evaluating discovered concepts, especially for complex tasks such as molecular property prediction. We address this challenge by presenting GCI: a (G)raph (C)oncept (I)nterpretation framework, used for quantitatively measuring alignment between concepts discovered from Graph Neural Networks (GNNs) and their corresponding human interpretations. GCI encodes concept interpretations as functions, which can be used to quantitatively measure the alignment between a given interpretation and concept definition. We demonstrate four applications of GCI: (i) quantitatively evaluating concept extractors, (ii) measuring alignment between concept extractors and human interpretations, (iii) measuring the completeness of interpretations with respect to an end task and (iv) a practical application of GCI to molecular property prediction, in which we demonstrate how to use chemical functional groups to explain GNNs trained on molecular property prediction tasks, and implement interpretations with a 0.76 AUCROC completeness score. |
2004.14084 | Yuki Nishida | Yuki Nishida and Atsushi Igarashi | Compilation of Coordinated Choice | null | null | null | null | cs.PL cs.LO | http://creativecommons.org/licenses/by/4.0/ | Recently, we have proposed coordinated choices, which are nondeterministic
choices equipped with names. The main characteristic of coordinated choices is
that they synchronize nondeterministic decision among choices of the same name.
The motivation of the synchronization mechanism is to solve a theoretical
problem. So, as a practical programming language, we still want to use
coordinated choices like standard ones. In other words, we want to avoid
synchronization. Now, there are two problems: (i) practically, it is a bit
complicated work to write a program using coordinated choices in which
execution synchronization never happens; and (ii) theoretically, it is unknown
whether any programs using standard choices can be written by using only
coordinated ones.
In this paper, we define two simply typed lambda calculi called
$\lambda^\parallel$ equipped with standard choices and
$\lambda^{\parallel\omega}$ equipped with coordinated choices, and give
compilation rules from the former into the latter. The challenge is to show the
correctness of the compilation because behavioral correspondence between
expressions before and after compiling cannot be defined directly by the
compilation rules. For the challenge, we give an effect system for
$\lambda^{\parallel\omega}$ that characterizes expressions in which execution
synchronization never happens. Then, we show that all compiled expressions can
be typed by the effect system. As a result, we can easily show the correctness
because the main concern of the correctness is whether synchronization happens
or not.
| [
{
"created": "Wed, 29 Apr 2020 11:15:19 GMT",
"version": "v1"
}
] | 2020-05-05 | [
[
"Nishida",
"Yuki",
""
],
[
"Igarashi",
"Atsushi",
""
]
] | Recently, we have proposed coordinated choices, which are nondeterministic choices equipped with names. The main characteristic of coordinated choices is that they synchronize nondeterministic decision among choices of the same name. The motivation of the synchronization mechanism is to solve a theoretical problem. So, as a practical programming language, we still want to use coordinated choices like standard ones. In other words, we want to avoid synchronization. Now, there are two problems: (i) practically, it is a bit complicated work to write a program using coordinated choices in which execution synchronization never happens; and (ii) theoretically, it is unknown whether any programs using standard choices can be written by using only coordinated ones. In this paper, we define two simply typed lambda calculi called $\lambda^\parallel$ equipped with standard choices and $\lambda^{\parallel\omega}$ equipped with coordinated choices, and give compilation rules from the former into the latter. The challenge is to show the correctness of the compilation because behavioral correspondence between expressions before and after compiling cannot be defined directly by the compilation rules. For the challenge, we give an effect system for $\lambda^{\parallel\omega}$ that characterizes expressions in which execution synchronization never happens. Then, we show that all compiled expressions can be typed by the effect system. As a result, we can easily show the correctness because the main concern of the correctness is whether synchronization happens or not. |
1810.09270 | Rui Zhu | Rui Zhu and Di Niu | A Model Parallel Proximal Stochastic Gradient Algorithm for Partially
Asynchronous Systems | arXiv admin note: substantial text overlap with arXiv:1802.08880 | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Large models are prevalent in modern machine learning scenarios, including
deep learning, recommender systems, etc., which can have millions or even
billions of parameters. Parallel algorithms have become an essential solution
technique to many large-scale machine learning jobs. In this paper, we propose
a model parallel proximal stochastic gradient algorithm, AsyB-ProxSGD, to deal
with large models using model parallel blockwise updates while in the meantime
handling a large amount of training data using proximal stochastic gradient
descent (ProxSGD). In our algorithm, worker nodes communicate with the
parameter servers asynchronously, and each worker performs proximal stochastic
gradient for only one block of model parameters during each iteration. Our
proposed algorithm generalizes ProxSGD to the asynchronous and model parallel
setting. We prove that AsyB-ProxSGD achieves a convergence rate of
$O(1/\sqrt{K})$ to stationary points for nonconvex problems under
\emph{constant} minibatch sizes, where $K$ is the total number of block
updates. This rate matches the best-known rates of convergence for a wide range
of gradient-like algorithms. Furthermore, we show that when the number of
workers is bounded by $O(K^{1/4})$, we can expect AsyB-ProxSGD to achieve
linear speedup as the number of workers increases. We implement the proposed
algorithm on MXNet and demonstrate its convergence behavior and near-linear
speedup on a real-world dataset involving both a large model size and large
amounts of data.
| [
{
"created": "Fri, 19 Oct 2018 17:22:30 GMT",
"version": "v1"
}
] | 2018-10-23 | [
[
"Zhu",
"Rui",
""
],
[
"Niu",
"Di",
""
]
] | Large models are prevalent in modern machine learning scenarios, including deep learning, recommender systems, etc., which can have millions or even billions of parameters. Parallel algorithms have become an essential solution technique to many large-scale machine learning jobs. In this paper, we propose a model parallel proximal stochastic gradient algorithm, AsyB-ProxSGD, to deal with large models using model parallel blockwise updates while in the meantime handling a large amount of training data using proximal stochastic gradient descent (ProxSGD). In our algorithm, worker nodes communicate with the parameter servers asynchronously, and each worker performs proximal stochastic gradient for only one block of model parameters during each iteration. Our proposed algorithm generalizes ProxSGD to the asynchronous and model parallel setting. We prove that AsyB-ProxSGD achieves a convergence rate of $O(1/\sqrt{K})$ to stationary points for nonconvex problems under \emph{constant} minibatch sizes, where $K$ is the total number of block updates. This rate matches the best-known rates of convergence for a wide range of gradient-like algorithms. Furthermore, we show that when the number of workers is bounded by $O(K^{1/4})$, we can expect AsyB-ProxSGD to achieve linear speedup as the number of workers increases. We implement the proposed algorithm on MXNet and demonstrate its convergence behavior and near-linear speedup on a real-world dataset involving both a large model size and large amounts of data. |
2110.08477 | Jian Du | Yan Shen and Jian Du and Han Zhao and Benyu Zhang and Zhanghexuan Ji
and Mingchen Gao | FedMM: Saddle Point Optimization for Federated Adversarial Domain
Adaptation | 34 pages | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Federated adversary domain adaptation is a unique distributed minimax
training task due to the prevalence of label imbalance among clients, with each
client only seeing a subset of the classes of labels required to train a global
model. To tackle this problem, we propose a distributed minimax optimizer
referred to as FedMM, designed specifically for the federated adversary domain
adaptation problem. It works well even in the extreme case where each client
has different label classes and some clients only have unsupervised tasks. We
prove that FedMM ensures convergence to a stationary point with domain-shifted
unsupervised data. On a variety of benchmark datasets, extensive experiments
show that FedMM consistently achieves either significant communication savings
or significant accuracy improvements over federated optimizers based on the
gradient descent ascent (GDA) algorithm. When training from scratch, for
example, it outperforms other GDA based federated average methods by around
$20\%$ in accuracy over the same communication rounds; and it consistently
outperforms when training from pre-trained models with an accuracy improvement
from $5.4\%$ to $9\%$ for different networks.
| [
{
"created": "Sat, 16 Oct 2021 05:32:03 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Oct 2021 17:52:37 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Nov 2021 03:36:08 GMT",
"version": "v3"
}
] | 2021-11-17 | [
[
"Shen",
"Yan",
""
],
[
"Du",
"Jian",
""
],
[
"Zhao",
"Han",
""
],
[
"Zhang",
"Benyu",
""
],
[
"Ji",
"Zhanghexuan",
""
],
[
"Gao",
"Mingchen",
""
]
] | Federated adversary domain adaptation is a unique distributed minimax training task due to the prevalence of label imbalance among clients, with each client only seeing a subset of the classes of labels required to train a global model. To tackle this problem, we propose a distributed minimax optimizer referred to as FedMM, designed specifically for the federated adversary domain adaptation problem. It works well even in the extreme case where each client has different label classes and some clients only have unsupervised tasks. We prove that FedMM ensures convergence to a stationary point with domain-shifted unsupervised data. On a variety of benchmark datasets, extensive experiments show that FedMM consistently achieves either significant communication savings or significant accuracy improvements over federated optimizers based on the gradient descent ascent (GDA) algorithm. When training from scratch, for example, it outperforms other GDA based federated average methods by around $20\%$ in accuracy over the same communication rounds; and it consistently outperforms when training from pre-trained models with an accuracy improvement from $5.4\%$ to $9\%$ for different networks. |
2102.10629 | Aaron Zimba | Aaron Zimba, Tozgani Fainess Mbale, Mumbi Chishimba, Mathews Chibuluma | Liberalisation of the International Gateway and Internet Development in
Zambia: The Genesis, Opportunities, Challenges, and Future Directions | null | null | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | Telecommunication reforms in Zambia and the subsequent liberalisation of the
international gateway was perceived as one of the means of promoting social and
economic growth in both the urban and rural areas of the country. The outcome
of this undertaking propelled the rapid development of Internet which has
evidently brought about unprecedented paradigm shifts in the use of Information
and Communication Technologies (ICTs). It is indisputable that ICTs, and the
Internet in particular, have revolutionalised the way we communicate today.
Furthermore, the penetration of ICTs to other spheres of our daily lives is
evidence enough that the impacts thereof go beyond mere communicative facets of
our lives. However, many challenges arose in the implementation of
telecommunications reforms. In order to achieve the status quo, government had
to make strategic liberalisation policies in the telecoms sector that saw the
opening up of the international communication gateways to the private sector.
This is in tandem with the fact that the relationship between government
(through its formulation of policies and regulations) and other stakeholders
determines the ability of a country to generate and use advanced knowledge for
industrial competitiveness. As such, in this paper, we present the genesis and
evaluate the impacts associated with the telecommunications reforms and the
subsequent liberalisation of international communication gateways, and Internet
development in Zambia. We further consider the challenges this has brought
about and discuss possible future directions. This is helpful in forecasting
the future landscape of the ICT sector considering that the country seeks to
achieve universal coverage of both Internet and communication facilities to all
Zambians across the country.
| [
{
"created": "Sun, 21 Feb 2021 15:49:37 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"Zimba",
"Aaron",
""
],
[
"Mbale",
"Tozgani Fainess",
""
],
[
"Chishimba",
"Mumbi",
""
],
[
"Chibuluma",
"Mathews",
""
]
] | Telecommunication reforms in Zambia and the subsequent liberalisation of the international gateway was perceived as one of the means of promoting social and economic growth in both the urban and rural areas of the country. The outcome of this undertaking propelled the rapid development of Internet which has evidently brought about unprecedented paradigm shifts in the use of Information and Communication Technologies (ICTs). It is indisputable that ICTs, and the Internet in particular, have revolutionalised the way we communicate today. Furthermore, the penetration of ICTs to other spheres of our daily lives is evidence enough that the impacts thereof go beyond mere communicative facets of our lives. However, many challenges arose in the implementation of telecommunications reforms. In order to achieve the status quo, government had to make strategic liberalisation policies in the telecoms sector that saw the opening up of the international communication gateways to the private sector. This is in tandem with the fact that the relationship between government (through its formulation of policies and regulations) and other stakeholders determines the ability of a country to generate and use advanced knowledge for industrial competitiveness. As such, in this paper, we present the genesis and evaluate the impacts associated with the telecommunications reforms and the subsequent liberalisation of international communication gateways, and Internet development in Zambia. We further consider the challenges this has brought about and discuss possible future directions. This is helpful in forecasting the future landscape of the ICT sector considering that the country seeks to achieve universal coverage of both Internet and communication facilities to all Zambians across the country. |
2109.01718 | Jing Ma | Jing Ma, Qiuchen Zhang, Jian Lou, Li Xiong, Sivasubramanium Bhavani,
Joyce C. Ho | Communication Efficient Generalized Tensor Factorization for
Decentralized Healthcare Networks | Short version accepted to IEEE ICDM 2021 | null | null | null | cs.LG cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Tensor factorization has been proved as an efficient unsupervised learning
approach for health data analysis, especially for computational phenotyping,
where the high-dimensional Electronic Health Records (EHRs) with patients'
history of medical procedures, medications, diagnosis, lab tests, etc., are
converted to meaningful and interpretable medical concepts. Federated tensor
factorization distributes the tensor computation to multiple workers under the
coordination of a central server, which enables jointly learning the phenotypes
across multiple hospitals while preserving the privacy of the patient
information. However, existing federated tensor factorization algorithms
encounter the single-point-failure issue with the involvement of the central
server, which is not only easily exposed to external attacks but also limits
the number of clients sharing information with the server under restricted
uplink bandwidth. In this paper, we propose CiderTF, a communication-efficient
decentralized generalized tensor factorization, which reduces the uplink
communication cost by leveraging a four-level communication reduction strategy
designed for a generalized tensor factorization, which has the flexibility of
modeling different tensor distribution with multiple kinds of loss functions.
Experiments on two real-world EHR datasets demonstrate that CiderTF achieves
comparable convergence with a communication reduction up to 99.99%.
| [
{
"created": "Fri, 3 Sep 2021 19:47:08 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Nov 2022 06:15:31 GMT",
"version": "v2"
}
] | 2022-11-04 | [
[
"Ma",
"Jing",
""
],
[
"Zhang",
"Qiuchen",
""
],
[
"Lou",
"Jian",
""
],
[
"Xiong",
"Li",
""
],
[
"Bhavani",
"Sivasubramanium",
""
],
[
"Ho",
"Joyce C.",
""
]
] | Tensor factorization has been proved as an efficient unsupervised learning approach for health data analysis, especially for computational phenotyping, where the high-dimensional Electronic Health Records (EHRs) with patients' history of medical procedures, medications, diagnosis, lab tests, etc., are converted to meaningful and interpretable medical concepts. Federated tensor factorization distributes the tensor computation to multiple workers under the coordination of a central server, which enables jointly learning the phenotypes across multiple hospitals while preserving the privacy of the patient information. However, existing federated tensor factorization algorithms encounter the single-point-failure issue with the involvement of the central server, which is not only easily exposed to external attacks but also limits the number of clients sharing information with the server under restricted uplink bandwidth. In this paper, we propose CiderTF, a communication-efficient decentralized generalized tensor factorization, which reduces the uplink communication cost by leveraging a four-level communication reduction strategy designed for a generalized tensor factorization, which has the flexibility of modeling different tensor distribution with multiple kinds of loss functions. Experiments on two real-world EHR datasets demonstrate that CiderTF achieves comparable convergence with a communication reduction up to 99.99%. |
1908.11610 | Zhuoren Jiang | Zhuoren Jiang, Jian Wang, Lujun Zhao, Changlong Sun, Yao Lu, Xiaozhong
Liu | Cross-domain Aspect Category Transfer and Detection via Traceable
Heterogeneous Graph Representation Learning | Accepted as a full paper of The 28th ACM International Conference on
Information and Knowledge Management (CIKM '19) | null | 10.1145/3357384.3357989 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aspect category detection is an essential task for sentiment analysis and
opinion mining. However, the cost of categorical data labeling, e.g., label the
review aspect information for a large number of product domains, can be
inevitable but unaffordable. In this study, we propose a novel problem,
cross-domain aspect category transfer and detection, which faces three
challenges: various feature spaces, different data distributions, and diverse
output spaces. To address these problems, we propose an innovative solution,
Traceable Heterogeneous Graph Representation Learning (THGRL). Unlike prior
text-based aspect detection works, THGRL explores latent domain aspect category
connections via massive user behavior information on a heterogeneous graph.
Moreover, an innovative latent variable "Walker Tracer" is introduced to
characterize the global semantic/aspect dependencies and capture the
informative vertexes on the random walk paths. By using THGRL, we project
different domains' feature spaces into a common one, while allowing data
distributions and output spaces stay differently. Experiment results show that
the proposed method outperforms a series of state-of-the-art baseline models.
| [
{
"created": "Fri, 30 Aug 2019 09:30:38 GMT",
"version": "v1"
}
] | 2019-09-02 | [
[
"Jiang",
"Zhuoren",
""
],
[
"Wang",
"Jian",
""
],
[
"Zhao",
"Lujun",
""
],
[
"Sun",
"Changlong",
""
],
[
"Lu",
"Yao",
""
],
[
"Liu",
"Xiaozhong",
""
]
] | Aspect category detection is an essential task for sentiment analysis and opinion mining. However, the cost of categorical data labeling, e.g., label the review aspect information for a large number of product domains, can be inevitable but unaffordable. In this study, we propose a novel problem, cross-domain aspect category transfer and detection, which faces three challenges: various feature spaces, different data distributions, and diverse output spaces. To address these problems, we propose an innovative solution, Traceable Heterogeneous Graph Representation Learning (THGRL). Unlike prior text-based aspect detection works, THGRL explores latent domain aspect category connections via massive user behavior information on a heterogeneous graph. Moreover, an innovative latent variable "Walker Tracer" is introduced to characterize the global semantic/aspect dependencies and capture the informative vertexes on the random walk paths. By using THGRL, we project different domains' feature spaces into a common one, while allowing data distributions and output spaces stay differently. Experiment results show that the proposed method outperforms a series of state-of-the-art baseline models. |
1802.00912 | Zongwei Zhou | Zongwei Zhou, Jae Y. Shin, Suryakanth R. Gurudu, Michael B. Gotway,
Jianming Liang | Active, Continual Fine Tuning of Convolutional Neural Networks for
Reducing Annotation Efforts | null | null | 10.1016/j.media.2021.101997 | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The splendid success of convolutional neural networks (CNNs) in computer
vision is largely attributable to the availability of massive annotated
datasets, such as ImageNet and Places. However, in medical imaging, it is
challenging to create such large annotated datasets, as annotating medical
images is not only tedious, laborious, and time consuming, but it also demands
costly, specialty-oriented skills, which are not easily accessible. To
dramatically reduce annotation cost, this paper presents a novel method to
naturally integrate active learning and transfer learning (fine-tuning) into a
single framework, which starts directly with a pre-trained CNN to seek "worthy"
samples for annotation and gradually enhances the (fine-tuned) CNN via
continual fine-tuning. We have evaluated our method using three distinct
medical imaging applications, demonstrating that it can reduce annotation
efforts by at least half compared with random selection.
| [
{
"created": "Sat, 3 Feb 2018 05:01:17 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Feb 2018 02:13:28 GMT",
"version": "v2"
},
{
"created": "Sat, 23 May 2020 18:03:48 GMT",
"version": "v3"
},
{
"created": "Tue, 30 Mar 2021 00:19:51 GMT",
"version": "v4"
},
{
"created": "Sat, 10 Apr 2021 22:38:32 GMT",
"version": "v5"
}
] | 2021-04-13 | [
[
"Zhou",
"Zongwei",
""
],
[
"Shin",
"Jae Y.",
""
],
[
"Gurudu",
"Suryakanth R.",
""
],
[
"Gotway",
"Michael B.",
""
],
[
"Liang",
"Jianming",
""
]
] | The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek "worthy" samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection. |
2201.09769 | Martin Bromberger | Martin Bromberger (1), Irina Dragoste (2), Rasha Faqeh (2), Christof
Fetzer (2), Larry Gonz\'alez (2), Markus Kr\"otzsch (2), Maximilian Marx (2),
Harish K Murali, (1 and 3), Christoph Weidenbach (1) ((1) Max Planck
Institute for Informatics, Saarland Informatics Campus, Saarbr\"ucken,
Germany, (2) TU Dresden, Dresden, Germany, (3) IIITDM Kancheepuram, Chennai,
India) | A Sorted Datalog Hammer for Supervisor Verification Conditions Modulo
Simple Linear Arithmetic | 34 pages, to be published in the proceedings for TACAS 2022. arXiv
admin note: text overlap with arXiv:2107.03189 | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a previous paper, we have shown that clause sets belonging to the Horn
Bernays-Sch\"onfinkel fragment over simple linear real arithmetic (HBS(SLR))
can be translated into HBS clause sets over a finite set of first-order
constants. The translation preserves validity and satisfiability and it is
still applicable if we extend our input with positive universally or
existentially quantified verification conditions (conjectures). We call this
translation a Datalog hammer. The combination of its implementation in
SPASS-SPL with the Datalog reasoner VLog establishes an effective way of
deciding verification conditions in the Horn fragment. We verify supervisor
code for two examples: a lane change assistant in a car and an electronic
control unit of a supercharged combustion engine. In this paper, we improve our
Datalog hammer in several ways: we generalize it to mixed real-integer
arithmetic and finite first-order sorts; we extend the class of acceptable
inequalities beyond variable bounds and positively grounded inequalities; and
we significantly reduce the size of the hammer output by a soft typing
discipline. We call the result the sorted Datalog hammer. It not only allows us
to handle more complex supervisor code and to model already considered
supervisor code more concisely, but it also improves our performance on real
world benchmark examples. Finally, we replace the previous file-based interface
between SPASS-SPL and VLog by a close coupling resulting in a single executable
binary.
| [
{
"created": "Mon, 24 Jan 2022 15:58:37 GMT",
"version": "v1"
}
] | 2022-01-25 | [
[
"Bromberger",
"Martin",
""
],
[
"Dragoste",
"Irina",
""
],
[
"Faqeh",
"Rasha",
""
],
[
"Fetzer",
"Christof",
""
],
[
"González",
"Larry",
""
],
[
"Krötzsch",
"Markus",
""
],
[
"Marx",
"Maximilian",
""
],
[
"Murali",
"Harish K",
""
],
[
"Weidenbach",
"Christoph",
""
]
] | In a previous paper, we have shown that clause sets belonging to the Horn Bernays-Sch\"onfinkel fragment over simple linear real arithmetic (HBS(SLR)) can be translated into HBS clause sets over a finite set of first-order constants. The translation preserves validity and satisfiability and it is still applicable if we extend our input with positive universally or existentially quantified verification conditions (conjectures). We call this translation a Datalog hammer. The combination of its implementation in SPASS-SPL with the Datalog reasoner VLog establishes an effective way of deciding verification conditions in the Horn fragment. We verify supervisor code for two examples: a lane change assistant in a car and an electronic control unit of a supercharged combustion engine. In this paper, we improve our Datalog hammer in several ways: we generalize it to mixed real-integer arithmetic and finite first-order sorts; we extend the class of acceptable inequalities beyond variable bounds and positively grounded inequalities; and we significantly reduce the size of the hammer output by a soft typing discipline. We call the result the sorted Datalog hammer. It not only allows us to handle more complex supervisor code and to model already considered supervisor code more concisely, but it also improves our performance on real world benchmark examples. Finally, we replace the previous file-based interface between SPASS-SPL and VLog by a close coupling resulting in a single executable binary. |
2006.09645 | Atsuya Kobayashi | Atsuya Kobayashi, Reo Anzai, Nao Tokui | ExSampling: a system for the real-time ensemble performance of
field-recorded environmental sounds | The International Conference on New Interfaces for Musical Expression
2020 poster presentation. 4 pages | null | null | null | cs.HC cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | We propose ExSampling: an integrated system of recording application and Deep
Learning environment for a real-time music performance of environmental sounds
sampled by field recording. Automated sound mapping to Ableton Live tracks by
Deep Learning enables field recording to be applied to real-time performance,
and creates interactions among sound recorders, composers and performers.
| [
{
"created": "Wed, 17 Jun 2020 04:07:13 GMT",
"version": "v1"
}
] | 2020-06-18 | [
[
"Kobayashi",
"Atsuya",
""
],
[
"Anzai",
"Reo",
""
],
[
"Tokui",
"Nao",
""
]
] | We propose ExSampling: an integrated system of recording application and Deep Learning environment for a real-time music performance of environmental sounds sampled by field recording. Automated sound mapping to Ableton Live tracks by Deep Learning enables field recording to be applied to real-time performance, and creates interactions among sound recorders, composers and performers. |
1402.2941 | Zohaib Khan | Zohaib Khan, Faisal Shafait, Yiqun Hu, Ajmal Mian | Multispectral Palmprint Encoding and Recognition | Preliminary version of this manuscript was published in ICCV 2011. Z.
Khan A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral
Palmprint Encoding for Human Recognition", International Conference on
Computer Vision, 2011. MATLAB Code available:
https://sites.google.com/site/zohaibnet/Home/codes | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins; making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate the
viability of palmprint as a reliable and promising biometric. All source codes
are publicly available.
| [
{
"created": "Thu, 6 Feb 2014 06:35:51 GMT",
"version": "v1"
}
] | 2014-02-13 | [
[
"Khan",
"Zohaib",
""
],
[
"Shafait",
"Faisal",
""
],
[
"Hu",
"Yiqun",
""
],
[
"Mian",
"Ajmal",
""
]
] | Palmprints are emerging as a new entity in multi-modal biometrics for human identification and verification. Multispectral palmprint images captured in the visible and infrared spectrum not only contain the wrinkles and ridge structure of a palm, but also the underlying pattern of veins; making them a highly discriminating biometric identifier. In this paper, we propose a feature encoding scheme for robust and highly accurate representation and matching of multispectral palmprints. To facilitate compact storage of the feature, we design a binary hash table structure that allows for efficient matching in large databases. Comprehensive experiments for both identification and verification scenarios are performed on two public datasets -- one captured with a contact-based sensor (PolyU dataset), and the other with a contact-free sensor (CASIA dataset). Recognition results in various experimental setups show that the proposed method consistently outperforms existing state-of-the-art methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA) are the lowest reported in the literature on both datasets and clearly indicate the viability of palmprint as a reliable and promising biometric. All source codes are publicly available. |
1902.05718 | Justinas Miseikis | Justinas Miseikis, Inka Brijacak, Saeed Yahyanejad, Kyrre Glette, Ole
Jakob Elle, Jim Torresen | Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D
Joint Position Estimation in a 2D Camera Image using CNN | 6+n pages, ICRA 2019 submission | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative robots are becoming more common on factory floors as well as
regular environments, however, their safety still is not a fully solved issue.
Collision detection does not always perform as expected and collision avoidance
is still an active research area. Collision avoidance works well for fixed
robot-camera setups, however, if they are shifted around, Eye-to-Hand
calibration becomes invalid making it difficult to accurately run many of the
existing collision avoidance algorithms. We approach the problem by presenting
a stand-alone system capable of detecting the robot and estimating its
position, including individual joints, by using a simple 2D colour image as an
input, where no Eye-to-Hand calibration is needed. As an extension of previous
work, a two-stage transfer learning approach is used to re-train a
multi-objective convolutional neural network (CNN) to allow it to be used with
heterogeneous robot arms. Our method is capable of detecting the robot in
real-time and new robot types can be added by having significantly smaller
training datasets compared to the requirements of a fully trained network. We
present the data collection approach, the structure of the multi-objective CNN, the
two-stage transfer learning training and test results by using real robots from
Universal Robots, Kuka, and Franka Emika. Finally, we analyse possible
application areas of our method together with the possible improvements.
| [
{
"created": "Fri, 15 Feb 2019 08:25:02 GMT",
"version": "v1"
}
] | 2019-02-18 | [
[
"Miseikis",
"Justinas",
""
],
[
"Brijacak",
"Inka",
""
],
[
"Yahyanejad",
"Saeed",
""
],
[
"Glette",
"Kyrre",
""
],
[
"Elle",
"Ole Jakob",
""
],
[
"Torresen",
"Jim",
""
]
] | Collaborative robots are becoming more common on factory floors as well as regular environments, however, their safety still is not a fully solved issue. Collision detection does not always perform as expected and collision avoidance is still an active research area. Collision avoidance works well for fixed robot-camera setups, however, if they are shifted around, Eye-to-Hand calibration becomes invalid making it difficult to accurately run many of the existing collision avoidance algorithms. We approach the problem by presenting a stand-alone system capable of detecting the robot and estimating its position, including individual joints, by using a simple 2D colour image as an input, where no Eye-to-Hand calibration is needed. As an extension of previous work, a two-stage transfer learning approach is used to re-train a multi-objective convolutional neural network (CNN) to allow it to be used with heterogeneous robot arms. Our method is capable of detecting the robot in real-time and new robot types can be added by having significantly smaller training datasets compared to the requirements of a fully trained network. We present the data collection approach, the structure of the multi-objective CNN, the two-stage transfer learning training and test results by using real robots from Universal Robots, Kuka, and Franka Emika. Finally, we analyse possible application areas of our method together with the possible improvements. |
2308.07496 | Yuqi Nie | Zepu Wang, Yuqi Nie, Peng Sun, Nam H. Nguyen, John Mulvey, H. Vincent
Poor | ST-MLP: A Cascaded Spatio-Temporal Linear Framework with
Channel-Independence Strategy for Traffic Forecasting | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The criticality of prompt and precise traffic forecasting in optimizing
traffic flow management in Intelligent Transportation Systems (ITS) has drawn
substantial scholarly focus. Spatio-Temporal Graph Neural Networks (STGNNs)
have been lauded for their adaptability to road graph structures. Yet, current
research on STGNNs architectures often prioritizes complex designs, leading to
elevated computational burdens with only minor enhancements in accuracy. To
address this issue, we propose ST-MLP, a concise spatio-temporal model solely
based on cascaded Multi-Layer Perceptron (MLP) modules and linear layers.
Specifically, we incorporate temporal information, spatial information and
predefined graph structure with a successful implementation of the
channel-independence strategy - an effective technique in time series
forecasting. Empirical results demonstrate that ST-MLP outperforms
state-of-the-art STGNNs and other models in terms of accuracy and computational
efficiency. Our finding encourages further exploration of more concise and
effective neural network architectures in the field of traffic forecasting.
| [
{
"created": "Mon, 14 Aug 2023 23:34:59 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Wang",
"Zepu",
""
],
[
"Nie",
"Yuqi",
""
],
[
"Sun",
"Peng",
""
],
[
"Nguyen",
"Nam H.",
""
],
[
"Mulvey",
"John",
""
],
[
"Poor",
"H. Vincent",
""
]
] | The criticality of prompt and precise traffic forecasting in optimizing traffic flow management in Intelligent Transportation Systems (ITS) has drawn substantial scholarly focus. Spatio-Temporal Graph Neural Networks (STGNNs) have been lauded for their adaptability to road graph structures. Yet, current research on STGNNs architectures often prioritizes complex designs, leading to elevated computational burdens with only minor enhancements in accuracy. To address this issue, we propose ST-MLP, a concise spatio-temporal model solely based on cascaded Multi-Layer Perceptron (MLP) modules and linear layers. Specifically, we incorporate temporal information, spatial information and predefined graph structure with a successful implementation of the channel-independence strategy - an effective technique in time series forecasting. Empirical results demonstrate that ST-MLP outperforms state-of-the-art STGNNs and other models in terms of accuracy and computational efficiency. Our finding encourages further exploration of more concise and effective neural network architectures in the field of traffic forecasting. |
2203.05782 | Michael Mozer | Shruthi Sukumar, Adrian F. Ward, Camden Elliott-Williams, Shabnam
Hakimi, Michael C. Mozer | Overcoming Temptation: Incentive Design For Intertemporal Choice | null | null | null | null | cs.LG q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Individuals are often faced with temptations that can lead them astray from
long-term goals. We're interested in developing interventions that steer
individuals toward making good initial decisions and then maintaining those
decisions over time. In the realm of financial decision making, a particularly
successful approach is the prize-linked savings account: individuals are
incentivized to make deposits by tying deposits to a periodic lottery that
awards bonuses to the savers. Although these lotteries have been very effective
in motivating savers across the globe, they are a one-size-fits-all solution.
We investigate whether customized bonuses can be more effective. We formalize a
delayed-gratification task as a Markov decision problem and characterize
individuals as rational agents subject to temporal discounting, a cost
associated with effort, and fluctuations in willpower. Our theory is able to
explain key behavioral findings in intertemporal choice. We created an online
delayed-gratification game in which the player scores points by selecting a
queue to wait in and then performing a series of actions to advance to the
front. Data collected from the game is fit to the model, and the instantiated
model is then used to optimize predicted player performance over a space of
incentives. We demonstrate that customized incentive structures can improve an
individual's goal-directed decision making.
| [
{
"created": "Fri, 11 Mar 2022 07:42:07 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Mar 2022 04:47:31 GMT",
"version": "v2"
}
] | 2022-03-15 | [
[
"Sukumar",
"Shruthi",
""
],
[
"Ward",
"Adrian F.",
""
],
[
"Elliott-Williams",
"Camden",
""
],
[
"Hakimi",
"Shabnam",
""
],
[
"Mozer",
"Michael C.",
""
]
] | Individuals are often faced with temptations that can lead them astray from long-term goals. We're interested in developing interventions that steer individuals toward making good initial decisions and then maintaining those decisions over time. In the realm of financial decision making, a particularly successful approach is the prize-linked savings account: individuals are incentivized to make deposits by tying deposits to a periodic lottery that awards bonuses to the savers. Although these lotteries have been very effective in motivating savers across the globe, they are a one-size-fits-all solution. We investigate whether customized bonuses can be more effective. We formalize a delayed-gratification task as a Markov decision problem and characterize individuals as rational agents subject to temporal discounting, a cost associated with effort, and fluctuations in willpower. Our theory is able to explain key behavioral findings in intertemporal choice. We created an online delayed-gratification game in which the player scores points by selecting a queue to wait in and then performing a series of actions to advance to the front. Data collected from the game is fit to the model, and the instantiated model is then used to optimize predicted player performance over a space of incentives. We demonstrate that customized incentive structures can improve an individual's goal-directed decision making. |
1509.07813 | Jacopo Baggio | Kehinde R. Salau, Jacopo A. Baggio, Marco A. Janssen, Joshua K.
Abbott, Eli P. Fenichel | Taking a moment to measure Networks - A hierarchical approach | Main Paper: 32 pages, Supplementary Material: 9 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network-theoretic tools contribute to understanding real-world system
dynamics, e.g., in wildlife conservation, epidemics, and power outages. Network
visualization helps illustrate structural heterogeneity; however, details about
heterogeneity are lost when summarizing networks with a single mean-style
measure. Researchers have indicated that a hierarchical system composed of
multiple metrics may be a more useful determinant of structure, but a formal
method for grouping metrics is still lacking. We develop a hierarchy using the
statistical concept of moments and systematically test the hypothesis that this
system of metrics is sufficient to explain the variation in processes that take
place on networks, using an ecological systems example. Results indicate that
the moments approach outperforms single summary metrics and accounts for a
majority of the variation in process outcomes. The hierarchical measurement
scheme is helpful for indicating when additional structural information is
needed to describe system process outcomes.
| [
{
"created": "Fri, 25 Sep 2015 18:00:01 GMT",
"version": "v1"
}
] | 2015-09-28 | [
[
"Salau",
"Kehinde R.",
""
],
[
"Baggio",
"Jacopo A.",
""
],
[
"Janssen",
"Marco A.",
""
],
[
"Abbott",
"Joshua K.",
""
],
[
"Fenichel",
"Eli P.",
""
]
] | Network-theoretic tools contribute to understanding real-world system dynamics, e.g., in wildlife conservation, epidemics, and power outages. Network visualization helps illustrate structural heterogeneity; however, details about heterogeneity are lost when summarizing networks with a single mean-style measure. Researchers have indicated that a hierarchical system composed of multiple metrics may be a more useful determinant of structure, but a formal method for grouping metrics is still lacking. We develop a hierarchy using the statistical concept of moments and systematically test the hypothesis that this system of metrics is sufficient to explain the variation in processes that take place on networks, using an ecological systems example. Results indicate that the moments approach outperforms single summary metrics and accounts for a majority of the variation in process outcomes. The hierarchical measurement scheme is helpful for indicating when additional structural information is needed to describe system process outcomes. |
2110.01774 | Uttaran Bhattacharya | Uttaran Bhattacharya and Gang Wu and Stefano Petrangeli and
Viswanathan Swaminathan and Dinesh Manocha | HighlightMe: Detecting Highlights from Human-Centric Videos | 10 pages, 5 figures, 5 tables. In Proceedings of the IEEE/CVF
International Conference on Computer Vision (ICCV), 2021 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a domain- and user-preference-agnostic approach to detect
highlightable excerpts from human-centric videos. Our method works on the
graph-based representation of multiple observable human-centric modalities in
the videos, such as poses and faces. We use an autoencoder network equipped
with spatial-temporal graph convolutions to detect human activities and
interactions based on these modalities. We train our network to map the
activity- and interaction-based latent structural representations of the
different modalities to per-frame highlight scores based on the
representativeness of the frames. We use these scores to compute which frames
to highlight and stitch contiguous frames to produce the excerpts. We train our
network on the large-scale AVA-Kinetics action dataset and evaluate it on four
benchmark video highlight datasets: DSH, TVSum, PHD2, and SumMe. We observe a
4-12% improvement in the mean average precision of matching the human-annotated
highlights over state-of-the-art methods in these datasets, without requiring
any user-provided preferences or dataset-specific fine-tuning.
| [
{
"created": "Tue, 5 Oct 2021 01:18:15 GMT",
"version": "v1"
}
] | 2021-10-06 | [
[
"Bhattacharya",
"Uttaran",
""
],
[
"Wu",
"Gang",
""
],
[
"Petrangeli",
"Stefano",
""
],
[
"Swaminathan",
"Viswanathan",
""
],
[
"Manocha",
"Dinesh",
""
]
] | We present a domain- and user-preference-agnostic approach to detect highlightable excerpts from human-centric videos. Our method works on the graph-based representation of multiple observable human-centric modalities in the videos, such as poses and faces. We use an autoencoder network equipped with spatial-temporal graph convolutions to detect human activities and interactions based on these modalities. We train our network to map the activity- and interaction-based latent structural representations of the different modalities to per-frame highlight scores based on the representativeness of the frames. We use these scores to compute which frames to highlight and stitch contiguous frames to produce the excerpts. We train our network on the large-scale AVA-Kinetics action dataset and evaluate it on four benchmark video highlight datasets: DSH, TVSum, PHD2, and SumMe. We observe a 4-12% improvement in the mean average precision of matching the human-annotated highlights over state-of-the-art methods in these datasets, without requiring any user-provided preferences or dataset-specific fine-tuning. |
2107.11817 | Fuzhao Xue | Fuzhao Xue, Ziji Shi, Futao Wei, Yuxuan Lou, Yong Liu, Yang You | Go Wider Instead of Deeper | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More transformer blocks with residual connections have recently achieved
impressive results on various tasks. To achieve better performance with fewer
trainable parameters, recent methods are proposed to go shallower by parameter
sharing or model compressing along with the depth. However, weak modeling
capacity limits their performance. Contrastively, going wider by inducing more
trainable matrices and parameters would produce a huge model requiring advanced
parallelism for training and inference.
In this paper, we propose a parameter-efficient framework, going wider
instead of deeper. Specifically, following existing works, we adapt parameter
sharing to compress along depth. But, such deployment would limit the
performance. To maximize modeling capacity, we scale along model width by
replacing feed-forward network (FFN) with mixture-of-experts (MoE). Across
transformer blocks, instead of sharing normalization layers, we propose to use
individual layernorms to transform various semantic representations in a more
parameter-efficient way. To evaluate our plug-and-run framework, we design
WideNet and conduct comprehensive experiments on popular computer vision and
natural language processing benchmarks. On ImageNet-1K, our best model
outperforms Vision Transformer (ViT) by $1.5\%$ with $0.72 \times$ trainable
parameters. Using $0.46 \times$ and $0.13 \times$ parameters, our WideNet can
still surpass ViT and ViT-MoE by $0.8\%$ and $2.1\%$, respectively. On four
natural language processing datasets, WideNet outperforms ALBERT by $1.8\%$ on
average and surpasses BERT using factorized embedding parameterization by $0.8\%$
with fewer parameters.
| [
{
"created": "Sun, 25 Jul 2021 14:44:24 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jul 2021 10:17:23 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Sep 2021 11:58:00 GMT",
"version": "v3"
}
] | 2021-09-08 | [
[
"Xue",
"Fuzhao",
""
],
[
"Shi",
"Ziji",
""
],
[
"Wei",
"Futao",
""
],
[
"Lou",
"Yuxuan",
""
],
[
"Liu",
"Yong",
""
],
[
"You",
"Yang",
""
]
] | More transformer blocks with residual connections have recently achieved impressive results on various tasks. To achieve better performance with fewer trainable parameters, recent methods are proposed to go shallower by parameter sharing or model compressing along with the depth. However, weak modeling capacity limits their performance. Contrastively, going wider by inducing more trainable matrices and parameters would produce a huge model requiring advanced parallelism for training and inference. In this paper, we propose a parameter-efficient framework, going wider instead of deeper. Specifically, following existing works, we adapt parameter sharing to compress along depth. But, such deployment would limit the performance. To maximize modeling capacity, we scale along model width by replacing feed-forward network (FFN) with mixture-of-experts (MoE). Across transformer blocks, instead of sharing normalization layers, we propose to use individual layernorms to transform various semantic representations in a more parameter-efficient way. To evaluate our plug-and-run framework, we design WideNet and conduct comprehensive experiments on popular computer vision and natural language processing benchmarks. On ImageNet-1K, our best model outperforms Vision Transformer (ViT) by $1.5\%$ with $0.72 \times$ trainable parameters. Using $0.46 \times$ and $0.13 \times$ parameters, our WideNet can still surpass ViT and ViT-MoE by $0.8\%$ and $2.1\%$, respectively. On four natural language processing datasets, WideNet outperforms ALBERT by $1.8\%$ on average and surpasses BERT using factorized embedding parameterization by $0.8\%$ with fewer parameters. |
2011.11688 | Kenneth Joseph | Zijian An, Kenneth Joseph | An analysis of replies to Trump's tweets | Accepted at ICWSM'21 | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Donald Trump has tweeted thousands of times during his presidency. These
public statements are an increasingly important way through which Trump
communicates his political and personal views. A better understanding of the
way the American public consumes and responds to these tweets is therefore
critical. In the present work, we address both consumption of and response to
Trump's tweets by studying replies to them on Twitter. With respect to
response, we find that a small number of older, white, left-leaning, and female
Americans are responsible for the vast majority of replies to Trump's tweets.
These individuals also attend to a broader range of Trump's tweets than the
rest of the individuals we study. With respect to consumption, we note that
Trump's tweets are often viewed not in isolation, but rather in the context of
a set of algorithmically-curated replies. These replies may therefore color the
way Americans consume Trump's tweets. To this end, we find some evidence that
Twitter accounts see replies in line with their political leanings. However, we
show that this can be largely, although not entirely, attributed to the fact
that Twitter is more likely to show replies by accounts a user follows. As a
basis for comparison, all results for Trump are compared and contrasted with
replies to Joe Biden's tweets.
| [
{
"created": "Mon, 23 Nov 2020 19:29:29 GMT",
"version": "v1"
}
] | 2020-11-25 | [
[
"An",
"Zijian",
""
],
[
"Joseph",
"Kenneth",
""
]
] | Donald Trump has tweeted thousands of times during his presidency. These public statements are an increasingly important way through which Trump communicates his political and personal views. A better understanding of the way the American public consumes and responds to these tweets is therefore critical. In the present work, we address both consumption of and response to Trump's tweets by studying replies to them on Twitter. With respect to response, we find that a small number of older, white, left-leaning, and female Americans are responsible for the vast majority of replies to Trump's tweets. These individuals also attend to a broader range of Trump's tweets than the rest of the individuals we study. With respect to consumption, we note that Trump's tweets are often viewed not in isolation, but rather in the context of a set of algorithmically-curated replies. These replies may therefore color the way Americans consume Trump's tweets. To this end, we find some evidence that Twitter accounts see replies in line with their political leanings. However, we show that this can be largely, although not entirely, attributed to the fact that Twitter is more likely to show replies by accounts a user follows. As a basis for comparison, all results for Trump are compared and contrasted with replies to Joe Biden's tweets. |
2111.03977 | Robert Wilson | Robert L. Wilson, Daniel Browne, Jonathan Wagstaff, and Steve McGuire | A Virtual Reality Simulation Pipeline for Online Mental Workload
Modeling | 7 pages, 4 figures, and 1 table Currently under review as a
conference paper for IEEE VR 2022, v2 - Spelling Corrections | null | null | null | cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | Seamless human robot interaction (HRI) and cooperative human-robot (HR)
teaming critically rely upon accurate and timely human mental workload (MW)
models. Cognitive Load Theory (CLT) suggests representative physical
environments produce representative mental processes; physical environment
fidelity corresponds with improved modeling accuracy. Virtual Reality (VR)
systems provide immersive environments capable of replicating complicated
scenarios, particularly those associated with high-risk, high-stress scenarios.
Passive biosignal modeling shows promise as a noninvasive method of MW
modeling. However, VR systems rarely include multimodal psychophysiological
feedback or capitalize on biosignal data for online MW modeling. Here, we
develop a novel VR simulation pipeline, inspired by the NASA Multi-Attribute
Task Battery II (MATB-II) task architecture, capable of synchronous collection
of objective performance, subjective performance, and passive human biosignals
in a simulated hazardous exploration environment. Our system design extracts
and publishes biofeatures through the Robot Operating System (ROS),
facilitating real time psychophysiology-based MW model integration into
complete end-to-end systems. A VR simulation pipeline capable of evaluating MWs
online could be foundational for advancing HR systems and VR experiences by
enabling these systems to adaptively alter their behaviors in response to
operator MW.
| [
{
"created": "Sun, 7 Nov 2021 00:50:39 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Nov 2021 16:09:45 GMT",
"version": "v2"
}
] | 2021-11-25 | [
[
"Wilson",
"Robert L.",
""
],
[
"Browne",
"Daniel",
""
],
[
"Wagstaff",
"Jonathan",
""
],
[
"McGuire",
"Steve",
""
]
] | Seamless human robot interaction (HRI) and cooperative human-robot (HR) teaming critically rely upon accurate and timely human mental workload (MW) models. Cognitive Load Theory (CLT) suggests representative physical environments produce representative mental processes; physical environment fidelity corresponds with improved modeling accuracy. Virtual Reality (VR) systems provide immersive environments capable of replicating complicated scenarios, particularly those associated with high-risk, high-stress scenarios. Passive biosignal modeling shows promise as a noninvasive method of MW modeling. However, VR systems rarely include multimodal psychophysiological feedback or capitalize on biosignal data for online MW modeling. Here, we develop a novel VR simulation pipeline, inspired by the NASA Multi-Attribute Task Battery II (MATB-II) task architecture, capable of synchronous collection of objective performance, subjective performance, and passive human biosignals in a simulated hazardous exploration environment. Our system design extracts and publishes biofeatures through the Robot Operating System (ROS), facilitating real time psychophysiology-based MW model integration into complete end-to-end systems. A VR simulation pipeline capable of evaluating MWs online could be foundational for advancing HR systems and VR experiences by enabling these systems to adaptively alter their behaviors in response to operator MW. |
2404.17697 | Thomas Billington | Thomas Billington, Ansh Gwash, Aadi Kothari, Lucas Izquierdo, Timothy
Talty | Enhancing Track Management Systems with Vehicle-To-Vehicle Enabled
Sensor Fusion | 6 pages, 5 figures | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the rapidly advancing landscape of connected and automated vehicles (CAV),
the integration of Vehicle-to-Everything (V2X) communication in traditional
fusion systems presents a promising avenue for enhancing vehicle perception.
Addressing current limitations with vehicle sensing, this paper proposes a
novel Vehicle-to-Vehicle (V2V) enabled track management system that leverages
the synergy between V2V signals and detections from radar and camera sensors.
The core innovation lies in the creation of independent priority track lists,
consisting of fused detections validated through V2V communication. This
approach enables more flexible and resilient thresholds for track management,
particularly in scenarios with numerous occlusions where the tracked objects
move outside the field of view of the perception sensors. The proposed system
considers the implications of falsification of V2X signals which is combated
through an initial vehicle identification process using detection from
perception sensors. Presented are the fusion algorithm, simulated environments,
and validation mechanisms. Experimental results demonstrate the improved
accuracy and robustness of the proposed system in common driving scenarios,
highlighting its potential to advance the reliability and efficiency of
autonomous vehicles.
| [
{
"created": "Fri, 26 Apr 2024 20:54:44 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Billington",
"Thomas",
""
],
[
"Gwash",
"Ansh",
""
],
[
"Kothari",
"Aadi",
""
],
[
"Izquierdo",
"Lucas",
""
],
[
"Talty",
"Timothy",
""
]
] | In the rapidly advancing landscape of connected and automated vehicles (CAV), the integration of Vehicle-to-Everything (V2X) communication in traditional fusion systems presents a promising avenue for enhancing vehicle perception. Addressing current limitations with vehicle sensing, this paper proposes a novel Vehicle-to-Vehicle (V2V) enabled track management system that leverages the synergy between V2V signals and detections from radar and camera sensors. The core innovation lies in the creation of independent priority track lists, consisting of fused detections validated through V2V communication. This approach enables more flexible and resilient thresholds for track management, particularly in scenarios with numerous occlusions where the tracked objects move outside the field of view of the perception sensors. The proposed system considers the implications of falsification of V2X signals which is combated through an initial vehicle identification process using detection from perception sensors. Presented are the fusion algorithm, simulated environments, and validation mechanisms. Experimental results demonstrate the improved accuracy and robustness of the proposed system in common driving scenarios, highlighting its potential to advance the reliability and efficiency of autonomous vehicles. |
2312.15122 | Andreas Pasternak | Moritz Harmel, Anubhav Paras, Andreas Pasternak, Nicholas Roy, Gary
Linscott | Scaling Is All You Need: Autonomous Driving with JAX-Accelerated
Reinforcement Learning | null | null | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Reinforcement learning has been demonstrated to outperform even the best
humans in complex domains like video games. However, running reinforcement
learning experiments on the required scale for autonomous driving is extremely
difficult. Building a large scale reinforcement learning system and
distributing it across many GPUs is challenging. Gathering experience during
training on real world vehicles is prohibitive from a safety and scalability
perspective. Therefore, an efficient and realistic driving simulator is
required that uses a large amount of data from real-world driving. We bring
these capabilities together and conduct large-scale reinforcement learning
experiments for autonomous driving. We demonstrate that our policy performance
improves with increasing scale. Our best performing policy reduces the failure
rate by 64% while improving the rate of driving progress by 25% compared to the
policies produced by state-of-the-art machine learning for autonomous driving.
| [
{
"created": "Sat, 23 Dec 2023 00:07:06 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Feb 2024 00:07:19 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Feb 2024 19:39:19 GMT",
"version": "v3"
}
] | 2024-02-12 | [
[
"Harmel",
"Moritz",
""
],
[
"Paras",
"Anubhav",
""
],
[
"Pasternak",
"Andreas",
""
],
[
"Roy",
"Nicholas",
""
],
[
"Linscott",
"Gary",
""
]
] | Reinforcement learning has been demonstrated to outperform even the best humans in complex domains like video games. However, running reinforcement learning experiments on the required scale for autonomous driving is extremely difficult. Building a large scale reinforcement learning system and distributing it across many GPUs is challenging. Gathering experience during training on real world vehicles is prohibitive from a safety and scalability perspective. Therefore, an efficient and realistic driving simulator is required that uses a large amount of data from real-world driving. We bring these capabilities together and conduct large-scale reinforcement learning experiments for autonomous driving. We demonstrate that our policy performance improves with increasing scale. Our best performing policy reduces the failure rate by 64% while improving the rate of driving progress by 25% compared to the policies produced by state-of-the-art machine learning for autonomous driving. |
2111.11703 | Taketo Akama | Taketo Akama | A Contextual Latent Space Model: Subsequence Modulation in Melodic
Sequence | 22nd International Society for Music Information Retrieval Conference
(ISMIR), 2021; 8 pages | null | null | null | cs.LG cs.AI cs.SD eess.AS stat.ML | http://creativecommons.org/licenses/by/4.0/ | Some generative models for sequences such as music and text allow us to edit
only subsequences, given surrounding context sequences, which plays an
important part in steering generation interactively. However, editing
subsequences mainly involves randomly resampling subsequences from a possible
generation space. We propose a contextual latent space model (CLSM) in order
for users to be able to explore subsequence generation with a sense of
direction in the generation space, e.g., interpolation, as well as exploring
variations -- semantically similar possible subsequences. A context-informed
prior and decoder constitute the generative model of CLSM, and a context
position-informed encoder is the inference model. In experiments, we use a
monophonic symbolic music dataset, demonstrating that our contextual latent
space is smoother in interpolation than baselines, and the quality of generated
samples is superior to baseline models. The generation examples are available
online.
| [
{
"created": "Tue, 23 Nov 2021 07:51:39 GMT",
"version": "v1"
}
] | 2021-11-24 | [
[
"Akama",
"Taketo",
""
]
] | Some generative models for sequences such as music and text allow us to edit only subsequences, given surrounding context sequences, which plays an important part in steering generation interactively. However, editing subsequences mainly involves randomly resampling subsequences from a possible generation space. We propose a contextual latent space model (CLSM) in order for users to be able to explore subsequence generation with a sense of direction in the generation space, e.g., interpolation, as well as exploring variations -- semantically similar possible subsequences. A context-informed prior and decoder constitute the generative model of CLSM, and a context position-informed encoder is the inference model. In experiments, we use a monophonic symbolic music dataset, demonstrating that our contextual latent space is smoother in interpolation than baselines, and the quality of generated samples is superior to baseline models. The generation examples are available online. |
2309.08585 | Yuan Jianlong | Xiaonan Lu, Jianlong Yuan, Ruigang Niu, Yuan Hu, Fan Wang | Viewpoint Integration and Registration with Vision Language Foundation
Model for Image Change Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the development of pre-trained vision language foundation models
(VLFMs) has led to remarkable performance in many tasks. However, these models
tend to have strong single-image understanding capability but lack the ability
to understand multiple images. Therefore, they cannot be directly applied to
cope with image change understanding (ICU), which requires models to capture
actual changes between multiple images and describe them in language. In this
paper, we discover that existing VLFMs perform poorly when applied directly to
ICU because of the following problems: (1) VLFMs generally learn the global
representation of a single image, while ICU requires capturing nuances between
multiple images. (2) The ICU performance of VLFMs is significantly affected by
viewpoint variations, which is caused by the altered relationships between
objects when viewpoint changes. To address these problems, we propose a
Viewpoint Integration and Registration method. Concretely, we introduce a fused
adapter image encoder that fine-tunes pre-trained encoders by inserting
designed trainable adapters and fused adapters, to effectively capture nuances
between image pairs. Additionally, a viewpoint registration flow and a semantic
emphasizing module are designed to reduce the performance degradation caused by
viewpoint variations in the visual and semantic space, respectively.
Experimental results on CLEVR-Change and Spot-the-Diff demonstrate that our
method achieves state-of-the-art performance in all metrics.
| [
{
"created": "Fri, 15 Sep 2023 17:41:29 GMT",
"version": "v1"
}
] | 2023-09-18 | [
[
"Lu",
"Xiaonan",
""
],
[
"Yuan",
"Jianlong",
""
],
[
"Niu",
"Ruigang",
""
],
[
"Hu",
"Yuan",
""
],
[
"Wang",
"Fan",
""
]
] | Recently, the development of pre-trained vision language foundation models (VLFMs) has led to remarkable performance in many tasks. However, these models tend to have strong single-image understanding capability but lack the ability to understand multiple images. Therefore, they cannot be directly applied to cope with image change understanding (ICU), which requires models to capture actual changes between multiple images and describe them in language. In this paper, we discover that existing VLFMs perform poorly when applied directly to ICU because of the following problems: (1) VLFMs generally learn the global representation of a single image, while ICU requires capturing nuances between multiple images. (2) The ICU performance of VLFMs is significantly affected by viewpoint variations, which is caused by the altered relationships between objects when viewpoint changes. To address these problems, we propose a Viewpoint Integration and Registration method. Concretely, we introduce a fused adapter image encoder that fine-tunes pre-trained encoders by inserting designed trainable adapters and fused adapters, to effectively capture nuances between image pairs. Additionally, a viewpoint registration flow and a semantic emphasizing module are designed to reduce the performance degradation caused by viewpoint variations in the visual and semantic space, respectively. Experimental results on CLEVR-Change and Spot-the-Diff demonstrate that our method achieves state-of-the-art performance in all metrics. |
2304.10996 | Maryem Rhanoui | Ayoub Harnoune and Maryem Rhanoui and Mounia Mikram and Siham Yousfi
and Zineb Elkaimbillah and Bouchra El Asri | BERT Based Clinical Knowledge Extraction for Biomedical Knowledge Graph
Construction and Analysis | null | null | 10.1016/j.cmpbup.2021.100042 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background : Knowledge is evolving over time, often as a result of new
discoveries or changes in the adopted methods of reasoning. Also, new facts or
evidence may become available, leading to new understandings of complex
phenomena. This is particularly true in the biomedical field, where scientists
and physicians are constantly striving to find new methods of diagnosis,
treatment and eventually cure. Knowledge Graphs (KGs) offer a real way of
organizing and retrieving the massive and growing amount of biomedical
knowledge.
Objective : We propose an end-to-end approach for knowledge extraction and
analysis from biomedical clinical notes using the Bidirectional Encoder
Representations from Transformers (BERT) model and Conditional Random Field
(CRF) layer.
Methods : The approach is based on knowledge graphs, which can effectively
process abstract biomedical concepts such as relationships and interactions
between medical entities. Besides offering an intuitive way to visualize these
concepts, KGs can solve more complex knowledge retrieval problems by
simplifying them into simpler representations or by transforming the problems
into representations from different perspectives. We created a biomedical
Knowledge Graph using Natural Language Processing models for named entity
recognition and relation extraction. The generated biomedical knowledge graphs
(KGs) are then used for question answering.
Results : The proposed framework can successfully extract relevant structured
information with high accuracy (90.7% for Named-entity recognition (NER), 88%
for relation extraction (RE)), according to experimental findings based on
real-world 505 patient biomedical unstructured clinical notes.
Conclusions : In this paper, we propose a novel end-to-end system for the
construction of a biomedical knowledge graph from clinical text using a
variation of BERT models.
| [
{
"created": "Fri, 21 Apr 2023 14:45:33 GMT",
"version": "v1"
}
] | 2023-04-24 | [
[
"Harnoune",
"Ayoub",
""
],
[
"Rhanoui",
"Maryem",
""
],
[
"Mikram",
"Mounia",
""
],
[
"Yousfi",
"Siham",
""
],
[
"Elkaimbillah",
"Zineb",
""
],
[
"Asri",
"Bouchra El",
""
]
] | Background : Knowledge is evolving over time, often as a result of new discoveries or changes in the adopted methods of reasoning. Also, new facts or evidence may become available, leading to new understandings of complex phenomena. This is particularly true in the biomedical field, where scientists and physicians are constantly striving to find new methods of diagnosis, treatment and eventually cure. Knowledge Graphs (KGs) offer a real way of organizing and retrieving the massive and growing amount of biomedical knowledge. Objective : We propose an end-to-end approach for knowledge extraction and analysis from biomedical clinical notes using the Bidirectional Encoder Representations from Transformers (BERT) model and Conditional Random Field (CRF) layer. Methods : The approach is based on knowledge graphs, which can effectively process abstract biomedical concepts such as relationships and interactions between medical entities. Besides offering an intuitive way to visualize these concepts, KGs can solve more complex knowledge retrieval problems by simplifying them into simpler representations or by transforming the problems into representations from different perspectives. We created a biomedical Knowledge Graph using Natural Language Processing models for named entity recognition and relation extraction. The generated biomedical knowledge graphs (KGs) are then used for question answering. Results : The proposed framework can successfully extract relevant structured information with high accuracy (90.7% for Named-entity recognition (NER), 88% for relation extraction (RE)), according to experimental findings based on real-world 505 patient biomedical unstructured clinical notes. Conclusions : In this paper, we propose a novel end-to-end system for the construction of a biomedical knowledge graph from clinical text using a variation of BERT models. |
1903.07507 | Ishan Jindal | Ishan Jindal, Daniel Pressel, Brian Lester, Matthew Nokleby | An Effective Label Noise Model for DNN Text Classification | Accepted at NAACL-HLT 2019 Main Conference Long paper | null | null | null | cs.LG cs.CL cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Because large, human-annotated datasets suffer from labeling errors, it is
crucial to be able to train deep neural networks in the presence of label
noise. While training image classification models with label noise has
received much attention, training text classification models has not. In this
paper, we propose an approach to training deep networks that is robust to label
noise. This approach introduces a non-linear processing layer (noise model)
that models the statistics of the label noise into a convolutional neural
network (CNN) architecture. The noise model and the CNN weights are learned
jointly from noisy training data, which prevents the model from overfitting to
erroneous labels. Through extensive experiments on several text classification
datasets, we show that this approach enables the CNN to learn better sentence
representations and is robust even to extreme label noise. We find that proper
initialization and regularization of this noise model is critical. Further, by
contrast to results focusing on large batch sizes for mitigating label noise
for image classification, we find that altering the batch size does not have
much effect on classification performance.
| [
{
"created": "Mon, 18 Mar 2019 15:27:50 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Jindal",
"Ishan",
""
],
[
"Pressel",
"Daniel",
""
],
[
"Lester",
"Brian",
""
],
[
"Nokleby",
"Matthew",
""
]
] | Because large, human-annotated datasets suffer from labeling errors, it is crucial to be able to train deep neural networks in the presence of label noise. While training image classification models with label noise has received much attention, training text classification models has not. In this paper, we propose an approach to training deep networks that is robust to label noise. This approach introduces a non-linear processing layer (noise model) that models the statistics of the label noise into a convolutional neural network (CNN) architecture. The noise model and the CNN weights are learned jointly from noisy training data, which prevents the model from overfitting to erroneous labels. Through extensive experiments on several text classification datasets, we show that this approach enables the CNN to learn better sentence representations and is robust even to extreme label noise. We find that proper initialization and regularization of this noise model is critical. Further, by contrast to results focusing on large batch sizes for mitigating label noise for image classification, we find that altering the batch size does not have much effect on classification performance. |
2312.13770 | Zheheng Jiang | Zheheng Jiang, Hossein Rahmani, Sue Black, Bryan M. Williams | 3D Points Splatting for Real-Time Dynamic Hand Reconstruction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present 3D Points Splatting Hand Reconstruction (3D-PSHR), a real-time and
photo-realistic hand reconstruction approach. We propose a self-adaptive
canonical points upsampling strategy to achieve high-resolution hand geometry
representation. This is followed by a self-adaptive deformation that deforms
the hand from the canonical space to the target pose, adapting to the dynamic
changing of canonical points which, in contrast to the common practice of
subdividing the MANO model, offers greater flexibility and results in improved
geometry fitting. To model texture, we disentangle the appearance color into
the intrinsic albedo and pose-aware shading, which are learned through a
Context-Attention module. Moreover, our approach allows the geometric and the
appearance models to be trained simultaneously in an end-to-end manner. We
demonstrate that our method is capable of producing animatable, photorealistic
and relightable hand reconstructions using multiple datasets, including
monocular videos captured with handheld smartphones and large-scale multi-view
videos featuring various hand poses. We also demonstrate that our approach
achieves real-time rendering speeds while simultaneously maintaining superior
performance compared to existing state-of-the-art methods.
| [
{
"created": "Thu, 21 Dec 2023 11:50:49 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Jiang",
"Zheheng",
""
],
[
"Rahmani",
"Hossein",
""
],
[
"Black",
"Sue",
""
],
[
"Williams",
"Bryan M.",
""
]
] | We present 3D Points Splatting Hand Reconstruction (3D-PSHR), a real-time and photo-realistic hand reconstruction approach. We propose a self-adaptive canonical points upsampling strategy to achieve high-resolution hand geometry representation. This is followed by a self-adaptive deformation that deforms the hand from the canonical space to the target pose, adapting to the dynamic changing of canonical points which, in contrast to the common practice of subdividing the MANO model, offers greater flexibility and results in improved geometry fitting. To model texture, we disentangle the appearance color into the intrinsic albedo and pose-aware shading, which are learned through a Context-Attention module. Moreover, our approach allows the geometric and the appearance models to be trained simultaneously in an end-to-end manner. We demonstrate that our method is capable of producing animatable, photorealistic and relightable hand reconstructions using multiple datasets, including monocular videos captured with handheld smartphones and large-scale multi-view videos featuring various hand poses. We also demonstrate that our approach achieves real-time rendering speeds while simultaneously maintaining superior performance compared to existing state-of-the-art methods. |
1412.7664 | Roshan Ragel | M.G.G.C.R. Salgado and R. G. Ragel | Register Spilling for Specific Application Domains in Application
Specific Instruction-set Processors | The 7th International Conference on Information and Automation for
Sustainability (ICIAfS) 2014 | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An Application Specific Instruction set Processor (ASIP) is an important
component in designing embedded systems. One of the problems in designing an
instruction set for such processors is determining the number of registers
needed in the processor that will optimize the computational time and the cost.
The performance of a processor may fall short due to register spilling, which
is caused by the lack of available registers in a processor. From a design
perspective, it will result in processors with great performance and low power
consumption if we can avoid register spilling by deciding a value for the
number of registers needed in an ASIP. However, as of now, it has not clearly
been recognized how the number of registers changes with different application
domains. In this paper, we evaluated whether different application domains have
any significant effect on register spilling and therefore the performance of a
processor so that we could use different numbers of registers when building
ASIPs for different application domains rather than using a constant set of
registers. Such utilization of registers will result in processors with high
performance, low cost and low power consumption.
| [
{
"created": "Wed, 24 Dec 2014 14:15:19 GMT",
"version": "v1"
}
] | 2014-12-25 | [
[
"Salgado",
"M. G. G. C. R.",
""
],
[
"Ragel",
"R. G.",
""
]
] | An Application Specific Instruction set Processor (ASIP) is an important component in designing embedded systems. One of the problems in designing an instruction set for such processors is determining the number of registers needed in the processor that will optimize the computational time and the cost. The performance of a processor may fall short due to register spilling, which is caused by the lack of available registers in a processor. From a design perspective, it will result in processors with great performance and low power consumption if we can avoid register spilling by deciding a value for the number of registers needed in an ASIP. However, as of now, it has not clearly been recognized how the number of registers changes with different application domains. In this paper, we evaluated whether different application domains have any significant effect on register spilling and therefore the performance of a processor so that we could use different numbers of registers when building ASIPs for different application domains rather than using a constant set of registers. Such utilization of registers will result in processors with high performance, low cost and low power consumption. |
1005.2894 | Fabian Kuhn | Fabian Kuhn, Christoph Lenzen, Thomas Locher, Rotem Oshman | Optimal Gradient Clock Synchronization in Dynamic Networks | 68 pages; conference version: 29th Annual ACM Symposium on Principles
of Distributed Computing (PODC 2010) | null | null | null | cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of clock synchronization in highly dynamic networks,
where communication links can appear or disappear at any time. The nodes in the
network are equipped with hardware clocks, but the rate of the hardware clocks
can vary arbitrarily within specific bounds, and the estimates that nodes can
obtain about the clock values of other nodes are inherently inaccurate. Our
goal in this setting is to output a logical clock at each node such that the
logical clocks of any two nodes are not too far apart, and nodes that remain
close to each other in the network for a long time are better synchronized than
distant nodes. This property is called gradient clock synchronization.
Gradient clock synchronization has been widely studied in the static setting,
where the network topology does not change. We show that the asymptotically
optimal bounds obtained for the static case also apply to our highly dynamic
setting: if two nodes remain at distance $d$ from each other for sufficiently
long, it is possible to upper bound the difference between their clock values
by $O(d \log (D / d))$, where $D$ is the diameter of the network. This is known
to be optimal even for static networks. Furthermore, we show that our algorithm
has optimal stabilization time: when a path of length $d$ appears between two
nodes, the time required until the clock skew between the two nodes is reduced
to $O(d \log (D / d))$ is $O(D)$, which we prove to be optimal. Finally, the
techniques employed for the more intricate analysis of the algorithm for
dynamic graphs provide additional insights that are also of interest for the
static setting. In particular, we establish self-stabilization of the gradient
property within $O(D)$ time.
| [
{
"created": "Mon, 17 May 2010 11:56:31 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Aug 2010 08:01:04 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Dec 2018 13:00:54 GMT",
"version": "v3"
}
] | 2018-12-11 | [
[
"Kuhn",
"Fabian",
""
],
[
"Lenzen",
"Christoph",
""
],
[
"Locher",
"Thomas",
""
],
[
"Oshman",
"Rotem",
""
]
] | We study the problem of clock synchronization in highly dynamic networks, where communication links can appear or disappear at any time. The nodes in the network are equipped with hardware clocks, but the rate of the hardware clocks can vary arbitrarily within specific bounds, and the estimates that nodes can obtain about the clock values of other nodes are inherently inaccurate. Our goal in this setting is to output a logical clock at each node such that the logical clocks of any two nodes are not too far apart, and nodes that remain close to each other in the network for a long time are better synchronized than distant nodes. This property is called gradient clock synchronization. Gradient clock synchronization has been widely studied in the static setting, where the network topology does not change. We show that the asymptotically optimal bounds obtained for the static case also apply to our highly dynamic setting: if two nodes remain at distance $d$ from each other for sufficiently long, it is possible to upper bound the difference between their clock values by $O(d \log (D / d))$, where $D$ is the diameter of the network. This is known to be optimal even for static networks. Furthermore, we show that our algorithm has optimal stabilization time: when a path of length $d$ appears between two nodes, the time required until the clock skew between the two nodes is reduced to $O(d \log (D / d))$ is $O(D)$, which we prove to be optimal. Finally, the techniques employed for the more intricate analysis of the algorithm for dynamic graphs provide additional insights that are also of interest for the static setting. In particular, we establish self-stabilization of the gradient property within $O(D)$ time. |
2011.05519 | Dilusha Weeraddana Dr | Dilusha Weeraddana, Nguyen Lu Dang Khoa, Lachlan O Neil, Weihong Wang,
and Chen Cai | Energy consumption forecasting using a stacked nonparametric Bayesian
approach | Conference: ECML-PKDD 2020 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, the process of forecasting household energy consumption is
studied within the framework of the nonparametric Gaussian Process (GP), using
multiple short time series data. As we begin to use smart meter data to paint a
clearer picture of residential electricity use, it becomes increasingly
apparent that we must also construct a detailed picture and understanding of
consumer's complex relationship with gas consumption. Both electricity and gas
consumption patterns are highly dependent on various factors, and the intricate
interplay of these factors is sophisticated. Moreover, since typical gas
consumption data is low granularity with very few time points, naive
application of conventional time-series forecasting techniques can lead to
severe over-fitting. Given these considerations, we construct a stacked GP
method where the predictive posteriors of each GP applied to each task are used
in the prior and likelihood of the next level GP. We apply our model to a
real-world dataset to forecast energy consumption in Australian households
across several states. We compare intuitively appealing results against other
commonly used machine learning techniques. Overall, the results indicate that
the proposed stacked GP model outperforms other forecasting techniques that we
tested, especially when we have multiple short time-series instances.
| [
{
"created": "Wed, 11 Nov 2020 02:27:00 GMT",
"version": "v1"
}
] | 2020-11-12 | [
[
"Weeraddana",
"Dilusha",
""
],
[
"Khoa",
"Nguyen Lu Dang",
""
],
[
"Neil",
"Lachlan O",
""
],
[
"Wang",
"Weihong",
""
],
[
"Cai",
"Chen",
""
]
] | In this paper, the process of forecasting household energy consumption is studied within the framework of the nonparametric Gaussian Process (GP), using multiple short time series data. As we begin to use smart meter data to paint a clearer picture of residential electricity use, it becomes increasingly apparent that we must also construct a detailed picture and understanding of consumer's complex relationship with gas consumption. Both electricity and gas consumption patterns are highly dependent on various factors, and the intricate interplay of these factors is sophisticated. Moreover, since typical gas consumption data is low granularity with very few time points, naive application of conventional time-series forecasting techniques can lead to severe over-fitting. Given these considerations, we construct a stacked GP method where the predictive posteriors of each GP applied to each task are used in the prior and likelihood of the next level GP. We apply our model to a real-world dataset to forecast energy consumption in Australian households across several states. We compare intuitively appealing results against other commonly used machine learning techniques. Overall, the results indicate that the proposed stacked GP model outperforms other forecasting techniques that we tested, especially when we have multiple short time-series instances. |
2102.07515 | Pierre Vial | Pierre Vial | Sequence Types and Infinitary Semantics | 68 pages, 18 figures | null | null | null | cs.LO cs.PL | http://creativecommons.org/licenses/by/4.0/ | We introduce a new representation of non-idempotent intersection types, using
\textbf{sequences} (families indexed with natural numbers) instead of lists or
multisets. This allows scaling up \textbf{intersection type} theory to the
infinitary $\lambda$-calculus. We thus characterize hereditary head
normalization, which gives a positive answer to a question known as
\textbf{Klop's Problem}. On our way, we use \textbf{non-idempotent
intersection} to retrieve some well-known results on infinitary terms.
| [
{
"created": "Mon, 15 Feb 2021 12:33:41 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Dec 2021 09:12:09 GMT",
"version": "v2"
}
] | 2021-12-16 | [
[
"Vial",
"Pierre",
""
]
] | We introduce a new representation of non-idempotent intersection types, using \textbf{sequences} (families indexed with natural numbers) instead of lists or multisets. This allows scaling up \textbf{intersection type} theory to the infinitary $\lambda$-calculus. We thus characterize hereditary head normalization, which gives a positive answer to a question known as \textbf{Klop's Problem}. On our way, we use \textbf{non-idempotent intersection} to retrieve some well-known results on infinitary terms. |
1111.4045 | Javier Parra-Arnau | David Rebollo-Monedero, Javier Parra-Arnau, Jordi Forn\'e | An Information-Theoretic Privacy Criterion for Query Forgery in
Information Retrieval | This paper has 15 pages and 1 figure | null | null | null | cs.IT cs.CR math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous work, we presented a novel information-theoretic privacy
criterion for query forgery in the domain of information retrieval. Our
criterion measured privacy risk as a divergence between the user's and the
population's query distribution, and contemplated the entropy of the user's
distribution as a particular case. In this work, we make a twofold
contribution. First, we thoroughly interpret and justify the privacy metric
proposed in our previous work, elaborating on the intimate connection between
the celebrated method of entropy maximization and the use of entropies and
divergences as measures of privacy. Secondly, we attempt to bridge the gap
between the privacy and the information-theoretic communities by substantially
adapting some technicalities of our original work to reach a wider audience,
not intimately familiar with information theory and the method of types.
| [
{
"created": "Thu, 17 Nov 2011 10:06:49 GMT",
"version": "v1"
}
] | 2015-03-19 | [
[
"Rebollo-Monedero",
"David",
""
],
[
"Parra-Arnau",
"Javier",
""
],
[
"Forné",
"Jordi",
""
]
] | In previous work, we presented a novel information-theoretic privacy criterion for query forgery in the domain of information retrieval. Our criterion measured privacy risk as a divergence between the user's and the population's query distribution, and contemplated the entropy of the user's distribution as a particular case. In this work, we make a twofold contribution. First, we thoroughly interpret and justify the privacy metric proposed in our previous work, elaborating on the intimate connection between the celebrated method of entropy maximization and the use of entropies and divergences as measures of privacy. Secondly, we attempt to bridge the gap between the privacy and the information-theoretic communities by substantially adapting some technicalities of our original work to reach a wider audience, not intimately familiar with information theory and the method of types. |
2308.07522 | Victor Zitian Chen | Victor Zitian Chen | Finding Stakeholder-Material Information from 10-K Reports using
Fine-Tuned BERT and LSTM Models | null | null | null | null | cs.CL cs.CE | http://creativecommons.org/licenses/by/4.0/ | All public companies are required by federal securities law to disclose their
business and financial activities in their annual 10-K reports. Each report
typically spans hundreds of pages, making it difficult for human readers to
identify and extract the material information efficiently. To solve the
problem, I have fine-tuned BERT models and RNN models with LSTM layers to
identify stakeholder-material information, defined as statements that carry
information about a company's influence on its stakeholders, including
customers, employees, investors, and the community and natural environment. The
existing practice uses keyword search to identify such information, which is my
baseline model. Using business expert-labeled training data of nearly 6,000
sentences from 62 10-K reports published in 2022, the best model has achieved
an accuracy of 0.904 and an F1 score of 0.899 in test data, significantly above
the baseline model's 0.781 and 0.749 respectively. Furthermore, the same work
was replicated on more granular taxonomies, based on which four distinct groups
of stakeholders (i.e., customers, investors, employees, and the community and
natural environment) are tested separately. Similarly, fine-tuned BERT models
outperformed LSTM and the baseline. The implications for industry application
and ideas for future extensions are discussed.
| [
{
"created": "Tue, 15 Aug 2023 01:25:34 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Chen",
"Victor Zitian",
""
]
] | All public companies are required by federal securities law to disclose their business and financial activities in their annual 10-K reports. Each report typically spans hundreds of pages, making it difficult for human readers to identify and extract the material information efficiently. To solve the problem, I have fine-tuned BERT models and RNN models with LSTM layers to identify stakeholder-material information, defined as statements that carry information about a company's influence on its stakeholders, including customers, employees, investors, and the community and natural environment. The existing practice uses keyword search to identify such information, which is my baseline model. Using business expert-labeled training data of nearly 6,000 sentences from 62 10-K reports published in 2022, the best model has achieved an accuracy of 0.904 and an F1 score of 0.899 in test data, significantly above the baseline model's 0.781 and 0.749 respectively. Furthermore, the same work was replicated on more granular taxonomies, based on which four distinct groups of stakeholders (i.e., customers, investors, employees, and the community and natural environment) are tested separately. Similarly, fine-tuned BERT models outperformed LSTM and the baseline. The implications for industry application and ideas for future extensions are discussed. |
2112.11701 | Rui Zhao | Rui Zhao, Jinming Song, Yufeng Yuan, Hu Haifeng, Yang Gao, Yi Wu,
Zhongqian Sun, Yang Wei | Maximum Entropy Population-Based Training for Zero-Shot Human-AI
Coordination | Accepted by NeurIPS Cooperative AI Workshop, 2021, link:
https://www.cooperativeai.com/workshop/neurips-2021#Workshop-Papers. Under
review at a conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of training a Reinforcement Learning (RL) agent that is
collaborative with humans without using any human data. Although such agents
can be obtained through self-play training, they can suffer significantly from
distributional shift when paired with unencountered partners, such as humans.
To mitigate this distributional shift, we propose Maximum Entropy
Population-based training (MEP). In MEP, agents in the population are trained
with our derived Population Entropy bonus to promote both pairwise diversity
between agents and individual diversity of agents themselves, and a common best
agent is trained by pairing with agents in this diversified population via
prioritized sampling. The prioritization is dynamically adjusted based on the
training progress. We demonstrate the effectiveness of our method MEP, with
comparison to Self-Play PPO (SP), Population-Based Training (PBT), Trajectory
Diversity (TrajeDi), and Fictitious Co-Play (FCP) in the Overcooked game
environment, with partners being human proxy models and real humans. A
supplementary video showing experimental results is available at
https://youtu.be/Xh-FKD0AAKE.
| [
{
"created": "Wed, 22 Dec 2021 07:19:36 GMT",
"version": "v1"
},
{
"created": "Mon, 23 May 2022 06:43:58 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Jun 2022 05:15:20 GMT",
"version": "v3"
}
] | 2022-06-28 | [
[
"Zhao",
"Rui",
""
],
[
"Song",
"Jinming",
""
],
[
"Yuan",
"Yufeng",
""
],
[
"Haifeng",
"Hu",
""
],
[
"Gao",
"Yang",
""
],
[
"Wu",
"Yi",
""
],
[
"Sun",
"Zhongqian",
""
],
[
"Wei",
"Yang",
""
]
] | We study the problem of training a Reinforcement Learning (RL) agent that is collaborative with humans without using any human data. Although such agents can be obtained through self-play training, they can suffer significantly from distributional shift when paired with unencountered partners, such as humans. To mitigate this distributional shift, we propose Maximum Entropy Population-based training (MEP). In MEP, agents in the population are trained with our derived Population Entropy bonus to promote both pairwise diversity between agents and individual diversity of agents themselves, and a common best agent is trained by pairing with agents in this diversified population via prioritized sampling. The prioritization is dynamically adjusted based on the training progress. We demonstrate the effectiveness of our method MEP, with comparison to Self-Play PPO (SP), Population-Based Training (PBT), Trajectory Diversity (TrajeDi), and Fictitious Co-Play (FCP) in the Overcooked game environment, with partners being human proxy models and real humans. A supplementary video showing experimental results is available at https://youtu.be/Xh-FKD0AAKE. |
2009.11465 | Yanshi Luo | Yanshi Luo, Abdeslam Boularias and Mridul Aanjaneya | Model Identification and Control of a Low-Cost Wheeled Mobile Robot
Using Differentiable Physics | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the design of a low-cost wheeled mobile robot, and an analytical
model for predicting its motion under the influence of motor torques and
friction forces. Using our proposed model, we show how to analytically compute
the gradient of an appropriate loss function that measures the deviation
between predicted motion trajectories and real-world trajectories, which are
estimated using Apriltags and an overhead camera. These analytical gradients
allow us to automatically infer the unknown friction coefficients, by
minimizing the loss function using gradient descent. Motion trajectories that
are predicted by the optimized model are in excellent agreement with their
real-world counterparts. Experiments show that our proposed approach is
computationally superior to existing black-box system identification methods
and other data-driven techniques, and also requires very few real-world samples
for accurate trajectory prediction. The proposed approach combines the data
efficiency of analytical models based on first principles, with the flexibility
of data-driven methods, which makes it appropriate for low-cost robots. Using
the learned model and our gradient-based optimization approach, we show how to
automatically compute motor control signals for driving the robot along
pre-specified curves.
| [
{
"created": "Thu, 24 Sep 2020 03:27:28 GMT",
"version": "v1"
}
] | 2020-09-25 | [
[
"Luo",
"Yanshi",
""
],
[
"Boularias",
"Abdeslam",
""
],
[
"Aanjaneya",
"Mridul",
""
]
] | We present the design of a low-cost wheeled mobile robot, and an analytical model for predicting its motion under the influence of motor torques and friction forces. Using our proposed model, we show how to analytically compute the gradient of an appropriate loss function, that measures the deviation between predicted motion trajectories and real-world trajectories, which are estimated using Apriltags and an overhead camera. These analytical gradients allow us to automatically infer the unknown friction coefficients, by minimizing the loss function using gradient descent. Motion trajectories that are predicted by the optimized model are in excellent agreement with their real-world counterparts. Experiments show that our proposed approach is computationally superior to existing black-box system identification methods and other data-driven techniques, and also requires very few real-world samples for accurate trajectory prediction. The proposed approach combines the data efficiency of analytical models based on first principles, with the flexibility of data-driven methods, which makes it appropriate for low-cost robots. Using the learned model and our gradient-based optimization approach, we show how to automatically compute motor control signals for driving the robot along pre-specified curves. |
2210.15593 | Shikhar Makhija | Udit Kumar Agarwal, Shikhar Makhija, Varun Tripathi and Kunwar Singh | An Investigation into Neuromorphic ICs using Memristor-CMOS Hybrid
Circuits | Bachelor's thesis | null | null | null | cs.NE cs.AR eess.IV eess.SP | http://creativecommons.org/licenses/by/4.0/ | The memristance of a memristor depends on the amount of charge flowing
through it and when current stops flowing through it, it remembers the state.
Thus, memristors are extremely suited for implementation of memory units.
Memristors find great application in neuromorphic circuits as it is possible to
couple memory and processing, compared to traditional Von-Neumann digital
architectures where memory and processing are separate. Neural networks have a
layered structure where information passes from one layer to another and each
of these layers have the possibility of a high degree of parallelism.
CMOS-Memristor based neural network accelerators provide a method of speeding
up neural networks by making use of this parallelism and analog computation. In
this project we have conducted an initial investigation into the current state
of the art implementation of memristor based programming circuits. Various
memristor programming circuits and basic neuromorphic circuits have been
simulated. The next phase of our project revolved around designing basic
building blocks which can be used to design neural networks. A memristor bridge
based synaptic weighting block and an operational transconductance amplifier
based summing block were initially designed. We then designed activation
function blocks which are used to introduce controlled non-linearity. Blocks
for a basic rectified linear unit and a novel implementation of the hyperbolic
tangent function
have been proposed. An artificial neural network has been designed using these
blocks to validate and test their performance. We have also used these
fundamental blocks to design basic layers of Convolutional Neural Networks.
Convolutional Neural Networks are heavily used in image processing
applications. The core convolutional block has been designed and it has been
used as an image processing kernel to test its performance.
| [
{
"created": "Fri, 19 Aug 2022 18:04:03 GMT",
"version": "v1"
}
] | 2022-10-28 | [
[
"Agarwal",
"Udit Kumar",
""
],
[
"Makhija",
"Shikhar",
""
],
[
"Tripathi",
"Varun",
""
],
[
"Singh",
"Kunwar",
""
]
] | The memristance of a memristor depends on the amount of charge flowing through it and when current stops flowing through it, it remembers the state. Thus, memristors are extremely suited for implementation of memory units. Memristors find great application in neuromorphic circuits as it is possible to couple memory and processing, compared to traditional Von-Neumann digital architectures where memory and processing are separate. Neural networks have a layered structure where information passes from one layer to another and each of these layers have the possibility of a high degree of parallelism. CMOS-Memristor based neural network accelerators provide a method of speeding up neural networks by making use of this parallelism and analog computation. In this project we have conducted an initial investigation into the current state of the art implementation of memristor based programming circuits. Various memristor programming circuits and basic neuromorphic circuits have been simulated. The next phase of our project revolved around designing basic building blocks which can be used to design neural networks. A memristor bridge based synaptic weighting block and an operational transconductance amplifier based summing block were initially designed. We then designed activation function blocks which are used to introduce controlled non-linearity. Blocks for a basic rectified linear unit and a novel implementation of the hyperbolic tangent function have been proposed. An artificial neural network has been designed using these blocks to validate and test their performance. We have also used these fundamental blocks to design basic layers of Convolutional Neural Networks. Convolutional Neural Networks are heavily used in image processing applications. The core convolutional block has been designed and it has been used as an image processing kernel to test its performance. |
1304.6777 | Tauhid Zaman | Tauhid Zaman, Emily B. Fox, Eric T. Bradlow | A Bayesian approach for predicting the popularity of tweets | Published at http://dx.doi.org/10.1214/14-AOAS741 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org) | Annals of Applied Statistics 2014, Vol. 8, No. 3, 1583-1611 | 10.1214/14-AOAS741 | IMS-AOAS-AOAS741 | cs.SI physics.soc-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We predict the popularity of short messages called tweets created in the
micro-blogging site known as Twitter. We measure the popularity of a tweet by
the time-series path of its retweets, which is when people forward the tweet to
others. We develop a probabilistic model for the evolution of the retweets
using a Bayesian approach, and form predictions using only observations on the
retweet times and the local network or "graph" structure of the retweeters. We
obtain good step ahead forecasts and predictions of the final total number of
retweets even when only a small fraction (i.e., less than one tenth) of the
retweet path is observed. This translates to good predictions within a few
minutes of a tweet being posted, and has potential implications for
understanding the spread of broader ideas, memes, or trends in social networks.
| [
{
"created": "Thu, 25 Apr 2013 00:26:18 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Mar 2014 04:17:57 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Nov 2014 11:29:48 GMT",
"version": "v3"
}
] | 2014-11-25 | [
[
"Zaman",
"Tauhid",
""
],
[
"Fox",
"Emily B.",
""
],
[
"Bradlow",
"Eric T.",
""
]
] | We predict the popularity of short messages called tweets created in the micro-blogging site known as Twitter. We measure the popularity of a tweet by the time-series path of its retweets, which is when people forward the tweet to others. We develop a probabilistic model for the evolution of the retweets using a Bayesian approach, and form predictions using only observations on the retweet times and the local network or "graph" structure of the retweeters. We obtain good step ahead forecasts and predictions of the final total number of retweets even when only a small fraction (i.e., less than one tenth) of the retweet path is observed. This translates to good predictions within a few minutes of a tweet being posted, and has potential implications for understanding the spread of broader ideas, memes, or trends in social networks. |
2312.01315 | Wenlong Shi | Wenlong Shi, Changsheng Lu, Ming Shao, Yinjie Zhang, Siyu Xia, Piotr
Koniusz | Few-shot Shape Recognition by Learning Deep Shape-aware Features | Accepted by WACV 2024; 8 pages for main paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional shape descriptors have been gradually replaced by convolutional
neural networks due to their superior performance in feature extraction and
classification. The state-of-the-art methods recognize object shapes via image
reconstruction or pixel classification. However, these methods are biased
toward texture information and overlook the essential shape descriptions;
thus, they fail to generalize to unseen shapes. We are the first to propose a
few-shot shape descriptor (FSSD) to recognize object shapes given only one or a few
samples. We employ an embedding module for FSSD to extract
transformation-invariant shape features. Secondly, we develop a dual attention
mechanism to decompose and reconstruct the shape features via learnable shape
primitives. In this way, any shape can be formed through a finite set basis,
and the learned representation model is highly interpretable and extendable to
unseen shapes. Thirdly, we propose a decoding module to include the supervision
of shape masks and edges and align the original and reconstructed shape
features, enforcing the learned features to be more shape-aware. Lastly, all
the proposed modules are assembled into a few-shot shape recognition scheme.
Experiments on five datasets show that our FSSD significantly improves the
shape classification compared to the state-of-the-art under the few-shot
setting.
| [
{
"created": "Sun, 3 Dec 2023 08:12:23 GMT",
"version": "v1"
}
] | 2023-12-05 | [
[
"Shi",
"Wenlong",
""
],
[
"Lu",
"Changsheng",
""
],
[
"Shao",
"Ming",
""
],
[
"Zhang",
"Yinjie",
""
],
[
"Xia",
"Siyu",
""
],
[
"Koniusz",
"Piotr",
""
]
] | Traditional shape descriptors have been gradually replaced by convolutional neural networks due to their superior performance in feature extraction and classification. The state-of-the-art methods recognize object shapes via image reconstruction or pixel classification. However, these methods are biased toward texture information and overlook the essential shape descriptions; thus, they fail to generalize to unseen shapes. We are the first to propose a few-shot shape descriptor (FSSD) to recognize object shapes given only one or a few samples. We employ an embedding module for FSSD to extract transformation-invariant shape features. Secondly, we develop a dual attention mechanism to decompose and reconstruct the shape features via learnable shape primitives. In this way, any shape can be formed through a finite set basis, and the learned representation model is highly interpretable and extendable to unseen shapes. Thirdly, we propose a decoding module to include the supervision of shape masks and edges and align the original and reconstructed shape features, enforcing the learned features to be more shape-aware. Lastly, all the proposed modules are assembled into a few-shot shape recognition scheme. Experiments on five datasets show that our FSSD significantly improves the shape classification compared to the state-of-the-art under the few-shot setting. |
2201.01703 | Nishant Sinha | Saurabh Kumar, Nishant Sinha | Probing TryOnGAN | 5 pages, to appear in the proceedings of the 9th ACM IKDD CODS and
27th COMAD (CODS-COMAD '22) | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | TryOnGAN is a recent virtual try-on approach, which generates highly
realistic images and outperforms most previous approaches. In this article, we
reproduce the TryOnGAN implementation and probe it along diverse angles: impact
of transfer learning, variants of conditioning image generation with poses and
properties of latent space interpolation. Some of these facets have never been
explored in the literature earlier. We find that transfer helps training initially
but gains are lost as models train longer and pose conditioning via
concatenation performs better. The latent space self-disentangles the pose and
the style features and enables style transfer across poses. Our code and models
are available in open source.
| [
{
"created": "Wed, 5 Jan 2022 16:51:19 GMT",
"version": "v1"
}
] | 2022-01-06 | [
[
"Kumar",
"Saurabh",
""
],
[
"Sinha",
"Nishant",
""
]
] | TryOnGAN is a recent virtual try-on approach, which generates highly realistic images and outperforms most previous approaches. In this article, we reproduce the TryOnGAN implementation and probe it along diverse angles: impact of transfer learning, variants of conditioning image generation with poses and properties of latent space interpolation. Some of these facets have never been explored in literature earlier. We find that transfer helps training initially but gains are lost as models train longer and pose conditioning via concatenation performs better. The latent space self-disentangles the pose and the style features and enables style transfer across poses. Our code and models are available in open source. |
2307.08575 | Romaric Neveu | Nicolas Aragon, Lo\"ic Bidoux, Jes\'us-Javier Chi-Dom\'inguez,
Thibauld Feneuil, Philippe Gaborit, Romaric Neveu, Matthieu Rivain | MIRA: a Digital Signature Scheme based on the MinRank problem and the
MPC-in-the-Head paradigm | null | null | null | null | cs.CR | http://creativecommons.org/publicdomain/zero/1.0/ | We exploit the idea of [Fen22] which proposes to build an efficient signature
scheme based on a zero-knowledge proof of knowledge of a solution of a MinRank
instance. The scheme uses the MPCitH paradigm, which is an efficient way to
build ZK proofs. We combine this idea with another idea, the hypercube
technique introduced in [AMGH+22], which leads to a more efficient
MPCitH-based scheme. This new approach is more efficient than classical
MPCitH, as it reduces the number of party computations. This gives us a first
scheme called MIRA-Additive. We then present another scheme, based on
low-threshold secret sharings, called MIRA-Threshold, which is faster, at the
price of larger signatures. The construction of MPCitH using threshold secret
sharing is detailed in [FR22]. These two constructions allow us to be faster
than classical MPCitH, with signature sizes of around 5.6kB for MIRA-Additive
and 8.3kB for MIRA-Threshold. We detail the constructions and optimizations
of the schemes, as well as their security proofs.
| [
{
"created": "Mon, 17 Jul 2023 15:44:12 GMT",
"version": "v1"
}
] | 2023-07-18 | [
[
"Aragon",
"Nicolas",
""
],
[
"Bidoux",
"Loïc",
""
],
[
"Chi-Domínguez",
"Jesús-Javier",
""
],
[
"Feneuil",
"Thibauld",
""
],
[
"Gaborit",
"Philippe",
""
],
[
"Neveu",
"Romaric",
""
],
[
"Rivain",
"Matthieu",
""
]
] | We exploit the idea of [Fen22] which proposes to build an efficient signature scheme based on a zero-knowledge proof of knowledge of a solution of a MinRank instance. The scheme uses the MPCitH paradigm, which is an efficient way to build ZK proofs. We combine this idea with another idea, the hypercube technique introduced in [AMGH+22], which leads to a more efficient MPCitH-based scheme. This new approach is more efficient than classical MPCitH, as it reduces the number of party computations. This gives us a first scheme called MIRA-Additive. We then present another scheme, based on low-threshold secret sharings, called MIRA-Threshold, which is faster, at the price of larger signatures. The construction of MPCitH using threshold secret sharing is detailed in [FR22]. These two constructions allow us to be faster than classical MPCitH, with signature sizes of around 5.6kB for MIRA-Additive and 8.3kB for MIRA-Threshold. We detail the constructions and optimizations of the schemes, as well as their security proofs. |
1606.04288 | Polyvios Pratikakis | Alexandros Labrineas, Polyvios Pratikakis, Dimitrios S. Nikolopoulos,
Angelos Bilas | BDDT-SCC: A Task-parallel Runtime for Non Cache-Coherent Multicores | null | null | null | null | cs.DC cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents BDDT-SCC, a task-parallel runtime system for non
cache-coherent multicore processors, implemented for the Intel Single-Chip
Cloud Computer. The BDDT-SCC runtime includes a dynamic dependence analysis and
automatic synchronization, and executes OpenMP-Ss tasks on a non cache-coherent
architecture. We design a runtime that uses fast on-chip inter-core
communication with small messages. At the same time, we use non coherent shared
memory to avoid large core-to-core data transfers that would incur a high
volume of unnecessary copying. We evaluate BDDT-SCC on a set of representative
benchmarks, in terms of task granularity, locality, and communication. We find
that memory locality and allocation play a very important role in performance,
as the architecture of the SCC memory controllers can create strong contention
effects. We suggest patterns that improve memory locality and thus the
performance of applications, and measure their impact.
| [
{
"created": "Tue, 14 Jun 2016 10:09:42 GMT",
"version": "v1"
}
] | 2016-06-15 | [
[
"Labrineas",
"Alexandros",
""
],
[
"Pratikakis",
"Polyvios",
""
],
[
"Nikolopoulos",
"Dimitrios S.",
""
],
[
"Bilas",
"Angelos",
""
]
] | This paper presents BDDT-SCC, a task-parallel runtime system for non cache-coherent multicore processors, implemented for the Intel Single-Chip Cloud Computer. The BDDT-SCC runtime includes a dynamic dependence analysis and automatic synchronization, and executes OpenMP-Ss tasks on a non cache-coherent architecture. We design a runtime that uses fast on-chip inter-core communication with small messages. At the same time, we use non coherent shared memory to avoid large core-to-core data transfers that would incur a high volume of unnecessary copying. We evaluate BDDT-SCC on a set of representative benchmarks, in terms of task granularity, locality, and communication. We find that memory locality and allocation play a very important role in performance, as the architecture of the SCC memory controllers can create strong contention effects. We suggest patterns that improve memory locality and thus the performance of applications, and measure their impact. |
2108.13205 | Angel Romero | Angel Romero, Sihao Sun, Philipp Foehn, Davide Scaramuzza | Model Predictive Contouring Control for Time-Optimal Quadrotor Flight | 17 pages, 16 figures. Video:
https://www.youtube.com/watch?v=mHDQcckqdg4 This paper has been accepted for
publication in the IEEE Transactions on Robotics (T-RO), 2022 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of flying time-optimal trajectories through multiple
waypoints with quadrotors. State-of-the-art solutions split the problem into a
planning task - where a global, time-optimal trajectory is generated - and a
control task - where this trajectory is accurately tracked. However, at the
current state, generating a time-optimal trajectory that considers the full
quadrotor model requires solving a difficult time allocation problem via
optimization, which is computationally demanding (in the order of minutes or
even hours). This is detrimental for replanning in the presence of disturbances. We
overcome this issue by solving the time allocation problem and the control
problem concurrently via Model Predictive Contouring Control (MPCC). Our MPCC
optimally selects the future states of the platform at runtime, while
maximizing the progress along the reference path and minimizing the distance to
it. We show that, even when tracking simplified trajectories, the proposed MPCC
results in a path that approaches the true time-optimal one, and which can be
generated in real-time. We validate our approach in the real world, where we
show that our method outperforms both the current state-of-the-art and a
world-class human pilot in terms of lap time, achieving speeds of up to 60 km/h.
| [
{
"created": "Mon, 30 Aug 2021 13:01:49 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Oct 2021 15:34:38 GMT",
"version": "v2"
},
{
"created": "Thu, 17 Feb 2022 09:12:34 GMT",
"version": "v3"
},
{
"created": "Wed, 4 May 2022 07:06:09 GMT",
"version": "v4"
}
] | 2022-05-05 | [
[
"Romero",
"Angel",
""
],
[
"Sun",
"Sihao",
""
],
[
"Foehn",
"Philipp",
""
],
[
"Scaramuzza",
"Davide",
""
]
] | We tackle the problem of flying time-optimal trajectories through multiple waypoints with quadrotors. State-of-the-art solutions split the problem into a planning task - where a global, time-optimal trajectory is generated - and a control task - where this trajectory is accurately tracked. However, at the current state, generating a time-optimal trajectory that considers the full quadrotor model requires solving a difficult time allocation problem via optimization, which is computationally demanding (in the order of minutes or even hours). This is detrimental for replanning in the presence of disturbances. We overcome this issue by solving the time allocation problem and the control problem concurrently via Model Predictive Contouring Control (MPCC). Our MPCC optimally selects the future states of the platform at runtime, while maximizing the progress along the reference path and minimizing the distance to it. We show that, even when tracking simplified trajectories, the proposed MPCC results in a path that approaches the true time-optimal one, and which can be generated in real-time. We validate our approach in the real world, where we show that our method outperforms both the current state-of-the-art and a world-class human pilot in terms of lap time, achieving speeds of up to 60 km/h. |
1911.08650 | Jordan MacLachlan | Jordan MacLachlan, Yi Mei, Juergen Branke, Mengjie Zhang | Genetic Programming Hyper-Heuristics with Vehicle Collaboration for
Uncertain Capacitated Arc Routing Problems | null | null | 10.1162/evco_a_00267 | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to its direct relevance to post-disaster operations, meter reading and
civil refuse collection, the Uncertain Capacitated Arc Routing Problem (UCARP)
is an important optimisation problem. Stochastic models are critical to study
as they more accurately represent the real world than their deterministic
counterparts. Although there have been extensive studies in solving routing
problems under uncertainty, very few have considered UCARP, and none consider
collaboration between vehicles to handle the negative effects of uncertainty.
This paper proposes a novel Solution Construction Procedure (SCP) that
generates solutions to UCARP within a collaborative, multi-vehicle framework.
It consists of two types of collaborative activities: one when a vehicle
unexpectedly expends capacity (\emph{route failure}), and the other during the
refill process. Then, we propose a Genetic Programming Hyper-Heuristic (GPHH)
algorithm to evolve the routing policy used within the collaborative framework.
The experimental studies show that the new heuristic with vehicle collaboration
and GP-evolved routing policy significantly outperforms the compared
state-of-the-art algorithms on commonly studied test problems. This is shown to
be especially true on instances with larger numbers of tasks and vehicles. This
clearly shows the advantage of vehicle collaboration in handling the uncertain
environment, and the effectiveness of the newly proposed algorithm.
| [
{
"created": "Wed, 20 Nov 2019 00:55:00 GMT",
"version": "v1"
}
] | 2019-11-21 | [
[
"MacLachlan",
"Jordan",
""
],
[
"Mei",
"Yi",
""
],
[
"Branke",
"Juergen",
""
],
[
"Zhang",
"Mengjie",
""
]
] | Due to its direct relevance to post-disaster operations, meter reading and civil refuse collection, the Uncertain Capacitated Arc Routing Problem (UCARP) is an important optimisation problem. Stochastic models are critical to study as they more accurately represent the real world than their deterministic counterparts. Although there have been extensive studies in solving routing problems under uncertainty, very few have considered UCARP, and none consider collaboration between vehicles to handle the negative effects of uncertainty. This paper proposes a novel Solution Construction Procedure (SCP) that generates solutions to UCARP within a collaborative, multi-vehicle framework. It consists of two types of collaborative activities: one when a vehicle unexpectedly expends capacity (\emph{route failure}), and the other during the refill process. Then, we propose a Genetic Programming Hyper-Heuristic (GPHH) algorithm to evolve the routing policy used within the collaborative framework. The experimental studies show that the new heuristic with vehicle collaboration and GP-evolved routing policy significantly outperforms the compared state-of-the-art algorithms on commonly studied test problems. This is shown to be especially true on instances with larger numbers of tasks and vehicles. This clearly shows the advantage of vehicle collaboration in handling the uncertain environment, and the effectiveness of the newly proposed algorithm. |
2211.14383 | Yushun Dong | Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li | Interpreting Unfairness in Graph Neural Networks via Training Node
Attribution | Published as a conference paper at AAAI 2023 | null | null | null | cs.LG cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have emerged as the leading paradigm for solving
graph analytical problems in various real-world applications. Nevertheless,
GNNs could potentially render biased predictions towards certain demographic
subgroups. Understanding how the bias in predictions arises is critical, as it
guides the design of GNN debiasing mechanisms. However, most existing works
overwhelmingly focus on GNN debiasing, but fall short on explaining how such
bias is induced. In this paper, we study a novel problem of interpreting GNN
unfairness through attributing it to the influence of training nodes.
Specifically, we propose a novel strategy named Probabilistic Distribution
Disparity (PDD) to measure the bias exhibited in GNNs, and develop an algorithm
to efficiently estimate the influence of each training node on such bias. We
verify the validity of PDD and the effectiveness of influence estimation
through experiments on real-world datasets. Finally, we also demonstrate how
the proposed framework could be used for debiasing GNNs. Open-source code can
be found at https://github.com/yushundong/BIND.
| [
{
"created": "Fri, 25 Nov 2022 21:52:30 GMT",
"version": "v1"
}
] | 2022-11-29 | [
[
"Dong",
"Yushun",
""
],
[
"Wang",
"Song",
""
],
[
"Ma",
"Jing",
""
],
[
"Liu",
"Ninghao",
""
],
[
"Li",
"Jundong",
""
]
] | Graph Neural Networks (GNNs) have emerged as the leading paradigm for solving graph analytical problems in various real-world applications. Nevertheless, GNNs could potentially render biased predictions towards certain demographic subgroups. Understanding how the bias in predictions arises is critical, as it guides the design of GNN debiasing mechanisms. However, most existing works overwhelmingly focus on GNN debiasing, but fall short on explaining how such bias is induced. In this paper, we study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes. Specifically, we propose a novel strategy named Probabilistic Distribution Disparity (PDD) to measure the bias exhibited in GNNs, and develop an algorithm to efficiently estimate the influence of each training node on such bias. We verify the validity of PDD and the effectiveness of influence estimation through experiments on real-world datasets. Finally, we also demonstrate how the proposed framework could be used for debiasing GNNs. Open-source code can be found at https://github.com/yushundong/BIND. |
2407.14387 | Aurelio Sulser | Aurelio Sulser, Johann Wenckstern, Clara Kuempel | GLAudio Listens to the Sound of the Graph | null | ICML 2024 ELLIS Workshop on Geometry-grounded Representation
Learning and Generative Modeling | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose GLAudio: Graph Learning on Audio representation of the node
features and the connectivity structure. This novel architecture propagates the
node features through the graph network according to the discrete wave equation
and then employs a sequence learning architecture to learn the target node
function from the audio wave signal. This leads to a new paradigm of learning
on graph-structured data, in which information propagation and information
processing are separated into two distinct steps. We theoretically characterize
the expressivity of our model, introducing the notion of the receptive field of
a vertex, and investigate our model's susceptibility to over-smoothing and
over-squashing both theoretically as well as experimentally on various graph
datasets.
| [
{
"created": "Fri, 19 Jul 2024 15:13:22 GMT",
"version": "v1"
}
] | 2024-07-22 | [
[
"Sulser",
"Aurelio",
""
],
[
"Wenckstern",
"Johann",
""
],
[
"Kuempel",
"Clara",
""
]
] | We propose GLAudio: Graph Learning on Audio representation of the node features and the connectivity structure. This novel architecture propagates the node features through the graph network according to the discrete wave equation and then employs a sequence learning architecture to learn the target node function from the audio wave signal. This leads to a new paradigm of learning on graph-structured data, in which information propagation and information processing are separated into two distinct steps. We theoretically characterize the expressivity of our model, introducing the notion of the receptive field of a vertex, and investigate our model's susceptibility to over-smoothing and over-squashing both theoretically as well as experimentally on various graph datasets. |
2303.11595 | Yiming Chen | Yiming Chen, Jinyu Tian, Xiangyu Chen, and Jiantao Zhou | Effective Ambiguity Attack Against Passport-based DNN Intellectual
Property Protection Schemes through Fully Connected Layer Substitution | Accepted to CVPR2023 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since training a deep neural network (DNN) is costly, the well-trained deep
models can be regarded as valuable intellectual property (IP) assets. The IP
protection associated with deep models has been receiving increasing attention
in recent years. The passport-based method, which replaces normalization layers
with passport layers, has been one of the few protection solutions that are
claimed to be secure against advanced attacks. In this work, we tackle the
issue of evaluating the security of passport-based IP protection methods. We
propose a novel and effective ambiguity attack against the passport-based method,
capable of successfully forging multiple valid passports with a small training
dataset. This is accomplished by inserting a specially designed accessory block
ahead of the passport parameters. Using less than 10% of training data, with
the forged passport, the model exhibits almost indistinguishable performance
difference (less than 2%) compared with that of the authorized passport. In
addition, it is shown that our attack strategy can be readily generalized to
attack other IP protection methods based on watermark embedding. Directions for
potential remedy solutions are also given.
| [
{
"created": "Tue, 21 Mar 2023 04:59:05 GMT",
"version": "v1"
}
] | 2023-03-22 | [
[
"Chen",
"Yiming",
""
],
[
"Tian",
"Jinyu",
""
],
[
"Chen",
"Xiangyu",
""
],
[
"Zhou",
"Jiantao",
""
]
] | Since training a deep neural network (DNN) is costly, the well-trained deep models can be regarded as valuable intellectual property (IP) assets. The IP protection associated with deep models has been receiving increasing attention in recent years. The passport-based method, which replaces normalization layers with passport layers, has been one of the few protection solutions that are claimed to be secure against advanced attacks. In this work, we tackle the issue of evaluating the security of passport-based IP protection methods. We propose a novel and effective ambiguity attack against the passport-based method, capable of successfully forging multiple valid passports with a small training dataset. This is accomplished by inserting a specially designed accessory block ahead of the passport parameters. Using less than 10% of training data, with the forged passport, the model exhibits almost indistinguishable performance difference (less than 2%) compared with that of the authorized passport. In addition, it is shown that our attack strategy can be readily generalized to attack other IP protection methods based on watermark embedding. Directions for potential remedy solutions are also given. |
1708.00602 | Subhadip Mukherjee | Subhadip Mukherjee and Chandra Sekhar Seelamantula | Phase Retrieval From Binary Measurements | null | null | 10.1109/LSP.2018.2791102 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of signal reconstruction from quadratic measurements
that are encoded as +1 or -1 depending on whether they exceed a predetermined
positive threshold or not. Binary measurements are fast to acquire and
inexpensive in terms of hardware. We formulate the problem of signal
reconstruction using a consistency criterion, wherein one seeks to find a
signal that is in agreement with the measurements. To enforce consistency, we
construct a convex cost using a one-sided quadratic penalty and minimize it
using an iterative accelerated projected gradient-descent (APGD) technique. The
PGD scheme reduces the cost function in each iteration, whereas incorporating
momentum into PGD, notwithstanding the lack of such a descent property,
exhibits faster convergence than PGD empirically. We refer to the resulting
algorithm as binary phase retrieval (BPR). Considering additive white noise
contamination prior to quantization, we also derive the Cramer-Rao Bound (CRB)
for the binary encoding model. Experimental results demonstrate that the BPR
algorithm yields a signal-to-reconstruction error ratio (SRER) of
approximately 25 dB in the absence of noise. In the presence of noise prior to
quantization, the SRER is within 2 to 3 dB of the CRB.
| [
{
"created": "Wed, 2 Aug 2017 04:46:07 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Nov 2017 10:41:25 GMT",
"version": "v2"
}
] | 2018-03-14 | [
[
"Mukherjee",
"Subhadip",
""
],
[
"Seelamantula",
"Chandra Sekhar",
""
]
] | We consider the problem of signal reconstruction from quadratic measurements that are encoded as +1 or -1 depending on whether they exceed a predetermined positive threshold or not. Binary measurements are fast to acquire and inexpensive in terms of hardware. We formulate the problem of signal reconstruction using a consistency criterion, wherein one seeks to find a signal that is in agreement with the measurements. To enforce consistency, we construct a convex cost using a one-sided quadratic penalty and minimize it using an iterative accelerated projected gradient-descent (APGD) technique. The PGD scheme reduces the cost function in each iteration, whereas incorporating momentum into PGD, notwithstanding the lack of such a descent property, exhibits faster convergence than PGD empirically. We refer to the resulting algorithm as binary phase retrieval (BPR). Considering additive white noise contamination prior to quantization, we also derive the Cramer-Rao Bound (CRB) for the binary encoding model. Experimental results demonstrate that the BPR algorithm yields a signal-to-reconstruction error ratio (SRER) of approximately 25 dB in the absence of noise. In the presence of noise prior to quantization, the SRER is within 2 to 3 dB of the CRB. |
2205.03198 | Paolo Baldi | Paolo Baldi and Hykel Hosni | A Logic-based Tractable Approximation of Probability | null | null | null | null | cs.LO cs.AI | http://creativecommons.org/licenses/by/4.0/ | We provide a logical framework in which a resource-bounded agent can be seen
to perform approximations of probabilistic reasoning. Our main results read as
follows. First we identify the conditions under which propositional probability
functions can be approximated by a hierarchy of depth-bounded Belief functions.
Second we show that under rather palatable restrictions, our approximations of
probability lead to uncertain reasoning which, under the usual assumptions in
the field, qualifies as tractable.
| [
{
"created": "Fri, 6 May 2022 13:25:12 GMT",
"version": "v1"
}
] | 2022-05-09 | [
[
"Baldi",
"Paolo",
""
],
[
"Hosni",
"Hykel",
""
]
] | We provide a logical framework in which a resource-bounded agent can be seen to perform approximations of probabilistic reasoning. Our main results read as follows. First we identify the conditions under which propositional probability functions can be approximated by a hierarchy of depth-bounded Belief functions. Second we show that under rather palatable restrictions, our approximations of probability lead to uncertain reasoning which, under the usual assumptions in the field, qualifies as tractable. |
0902.1284 | Daniel Hsu | Daniel Hsu, Sham M. Kakade, John Langford, Tong Zhang | Multi-Label Prediction via Compressed Sensing | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider multi-label prediction problems with large output spaces under
the assumption of output sparsity -- that the target (label) vectors have small
support. We develop a general theory for a variant of the popular error
correcting output code scheme, using ideas from compressed sensing for
exploiting this sparsity. The method can be regarded as a simple reduction from
multi-label regression problems to binary regression problems. We show that the
number of subproblems need only be logarithmic in the total number of possible
labels, making this approach radically more efficient than others. We also
state and prove robustness guarantees for this method in the form of regret
transform bounds (in general), and also provide a more detailed analysis for
the linear prediction setting.
| [
{
"created": "Sun, 8 Feb 2009 02:30:06 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jun 2009 16:23:28 GMT",
"version": "v2"
}
] | 2009-06-02 | [
[
"Hsu",
"Daniel",
""
],
[
"Kakade",
"Sham M.",
""
],
[
"Langford",
"John",
""
],
[
"Zhang",
"Tong",
""
]
] | We consider multi-label prediction problems with large output spaces under the assumption of output sparsity -- that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing for exploiting this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting. |
2011.11827 | Yunzhe Tao | Yunzhe Tao, Sahika Genc, Jonathan Chung, Tao Sun, Sunil Mallya | REPAINT: Knowledge Transfer in Deep Reinforcement Learning | Published at ICML 2021 | null | null | null | cs.LG cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accelerating learning processes for complex tasks by leveraging previously
learned tasks has been one of the most challenging problems in reinforcement
learning, especially when the similarity between source and target tasks is
low. This work proposes the REPresentation And INstance Transfer (REPAINT)
algorithm for knowledge transfer in deep reinforcement learning. REPAINT not
only transfers the representation of a pre-trained teacher policy in the
on-policy learning, but also uses an advantage-based experience selection
approach to transfer useful samples collected following the teacher policy in
the off-policy learning. Our experimental results on several benchmark tasks
show that REPAINT significantly reduces the total training time in generic
cases of task similarity. In particular, when the source tasks are dissimilar
to, or sub-tasks of, the target tasks, REPAINT outperforms other baselines in
both training-time reduction and asymptotic performance of return scores.
| [
{
"created": "Tue, 24 Nov 2020 01:18:32 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Feb 2021 18:57:25 GMT",
"version": "v2"
},
{
"created": "Wed, 26 May 2021 05:25:23 GMT",
"version": "v3"
}
] | 2021-05-27 | [
[
"Tao",
"Yunzhe",
""
],
[
"Genc",
"Sahika",
""
],
[
"Chung",
"Jonathan",
""
],
[
"Sun",
"Tao",
""
],
[
"Mallya",
"Sunil",
""
]
] | Accelerating learning processes for complex tasks by leveraging previously learned tasks has been one of the most challenging problems in reinforcement learning, especially when the similarity between source and target tasks is low. This work proposes the REPresentation And INstance Transfer (REPAINT) algorithm for knowledge transfer in deep reinforcement learning. REPAINT not only transfers the representation of a pre-trained teacher policy in the on-policy learning, but also uses an advantage-based experience selection approach to transfer useful samples collected following the teacher policy in the off-policy learning. Our experimental results on several benchmark tasks show that REPAINT significantly reduces the total training time in generic cases of task similarity. In particular, when the source tasks are dissimilar to, or sub-tasks of, the target tasks, REPAINT outperforms other baselines in both training-time reduction and asymptotic performance of return scores. |
2201.05478 | Dinesh Garg | Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott,
Nicholas Bronn, Kugendran Naidoo, Emil Reinert | Towards a Semantic Information Theory (Introducing Quantum Corollas) | null | null | null | null | cs.IT math.IT quant-ph | http://creativecommons.org/licenses/by-sa/4.0/ | The field of Information Theory is founded on Claude Shannon's seminal ideas
relating to entropy. Nevertheless, his well-known avoidance of meaning
(Shannon, 1948) still persists to this day, so that Information Theory remains
poorly connected to many fields with clear informational content and a
dependence on semantics. Herein we propose an extension to Quantum Information
Theory which, subject to constraints, applies quantum entanglement and
information entropy as linguistic tools that model semantics through measures
of both difference and equivalence. This extension integrates Denotational
Semantics with Information Theory via a model based on distributional
representation and partial data triples known as Corolla.
| [
{
"created": "Fri, 14 Jan 2022 14:33:13 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Tetlow",
"Philip",
""
],
[
"Garg",
"Dinesh",
""
],
[
"Chase",
"Leigh",
""
],
[
"Mattingley-Scott",
"Mark",
""
],
[
"Bronn",
"Nicholas",
""
],
[
"Naidoo",
"Kugendran",
""
],
[
"Reinert",
"Emil",
""
]
] | The field of Information Theory is founded on Claude Shannon's seminal ideas relating to entropy. Nevertheless, his well-known avoidance of meaning (Shannon, 1948) still persists to this day, so that Information Theory remains poorly connected to many fields with clear informational content and a dependence on semantics. Herein we propose an extension to Quantum Information Theory which, subject to constraints, applies quantum entanglement and information entropy as linguistic tools that model semantics through measures of both difference and equivalence. This extension integrates Denotational Semantics with Information Theory via a model based on distributional representation and partial data triples known as Corolla. |
2005.01889 | Seonho Park | Seonho Park, George Adosoglou, Panos M. Pardalos | Interpreting Rate-Distortion of Variational Autoencoder and Using Model
Uncertainty for Anomaly Detection | Corrected typos | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building a scalable machine learning system for unsupervised anomaly
detection via representation learning is highly desirable. One of the prevalent
methods is using a reconstruction error from a variational autoencoder (VAE) via
maximizing the evidence lower bound. We revisit VAE from the perspective of
information theory to provide some theoretical foundations on using the
reconstruction error, and finally arrive at a simpler and more effective model
for anomaly detection. In addition, to enhance the effectiveness of detecting
anomalies, we incorporate a practical model uncertainty measure into the
metric. We show empirically the competitive performance of our approach on
benchmark datasets.
| [
{
"created": "Tue, 5 May 2020 00:03:48 GMT",
"version": "v1"
},
{
"created": "Thu, 7 May 2020 16:59:36 GMT",
"version": "v2"
}
] | 2020-05-08 | [
[
"Park",
"Seonho",
""
],
[
"Adosoglou",
"George",
""
],
[
"Pardalos",
"Panos M.",
""
]
] | Building a scalable machine learning system for unsupervised anomaly detection via representation learning is highly desirable. One of the prevalent methods is using a reconstruction error from a variational autoencoder (VAE) via maximizing the evidence lower bound. We revisit VAE from the perspective of information theory to provide some theoretical foundations on using the reconstruction error, and finally arrive at a simpler and more effective model for anomaly detection. In addition, to enhance the effectiveness of detecting anomalies, we incorporate a practical model uncertainty measure into the metric. We show empirically the competitive performance of our approach on benchmark datasets. |
2012.04767 | Clement Moreau | Clement Moreau and Thomas Devogele and Laurent Etienne and Veronika
Peralta and Cyril de Runz | Methodology for Mining, Discovering and Analyzing Semantic Human
Mobility Behaviors | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various institutes produce large semantic datasets containing information
regarding daily activities and human mobility. The analysis and understanding
of such data are crucial for urban planning, socio-psychology, political
sciences, and epidemiology. However, none of the typical data mining processes
have been customized for the thorough analysis of semantic mobility sequences
to translate data into understandable behaviors. Based on an extended
literature review, we propose a novel methodological pipeline called simba
(Semantic Indicators for Mobility and Behavior Analysis), for mining and
analyzing semantic mobility sequences to identify coherent information and
human behaviors. A framework for semantic sequence mobility analysis and
clustering explicability based on integrating different complementary
statistical indicators and visual tools is implemented. To validate this
methodology, we used a large set of real daily mobility sequences obtained from
a household travel survey. Complementary knowledge is automatically discovered
in the proposed method.
| [
{
"created": "Tue, 8 Dec 2020 22:24:19 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Dec 2020 17:23:48 GMT",
"version": "v2"
}
] | 2020-12-22 | [
[
"Moreau",
"Clement",
""
],
[
"Devogele",
"Thomas",
""
],
[
"Etienne",
"Laurent",
""
],
[
"Peralta",
"Veronika",
""
],
[
"de Runz",
"Cyril",
""
]
] | Various institutes produce large semantic datasets containing information regarding daily activities and human mobility. The analysis and understanding of such data are crucial for urban planning, socio-psychology, political sciences, and epidemiology. However, none of the typical data mining processes have been customized for the thorough analysis of semantic mobility sequences to translate data into understandable behaviors. Based on an extended literature review, we propose a novel methodological pipeline called simba (Semantic Indicators for Mobility and Behavior Analysis), for mining and analyzing semantic mobility sequences to identify coherent information and human behaviors. A framework for semantic sequence mobility analysis and clustering explicability based on integrating different complementary statistical indicators and visual tools is implemented. To validate this methodology, we used a large set of real daily mobility sequences obtained from a household travel survey. Complementary knowledge is automatically discovered in the proposed method. |
1011.4597 | Elena Veronica Belmega | E. V. Belmega and S. Lasaulce | Energy-Efficient Precoding for Multiple-Antenna Terminals | null | null | 10.1109/TSP.2010.2086451 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of energy-efficient precoding is investigated when the terminals
in the system are equipped with multiple antennas. Considering static and
fast-fading multiple-input multiple-output (MIMO) channels, the
energy-efficiency is defined as the transmission rate to power ratio and shown
to be maximized at low transmit power. The most interesting case is the one of
slow fading MIMO channels. For this type of channels, the optimal precoding
scheme is generally not trivial. Furthermore, using all the available transmit
power is not always optimal in the sense of energy-efficiency (which, in this
case, corresponds to the communication-theoretic definition of the
goodput-to-power (GPR) ratio). Finding the optimal precoding matrices is shown
to be a new open problem and is solved in several special cases: 1. when there
is only one receive antenna; 2. in the low or high signal-to-noise ratio
regime; 3. when uniform power allocation and the regime of large numbers of
antennas are assumed. A complete numerical analysis is provided to illustrate
the derived results and stated conjectures. In particular, the impact of the
number of antennas on the energy-efficiency is assessed and shown to be
significant.
| [
{
"created": "Sat, 20 Nov 2010 18:45:23 GMT",
"version": "v1"
}
] | 2015-05-20 | [
[
"Belmega",
"E. V.",
""
],
[
"Lasaulce",
"S.",
""
]
] | The problem of energy-efficient precoding is investigated when the terminals in the system are equipped with multiple antennas. Considering static and fast-fading multiple-input multiple-output (MIMO) channels, the energy-efficiency is defined as the transmission rate to power ratio and shown to be maximized at low transmit power. The most interesting case is the one of slow fading MIMO channels. For this type of channels, the optimal precoding scheme is generally not trivial. Furthermore, using all the available transmit power is not always optimal in the sense of energy-efficiency (which, in this case, corresponds to the communication-theoretic definition of the goodput-to-power (GPR) ratio). Finding the optimal precoding matrices is shown to be a new open problem and is solved in several special cases: 1. when there is only one receive antenna; 2. in the low or high signal-to-noise ratio regime; 3. when uniform power allocation and the regime of large numbers of antennas are assumed. A complete numerical analysis is provided to illustrate the derived results and stated conjectures. In particular, the impact of the number of antennas on the energy-efficiency is assessed and shown to be significant. |
1702.04510 | Christian Hadiwinoto | Christian Hadiwinoto, Hwee Tou Ng | A Dependency-Based Neural Reordering Model for Statistical Machine
Translation | 7 pages, 3 figures, Proceedings of AAAI-17 | Proceedings of AAAI-17 (2017) | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine translation (MT) that involves translating between two languages
with significant differences in word order, determining the correct word order
of translated words is a major challenge. The dependency parse tree of a source
sentence can help to determine the correct word order of the translated words.
In this paper, we present a novel reordering approach utilizing a neural
network and dependency-based embeddings to predict whether the translations of
two source words linked by a dependency relation should remain in the same
order or should be swapped in the translated sentence. Experiments on
Chinese-to-English translation show that our approach yields a statistically
significant improvement of 0.57 BLEU point on benchmark NIST test sets,
compared to our prior state-of-the-art statistical MT system that uses sparse
dependency-based reordering features.
| [
{
"created": "Wed, 15 Feb 2017 09:08:21 GMT",
"version": "v1"
}
] | 2017-02-16 | [
[
"Hadiwinoto",
"Christian",
""
],
[
"Ng",
"Hwee Tou",
""
]
] | In machine translation (MT) that involves translating between two languages with significant differences in word order, determining the correct word order of translated words is a major challenge. The dependency parse tree of a source sentence can help to determine the correct word order of the translated words. In this paper, we present a novel reordering approach utilizing a neural network and dependency-based embeddings to predict whether the translations of two source words linked by a dependency relation should remain in the same order or should be swapped in the translated sentence. Experiments on Chinese-to-English translation show that our approach yields a statistically significant improvement of 0.57 BLEU point on benchmark NIST test sets, compared to our prior state-of-the-art statistical MT system that uses sparse dependency-based reordering features. |
1804.10123 | Sam Leroux | Sam Leroux, Pavlo Molchanov, Pieter Simoens, Bart Dhoedt, Thomas
Breuel, Jan Kautz | IamNN: Iterative and Adaptive Mobile Neural Network for Efficient Image
Classification | ICLR 2018 Workshop track | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep residual networks (ResNets) made a recent breakthrough in deep learning.
The core idea of ResNets is to have shortcut connections between layers that
allow the network to be much deeper while still being easy to optimize avoiding
vanishing gradients. These shortcut connections have interesting side-effects
that make ResNets behave differently from other typical network architectures.
In this work we use these properties to design a network based on a ResNet but
with parameter sharing and with adaptive computation time. The resulting
network is much smaller than the original network and can adapt the
computational cost to the complexity of the input image.
| [
{
"created": "Thu, 26 Apr 2018 15:57:00 GMT",
"version": "v1"
}
] | 2018-04-30 | [
[
"Leroux",
"Sam",
""
],
[
"Molchanov",
"Pavlo",
""
],
[
"Simoens",
"Pieter",
""
],
[
"Dhoedt",
"Bart",
""
],
[
"Breuel",
"Thomas",
""
],
[
"Kautz",
"Jan",
""
]
] | Deep residual networks (ResNets) made a recent breakthrough in deep learning. The core idea of ResNets is to have shortcut connections between layers that allow the network to be much deeper while still being easy to optimize avoiding vanishing gradients. These shortcut connections have interesting side-effects that make ResNets behave differently from other typical network architectures. In this work we use these properties to design a network based on a ResNet but with parameter sharing and with adaptive computation time. The resulting network is much smaller than the original network and can adapt the computational cost to the complexity of the input image. |