Dataset schema (field: type, observed range):

id: stringlengths (9 to 10)
submitter: stringlengths (1 to 64)
authors: stringlengths (4 to 20.7k)
title: stringlengths (4 to 246)
comments: stringlengths (1 to 523)
journal-ref: stringlengths (4 to 404)
doi: stringlengths (11 to 153)
report-no: stringlengths (2 to 254)
categories: stringlengths (5 to 98)
license: stringclasses (9 values)
orig_abstract: stringlengths (14 to 3.35k)
versions: listlengths (1 to 60)
update_date: stringlengths (10 to 10)
authors_parsed: listlengths (1 to 1.35k)
abstract: stringlengths (11 to 3.34k)
2012.00150
Hanchen Xie
Hanchen Xie, Mohamed E. Hussein, Aram Galstyan, Wael Abd-Almageed
MUSCLE: Strengthening Semi-Supervised Learning Via Concurrent Unsupervised Learning Using Mutual Information Maximization
10 pages, 3 figures, Accepted to WACV2021
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep neural networks are powerful, massively parameterized machine learning models that have been shown to perform well in supervised learning tasks. However, very large amounts of labeled data are usually needed to train deep neural networks. Several semi-supervised learning approaches have been proposed to train neural networks using smaller amounts of labeled data with a large amount of unlabeled data. The performance of these semi-supervised methods significantly degrades as the size of labeled data decreases. We introduce Mutual-information-based Unsupervised & Semi-supervised Concurrent LEarning (MUSCLE), a hybrid learning approach that uses mutual information to combine both unsupervised and semi-supervised learning. MUSCLE can be used as a stand-alone training scheme for neural networks, and can also be incorporated into other learning approaches. We show that the proposed hybrid model outperforms the state of the art on several standard benchmarks, including CIFAR-10, CIFAR-100, and Mini-Imagenet. Furthermore, the performance gain consistently increases with the reduction in the amount of labeled data, as well as in the presence of bias. We also show that MUSCLE has the potential to boost the classification performance when used in the fine-tuning phase for a model pre-trained only on unlabeled data.
[ { "created": "Mon, 30 Nov 2020 23:01:04 GMT", "version": "v1" } ]
2020-12-02
[ [ "Xie", "Hanchen", "" ], [ "Hussein", "Mohamed E.", "" ], [ "Galstyan", "Aram", "" ], [ "Abd-Almageed", "Wael", "" ] ]
Deep neural networks are powerful, massively parameterized machine learning models that have been shown to perform well in supervised learning tasks. However, very large amounts of labeled data are usually needed to train deep neural networks. Several semi-supervised learning approaches have been proposed to train neural networks using smaller amounts of labeled data with a large amount of unlabeled data. The performance of these semi-supervised methods significantly degrades as the size of labeled data decreases. We introduce Mutual-information-based Unsupervised & Semi-supervised Concurrent LEarning (MUSCLE), a hybrid learning approach that uses mutual information to combine both unsupervised and semi-supervised learning. MUSCLE can be used as a stand-alone training scheme for neural networks, and can also be incorporated into other learning approaches. We show that the proposed hybrid model outperforms the state of the art on several standard benchmarks, including CIFAR-10, CIFAR-100, and Mini-Imagenet. Furthermore, the performance gain consistently increases with the reduction in the amount of labeled data, as well as in the presence of bias. We also show that MUSCLE has the potential to boost the classification performance when used in the fine-tuning phase for a model pre-trained only on unlabeled data.
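The hybrid objective this abstract describes can be pictured as a supervised loss on labeled data combined with a mutual-information term between class predictions for two augmented views of the same unlabeled image. The PyTorch sketch below is an illustrative assumption (an IIC-style MI estimate, not the authors' released code); `model`, the weight `lam`, and the augmentation pipeline are placeholders.

```python
import torch
import torch.nn.functional as F

def mutual_information(p1, p2, eps=1e-8):
    """I(Z1; Z2) for batched class distributions p1, p2 of shape (B, C)."""
    joint = (p1.unsqueeze(2) * p2.unsqueeze(1)).mean(dim=0)  # (C, C) joint
    joint = (joint + joint.t()) / 2                          # symmetrize
    pi = joint.sum(dim=1, keepdim=True)                      # marginal of Z1
    pj = joint.sum(dim=0, keepdim=True)                      # marginal of Z2
    return (joint * (torch.log(joint + eps)
                     - torch.log(pi + eps)
                     - torch.log(pj + eps))).sum()

def hybrid_loss(model, x_lab, y_lab, x_unlab_v1, x_unlab_v2, lam=1.0):
    sup = F.cross_entropy(model(x_lab), y_lab)     # supervised part
    p1 = F.softmax(model(x_unlab_v1), dim=1)       # predictions for view 1
    p2 = F.softmax(model(x_unlab_v2), dim=1)       # predictions for view 2
    return sup - lam * mutual_information(p1, p2)  # minimize loss = maximize MI
```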
2404.08985
Yijiang Liu
Yijiang Liu, Rongyu Zhang, Huanrui Yang, Kurt Keutzer, Yuan Du, Li Du, Shanghang Zhang
Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning
13 pages, 5 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications, ranging from content generation to interactive entertainment and artistic creation. However, the diversity of downstream tasks in multitask scenarios presents substantial adaptation challenges for LLMs. While traditional methods often succumb to knowledge confusion in their monolithic dense models, Mixture-of-Experts (MoE) has emerged as a promising solution with its sparse architecture for effective task decoupling. Inspired by the principles of human cognitive neuroscience, we design a novel framework \texttt{Intuition-MoR1E} that leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, offering implicit guidance to the router for optimized feature allocation. Moreover, we introduce a cutting-edge Rank-1 Experts formulation designed to manage a spectrum of intuitions, demonstrating enhanced parameter efficiency and effectiveness in multitask LLM finetuning. Extensive experiments demonstrate that Intuition-MoR1E achieves superior efficiency and a 2.15\% overall accuracy improvement across 14 public datasets against other state-of-the-art baselines.
[ { "created": "Sat, 13 Apr 2024 12:14:58 GMT", "version": "v1" } ]
2024-04-16
[ [ "Liu", "Yijiang", "" ], [ "Zhang", "Rongyu", "" ], [ "Yang", "Huanrui", "" ], [ "Keutzer", "Kurt", "" ], [ "Du", "Yuan", "" ], [ "Du", "Li", "" ], [ "Zhang", "Shanghang", "" ] ]
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications, ranging from content generation to interactive entertainment and artistic creation. However, the diversity of downstream tasks in multitask scenarios presents substantial adaptation challenges for LLMs. While traditional methods often succumb to knowledge confusion in their monolithic dense models, Mixture-of-Experts (MoE) has emerged as a promising solution with its sparse architecture for effective task decoupling. Inspired by the principles of human cognitive neuroscience, we design a novel framework \texttt{Intuition-MoR1E} that leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, offering implicit guidance to the router for optimized feature allocation. Moreover, we introduce a cutting-edge Rank-1 Experts formulation designed to manage a spectrum of intuitions, demonstrating enhanced parameter efficiency and effectiveness in multitask LLM finetuning. Extensive experiments demonstrate that Intuition-MoR1E achieves superior efficiency and a 2.15\% overall accuracy improvement across 14 public datasets against other state-of-the-art baselines.
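A rank-1 expert can be read as a LoRA-style outer-product update u_e v_e^T gated by a router. The PyTorch module below is a hedged sketch of that generic construction, assuming top-k softmax routing; it is not the Intuition-MoR1E implementation, and all names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Rank1MoELinear(nn.Module):
    """Frozen base linear layer plus a router-gated mixture of rank-1 experts.
    Expert e contributes the outer-product update u_e v_e^T (LoRA with rank 1)."""
    def __init__(self, d_in, d_out, n_experts=8, top_k=2):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)                  # finetune only experts + router
        self.u = nn.Parameter(torch.zeros(n_experts, d_out))
        self.v = nn.Parameter(torch.randn(n_experts, d_in) * 0.01)
        self.router = nn.Linear(d_in, n_experts)
        self.top_k = top_k

    def forward(self, x):                            # x: (B, d_in)
        gates = F.softmax(self.router(x), dim=-1)    # (B, E) routing weights
        topv, topi = gates.topk(self.top_k, dim=-1)
        mask = torch.zeros_like(gates).scatter_(-1, topi, topv)  # keep top-k gates
        coef = (x @ self.v.t()) * mask               # (B, E): (x . v_e) per expert
        return self.base(x) + coef @ self.u          # (B, d_out)
```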
2004.00517
Christoph G\"unther
Christoph G\"unther, Michael G\"unther, Daniel G\"unther
Tracing Contacts to Control the COVID-19 Pandemic
5 pages, no figures
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The control of the COVID-19 pandemic requires a considerable reduction of contacts, mostly achieved by imposing movement control up to the level of enforced quarantine. This has led to a collapse of substantial parts of the economy. Carriers of the disease are infectious roughly 3 days after exposure to the virus. First symptoms occur later or not at all. As a consequence, tracing the contacts of people identified as carriers is essential for controlling the pandemic. This tracing must work everywhere, in particular indoors, where people are closest to each other. Furthermore, it should respect people's privacy. The present paper presents a method to enable thorough traceability with very little risk to privacy. In our opinion, the latter capabilities are necessary to control the pandemic during a future relaunch of our economy.
[ { "created": "Wed, 1 Apr 2020 15:40:48 GMT", "version": "v1" } ]
2020-04-02
[ [ "Günther", "Christoph", "" ], [ "Günther", "Michael", "" ], [ "Günther", "Daniel", "" ] ]
The control of the COVID-19 pandemic requires a considerable reduction of contacts, mostly achieved by imposing movement control up to the level of enforced quarantine. This has led to a collapse of substantial parts of the economy. Carriers of the disease are infectious roughly 3 days after exposure to the virus. First symptoms occur later or not at all. As a consequence, tracing the contacts of people identified as carriers is essential for controlling the pandemic. This tracing must work everywhere, in particular indoors, where people are closest to each other. Furthermore, it should respect people's privacy. The present paper presents a method to enable thorough traceability with very little risk to privacy. In our opinion, the latter capabilities are necessary to control the pandemic during a future relaunch of our economy.
2407.16889
Modan Tailleur
Modan Tailleur (LS2N), Pierre Aumond (UMRAE), Vincent Tourre (AAU), Mathieu Lagrange (LS2N)
Towards better visualizations of urban sound environments: insights from interviews
null
INTERNOISE 2024, Aug 2024, Nantes (France), France
null
null
cs.CY cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Urban noise maps and noise visualizations traditionally provide macroscopic representations of noise levels across cities. However, those representations fail to accurately gauge the sound perception associated with these sound environments, as perception highly depends on the sound sources involved. This paper aims at analyzing the need for representations of sound sources, by identifying the urban stakeholders for whom such representations are assumed to be of importance. Through spoken interviews with various urban stakeholders, we have gained insight into current practices, the strengths and weaknesses of existing tools, and the relevance of incorporating sound sources into existing urban sound environment representations. Three distinct uses of sound source representations emerged in this study: 1) noise-related complaints for industrial stakeholders and specialized citizens, 2) soundscape quality assessment for citizens, and 3) guidance for urban planners. Findings also reveal diverse perspectives on the use of visualizations, which should use indicators adapted to the target audience and enable data accessibility.
[ { "created": "Tue, 11 Jun 2024 07:39:48 GMT", "version": "v1" } ]
2024-07-25
[ [ "Tailleur", "Modan", "", "LS2N" ], [ "Aumond", "Pierre", "", "UMRAE" ], [ "Tourre", "Vincent", "", "AAU" ], [ "Lagrange", "Mathieu", "", "LS2N" ] ]
Urban noise maps and noise visualizations traditionally provide macroscopic representations of noise levels across cities. However, those representations fail to accurately gauge the sound perception associated with these sound environments, as perception highly depends on the sound sources involved. This paper aims at analyzing the need for representations of sound sources, by identifying the urban stakeholders for whom such representations are assumed to be of importance. Through spoken interviews with various urban stakeholders, we have gained insight into current practices, the strengths and weaknesses of existing tools, and the relevance of incorporating sound sources into existing urban sound environment representations. Three distinct uses of sound source representations emerged in this study: 1) noise-related complaints for industrial stakeholders and specialized citizens, 2) soundscape quality assessment for citizens, and 3) guidance for urban planners. Findings also reveal diverse perspectives on the use of visualizations, which should use indicators adapted to the target audience and enable data accessibility.
2301.09310
Jinho Lee
Seongyeon Park, Hajin Kim, Tanveer Ahmad, Nauman Ahmed, Zaid Al-Ars, H. Peter Hofstee, Youngsok Kim, and Jinho Lee
SaLoBa: Maximizing Data Locality and Workload Balance for Fast Sequence Alignment on GPUs
Published at IPDPS'22
null
null
null
cs.DB cs.DC
http://creativecommons.org/licenses/by/4.0/
Sequence alignment forms an important backbone in many sequencing applications. A commonly used strategy for sequence alignment is approximate string matching with a two-dimensional dynamic programming approach. Although some prior work has been conducted on GPU acceleration of sequence alignment, we identify several shortcomings that limit exploiting the full computational capability of modern GPUs. This paper presents SaLoBa, a GPU-accelerated sequence alignment library focused on seed extension. Based on the analysis of previous work with real-world sequencing data, we propose techniques to exploit the data locality and improve workload balancing. The experimental results reveal that SaLoBa significantly improves the seed extension kernel compared to state-of-the-art GPU-based methods.
[ { "created": "Mon, 23 Jan 2023 08:14:40 GMT", "version": "v1" } ]
2023-01-24
[ [ "Park", "Seongyeon", "" ], [ "Kim", "Hajin", "" ], [ "Ahmad", "Tanveer", "" ], [ "Ahmed", "Nauman", "" ], [ "Al-Ars", "Zaid", "" ], [ "Hofstee", "H. Peter", "" ], [ "Kim", "Youngsok", "" ], [ "Lee", "Jinho", "" ] ]
Sequence alignment forms an important backbone in many sequencing applications. A commonly used strategy for sequence alignment is approximate string matching with a two-dimensional dynamic programming approach. Although some prior work has been conducted on GPU acceleration of sequence alignment, we identify several shortcomings that limit exploiting the full computational capability of modern GPUs. This paper presents SaLoBa, a GPU-accelerated sequence alignment library focused on seed extension. Based on the analysis of previous work with real-world sequencing data, we propose techniques to exploit the data locality and improve workload balancing. The experimental results reveal that SaLoBa significantly improves the seed extension kernel compared to state-of-the-art GPU-based methods.
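Seed extension scores an alignment outward from an exact-match seed. The CPU reference below sketches an ungapped X-drop extension, a deliberately simplified stand-in for the banded dynamic-programming kernels the paper accelerates on GPUs; scoring constants and the X-drop cutoff are assumptions.

```python
def extend_seed(query, target, match=2, mismatch=-3, x_drop=20):
    """Ungapped X-drop extension to the right of a seed (CPU reference).
    Real extension kernels use banded gapped DP; this shows only the idea."""
    best, score = 0, 0
    for q, t in zip(query, target):
        score += match if q == t else mismatch
        if score > best:
            best = score
        if best - score > x_drop:   # X-drop termination heuristic
            break
    return best

# e.g. extend_seed("ACGTACGT", "ACGTACCT") -> best extension score 12
```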
2102.08868
Fartash Faghri
Fartash Faghri, Sven Gowal, Cristina Vasconcelos, David J. Fleet, Fabian Pedregosa, Nicolas Le Roux
Bridging the Gap Between Adversarial Robustness and Optimization Bias
New CIFAR-10 experiments and Fourier attack variations
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate that the choice of optimizer, neural network architecture, and regularizer significantly affect the adversarial robustness of linear neural networks, providing guarantees without the need for adversarial training. To this end, we revisit a known result linking maximally robust classifiers and minimum norm solutions, and combine it with recent results on the implicit bias of optimizers. First, we show that, under certain conditions, it is possible to achieve both perfect standard accuracy and a certain degree of robustness, simply by training an overparametrized model using the implicit bias of the optimization. In that regime, there is a direct relationship between the type of the optimizer and the attack to which the model is robust. To the best of our knowledge, this work is the first to study the impact of optimization methods such as sign gradient descent and proximal methods on adversarial robustness. Second, we characterize the robustness of linear convolutional models, showing that they resist attacks subject to a constraint on the Fourier-$\ell_\infty$ norm. To illustrate these findings we design a novel Fourier-$\ell_\infty$ attack that finds adversarial examples with controllable frequencies. We evaluate Fourier-$\ell_\infty$ robustness of adversarially-trained deep CIFAR-10 models from the standard RobustBench benchmark and visualize adversarial perturbations.
[ { "created": "Wed, 17 Feb 2021 16:58:04 GMT", "version": "v1" }, { "created": "Mon, 7 Jun 2021 15:27:16 GMT", "version": "v2" } ]
2021-06-08
[ [ "Faghri", "Fartash", "" ], [ "Gowal", "Sven", "" ], [ "Vasconcelos", "Cristina", "" ], [ "Fleet", "David J.", "" ], [ "Pedregosa", "Fabian", "" ], [ "Roux", "Nicolas Le", "" ] ]
We demonstrate that the choice of optimizer, neural network architecture, and regularizer significantly affect the adversarial robustness of linear neural networks, providing guarantees without the need for adversarial training. To this end, we revisit a known result linking maximally robust classifiers and minimum norm solutions, and combine it with recent results on the implicit bias of optimizers. First, we show that, under certain conditions, it is possible to achieve both perfect standard accuracy and a certain degree of robustness, simply by training an overparametrized model using the implicit bias of the optimization. In that regime, there is a direct relationship between the type of the optimizer and the attack to which the model is robust. To the best of our knowledge, this work is the first to study the impact of optimization methods such as sign gradient descent and proximal methods on adversarial robustness. Second, we characterize the robustness of linear convolutional models, showing that they resist attacks subject to a constraint on the Fourier-$\ell_\infty$ norm. To illustrate these findings we design a novel Fourier-$\ell_\infty$ attack that finds adversarial examples with controllable frequencies. We evaluate Fourier-$\ell_\infty$ robustness of adversarially-trained deep CIFAR-10 models from the standard RobustBench benchmark and visualize adversarial perturbations.
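A perturbation with bounded Fourier-ℓ∞ norm can be built directly in the frequency domain: place coefficients of magnitude at most eps on chosen frequencies (together with their Hermitian pairs so the spatial signal stays real) and inverse-transform. The sketch below constructs such a controllable-frequency perturbation; the paper's actual attack additionally optimizes these coefficients against a model, which this sketch omits.

```python
import torch

def fourier_linf_perturbation(shape, eps, freqs):
    """Real 2-D perturbation whose DFT coefficients have magnitude <= eps,
    concentrated on the given (fy, fx) frequencies (illustrative sketch)."""
    H, W = shape
    spec = torch.zeros(H, W, dtype=torch.complex64)
    for fy, fx in freqs:
        spec[fy, fx] = eps                  # bounded Fourier-l_inf coefficient
        spec[-fy % H, -fx % W] = eps        # Hermitian pair -> real signal
    return torch.fft.ifft2(spec).real      # spatial-domain perturbation

# usage: a low-frequency perturbation for a 32x32 image channel
delta = fourier_linf_perturbation((32, 32), eps=0.1, freqs=[(1, 3)])
```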
2110.02369
Karl Stratos
Wenzheng Zhang, Wenyue Hua, Karl Stratos
EntQA: Entity Linking as Question Answering
ICLR 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A conventional approach to entity linking is to first find mentions in a given document and then infer their underlying entities in the knowledge base. A well-known limitation of this approach is that it requires finding mentions without knowing their entities, which is unnatural and difficult. We present a new model that does not suffer from this limitation called EntQA, which stands for Entity linking as Question Answering. EntQA first proposes candidate entities with a fast retrieval module, and then scrutinizes the document to find mentions of each candidate with a powerful reader module. Our approach combines progress in entity linking with that in open-domain question answering and capitalizes on pretrained models for dense entity retrieval and reading comprehension. Unlike in previous works, we do not rely on a mention-candidates dictionary or large-scale weak supervision. EntQA achieves strong results on the GERBIL benchmarking platform.
[ { "created": "Tue, 5 Oct 2021 21:39:57 GMT", "version": "v1" }, { "created": "Mon, 7 Mar 2022 21:53:43 GMT", "version": "v2" } ]
2022-03-09
[ [ "Zhang", "Wenzheng", "" ], [ "Hua", "Wenyue", "" ], [ "Stratos", "Karl", "" ] ]
A conventional approach to entity linking is to first find mentions in a given document and then infer their underlying entities in the knowledge base. A well-known limitation of this approach is that it requires finding mentions without knowing their entities, which is unnatural and difficult. We present a new model that does not suffer from this limitation called EntQA, which stands for Entity linking as Question Answering. EntQA first proposes candidate entities with a fast retrieval module, and then scrutinizes the document to find mentions of each candidate with a powerful reader module. Our approach combines progress in entity linking with that in open-domain question answering and capitalizes on pretrained models for dense entity retrieval and reading comprehension. Unlike in previous works, we do not rely on a mention-candidates dictionary or large-scale weak supervision. EntQA achieves strong results on the GERBIL benchmarking platform.
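EntQA's retrieve-then-read decomposition can be summarized as: a fast dual encoder proposes top-k candidate entities, then a reader scores mention spans for each candidate, QA-style. The pipeline sketch below is an illustrative assumption, not the released system; `reader`, the embeddings, and the 0.5 threshold are placeholders.

```python
import torch

def entqa_pipeline(doc_emb, entity_embs, reader, doc, entities, k=16):
    """Retrieve-then-read sketch: dense retrieval proposes candidates,
    a reader finds their mentions. All components are stand-ins."""
    scores = entity_embs @ doc_emb                  # (N,) dot-product retrieval
    topk = scores.topk(k).indices
    links = []
    for idx in topk.tolist():
        # assumed interface: reader yields (span, score) pairs per candidate
        for span, s in reader(doc, entities[idx]):
            links.append((span, entities[idx], s))
    return [l for l in links if l[2] > 0.5]         # threshold mention scores
```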
1609.02191
Chuang Wang
Chuang Wang and Yue M. Lu
Online Learning for Sparse PCA in High Dimensions: Exact Dynamics and Phase Transitions
5 pages
null
null
null
cs.IT cond-mat.dis-nn math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the dynamics of an online algorithm for learning a sparse leading eigenvector from samples generated from a spiked covariance model. This algorithm combines the classical Oja's method for online PCA with an element-wise nonlinearity at each iteration to promote sparsity. In the high-dimensional limit, the joint empirical measure of the underlying sparse eigenvector and its estimate provided by the algorithm is shown to converge weakly to a deterministic, measure-valued process. This scaling limit is characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithm. For example, performance metrics such as the cosine similarity and the misclassification rate in sparse support recovery can be obtained by examining the limiting dynamics. A steady-state analysis of the nonlinear PDE also reveals an interesting phase transition phenomenon. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
[ { "created": "Wed, 7 Sep 2016 20:55:38 GMT", "version": "v1" } ]
2016-09-09
[ [ "Wang", "Chuang", "" ], [ "Lu", "Yue M.", "" ] ]
We study the dynamics of an online algorithm for learning a sparse leading eigenvector from samples generated from a spiked covariance model. This algorithm combines the classical Oja's method for online PCA with an element-wise nonlinearity at each iteration to promote sparsity. In the high-dimensional limit, the joint empirical measure of the underlying sparse eigenvector and its estimate provided by the algorithm is shown to converge weakly to a deterministic, measure-valued process. This scaling limit is characterized as the unique solution of a nonlinear PDE, and it provides exact information regarding the asymptotic performance of the algorithm. For example, performance metrics such as the cosine similarity and the misclassification rate in sparse support recovery can be obtained by examining the limiting dynamics. A steady-state analysis of the nonlinear PDE also reveals an interesting phase transition phenomenon. Although our analysis is asymptotic in nature, numerical simulations show that the theoretical predictions are accurate for moderate signal dimensions.
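The algorithm analyzed here is Oja's online PCA update followed by an element-wise nonlinearity promoting sparsity. A minimal NumPy sketch with a soft threshold, where `stream` yields sample vectors (e.g., drawn from a spiked covariance model) and the learning rate and threshold are assumed constants:

```python
import numpy as np

def oja_sparse_pca(stream, dim, lr=0.01, thresh=0.005):
    """Online Oja step plus element-wise soft-thresholding, in the spirit
    of the algorithm the paper analyzes (illustrative constants)."""
    x_hat = np.random.randn(dim) / np.sqrt(dim)     # initial unit-ish estimate
    for y in stream:                                # y: one sample vector
        x_hat += lr * (y @ x_hat) * y               # Hebbian/Oja gradient step
        x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - thresh, 0.0)
        n = np.linalg.norm(x_hat)
        if n > 0:
            x_hat /= n                              # project back to unit sphere
    return x_hat
```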
2210.01800
Fengdi Che
Fengdi Che, Xiru Zhu, Doina Precup, David Meger, and Gregory Dudek
Bayesian Q-learning With Imperfect Expert Demonstrations
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Guided exploration with expert demonstrations improves data efficiency for reinforcement learning, but current algorithms often overuse expert information. We propose a novel algorithm to speed up Q-learning with the help of a limited amount of imperfect expert demonstrations. The algorithm avoids excessive reliance on expert data by relaxing the optimal expert assumption and gradually reducing the usage of uninformative expert data. Experimentally, we evaluate our approach on a sparse-reward chain environment and six more complicated Atari games with delayed rewards. With the proposed methods, we can achieve better results than Deep Q-learning from Demonstrations (Hester et al., 2017) in most environments.
[ { "created": "Sat, 1 Oct 2022 17:38:19 GMT", "version": "v1" } ]
2022-10-06
[ [ "Che", "Fengdi", "" ], [ "Zhu", "Xiru", "" ], [ "Precup", "Doina", "" ], [ "Meger", "David", "" ], [ "Dudek", "Gregory", "" ] ]
Guided exploration with expert demonstrations improves data efficiency for reinforcement learning, but current algorithms often overuse expert information. We propose a novel algorithm to speed up Q-learning with the help of a limited amount of imperfect expert demonstrations. The algorithm avoids excessive reliance on expert data by relaxing the optimal expert assumption and gradually reducing the usage of uninformative expert data. Experimentally, we evaluate our approach on a sparse-reward chain environment and six more complicated Atari games with delayed rewards. With the proposed methods, we can achieve better results than Deep Q-learning from Demonstrations (Hester et al., 2017) in most environments.
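One standard ingredient for using imperfect demonstrations without over-relying on them is a large-margin loss on expert transitions whose weight is annealed toward zero as the expert data proves uninformative. The PyTorch sketch below shows that generic supervision term only; the paper's Bayesian treatment of the expert is more involved, so take this as the flavor of the idea, not the method.

```python
import torch

def demo_margin_loss(q_values, expert_action, weight, margin=0.8):
    """DQfD-style large-margin loss on expert transitions, scaled by a
    decaying weight (illustrative sketch, not the paper's Bayesian update).
    q_values: (B, A) Q-estimates; expert_action: (B,) demonstrated actions."""
    B = q_values.size(0)
    margins = torch.full_like(q_values, margin)
    margins[torch.arange(B), expert_action] = 0.0   # no margin on expert action
    best = (q_values + margins).max(dim=1).values   # best competing action
    q_e = q_values[torch.arange(B), expert_action]
    return weight * (best - q_e).mean()             # anneal weight -> 0 over training
```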
2104.00148
Gonzalo Méndez Dr
Gonzalo Gabriel Méndez, Luis Galárraga and Katherine Chiluiza
Showing Academic Performance Predictions during Term Planning: Effects on Students' Decisions, Behaviors, and Preferences
17 pages
null
10.1145/3411764
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Course selection is a crucial activity for students as it directly impacts their workload and performance. It is also time-consuming, prone to subjectivity, and often carried out based on incomplete information. This task can, nevertheless, be assisted with computational tools, for instance, by predicting performance based on historical data. We investigate the effects of showing grade predictions to students through an interactive visualization tool. A qualitative study suggests that in the presence of predictions, students may focus too much on maximizing their performance, to the detriment of other factors such as the workload. A follow-up quantitative study explored whether these effects are mitigated by changing how predictions are conveyed. Our observations suggest the presence of a framing effect that induces students to put more effort into course selection when faced with more specific predictions. We discuss these and other findings and outline considerations for designing better data-driven course selection tools.
[ { "created": "Wed, 31 Mar 2021 22:32:21 GMT", "version": "v1" } ]
2021-04-02
[ [ "Méndez", "Gonzalo Gabriel", "" ], [ "Galárraga", "Luis", "" ], [ "Chiluiza", "Katherine", "" ] ]
Course selection is a crucial activity for students as it directly impacts their workload and performance. It is also time-consuming, prone to subjectivity, and often carried out based on incomplete information. This task can, nevertheless, be assisted with computational tools, for instance, by predicting performance based on historical data. We investigate the effects of showing grade predictions to students through an interactive visualization tool. A qualitative study suggests that in the presence of predictions, students may focus too much on maximizing their performance, to the detriment of other factors such as the workload. A follow-up quantitative study explored whether these effects are mitigated by changing how predictions are conveyed. Our observations suggest the presence of a framing effect that induces students to put more effort into course selection when faced with more specific predictions. We discuss these and other findings and outline considerations for designing better data-driven course selection tools.
1705.09180
Jingjin Yu
Shuai D. Han, Nicholas M. Stiffler, Athansios Krontiris, Kostas E. Bekris and Jingjin Yu
High-Quality Tabletop Rearrangement with Overhand Grasps: Hardness Results and Fast Methods
Updated manuscript
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the underlying combinatorial structure of a class of object rearrangement problems, which appear frequently in applications. The problems involve multiple, similar-geometry objects placed on a flat, horizontal surface, where a robot can approach them from above and perform pick-and-place operations to rearrange them. The paper considers both the case where the start and goal object poses overlap, and where they do not. For overlapping poses, the primary objective is to minimize the number of pick-and-place actions and then to minimize the distance traveled by the end-effector. For the non-overlapping case, the objective is solely to minimize the end-effector distance. While such problems do not involve all the complexities of general rearrangement, they remain computationally hard challenges in both cases. This is shown through two-way reductions between well-understood, hard combinatorial challenges and these rearrangement problems. The benefit of the reduction is that there are well-studied algorithms for solving these well-established combinatorial challenges. These algorithms can be very efficient in practice despite the hardness results. The paper builds on these reduction results to propose an algorithmic pipeline for dealing with the rearrangement problems. Experimental evaluation shows that the proposed pipeline achieves high-quality paths with regard to the optimization objectives. Furthermore, it exhibits highly desirable scalability as the number of objects increases in both the overlapping and non-overlapping setups.
[ { "created": "Thu, 25 May 2017 13:54:27 GMT", "version": "v1" }, { "created": "Sun, 11 Jun 2017 14:08:43 GMT", "version": "v2" }, { "created": "Tue, 20 Jun 2017 20:21:47 GMT", "version": "v3" } ]
2017-06-22
[ [ "Han", "Shuai D.", "" ], [ "Stiffler", "Nicholas M.", "" ], [ "Krontiris", "Athansios", "" ], [ "Bekris", "Kostas E.", "" ], [ "Yu", "Jingjin", "" ] ]
This paper studies the underlying combinatorial structure of a class of object rearrangement problems, which appear frequently in applications. The problems involve multiple, similar-geometry objects placed on a flat, horizontal surface, where a robot can approach them from above and perform pick-and-place operations to rearrange them. The paper considers both the case where the start and goal object poses overlap, and where they do not. For overlapping poses, the primary objective is to minimize the number of pick-and-place actions and then to minimize the distance traveled by the end-effector. For the non-overlapping case, the objective is solely to minimize the end-effector distance. While such problems do not involve all the complexities of general rearrangement, they remain computationally hard challenges in both cases. This is shown through two-way reductions between well-understood, hard combinatorial challenges and these rearrangement problems. The benefit of the reduction is that there are well-studied algorithms for solving these well-established combinatorial challenges. These algorithms can be very efficient in practice despite the hardness results. The paper builds on these reduction results to propose an algorithmic pipeline for dealing with the rearrangement problems. Experimental evaluation shows that the proposed pipeline achieves high-quality paths with regard to the optimization objectives. Furthermore, it exhibits highly desirable scalability as the number of objects increases in both the overlapping and non-overlapping setups.
2105.10859
Dipika Singhania
Dipika Singhania, Rahul Rahaman, Angela Yao
Coarse to Fine Multi-Resolution Temporal Convolutional Network
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Temporal convolutional networks (TCNs) are a commonly used architecture for temporal video segmentation. TCNs, however, tend to suffer from over-segmentation errors and require additional refinement modules to ensure smoothness and temporal coherency. In this work, we propose a novel temporal encoder-decoder to tackle the problem of sequence fragmentation. In particular, the decoder follows a coarse-to-fine structure with an implicit ensemble of multiple temporal resolutions. The ensembling produces smoother segmentations that are more accurate and better-calibrated, bypassing the need for additional refinement modules. In addition, we enhance our training with a multi-resolution feature-augmentation strategy to promote robustness to varying temporal resolutions. Finally, to support our architecture and encourage further sequence coherency, we propose an action loss that penalizes misclassifications at the video level. Experiments show that our stand-alone architecture, together with our novel feature-augmentation strategy and new loss, outperforms the state-of-the-art on three temporal video segmentation benchmarks.
[ { "created": "Sun, 23 May 2021 06:07:40 GMT", "version": "v1" } ]
2021-05-25
[ [ "Singhania", "Dipika", "" ], [ "Rahaman", "Rahul", "" ], [ "Yao", "Angela", "" ] ]
Temporal convolutional networks (TCNs) are a commonly used architecture for temporal video segmentation. TCNs, however, tend to suffer from over-segmentation errors and require additional refinement modules to ensure smoothness and temporal coherency. In this work, we propose a novel temporal encoder-decoder to tackle the problem of sequence fragmentation. In particular, the decoder follows a coarse-to-fine structure with an implicit ensemble of multiple temporal resolutions. The ensembling produces smoother segmentations that are more accurate and better-calibrated, bypassing the need for additional refinement modules. In addition, we enhance our training with a multi-resolution feature-augmentation strategy to promote robustness to varying temporal resolutions. Finally, to support our architecture and encourage further sequence coherency, we propose an action loss that penalizes misclassifications at the video level. Experiments show that our stand-alone architecture, together with our novel feature-augmentation strategy and new loss, outperforms the state-of-the-art on three temporal video segmentation benchmarks.
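The implicit ensemble described above amounts to upsampling per-frame logits predicted at several temporal resolutions to a common length and averaging them. A minimal PyTorch sketch, assuming each element of `logits_by_res` has shape (batch, classes, frames_at_that_resolution); this illustrates only the ensembling step, not the full encoder-decoder.

```python
import torch
import torch.nn.functional as F

def multires_ensemble(logits_by_res, target_len):
    """Upsample each resolution's logits to target_len frames and average,
    forming the implicit multi-resolution ensemble (illustrative sketch)."""
    ups = [F.interpolate(l, size=target_len, mode="linear", align_corners=False)
           for l in logits_by_res]                 # each l: (B, C, T_r) -> (B, C, T)
    return torch.stack(ups, dim=0).mean(dim=0)     # (B, C, target_len)
```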
1805.12308
Luliang Jia
Luliang Jia, Yuhua Xu, Youming Sun, Shuo Feng, and Alagan Anpalagan
Stackelberg Game Approaches for Anti-jamming Defence in Wireless Networks
8 pages, 6 figures, to appear in IEEE Wireless Communications
null
null
null
cs.GT cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article investigates the anti-jamming communications problem in wireless networks from a Stackelberg game perspective. By exploring and analyzing the inherent characteristics of the anti-jamming problem, we present and discuss some technical challenges and the fundamental requirements to address them. To be specific, the adversarial characteristic, incomplete information constraints, dynamics, uncertainty, dense deployment, and heterogeneous features bring technical challenges to anti-jamming communications in wireless networks. Then, for the purpose of improving system performance, four requirements for anti-jamming communications are presented and discussed. Leveraging the advantages of the Stackelberg game model in the anti-jamming field, we formulate an anti-jamming decision-making framework based on the Stackelberg game for anti-jamming defence in wireless networks. Moreover, two preliminary case studies are presented and discussed for a better understanding of the anti-jamming Stackelberg game problem. Finally, some future research directions are also provided.
[ { "created": "Thu, 31 May 2018 03:28:57 GMT", "version": "v1" } ]
2018-06-01
[ [ "Jia", "Luliang", "" ], [ "Xu", "Yuhua", "" ], [ "Sun", "Youming", "" ], [ "Feng", "Shuo", "" ], [ "Anpalagan", "Alagan", "" ] ]
This article investigates the anti-jamming communications problem in wireless networks from a Stackelberg game perspective. By exploring and analyzing the inherent characteristics of the anti-jamming problem, we present and discuss some technical challenges and the fundamental requirements to address them. To be specific, the adversarial characteristic, incomplete information constraints, dynamics, uncertainty, dense deployment, and heterogeneous features bring technical challenges to anti-jamming communications in wireless networks. Then, for the purpose of improving system performance, four requirements for anti-jamming communications are presented and discussed. Leveraging the advantages of the Stackelberg game model in the anti-jamming field, we formulate an anti-jamming decision-making framework based on the Stackelberg game for anti-jamming defence in wireless networks. Moreover, two preliminary case studies are presented and discussed for a better understanding of the anti-jamming Stackelberg game problem. Finally, some future research directions are also provided.
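In a Stackelberg anti-jamming game, the transmitter (leader) commits to a strategy while anticipating the jammer's (follower's) best response. The toy NumPy sketch below solves a one-dimensional power-control instance by nested grid search; channel gains, costs, and utility forms are illustrative assumptions, not the article's case studies.

```python
import numpy as np

def stackelberg_power_control(h_u=1.0, h_j=0.8, noise=0.1,
                              c_u=0.5, c_j=0.6,
                              grid=np.linspace(0.01, 5, 200)):
    """Leader picks transmit power anticipating the jammer's best response
    (toy utilities: rate minus linear power cost)."""
    def user_utility(pu, pj):
        return np.log2(1 + h_u * pu / (noise + h_j * pj)) - c_u * pu
    def jammer_utility(pu, pj):
        return -np.log2(1 + h_u * pu / (noise + h_j * pj)) - c_j * pj
    def jammer_best_response(pu):                   # follower's inner problem
        return grid[np.argmax([jammer_utility(pu, pj) for pj in grid])]
    best_pu = max(grid, key=lambda pu: user_utility(pu, jammer_best_response(pu)))
    return best_pu, jammer_best_response(best_pu)   # Stackelberg equilibrium (approx.)
```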
1902.10895
Jordan Malof
Wei Hu, Kyle Bradbury, Jordan M. Malof, Boning Li, Bohao Huang, Artem Streltsov, K. Sydny Fujita, and Ben Hoen
What you get is not always what you see: pitfalls in solar array assessment using overhead imagery
25 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective integration planning for small, distributed solar photovoltaic (PV) arrays into electric power grids requires access to high-quality data: the location and power capacity of individual solar PV arrays. Unfortunately, comprehensive national databases of small-scale solar PV do not exist; the databases that do exist are limited in their spatial resolution, typically aggregated up to state or national levels. While several promising approaches for solar PV detection have been published, strategies for evaluating the performance of these models are often highly heterogeneous from study to study. This makes comparing these methods for practical energy assessments challenging and may imply that the reported performance evaluations are overly optimistic. The heterogeneity comes in many forms, each of which we explore in this work: the level of spatial aggregation, the validation of ground truth, inconsistencies in the training and validation datasets, and the degree of diversity of the locations and sensors from which the training and validation data originate. For each, we discuss emerging practices from the literature to address them or suggest directions of future research. As part of our investigation, we evaluate solar PV identification performance in two large regions. Our findings suggest that traditional performance evaluation of the automated identification of solar PV from satellite imagery may be optimistic due to common limitations in the validation process. The takeaways from this work are intended to inform and catalyze the large-scale practical application of automated solar PV assessment techniques by energy researchers and professionals.
[ { "created": "Thu, 28 Feb 2019 05:10:08 GMT", "version": "v1" }, { "created": "Mon, 25 Jul 2022 22:09:37 GMT", "version": "v2" } ]
2022-07-27
[ [ "Hu", "Wei", "" ], [ "Bradbury", "Kyle", "" ], [ "Malof", "Jordan M.", "" ], [ "Li", "Boning", "" ], [ "Huang", "Bohao", "" ], [ "Streltsov", "Artem", "" ], [ "Fujita", "K. Sydny", "" ], [ "Hoen", "Ben", "" ] ]
Effective integration planning for small, distributed solar photovoltaic (PV) arrays into electric power grids requires access to high-quality data: the location and power capacity of individual solar PV arrays. Unfortunately, comprehensive national databases of small-scale solar PV do not exist; the databases that do exist are limited in their spatial resolution, typically aggregated up to state or national levels. While several promising approaches for solar PV detection have been published, strategies for evaluating the performance of these models are often highly heterogeneous from study to study. This makes comparing these methods for practical energy assessments challenging and may imply that the reported performance evaluations are overly optimistic. The heterogeneity comes in many forms, each of which we explore in this work: the level of spatial aggregation, the validation of ground truth, inconsistencies in the training and validation datasets, and the degree of diversity of the locations and sensors from which the training and validation data originate. For each, we discuss emerging practices from the literature to address them or suggest directions of future research. As part of our investigation, we evaluate solar PV identification performance in two large regions. Our findings suggest that traditional performance evaluation of the automated identification of solar PV from satellite imagery may be optimistic due to common limitations in the validation process. The takeaways from this work are intended to inform and catalyze the large-scale practical application of automated solar PV assessment techniques by energy researchers and professionals.
2302.09419
Ce Zhou
Ce Zhou (1), Qian Li (2), Chen Li (2), Jun Yu (3), Yixin Liu (3), Guangjing Wang (1), Kai Zhang (3), Cheng Ji (2), Qiben Yan (1), Lifang He (3), Hao Peng (2), Jianxin Li (2), Jia Wu (4), Ziwei Liu (5), Pengtao Xie (6), Caiming Xiong (7), Jian Pei (8), Philip S. Yu (9), Lichao Sun (3) ((1) Michigan State University, (2) Beihang University, (3) Lehigh University, (4) Macquarie University, (5) Nanyang Technological University, (6) University of California San Diego, (7) Salesforce AI Research, (8) Duke University, (9) University of Illinois at Chicago)
A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT
99 pages, 16 figures
null
null
null
cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale data, which provides a reasonable parameter initialization for a wide range of downstream applications. BERT learns bidirectional encoder representations from Transformers, which are trained on large datasets as contextual language models. Similarly, the generative pretrained transformer (GPT) method employs Transformers as the feature extractor and is trained using an autoregressive paradigm on large datasets. Recently, ChatGPT has shown promising success among large language models, applying an autoregressive language model with zero-shot or few-shot prompting. The remarkable achievements of PFMs have brought significant breakthroughs to various fields of AI. Numerous studies have proposed different methods, raising the demand for an updated survey. This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, and other data modalities. The review covers the basic components and existing pretraining methods used in natural language processing, computer vision, and graph learning. Additionally, it explores advanced PFMs for different data modalities and unified PFMs that consider data quality and quantity. The review also discusses research related to the fundamentals of PFMs, such as model efficiency and compression, security, and privacy. Finally, the study provides key implications, future research directions, challenges, and open problems in the field of PFMs. Overall, this survey aims to shed light on research into PFMs with respect to scalability, security, logical reasoning ability, cross-domain learning ability, and user-friendly interactivity toward artificial general intelligence.
[ { "created": "Sat, 18 Feb 2023 20:51:09 GMT", "version": "v1" }, { "created": "Thu, 30 Mar 2023 14:44:09 GMT", "version": "v2" }, { "created": "Mon, 1 May 2023 07:48:05 GMT", "version": "v3" } ]
2023-05-02
[ [ "Zhou", "Ce", "" ], [ "Li", "Qian", "" ], [ "Li", "Chen", "" ], [ "Yu", "Jun", "" ], [ "Liu", "Yixin", "" ], [ "Wang", "Guangjing", "" ], [ "Zhang", "Kai", "" ], [ "Ji", "Cheng", "" ], [ "Yan", "Qiben", "" ], [ "He", "Lifang", "" ], [ "Peng", "Hao", "" ], [ "Li", "Jianxin", "" ], [ "Wu", "Jia", "" ], [ "Liu", "Ziwei", "" ], [ "Xie", "Pengtao", "" ], [ "Xiong", "Caiming", "" ], [ "Pei", "Jian", "" ], [ "Yu", "Philip S.", "" ], [ "Sun", "Lichao", "" ] ]
Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is trained on large-scale data, which provides a reasonable parameter initialization for a wide range of downstream applications. BERT learns bidirectional encoder representations from Transformers, which are trained on large datasets as contextual language models. Similarly, the generative pretrained transformer (GPT) method employs Transformers as the feature extractor and is trained using an autoregressive paradigm on large datasets. Recently, ChatGPT has shown promising success among large language models, applying an autoregressive language model with zero-shot or few-shot prompting. The remarkable achievements of PFMs have brought significant breakthroughs to various fields of AI. Numerous studies have proposed different methods, raising the demand for an updated survey. This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, and other data modalities. The review covers the basic components and existing pretraining methods used in natural language processing, computer vision, and graph learning. Additionally, it explores advanced PFMs for different data modalities and unified PFMs that consider data quality and quantity. The review also discusses research related to the fundamentals of PFMs, such as model efficiency and compression, security, and privacy. Finally, the study provides key implications, future research directions, challenges, and open problems in the field of PFMs. Overall, this survey aims to shed light on research into PFMs with respect to scalability, security, logical reasoning ability, cross-domain learning ability, and user-friendly interactivity toward artificial general intelligence.
2006.13286
Chao Zhang
Chao Zhang, Yuanwei Liu, Zhijin Qin and Zhiguo Ding
Semi-Grant-Free NOMA: A Stochastic Geometry Model
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grant-free (GF) transmission holds promise for low-latency communication by directly transmitting messages without waiting for any permissions. However, collisions may frequently happen when limited spectrum is occupied by numerous GF users. The non-orthogonal multiple access (NOMA) technique can be a promising solution for achieving massive connectivity and fewer collisions in GF transmission by multiplexing users in the power domain. We utilize a semi-grant-free (semi-GF) NOMA scheme to enhance network connectivity and spectral efficiency by enabling grant-based (GB) and GF users to share the same spectrum resources. With the aid of semi-GF protocols, uplink NOMA networks are investigated by invoking stochastic geometry techniques. We propose a novel \textit{dynamic protocol} that determines which subset of the GF users is paired in NOMA transmissions by transmitting various channel quality thresholds through an added handshake. We utilize an open-loop protocol with a fixed average threshold as the benchmark to investigate the performance improvement. It is observed that the dynamic protocol provides more accurate channel quality thresholds than the open-loop protocol, thereby reducing the interference from the GF users to a large extent. We analyze the outage performance and diversity gains under the two protocols. Numerical results demonstrate that the dynamic protocol achieves better outage performance than the open-loop protocol.
[ { "created": "Tue, 23 Jun 2020 19:32:48 GMT", "version": "v1" } ]
2020-06-25
[ [ "Zhang", "Chao", "" ], [ "Liu", "Yuanwei", "" ], [ "Qin", "Zhijin", "" ], [ "Ding", "Zhiguo", "" ] ]
Grant-free (GF) transmission holds promise for low-latency communication by directly transmitting messages without waiting for any permissions. However, collisions may frequently happen when limited spectrum is occupied by numerous GF users. The non-orthogonal multiple access (NOMA) technique can be a promising solution for achieving massive connectivity and fewer collisions in GF transmission by multiplexing users in the power domain. We utilize a semi-grant-free (semi-GF) NOMA scheme to enhance network connectivity and spectral efficiency by enabling grant-based (GB) and GF users to share the same spectrum resources. With the aid of semi-GF protocols, uplink NOMA networks are investigated by invoking stochastic geometry techniques. We propose a novel \textit{dynamic protocol} that determines which subset of the GF users is paired in NOMA transmissions by transmitting various channel quality thresholds through an added handshake. We utilize an open-loop protocol with a fixed average threshold as the benchmark to investigate the performance improvement. It is observed that the dynamic protocol provides more accurate channel quality thresholds than the open-loop protocol, thereby reducing the interference from the GF users to a large extent. We analyze the outage performance and diversity gains under the two protocols. Numerical results demonstrate that the dynamic protocol achieves better outage performance than the open-loop protocol.
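The effect of a fixed admission threshold, as in the open-loop benchmark, can be explored with a quick Monte Carlo: GF users transmit only when their channel gain exceeds tau, and the grant-based user's outage is measured under the resulting interference. The sketch below treats admitted GF signals purely as interference (no successive interference cancellation), so it is a deliberately simplified stand-in for the paper's stochastic-geometry analysis; all powers, gains, and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gb_outage(tau, n_gf=10, p_gb=1.0, p_gf=0.5, noise=0.01,
              rate_th=1.0, trials=100_000):
    """Monte Carlo outage of the grant-based user when GF users with
    Rayleigh channel gain above tau transmit (illustrative sketch)."""
    sinr_th = 2.0 ** rate_th - 1.0
    g_gb = rng.exponential(size=trials)                 # GB channel gain
    g_gf = rng.exponential(size=(trials, n_gf))         # GF channel gains
    interf = (p_gf * g_gf * (g_gf > tau)).sum(axis=1)   # admitted GF users only
    sinr = p_gb * g_gb / (noise + interf)
    return float((sinr < sinr_th).mean())

# raising tau admits fewer GF interferers, so GB outage drops:
# gb_outage(0.5) > gb_outage(2.0)
```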
1206.4914
Mario Alejandro Castrillon
Mario A. Castrillon, Damian A. Morero, and Mario R. Hueda
Joint Demapping and Decoding for DQPSK Optical Coherent Receivers
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a low-complexity joint demapper-decoder scheme for coherent optical receivers with DQPSK modulation. The new technique reduces the gap between QPSK and DQPSK in 100 Gb/s coherent optical systems to 0.7 dB.
[ { "created": "Thu, 21 Jun 2012 15:25:05 GMT", "version": "v1" } ]
2012-06-22
[ [ "Castrillon", "Mario A.", "" ], [ "Morero", "Damian A.", "" ], [ "Hueda", "Mario R.", "" ] ]
We present a low-complexity joint demapper-decoder scheme for coherent optical receivers with DQPSK modulation. The new technique reduces the gap between QPSK and DQPSK in 100 Gb/s coherent optical systems to 0.7 dB.
2001.01049
Ziling Heng
Ziling Heng, Cunsheng Ding, Weiqiong Wang
Optimal Binary Linear Codes from Maximal Arcs
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The binary Hamming codes with parameters $[2^m-1, 2^m-1-m, 3]$ are perfect. Their extended codes have parameters $[2^m, 2^m-1-m, 4]$ and are distance-optimal. The first objective of this paper is to construct a class of binary linear codes with parameters $[2^{m+s}+2^s-2^m,2^{m+s}+2^s-2^m-2m-2,4]$, which have better information rates than the class of extended binary Hamming codes, and are also distance-optimal. The second objective is to construct a class of distance-optimal binary codes with parameters $[2^m+2, 2^m-2m, 6]$. Both classes of binary linear codes have new parameters.
[ { "created": "Sat, 4 Jan 2020 07:02:18 GMT", "version": "v1" } ]
2020-01-07
[ [ "Heng", "Ziling", "" ], [ "Ding", "Cunsheng", "" ], [ "Wang", "Weiqiong", "" ] ]
The binary Hamming codes with parameters $[2^m-1, 2^m-1-m, 3]$ are perfect. Their extended codes have parameters $[2^m, 2^m-1-m, 4]$ and are distance-optimal. The first objective of this paper is to construct a class of binary linear codes with parameters $[2^{m+s}+2^s-2^m,2^{m+s}+2^s-2^m-2m-2,4]$, which have better information rates than the class of extended binary Hamming codes, and are also distance-optimal. The second objective is to construct a class of distance-optimal binary codes with parameters $[2^m+2, 2^m-2m, 6]$. Both classes of binary linear codes have new parameters.
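The extended binary Hamming baseline [2^m, 2^m-1-m, 4] that this paper improves on is easy to verify exhaustively for small m: build the Hamming parity-check matrix whose columns are the nonzero m-bit vectors, enumerate its null space, append an overall parity bit, and take the minimum nonzero weight. A brute-force NumPy check (exponential enumeration, so only sensible for small m):

```python
import itertools
import numpy as np

def extended_hamming_params(m=3):
    """Return (length, dimension, minimum distance) of the extended
    binary Hamming code, checked by exhaustive enumeration."""
    n = 2 ** m - 1
    H = np.array([[(j >> i) & 1 for j in range(1, n + 1)]
                  for i in range(m)])                     # m x n parity check
    codewords = [np.array(c) for c in itertools.product([0, 1], repeat=n)
                 if not (H @ np.array(c) % 2).any()]      # null space of H
    ext = [np.append(c, c.sum() % 2) for c in codewords]  # overall parity bit
    weights = [int(c.sum()) for c in ext if c.any()]
    return len(ext[0]), int(np.log2(len(ext))), min(weights)

print(extended_hamming_params(3))   # (8, 4, 4), i.e. [2^3, 2^3 - 1 - 3, 4]
```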
2104.08638
Priyanka Bose
Priyanka Bose, Dipanjan Das, Yanju Chen, Yu Feng, Christopher Kruegel, Giovanni Vigna
SAILFISH: Vetting Smart Contract State-Inconsistency Bugs in Seconds
null
IEEE Symposium on Security & Privacy, May 2022
null
null
cs.CR cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents SAILFISH, a scalable system for automatically finding state-inconsistency bugs in smart contracts. To make the analysis tractable, we introduce a hybrid approach that includes (i) a light-weight exploration phase that dramatically reduces the number of instructions to analyze, and (ii) a precise refinement phase based on symbolic evaluation guided by our novel value-summary analysis, which generates extra constraints to over-approximate the side effects of whole-program execution, thereby ensuring the precision of the symbolic evaluation. We developed a prototype of SAILFISH and evaluated its ability to detect two state-inconsistency flaws, viz., reentrancy and transaction order dependence (TOD), in Ethereum smart contracts. Further, we present detection rules for other kinds of smart contract flaws that SAILFISH can be extended to detect. Our experiments demonstrate the efficiency of our hybrid approach as well as the benefit of the value-summary analysis. In particular, we show that SAILFISH outperforms five state-of-the-art smart contract analyzers (SECURIFY, MYTHRIL, OYENTE, SEREUM, and VANDAL) in terms of performance and precision. In total, SAILFISH discovered 47 previously unknown vulnerable smart contracts out of 89,853 smart contracts from ETHERSCAN.
[ { "created": "Sat, 17 Apr 2021 20:21:07 GMT", "version": "v1" }, { "created": "Mon, 13 Dec 2021 04:23:57 GMT", "version": "v2" } ]
2021-12-14
[ [ "Bose", "Priyanka", "" ], [ "Das", "Dipanjan", "" ], [ "Chen", "Yanju", "" ], [ "Feng", "Yu", "" ], [ "Kruegel", "Christopher", "" ], [ "Vigna", "Giovanni", "" ] ]
This paper presents SAILFISH, a scalable system for automatically finding state-inconsistency bugs in smart contracts. To make the analysis tractable, we introduce a hybrid approach that includes (i) a light-weight exploration phase that dramatically reduces the number of instructions to analyze, and (ii) a precise refinement phase based on symbolic evaluation guided by our novel value-summary analysis, which generates extra constraints to over-approximate the side effects of whole-program execution, thereby ensuring the precision of the symbolic evaluation. We developed a prototype of SAILFISH and evaluated its ability to detect two state-inconsistency flaws, viz., reentrancy and transaction order dependence (TOD), in Ethereum smart contracts. Further, we present detection rules for other kinds of smart contract flaws that SAILFISH can be extended to detect. Our experiments demonstrate the efficiency of our hybrid approach as well as the benefit of the value-summary analysis. In particular, we show that SAILFISH outperforms five state-of-the-art smart contract analyzers (SECURIFY, MYTHRIL, OYENTE, SEREUM, and VANDAL) in terms of performance and precision. In total, SAILFISH discovered 47 previously unknown vulnerable smart contracts out of 89,853 smart contracts from ETHERSCAN.
1811.11553
Michael Alcorn
Michael A. Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, Anh Nguyen
Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects
Poster at the 2019 Conference on Computer Vision and Pattern Recognition
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones, which are common in real-world settings. In this paper, we present a framework for discovering DNN failures that harnesses 3D renderers and 3D models. That is, we estimate the parameters of a 3D renderer that cause a target DNN to misbehave in response to the rendered image. Using our framework and a self-assembled dataset of 3D objects, we investigate the vulnerability of DNNs to OoD poses of well-known objects in ImageNet. For objects that are readily recognized by DNNs in their canonical poses, DNNs incorrectly classify 97% of their pose space. In addition, DNNs are highly sensitive to slight pose perturbations. Importantly, adversarial poses transfer across models and datasets. We find that 99.9% and 99.4% of the poses misclassified by Inception-v3 also transfer to the AlexNet and ResNet-50 image classifiers trained on the same ImageNet dataset, respectively, and 75.5% transfer to the YOLOv3 object detector trained on MS COCO.
[ { "created": "Wed, 28 Nov 2018 13:39:27 GMT", "version": "v1" }, { "created": "Sun, 13 Jan 2019 23:55:45 GMT", "version": "v2" }, { "created": "Thu, 18 Apr 2019 13:54:20 GMT", "version": "v3" } ]
2019-04-19
[ [ "Alcorn", "Michael A.", "" ], [ "Li", "Qi", "" ], [ "Gong", "Zhitao", "" ], [ "Wang", "Chengfei", "" ], [ "Mai", "Long", "" ], [ "Ku", "Wei-Shinn", "" ], [ "Nguyen", "Anh", "" ] ]
Despite excellent performance on stationary test sets, deep neural networks (DNNs) can fail to generalize to out-of-distribution (OoD) inputs, including natural, non-adversarial ones, which are common in real-world settings. In this paper, we present a framework for discovering DNN failures that harnesses 3D renderers and 3D models. That is, we estimate the parameters of a 3D renderer that cause a target DNN to misbehave in response to the rendered image. Using our framework and a self-assembled dataset of 3D objects, we investigate the vulnerability of DNNs to OoD poses of well-known objects in ImageNet. For objects that are readily recognized by DNNs in their canonical poses, DNNs incorrectly classify 97% of their pose space. In addition, DNNs are highly sensitive to slight pose perturbations. Importantly, adversarial poses transfer across models and datasets. We find that 99.9% and 99.4% of the poses misclassified by Inception-v3 also transfer to the AlexNet and ResNet-50 image classifiers trained on the same ImageNet dataset, respectively, and 75.5% transfer to the YOLOv3 object detector trained on MS COCO.
2212.05891
Jia-Rui Lin
Zhe Zheng, Bo-Rui Kang, Qi-Tian Yuan, Yu-Cheng Zhou, Xin-Zheng Lu, Jia-Rui Lin
Text Mining-Based Patent Analysis for Automated Rule Checking in AEC
null
null
null
null
cs.IR cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Automated rule checking (ARC), which is expected to promote the efficiency of the compliance checking process in the architecture, engineering, and construction (AEC) industry, is gaining increasing attention. Throwing light on the ARC application hotspots and forecasting its trends are useful to the related research and drive innovations. Therefore, this study takes the patents from the Derwent Innovations Index (DII) and China National Knowledge Infrastructure (CNKI) databases as data sources and then carries out a three-step analysis including (1) quantitative characteristics (i.e., annual distribution analysis) of patents, (2) identification of ARC topics using latent Dirichlet allocation (LDA), and (3) SNA-based co-occurrence analysis of ARC topics. The results show that the research hotspots and trends of Chinese and English patents are different. The contributions of this study have three aspects: (1) an approach to a comprehensive analysis of patents by integrating multiple text mining methods (i.e., SNA and LDA) is introduced; (2) the application hotspots and development trends of ARC are reviewed based on patent analysis; and (3) a signpost for technological development and innovation of ARC is provided.
[ { "created": "Mon, 12 Dec 2022 13:48:38 GMT", "version": "v1" } ]
2022-12-13
[ [ "Zheng", "Zhe", "" ], [ "Kang", "Bo-Rui", "" ], [ "Yuan", "Qi-Tian", "" ], [ "Zhou", "Yu-Cheng", "" ], [ "Lu", "Xin-Zheng", "" ], [ "Lin", "Jia-Rui", "" ] ]
Automated rule checking (ARC), which is expected to promote the efficiency of the compliance checking process in the architecture, engineering, and construction (AEC) industry, is gaining increasing attention. Throwing light on the ARC application hotspots and forecasting its trends are useful to the related research and drive innovations. Therefore, this study takes the patents from the Derwent Innovations Index (DII) and China National Knowledge Infrastructure (CNKI) databases as data sources and then carries out a three-step analysis including (1) quantitative characteristics (i.e., annual distribution analysis) of patents, (2) identification of ARC topics using latent Dirichlet allocation (LDA), and (3) SNA-based co-occurrence analysis of ARC topics. The results show that the research hotspots and trends of Chinese and English patents are different. The contributions of this study have three aspects: (1) an approach to a comprehensive analysis of patents by integrating multiple text mining methods (i.e., SNA and LDA) is introduced; (2) the application hotspots and development trends of ARC are reviewed based on patent analysis; and (3) a signpost for technological development and innovation of ARC is provided.
2201.09051
Marco Virgolin
Marco Virgolin and Saverio Fracaros
On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Counterfactual explanations (CEs) are a powerful means for understanding how decisions made by algorithms can be changed. Researchers have proposed a number of desiderata that CEs should meet to be practically useful, such as requiring minimal effort to enact, or complying with causal models. We consider a further aspect to improve the usability of CEs: robustness to adverse perturbations, which may naturally happen due to unfortunate circumstances. Since CEs typically prescribe a sparse form of intervention (i.e., only a subset of the features should be changed), we study the effect of addressing robustness separately for the features that are recommended to be changed and those that are not. Our definitions are workable in that they can be incorporated as penalty terms in the loss functions that are used for discovering CEs. To experiment with robustness, we create and release code where five data sets (commonly used in the field of fair and explainable machine learning) have been enriched with feature-specific annotations that can be used to sample meaningful perturbations. Our experiments show that CEs are often not robust and, if adverse perturbations take place (even if not worst-case), the intervention they prescribe may require a much larger cost than anticipated, or even become impossible. However, accounting for robustness in the search process, which can be done rather easily, allows discovering robust CEs systematically. Robust CEs make the additional interventions needed to counteract perturbations much less costly than non-robust CEs. We also find that robustness is easier to achieve for the features to change, posing an important point of consideration for the choice of what counterfactual explanation is best for the user. Our code is available at: https://github.com/marcovirgolin/robust-counterfactuals.
[ { "created": "Sat, 22 Jan 2022 13:57:45 GMT", "version": "v1" }, { "created": "Thu, 3 Mar 2022 09:33:49 GMT", "version": "v2" }, { "created": "Fri, 23 Sep 2022 15:00:08 GMT", "version": "v3" } ]
2022-09-26
[ [ "Virgolin", "Marco", "" ], [ "Fracaros", "Saverio", "" ] ]
Counterfactual explanations (CEs) are a powerful means for understanding how decisions made by algorithms can be changed. Researchers have proposed a number of desiderata that CEs should meet to be practically useful, such as requiring minimal effort to enact, or complying with causal models. We consider a further aspect to improve the usability of CEs: robustness to adverse perturbations, which may naturally happen due to unfortunate circumstances. Since CEs typically prescribe a sparse form of intervention (i.e., only a subset of the features should be changed), we study the effect of addressing robustness separately for the features that are recommended to be changed and those that are not. Our definitions are workable in that they can be incorporated as penalty terms in the loss functions that are used for discovering CEs. To experiment with robustness, we create and release code where five data sets (commonly used in the field of fair and explainable machine learning) have been enriched with feature-specific annotations that can be used to sample meaningful perturbations. Our experiments show that CEs are often not robust and, if adverse perturbations take place (even if not worst-case), the intervention they prescribe may require a much larger cost than anticipated, or even become impossible. However, accounting for robustness in the search process, which can be done rather easily, allows discovering robust CEs systematically. Robust CEs make the additional interventions needed to counteract perturbations much less costly than non-robust CEs. We also find that robustness is easier to achieve for the features to change, posing an important point of consideration for the choice of what counterfactual explanation is best for the user. Our code is available at: https://github.com/marcovirgolin/robust-counterfactuals.
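A sketch of how the robustness desideratum above could enter a counterfactual search as a penalty term, as the abstract suggests; the distance, perturbation sampler, and weights below are hypothetical placeholders rather than the paper's exact formulation (the linked repository has the real one).

    import numpy as np

    def ce_loss(x, x_cf, predictor, sample_perturbation, target=1,
                lam_dist=1.0, lam_rob=1.0, n_samples=50):
        # Proximity: prefer counterfactuals that change the input little.
        dist = float(np.abs(x_cf - x).sum())
        # Validity: the counterfactual must flip the prediction to `target`.
        validity = 0.0 if predictor(x_cf) == target else 1e3
        # Robustness penalty: fraction of sampled adverse perturbations under
        # which the counterfactual stops achieving the target outcome.
        failures = sum(predictor(sample_perturbation(x_cf)) != target
                       for _ in range(n_samples))
        return lam_dist * dist + validity + lam_rob * failures / n_samples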
1106.1516
Francesco De Pellegrini Dr.
Francesco De Pellegrini, Karina Gomez, Daniele Miorandi and Imrich Chlamtac
Distributed Wake-Up Scheduling for Energy Saving in Wireless Networks
13 pages, 4 figures
null
null
null
cs.NI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A customary solution to reduce the energy consumption of wireless communication devices is to periodically put the radio into low-power sleep mode. A relevant problem is to schedule the wake-up of nodes in such a way as to ensure proper coordination among devices, respecting delay constraints while still saving energy. In this paper, we introduce a simple algebraic characterization of the problem of periodic wake-up scheduling under both energy consumption and delay constraints. We demonstrate that the general problem of wake-up times coordination is equivalent to integer factorization and discuss the implications on the design of efficient scheduling algorithms. We then propose simple polynomial time heuristic algorithms that can be implemented in a distributed fashion and present a message complexity of the order of the number of links in the network. Numerical results are provided in order to assess the performance of the proposed techniques when applied to wireless sensor networks.
[ { "created": "Wed, 8 Jun 2011 08:12:19 GMT", "version": "v1" } ]
2011-06-09
[ [ "De Pellegrini", "Francesco", "" ], [ "Gomez", "Karina", "" ], [ "Miorandi", "Daniele", "" ], [ "Chlamtac", "Imrich", "" ] ]
A customary solution to reduce the energy consumption of wireless communication devices is to periodically put the radio into low-power sleep mode. A relevant problem is to schedule the wake-up of nodes in such a way as to ensure proper coordination among devices, respecting delay constraints while still saving energy. In this paper, we introduce a simple algebraic characterization of the problem of periodic wake-up scheduling under both energy consumption and delay constraints. We demonstrate that the general problem of wake-up times coordination is equivalent to integer factorization and discuss the implications on the design of efficient scheduling algorithms. We then propose simple polynomial time heuristic algorithms that can be implemented in a distributed fashion and present a message complexity of the order of the number of links in the network. Numerical results are provided in order to assess the performance of the proposed techniques when applied to wireless sensor networks.
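To give a flavor of the algebraic view mentioned in this abstract: two nodes waking periodically with periods p1, p2 and offsets o1, o2 share a common wake-up slot exactly when gcd(p1, p2) divides o2 - o1, a standard fact about arithmetic progressions. A sketch of that coincidence test (not the paper's actual formulation):

    from math import gcd

    def schedules_meet(p1, o1, p2, o2):
        # Wake-up times are {o1 + k*p1} and {o2 + m*p2}; they intersect
        # iff gcd(p1, p2) divides o2 - o1.
        return (o2 - o1) % gcd(p1, p2) == 0

    # Example: periods 6 and 4 with offsets 1 and 3 meet (e.g., at time 7).
    assert schedules_meet(6, 1, 4, 3)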
2303.18087
Esther Rolf
Esther Rolf
Evaluation Challenges for Geospatial ML
ICLR 2023 Workshop on Machine Learning for Remote Sensing
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As geospatial machine learning models and maps derived from their predictions are increasingly used for downstream analyses in science and policy, it is imperative to evaluate their accuracy and applicability. Geospatial machine learning has key distinctions from other learning paradigms, and as such, the correct way to measure performance of spatial machine learning outputs has been a topic of debate. In this paper, I delineate unique challenges of model evaluation for geospatial machine learning with global or remotely sensed datasets, culminating in concrete takeaways to improve evaluations of geospatial model performance.
[ { "created": "Fri, 31 Mar 2023 14:24:06 GMT", "version": "v1" } ]
2023-04-03
[ [ "Rolf", "Esther", "" ] ]
As geospatial machine learning models and maps derived from their predictions are increasingly used for downstream analyses in science and policy, it is imperative to evaluate their accuracy and applicability. Geospatial machine learning has key distinctions from other learning paradigms, and as such, the correct way to measure performance of spatial machine learning outputs has been a topic of debate. In this paper, I delineate unique challenges of model evaluation for geospatial machine learning with global or remotely sensed datasets, culminating in concrete takeaways to improve evaluations of geospatial model performance.
2305.18665
Arshdeep Singh
Arshdeep Singh, Haohe Liu, Mark D. Plumbley
E-PANNs: Sound Recognition Using Efficient Pre-trained Audio Neural Networks
Accepted in Internoise 2023 conference
null
null
null
cs.SD cs.AI eess.AS eess.SP
http://creativecommons.org/licenses/by/4.0/
Sounds carry an abundance of information about activities and events in our everyday environment, such as traffic noise, road works, music, or people talking. Recent machine learning methods, such as convolutional neural networks (CNNs), have been shown to be able to automatically recognize sound activities, a task known as audio tagging. One such method, pre-trained audio neural networks (PANNs), provides a neural network which has been pre-trained on over 500 sound classes from the publicly available AudioSet dataset, and can be used as a baseline or starting point for other tasks. However, the existing PANNs model has a high computational complexity and large storage requirement. This could limit the potential for deploying PANNs on resource-constrained devices, such as on-the-edge sound sensors, and could lead to high energy consumption if many such devices were deployed. In this paper, we reduce the computational complexity and memory requirement of the PANNs model by taking a pruning approach to eliminate redundant parameters from the PANNs model. The resulting Efficient PANNs (E-PANNs) model, which requires 36\% less computation and 70\% less memory, also slightly improves the sound recognition (audio tagging) performance. The code for the E-PANNs model has been released under an open source license.
[ { "created": "Tue, 30 May 2023 00:08:55 GMT", "version": "v1" } ]
2023-05-31
[ [ "Singh", "Arshdeep", "" ], [ "Liu", "Haohe", "" ], [ "Plumbley", "Mark D.", "" ] ]
Sounds carry an abundance of information about activities and events in our everyday environment, such as traffic noise, road works, music, or people talking. Recent machine learning methods, such as convolutional neural networks (CNNs), have been shown to be able to automatically recognize sound activities, a task known as audio tagging. One such method, pre-trained audio neural networks (PANNs), provides a neural network which has been pre-trained on over 500 sound classes from the publicly available AudioSet dataset, and can be used as a baseline or starting point for other tasks. However, the existing PANNs model has a high computational complexity and large storage requirement. This could limit the potential for deploying PANNs on resource-constrained devices, such as on-the-edge sound sensors, and could lead to high energy consumption if many such devices were deployed. In this paper, we reduce the computational complexity and memory requirement of the PANNs model by taking a pruning approach to eliminate redundant parameters from the PANNs model. The resulting Efficient PANNs (E-PANNs) model, which requires 36\% less computation and 70\% less memory, also slightly improves the sound recognition (audio tagging) performance. The code for the E-PANNs model has been released under an open source license.
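As a hedged illustration of the kind of redundancy elimination described above, here is a generic magnitude-based filter-pruning sketch in PyTorch; the actual E-PANNs criterion and procedure may differ, and a real pipeline would remove pruned filters and rewire the next layer rather than just zeroing weights.

    import torch

    def prune_conv_filters(conv, keep_ratio=0.5):
        # Rank the filters of a Conv2d layer by L1 norm and zero out the weakest.
        with torch.no_grad():
            norms = conv.weight.abs().sum(dim=(1, 2, 3))   # one norm per filter
            n_keep = max(1, int(keep_ratio * norms.numel()))
            keep = torch.topk(norms, n_keep).indices
            mask = torch.zeros_like(norms, dtype=torch.bool)
            mask[keep] = True
            conv.weight[~mask] = 0.0
            if conv.bias is not None:
                conv.bias[~mask] = 0.0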
2002.09587
Zhanyu Wang
Zhanyu Wang and Jean Honorio
The Sample Complexity of Meta Sparse Regression
null
Artificial Intelligence and Statistics (AISTATS), 2021
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the meta-learning problem in sparse linear regression with infinite tasks. We assume that the learner can access several similar tasks. The goal of the learner is to transfer knowledge from the prior tasks to a similar but novel task. For p parameters, support set size k, and l samples per task, we show that T \in O((k \log(p))/l) tasks are sufficient in order to recover the common support of all tasks. With the recovered support, we can greatly reduce the sample complexity for estimating the parameter of the novel task, i.e., l \in O(1) with respect to T and p. We also prove that our rates are minimax optimal. A key difference between meta-learning and the classical multi-task learning is that meta-learning focuses only on the recovery of the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires l to grow with T. Instead, our efficient meta-learning estimator allows for l to be constant with respect to T (i.e., few-shot learning).
[ { "created": "Sat, 22 Feb 2020 00:59:53 GMT", "version": "v1" }, { "created": "Sun, 21 Jun 2020 18:35:21 GMT", "version": "v2" } ]
2021-02-19
[ [ "Wang", "Zhanyu", "" ], [ "Honorio", "Jean", "" ] ]
This paper addresses the meta-learning problem in sparse linear regression with infinite tasks. We assume that the learner can access several similar tasks. The goal of the learner is to transfer knowledge from the prior tasks to a similar but novel task. For p parameters, support set size k, and l samples per task, we show that T \in O((k \log(p))/l) tasks are sufficient in order to recover the common support of all tasks. With the recovered support, we can greatly reduce the sample complexity for estimating the parameter of the novel task, i.e., l \in O(1) with respect to T and p. We also prove that our rates are minimax optimal. A key difference between meta-learning and the classical multi-task learning is that meta-learning focuses only on the recovery of the parameters of the novel task, while multi-task learning estimates the parameters of all tasks, which requires l to grow with T. Instead, our efficient meta-learning estimator allows for l to be constant with respect to T (i.e., few-shot learning).
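Restating the abstract's scaling in display form (notation as in the abstract; constants are unstated, so this records only the claimed order of growth):

    T \in O\!\left(\frac{k \log p}{l}\right) \ \text{tasks suffice for support recovery},
    \qquad
    l \in O(1) \ \text{w.r.t. } T \text{ and } p \ \text{for the novel task.}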
2109.03702
Ziyue Zhang
Ziyue Zhang, Shuai Jiang, Congzhentao Huang, Richard YiDa Xu
Unsupervised clothing change adaptive person ReID
9 pages
null
10.1109/LSP.2021.3134195
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clothing changes and the lack of data labels are both crucial challenges in person ReID. For the former challenge, people may appear multiple times at different locations wearing different clothing. However, most current person ReID research focuses on benchmarks in which a person's clothing is kept the same all the time. For the latter challenge, some researchers try to make a model transfer knowledge from a labeled source dataset to an unlabeled target dataset, whereas purely unsupervised training is less common. In this paper, we aim to solve both problems at the same time. We design a novel unsupervised model, Sync-Person-Cloud ReID, to solve the unsupervised clothing change person ReID problem. We develop a purely unsupervised clothing change person ReID pipeline with a person sync augmentation operation and a same-person feature restriction. The person sync augmentation supplies additional resources of the same person; these resources can be used as partly supervised input via the same-person feature restriction. Extensive experiments on clothing change ReID datasets show that our method outperforms existing approaches.
[ { "created": "Wed, 8 Sep 2021 15:08:10 GMT", "version": "v1" }, { "created": "Tue, 14 Sep 2021 14:42:00 GMT", "version": "v2" } ]
2022-02-09
[ [ "Zhang", "Ziyue", "" ], [ "Jiang", "Shuai", "" ], [ "Huang", "Congzhentao", "" ], [ "Xu", "Richard YiDa", "" ] ]
Clothing changes and the lack of data labels are both crucial challenges in person ReID. For the former challenge, people may appear multiple times at different locations wearing different clothing. However, most current person ReID research focuses on benchmarks in which a person's clothing is kept the same all the time. For the latter challenge, some researchers try to make a model transfer knowledge from a labeled source dataset to an unlabeled target dataset, whereas purely unsupervised training is less common. In this paper, we aim to solve both problems at the same time. We design a novel unsupervised model, Sync-Person-Cloud ReID, to solve the unsupervised clothing change person ReID problem. We develop a purely unsupervised clothing change person ReID pipeline with a person sync augmentation operation and a same-person feature restriction. The person sync augmentation supplies additional resources of the same person; these resources can be used as partly supervised input via the same-person feature restriction. Extensive experiments on clothing change ReID datasets show that our method outperforms existing approaches.
1807.05543
Arman Ahmadian
Arman Ahmadian, and Hyuncheol Park
Maximizing Ergodic Throughput in Wireless Powered Communication Networks
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper considers a single-antenna wireless-powered communication network (WPCN) over a flat-fading channel. We show that, by using our probabilistic harvest-and-transmit (PHAT) strategy, which requires knowledge of the instantaneous full channel state information (CSI) and the fading probability distribution, the ergodic throughput of this system may be greatly increased relative to that achieved by the harvest-then-transmit (HTT) protocol. To do so, instead of dividing every frame between the uplink (UL) and downlink (DL), the channel is allocated to UL wireless information transmission (WIT) or DL wireless power transfer (WPT) based on the estimated channel power gain. In other words, based on the fading probability distribution, we derive thresholds that determine the allocation of a frame to DL WPT or UL WIT. More specifically, if the channel gain falls below or goes above these thresholds, the channel will be allocated to WPT or WIT, respectively. Simulation results verify the performance of our proposed scheme.
[ { "created": "Sun, 15 Jul 2018 13:21:35 GMT", "version": "v1" } ]
2018-07-17
[ [ "Ahmadian", "Arman", "" ], [ "Park", "Hyuncheol", "" ] ]
This paper considers a single-antenna wireless-powered communication network (WPCN) over a flat-fading channel. We show that, by using our probabilistic harvest-and-transmit (PHAT) strategy, which requires knowledge of the instantaneous full channel state information (CSI) and the fading probability distribution, the ergodic throughput of this system may be greatly increased relative to that achieved by the harvest-then-transmit (HTT) protocol. To do so, instead of dividing every frame between the uplink (UL) and downlink (DL), the channel is allocated to UL wireless information transmission (WIT) or DL wireless power transfer (WPT) based on the estimated channel power gain. In other words, based on the fading probability distribution, we derive thresholds that determine the allocation of a frame to DL WPT or UL WIT. More specifically, if the channel gain falls below or goes above these thresholds, the channel will be allocated to WPT or WIT, respectively. Simulation results verify the performance of our proposed scheme.
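A toy sketch of the frame-allocation rule this abstract describes: compare the estimated channel power gain against precomputed thresholds and assign the whole frame to downlink WPT or uplink WIT. The threshold values come from the fading distribution and are not derived here; the mid-range fallback is an assumption.

    def allocate_frame(channel_gain, wpt_threshold, wit_threshold):
        # Hypothetical two-threshold rule: weak channels harvest energy (DL WPT),
        # strong channels transmit information (UL WIT).
        if channel_gain < wpt_threshold:
            return "WPT"
        if channel_gain > wit_threshold:
            return "WIT"
        return "WPT"  # assumed tie-breaking policy for mid-range gains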
2109.14812
Sajad Meisami
Sajad Meisami, Mohammad Beheshti-Atashgah, Mohammad Reza Aref
Using Blockchain to Achieve Decentralized Privacy In IoT Healthcare
6 pages
International Journal on Cybernetics & Informatics (IJCI) Vol. 12, No.2, April 2023, Page 97-108
10.5121/ijci.2023.120208
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the advent of the Internet of Things (IoT), e-health has become one of the main topics of research. Due to the sensitivity of patient information, preserving patient privacy is challenging. Nowadays, patient data is usually stored in the cloud in healthcare programs, making it difficult for users to have enough control over their data. The recent increase in announced cases of security and surveillance breaches compromising patients' privacy calls into question the conventional model, in which third parties gather and control immense amounts of patients' healthcare data. In this work, we try to resolve the issues mentioned above by using blockchain technology. We propose a blockchain-based protocol suitable for e-health applications that does not require trust in a third party and provides an efficient privacy-preserving access control mechanism. Transactions in our proposed system, unlike Bitcoin's, are not entirely financial, and we do not use conventional blockchain consensus methods such as Proof of Work (PoW), which is unsuitable for IoT applications because IoT devices are resource-constrained. Using an appropriate consensus method helps us increase network security and efficiency, as well as reduce network cost, i.e., bandwidth and processor usage. Finally, we provide a security and privacy analysis of our proposed protocol.
[ { "created": "Thu, 30 Sep 2021 02:30:09 GMT", "version": "v1" } ]
2023-04-04
[ [ "Meisami", "Sajad", "" ], [ "Beheshti-Atashgah", "Mohammad", "" ], [ "Aref", "Mohammad Reza", "" ] ]
With the advent of the Internet of Things (IoT), e-health has become one of the main topics of research. Due to the sensitivity of patient information, preserving patient privacy is challenging. Nowadays, patient data is usually stored in the cloud in healthcare programs, making it difficult for users to have enough control over their data. The recent increase in announced cases of security and surveillance breaches compromising patients' privacy calls into question the conventional model, in which third parties gather and control immense amounts of patients' healthcare data. In this work, we try to resolve the issues mentioned above by using blockchain technology. We propose a blockchain-based protocol suitable for e-health applications that does not require trust in a third party and provides an efficient privacy-preserving access control mechanism. Transactions in our proposed system, unlike Bitcoin's, are not entirely financial, and we do not use conventional blockchain consensus methods such as Proof of Work (PoW), which is unsuitable for IoT applications because IoT devices are resource-constrained. Using an appropriate consensus method helps us increase network security and efficiency, as well as reduce network cost, i.e., bandwidth and processor usage. Finally, we provide a security and privacy analysis of our proposed protocol.
1705.08503
Fionn Murtagh
Fionn Murtagh
The Geometry and Topology of Data and Information for Analytics of Processes and Behaviours: Building on Bourdieu and Addressing New Societal Challenges
16 pages, 7 figures
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We begin by summarizing the relevance and importance of inductive analytics based on the geometry and topology of data and information. Contemporary issues are then discussed. These include how sampling data for representativity is increasingly to be questioned. While we can always avail of analytics from a "bag of tools and techniques" in the application of machine learning and predictive analytics, we nonetheless present the case for a Bourdieu- and Benz\'ecri-based science of data, as follows: to construct bridges between data sources and position-taking, and decision-making. There is a summary presentation of a few case studies, illustrating and exemplifying application domains.
[ { "created": "Mon, 15 May 2017 22:44:53 GMT", "version": "v1" } ]
2017-05-25
[ [ "Murtagh", "Fionn", "" ] ]
We begin by summarizing the relevance and importance of inductive analytics based on the geometry and topology of data and information. Contemporary issues are then discussed. These include how sampling data for representativity is increasingly to be questioned. While we can always avail of analytics from a "bag of tools and techniques" in the application of machine learning and predictive analytics, we nonetheless present the case for a Bourdieu- and Benz\'ecri-based science of data, as follows: to construct bridges between data sources and position-taking, and decision-making. There is a summary presentation of a few case studies, illustrating and exemplifying application domains.
1507.08396
Shuangyin Li
Shuangyin Li, Jiefei Li, Guan Huang, Ruiyang Tan, and Rong Pan
Tag-Weighted Topic Model For Large-scale Semi-Structured Documents
null
null
null
null
cs.CL cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To date, massive numbers of Semi-Structured Documents (SSDs) have accumulated during the evolution of the Internet. These SSDs contain both unstructured features (e.g., plain text) and metadata (e.g., tags). Most previous works focused on modeling the unstructured text, and recently, some other methods have been proposed to model the unstructured text with specific tags. Building a general model for SSDs remains an important problem in terms of both model fitness and efficiency. We propose a novel method to model SSDs, called the Tag-Weighted Topic Model (TWTM). TWTM is a framework that leverages both tag and word information, not only to learn the document-topic and topic-word distributions, but also to infer the tag-topic distributions for text mining tasks. We present an efficient variational inference method with an EM algorithm for estimating the model parameters. Meanwhile, we propose three large-scale solutions for our model under the MapReduce distributed computing platform for modeling large-scale SSDs. The experimental results show the effectiveness, efficiency and robustness of our model by comparing it with state-of-the-art methods in document modeling, tag prediction and text classification. We also show the performance of the three distributed solutions in terms of time and accuracy on document modeling.
[ { "created": "Thu, 30 Jul 2015 06:44:37 GMT", "version": "v1" } ]
2015-07-31
[ [ "Li", "Shuangyin", "" ], [ "Li", "Jiefei", "" ], [ "Huang", "Guan", "" ], [ "Tan", "Ruiyang", "" ], [ "Pan", "Rong", "" ] ]
To date, massive numbers of Semi-Structured Documents (SSDs) have accumulated during the evolution of the Internet. These SSDs contain both unstructured features (e.g., plain text) and metadata (e.g., tags). Most previous works focused on modeling the unstructured text, and recently, some other methods have been proposed to model the unstructured text with specific tags. Building a general model for SSDs remains an important problem in terms of both model fitness and efficiency. We propose a novel method to model SSDs, called the Tag-Weighted Topic Model (TWTM). TWTM is a framework that leverages both tag and word information, not only to learn the document-topic and topic-word distributions, but also to infer the tag-topic distributions for text mining tasks. We present an efficient variational inference method with an EM algorithm for estimating the model parameters. Meanwhile, we propose three large-scale solutions for our model under the MapReduce distributed computing platform for modeling large-scale SSDs. The experimental results show the effectiveness, efficiency and robustness of our model by comparing it with state-of-the-art methods in document modeling, tag prediction and text classification. We also show the performance of the three distributed solutions in terms of time and accuracy on document modeling.
2109.09476
Junya Morita
Junya Morita, Thanakit Pitakchokchai, Giri Basanta Raj, Yusuke Yamamoto, Hiroyasu Yuhashi and Teppei Koguchi
Regulating Ruminative Web-browsing Based on the Counterbalance Modeling Approach
null
Frontiers in Artificial Intelligence, 2022
10.3389/frai.2022.741610
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
Even though the web environment facilitates daily life, emotional problems caused by its incompatibility with human cognition are becoming increasingly serious. To alleviate negative emotions during web use, we developed a browser extension that presents memorized product images to users, in the form of web advertisements. This system utilizes the cognitive architecture Adaptive Control of Thought-Rational (ACT-R) as a model of memory and emotion. A heart rate sensor modulates the ACT-R model parameters: The emotional states of the model are synchronized or counterbalanced with the physiological state of the user. An experiment demonstrates that the counterbalance model suppresses negative ruminative web browsing. The authors claim that this approach is advantageous in terms of explainability.
[ { "created": "Mon, 20 Sep 2021 12:31:03 GMT", "version": "v1" } ]
2022-08-16
[ [ "Morita", "Junya", "" ], [ "Pitakchokchai", "Thanakit", "" ], [ "Raj", "Giri Basanta", "" ], [ "Yamamoto", "Yusuke", "" ], [ "Yuhashi", "Hiroyasu", "" ], [ "Koguchi", "Teppei", "" ] ]
Even though the web environment facilitates daily life, emotional problems caused by its incompatibility with human cognition are becoming increasingly serious. To alleviate negative emotions during web use, we developed a browser extension that presents memorized product images to users, in the form of web advertisements. This system utilizes the cognitive architecture Adaptive Control of Thought-Rational (ACT-R) as a model of memory and emotion. A heart rate sensor modulates the ACT-R model parameters: The emotional states of the model are synchronized or counterbalanced with the physiological state of the user. An experiment demonstrates that the counterbalance model suppresses negative ruminative web browsing. The authors claim that this approach is advantageous in terms of explainability.
2401.04592
Mihael Arcan
Mihael Arcan, David-Paul Niland and Fionn Delahunty
An Assessment on Comprehending Mental Health through Large Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Mental health challenges pose considerable global burdens on individuals and communities. Recent data indicate that more than 20% of adults may encounter at least one mental disorder in their lifetime. On the one hand, advancements in large language models have facilitated diverse applications, yet a significant research gap persists in understanding and enhancing the potential of large language models within the domain of mental health. On the other hand, across various applications, an outstanding question involves the capacity of large language models to comprehend expressions of human mental health conditions in natural language. This study presents an initial evaluation of large language models in addressing this gap. To this end, we compare the performance of Llama-2 and ChatGPT with classical machine learning as well as deep learning models. Our results on the DAIC-WOZ dataset show that transformer-based models, like BERT or XLNet, outperform the large language models.
[ { "created": "Tue, 9 Jan 2024 14:50:04 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2024 09:36:58 GMT", "version": "v2" } ]
2024-02-05
[ [ "Arcan", "Mihael", "" ], [ "Niland", "David-Paul", "" ], [ "Delahunty", "Fionn", "" ] ]
Mental health challenges pose considerable global burdens on individuals and communities. Recent data indicate that more than 20% of adults may encounter at least one mental disorder in their lifetime. On the one hand, advancements in large language models have facilitated diverse applications, yet a significant research gap persists in understanding and enhancing the potential of large language models within the domain of mental health. On the other hand, across various applications, an outstanding question involves the capacity of large language models to comprehend expressions of human mental health conditions in natural language. This study presents an initial evaluation of large language models in addressing this gap. To this end, we compare the performance of Llama-2 and ChatGPT with classical machine learning as well as deep learning models. Our results on the DAIC-WOZ dataset show that transformer-based models, like BERT or XLNet, outperform the large language models.
2304.11205
Camille Coti
Camille Coti and Kevin Huck and Allen D. Malony
STaKTAU: profiling HPC applications' operating system usage
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
This paper presents an approach for measuring the time spent by HPC applications in the operating system's kernel. We use the SystemTap interface to insert timers before and after system calls, and take advantage of its stability to design a tool that can be used with multiple versions of the kernel. We evaluate its performance overhead using an OS-intensive mini-benchmark and a raytracing mini-app.
[ { "created": "Fri, 21 Apr 2023 18:27:57 GMT", "version": "v1" } ]
2023-04-25
[ [ "Coti", "Camille", "" ], [ "Huck", "Kevin", "" ], [ "Malony", "Allen D.", "" ] ]
This paper presents an approach for measuring the time spent by HPC applications in the operating system's kernel. We use the SystemTap interface to insert timers before and after system calls, and take advantage of its stability to design a tool that can be used with multiple versions of the kernel. We evaluate its performance overhead using an OS-intensive mini-benchmark and a raytracing mini-app.
2109.05927
Mohammad Masiur Rahaman
Mohammad Masiur Rahaman
An open-source implementation of a phase-field model for brittle fracture using Gridap in Julia
null
null
null
null
cs.CE physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article proposes an open-source implementation of a phase-field model for brittle fracture using a recently developed finite element toolbox, Gridap in Julia. The present work exploits the advantages of both the phase-field model and the Gridap toolbox for simulating fracture in brittle materials. On one hand, the use of the phase-field model, which is a continuum approach and uses a diffuse representation of sharp cracks, enables the proposed implementation to overcome such well-known drawbacks of the discrete approach for predicting complex crack paths as the need for re-meshing, enrichment of finite element shape functions and an explicit tracking of the crack surfaces. On the other hand, the use of Gridap makes the proposed implementation very compact and user-friendly, requires low memory usage, and provides a high degree of flexibility to the users in defining weak forms of partial differential equations. A test on a notched beam under symmetric three-point bending and a set of tests on a notched beam with three holes under asymmetric three-point bending are considered to demonstrate how the proposed Gridap-based phase-field Julia code can be used to simulate fracture in brittle materials.
[ { "created": "Fri, 10 Sep 2021 03:02:59 GMT", "version": "v1" } ]
2021-09-14
[ [ "Rahaman", "Mohammad Masiur", "" ] ]
This article proposes an open-source implementation of a phase-field model for brittle fracture using a recently developed finite element toolbox, Gridap in Julia. The present work exploits the advantages of both the phase-field model and the Gridap toolbox for simulating fracture in brittle materials. On one hand, the use of the phase-field model, which is a continuum approach and uses a diffuse representation of sharp cracks, enables the proposed implementation to overcome such well-known drawbacks of the discrete approach for predicting complex crack paths as the need for re-meshing, enrichment of finite element shape functions and an explicit tracking of the crack surfaces. On the other hand, the use of Gridap makes the proposed implementation very compact and user-friendly, requires low memory usage, and provides a high degree of flexibility to the users in defining weak forms of partial differential equations. A test on a notched beam under symmetric three-point bending and a set of tests on a notched beam with three holes under asymmetric three-point bending are considered to demonstrate how the proposed Gridap-based phase-field Julia code can be used to simulate fracture in brittle materials.
2003.06713
Rodrigo Nogueira
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin
Document Ranking with a Pretrained Sequence-to-Sequence Model
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a novel adaptation of a pretrained sequence-to-sequence model to the task of document ranking. Our approach is fundamentally different from a commonly-adopted classification-based formulation of ranking, based on encoder-only pretrained transformer architectures such as BERT. We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words", and how the underlying logits of these target words can be interpreted as relevance probabilities for ranking. On the popular MS MARCO passage ranking task, experimental results show that our approach is at least on par with previous classification-based models and can surpass them with larger, more-recent models. On the test collection from the TREC 2004 Robust Track, we demonstrate a zero-shot transfer-based approach that outperforms previous state-of-the-art models requiring in-dataset cross-validation. Furthermore, we find that our approach significantly outperforms an encoder-only model in a data-poor regime (i.e., with few training examples). We investigate this observation further by varying target words to probe the model's use of latent knowledge.
[ { "created": "Sat, 14 Mar 2020 22:29:50 GMT", "version": "v1" } ]
2020-03-17
[ [ "Nogueira", "Rodrigo", "" ], [ "Jiang", "Zhiying", "" ], [ "Lin", "Jimmy", "" ] ]
This work proposes a novel adaptation of a pretrained sequence-to-sequence model to the task of document ranking. Our approach is fundamentally different from a commonly-adopted classification-based formulation of ranking, based on encoder-only pretrained transformer architectures such as BERT. We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words", and how the underlying logits of these target words can be interpreted as relevance probabilities for ranking. On the popular MS MARCO passage ranking task, experimental results show that our approach is at least on par with previous classification-based models and can surpass them with larger, more-recent models. On the test collection from the TREC 2004 Robust Track, we demonstrate a zero-shot transfer-based approach that outperforms previous state-of-the-art models requiring in-dataset cross-validation. Furthermore, we find that our approach significantly outperforms an encoder-only model in a data-poor regime (i.e., with few training examples). We investigate this observation further by varying target words to probe the model's use of latent knowledge.
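A sketch of the "target words as relevance probabilities" scoring with Hugging Face Transformers, using the "Query: ... Document: ... Relevant:" input template with "true"/"false" target words; note that the stock t5-base weights are a placeholder here, since meaningful scores require the fine-tuning the paper describes.

    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tok = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base").eval()
    TRUE_ID, FALSE_ID = tok.encode("true")[0], tok.encode("false")[0]

    def relevance_score(query, doc):
        enc = tok(f"Query: {query} Document: {doc} Relevant:",
                  return_tensors="pt", truncation=True)
        start = torch.tensor([[model.config.decoder_start_token_id]])
        with torch.no_grad():
            logits = model(**enc, decoder_input_ids=start).logits[0, -1]
        # Softmax over the two target-word logits; P("true") is the ranking score.
        return torch.softmax(logits[[TRUE_ID, FALSE_ID]], dim=0)[0].item()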
1401.5697
Evgeniy Gabrilovich
Evgeniy Gabrilovich, Shaul Markovitch
Wikipedia-based Semantic Interpretation for Natural Language Processing
null
Journal Of Artificial Intelligence Research, Volume 34, pages 443-498, 2009
10.1613/jair.2669
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
[ { "created": "Wed, 15 Jan 2014 05:21:01 GMT", "version": "v1" } ]
2014-01-23
[ [ "Gabrilovich", "Evgeniy", "" ], [ "Markovitch", "Shaul", "" ] ]
Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
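A compact approximation of the ESA representation with scikit-learn: treat each Wikipedia article as one concept, build TF-IDF vectors over articles, and represent any text by its dot products with those vectors; relatedness is then cosine similarity in concept space. The wiki_articles list stands in for a real Wikipedia dump, and the full method adds weighting and index-pruning details omitted here.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # wiki_articles: list of article texts, one per concept (assumed given).
    vectorizer = TfidfVectorizer(stop_words="english")
    concept_matrix = vectorizer.fit_transform(wiki_articles)  # concepts x terms

    def esa(text):
        # Interpretation vector: association strength with each concept.
        return concept_matrix @ vectorizer.transform([text]).T  # concepts x 1

    def relatedness(text_a, text_b):
        return cosine_similarity(esa(text_a).T, esa(text_b).T)[0, 0]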
1604.04326
Stephan Zheng
Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow
Improving the Robustness of Deep Neural Networks via Stability Training
Published in CVPR 2016
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
[ { "created": "Fri, 15 Apr 2016 01:15:18 GMT", "version": "v1" } ]
2016-04-18
[ [ "Zheng", "Stephan", "" ], [ "Song", "Yang", "" ], [ "Leung", "Thomas", "" ], [ "Goodfellow", "Ian", "" ] ]
In this paper we address the issue of output instability of deep neural networks: small perturbations in the visual input can significantly distort the feature embeddings and output of a neural network. Such instability affects many deep architectures with state-of-the-art performance on a wide range of computer vision tasks. We present a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping. We validate our method by stabilizing the state-of-the-art Inception architecture against these types of distortions. In addition, we demonstrate that our stabilized model gives robust state-of-the-art performance on large-scale near-duplicate detection, similar-image ranking, and classification on noisy datasets.
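The training objective described above can be written schematically as follows (notation mine; the paper instantiates the distance d per task, e.g., over feature embeddings, and draws x' as a small perturbation of x):

    \mathcal{L}(x, x'; \theta) \;=\; \mathcal{L}_{\mathrm{task}}(x; \theta)
      \;+\; \alpha \, d\big(f(x; \theta),\, f(x'; \theta)\big),
    \qquad x' = x + \epsilon .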
2111.07226
Kostis Kaffes
Kostis Kaffes and Neeraja J. Yadwadkar and Christos Kozyrakis
Practical Scheduling for Real-World Serverless Computing
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Serverless computing has seen rapid growth due to the ease-of-use and cost-efficiency it provides. However, function scheduling, a critical component of serverless systems, has been overlooked. In this paper, we take a first-principles approach toward designing a scheduler that caters to the unique characteristics of serverless functions as seen in real-world deployments. We first create a taxonomy of scheduling policies along three dimensions. Next, we use simulation to explore the scheduling policy space for the function characteristics in a 14-day trace of Azure functions and conclude that frequently used features such as late binding and random load balancing are sub-optimal for common execution time distributions and load ranges. We use these insights to design Hermes, a scheduler for serverless functions with three key characteristics. First, to avoid head-of-line blocking due to high function execution time variability, Hermes uses a combination of early binding and processor sharing for scheduling at individual worker machines. Second, Hermes uses a hybrid load balancing approach that improves consolidation at low load while employing least-loaded balancing at high load to retain high performance. Third, Hermes is both load and locality-aware, reducing the number of cold starts compared to pure load-based policies. We implement Hermes for Apache OpenWhisk and demonstrate that, for the case of the function patterns observed both in the Azure and in other real-world traces, it achieves up to 85% lower function slowdown and 60% higher throughput compared to existing policies.
[ { "created": "Sun, 14 Nov 2021 02:55:48 GMT", "version": "v1" } ]
2021-11-16
[ [ "Kaffes", "Kostis", "" ], [ "Yadwadkar", "Neeraja J.", "" ], [ "Kozyrakis", "Christos", "" ] ]
Serverless computing has seen rapid growth due to the ease-of-use and cost-efficiency it provides. However, function scheduling, a critical component of serverless systems, has been overlooked. In this paper, we take a first-principles approach toward designing a scheduler that caters to the unique characteristics of serverless functions as seen in real-world deployments. We first create a taxonomy of scheduling policies along three dimensions. Next, we use simulation to explore the scheduling policy space for the function characteristics in a 14-day trace of Azure functions and conclude that frequently used features such as late binding and random load balancing are sub-optimal for common execution time distributions and load ranges. We use these insights to design Hermes, a scheduler for serverless functions with three key characteristics. First, to avoid head-of-line blocking due to high function execution time variability, Hermes uses a combination of early binding and processor sharing for scheduling at individual worker machines. Second, Hermes uses a hybrid load balancing approach that improves consolidation at low load while employing least-loaded balancing at high load to retain high performance. Third, Hermes is both load and locality-aware, reducing the number of cold starts compared to pure load-based policies. We implement Hermes for Apache OpenWhisk and demonstrate that, for the case of the function patterns observed both in the Azure and in other real-world traces, it achieves up to 85% lower function slowdown and 60% higher throughput compared to existing policies.
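A hypothetical sketch, in the spirit of this abstract, of a hybrid load balancer that consolidates on busy (likely warm) workers at low cluster load and falls back to least-loaded balancing at high load; the threshold and data layout are assumptions, not OpenWhisk/Hermes internals.

    def choose_worker(workers, high_load_threshold=0.7):
        # workers: list of dicts with a "load" value in [0, 1].
        cluster_load = sum(w["load"] for w in workers) / len(workers)
        if cluster_load < high_load_threshold:
            # Consolidation: prefer the most-loaded worker that still has room,
            # which improves warm-worker reuse and reduces cold starts.
            candidates = [w for w in workers if w["load"] < 1.0]
            if candidates:
                return max(candidates, key=lambda w: w["load"])
        return min(workers, key=lambda w: w["load"])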
1706.05059
Victor Dalmau
Victor Dalmau
Conjunctions of Among Constraints
15 pages plus appendix
null
null
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many existing global constraints can be encoded as a conjunction of among constraints. An among constraint holds if the number of the variables in its scope whose value belongs to a prespecified set, which we call its range, is within some given bounds. It is known that domain filtering algorithms can benefit from reasoning about the interaction of among constraints so that values can be filtered out taking into consideration several among constraints simultaneously. The present paper embarks into a systematic investigation on the circumstances under which it is possible to obtain efficient and complete domain filtering algorithms for conjunctions of among constraints. We start by observing that restrictions on both the scope and the range of the among constraints are necessary to obtain meaningful results. Then, we derive a domain flow-based filtering algorithm and present several applications. In particular, it is shown that the algorithm unifies and generalizes several previous existing results.
[ { "created": "Thu, 15 Jun 2017 19:51:52 GMT", "version": "v1" } ]
2017-06-19
[ [ "Dalmau", "Victor", "" ] ]
Many existing global constraints can be encoded as a conjunction of among constraints. An among constraint holds if the number of the variables in its scope whose value belongs to a prespecified set, which we call its range, is within some given bounds. It is known that domain filtering algorithms can benefit from reasoning about the interaction of among constraints so that values can be filtered out taking into consideration several among constraints simultaneously. The present paper embarks into a systematic investigation on the circumstances under which it is possible to obtain efficient and complete domain filtering algorithms for conjunctions of among constraints. We start by observing that restrictions on both the scope and the range of the among constraints are necessary to obtain meaningful results. Then, we derive a domain flow-based filtering algorithm and present several applications. In particular, it is shown that the algorithm unifies and generalizes several previous existing results.
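The among constraint sketched in this abstract has a standard formal reading; with scope x_1, ..., x_n, range S, and bounds \ell \le u:

    \mathrm{Among}\big((x_1,\dots,x_n),\, S,\, \ell,\, u\big)
    \;\Longleftrightarrow\;
    \ell \;\le\; \big|\{\, i : x_i \in S \,\}\big| \;\le\; u .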
2310.13570
Alexandros Xenos
Alexandros Xenos, Themos Stafylakis, Ioannis Patras and Georgios Tzimiropoulos
A Simple Baseline for Knowledge-Based Visual Question Answering
Accepted at EMNLP 2023 (camera-ready version)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper is on the problem of Knowledge-Based Visual Question Answering (KB-VQA). Recent works have emphasized the significance of incorporating both explicit (through external databases) and implicit (through LLMs) knowledge to answer questions requiring external knowledge effectively. A common limitation of such approaches is that they consist of relatively complicated pipelines and often heavily rely on accessing GPT-3 API. Our main contribution in this paper is to propose a much simpler and readily reproducible pipeline which, in a nutshell, is based on efficient in-context learning by prompting LLaMA (1 and 2) using question-informative captions as contextual information. Contrary to recent approaches, our method is training-free, does not require access to external databases or APIs, and yet achieves state-of-the-art accuracy on the OK-VQA and A-OK-VQA datasets. Finally, we perform several ablation studies to understand important aspects of our method. Our code is publicly available at https://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA
[ { "created": "Fri, 20 Oct 2023 15:08:17 GMT", "version": "v1" }, { "created": "Tue, 24 Oct 2023 13:24:25 GMT", "version": "v2" } ]
2023-10-25
[ [ "Xenos", "Alexandros", "" ], [ "Stafylakis", "Themos", "" ], [ "Patras", "Ioannis", "" ], [ "Tzimiropoulos", "Georgios", "" ] ]
This paper is on the problem of Knowledge-Based Visual Question Answering (KB-VQA). Recent works have emphasized the significance of incorporating both explicit (through external databases) and implicit (through LLMs) knowledge to answer questions requiring external knowledge effectively. A common limitation of such approaches is that they consist of relatively complicated pipelines and often heavily rely on accessing GPT-3 API. Our main contribution in this paper is to propose a much simpler and readily reproducible pipeline which, in a nutshell, is based on efficient in-context learning by prompting LLaMA (1 and 2) using question-informative captions as contextual information. Contrary to recent approaches, our method is training-free, does not require access to external databases or APIs, and yet achieves state-of-the-art accuracy on the OK-VQA and A-OK-VQA datasets. Finally, we perform several ablation studies to understand important aspects of our method. Our code is publicly available at https://github.com/alexandrosXe/ASimple-Baseline-For-Knowledge-Based-VQA
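A sketch of assembling an in-context prompt from question-informative captions, as the abstract describes; the template below is an assumption for illustration, not the paper's verbatim prompt (see the linked repository for that). The resulting string would then be fed to a LLaMA-style model for answer decoding.

    def build_prompt(caption, question, examples):
        # examples: list of (caption, question, answer) in-context shots.
        shots = "".join(
            f"Context: {c}\nQuestion: {q}\nAnswer: {a}\n\n" for c, q, a in examples
        )
        return shots + f"Context: {caption}\nQuestion: {question}\nAnswer:"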
1601.00184
Asaf Shabtai
Ben Feher, Lior Sidi, Asaf Shabtai, Rami Puzis
The Security of WebRTC
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
WebRTC is an API that allows users to share streaming information, whether it is text, sound, video or files. It is supported by all major browsers and has a flexible underlying infrastructure. In this study we review current WebRTC structure and security in the contexts of communication disruption, modification and eavesdropping. In addition, we examine WebRTC security in a few representative scenarios, setting up and simulating real WebRTC environments and attacks.
[ { "created": "Sat, 2 Jan 2016 15:59:55 GMT", "version": "v1" } ]
2016-01-05
[ [ "Feher", "Ben", "" ], [ "Sidi", "Lior", "" ], [ "Shabtai", "Asaf", "" ], [ "Puzis", "Rami", "" ] ]
WebRTC is an API that allows users to share streaming information, whether it is text, sound, video or files. It is supported by all major browsers and has a flexible underlying infrastructure. In this study we review current WebRTC structure and security in the contexts of communication disruption, modification and eavesdropping. In addition, we examine WebRTC security in a few representative scenarios, setting up and simulating real WebRTC environments and attacks.
0909.0685
Chris Giannella
Joel W. Branch, Chris Giannella, Boleslaw Szymanski, Ran Wolff, Hillol Kargupta
In-Network Outlier Detection in Wireless Sensor Networks
Extended version of a paper appearing in the Int'l Conference on Distributed Computing Systems 2006
Knowledge and Information Systems 34(1) January, 2013, pp. 23-54
10.1007/s10115-011-0474-5
null
cs.DB cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To address the problem of unsupervised outlier detection in wireless sensor networks, we develop an approach that (1) is flexible with respect to the outlier definition, (2) computes the result in-network to reduce both bandwidth and energy usage, (3) only uses single hop communication thus permitting very simple node failure detection and message reliability assurance mechanisms (e.g., carrier-sense), and (4) seamlessly accommodates dynamic updates to data. We examine performance using simulation with real sensor data streams. Our results demonstrate that our approach is accurate and imposes a reasonable communication load and level of power consumption.
[ { "created": "Thu, 3 Sep 2009 15:26:38 GMT", "version": "v1" } ]
2013-05-15
[ [ "Branch", "Joel W.", "" ], [ "Giannella", "Chris", "" ], [ "Szymanski", "Boleslaw", "" ], [ "Wolff", "Ran", "" ], [ "Kargupta", "Hillol", "" ] ]
To address the problem of unsupervised outlier detection in wireless sensor networks, we develop an approach that (1) is flexible with respect to the outlier definition, (2) computes the result in-network to reduce both bandwidth and energy usage, (3) only uses single-hop communication, thus permitting very simple node failure detection and message reliability assurance mechanisms (e.g., carrier-sense), and (4) seamlessly accommodates dynamic updates to data. We examine performance using simulation with real sensor data streams. Our results demonstrate that our approach is accurate and imposes a reasonable communication load and level of power consumption.
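The following Python sketch shows one possible instantiation of the scheme, assuming a simple distance-based outlier definition (a reading is anomalous if fewer than k other readings lie within distance r) and single-hop refinement; the paper's framework is deliberately flexible about the definition, so this is only an example.

```python
import numpy as np

def local_outliers(window, r, k):
    """Distance-based rule on one node's sliding window: a reading is a
    candidate outlier if fewer than k other readings lie within distance r."""
    d = np.abs(window[:, None] - window[None, :])
    support = (d <= r).sum(axis=1) - 1        # subtract the self-match
    return window[support < k]

def refine(candidates, own_window, neighbor_window, r, k):
    """One-hop refinement: a direct neighbor contributes extra supporting
    readings, so no multi-hop routing or global coordination is needed."""
    keep = []
    for x in candidates:
        s = (np.abs(own_window - x) <= r).sum() - 1
        s += (np.abs(neighbor_window - x) <= r).sum()
        if s < k:
            keep.append(x)
    return np.array(keep)

w = np.array([20.1, 20.3, 19.9, 35.0, 20.2])  # one anomalous hot reading
print(refine(local_outliers(w, r=0.5, k=2), w, np.array([20.0, 20.4]), 0.5, 2))
```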
2005.10217
Zhiguo Ding
Z. Ding and R. Schober and H. V. Poor
Unveiling the Importance of SIC in NOMA Systems: Part II: New Results and Future Directions
null
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In most existing works on non-orthogonal multiple access (NOMA), the decoding order of successive interference cancellation (SIC) is fixed in advance, based on either the users' channel conditions or their quality of service (QoS) requirements. A recent work on NOMA-assisted semi-grant-free transmission showed that the use of a more sophisticated hybrid SIC scheme can yield significant performance improvements. This letter illustrates how the concept of hybrid SIC can be generalized and applied to different NOMA applications. We first use NOMA-assisted mobile edge computing (MEC) as an example to illustrate the benefits of hybrid SIC, where new results for delay and energy minimization are presented. Then, future directions for generalizing hybrid SIC with adaptive decoding order selection as well as its promising applications are discussed.
[ { "created": "Wed, 20 May 2020 17:29:21 GMT", "version": "v1" } ]
2020-05-21
[ [ "Ding", "Z.", "" ], [ "Schober", "R.", "" ], [ "Poor", "H. V.", "" ] ]
In most existing works on non-orthogonal multiple access (NOMA), the decoding order of successive interference cancellation (SIC) is fixed in advance, based on either the users' channel conditions or their quality of service (QoS) requirements. A recent work on NOMA-assisted semi-grant-free transmission showed that the use of a more sophisticated hybrid SIC scheme can yield significant performance improvements. This letter illustrates how the concept of hybrid SIC can be generalized and applied to different NOMA applications. We first use NOMA-assisted mobile edge computing (MEC) as an example to illustrate the benefits of hybrid SIC, where new results for delay and energy minimization are presented. Then, future directions for generalizing hybrid SIC with adaptive decoding order selection as well as its promising applications are discussed.
2211.06720
Shubham Varma
Rupali Patil, Bhairav Narkhede, Shubham Varma, Shreyans Suraliya, Ninad Mehendale
Auto Lead Extraction and Digitization of ECG Paper Records using cGAN
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Purpose: An Electrocardiogram (ECG) is the simplest and fastest biomedical test used to detect heart-related diseases. ECG signals are generally stored in paper form, which makes the data difficult to store and analyze. While capturing ECG leads from paper ECG records, a lot of background information is also captured, which results in incorrect data interpretation. Methods: We propose a deep learning-based model for individually extracting all 12 leads from 12-lead ECG images captured using a camera. To simplify the analysis of the ECG and the calculation of complex parameters, we also propose a method to convert the paper ECG format into a storable digital format. The You Only Look Once, Version 3 (YOLOv3) algorithm is used to extract the leads present in the image. These leads are then passed on to another deep learning model that separates the ECG signal from the background in the single-lead image. After that, vertical scanning is performed on the ECG signal to convert it into a 1-Dimensional (1D) digital form. To perform the digitization, we used the pix-2-pix deep learning model and binarized the ECG signals. Results: Our proposed method achieved an accuracy of 97.4%. Conclusion: The information on a paper ECG fades over time. Digitized ECG signals therefore make it possible to store records and access them at any time. This is highly beneficial for heart patients who require frequent ECG reports. The stored data can also be useful for research, as it can be used to develop computer algorithms capable of analyzing the data.
[ { "created": "Sat, 12 Nov 2022 18:36:29 GMT", "version": "v1" } ]
2022-11-15
[ [ "Patil", "Rupali", "" ], [ "Narkhede", "Bhairav", "" ], [ "Varma", "Shubham", "" ], [ "Suraliya", "Shreyans", "" ], [ "Mehendale", "Ninad", "" ] ]
Purpose: An Electrocardiogram (ECG) is the simplest and fastest biomedical test used to detect heart-related diseases. ECG signals are generally stored in paper form, which makes the data difficult to store and analyze. While capturing ECG leads from paper ECG records, a lot of background information is also captured, which results in incorrect data interpretation. Methods: We propose a deep learning-based model for individually extracting all 12 leads from 12-lead ECG images captured using a camera. To simplify the analysis of the ECG and the calculation of complex parameters, we also propose a method to convert the paper ECG format into a storable digital format. The You Only Look Once, Version 3 (YOLOv3) algorithm is used to extract the leads present in the image. These leads are then passed on to another deep learning model that separates the ECG signal from the background in the single-lead image. After that, vertical scanning is performed on the ECG signal to convert it into a 1-Dimensional (1D) digital form. To perform the digitization, we used the pix-2-pix deep learning model and binarized the ECG signals. Results: Our proposed method achieved an accuracy of 97.4%. Conclusion: The information on a paper ECG fades over time. Digitized ECG signals therefore make it possible to store records and access them at any time. This is highly beneficial for heart patients who require frequent ECG reports. The stored data can also be useful for research, as it can be used to develop computer algorithms capable of analyzing the data.
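A rough Python sketch of the vertical-scanning step: the column-wise averaging rule and the calibration constant px_per_mv are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def vertical_scan(binary_lead, px_per_mv=10.0):
    """Per column, take the mean row index of signal pixels, then invert the
    image axis (row 0 is the top) and scale to millivolts."""
    w = binary_lead.shape[1]
    trace = np.full(w, np.nan)
    for col in range(w):
        rows = np.flatnonzero(binary_lead[:, col])
        if rows.size:                      # skip columns with no signal pixel
            trace[col] = rows.mean()
    baseline = np.nanmedian(trace)
    return (baseline - trace) / px_per_mv  # the 1-D digital signal

lead = np.zeros((8, 5), dtype=int)
lead[[4, 4, 2, 4, 4], range(5)] = 1        # a tiny synthetic R-wave
print(vertical_scan(lead))
```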
2302.00861
Jiaxiang Dong
Jiaxiang Dong, Haixu Wu, Haoran Zhang, Li Zhang, Jianmin Wang, Mingsheng Long
SimMTM: A Simple Pre-Training Framework for Masked Time-Series Modeling
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Time series analysis is widely used across many application areas. Recently, to reduce labeling expenses and benefit various tasks, self-supervised pre-training has attracted immense interest. One mainstream paradigm is masked modeling, which successfully pre-trains deep models by learning to reconstruct the masked content based on the unmasked part. However, since the semantic information of time series is mainly contained in temporal variations, the standard practice of randomly masking a portion of time points severely corrupts vital temporal variations, making the reconstruction task too difficult to guide representation learning. We thus present SimMTM, a Simple pre-training framework for Masked Time-series Modeling. By relating masked modeling to manifold learning, SimMTM proposes to recover masked time points by the weighted aggregation of multiple neighbors outside the manifold, which eases the reconstruction task by assembling ruined but complementary temporal variations from multiple masked series. SimMTM further learns to uncover the local structure of the manifold, which is helpful for masked modeling. Experimentally, SimMTM achieves state-of-the-art fine-tuning performance compared to the most advanced time series pre-training methods in two canonical time series analysis tasks: forecasting and classification, covering both in- and cross-domain settings.
[ { "created": "Thu, 2 Feb 2023 04:12:29 GMT", "version": "v1" }, { "created": "Fri, 3 Feb 2023 05:25:58 GMT", "version": "v2" }, { "created": "Fri, 26 May 2023 14:29:09 GMT", "version": "v3" }, { "created": "Mon, 23 Oct 2023 13:02:38 GMT", "version": "v4" } ]
2023-10-24
[ [ "Dong", "Jiaxiang", "" ], [ "Wu", "Haixu", "" ], [ "Zhang", "Haoran", "" ], [ "Zhang", "Li", "" ], [ "Wang", "Jianmin", "" ], [ "Long", "Mingsheng", "" ] ]
Time series analysis is widely used across many application areas. Recently, to reduce labeling expenses and benefit various tasks, self-supervised pre-training has attracted immense interest. One mainstream paradigm is masked modeling, which successfully pre-trains deep models by learning to reconstruct the masked content based on the unmasked part. However, since the semantic information of time series is mainly contained in temporal variations, the standard practice of randomly masking a portion of time points severely corrupts vital temporal variations, making the reconstruction task too difficult to guide representation learning. We thus present SimMTM, a Simple pre-training framework for Masked Time-series Modeling. By relating masked modeling to manifold learning, SimMTM proposes to recover masked time points by the weighted aggregation of multiple neighbors outside the manifold, which eases the reconstruction task by assembling ruined but complementary temporal variations from multiple masked series. SimMTM further learns to uncover the local structure of the manifold, which is helpful for masked modeling. Experimentally, SimMTM achieves state-of-the-art fine-tuning performance compared to the most advanced time series pre-training methods in two canonical time series analysis tasks: forecasting and classification, covering both in- and cross-domain settings.
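A minimal NumPy sketch of the weighted neighbor aggregation, assuming cosine similarity over the raw series with a softmax temperature; the actual SimMTM computes similarities in a learned representation space.

```python
import numpy as np

def aggregate_neighbors(target, neighbors, temperature=0.1):
    """Reconstruct a masked series as a softmax-weighted sum of other masked
    views, with cosine similarity as the weighting score."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    sims = np.array([cos(target, n) for n in neighbors])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    return weights @ np.stack(neighbors)

t = np.array([0.0, 1.0, 0.0, -1.0])            # one masked view of a series
nbrs = [np.array([0.1, 0.9, 0.0, -1.1]), np.array([1.0, 0.0, 1.0, 0.0])]
print(aggregate_neighbors(t, nbrs))            # dominated by the similar view
```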
2304.00363
Pablo Gamallo
Miguel Cavadas and Pablo Gamallo
Automatic Authorship Attribution in the Work of Tirso de Molina
20 pages, 2 figures
Recent Advances in Digital Humanities: Romance Language Applications, Peter Lang Edition, 2022, DOI 10.3726/b19920. ISBN 978-3-631-81147-4
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Automatic Authorship Attribution (AAA) is the result of applying tools and techniques from Digital Humanities to authorship attribution studies. Through a quantitative and statistical approach, this discipline can draw further conclusions about renowned authorship issues that traditional critics have been dealing with for centuries, opening a new door to style comparison. The aim of this paper is to demonstrate the potential of these tools and techniques by testing the authorship of five comedies traditionally attributed to the Spanish playwright Tirso de Molina (1579-1648): La ninfa del cielo, El burlador de Sevilla, Tan largo me lo fiais, La mujer por fuerza and El condenado por desconfiado. To this end, clustering experiments using the Stylo package for R and four distance measures are carried out on a corpus of plays by Tirso, Andres de Claramonte (c. 1560-1626), Antonio Mira de Amescua (1577-1644) and Luis Velez de Guevara (1579-1644). The results point to rejecting all of the attributions to Tirso except for La mujer por fuerza.
[ { "created": "Sat, 1 Apr 2023 18:05:14 GMT", "version": "v1" } ]
2023-04-04
[ [ "Cavadas", "Miguel", "" ], [ "Gamallo", "Pablo", "" ] ]
Automatic Authorship Attribution (AAA) is the result of applying tools and techniques from Digital Humanities to authorship attribution studies. Through a quantitative and statistical approach, this discipline can draw further conclusions about renowned authorship issues that traditional critics have been dealing with for centuries, opening a new door to style comparison. The aim of this paper is to demonstrate the potential of these tools and techniques by testing the authorship of five comedies traditionally attributed to the Spanish playwright Tirso de Molina (1579-1648): La ninfa del cielo, El burlador de Sevilla, Tan largo me lo fiais, La mujer por fuerza and El condenado por desconfiado. To this end, clustering experiments using the Stylo package for R and four distance measures are carried out on a corpus of plays by Tirso, Andres de Claramonte (c. 1560-1626), Antonio Mira de Amescua (1577-1644) and Luis Velez de Guevara (1579-1644). The results point to rejecting all of the attributions to Tirso except for La mujer por fuerza.
2211.08704
Sicheng Mo
Sicheng Mo, Fangzhou Mu, Yin Li
A Simple Transformer-Based Model for Ego4D Natural Language Queries Challenge
5 pages, 2 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This report describes Badgers@UW-Madison, our submission to the Ego4D Natural Language Queries (NLQ) Challenge. Our solution inherits the point-based event representation from our prior work on temporal action localization, and develops a Transformer-based model for video grounding. Further, our solution integrates several strong video features including SlowFast, Omnivore and EgoVLP. Without bells and whistles, our submission based on a single model achieves 12.64% Mean R@1 and is ranked 2nd on the public leaderboard. Meanwhile, our method garners 28.45% (18.03%) R@5 at tIoU=0.3 (0.5), surpassing the top-ranked solution by up to 5.5 absolute percentage points.
[ { "created": "Wed, 16 Nov 2022 06:33:37 GMT", "version": "v1" } ]
2022-11-17
[ [ "Mo", "Sicheng", "" ], [ "Mu", "Fangzhou", "" ], [ "Li", "Yin", "" ] ]
This report describes Badgers@UW-Madison, our submission to the Ego4D Natural Language Queries (NLQ) Challenge. Our solution inherits the point-based event representation from our prior work on temporal action localization, and develops a Transformer-based model for video grounding. Further, our solution integrates several strong video features including SlowFast, Omnivore and EgoVLP. Without bells and whistles, our submission based on a single model achieves 12.64% Mean R@1 and is ranked 2nd on the public leaderboard. Meanwhile, our method garners 28.45% (18.03%) R@5 at tIoU=0.3 (0.5), surpassing the top-ranked solution by up to 5.5 absolute percentage points.
2208.10817
Hsien-Chin Lin
Hsien-Chin Lin, Christian Geishauser, Shutong Feng, Nurul Lubis, Carel van Niekerk, Michael Heck, and Milica Ga\v{s}i\'c
GenTUS: Simulating User Behaviour and Language in Task-oriented Dialogues with Generative Transformers
Accepted as a long paper to SIGDial 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions often take place at the semantic level for efficiency, but there is still a gap from semantic actions to natural language, which causes a mismatch between the training and deployment environments. Incorporating a natural language generation (NLG) module with USs during training can partly address this problem. However, since the policy and NLG of USs are optimised separately, the simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS has an encoder-decoder structure, which means it can optimise both the user policy and natural language generation jointly. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing the inputs and outputs as word sequences and by using a large pre-trained language model, we can achieve generalisability in feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and is able to transfer to an unseen ontology in a zero-shot fashion. In addition, its behaviour can be further shaped with reinforcement learning, opening the door to training specialised user simulators.
[ { "created": "Tue, 23 Aug 2022 09:01:17 GMT", "version": "v1" } ]
2022-08-24
[ [ "Lin", "Hsien-Chin", "" ], [ "Geishauser", "Christian", "" ], [ "Feng", "Shutong", "" ], [ "Lubis", "Nurul", "" ], [ "van Niekerk", "Carel", "" ], [ "Heck", "Michael", "" ], [ "Gašić", "Milica", "" ] ]
User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions often take place at the semantic level for efficiency, but there is still a gap from semantic actions to natural language, which causes a mismatch between the training and deployment environments. Incorporating a natural language generation (NLG) module with USs during training can partly address this problem. However, since the policy and NLG of USs are optimised separately, the simulated user utterances may not be natural enough in a given context. In this work, we propose a generative transformer-based user simulator (GenTUS). GenTUS has an encoder-decoder structure, which means it can optimise both the user policy and natural language generation jointly. GenTUS generates both semantic actions and natural language utterances, preserving interpretability and enhancing language variation. In addition, by representing the inputs and outputs as word sequences and by using a large pre-trained language model, we can achieve generalisability in feature representation. We evaluate GenTUS with automatic metrics and human evaluation. Our results show that GenTUS generates more natural language and is able to transfer to an unseen ontology in a zero-shot fashion. In addition, its behaviour can be further shaped with reinforcement learning, opening the door to training specialised user simulators.
2406.08946
Enrico Ferrentino
Lorenzo Pagliara, Vincenzo Petrone, Enrico Ferrentino, Pasquale Chiacchio
Human-Robot Interface for Teleoperated Robotized Planetary Sample Collection and Assembly
null
2023 IEEE 10th International Workshop on Metrology for AeroSpace (MetroAeroSpace), Milan, Italy, 2023, pp. 171-176
10.1109/MetroAeroSpace57412.2023.10189984
null
cs.RO cs.HC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As human space exploration evolves toward longer voyages farther from our home planet, in-situ resource utilization (ISRU) becomes increasingly important. Haptic teleoperation is one of the technologies by which such activities can be carried out remotely by humans, whose expertise is still necessary for complex activities. In order to perform precision tasks effectively, the operator must experience ease of use and accuracy. The same features are required to reduce the complexity of the training procedures and the associated learning time for operators without a specific background in robotic teleoperation. Haptic teleoperation systems, which allow for a natural feeling of forces, need to cope with the trade-off between accurate movements and workspace extension. Clearly, both are required for typical ISRU tasks. In this work, we develop a new concept of operations and suitable human-robot interfaces to achieve sample collection and assembly with ease of use and accuracy. In the proposed operational concept, the teleoperation space is extended by executing automated trajectories, planned offline at the control station. In three different experimental scenarios, we validate the end-to-end system involving the control station and the robotic asset, by assessing the contribution of haptics to mission success, the system's robustness to significant delays, and the ease of training new operators.
[ { "created": "Thu, 13 Jun 2024 09:17:10 GMT", "version": "v1" } ]
2024-06-14
[ [ "Pagliara", "Lorenzo", "" ], [ "Petrone", "Vincenzo", "" ], [ "Ferrentino", "Enrico", "" ], [ "Chiacchio", "Pasquale", "" ] ]
As human space exploration evolves toward longer voyages farther from our home planet, in-situ resource utilization (ISRU) becomes increasingly important. Haptic teleoperation is one of the technologies by which such activities can be carried out remotely by humans, whose expertise is still necessary for complex activities. In order to perform precision tasks effectively, the operator must experience ease of use and accuracy. The same features are required to reduce the complexity of the training procedures and the associated learning time for operators without a specific background in robotic teleoperation. Haptic teleoperation systems, which allow for a natural feeling of forces, need to cope with the trade-off between accurate movements and workspace extension. Clearly, both are required for typical ISRU tasks. In this work, we develop a new concept of operations and suitable human-robot interfaces to achieve sample collection and assembly with ease of use and accuracy. In the proposed operational concept, the teleoperation space is extended by executing automated trajectories, planned offline at the control station. In three different experimental scenarios, we validate the end-to-end system involving the control station and the robotic asset, by assessing the contribution of haptics to mission success, the system's robustness to significant delays, and the ease of training new operators.
1203.4364
Marilyne Rosselle
Marilyne Rosselle
Teacher Module in an Assistance Tool - Adapting a device to a teaching context and teacher's preferences
6 pages, 3 figures. This article is a long version of the one edited in ICALT'2012
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This communication presents the genesis and the implementation of a teacher module included in an Assistance Tool (AT). The teacher module is based on a teacher model, for which we performed a thorough analysis of the state of the art. The aim of the AT is to help teachers design pedagogical devices. Teachers can formulate their needs (assistance in design), and the AT can relieve them of repetitive tasks related to the deployment of a teaching device (assistance in deployment).
[ { "created": "Tue, 20 Mar 2012 10:01:37 GMT", "version": "v1" } ]
2013-03-12
[ [ "Rosselle", "Marilyne", "" ] ]
This communication presents the genesis and the implementation of a teacher module included in an Assistance Tool (AT). The teacher module is based on a teacher model, for which we performed a thorough analysis of the state of the art. The aim of the AT is to help teachers design pedagogical devices. Teachers can formulate their needs (assistance in design), and the AT can relieve them of repetitive tasks related to the deployment of a teaching device (assistance in deployment).
2311.03534
Anurag Koul
Anurag Koul, Shivakanth Sujit, Shaoru Chen, Ben Evans, Lili Wu, Byron Xu, Rajan Chari, Riashat Islam, Raihan Seraj, Yonathan Efroni, Lekan Molu, Miro Dudik, John Langford, Alex Lamb
PcLast: Discovering Plannable Continuous Latent States
Accepted at ICML 2024
null
null
null
cs.LG cs.AI cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Goal-conditioned planning benefits from learned low-dimensional representations of rich observations. While compact latent representations typically learned from variational autoencoders or inverse dynamics enable goal-conditioned decision making, they ignore state reachability, hampering their performance. In this paper, we learn a representation that associates reachable states together for effective planning and goal-conditioned policy learning. We first learn a latent representation with multi-step inverse dynamics (to remove distracting information), and then transform this representation to associate reachable states together in $\ell_2$ space. Our proposals are rigorously tested in various simulation testbeds. Numerical results in reward-based settings show significant improvements in sampling efficiency. Further, in reward-free settings this approach yields layered state abstractions that enable computationally efficient hierarchical planning for reaching ad hoc goals with zero additional samples.
[ { "created": "Mon, 6 Nov 2023 21:16:37 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 03:32:58 GMT", "version": "v2" } ]
2024-06-12
[ [ "Koul", "Anurag", "" ], [ "Sujit", "Shivakanth", "" ], [ "Chen", "Shaoru", "" ], [ "Evans", "Ben", "" ], [ "Wu", "Lili", "" ], [ "Xu", "Byron", "" ], [ "Chari", "Rajan", "" ], [ "Islam", "Riashat", "" ], [ "Seraj", "Raihan", "" ], [ "Efroni", "Yonathan", "" ], [ "Molu", "Lekan", "" ], [ "Dudik", "Miro", "" ], [ "Langford", "John", "" ], [ "Lamb", "Alex", "" ] ]
Goal-conditioned planning benefits from learned low-dimensional representations of rich observations. While compact latent representations typically learned from variational autoencoders or inverse dynamics enable goal-conditioned decision making, they ignore state reachability, hampering their performance. In this paper, we learn a representation that associates reachable states together for effective planning and goal-conditioned policy learning. We first learn a latent representation with multi-step inverse dynamics (to remove distracting information), and then transform this representation to associate reachable states together in $\ell_2$ space. Our proposals are rigorously tested in various simulation testbeds. Numerical results in reward-based settings show significant improvements in sampling efficiency. Further, in reward-free settings this approach yields layered state abstractions that enable computationally efficient hierarchical planning for reaching ad hoc goals with zero additional samples.
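One plausible reading of the second stage as code: a hinge objective over embedding triplets that associates reachable states in $\ell_2$ space. The loss form and margin are assumptions, not the paper's exact objective.

```python
import numpy as np

def reachability_loss(phi_s, phi_reach, phi_far, margin=1.0):
    """Pull the embedding of a state reachable within a few steps toward
    phi_s, and push a distant state's embedding away, in l2 distance."""
    pos = np.sum((phi_s - phi_reach) ** 2)
    neg = np.sum((phi_s - phi_far) ** 2)
    return max(0.0, pos - neg + margin)

phi = np.random.default_rng(0).standard_normal((3, 8))  # toy embeddings
print(reachability_loss(phi[0], phi[1], phi[2]))
```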
2401.16422
Eliot Shekhtman
Eliot Shekhtman and Sarah Dean
Strategic Usage in a Multi-Learner Setting
18 pages, 9 figures
null
null
null
cs.LG cs.GT
http://creativecommons.org/licenses/by/4.0/
Real-world systems often involve some pool of users choosing between a set of services. With the increase in popularity of online learning algorithms, these services can now self-optimize, leveraging data collected on users to maximize some reward such as service quality. On the flip side, users may strategically choose which services to use in order to pursue their own reward functions, in the process wielding power over which services can see and use their data. Extensive prior research has been conducted on the effects of strategic users in single-service settings, with strategic behavior manifesting in the manipulation of observable features to achieve a desired classification; however, this can often be costly or unattainable for users and fails to capture the full behavior of multi-service dynamic systems. As such, we analyze a setting in which strategic users choose among several available services in order to pursue positive classifications, while services seek to minimize loss functions on their observations. We focus our analysis on realizable settings, and show that naive retraining can still lead to oscillation even if all users are observed at different times; however, if this retraining uses memory of past observations, convergent behavior can be guaranteed for certain loss function classes. We provide results obtained from synthetic and real-world data to empirically validate our theoretical findings.
[ { "created": "Mon, 29 Jan 2024 18:59:22 GMT", "version": "v1" }, { "created": "Fri, 8 Mar 2024 21:01:08 GMT", "version": "v2" } ]
2024-03-12
[ [ "Shekhtman", "Eliot", "" ], [ "Dean", "Sarah", "" ] ]
Real-world systems often involve some pool of users choosing between a set of services. With the increase in popularity of online learning algorithms, these services can now self-optimize, leveraging data collected on users to maximize some reward such as service quality. On the flip side, users may strategically choose which services to use in order to pursue their own reward functions, in the process wielding power over which services can see and use their data. Extensive prior research has been conducted on the effects of strategic users in single-service settings, with strategic behavior manifesting in the manipulation of observable features to achieve a desired classification; however, this can often be costly or unattainable for users and fails to capture the full behavior of multi-service dynamic systems. As such, we analyze a setting in which strategic users choose among several available services in order to pursue positive classifications, while services seek to minimize loss functions on their observations. We focus our analysis on realizable settings, and show that naive retraining can still lead to oscillation even if all users are observed at different times; however, if this retraining uses memory of past observations, convergent behavior can be guaranteed for certain loss function classes. We provide results obtained from synthetic and real-world data to empirically validate our theoretical findings.
2310.13016
Sengul Dogan
Turker Tuncer and Sengul Dogan and Mehmet Baygin and Prabal Datta Barua and Abdul Hafeez-Baig and Ru-San Tan and Subrata Chakraborty and U. Rajendra Acharya
Solving the multiplication problem of a large language model system using a graph-based method
9 pages, 3 figures
null
null
null
cs.OH cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The generative pre-trained transformer (GPT)-based chatbot software ChatGPT possesses excellent natural language processing capabilities but is inadequate for solving arithmetic problems, especially multiplication. Its GPT structure uses a computational graph for multiplication, which has limited accuracy beyond simple multiplication operations. We developed a graph-based multiplication algorithm that emulates human-like numerical operations by incorporating a $10^k$ operator, where $k$ is the maximum power of 10 of the larger of the two input numbers. Our proposed algorithm attained 100% accuracy on 1,000,000 large-number multiplication tasks, effectively solving the multiplication challenge of GPT-based and other large language models. Our work highlights the importance of blending simple human insights into the design of artificial intelligence algorithms. Keywords: Graph-based multiplication; ChatGPT; Multiplication problem
[ { "created": "Wed, 18 Oct 2023 08:02:00 GMT", "version": "v1" } ]
2023-10-23
[ [ "Tuncer", "Turker", "" ], [ "Dogan", "Sengul", "" ], [ "Baygin", "Mehmet", "" ], [ "Barua", "Prabal Datta", "" ], [ "Hafeez-Baig", "Abdul", "" ], [ "Tan", "Ru-San", "" ], [ "Chakraborty", "Subrata", "" ], [ "Acharya", "U. Rajendra", "" ] ]
The generative pre-trained transformer (GPT)-based chatbot software ChatGPT possesses excellent natural language processing capabilities but is inadequate for solving arithmetic problems, especially multiplication. Its GPT structure uses a computational graph for multiplication, which has limited accuracy beyond simple multiplication operations. We developed a graph-based multiplication algorithm that emulates human-like numerical operations by incorporating a $10^k$ operator, where $k$ is the maximum power of 10 of the larger of the two input numbers. Our proposed algorithm attained 100% accuracy on 1,000,000 large-number multiplication tasks, effectively solving the multiplication challenge of GPT-based and other large language models. Our work highlights the importance of blending simple human insights into the design of artificial intelligence algorithms. Keywords: Graph-based multiplication; ChatGPT; Multiplication problem
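A hedged Python sketch of the underlying idea: decompose the larger operand by powers of 10 and accumulate exact partial products. This mirrors the manual algorithm the abstract alludes to, not the authors' precise computational graph.

```python
def multiply(a: int, b: int) -> int:
    """Digit-by-digit decomposition of the larger operand, with each partial
    product placed at its 10**k weight (assumes nonnegative integers)."""
    big, small = (a, b) if a >= b else (b, a)
    result, k = 0, 0
    while big:
        result += (big % 10) * small * 10 ** k  # partial product at 10^k
        big //= 10
        k += 1
    return result

assert multiply(123456789, 987654321) == 123456789 * 987654321
```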
2108.05774
Kuldeep Singh
Anson Bastos, Kuldeep Singh, Abhishek Nadgeri, Saeedeh Shekarpour, Isaiah Onando Mulang, Johannes Hoffart
HopfE: Knowledge Graph Representation Learning using Inverse Hopf Fibrations
CIKM 2021 : 30th ACM International Conference on Information and Knowledge Management (full paper)
null
null
null
cs.IR cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Recently, several Knowledge Graph Embedding (KGE) approaches have been devised to represent entities and relations in a dense vector space and employed in downstream tasks such as link prediction. A few KGE techniques address interpretability, i.e., mapping the connectivity patterns of the relations (i.e., symmetric/asymmetric, inverse, and composition) to a geometric interpretation such as rotations. Other approaches model the representations in a higher dimensional space such as four-dimensional space (4D) to enhance the ability to infer the connectivity patterns (i.e., expressiveness). However, modeling relations and entities in a 4D space often comes at the cost of interpretability. This paper proposes HopfE, a novel KGE approach aiming to achieve the interpretability of inferred relations in the four-dimensional space. We first model the structural embeddings in 3D Euclidean space and view the relation operator as an SO(3) rotation. Next, we map the entity embedding vector from the 3D space to a 4D hypersphere using the inverse Hopf Fibration, in which we embed the semantic information from the KG ontology. Thus, HopfE considers the structural and semantic properties of the entities without losing expressivity and interpretability. Our empirical results on four well-known benchmarks achieve state-of-the-art performance for the KG completion task.
[ { "created": "Thu, 12 Aug 2021 14:34:02 GMT", "version": "v1" } ]
2021-08-13
[ [ "Bastos", "Anson", "" ], [ "Singh", "Kuldeep", "" ], [ "Nadgeri", "Abhishek", "" ], [ "Shekarpour", "Saeedeh", "" ], [ "Mulang", "Isaiah Onando", "" ], [ "Hoffart", "Johannes", "" ] ]
Recently, several Knowledge Graph Embedding (KGE) approaches have been devised to represent entities and relations in a dense vector space and employed in downstream tasks such as link prediction. A few KGE techniques address interpretability, i.e., mapping the connectivity patterns of the relations (i.e., symmetric/asymmetric, inverse, and composition) to a geometric interpretation such as rotations. Other approaches model the representations in a higher dimensional space such as four-dimensional space (4D) to enhance the ability to infer the connectivity patterns (i.e., expressiveness). However, modeling relations and entities in a 4D space often comes at the cost of interpretability. This paper proposes HopfE, a novel KGE approach aiming to achieve the interpretability of inferred relations in the four-dimensional space. We first model the structural embeddings in 3D Euclidean space and view the relation operator as an SO(3) rotation. Next, we map the entity embedding vector from the 3D space to a 4D hypersphere using the inverse Hopf Fibration, in which we embed the semantic information from the KG ontology. Thus, HopfE considers the structural and semantic properties of the entities without losing expressivity and interpretability. Our empirical results on four well-known benchmarks achieve state-of-the-art performance for the KG completion task.
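For concreteness, a small NumPy sketch of an inverse Hopf lift from $S^2$ to $S^3$, under one common convention for the Hopf map (stated in the docstring); HopfE's actual pipeline wraps such a map around learned 3D structural embeddings.

```python
import numpy as np

def inverse_hopf(x, y, z, psi=0.0, eps=1e-12):
    """Lift (x, y, z) on the unit 2-sphere to the unit 3-sphere; psi
    parametrizes the circular fiber. Assumed Hopf-map convention:
    h(z1, z2) = (2*Re(z1*conj(z2)), 2*Im(z1*conj(z2)), |z1|^2 - |z2|^2).
    Degenerate at the south pole z = -1, where another chart is needed."""
    z1 = np.sqrt((1.0 + z) / 2.0) * np.exp(1j * psi)
    z2 = (x - 1j * y) * np.exp(1j * psi) / np.sqrt(2.0 * (1.0 + z) + eps)
    return np.array([z1.real, z1.imag, z2.real, z2.imag])

q = inverse_hopf(0.0, 0.6, 0.8)
assert abs(np.sum(q ** 2) - 1.0) < 1e-6   # the lift lands on S^3
```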
2208.04286
Pilhyeon Lee
Sungpil Kho, Pilhyeon Lee, Wonyoung Lee, Minsong Ki, Hyeran Byun
Exploiting Shape Cues for Weakly Supervised Semantic Segmentation
Accepted by Pattern Recognition. The first two authors contributed equally
Pattern Recognition 132 (2022): 108953
10.1016/j.patcog.2022.108953
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions with only image-level labels for training. To this end, previous methods adopt the common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks. However, it is challenging to derive comprehensive pseudo masks that cover the whole extent of objects due to the local property of CAMs, i.e., they tend to focus solely on small discriminative object parts. In this paper, we associate the locality of CAMs with the texture-biased property of convolutional neural networks (CNNs). Accordingly, we propose to exploit shape information to supplement the texture-biased CNN features, thereby encouraging mask predictions to be not only comprehensive but also well-aligned with object boundaries. We further refine the predictions in an online fashion with a novel refinement method that takes into account both the class and the color affinities, in order to generate reliable pseudo masks to supervise the model. Importantly, our model is end-to-end trained within a single-stage framework and therefore efficient in terms of the training cost. Through extensive experiments on PASCAL VOC 2012, we validate the effectiveness of our method in producing precise and shape-aligned segmentation results. Specifically, our model surpasses the existing state-of-the-art single-stage approaches by large margins. What is more, it also achieves a new state-of-the-art performance over multi-stage approaches, when adopted in a simple two-stage pipeline without bells and whistles.
[ { "created": "Mon, 8 Aug 2022 17:25:31 GMT", "version": "v1" } ]
2022-08-09
[ [ "Kho", "Sungpil", "" ], [ "Lee", "Pilhyeon", "" ], [ "Lee", "Wonyoung", "" ], [ "Ki", "Minsong", "" ], [ "Byun", "Hyeran", "" ] ]
Weakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions with only image-level labels for training. To this end, previous methods adopt the common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks. However, it is challenging to derive comprehensive pseudo masks that cover the whole extent of objects due to the local property of CAMs, i.e., they tend to focus solely on small discriminative object parts. In this paper, we associate the locality of CAMs with the texture-biased property of convolutional neural networks (CNNs). Accordingly, we propose to exploit shape information to supplement the texture-biased CNN features, thereby encouraging mask predictions to be not only comprehensive but also well-aligned with object boundaries. We further refine the predictions in an online fashion with a novel refinement method that takes into account both the class and the color affinities, in order to generate reliable pseudo masks to supervise the model. Importantly, our model is end-to-end trained within a single-stage framework and therefore efficient in terms of the training cost. Through extensive experiments on PASCAL VOC 2012, we validate the effectiveness of our method in producing precise and shape-aligned segmentation results. Specifically, our model surpasses the existing state-of-the-art single-stage approaches by large margins. What is more, it also achieves a new state-of-the-art performance over multi-stage approaches, when adopted in a simple two-stage pipeline without bells and whistles.
2004.03073
Anastasios Petropoulos
Anastasios Petropoulos, Irem Boybat, Manuel Le Gallo, Evangelos Eleftheriou, Abu Sebastian and Theodore Antonakopoulos
Accurate Emulation of Memristive Crossbar Arrays for In-Memory Computing
5 pages, 4 figures, accepted for publication at ISCAS 2020
null
null
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-memory computing is an emerging non-von Neumann computing paradigm where certain computational tasks are performed in memory by exploiting the physical attributes of the memory devices. Memristive devices such as phase-change memory (PCM), where information is stored in terms of their conductance levels, are especially well suited for in-memory computing. In particular, memristive devices, when organized in a crossbar configuration, can be used to perform matrix-vector multiply operations by exploiting Kirchhoff's circuit laws. To explore the feasibility of such in-memory computing cores in applications such as deep learning as well as for system-level architectural exploration, it is highly desirable to develop an accurate hardware emulator that captures the key physical attributes of the memristive devices. Here, we present one such emulator for PCM and experimentally validate it using measurements from a PCM prototype chip. Moreover, we present an application of the emulator for neural network inference where our emulator can capture the conductance evolution of approximately 400,000 PCM devices remarkably well.
[ { "created": "Tue, 7 Apr 2020 01:53:56 GMT", "version": "v1" } ]
2020-04-08
[ [ "Petropoulos", "Anastasios", "" ], [ "Boybat", "Irem", "" ], [ "Gallo", "Manuel Le", "" ], [ "Eleftheriou", "Evangelos", "" ], [ "Sebastian", "Abu", "" ], [ "Antonakopoulos", "Theodore", "" ] ]
In-memory computing is an emerging non-von Neumann computing paradigm where certain computational tasks are performed in memory by exploiting the physical attributes of the memory devices. Memristive devices such as phase-change memory (PCM), where information is stored in terms of their conductance levels, are especially well suited for in-memory computing. In particular, memristive devices, when organized in a crossbar configuration, can be used to perform matrix-vector multiply operations by exploiting Kirchhoff's circuit laws. To explore the feasibility of such in-memory computing cores in applications such as deep learning as well as for system-level architectural exploration, it is highly desirable to develop an accurate hardware emulator that captures the key physical attributes of the memristive devices. Here, we present one such emulator for PCM and experimentally validate it using measurements from a PCM prototype chip. Moreover, we present an application of the emulator for neural network inference where our emulator can capture the conductance evolution of approximately 400,000 PCM devices remarkably well.
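An idealized NumPy sketch of the crossbar matrix-vector multiply the abstract describes, with multiplicative Gaussian conductance noise as a crude stand-in for the PCM non-idealities (drift, read noise) the emulator captures in detail; sigma_rel is an assumed, not measured, value.

```python
import numpy as np

def crossbar_mvm(G, v, sigma_rel=0.02, rng=None):
    """Column currents i = G^T v by Kirchhoff's current law, computed on a
    noisy copy of the conductance matrix G for applied voltages v."""
    if rng is None:
        rng = np.random.default_rng(0)
    G_noisy = G * (1.0 + sigma_rel * rng.standard_normal(G.shape))
    return G_noisy.T @ v                       # one current per bit line

G = np.abs(np.random.default_rng(1).standard_normal((4, 3))) * 1e-6  # siemens
v = np.array([0.2, -0.1, 0.3, 0.05])                                 # volts
print(crossbar_mvm(G, v))                                            # amperes
```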
2010.13464
Moritz Beller
Moritz Beller, Chu-Pan Wong, Johannes Bader, Andrew Scott, Mateusz Machalica, Satish Chandra, Erik Meijer
What It Would Take to Use Mutation Testing in Industry--A Study at Facebook
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditionally, mutation testing generates an abundance of small deviations of a program, called mutants. At industrial systems of the scale and size of Facebook's, doing this is infeasible. We should not create mutants that the test suite would likely fail on or that give no actionable signal to developers. To tackle this problem, in this paper, we semi-automatically learn error-inducing patterns from a corpus of common Java coding errors and from changes that caused operational anomalies at Facebook specifically. We combine the mutations with instrumentation that measures exactly which tests visited the mutated piece of code. Results on more than 15,000 generated mutants show that more than half of the generated mutants survive Facebook's rigorous test suite of unit, integration, and system tests. Moreover, in a case study with 26 developers, all but two found the information about automatically detected test holes interesting in principle. As such, almost half of the 26 would actually act on the mutant presented to them by adapting an existing test or creating a new one. The others did not for a variety of reasons, often outside the scope of mutation testing. It remains a practical challenge how to include such external information to increase the true actionability rate on mutants.
[ { "created": "Mon, 26 Oct 2020 10:03:58 GMT", "version": "v1" }, { "created": "Tue, 27 Oct 2020 06:35:50 GMT", "version": "v2" }, { "created": "Wed, 27 Jan 2021 16:43:42 GMT", "version": "v3" } ]
2021-01-28
[ [ "Beller", "Moritz", "" ], [ "Wong", "Chu-Pan", "" ], [ "Bader", "Johannes", "" ], [ "Scott", "Andrew", "" ], [ "Machalica", "Mateusz", "" ], [ "Chandra", "Satish", "" ], [ "Meijer", "Erik", "" ] ]
Traditionally, mutation testing generates an abundance of small deviations of a program, called mutants. At industrial systems of the scale and size of Facebook's, doing this is infeasible. We should not create mutants that the test suite would likely fail on or that give no actionable signal to developers. To tackle this problem, in this paper, we semi-automatically learn error-inducing patterns from a corpus of common Java coding errors and from changes that caused operational anomalies at Facebook specifically. We combine the mutations with instrumentation that measures exactly which tests visited the mutated piece of code. Results on more than 15,000 generated mutants show that more than half of the generated mutants survive Facebook's rigorous test suite of unit, integration, and system tests. Moreover, in a case study with 26 developers, all but two found the information about automatically detected test holes interesting in principle. As such, almost half of the 26 would actually act on the mutant presented to them by adapting an existing test or creating a new one. The others did not for a variety of reasons, often outside the scope of mutation testing. It remains a practical challenge how to include such external information to increase the true actionability rate on mutants.
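A toy Python sketch of pattern-based mutant generation: the two patterns below are illustrative stand-ins for the error-inducing patterns the paper mines semi-automatically, and no claim is made that they match Facebook's actual pattern set.

```python
import re

# Two illustrative error-inducing patterns over Java source text.
PATTERNS = [
    (re.compile(r"==\s*null"), "!= null"),   # invert a null check
    (re.compile(r"<(?![=<])"), "<="),        # off-by-one loop boundary
]

def mutants(java_source):
    """Yield one mutant per pattern occurrence in the source string."""
    for pattern, replacement in PATTERNS:
        for m in pattern.finditer(java_source):
            yield java_source[:m.start()] + replacement + java_source[m.end():]

snippet = "for (int i = 0; i < n; i++) { if (xs[i] == null) count++; }"
for mut in mutants(snippet):
    print(mut)
```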
1909.13548
Sai Dayapule
Fan Yao, Kathy Ngyugen, Sai Santosh Dayapule, Jingxin Wu, Bingqian Lu, Suresh Subramaniam, and Guru Venkataramani
HolDCSim: A Holistic Simulator for Data Centers
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cloud-computing-based systems that span data centers are commonly deployed to offer high performance for user service requests. As data centers continue to expand, computer architects and system designers are facing many challenges in balancing resource utilization efficiency, server and network performance, energy consumption, and quality-of-service (QoS) demands from users. To develop effective data center management policies, it is essential to have an in-depth understanding and synergistic control of the various sub-components inside large scale computing systems, which include both computation and communication resources. In this paper, we propose HolDCSim, a lightweight, holistic, extensible, event-driven data center simulation platform that effectively models both server and network architectures. HolDCSim can be used in a variety of data center system studies including job/task scheduling, resource provisioning, global and local server farm power management, and network and server performance analysis. We demonstrate the design of our simulation infrastructure, and illustrate the usefulness of our framework with several case studies that analyze server/network performance and energy efficiency. We also perform validation on real machines to verify our simulator.
[ { "created": "Mon, 30 Sep 2019 09:24:40 GMT", "version": "v1" }, { "created": "Mon, 7 Oct 2019 14:13:21 GMT", "version": "v2" } ]
2019-10-08
[ [ "Yao", "Fan", "" ], [ "Ngyugen", "Kathy", "" ], [ "Dayapule", "Sai Santosh", "" ], [ "Wu", "Jingxin", "" ], [ "Lu", "Bingqian", "" ], [ "Subramaniam", "Suresh", "" ], [ "Venkataramani", "Guru", "" ] ]
Cloud-computing-based systems that span data centers are commonly deployed to offer high performance for user service requests. As data centers continue to expand, computer architects and system designers are facing many challenges in balancing resource utilization efficiency, server and network performance, energy consumption, and quality-of-service (QoS) demands from users. To develop effective data center management policies, it is essential to have an in-depth understanding and synergistic control of the various sub-components inside large scale computing systems, which include both computation and communication resources. In this paper, we propose HolDCSim, a lightweight, holistic, extensible, event-driven data center simulation platform that effectively models both server and network architectures. HolDCSim can be used in a variety of data center system studies including job/task scheduling, resource provisioning, global and local server farm power management, and network and server performance analysis. We demonstrate the design of our simulation infrastructure, and illustrate the usefulness of our framework with several case studies that analyze server/network performance and energy efficiency. We also perform validation on real machines to verify our simulator.
2310.15829
Corentin Kervadec
Corentin Kervadec, Francesca Franzon and Marco Baroni
Unnatural language processing: How do language models handle machine-generated prompts?
Findings of EMNLP 2023 Camera-Ready
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Language model prompt optimization research has shown that semantically and grammatically well-formed manually crafted prompts are routinely outperformed by automatically generated token sequences with no apparent meaning or syntactic structure, including sequences of vectors from a model's embedding space. We use machine-generated prompts to probe how models respond to input that is not composed of natural language expressions. We study the behavior of models of different sizes in multiple semantic tasks in response to both continuous and discrete machine-generated prompts, and compare it to the behavior in response to human-generated natural-language prompts. Even when producing a similar output, machine-generated and human prompts trigger different response patterns through the network processing pathways, including different perplexities, different attention and output entropy distributions, and different unit activation profiles. We provide preliminary insight into the nature of the units activated by different prompt types, suggesting that only natural language prompts recruit a genuinely linguistic circuit.
[ { "created": "Tue, 24 Oct 2023 13:32:20 GMT", "version": "v1" } ]
2023-10-25
[ [ "Kervadec", "Corentin", "" ], [ "Franzon", "Francesca", "" ], [ "Baroni", "Marco", "" ] ]
Language model prompt optimization research has shown that semantically and grammatically well-formed manually crafted prompts are routinely outperformed by automatically generated token sequences with no apparent meaning or syntactic structure, including sequences of vectors from a model's embedding space. We use machine-generated prompts to probe how models respond to input that is not composed of natural language expressions. We study the behavior of models of different sizes in multiple semantic tasks in response to both continuous and discrete machine-generated prompts, and compare it to the behavior in response to human-generated natural-language prompts. Even when producing a similar output, machine-generated and human prompts trigger different response patterns through the network processing pathways, including different perplexities, different attention and output entropy distributions, and different unit activation profiles. We provide preliminary insight into the nature of the units activated by different prompt types, suggesting that only natural language prompts recruit a genuinely linguistic circuit.
2312.04140
Ryota Maeda
Ryota Maeda, Shinsaku Hiura
Polarimetric Light Transport Analysis for Specular Inter-reflection
Accepted to IEEE Transactions on Computational Imaging (TCI)
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polarization is well known for its ability to decompose diffuse and specular reflections. However, the existing decomposition methods only focus on direct reflection and overlook multiple reflections, especially specular inter-reflection. In this paper, we propose a novel decomposition method for handling specular inter-reflection of metal objects by using a unique polarimetric feature: the rotation direction of linear polarization. This rotation direction serves as a discriminative factor between direct and inter-reflection on specular surfaces. To decompose the reflectance components, we actively rotate the linear polarization of incident light and analyze the rotation direction of the reflected light. We evaluate our method using both synthetic and real data, demonstrating its effectiveness in decomposing specular inter-reflections of metal objects. Furthermore, we demonstrate that our method can be combined with other decomposition methods for a detailed analysis of light transport. As a practical application, we show its effectiveness in improving the accuracy of 3D measurement against strong specular inter-reflection.
[ { "created": "Thu, 7 Dec 2023 08:55:28 GMT", "version": "v1" }, { "created": "Wed, 15 May 2024 16:24:54 GMT", "version": "v2" } ]
2024-05-16
[ [ "Maeda", "Ryota", "" ], [ "Hiura", "Shinsaku", "" ] ]
Polarization is well known for its ability to decompose diffuse and specular reflections. However, the existing decomposition methods only focus on direct reflection and overlook multiple reflections, especially specular inter-reflection. In this paper, we propose a novel decomposition method for handling specular inter-reflection of metal objects by using a unique polarimetric feature: the rotation direction of linear polarization. This rotation direction serves as a discriminative factor between direct and inter-reflection on specular surfaces. To decompose the reflectance components, we actively rotate the linear polarization of incident light and analyze the rotation direction of the reflected light. We evaluate our method using both synthetic and real data, demonstrating its effectiveness in decomposing specular inter-reflections of metal objects. Furthermore, we demonstrate that our method can be combined with other decomposition methods for a detailed analysis of light transport. As a practical application, we show its effectiveness in improving the accuracy of 3D measurement against strong specular inter-reflection.
2001.10298
Amer Krivo\v{s}ija
Maike Buchin and Nicole Funk and Amer Krivo\v{s}ija
On the complexity of the middle curve problem
null
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For a set of curves, Ahn et al. introduced the notion of a middle curve and gave algorithms that compute these curves with running time exponential in the number of curves. Here we study the computational complexity of this problem: we show that it is NP-complete and give approximation algorithms.
[ { "created": "Tue, 28 Jan 2020 12:54:16 GMT", "version": "v1" } ]
2020-01-29
[ [ "Buchin", "Maike", "" ], [ "Funk", "Nicole", "" ], [ "Krivošija", "Amer", "" ] ]
For a set of curves, Ahn et al. introduced the notion of a middle curve and gave algorithms that compute these curves with running time exponential in the number of curves. Here we study the computational complexity of this problem: we show that it is NP-complete and give approximation algorithms.
2106.15846
Zhiyuan Wen
Wen Zhiyuan, Cao Jiannong, Yang Ruosong, Liu Shuaiqi, Shen Jiaxing
Automatically Select Emotion for Response via Personality-affected Emotion Transition
Accepted by Findings of ACL-IJCNLP 2021
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To provide consistent emotional interaction with users, dialog systems should be capable of automatically selecting appropriate emotions for responses, as humans do. However, most existing works focus on rendering specified emotions in responses or empathetically responding to the emotion of users, while the individual difference in emotion expression is overlooked. This may lead to inconsistent emotional expressions and disengage users. To tackle this issue, we propose to equip the dialog system with personality and enable it to automatically select emotions in responses by simulating the emotion transition of humans in conversation. In detail, the emotion of the dialog system transitions from its preceding emotion in context. The transition is triggered by the preceding dialog context and affected by the specified personality trait. To achieve this, we first model the emotion transition in the dialog system as the variation between the preceding emotion and the response emotion in the Valence-Arousal-Dominance (VAD) emotion space. Then, we design neural networks to encode the preceding dialog context and the specified personality traits to compose the variation. Finally, the emotion for the response is selected based on the sum of the preceding emotion and the variation. We construct a dialog dataset with emotion and personality labels and conduct emotion prediction tasks for evaluation. Experimental results validate the effectiveness of the personality-affected emotion transition.
[ { "created": "Wed, 30 Jun 2021 07:00:42 GMT", "version": "v1" } ]
2021-07-01
[ [ "Zhiyuan", "Wen", "" ], [ "Jiannong", "Cao", "" ], [ "Ruosong", "Yang", "" ], [ "Shuaiqi", "Liu", "" ], [ "Jiaxing", "Shen", "" ] ]
To provide consistent emotional interaction with users, dialog systems should be capable of automatically selecting appropriate emotions for responses, as humans do. However, most existing works focus on rendering specified emotions in responses or empathetically responding to the emotion of users, while the individual difference in emotion expression is overlooked. This may lead to inconsistent emotional expressions and disengage users. To tackle this issue, we propose to equip the dialog system with personality and enable it to automatically select emotions in responses by simulating the emotion transition of humans in conversation. In detail, the emotion of the dialog system transitions from its preceding emotion in context. The transition is triggered by the preceding dialog context and affected by the specified personality trait. To achieve this, we first model the emotion transition in the dialog system as the variation between the preceding emotion and the response emotion in the Valence-Arousal-Dominance (VAD) emotion space. Then, we design neural networks to encode the preceding dialog context and the specified personality traits to compose the variation. Finally, the emotion for the response is selected based on the sum of the preceding emotion and the variation. We construct a dialog dataset with emotion and personality labels and conduct emotion prediction tasks for evaluation. Experimental results validate the effectiveness of the personality-affected emotion transition.
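A linear toy version of the transition, with matrices W_c and W_p standing in for the paper's neural encoders and a 5-dimensional Big-Five personality vector assumed; emotions live in the VAD cube $[-1, 1]^3$.

```python
import numpy as np

def response_emotion(prev_vad, context_vec, personality, W_c, W_p):
    """Response VAD = preceding VAD + variation, where the variation is
    composed from the dialog context and the personality traits."""
    variation = W_c @ context_vec + W_p @ personality
    return np.clip(prev_vad + variation, -1.0, 1.0)   # stay inside the cube

rng = np.random.default_rng(0)
W_c, W_p = rng.normal(size=(3, 16)) * 0.1, rng.normal(size=(3, 5)) * 0.1
prev = np.array([0.2, -0.1, 0.0])                     # mildly pleasant, calm
print(response_emotion(prev, rng.normal(size=16), rng.normal(size=5), W_c, W_p))
```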
2302.12910
Jia Shen
Jia Tracy Shen, Dongwon Lee
Imputing Knowledge Tracing Data with Subject-Based Training via LSTM Variational Autoencoders Frameworks
Accepted by AAAI2023 AI4ED Workshop
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
The issue of missing data poses a great challenge to boosting the performance and application of deep learning models in the {\em Knowledge Tracing} (KT) problem. However, there has been a lack of understanding of this issue in the literature. In this work, to address this challenge, we adopt a subject-based training method that splits and imputes data by student IDs, instead of the row-number splitting that we call non-subject-based training. Subject-based training retains the complete sequence for each student and hence achieves efficient training. Further, we leverage two existing deep generative frameworks, namely the Variational Autoencoder (VAE) and Longitudinal Variational Autoencoder (LVAE) frameworks, and build LSTM kernels into them to form LSTM-VAE and LSTM-LVAE models (denoted VAE and LVAE for simplicity) that generate quality data. In LVAE, a Gaussian Process (GP) model is trained to disentangle the correlation between the subject (i.e., student) descriptor information (e.g., age, gender) and the latent space. The paper finally compares model performance between training on the original data and training on data imputed with generated data from the non-subject-based model VAE-NS and the subject-based training models (i.e., VAE and LVAE). We demonstrate that the data generated by LSTM-VAE and LSTM-LVAE can boost the original model performance by about 50%. Moreover, with our proposed frameworks, the original model needs only 10% more student data to surpass the original performance if the prediction model is small, and 50% more data if the prediction model is large.
[ { "created": "Fri, 24 Feb 2023 21:56:03 GMT", "version": "v1" } ]
2023-02-28
[ [ "Shen", "Jia Tracy", "" ], [ "Lee", "Dongwon", "" ] ]
The issue of missing data poses a great challenge to boosting the performance and application of deep learning models in the {\em Knowledge Tracing} (KT) problem. However, there has been a lack of understanding of this issue in the literature. In this work, to address this challenge, we adopt a subject-based training method that splits and imputes data by student IDs, instead of the row-number splitting that we call non-subject-based training. Subject-based training retains the complete sequence for each student and hence achieves efficient training. Further, we leverage two existing deep generative frameworks, namely the Variational Autoencoder (VAE) and Longitudinal Variational Autoencoder (LVAE) frameworks, and build LSTM kernels into them to form LSTM-VAE and LSTM-LVAE models (denoted VAE and LVAE for simplicity) that generate quality data. In LVAE, a Gaussian Process (GP) model is trained to disentangle the correlation between the subject (i.e., student) descriptor information (e.g., age, gender) and the latent space. The paper finally compares model performance between training on the original data and training on data imputed with generated data from the non-subject-based model VAE-NS and the subject-based training models (i.e., VAE and LVAE). We demonstrate that the data generated by LSTM-VAE and LSTM-LVAE can boost the original model performance by about 50%. Moreover, with our proposed frameworks, the original model needs only 10% more student data to surpass the original performance if the prediction model is small, and 50% more data if the prediction model is large.
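The subject-based split is the simplest piece to make concrete. Below is a minimal sketch, assuming records carry a "student_id" field (an assumption for illustration); splitting by ID keeps each student's full interaction sequence in exactly one partition, unlike row-number splitting.

```python
# Minimal sketch of a subject-based split: partition by student ID so each
# student's complete sequence stays intact. Field names are assumptions.
import random

def subject_based_split(records, test_frac=0.2, seed=0):
    ids = sorted({r["student_id"] for r in records})
    random.Random(seed).shuffle(ids)
    test_ids = set(ids[: int(len(ids) * test_frac)])
    train = [r for r in records if r["student_id"] not in test_ids]
    test = [r for r in records if r["student_id"] in test_ids]
    return train, test
```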
2312.14925
Timo Kaufmann
Timo Kaufmann, Paul Weng, Viktor Bengs, Eyke H\"ullermeier
A Survey of Reinforcement Learning from Human Feedback
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Reinforcement learning from human feedback (RLHF) is a variant of reinforcement learning (RL) that learns from human feedback instead of relying on an engineered reward function. Building on prior work on the related setting of preference-based reinforcement learning (PbRL), it stands at the intersection of artificial intelligence and human-computer interaction. This positioning offers a promising avenue to enhance the performance and adaptability of intelligent systems while also improving the alignment of their objectives with human values. The training of large language models (LLMs) has impressively demonstrated this potential in recent years, where RLHF played a decisive role in directing the model's capabilities toward human objectives. This article provides a comprehensive overview of the fundamentals of RLHF, exploring the intricate dynamics between RL agents and human input. While recent focus has been on RLHF for LLMs, our survey adopts a broader perspective, examining the diverse applications and wide-ranging impact of the technique. We delve into the core principles that underpin RLHF, shedding light on the symbiotic relationship between algorithms and human feedback, and discuss the main research trends in the field. By synthesizing the current landscape of RLHF research, this article aims to provide researchers as well as practitioners with a comprehensive understanding of this rapidly growing field of research.
[ { "created": "Fri, 22 Dec 2023 18:58:06 GMT", "version": "v1" }, { "created": "Tue, 30 Apr 2024 17:59:01 GMT", "version": "v2" } ]
2024-05-01
[ [ "Kaufmann", "Timo", "" ], [ "Weng", "Paul", "" ], [ "Bengs", "Viktor", "" ], [ "Hüllermeier", "Eyke", "" ] ]
Reinforcement learning from human feedback (RLHF) is a variant of reinforcement learning (RL) that learns from human feedback instead of relying on an engineered reward function. Building on prior work on the related setting of preference-based reinforcement learning (PbRL), it stands at the intersection of artificial intelligence and human-computer interaction. This positioning offers a promising avenue to enhance the performance and adaptability of intelligent systems while also improving the alignment of their objectives with human values. The training of large language models (LLMs) has impressively demonstrated this potential in recent years, where RLHF played a decisive role in directing the model's capabilities toward human objectives. This article provides a comprehensive overview of the fundamentals of RLHF, exploring the intricate dynamics between RL agents and human input. While recent focus has been on RLHF for LLMs, our survey adopts a broader perspective, examining the diverse applications and wide-ranging impact of the technique. We delve into the core principles that underpin RLHF, shedding light on the symbiotic relationship between algorithms and human feedback, and discuss the main research trends in the field. By synthesizing the current landscape of RLHF research, this article aims to provide researchers as well as practitioners with a comprehensive understanding of this rapidly growing field of research.
1911.03852
Amir Gholami
Zhen Dong, Zhewei Yao, Yaohui Cai, Daiyaan Arfeen, Amir Gholami, Michael W. Mahoney, Kurt Keutzer
HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks
null
NeurIPS 2020 paper, link: https://proceedings.neurips.cc/paper/2020/file/d77c703536718b95308130ff2e5cf9ee-Supplemental.pdf
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantization is an effective method for reducing the memory footprint and inference time of neural networks, e.g., for efficient inference in the cloud and especially at the edge. However, ultra-low-precision quantization can lead to significant degradation in model generalization. A promising method to address this is mixed-precision quantization, where more sensitive layers are kept at higher precision. However, the search space for mixed-precision quantization is exponential in the number of layers. Recent work proposed HAWQ, a novel Hessian-based framework, with the aim of reducing this exponential search space by using second-order information. While promising, this prior work has three major limitations: (i) HAWQ-V1 only uses the top Hessian eigenvalue as a measure of sensitivity and does not consider the rest of the Hessian spectrum; (ii) the HAWQ-V1 approach only provides the relative sensitivity of different layers and therefore requires a manual selection of the mixed-precision setting; and (iii) HAWQ-V1 does not consider mixed-precision activation quantization. Here, we present HAWQ-V2, which addresses these shortcomings. For (i), we perform a theoretical analysis showing that a better sensitivity metric is the average of all of the Hessian eigenvalues. For (ii), we develop a Pareto-frontier-based method for selecting the exact bit precision of different layers without any manual selection. For (iii), we extend the Hessian analysis to mixed-precision activation quantization, which we find to be very beneficial for object detection. We show that HAWQ-V2 achieves new state-of-the-art results for a wide range of tasks.
[ { "created": "Sun, 10 Nov 2019 04:46:17 GMT", "version": "v1" } ]
2021-05-11
[ [ "Dong", "Zhen", "" ], [ "Yao", "Zhewei", "" ], [ "Cai", "Yaohui", "" ], [ "Arfeen", "Daiyaan", "" ], [ "Gholami", "Amir", "" ], [ "Mahoney", "Michael W.", "" ], [ "Keutzer", "Kurt", "" ] ]
Quantization is an effective method for reducing the memory footprint and inference time of neural networks, e.g., for efficient inference in the cloud and especially at the edge. However, ultra-low-precision quantization can lead to significant degradation in model generalization. A promising method to address this is mixed-precision quantization, where more sensitive layers are kept at higher precision. However, the search space for mixed-precision quantization is exponential in the number of layers. Recent work proposed HAWQ, a novel Hessian-based framework, with the aim of reducing this exponential search space by using second-order information. While promising, this prior work has three major limitations: (i) HAWQ-V1 only uses the top Hessian eigenvalue as a measure of sensitivity and does not consider the rest of the Hessian spectrum; (ii) the HAWQ-V1 approach only provides the relative sensitivity of different layers and therefore requires a manual selection of the mixed-precision setting; and (iii) HAWQ-V1 does not consider mixed-precision activation quantization. Here, we present HAWQ-V2, which addresses these shortcomings. For (i), we perform a theoretical analysis showing that a better sensitivity metric is the average of all of the Hessian eigenvalues. For (ii), we develop a Pareto-frontier-based method for selecting the exact bit precision of different layers without any manual selection. For (iii), we extend the Hessian analysis to mixed-precision activation quantization, which we find to be very beneficial for object detection. We show that HAWQ-V2 achieves new state-of-the-art results for a wide range of tasks.
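The average-eigenvalue sensitivity metric can be estimated matrix-free, since the mean Hessian eigenvalue equals trace(H)/d and Hutchinson's estimator recovers the trace from Hessian-vector products. The sketch below is one standard way to do this in PyTorch; it illustrates the metric, not the authors' code.

```python
# Hutchinson trace estimation: E[v^T H v] = trace(H) for Rademacher v,
# so trace(H)/d approximates the average Hessian eigenvalue.
import torch

def avg_hessian_eigenvalue(loss, params, n_samples=10):
    grads = torch.autograd.grad(loss, params, create_graph=True)
    d = sum(p.numel() for p in params)
    trace_est = 0.0
    for _ in range(n_samples):
        vs = [torch.randint_like(p, 2) * 2.0 - 1.0 for p in params]  # +/-1 entries
        gv = sum((g * v).sum() for g, v in zip(grads, vs))
        hvs = torch.autograd.grad(gv, params, retain_graph=True)     # H v
        trace_est += sum((h * v).sum().item() for h, v in zip(hvs, vs))
    return trace_est / n_samples / d
```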
1907.00483
Amit Kumar Jaiswal
Amit Kumar Jaiswal, Haiming Liu and Ingo Frommholz
Effects of Foraging in Personalized Content-based Image Recommendation
Accepted in Proceedings of the the 2nd International Workshop on Explainable Recommendation and Search (EARS) at SIGIR 2019
null
null
null
cs.IR cs.HC cs.MM cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A major challenge of recommender systems is to help users locate interesting items. Personalized recommender systems have become very popular, as they attempt to predetermine the needs of users and provide them with recommendations to personalize their navigation. However, few studies have addressed the question of what drives users' attention to specific content within a collection and what influences the selection of interesting items. To this end, we apply the lens of Information Foraging Theory (IFT) to image recommendation to demonstrate how users can utilize visual bookmarks to locate interesting images. We investigate a personalized content-based image recommendation system to understand what affects user attention by reinforcing visual attention cues based on IFT. We further find that visual bookmarks (cues) lead to a stronger scent for the recommended image collection. Our evaluation is based on the Pinterest image collection.
[ { "created": "Sun, 30 Jun 2019 22:16:32 GMT", "version": "v1" }, { "created": "Sat, 20 Jul 2019 12:43:53 GMT", "version": "v2" } ]
2019-07-23
[ [ "Jaiswal", "Amit Kumar", "" ], [ "Liu", "Haiming", "" ], [ "Frommholz", "Ingo", "" ] ]
A major challenge of recommender systems is to help users locate interesting items. Personalized recommender systems have become very popular, as they attempt to predetermine the needs of users and provide them with recommendations to personalize their navigation. However, few studies have addressed the question of what drives users' attention to specific content within a collection and what influences the selection of interesting items. To this end, we apply the lens of Information Foraging Theory (IFT) to image recommendation to demonstrate how users can utilize visual bookmarks to locate interesting images. We investigate a personalized content-based image recommendation system to understand what affects user attention by reinforcing visual attention cues based on IFT. We further find that visual bookmarks (cues) lead to a stronger scent for the recommended image collection. Our evaluation is based on the Pinterest image collection.
1509.06983
Marc Hellmuth
Marc Hellmuth, Adrian Fritz, Nicolas Wieseke and Peter F. Stadler
Techniques for the Cograph Editing Problem: Module Merge is equivalent to Editing P4s
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cographs are graphs in which no four vertices induce a path $P_4$. The cograph editing problem asks, for a given graph $G = (V,E)$, for a set of at most $k$ edge additions and deletions that transform $G$ into a cograph. This combinatorial optimization problem is NP-hard. It has recently found applications in the context of phylogenetics, hence good heuristics are of practical importance. It is well known that the cograph editing problem can be solved independently on the so-called strong prime modules of the modular decomposition of $G$. We show here that editing the induced $P_4$'s of a given graph is equivalent to resolving strong prime modules by means of a newly defined merge operation on the submodules. This observation leads to a new exact algorithm for the cograph editing problem that can be used as a starting point for the construction of novel heuristics.
[ { "created": "Wed, 23 Sep 2015 13:54:15 GMT", "version": "v1" }, { "created": "Thu, 24 Sep 2015 09:34:29 GMT", "version": "v2" } ]
2015-09-25
[ [ "Hellmuth", "Marc", "" ], [ "Fritz", "Adrian", "" ], [ "Wieseke", "Nicolas", "" ], [ "Stadler", "Peter F.", "" ] ]
Cographs are graphs in which no four vertices induce a path $P_4$. The cograph editing problem asks, for a given graph $G = (V,E)$, for a set of at most $k$ edge additions and deletions that transform $G$ into a cograph. This combinatorial optimization problem is NP-hard. It has recently found applications in the context of phylogenetics, hence good heuristics are of practical importance. It is well known that the cograph editing problem can be solved independently on the so-called strong prime modules of the modular decomposition of $G$. We show here that editing the induced $P_4$'s of a given graph is equivalent to resolving strong prime modules by means of a newly defined merge operation on the submodules. This observation leads to a new exact algorithm for the cograph editing problem that can be used as a starting point for the construction of novel heuristics.
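For intuition about the $P_4$ characterization, a brute-force induced-$P_4$ detector is easy to state: a graph is a cograph iff none is found. This $O(n^4)$ check is illustrative only and is not an editing algorithm.

```python
# Brute-force induced-P4 detection: vertices a-b-c-d form an induced P4 iff
# edges ab, bc, cd are present and ac, ad, bd are absent.
from itertools import combinations, permutations

def has_induced_p4(adj):
    """adj maps each vertex to its set of neighbours (undirected graph)."""
    for quad in combinations(adj, 4):
        for a, b, c, d in permutations(quad):
            if (b in adj[a] and c in adj[b] and d in adj[c]
                    and c not in adj[a] and d not in adj[a] and d not in adj[b]):
                return True
    return False

# A graph is a cograph iff has_induced_p4(adj) is False.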
2401.06080
Rui Zheng
Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, Songyang Gao, Nuo Xu, Yuhao Zhou, Xiaoran Fan, Zhiheng Xi, Jun Zhao, Xiao Wang, Tao Ji, Hang Yan, Lixing Shen, Zhan Chen, Tao Gui, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Zuxuan Wu, Yu-Gang Jiang
Secrets of RLHF in Large Language Models Part II: Reward Modeling
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement Learning from Human Feedback (RLHF) has become a crucial technology for aligning language models with human values and intentions, enabling models to produce more helpful and harmless responses. Reward models are trained as proxies for human preferences to drive reinforcement learning optimization. While reward models are often considered central to achieving high performance, they face the following challenges in practical applications: (1) Incorrect and ambiguous preference pairs in the dataset may hinder the reward model from accurately capturing human intent. (2) Reward models trained on data from a specific distribution often struggle to generalize to examples outside that distribution and are not suitable for iterative RLHF training. In this report, we attempt to address these two issues. (1) From a data perspective, we propose a method to measure the strength of preferences within the data, based on a voting mechanism of multiple reward models. Experimental results confirm that data with varying preference strengths have different impacts on reward model performance. We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset and fully leverage high-quality preference data. (2) From an algorithmic standpoint, we introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses, thereby improving model generalization. Furthermore, we employ meta-learning to enable the reward model to maintain the ability to differentiate subtle differences in out-of-distribution samples, and this approach can be utilized for iterative RLHF optimization.
[ { "created": "Thu, 11 Jan 2024 17:56:59 GMT", "version": "v1" }, { "created": "Fri, 12 Jan 2024 09:46:10 GMT", "version": "v2" } ]
2024-01-15
[ [ "Wang", "Binghai", "" ], [ "Zheng", "Rui", "" ], [ "Chen", "Lu", "" ], [ "Liu", "Yan", "" ], [ "Dou", "Shihan", "" ], [ "Huang", "Caishuang", "" ], [ "Shen", "Wei", "" ], [ "Jin", "Senjie", "" ], [ "Zhou", "Enyu", "" ], [ "Shi", "Chenyu", "" ], [ "Gao", "Songyang", "" ], [ "Xu", "Nuo", "" ], [ "Zhou", "Yuhao", "" ], [ "Fan", "Xiaoran", "" ], [ "Xi", "Zhiheng", "" ], [ "Zhao", "Jun", "" ], [ "Wang", "Xiao", "" ], [ "Ji", "Tao", "" ], [ "Yan", "Hang", "" ], [ "Shen", "Lixing", "" ], [ "Chen", "Zhan", "" ], [ "Gui", "Tao", "" ], [ "Zhang", "Qi", "" ], [ "Qiu", "Xipeng", "" ], [ "Huang", "Xuanjing", "" ], [ "Wu", "Zuxuan", "" ], [ "Jiang", "Yu-Gang", "" ] ]
Reinforcement Learning from Human Feedback (RLHF) has become a crucial technology for aligning language models with human values and intentions, enabling models to produce more helpful and harmless responses. Reward models are trained as proxies for human preferences to drive reinforcement learning optimization. While reward models are often considered central to achieving high performance, they face the following challenges in practical applications: (1) Incorrect and ambiguous preference pairs in the dataset may hinder the reward model from accurately capturing human intent. (2) Reward models trained on data from a specific distribution often struggle to generalize to examples outside that distribution and are not suitable for iterative RLHF training. In this report, we attempt to address these two issues. (1) From a data perspective, we propose a method to measure the strength of preferences within the data, based on a voting mechanism of multiple reward models. Experimental results confirm that data with varying preference strengths have different impacts on reward model performance. We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset and fully leverage high-quality preference data. (2) From an algorithmic standpoint, we introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses, thereby improving model generalization. Furthermore, we employ meta-learning to enable the reward model to maintain the ability to differentiate subtle differences in out-of-distribution samples, and this approach can be utilized for iterative RLHF optimization.
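A rough sketch of the voting idea, under the assumption that each ensemble member maps a response to a scalar reward: the mean margin across models serves as a preference-strength score and the spread as a disagreement signal. This is illustrative, not the authors' exact formulation.

```python
# Hedged sketch: preference strength via an ensemble of reward models.
import torch

def preference_strength(reward_models, chosen_ids, rejected_ids):
    margins = []
    with torch.no_grad():
        for rm in reward_models:  # each rm: token ids -> scalar reward per item
            margins.append(rm(chosen_ids) - rm(rejected_ids))
    margins = torch.stack(margins)           # [n_models, batch]
    return margins.mean(0), margins.std(0)   # strength score, disagreement
```

Pairs with a low or negative mean margin would be candidates for the "incorrect or ambiguous" preferences the abstract describes.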
1508.03261
He Sun
Yin Tat Lee and He Sun
Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time
22 pages. A preliminary version of this paper is to appear in proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015)
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the first almost-linear time algorithm for constructing linear-sized spectral sparsification for graphs. This improves upon all previous constructions of linear-sized spectral sparsification, which require $\Omega(n^2)$ time. A key ingredient in our algorithm is a novel combination of two techniques used in the literature for constructing spectral sparsification: random sampling by effective resistance, and adaptive constructions based on barrier functions.
[ { "created": "Thu, 13 Aug 2015 16:24:28 GMT", "version": "v1" } ]
2015-08-14
[ [ "Lee", "Yin Tat", "" ], [ "Sun", "He", "" ] ]
We present the first almost-linear time algorithm for constructing linear-sized spectral sparsification for graphs. This improves upon all previous constructions of linear-sized spectral sparsification, which require $\Omega(n^2)$ time. A key ingredient in our algorithm is a novel combination of two techniques used in the literature for constructing spectral sparsification: random sampling by effective resistance, and adaptive constructions based on barrier functions.
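Of the two ingredients, effective-resistance sampling is easy to sketch on toy graphs using a dense Laplacian pseudoinverse (nowhere near almost-linear time; purely illustrative):

```python
# Toy effective-resistance sampler. Edges are drawn with probability
# proportional to w_e * R_eff(e) and reweighted for unbiasedness.
import numpy as np

def er_sample(n, edges, weights, q, seed=0):
    w = np.asarray(weights, dtype=float)
    L = np.zeros((n, n))
    for (u, v), wi in zip(edges, w):
        L[u, u] += wi; L[v, v] += wi; L[u, v] -= wi; L[v, u] -= wi
    Lp = np.linalg.pinv(L)                                   # Laplacian pinv
    reff = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    p = w * reff / (w * reff).sum()                          # leverage scores
    rng = np.random.default_rng(seed)
    picks = rng.choice(len(edges), size=q, p=p)
    new_w = np.zeros(len(edges))
    for i in picks:                                          # reweight samples
        new_w[i] += w[i] / (q * p[i])
    return [(edges[i], new_w[i]) for i in np.unique(picks)]
```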
2207.02802
Qianglong Chen
Qianglong Chen, Xiangji Zeng, Jiangang Zhu, Yin Zhang, Bojia Lin, Yang Yang, Daxin Jiang
Rethinking the Value of Gazetteer in Chinese Named Entity Recognition
Accepted by NLPCC 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Gazetteers are widely used in Chinese named entity recognition (NER) to enhance span boundary detection and type classification. However, to further understand the generalizability and effectiveness of gazetteers, the NLP community still lacks a systematic analysis of gazetteer-enhanced NER models. In this paper, we first re-examine the effectiveness of several common practices of gazetteer-enhanced NER models and carry out a series of detailed analyses to evaluate the relationship between model performance and gazetteer characteristics, which can guide us in building a more suitable gazetteer. The findings of this paper are as follows: (1) the gazetteer improves most of the situations that the traditional NER model finds difficult to learn; (2) model performance greatly benefits from high-quality pre-trained lexeme embeddings; (3) a good gazetteer should cover more entities that can be matched in both the training and testing sets.
[ { "created": "Wed, 6 Jul 2022 16:45:25 GMT", "version": "v1" }, { "created": "Mon, 18 Jul 2022 09:13:26 GMT", "version": "v2" } ]
2022-07-19
[ [ "Chen", "Qianglong", "" ], [ "Zeng", "Xiangji", "" ], [ "Zhu", "Jiangang", "" ], [ "Zhang", "Yin", "" ], [ "Lin", "Bojia", "" ], [ "Yang", "Yang", "" ], [ "Jiang", "Daxin", "" ] ]
Gazetteers are widely used in Chinese named entity recognition (NER) to enhance span boundary detection and type classification. However, to further understand the generalizability and effectiveness of gazetteers, the NLP community still lacks a systematic analysis of gazetteer-enhanced NER models. In this paper, we first re-examine the effectiveness of several common practices of gazetteer-enhanced NER models and carry out a series of detailed analyses to evaluate the relationship between model performance and gazetteer characteristics, which can guide us in building a more suitable gazetteer. The findings of this paper are as follows: (1) the gazetteer improves most of the situations that the traditional NER model finds difficult to learn; (2) model performance greatly benefits from high-quality pre-trained lexeme embeddings; (3) a good gazetteer should cover more entities that can be matched in both the training and testing sets.
2311.00800
AmirHosein Fadaei
Amir Hosein Fadaei, Mohammad-Reza A. Dehaqani
Beyond still images: Temporal features and input variance resilience
13 pages, 9 figures
null
10.1038/s41598-024-66346-w
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditionally, vision models have predominantly relied on spatial features extracted from static images, deviating from the continuous stream of spatiotemporal features processed by the brain in natural vision. While numerous video-understanding models have emerged, the incorporation of videos into image-understanding models with spatiotemporal features has been limited. Drawing inspiration from natural vision, which exhibits remarkable resilience to input changes, our research focuses on the development of a brain-inspired model for vision understanding trained with videos. Our findings demonstrate that models that are trained on videos instead of still images and that include temporal features become more resilient to various alterations of the input media.
[ { "created": "Wed, 1 Nov 2023 19:34:45 GMT", "version": "v1" }, { "created": "Wed, 14 Feb 2024 15:41:08 GMT", "version": "v2" } ]
2024-07-18
[ [ "Fadaei", "Amir Hosein", "" ], [ "Dehaqani", "Mohammad-Reza A.", "" ] ]
Traditionally, vision models have predominantly relied on spatial features extracted from static images, deviating from the continuous stream of spatiotemporal features processed by the brain in natural vision. While numerous video-understanding models have emerged, the incorporation of videos into image-understanding models with spatiotemporal features has been limited. Drawing inspiration from natural vision, which exhibits remarkable resilience to input changes, our research focuses on the development of a brain-inspired model for vision understanding trained with videos. Our findings demonstrate that models that are trained on videos instead of still images and that include temporal features become more resilient to various alterations of the input media.
0810.1248
Ali Parandehgheibi
Ali ParandehGheibi, Atilla Eryilmaz, Asuman Ozdaglar, Muriel Medard
Resource Allocation in Multiple Access Channels
5 pages, In proc. of ACSSC 2007
null
null
null
cs.IT cs.NI math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of rate allocation in a Gaussian multiple-access channel, with the goal of maximizing a utility function over transmission rates. In contrast to the literature which focuses on linear utility functions, we study general concave utility functions. We present a gradient projection algorithm for this problem. Since the constraint set of the problem is described by exponentially many constraints, methods that use exact projections are computationally intractable. Therefore, we develop a new method that uses approximate projections. We use the polymatroid structure of the capacity region to show that the approximate projection can be implemented by a recursive algorithm in time polynomial in the number of users. We further propose another algorithm for implementing the approximate projections using rate-splitting and show improved bounds on its convergence time.
[ { "created": "Tue, 7 Oct 2008 17:29:52 GMT", "version": "v1" } ]
2008-10-08
[ [ "ParandehGheibi", "Ali", "" ], [ "Eryilmaz", "Atilla", "" ], [ "Ozdaglar", "Asuman", "" ], [ "Medard", "Muriel", "" ] ]
We consider the problem of rate allocation in a Gaussian multiple-access channel, with the goal of maximizing a utility function over transmission rates. In contrast to the literature which focuses on linear utility functions, we study general concave utility functions. We present a gradient projection algorithm for this problem. Since the constraint set of the problem is described by exponentially many constraints, methods that use exact projections are computationally intractable. Therefore, we develop a new method that uses approximate projections. We use the polymatroid structure of the capacity region to show that the approximate projection can be implemented by a recursive algorithm in time polynomial in the number of users. We further propose another algorithm for implementing the approximate projections using rate-splitting and show improved bounds on its convergence time.
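The overall scheme fits a generic projected-gradient skeleton; the polymatroid-specific approximate projection is the paper's contribution and is left abstract here as a user-supplied oracle.

```python
# Generic projected-gradient ascent matching the description above. The
# approx_project argument stands in for the paper's recursive polymatroid
# projection, which is not reproduced here.
import numpy as np

def projected_gradient_ascent(grad_u, approx_project, r0, step=0.1, iters=100):
    r = np.asarray(r0, dtype=float)
    for _ in range(iters):
        r = approx_project(r + step * grad_u(r))  # ascend, then project back
    return r
```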
2407.16395
Valderi Leithardt Valderi
Pedro Costa, Valderi Leithardt
Prisec II -- A Comprehensive Model for IoT Security: Cryptographic Algorithms and Cloud Integration
8 pages
IEEE Latam Transactions 2024
null
null
cs.CR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study addresses the critical issue of ensuring data security and efficiency in interconnected devices, especially in IoT environments. The objective is to design and implement a model that uses cryptographic algorithms to enhance data security in 5G networks. Challenges arise from the limited computational capabilities of IoT devices, which require the analysis and selection of cryptographic algorithms to achieve efficient data transmission. This study proposes a model with four security levels, each employing a different strength of encryption, to provide better data security. Finally, cloud computing is leveraged to optimize processing efficiency and resource utilization and thereby improve data transmission.
[ { "created": "Tue, 23 Jul 2024 11:35:24 GMT", "version": "v1" } ]
2024-07-24
[ [ "Costa", "Pedro", "" ], [ "Leithardt", "Valderi", "" ] ]
This study addresses the critical issue of ensuring data security and efficiency in interconnected devices, especially in IoT environments. The objective is to design and implement a model that uses cryptographic algorithms to enhance data security in 5G networks. Challenges arise from the limited computational capabilities of IoT devices, which require the analysis and selection of cryptographic algorithms to achieve efficient data transmission. This study proposes a model with four security levels, each employing a different strength of encryption, to provide better data security. Finally, cloud computing is leveraged to optimize processing efficiency and resource utilization and thereby improve data transmission.
0906.0426
Li Li
Li Li, Yudong Chen, Yi Zhang
A Mixed-Fractal Model for Network Traffic
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this short paper, we propose a new multi-fractal flow model, aiming to provide a possible explanation for the crossover phenomena that appear in the estimation of the Hurst exponent for network traffic. It is shown that crossover occurs if the network flow consists of several components with different Hurst exponents. Our results indicate that this model might be useful in network traffic modeling and simulation.
[ { "created": "Tue, 2 Jun 2009 06:41:19 GMT", "version": "v1" } ]
2009-06-03
[ [ "Li", "Li", "" ], [ "Chen", "Yudong", "" ], [ "Zhang", "Yi", "" ] ]
In this short paper, we propose a new multi-fractal flow model, aiming to provide a possible explanation for the crossover phenomena that appear in the estimation of the Hurst exponent for network traffic. It is shown that crossover occurs if the network flow consists of several components with different Hurst exponents. Our results indicate that this model might be useful in network traffic modeling and simulation.
2009.00548
Philipp Meschenmoser
Philipp Meschenmoser, Juri F. Buchm\"uller, Daniel Seebacher, Martin Wikelski and Daniel A. Keim
MultiSegVA: Using Visual Analytics to Segment Biologging Time Series on Multiple Scales
IEEE VAST 2020 - Proceedings of IEEE Conference on Visual Analytics Science and Technology (VAST), 2020
null
null
null
cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Segmenting biologging time series of animals on multiple temporal scales is an essential step that requires complex techniques with careful parameterization and possibly cross-domain expertise. Yet, there is a lack of visual-interactive tools that strongly support such multi-scale segmentation. To close this gap, we present our MultiSegVA platform for interactively defining segmentation techniques and parameters on multiple temporal scales. MultiSegVA primarily contributes tailored, visual-interactive means and visual analytics paradigms for segmenting unlabeled time series on multiple scales. Further, to flexibly compose the multi-scale segmentation, the platform contributes a new visual query language that links a variety of segmentation techniques. To illustrate our approach, we present a domain-oriented set of segmentation techniques derived in collaboration with movement ecologists. We demonstrate the applicability and usefulness of MultiSegVA in two real-world use cases from movement ecology, related to behavior analysis after environment-aware segmentation, and after progressive clustering. Expert feedback from movement ecologists shows the effectiveness of tailored visual-interactive means and visual analytics paradigms at segmenting multi-scale data, enabling them to perform semantically meaningful analyses. A third use case demonstrates that MultiSegVA is generalizable to other domains.
[ { "created": "Tue, 1 Sep 2020 16:27:08 GMT", "version": "v1" }, { "created": "Wed, 2 Sep 2020 08:22:29 GMT", "version": "v2" } ]
2020-09-03
[ [ "Meschenmoser", "Philipp", "" ], [ "Buchmüller", "Juri F.", "" ], [ "Seebacher", "Daniel", "" ], [ "Wikelski", "Martin", "" ], [ "Keim", "Daniel A.", "" ] ]
Segmenting biologging time series of animals on multiple temporal scales is an essential step that requires complex techniques with careful parameterization and possibly cross-domain expertise. Yet, there is a lack of visual-interactive tools that strongly support such multi-scale segmentation. To close this gap, we present our MultiSegVA platform for interactively defining segmentation techniques and parameters on multiple temporal scales. MultiSegVA primarily contributes tailored, visual-interactive means and visual analytics paradigms for segmenting unlabeled time series on multiple scales. Further, to flexibly compose the multi-scale segmentation, the platform contributes a new visual query language that links a variety of segmentation techniques. To illustrate our approach, we present a domain-oriented set of segmentation techniques derived in collaboration with movement ecologists. We demonstrate the applicability and usefulness of MultiSegVA in two real-world use cases from movement ecology, related to behavior analysis after environment-aware segmentation, and after progressive clustering. Expert feedback from movement ecologists shows the effectiveness of tailored visual-interactive means and visual analytics paradigms at segmenting multi-scale data, enabling them to perform semantically meaningful analyses. A third use case demonstrates that MultiSegVA is generalizable to other domains.
1602.03031
Can Alkan
Atalay M. Ileri, Halil I. Ozercan, Alper Gundogdu, Ahmet K. Senol, M. Yusuf Ozkaya, Can Alkan
Coinami: A Cryptocurrency with DNA Sequence Alignment as Proof-of-work
null
null
null
null
cs.CE cs.CR q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rate of growth of the amount of data generated by high-throughput sequencing (HTS) platforms now exceeds the growth stipulated by Moore's Law. HTS data is expected to surpass that of other "big data" domains, such as astronomy, before the year 2025. In addition to sequencing genomes for research purposes, genome and exome sequencing in clinical settings will become a routine part of health care. The analysis of such large amounts of data, however, is not without computational challenges. This burden is further increased by the periodic updates to reference genomes, which typically require re-analysis of existing data. Here we propose the Coin-Application Mediator Interface (Coinami) to distribute the workload of mapping reads to reference genomes using a volunteer grid computing approach similar to the Berkeley Open Infrastructure for Network Computing (BOINC). However, since HTS read mapping requires substantial computational resources and fast analysis turnaround is desired, Coinami uses HTS read mapping as proof-of-work to generate valid blocks that maintain its own cryptocurrency system, which may help motivate volunteers to dedicate more resources. The Coinami protocol includes mechanisms to ensure that jobs performed by volunteers are correct, and it provides genomic data privacy. The prototype implementation of Coinami is available at http://coinami.github.io/.
[ { "created": "Tue, 9 Feb 2016 15:23:38 GMT", "version": "v1" }, { "created": "Fri, 19 Feb 2016 11:19:35 GMT", "version": "v2" } ]
2016-02-22
[ [ "Ileri", "Atalay M.", "" ], [ "Ozercan", "Halil I.", "" ], [ "Gundogdu", "Alper", "" ], [ "Senol", "Ahmet K.", "" ], [ "Ozkaya", "M. Yusuf", "" ], [ "Alkan", "Can", "" ] ]
The rate of growth of the amount of data generated by high-throughput sequencing (HTS) platforms now exceeds the growth stipulated by Moore's Law. HTS data is expected to surpass that of other "big data" domains, such as astronomy, before the year 2025. In addition to sequencing genomes for research purposes, genome and exome sequencing in clinical settings will become a routine part of health care. The analysis of such large amounts of data, however, is not without computational challenges. This burden is further increased by the periodic updates to reference genomes, which typically require re-analysis of existing data. Here we propose the Coin-Application Mediator Interface (Coinami) to distribute the workload of mapping reads to reference genomes using a volunteer grid computing approach similar to the Berkeley Open Infrastructure for Network Computing (BOINC). However, since HTS read mapping requires substantial computational resources and fast analysis turnaround is desired, Coinami uses HTS read mapping as proof-of-work to generate valid blocks that maintain its own cryptocurrency system, which may help motivate volunteers to dedicate more resources. The Coinami protocol includes mechanisms to ensure that jobs performed by volunteers are correct, and it provides genomic data privacy. The prototype implementation of Coinami is available at http://coinami.github.io/.
2006.03377
Emil Bj\"ornson
Emil Bj\"ornson, \"Ozgecan \"Ozdogan, Erik G. Larsson
Reconfigurable Intelligent Surfaces: Three Myths and Two Critical Questions
To appear in IEEE Communications Magazine, 7 pages, 6 figures
IEEE Communications Magazine, vol. 58, no. 12, pp. 90-96, December 2020
10.1109/MCOM.001.2000407
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The search for physical-layer technologies that can play a key role in beyond-5G systems has started. One option is reconfigurable intelligent surfaces (RIS), which can collect wireless signals from a transmitter and passively beamform them towards the receiver. The technology has exciting prospects and is quickly gaining traction in the communication community, but in the current hype we have witnessed how several myths and overstatements are spreading in the literature. In this article, we take a neutral look at the RIS technology. We first review the fundamentals and then explain specific features that can be easily misinterpreted. In particular, we debunk three myths: 1) Current network technology can only control the transmitter and receiver, not the environment in between; 2) A better asymptotic array gain is achieved than with conventional beamforming; 3) The pathloss is the same as with anomalous mirrors. To inspire further research, we conclude by identifying two critical questions that must be answered for RIS to become a successful technology: 1) What is a convincing use case for RIS?; 2) How can we estimate channels and control an RIS in real time?
[ { "created": "Fri, 5 Jun 2020 11:25:32 GMT", "version": "v1" }, { "created": "Thu, 1 Oct 2020 14:01:49 GMT", "version": "v2" } ]
2021-01-05
[ [ "Björnson", "Emil", "" ], [ "Özdogan", "Özgecan", "" ], [ "Larsson", "Erik G.", "" ] ]
The search for physical-layer technologies that can play a key role in beyond-5G systems has started. One option is reconfigurable intelligent surfaces (RIS), which can collect wireless signals from a transmitter and passively beamform them towards the receiver. The technology has exciting prospects and is quickly gaining traction in the communication community, but in the current hype we have witnessed how several myths and overstatements are spreading in the literature. In this article, we take a neutral look at the RIS technology. We first review the fundamentals and then explain specific features that can be easily misinterpreted. In particular, we debunk three myths: 1) Current network technology can only control the transmitter and receiver, not the environment in between; 2) A better asymptotic array gain is achieved than with conventional beamforming; 3) The pathloss is the same as with anomalous mirrors. To inspire further research, we conclude by identifying two critical questions that must be answered for RIS to become a successful technology: 1) What is a convincing use case for RIS?; 2) How can we estimate channels and control an RIS in real time?
2005.07174
Elena Kochkina
Elena Kochkina and Maria Liakata
Estimating predictive uncertainty for rumour verification models
Accepted to the Annual Conference of the Association for Computational Linguistics (ACL) 2020
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
The inability to correctly resolve rumours circulating online can have harmful real-world consequences. We present a method for incorporating model and data uncertainty estimates into natural language processing models for automatic rumour verification. We show that these estimates can be used to filter out model predictions likely to be erroneous, so that these difficult instances can be prioritised by a human fact-checker. We propose two methods for uncertainty-based instance rejection, supervised and unsupervised. We also show how uncertainty estimates can be used to interpret model performance as a rumour unfolds.
[ { "created": "Thu, 14 May 2020 17:42:25 GMT", "version": "v1" } ]
2020-05-15
[ [ "Kochkina", "Elena", "" ], [ "Liakata", "Maria", "" ] ]
The inability to correctly resolve rumours circulating online can have harmful real-world consequences. We present a method for incorporating model and data uncertainty estimates into natural language processing models for automatic rumour verification. We show that these estimates can be used to filter out model predictions likely to be erroneous, so that these difficult instances can be prioritised by a human fact-checker. We propose two methods for uncertainty-based instance rejection, supervised and unsupervised. We also show how uncertainty estimates can be used to interpret model performance as a rumour unfolds.
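One standard way to obtain such model-uncertainty estimates is Monte Carlo dropout; the sketch below shows the mean/variance aggregation and a simple rejection rule with an illustrative threshold. It is a generic instance of the idea, not necessarily the authors' estimator.

```python
# MC dropout: keep dropout active at test time, average several stochastic
# passes, and use the predictive variance as an uncertainty score.
import torch

def mc_dropout_predict(model, x, n_passes=20):
    model.train()  # leaves dropout layers active during inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_passes)])
    mean = probs.mean(0)
    uncertainty = probs.var(0).sum(-1)  # total variance across classes
    return mean, uncertainty

def reject_uncertain(mean, uncertainty, threshold=0.05):
    keep = uncertainty < threshold      # threshold is illustrative
    return mean.argmax(-1), keep        # predictions plus a "trust this" mask
```

Instances with `keep == False` would be the ones deferred to a human fact-checker.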
1506.09061
Darryl Hill
Prosenjit Bose, Darryl Hill, and Michiel Smid
Improved Spanning Ratio for Low Degree Plane Spanners
39 pages, appendix has been integrated into the main paper
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe an algorithm that builds a plane spanner with a maximum degree of 8 and a spanning ratio of approximately 4.414 with respect to the complete graph. This is the best currently known spanning ratio for a plane spanner with a maximum degree of less than 14.
[ { "created": "Tue, 30 Jun 2015 12:35:02 GMT", "version": "v1" }, { "created": "Thu, 2 Jul 2015 18:19:53 GMT", "version": "v2" } ]
2015-07-03
[ [ "Bose", "Prosenjit", "" ], [ "Hill", "Darryl", "" ], [ "Smid", "Michiel", "" ] ]
We describe an algorithm that builds a plane spanner with a maximum degree of 8 and a spanning ratio of approximately 4.414 with respect to the complete graph. This is the best currently known spanning ratio for a plane spanner with a maximum degree of less than 14.
2208.08759
Benjamin Doerr
Benjamin Doerr and Zhongdi Qu
Runtime Analysis for the NSGA-II: Provable Speed-Ups From Crossover
Extended version of a paper that appears in the proceedings of AAAI 2023
null
null
null
cs.NE cs.AI cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Very recently, the first mathematical runtime analyses of the NSGA-II, the most common multi-objective evolutionary algorithm, have been conducted. Continuing this research direction, we prove that the NSGA-II optimizes the OneJumpZeroJump benchmark asymptotically faster when crossover is employed. Together with a parallel independent work by Dang, Opris, Salehi, and Sudholt, this is the first time such an advantage of crossover is proven for the NSGA-II. Our arguments can be transferred to single-objective optimization, where they prove that crossover can speed up the $(\mu+1)$ genetic algorithm in a different and more pronounced way than previously known. Our experiments confirm the added value of crossover and show that the observed advantages are even larger than what our proofs can guarantee.
[ { "created": "Thu, 18 Aug 2022 10:41:44 GMT", "version": "v1" }, { "created": "Wed, 15 Mar 2023 08:58:10 GMT", "version": "v2" } ]
2023-03-16
[ [ "Doerr", "Benjamin", "" ], [ "Qu", "Zhongdi", "" ] ]
Very recently, the first mathematical runtime analyses of the NSGA-II, the most common multi-objective evolutionary algorithm, have been conducted. Continuing this research direction, we prove that the NSGA-II optimizes the OneJumpZeroJump benchmark asymptotically faster when crossover is employed. Together with a parallel independent work by Dang, Opris, Salehi, and Sudholt, this is the first time such an advantage of crossover is proven for the NSGA-II. Our arguments can be transferred to single-objective optimization, where they prove that crossover can speed up the $(\mu+1)$ genetic algorithm in a different and more pronounced way than previously known. Our experiments confirm the added value of crossover and show that the observed advantages are even larger than what our proofs can guarantee.
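For readers unfamiliar with the benchmark, a sketch of the bi-objective OneJumpZeroJump function with jump parameter $k$ follows, restated from memory of the runtime-analysis literature; verify against the paper before relying on it.

```python
# Hedged sketch of OneJumpZeroJump (both objectives maximized). x is a list
# of 0/1 values; each objective is a Jump-style function, one counting ones,
# the other counting zeros.
def one_jump_zero_jump(x, k):
    n = len(x)
    ones = sum(x)
    zeros = n - ones
    f1 = ones + k if (ones <= n - k or ones == n) else n - ones
    f2 = zeros + k if (zeros <= n - k or zeros == n) else n - zeros
    return f1, f2
```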
2111.01625
Xutian Deng
Xutian Deng, Yiting Chen, Fei Chen and Miao Li
Learning Robotic Ultrasound Scanning Skills via Human Demonstrations and Guided Explorations
null
null
10.1109/ROBIO54168.2021.9739464
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical ultrasound has become a routine examination modality that is widely adopted for diverse medical applications, so it is desirable to have a robotic ultrasound system that performs the scanning autonomously. However, the ultrasound scanning skill is considerably complex and highly dependent on the experience of the ultrasound physician. In this paper, we propose a learning-based approach to learning robotic ultrasound scanning skills from human demonstrations. First, the robotic ultrasound scanning skill is encapsulated in a high-dimensional multi-modal model that takes the ultrasound images, the pose/position of the probe, and the contact force into account. Second, we leverage the power of imitation learning to train the multi-modal model with training data collected from the demonstrations of experienced ultrasound physicians. Finally, a post-optimization procedure with guided explorations is proposed to further improve the performance of the learned model. Robotic experiments are conducted to validate the advantages of our proposed framework and the learned models.
[ { "created": "Tue, 2 Nov 2021 14:38:09 GMT", "version": "v1" } ]
2023-07-27
[ [ "Deng", "Xutian", "" ], [ "Chen", "Yiting", "" ], [ "Chen", "Fei", "" ], [ "Li", "Miao", "" ] ]
Medical ultrasound has become a routine examination modality that is widely adopted for diverse medical applications, so it is desirable to have a robotic ultrasound system that performs the scanning autonomously. However, the ultrasound scanning skill is considerably complex and highly dependent on the experience of the ultrasound physician. In this paper, we propose a learning-based approach to learning robotic ultrasound scanning skills from human demonstrations. First, the robotic ultrasound scanning skill is encapsulated in a high-dimensional multi-modal model that takes the ultrasound images, the pose/position of the probe, and the contact force into account. Second, we leverage the power of imitation learning to train the multi-modal model with training data collected from the demonstrations of experienced ultrasound physicians. Finally, a post-optimization procedure with guided explorations is proposed to further improve the performance of the learned model. Robotic experiments are conducted to validate the advantages of our proposed framework and the learned models.
2001.11595
Matteo Pirotta
Jian Qian, Ronan Fruit, Matteo Pirotta, Alessandro Lazaric
Concentration Inequalities for Multinoulli Random Variables
Tutorial at ALT'19 on Regret Minimization in Infinite-Horizon Finite Markov Decision Processes
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate concentration inequalities for Dirichlet and Multinomial random variables.
[ { "created": "Thu, 30 Jan 2020 22:44:15 GMT", "version": "v1" } ]
2020-02-03
[ [ "Qian", "Jian", "" ], [ "Fruit", "Ronan", "" ], [ "Pirotta", "Matteo", "" ], [ "Lazaric", "Alessandro", "" ] ]
We investigate concentration inequalities for Dirichlet and Multinomial random variables.
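A representative inequality of this kind, recalled here from memory as a pointer rather than a quotation, is the $\ell_1$ deviation bound for the empirical distribution $\hat p$ of $n$ i.i.d. samples over $k$ outcomes:

\[
\Pr\big(\|\hat p - p\|_1 \ge \varepsilon\big) \le (2^k - 2)\, e^{-n\varepsilon^2/2}.
\]

Bounds of this shape are what drive confidence sets over transition probabilities in regret analyses for finite Markov decision processes.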
2403.08245
Shawn Tan
Shawn Tan, Yikang Shen, Rameswar Panda, Aaron Courville
Scattered Mixture-of-Experts Implementation
null
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ScatterMoE, an implementation of Sparse Mixture-of-Experts (SMoE) on GPUs. ScatterMoE builds upon existing implementations and overcomes some of their limitations to improve inference and training speed and reduce the memory footprint. It achieves this by avoiding padding and excessive copying of the input. We introduce ParallelLinear, the main component we use to build our implementation, and the various kernels used to speed up the operation. We benchmark our implementation against Megablocks and show that it enables higher throughput and a lower memory footprint. We also show how ParallelLinear enables extension of the Mixture-of-Experts concept by demonstrating an implementation of Mixture of Attention.
[ { "created": "Wed, 13 Mar 2024 05:00:23 GMT", "version": "v1" } ]
2024-03-14
[ [ "Tan", "Shawn", "" ], [ "Shen", "Yikang", "" ], [ "Panda", "Rameswar", "" ], [ "Courville", "Aaron", "" ] ]
We present ScatterMoE, an implementation of Sparse Mixture-of-Experts (SMoE) on GPUs. ScatterMoE builds upon existing implementations and overcomes some of their limitations to improve inference and training speed and reduce the memory footprint. It achieves this by avoiding padding and excessive copying of the input. We introduce ParallelLinear, the main component we use to build our implementation, and the various kernels used to speed up the operation. We benchmark our implementation against Megablocks and show that it enables higher throughput and a lower memory footprint. We also show how ParallelLinear enables extension of the Mixture-of-Experts concept by demonstrating an implementation of Mixture of Attention.
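The padding-free dispatch can be pictured with a short sketch: sort tokens by assigned expert, run each expert on its contiguous slice, and scatter the outputs back. This is a plain-PyTorch illustration of the idea, not the ScatterMoE kernels.

```python
# Simplified padding-free MoE dispatch. Assumes top-1 routing and experts
# that preserve the feature dimension; both are illustrative assumptions.
import torch

def moe_forward(x, expert_ids, experts):
    order = torch.argsort(expert_ids)          # group tokens by expert
    xs = x[order]
    counts = torch.bincount(expert_ids, minlength=len(experts))
    out = torch.empty_like(xs)
    start = 0
    for e, c in enumerate(counts.tolist()):
        if c:
            out[start:start + c] = experts[e](xs[start:start + c])
        start += c
    unsorted = torch.empty_like(out)
    unsorted[order] = out                      # scatter back to original order
    return unsorted
```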
2206.08422
Henning U. Voss
Henning U. Voss
Real-time motion amplification on mobile devices
Supplemental data at https://doi.org/10.6084/m9.figshare.20084981.v2. Changes to v1: Inclusion of offline video processing
null
null
null
cs.GR cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
A simple motion amplification algorithm suitable for real-time applications on mobile devices, including smartphones, is presented. It is based on motion enhancement by moving average differencing (MEMAD), a temporal high-pass filter for video streams. MEMAD can amplify small moving objects or subtle motion in larger objects. It is computationally sufficiently simple to be implemented in real time on smartphones. In the specific implementation as an Android phone app, MEMAD is demonstrated on examples chosen such as to motivate applications in the engineering, biological, and medical sciences.
[ { "created": "Thu, 16 Jun 2022 19:48:00 GMT", "version": "v1" }, { "created": "Wed, 10 May 2023 13:34:50 GMT", "version": "v2" } ]
2023-05-11
[ [ "Voss", "Henning U.", "" ] ]
A simple motion amplification algorithm suitable for real-time applications on mobile devices, including smartphones, is presented. It is based on motion enhancement by moving average differencing (MEMAD), a temporal high-pass filter for video streams. MEMAD can amplify small moving objects or subtle motion in larger objects. It is computationally sufficiently simple to be implemented in real time on smartphones. In the specific implementation as an Android phone app, MEMAD is demonstrated on examples chosen such as to motivate applications in the engineering, biological, and medical sciences.
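A minimal per-frame sketch of moving average differencing, assuming 8-bit frames and illustrative gain and window parameters (not the app's settings): an exponential moving average plays the role of the temporal low-pass, and its difference from the current frame is the amplified high-pass component.

```python
# One MEMAD-style step: high-pass = frame - running average; the amplified
# frame adds the high-pass component back with a gain. Parameters are
# illustrative. Initialize running_avg with the first frame as float32.
import numpy as np

def memad_step(frame, running_avg, alpha=0.05, gain=8.0):
    frame = frame.astype(np.float32)
    running_avg = (1 - alpha) * running_avg + alpha * frame  # temporal low-pass
    highpass = frame - running_avg                           # recent motion
    amplified = np.clip(frame + gain * highpass, 0, 255)
    return amplified.astype(np.uint8), running_avg
```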
2204.05169
Vishal Sunder
Vishal Sunder, Samuel Thomas, Hong-Kwang J. Kuo, Jatin Ganhotra, Brian Kingsbury, Eric Fosler-Lussier
Towards End-to-End Integration of Dialog History for Improved Spoken Language Understanding
5 pages, 1 figure
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Dialog history plays an important role in spoken language understanding (SLU) performance in a dialog system. For end-to-end (E2E) SLU, previous work has used dialog history in text form, which makes the model dependent on a cascaded automatic speech recognizer (ASR). This negates the benefits of an E2E system, which is intended to be compact and robust to ASR errors. In this paper, we propose a hierarchical conversation model that is capable of directly using dialog history in speech form, making it fully E2E. We also distill semantic knowledge from the available gold conversation transcripts by jointly training a similar text-based conversation model with an explicit tying of acoustic and semantic embeddings. We further propose a novel technique, which we call DropFrame, to deal with the long training time incurred by adding dialog history in an E2E manner. On the HarperValleyBank dialog dataset, our E2E history integration outperforms a history-independent baseline by 7.7% absolute F1 score on the task of dialog action recognition. Our model performs competitively with the state-of-the-art history-based cascaded baseline, but uses 48% fewer parameters. In the absence of gold transcripts to fine-tune an ASR model, our model outperforms this baseline by a significant margin of 10% absolute F1 score.
[ { "created": "Mon, 11 Apr 2022 14:56:05 GMT", "version": "v1" } ]
2022-04-12
[ [ "Sunder", "Vishal", "" ], [ "Thomas", "Samuel", "" ], [ "Kuo", "Hong-Kwang J.", "" ], [ "Ganhotra", "Jatin", "" ], [ "Kingsbury", "Brian", "" ], [ "Fosler-Lussier", "Eric", "" ] ]
Dialog history plays an important role in spoken language understanding (SLU) performance in a dialog system. For end-to-end (E2E) SLU, previous work has used dialog history in text form, which makes the model dependent on a cascaded automatic speech recognizer (ASR). This negates the benefits of an E2E system, which is intended to be compact and robust to ASR errors. In this paper, we propose a hierarchical conversation model that is capable of directly using dialog history in speech form, making it fully E2E. We also distill semantic knowledge from the available gold conversation transcripts by jointly training a similar text-based conversation model with an explicit tying of acoustic and semantic embeddings. We further propose a novel technique, which we call DropFrame, to deal with the long training time incurred by adding dialog history in an E2E manner. On the HarperValleyBank dialog dataset, our E2E history integration outperforms a history-independent baseline by 7.7% absolute F1 score on the task of dialog action recognition. Our model performs competitively with the state-of-the-art history-based cascaded baseline, but uses 48% fewer parameters. In the absence of gold transcripts to fine-tune an ASR model, our model outperforms this baseline by a significant margin of 10% absolute F1 score.
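The abstract does not spell out DropFrame, so the following is only a guess at its flavor: randomly dropping a fraction of history frames during training to shorten sequences. Treat it as a placeholder, not the paper's mechanism.

```python
# Hypothetical DropFrame-style augmentation: at training time, drop each
# history frame with probability drop_prob to shorten the sequence.
import torch

def drop_frames(frames, drop_prob=0.5, training=True):
    if not training or drop_prob == 0.0:
        return frames
    keep = torch.rand(frames.shape[0]) >= drop_prob
    if not keep.any():   # always keep at least one frame
        keep[0] = True
    return frames[keep]
```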
2408.05184
Denis Kokosinskii
Denis Kokosinskii, Mikhail Kuklin, Nikolay Arefyev
Deep-change at AXOLOTL-24: Orchestrating WSD and WSI Models for Semantic Change Modeling
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper describes our solution to the first subtask of the AXOLOTL-24 shared task on Semantic Change Modeling. The goal of this subtask is to distribute a given set of usages of a polysemous word from a newer time period between senses of this word from an older time period and clusters representing gained senses of this word. We propose and experiment with three new methods for solving this task. Our methods achieve SOTA results according to both official metrics of the first subtask. Additionally, we develop a model that can tell whether a given word usage is not described by any of the provided sense definitions. This model serves as a component in one of our methods, but it can potentially be useful on its own.
[ { "created": "Fri, 9 Aug 2024 17:15:54 GMT", "version": "v1" } ]
2024-08-12
[ [ "Kokosinskii", "Denis", "" ], [ "Kuklin", "Mikhail", "" ], [ "Arefyev", "Nikolay", "" ] ]
This paper describes our solution to the first subtask of the AXOLOTL-24 shared task on Semantic Change Modeling. The goal of this subtask is to distribute a given set of usages of a polysemous word from a newer time period between senses of this word from an older time period and clusters representing gained senses of this word. We propose and experiment with three new methods for solving this task. Our methods achieve SOTA results according to both official metrics of the first subtask. Additionally, we develop a model that can tell if a given word usage is not described by any of the provided sense definitions. This model serves as a component in one of our methods, but can potentially be useful on its own.
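The subtask described above amounts to matching new usages against old sense definitions and setting aside unmatched usages as candidates for gained senses. The following is a minimal sketch of such an assignment step, assuming precomputed, L2-normalized usage and sense-definition embeddings and a hypothetical novelty threshold; the authors' actual methods orchestrate full WSD and WSI models rather than this bare similarity rule.

```python
import numpy as np

def assign_usages(usage_vecs, sense_vecs, novel_threshold=0.35):
    """Assign each usage embedding to the most similar old-sense definition
    embedding, or flag it as a candidate gained sense if no sense is close.

    usage_vecs: (n, d) array; sense_vecs: (m, d) array, rows L2-normalized.
    Returns a list of sense indices, with -1 marking usages to be clustered
    into new-sense groups.
    """
    sims = usage_vecs @ sense_vecs.T  # cosine similarity matrix
    best = sims.argmax(axis=1)
    assignments = np.where(sims.max(axis=1) >= novel_threshold, best, -1)
    return assignments.tolist()
```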
1212.3631
Pablo Sprechmann
Pablo Sprechmann, Alex M. Bronstein and Guillermo Sapiro
Learning efficient sparse and low rank models
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes make it possible to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms.
[ { "created": "Fri, 14 Dec 2012 22:50:44 GMT", "version": "v1" } ]
2012-12-18
[ [ "Sprechmann", "Pablo", "" ], [ "Bronstein", "Alex M.", "" ], [ "Sapiro", "Guillermo", "" ] ]
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes make it possible to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms.
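The process-centric idea above is well illustrated by LISTA-style unrolling, where a fixed number of proximal-descent iterations become a trainable feed-forward network. The sketch below shows the plain sparse-coding instance (the paper also derives structured sparse and low-rank variants); the layer count and threshold initialization are illustrative choices.

```python
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Learned ISTA: a fixed number of unrolled proximal-gradient steps
    whose matrices and thresholds are trained, approximating the sparse
    code at a fraction of the cost of running ISTA to convergence."""

    def __init__(self, n_features, n_atoms, n_layers=5):
        super().__init__()
        self.W = nn.Linear(n_features, n_atoms, bias=False)   # input filter
        self.S = nn.Linear(n_atoms, n_atoms, bias=False)      # mutual inhibition
        self.theta = nn.Parameter(0.1 * torch.ones(n_atoms))  # soft thresholds
        self.n_layers = n_layers

    def soft(self, z):
        # proximal operator of the l1 norm (soft thresholding)
        return torch.sign(z) * torch.relu(z.abs() - self.theta)

    def forward(self, x):
        b = self.W(x)
        z = self.soft(b)
        for _ in range(self.n_layers):
            z = self.soft(b + self.S(z))
        return z
```

Training minimizes, e.g., the distance between the network output and exact sparse codes, or a downstream discriminative loss, which is what enables the extension to discriminative settings.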
2205.14440
Leye Wang
Xiao Han, Leye Wang, Junjie Wu, Yuncong Yang
Large-Scale Privacy-Preserving Network Embedding against Private Link Inference Attacks
null
null
null
null
cs.LG cs.AI cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network embedding represents network nodes by a low-dimensional informative vector. While it is generally effective for various downstream tasks, it may leak some private information of networks, such as hidden private links. In this work, we address a novel problem of privacy-preserving network embedding against private link inference attacks. Specifically, we propose to perturb the original network by adding or removing links, expecting the embedding generated on the perturbed network to leak little information about private links while holding high utility for various downstream tasks. Towards this goal, we first propose general measurements to quantify the privacy gain and utility loss incurred by candidate network perturbations; we then design a PPNE framework to identify the optimal perturbation solution with the best privacy-utility trade-off in an iterative way. Furthermore, we propose several techniques to accelerate PPNE and ensure its scalability. For instance, as skip-gram embedding methods including DeepWalk and LINE can be seen as matrix factorization with closed-form embedding results, we devise efficient privacy gain and utility loss approximation methods to avoid the repetitive, time-consuming embedding training for every candidate network perturbation in each iteration. Experiments on real-life network datasets (with up to millions of nodes) verify that PPNE outperforms baselines by sacrificing less utility and obtaining higher privacy protection.
[ { "created": "Sat, 28 May 2022 13:59:39 GMT", "version": "v1" } ]
2022-05-31
[ [ "Han", "Xiao", "" ], [ "Wang", "Leye", "" ], [ "Wu", "Junjie", "" ], [ "Yang", "Yuncong", "" ] ]
Network embedding represents network nodes by a low-dimensional informative vector. While it is generally effective for various downstream tasks, it may leak some private information of networks, such as hidden private links. In this work, we address a novel problem of privacy-preserving network embedding against private link inference attacks. Specifically, we propose to perturb the original network by adding or removing links, expecting the embedding generated on the perturbed network to leak little information about private links while holding high utility for various downstream tasks. Towards this goal, we first propose general measurements to quantify the privacy gain and utility loss incurred by candidate network perturbations; we then design a PPNE framework to identify the optimal perturbation solution with the best privacy-utility trade-off in an iterative way. Furthermore, we propose several techniques to accelerate PPNE and ensure its scalability. For instance, as skip-gram embedding methods including DeepWalk and LINE can be seen as matrix factorization with closed-form embedding results, we devise efficient privacy gain and utility loss approximation methods to avoid the repetitive, time-consuming embedding training for every candidate network perturbation in each iteration. Experiments on real-life network datasets (with up to millions of nodes) verify that PPNE outperforms baselines by sacrificing less utility and obtaining higher privacy protection.
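The iterative perturbation search described above can be sketched as a greedy loop. Here `privacy_gain` and `utility_loss` are placeholders for the paper's measurements (for instance, their closed-form approximations for factorization-based embeddings); the greedy selection rule and fixed flip budget are simplifying assumptions.

```python
def ppne_perturb(edges, candidates, privacy_gain, utility_loss,
                 budget=100, trade_off=1.0):
    """Greedy sketch of iterative link perturbation: at each step, flip the
    candidate edge with the best privacy/utility trade-off, then rescore.

    edges: iterable of (u, v) pairs; candidates: list of (u, v) pairs that
    may be added (if absent) or removed (if present).
    """
    edges = set(edges)
    for _ in range(budget):
        best_score, best_edge = None, None
        for e in candidates:
            score = privacy_gain(edges, e) - trade_off * utility_loss(edges, e)
            if best_score is None or score > best_score:
                best_score, best_edge = score, e
        if best_edge is None or best_score <= 0:
            break  # no remaining perturbation improves the trade-off
        # flip the chosen edge: remove it if present, add it otherwise
        if best_edge in edges:
            edges.remove(best_edge)
        else:
            edges.add(best_edge)
        candidates = [e for e in candidates if e != best_edge]
    return edges
```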
2007.00328
Hui Li
Hui Li, Xiao-Jun Wu, Tariq Durrani
NestFuse: An Infrared and Visible Image Fusion Architecture based on Nest Connection and Spatial/Channel Attention Models
12 pages, 13 figures, 6 tables. IEEE Transactions on Instrumentation and Measurement
null
10.1109/TIM.2020.3005230
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a novel method for infrared and visible image fusion in which we develop a nest connection-based network and spatial/channel attention models. The nest connection-based network can preserve significant amounts of information from the input data from a multi-scale perspective. The approach comprises three key elements: an encoder, a fusion strategy, and a decoder. In our proposed fusion strategy, spatial attention models and channel attention models are developed that describe the importance of each spatial position and of each channel with deep features. Firstly, the source images are fed into the encoder to extract multi-scale deep features. The novel fusion strategy is then developed to fuse these features for each scale. Finally, the fused image is reconstructed by the nest connection-based decoder. Experiments are performed on publicly available datasets. These show that our proposed approach has better fusion performance than other state-of-the-art methods, a claim justified through both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-nestfuse
[ { "created": "Wed, 1 Jul 2020 08:46:23 GMT", "version": "v1" }, { "created": "Sat, 11 Jul 2020 06:31:34 GMT", "version": "v2" } ]
2020-07-14
[ [ "Li", "Hui", "" ], [ "Wu", "Xiao-Jun", "" ], [ "Durrani", "Tariq", "" ] ]
In this paper we propose a novel method for infrared and visible image fusion in which we develop a nest connection-based network and spatial/channel attention models. The nest connection-based network can preserve significant amounts of information from the input data from a multi-scale perspective. The approach comprises three key elements: an encoder, a fusion strategy, and a decoder. In our proposed fusion strategy, spatial attention models and channel attention models are developed that describe the importance of each spatial position and of each channel with deep features. Firstly, the source images are fed into the encoder to extract multi-scale deep features. The novel fusion strategy is then developed to fuse these features for each scale. Finally, the fused image is reconstructed by the nest connection-based decoder. Experiments are performed on publicly available datasets. These show that our proposed approach has better fusion performance than other state-of-the-art methods, a claim justified through both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-nestfuse
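Attention-weighted fusion of deep features, as in the fusion strategy above, can be sketched as follows. The l1-activity spatial weighting and pooled channel weighting are plausible simplifications of such attention models; NestFuse's exact formulations may differ, so treat this only as an illustration of the mechanism.

```python
import torch

def spatial_attention_fuse(feat_a, feat_b, eps=1e-8):
    """Fuse two deep feature maps (B, C, H, W) with a soft spatial weight
    derived from each map's l1 activity at every position."""
    act_a = feat_a.abs().sum(dim=1, keepdim=True)  # (B, 1, H, W)
    act_b = feat_b.abs().sum(dim=1, keepdim=True)
    w_a = act_a / (act_a + act_b + eps)
    return w_a * feat_a + (1.0 - w_a) * feat_b

def channel_attention_fuse(feat_a, feat_b, eps=1e-8):
    """Fuse with per-channel weights from globally pooled activity."""
    s_a = feat_a.abs().mean(dim=(2, 3), keepdim=True)  # (B, C, 1, 1)
    s_b = feat_b.abs().mean(dim=(2, 3), keepdim=True)
    w_a = s_a / (s_a + s_b + eps)
    return w_a * feat_a + (1.0 - w_a) * feat_b
```

In a multi-scale pipeline these functions would be applied per scale before the decoder reconstructs the fused image.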
2006.03423
Kieran Chin-Cheong
Kieran Chin-Cheong, Thomas Sutter and Julia E. Vogt
Generation of Differentially Private Heterogeneous Electronic Health Records
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electronic Health Records (EHRs) are commonly used by the machine learning community for research on problems specifically related to health care and medicine. EHRs have the advantages that they can be easily distributed and contain many features useful for tasks such as classification. What makes EHR data sets different from typical machine learning data sets is that they are often very sparse, due to their high dimensionality, and often contain heterogeneous (mixed) data types. Furthermore, the data sets deal with sensitive information, which limits the distribution of any models learned using them, due to privacy concerns. For these reasons, using EHR data in practice presents a real challenge. In this work, we explore using Generative Adversarial Networks to generate synthetic, heterogeneous EHRs with the goal of using these synthetic records in place of existing data sets for downstream classification tasks. We further explore applying differentially private (DP) optimization in order to produce DP synthetic EHR data sets, which provide rigorous privacy guarantees and are therefore shareable and usable in the real world. The performance (measured by AUROC, AUPRC and accuracy) of our model's synthetic, heterogeneous data is very close to the original data set (within 3 - 5% of the baseline) for the non-DP model when tested in a binary classification task. Using strong $(1, 10^{-5})$ DP, our model still produces data useful for machine learning tasks, albeit incurring a roughly 17% performance penalty in our tested classification task. We additionally perform a sub-population analysis and find that our model does not introduce any bias into the synthetic EHR data compared to the baseline in either male/female populations, or the 0-18, 19-50 and 51+ age groups in terms of classification performance for either the non-DP or DP variant.
[ { "created": "Fri, 5 Jun 2020 13:21:46 GMT", "version": "v1" } ]
2020-06-08
[ [ "Chin-Cheong", "Kieran", "" ], [ "Sutter", "Thomas", "" ], [ "Vogt", "Julia E.", "" ] ]
Electronic Health Records (EHRs) are commonly used by the machine learning community for research on problems specifically related to health care and medicine. EHRs have the advantages that they can be easily distributed and contain many features useful for tasks such as classification. What makes EHR data sets different from typical machine learning data sets is that they are often very sparse, due to their high dimensionality, and often contain heterogeneous (mixed) data types. Furthermore, the data sets deal with sensitive information, which limits the distribution of any models learned using them, due to privacy concerns. For these reasons, using EHR data in practice presents a real challenge. In this work, we explore using Generative Adversarial Networks to generate synthetic, heterogeneous EHRs with the goal of using these synthetic records in place of existing data sets for downstream classification tasks. We further explore applying differentially private (DP) optimization in order to produce DP synthetic EHR data sets, which provide rigorous privacy guarantees and are therefore shareable and usable in the real world. The performance (measured by AUROC, AUPRC and accuracy) of our model's synthetic, heterogeneous data is very close to the original data set (within 3 - 5% of the baseline) for the non-DP model when tested in a binary classification task. Using strong $(1, 10^{-5})$ DP, our model still produces data useful for machine learning tasks, albeit incurring a roughly 17% performance penalty in our tested classification task. We additionally perform a sub-population analysis and find that our model does not introduce any bias into the synthetic EHR data compared to the baseline in either male/female populations, or the 0-18, 19-50 and 51+ age groups in terms of classification performance for either the non-DP or DP variant.
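DP training of the kind referenced above typically builds on the DP-SGD recipe: clip each example's gradient, sum, add Gaussian noise, and average. The sketch below shows the simple (slow) per-example-loop variant and omits privacy accounting; the hyper-parameters are illustrative, and the paper applies this idea inside GAN training rather than the plain supervised step shown here.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, lr=0.05, clip=1.0, noise_mult=1.1):
    """One differentially private SGD step: clip each example's gradient to
    norm `clip`, sum, add Gaussian noise of scale noise_mult * clip, average,
    and apply. Privacy accounting (e.g. the moments accountant) is omitted."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    xs, ys = batch
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        scale = min(1.0, clip / (norm + 1e-8))  # per-example clipping
        for g, p in zip(summed, params):
            g += p.grad * scale
    n = len(xs)
    with torch.no_grad():
        for g, p in zip(summed, params):
            noisy = (g + noise_mult * clip * torch.randn_like(g)) / n
            p -= lr * noisy
```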
1504.04930
Wentao Huang
Wentao Huang and Michael Langberg and Joerg Kliewer
Connecting Multiple-unicast and Network Error Correction: Reduction and Unachievability
ISIT 2015. arXiv admin note: text overlap with arXiv:1410.1905
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that solving a multiple-unicast network coding problem can be reduced to solving a single-unicast network error correction problem, where an adversary may jam at most a single edge in the network. Specifically, we present an efficient reduction that maps a multiple-unicast network coding instance to a network error correction instance while preserving feasibility. The reduction holds for both the zero probability of error model and the vanishing probability of error model. Previous reductions are restricted to the zero-error case. As an application of the reduction, we present a constructive example showing that the single-unicast network error correction capacity may not be achievable, a result of separate interest.
[ { "created": "Mon, 20 Apr 2015 04:02:35 GMT", "version": "v1" } ]
2015-04-21
[ [ "Huang", "Wentao", "" ], [ "Langberg", "Michael", "" ], [ "Kliewer", "Joerg", "" ] ]
We show that solving a multiple-unicast network coding problem can be reduced to solving a single-unicast network error correction problem, where an adversary may jam at most a single edge in the network. Specifically, we present an efficient reduction that maps a multiple-unicast network coding instance to a network error correction instance while preserving feasibility. The reduction holds for both the zero probability of error model and the vanishing probability of error model. Previous reductions are restricted to the zero-error case. As an application of the reduction, we present a constructive example showing that the single-unicast network error correction capacity may not be achievable, a result of separate interest.
2401.07745
Doris Yan
Mi Yan, Jiazhao Zhang, Yan Zhu, He Wang
MaskClustering: View Consensus based Mask Graph Clustering for Open-Vocabulary 3D Instance Segmentation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Open-vocabulary 3D instance segmentation is a cutting-edge task because it can segment 3D instances without predefined categories. However, progress in 3D lags behind its 2D counterpart due to limited annotated 3D data. To address this, recent works first generate 2D open-vocabulary masks through 2D models and then merge them into 3D instances based on metrics calculated between two neighboring frames. In contrast to these local metrics, we propose a novel metric, the view consensus rate, to enhance the utilization of multi-view observations. The key insight is that two 2D masks should be deemed part of the same 3D instance if a significant number of other 2D masks from different views contain both of them. Using this metric as the edge weight, we construct a global mask graph where each mask is a node. Through iterative clustering of masks showing high view consensus, we generate a series of clusters, each representing a distinct 3D instance. Notably, our model is training-free. Through extensive experiments on publicly available datasets, including ScanNet++, ScanNet200 and MatterPort3D, we demonstrate that our method achieves state-of-the-art performance in open-vocabulary 3D instance segmentation. Our project page is at https://pku-epic.github.io/MaskClustering.
[ { "created": "Mon, 15 Jan 2024 14:56:15 GMT", "version": "v1" }, { "created": "Wed, 10 Apr 2024 15:30:23 GMT", "version": "v2" } ]
2024-04-11
[ [ "Yan", "Mi", "" ], [ "Zhang", "Jiazhao", "" ], [ "Zhu", "Yan", "" ], [ "Wang", "He", "" ] ]
Open-vocabulary 3D instance segmentation is a cutting-edge task because it can segment 3D instances without predefined categories. However, progress in 3D lags behind its 2D counterpart due to limited annotated 3D data. To address this, recent works first generate 2D open-vocabulary masks through 2D models and then merge them into 3D instances based on metrics calculated between two neighboring frames. In contrast to these local metrics, we propose a novel metric, the view consensus rate, to enhance the utilization of multi-view observations. The key insight is that two 2D masks should be deemed part of the same 3D instance if a significant number of other 2D masks from different views contain both of them. Using this metric as the edge weight, we construct a global mask graph where each mask is a node. Through iterative clustering of masks showing high view consensus, we generate a series of clusters, each representing a distinct 3D instance. Notably, our model is training-free. Through extensive experiments on publicly available datasets, including ScanNet++, ScanNet200 and MatterPort3D, we demonstrate that our method achieves state-of-the-art performance in open-vocabulary 3D instance segmentation. Our project page is at https://pku-epic.github.io/MaskClustering.
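One plausible formalization of the view consensus rate described above is sketched below, assuming each 2D mask has been lifted to a set of 3D point ids via depth and camera poses. The containment threshold and the visibility rule are our assumptions, not necessarily the paper's exact definition.

```python
def view_consensus_rate(mask_i, mask_j, other_view_masks, contain_thresh=0.8):
    """Estimate how strongly other views agree that two masks belong to one
    3D instance. Masks are sets of 3D point ids obtained by lifting 2D masks.

    A third-view mask 'contains' a mask if it covers at least contain_thresh
    of its points; the consensus rate is the fraction of observing views
    whose mask contains both mask_i and mask_j."""
    union = mask_i | mask_j
    observing, supporting = 0, 0
    for m in other_view_masks:
        if not (m & union):
            continue  # this view does not see the pair at all
        observing += 1
        cov_i = len(m & mask_i) / max(len(mask_i), 1)
        cov_j = len(m & mask_j) / max(len(mask_j), 1)
        if cov_i >= contain_thresh and cov_j >= contain_thresh:
            supporting += 1
    return supporting / observing if observing else 0.0
```

These pairwise rates would then serve as edge weights in the global mask graph that the clustering step operates on.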
2405.19074
Dipam Goswami Mr.
Dipam Goswami, Albin Soutif--Cormerais, Yuyang Liu, Sandesh Kamath, Bart{\l}omiej Twardowski, Joost van de Weijer
Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning
Accepted at CVPR 2024
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, existing exemplar-free methods are typically evaluated in settings where the first task is significantly larger than subsequent tasks. Their performance drops drastically in more challenging settings starting with a smaller first task. To address this problem of feature drift estimation for exemplar-free methods, we propose to adversarially perturb the current samples such that their embeddings are close to the old class prototypes in the old model embedding space. We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly. We exploit the fact that adversarial samples are transferable from the old to the new feature space in a continual learning setting. The generation of these images is simple and computationally cheap. We demonstrate in our experiments that the proposed approach better tracks the movement of prototypes in embedding space and outperforms existing methods on several standard continual learning benchmarks as well as on fine-grained datasets. Code is available at https://github.com/dipamgoswami/ADC.
[ { "created": "Wed, 29 May 2024 13:31:42 GMT", "version": "v1" } ]
2024-05-30
[ [ "Goswami", "Dipam", "" ], [ "Soutif--Cormerais", "Albin", "" ], [ "Liu", "Yuyang", "" ], [ "Kamath", "Sandesh", "" ], [ "Twardowski", "Bartłomiej", "" ], [ "van de Weijer", "Joost", "" ] ]
Continual learning methods are known to suffer from catastrophic forgetting, a phenomenon that is particularly hard to counter for methods that do not store exemplars of previous tasks. Therefore, to reduce potential drift in the feature extractor, existing exemplar-free methods are typically evaluated in settings where the first task is significantly larger than subsequent tasks. Their performance drops drastically in more challenging settings starting with a smaller first task. To address this problem of feature drift estimation for exemplar-free methods, we propose to adversarially perturb the current samples such that their embeddings are close to the old class prototypes in the old model embedding space. We then estimate the drift in the embedding space from the old to the new model using the perturbed images and compensate the prototypes accordingly. We exploit the fact that adversarial samples are transferable from the old to the new feature space in a continual learning setting. The generation of these images is simple and computationally cheap. We demonstrate in our experiments that the proposed approach better tracks the movement of prototypes in embedding space and outperforms existing methods on several standard continual learning benchmarks as well as on fine-grained datasets. Code is available at https://github.com/dipamgoswami/ADC.
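A minimal sketch of the drift-compensation idea above: perturb current-task images so their old-model embeddings approach a stored prototype, measure how those embeddings shift under the new model, and translate the prototype by the mean shift. The FGSM-style step, step count, and squared-error objective are our assumptions about one reasonable instantiation.

```python
import torch

def compensate_prototype(old_model, new_model, images, prototype,
                         steps=10, alpha=0.01):
    """Adversarially pull current images toward an old-class prototype in
    the OLD embedding space, then use the old-to-new embedding shift of the
    perturbed images to estimate and compensate prototype drift."""
    x = images.clone().requires_grad_(True)
    for _ in range(steps):
        emb = old_model(x)
        loss = ((emb - prototype) ** 2).sum(dim=1).mean()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= alpha * grad.sign()  # FGSM-style step toward the prototype
    with torch.no_grad():
        drift = (new_model(x) - old_model(x)).mean(dim=0)
    return prototype + drift  # compensated prototype
```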
2312.01330
Asaf Shabtai
Roy Peled, Eran Aizikovich, Edan Habler, Yuval Elovici, Asaf Shabtai
Evaluating the Security of Satellite Systems
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Satellite systems are facing an ever-increasing amount of cybersecurity threats as their role in communications, navigation, and other services expands. Recent papers have examined attacks targeting satellites and space systems; however, they did not comprehensively analyze the threats to satellites and systematically identify adversarial techniques across the attack lifecycle. This paper presents a comprehensive taxonomy of adversarial tactics, techniques, and procedures explicitly targeting LEO satellites. First, we analyze the space ecosystem including the ground, space, Communication, and user segments, highlighting their architectures, functions, and vulnerabilities. Then, we examine the threat landscape, including adversary types, and capabilities, and survey historical and recent attacks such as jamming, spoofing, and supply chain. Finally, we propose a novel extension of the MITRE ATT&CK framework to categorize satellite attack techniques across the adversary lifecycle from reconnaissance to impact. The taxonomy is demonstrated by modeling high-profile incidents, including the Viasat attack that disrupted Ukraine's communications. The taxonomy provides the foundation for the development of defenses against emerging cyber risks to space assets. The proposed threat model will advance research in the space domain and contribute to the security of the space domain against sophisticated attacks.
[ { "created": "Sun, 3 Dec 2023 09:38:28 GMT", "version": "v1" } ]
2023-12-05
[ [ "Peled", "Roy", "" ], [ "Aizikovich", "Eran", "" ], [ "Habler", "Edan", "" ], [ "Elovici", "Yuval", "" ], [ "Shabtai", "Asaf", "" ] ]
Satellite systems are facing an ever-increasing amount of cybersecurity threats as their role in communications, navigation, and other services expands. Recent papers have examined attacks targeting satellites and space systems; however, they did not comprehensively analyze the threats to satellites and systematically identify adversarial techniques across the attack lifecycle. This paper presents a comprehensive taxonomy of adversarial tactics, techniques, and procedures explicitly targeting LEO satellites. First, we analyze the space ecosystem including the ground, space, Communication, and user segments, highlighting their architectures, functions, and vulnerabilities. Then, we examine the threat landscape, including adversary types, and capabilities, and survey historical and recent attacks such as jamming, spoofing, and supply chain. Finally, we propose a novel extension of the MITRE ATT&CK framework to categorize satellite attack techniques across the adversary lifecycle from reconnaissance to impact. The taxonomy is demonstrated by modeling high-profile incidents, including the Viasat attack that disrupted Ukraine's communications. The taxonomy provides the foundation for the development of defenses against emerging cyber risks to space assets. The proposed threat model will advance research in the space domain and contribute to the security of the space domain against sophisticated attacks.
2102.09032
Karl B\"ackstr\"om
Karl B\"ackstr\"om, Ivan Walulya, Marina Papatriantafilou, Philippas Tsigas
Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence
13 pages, 10 figures. Accepted in the 35th IEEE International Parallel & Distributed Processing Symposium
null
null
null
cs.DC cs.DS
http://creativecommons.org/licenses/by/4.0/
Stochastic gradient descent (SGD) is an essential element in Machine Learning (ML) algorithms. Asynchronous parallel shared-memory SGD (AsyncSGD), including synchronization-free algorithms such as HOGWILD!, has received interest in certain contexts due to reduced overhead compared to synchronous parallelization. Although such algorithms induce staleness and inconsistency, they have shown speedup for problems with smooth, strongly convex objectives and sparse gradients. Recent works take important steps towards understanding the potential of parallel SGD for problems not conforming to these strong assumptions, in particular for deep learning (DL). There is, however, a gap in the current literature in understanding when AsyncSGD algorithms are useful in practice, and in particular how mechanisms for synchronization and consistency play a role. We focus on the impact of consistency-preserving non-blocking synchronization on SGD convergence and on sensitivity to hyper-parameter tuning. We propose Leashed-SGD, an extensible algorithmic framework of consistency-preserving implementations of AsyncSGD, employing lock-free synchronization, effectively balancing throughput and latency. We argue analytically about the dynamics of the algorithms, memory consumption, the threads' progress over time, and the expected contention. We provide a comprehensive empirical evaluation, validating the analytical claims, benchmarking the proposed Leashed-SGD framework, and comparing to baselines for training multilayer perceptrons (MLP) and convolutional neural networks (CNN). We observe the crucial impact of contention, staleness and consistency and show how Leashed-SGD provides significant improvements in stability as well as wall-clock time to convergence (from 20-80% up to 4x improvements) compared to the standard lock-based AsyncSGD algorithm and HOGWILD!, while reducing the overall memory footprint.
[ { "created": "Wed, 17 Feb 2021 21:24:44 GMT", "version": "v1" } ]
2021-02-19
[ [ "Bäckström", "Karl", "" ], [ "Walulya", "Ivan", "" ], [ "Papatriantafilou", "Marina", "" ], [ "Tsigas", "Philippas", "" ] ]
Stochastic gradient descent (SGD) is an essential element in Machine Learning (ML) algorithms. Asynchronous parallel shared-memory SGD (AsyncSGD), including synchronization-free algorithms such as HOGWILD!, has received interest in certain contexts due to reduced overhead compared to synchronous parallelization. Although such algorithms induce staleness and inconsistency, they have shown speedup for problems with smooth, strongly convex objectives and sparse gradients. Recent works take important steps towards understanding the potential of parallel SGD for problems not conforming to these strong assumptions, in particular for deep learning (DL). There is, however, a gap in the current literature in understanding when AsyncSGD algorithms are useful in practice, and in particular how mechanisms for synchronization and consistency play a role. We focus on the impact of consistency-preserving non-blocking synchronization on SGD convergence and on sensitivity to hyper-parameter tuning. We propose Leashed-SGD, an extensible algorithmic framework of consistency-preserving implementations of AsyncSGD, employing lock-free synchronization, effectively balancing throughput and latency. We argue analytically about the dynamics of the algorithms, memory consumption, the threads' progress over time, and the expected contention. We provide a comprehensive empirical evaluation, validating the analytical claims, benchmarking the proposed Leashed-SGD framework, and comparing to baselines for training multilayer perceptrons (MLP) and convolutional neural networks (CNN). We observe the crucial impact of contention, staleness and consistency and show how Leashed-SGD provides significant improvements in stability as well as wall-clock time to convergence (from 20-80% up to 4x improvements) compared to the standard lock-based AsyncSGD algorithm and HOGWILD!, while reducing the overall memory footprint.
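For contrast with the consistency-preserving scheme above, the synchronization-free AsyncSGD baseline (HOGWILD!-style) can be sketched in a few lines: threads update a shared weight vector with no locks, so reads can observe partially updated, inconsistent state. Note that CPython's GIL serializes bytecode, so this only illustrates the algorithm's structure, not true shared-memory parallelism as in the paper's implementations.

```python
import threading
import numpy as np

def hogwild_linear_regression(X, y, n_threads=4, epochs=5, lr=0.01):
    """Lock-free AsyncSGD baseline on least squares: every thread samples
    examples and writes to the shared weight vector without synchronization,
    so gradients may be computed from stale or torn reads of w."""
    w = np.zeros(X.shape[1])  # shared, unprotected state

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(epochs * len(X) // n_threads):
            i = rng.integers(len(X))
            grad = (X[i] @ w - y[i]) * X[i]  # possibly stale read of w
            w -= lr * grad                   # unsynchronized write

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w
```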
2407.08028
Bingjie Tang
Bingjie Tang, Iretiayo Akinola, Jie Xu, Bowen Wen, Ankur Handa, Karl Van Wyk, Dieter Fox, Gaurav S. Sukhatme, Fabio Ramos, Yashraj Narang
AutoMate: Specialist and Generalist Assembly Policies over Diverse Geometries
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic assembly for high-mixture settings requires adaptivity to diverse parts and poses, which is an open challenge. Meanwhile, in other areas of robotics, large models and sim-to-real have led to tremendous progress. Inspired by such work, we present AutoMate, a learning framework and system that consists of 4 parts: 1) a dataset of 100 assemblies compatible with simulation and the real world, along with parallelized simulation environments for policy learning, 2) a novel simulation-based approach for learning specialist (i.e., part-specific) policies and generalist (i.e., unified) assembly policies, 3) demonstrations of specialist policies that individually solve 80 assemblies with 80% or higher success rates in simulation, as well as a generalist policy that jointly solves 20 assemblies with an 80%+ success rate, and 4) zero-shot sim-to-real transfer that achieves similar (or better) performance than simulation, including on perception-initialized assembly. The key methodological takeaway is that a union of diverse algorithms from manufacturing engineering, character animation, and time-series analysis provides a generic and robust solution for a diverse range of robotic assembly problems. To our knowledge, AutoMate provides the first simulation-based framework for learning specialist and generalist policies over a wide range of assemblies, as well as the first system demonstrating zero-shot sim-to-real transfer over such a range. For videos and additional details, please see our project website: https://bingjietang718.github.io/automate/
[ { "created": "Wed, 10 Jul 2024 20:11:29 GMT", "version": "v1" }, { "created": "Thu, 1 Aug 2024 01:01:45 GMT", "version": "v2" } ]
2024-08-02
[ [ "Tang", "Bingjie", "" ], [ "Akinola", "Iretiayo", "" ], [ "Xu", "Jie", "" ], [ "Wen", "Bowen", "" ], [ "Handa", "Ankur", "" ], [ "Van Wyk", "Karl", "" ], [ "Fox", "Dieter", "" ], [ "Sukhatme", "Gaurav S.", "" ], [ "Ramos", "Fabio", "" ], [ "Narang", "Yashraj", "" ] ]
Robotic assembly for high-mixture settings requires adaptivity to diverse parts and poses, which is an open challenge. Meanwhile, in other areas of robotics, large models and sim-to-real have led to tremendous progress. Inspired by such work, we present AutoMate, a learning framework and system that consists of 4 parts: 1) a dataset of 100 assemblies compatible with simulation and the real world, along with parallelized simulation environments for policy learning, 2) a novel simulation-based approach for learning specialist (i.e., part-specific) policies and generalist (i.e., unified) assembly policies, 3) demonstrations of specialist policies that individually solve 80 assemblies with 80% or higher success rates in simulation, as well as a generalist policy that jointly solves 20 assemblies with an 80%+ success rate, and 4) zero-shot sim-to-real transfer that achieves similar (or better) performance than simulation, including on perception-initialized assembly. The key methodological takeaway is that a union of diverse algorithms from manufacturing engineering, character animation, and time-series analysis provides a generic and robust solution for a diverse range of robotic assembly problems. To our knowledge, AutoMate provides the first simulation-based framework for learning specialist and generalist policies over a wide range of assemblies, as well as the first system demonstrating zero-shot sim-to-real transfer over such a range. For videos and additional details, please see our project website: https://bingjietang718.github.io/automate/
2204.03456
Rafael Rego Drumond
Lukas Brinkmeyer and Rafael Rego Drumond and Johannes Burchert and Lars Schmidt-Thieme
Few-Shot Forecasting of Time-Series with Heterogeneous Channels
Under review. Equal contribution (Brinkmeyer and Rego Drumond)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning complex time series forecasting models usually requires a large amount of data, as each model is trained from scratch for each task/data set. Leveraging learning experience with similar datasets is a well-established technique for classification problems called few-shot classification. However, existing approaches cannot be applied to time-series forecasting because i) multivariate time-series datasets have different channels and ii) forecasting is principally different from classification. In this paper we formalize the problem of few-shot forecasting of time-series with heterogeneous channels for the first time. Extending recent work on heterogeneous attributes in vector data, we develop a model composed of permutation-invariant deep set-blocks which incorporate a temporal embedding. We assemble the first meta-dataset of 40 multivariate time-series datasets and show through experiments that our model generalizes well, outperforming baselines carried over from simpler scenarios that either fail to learn across tasks or miss temporal information.
[ { "created": "Thu, 7 Apr 2022 14:02:15 GMT", "version": "v1" }, { "created": "Thu, 18 Aug 2022 14:27:13 GMT", "version": "v2" } ]
2022-08-19
[ [ "Brinkmeyer", "Lukas", "" ], [ "Drumond", "Rafael Rego", "" ], [ "Burchert", "Johannes", "" ], [ "Schmidt-Thieme", "Lars", "" ] ]
Learning complex time series forecasting models usually requires a large amount of data, as each model is trained from scratch for each task/data set. Leveraging learning experience with similar datasets is a well-established technique for classification problems called few-shot classification. However, existing approaches cannot be applied to time-series forecasting because i) multivariate time-series datasets have different channels and ii) forecasting is principally different from classification. In this paper we formalize the problem of few-shot forecasting of time-series with heterogeneous channels for the first time. Extending recent work on heterogeneous attributes in vector data, we develop a model composed of permutation-invariant deep set-blocks which incorporate a temporal embedding. We assemble the first meta-dataset of 40 multivariate time-series datasets and show through experiments that our model generalizes well, outperforming baselines carried over from simpler scenarios that either fail to learn across tasks or miss temporal information.
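The core ingredient above, a permutation-invariant deep-set block over channels, can be sketched as follows: a shared network embeds each channel's window independently, and mean pooling removes any dependence on channel order or count, which is what lets one model handle datasets with heterogeneous channels. The paper's model additionally incorporates a temporal embedding, omitted here for brevity.

```python
import torch
import torch.nn as nn

class ChannelSetEncoder(nn.Module):
    """Deep-set encoder over a variable set of channels: phi is applied to
    each channel with shared weights, then mean-pooled across channels, so
    the representation is invariant to channel permutation and count."""

    def __init__(self, window, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(window, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, x):                  # x: (batch, n_channels, window)
        per_channel = self.phi(x)          # shared weights across channels
        pooled = per_channel.mean(dim=1)   # permutation-invariant pooling
        return self.rho(pooled)
```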
2306.01457
Stefan Arnold
Stefan Arnold, Dilara Yesilbas, Sven Weinzierl
Driving Context into Text-to-Text Privatization
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
\textit{Metric Differential Privacy} enables text-to-text privatization by adding calibrated noise to the vector of a word derived from an embedding space and projecting this noisy vector back to a discrete vocabulary using a nearest neighbor search. Since words are substituted without context, this mechanism is expected to fall short at finding substitutes for words with ambiguous meanings, such as \textit{'bank'}. To account for these ambiguous words, we leverage a sense embedding and incorporate a sense disambiguation step prior to noise injection. We accompany our modification to the privatization mechanism with an estimation of privacy and utility. For word sense disambiguation on the \textit{Words in Context} dataset, we demonstrate a substantial increase in classification accuracy of $6.05\%$.
[ { "created": "Fri, 2 Jun 2023 11:33:06 GMT", "version": "v1" } ]
2023-06-05
[ [ "Arnold", "Stefan", "" ], [ "Yesilbas", "Dilara", "" ], [ "Weinzierl", "Sven", "" ] ]
\textit{Metric Differential Privacy} enables text-to-text privatization by adding calibrated noise to the vector of a word derived from an embedding space and projecting this noisy vector back to a discrete vocabulary using a nearest neighbor search. Since words are substituted without context, this mechanism is expected to fall short at finding substitutes for words with ambiguous meanings, such as \textit{'bank'}. To account for these ambiguous words, we leverage a sense embedding and incorporate a sense disambiguation step prior to noise injection. We accompany our modification to the privatization mechanism with an estimation of privacy and utility. For word sense disambiguation on the \textit{Words in Context} dataset, we demonstrate a substantial increase in classification accuracy of $6.05\%$.
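The baseline mechanism that the paper above modifies is straightforward to sketch: add calibrated noise to a word's embedding and project back to the vocabulary by nearest-neighbor search. The uniform-direction, Gamma(d, 1/epsilon)-magnitude noise below is the standard calibration for the Euclidean metric; the paper's contribution, sense disambiguation before noise injection, is not shown, and the brute-force nearest-neighbor scan is only for clarity.

```python
import numpy as np

def privatize_word(word, vocab, emb, epsilon=10.0, rng=None):
    """Metric-DP word substitution: perturb the word's embedding with noise
    of uniform random direction and Gamma(d, 1/epsilon) magnitude, then
    return the nearest vocabulary word to the noisy vector.

    emb: dict mapping each word in vocab to a fixed-dimension numpy vector.
    """
    rng = rng if rng is not None else np.random.default_rng()
    v = emb[word]
    d = v.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    noisy = v + rng.gamma(shape=d, scale=1.0 / epsilon) * direction
    # nearest-neighbor projection back onto the discrete vocabulary
    words = list(vocab)
    dists = np.linalg.norm(np.stack([emb[w] for w in words]) - noisy, axis=1)
    return words[int(np.argmin(dists))]
```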
2202.10019
Md. Rafat Rahman Tushar
Ismot Sadik Peyas, Zahid Hasan, Md. Rafat Rahman Tushar, Al Musabbir, Raisa Mehjabin Azni, Shahnewaz Siddique
Autonomous Warehouse Robot using Deep Q-Learning
TENCON 2021
null
10.1109/TENCON54134.2021.9707256
null
cs.RO cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In warehouses, specialized agents need to navigate, avoid obstacles and maximize the use of space in the warehouse environment. Due to the unpredictability of these environments, reinforcement learning approaches can be applied to complete these tasks. In this paper, we propose using Deep Reinforcement Learning (DRL) to address the robot navigation and obstacle avoidance problem and traditional Q-learning with minor variations to maximize the use of space for product placement. We first investigate the problem for the single robot case. Next, based on the single robot model, we extend our system to the multi-robot case. We use a strategic variation of Q-tables to perform multi-agent Q-learning. We successfully test the performance of our model in a 2D simulation environment for both the single and multi-robot cases.
[ { "created": "Mon, 21 Feb 2022 07:16:51 GMT", "version": "v1" } ]
2022-02-22
[ [ "Peyas", "Ismot Sadik", "" ], [ "Hasan", "Zahid", "" ], [ "Tushar", "Md. Rafat Rahman", "" ], [ "Musabbir", "Al", "" ], [ "Azni", "Raisa Mehjabin", "" ], [ "Siddique", "Shahnewaz", "" ] ]
In warehouses, specialized agents need to navigate, avoid obstacles and maximize the use of space in the warehouse environment. Due to the unpredictability of these environments, reinforcement learning approaches can be applied to complete these tasks. In this paper, we propose using Deep Reinforcement Learning (DRL) to address the robot navigation and obstacle avoidance problem and traditional Q-learning with minor variations to maximize the use of space for product placement. We first investigate the problem for the single robot case. Next, based on the single robot model, we extend our system to the multi-robot case. We use a strategic variation of Q-tables to perform multi-agent Q-learning. We successfully test the performance of our model in a 2D simulation environment for both the single and multi-robot cases.
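The tabular Q-learning component above reduces to the standard update rule, sketched here; the learning rate and discount factor are illustrative, and in the multi-robot case each agent would apply the same update to its own Q-table.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning update:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q is a (n_states, n_actions) array; s, a, s_next are integer indices."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    return Q
```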