Dataset schema (column name, value type, min/max length):

column          type           min    max
id              stringlengths  9      10
submitter       stringlengths  1      64
authors         stringlengths  4      20.7k
title           stringlengths  4      246
comments        stringlengths  1      523
journal-ref     stringlengths  4      404
doi             stringlengths  11     153
report-no       stringlengths  2      254
categories      stringlengths  5      98
license         stringclasses  9 values
orig_abstract   stringlengths  14     3.35k
versions        listlengths    1      60
update_date     stringlengths  10     10
authors_parsed  listlengths    1      1.35k
abstract        stringlengths  11     3.34k
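The column summary above can be sketched as a small validation routine. The field names and types are taken directly from the summary; the `validate_record` helper and the sample record are illustrative assumptions, not part of the dataset's tooling:

```python
# Minimal sketch of the record schema described above.
# Field names/types come from the column summary; the helper and the
# sample record are hypothetical, for illustration only.

STRING_FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "update_date", "abstract",
]
LIST_FIELDS = ["versions", "authors_parsed"]

def validate_record(record):
    """Return a list of type problems; empty means the record looks well-formed.
    Missing or None fields are allowed, since several columns are often null."""
    problems = []
    for name in STRING_FIELDS:
        value = record.get(name)
        if value is not None and not isinstance(value, str):
            problems.append(f"{name}: expected str or None, got {type(value).__name__}")
    for name in LIST_FIELDS:
        value = record.get(name)
        if value is not None and not isinstance(value, list):
            problems.append(f"{name}: expected list or None, got {type(value).__name__}")
    return problems

sample = {
    "id": "1606.01342",
    "submitter": "Adam Kasperski",
    "authors": "Mikita Hradovich, Adam Kasperski, Pawel Zielinski",
    "categories": "cs.DS",
    "versions": [{"created": "Sat, 4 Jun 2016 08:02:23 GMT", "version": "v1"}],
    "authors_parsed": [["Hradovich", "Mikita", ""]],
    "update_date": "2016-08-30",
}
print(validate_record(sample))  # an empty list means the record passes
```

Null-valued columns (e.g. a missing `journal-ref`) simply stay absent or `None` in the dict, which is why the checker treats `None` as acceptable for every field.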
id: 1606.01342
submitter: Adam Kasperski
authors: Mikita Hradovich, Adam Kasperski, Pawel Zielinski
title: Recoverable robust spanning tree problem under interval uncertainty representations
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper deals with the recoverable robust spanning tree problem under interval uncertainty representations. A polynomial time, combinatorial algorithm for the recoverable spanning tree problem is first constructed. This problem generalizes the incremental spanning tree problem, previously discussed in the literature. The algorithm is then applied to solve the recoverable robust spanning tree problem, under the traditional interval uncertainty representation, in polynomial time. Moreover, the algorithm allows us to obtain, under some mild assumptions about the uncertainty intervals, several approximation results for the recoverable robust spanning tree problem under the Bertsimas and Sim interval uncertainty representation and the interval uncertainty representation with a budget constraint.
versions: [ { "created": "Sat, 4 Jun 2016 08:02:23 GMT", "version": "v1" }, { "created": "Sat, 27 Aug 2016 08:02:41 GMT", "version": "v2" } ]
update_date: 2016-08-30
authors_parsed: [ [ "Hradovich", "Mikita", "" ], [ "Kasperski", "Adam", "" ], [ "Zielinski", "Pawel", "" ] ]
abstract: This paper deals with the recoverable robust spanning tree problem under interval uncertainty representations. A polynomial time, combinatorial algorithm for the recoverable spanning tree problem is first constructed. This problem generalizes the incremental spanning tree problem, previously discussed in the literature. The algorithm is then applied to solve the recoverable robust spanning tree problem, under the traditional interval uncertainty representation, in polynomial time. Moreover, the algorithm allows us to obtain, under some mild assumptions about the uncertainty intervals, several approximation results for the recoverable robust spanning tree problem under the Bertsimas and Sim interval uncertainty representation and the interval uncertainty representation with a budget constraint.
id: 2212.07424
submitter: Shankar Biradar Mr
authors: Pranjal Aggarwal, Pasupuleti Chandana, Jagrut Nemade, Shubham Sharma, Sunil Saumya, Shankar Biradar
title: Hope Speech Detection on Social Media Platforms
comments: 14 pages, 05 figures. accepted for publication in the book chapter "Cyber Crime in Social Media: Theory and Solutions"
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Since personal computers became widely available in the consumer market, the amount of harmful content on the internet has significantly expanded. In simple terms, harmful content is anything online which causes a person distress or harm. It may include hate speech, violent content, threats, non-hope speech, etc. The online content must be positive, uplifting and supportive. Over the past few years, many studies have focused on solving this problem through hate speech detection, but very few focused on identifying hope speech. This paper discusses various machine learning approaches to identify a sentence as Hope Speech, Non-Hope Speech, or a Neutral sentence. The dataset used in the study contains English YouTube comments and is released as a part of the shared task "EACL-2021: Hope Speech Detection for Equality, Diversity, and Inclusion". Initially, the dataset obtained from the shared task had three classes: Hope Speech, non-Hope speech, and not in English; however, upon deeper inspection, we discovered that dataset relabeling is required. A group of undergraduates was hired to help perform the entire dataset's relabeling task. We experimented with conventional machine learning models (such as Na\"ive Bayes, logistic regression and support vector machine) and pre-trained models (such as BERT) on relabeled data. According to the experimental results, the relabeled data has achieved a better accuracy for Hope speech identification than the original data set.
versions: [ { "created": "Mon, 14 Nov 2022 10:58:22 GMT", "version": "v1" } ]
update_date: 2022-12-16
authors_parsed: [ [ "Aggarwal", "Pranjal", "" ], [ "Chandana", "Pasupuleti", "" ], [ "Nemade", "Jagrut", "" ], [ "Sharma", "Shubham", "" ], [ "Saumya", "Sunil", "" ], [ "Biradar", "Shankar", "" ] ]
abstract: Since personal computers became widely available in the consumer market, the amount of harmful content on the internet has significantly expanded. In simple terms, harmful content is anything online which causes a person distress or harm. It may include hate speech, violent content, threats, non-hope speech, etc. The online content must be positive, uplifting and supportive. Over the past few years, many studies have focused on solving this problem through hate speech detection, but very few focused on identifying hope speech. This paper discusses various machine learning approaches to identify a sentence as Hope Speech, Non-Hope Speech, or a Neutral sentence. The dataset used in the study contains English YouTube comments and is released as a part of the shared task "EACL-2021: Hope Speech Detection for Equality, Diversity, and Inclusion". Initially, the dataset obtained from the shared task had three classes: Hope Speech, non-Hope speech, and not in English; however, upon deeper inspection, we discovered that dataset relabeling is required. A group of undergraduates was hired to help perform the entire dataset's relabeling task. We experimented with conventional machine learning models (such as Na\"ive Bayes, logistic regression and support vector machine) and pre-trained models (such as BERT) on relabeled data. According to the experimental results, the relabeled data has achieved a better accuracy for Hope speech identification than the original data set.
id: 2310.10169
submitter: Guanting Dong
authors: Guanting Dong, Tingfeng Hui, Zhuoma GongQue, Jinxu Zhao, Daichi Guo, Gang Zhao, Keqing He, Weiran Xu
title: DemoNSF: A Multi-task Demonstration-based Generative Framework for Noisy Slot Filling Task
comments: Findings of EMNLP 2023 (Short Paper)
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI cs.IR cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Recently, prompt-based generative frameworks have shown impressive capabilities in sequence labeling tasks. However, in practical dialogue scenarios, relying solely on simplistic templates and traditional corpora presents a challenge for these methods in generalizing to unknown input perturbations. To address this gap, we propose a multi-task demonstration based generative framework for noisy slot filling, named DemoNSF. Specifically, we introduce three noisy auxiliary tasks, namely noisy recovery (NR), random mask (RM), and hybrid discrimination (HD), to implicitly capture semantic structural information of input perturbations at different granularities. In the downstream main task, we design a noisy demonstration construction strategy for the generative framework, which explicitly incorporates task-specific information and perturbed distribution during training and inference. Experiments on two benchmarks demonstrate that DemoNSF outperforms all baseline methods and achieves strong generalization. Further analysis provides empirical guidance for the practical application of generative frameworks. Our code is released at https://github.com/dongguanting/Demo-NSF.
versions: [ { "created": "Mon, 16 Oct 2023 08:16:53 GMT", "version": "v1" } ]
update_date: 2023-10-17
authors_parsed: [ [ "Dong", "Guanting", "" ], [ "Hui", "Tingfeng", "" ], [ "GongQue", "Zhuoma", "" ], [ "Zhao", "Jinxu", "" ], [ "Guo", "Daichi", "" ], [ "Zhao", "Gang", "" ], [ "He", "Keqing", "" ], [ "Xu", "Weiran", "" ] ]
abstract: Recently, prompt-based generative frameworks have shown impressive capabilities in sequence labeling tasks. However, in practical dialogue scenarios, relying solely on simplistic templates and traditional corpora presents a challenge for these methods in generalizing to unknown input perturbations. To address this gap, we propose a multi-task demonstration based generative framework for noisy slot filling, named DemoNSF. Specifically, we introduce three noisy auxiliary tasks, namely noisy recovery (NR), random mask (RM), and hybrid discrimination (HD), to implicitly capture semantic structural information of input perturbations at different granularities. In the downstream main task, we design a noisy demonstration construction strategy for the generative framework, which explicitly incorporates task-specific information and perturbed distribution during training and inference. Experiments on two benchmarks demonstrate that DemoNSF outperforms all baseline methods and achieves strong generalization. Further analysis provides empirical guidance for the practical application of generative frameworks. Our code is released at https://github.com/dongguanting/Demo-NSF.
id: 2305.18651
submitter: Zhen Xiang
authors: Zhen Xiang, Zidi Xiong, Bo Li
title: UMD: Unsupervised Model Detection for X2X Backdoor Attacks
comments: Proceedings of the 40th International Conference on Machine Learning
journal-ref: Proceedings of the 40th International Conference on Machine Learning, PMLR 202:38013-38038, 2023
doi: null
report-no: null
categories: cs.LG cs.CR cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Backdoor (Trojan) attack is a common threat to deep neural networks, where samples from one or more source classes embedded with a backdoor trigger will be misclassified to adversarial target classes. Existing methods for detecting whether a classifier is backdoor attacked are mostly designed for attacks with a single adversarial target (e.g., all-to-one attack). To the best of our knowledge, without supervision, no existing methods can effectively address the more general X2X attack with an arbitrary number of source classes, each paired with an arbitrary target class. In this paper, we propose UMD, the first Unsupervised Model Detection method that effectively detects X2X backdoor attacks via a joint inference of the adversarial (source, target) class pairs. In particular, we first define a novel transferability statistic to measure and select a subset of putative backdoor class pairs based on a proposed clustering approach. Then, these selected class pairs are jointly assessed based on an aggregation of their reverse-engineered trigger size for detection inference, using a robust and unsupervised anomaly detector we proposed. We conduct comprehensive evaluations on CIFAR-10, GTSRB, and Imagenette dataset, and show that our unsupervised UMD outperforms SOTA detectors (even with supervision) by 17%, 4%, and 8%, respectively, in terms of the detection accuracy against diverse X2X attacks. We also show the strong detection performance of UMD against several strong adaptive attacks.
versions: [ { "created": "Mon, 29 May 2023 23:06:05 GMT", "version": "v1" }, { "created": "Fri, 2 Jun 2023 01:56:40 GMT", "version": "v2" }, { "created": "Tue, 8 Aug 2023 08:48:48 GMT", "version": "v3" }, { "created": "Wed, 15 Nov 2023 21:51:23 GMT", "version": "v4" } ]
update_date: 2023-11-17
authors_parsed: [ [ "Xiang", "Zhen", "" ], [ "Xiong", "Zidi", "" ], [ "Li", "Bo", "" ] ]
abstract: Backdoor (Trojan) attack is a common threat to deep neural networks, where samples from one or more source classes embedded with a backdoor trigger will be misclassified to adversarial target classes. Existing methods for detecting whether a classifier is backdoor attacked are mostly designed for attacks with a single adversarial target (e.g., all-to-one attack). To the best of our knowledge, without supervision, no existing methods can effectively address the more general X2X attack with an arbitrary number of source classes, each paired with an arbitrary target class. In this paper, we propose UMD, the first Unsupervised Model Detection method that effectively detects X2X backdoor attacks via a joint inference of the adversarial (source, target) class pairs. In particular, we first define a novel transferability statistic to measure and select a subset of putative backdoor class pairs based on a proposed clustering approach. Then, these selected class pairs are jointly assessed based on an aggregation of their reverse-engineered trigger size for detection inference, using a robust and unsupervised anomaly detector we proposed. We conduct comprehensive evaluations on CIFAR-10, GTSRB, and Imagenette dataset, and show that our unsupervised UMD outperforms SOTA detectors (even with supervision) by 17%, 4%, and 8%, respectively, in terms of the detection accuracy against diverse X2X attacks. We also show the strong detection performance of UMD against several strong adaptive attacks.
id: 2007.06685
submitter: Richard Barr
authors: Richard S. Barr (1), Fred Glover (2), Toby Huskinson (1), Gary Kochenberger (3) ((1) Southern Methodist University, (2) University of Colorado at Boulder, (3) University of Colorado at Denver)
title: A Self-Organizing Extreme-Point Tabu-Search Algorithm for Fixed Charge Network Problems with Extensions
comments: 27 pages
journal-ref: null
doi: null
report-no: null
categories: cs.DM math.OC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We propose a new self-organizing algorithm for fixed-charge network flow problems based on ghost image (GI) processes as proposed in Glover (1994) and adapted to fixed-charge transportation problems in Glover, Amini and Kochenberger (2005). Our self-organizing GI algorithm iteratively modifies an idealized representation of the problem embodied in a parametric ghost image, enabling all steps to be performed with a primal network flow algorithm operating on the parametric GI. Computational tests are carried out on an extensive set of benchmark problems which includes the previous largest set in the literature, comparing our algorithm to the best methods previously proposed for fixed-charge transportation problems, though our algorithm is not specialized to this class. We also provide comparisons for additional more general fixed-charge network flow problems against Cplex 12.8 to demonstrate that the new self-organizing GI algorithm is effective on large problem instances, finding solutions with statistically equivalent objective values at least 700 times faster. The attractive outcomes produced by the current GI/TS implementation provide a significant advance in our ability to solve fixed-cost network problems efficiently and invites its use for larger instances from a variety of application domains.
versions: [ { "created": "Mon, 13 Jul 2020 20:55:48 GMT", "version": "v1" } ]
update_date: 2020-07-15
authors_parsed: [ [ "Barr", "Richard S.", "" ], [ "Glover", "Fred", "" ], [ "Huskinson", "Toby", "" ], [ "Kochenberger", "Gary", "" ] ]
abstract: We propose a new self-organizing algorithm for fixed-charge network flow problems based on ghost image (GI) processes as proposed in Glover (1994) and adapted to fixed-charge transportation problems in Glover, Amini and Kochenberger (2005). Our self-organizing GI algorithm iteratively modifies an idealized representation of the problem embodied in a parametric ghost image, enabling all steps to be performed with a primal network flow algorithm operating on the parametric GI. Computational tests are carried out on an extensive set of benchmark problems which includes the previous largest set in the literature, comparing our algorithm to the best methods previously proposed for fixed-charge transportation problems, though our algorithm is not specialized to this class. We also provide comparisons for additional more general fixed-charge network flow problems against Cplex 12.8 to demonstrate that the new self-organizing GI algorithm is effective on large problem instances, finding solutions with statistically equivalent objective values at least 700 times faster. The attractive outcomes produced by the current GI/TS implementation provide a significant advance in our ability to solve fixed-cost network problems efficiently and invites its use for larger instances from a variety of application domains.
id: 1610.01354
submitter: Prabhat Kushwaha
authors: Prabhat Kushwaha
title: Improved Lower Bound on DHP: Towards the Equivalence of DHP and DLP for Important Elliptic Curves Used for Implementation
comments: To keep the paper short, we have not included appendices in the main paper. The appendices have been separately added. The reader may refer to appendices for the relevant values which have been used to complete Table 1 and Table 2 in the paper
journal-ref: Journal of Mathematical Cryptology 2018
doi: 10.1515/jmc-2017-0053
report-no: ISSN: 1862-2976
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In 2004, Muzereau et al. showed how to use a reduction algorithm of the discrete logarithm problem to Diffie-Hellman problem in order to estimate lower bound on Diffie-Hellman problem on elliptic curves. They presented their estimates for various elliptic curves that are used in practical applications. In this paper, we show that a much tighter lower bound for Diffie-Hellman problem on those curves can be achieved, if one uses the multiplicative group of a finite field as an auxiliary group. Moreover, improved lower bound estimates on Diffie-Hellman problem for various recommended curves are also given which are the tightest; thus, leading us towards the equivalence of Diffie-Hellman problem and the discrete logarithm problem for these recommended elliptic curves.
versions: [ { "created": "Wed, 5 Oct 2016 10:46:47 GMT", "version": "v1" }, { "created": "Fri, 11 Nov 2016 09:26:52 GMT", "version": "v2" }, { "created": "Sat, 26 Nov 2016 07:33:25 GMT", "version": "v3" } ]
update_date: 2020-11-17
authors_parsed: [ [ "Kushwaha", "Prabhat", "" ] ]
abstract: In 2004, Muzereau et al. showed how to use a reduction algorithm of the discrete logarithm problem to Diffie-Hellman problem in order to estimate lower bound on Diffie-Hellman problem on elliptic curves. They presented their estimates for various elliptic curves that are used in practical applications. In this paper, we show that a much tighter lower bound for Diffie-Hellman problem on those curves can be achieved, if one uses the multiplicative group of a finite field as an auxiliary group. Moreover, improved lower bound estimates on Diffie-Hellman problem for various recommended curves are also given which are the tightest; thus, leading us towards the equivalence of Diffie-Hellman problem and the discrete logarithm problem for these recommended elliptic curves.
id: 2405.09707
submitter: Jadie Adams
authors: Jadie Adams, Shireen Elhabian
title: Point2SSM++: Self-Supervised Learning of Anatomical Shape Models from Point Clouds
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Correspondence-based statistical shape modeling (SSM) stands as a powerful technology for morphometric analysis in clinical research. SSM facilitates population-level characterization and quantification of anatomical shapes such as bones and organs, aiding in pathology and disease diagnostics and treatment planning. Despite its potential, SSM remains under-utilized in medical research due to the significant overhead associated with automatic construction methods, which demand complete, aligned shape surface representations. Additionally, optimization-based techniques rely on bias-inducing assumptions or templates and have prolonged inference times as the entire cohort is simultaneously optimized. To overcome these challenges, we introduce Point2SSM++, a principled, self-supervised deep learning approach that directly learns correspondence points from point cloud representations of anatomical shapes. Point2SSM++ is robust to misaligned and inconsistent input, providing SSM that accurately samples individual shape surfaces while effectively capturing population-level statistics. Additionally, we present principled extensions of Point2SSM++ to adapt it for dynamic spatiotemporal and multi-anatomy use cases, demonstrating the broad versatility of the Point2SSM++ framework. Through extensive validation across diverse anatomies, evaluation metrics, and clinically relevant downstream tasks, we demonstrate Point2SSM++'s superiority over existing state-of-the-art deep learning models and traditional approaches. Point2SSM++ substantially enhances the feasibility of SSM generation and significantly broadens its array of potential clinical applications.
versions: [ { "created": "Wed, 15 May 2024 21:13:54 GMT", "version": "v1" } ]
update_date: 2024-05-17
authors_parsed: [ [ "Adams", "Jadie", "" ], [ "Elhabian", "Shireen", "" ] ]
abstract: Correspondence-based statistical shape modeling (SSM) stands as a powerful technology for morphometric analysis in clinical research. SSM facilitates population-level characterization and quantification of anatomical shapes such as bones and organs, aiding in pathology and disease diagnostics and treatment planning. Despite its potential, SSM remains under-utilized in medical research due to the significant overhead associated with automatic construction methods, which demand complete, aligned shape surface representations. Additionally, optimization-based techniques rely on bias-inducing assumptions or templates and have prolonged inference times as the entire cohort is simultaneously optimized. To overcome these challenges, we introduce Point2SSM++, a principled, self-supervised deep learning approach that directly learns correspondence points from point cloud representations of anatomical shapes. Point2SSM++ is robust to misaligned and inconsistent input, providing SSM that accurately samples individual shape surfaces while effectively capturing population-level statistics. Additionally, we present principled extensions of Point2SSM++ to adapt it for dynamic spatiotemporal and multi-anatomy use cases, demonstrating the broad versatility of the Point2SSM++ framework. Through extensive validation across diverse anatomies, evaluation metrics, and clinically relevant downstream tasks, we demonstrate Point2SSM++'s superiority over existing state-of-the-art deep learning models and traditional approaches. Point2SSM++ substantially enhances the feasibility of SSM generation and significantly broadens its array of potential clinical applications.
id: 2007.12230
submitter: Himan Abdollahpouri
authors: Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
title: Addressing the Multistakeholder Impact of Popularity Bias in Recommendation Through Calibration
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.IR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Popularity bias is a well-known phenomenon in recommender systems: popular items are recommended even more frequently than their popularity would warrant, amplifying long-tail effects already present in many recommendation domains. Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail items overall. The effectiveness of these approaches, however, has not been assessed in multistakeholder environments where in addition to the users who receive the recommendations, the utility of the suppliers of the recommended items should also be considered. In this paper, we propose the concept of popularity calibration which measures the match between the popularity distribution of items in a user's profile and that of the recommended items. We also develop an algorithm that optimizes this metric. In addition, we demonstrate that existing evaluation metrics for popularity bias do not reflect the performance of the algorithms when it is measured from the perspective of different stakeholders. Using music and movie datasets, we empirically show that our approach outperforms the existing state-of-the-art approaches in addressing popularity bias by calibrating the recommendations to users' preferences. We also show that our proposed algorithm has a secondary effect of improving supplier fairness.
versions: [ { "created": "Thu, 23 Jul 2020 19:51:16 GMT", "version": "v1" } ]
update_date: 2020-07-27
authors_parsed: [ [ "Abdollahpouri", "Himan", "" ], [ "Mansoury", "Masoud", "" ], [ "Burke", "Robin", "" ], [ "Mobasher", "Bamshad", "" ] ]
abstract: Popularity bias is a well-known phenomenon in recommender systems: popular items are recommended even more frequently than their popularity would warrant, amplifying long-tail effects already present in many recommendation domains. Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail items overall. The effectiveness of these approaches, however, has not been assessed in multistakeholder environments where in addition to the users who receive the recommendations, the utility of the suppliers of the recommended items should also be considered. In this paper, we propose the concept of popularity calibration which measures the match between the popularity distribution of items in a user's profile and that of the recommended items. We also develop an algorithm that optimizes this metric. In addition, we demonstrate that existing evaluation metrics for popularity bias do not reflect the performance of the algorithms when it is measured from the perspective of different stakeholders. Using music and movie datasets, we empirically show that our approach outperforms the existing state-of-the-art approaches in addressing popularity bias by calibrating the recommendations to users' preferences. We also show that our proposed algorithm has a secondary effect of improving supplier fairness.
id: 1203.2870
submitter: Giuseppe Cocco
authors: Giuseppe Cocco, Deniz G\"und\"uz and Christian Ibars
title: Streaming Transmitter over Block-Fading Channels with Delay Constraint
comments: Submitted to IEEE Transactions on Wireless Communications
journal-ref: null
doi: null
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Data streaming transmission over a block fading channel is studied. It is assumed that the transmitter receives a new message at each channel block at a constant rate, which is fixed by an underlying application, and tries to deliver the arriving messages by a common deadline. Various transmission schemes are proposed and compared with an informed transmitter upper bound in terms of the average decoded rate. It is shown that in the single receiver case the adaptive joint encoding (aJE) scheme is asymptotically optimal, in that it achieves the ergodic capacity as the transmission deadline goes to infinity; and it closely follows the performance of the informed transmitter upper bound in the case of finite transmission deadline. On the other hand, in the presence of multiple receivers with different signal-to-noise ratios (SNR), memoryless transmission (MT), time sharing (TS) and superposition transmission (ST) schemes are shown to be more robust than the joint encoding (JE) scheme as they have gradual performance loss with decreasing SNR.
versions: [ { "created": "Tue, 13 Mar 2012 17:24:59 GMT", "version": "v1" }, { "created": "Sat, 17 Mar 2012 10:33:58 GMT", "version": "v2" }, { "created": "Tue, 8 May 2012 11:51:14 GMT", "version": "v3" }, { "created": "Wed, 19 Sep 2012 14:27:04 GMT", "version": "v4" } ]
update_date: 2012-09-20
authors_parsed: [ [ "Cocco", "Giuseppe", "" ], [ "Gündüz", "Deniz", "" ], [ "Ibars", "Christian", "" ] ]
abstract: Data streaming transmission over a block fading channel is studied. It is assumed that the transmitter receives a new message at each channel block at a constant rate, which is fixed by an underlying application, and tries to deliver the arriving messages by a common deadline. Various transmission schemes are proposed and compared with an informed transmitter upper bound in terms of the average decoded rate. It is shown that in the single receiver case the adaptive joint encoding (aJE) scheme is asymptotically optimal, in that it achieves the ergodic capacity as the transmission deadline goes to infinity; and it closely follows the performance of the informed transmitter upper bound in the case of finite transmission deadline. On the other hand, in the presence of multiple receivers with different signal-to-noise ratios (SNR), memoryless transmission (MT), time sharing (TS) and superposition transmission (ST) schemes are shown to be more robust than the joint encoding (JE) scheme as they have gradual performance loss with decreasing SNR.
id: 2305.19685
submitter: Elena Orlova
authors: Elena Orlova, Aleksei Ustimenko, Ruoxi Jiang, Peter Y. Lu, Rebecca Willett
title: Deep Stochastic Mechanics
comments: ICML 2024
journal-ref: null
doi: null
report-no: null
categories: cs.LG quant-ph stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper introduces a novel deep-learning-based approach for numerical simulation of a time-evolving Schr\"odinger equation inspired by stochastic mechanics and generative diffusion models. Unlike existing approaches, which exhibit computational complexity that scales exponentially in the problem dimension, our method allows us to adapt to the latent low-dimensional structure of the wave function by sampling from the Markovian diffusion. Depending on the latent dimension, our method may have far lower computational complexity in higher dimensions. Moreover, we propose novel equations for stochastic quantum mechanics, resulting in quadratic computational complexity with respect to the number of dimensions. Numerical simulations verify our theoretical findings and show a significant advantage of our method compared to other deep-learning-based approaches used for quantum mechanics.
versions: [ { "created": "Wed, 31 May 2023 09:28:03 GMT", "version": "v1" }, { "created": "Wed, 4 Oct 2023 15:23:47 GMT", "version": "v2" }, { "created": "Wed, 14 Feb 2024 17:48:39 GMT", "version": "v3" }, { "created": "Tue, 4 Jun 2024 18:30:51 GMT", "version": "v4" }, { "created": "Sun, 7 Jul 2024 17:42:40 GMT", "version": "v5" } ]
update_date: 2024-07-09
authors_parsed: [ [ "Orlova", "Elena", "" ], [ "Ustimenko", "Aleksei", "" ], [ "Jiang", "Ruoxi", "" ], [ "Lu", "Peter Y.", "" ], [ "Willett", "Rebecca", "" ] ]
abstract: This paper introduces a novel deep-learning-based approach for numerical simulation of a time-evolving Schr\"odinger equation inspired by stochastic mechanics and generative diffusion models. Unlike existing approaches, which exhibit computational complexity that scales exponentially in the problem dimension, our method allows us to adapt to the latent low-dimensional structure of the wave function by sampling from the Markovian diffusion. Depending on the latent dimension, our method may have far lower computational complexity in higher dimensions. Moreover, we propose novel equations for stochastic quantum mechanics, resulting in quadratic computational complexity with respect to the number of dimensions. Numerical simulations verify our theoretical findings and show a significant advantage of our method compared to other deep-learning-based approaches used for quantum mechanics.
2003.03836
Ruoyi Du
Ruoyi Du, Dongliang Chang, Ayan Kumar Bhunia, Jiyang Xie, Zhanyu Ma, Yi-Zhe Song, Jun Guo
Fine-Grained Visual Classification via Progressive Multi-Granularity Training of Jigsaw Patches
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine-grained visual classification (FGVC) is much more challenging than traditional classification tasks due to the inherently subtle intra-class object variations. Recent works mainly tackle this problem by focusing on how to locate the most discriminative parts, more complementary parts, and parts of various granularities. However, less effort has been placed to which granularities are the most discriminative and how to fuse information cross multi-granularity. In this work, we propose a novel framework for fine-grained visual classification to tackle these problems. In particular, we propose: (i) a progressive training strategy that effectively fuses features from different granularities, and (ii) a random jigsaw patch generator that encourages the network to learn features at specific granularities. We obtain state-of-the-art performances on several standard FGVC benchmark datasets, where the proposed method consistently outperforms existing methods or delivers competitive results. The code will be available at https://github.com/PRIS-CV/PMG-Progressive-Multi-Granularity-Training.
[ { "created": "Sun, 8 Mar 2020 19:27:30 GMT", "version": "v1" }, { "created": "Tue, 10 Mar 2020 11:09:49 GMT", "version": "v2" }, { "created": "Sun, 19 Jul 2020 06:56:10 GMT", "version": "v3" } ]
2020-07-21
[ [ "Du", "Ruoyi", "" ], [ "Chang", "Dongliang", "" ], [ "Bhunia", "Ayan Kumar", "" ], [ "Xie", "Jiyang", "" ], [ "Ma", "Zhanyu", "" ], [ "Song", "Yi-Zhe", "" ], [ "Guo", "Jun", "" ] ]
Fine-grained visual classification (FGVC) is much more challenging than traditional classification tasks due to the inherently subtle intra-class object variations. Recent works mainly tackle this problem by focusing on how to locate the most discriminative parts, more complementary parts, and parts of various granularities. However, less effort has been placed on identifying which granularities are the most discriminative and how to fuse information across granularities. In this work, we propose a novel framework for fine-grained visual classification to tackle these problems. In particular, we propose: (i) a progressive training strategy that effectively fuses features from different granularities, and (ii) a random jigsaw patch generator that encourages the network to learn features at specific granularities. We obtain state-of-the-art performance on several standard FGVC benchmark datasets, where the proposed method consistently outperforms existing methods or delivers competitive results. The code will be available at https://github.com/PRIS-CV/PMG-Progressive-Multi-Granularity-Training.
2405.06225
Zhipeng Gao
Zhipeng Gao, Yanqi Su, Xing Hu, Xin Xia
Automating TODO-missed Methods Detection and Patching
null
null
10.1145/3652152
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
TODO comments are widely used by developers to remind themselves or others about incomplete tasks. In other words, TODO comments are usually associated with temporary or suboptimal solutions. In practice, all the equivalent suboptimal implementations should be updated (e.g., adding TODOs) simultaneously. However, due to various reasons (e.g., time constraints or carelessness), developers may forget or even are unaware of adding TODO comments to all necessary places, which results in the TODO-missed methods. These "hidden" suboptimal implementations in TODO-missed methods may hurt the software quality and maintainability in the long term. Therefore, in this paper, we propose the novel task of TODO-missed methods detection and patching, and develop a novel model, namely TDPatcher (TODO-comment Patcher), to automatically patch TODO comments to the TODO-missed methods in software projects. Our model has two main stages: offline learning and online inference. During the offline learning stage, TDPatcher employs GraphCodeBERT and contrastive learning for encoding the TODO comment (natural language) and its suboptimal implementation (code fragment) into vector representations. For the online inference stage, we can identify the TODO-missed methods and further determine their patching position by leveraging the offline trained model. We built our dataset by collecting TODO-introduced methods from the top-10,000 Python GitHub repositories and evaluated TDPatcher on them. Extensive experimental results show the promising performance of our model over a set of benchmarks. We further conduct an in-the-wild evaluation which successfully detects 26 TODO-missed methods from 50 GitHub repositories.
[ { "created": "Fri, 10 May 2024 03:38:28 GMT", "version": "v1" } ]
2024-05-13
[ [ "Gao", "Zhipeng", "" ], [ "Su", "Yanqi", "" ], [ "Hu", "Xing", "" ], [ "Xia", "Xin", "" ] ]
TODO comments are widely used by developers to remind themselves or others about incomplete tasks. In other words, TODO comments are usually associated with temporary or suboptimal solutions. In practice, all the equivalent suboptimal implementations should be updated (e.g., adding TODOs) simultaneously. However, due to various reasons (e.g., time constraints or carelessness), developers may forget or even are unaware of adding TODO comments to all necessary places, which results in the TODO-missed methods. These "hidden" suboptimal implementations in TODO-missed methods may hurt the software quality and maintainability in the long term. Therefore, in this paper, we propose the novel task of TODO-missed methods detection and patching, and develop a novel model, namely TDPatcher (TODO-comment Patcher), to automatically patch TODO comments to the TODO-missed methods in software projects. Our model has two main stages: offline learning and online inference. During the offline learning stage, TDPatcher employs GraphCodeBERT and contrastive learning for encoding the TODO comment (natural language) and its suboptimal implementation (code fragment) into vector representations. For the online inference stage, we can identify the TODO-missed methods and further determine their patching position by leveraging the offline trained model. We built our dataset by collecting TODO-introduced methods from the top-10,000 Python GitHub repositories and evaluated TDPatcher on them. Extensive experimental results show the promising performance of our model over a set of benchmarks. We further conduct an in-the-wild evaluation which successfully detects 26 TODO-missed methods from 50 GitHub repositories.
2110.10431
Daniel Fern\'andez-Gonz\'alez
Daniel Fern\'andez-Gonz\'alez and Carlos G\'omez-Rodr\'iguez
Discontinuous Grammar as a Foreign Language
Final peer-reviewed manuscript accepted for publication in Neurocomputing
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to achieve deep natural language understanding, syntactic constituent parsing is a vital step, highly demanded by many artificial intelligence systems to process both text and speech. One of the most recent proposals is the use of standard sequence-to-sequence models to perform constituent parsing as a machine translation task, instead of applying task-specific parsers. While they show a competitive performance, these text-to-parse transducers are still lagging behind classic techniques in terms of accuracy, coverage and speed. To close the gap, we here extend the framework of sequence-to-sequence models for constituent parsing, not only by providing a more powerful neural architecture for improving their performance, but also by enlarging their coverage to handle the most complex syntactic phenomena: discontinuous structures. To that end, we design several novel linearizations that can fully produce discontinuities and, for the first time, we test a sequence-to-sequence model on the main discontinuous benchmarks, obtaining competitive results on par with task-specific discontinuous constituent parsers and achieving state-of-the-art scores on the (discontinuous) English Penn Treebank.
[ { "created": "Wed, 20 Oct 2021 08:58:02 GMT", "version": "v1" }, { "created": "Thu, 22 Dec 2022 19:49:49 GMT", "version": "v2" } ]
2022-12-26
[ [ "Fernández-González", "Daniel", "" ], [ "Gómez-Rodríguez", "Carlos", "" ] ]
In order to achieve deep natural language understanding, syntactic constituent parsing is a vital step, highly demanded by many artificial intelligence systems to process both text and speech. One of the most recent proposals is the use of standard sequence-to-sequence models to perform constituent parsing as a machine translation task, instead of applying task-specific parsers. While they show a competitive performance, these text-to-parse transducers are still lagging behind classic techniques in terms of accuracy, coverage and speed. To close the gap, we here extend the framework of sequence-to-sequence models for constituent parsing, not only by providing a more powerful neural architecture for improving their performance, but also by enlarging their coverage to handle the most complex syntactic phenomena: discontinuous structures. To that end, we design several novel linearizations that can fully produce discontinuities and, for the first time, we test a sequence-to-sequence model on the main discontinuous benchmarks, obtaining competitive results on par with task-specific discontinuous constituent parsers and achieving state-of-the-art scores on the (discontinuous) English Penn Treebank.
2205.11126
Qiaoyong Zhong
Yingying Zhang, Qiaoyong Zhong, Di Xie, Shiliang Pu
KRNet: Towards Efficient Knowledge Replay
Accepted by ICPR 2022
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The knowledge replay technique has been widely used in many tasks such as continual learning and continuous domain adaptation. The key lies in how to effectively encode the knowledge extracted from previous data and replay it during the current training procedure. A simple yet effective model to achieve knowledge replay is the autoencoder. However, the number of stored latent codes in an autoencoder increases linearly with the scale of data, and the trained encoder is redundant for the replaying stage. In this paper, we propose a novel and efficient knowledge recording network (KRNet) which directly maps an arbitrary sample identity number to the corresponding datum. Compared with the autoencoder, our KRNet requires significantly ($400\times$) less storage cost for the latent codes and can be trained without the encoder sub-network. Extensive experiments validate the efficiency of KRNet, and as a showcase, it is successfully applied in the task of continual learning.
[ { "created": "Mon, 23 May 2022 08:34:17 GMT", "version": "v1" } ]
2022-05-24
[ [ "Zhang", "Yingying", "" ], [ "Zhong", "Qiaoyong", "" ], [ "Xie", "Di", "" ], [ "Pu", "Shiliang", "" ] ]
The knowledge replay technique has been widely used in many tasks such as continual learning and continuous domain adaptation. The key lies in how to effectively encode the knowledge extracted from previous data and replay it during the current training procedure. A simple yet effective model to achieve knowledge replay is the autoencoder. However, the number of stored latent codes in an autoencoder increases linearly with the scale of data, and the trained encoder is redundant for the replaying stage. In this paper, we propose a novel and efficient knowledge recording network (KRNet) which directly maps an arbitrary sample identity number to the corresponding datum. Compared with the autoencoder, our KRNet requires significantly ($400\times$) less storage cost for the latent codes and can be trained without the encoder sub-network. Extensive experiments validate the efficiency of KRNet, and as a showcase, it is successfully applied in the task of continual learning.
2112.13246
Yongxin Guo
Yongxin Guo, Tao Lin, Xiaoying Tang
Towards Federated Learning on Time-Evolving Heterogeneous Data
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Federated Learning (FL) is a learning paradigm that protects privacy by keeping client data on edge devices. However, optimizing FL in practice can be difficult due to the diversity and heterogeneity of the learning system. Despite recent research efforts to improve the optimization of heterogeneous data, the impact of time-evolving heterogeneous data in real-world scenarios, such as changing client data or intermittent clients joining or leaving during training, has not been studied well. In this work, we propose Continual Federated Learning (CFL), a flexible framework for capturing the time-evolving heterogeneity of FL. CFL can handle complex and realistic scenarios, which are difficult to evaluate in previous FL formulations, by extracting information from past local data sets and approximating local objective functions. We theoretically demonstrate that CFL methods have a faster convergence rate than FedAvg in time-evolving scenarios, with the benefit depending on approximation quality. Through experiments, we show that our numerical findings match the convergence analysis and that CFL methods significantly outperform other state-of-the-art FL baselines.
[ { "created": "Sat, 25 Dec 2021 14:58:52 GMT", "version": "v1" }, { "created": "Sun, 13 Mar 2022 09:34:34 GMT", "version": "v2" }, { "created": "Mon, 20 Feb 2023 04:10:03 GMT", "version": "v3" } ]
2023-02-21
[ [ "Guo", "Yongxin", "" ], [ "Lin", "Tao", "" ], [ "Tang", "Xiaoying", "" ] ]
Federated Learning (FL) is a learning paradigm that protects privacy by keeping client data on edge devices. However, optimizing FL in practice can be difficult due to the diversity and heterogeneity of the learning system. Despite recent research efforts to improve the optimization of heterogeneous data, the impact of time-evolving heterogeneous data in real-world scenarios, such as changing client data or intermittent clients joining or leaving during training, has not been studied well. In this work, we propose Continual Federated Learning (CFL), a flexible framework for capturing the time-evolving heterogeneity of FL. CFL can handle complex and realistic scenarios, which are difficult to evaluate in previous FL formulations, by extracting information from past local data sets and approximating local objective functions. We theoretically demonstrate that CFL methods have a faster convergence rate than FedAvg in time-evolving scenarios, with the benefit depending on approximation quality. Through experiments, we show that our numerical findings match the convergence analysis and that CFL methods significantly outperform other state-of-the-art FL baselines.
1702.01815
Gemma Boleda
Gemma Boleda, Sebastian Pad\'o, Nghia The Pham, Marco Baroni
Living a discrete life in a continuous world: Reference with distributed representations
Accepted at IWCS 2017. Final version, 9 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reference is a crucial property of language that allows us to connect linguistic expressions to the world. Modeling it requires handling both continuous and discrete aspects of meaning. Data-driven models excel at the former, but struggle with the latter, and the reverse is true for symbolic models. This paper (a) introduces a concrete referential task to test both aspects, called cross-modal entity tracking; (b) proposes a neural network architecture that uses external memory to build an entity library inspired by the DRSs of DRT, with a mechanism to dynamically introduce new referents or add information to referents that are already in the library. Our model shows promise: it beats traditional neural network architectures on the task. However, it is still outperformed by Memory Networks, another model with external memory.
[ { "created": "Mon, 6 Feb 2017 22:50:49 GMT", "version": "v1" }, { "created": "Mon, 4 Sep 2017 08:44:28 GMT", "version": "v2" } ]
2017-09-05
[ [ "Boleda", "Gemma", "" ], [ "Padó", "Sebastian", "" ], [ "Pham", "Nghia The", "" ], [ "Baroni", "Marco", "" ] ]
Reference is a crucial property of language that allows us to connect linguistic expressions to the world. Modeling it requires handling both continuous and discrete aspects of meaning. Data-driven models excel at the former, but struggle with the latter, and the reverse is true for symbolic models. This paper (a) introduces a concrete referential task to test both aspects, called cross-modal entity tracking; (b) proposes a neural network architecture that uses external memory to build an entity library inspired by the DRSs of DRT, with a mechanism to dynamically introduce new referents or add information to referents that are already in the library. Our model shows promise: it beats traditional neural network architectures on the task. However, it is still outperformed by Memory Networks, another model with external memory.
1610.08095
Mengting Wan
Mengting Wan, Julian McAuley
Modeling Ambiguity, Subjectivity, and Diverging Viewpoints in Opinion Question Answering Systems
10 pages, accepted by ICDM'2016
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Product review websites provide an incredible lens into the wide variety of opinions and experiences of different people, and play a critical role in helping users discover products that match their personal needs and preferences. To help address questions that can't easily be answered by reading others' reviews, some review websites also allow users to pose questions to the community via a question-answering (QA) system. As one would expect, just as opinions diverge among different reviewers, answers to such questions may also be subjective, opinionated, and divergent. This means that answering such questions automatically is quite different from traditional QA tasks, where it is assumed that a single `correct' answer is available. While recent work introduced the idea of question-answering using product reviews, it did not account for two aspects that we consider in this paper: (1) Questions have multiple, often divergent, answers, and this full spectrum of answers should somehow be used to train the system; and (2) What makes a `good' answer depends on the asker and the answerer, and these factors should be incorporated in order for the system to be more personalized. Here we build a new QA dataset with 800 thousand questions---and over 3.1 million answers---and show that explicitly accounting for personalization and ambiguity leads both to quantitatively better answers, but also a more nuanced view of the range of supporting, but subjective, opinions.
[ { "created": "Tue, 25 Oct 2016 21:08:15 GMT", "version": "v1" } ]
2016-10-27
[ [ "Wan", "Mengting", "" ], [ "McAuley", "Julian", "" ] ]
Product review websites provide an incredible lens into the wide variety of opinions and experiences of different people, and play a critical role in helping users discover products that match their personal needs and preferences. To help address questions that can't easily be answered by reading others' reviews, some review websites also allow users to pose questions to the community via a question-answering (QA) system. As one would expect, just as opinions diverge among different reviewers, answers to such questions may also be subjective, opinionated, and divergent. This means that answering such questions automatically is quite different from traditional QA tasks, where it is assumed that a single `correct' answer is available. While recent work introduced the idea of question-answering using product reviews, it did not account for two aspects that we consider in this paper: (1) Questions have multiple, often divergent, answers, and this full spectrum of answers should somehow be used to train the system; and (2) What makes a `good' answer depends on the asker and the answerer, and these factors should be incorporated in order for the system to be more personalized. Here we build a new QA dataset with 800 thousand questions---and over 3.1 million answers---and show that explicitly accounting for personalization and ambiguity leads both to quantitatively better answers, but also a more nuanced view of the range of supporting, but subjective, opinions.
2006.13362
Yuxiang Luo
Yuxiang Luo, Cheng Zhang, Yunqi Zhang, Chaoshun Zuo, Dong Xuan, Zhiqiang Lin, Adam C. Champion, and Ness Shroff
ACOUSTIC-TURF: Acoustic-based Privacy-Preserving COVID-19 Contact Tracing
null
null
null
null
cs.CR cs.NI cs.SD cs.SI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a new privacy-preserving, automated contact tracing system, ACOUSTIC-TURF, to fight COVID-19 using acoustic signals sent from ubiquitous mobile devices. At a high level, ACOUSTIC-TURF adaptively broadcasts inaudible ultrasonic signals with randomly generated IDs in the vicinity. Simultaneously, the system receives other ultrasonic signals sent from nearby (e.g., 6 feet) users. In such a system, individual user IDs are not disclosed to others and the system can accurately detect encounters in physical proximity with 6-foot granularity. We have implemented a prototype of ACOUSTIC-TURF on Android and evaluated its performance in terms of acoustic-signal-based encounter detection accuracy and power consumption at different ranges and under various occlusion scenarios. Experimental results show that ACOUSTIC-TURF can detect multiple contacts within a 6-foot range for mobile phones placed in pockets and outside pockets. Furthermore, our acoustic-signal-based system achieves greater precision than wireless-signal-based approaches when contact tracing is performed through walls. ACOUSTIC-TURF correctly determines that people on opposite sides of a wall are not in contact with one another, whereas the Bluetooth-based approaches detect nonexistent contacts among them.
[ { "created": "Tue, 23 Jun 2020 22:17:36 GMT", "version": "v1" } ]
2020-06-25
[ [ "Luo", "Yuxiang", "" ], [ "Zhang", "Cheng", "" ], [ "Zhang", "Yunqi", "" ], [ "Zuo", "Chaoshun", "" ], [ "Xuan", "Dong", "" ], [ "Lin", "Zhiqiang", "" ], [ "Champion", "Adam C.", "" ], [ "Shroff", "Ness", "" ] ]
In this paper, we propose a new privacy-preserving, automated contact tracing system, ACOUSTIC-TURF, to fight COVID-19 using acoustic signals sent from ubiquitous mobile devices. At a high level, ACOUSTIC-TURF adaptively broadcasts inaudible ultrasonic signals with randomly generated IDs in the vicinity. Simultaneously, the system receives other ultrasonic signals sent from nearby (e.g., 6 feet) users. In such a system, individual user IDs are not disclosed to others and the system can accurately detect encounters in physical proximity with 6-foot granularity. We have implemented a prototype of ACOUSTIC-TURF on Android and evaluated its performance in terms of acoustic-signal-based encounter detection accuracy and power consumption at different ranges and under various occlusion scenarios. Experimental results show that ACOUSTIC-TURF can detect multiple contacts within a 6-foot range for mobile phones placed in pockets and outside pockets. Furthermore, our acoustic-signal-based system achieves greater precision than wireless-signal-based approaches when contact tracing is performed through walls. ACOUSTIC-TURF correctly determines that people on opposite sides of a wall are not in contact with one another, whereas the Bluetooth-based approaches detect nonexistent contacts among them.
1809.07444
Matthias Springer
Matthias Springer
DynaSOAr: Accelerating Single-Method Multiple-Objects Applications on GPUs
ACM Student Research Competition, Grand Finals Submission, Graduate Category
null
null
null
cs.PL cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object-oriented programming (OOP) has long been regarded as too inefficient for SIMD high-performance computing, despite the fact that many important HPC applications have an inherent object structure. We discovered a broad subset of OOP that can be implemented efficiently on massively parallel SIMD accelerators. We call it Single-Method Multiple-Objects (SMMO), because parallelism is expressed by running a method on all objects of a type. To make fast GPU programming available to domain experts who are less experienced in GPU programming, we developed DynaSOAr, a CUDA framework for SMMO applications. DynaSOAr improves the usage of allocated memory with an SOA data layout and achieves low memory fragmentation through efficient management of free and allocated memory blocks with lock-free, hierarchical bitmaps.
[ { "created": "Thu, 20 Sep 2018 01:39:05 GMT", "version": "v1" }, { "created": "Wed, 29 May 2019 03:59:43 GMT", "version": "v2" } ]
2019-05-30
[ [ "Springer", "Matthias", "" ] ]
Object-oriented programming (OOP) has long been regarded as too inefficient for SIMD high-performance computing, despite the fact that many important HPC applications have an inherent object structure. We discovered a broad subset of OOP that can be implemented efficiently on massively parallel SIMD accelerators. We call it Single-Method Multiple-Objects (SMMO), because parallelism is expressed by running a method on all objects of a type. To make fast GPU programming available to domain experts who are less experienced in GPU programming, we developed DynaSOAr, a CUDA framework for SMMO applications. DynaSOAr improves the usage of allocated memory with an SOA data layout and achieves low memory fragmentation through efficient management of free and allocated memory blocks with lock-free, hierarchical bitmaps.
1903.00743
Udayan Khurana
Udayan Khurana and Horst Samulowitz
Automating Predictive Modeling Process using Reinforcement Learning
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building a good predictive model requires an array of activities such as data imputation, feature transformations, estimator selection, hyper-parameter search and ensemble construction. Given the large, complex and heterogeneous space of options, off-the-shelf optimization methods are infeasible for realistic response times. In practice, much of the predictive modeling process is conducted by experienced data scientists, who selectively make use of available tools. Over time, they develop an understanding of the behavior of operators, and perform serial decision making under uncertainty, colloquially referred to as educated guesswork. With an unprecedented demand for application of supervised machine learning, there is a call for solutions that automatically search for a good combination of parameters across these tasks to minimize the modeling error. We introduce a novel system called APRL (Autonomous Predictive modeler via Reinforcement Learning), that uses past experience through reinforcement learning to optimize such sequential decision making from within a set of diverse actions under a time constraint on a previously unseen predictive learning problem. APRL actions are taken to optimize the performance of a final ensemble. This is in contrast to other systems, which maximize individual model accuracy first and create ensembles as a disconnected post-processing step. As a result, APRL is able to reduce up to 71\% of classification error on average over a wide variety of problems.
[ { "created": "Sat, 2 Mar 2019 18:22:19 GMT", "version": "v1" } ]
2019-03-06
[ [ "Khurana", "Udayan", "" ], [ "Samulowitz", "Horst", "" ] ]
Building a good predictive model requires an array of activities such as data imputation, feature transformations, estimator selection, hyper-parameter search and ensemble construction. Given the large, complex and heterogeneous space of options, off-the-shelf optimization methods are infeasible for realistic response times. In practice, much of the predictive modeling process is conducted by experienced data scientists, who selectively make use of available tools. Over time, they develop an understanding of the behavior of operators, and perform serial decision making under uncertainty, colloquially referred to as educated guesswork. With an unprecedented demand for application of supervised machine learning, there is a call for solutions that automatically search for a good combination of parameters across these tasks to minimize the modeling error. We introduce a novel system called APRL (Autonomous Predictive modeler via Reinforcement Learning), that uses past experience through reinforcement learning to optimize such sequential decision making from within a set of diverse actions under a time constraint on a previously unseen predictive learning problem. APRL actions are taken to optimize the performance of a final ensemble. This is in contrast to other systems, which maximize individual model accuracy first and create ensembles as a disconnected post-processing step. As a result, APRL is able to reduce up to 71\% of classification error on average over a wide variety of problems.
2008.12016
Deboleena Roy
Deboleena Roy, Indranil Chakraborty, Timur Ibrayev and Kaushik Roy
On the Intrinsic Robustness of NVM Crossbars Against Adversarial Attacks
to appear in Proceedings of DAC, 2021
null
null
null
cs.ET cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies. Such NVM crossbars promise fast and energy-efficient in-situ Matrix Vector Multiplication (MVM) thus alleviating the long-standing von Neumann bottleneck in today's digital hardware. However, the analog nature of computing in these crossbars is inherently approximate and results in deviations from ideal output values, which reduces the overall performance of Deep Neural Networks (DNNs) under normal circumstances. In this paper, we study the impact of these non-idealities under adversarial circumstances. We show that the non-ideal behavior of analog computing lowers the effectiveness of adversarial attacks, in both Black-Box and White-Box attack scenarios. In a non-adaptive attack, where the attacker is unaware of the analog hardware, we observe that analog computing offers a varying degree of intrinsic robustness, with a peak adversarial accuracy improvement of 35.34%, 22.69%, and 9.90% for white box PGD (epsilon=1/255, iter=30) for CIFAR-10, CIFAR-100, and ImageNet respectively. We also demonstrate "Hardware-in-Loop" adaptive attacks that circumvent this robustness by utilizing the knowledge of the NVM model.
[ { "created": "Thu, 27 Aug 2020 09:36:50 GMT", "version": "v1" }, { "created": "Mon, 15 Mar 2021 19:48:27 GMT", "version": "v2" } ]
2021-03-17
[ [ "Roy", "Deboleena", "" ], [ "Chakraborty", "Indranil", "" ], [ "Ibrayev", "Timur", "" ], [ "Roy", "Kaushik", "" ] ]
The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies. Such NVM crossbars promise fast and energy-efficient in-situ Matrix Vector Multiplication (MVM) thus alleviating the long-standing von Neumann bottleneck in today's digital hardware. However, the analog nature of computing in these crossbars is inherently approximate and results in deviations from ideal output values, which reduces the overall performance of Deep Neural Networks (DNNs) under normal circumstances. In this paper, we study the impact of these non-idealities under adversarial circumstances. We show that the non-ideal behavior of analog computing lowers the effectiveness of adversarial attacks, in both Black-Box and White-Box attack scenarios. In a non-adaptive attack, where the attacker is unaware of the analog hardware, we observe that analog computing offers a varying degree of intrinsic robustness, with a peak adversarial accuracy improvement of 35.34%, 22.69%, and 9.90% for white box PGD (epsilon=1/255, iter=30) for CIFAR-10, CIFAR-100, and ImageNet respectively. We also demonstrate "Hardware-in-Loop" adaptive attacks that circumvent this robustness by utilizing the knowledge of the NVM model.
2006.12311
Lingxiao Wang
Lingxiao Wang, Zhuoran Yang, Zhaoran Wang
Provably Efficient Causal Reinforcement Learning with Confounded Observational Data
42 pages, 4 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Empowered by expressive function approximators such as neural networks, deep reinforcement learning (DRL) achieves tremendous empirical successes. However, learning expressive function approximators requires collecting a large dataset (interventional data) by interacting with the environment. Such a lack of sample efficiency prohibits the application of DRL to critical scenarios, e.g., autonomous driving and personalized medicine, since trial and error in the online setting is often unsafe and even unethical. In this paper, we study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting. To incorporate the possibly confounded observational data, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. More specifically, DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting by a multiplicative factor, which decreases towards zero when the confounded observational data are more informative upon the adjustments. Our algorithm and analysis serve as a step towards causal reinforcement learning.
[ { "created": "Mon, 22 Jun 2020 14:49:33 GMT", "version": "v1" } ]
2020-06-23
[ [ "Wang", "Lingxiao", "" ], [ "Yang", "Zhuoran", "" ], [ "Wang", "Zhaoran", "" ] ]
Empowered by expressive function approximators such as neural networks, deep reinforcement learning (DRL) achieves tremendous empirical successes. However, learning expressive function approximators requires collecting a large dataset (interventional data) by interacting with the environment. Such a lack of sample efficiency prohibits the application of DRL to critical scenarios, e.g., autonomous driving and personalized medicine, since trial and error in the online setting is often unsafe and even unethical. In this paper, we study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting. To incorporate the possibly confounded observational data, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. More specifically, DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting by a multiplicative factor, which decreases towards zero when the confounded observational data are more informative upon the adjustments. Our algorithm and analysis serve as a step towards causal reinforcement learning.
1908.08026
David Shriver
David Shriver, Dong Xu, Sebastian Elbaum, Matthew B. Dwyer
Refactoring Neural Networks for Verification
null
null
null
null
cs.NE cs.LG cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks (DNNs) are growing in capability and applicability. Their effectiveness has led to their use in safety-critical and autonomous systems, yet there is a dearth of cost-effective methods available for reasoning about the behavior of a DNN. In this paper, we seek to expand the applicability and scalability of existing DNN verification techniques through DNN refactoring. A DNN refactoring defines (a) the transformation of the DNN's architecture, i.e., the number and size of its layers, and (b) the distillation of the learned relationships between the input features and function outputs of the original to train the transformed network. Unlike with traditional code refactoring, DNN refactoring does not guarantee functional equivalence of the two networks, but rather it aims to preserve the accuracy of the original network while producing a simpler network that is amenable to more efficient property verification. We present an automated framework for DNN refactoring, and demonstrate its potential effectiveness through three case studies on networks used in autonomous systems.
[ { "created": "Tue, 6 Aug 2019 21:51:05 GMT", "version": "v1" } ]
2019-08-22
[ [ "Shriver", "David", "" ], [ "Xu", "Dong", "" ], [ "Elbaum", "Sebastian", "" ], [ "Dwyer", "Matthew B.", "" ] ]
Deep neural networks (DNNs) are growing in capability and applicability. Their effectiveness has led to their use in safety-critical and autonomous systems, yet there is a dearth of cost-effective methods available for reasoning about the behavior of a DNN. In this paper, we seek to expand the applicability and scalability of existing DNN verification techniques through DNN refactoring. A DNN refactoring defines (a) the transformation of the DNN's architecture, i.e., the number and size of its layers, and (b) the distillation of the learned relationships between the input features and function outputs of the original to train the transformed network. Unlike with traditional code refactoring, DNN refactoring does not guarantee functional equivalence of the two networks, but rather it aims to preserve the accuracy of the original network while producing a simpler network that is amenable to more efficient property verification. We present an automated framework for DNN refactoring, and demonstrate its potential effectiveness through three case studies on networks used in autonomous systems.
1909.01994
Jacob Rafati
Jacob Rafati and Roummel F. Marcia
Quasi-Newton Optimization Methods For Deep Learning Applications
arXiv admin note: substantial text overlap with arXiv:1811.02693
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning algorithms often require solving a highly non-linear and nonconvex unconstrained optimization problem. Methods for solving optimization problems in large-scale machine learning, such as deep learning and deep reinforcement learning (RL), are generally restricted to the class of first-order algorithms, like stochastic gradient descent (SGD). While SGD iterates are inexpensive to compute, they have slow theoretical convergence rates. Furthermore, they require exhaustive trial-and-error to fine-tune many learning parameters. Using second-order curvature information to find search directions can help with more robust convergence for non-convex optimization problems. However, computing Hessian matrices for large-scale problems is not computationally practical. Alternatively, quasi-Newton methods construct an approximation of the Hessian matrix to build a quadratic model of the objective function. Quasi-Newton methods, like SGD, require only first-order gradient information, but they can result in superlinear convergence, which makes them attractive alternatives to SGD. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) approach is one of the most popular quasi-Newton methods that construct positive definite Hessian approximations. In this chapter, we propose efficient optimization methods based on L-BFGS quasi-Newton methods using line search and trust-region strategies. Our methods bridge the disparity between first- and second-order methods by using gradient information to calculate low-rank updates to Hessian approximations. We provide formal convergence analysis of these methods as well as empirical results on deep learning applications, such as image classification tasks and deep reinforcement learning on a set of ATARI 2600 video games. Our results show a robust convergence with preferred generalization characteristics as well as fast training time.
[ { "created": "Wed, 4 Sep 2019 15:52:08 GMT", "version": "v1" } ]
2019-09-06
[ [ "Rafati", "Jacob", "" ], [ "Marcia", "Roummel F.", "" ] ]
Deep learning algorithms often require solving a highly non-linear and nonconvex unconstrained optimization problem. Methods for solving optimization problems in large-scale machine learning, such as deep learning and deep reinforcement learning (RL), are generally restricted to the class of first-order algorithms, like stochastic gradient descent (SGD). While SGD iterates are inexpensive to compute, they have slow theoretical convergence rates. Furthermore, they require exhaustive trial-and-error to fine-tune many learning parameters. Using second-order curvature information to find search directions can help with more robust convergence for non-convex optimization problems. However, computing Hessian matrices for large-scale problems is not computationally practical. Alternatively, quasi-Newton methods construct an approximation of the Hessian matrix to build a quadratic model of the objective function. Quasi-Newton methods, like SGD, require only first-order gradient information, but they can result in superlinear convergence, which makes them attractive alternatives to SGD. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) approach is one of the most popular quasi-Newton methods that construct positive definite Hessian approximations. In this chapter, we propose efficient optimization methods based on L-BFGS quasi-Newton methods using line search and trust-region strategies. Our methods bridge the disparity between first- and second-order methods by using gradient information to calculate low-rank updates to Hessian approximations. We provide formal convergence analysis of these methods as well as empirical results on deep learning applications, such as image classification tasks and deep reinforcement learning on a set of ATARI 2600 video games. Our results show a robust convergence with preferred generalization characteristics as well as fast training time.
2408.02138
Abrar Majeedi
Abrar Majeedi, Viswanatha Reddy Gajjala, Satya Sai Srinath Namburi GNVV, Yin Li
RICA2: Rubric-Informed, Calibrated Assessment of Actions
Accepted at European Conference on Computer Vision (ECCV) 2024
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The ability to quantify how well an action is carried out, also known as action quality assessment (AQA), has attracted recent interest in the vision community. Unfortunately, prior methods often ignore the score rubric used by human experts and fall short of quantifying the uncertainty of the model prediction. To bridge the gap, we present RICA^2 - a deep probabilistic model that integrates score rubric and accounts for prediction uncertainty for AQA. Central to our method are stochastic embeddings of action steps, defined on a graph structure that encodes the score rubric. The embeddings spread probabilistic density in the latent space and allow our method to represent model uncertainty. The graph encodes the scoring criteria, based on which the quality scores can be decoded. We demonstrate that our method establishes new state of the art on public benchmarks, including FineDiving, MTL-AQA, and JIGSAWS, with superior performance in score prediction and uncertainty calibration. Our code is available at https://abrarmajeedi.github.io/rica2_aqa/
[ { "created": "Sun, 4 Aug 2024 20:35:33 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2024 19:27:12 GMT", "version": "v2" } ]
2024-08-08
[ [ "Majeedi", "Abrar", "" ], [ "Gajjala", "Viswanatha Reddy", "" ], [ "GNVV", "Satya Sai Srinath Namburi", "" ], [ "Li", "Yin", "" ] ]
The ability to quantify how well an action is carried out, also known as action quality assessment (AQA), has attracted recent interest in the vision community. Unfortunately, prior methods often ignore the score rubric used by human experts and fall short of quantifying the uncertainty of the model prediction. To bridge the gap, we present RICA^2 - a deep probabilistic model that integrates score rubric and accounts for prediction uncertainty for AQA. Central to our method are stochastic embeddings of action steps, defined on a graph structure that encodes the score rubric. The embeddings spread probabilistic density in the latent space and allow our method to represent model uncertainty. The graph encodes the scoring criteria, based on which the quality scores can be decoded. We demonstrate that our method establishes new state of the art on public benchmarks, including FineDiving, MTL-AQA, and JIGSAWS, with superior performance in score prediction and uncertainty calibration. Our code is available at https://abrarmajeedi.github.io/rica2_aqa/
1910.06215
Godfred Koi-Akrofi
Godfred Yaw Koi-Akrofi, Eleanor Afful and Henry Akwetey Matey
I.T. Project Success: Practical Frameworks based on key Project Control Variables
15 pages, 7 Figures
Vol.10 No.5 September 2019
10.5121/ijsea.2019.10504
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
The objectives of this study were to investigate the interdependencies of IT project control variables and to develop frameworks that help IT project managers understand how to effectively control these variables to ensure the success of IT projects. The study employed six control variables: Cost, Time (Schedule), Scope, Quality, Risk, and Benefits. A qualitative approach was adopted, where selected IT program and project managers in the Telecom industry in Ghana were interviewed individually and in a group based on a set of questions. The findings, espoused in the frameworks, reiterated the theory of the dependence of one control variable on another, and the fact that varying one affects the others positively or negatively in relation to IT project success, as is the case for the iron triangle. In addition, key activities of the control variables necessary to ensure IT project success were identified.
[ { "created": "Mon, 14 Oct 2019 15:40:04 GMT", "version": "v1" } ]
2019-10-15
[ [ "Koi-Akrofi", "Godfred Yaw", "" ], [ "Afful", "Eleanor", "" ], [ "Matey", "Henry Akwetey", "" ] ]
The objectives of this study were to investigate the interdependencies of IT project control variables and to develop frameworks that help IT project managers understand how to effectively control these variables to ensure the success of IT projects. The study employed six control variables: Cost, Time (Schedule), Scope, Quality, Risk, and Benefits. A qualitative approach was adopted, where selected IT program and project managers in the Telecom industry in Ghana were interviewed individually and in a group based on a set of questions. The findings, espoused in the frameworks, reiterated the theory of the dependence of one control variable on another, and the fact that varying one affects the others positively or negatively in relation to IT project success, as is the case for the iron triangle. In addition, key activities of the control variables necessary to ensure IT project success were identified.
1803.09992
Alberto Perez Veiga
Alberto Perez Veiga
Applications of Artificial Intelligence to Network Security
null
null
null
null
cs.CR cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Attacks on networks are becoming more complex and sophisticated every day. Beyond the so-called script kiddies and hacking newbies, there is a myriad of professional attackers seeking to make serious profits by infiltrating corporate networks. Hostile governments, big corporations and mafias are constantly increasing their resources and skills in cybercrime in order to spy, steal or cause damage more effectively. Traditional approaches to Network Security seem to be hitting their limits, and the need for a smarter approach to threat detection is being recognized. This paper provides an introduction to the need for evolution of Cyber Security techniques and how Artificial Intelligence could help solve some of the problems. It also provides a high-level overview of some state-of-the-art AI Network Security techniques, before analysing the foreseeable future of the application of AI to Network Security.
[ { "created": "Tue, 27 Mar 2018 09:54:30 GMT", "version": "v1" } ]
2018-03-28
[ [ "Veiga", "Alberto Perez", "" ] ]
Attacks on networks are becoming more complex and sophisticated every day. Beyond the so-called script kiddies and hacking newbies, there is a myriad of professional attackers seeking to make serious profits by infiltrating corporate networks. Hostile governments, big corporations and mafias are constantly increasing their resources and skills in cybercrime in order to spy, steal or cause damage more effectively. Traditional approaches to Network Security seem to be hitting their limits, and the need for a smarter approach to threat detection is being recognized. This paper provides an introduction to the need for evolution of Cyber Security techniques and how Artificial Intelligence could help solve some of the problems. It also provides a high-level overview of some state-of-the-art AI Network Security techniques, before analysing the foreseeable future of the application of AI to Network Security.
2212.00139
Irish Mehta
Irish Mehta and Aashal Kamdar
Movie Recommendation System using Composite Ranking
Accepted into the EAI ICISML'22 Conference
null
null
null
cs.IR cs.MM
http://creativecommons.org/licenses/by/4.0/
In today's world, abundant digital content like e-books, movies, videos and articles are available for consumption. It is daunting to review everything accessible and decide what to watch next. Consequently, digital media providers want to capitalise on this confusion and tackle it to increase user engagement, eventually leading to higher revenues. Content providers often utilise recommendation systems as an efficacious approach for combating such information overload. This paper concentrates on developing a synthetic approach for recommending movies. Traditionally, movie recommendation systems use either collaborative filtering, which utilises user interaction with the media, or content-based filtering, which makes use of the movie's available metadata. Technological advancements have also introduced a hybrid technique that integrates both systems. However, our approach deals solely with content-based recommendations, further enhancing it with a ranking algorithm based on content similarity metrics. The three metrics contributing to the ranking are similarity in metadata, visual content, and user reviews of the movies. We use text vectorization followed by cosine similarity for metadata, feature extraction by a pre-trained VGG19 followed by K-means clustering for visual content, and a comparison of sentiments for user reviews. Such a system allows viewers to know movies that "feel" the same.
[ { "created": "Wed, 30 Nov 2022 22:11:43 GMT", "version": "v1" }, { "created": "Mon, 5 Dec 2022 16:32:37 GMT", "version": "v2" } ]
2022-12-06
[ [ "Mehta", "Irish", "" ], [ "Kamdar", "Aashal", "" ] ]
In today's world, abundant digital content like e-books, movies, videos and articles are available for consumption. It is daunting to review everything accessible and decide what to watch next. Consequently, digital media providers want to capitalise on this confusion and tackle it to increase user engagement, eventually leading to higher revenues. Content providers often utilise recommendation systems as an efficacious approach for combating such information overload. This paper concentrates on developing a synthetic approach for recommending movies. Traditionally, movie recommendation systems use either collaborative filtering, which utilises user interaction with the media, or content-based filtering, which makes use of the movie's available metadata. Technological advancements have also introduced a hybrid technique that integrates both systems. However, our approach deals solely with content-based recommendations, further enhancing it with a ranking algorithm based on content similarity metrics. The three metrics contributing to the ranking are similarity in metadata, visual content, and user reviews of the movies. We use text vectorization followed by cosine similarity for metadata, feature extraction by a pre-trained VGG19 followed by K-means clustering for visual content, and a comparison of sentiments for user reviews. Such a system allows viewers to know movies that "feel" the same.
2301.09533
Milo Roucairol
Milo Roucairol and Tristan Cazenave
Solving the HP model with Nested Monte Carlo Search
Accepted to AAAI's workshop AI2ASE 2023: 2nd Annual AAAI Workshop on AI to Accelerate Science and Engineering. 6 pages, 1 for references
null
null
null
cs.AI cs.CE
http://creativecommons.org/licenses/by/4.0/
In this paper we present a new Monte Carlo Search (MCS) algorithm for finding the ground state energy of proteins in the HP-model. We also compare it briefly to other MCS algorithms not usually used on the HP-model and provide an overview of the algorithms used on the HP-model. The algorithm presented in this paper does not beat state-of-the-art algorithms; see PERM (Hsu and Grassberger 2011), REMC (Thachuk, Shmygelska, and Hoos 2007) or WLRE (W\"ust and Landau 2012) for better results. Hsu, H.-P.; and Grassberger, P. 2011. A review of Monte Carlo simulations of polymers with PERM. Journal of Statistical Physics, 144 (3): 597 to 637. Thachuk, C.; Shmygelska, A.; and Hoos, H. H. 2007. A replica exchange Monte Carlo algorithm for protein folding in the HP model. BMC Bioinformatics, 8(1): 342. W\"ust, T.; and Landau, D. P. 2012. Optimized Wang-Landau sampling of lattice polymers: Ground state search and folding thermodynamics of HP model proteins. The Journal of Chemical Physics, 137(6): 064903.
[ { "created": "Mon, 23 Jan 2023 16:35:51 GMT", "version": "v1" }, { "created": "Wed, 25 Jan 2023 15:19:27 GMT", "version": "v2" } ]
2023-01-26
[ [ "Roucairol", "Milo", "" ], [ "Cazenave", "Tristan", "" ] ]
In this paper we present a new Monte Carlo Search (MCS) algorithm for finding the ground state energy of proteins in the HP-model. We also compare it briefly to other MCS algorithms not usually used on the HP-model and provide an overview of the algorithms used on the HP-model. The algorithm presented in this paper does not beat state-of-the-art algorithms; see PERM (Hsu and Grassberger 2011), REMC (Thachuk, Shmygelska, and Hoos 2007) or WLRE (W\"ust and Landau 2012) for better results. Hsu, H.-P.; and Grassberger, P. 2011. A review of Monte Carlo simulations of polymers with PERM. Journal of Statistical Physics, 144 (3): 597 to 637. Thachuk, C.; Shmygelska, A.; and Hoos, H. H. 2007. A replica exchange Monte Carlo algorithm for protein folding in the HP model. BMC Bioinformatics, 8(1): 342. W\"ust, T.; and Landau, D. P. 2012. Optimized Wang-Landau sampling of lattice polymers: Ground state search and folding thermodynamics of HP model proteins. The Journal of Chemical Physics, 137(6): 064903.
1811.00648
Matthias Rottmann
Matthias Rottmann, Pascal Colling, Thomas-Paul Hack, Robin Chan, Fabian H\"uger, Peter Schlicht, Hanno Gottschalk
Prediction Error Meta Classification in Semantic Segmentation: Detection via Aggregated Dispersion Measures of Softmax Probabilities
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method that "meta" classifies whether segments predicted by a semantic segmentation neural network intersect with the ground truth. For this purpose, we employ measures of dispersion for predicted pixel-wise class probability distributions, like classification entropy, that yield heat maps of the input scene's size. We aggregate these dispersion measures segment-wise and derive metrics that are well-correlated with the segment-wise IoU of prediction and ground truth. This procedure yields an almost plug and play post-processing tool to rate the prediction quality of semantic segmentation networks on segment level. This is especially relevant for monitoring neural networks in online applications like automated driving or medical imaging where reliability is of utmost importance. In our tests, we use publicly available state-of-the-art networks trained on the Cityscapes dataset and the BraTS2017 dataset and analyze the predictive power of different metrics as well as different sets of metrics. To this end, we compute logistic LASSO regression fits for the task of classifying IoU=0 vs. IoU>0 per segment and obtain AUROC values of up to 91.55%. We complement these tests with linear regression fits to predict the segment-wise IoU and obtain prediction standard deviations of down to 0.130 as well as $R^2$ values of up to 84.15%. We show that these results clearly outperform standard approaches.
[ { "created": "Thu, 1 Nov 2018 22:00:00 GMT", "version": "v1" }, { "created": "Wed, 2 Oct 2019 14:38:24 GMT", "version": "v2" } ]
2019-10-03
[ [ "Rottmann", "Matthias", "" ], [ "Colling", "Pascal", "" ], [ "Hack", "Thomas-Paul", "" ], [ "Chan", "Robin", "" ], [ "Hüger", "Fabian", "" ], [ "Schlicht", "Peter", "" ], [ "Gottschalk", "Hanno", "" ] ]
We present a method that "meta" classifies whether segments predicted by a semantic segmentation neural network intersect with the ground truth. For this purpose, we employ measures of dispersion for predicted pixel-wise class probability distributions, like classification entropy, that yield heat maps of the input scene's size. We aggregate these dispersion measures segment-wise and derive metrics that are well-correlated with the segment-wise IoU of prediction and ground truth. This procedure yields an almost plug and play post-processing tool to rate the prediction quality of semantic segmentation networks on segment level. This is especially relevant for monitoring neural networks in online applications like automated driving or medical imaging where reliability is of utmost importance. In our tests, we use publicly available state-of-the-art networks trained on the Cityscapes dataset and the BraTS2017 dataset and analyze the predictive power of different metrics as well as different sets of metrics. To this end, we compute logistic LASSO regression fits for the task of classifying IoU=0 vs. IoU>0 per segment and obtain AUROC values of up to 91.55%. We complement these tests with linear regression fits to predict the segment-wise IoU and obtain prediction standard deviations of down to 0.130 as well as $R^2$ values of up to 84.15%. We show that these results clearly outperform standard approaches.
2107.10931
Xiaofeng Liu
Xiaofeng Liu, Bo Hu, Linghao Jin, Xu Han, Fangxu Xing, Jinsong Ouyang, Jun Lu, Georges EL Fakhri, Jonghye Woo
Domain Generalization under Conditional and Label Shifts via Variational Bayesian Inference
30th International Joint Conference on Artificial Intelligence (IJCAI) 2021
null
null
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we propose a domain generalization (DG) approach to learn on several labeled source domains and transfer knowledge to a target domain that is inaccessible in training. Considering the inherent conditional and label shifts, we would expect the alignment of $p(x|y)$ and $p(y)$. However, the widely used domain invariant feature learning (IFL) methods rely on aligning the marginal concept shift w.r.t. $p(x)$, which rests on an unrealistic assumption that $p(y)$ is invariant across domains. We thereby propose a novel variational Bayesian inference framework to enforce the conditional distribution alignment w.r.t. $p(x|y)$ via the prior distribution matching in a latent space, which also takes the marginal label shift w.r.t. $p(y)$ into consideration with the posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to the label shift and the cross-domain accuracy is significantly improved, thereby achieving superior performance over the conventional IFL counterparts.
[ { "created": "Thu, 22 Jul 2021 21:19:12 GMT", "version": "v1" } ]
2021-07-26
[ [ "Liu", "Xiaofeng", "" ], [ "Hu", "Bo", "" ], [ "Jin", "Linghao", "" ], [ "Han", "Xu", "" ], [ "Xing", "Fangxu", "" ], [ "Ouyang", "Jinsong", "" ], [ "Lu", "Jun", "" ], [ "Fakhri", "Georges EL", "" ], [ "Woo", "Jonghye", "" ] ]
In this work, we propose a domain generalization (DG) approach to learn on several labeled source domains and transfer knowledge to a target domain that is inaccessible in training. Considering the inherent conditional and label shifts, we would expect the alignment of $p(x|y)$ and $p(y)$. However, the widely used domain invariant feature learning (IFL) methods rely on aligning the marginal concept shift w.r.t. $p(x)$, which rests on an unrealistic assumption that $p(y)$ is invariant across domains. We thereby propose a novel variational Bayesian inference framework to enforce the conditional distribution alignment w.r.t. $p(x|y)$ via the prior distribution matching in a latent space, which also takes the marginal label shift w.r.t. $p(y)$ into consideration with the posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to the label shift and the cross-domain accuracy is significantly improved, thereby achieving superior performance over the conventional IFL counterparts.
2401.09591
Quan Ze Chen
Kianna Bolante, Kevin Chen, Quan Ze Chen, Amy Zhang
Bringing Social Computing to Secondary School Classrooms
null
null
10.1145/3626252.3630795
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Social computing is the study of how technology shapes human social interactions. This topic has become increasingly relevant to secondary school students (ages 11--18) as more of young people's everyday social experiences take place online, particularly with the continuing effects of the COVID-19 pandemic. However, social computing topics are rarely touched upon in existing middle and high school curricula. We seek to introduce concepts from social computing to secondary school students so they can understand how computing has wide-ranging social implications that touch upon their everyday lives, as well as think critically about both the positive and negative sides of different social technology designs. In this report, we present a series of six lessons combining presentations and hands-on activities covering topics within social computing and detail our experience teaching these lessons to approximately 1,405 students across 13 middle and high schools in our local school district. We developed lessons covering how social computing relates to the topics of Data Management, Encrypted Messaging, Human-Computer Interaction Careers, Machine Learning and Bias, Misinformation, and Online Behavior. We found that 81.13% of students expressed greater interest in the content of our lessons compared to their interest in STEM overall. We also found from pre- and post-lesson comprehension questions that 63.65% learned new concepts from the main activity. We release all lesson materials on a website for public use. From our experience, we observed that students were engaged in these topics and found enjoyment in finding connections between computing and their own lives.
[ { "created": "Wed, 17 Jan 2024 20:40:51 GMT", "version": "v1" } ]
2024-01-19
[ [ "Bolante", "Kianna", "" ], [ "Chen", "Kevin", "" ], [ "Chen", "Quan Ze", "" ], [ "Zhang", "Amy", "" ] ]
Social computing is the study of how technology shapes human social interactions. This topic has become increasingly relevant to secondary school students (ages 11--18) as more of young people's everyday social experiences take place online, particularly with the continuing effects of the COVID-19 pandemic. However, social computing topics are rarely touched upon in existing middle and high school curricula. We seek to introduce concepts from social computing to secondary school students so they can understand how computing has wide-ranging social implications that touch upon their everyday lives, as well as think critically about both the positive and negative sides of different social technology designs. In this report, we present a series of six lessons combining presentations and hands-on activities covering topics within social computing and detail our experience teaching these lessons to approximately 1,405 students across 13 middle and high schools in our local school district. We developed lessons covering how social computing relates to the topics of Data Management, Encrypted Messaging, Human-Computer Interaction Careers, Machine Learning and Bias, Misinformation, and Online Behavior. We found that 81.13% of students expressed greater interest in the content of our lessons compared to their interest in STEM overall. We also found from pre- and post-lesson comprehension questions that 63.65% learned new concepts from the main activity. We release all lesson materials on a website for public use. From our experience, we observed that students were engaged in these topics and found enjoyment in finding connections between computing and their own lives.
2002.09425
Sihao Sun
Sihao Sun, Matthias Baert, Bram Adriaan Strack van Schijndel, Coen de Visser
Upset Recovery Control for Quadrotors Subjected to a Complete Rotor Failure from Large Initial Disturbances
7 pages, 9 figures, accepted by International Conference of Robotics and Automation (ICRA) 2020
IEEE International Conference on Robotics and Automation (ICRA) 2020
10.1109/ICRA40945.2020.9197239
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study has developed a fault-tolerant controller that is able to recover a quadrotor from arbitrary initial orientations and angular velocities, despite the complete failure of a rotor. This cascaded control method includes a position/altitude controller, an almost-global convergence attitude controller, and a control allocation method based on quadratic programming. As a major novelty, a constraint on undesirable angular velocity is derived and fused into the control allocator, which significantly improves the recovery performance. For validation, we have conducted a set of Monte-Carlo simulations to test the reliability of the proposed method in recovering the quadrotor from arbitrary initial attitude/rate conditions. In addition, real-life flight tests have been performed. The results demonstrate that the post-failure quadrotor can recover after being casually tossed into the air.
[ { "created": "Fri, 21 Feb 2020 17:17:28 GMT", "version": "v1" } ]
2020-10-28
[ [ "Sun", "Sihao", "" ], [ "Baert", "Matthias", "" ], [ "van Schijndel", "Bram Adriaan Strack", "" ], [ "de Visser", "Coen", "" ] ]
This study has developed a fault-tolerant controller that is able to recover a quadrotor from arbitrary initial orientations and angular velocities, despite the complete failure of a rotor. This cascaded control method includes a position/altitude controller, an almost-global convergence attitude controller, and a control allocation method based on quadratic programming. As a major novelty, a constraint on undesirable angular velocity is derived and fused into the control allocator, which significantly improves the recovery performance. For validation, we have conducted a set of Monte-Carlo simulations to test the reliability of the proposed method in recovering the quadrotor from arbitrary initial attitude/rate conditions. In addition, real-life flight tests have been performed. The results demonstrate that the post-failure quadrotor can recover after being casually tossed into the air.
1702.04078
Muhammad Bilal
Muhammad Bilal and Shin-Gak Kang
A Cache Management Scheme for Efficient Content Eviction and Replication in Cache Networks
This print includes minor enhancement and corrections to the published journal version of this article in IEEE Access
IEEE Access, Vol. 5, pp. 1692-1701 (2017)
10.1109/ACCESS.2017.2669344
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To cope with the continually changing demands of the internet, 'in-network caching' has been presented as an application-level solution for two decades. With the advent of the information-centric network (ICN) architecture, 'in-network caching' becomes a network-level solution. Some unique features of ICNs, e.g., rapidly changing cache states, higher request arrival rates, smaller cache sizes, and other factors, impose diverse requirements on content eviction policies. In particular, eviction policies should be fast and lightweight. In this study, we propose cache replication and eviction schemes, Conditional Leave Cope Everywhere (CLCE) and Least Frequent Recently Used (LFRU), which are well suited for the ICN type of cache networks (CNs). The CLCE replication scheme reduces the redundant caching of contents and hence improves cache space utilization. LFRU approximates the Least Frequently Used (LFU) scheme coupled with the Least Recently Used (LRU) scheme and is practically implementable for rapidly changing cache networks like ICNs.
[ { "created": "Tue, 14 Feb 2017 04:31:46 GMT", "version": "v1" }, { "created": "Wed, 22 Feb 2017 06:52:20 GMT", "version": "v2" }, { "created": "Thu, 30 Mar 2017 09:02:45 GMT", "version": "v3" }, { "created": "Tue, 27 Jun 2017 02:14:16 GMT", "version": "v4" } ]
2017-06-28
[ [ "Bilal", "Muhammad", "" ], [ "Kang", "Shin-Gak", "" ] ]
To cope with the continually changing demands of the internet, 'in-network caching' has been presented as an application-level solution for two decades. With the advent of the information-centric network (ICN) architecture, 'in-network caching' becomes a network-level solution. Some unique features of ICNs, e.g., rapidly changing cache states, higher request arrival rates, smaller cache sizes, and other factors, impose diverse requirements on content eviction policies. In particular, eviction policies should be fast and lightweight. In this study, we propose cache replication and eviction schemes, Conditional Leave Cope Everywhere (CLCE) and Least Frequent Recently Used (LFRU), which are well suited for the ICN type of cache networks (CNs). The CLCE replication scheme reduces the redundant caching of contents and hence improves cache space utilization. LFRU approximates the Least Frequently Used (LFU) scheme coupled with the Least Recently Used (LRU) scheme and is practically implementable for rapidly changing cache networks like ICNs.
2404.13862
Hao Wang
Hao Wang, Qingshan Xu, Hongyuan Chen, Rui Ma
PGAHum: Prior-Guided Geometry and Appearance Learning for High-Fidelity Animatable Human Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent techniques on implicit geometry representation learning and neural rendering have shown promising results for 3D clothed human reconstruction from sparse video inputs. However, it is still challenging to reconstruct detailed surface geometry and even more difficult to synthesize photorealistic novel views with animated human poses. In this work, we introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction. We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses. First, a prior-based implicit geometry representation of 3D human, which contains a delta SDF predicted by a tri-plane network and a base SDF derived from the prior SMPL model, is proposed to model the surface details and the body shape in a disentangled manner. Second, we introduce a novel prior-guided sampling strategy that fully leverages the prior information of the human pose and body to sample the query points within or near the body surface. By avoiding unnecessary learning in the empty 3D space, the neural rendering can recover more appearance details. Last, we propose a novel iterative backward deformation strategy to progressively find the correspondence for the query point in observation space. A skinning weights prediction model is learned based on the prior provided by the SMPL model to achieve the iterative backward LBS deformation. Extensive quantitative and qualitative comparisons on various datasets are conducted and the results demonstrate the superiority of our framework. Ablation studies also verify the effectiveness of each scheme for geometry and appearance learning.
[ { "created": "Mon, 22 Apr 2024 04:22:30 GMT", "version": "v1" } ]
2024-04-23
[ [ "Wang", "Hao", "" ], [ "Xu", "Qingshan", "" ], [ "Chen", "Hongyuan", "" ], [ "Ma", "Rui", "" ] ]
Recent techniques on implicit geometry representation learning and neural rendering have shown promising results for 3D clothed human reconstruction from sparse video inputs. However, it is still challenging to reconstruct detailed surface geometry and even more difficult to synthesize photorealistic novel views with animated human poses. In this work, we introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction. We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses. First, a prior-based implicit geometry representation of 3D human, which contains a delta SDF predicted by a tri-plane network and a base SDF derived from the prior SMPL model, is proposed to model the surface details and the body shape in a disentangled manner. Second, we introduce a novel prior-guided sampling strategy that fully leverages the prior information of the human pose and body to sample the query points within or near the body surface. By avoiding unnecessary learning in the empty 3D space, the neural rendering can recover more appearance details. Last, we propose a novel iterative backward deformation strategy to progressively find the correspondence for the query point in observation space. A skinning weights prediction model is learned based on the prior provided by the SMPL model to achieve the iterative backward LBS deformation. Extensive quantitative and qualitative comparisons on various datasets are conducted and the results demonstrate the superiority of our framework. Ablation studies also verify the effectiveness of each scheme for geometry and appearance learning.
2012.10548
Richard Marriott
Richard T. Marriott, Sami Romdhani, St\'ephane Gentric and Liming Chen
Robustness of Facial Recognition to GAN-based Face-morphing Attacks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face-morphing attacks have been a cause for concern for a number of years. Striving to remain one step ahead of attackers, researchers have proposed many methods of both creating and detecting morphed images. These detection methods, however, have generally proven to be inadequate. In this work we identify two new, GAN-based methods that an attacker may already have in their arsenal. Each method is evaluated against state-of-the-art facial recognition (FR) algorithms, and we demonstrate that improvements to the fidelity of FR algorithms do lead to a reduction in the success rate of attacks, provided morphed images are considered when setting operational acceptance thresholds.
[ { "created": "Fri, 18 Dec 2020 23:06:20 GMT", "version": "v1" } ]
2020-12-22
[ [ "Marriott", "Richard T.", "" ], [ "Romdhani", "Sami", "" ], [ "Gentric", "Stéphane", "" ], [ "Chen", "Liming", "" ] ]
Face-morphing attacks have been a cause for concern for a number of years. Striving to remain one step ahead of attackers, researchers have proposed many methods of both creating and detecting morphed images. These detection methods, however, have generally proven to be inadequate. In this work we identify two new, GAN-based methods that an attacker may already have in their arsenal. Each method is evaluated against state-of-the-art facial recognition (FR) algorithms, and we demonstrate that improvements to the fidelity of FR algorithms do lead to a reduction in the success rate of attacks, provided morphed images are considered when setting operational acceptance thresholds.
2301.10681
Philipp Wiesner
Thorsten Wittkopp, Dominik Scheinert, Philipp Wiesner, Alexander Acker, Odej Kao
PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning
published in the proceedings of the 56th Hawaii International Conference on System Sciences (HICSS 2023)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the complexity of modern IT services, failures can be manifold, occur at any stage, and are hard to detect. For this reason, anomaly detection applied to monitoring data such as logs allows gaining relevant insights to improve IT services steadily and eradicate failures. However, existing anomaly detection methods that provide high accuracy often rely on labeled training data, which are time-consuming to obtain in practice. Therefore, we propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows provided by monitoring systems instead of labeled data. Our attention-based model uses a novel objective function for weak supervision deep learning that accounts for imbalanced data and applies an iterative learning strategy for positive and unknown samples (PU learning) to identify anomalous logs. Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets and detects anomalous log messages with an F1-score of more than 0.99 even within imprecise failure time windows.
[ { "created": "Wed, 25 Jan 2023 16:34:43 GMT", "version": "v1" } ]
2023-01-26
[ [ "Wittkopp", "Thorsten", "" ], [ "Scheinert", "Dominik", "" ], [ "Wiesner", "Philipp", "" ], [ "Acker", "Alexander", "" ], [ "Kao", "Odej", "" ] ]
Due to the complexity of modern IT services, failures can be manifold, occur at any stage, and are hard to detect. For this reason, anomaly detection applied to monitoring data such as logs allows gaining relevant insights to improve IT services steadily and eradicate failures. However, existing anomaly detection methods that provide high accuracy often rely on labeled training data, which are time-consuming to obtain in practice. Therefore, we propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows provided by monitoring systems instead of labeled data. Our attention-based model uses a novel objective function for weak supervision deep learning that accounts for imbalanced data and applies an iterative learning strategy for positive and unknown samples (PU learning) to identify anomalous logs. Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets and detects anomalous log messages with an F1-score of more than 0.99 even within imprecise failure time windows.
2104.09754
Nao Uehara
Nao Uehara, Teruaki Hayashi, Yukio Ohsawa
Hierarchical entropy and domain interaction to understand the structure in an image
20pages,17figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this study, we devise a model that introduces two hierarchies into information entropy. The two hierarchies are the size of the region for which entropy is calculated and the size of the component that determines whether the structures in the image are integrated or not. The model uses two indicators, hierarchical entropy and domain interaction, both of which increase or decrease due to the integration or fragmentation of the structure in the image. It aims to help people interpret and explain what the structure in an image looks like from two indicators that change with the size of the region and the component. First, we conduct experiments using images and qualitatively evaluate how the two indicators change. Next, we explain the relationship with the hidden structure of Vermeer's Girl with a Pearl Earring using the change of hierarchical entropy. Finally, we clarify the relationship between the change of domain interaction and the appropriate segmentation result of the image through a questionnaire-based experiment.
[ { "created": "Tue, 20 Apr 2021 04:29:13 GMT", "version": "v1" } ]
2021-04-21
[ [ "Uehara", "Nao", "" ], [ "Hayashi", "Teruaki", "" ], [ "Ohsawa", "Yukio", "" ] ]
In this study, we devise a model that introduces two hierarchies into information entropy. The two hierarchies are the size of the region for which entropy is calculated and the size of the component that determines whether the structures in the image are integrated or not. The model uses two indicators, hierarchical entropy and domain interaction, both of which increase or decrease due to the integration or fragmentation of the structure in the image. It aims to help people interpret and explain what the structure in an image looks like from two indicators that change with the size of the region and the component. First, we conduct experiments using images and qualitatively evaluate how the two indicators change. Next, we explain the relationship with the hidden structure of Vermeer's Girl with a Pearl Earring using the change of hierarchical entropy. Finally, we clarify the relationship between the change of domain interaction and the appropriate segmentation result of the image through a questionnaire-based experiment.
2310.07147
Zhikai Li
Zhikai Li, Xiaoxuan Liu, Banghua Zhu, Zhen Dong, Qingyi Gu, Kurt Keutzer
QFT: Quantized Full-parameter Tuning of LLMs with Affordable Resources
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models (LLMs) have showcased remarkable impacts across a wide spectrum of natural language processing tasks. Fine-tuning these pre-trained models on downstream datasets provides further significant performance gains, but this process has been challenging due to its extraordinary resource requirements. To this end, existing efforts focus on parameter-efficient fine-tuning, which, unfortunately, fail to capitalize on the powerful potential of full-parameter fine-tuning. In this work, we propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs that enables memory-efficient fine-tuning without harming performance. Our framework incorporates two novel ideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the momentum and has consistent update magnitudes for each parameter, an inherent advantage for robust quantization; and (ii) we quantize all model states and store them as integer values, and present a gradient flow and parameter update scheme for the quantized weights. As a result, QFT reduces the model state memory to 21% of the standard solution while achieving comparable performance, e.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a single A6000 GPU.
[ { "created": "Wed, 11 Oct 2023 02:47:40 GMT", "version": "v1" } ]
2023-10-12
[ [ "Li", "Zhikai", "" ], [ "Liu", "Xiaoxuan", "" ], [ "Zhu", "Banghua", "" ], [ "Dong", "Zhen", "" ], [ "Gu", "Qingyi", "" ], [ "Keutzer", "Kurt", "" ] ]
Large Language Models (LLMs) have showcased remarkable impacts across a wide spectrum of natural language processing tasks. Fine-tuning these pre-trained models on downstream datasets provides further significant performance gains, but this process has been challenging due to its extraordinary resource requirements. To this end, existing efforts focus on parameter-efficient fine-tuning, which, unfortunately, fail to capitalize on the powerful potential of full-parameter fine-tuning. In this work, we propose QFT, a novel Quantized Full-parameter Tuning framework for LLMs that enables memory-efficient fine-tuning without harming performance. Our framework incorporates two novel ideas: (i) we adopt the efficient Lion optimizer, which only keeps track of the momentum and has consistent update magnitudes for each parameter, an inherent advantage for robust quantization; and (ii) we quantize all model states and store them as integer values, and present a gradient flow and parameter update scheme for the quantized weights. As a result, QFT reduces the model state memory to 21% of the standard solution while achieving comparable performance, e.g., tuning a LLaMA-7B model requires only <30GB of memory, satisfied by a single A6000 GPU.
2012.06209
Lynnette Hui Xian Ng
Chua Hao Yang, Yong Shan Jie, Boon Kok Chin, Lander Chin, Lynnette Hui Xian Ng
KOSMOS: Knowledge-graph Oriented Social media and Mainstream media Overview System
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
We introduce KOSMOS, a knowledge retrieval system based on a knowledge graph constructed from social media and mainstream media documents. The system first identifies key events from the documents at each time frame through clustering, extracts a document to represent each cluster, then describes that document in terms of 5W1H (Who, What, When, Where, Why, How). The event-centric knowledge graph is enhanced by relation triplets and entity disambiguation from the representative document. This knowledge retrieval is supported by a web interface that presents a graph visualisation of related nodes and relevant articles based on a user query. The interface facilitates understanding of the relationships between events reported in mainstream and social media journalism through the KOSMOS information extraction pipeline, which is valuable for understanding media slant and public opinion. Finally, we explore a use case in extracting events and relations from documents to understand the media's and community's views of the 2020 COVID-19 pandemic.
[ { "created": "Fri, 11 Dec 2020 09:36:06 GMT", "version": "v1" }, { "created": "Thu, 17 Dec 2020 13:48:17 GMT", "version": "v2" } ]
2020-12-18
[ [ "Yang", "Chua Hao", "" ], [ "Jie", "Yong Shan", "" ], [ "Chin", "Boon Kok", "" ], [ "Chin", "Lander", "" ], [ "Ng", "Lynnette Hui Xian", "" ] ]
We introduce KOSMOS, a knowledge retrieval system based on a knowledge graph constructed from social media and mainstream media documents. The system first identifies key events from the documents at each time frame through clustering, extracts a document to represent each cluster, then describes that document in terms of 5W1H (Who, What, When, Where, Why, How). The event-centric knowledge graph is enhanced by relation triplets and entity disambiguation from the representative document. This knowledge retrieval is supported by a web interface that presents a graph visualisation of related nodes and relevant articles based on a user query. The interface facilitates understanding of the relationships between events reported in mainstream and social media journalism through the KOSMOS information extraction pipeline, which is valuable for understanding media slant and public opinion. Finally, we explore a use case in extracting events and relations from documents to understand the media's and community's views of the 2020 COVID-19 pandemic.
2001.09001
Priyabrata Saha
Priyabrata Saha, Arslan Ali, Burhan A. Mudassar, Yun Long, and Saibal Mukhopadhyay
MagNet: Discovering Multi-agent Interaction Dynamics using Neural Network
Accepted manuscript by ICRA 2020
ICRA 2020, pp. 8158-8164
10.1109/ICRA40945.2020.9196846
null
cs.LG cs.MA cs.RO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the MagNet, a neural network-based multi-agent interaction model to discover the governing dynamics and predict evolution of a complex multi-agent system from observations. We formulate a multi-agent system as a coupled non-linear network with a generic ordinary differential equation (ODE) based state evolution, and develop a neural network-based realization of its time-discretized model. MagNet is trained to discover the core dynamics of a multi-agent system from observations, and tuned on-line to learn agent-specific parameters of the dynamics to ensure accurate prediction even when physical or relational attributes of agents, or number of agents change. We evaluate MagNet on a point-mass system in two-dimensional space, Kuramoto phase synchronization dynamics and predator-swarm interaction dynamics demonstrating orders of magnitude improvement in prediction accuracy over traditional deep learning models.
[ { "created": "Fri, 24 Jan 2020 13:41:01 GMT", "version": "v1" }, { "created": "Tue, 3 Mar 2020 21:17:45 GMT", "version": "v2" } ]
2020-10-01
[ [ "Saha", "Priyabrata", "" ], [ "Ali", "Arslan", "" ], [ "Mudassar", "Burhan A.", "" ], [ "Long", "Yun", "" ], [ "Mukhopadhyay", "Saibal", "" ] ]
We present the MagNet, a neural network-based multi-agent interaction model to discover the governing dynamics and predict evolution of a complex multi-agent system from observations. We formulate a multi-agent system as a coupled non-linear network with a generic ordinary differential equation (ODE) based state evolution, and develop a neural network-based realization of its time-discretized model. MagNet is trained to discover the core dynamics of a multi-agent system from observations, and tuned on-line to learn agent-specific parameters of the dynamics to ensure accurate prediction even when physical or relational attributes of agents, or number of agents change. We evaluate MagNet on a point-mass system in two-dimensional space, Kuramoto phase synchronization dynamics and predator-swarm interaction dynamics demonstrating orders of magnitude improvement in prediction accuracy over traditional deep learning models.
1911.08764
Matteo Testa
Matteo Testa, Arslan Ali, Tiziano Bianchi, Enrico Magli
Learning mappings onto regularized latent spaces for biometric authentication
Accepted at IEEE MMSP 2019
null
null
null
cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel architecture for generic biometric authentication based on deep neural networks: RegNet. Unlike other methods, RegNet learns a mapping of the input biometric traits onto a target distribution in a well-behaved space in which users can be separated by means of simple and tunable boundaries. More specifically, authorized and unauthorized users are mapped onto two different, well-behaved Gaussian distributions. The novel approach of learning the mapping instead of the boundaries further avoids the problem encountered in typical classifiers, for which the learnt boundaries may be complex and difficult to analyze. RegNet achieves high performance in terms of security metrics such as Equal Error Rate (EER), False Acceptance Rate (FAR), and Genuine Acceptance Rate (GAR). The experiments we conducted on publicly available face and fingerprint datasets confirm the effectiveness of the proposed system.
[ { "created": "Wed, 20 Nov 2019 08:40:44 GMT", "version": "v1" } ]
2019-11-21
[ [ "Testa", "Matteo", "" ], [ "Ali", "Arslan", "" ], [ "Bianchi", "Tiziano", "" ], [ "Magli", "Enrico", "" ] ]
We propose a novel architecture for generic biometric authentication based on deep neural networks: RegNet. Unlike other methods, RegNet learns a mapping of the input biometric traits onto a target distribution in a well-behaved space in which users can be separated by means of simple and tunable boundaries. More specifically, authorized and unauthorized users are mapped onto two different, well-behaved Gaussian distributions. The novel approach of learning the mapping instead of the boundaries further avoids the problem encountered in typical classifiers, for which the learnt boundaries may be complex and difficult to analyze. RegNet achieves high performance in terms of security metrics such as Equal Error Rate (EER), False Acceptance Rate (FAR), and Genuine Acceptance Rate (GAR). The experiments we conducted on publicly available face and fingerprint datasets confirm the effectiveness of the proposed system.
1803.08696
Saritha S
S Saritha, G Santhosh Kumar
An Incremental Boolean Tensor Factorization approach to model Change Patterns of Objects in Images
This work is not submitted to any journals/conferences
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The change detection process has recently progressed from a post-classification method to an expert knowledge interpretation process of time-series data. The technique finds applications mainly in remote sensing images and can be utilized to analyze urbanization and monitor forest regions. In this paper, a framework to perform a knowledge-based interpretation of the changes/no changes observed in a spatiotemporal domain using tensor-based approaches is presented. An incremental approach to the Boolean Tensor Factorization method is proposed in this work, which is adopted to model the change patterns of objects/classes as well as their associated features. The framework is evaluated on different datasets to visualize its performance with respect to the dependency factors. The algorithm is also validated in comparison with the traditional Boolean Tensor Factorization method, and the results are substantial.
[ { "created": "Fri, 23 Mar 2018 09:08:35 GMT", "version": "v1" } ]
2018-03-26
[ [ "Saritha", "S", "" ], [ "Kumar", "G Santhosh", "" ] ]
The change detection process has recently progressed from a post-classification method to an expert knowledge interpretation process of time-series data. The technique finds applications mainly in remote sensing images and can be utilized to analyze urbanization and monitor forest regions. In this paper, a framework to perform a knowledge-based interpretation of the changes/no changes observed in a spatiotemporal domain using tensor-based approaches is presented. An incremental approach to the Boolean Tensor Factorization method is proposed in this work, which is adopted to model the change patterns of objects/classes as well as their associated features. The framework is evaluated on different datasets to visualize its performance with respect to the dependency factors. The algorithm is also validated in comparison with the traditional Boolean Tensor Factorization method, and the results are substantial.
2001.04574
Joshua I. James
Min Jin Park, Joshua I. James
Preliminary Study of a Google Home Mini
12 pages, 6 figures, 3 tables
Journal of Digital Forensics 13-3: 163-174 (2019). https://kdfs.jams.or.kr/jams/download/KCI_FI002513079.pdf
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many artificial intelligence (AI) speakers have recently come to market. Beginning with the Amazon Echo, many companies have produced their own speaker technologies. Due to the limitations of the technology, most speakers have similar functions, but the way each speaker handles data is different. In the case of the Amazon Echo, the cloud API is open for any developer to build upon. The Amazon Echo has been around for a while, and much research has been done on it. However, not much research has been done on Google Home Mini analysis for digital investigations. In this paper, we conduct some initial research on the data storage and security methods of the Google Home Mini.
[ { "created": "Tue, 14 Jan 2020 00:12:04 GMT", "version": "v1" } ]
2020-01-15
[ [ "Park", "Min Jin", "" ], [ "James", "Joshua I.", "" ] ]
Many artificial intelligence (AI) speakers have recently come to market. Beginning with the Amazon Echo, many companies have produced their own speaker technologies. Due to the limitations of the technology, most speakers have similar functions, but the way each speaker handles data is different. In the case of the Amazon Echo, the cloud API is open for any developer to build upon. The Amazon Echo has been around for a while, and much research has been done on it. However, not much research has been done on Google Home Mini analysis for digital investigations. In this paper, we conduct some initial research on the data storage and security methods of the Google Home Mini.
0910.3033
Ersen Ekrem
Ersen Ekrem and Sennur Ulukus
Degraded Compound Multi-receiver Wiretap Channels
Submitted to IEEE Transactions on Information Theory, October 2009
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the degraded compound multi-receiver wiretap channel. The degraded compound multi-receiver wiretap channel consists of two groups of users and a group of eavesdroppers, where, if we pick an arbitrary user from each group of users and an arbitrary eavesdropper, they satisfy a certain Markov chain. We study two different communication scenarios for this channel. In the first scenario, the transmitter wants to send a confidential message to users in the first (stronger) group and a different confidential message to users in the second (weaker) group, where both messages need to be kept confidential from the eavesdroppers. For this scenario, we assume that there is only one eavesdropper. We obtain the secrecy capacity region for the general discrete memoryless channel model, the parallel channel model, and the Gaussian parallel channel model. For the Gaussian multiple-input multiple-output (MIMO) channel model, we obtain the secrecy capacity region when there is only one user in the second group. In the second scenario we study, the transmitter sends a confidential message to users in the first group which needs to be kept confidential from the second group of users and the eavesdroppers. Furthermore, the transmitter sends a different confidential message to users in the second group which needs to be kept confidential only from the eavesdroppers. For this scenario, we do not put any restriction on the number of eavesdroppers. As in the first scenario, we obtain the secrecy capacity region for the general discrete memoryless channel model, the parallel channel model, and the Gaussian parallel channel model. For the Gaussian MIMO channel model, we establish the secrecy capacity region when there is only one user in the second group.
[ { "created": "Fri, 16 Oct 2009 04:00:24 GMT", "version": "v1" } ]
2009-10-19
[ [ "Ekrem", "Ersen", "" ], [ "Ulukus", "Sennur", "" ] ]
In this paper, we study the degraded compound multi-receiver wiretap channel. The degraded compound multi-receiver wiretap channel consists of two groups of users and a group of eavesdroppers, where, if we pick an arbitrary user from each group of users and an arbitrary eavesdropper, they satisfy a certain Markov chain. We study two different communication scenarios for this channel. In the first scenario, the transmitter wants to send a confidential message to users in the first (stronger) group and a different confidential message to users in the second (weaker) group, where both messages need to be kept confidential from the eavesdroppers. For this scenario, we assume that there is only one eavesdropper. We obtain the secrecy capacity region for the general discrete memoryless channel model, the parallel channel model, and the Gaussian parallel channel model. For the Gaussian multiple-input multiple-output (MIMO) channel model, we obtain the secrecy capacity region when there is only one user in the second group. In the second scenario we study, the transmitter sends a confidential message to users in the first group which needs to be kept confidential from the second group of users and the eavesdroppers. Furthermore, the transmitter sends a different confidential message to users in the second group which needs to be kept confidential only from the eavesdroppers. For this scenario, we do not put any restriction on the number of eavesdroppers. As in the first scenario, we obtain the secrecy capacity region for the general discrete memoryless channel model, the parallel channel model, and the Gaussian parallel channel model. For the Gaussian MIMO channel model, we establish the secrecy capacity region when there is only one user in the second group.
1909.08726
Sergey Loyka
S. Loyka, M. Khojastehnia
Comments on "On Favorable Propagation in Massive MIMO Systems and Different Antenna Configurations" [1]
submitted for publication
IEEE Access, vol. 7, pp. 185369-185372, Dec. 2019
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is shown that the condition of Theorem 1 in [1] never holds in practice and that Theorem 2 is incorrect under the stated condition. Extra assumptions and/or modifications, which are provided below, are needed to make the conclusions of Theorems 1 and 2 valid.
[ { "created": "Wed, 18 Sep 2019 22:21:58 GMT", "version": "v1" } ]
2020-02-20
[ [ "Loyka", "S.", "" ], [ "Khojastehnia", "M.", "" ] ]
It is shown that the condition of Theorem 1 in [1] never holds in practice and that Theorem 2 is incorrect under the stated condition. Extra assumptions and/or modifications, which are provided below, are needed to make the conclusions of Theorems 1 and 2 valid.
2310.09767
Jiwan Chung
Jiwan Chung, Youngjae Yu
VLIS: Unimodal Language Models Guide Multimodal Language Generation
Accepted as main paper in EMNLP 2023
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VLIS), a novel framework that combines the visual conditioning capability of vision-language models with the language understanding of unimodal text-only language models without further training. It extracts pointwise mutual information of each image and text from a visual-language model and uses the value as an importance sampling weight to adjust the token likelihood from a text-only model. VLIS improves vision-language models on diverse tasks, including commonsense understanding (WHOOPS, OK-VQA, and ScienceQA) and complex text generation (Concadia, Image Paragraph Captioning, and ROCStories). Our results suggest that VLIS represents a promising new direction for multimodal language generation.
[ { "created": "Sun, 15 Oct 2023 07:58:52 GMT", "version": "v1" }, { "created": "Tue, 19 Dec 2023 13:01:50 GMT", "version": "v2" } ]
2023-12-20
[ [ "Chung", "Jiwan", "" ], [ "Yu", "Youngjae", "" ] ]
Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VLIS), a novel framework that combines the visual conditioning capability of vision-language models with the language understanding of unimodal text-only language models without further training. It extracts pointwise mutual information of each image and text from a visual-language model and uses the value as an importance sampling weight to adjust the token likelihood from a text-only model. VLIS improves vision-language models on diverse tasks, including commonsense understanding (WHOOPS, OK-VQA, and ScienceQA) and complex text generation (Concadia, Image Paragraph Captioning, and ROCStories). Our results suggest that VLIS represents a promising new direction for multimodal language generation.
2406.13499
Tao Wu
Tao Wu, Xinwen Cao, Chao Wang, Shaojie Qiao, Xingping Xian, Lin Yuan, Canyixing Cui, Yanbing Liu
GraphMU: Repairing Robustness of Graph Neural Networks via Machine Unlearning
null
null
null
null
cs.SI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) have demonstrated significant application potential in various fields. However, GNNs are still vulnerable to adversarial attacks. Numerous adversarial defense methods for GNNs have been proposed to address the problem of adversarial attacks. However, these methods can only serve as a defense before poisoning and cannot repair a poisoned GNN. Therefore, there is an urgent need for a method to repair poisoned GNNs. In this paper, we address this gap by introducing the novel concept of model repair for GNNs. We propose a repair framework, Repairing Robustness of Graph Neural Networks via Machine Unlearning (GraphMU), which aims to fine-tune a poisoned GNN to forget adversarial samples without the need for complete retraining. We also introduce an unlearning validation method to ensure that our approach effectively forgets the specified poisoned data. To evaluate the effectiveness of GraphMU, we explore three fine-tuned subgraph construction scenarios based on the available perturbation information: (i) Known Perturbation Ratios, (ii) Known Complete Knowledge of Perturbations, and (iii) Unknown any Knowledge of Perturbations. Our extensive experiments, conducted across four citation datasets and four adversarial attack scenarios, demonstrate that GraphMU can effectively restore the performance of a poisoned GNN.
[ { "created": "Wed, 19 Jun 2024 12:41:15 GMT", "version": "v1" } ]
2024-06-21
[ [ "Wu", "Tao", "" ], [ "Cao", "Xinwen", "" ], [ "Wang", "Chao", "" ], [ "Qiao", "Shaojie", "" ], [ "Xian", "Xingping", "" ], [ "Yuan", "Lin", "" ], [ "Cui", "Canyixing", "" ], [ "Liu", "Yanbing", "" ] ]
Graph Neural Networks (GNNs) have demonstrated significant application potential in various fields. However, GNNs are still vulnerable to adversarial attacks. Numerous adversarial defense methods for GNNs have been proposed to address the problem of adversarial attacks. However, these methods can only serve as a defense before poisoning and cannot repair a poisoned GNN. Therefore, there is an urgent need for a method to repair poisoned GNNs. In this paper, we address this gap by introducing the novel concept of model repair for GNNs. We propose a repair framework, Repairing Robustness of Graph Neural Networks via Machine Unlearning (GraphMU), which aims to fine-tune a poisoned GNN to forget adversarial samples without the need for complete retraining. We also introduce an unlearning validation method to ensure that our approach effectively forgets the specified poisoned data. To evaluate the effectiveness of GraphMU, we explore three fine-tuned subgraph construction scenarios based on the available perturbation information: (i) Known Perturbation Ratios, (ii) Known Complete Knowledge of Perturbations, and (iii) Unknown any Knowledge of Perturbations. Our extensive experiments, conducted across four citation datasets and four adversarial attack scenarios, demonstrate that GraphMU can effectively restore the performance of a poisoned GNN.
2202.10650
Shixing Chen
Shixing Chen, Chun-Hao Liu, Xiang Hao, Xiaohan Nie, Maxim Arap, Raffay Hamid
Movies2Scenes: Using Movie Metadata to Learn Scene Representation
Accepted to CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Understanding scenes in movies is crucial for a variety of applications such as video moderation, search, and recommendation. However, labeling individual scenes is a time-consuming process. In contrast, movie-level metadata (e.g., genre, synopsis, etc.) regularly gets produced as part of the film production process, and is therefore significantly more commonly available. In this work, we propose a novel contrastive learning approach that uses movie metadata to learn a general-purpose scene representation. Specifically, we use movie metadata to define a measure of movie similarity, and use it during contrastive learning to limit our search for positive scene-pairs to only the movies that are considered similar to each other. Our learned scene representation consistently outperforms existing state-of-the-art methods on a diverse set of tasks evaluated using multiple benchmark datasets. Notably, our learned representation offers an average improvement of 7.9% on the seven classification tasks and 9.7% improvement on the two regression tasks in the LVU dataset. Furthermore, using a newly collected movie dataset, we present comparative results of our scene representation on a set of video moderation tasks to demonstrate its generalizability on previously less explored tasks.
[ { "created": "Tue, 22 Feb 2022 03:31:33 GMT", "version": "v1" }, { "created": "Sat, 12 Mar 2022 03:08:46 GMT", "version": "v2" }, { "created": "Thu, 30 Mar 2023 00:51:47 GMT", "version": "v3" } ]
2023-03-31
[ [ "Chen", "Shixing", "" ], [ "Liu", "Chun-Hao", "" ], [ "Hao", "Xiang", "" ], [ "Nie", "Xiaohan", "" ], [ "Arap", "Maxim", "" ], [ "Hamid", "Raffay", "" ] ]
Understanding scenes in movies is crucial for a variety of applications such as video moderation, search, and recommendation. However, labeling individual scenes is a time-consuming process. In contrast, movie-level metadata (e.g., genre, synopsis, etc.) regularly gets produced as part of the film production process, and is therefore significantly more commonly available. In this work, we propose a novel contrastive learning approach that uses movie metadata to learn a general-purpose scene representation. Specifically, we use movie metadata to define a measure of movie similarity, and use it during contrastive learning to limit our search for positive scene-pairs to only the movies that are considered similar to each other. Our learned scene representation consistently outperforms existing state-of-the-art methods on a diverse set of tasks evaluated using multiple benchmark datasets. Notably, our learned representation offers an average improvement of 7.9% on the seven classification tasks and 9.7% improvement on the two regression tasks in the LVU dataset. Furthermore, using a newly collected movie dataset, we present comparative results of our scene representation on a set of video moderation tasks to demonstrate its generalizability on previously less explored tasks.
2309.03315
David D'Ambrosio
David B. D'Ambrosio, Jonathan Abelian, Saminda Abeyruwan, Michael Ahn, Alex Bewley, Justin Boyd, Krzysztof Choromanski, Omar Cortes, Erwin Coumans, Tianli Ding, Wenbo Gao, Laura Graesser, Atil Iscen, Navdeep Jaitly, Deepali Jain, Juhana Kangaspunta, Satoshi Kataoka, Gus Kouretas, Yuheng Kuang, Nevena Lazic, Corey Lynch, Reza Mahjourian, Sherry Q. Moore, Thinh Nguyen, Ken Oslund, Barney J Reed, Krista Reymann, Pannag R. Sanketi, Anish Shankar, Pierre Sermanet, Vikas Sindhwani, Avi Singh, Vincent Vanhoucke, Grace Vesom, and Peng Xu
Robotic Table Tennis: A Case Study into a High Speed Learning System
Published and presented at Robotics: Science and Systems (RSS2023)
null
10.15607/RSS.2023.XIX.006
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
We present a deep-dive into a real-world robotic learning system that, in previous work, was shown to be capable of hundreds of table tennis rallies with a human and has the ability to precisely return the ball to desired targets. This system puts together a highly optimized perception subsystem, a high-speed low-latency robot controller, a simulation paradigm that can prevent damage in the real world and also train policies for zero-shot transfer, and automated real world environment resets that enable autonomous training and evaluation on physical robots. We complement a complete system description, including numerous design decisions that are typically not widely disseminated, with a collection of studies that clarify the importance of mitigating various sources of latency, accounting for training and deployment distribution shifts, robustness of the perception system, sensitivity to policy hyper-parameters, and choice of action space. A video demonstrating the components of the system and details of experimental results can be found at https://youtu.be/uFcnWjB42I0.
[ { "created": "Wed, 6 Sep 2023 18:56:20 GMT", "version": "v1" } ]
2023-09-19
[ [ "D'Ambrosio", "David B.", "" ], [ "Abelian", "Jonathan", "" ], [ "Abeyruwan", "Saminda", "" ], [ "Ahn", "Michael", "" ], [ "Bewley", "Alex", "" ], [ "Boyd", "Justin", "" ], [ "Choromanski", "Krzysztof", "" ], [ "Cortes", "Omar", "" ], [ "Coumans", "Erwin", "" ], [ "Ding", "Tianli", "" ], [ "Gao", "Wenbo", "" ], [ "Graesser", "Laura", "" ], [ "Iscen", "Atil", "" ], [ "Jaitly", "Navdeep", "" ], [ "Jain", "Deepali", "" ], [ "Kangaspunta", "Juhana", "" ], [ "Kataoka", "Satoshi", "" ], [ "Kouretas", "Gus", "" ], [ "Kuang", "Yuheng", "" ], [ "Lazic", "Nevena", "" ], [ "Lynch", "Corey", "" ], [ "Mahjourian", "Reza", "" ], [ "Moore", "Sherry Q.", "" ], [ "Nguyen", "Thinh", "" ], [ "Oslund", "Ken", "" ], [ "Reed", "Barney J", "" ], [ "Reymann", "Krista", "" ], [ "Sanketi", "Pannag R.", "" ], [ "Shankar", "Anish", "" ], [ "Sermanet", "Pierre", "" ], [ "Sindhwani", "Vikas", "" ], [ "Singh", "Avi", "" ], [ "Vanhoucke", "Vincent", "" ], [ "Vesom", "Grace", "" ], [ "Xu", "Peng", "" ] ]
We present a deep-dive into a real-world robotic learning system that, in previous work, was shown to be capable of hundreds of table tennis rallies with a human and has the ability to precisely return the ball to desired targets. This system puts together a highly optimized perception subsystem, a high-speed low-latency robot controller, a simulation paradigm that can prevent damage in the real world and also train policies for zero-shot transfer, and automated real world environment resets that enable autonomous training and evaluation on physical robots. We complement a complete system description, including numerous design decisions that are typically not widely disseminated, with a collection of studies that clarify the importance of mitigating various sources of latency, accounting for training and deployment distribution shifts, robustness of the perception system, sensitivity to policy hyper-parameters, and choice of action space. A video demonstrating the components of the system and details of experimental results can be found at https://youtu.be/uFcnWjB42I0.
1709.07528
Kenneth Hess
Kenneth L. Hess, Hugo D. Paz
Defining a Lingua Franca to Open the Black Box of a Na\"ive Bayes Recommender
null
null
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many AI systems have a black box nature that makes it difficult to understand how they make their recommendations. This can be unsettling, as the designer cannot be certain how the system will respond to novelty. To penetrate our Na\"ive Bayes recommender's black box, we first asked, what do we want to know from our system, and how can it be obtained? The answers led us to recursively define a common lexicon with the AI, a lingua franca, using the very items that the system ranks to create meta-symbols recognized by the system, and enabling us to understand the system's knowledge in plain terms and at different levels of abstraction. As one bonus, using its existing knowledge, the lingua franca can enable the system to extend recommendations to related, but entirely new areas, ameliorating the cold start problem. We also supplement the lingua franca with techniques for visualizing the system's knowledge state, develop metrics for evaluating the meaningfulness of terms in the lingua franca, and generalize the requirements for developing a similar lingua franca in other applications.
[ { "created": "Thu, 21 Sep 2017 22:06:26 GMT", "version": "v1" } ]
2017-09-25
[ [ "Hess", "Kenneth L.", "" ], [ "Paz", "Hugo D.", "" ] ]
Many AI systems have a black box nature that makes it difficult to understand how they make their recommendations. This can be unsettling, as the designer cannot be certain how the system will respond to novelty. To penetrate our Na\"ive Bayes recommender's black box, we first asked, what do we want to know from our system, and how can it be obtained? The answers led us to recursively define a common lexicon with the AI, a lingua franca, using the very items that the system ranks to create meta-symbols recognized by the system, and enabling us to understand the system's knowledge in plain terms and at different levels of abstraction. As one bonus, using its existing knowledge, the lingua franca can enable the system to extend recommendations to related, but entirely new areas, ameliorating the cold start problem. We also supplement the lingua franca with techniques for visualizing the system's knowledge state, develop metrics for evaluating the meaningfulness of terms in the lingua franca, and generalize the requirements for developing a similar lingua franca in other applications.
2309.16034
Filip Lemic
Guillem Pascual, Filip Lemic, Carmen Delgado, Xavier Costa-Perez
Analytical Modelling of Raw Data for Flow-Guided In-body Nanoscale Localization
6 pages, 7 figures, 4 tables, 16 references. arXiv admin note: substantial text overlap with arXiv:2307.05551
null
null
null
cs.ET cs.IR cs.LG cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
Advancements in nanotechnology and material science are paving the way toward nanoscale devices that combine sensing, computing, data and energy storage, and wireless communication. In precision medicine, these nanodevices show promise for disease diagnostics, treatment, and monitoring from within the patients' bloodstreams. Assigning the location of a sensed biological event to the event itself, which is the main proposition of flow-guided in-body nanoscale localization, would be immensely beneficial from the perspective of precision medicine. The nanoscale nature of the nanodevices and the challenging environment that the bloodstream represents result in current flow-guided localization approaches being constrained in their communication and energy-related capabilities. The communication and energy constraints of the nanodevices result in different features of the raw data for flow-guided localization, in turn affecting its performance. An analytical model of the effects of imperfect communication and constrained energy, which cause intermittent operation of the nanodevices, on the raw data produced by the nanodevices would be beneficial. Hence, we propose an analytical model of raw data for flow-guided localization, where the raw data is modeled as a function of the communication and energy-related capabilities of the nanodevice. We evaluate the model by comparing its output with that obtained through a simulator for objective evaluation of flow-guided localization, which features a comparably higher level of realism. Our results across a number of scenarios and heterogeneous performance metrics indicate a high similarity between the model- and simulator-generated raw datasets.
[ { "created": "Wed, 27 Sep 2023 21:26:01 GMT", "version": "v1" }, { "created": "Mon, 22 Jan 2024 11:26:35 GMT", "version": "v2" } ]
2024-02-27
[ [ "Pascual", "Guillem", "" ], [ "Lemic", "Filip", "" ], [ "Delgado", "Carmen", "" ], [ "Costa-Perez", "Xavier", "" ] ]
Advancements in nanotechnology and material science are paving the way toward nanoscale devices that combine sensing, computing, data and energy storage, and wireless communication. In precision medicine, these nanodevices show promise for disease diagnostics, treatment, and monitoring from within the patients' bloodstreams. Assigning the location of a sensed biological event to the event itself, which is the main proposition of flow-guided in-body nanoscale localization, would be immensely beneficial from the perspective of precision medicine. The nanoscale nature of the nanodevices and the challenging environment that the bloodstream represents result in current flow-guided localization approaches being constrained in their communication and energy-related capabilities. The communication and energy constraints of the nanodevices result in different features of the raw data for flow-guided localization, in turn affecting its performance. An analytical model of the effects of imperfect communication and constrained energy, which cause intermittent operation of the nanodevices, on the raw data produced by the nanodevices would be beneficial. Hence, we propose an analytical model of raw data for flow-guided localization, where the raw data is modeled as a function of the communication and energy-related capabilities of the nanodevice. We evaluate the model by comparing its output with that obtained through a simulator for objective evaluation of flow-guided localization, which features a comparably higher level of realism. Our results across a number of scenarios and heterogeneous performance metrics indicate a high similarity between the model- and simulator-generated raw datasets.
1805.04494
Carmela Troncoso
Rebekah Overdorf, Carmela Troncoso, Rachel Greenstadt, Damon McCoy
Under the Underground: Predicting Private Interactions in Underground Forums
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Underground forums where users discuss, buy, and sell illicit services and goods facilitate a better understanding of the economy and organization of cybercriminals. Prior work has shown that private interactions in particular provide a wealth of information about the cybercriminal ecosystem. Yet, those messages are seldom available to analysts, except when there is a leak. To address this problem we propose a supervised machine learning based method able to predict which public threads will generate private messages, after a partial leak of such messages has occurred. To the best of our knowledge, we are the first to develop a solution to overcome the barrier posed by limited to no information on private activity for underground forum analysis. Additionally, we propose an automated method for labeling posts, significantly reducing the cost of our approach in the presence of real unlabeled data. This method can be tuned to focus on the likelihood of users receiving private messages, or of threads triggering private interactions. We evaluate the performance of our methods using data from three real forum leaks. Our results show that public information can indeed be used to predict private activity, although prediction models do not transfer well between forums. We also find that neither the length of the leak period nor the time between the leak and the prediction has a significant impact on our technique's performance, and that NLP features dominate the prediction power.
[ { "created": "Fri, 11 May 2018 17:14:45 GMT", "version": "v1" } ]
2018-05-14
[ [ "Overdorf", "Rebekah", "" ], [ "Troncoso", "Carmela", "" ], [ "Greenstadt", "Rachel", "" ], [ "McCoy", "Damon", "" ] ]
Underground forums where users discuss, buy, and sell illicit services and goods facilitate a better understanding of the economy and organization of cybercriminals. Prior work has shown that private interactions in particular provide a wealth of information about the cybercriminal ecosystem. Yet, those messages are seldom available to analysts, except when there is a leak. To address this problem we propose a supervised machine learning based method able to predict which public threads will generate private messages, after a partial leak of such messages has occurred. To the best of our knowledge, we are the first to develop a solution to overcome the barrier posed by limited to no information on private activity for underground forum analysis. Additionally, we propose an automated method for labeling posts, significantly reducing the cost of our approach in the presence of real unlabeled data. This method can be tuned to focus on the likelihood of users receiving private messages, or of threads triggering private interactions. We evaluate the performance of our methods using data from three real forum leaks. Our results show that public information can indeed be used to predict private activity, although prediction models do not transfer well between forums. We also find that neither the length of the leak period nor the time between the leak and the prediction has a significant impact on our technique's performance, and that NLP features dominate the prediction power.
2205.07985
Maik G\"unther
F. Lorenz, M. G\"unther
Expert Systems with Logic#. A Novel Modeling Framework for Logic Programming in an Object-Oriented Context of C#
23 pages, 4 figures, 4 tables, 7 appendices
null
null
null
cs.AI cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel approach to declaring logic programming for expert systems directly in an object-oriented language.
[ { "created": "Mon, 16 May 2022 20:52:27 GMT", "version": "v1" } ]
2022-05-18
[ [ "Lorenz", "F.", "" ], [ "Günther", "M.", "" ] ]
We present a novel approach to declaring logic programming for expert systems directly in an object-oriented language.
1707.05414
Peng Liu
Peng Liu, Ruogu Fang
Wide Inference Network for Image Denoising via Learning Pixel-distribution Prior
There is a code issue that means our work may be regarded as entirely off the correct research direction. Therefore, we add the correction to the abstract to answer the questions that are often asked. Besides, we hope the most talented readers may try to think about how to map the particular matrix to generative ones; then you may have a significant innovation published.
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore an innovative strategy for image denoising by using convolutional neural networks (CNNs) to learn similar pixel-distribution features from noisy images. Many types of image noise follow a certain pixel distribution in common, such as additive white Gaussian noise (AWGN). By increasing a CNN's width with larger receptive fields and more channels in each layer, CNNs can reveal the ability to extract more accurate pixel-distribution features. The key to our approach is the discovery that wider CNNs with more convolutions tend to learn similar pixel-distribution features, which reveals a new strategy for solving low-level vision problems effectively: the inference mapping primarily relies on the priors behind the noise property rather than on deeper CNNs with more stacked nonlinear layers. We evaluate our work, Wide Inference Networks (WIN), on AWGN and demonstrate that by learning pixel-distribution features from images, the WIN-based network consistently achieves significantly better performance than current state-of-the-art deep CNN-based methods in both quantitative and visual evaluations. \textit{Code and models are available at \url{https://github.com/cswin/WIN}}.
[ { "created": "Mon, 17 Jul 2017 23:39:38 GMT", "version": "v1" }, { "created": "Wed, 16 Aug 2017 04:40:00 GMT", "version": "v2" }, { "created": "Wed, 23 Aug 2017 23:11:15 GMT", "version": "v3" }, { "created": "Wed, 16 May 2018 13:40:34 GMT", "version": "v4" }, { "created": "Sun, 3 Jun 2018 21:25:45 GMT", "version": "v5" } ]
2018-06-05
[ [ "Liu", "Peng", "" ], [ "Fang", "Ruogu", "" ] ]
We explore an innovative strategy for image denoising by using convolutional neural networks (CNNs) to learn similar pixel-distribution features from noisy images. Many types of image noise follow a certain pixel distribution in common, such as additive white Gaussian noise (AWGN). By increasing a CNN's width with larger receptive fields and more channels in each layer, CNNs can reveal the ability to extract more accurate pixel-distribution features. The key to our approach is the discovery that wider CNNs with more convolutions tend to learn similar pixel-distribution features, which reveals a new strategy for solving low-level vision problems effectively: the inference mapping primarily relies on the priors behind the noise property rather than on deeper CNNs with more stacked nonlinear layers. We evaluate our work, Wide Inference Networks (WIN), on AWGN and demonstrate that by learning pixel-distribution features from images, the WIN-based network consistently achieves significantly better performance than current state-of-the-art deep CNN-based methods in both quantitative and visual evaluations. \textit{Code and models are available at \url{https://github.com/cswin/WIN}}.
2008.08055
Amir Alansary
Guy Leroy, Daniel Rueckert, and Amir Alansary
Communicative Reinforcement Learning Agents for Landmark Detection in Brain Images
Accepted for the MLCN workshop, MICCAI 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate detection of anatomical landmarks is an essential step in several medical imaging tasks. We propose a novel communicative multi-agent reinforcement learning (C-MARL) system to automatically detect landmarks in 3D brain images. C-MARL enables the agents to learn explicit communication channels, as well as implicit communication signals by sharing certain weights of the architecture among all the agents. The proposed approach is evaluated on two brain imaging datasets from adult magnetic resonance imaging (MRI) and fetal ultrasound scans. Our experiments show that involving multiple cooperating agents by learning their communication with each other outperforms previous approaches using single agents.
[ { "created": "Tue, 18 Aug 2020 17:36:56 GMT", "version": "v1" }, { "created": "Sun, 27 Sep 2020 22:11:48 GMT", "version": "v2" } ]
2020-09-29
[ [ "Leroy", "Guy", "" ], [ "Rueckert", "Daniel", "" ], [ "Alansary", "Amir", "" ] ]
Accurate detection of anatomical landmarks is an essential step in several medical imaging tasks. We propose a novel communicative multi-agent reinforcement learning (C-MARL) system to automatically detect landmarks in 3D brain images. C-MARL enables the agents to learn explicit communication channels, as well as implicit communication signals by sharing certain weights of the architecture among all the agents. The proposed approach is evaluated on two brain imaging datasets from adult magnetic resonance imaging (MRI) and fetal ultrasound scans. Our experiments show that involving multiple cooperating agents by learning their communication with each other outperforms previous approaches using single agents.
1907.05800
Swakkhar Shatabda
Md. Tashfiqul Bari, Tanvir Hassan, Raisa Tabassum, Zubaida Ahmed and Swakkhar Shatabda
Find It: A Novel Way to Learn Through Play
null
International Joint Conference on Computational Intelligence (IJCCI 2018)
null
null
cs.HC
http://creativecommons.org/publicdomain/zero/1.0/
Autism Spectrum Disorder (ASD) is an area of ongoing research, including approaches such as magnetic resonance imaging (MRI) techniques like diffusion tensor imaging and the Early Start Denver Model (ESDM), all aimed at providing an easier life for those diagnosed. After years of combined public and private funding, this research has shown great promise in recent years. In this paper, we show how children with autism and Down syndrome can learn through game therapy. These game therapies have led to a large number of improvements among such children in learning the alphabet, as well as in developing their motor skills and addressing memory challenges.
[ { "created": "Fri, 12 Jul 2019 15:34:50 GMT", "version": "v1" } ]
2019-07-15
[ [ "Bari", "Md. Tashfiqul", "" ], [ "Hassan", "Tanvir", "" ], [ "Tabassum", "Raisa", "" ], [ "Ahmed", "Zubaida", "" ], [ "Shatabda", "Swakkhar", "" ] ]
Autism Spectrum Disorder (ASD) is an area of ongoing research, including approaches such as magnetic resonance imaging (MRI) techniques like diffusion tensor imaging and the Early Start Denver Model (ESDM), all aimed at providing an easier life for those diagnosed. After years of combined public and private funding, this research has shown great promise in recent years. In this paper, we show how children with autism and Down syndrome can learn through game therapy. These game therapies have led to a large number of improvements among such children in learning the alphabet, as well as in developing their motor skills and addressing memory challenges.
2311.10787
Ryan O'Shea
Ari Goodman, Ryan O'Shea, Noam Hirschorn, Hubert Chrostowski
Assurance for Deployed Continual Learning Systems
8 pages, 11 figures. Published in the Proceedings of the ASNE 2023 Technology, Systems & Ships Symposium. Reproduced with permission from the American Society of Naval Engineers. Distribution Statement A: Approved for public release; distribution is unlimited, as submitted under NAVAIR Public Release Authorization 2023-022
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The future success of the Navy will depend, in part, on artificial intelligence. In practice, many artificially intelligent algorithms, and in particular deep learning models, rely on continual learning to maintain performance in dynamic environments. The software requires adaptation to maintain its initial level of performance in unseen situations. However, if not monitored properly, continual learning may lead to several issues including catastrophic forgetting in which a trained model forgets previously learned tasks when being retrained on new data. The authors created a new framework for safely performing continual learning with the goal of pairing this safety framework with a deep learning computer vision algorithm to allow for safe and high-performing automatic deck tracking on carriers and amphibious assault ships. The safety framework includes several features, such as an ensemble of convolutional neural networks to perform image classification, a manager to record confidences and determine the best answer from the ensemble, a model of the environment to predict when the system may fail to meet minimum performance metrics, a performance monitor to log system and domain performance and check against requirements, and a retraining component to update the ensemble and manager to maintain performance. The authors validated the proposed method using extensive simulation studies based on dynamic image classification. The authors showed the safety framework could probabilistically detect out of distribution data. The results also show the framework can detect when the system is no longer performing safely and can significantly extend the working envelope of an image classifier.
[ { "created": "Thu, 16 Nov 2023 22:22:13 GMT", "version": "v1" } ]
2023-11-21
[ [ "Goodman", "Ari", "" ], [ "O'Shea", "Ryan", "" ], [ "Hirschorn", "Noam", "" ], [ "Chrostowski", "Hubert", "" ] ]
The future success of the Navy will depend, in part, on artificial intelligence. In practice, many artificially intelligent algorithms, and in particular deep learning models, rely on continual learning to maintain performance in dynamic environments. The software requires adaptation to maintain its initial level of performance in unseen situations. However, if not monitored properly, continual learning may lead to several issues including catastrophic forgetting in which a trained model forgets previously learned tasks when being retrained on new data. The authors created a new framework for safely performing continual learning with the goal of pairing this safety framework with a deep learning computer vision algorithm to allow for safe and high-performing automatic deck tracking on carriers and amphibious assault ships. The safety framework includes several features, such as an ensemble of convolutional neural networks to perform image classification, a manager to record confidences and determine the best answer from the ensemble, a model of the environment to predict when the system may fail to meet minimum performance metrics, a performance monitor to log system and domain performance and check against requirements, and a retraining component to update the ensemble and manager to maintain performance. The authors validated the proposed method using extensive simulation studies based on dynamic image classification. The authors showed the safety framework could probabilistically detect out of distribution data. The results also show the framework can detect when the system is no longer performing safely and can significantly extend the working envelope of an image classifier.
2206.09359
Perry Gibson
Perry Gibson, Jos\'e Cano
Productive Reproducible Workflows for DNNs: A Case Study for Industrial Defect Detection
7 pages, 5 figures, AccML 2022
null
null
null
cs.LG cs.CV cs.PF cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
As Deep Neural Networks (DNNs) have become an increasingly ubiquitous workload, the range of libraries and tooling available to aid in their development and deployment has grown significantly. Scalable, production-quality tools are freely available under permissive licenses, and are accessible enough to enable even small teams to be very productive. However, within the research community, awareness and usage of said tools are not necessarily widespread, and researchers may be missing out on potential productivity gains from exploiting the latest tools and workflows. This paper presents a case study where we discuss our recent experience producing an end-to-end artificial intelligence application for industrial defect detection. We detail the high-level deep learning libraries, containerized workflows, continuous integration/deployment pipelines, and open source code templates we leveraged to produce a competitive result, matching the performance of other ranked solutions to our three target datasets. We highlight the value that exploiting such systems can bring, even for research, and detail our solution and present our best results in terms of accuracy and inference time on a server-class GPU, as well as inference times on a server-class CPU and a Raspberry Pi 4.
[ { "created": "Sun, 19 Jun 2022 09:10:13 GMT", "version": "v1" } ]
2022-06-22
[ [ "Gibson", "Perry", "" ], [ "Cano", "José", "" ] ]
As Deep Neural Networks (DNNs) have become an increasingly ubiquitous workload, the range of libraries and tooling available to aid in their development and deployment has grown significantly. Scalable, production-quality tools are freely available under permissive licenses, and are accessible enough to enable even small teams to be very productive. However, within the research community, awareness and usage of said tools are not necessarily widespread, and researchers may be missing out on potential productivity gains from exploiting the latest tools and workflows. This paper presents a case study where we discuss our recent experience producing an end-to-end artificial intelligence application for industrial defect detection. We detail the high-level deep learning libraries, containerized workflows, continuous integration/deployment pipelines, and open source code templates we leveraged to produce a competitive result, matching the performance of other ranked solutions to our three target datasets. We highlight the value that exploiting such systems can bring, even for research, and detail our solution and present our best results in terms of accuracy and inference time on a server-class GPU, as well as inference times on a server-class CPU and a Raspberry Pi 4.
1502.05818
Petri Luoto
Petri Luoto, Pekka Pirinen, Mehdi Bennis, Sumudu Samarakoon, Simon Scott and Matti Latva-aho
Co-Primary Multi-Operator Resource Sharing for Small Cell Networks
null
null
10.1109/TWC.2015.2402671
null
cs.NI cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To tackle the challenge of providing higher data rates within limited spectral resources, we consider the case of multiple operators sharing a common pool of radio resources. Four algorithms are proposed to address co-primary multi-operator radio resource sharing under heterogeneous traffic in both centralized and distributed scenarios. The performance of these algorithms is assessed through extensive system-level simulations for two indoor small cell layouts. It is assumed that the spectral allocations of the small cells are orthogonal to the macro network layer and thus, only the small cell traffic is modeled. The main performance metrics are user throughput and the relative amount of shared spectral resources. The numerical results demonstrate the importance of coordination among co-primary operators for optimal resource sharing. Also, maximizing the spectrum sharing percentage generally improves the achievable throughput gains over non-sharing.
[ { "created": "Fri, 20 Feb 2015 10:17:19 GMT", "version": "v1" } ]
2015-02-23
[ [ "Luoto", "Petri", "" ], [ "Pirinen", "Pekka", "" ], [ "Bennis", "Mehdi", "" ], [ "Samarakoon", "Sumudu", "" ], [ "Scott", "Simon", "" ], [ "Latva-aho", "Matti", "" ] ]
To tackle the challenge of providing higher data rates within limited spectral resources, we consider the case of multiple operators sharing a common pool of radio resources. Four algorithms are proposed to address co-primary multi-operator radio resource sharing under heterogeneous traffic in both centralized and distributed scenarios. The performance of these algorithms is assessed through extensive system-level simulations for two indoor small cell layouts. It is assumed that the spectral allocations of the small cells are orthogonal to the macro network layer and thus, only the small cell traffic is modeled. The main performance metrics are user throughput and the relative amount of shared spectral resources. The numerical results demonstrate the importance of coordination among co-primary operators for optimal resource sharing. Also, maximizing the spectrum sharing percentage generally improves the achievable throughput gains over non-sharing.
2212.08522
Kevin Kappelmann
Michael Benedikt, Maxime Buron, Stefano Germano, Kevin Kappelmann, Boris Motik
Rewriting the Infinite Chase
null
Proceedings of the VLDB Endowment, Volume 15, Issue 11, July 2022, pp 3045-3057
10.14778/3551793.3551851
null
cs.LO cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Guarded tuple-generating dependencies (GTGDs) are a natural extension of description logics and referential constraints. It has long been known that queries over GTGDs can be answered by a variant of the chase - a quintessential technique for reasoning with dependencies. However, there has been little work on concrete algorithms and even less on implementation. To address this gap, we revisit Datalog rewriting approaches to query answering, where GTGDs are transformed to a Datalog program that entails the same base facts on each base instance. We show that the rewriting can be seen as containing "shortcut" rules that circumvent certain chase steps, we present several algorithms that compute the rewriting by simulating specific types of chase steps, and we discuss important implementation issues. Finally, we show empirically that our techniques can process complex GTGDs derived from synthetic and real benchmarks and are thus suitable for practical use.
[ { "created": "Fri, 16 Dec 2022 15:11:42 GMT", "version": "v1" } ]
2022-12-19
[ [ "Benedikt", "Michael", "" ], [ "Buron", "Maxime", "" ], [ "Germano", "Stefano", "" ], [ "Kappelmann", "Kevin", "" ], [ "Motik", "Boris", "" ] ]
Guarded tuple-generating dependencies (GTGDs) are a natural extension of description logics and referential constraints. It has long been known that queries over GTGDs can be answered by a variant of the chase - a quintessential technique for reasoning with dependencies. However, there has been little work on concrete algorithms and even less on implementation. To address this gap, we revisit Datalog rewriting approaches to query answering, where GTGDs are transformed to a Datalog program that entails the same base facts on each base instance. We show that the rewriting can be seen as containing "shortcut" rules that circumvent certain chase steps, we present several algorithms that compute the rewriting by simulating specific types of chase steps, and we discuss important implementation issues. Finally, we show empirically that our techniques can process complex GTGDs derived from synthetic and real benchmarks and are thus suitable for practical use.
1004.4481
William Jackson
Ravinder Yadav and Rinkle Rani Aggarwal
Survey and Comparison of Optical Switch Fabrication Techniques and Architectures
https://sites.google.com/site/journalofcomputing/
Journal of Computing, Volume 2, Issue 4, April 2010, 133-137
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The main issue in optical transmission is switching speed. Optical packet switching faces many significant challenges in processing and buffering. Generalized multilevel protocol switching seeks to eliminate the asynchronous transfer mode and synchronous optical network layers, enabling the implementation of IP over WDM (wavelength division multiplexing). Optical burst switching attempts to minimize the need for processing and buffering by aggregating flows of data packets into bursts. This paper presents an extensive overview of current technologies and techniques concerning optical switching.
[ { "created": "Mon, 26 Apr 2010 11:04:14 GMT", "version": "v1" } ]
2010-04-27
[ [ "Yadav", "Ravinder", "" ], [ "Aggarwal", "Rinkle Rani", "" ] ]
The main issue in optical transmission is switching speed. Optical packet switching faces many significant challenges in processing and buffering. Generalized multilevel protocol switching seeks to eliminate the asynchronous transfer mode and synchronous optical network layers, enabling the implementation of IP over WDM (wavelength division multiplexing). Optical burst switching attempts to minimize the need for processing and buffering by aggregating flows of data packets into bursts. This paper presents an extensive overview of current technologies and techniques concerning optical switching.
2002.05017
Fabrizio Bottarel
Fabrizio Bottarel, Giulia Vezzani, Ugo Pattacini, Lorenzo Natale
GRASPA 1.0: GRASPA is a Robot Arm graSping Performance benchmArk
To cite this work, please refer to the journal reference entry. For more information, code, pictures and video please visit https://github.com/robotology/GRASPA-benchmark
in IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 836-843, April 2020
10.1109/LRA.2020.2965865
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of benchmarks is a widespread and scientifically meaningful practice to validate the performance of different approaches to the same task. In the context of robot grasping, the use of common object sets has emerged in recent years; however, no dominant protocols and metrics for testing grasping pipelines have taken root yet. In this paper, we present version 1.0 of GRASPA, a benchmark to test the effectiveness of grasping pipelines on physical robot setups. This approach tackles the complexity of such pipelines by proposing different metrics that account for the features and limits of the test platform. As an example application, we deploy GRASPA on the iCub humanoid robot and use it to benchmark our grasping pipeline. As closing remarks, we discuss how the GRASPA indicators we obtained as outcome can provide insight into how different steps of the pipeline affect the overall grasping performance.
[ { "created": "Wed, 12 Feb 2020 14:26:05 GMT", "version": "v1" } ]
2021-08-25
[ [ "Bottarel", "Fabrizio", "" ], [ "Vezzani", "Giulia", "" ], [ "Pattacini", "Ugo", "" ], [ "Natale", "Lorenzo", "" ] ]
The use of benchmarks is a widespread and scientifically meaningful practice to validate the performance of different approaches to the same task. In the context of robot grasping, the use of common object sets has emerged in recent years; however, no dominant protocols and metrics for testing grasping pipelines have taken root yet. In this paper, we present version 1.0 of GRASPA, a benchmark to test the effectiveness of grasping pipelines on physical robot setups. This approach tackles the complexity of such pipelines by proposing different metrics that account for the features and limits of the test platform. As an example application, we deploy GRASPA on the iCub humanoid robot and use it to benchmark our grasping pipeline. As closing remarks, we discuss how the GRASPA indicators we obtained as outcome can provide insight into how different steps of the pipeline affect the overall grasping performance.
2206.05714
Lachlan Chumbley
Lachlan Chumbley, Morris Gu, Rhys Newbury, Jurgen Leitner and Akansel Cosgun
Integrating High-Resolution Tactile Sensing into Grasp Stability Prediction
8 pages, 6 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate how high-resolution tactile sensors can be utilized in combination with vision and depth sensing, to improve grasp stability prediction. Recent advances in simulating high-resolution tactile sensing, in particular the TACTO simulator, enabled us to evaluate how neural networks can be trained with a combination of sensing modalities. With the large amounts of data needed to train large neural networks, robotic simulators provide a fast way to automate the data collection process. We expand on the existing work through an ablation study and an increased set of objects taken from the YCB benchmark set. Our results indicate that while the combination of vision, depth, and tactile sensing provides the best prediction results on known objects, the network fails to generalize to unknown objects. Our work also addresses existing issues with robotic grasping in tactile simulation and how to overcome them.
[ { "created": "Sun, 12 Jun 2022 10:34:39 GMT", "version": "v1" } ]
2022-06-14
[ [ "Chumbley", "Lachlan", "" ], [ "Gu", "Morris", "" ], [ "Newbury", "Rhys", "" ], [ "Leitner", "Jurgen", "" ], [ "Cosgun", "Akansel", "" ] ]
We investigate how high-resolution tactile sensors can be utilized in combination with vision and depth sensing, to improve grasp stability prediction. Recent advances in simulating high-resolution tactile sensing, in particular the TACTO simulator, enabled us to evaluate how neural networks can be trained with a combination of sensing modalities. With the large amounts of data needed to train large neural networks, robotic simulators provide a fast way to automate the data collection process. We expand on the existing work through an ablation study and an increased set of objects taken from the YCB benchmark set. Our results indicate that while the combination of vision, depth, and tactile sensing provides the best prediction results on known objects, the network fails to generalize to unknown objects. Our work also addresses existing issues with robotic grasping in tactile simulation and how to overcome them.
2202.05002
Sadaf Ul Zuhra Dr
Sadaf ul Zuhra, Samir M. Perlaza, H. Vincent Poor, Eitan Altman
Achievable Information-Energy Region in the Finite Block-Length Regime with Finite Constellations
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
This paper characterizes an achievable information-energy region of simultaneous information and energy transmission over an additive white Gaussian noise channel. This analysis is performed in the finite block-length regime with finite constellations. More specifically, a method for constructing a family of codes is proposed and the set of achievable tuples of information rate, energy rate, decoding error probability (DEP) and energy outage probability (EOP) is characterized. Using existing converse results, it is shown that the construction is information rate, energy rate, and EOP optimal. The achieved DEP is, however, sub-optimal.
[ { "created": "Thu, 10 Feb 2022 13:01:24 GMT", "version": "v1" } ]
2022-02-11
[ [ "Zuhra", "Sadaf ul", "" ], [ "Perlaza", "Samir M.", "" ], [ "Poor", "H. Vincent", "" ], [ "Altman", "Eitan", "" ] ]
This paper characterizes an achievable information-energy region of simultaneous information and energy transmission over an additive white Gaussian noise channel. This analysis is performed in the finite block-length regime with finite constellations. More specifically, a method for constructing a family of codes is proposed and the set of achievable tuples of information rate, energy rate, decoding error probability (DEP) and energy outage probability (EOP) is characterized. Using existing converse results, it is shown that the construction is information rate, energy rate, and EOP optimal. The achieved DEP is, however, sub-optimal.
2402.05121
Weizheng Lu
Weizheng Lu and Jing Zhang and Ju Fan and Zihao Fu and Yueguo Chen and Xiaoyong Du
Large Language Model for Table Processing: A Survey
null
null
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Tables, typically two-dimensional and structured to store large amounts of data, are essential in daily activities like database queries, spreadsheet manipulations, web table question answering, and image table information extraction. Automating these table-centric tasks with Large Language Models (LLMs) or Visual Language Models (VLMs) offers significant public benefits, garnering interest from academia and industry. This survey provides a comprehensive overview of table-related tasks, examining both user scenarios and technical aspects. It covers traditional tasks like table question answering as well as emerging fields such as spreadsheet manipulation and table data analysis. We summarize the training techniques for LLMs and VLMs tailored for table processing. Additionally, we discuss prompt engineering, particularly the use of LLM-powered agents, for various table-related tasks. Finally, we highlight several challenges, including processing implicit user intentions and extracting information from various table sources.
[ { "created": "Sun, 4 Feb 2024 00:47:53 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2024 14:12:33 GMT", "version": "v2" } ]
2024-07-29
[ [ "Lu", "Weizheng", "" ], [ "Zhang", "Jing", "" ], [ "Fan", "Ju", "" ], [ "Fu", "Zihao", "" ], [ "Chen", "Yueguo", "" ], [ "Du", "Xiaoyong", "" ] ]
Tables, typically two-dimensional and structured to store large amounts of data, are essential in daily activities like database queries, spreadsheet manipulations, web table question answering, and image table information extraction. Automating these table-centric tasks with Large Language Models (LLMs) or Visual Language Models (VLMs) offers significant public benefits, garnering interest from academia and industry. This survey provides a comprehensive overview of table-related tasks, examining both user scenarios and technical aspects. It covers traditional tasks like table question answering as well as emerging fields such as spreadsheet manipulation and table data analysis. We summarize the training techniques for LLMs and VLMs tailored for table processing. Additionally, we discuss prompt engineering, particularly the use of LLM-powered agents, for various table-related tasks. Finally, we highlight several challenges, including processing implicit user intentions and extracting information from various table sources.
2402.09949
Leonidas Gee
Leonidas Gee, Leonardo Rigutini, Marco Ernandes, Andrea Zugarini
Multi-word Tokenization for Sequence Compression
The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
10.18653/v1/2023.emnlp-industry.58
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Language Models have proven highly successful at modelling a variety of tasks. However, this comes at a steep computational cost that hinders wider industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that goes beyond word boundaries by representing frequent multi-word expressions as single tokens. MWTs produce a more compact and efficient tokenization that yields two benefits: (1) Increase in performance due to a greater coverage of input data given a fixed sequence length budget; (2) Faster and lighter inference due to the ability to reduce the sequence length with negligible drops in performance. Our results show that MWT is more robust across shorter sequence lengths, thus allowing for major speedups via early sequence truncation.
[ { "created": "Thu, 15 Feb 2024 13:52:23 GMT", "version": "v1" }, { "created": "Thu, 4 Apr 2024 22:50:25 GMT", "version": "v2" } ]
2024-04-08
[ [ "Gee", "Leonidas", "" ], [ "Rigutini", "Leonardo", "" ], [ "Ernandes", "Marco", "" ], [ "Zugarini", "Andrea", "" ] ]
Large Language Models have proven highly successful at modelling a variety of tasks. However, this comes at a steep computational cost that hinders wider industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that goes beyond word boundaries by representing frequent multi-word expressions as single tokens. MWTs produce a more compact and efficient tokenization that yields two benefits: (1) Increase in performance due to a greater coverage of input data given a fixed sequence length budget; (2) Faster and lighter inference due to the ability to reduce the sequence length with negligible drops in performance. Our results show that MWT is more robust across shorter sequence lengths, thus allowing for major speedups via early sequence truncation.
2110.10534
Manjesh Kumar Hanawal
Vinod S. Khandkar and Manjesh K. Hanawal
FairNet: A Measurement Framework for Traffic Discrimination Detection on the Internet
null
null
null
null
cs.NI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Network neutrality is related to the non-discriminatory treatment of packets on the Internet. Any deliberate discrimination of traffic of one application while favoring others violates the principle of neutrality. Many countries have enforced laws against such discrimination. To enforce such laws, one requires tools to detect any net neutrality violations. However, detecting such violations is challenging, as it is hard to separate any degradation in quality due to natural network effects from selective degradation. Also, legitimate traffic management and deliberate discrimination methods can be technically the same, making it further challenging to distinguish them. We developed an end-to-end measurement framework named FairNet to detect discrimination of traffic. It compares the performance of similar services. Our focus is on HTTPS streaming services, which constitute a predominant portion of Internet traffic. The effect of confounding factors (congestion, traffic management policy, dynamic rate adaptation) is made `similar' on the test services to ensure a fair comparison. The FairNet framework uses a ``replay server'' and a user-client that exchange correctly identifiable traffic streams over the Internet. The Server Name Indication (SNI) field in the TLS handshake, which goes in plaintext, ensures that the traffic from the replay server appears to network middle-boxes as that coming from its actual server. We validated that appropriate SNIs result in the correct classification of services using a commercial traffic shaper. FairNet uses two novel algorithms based on application-level throughput and connection status to detect traffic discrimination. We also validated the methodology's effectiveness by collecting network logs through mobile apps over the live Internet and analyzing them.
[ { "created": "Wed, 20 Oct 2021 12:38:02 GMT", "version": "v1" } ]
2021-10-22
[ [ "Khandkar", "Vinod S.", "" ], [ "Hanawal", "Manjesh K.", "" ] ]
Network neutrality is related to the non-discriminatory treatment of packets on the Internet. Any deliberate discrimination of traffic of one application while favoring others violates the principle of neutrality. Many countries have enforced laws against such discrimination. To enforce such laws, one requires tools to detect any net neutrality violations. However, detecting such violations is challenging, as it is hard to separate any degradation in quality due to natural network effects from selective degradation. Also, legitimate traffic management and deliberate discrimination methods can be technically the same, making it further challenging to distinguish them. We developed an end-to-end measurement framework named FairNet to detect discrimination of traffic. It compares the performance of similar services. Our focus is on HTTPS streaming services, which constitute a predominant portion of Internet traffic. The effect of confounding factors (congestion, traffic management policy, dynamic rate adaptation) is made `similar' on the test services to ensure a fair comparison. The FairNet framework uses a ``replay server'' and a user-client that exchange correctly identifiable traffic streams over the Internet. The Server Name Indication (SNI) field in the TLS handshake, which goes in plaintext, ensures that the traffic from the replay server appears to network middle-boxes as that coming from its actual server. We validated that appropriate SNIs result in the correct classification of services using a commercial traffic shaper. FairNet uses two novel algorithms based on application-level throughput and connection status to detect traffic discrimination. We also validated the methodology's effectiveness by collecting network logs through mobile apps over the live Internet and analyzing them.
1510.00295
Nicolas Bousquet
Nicolas Bousquet and Yang Cai and Adrian Vetta
Welfare and Rationality Guarantees for the Simultaneous Multiple-Round Ascending Auction
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The simultaneous multiple-round auction (SMRA) and the combinatorial clock auction (CCA) are the two primary mechanisms used to sell bandwidth. Under truthful bidding, the SMRA is known to output a Walrasian equilibrium that maximizes social welfare provided the bidder valuation functions satisfy the gross substitutes property. Recently, it was shown that the combinatorial clock auction (CCA) provides good welfare guarantees for general classes of valuation functions. This motivates the question of whether similar welfare guarantees hold for the SMRA in the case of general valuation functions. We show the answer is no. But we prove that good welfare guarantees still arise if the degree of complementarities in the bidder valuations is bounded. In particular, if bidder valuation functions are $\alpha$-near-submodular then, under truthful bidding, the SMRA has a welfare ratio (the worst-case ratio between the social welfare of the optimal allocation and the auction allocation) of at most $(1+\alpha)$. The special case of submodular valuations, namely $\alpha=1$, also produces individually rational solutions. However, for $\alpha>1$, this is a bicriteria guarantee: obtaining good welfare under truthful bidding requires relaxing individual rationality. Finally, we examine what strategies are required to ensure individual rationality in the SMRA with general valuation functions. First, we provide a weak characterization, namely \emph{secure bidding}, for individual rationality. We then show that if the bidders use a profit-maximizing secure bidding strategy, the welfare ratio is at most $1+\alpha$. Consequently, by bidding securely, it is possible to obtain the same welfare guarantees as truthful bidding without the loss of individual rationality.
[ { "created": "Thu, 1 Oct 2015 16:09:39 GMT", "version": "v1" } ]
2015-10-02
[ [ "Bousquet", "Nicolas", "" ], [ "Cai", "Yang", "" ], [ "Vetta", "Adrian", "" ] ]
The simultaneous multiple-round auction (SMRA) and the combinatorial clock auction (CCA) are the two primary mechanisms used to sell bandwidth. Under truthful bidding, the SMRA is known to output a Walrasian equilibrium that maximizes social welfare provided the bidder valuation functions satisfy the gross substitutes property. Recently, it was shown that the combinatorial clock auction (CCA) provides good welfare guarantees for general classes of valuation functions. This motivates the question of whether similar welfare guarantees hold for the SMRA in the case of general valuation functions. We show the answer is no. But we prove that good welfare guarantees still arise if the degree of complementarities in the bidder valuations is bounded. In particular, if bidder valuation functions are $\alpha$-near-submodular then, under truthful bidding, the SMRA has a welfare ratio (the worst case ratio between the social welfare of the optimal allocation and the auction allocation) of at most $(1+\alpha)$. The special case of submodular valuations, namely $\alpha=1$, also produces individually rational solutions. However, for $\alpha>1$, this is a bicriteria guarantee: obtaining good welfare under truthful bidding requires relaxing individual rationality. Finally, we examine what strategies are required to ensure individual rationality in the SMRA with general valuation functions. First, we provide a weak characterization, namely \emph{secure bidding}, for individual rationality. We then show that if the bidders use a profit-maximizing secure bidding strategy the welfare ratio is at most $1+\alpha$. Consequently, by bidding securely, it is possible to obtain the same welfare guarantees as truthful bidding without the loss of individual rationality.
1304.7861
EPTCS
Jared Davis (Centaur Technology), Sol Swords (Centaur Technology)
Verified AIG Algorithms in ACL2
In Proceedings ACL2 2013, arXiv:1304.7123
EPTCS 114, 2013, pp. 95-110
10.4204/EPTCS.114.8
null
cs.LO cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
And-Inverter Graphs (AIGs) are a popular way to represent Boolean functions (like circuits). AIG simplification algorithms can dramatically reduce an AIG, and play an important role in modern hardware verification tools like equivalence checkers. In practice, these tricky algorithms are implemented with optimized C or C++ routines with no guarantee of correctness. Meanwhile, many interactive theorem provers can now employ SAT or SMT solvers to automatically solve finite goals, but no theorem prover makes use of these advanced, AIG-based approaches. We have developed two ways to represent AIGs within the ACL2 theorem prover. One representation, Hons-AIGs, is especially convenient to use and reason about. The other, Aignet, is the opposite; it is styled after modern AIG packages and allows for efficient algorithms. We have implemented functions for converting between these representations, random vector simulation, conversion to CNF, etc., and developed reasoning strategies for verifying these algorithms. Aside from these contributions towards verifying AIG algorithms, this work has an immediate, practical benefit for ACL2 users who are using GL to bit-blast finite ACL2 theorems: they can now optionally trust an off-the-shelf SAT solver to carry out the proof, instead of using the built-in BDD package. Looking to the future, it is a first step toward implementing verified AIG simplification algorithms that might further improve GL performance.
[ { "created": "Tue, 30 Apr 2013 04:14:44 GMT", "version": "v1" } ]
2013-05-01
[ [ "Davis", "Jared", "", "Centaur Technology" ], [ "Swords", "Sol", "", "Centaur Technology" ] ]
And-Inverter Graphs (AIGs) are a popular way to represent Boolean functions (like circuits). AIG simplification algorithms can dramatically reduce an AIG, and play an important role in modern hardware verification tools like equivalence checkers. In practice, these tricky algorithms are implemented with optimized C or C++ routines with no guarantee of correctness. Meanwhile, many interactive theorem provers can now employ SAT or SMT solvers to automatically solve finite goals, but no theorem prover makes use of these advanced, AIG-based approaches. We have developed two ways to represent AIGs within the ACL2 theorem prover. One representation, Hons-AIGs, is especially convenient to use and reason about. The other, Aignet, is the opposite; it is styled after modern AIG packages and allows for efficient algorithms. We have implemented functions for converting between these representations, random vector simulation, conversion to CNF, etc., and developed reasoning strategies for verifying these algorithms. Aside from these contributions towards verifying AIG algorithms, this work has an immediate, practical benefit for ACL2 users who are using GL to bit-blast finite ACL2 theorems: they can now optionally trust an off-the-shelf SAT solver to carry out the proof, instead of using the built-in BDD package. Looking to the future, it is a first step toward implementing verified AIG simplification algorithms that might further improve GL performance.
2401.16536
Phillip Guan
Yuna Kwak, Eric Penner, Xuan Wang, Mohammad R. Saeedpour-Parizi, Olivier Mercier, Xiuyun Wu, T. Scott Murdison, Phillip Guan
Saccade-Contingent Rendering
main paper and supplementary materials
null
10.1145/3641519.3657420
null
cs.GR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Battery-constrained power consumption, compute limitations, and high frame rate requirements in head-mounted displays present unique challenges in the drive to present increasingly immersive and comfortable imagery in virtual reality. However, humans are not equally sensitive to all regions of the visual field, and perceptually-optimized rendering techniques are increasingly utilized to address these bottlenecks. Many of these techniques are gaze-contingent and often render reduced detail away from a user's fixation. Such techniques are dependent on spatio-temporally-accurate gaze tracking and can result in obvious visual artifacts when eye tracking is inaccurate. In this work we present a gaze-contingent rendering technique which only requires saccade detection, bypassing the need for highly-accurate eye tracking. In our first experiment, we show that visual acuity is reduced for several hundred milliseconds after a saccade. In our second experiment, we use these results to reduce the rendered image resolution after saccades in a controlled psychophysical setup, and find that observers cannot discriminate between saccade-contingent reduced-resolution rendering and full-resolution rendering. Finally, in our third experiment, we introduce a 90 pixels per degree headset and validate our saccade-contingent rendering method under typical VR viewing conditions.
[ { "created": "Mon, 29 Jan 2024 20:17:51 GMT", "version": "v1" } ]
2024-07-16
[ [ "Kwak", "Yuna", "" ], [ "Penner", "Eric", "" ], [ "Wang", "Xuan", "" ], [ "Saeedpour-Parizi", "Mohammad R.", "" ], [ "Mercier", "Olivier", "" ], [ "Wu", "Xiuyun", "" ], [ "Murdison", "T. Scott", "" ], [ "Guan", "Phillip", "" ] ]
Battery-constrained power consumption, compute limitations, and high frame rate requirements in head-mounted displays present unique challenges in the drive to present increasingly immersive and comfortable imagery in virtual reality. However, humans are not equally sensitive to all regions of the visual field, and perceptually-optimized rendering techniques are increasingly utilized to address these bottlenecks. Many of these techniques are gaze-contingent and often render reduced detail away from a user's fixation. Such techniques are dependent on spatio-temporally-accurate gaze tracking and can result in obvious visual artifacts when eye tracking is inaccurate. In this work we present a gaze-contingent rendering technique which only requires saccade detection, bypassing the need for highly-accurate eye tracking. In our first experiment, we show that visual acuity is reduced for several hundred milliseconds after a saccade. In our second experiment, we use these results to reduce the rendered image resolution after saccades in a controlled psychophysical setup, and find that observers cannot discriminate between saccade-contingent reduced-resolution rendering and full-resolution rendering. Finally, in our third experiment, we introduce a 90 pixels per degree headset and validate our saccade-contingent rendering method under typical VR viewing conditions.
2009.13364
Haifeng Li
Haifeng Li, Zhenqi Cui, Zhiqing Zhu, Li Chen, Jiawei Zhu, Haozhe Huang, Chao Tao
RS-MetaNet: Deep meta metric learning for few-shot remote sensing scene classification
13 pages, 11 figures
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2020
10.1109/TGRS.2020.3027387
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Training a modern deep neural network on massive labeled samples is the main paradigm in solving the scene classification problem for remote sensing, but learning from only a few data points remains a challenge. Existing methods for few-shot remote sensing scene classification are performed in a sample-level manner, resulting in easy overfitting of learned features to individual samples and inadequate generalization of learned category segmentation surfaces. To solve this problem, learning should be organized at the task level rather than the sample level. Learning on tasks sampled from a task family can help tune learning algorithms to perform well on new tasks sampled in that family. Therefore, we propose a simple but effective method, called RS-MetaNet, to resolve the issues related to few-shot remote sensing scene classification in the real world. On the one hand, RS-MetaNet raises the level of learning from the sample to the task by organizing training in a meta way, and it learns to learn a metric space that can well classify remote sensing scenes from a series of tasks. We also propose a new loss function, called Balance Loss, which maximizes the generalization ability of the model to new samples by maximizing the distance between different categories, providing the scenes in different categories with better linear segmentation planes while ensuring model fit. The experimental results on three open and challenging remote sensing datasets, UCMerced\_LandUse, NWPU-RESISC45, and Aerial Image Data, demonstrate that our proposed RS-MetaNet method achieves state-of-the-art results in cases where there are only 1-20 labeled samples.
[ { "created": "Mon, 28 Sep 2020 14:34:15 GMT", "version": "v1" } ]
2020-09-29
[ [ "Li", "Haifeng", "" ], [ "Cui", "Zhenqi", "" ], [ "Zhu", "Zhiqing", "" ], [ "Chen", "Li", "" ], [ "Zhu", "Jiawei", "" ], [ "Huang", "Haozhe", "" ], [ "Tao", "Chao", "" ] ]
Training a modern deep neural network on massive labeled samples is the main paradigm in solving the scene classification problem for remote sensing, but learning from only a few data points remains a challenge. Existing methods for few-shot remote sensing scene classification are performed in a sample-level manner, resulting in easy overfitting of learned features to individual samples and inadequate generalization of learned category segmentation surfaces. To solve this problem, learning should be organized at the task level rather than the sample level. Learning on tasks sampled from a task family can help tune learning algorithms to perform well on new tasks sampled in that family. Therefore, we propose a simple but effective method, called RS-MetaNet, to resolve the issues related to few-shot remote sensing scene classification in the real world. On the one hand, RS-MetaNet raises the level of learning from the sample to the task by organizing training in a meta way, and it learns to learn a metric space that can well classify remote sensing scenes from a series of tasks. We also propose a new loss function, called Balance Loss, which maximizes the generalization ability of the model to new samples by maximizing the distance between different categories, providing the scenes in different categories with better linear segmentation planes while ensuring model fit. The experimental results on three open and challenging remote sensing datasets, UCMerced\_LandUse, NWPU-RESISC45, and Aerial Image Data, demonstrate that our proposed RS-MetaNet method achieves state-of-the-art results in cases where there are only 1-20 labeled samples.
2211.00082
Soumyanil Banerjee
Soumyanil Banerjee, Ming Dong, Weisong Shi
Spatial-Temporal Synchronous Graph Transformer network (STSGT) for COVID-19 forecasting
11 pages, 8 figures, 5 tables, accepted for CHASE 2022 conference and Smart Health journal
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
COVID-19 has become a matter of serious concern over the last few years. It has adversely affected numerous people around the globe and has led to the loss of billions of dollars of business capital. In this paper, we propose a novel Spatial-Temporal Synchronous Graph Transformer network (STSGT) to capture the complex spatial and temporal dependency of the COVID-19 time series data and forecast the future status of an evolving pandemic. The layers of STSGT combine the graph convolution network (GCN) with the self-attention mechanism of transformers on a synchronous spatial-temporal graph to capture the dynamically changing pattern of the COVID time series. The spatial-temporal synchronous graph simultaneously captures the spatial and temporal dependencies between the vertices of the graph at a given and subsequent time-steps, which helps capture the heterogeneity in the time series and improve the forecasting accuracy. Our extensive experiments on two publicly available real-world COVID-19 time series datasets demonstrate that STSGT significantly outperforms state-of-the-art algorithms that were designed for spatial-temporal forecasting tasks. Specifically, on average over a 12-day horizon, we observe a potential improvement of 12.19% and 3.42% in Mean Absolute Error(MAE) over the next best algorithm while forecasting the daily infected and death cases respectively for the 50 states of US and Washington, D.C. Additionally, STSGT also outperformed others when forecasting the daily infected cases at the state level, e.g., for all the counties in the State of Michigan. The code and models are publicly available at https://github.com/soumbane/STSGT.
[ { "created": "Mon, 31 Oct 2022 18:29:40 GMT", "version": "v1" } ]
2022-11-02
[ [ "Banerjee", "Soumyanil", "" ], [ "Dong", "Ming", "" ], [ "Shi", "Weisong", "" ] ]
COVID-19 has become a matter of serious concern over the last few years. It has adversely affected numerous people around the globe and has led to the loss of billions of dollars of business capital. In this paper, we propose a novel Spatial-Temporal Synchronous Graph Transformer network (STSGT) to capture the complex spatial and temporal dependency of the COVID-19 time series data and forecast the future status of an evolving pandemic. The layers of STSGT combine the graph convolution network (GCN) with the self-attention mechanism of transformers on a synchronous spatial-temporal graph to capture the dynamically changing pattern of the COVID time series. The spatial-temporal synchronous graph simultaneously captures the spatial and temporal dependencies between the vertices of the graph at a given and subsequent time-steps, which helps capture the heterogeneity in the time series and improve the forecasting accuracy. Our extensive experiments on two publicly available real-world COVID-19 time series datasets demonstrate that STSGT significantly outperforms state-of-the-art algorithms that were designed for spatial-temporal forecasting tasks. Specifically, on average over a 12-day horizon, we observe a potential improvement of 12.19% and 3.42% in Mean Absolute Error(MAE) over the next best algorithm while forecasting the daily infected and death cases respectively for the 50 states of US and Washington, D.C. Additionally, STSGT also outperformed others when forecasting the daily infected cases at the state level, e.g., for all the counties in the State of Michigan. The code and models are publicly available at https://github.com/soumbane/STSGT.
2003.04976
Gaurav Pandey
Gaurav Pandey, Dinesh Raghu and Sachindra Joshi
Mask & Focus: Conversation Modelling by Learning Concepts
AAAI 2020
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequence to sequence models attempt to capture the correlation between all the words in the input and output sequences. While this is quite useful for machine translation where the correlation among the words is indeed quite strong, it becomes problematic for conversation modelling where the correlation is often at a much abstract level. In contrast, humans tend to focus on the essential concepts discussed in the conversation context and generate responses accordingly. In this paper, we attempt to mimic this response generating mechanism by learning the essential concepts in the context and response in an unsupervised manner. The proposed model, referred to as Mask \& Focus maps the input context to a sequence of concepts which are then used to generate the response concepts. Together, the context and the response concepts generate the final response. In order to learn context concepts from the training data automatically, we \emph{mask} words in the input and observe the effect of masking on response generation. We train our model to learn those response concepts that have high mutual information with respect to the context concepts, thereby guiding the model to \emph{focus} on the context concepts. Mask \& Focus achieves significant improvement over the existing baselines in several established metrics for dialogues.
[ { "created": "Tue, 11 Feb 2020 15:11:55 GMT", "version": "v1" } ]
2020-04-02
[ [ "Pandey", "Gaurav", "" ], [ "Raghu", "Dinesh", "" ], [ "Joshi", "Sachindra", "" ] ]
Sequence to sequence models attempt to capture the correlation between all the words in the input and output sequences. While this is quite useful for machine translation where the correlation among the words is indeed quite strong, it becomes problematic for conversation modelling where the correlation is often at a much abstract level. In contrast, humans tend to focus on the essential concepts discussed in the conversation context and generate responses accordingly. In this paper, we attempt to mimic this response generating mechanism by learning the essential concepts in the context and response in an unsupervised manner. The proposed model, referred to as Mask \& Focus maps the input context to a sequence of concepts which are then used to generate the response concepts. Together, the context and the response concepts generate the final response. In order to learn context concepts from the training data automatically, we \emph{mask} words in the input and observe the effect of masking on response generation. We train our model to learn those response concepts that have high mutual information with respect to the context concepts, thereby guiding the model to \emph{focus} on the context concepts. Mask \& Focus achieves significant improvement over the existing baselines in several established metrics for dialogues.
2307.15915
Jin Wang
Jin Wang, Zishan Huang, Hui Xiao, Yinhao Xiao
JFinder: A Novel Architecture for Java Vulnerability Identification Based Quad Self-Attention and Pre-training Mechanism
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software vulnerabilities pose significant risks to computer systems, impacting our daily lives, productivity, and even our health. Identifying and addressing security vulnerabilities in a timely manner is crucial to prevent hacking and data breaches. Unfortunately, current vulnerability identification methods, including classical and deep learning-based approaches, exhibit critical drawbacks that prevent them from meeting the demands of the contemporary software industry. To tackle these issues, we present JFinder, a novel architecture for Java vulnerability identification that leverages quad self-attention and pre-training mechanisms to combine structural information and semantic representations. Experimental results demonstrate that JFinder outperforms all baseline methods, achieving an accuracy of 0.97 on the CWE dataset and an F1 score of 0.84 on the PROMISE dataset. Furthermore, a case study reveals that JFinder can accurately identify four cases of vulnerabilities after patching.
[ { "created": "Sat, 29 Jul 2023 07:02:47 GMT", "version": "v1" } ]
2023-08-01
[ [ "Wang", "Jin", "" ], [ "Huang", "Zishan", "" ], [ "Xiao", "Hui", "" ], [ "Xiao", "Yinhao", "" ] ]
Software vulnerabilities pose significant risks to computer systems, impacting our daily lives, productivity, and even our health. Identifying and addressing security vulnerabilities in a timely manner is crucial to prevent hacking and data breaches. Unfortunately, current vulnerability identification methods, including classical and deep learning-based approaches, exhibit critical drawbacks that prevent them from meeting the demands of the contemporary software industry. To tackle these issues, we present JFinder, a novel architecture for Java vulnerability identification that leverages quad self-attention and pre-training mechanisms to combine structural information and semantic representations. Experimental results demonstrate that JFinder outperforms all baseline methods, achieving an accuracy of 0.97 on the CWE dataset and an F1 score of 0.84 on the PROMISE dataset. Furthermore, a case study reveals that JFinder can accurately identify four cases of vulnerabilities after patching.
cs/0402059
Patrick Baillot
Patrick Baillot, Kazushige Terui
Light types for polynomial time computation in lambda-calculus
20 pages (including 10 pages of appendix). (revised version; in particular section 5 has been modified). A short version is to appear in the proceedings of the conference LICS 2004 (IEEE Computer Society Press)
null
null
null
cs.LO
null
We propose a new type system for lambda-calculus ensuring that well-typed programs can be executed in polynomial time: Dual light affine logic (DLAL). DLAL has a simple type language with a linear and an intuitionistic type arrow, and one modality. It corresponds to a fragment of Light affine logic (LAL). We show that, contrary to LAL, DLAL ensures good properties on lambda-terms: subject reduction is satisfied and a well-typed term admits a polynomial bound on the reduction by any strategy. We establish that, like LAL, DLAL allows the representation of all polytime functions. Finally we give a type inference procedure for propositional DLAL.
[ { "created": "Thu, 26 Feb 2004 15:47:36 GMT", "version": "v1" }, { "created": "Tue, 11 May 2004 15:07:03 GMT", "version": "v2" } ]
2016-08-31
[ [ "Baillot", "Patrick", "" ], [ "Terui", "Kazushige", "" ] ]
We propose a new type system for lambda-calculus ensuring that well-typed programs can be executed in polynomial time: Dual light affine logic (DLAL). DLAL has a simple type language with a linear and an intuitionistic type arrow, and one modality. It corresponds to a fragment of Light affine logic (LAL). We show that, contrary to LAL, DLAL ensures good properties on lambda-terms: subject reduction is satisfied and a well-typed term admits a polynomial bound on the reduction by any strategy. We establish that, like LAL, DLAL allows the representation of all polytime functions. Finally we give a type inference procedure for propositional DLAL.
2011.10428
Simon Hengchen
Jani Marjanen, Elaine Zosa, Simon Hengchen, Lidia Pivovarova, Mikko Tolonen
Topic modelling discourse dynamics in historical newspapers
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper addresses methodological issues in diachronic data analysis for historical research. We apply two families of topic models (LDA and DTM) on a relatively large set of historical newspapers, with the aim of capturing and understanding discourse dynamics. Our case study focuses on newspapers and periodicals published in Finland between 1854 and 1917, but our method can easily be transposed to any diachronic data. Our main contributions are a) a combined sampling, training and inference procedure for applying topic models to huge and imbalanced diachronic text collections; b) a discussion on the differences between two topic models for this type of data; c) quantifying topic prominence for a period and thus a generalization of document-wise topic assignment to a discourse level; and d) a discussion of the role of humanistic interpretation with regard to analysing discourse dynamics through topic models.
[ { "created": "Fri, 20 Nov 2020 14:51:07 GMT", "version": "v1" } ]
2020-11-23
[ [ "Marjanen", "Jani", "" ], [ "Zosa", "Elaine", "" ], [ "Hengchen", "Simon", "" ], [ "Pivovarova", "Lidia", "" ], [ "Tolonen", "Mikko", "" ] ]
This paper addresses methodological issues in diachronic data analysis for historical research. We apply two families of topic models (LDA and DTM) on a relatively large set of historical newspapers, with the aim of capturing and understanding discourse dynamics. Our case study focuses on newspapers and periodicals published in Finland between 1854 and 1917, but our method can easily be transposed to any diachronic data. Our main contributions are a) a combined sampling, training and inference procedure for applying topic models to huge and imbalanced diachronic text collections; b) a discussion on the differences between two topic models for this type of data; c) quantifying topic prominence for a period and thus a generalization of document-wise topic assignment to a discourse level; and d) a discussion of the role of humanistic interpretation with regard to analysing discourse dynamics through topic models.
2003.08211
Shoupu Wan
Shoupu Wan
An Efficient Implementation of Manacher's Algorithm
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manacher's algorithm has been shown to be optimal for the longest palindromic substring problem. Many of the existing implementations of this algorithm, however, unanimously require in-memory construction of an augmented string that is twice as long as the original string. Although it has found widespread use, we found that this preprocessing is neither economical nor necessary. We present a more efficient implementation of Manacher's algorithm based on index mapping that makes the string augmentation process obsolete.
[ { "created": "Tue, 17 Mar 2020 04:26:35 GMT", "version": "v1" }, { "created": "Thu, 19 Mar 2020 01:23:39 GMT", "version": "v2" } ]
2020-03-20
[ [ "Wan", "Shoupu", "" ] ]
Manacher's algorithm has been shown to be optimal for the longest palindromic substring problem. Many of the existing implementations of this algorithm, however, unanimously require in-memory construction of an augmented string that is twice as long as the original string. Although it has found widespread use, we found that this preprocessing is neither economical nor necessary. We present a more efficient implementation of Manacher's algorithm based on index mapping that makes the string augmentation process obsolete.
2209.09338
Skye Purchase
S. Purchase, A. Zhao, R. D. Mullins
Revisiting Embeddings for Graph Neural Networks
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Current graph representation learning techniques use Graph Neural Networks (GNNs) to extract features from dataset embeddings. In this work, we examine the quality of these embeddings and assess how changing them can affect the accuracy of GNNs. We explore different embedding extraction techniques for both images and texts, and find that the performance of different GNN architectures is dependent on the embedding style used. We see a prevalence of bag of words (BoW) embeddings and text classification tasks in available graph datasets. Given the impact embeddings have on GNN performance, this leads to GNNs being optimised for BoW vectors.
[ { "created": "Mon, 19 Sep 2022 20:37:55 GMT", "version": "v1" }, { "created": "Wed, 21 Sep 2022 10:22:53 GMT", "version": "v2" }, { "created": "Thu, 29 Sep 2022 10:37:57 GMT", "version": "v3" }, { "created": "Tue, 29 Nov 2022 13:27:34 GMT", "version": "v4" } ]
2022-11-30
[ [ "Purchase", "S.", "" ], [ "Zhao", "A.", "" ], [ "Mullins", "R. D.", "" ] ]
Current graph representation learning techniques use Graph Neural Networks (GNNs) to extract features from dataset embeddings. In this work, we examine the quality of these embeddings and assess how changing them can affect the accuracy of GNNs. We explore different embedding extraction techniques for both images and texts, and find that the performance of different GNN architectures is dependent on the embedding style used. We see a prevalence of bag of words (BoW) embeddings and text classification tasks in available graph datasets. Given the impact embeddings have on GNN performance, this leads to GNNs being optimised for BoW vectors.
1911.03927
Jinmingwu Jiang
Jinmingwu Jiang, Kaigui Wu
Cooperative Pathfinding based on memory-efficient Multi-agent RRT*
IROS 2020, October 25-29, Las Vegas, NV, USA
null
null
null
cs.MA cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In cooperative pathfinding problems, conflict-free paths that bring several agents from their start locations to their destinations need to be planned. This problem can be efficiently solved by the Multi-agent RRT* (MA-RRT*) algorithm, which is still state-of-the-art in the field of coupled methods. However, the implementation of this algorithm is hindered in systems with limited memory because the number of nodes in the tree grows indefinitely as the paths get optimized. This paper proposes an improved version of MA-RRT*, called Multi-agent RRT* Fixed Node (MA-RRT*FN), which limits the number of nodes stored in the tree by removing the weak nodes on the paths that are not likely to reach the goal. The results show that MA-RRT*FN performs close to MA-RRT* in terms of scalability and solution quality while the memory required is much lower and fixed.
[ { "created": "Sun, 10 Nov 2019 13:21:14 GMT", "version": "v1" }, { "created": "Sat, 16 Nov 2019 12:45:44 GMT", "version": "v2" }, { "created": "Wed, 4 Mar 2020 03:19:25 GMT", "version": "v3" } ]
2020-03-05
[ [ "Jiang", "Jinmingwu", "" ], [ "Wu", "Kaigui", "" ] ]
In cooperative pathfinding problems, conflict-free paths that bring several agents from their start locations to their destinations need to be planned. This problem can be efficiently solved by the Multi-agent RRT* (MA-RRT*) algorithm, which is still state-of-the-art in the field of coupled methods. However, the implementation of this algorithm is hindered in systems with limited memory because the number of nodes in the tree grows indefinitely as the paths get optimized. This paper proposes an improved version of MA-RRT*, called Multi-agent RRT* Fixed Node (MA-RRT*FN), which limits the number of nodes stored in the tree by removing the weak nodes on the paths that are not likely to reach the goal. The results show that MA-RRT*FN performs close to MA-RRT* in terms of scalability and solution quality while the memory required is much lower and fixed.
1701.06432
Alessandro Pluchino
A. Greco, A. Pluchino, S. Caddemi, I. Cali\`o, F. Cannizzaro
On profile reconstruction of Euler-Bernoulli beams by means of an energy based genetic algorithm
27 pages, 7 figures, 6 tables
Engineering with Computers 2019
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the inverse problem related to the identification of the flexural stiffness of an Euler Bernoulli beam in order to reconstruct its profile starting from available response data. The proposed identification procedure makes use of energy measurements and is based on the application of a closed form solution for the static displacements of multi-stepped beams. This solution allows one to easily calculate the energy related to beams modeled with arbitrary multi-step shapes subjected to a transversal roving force, and to compare it with the corresponding data obtained through direct measurements on real beams. The optimal solution which minimizes the difference between measured and calculated data is then sought by means of genetic algorithms. In the paper several different stepped beams are investigated, showing that the proposed procedure in many cases allows the exact beam profile to be identified. However it is shown that in some other cases different multi-step profiles may correspond to very similar static responses, and therefore to comparable minima in the optimization problem, thus complicating the profile identification problem.
[ { "created": "Sat, 14 Jan 2017 08:39:00 GMT", "version": "v1" }, { "created": "Sat, 6 Jul 2019 11:00:56 GMT", "version": "v2" } ]
2019-07-09
[ [ "Greco", "A.", "" ], [ "Pluchino", "A.", "" ], [ "Caddemi", "S.", "" ], [ "Caliò", "I.", "" ], [ "Cannizzaro", "F.", "" ] ]
This paper studies the inverse problem of identifying the flexural stiffness of an Euler-Bernoulli beam in order to reconstruct its profile from available response data. The proposed identification procedure makes use of energy measurements and is based on the application of a closed-form solution for the static displacements of multi-stepped beams. This solution makes it easy to calculate the energy of beams modeled with arbitrary multi-step shapes subjected to a transversal roving force, and to compare it with the corresponding data obtained through direct measurements on real beams. The optimal solution, which minimizes the difference between measured and calculated data, is then sought by means of genetic algorithms. Several different stepped beams are investigated, showing that in many cases the proposed procedure identifies the exact beam profile. However, it is shown that in some other cases different multi-step profiles may correspond to very similar static responses, and therefore to comparable minima in the optimization problem, thus complicating the profile identification problem.
1412.8648
Deliang Fan
Deliang Fan, Yong Shim, Anand Raghunathan, and Kaushik Roy
STT-SNN: A Spin-Transfer-Torque Based Soft-Limiting Non-Linear Neuron for Low-Power Artificial Neural Networks
This paper was submitted to IEEE Transactions on Nanotechnology for review
null
10.1109/TNANO.2015.2437902
null
cs.ET cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed growing interest in the use of Artificial Neural Networks (ANNs) for vision, classification, and inference problems. An artificial neuron sums N weighted inputs and passes the result through a non-linear transfer function. Large-scale ANNs impose very high computing requirements for training and classification, leading to great interest in the use of post-CMOS devices to realize them in an energy-efficient manner. In this paper, we propose a spin-transfer-torque (STT) device based on a Domain Wall Motion (DWM) magnetic strip that can efficiently implement a Soft-limiting Non-linear Neuron (SNN) operating at ultra-low supply voltage and current. In contrast to previous spin-based neurons that can only realize hard-limiting transfer functions, the proposed STT-SNN displays a continuous resistance change with varying input current, and can therefore be employed to implement a soft-limiting neuron transfer function. Soft-limiting neurons are greatly preferred to hard-limiting ones due to their much improved modeling capacity, which leads to higher network accuracy and lower network complexity. We also present an ANN hardware design employing the proposed STT-SNNs and Memristor Crossbar Arrays (MCA) as synapses. The ultra-low voltage operation of the magneto-metallic STT-SNN enables the programmable MCA synapses, computing analog-domain weighted summation of input voltages, to also operate at ultra-low voltage. We modeled the STT-SNN using micro-magnetic simulation and evaluated it using an ANN for character recognition. Comparisons with analog and digital CMOS neurons show that STT-SNNs can achieve more than two orders of magnitude lower energy consumption.
[ { "created": "Tue, 23 Dec 2014 18:00:51 GMT", "version": "v1" } ]
2016-11-18
[ [ "Fan", "Deliang", "" ], [ "Shim", "Yong", "" ], [ "Raghunathan", "Anand", "" ], [ "Roy", "Kaushik", "" ] ]
Recent years have witnessed growing interest in the use of Artificial Neural Networks (ANNs) for vision, classification, and inference problems. An artificial neuron sums N weighted inputs and passes the result through a non-linear transfer function. Large-scale ANNs impose very high computing requirements for training and classification, leading to great interest in the use of post-CMOS devices to realize them in an energy-efficient manner. In this paper, we propose a spin-transfer-torque (STT) device based on a Domain Wall Motion (DWM) magnetic strip that can efficiently implement a Soft-limiting Non-linear Neuron (SNN) operating at ultra-low supply voltage and current. In contrast to previous spin-based neurons that can only realize hard-limiting transfer functions, the proposed STT-SNN displays a continuous resistance change with varying input current, and can therefore be employed to implement a soft-limiting neuron transfer function. Soft-limiting neurons are greatly preferred to hard-limiting ones due to their much improved modeling capacity, which leads to higher network accuracy and lower network complexity. We also present an ANN hardware design employing the proposed STT-SNNs and Memristor Crossbar Arrays (MCA) as synapses. The ultra-low voltage operation of the magneto-metallic STT-SNN enables the programmable MCA synapses, computing analog-domain weighted summation of input voltages, to also operate at ultra-low voltage. We modeled the STT-SNN using micro-magnetic simulation and evaluated it using an ANN for character recognition. Comparisons with analog and digital CMOS neurons show that STT-SNNs can achieve more than two orders of magnitude lower energy consumption.
2310.11446
Pierre Fernandez
Pierre Fernandez, Guillaume Couairon, Teddy Furon, Matthijs Douze
Functional Invariants to Watermark Large Transformers
Published at ICASSP 2024. Webpage at https://pierrefdz.github.io/publications/invariancewm/
null
null
null
cs.CR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
The rapid growth of transformer-based models raises concerns about their integrity and ownership assurance. Watermarking addresses this issue by embedding a unique identifier into the model while preserving its performance. However, most existing approaches require optimizing the weights to imprint the watermark signal, which is not suitable at scale due to the computational cost. This paper explores watermarks with virtually no computational cost, applicable in a non-blind white-box setting (assuming access to both the original and watermarked networks). They generate functionally equivalent copies by leveraging the models' invariance, via operations like dimension permutations or scaling/unscaling. This makes it possible to watermark models without any change to their outputs, and the watermark remains stealthy. Experiments demonstrate the effectiveness of the approach and its robustness against various model transformations (fine-tuning, quantization, pruning), making it a practical solution for protecting the integrity of large models.
[ { "created": "Tue, 17 Oct 2023 17:56:18 GMT", "version": "v1" }, { "created": "Thu, 18 Jan 2024 18:50:55 GMT", "version": "v2" } ]
2024-01-19
[ [ "Fernandez", "Pierre", "" ], [ "Couairon", "Guillaume", "" ], [ "Furon", "Teddy", "" ], [ "Douze", "Matthijs", "" ] ]
The rapid growth of transformer-based models raises concerns about their integrity and ownership assurance. Watermarking addresses this issue by embedding a unique identifier into the model while preserving its performance. However, most existing approaches require optimizing the weights to imprint the watermark signal, which is not suitable at scale due to the computational cost. This paper explores watermarks with virtually no computational cost, applicable in a non-blind white-box setting (assuming access to both the original and watermarked networks). They generate functionally equivalent copies by leveraging the models' invariance, via operations like dimension permutations or scaling/unscaling. This makes it possible to watermark models without any change to their outputs, and the watermark remains stealthy. Experiments demonstrate the effectiveness of the approach and its robustness against various model transformations (fine-tuning, quantization, pruning), making it a practical solution for protecting the integrity of large models.
2309.08897
Yoonchang Sung
Yoonchang Sung, Rahul Shome, Peter Stone
Asynchronous Task Plan Refinement for Multi-Robot Task and Motion Planning
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper explores general multi-robot task and motion planning, where multiple robots in close proximity manipulate objects while satisfying constraints and a given goal. In particular, we formulate the plan refinement problem--which, given a task plan, finds valid assignments of variables corresponding to solution trajectories--as a hybrid constraint satisfaction problem. The proposed algorithm follows several design principles that yield the following features: (1) efficient solution finding due to sequential heuristics and implicit time and roadmap representations, and (2) maximized feasible solution space obtained by introducing minimally necessary coordination-induced constraints and not relying on prevalent simplifications that exist in the literature. The evaluation results demonstrate the planning efficiency of the proposed algorithm, outperforming the synchronous approach in terms of makespan.
[ { "created": "Sat, 16 Sep 2023 06:35:22 GMT", "version": "v1" } ]
2023-09-19
[ [ "Sung", "Yoonchang", "" ], [ "Shome", "Rahul", "" ], [ "Stone", "Peter", "" ] ]
This paper explores general multi-robot task and motion planning, where multiple robots in close proximity manipulate objects while satisfying constraints and a given goal. In particular, we formulate the plan refinement problem--which, given a task plan, finds valid assignments of variables corresponding to solution trajectories--as a hybrid constraint satisfaction problem. The proposed algorithm follows several design principles that yield the following features: (1) efficient solution finding due to sequential heuristics and implicit time and roadmap representations, and (2) maximized feasible solution space obtained by introducing minimally necessary coordination-induced constraints and not relying on prevalent simplifications that exist in the literature. The evaluation results demonstrate the planning efficiency of the proposed algorithm, outperforming the synchronous approach in terms of makespan.
2009.07177
Jason Lee
Jason Lee, Raphael Shu, Kyunghyun Cho
Iterative Refinement in the Continuous Space for Non-Autoregressive Neural Machine Translation
Accepted to EMNLP 2020
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an efficient inference procedure for non-autoregressive machine translation that iteratively refines translation purely in the continuous space. Given a continuous latent variable model for machine translation (Shu et al., 2020), we train an inference network to approximate the gradient of the marginal log probability of the target sentence, using only the latent variable as input. This allows us to use gradient-based optimization to find the target sentence at inference time that approximately maximizes its marginal probability. As each refinement step only involves computation in the latent space of low dimensionality (we use 8 in our experiments), we avoid computational overhead incurred by existing non-autoregressive inference procedures that often refine in token space. We compare our approach to a recently proposed EM-like inference procedure (Shu et al., 2020) that optimizes in a hybrid space, consisting of both discrete and continuous variables. We evaluate our approach on WMT'14 En-De, WMT'16 Ro-En and IWSLT'16 De-En, and observe two advantages over the EM-like inference: (1) it is computationally efficient, i.e. each refinement step is twice as fast, and (2) it is more effective, resulting in higher marginal probabilities and BLEU scores with the same number of refinement steps. On WMT'14 En-De, for instance, our approach is able to decode 6.2 times faster than the autoregressive model with minimal degradation to translation quality (0.9 BLEU).
[ { "created": "Tue, 15 Sep 2020 15:30:14 GMT", "version": "v1" } ]
2020-09-16
[ [ "Lee", "Jason", "" ], [ "Shu", "Raphael", "" ], [ "Cho", "Kyunghyun", "" ] ]
We propose an efficient inference procedure for non-autoregressive machine translation that iteratively refines translation purely in the continuous space. Given a continuous latent variable model for machine translation (Shu et al., 2020), we train an inference network to approximate the gradient of the marginal log probability of the target sentence, using only the latent variable as input. This allows us to use gradient-based optimization to find the target sentence at inference time that approximately maximizes its marginal probability. As each refinement step only involves computation in the latent space of low dimensionality (we use 8 in our experiments), we avoid computational overhead incurred by existing non-autoregressive inference procedures that often refine in token space. We compare our approach to a recently proposed EM-like inference procedure (Shu et al., 2020) that optimizes in a hybrid space, consisting of both discrete and continuous variables. We evaluate our approach on WMT'14 En-De, WMT'16 Ro-En and IWSLT'16 De-En, and observe two advantages over the EM-like inference: (1) it is computationally efficient, i.e. each refinement step is twice as fast, and (2) it is more effective, resulting in higher marginal probabilities and BLEU scores with the same number of refinement steps. On WMT'14 En-De, for instance, our approach is able to decode 6.2 times faster than the autoregressive model with minimal degradation to translation quality (0.9 BLEU).
2304.11527
Jake Buzhardt
Jake Buzhardt, Prashanth Chivkula, and Phanindra Tallapragada
A Pendulum-Driven Legless Rolling Jumping Robot
Final version of paper in IROS 2023. View the supplemental video at https://youtu.be/9hKQilCpeaw
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a novel rolling, jumping robot. The robot consists of a driven pendulum mounted to a wheel in a compact, lightweight, 3D printed design. We show that by driving the pendulum to shift the robot's weight distribution, the robot is able to obtain significant rolling speed, achieve jumps of up to 2.5 body lengths vertically, and clear horizontal distances of over 6 body lengths. The robot's dynamic model is derived and simulation results indicate that it is consistent with the rolling motion and jumping observed on the robot. The ability to both roll and jump effectively using a minimalistic design makes this robot unique and could inspire the use of similar mechanisms on robots intended for applications in which agile locomotion on unstructured terrain is necessary, such as disaster response or planetary exploration.
[ { "created": "Sun, 23 Apr 2023 03:55:52 GMT", "version": "v1" }, { "created": "Thu, 26 Oct 2023 23:50:15 GMT", "version": "v2" } ]
2023-10-30
[ [ "Buzhardt", "Jake", "" ], [ "Chivkula", "Prashanth", "" ], [ "Tallapragada", "Phanindra", "" ] ]
In this paper, we present a novel rolling, jumping robot. The robot consists of a driven pendulum mounted to a wheel in a compact, lightweight, 3D printed design. We show that by driving the pendulum to shift the robot's weight distribution, the robot is able to obtain significant rolling speed, achieve jumps of up to 2.5 body lengths vertically, and clear horizontal distances of over 6 body lengths. The robot's dynamic model is derived and simulation results indicate that it is consistent with the rolling motion and jumping observed on the robot. The ability to both roll and jump effectively using a minimalistic design makes this robot unique and could inspire the use of similar mechanisms on robots intended for applications in which agile locomotion on unstructured terrain is necessary, such as disaster response or planetary exploration.
2311.00164
Dongyue Li
Abhinav Nippani, Dongyue Li, Haotian Ju, Haris N. Koutsopoulos, Hongyang R. Zhang
Graph Neural Networks for Road Safety Modeling: Datasets and Evaluations for Accident Analysis
24 pages. Appeared in NeurIPS 2023 Datasets Track
null
null
null
cs.SI cs.LG
http://creativecommons.org/licenses/by/4.0/
We consider the problem of traffic accident analysis on a road network based on road network connections and traffic volume. Previous works have designed various deep-learning methods using historical records to predict traffic accident occurrences. However, there is a lack of consensus on how accurate existing methods are, and a fundamental issue is the lack of public accident datasets for comprehensive evaluations. This paper constructs a large-scale, unified dataset of traffic accident records from official reports of various states in the US, totaling 9 million records, accompanied by road networks and traffic volume reports. Using this new dataset, we evaluate existing deep-learning methods for predicting the occurrence of accidents on road networks. Our main finding is that graph neural networks such as GraphSAGE can accurately predict the number of accidents on roads with less than 22% mean absolute error (relative to the actual count) and whether an accident will occur or not with over 87% AUROC, averaged over states. We achieve these results by using multitask learning to account for cross-state variabilities (e.g., availability of accident labels) and transfer learning to combine traffic volume with accident prediction. Ablation studies highlight the importance of road graph-structural features, amongst other features. Lastly, we discuss the implications of the analysis and develop a package for easily using our new dataset.
[ { "created": "Tue, 31 Oct 2023 21:43:10 GMT", "version": "v1" }, { "created": "Mon, 12 Feb 2024 17:09:19 GMT", "version": "v2" } ]
2024-02-13
[ [ "Nippani", "Abhinav", "" ], [ "Li", "Dongyue", "" ], [ "Ju", "Haotian", "" ], [ "Koutsopoulos", "Haris N.", "" ], [ "Zhang", "Hongyang R.", "" ] ]
We consider the problem of traffic accident analysis on a road network based on road network connections and traffic volume. Previous works have designed various deep-learning methods using historical records to predict traffic accident occurrences. However, there is a lack of consensus on how accurate existing methods are, and a fundamental issue is the lack of public accident datasets for comprehensive evaluations. This paper constructs a large-scale, unified dataset of traffic accident records from official reports of various states in the US, totaling 9 million records, accompanied by road networks and traffic volume reports. Using this new dataset, we evaluate existing deep-learning methods for predicting the occurrence of accidents on road networks. Our main finding is that graph neural networks such as GraphSAGE can accurately predict the number of accidents on roads with less than 22% mean absolute error (relative to the actual count) and whether an accident will occur or not with over 87% AUROC, averaged over states. We achieve these results by using multitask learning to account for cross-state variabilities (e.g., availability of accident labels) and transfer learning to combine traffic volume with accident prediction. Ablation studies highlight the importance of road graph-structural features, amongst other features. Lastly, we discuss the implications of the analysis and develop a package for easily using our new dataset.
2102.10873
Oskar Allerbo
Oskar Allerbo, Rebecka J\"ornsten
Non-linear, Sparse Dimensionality Reduction via Path Lasso Penalized Autoencoders
null
Journal of Machine Learning Research 22 (2021) 1-28
null
null
cs.LG stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-dimensional data sets are often analyzed and explored via the construction of a latent low-dimensional space which enables convenient visualization and efficient predictive modeling or clustering. For complex data structures, linear dimensionality reduction techniques like PCA may not be sufficiently flexible to enable low-dimensional representation. Non-linear dimension reduction techniques, like kernel PCA and autoencoders, suffer from loss of interpretability since each latent variable depends on all input dimensions. To address this limitation, we here present path lasso penalized autoencoders. This structured regularization enhances interpretability by penalizing each path through the encoder from an input to a latent variable, thus restricting how many input variables are represented in each latent dimension. Our algorithm uses a group lasso penalty and non-negative matrix factorization to construct a sparse, non-linear latent representation. We compare the path lasso regularized autoencoder to PCA, sparse PCA, autoencoders and sparse autoencoders on real and simulated data sets. We show that the algorithm exhibits much lower reconstruction errors than sparse PCA and parameter-wise lasso regularized autoencoders for low-dimensional representations. Moreover, path lasso representations provide a more accurate reconstruction match, i.e. preserved relative distances between objects in the original and reconstructed spaces.
[ { "created": "Mon, 22 Feb 2021 10:14:46 GMT", "version": "v1" }, { "created": "Wed, 20 Oct 2021 09:37:57 GMT", "version": "v2" } ]
2022-05-25
[ [ "Allerbo", "Oskar", "" ], [ "Jörnsten", "Rebecka", "" ] ]
High-dimensional data sets are often analyzed and explored via the construction of a latent low-dimensional space which enables convenient visualization and efficient predictive modeling or clustering. For complex data structures, linear dimensionality reduction techniques like PCA may not be sufficiently flexible to enable low-dimensional representation. Non-linear dimension reduction techniques, like kernel PCA and autoencoders, suffer from loss of interpretability since each latent variable depends on all input dimensions. To address this limitation, we here present path lasso penalized autoencoders. This structured regularization enhances interpretability by penalizing each path through the encoder from an input to a latent variable, thus restricting how many input variables are represented in each latent dimension. Our algorithm uses a group lasso penalty and non-negative matrix factorization to construct a sparse, non-linear latent representation. We compare the path lasso regularized autoencoder to PCA, sparse PCA, autoencoders and sparse autoencoders on real and simulated data sets. We show that the algorithm exhibits much lower reconstruction errors than sparse PCA and parameter-wise lasso regularized autoencoders for low-dimensional representations. Moreover, path lasso representations provide a more accurate reconstruction match, i.e. preserved relative distances between objects in the original and reconstructed spaces.
1807.00935
Youhong Feng
Youhong Feng, Shihao Yan, and Zhen Yang
Secure Transmission to the Strong User with Optimal Power Allocation in NOMA
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With non-orthogonal multiple access (NOMA), we tackle the maximization of the secrecy rate for the strong user subject to a maximum allowable secrecy outage probability, while guaranteeing a constraint on the transmission rate to the weak user. For the first time, the dependence between the eavesdropper's ability to conduct successive interference cancellation and her channel quality is considered. We determine the optimal power allocation and redundancy rate, based on which the cost of security in terms of the reduction in the strong user's secrecy rate is examined and the benefits of NOMA for secure transmissions are explicitly revealed.
[ { "created": "Tue, 3 Jul 2018 00:28:59 GMT", "version": "v1" } ]
2018-07-04
[ [ "Feng", "Youhong", "" ], [ "Yan", "Shihao", "" ], [ "Yang", "Zhen", "" ] ]
With non-orthogonal multiple access (NOMA), we tackle the maximization of the secrecy rate for the strong user subject to a maximum allowable secrecy outage probability, while guaranteeing a constraint on the transmission rate to the weak user. For the first time, the dependence between the eavesdropper's ability to conduct successive interference cancellation and her channel quality is considered. We determine the optimal power allocation and redundancy rate, based on which the cost of security in terms of the reduction in the strong user's secrecy rate is examined and the benefits of NOMA for secure transmissions are explicitly revealed.
2004.14454
Preslav Nakov
Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, Preslav Nakov
SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification
offensive language, hate speech, cyberbullying, cyber-aggression, taxonomy for offensive language identification
ACL-2021 (Findings)
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited in size and it might be biased towards offensive language as it was collected using keywords. In this work, we present SOLID, an expanded dataset, where the tweets were collected in a more principled manner. SOLID contains over nine million English tweets labeled in a semi-supervised fashion. We demonstrate that using SOLID along with OLID yields sizable performance gains on the OLID test set for two different models, especially for the lower levels of the taxonomy.
[ { "created": "Wed, 29 Apr 2020 20:02:58 GMT", "version": "v1" }, { "created": "Fri, 24 Sep 2021 16:36:35 GMT", "version": "v2" } ]
2021-09-27
[ [ "Rosenthal", "Sara", "" ], [ "Atanasova", "Pepa", "" ], [ "Karadzhov", "Georgi", "" ], [ "Zampieri", "Marcos", "" ], [ "Nakov", "Preslav", "" ] ]
The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited in size and it might be biased towards offensive language as it was collected using keywords. In this work, we present SOLID, an expanded dataset, where the tweets were collected in a more principled manner. SOLID contains over nine million English tweets labeled in a semi-supervised fashion. We demonstrate that using SOLID along with OLID yields sizable performance gains on the OLID test set for two different models, especially for the lower levels of the taxonomy.
2004.03719
Christoph Strnadl
Christoph F. Strnadl
The Mathematical Syntax of Architectures
22 pages, 4 figures, 1 table, 12 definitions, 2 theorems, 1 lemma, 1 corollary. This is a considerably extended version of the initial submission with new material in almost every section including minor technical amendments and (typographical) corrections
null
null
Report no.: SAG-CTO-20-002
cs.LO cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite several (accepted) standards, core notions typically employed in information technology or system engineering architectures lack the precise and exact foundations encountered in logic, algebra, and other branches of mathematics. In this contribution we define the syntactical aspects of the term architecture in a mathematically rigorous way. We motivate our particular choice by demonstrating (i) how commonly understood and expected properties of an architecture--as defined by various standards--can be suitably identified or derived within our formalization, (ii) how our concept is fully compatible with real life (business) architectures, and (iii) how our definition complements recent foundational work in this area (Wilkinson 2018, Dickersen 2020). We furthermore develop a rigorous notion of architectural similarity based on the notion of homomorphisms allowing the class of architectures to be regarded as a category, Arch. We demonstrate the applicability of our concepts to theory by deriving theorems on the classification of certain types of architectures.
[ { "created": "Tue, 7 Apr 2020 21:18:31 GMT", "version": "v1" }, { "created": "Mon, 18 May 2020 08:19:16 GMT", "version": "v2" } ]
2020-05-19
[ [ "Strnadl", "Christoph F.", "" ] ]
Despite several (accepted) standards, core notions typically employed in information technology or system engineering architectures lack the precise and exact foundations encountered in logic, algebra, and other branches of mathematics. In this contribution we define the syntactical aspects of the term architecture in a mathematically rigorous way. We motivate our particular choice by demonstrating (i) how commonly understood and expected properties of an architecture--as defined by various standards--can be suitably identified or derived within our formalization, (ii) how our concept is fully compatible with real life (business) architectures, and (iii) how our definition complements recent foundational work in this area (Wilkinson 2018, Dickersen 2020). We furthermore develop a rigorous notion of architectural similarity based on the notion of homomorphisms allowing the class of architectures to be regarded as a category, Arch. We demonstrate the applicability of our concepts to theory by deriving theorems on the classification of certain types of architectures.
1311.5068
Alvaro Martinez
A. Mart\'inez-P\'erez
Gromov-Hausdorff stability of linkage-based hierarchical clustering methods
25 pages, 5 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A hierarchical clustering method is stable if small perturbations of the data set produce small perturbations in the result. These perturbations are measured using the Gromov-Hausdorff metric. We study the stability of linkage-based hierarchical clustering methods. We obtain that, under some basic conditions, standard linkage-based methods are semi-stable. This means that they are stable if the input data is close enough to an ultrametric space. We prove that, apart from exotic examples, introducing any unchaining condition in the algorithm always produces unstable methods.
[ { "created": "Wed, 20 Nov 2013 14:31:00 GMT", "version": "v1" } ]
2013-11-21
[ [ "Martínez-Pérez", "A.", "" ] ]
A hierarchical clustering method is stable if small perturbations of the data set produce small perturbations in the result. These perturbations are measured using the Gromov-Hausdorff metric. We study the stability of linkage-based hierarchical clustering methods. We obtain that, under some basic conditions, standard linkage-based methods are semi-stable. This means that they are stable if the input data is close enough to an ultrametric space. We prove that, apart from exotic examples, introducing any unchaining condition in the algorithm always produces unstable methods.
2404.15129
Sara Dadjouy
Sara Dadjouy, Hedieh Sajedi
Gallbladder Cancer Detection in Ultrasound Images based on YOLO and Faster R-CNN
Published in 2024 10th International Conference on Artificial Intelligence and Robotics (QICAR)
2024 10th International Conference on Artificial Intelligence and Robotics (QICAR) (pp. 227-231). IEEE
10.1109/QICAR61538.2024.10496645
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Medical image analysis is a significant application of artificial intelligence for disease diagnosis. A crucial step in this process is the identification of regions of interest within the images. This task can be automated using object detection algorithms. YOLO and Faster R-CNN are renowned for such algorithms, each with its own strengths and weaknesses. This study aims to explore the advantages of both techniques to select more accurate bounding boxes for gallbladder detection from ultrasound images, thereby enhancing gallbladder cancer classification. A fusion method that leverages the benefits of both techniques is presented in this study. The proposed method demonstrated superior classification performance, with an accuracy of 92.62%, compared to the individual use of Faster R-CNN and YOLOv8, which yielded accuracies of 90.16% and 82.79%, respectively.
[ { "created": "Tue, 23 Apr 2024 15:29:02 GMT", "version": "v1" } ]
2024-04-24
[ [ "Dadjouy", "Sara", "" ], [ "Sajedi", "Hedieh", "" ] ]
Medical image analysis is a significant application of artificial intelligence for disease diagnosis. A crucial step in this process is the identification of regions of interest within the images. This task can be automated using object detection algorithms. YOLO and Faster R-CNN are renowned for such algorithms, each with its own strengths and weaknesses. This study aims to explore the advantages of both techniques to select more accurate bounding boxes for gallbladder detection from ultrasound images, thereby enhancing gallbladder cancer classification. A fusion method that leverages the benefits of both techniques is presented in this study. The proposed method demonstrated superior classification performance, with an accuracy of 92.62%, compared to the individual use of Faster R-CNN and YOLOv8, which yielded accuracies of 90.16% and 82.79%, respectively.
2102.02543
Vittorio Lippi
Vittorio Lippi, Christoph Maurer and Thomas Mergner
The Importance of Models in Data Analysis with Small Human Movement Datasets -- Inspirations from Neurorobotics Applied to Posture Control of Humanoids and Humans
Presented at ICPRAM 2021, The International Conference on Pattern Recognition Applications and Methods
Proceedings of the 10th International Conference on Pattern Recognition Applications and Methods - Volume 1: ICPRAM, (2021) ISBN 978-989-758-486-2, pages 579-585
10.5220/0010297005790585
null
cs.RO cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
This work presents a system identification procedure based on Convolutional Neural Networks (CNN) for human posture control using the DEC (Disturbance Estimation and Compensation) parametric model. The modular structure of the proposed control model inspired the design of a modular identification procedure, in the sense that the same neural network is used to identify the parameters of the modules controlling different degrees of freedom. In this way the presented examples of body sway induced by external stimuli provide several training samples at once.
[ { "created": "Thu, 4 Feb 2021 11:02:11 GMT", "version": "v1" } ]
2021-03-08
[ [ "Lippi", "Vittorio", "" ], [ "Maurer", "Christoph", "" ], [ "Mergner", "Thomas", "" ] ]
This work presents a system identification procedure based on Convolutional Neural Networks (CNN) for human posture control using the DEC (Disturbance Estimation and Compensation) parametric model. The modular structure of the proposed control model inspired the design of a modular identification procedure, in the sense that the same neural network is used to identify the parameters of the modules controlling different degrees of freedom. In this way the presented examples of body sway induced by external stimuli provide several training samples at once.
2309.03851
Ahmed Shahin
Ahmed H. Shahin, An Zhao, Alexander C. Whitehead, Daniel C. Alexander, Joseph Jacob, David Barber
CenTime: Event-Conditional Modelling of Censoring in Survival Analysis
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Survival analysis is a valuable tool for estimating the time until specific events, such as death or cancer recurrence, based on baseline observations. This is particularly useful in healthcare to prognostically predict clinically important events based on patient data. However, existing approaches often have limitations; some focus only on ranking patients by survivability, neglecting to estimate the actual event time, while others treat the problem as a classification task, ignoring the inherent time-ordered structure of the events. Furthermore, the effective utilization of censored samples - training data points where the exact event time is unknown - is essential for improving the predictive accuracy of the model. In this paper, we introduce CenTime, a novel approach to survival analysis that directly estimates the time to event. Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce. We demonstrate that our approach forms a consistent estimator for the event model parameters, even in the absence of uncensored data. Furthermore, CenTime is easily integrated with deep learning models with no restrictions on batch size or the number of uncensored samples. We compare our approach with standard survival analysis methods, including the Cox proportional-hazard model and DeepHit. Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance. Our implementation is publicly available at https://github.com/ahmedhshahin/CenTime.
[ { "created": "Thu, 7 Sep 2023 17:07:33 GMT", "version": "v1" }, { "created": "Fri, 15 Sep 2023 10:27:43 GMT", "version": "v2" }, { "created": "Wed, 10 Jan 2024 16:25:47 GMT", "version": "v3" } ]
2024-01-11
[ [ "Shahin", "Ahmed H.", "" ], [ "Zhao", "An", "" ], [ "Whitehead", "Alexander C.", "" ], [ "Alexander", "Daniel C.", "" ], [ "Jacob", "Joseph", "" ], [ "Barber", "David", "" ] ]
Survival analysis is a valuable tool for estimating the time until specific events, such as death or cancer recurrence, based on baseline observations. This is particularly useful in healthcare to prognostically predict clinically important events based on patient data. However, existing approaches often have limitations; some focus only on ranking patients by survivability, neglecting to estimate the actual event time, while others treat the problem as a classification task, ignoring the inherent time-ordered structure of the events. Furthermore, the effective utilization of censored samples - training data points where the exact event time is unknown - is essential for improving the predictive accuracy of the model. In this paper, we introduce CenTime, a novel approach to survival analysis that directly estimates the time to event. Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce. We demonstrate that our approach forms a consistent estimator for the event model parameters, even in the absence of uncensored data. Furthermore, CenTime is easily integrated with deep learning models with no restrictions on batch size or the number of uncensored samples. We compare our approach with standard survival analysis methods, including the Cox proportional-hazard model and DeepHit. Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance. Our implementation is publicly available at https://github.com/ahmedhshahin/CenTime.
2212.06874
Craig Knuth
Craig Knuth, Glen Chou, Jamie Reese, Joe Moore
Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems
Submitted to ICRA 2023
null
null
null
cs.RO cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method for providing statistical guarantees on runtime safety and goal reachability for integrated planning and control of a class of systems with unknown nonlinear stochastic underactuated dynamics. Specifically, given a dynamics dataset, our method jointly learns a mean dynamics model, a spatially-varying disturbance bound that captures the effect of noise and model mismatch, and a feedback controller based on contraction theory that stabilizes the learned dynamics. We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound. We employ techniques from Extreme Value Theory (EVT) to estimate, to a specified level of confidence, several constants which characterize the learned components and govern the size of the tracking error bound. This ensures plans are guaranteed to be safely tracked at runtime. We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot, whereas baselines that ignore the model error and stochasticity are unsafe.
[ { "created": "Tue, 13 Dec 2022 19:38:39 GMT", "version": "v1" } ]
2022-12-15
[ [ "Knuth", "Craig", "" ], [ "Chou", "Glen", "" ], [ "Reese", "Jamie", "" ], [ "Moore", "Joe", "" ] ]
We present a method for providing statistical guarantees on runtime safety and goal reachability for integrated planning and control of a class of systems with unknown nonlinear stochastic underactuated dynamics. Specifically, given a dynamics dataset, our method jointly learns a mean dynamics model, a spatially-varying disturbance bound that captures the effect of noise and model mismatch, and a feedback controller based on contraction theory that stabilizes the learned dynamics. We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound. We employ techniques from Extreme Value Theory (EVT) to estimate, to a specified level of confidence, several constants which characterize the learned components and govern the size of the tracking error bound. This ensures plans are guaranteed to be safely tracked at runtime. We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot, whereas baselines that ignore the model error and stochasticity are unsafe.
2202.02666
Bangalore Ravi Kiran
Weishuang Zhang, B Ravi Kiran, Thomas Gauthier, Yanis Mazouz, Theo Steger
Simulation-to-Reality domain adaptation for offline 3D object annotation on pointclouds with correlation alignment
Accepted at IMPROVE 2022
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Annotating objects with 3D bounding boxes in LiDAR pointclouds is a costly human driven process in an autonomous driving perception system. In this paper, we present a method to semi-automatically annotate real-world pointclouds collected by deployment vehicles using simulated data. We train a 3D object detector model on labeled simulated data from CARLA jointly with real world pointclouds from our target vehicle. The supervised object detection loss is augmented with a CORAL loss term to reduce the distance between labeled simulated and unlabeled real pointcloud feature representations. The goal here is to learn representations that are invariant to simulated (labeled) and real-world (unlabeled) target domains. We also provide an updated survey on domain adaptation methods for pointclouds.
[ { "created": "Sun, 6 Feb 2022 00:40:18 GMT", "version": "v1" }, { "created": "Sat, 26 Feb 2022 13:32:07 GMT", "version": "v2" } ]
2022-03-01
[ [ "Zhang", "Weishuang", "" ], [ "Kiran", "B Ravi", "" ], [ "Gauthier", "Thomas", "" ], [ "Mazouz", "Yanis", "" ], [ "Steger", "Theo", "" ] ]
Annotating objects with 3D bounding boxes in LiDAR pointclouds is a costly human driven process in an autonomous driving perception system. In this paper, we present a method to semi-automatically annotate real-world pointclouds collected by deployment vehicles using simulated data. We train a 3D object detector model on labeled simulated data from CARLA jointly with real world pointclouds from our target vehicle. The supervised object detection loss is augmented with a CORAL loss term to reduce the distance between labeled simulated and unlabeled real pointcloud feature representations. The goal here is to learn representations that are invariant to simulated (labeled) and real-world (unlabeled) target domains. We also provide an updated survey on domain adaptation methods for pointclouds.
2309.06519
Ioannis Faros
Ioannis Faros and Aditya Dave and Andreas A. Malikopoulos
A Q-learning Approach for Adherence-Aware Recommendations
null
IEEE Control Systems Letters (L-CSS), Vol 7, 2023
10.1109/LCSYS.2023.3339591
null
cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many real-world scenarios involving high-stakes and safety implications, a human decision-maker (HDM) may receive recommendations from an artificial intelligence while holding the ultimate responsibility of making decisions. In this letter, we develop an "adherence-aware Q-learning" algorithm to address this problem. The algorithm learns the "adherence level" that captures the frequency with which an HDM follows the recommended actions and derives the best recommendation policy in real time. We prove the convergence of the proposed Q-learning algorithm to the optimal value and evaluate its performance across various scenarios.
[ { "created": "Tue, 12 Sep 2023 18:50:24 GMT", "version": "v1" } ]
2024-07-18
[ [ "Faros", "Ioannis", "" ], [ "Dave", "Aditya", "" ], [ "Malikopoulos", "Andreas A.", "" ] ]
In many real-world scenarios involving high-stakes and safety implications, a human decision-maker (HDM) may receive recommendations from an artificial intelligence while holding the ultimate responsibility of making decisions. In this letter, we develop an "adherence-aware Q-learning" algorithm to address this problem. The algorithm learns the "adherence level" that captures the frequency with which an HDM follows the recommended actions and derives the best recommendation policy in real time. We prove the convergence of the proposed Q-learning algorithm to the optimal value and evaluate its performance across various scenarios.
1911.01370
Wataru Shimoda
Wataru Shimoda, Keiji Yanai
Self-Supervised Difference Detection for Weakly-Supervised Semantic Segmentation
ICCV 2019, source codes: https://github.com/shimoda-uec/ssdd
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To minimize the annotation costs associated with the training of semantic segmentation models, researchers have extensively investigated weakly-supervised segmentation approaches. In the current weakly-supervised segmentation methods, the most widely adopted approach is based on visualization. However, the visualization results are not generally equal to semantic segmentation. Therefore, to perform accurate semantic segmentation under the weakly supervised condition, it is necessary to consider the mapping functions that convert the visualization results into semantic segmentation. For such mapping functions, the conditional random field and iterative re-training using the outputs of a segmentation model are usually used. However, these methods do not always guarantee improvements in accuracy; therefore, if we apply these mapping functions iteratively multiple times, eventually the accuracy will not improve or will decrease. In this paper, to make the most of such mapping functions, we assume that the results of the mapping function include noise, and we improve the accuracy by removing noise. To achieve our aim, we propose the self-supervised difference detection module, which estimates noise from the results of the mapping functions by predicting the difference between the segmentation masks before and after the mapping. We verified the effectiveness of the proposed method by performing experiments on the PASCAL Visual Object Classes 2012 dataset, and we achieved 64.9\% in the val set and 65.5\% in the test set. Both of the results become new state-of-the-art under the same setting of weakly supervised semantic segmentation.
[ { "created": "Mon, 4 Nov 2019 17:57:23 GMT", "version": "v1" }, { "created": "Tue, 12 Nov 2019 14:40:59 GMT", "version": "v2" } ]
2019-11-13
[ [ "Shimoda", "Wataru", "" ], [ "Yanai", "Keiji", "" ] ]
To minimize the annotation costs associated with the training of semantic segmentation models, researchers have extensively investigated weakly-supervised segmentation approaches. In the current weakly-supervised segmentation methods, the most widely adopted approach is based on visualization. However, the visualization results are not generally equal to semantic segmentation. Therefore, to perform accurate semantic segmentation under the weakly supervised condition, it is necessary to consider the mapping functions that convert the visualization results into semantic segmentation. For such mapping functions, the conditional random field and iterative re-training using the outputs of a segmentation model are usually used. However, these methods do not always guarantee improvements in accuracy; therefore, if we apply these mapping functions iteratively multiple times, eventually the accuracy will not improve or will decrease. In this paper, to make the most of such mapping functions, we assume that the results of the mapping function include noise, and we improve the accuracy by removing noise. To achieve our aim, we propose the self-supervised difference detection module, which estimates noise from the results of the mapping functions by predicting the difference between the segmentation masks before and after the mapping. We verified the effectiveness of the proposed method by performing experiments on the PASCAL Visual Object Classes 2012 dataset, and we achieved 64.9\% in the val set and 65.5\% in the test set. Both of the results become new state-of-the-art under the same setting of weakly supervised semantic segmentation.
1902.09887
Qianyi Wu
Zi-Hang Jiang, Qianyi Wu, Keyu Chen and Juyong Zhang
Disentangled Representation Learning for 3D Face Shape
15 pages, 8 figures. CVPR 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present a novel strategy to design disentangled 3D face shape representation. Specifically, a given 3D face shape is decomposed into identity part and expression part, which are both encoded and decoded in a nonlinear way. To solve this problem, we propose an attribute decomposition framework for 3D face mesh. To better represent face shapes which are usually nonlinear deformed between each other, the face shapes are represented by a vertex based deformation representation rather than Euclidean coordinates. The experimental results demonstrate that our method has better performance than existing methods on decomposing the identity and expression parts. Moreover, more natural expression transfer results can be achieved with our method than existing methods.
[ { "created": "Tue, 26 Feb 2019 12:22:18 GMT", "version": "v1" }, { "created": "Sun, 3 Mar 2019 06:38:24 GMT", "version": "v2" } ]
2019-03-05
[ [ "Jiang", "Zi-Hang", "" ], [ "Wu", "Qianyi", "" ], [ "Chen", "Keyu", "" ], [ "Zhang", "Juyong", "" ] ]
In this paper, we present a novel strategy to design disentangled 3D face shape representation. Specifically, a given 3D face shape is decomposed into identity part and expression part, which are both encoded and decoded in a nonlinear way. To solve this problem, we propose an attribute decomposition framework for 3D face mesh. To better represent face shapes which are usually nonlinear deformed between each other, the face shapes are represented by a vertex based deformation representation rather than Euclidean coordinates. The experimental results demonstrate that our method has better performance than existing methods on decomposing the identity and expression parts. Moreover, more natural expression transfer results can be achieved with our method than existing methods.