Dataset schema (column name: type, value-length range):
- id: string, length 9–10
- submitter: string, length 1–64
- authors: string, length 4–20.7k
- title: string, length 4–246
- comments: string, length 1–523
- journal-ref: string, length 4–404
- doi: string, length 11–153
- report-no: string, length 2–254
- categories: string, length 5–98
- license: string class, 9 distinct values
- orig_abstract: string, length 14–3.35k
- versions: list, length 1–60
- update_date: string, length 10–10
- authors_parsed: list, length 1–1.35k
- abstract: string, length 11–3.34k
1906.11278
Fatemeh Kazemi
Fatemeh Kazemi, Esmaeil Karimi, Anoosheh Heidarzadeh, and Alex Sprintson
Private Information Retrieval with Private Coded Side Information: The Multi-Server Case
11 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we consider the multi-server setting of the Private Information Retrieval with Private Coded Side Information (PIR-PCSI) problem. In this problem, there is a database of $K$ messages whose copies are replicated across $N$ servers, and there is a user who knows a random linear combination of a random subset of $M$ messages in the database as side information. The user wishes to download one message from the servers while protecting the identities of both the demand message and the messages forming the side information. We assume that the servers know the number of messages forming the user's side information in advance, whereas the indices of these messages and their coefficients in the side information are not known to any of the servers a priori. Our goal is to characterize (or derive a lower bound on) the capacity, i.e., the maximum achievable download rate, for the following two settings. In the first setting, the set of messages forming the linear combination available to the user as side information does not include the user's demanded message. For this setting, we show that the capacity is equal to $\left(1+{1}/{N}+\dots+{1}/{N^{K-M-1}}\right)^{-1}$. In the second setting, the demand message contributes to the linear combination available to the user as side information, i.e., the demand message is one of the messages that form the user's side information. For this setting, we show that the capacity is lower-bounded by $\left(1+{1}/{N}+\dots+{1}/{N^{K-M}}\right)^{-1}$. The proposed achievability schemes and proof techniques leverage ideas from both our recent methods proposed for the single-server PIR-PCSI problem and the techniques proposed by Sun and Jafar for the multi-server private computation problem.
[ { "created": "Wed, 26 Jun 2019 18:12:01 GMT", "version": "v1" } ]
2019-06-28
[ [ "Kazemi", "Fatemeh", "" ], [ "Karimi", "Esmaeil", "" ], [ "Heidarzadeh", "Anoosheh", "" ], [ "Sprintson", "Alex", "" ] ]
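The two capacity expressions in the abstract above are truncated geometric sums and are easy to sanity-check numerically. A minimal sketch (function names are illustrative, not from the paper):

```python
def pir_pcsi_capacity(N, K, M):
    # First setting (demand message not in the side information):
    # capacity = (1 + 1/N + ... + 1/N^{K-M-1})^{-1}
    return 1.0 / sum(N ** -i for i in range(K - M))

def pir_pcsi_capacity_lb(N, K, M):
    # Second setting (demand message contributes to the side information):
    # lower bound = (1 + 1/N + ... + 1/N^{K-M})^{-1}
    return 1.0 / sum(N ** -i for i in range(K - M + 1))
```

With M = 0 the first expression reduces to the classic multi-server PIR capacity (1 + 1/N + ... + 1/N^{K-1})^{-1}, as one would expect when no side information is available.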
2309.04025
Kashyap Todi
Kashyap Todi, Tanya R. Jonker
A Framework for Computational Design and Adaptation of Extended Reality User Interfaces
5 pages, CHI 2023 Workshop on The Future of Computational Approaches for Understanding and Adapting User Interfaces
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
To facilitate high quality interaction during the regular use of computing systems, it is essential that the user interface (UI) deliver content and components in an appropriate manner. Although extended reality (XR) is emerging as a new computing platform, we still have a limited understanding of how best to design and present interactive content to users in such immersive environments. Adaptive UIs offer a promising approach for optimal presentation in XR as the user's environment, tasks, capabilities, and preferences vary under changing context. In this position paper, we present a design framework for adapting various characteristics of content presented in XR. We frame these as five considerations that need to be taken into account for adaptive XR UIs: What?, How Much?, Where?, How?, and When?. With this framework, we review literature on UI design and adaptation to reflect on approaches that have been adopted or developed in the past towards identifying current gaps and challenges, and opportunities for applying such approaches in XR. Using our framework, future work could identify and develop novel computational approaches for achieving successful adaptive user interfaces in such immersive environments.
[ { "created": "Thu, 7 Sep 2023 21:37:52 GMT", "version": "v1" } ]
2023-09-11
[ [ "Todi", "Kashyap", "" ], [ "Jonker", "Tanya R.", "" ] ]
2102.04828
Sebastian U. Stich
Lingjing Kong, Tao Lin, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich
Consensus Control for Decentralized Deep Learning
LK and TL contribute equally - ICML 2021
Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139, 2021
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters. Experiments in earlier works reveal that, even in a data-center setup, decentralized training often suffers from a degradation in model quality: the training and test performance of models trained in a decentralized fashion is in general worse than that of models trained in a centralized fashion, and this performance drop is impacted by parameters such as network size, communication topology and data partitioning. We identify the changing consensus distance between devices as a key parameter to explain the gap between centralized and decentralized training. We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart. We empirically validate that the relation between generalization performance and consensus distance is consistent with this theoretical observation. Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop. To this end, we provide practical training guidelines and demonstrate their effectiveness on the data-center setup as an important first step.
[ { "created": "Tue, 9 Feb 2021 13:58:33 GMT", "version": "v1" }, { "created": "Fri, 18 Jun 2021 08:15:00 GMT", "version": "v2" } ]
2021-06-21
[ [ "Kong", "Lingjing", "" ], [ "Lin", "Tao", "" ], [ "Koloskova", "Anastasia", "" ], [ "Jaggi", "Martin", "" ], [ "Stich", "Sebastian U.", "" ] ]
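A code sketch of the quantity this abstract centers on, under a commonly used definition (the paper's exact normalization may differ): consensus distance measures how far the workers' local models have drifted from their average.

```python
import numpy as np

def consensus_distance(worker_params):
    # worker_params: list of per-worker parameter vectors (same shape).
    stacked = np.stack(worker_params)      # (n_workers, dim)
    mean = stacked.mean(axis=0)            # the "virtual" averaged model
    # Root of the mean squared deviation from the average model.
    return np.sqrt(((stacked - mean) ** 2).sum(axis=1).mean())
```

When all workers hold identical parameters the distance is zero, matching the centralized regime; the abstract's claim is that keeping this quantity below a critical value preserves centralized-rate convergence.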
1507.05283
Kumar Sankar Ray
Kingshuk Chatterjee, Kumar Sankar Ray
Reversible Watson-Crick Automata
arXiv admin note: text overlap with arXiv:1507.05282
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Watson-Crick automata are finite automata working on double strands. Extensive research has already been done on non-deterministic and on deterministic Watson-Crick automata. In this paper, we introduce a new model of Watson-Crick automata that is reversible in nature, named reversible Watson-Crick automata, and explore its computational power. We show that even though the model is reversible and one-way, it accepts all regular languages. We also analyze the state complexity of the above model with respect to non-deterministic block automata and non-deterministic finite automata, and establish its superiority. We further explore the relation of the reversible model to twin-shuffle languages and recursively enumerable languages.
[ { "created": "Sun, 19 Jul 2015 13:00:59 GMT", "version": "v1" }, { "created": "Wed, 9 Dec 2015 07:54:05 GMT", "version": "v2" } ]
2015-12-10
[ [ "Chatterjee", "Kingshuk", "" ], [ "Ray", "Kumar Sankar", "" ] ]
2303.10561
Peng Zou
Peng Zou, Rui Wang, Kehua Wen, Yasi Peng and Xiao Sun
Spatial-temporal Transformer for Affective Behavior Analysis
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-the-wild affective behavior analysis is an important area of study. In this paper, we present our solutions for the 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW), which includes the V-A Estimation, Facial Expression Classification, and AU Detection sub-challenges. We propose a Transformer Encoder with Multi-Head Attention framework to learn the distribution of both the spatial and temporal features. In addition, various effective data augmentation strategies are employed to alleviate sample imbalance during model training. The results demonstrate the effectiveness of our proposed model on the Aff-Wild2 dataset.
[ { "created": "Sun, 19 Mar 2023 04:34:17 GMT", "version": "v1" } ]
2023-03-21
[ [ "Zou", "Peng", "" ], [ "Wang", "Rui", "" ], [ "Wen", "Kehua", "" ], [ "Peng", "Yasi", "" ], [ "Sun", "Xiao", "" ] ]
2011.11722
Deepali Jain
Deepali Jain, Atil Iscen, Ken Caluwaerts
From Pixels to Legs: Hierarchical Learning of Quadruped Locomotion
null
4th Conference on Robot Learning (CoRL 2020), Cambridge MA, USA
null
null
cs.RO cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Legged robots navigating crowded scenes and complex terrains in the real world are required to execute dynamic leg movements while processing visual input for obstacle avoidance and path planning. We show that a quadruped robot can acquire both of these skills by means of hierarchical reinforcement learning (HRL). By virtue of their hierarchical structure, our policies learn to implicitly break down this joint problem by concurrently learning High Level (HL) and Low Level (LL) neural network policies. These two levels are connected by a low-dimensional hidden layer, which we call the latent command. HL receives a first-person camera view, whereas LL receives the latent command from HL and the robot's on-board sensors to control its actuators. We train policies to walk in two different environments: a curved cliff and a maze. We show that hierarchical policies can concurrently learn to locomote and navigate in these environments, and show they are more efficient than non-hierarchical neural network policies. This architecture also allows for knowledge reuse across tasks. LL networks trained on one task can be transferred to a new task in a new environment. Finally, HL, which processes camera images, can be evaluated at much lower and varying frequencies compared to LL, thus reducing computation times and bandwidth requirements.
[ { "created": "Mon, 23 Nov 2020 20:55:54 GMT", "version": "v1" } ]
2020-12-01
[ [ "Jain", "Deepali", "" ], [ "Iscen", "Atil", "" ], [ "Caluwaerts", "Ken", "" ] ]
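The two-level structure this abstract describes can be sketched in a few lines (shapes and the linear "policies" are placeholders, not the trained networks): HL maps a camera image to a low-dimensional latent command, and LL maps the latent command plus proprioceptive sensor readings to actuator targets. Because LL only consumes the latent command, HL can run at a lower rate and reuse its last output between updates.

```python
import numpy as np

def hl_policy(image, W_hl):
    # High Level: first-person camera image -> low-dimensional latent command.
    return W_hl @ image.ravel()

def ll_policy(latent, sensors, W_ll):
    # Low Level: latent command + on-board sensor readings -> actuator targets,
    # squashed to [-1, 1] as is typical for normalized joint commands.
    return np.tanh(W_ll @ np.concatenate([latent, sensors]))
```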
1307.0339
Cheng-Yuan Liou
Cheng-Yuan Liou, Bo-Shiang Huang, Daw-Ran Liou and Alex A. Simak
Syntactic sensitive complexity for symbol-free sequence
11 pages, 5 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work uses the L-system to construct a tree structure for the text sequence and derives its complexity. It serves as a measure of structural complexity of the text. It is applied to anomaly detection in data transmission.
[ { "created": "Mon, 1 Jul 2013 12:00:59 GMT", "version": "v1" }, { "created": "Tue, 2 Jul 2013 02:08:48 GMT", "version": "v2" } ]
2013-07-03
[ [ "Liou", "Cheng-Yuan", "" ], [ "Huang", "Bo-Shiang", "" ], [ "Liou", "Daw-Ran", "" ], [ "Simak", "Alex A.", "" ] ]
1810.03649
Sainandan Ramakrishnan
Sainandan Ramakrishnan, Aishwarya Agrawal, Stefan Lee
Overcoming Language Priors in Visual Question Answering with Adversarial Regularization
NIPS 2018. 11 pages ( with references ), 4 figures, 2 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern Visual Question Answering (VQA) models have been shown to rely heavily on superficial correlations between question and answer words learned during training such as overwhelmingly reporting the type of room as kitchen or the sport being played as tennis, irrespective of the image. Most alarmingly, this shortcoming is often not well reflected during evaluation because the same strong priors exist in test distributions; however, a VQA system that fails to ground questions in image content would likely perform poorly in real-world settings. In this work, we present a novel regularization scheme for VQA that reduces this effect. We introduce a question-only model that takes as input the question encoding from the VQA model and must leverage language biases in order to succeed. We then pose training as an adversarial game between the VQA model and this question-only adversary -- discouraging the VQA model from capturing language biases in its question encoding. Further, we leverage this question-only model to estimate the increase in model confidence after considering the image, which we maximize explicitly to encourage visual grounding. Our approach is a model-agnostic training procedure and simple to implement. We show empirically that it can improve performance significantly on a bias-sensitive split of the VQA dataset for multiple base models -- achieving state-of-the-art on this task. Further, on standard VQA tasks, our approach shows significantly less drop in accuracy compared to existing bias-reducing VQA models.
[ { "created": "Mon, 8 Oct 2018 18:29:05 GMT", "version": "v1" }, { "created": "Thu, 8 Nov 2018 20:51:44 GMT", "version": "v2" } ]
2018-11-12
[ [ "Ramakrishnan", "Sainandan", "" ], [ "Agrawal", "Aishwarya", "" ], [ "Lee", "Stefan", "" ] ]
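The adversarial game described above can be sketched as a pair of losses (a hedged toy illustration with made-up names and linear models, not the authors' architecture): the question-only adversary is trained to answer from the question encoding alone, while the encoder is penalized by the negative of the adversary's loss, rewarding encodings the adversary cannot exploit.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def adversarial_losses(q_enc, img_feat, answer, W_vqa, W_adv, lam=0.5):
    # VQA branch: question encoding + image feature -> answer distribution.
    vqa_loss = -np.log(softmax(W_vqa @ np.concatenate([q_enc, img_feat]))[answer])
    # Question-only adversary: predicts the answer from q_enc alone.
    adv_loss = -np.log(softmax(W_adv @ q_enc)[answer])
    # The encoder minimizes vqa_loss - lam * adv_loss (the adversarial game);
    # the adversary itself minimizes adv_loss.
    return vqa_loss - lam * adv_loss, adv_loss
```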
1712.08910
Simina Br\^anzei
Simina Br\^anzei and Aris Filos-Ratsikas
Walrasian Dynamics in Multi-unit Markets
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a multi-unit market, a seller brings multiple units of a good and tries to sell them to a set of buyers that have monetary endowments. While a Walrasian equilibrium does not always exist in this model, natural relaxations of the concept that retain its desirable fairness properties do exist. We study the dynamics of (Walrasian) envy-free pricing mechanisms in this environment, showing that for any such pricing mechanism, the best response dynamic starting from truth-telling converges to a pure Nash equilibrium with small loss in revenue and welfare. Moreover, we generalize these bounds to capture all the Nash equilibria for a large class of (monotone) pricing mechanisms. We also identify a natural mechanism, which selects the minimum Walrasian envy-free price, in which for $n=2$ buyers the best response dynamic converges from any starting profile, and for which we conjecture convergence for any number of buyers.
[ { "created": "Sun, 24 Dec 2017 12:13:24 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2018 01:02:22 GMT", "version": "v2" }, { "created": "Thu, 27 Sep 2018 16:24:04 GMT", "version": "v3" } ]
2018-09-28
[ [ "Brânzei", "Simina", "" ], [ "Filos-Ratsikas", "Aris", "" ] ]
1703.10318
Sung-Han Lin
Sung-Han Lin, Ranjan Pal, Marco Paolieri, Leana Golubchik
SC-Share: Performance Driven Resource Sharing Markets for the Small Cloud
To be published in ICDCS 2017
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Small-scale clouds (SCs) often suffer from resource under-provisioning during peak demand, leading to an inability to satisfy service level agreements (SLAs) and consequent loss of customers. One approach to address this problem is for a set of autonomous SCs to share resources among themselves in a cost-induced cooperative fashion, thereby increasing their individual capacities (when needed) without having to significantly invest in more resources. A central problem (in this context) is how to properly share resources (for a price) to achieve profitable service while maintaining customer SLAs. To address this problem, in this paper, we propose the SC-Share framework that utilizes two interacting models: (i) a stochastic performance model that estimates the achieved performance characteristics under given SLA requirements, and (ii) a market-based game-theoretic model that (as shown empirically) converges to efficient resource sharing decisions at market equilibrium. Our results include extensive evaluations that illustrate the utility of the proposed framework.
[ { "created": "Thu, 30 Mar 2017 05:28:36 GMT", "version": "v1" }, { "created": "Mon, 7 Aug 2017 00:04:21 GMT", "version": "v2" } ]
2017-08-08
[ [ "Lin", "Sung-Han", "" ], [ "Pal", "Ranjan", "" ], [ "Paolieri", "Marco", "" ], [ "Golubchik", "Leana", "" ] ]
2303.13137
Liping Yi
Liping Yi, Gang Wang, Xiaoguang Liu, Zhuan Shi, Han Yu
FedGH: Heterogeneous Federated Learning with Generalized Global Header
11 pages, 5 figures,accepted by Proceedings of the 31st ACM International Conference on Multimedia (MM 2023)
null
null
null
cs.LG cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning (FL) is an emerging machine learning paradigm that allows multiple parties to train a shared model collaboratively in a privacy-preserving manner. Existing horizontal FL methods generally assume that the FL server and clients hold the same model structure. However, due to system heterogeneity and the need for personalization, enabling clients to hold models with diverse structures has become an important direction. Existing model-heterogeneous FL approaches often require publicly available datasets and incur high communication and/or computational costs, which limit their performance. To address these limitations, we propose a simple but effective Federated Global prediction Header (FedGH) approach. It is a communication- and computation-efficient model-heterogeneous FL framework which trains a shared generalized global prediction header with representations extracted by heterogeneous extractors for clients' models at the FL server. The trained generalized global prediction header learns from different clients. The acquired global knowledge is then transferred to clients to substitute each client's local prediction header. We derive the non-convex convergence rate of FedGH. Extensive experiments on two real-world datasets demonstrate that FedGH achieves significantly more advantageous performance in both model-homogeneous and -heterogeneous FL scenarios compared to seven state-of-the-art personalized FL models, beating the best-performing baseline by up to 8.87% (for model-homogeneous FL) and 1.83% (for model-heterogeneous FL) in terms of average test accuracy, while saving up to 85.53% of communication overhead.
[ { "created": "Thu, 23 Mar 2023 09:38:52 GMT", "version": "v1" }, { "created": "Tue, 1 Aug 2023 16:30:48 GMT", "version": "v2" } ]
2023-08-02
[ [ "Yi", "Liping", "" ], [ "Wang", "Gang", "" ], [ "Liu", "Xiaoguang", "" ], [ "Shi", "Zhuan", "" ], [ "Yu", "Han", "" ] ]
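A toy sketch of the server side of the idea described above (all names, the linear header, and the gradient-descent details are assumptions, not the authors' code): the server fits one shared prediction header on representation-label pairs produced by the clients' heterogeneous feature extractors, then sends it back to replace each client's local header.

```python
import numpy as np

def train_global_header(reps, labels, n_classes, lr=0.5, epochs=300):
    # reps: (n_samples, dim) representations from heterogeneous extractors;
    # labels: (n_samples,) integer class labels.
    W = np.zeros((n_classes, reps.shape[1]))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = reps @ W.T
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        # Softmax cross-entropy gradient step on the shared header.
        W -= lr * (probs - onehot).T @ reps / len(reps)
    return W  # sent back to clients to substitute their local headers
```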
2211.08681
Atsuro Okazawa
Atsuro Okazawa
Interclass Prototype Relation for Few-Shot Segmentation
Accepted to ECCV2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional semantic segmentation requires a large labeled image dataset and can only make predictions within predefined classes. To solve this problem, few-shot segmentation, which requires only a handful of annotations for the new target class, is important. However, with few-shot segmentation, the target class data distribution in the feature space is sparse and has low coverage because of the slight variations in the sample data. Setting the classification boundary that properly separates the target class from other classes is an impossible task. In particular, it is difficult to classify classes that are similar to the target class near the boundary. This study proposes the Interclass Prototype Relation Network (IPRNet), which improves the separation performance by reducing the similarity between other classes. We conducted extensive experiments with Pascal-5i and COCO-20i and showed that IPRNet provides the best segmentation performance compared with previous research.
[ { "created": "Wed, 16 Nov 2022 05:27:52 GMT", "version": "v1" } ]
2022-11-17
[ [ "Okazawa", "Atsuro", "" ] ]
Traditional semantic segmentation requires a large labeled image dataset, and predictions can only be made within predefined classes. To solve this problem, few-shot segmentation, which requires only a handful of annotations for a new target class, is important. However, in few-shot segmentation, the target class data distribution in the feature space is sparse and has low coverage because of the slight variations in the sample data. Setting a classification boundary that properly separates the target class from other classes is therefore highly challenging. In particular, it is difficult to classify classes that are similar to the target class near the boundary. This study proposes the Interclass Prototype Relation Network (IPRNet), which improves the separation performance by reducing the similarity between the target class and other classes. We conducted extensive experiments on Pascal-5i and COCO-20i and showed that IPRNet provides the best segmentation performance compared with previous research.
2407.05210
Rendani Mbuvha
Rendani Mbuvha, Yassine Yaakoubi, John Bagiliko, Santiago Hincapie Potes, Amal Nammouchi, Sabrina Amrouche
Leveraging AI for Climate Resilience in Africa: Challenges, Opportunities, and the Need for Collaboration
null
null
null
null
cs.CY cs.AI stat.AP
http://creativecommons.org/licenses/by/4.0/
As climate change issues become more pressing, their impact in Africa calls for urgent, innovative solutions tailored to the continent's unique challenges. While Artificial Intelligence (AI) emerges as a critical and valuable tool for climate change adaptation and mitigation, its effectiveness and potential are contingent upon overcoming significant challenges such as data scarcity, infrastructure gaps, and limited local AI development. This position paper explores the role of AI in climate change adaptation and mitigation in Africa. It advocates for a collaborative approach to build capacity, develop open-source data repositories, and create context-aware, robust AI-driven climate solutions that are culturally and contextually relevant.
[ { "created": "Wed, 24 Apr 2024 14:05:22 GMT", "version": "v1" } ]
2024-07-09
[ [ "Mbuvha", "Rendani", "" ], [ "Yaakoubi", "Yassine", "" ], [ "Bagiliko", "John", "" ], [ "Potes", "Santiago Hincapie", "" ], [ "Nammouchi", "Amal", "" ], [ "Amrouche", "Sabrina", "" ] ]
As climate change issues become more pressing, their impact in Africa calls for urgent, innovative solutions tailored to the continent's unique challenges. While Artificial Intelligence (AI) emerges as a critical and valuable tool for climate change adaptation and mitigation, its effectiveness and potential are contingent upon overcoming significant challenges such as data scarcity, infrastructure gaps, and limited local AI development. This position paper explores the role of AI in climate change adaptation and mitigation in Africa. It advocates for a collaborative approach to build capacity, develop open-source data repositories, and create context-aware, robust AI-driven climate solutions that are culturally and contextually relevant.
2406.01609
Akshat Mohan Dasula Mr
Akshat Mohan Dasula, Hrushitha Tigulla, Preethika Bhukya
Judgement Citation Retrieval using Contextual Similarity
14 pages, 16 images
null
null
null
cs.IR cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Traditionally in the domain of legal research, the retrieval of pertinent citations from intricate case descriptions has demanded manual effort and keyword-based search applications that mandate expertise in understanding legal jargon. Legal case descriptions hold pivotal information for legal professionals and researchers, necessitating more efficient and automated approaches. We propose a methodology that combines natural language processing (NLP) and machine learning techniques to enhance the organization and utilization of legal case descriptions. This approach revolves around the creation of textual embeddings with the help of state-of-art embedding models. Our methodology addresses two primary objectives: unsupervised clustering and supervised citation retrieval, both designed to automate the citation extraction process. Although the proposed methodology can be used for any dataset, we employed the Supreme Court of The United States (SCOTUS) dataset, yielding remarkable results. Our methodology achieved an impressive accuracy rate of 90.9%. By automating labor-intensive processes, we pave the way for a more efficient, time-saving, and accessible landscape in legal research, benefiting legal professionals, academics, and researchers.
[ { "created": "Tue, 28 May 2024 04:22:28 GMT", "version": "v1" }, { "created": "Thu, 15 Aug 2024 06:11:27 GMT", "version": "v2" } ]
2024-08-16
[ [ "Dasula", "Akshat Mohan", "" ], [ "Tigulla", "Hrushitha", "" ], [ "Bhukya", "Preethika", "" ] ]
Traditionally in the domain of legal research, the retrieval of pertinent citations from intricate case descriptions has demanded manual effort and keyword-based search applications that mandate expertise in understanding legal jargon. Legal case descriptions hold pivotal information for legal professionals and researchers, necessitating more efficient and automated approaches. We propose a methodology that combines natural language processing (NLP) and machine learning techniques to enhance the organization and utilization of legal case descriptions. This approach revolves around the creation of textual embeddings with the help of state-of-the-art embedding models. Our methodology addresses two primary objectives: unsupervised clustering and supervised citation retrieval, both designed to automate the citation extraction process. Although the proposed methodology can be used for any dataset, we employed the Supreme Court of the United States (SCOTUS) dataset, yielding remarkable results. Our methodology achieved an impressive accuracy rate of 90.9%. By automating labor-intensive processes, we pave the way for a more efficient, time-saving, and accessible landscape in legal research, benefiting legal professionals, academics, and researchers.
1908.08216
Sanath Narayan
Sanath Narayan, Hisham Cholakkal, Fahad Shahbaz Khan, Ling Shao
3C-Net: Category Count and Center Loss for Weakly-Supervised Action Localization
To appear in ICCV 2019
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal action localization is a challenging computer vision problem with numerous real-world applications. Most existing methods require laborious frame-level supervision to train action localization models. In this work, we propose a framework, called 3C-Net, which only requires video-level supervision (weak supervision) in the form of action category labels and the corresponding count. We introduce a novel formulation to learn discriminative action features with enhanced localization capabilities. Our joint formulation has three terms: a classification term to ensure the separability of learned action features, an adapted multi-label center loss term to enhance the action feature discriminability and a counting loss term to delineate adjacent action sequences, leading to improved localization. Comprehensive experiments are performed on two challenging benchmarks: THUMOS14 and ActivityNet 1.2. Our approach sets a new state-of-the-art for weakly-supervised temporal action localization on both datasets. On the THUMOS14 dataset, the proposed method achieves an absolute gain of 4.6% in terms of mean average precision (mAP), compared to the state-of-the-art. Source code is available at https://github.com/naraysa/3c-net.
[ { "created": "Thu, 22 Aug 2019 06:20:38 GMT", "version": "v1" }, { "created": "Mon, 18 Nov 2019 12:28:41 GMT", "version": "v2" } ]
2019-11-19
[ [ "Narayan", "Sanath", "" ], [ "Cholakkal", "Hisham", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Shao", "Ling", "" ] ]
Temporal action localization is a challenging computer vision problem with numerous real-world applications. Most existing methods require laborious frame-level supervision to train action localization models. In this work, we propose a framework, called 3C-Net, which only requires video-level supervision (weak supervision) in the form of action category labels and the corresponding count. We introduce a novel formulation to learn discriminative action features with enhanced localization capabilities. Our joint formulation has three terms: a classification term to ensure the separability of learned action features, an adapted multi-label center loss term to enhance the action feature discriminability and a counting loss term to delineate adjacent action sequences, leading to improved localization. Comprehensive experiments are performed on two challenging benchmarks: THUMOS14 and ActivityNet 1.2. Our approach sets a new state-of-the-art for weakly-supervised temporal action localization on both datasets. On the THUMOS14 dataset, the proposed method achieves an absolute gain of 4.6% in terms of mean average precision (mAP), compared to the state-of-the-art. Source code is available at https://github.com/naraysa/3c-net.
2309.04381
Fredrik Hellstr\"om
Fredrik Hellstr\"om, Giuseppe Durisi, Benjamin Guedj, Maxim Raginsky
Generalization Bounds: Perspectives from Information Theory and PAC-Bayes
228 pages
null
null
null
cs.LG cs.AI cs.IT math.IT math.ST stat.ML stat.TH
http://creativecommons.org/licenses/by-nc-sa/4.0/
A fundamental question in theoretical machine learning is generalization. Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to address the generalization capabilities of machine learning algorithms, and design new ones. Recently, it has garnered increased interest due to its potential applicability for a variety of learning algorithms, including deep neural networks. In parallel, an information-theoretic view of generalization has developed, wherein the relation between generalization and various information measures has been established. This framework is intimately connected to the PAC-Bayesian approach, and a number of results have been independently discovered in both strands. In this monograph, we highlight this strong connection and present a unified treatment of PAC-Bayesian and information-theoretic generalization bounds. We present techniques and results that the two perspectives have in common, and discuss the approaches and interpretations that differ. In particular, we demonstrate how many proofs in the area share a modular structure, through which the underlying ideas can be intuited. We pay special attention to the conditional mutual information (CMI) framework; analytical studies of the information complexity of learning algorithms; and the application of the proposed methods to deep learning. This monograph is intended to provide a comprehensive introduction to information-theoretic generalization bounds and their connection to PAC-Bayes, serving as a foundation from which the most recent developments are accessible. It is aimed broadly towards researchers with an interest in generalization and theoretical machine learning.
[ { "created": "Fri, 8 Sep 2023 15:23:40 GMT", "version": "v1" }, { "created": "Wed, 27 Mar 2024 17:07:47 GMT", "version": "v2" } ]
2024-03-28
[ [ "Hellström", "Fredrik", "" ], [ "Durisi", "Giuseppe", "" ], [ "Guedj", "Benjamin", "" ], [ "Raginsky", "Maxim", "" ] ]
A fundamental question in theoretical machine learning is generalization. Over the past decades, the PAC-Bayesian approach has been established as a flexible framework to address the generalization capabilities of machine learning algorithms, and design new ones. Recently, it has garnered increased interest due to its potential applicability for a variety of learning algorithms, including deep neural networks. In parallel, an information-theoretic view of generalization has developed, wherein the relation between generalization and various information measures has been established. This framework is intimately connected to the PAC-Bayesian approach, and a number of results have been independently discovered in both strands. In this monograph, we highlight this strong connection and present a unified treatment of PAC-Bayesian and information-theoretic generalization bounds. We present techniques and results that the two perspectives have in common, and discuss the approaches and interpretations that differ. In particular, we demonstrate how many proofs in the area share a modular structure, through which the underlying ideas can be intuited. We pay special attention to the conditional mutual information (CMI) framework; analytical studies of the information complexity of learning algorithms; and the application of the proposed methods to deep learning. This monograph is intended to provide a comprehensive introduction to information-theoretic generalization bounds and their connection to PAC-Bayes, serving as a foundation from which the most recent developments are accessible. It is aimed broadly towards researchers with an interest in generalization and theoretical machine learning.
1504.03385
Chen-Yu Lee
Chen-Yu Lee, Deng-Jyi Chen
A Content Creation and Protection Scheme for Medical Images
15 pages, submitted
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical images contain metadata information on where, when, and how an image was acquired, and the majority of this information is stored as pixel data. Image feature descriptions are often captured only as free text stored in the image file or in the hospital information system. Correlations between the free text and the location of the feature are often inaccurate, making it difficult to link image observations to their corresponding image locations. This limits the interpretation of image data from a clinical, research, and academic standpoint. An efficient medical image protection design should allow for compatibility, usability, and privacy. This paper proposes a medical-content creation and protection scheme that contains a) a DICOM-compatible multimedia annotation scheme for medical content creation; b) a DICOM-compatible partial DRM scheme for medical record transmission under this scheme, authorized users can view only information to which they have been granted to access.
[ { "created": "Mon, 13 Apr 2015 22:43:45 GMT", "version": "v1" } ]
2015-04-15
[ [ "Lee", "Chen-Yu", "" ], [ "Chen", "Deng-Jyi", "" ] ]
Medical images contain metadata information on where, when, and how an image was acquired, and the majority of this information is stored as pixel data. Image feature descriptions are often captured only as free text stored in the image file or in the hospital information system. Correlations between the free text and the location of the feature are often inaccurate, making it difficult to link image observations to their corresponding image locations. This limits the interpretation of image data from a clinical, research, and academic standpoint. An efficient medical image protection design should allow for compatibility, usability, and privacy. This paper proposes a medical-content creation and protection scheme that contains a) a DICOM-compatible multimedia annotation scheme for medical content creation; and b) a DICOM-compatible partial DRM scheme for medical record transmission. Under this scheme, authorized users can view only information to which they have been granted access.
2405.19209
Ziyang Wang
Ziyang Wang, Shoubin Yu, Elias Stengel-Eskin, Jaehong Yoon, Feng Cheng, Gedas Bertasius, Mohit Bansal
VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos
20 pages, first three authors contributed equally; Project page: https://videotree2024.github.io/
null
null
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Video-language understanding tasks have focused on short video clips, often struggling with long-form video understanding tasks. Recently, many long video-language understanding approaches have leveraged the reasoning capabilities of Large Language Models (LLMs) to perform long video QA, transforming videos into densely sampled frame captions, and asking LLMs to respond to text queries over captions. However, the frames used for captioning are often redundant and contain irrelevant information, making dense sampling inefficient, and ignoring the fact that video QA requires varying levels of granularity, with some video segments being highly relevant to the question (needing more fine-grained detail) while others being less relevant. Thus, these LLM-based approaches are prone to missing information and operate on large numbers of irrelevant captions, lowering both performance and efficiency. To address these issues, we introduce VideoTree, a query-adaptive and hierarchical framework for long-video understanding with LLMs. VideoTree dynamically extracts query-related information from a video and builds a tree-based representation for LLM reasoning. First, VideoTree adaptively selects frames for captioning by iteratively clustering frames based on their visual features and scoring clusters using their relevance to the query. Second, it organizes visual clusters into a query-adaptive and hierarchical tree structure; the tree encodes varying levels of granularity, with higher resolution on relevant segments. Finally, VideoTree produces an answer by traversing the tree's keyframes and passing their captions to an LLM answerer. Our method improves both reasoning accuracy and efficiency compared to existing methods: VideoTree achieves a 7.0%, 2.2%, and 2.7% accuracy gain over baselines on the EgoSchema, NExT-QA, and IntentQA benchmarks, respectively, while reducing inference time by 40%.
[ { "created": "Wed, 29 May 2024 15:49:09 GMT", "version": "v1" } ]
2024-05-30
[ [ "Wang", "Ziyang", "" ], [ "Yu", "Shoubin", "" ], [ "Stengel-Eskin", "Elias", "" ], [ "Yoon", "Jaehong", "" ], [ "Cheng", "Feng", "" ], [ "Bertasius", "Gedas", "" ], [ "Bansal", "Mohit", "" ] ]
Video-language understanding tasks have focused on short video clips, often struggling with long-form video understanding tasks. Recently, many long video-language understanding approaches have leveraged the reasoning capabilities of Large Language Models (LLMs) to perform long video QA, transforming videos into densely sampled frame captions, and asking LLMs to respond to text queries over captions. However, the frames used for captioning are often redundant and contain irrelevant information, making dense sampling inefficient, and ignoring the fact that video QA requires varying levels of granularity, with some video segments being highly relevant to the question (needing more fine-grained detail) while others being less relevant. Thus, these LLM-based approaches are prone to missing information and operate on large numbers of irrelevant captions, lowering both performance and efficiency. To address these issues, we introduce VideoTree, a query-adaptive and hierarchical framework for long-video understanding with LLMs. VideoTree dynamically extracts query-related information from a video and builds a tree-based representation for LLM reasoning. First, VideoTree adaptively selects frames for captioning by iteratively clustering frames based on their visual features and scoring clusters using their relevance to the query. Second, it organizes visual clusters into a query-adaptive and hierarchical tree structure; the tree encodes varying levels of granularity, with higher resolution on relevant segments. Finally, VideoTree produces an answer by traversing the tree's keyframes and passing their captions to an LLM answerer. Our method improves both reasoning accuracy and efficiency compared to existing methods: VideoTree achieves a 7.0%, 2.2%, and 2.7% accuracy gain over baselines on the EgoSchema, NExT-QA, and IntentQA benchmarks, respectively, while reducing inference time by 40%.
1503.04918
EPTCS
Marcin Benke, Viviana Bono, Aleksy Schubert
Lucretia - intersection type polymorphism for scripting languages
In Proceedings ITRS 2014, arXiv:1503.04377
EPTCS 177, 2015, pp. 65-78
10.4204/EPTCS.177.6
null
cs.LO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Scripting code may present maintenance problems in the long run. There is, then, the call for methodologies that make it possible to control the properties of programs written in dynamic languages in an automatic fashion. We introduce Lucretia, a core language with an introspection primitive. Lucretia is equipped with a (retrofitted) static type system based on local updates of types that describe the structure of objects being used. In this way, we deal with one of the most dynamic features of scripting languages, that is, the runtime modification of object interfaces. Judgements in our systems have a Hoare-like shape, as they have a precondition and a postcondition part. Preconditions describe static approximations of the interfaces of visible objects before a certain expression has been executed and postconditions describe them after its execution. The field update operation complicates the issue of aliasing in the system. We cope with it by introducing intersection types in method signatures.
[ { "created": "Tue, 17 Mar 2015 04:03:54 GMT", "version": "v1" } ]
2015-03-18
[ [ "Benke", "Marcin", "" ], [ "Bono", "Viviana", "" ], [ "Schubert", "Aleksy", "" ] ]
Scripting code may present maintenance problems in the long run. There is, then, a call for methodologies that make it possible to control the properties of programs written in dynamic languages in an automatic fashion. We introduce Lucretia, a core language with an introspection primitive. Lucretia is equipped with a (retrofitted) static type system based on local updates of types that describe the structure of objects being used. In this way, we deal with one of the most dynamic features of scripting languages, that is, the runtime modification of object interfaces. Judgements in our system have a Hoare-like shape, as they have a precondition and a postcondition part. Preconditions describe static approximations of the interfaces of visible objects before a certain expression has been executed, and postconditions describe them after its execution. The field update operation complicates the issue of aliasing in the system. We cope with it by introducing intersection types in method signatures.
1911.12327
Charalambos Poullis
Yashas Joshi, Charalambos Poullis
Inattentional Blindness for Redirected Walking Using Dynamic Foveated Rendering
null
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Redirected walking is a Virtual Reality(VR) locomotion technique which enables users to navigate virtual environments (VEs) that are spatially larger than the available physical tracked space. In this work we present a novel technique for redirected walking in VR based on the psychological phenomenon of inattentional blindness. Based on the user's visual fixation points we divide the user's view into zones. Spatially-varying rotations are applied according to the zone's importance and are rendered using foveated rendering. Our technique is real-time and applicable to small and large physical spaces. Furthermore, the proposed technique does not require the use of stimulated saccades but rather takes advantage of naturally occurring saccades and blinks for a complete refresh of the framebuffer. We performed extensive testing and present the analysis of the results of three user studies conducted for the evaluation.
[ { "created": "Wed, 27 Nov 2019 18:08:21 GMT", "version": "v1" } ]
2019-11-28
[ [ "Joshi", "Yashas", "" ], [ "Poullis", "Charalambos", "" ] ]
Redirected walking is a Virtual Reality (VR) locomotion technique which enables users to navigate virtual environments (VEs) that are spatially larger than the available physical tracked space. In this work we present a novel technique for redirected walking in VR based on the psychological phenomenon of inattentional blindness. Based on the user's visual fixation points we divide the user's view into zones. Spatially-varying rotations are applied according to the zone's importance and are rendered using foveated rendering. Our technique is real-time and applicable to small and large physical spaces. Furthermore, the proposed technique does not require the use of stimulated saccades but rather takes advantage of naturally occurring saccades and blinks for a complete refresh of the framebuffer. We performed extensive testing and present the analysis of the results of three user studies conducted for the evaluation.
2407.14086
Yunfei Zhang
Yunfei Zhang, Chao Liang, Jin Gao, Zhipeng Zhang, Weiming Hu, Stephen Maybank, Xue Zhou, Liang Li
Temporal Correlation Meets Embedding: Towards a 2nd Generation of JDE-based Real-Time Multi-Object Tracking
A submission to IJCV
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks by incorporating the extraction of appearance features as auxiliary tasks through embedding Re-Identification task (ReID) into the detector, achieving a balance between inference speed and tracking performance. However, solving the competition between the detector and the feature extractor has always been a challenge. Meanwhile, the issue of directly embedding the ReID task into MOT has remained unresolved. The lack of high discriminability in appearance features results in their limited utility. In this paper, a new learning approach using cross-correlation to capture temporal information of objects is proposed. The feature extraction network is no longer trained solely on appearance features from each frame but learns richer motion features by utilizing feature heatmaps from consecutive frames, which addresses the challenge of inter-class feature similarity. Furthermore, our learning approach is applied to a more lightweight feature extraction network, and treat the feature matching scores as strong cues rather than auxiliary cues, with an appropriate weight calculation to reflect the compatibility between our obtained features and the MOT task. Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks, i.e., MOT17, MOT20, and DanceTrack datasets. Specifically, on the DanceTrack test set, we achieve 56.8 HOTA, 58.1 IDF1 and 92.5 MOTA, making it the best online tracker capable of achieving real-time performance. Comparative evaluations with other trackers prove that our tracker achieves the best balance between speed, robustness and accuracy. Code is available at https://github.com/yfzhang1214/TCBTrack.
[ { "created": "Fri, 19 Jul 2024 07:48:45 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2024 09:56:36 GMT", "version": "v2" } ]
2024-08-07
[ [ "Zhang", "Yunfei", "" ], [ "Liang", "Chao", "" ], [ "Gao", "Jin", "" ], [ "Zhang", "Zhipeng", "" ], [ "Hu", "Weiming", "" ], [ "Maybank", "Stephen", "" ], [ "Zhou", "Xue", "" ], [ "Li", "Liang", "" ] ]
Joint Detection and Embedding (JDE) trackers have demonstrated excellent performance in Multi-Object Tracking (MOT) tasks by incorporating the extraction of appearance features as an auxiliary task through embedding the Re-Identification (ReID) task into the detector, achieving a balance between inference speed and tracking performance. However, solving the competition between the detector and the feature extractor has always been a challenge. Meanwhile, the issue of directly embedding the ReID task into MOT has remained unresolved. The lack of high discriminability in appearance features results in their limited utility. In this paper, a new learning approach using cross-correlation to capture temporal information of objects is proposed. The feature extraction network is no longer trained solely on appearance features from each frame but learns richer motion features by utilizing feature heatmaps from consecutive frames, which addresses the challenge of inter-class feature similarity. Furthermore, our learning approach is applied to a more lightweight feature extraction network, and the feature matching scores are treated as strong cues rather than auxiliary cues, with an appropriate weight calculation to reflect the compatibility between our obtained features and the MOT task. Our tracker, named TCBTrack, achieves state-of-the-art performance on multiple public benchmarks, i.e., the MOT17, MOT20, and DanceTrack datasets. Specifically, on the DanceTrack test set, we achieve 56.8 HOTA, 58.1 IDF1 and 92.5 MOTA, making it the best online tracker capable of achieving real-time performance. Comparative evaluations with other trackers prove that our tracker achieves the best balance between speed, robustness and accuracy. Code is available at https://github.com/yfzhang1214/TCBTrack.
1803.05747
Hongfei Fan
Hongfei Fan, Lin Ding, Xiaodong Xie, Huizhu Jia, Wen Gao
Joint Rate Allocation with Both Look-ahead And Feedback Model For High Efficiency Video Coding
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The objective of joint rate allocation among multiple coded video streams is to share the bandwidth to meet the demands of minimum average distortion (minAVE) or minimum distortion variance (minVAR). In previous works on minVAR problems, bits are directly assigned in proportion to their complexity measures and we call it look-ahead allocation model (LAM), which leads to the fact that the performance will totally depend on the accuracy of the complexity measures. This paper proposes a look-ahead and feedback allocation model (LFAM) for joint rate allocation for High Efficiency Video Coding (HEVC) platform which requires negligible computational cost. We derive the model from the target function of minVAR theoretically. The bits are assigned according to the complexity measures, the distortion and bitrate values fed back by the encoder together. We integrated the proposed allocation model in HEVC reference software HM16.0 and several complexity measures were applied to our allocation model. Results demonstrate that our proposed LFAM performs better than LAM, and an average of 65.94% variance of mean square error (MSE) is saved with different complexity measures.
[ { "created": "Thu, 15 Mar 2018 13:50:04 GMT", "version": "v1" } ]
2018-03-16
[ [ "Fan", "Hongfei", "" ], [ "Ding", "Lin", "" ], [ "Xie", "Xiaodong", "" ], [ "Jia", "Huizhu", "" ], [ "Gao", "Wen", "" ] ]
The objective of joint rate allocation among multiple coded video streams is to share the bandwidth to meet the demands of minimum average distortion (minAVE) or minimum distortion variance (minVAR). In previous works on minVAR problems, bits are directly assigned in proportion to their complexity measures; we call this the look-ahead allocation model (LAM), and it means that the performance depends entirely on the accuracy of the complexity measures. This paper proposes a look-ahead and feedback allocation model (LFAM), which requires negligible computational cost, for joint rate allocation on the High Efficiency Video Coding (HEVC) platform. We derive the model from the target function of minVAR theoretically. The bits are assigned according to the complexity measures together with the distortion and bitrate values fed back by the encoder. We integrated the proposed allocation model in the HEVC reference software HM16.0 and applied several complexity measures to our allocation model. Results demonstrate that our proposed LFAM performs better than LAM, and an average of 65.94% of the variance of mean square error (MSE) is saved with different complexity measures.
1908.10717
Tao Zhuo
Tao Zhuo, Zhiyong Cheng, Mohan Kankanhalli
Fast Video Object Segmentation via Mask Transfer Network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accuracy and processing speed are two important factors that affect the use of video object segmentation (VOS) in real applications. With the advanced techniques of deep neural networks, the accuracy has been significantly improved, however, the speed is still far below the real-time needs because of the complicated network design, such as the requirement of the first frame fine-tuning step. To overcome this limitation, we propose a novel mask transfer network (MTN), which can greatly boost the processing speed of VOS and also achieve a reasonable accuracy. The basic idea of MTN is to transfer the reference mask to the target frame via an efficient global pixel matching strategy. The global pixel matching between the reference frame and the target frame is to ensure good matching results. To enhance the matching speed, we perform the matching on a downsampled feature map with 1/32 of the original frame size. At the same time, to preserve the detailed mask information in such a small feature map, a mask network is designed to encode the annotated mask information with 512 channels. Finally, an efficient feature warping method is used to transfer the encoded reference mask to the target frame. Based on this design, our method avoids the fine-tuning step on the first frame and does not rely on the temporal cues and particular object categories. Therefore, it runs very fast and can be conveniently trained only with images, as well as being robust to unseen objects. Experiments on the DAVIS datasets demonstrate that MTN can achieve a speed of 37 fps, and also shows a competitive accuracy in comparison to the state-of-the-art methods.
[ { "created": "Wed, 28 Aug 2019 13:31:34 GMT", "version": "v1" } ]
2019-08-29
[ [ "Zhuo", "Tao", "" ], [ "Cheng", "Zhiyong", "" ], [ "Kankanhalli", "Mohan", "" ] ]
Accuracy and processing speed are two important factors that affect the use of video object segmentation (VOS) in real applications. With advanced deep neural network techniques, accuracy has been significantly improved; however, speed still falls far short of real-time requirements because of complicated network designs, such as the first-frame fine-tuning step. To overcome this limitation, we propose a novel mask transfer network (MTN), which can greatly boost the processing speed of VOS while achieving reasonable accuracy. The basic idea of MTN is to transfer the reference mask to the target frame via an efficient global pixel matching strategy. The global pixel matching between the reference frame and the target frame ensures good matching results. To enhance the matching speed, we perform the matching on a downsampled feature map at 1/32 of the original frame size. At the same time, to preserve detailed mask information in such a small feature map, a mask network is designed to encode the annotated mask information with 512 channels. Finally, an efficient feature warping method is used to transfer the encoded reference mask to the target frame. Based on this design, our method avoids the fine-tuning step on the first frame and does not rely on temporal cues or particular object categories. Therefore, it runs very fast, can be conveniently trained with images only, and is robust to unseen objects. Experiments on the DAVIS datasets demonstrate that MTN achieves a speed of 37 fps and shows competitive accuracy in comparison to state-of-the-art methods.
2208.10833
Hongcheng Guo
Hongcheng Guo, Yuhui Guo, Renjie Chen, Jian Yang, Jiaheng Liu, Zhoujun Li, Tieqiao Zheng, Weichao Hou, Liangfan Zheng, Bo Zhang
LogLG: Weakly Supervised Log Anomaly Detection via Log-Event Graph Construction
12 pages
null
null
null
cs.SE cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fully supervised log anomaly detection methods suffer from the heavy burden of annotating massive unlabeled log data. Recently, many semi-supervised methods have been proposed to reduce annotation costs with the help of parsed templates. However, these methods consider each keyword independently, which disregards the correlation between keywords and the contextual relationships among log sequences. In this paper, we propose a novel weakly supervised log anomaly detection framework, named LogLG, to explore the semantic connections among keywords from sequences. Specifically, we design an end-to-end iterative process, where the keywords of unlabeled logs are first extracted to construct a log-event graph. Then, we build a subgraph annotator to generate pseudo labels for unlabeled log sequences. To improve the annotation quality, we adopt a self-supervised task to pre-train the subgraph annotator. After that, a detection model is trained with the generated pseudo labels. Conditioned on the classification results, we re-extract the keywords from the log sequences and update the log-event graph for the next iteration. Experiments on five benchmarks validate the effectiveness of LogLG for detecting anomalies on unlabeled log data and demonstrate that LogLG, as the state-of-the-art weakly supervised method, achieves significant performance improvements compared to existing methods.
[ { "created": "Tue, 23 Aug 2022 09:32:19 GMT", "version": "v1" }, { "created": "Thu, 25 Aug 2022 06:42:18 GMT", "version": "v2" }, { "created": "Tue, 6 Sep 2022 02:36:58 GMT", "version": "v3" }, { "created": "Thu, 2 Feb 2023 09:17:52 GMT", "version": "v4" }, { "created": "Tue, 11 Apr 2023 07:46:32 GMT", "version": "v5" } ]
2023-04-12
[ [ "Guo", "Hongcheng", "" ], [ "Guo", "Yuhui", "" ], [ "Chen", "Renjie", "" ], [ "Yang", "Jian", "" ], [ "Liu", "Jiaheng", "" ], [ "Li", "Zhoujun", "" ], [ "Zheng", "Tieqiao", "" ], [ "Hou", "Weichao", "" ], [ "Zheng", "Liangfan", "" ], [ "Zhang", "Bo", "" ] ]
Fully supervised log anomaly detection methods suffer from the heavy burden of annotating massive unlabeled log data. Recently, many semi-supervised methods have been proposed to reduce annotation costs with the help of parsed templates. However, these methods consider each keyword independently, which disregards the correlation between keywords and the contextual relationships among log sequences. In this paper, we propose a novel weakly supervised log anomaly detection framework, named LogLG, to explore the semantic connections among keywords from sequences. Specifically, we design an end-to-end iterative process, where the keywords of unlabeled logs are first extracted to construct a log-event graph. Then, we build a subgraph annotator to generate pseudo labels for unlabeled log sequences. To improve the annotation quality, we adopt a self-supervised task to pre-train the subgraph annotator. After that, a detection model is trained with the generated pseudo labels. Conditioned on the classification results, we re-extract the keywords from the log sequences and update the log-event graph for the next iteration. Experiments on five benchmarks validate the effectiveness of LogLG for detecting anomalies on unlabeled log data and demonstrate that LogLG, as the state-of-the-art weakly supervised method, achieves significant performance improvements compared to existing methods.
2204.03082
Zudi Lin
Leander Lauenburg, Zudi Lin, Ruihan Zhang, M\'arcia dos Santos, Siyu Huang, Ignacio Arganda-Carreras, Edward S. Boyden, Hanspeter Pfister, Donglai Wei
Instance Segmentation of Unlabeled Modalities via Cyclic Segmentation GAN
13 pages with appendix
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Instance segmentation for unlabeled imaging modalities is a challenging but essential task as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying a pre-trained model optimized on diverse training data or conducting domain translation and image segmentation as two independent steps. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation jointly using a unified framework. Besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we introduce additional self-supervised and segmentation-based adversarial objectives to improve the model performance by leveraging unlabeled target domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. Our CySGAN outperforms both pretrained generalist models and the baselines that sequentially conduct image translation and segmentation. Our implementation and the newly collected, densely annotated ExM nuclei dataset, named NucExM, are available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html.
[ { "created": "Wed, 6 Apr 2022 20:46:39 GMT", "version": "v1" } ]
2022-04-08
[ [ "Lauenburg", "Leander", "" ], [ "Lin", "Zudi", "" ], [ "Zhang", "Ruihan", "" ], [ "Santos", "Márcia dos", "" ], [ "Huang", "Siyu", "" ], [ "Arganda-Carreras", "Ignacio", "" ], [ "Boyden", "Edward S.", "" ], [ "Pfister", "Hanspeter", "" ], [ "Wei", "Donglai", "" ] ]
Instance segmentation for unlabeled imaging modalities is a challenging but essential task as collecting expert annotation can be expensive and time-consuming. Existing works segment a new modality by either deploying a pre-trained model optimized on diverse training data or conducting domain translation and image segmentation as two independent steps. In this work, we propose a novel Cyclic Segmentation Generative Adversarial Network (CySGAN) that conducts image translation and instance segmentation jointly using a unified framework. Besides the CycleGAN losses for image translation and supervised losses for the annotated source domain, we introduce additional self-supervised and segmentation-based adversarial objectives to improve the model performance by leveraging unlabeled target domain images. We benchmark our approach on the task of 3D neuronal nuclei segmentation with annotated electron microscopy (EM) images and unlabeled expansion microscopy (ExM) data. Our CySGAN outperforms both pretrained generalist models and the baselines that sequentially conduct image translation and segmentation. Our implementation and the newly collected, densely annotated ExM nuclei dataset, named NucExM, are available at https://connectomics-bazaar.github.io/proj/CySGAN/index.html.
2308.15726
Fei Yu
Nan Che and Chenrui Liu and Fei Yu
AGS: A Dataset and Taxonomy for Domestic Scene Sound Event Recognition
null
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Environmental sound scene and sound event recognition is important for detecting suspicious events in indoor and outdoor environments (such as nurseries, smart homes, and nursing homes) and is a fundamental task in many audio surveillance applications. In particular, there is no public common dataset for sound event recognition research on indoor environmental sound scenes. Therefore, this paper proposes a dataset (called AGS) for domestic environment sounds. This dataset covers various types of overlapping audio in a scene as well as background noise. Moreover, based on the proposed dataset, this paper compares and analyzes advanced methods for sound event recognition, illustrates the reliability of the proposed dataset, and studies the challenges raised by the new dataset. Our proposed AGS and the source code of the corresponding baselines are available at https://github.com/taolunzu11/AGS .
[ { "created": "Wed, 30 Aug 2023 03:03:47 GMT", "version": "v1" } ]
2023-08-31
[ [ "Che", "Nan", "" ], [ "Liu", "Chenrui", "" ], [ "Yu", "Fei", "" ] ]
Environmental sound scene and sound event recognition is important for detecting suspicious events in indoor and outdoor environments (such as nurseries, smart homes, and nursing homes) and is a fundamental task in many audio surveillance applications. In particular, there is no public common dataset for sound event recognition research on indoor environmental sound scenes. Therefore, this paper proposes a dataset (called AGS) for domestic environment sounds. This dataset covers various types of overlapping audio in a scene as well as background noise. Moreover, based on the proposed dataset, this paper compares and analyzes advanced methods for sound event recognition, illustrates the reliability of the proposed dataset, and studies the challenges raised by the new dataset. Our proposed AGS and the source code of the corresponding baselines are available at https://github.com/taolunzu11/AGS .
2405.05386
Andreas Madsen
Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, Sarath Chandar
Interpretability Needs a New Paradigm
null
null
null
null
cs.LG cs.CL cs.CV stat.ML
http://creativecommons.org/licenses/by-sa/4.0/
Interpretability is the study of explaining models in understandable terms to humans. At present, interpretability is divided into two paradigms: the intrinsic paradigm, which believes that only models designed to be explained can be explained, and the post-hoc paradigm, which believes that black-box models can be explained. At the core of this debate is how each paradigm ensures its explanations are faithful, i.e., true to the model's behavior. This is important, as false but convincing explanations lead to unsupported confidence in artificial intelligence (AI), which can be dangerous. This paper's position is that we should think about new paradigms while staying vigilant regarding faithfulness. First, by examining the history of paradigms in science, we see that paradigms are constantly evolving. Then, by examining the current paradigms, we can understand their underlying beliefs, the value they bring, and their limitations. Finally, this paper presents 3 emerging paradigms for interpretability. The first paradigm designs models such that faithfulness can be easily measured. Another optimizes models such that explanations become faithful. The last paradigm proposes to develop models that produce both a prediction and an explanation.
[ { "created": "Wed, 8 May 2024 19:31:06 GMT", "version": "v1" } ]
2024-05-10
[ [ "Madsen", "Andreas", "" ], [ "Lakkaraju", "Himabindu", "" ], [ "Reddy", "Siva", "" ], [ "Chandar", "Sarath", "" ] ]
Interpretability is the study of explaining models in understandable terms to humans. At present, interpretability is divided into two paradigms: the intrinsic paradigm, which believes that only models designed to be explained can be explained, and the post-hoc paradigm, which believes that black-box models can be explained. At the core of this debate is how each paradigm ensures its explanations are faithful, i.e., true to the model's behavior. This is important, as false but convincing explanations lead to unsupported confidence in artificial intelligence (AI), which can be dangerous. This paper's position is that we should think about new paradigms while staying vigilant regarding faithfulness. First, by examining the history of paradigms in science, we see that paradigms are constantly evolving. Then, by examining the current paradigms, we can understand their underlying beliefs, the value they bring, and their limitations. Finally, this paper presents 3 emerging paradigms for interpretability. The first paradigm designs models such that faithfulness can be easily measured. Another optimizes models such that explanations become faithful. The last paradigm proposes to develop models that produce both a prediction and an explanation.
2312.04362
Hamed Hematian Hemati
Hamed Hematian Hemati, Atousa Toghyani, Atena Souri, Sayed Hesam Alavian, Hossein Sameti, Hamid Beigy
PCoQA: Persian Conversational Question Answering Dataset
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Humans seek information regarding a specific topic by carrying on a conversation consisting of a series of questions and answers. In the pursuit of conversational question answering research, we introduce PCoQA, the first \textbf{P}ersian \textbf{Co}nversational \textbf{Q}uestion \textbf{A}nswering dataset, a resource comprising information-seeking dialogs with a total of 9,026 contextually-driven questions. Each dialog involves a questioner, a responder, and a document from Wikipedia; the questioner asks several inter-connected questions about the text, and the responder provides a span of the document as the answer to each question. PCoQA is designed to present novel challenges compared to previous question answering datasets, including more open-ended non-factual answers, longer answers, and fewer lexical overlaps. This paper not only presents the comprehensive PCoQA dataset but also reports the performance of various benchmark models. Our models include baseline models and pre-trained models, which are leveraged to boost performance. The dataset and benchmarks are available on our GitHub page.
[ { "created": "Thu, 7 Dec 2023 15:29:34 GMT", "version": "v1" } ]
2023-12-08
[ [ "Hemati", "Hamed Hematian", "" ], [ "Toghyani", "Atousa", "" ], [ "Souri", "Atena", "" ], [ "Alavian", "Sayed Hesam", "" ], [ "Sameti", "Hossein", "" ], [ "Beigy", "Hamid", "" ] ]
Humans seek information regarding a specific topic by carrying on a conversation consisting of a series of questions and answers. In the pursuit of conversational question answering research, we introduce PCoQA, the first \textbf{P}ersian \textbf{Co}nversational \textbf{Q}uestion \textbf{A}nswering dataset, a resource comprising information-seeking dialogs with a total of 9,026 contextually-driven questions. Each dialog involves a questioner, a responder, and a document from Wikipedia; the questioner asks several inter-connected questions about the text, and the responder provides a span of the document as the answer to each question. PCoQA is designed to present novel challenges compared to previous question answering datasets, including more open-ended non-factual answers, longer answers, and fewer lexical overlaps. This paper not only presents the comprehensive PCoQA dataset but also reports the performance of various benchmark models. Our models include baseline models and pre-trained models, which are leveraged to boost performance. The dataset and benchmarks are available on our GitHub page.
1807.01544
Shangbang Long
Shangbang Long, Jiaqiang Ruan, Wenjie Zhang, Xin He, Wenhao Wu, Cong Yao
TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes
17 pages, accepted to ECCV2018
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as TextSnake, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than 40% in F-measure.
[ { "created": "Wed, 4 Jul 2018 12:37:07 GMT", "version": "v1" }, { "created": "Tue, 18 Aug 2020 00:54:35 GMT", "version": "v2" } ]
2020-08-19
[ [ "Long", "Shangbang", "" ], [ "Ruan", "Jiaqiang", "" ], [ "Zhang", "Wenjie", "" ], [ "He", "Xin", "" ], [ "Wu", "Wenhao", "" ], [ "Yao", "Cong", "" ] ]
Driven by deep neural networks and large scale datasets, scene text detection methods have progressed substantially over the past years, continuously refreshing the performance records on various standard benchmarks. However, limited by the representations (axis-aligned rectangles, rotated rectangles or quadrangles) adopted to describe text, existing methods may fall short when dealing with much more free-form text instances, such as curved text, which are actually very common in real-world scenarios. To tackle this problem, we propose a more flexible representation for scene text, termed as TextSnake, which is able to effectively represent text instances in horizontal, oriented and curved forms. In TextSnake, a text instance is described as a sequence of ordered, overlapping disks centered at symmetric axes, each of which is associated with potentially variable radius and orientation. Such geometry attributes are estimated via a Fully Convolutional Network (FCN) model. In experiments, the text detector based on TextSnake achieves state-of-the-art or comparable performance on Total-Text and SCUT-CTW1500, the two newly published benchmarks with special emphasis on curved text in natural images, as well as the widely-used datasets ICDAR 2015 and MSRA-TD500. Specifically, TextSnake outperforms the baseline on Total-Text by more than 40% in F-measure.
cs/0512105
Valmir Barbosa
Alexandre O. Stauffer, Valmir C. Barbosa
A study of the edge-switching Markov-chain method for the generation of random graphs
Minor typos corrected
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of generating connected random graphs with no self-loops or multiple edges and that, in addition, have a given degree sequence. The generation method we focus on is the edge-switching Markov-chain method, whose functioning depends on a parameter w related to the method's core operation of an edge switch. We analyze two existing heuristics for adjusting w during the generation of a graph and show that they result in a Markov chain whose stationary distribution is uniform, thus ensuring that generation occurs uniformly at random. We also introduce a novel w-adjusting heuristic which, even though it does not always lead to a Markov chain, is still guaranteed to converge to the uniform distribution under relatively mild conditions. We report on extensive computer experiments comparing the three heuristics' performance at generating random graphs whose node degrees are distributed as power laws.
[ { "created": "Thu, 29 Dec 2005 18:37:01 GMT", "version": "v1" }, { "created": "Thu, 30 Jun 2011 18:52:12 GMT", "version": "v2" } ]
2011-07-01
[ [ "Stauffer", "Alexandre O.", "" ], [ "Barbosa", "Valmir C.", "" ] ]
We study the problem of generating connected random graphs with no self-loops or multiple edges and that, in addition, have a given degree sequence. The generation method we focus on is the edge-switching Markov-chain method, whose functioning depends on a parameter w related to the method's core operation of an edge switch. We analyze two existing heuristics for adjusting w during the generation of a graph and show that they result in a Markov chain whose stationary distribution is uniform, thus ensuring that generation occurs uniformly at random. We also introduce a novel w-adjusting heuristic which, even though it does not always lead to a Markov chain, is still guaranteed to converge to the uniform distribution under relatively mild conditions. We report on extensive computer experiments comparing the three heuristics' performance at generating random graphs whose node degrees are distributed as power laws.
2207.05643
Koorosh Aslansefat
Koorosh Aslansefat, Panagiota Nikolaou, Martin Walker, Mohammed Naveed Akram, Ioannis Sorokos, Jan Reich, Panayiotis Kolios, Maria K. Michael, Theocharis Theocharides, Georgios Ellinas, Daniel Schneider, Yiannis Papadopoulos
SafeDrones: Real-Time Reliability Evaluation of UAVs using Executable Digital Dependable Identities
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
The use of Unmanned Aerial Vehicles (UAVs) offers many advantages across a variety of applications. However, safety assurance is a key barrier to widespread usage, especially given the unpredictable operational and environmental factors experienced by UAVs, which are hard to capture solely at design-time. This paper proposes a new reliability modeling approach called SafeDrones to help address this issue by enabling runtime reliability and risk assessment of UAVs. It is a prototype instantiation of the Executable Digital Dependable Identity (EDDI) concept, which aims to create a model-based solution for real-time, data-driven dependability assurance for multi-robot systems. By providing real-time reliability estimates, SafeDrones allows UAVs to update their missions accordingly in an adaptive manner.
[ { "created": "Tue, 12 Jul 2022 16:19:03 GMT", "version": "v1" } ]
2022-07-13
[ [ "Aslansefat", "Koorosh", "" ], [ "Nikolaou", "Panagiota", "" ], [ "Walker", "Martin", "" ], [ "Akram", "Mohammed Naveed", "" ], [ "Sorokos", "Ioannis", "" ], [ "Reich", "Jan", "" ], [ "Kolios", "Panayiotis", "" ], [ "Michael", "Maria K.", "" ], [ "Theocharides", "Theocharis", "" ], [ "Ellinas", "Georgios", "" ], [ "Schneider", "Daniel", "" ], [ "Papadopoulos", "Yiannis", "" ] ]
The use of Unmanned Aerial Vehicles (UAVs) offers many advantages across a variety of applications. However, safety assurance is a key barrier to widespread usage, especially given the unpredictable operational and environmental factors experienced by UAVs, which are hard to capture solely at design-time. This paper proposes a new reliability modeling approach called SafeDrones to help address this issue by enabling runtime reliability and risk assessment of UAVs. It is a prototype instantiation of the Executable Digital Dependable Identity (EDDI) concept, which aims to create a model-based solution for real-time, data-driven dependability assurance for multi-robot systems. By providing real-time reliability estimates, SafeDrones allows UAVs to update their missions accordingly in an adaptive manner.
2008.00962
Lucas Tabelini Torres
Lucas Tabelini, Rodrigo Berriel, Thiago M. Paix\~ao, Alberto F. De Souza, Claudine Badue, Nicu Sebe and Thiago Oliveira-Santos
Deep Traffic Sign Detection and Recognition Without Target Domain Real Images
arXiv admin note: text overlap with arXiv:1907.09679
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has been successfully applied to several problems related to autonomous driving, often relying on large databases of real target-domain images for proper training. The acquisition of such real-world data is not always possible in the self-driving context, and sometimes their annotation is not feasible. Moreover, in many tasks, there is an intrinsic data imbalance that most learning-based methods struggle to cope with. Particularly, traffic sign detection is a challenging problem in which these three issues are seen altogether. To address these challenges, we propose a novel database generation method that requires only (i) arbitrary natural images, i.e., no real images from the target domain, and (ii) templates of the traffic signs. The method does not aim to replace training with real data, but to provide a compatible alternative when real data are not available. The effortlessly generated database is shown to be effective for the training of a deep detector on traffic signs from multiple countries. On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one. When compared to training with a smaller data set of real images, training with synthetic images increased the accuracy by 12.25%. The proposed method also improves the performance of the detector when target-domain data are available.
[ { "created": "Thu, 30 Jul 2020 21:06:47 GMT", "version": "v1" } ]
2020-08-04
[ [ "Tabelini", "Lucas", "" ], [ "Berriel", "Rodrigo", "" ], [ "Paixão", "Thiago M.", "" ], [ "De Souza", "Alberto F.", "" ], [ "Badue", "Claudine", "" ], [ "Sebe", "Nicu", "" ], [ "Oliveira-Santos", "Thiago", "" ] ]
Deep learning has been successfully applied to several problems related to autonomous driving, often relying on large databases of real target-domain images for proper training. The acquisition of such real-world data is not always possible in the self-driving context, and sometimes their annotation is not feasible. Moreover, in many tasks, there is an intrinsic data imbalance that most learning-based methods struggle to cope with. Particularly, traffic sign detection is a challenging problem in which these three issues are seen altogether. To address these challenges, we propose a novel database generation method that requires only (i) arbitrary natural images, i.e., no real images from the target domain, and (ii) templates of the traffic signs. The method does not aim to replace training with real data, but to provide a compatible alternative when real data are not available. The effortlessly generated database is shown to be effective for the training of a deep detector on traffic signs from multiple countries. On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one. When compared to training with a smaller data set of real images, training with synthetic images increased the accuracy by 12.25%. The proposed method also improves the performance of the detector when target-domain data are available.
2004.08145
Zhiwei Gao
Zhiwei Gao, Shuntaro Yada, Shoko Wakamiya and Eiji Aramaki
NAIST COVID: Multilingual COVID-19 Twitter and Weibo Dataset
null
null
null
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since the outbreak of coronavirus disease 2019 (COVID-19) in late 2019, it has affected over 200 countries and billions of people worldwide. It has affected people's social lives owing to enforced measures such as "social distancing" and "stay at home," resulting in increased interaction through social media. Given that social media can provide valuable information about COVID-19 at a global scale, it is important to share the data and encourage social media studies against COVID-19 or other infectious diseases. Therefore, we have released a multilingual dataset of social media posts related to COVID-19, consisting of microblogs in English and Japanese from Twitter and those in Chinese from Weibo. The data cover microblogs from January 20, 2020, to March 24, 2020. This paper also provides a quantitative as well as qualitative analysis of these datasets by creating daily word clouds as an example of text-mining analysis. The dataset is now available on GitHub. This dataset can be analyzed in a multitude of ways and is expected to help in efficient communication of precautions related to COVID-19.
[ { "created": "Fri, 17 Apr 2020 09:48:14 GMT", "version": "v1" } ]
2020-04-20
[ [ "Gao", "Zhiwei", "" ], [ "Yada", "Shuntaro", "" ], [ "Wakamiya", "Shoko", "" ], [ "Aramaki", "Eiji", "" ] ]
Since the outbreak of coronavirus disease 2019 (COVID-19) in late 2019, it has affected over 200 countries and billions of people worldwide. It has affected people's social lives owing to enforced measures such as "social distancing" and "stay at home," resulting in increased interaction through social media. Given that social media can provide valuable information about COVID-19 at a global scale, it is important to share the data and encourage social media studies against COVID-19 or other infectious diseases. Therefore, we have released a multilingual dataset of social media posts related to COVID-19, consisting of microblogs in English and Japanese from Twitter and those in Chinese from Weibo. The data cover microblogs from January 20, 2020, to March 24, 2020. This paper also provides a quantitative as well as qualitative analysis of these datasets by creating daily word clouds as an example of text-mining analysis. The dataset is now available on GitHub. This dataset can be analyzed in a multitude of ways and is expected to help in efficient communication of precautions related to COVID-19.
2408.04870
Mulong Luo
Ayush RoyChowdhury, Mulong Luo, Prateek Sahu, Sarbartha Banerjee, Mohit Tiwari
ConfusedPilot: Confused Deputy Risks in RAG-based LLMs
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Retrieval augmented generation (RAG) is a process where a large language model (LLM) retrieves useful information from a database and then generates the responses. It is becoming popular in enterprise settings for daily business operations. For example, Copilot for Microsoft 365 has accumulated millions of businesses. However, the security implications of adopting such RAG-based systems are unclear. In this paper, we introduce ConfusedPilot, a class of security vulnerabilities of RAG systems that confuse Copilot and cause integrity and confidentiality violations in its responses. First, we investigate a vulnerability that embeds malicious text in the modified prompt in RAG, corrupting the responses generated by the LLM. Second, we demonstrate a vulnerability that leaks secret data, which leverages the caching mechanism during retrieval. Third, we investigate how both vulnerabilities can be exploited to propagate misinformation within the enterprise and ultimately impact its operations, such as sales and manufacturing. We also discuss the root cause of these attacks by investigating the architecture of a RAG-based system. This study highlights the security vulnerabilities in today's RAG-based systems and proposes design guidelines to secure future RAG-based systems.
[ { "created": "Fri, 9 Aug 2024 05:20:05 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2024 22:51:30 GMT", "version": "v2" }, { "created": "Thu, 15 Aug 2024 05:24:19 GMT", "version": "v3" } ]
2024-08-16
[ [ "RoyChowdhury", "Ayush", "" ], [ "Luo", "Mulong", "" ], [ "Sahu", "Prateek", "" ], [ "Banerjee", "Sarbartha", "" ], [ "Tiwari", "Mohit", "" ] ]
Retrieval augmented generation (RAG) is a process where a large language model (LLM) retrieves useful information from a database and then generates the responses. It is becoming popular in enterprise settings for daily business operations. For example, Copilot for Microsoft 365 has accumulated millions of businesses. However, the security implications of adopting such RAG-based systems are unclear. In this paper, we introduce ConfusedPilot, a class of security vulnerabilities of RAG systems that confuse Copilot and cause integrity and confidentiality violations in its responses. First, we investigate a vulnerability that embeds malicious text in the modified prompt in RAG, corrupting the responses generated by the LLM. Second, we demonstrate a vulnerability that leaks secret data, which leverages the caching mechanism during retrieval. Third, we investigate how both vulnerabilities can be exploited to propagate misinformation within the enterprise and ultimately impact its operations, such as sales and manufacturing. We also discuss the root cause of these attacks by investigating the architecture of a RAG-based system. This study highlights the security vulnerabilities in today's RAG-based systems and proposes design guidelines to secure future RAG-based systems.
2109.10431
Hao Wang
Haewon Jeong, Hao Wang, Flavio P. Calmon
Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values
null
null
null
null
cs.LG cs.CY cs.IT math.IT stat.ML
http://creativecommons.org/licenses/by/4.0/
We investigate the fairness concerns of training a machine learning model using data with missing values. Even though there are a number of fairness intervention methods in the literature, most of them require a complete training set as input. In practice, data can have missing values, and data missing patterns can depend on group attributes (e.g. gender or race). Simply applying off-the-shelf fair learning algorithms to an imputed dataset may lead to an unfair model. In this paper, we first theoretically analyze different sources of discrimination risks when training with an imputed dataset. Then, we propose an integrated approach based on decision trees that does not require a separate process of imputation and learning. Instead, we train a tree with missing incorporated as attribute (MIA), which does not require explicit imputation, and we optimize a fairness-regularized objective function. We demonstrate that our approach outperforms existing fairness intervention methods applied to an imputed dataset, through several experiments on real-world datasets.
[ { "created": "Tue, 21 Sep 2021 20:46:22 GMT", "version": "v1" }, { "created": "Thu, 14 Apr 2022 02:13:34 GMT", "version": "v2" } ]
2022-04-15
[ [ "Jeong", "Haewon", "" ], [ "Wang", "Hao", "" ], [ "Calmon", "Flavio P.", "" ] ]
We investigate the fairness concerns of training a machine learning model using data with missing values. Even though there are a number of fairness intervention methods in the literature, most of them require a complete training set as input. In practice, data can have missing values, and data missing patterns can depend on group attributes (e.g. gender or race). Simply applying off-the-shelf fair learning algorithms to an imputed dataset may lead to an unfair model. In this paper, we first theoretically analyze different sources of discrimination risks when training with an imputed dataset. Then, we propose an integrated approach based on decision trees that does not require a separate process of imputation and learning. Instead, we train a tree with missing incorporated as attribute (MIA), which does not require explicit imputation, and we optimize a fairness-regularized objective function. We demonstrate that our approach outperforms existing fairness intervention methods applied to an imputed dataset, through several experiments on real-world datasets.
2312.02963
Zhangyang Xiong
Zhangyang Xiong, Chenghong Li, Kenkun Liu, Hongjie Liao, Jianqiao Hu, Junyi Zhu, Shuliang Ning, Lingteng Qiu, Chongjie Wang, Shijie Wang, Shuguang Cui and Xiaoguang Han
MVHumanNet: A Large-scale Dataset of Multi-view Daily Dressing Human Captures
Project page: https://x-zhangyang.github.io/MVHumanNet/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this era, the success of large language models and text-to-image models can be attributed to the driving force of large-scale datasets. However, in the realm of 3D vision, while remarkable progress has been made with models trained on large-scale synthetic and real-captured object data like Objaverse and MVImgNet, a similar level of progress has not been observed in the domain of human-centric tasks partially due to the lack of a large-scale human dataset. Existing datasets of high-fidelity 3D human capture continue to be mid-sized due to the significant challenges in acquiring large-scale high-quality 3D human data. To bridge this gap, we present MVHumanNet, a dataset that comprises multi-view human action sequences of 4,500 human identities. The primary focus of our work is on collecting human data that features a large number of diverse identities and everyday clothing using a multi-view human capture system, which facilitates easily scalable data collection. Our dataset contains 9,000 daily outfits, 60,000 motion sequences and 645 million frames with extensive annotations, including human masks, camera parameters, 2D and 3D keypoints, SMPL/SMPLX parameters, and corresponding textual descriptions. To explore the potential of MVHumanNet in various 2D and 3D visual tasks, we conducted pilot studies on view-consistent action recognition, human NeRF reconstruction, text-driven view-unconstrained human image generation, as well as 2D view-unconstrained human image and 3D avatar generation. Extensive experiments demonstrate the performance improvements and effective applications enabled by the scale provided by MVHumanNet. As the current largest-scale 3D human dataset, we hope that the release of MVHumanNet data with annotations will foster further innovations in the domain of 3D human-centric tasks at scale.
[ { "created": "Tue, 5 Dec 2023 18:50:12 GMT", "version": "v1" } ]
2023-12-06
[ [ "Xiong", "Zhangyang", "" ], [ "Li", "Chenghong", "" ], [ "Liu", "Kenkun", "" ], [ "Liao", "Hongjie", "" ], [ "Hu", "Jianqiao", "" ], [ "Zhu", "Junyi", "" ], [ "Ning", "Shuliang", "" ], [ "Qiu", "Lingteng", "" ], [ "Wang", "Chongjie", "" ], [ "Wang", "Shijie", "" ], [ "Cui", "Shuguang", "" ], [ "Han", "Xiaoguang", "" ] ]
In this era, the success of large language models and text-to-image models can be attributed to the driving force of large-scale datasets. However, in the realm of 3D vision, while remarkable progress has been made with models trained on large-scale synthetic and real-captured object data like Objaverse and MVImgNet, a similar level of progress has not been observed in the domain of human-centric tasks partially due to the lack of a large-scale human dataset. Existing datasets of high-fidelity 3D human capture continue to be mid-sized due to the significant challenges in acquiring large-scale high-quality 3D human data. To bridge this gap, we present MVHumanNet, a dataset that comprises multi-view human action sequences of 4,500 human identities. The primary focus of our work is on collecting human data that features a large number of diverse identities and everyday clothing using a multi-view human capture system, which facilitates easily scalable data collection. Our dataset contains 9,000 daily outfits, 60,000 motion sequences and 645 million frames with extensive annotations, including human masks, camera parameters, 2D and 3D keypoints, SMPL/SMPLX parameters, and corresponding textual descriptions. To explore the potential of MVHumanNet in various 2D and 3D visual tasks, we conducted pilot studies on view-consistent action recognition, human NeRF reconstruction, text-driven view-unconstrained human image generation, as well as 2D view-unconstrained human image and 3D avatar generation. Extensive experiments demonstrate the performance improvements and effective applications enabled by the scale provided by MVHumanNet. As the current largest-scale 3D human dataset, we hope that the release of MVHumanNet data with annotations will foster further innovations in the domain of 3D human-centric tasks at scale.
2212.07612
Kai Huang
Kai Huang, Haibo Hu, Qingqing Ye, Kai Tian, Bolong Zheng, Xiaofang Zhou
TED: Towards Discovering Top-k Edge-Diversified Patterns in a Graph Database
This paper is accepted by SIGMOD 2023
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
With an exponentially growing number of graphs from disparate repositories, there is a strong need to analyze a graph database containing an extensive collection of small- or medium-sized data graphs (e.g., chemical compounds). Although subgraph enumeration and subgraph mining have been proposed to bring insights into a graph database by a set of subgraph structures, they often end up with similar or homogenous topologies, which is undesirable in many graph applications. To address this limitation, we propose the Top-k Edge-Diversified Patterns Discovery problem to retrieve a set of subgraphs that cover the maximum number of edges in a database. To process such queries efficiently, we present a generic and extensible framework called Ted which achieves a guaranteed approximation ratio to the optimal result. Two optimization strategies are further developed to improve the performance. Experimental studies on real-world datasets demonstrate the superiority of Ted over traditional techniques.
[ { "created": "Thu, 15 Dec 2022 04:27:29 GMT", "version": "v1" } ]
2022-12-16
[ [ "Huang", "Kai", "" ], [ "Hu", "Haibo", "" ], [ "Ye", "Qingqing", "" ], [ "Tian", "Kai", "" ], [ "Zheng", "Bolong", "" ], [ "Zhou", "Xiaofang", "" ] ]
With an exponentially growing number of graphs from disparate repositories, there is a strong need to analyze a graph database containing an extensive collection of small- or medium-sized data graphs (e.g., chemical compounds). Although subgraph enumeration and subgraph mining have been proposed to bring insights into a graph database by a set of subgraph structures, they often end up with similar or homogenous topologies, which is undesirable in many graph applications. To address this limitation, we propose the Top-k Edge-Diversified Patterns Discovery problem to retrieve a set of subgraphs that cover the maximum number of edges in a database. To process such queries efficiently, we present a generic and extensible framework called Ted which achieves a guaranteed approximation ratio to the optimal result. Two optimization strategies are further developed to improve the performance. Experimental studies on real-world datasets demonstrate the superiority of Ted over traditional techniques.
1511.00148
Indre Zliobaite
Indre Zliobaite
A survey on measuring indirect discrimination in machine learning
null
null
null
null
cs.CY stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, many decisions are made using predictive models built on historical data. Predictive models may systematically discriminate against groups of people even if the computing process is fair and well-intentioned. Discrimination-aware data mining studies how to make predictive models free from discrimination, when historical data, on which they are built, may be biased, incomplete, or even contain past discriminatory decisions. Discrimination refers to disadvantageous treatment of a person based on belonging to a category rather than on individual merit. In this survey we review and organize various discrimination measures that have been used for measuring discrimination in data, as well as in evaluating performance of discrimination-aware predictive models. We also discuss related measures from other disciplines, which have not been used for measuring discrimination, but potentially could be suitable for this purpose. We computationally analyze properties of selected measures. We also review and discuss measuring procedures, and present recommendations for practitioners. The primary target audience is researchers in data mining, machine learning, pattern recognition, and statistical modeling who develop new methods for non-discriminatory predictive modeling. In addition, practitioners and policy makers can use the survey for diagnosing potential discrimination by predictive models.
[ { "created": "Sat, 31 Oct 2015 16:04:12 GMT", "version": "v1" } ]
2015-11-23
[ [ "Zliobaite", "Indre", "" ] ]
Nowadays, many decisions are made using predictive models built on historical data. Predictive models may systematically discriminate against groups of people even if the computing process is fair and well-intentioned. Discrimination-aware data mining studies how to make predictive models free from discrimination, when historical data, on which they are built, may be biased, incomplete, or even contain past discriminatory decisions. Discrimination refers to disadvantageous treatment of a person based on belonging to a category rather than on individual merit. In this survey we review and organize various discrimination measures that have been used for measuring discrimination in data, as well as in evaluating performance of discrimination-aware predictive models. We also discuss related measures from other disciplines, which have not been used for measuring discrimination, but potentially could be suitable for this purpose. We computationally analyze properties of selected measures. We also review and discuss measuring procedures, and present recommendations for practitioners. The primary target audience is researchers in data mining, machine learning, pattern recognition, and statistical modeling who develop new methods for non-discriminatory predictive modeling. In addition, practitioners and policy makers can use the survey for diagnosing potential discrimination by predictive models.
2307.06983
Hu Wei
Wei Hu, Xuhong Wang, Ding Wang, Shengyue Yao, Zuqiu Mao, Li Li, Fei-Yue Wang, Yilun Lin
IR Design for Application-Specific Natural Language: A Case Study on Traffic Data
null
null
null
null
cs.SE cs.AI cs.PL
http://creativecommons.org/licenses/by/4.0/
In the realm of software applications in the transportation industry, Domain-Specific Languages (DSLs) have enjoyed widespread adoption due to their ease of use and various other benefits. With the ceaseless progress in computer performance and the rapid development of large-scale models, the possibility of programming using natural language in specified applications - referred to as Application-Specific Natural Language (ASNL) - has emerged. ASNL exhibits greater flexibility and freedom, which, in turn, leads to an increase in computational complexity for parsing and a decrease in processing performance. To tackle this issue, our paper advances a design for an intermediate representation (IR) that caters to ASNL and can uniformly process transportation data into a graph data format, improving data processing performance. Experimental comparisons reveal that in standard data query operations, our proposed IR design can achieve a speed improvement of over forty times compared to direct usage of standard XML format data.
[ { "created": "Thu, 13 Jul 2023 15:52:05 GMT", "version": "v1" } ]
2023-07-17
[ [ "Hu", "Wei", "" ], [ "Wang", "Xuhong", "" ], [ "Wang", "Ding", "" ], [ "Yao", "Shengyue", "" ], [ "Mao", "Zuqiu", "" ], [ "Li", "Li", "" ], [ "Wang", "Fei-Yue", "" ], [ "Lin", "Yilun", "" ] ]
In the realm of software applications in the transportation industry, Domain-Specific Languages (DSLs) have enjoyed widespread adoption due to their ease of use and various other benefits. With the ceaseless progress in computer performance and the rapid development of large-scale models, the possibility of programming using natural language in specified applications - referred to as Application-Specific Natural Language (ASNL) - has emerged. ASNL exhibits greater flexibility and freedom, which, in turn, leads to an increase in computational complexity for parsing and a decrease in processing performance. To tackle this issue, our paper advances a design for an intermediate representation (IR) that caters to ASNL and can uniformly process transportation data into a graph data format, improving data processing performance. Experimental comparisons reveal that in standard data query operations, our proposed IR design can achieve a speed improvement of over forty times compared to direct usage of standard XML format data.
2210.10765
Fahim Tajwar
Annie Xie, Fahim Tajwar, Archit Sharma, Chelsea Finn
When to Ask for Help: Proactive Interventions in Autonomous Reinforcement Learning
36th Conference on Neural Information Processing Systems (NeurIPS 2022)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world. A critical challenge to such autonomy is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table. While standard agents require constant monitoring to decide when to intervene, we aim to design proactive agents that can request human intervention only when needed. To this end, we propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them. On a suite of continuous control environments with unknown irreversible states, we find that our algorithm exhibits better sample- and intervention-efficiency compared to existing methods. Our code is publicly available at https://sites.google.com/view/proactive-interventions
[ { "created": "Wed, 19 Oct 2022 17:57:24 GMT", "version": "v1" } ]
2022-10-20
[ [ "Xie", "Annie", "" ], [ "Tajwar", "Fahim", "" ], [ "Sharma", "Archit", "" ], [ "Finn", "Chelsea", "" ] ]
A long-term goal of reinforcement learning is to design agents that can autonomously interact and learn in the world. A critical challenge to such autonomy is the presence of irreversible states which require external assistance to recover from, such as when a robot arm has pushed an object off of a table. While standard agents require constant monitoring to decide when to intervene, we aim to design proactive agents that can request human intervention only when needed. To this end, we propose an algorithm that efficiently learns to detect and avoid states that are irreversible, and proactively asks for help in case the agent does enter them. On a suite of continuous control environments with unknown irreversible states, we find that our algorithm exhibits better sample- and intervention-efficiency compared to existing methods. Our code is publicly available at https://sites.google.com/view/proactive-interventions
2002.04114
Guan-An Wang
Guan-An Wang, Tianzhu Zhang, Yang Yang, Jian Cheng, Jianlong Chang, Xu Liang, Zengguang Hou
Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification
accepted by AAAI'20
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RGB-Infrared (IR) person re-identification is very challenging due to the large cross-modality variations between RGB and IR images. The key solution is to learn aligned features to bridge the RGB and IR modalities. However, due to the lack of correspondence labels between every pair of RGB and IR images, most methods try to alleviate the variations with set-level alignment by reducing the distance between the entire RGB and IR sets. However, this set-level alignment may lead to misalignment of some instances, which limits the performance for RGB-IR Re-ID. Different from existing methods, in this paper, we propose to generate cross-modality paired-images and perform both global set-level and fine-grained instance-level alignments. Our proposed method enjoys several merits. First, our method can perform set-level alignment by disentangling modality-specific and modality-invariant features. Compared with conventional methods, ours can explicitly remove the modality-specific features and the modality variation can be better reduced. Second, given cross-modality unpaired-images of a person, our method can generate cross-modality paired images from exchanged images. With them, we can directly perform instance-level alignment by minimizing distances of every pair of images. Extensive experimental results on two standard benchmarks demonstrate that the proposed model performs favourably against state-of-the-art methods. Especially, on SYSU-MM01 dataset, our model can achieve a gain of 9.2% and 7.7% in terms of Rank-1 and mAP. Code is available at https://github.com/wangguanan/JSIA-ReID.
[ { "created": "Mon, 10 Feb 2020 22:15:19 GMT", "version": "v1" }, { "created": "Tue, 18 Feb 2020 00:03:01 GMT", "version": "v2" } ]
2020-02-19
[ [ "Wang", "Guan-An", "" ], [ "Zhang", "Tianzhu", "" ], [ "Yang", "Yang", "" ], [ "Cheng", "Jian", "" ], [ "Chang", "Jianlong", "" ], [ "Liang", "Xu", "" ], [ "Hou", "Zengguang", "" ] ]
RGB-Infrared (IR) person re-identification is very challenging due to the large cross-modality variations between RGB and IR images. The key solution is to learn aligned features to bridge the RGB and IR modalities. However, due to the lack of correspondence labels between every pair of RGB and IR images, most methods try to alleviate the variations with set-level alignment by reducing the distance between the entire RGB and IR sets. However, this set-level alignment may lead to misalignment of some instances, which limits the performance for RGB-IR Re-ID. Different from existing methods, in this paper, we propose to generate cross-modality paired-images and perform both global set-level and fine-grained instance-level alignments. Our proposed method enjoys several merits. First, our method can perform set-level alignment by disentangling modality-specific and modality-invariant features. Compared with conventional methods, ours can explicitly remove the modality-specific features and the modality variation can be better reduced. Second, given cross-modality unpaired-images of a person, our method can generate cross-modality paired images from exchanged images. With them, we can directly perform instance-level alignment by minimizing distances of every pair of images. Extensive experimental results on two standard benchmarks demonstrate that the proposed model performs favourably against state-of-the-art methods. Especially, on SYSU-MM01 dataset, our model can achieve a gain of 9.2% and 7.7% in terms of Rank-1 and mAP. Code is available at https://github.com/wangguanan/JSIA-ReID.
2110.12884
Boris van Breugel
Boris van Breugel, Trent Kyono, Jeroen Berrevoets, Mihaela van der Schaar
DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks
null
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of solving for this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data - while remaining truthful to the underlying data-generating process (DGP) - is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference time debiasing, where biased edges can be strategically removed for satisfying user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and - in contrast to existing methods - is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
[ { "created": "Mon, 25 Oct 2021 12:39:56 GMT", "version": "v1" }, { "created": "Thu, 4 Nov 2021 21:25:22 GMT", "version": "v2" } ]
2021-11-08
[ [ "van Breugel", "Boris", "" ], [ "Kyono", "Trent", "" ], [ "Berrevoets", "Jeroen", "" ], [ "van der Schaar", "Mihaela", "" ] ]
Machine learning models have been criticized for reflecting unfair biases in the training data. Instead of solving for this by introducing fair learning algorithms directly, we focus on generating fair synthetic data, such that any downstream learner is fair. Generating fair synthetic data from unfair data - while remaining truthful to the underlying data-generating process (DGP) - is non-trivial. In this paper, we introduce DECAF: a GAN-based fair synthetic data generator for tabular data. With DECAF we embed the DGP explicitly as a structural causal model in the input layers of the generator, allowing each variable to be reconstructed conditioned on its causal parents. This procedure enables inference time debiasing, where biased edges can be strategically removed for satisfying user-defined fairness requirements. The DECAF framework is versatile and compatible with several popular definitions of fairness. In our experiments, we show that DECAF successfully removes undesired bias and - in contrast to existing methods - is capable of generating high-quality synthetic data. Furthermore, we provide theoretical guarantees on the generator's convergence and the fairness of downstream models.
1603.01472
Yinan Wang
Yinan Wang, Hakan Johansson, Hui Xu, and Jietao Diao
Minimax Design and Order Estimation of FIR Filters for Extending the Bandwidth of ADCs
4 pages, 3 figures, IEEE Int. Symp. Circuits Syst. (to appear), Montreal, Canada, 2016
null
10.1109/ISCAS.2016.7539015
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The bandwidth of the sampling systems, especially for time-interleaved analog-to-digital converters, needs to be extended along with the rapid increase of the sampling rate. A digitally assisted technique becomes a feasible approach to extend the analog bandwidth, as it is impractical to implement the extension in analog circuits. This paper derives accurate order estimation formulas for the bandwidth extension filter, which is designed in the minimax sense with the ripple constraints as the design criteria. The derived filter order estimation is significant in evaluating the computational complexity from the viewpoint of the top-level system design. Moreover, with the proposed order estimates, one can conveniently obtain the minimal order that satisfies the given ripple constraints, which contributes to reducing the design time. Both the performance of the extension filter and its order estimation are illustrated and demonstrated through simulation examples.
[ { "created": "Fri, 4 Mar 2016 14:16:33 GMT", "version": "v1" } ]
2016-11-18
[ [ "Wang", "Yinan", "" ], [ "Johansson", "Hakan", "" ], [ "Xu", "Hui", "" ], [ "Diao", "Jietao", "" ] ]
The bandwidth of the sampling systems, especially for time-interleaved analog-to-digital converters, needs to be extended along with the rapid increase of the sampling rate. A digitally assisted technique becomes a feasible approach to extend the analog bandwidth, as it is impractical to implement the extension in analog circuits. This paper derives accurate order estimation formulas for the bandwidth extension filter, which is designed in the minimax sense with the ripple constraints as the design criteria. The derived filter order estimation is significant in evaluating the computational complexity from the viewpoint of the top-level system design. Moreover, with the proposed order estimates, one can conveniently obtain the minimal order that satisfies the given ripple constraints, which contributes to reducing the design time. Both the performance of the extension filter and its order estimation are illustrated and demonstrated through simulation examples.
2206.11668
H\'ector Cadavid
H\'ector Cadavid, Vasilios Andrikopoulos, Paris Avgeriou
Documentation-as-code for Interface Control Document Management in Systems of Systems: a Technical Action Research Study
Preprint of the paper accepted in the 16th European Conference on Software Architecture (ECSA)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The architecting of Systems of Systems (SoS), that is, of systems that emerge from the cooperation of multiple independent constituent systems, is a topic of increasing interest in both industry and academia. However, recent empirical studies revealed what seems to be an overlooked aspect of the architecting of SoS that is linked to major integration and operational issues: the interplay between the various disciplines involved in such an architecting process. This aspect becomes particularly relevant for the management of the interfaces between the SoS constituents, where such disciplines inevitably meet. In this paper, we present the results of the first cycle of a Technical Action Research (TAR) study conducted in cooperation between the authors and a group of practitioners involved in the long-running architecting process of a large-scale radio astronomy SoS project. This TAR is aimed at exploring potential improvements of the document-centered interface management approach currently followed in this project by adopting elements of the \textit{documentation-as-code} philosophy, which is widely adopted in the domain of software systems. As a result, a working proof-of-concept of an ICD (Interface Control Document) management approach was developed by the researchers and evaluated by the practitioners. The results of the study and the corresponding lessons learned are reported in this work.
[ { "created": "Thu, 23 Jun 2022 12:47:47 GMT", "version": "v1" } ]
2022-06-24
[ [ "Cadavid", "Héctor", "" ], [ "Andrikopoulos", "Vasilios", "" ], [ "Avgeriou", "Paris", "" ] ]
The architecting of Systems of Systems (SoS), that is, of systems that emerge from the cooperation of multiple independent constituent systems, is a topic of increasing interest in both industry and academia. However, recent empirical studies revealed what seems to be an overlooked aspect of the architecting of SoS that is linked to major integration and operational issues: the interplay between the various disciplines involved in such an architecting process. This aspect becomes particularly relevant for the management of the interfaces between the SoS constituents, where such disciplines inevitably meet. In this paper, we present the results of the first cycle of a Technical Action Research (TAR) study conducted in cooperation between the authors and a group of practitioners involved in the long-running architecting process of a large-scale radio astronomy SoS project. This TAR is aimed at exploring potential improvements of the document-centered interface management approach currently followed in this project by adopting elements of the \textit{documentation-as-code} philosophy, which is widely adopted in the domain of software systems. As a result, a working proof-of-concept of an ICD (Interface Control Document) management approach was developed by the researchers and evaluated by the practitioners. The results of the study and the corresponding lessons learned are reported in this work.
2306.00316
Jia Li
Jia Li, Shiva Nejati, Mehrdad Sabetzadeh
Using Genetic Programming to Build Self-Adaptivity into Software-Defined Networks
Accepted for publication by ACM Transactions on Autonomous and Adaptive Systems (TAAS) (in Aug 2023). arXiv admin note: substantial text overlap with arXiv:2205.04352
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-adaptation solutions need to periodically monitor, reason about, and adapt a running system. The adaptation step involves generating an adaptation strategy and applying it to the running system whenever an anomaly arises. In this article, we argue that, rather than generating individual adaptation strategies, the goal should be to adapt the control logic of the running system in such a way that the system itself would learn how to steer clear of future anomalies, without triggering self-adaptation too frequently. While the need for adaptation is never eliminated, especially noting the uncertain and evolving environment of complex systems, reducing the frequency of adaptation interventions is advantageous for various reasons, e.g., to increase performance and to make a running system more robust. We instantiate and empirically examine the above idea for software-defined networking -- a key enabling technology for modern data centres and Internet of Things applications. Using genetic programming (GP), we propose a self-adaptation solution that continuously learns and updates the control constructs in the data-forwarding logic of a software-defined network. Our evaluation, performed using open-source synthetic and industrial data, indicates that, compared to a baseline adaptation technique that attempts to generate individual adaptations, our GP-based approach is more effective in resolving network congestion, and further, reduces the frequency of adaptation interventions over time. In addition, we show that, for networks with the same topology, reusing over larger networks the knowledge that is learned on smaller networks leads to significant improvements in the performance of our GP-based adaptation approach. Finally, we compare our approach against a standard data-forwarding algorithm from the network literature, demonstrating that our approach significantly reduces packet loss.
[ { "created": "Thu, 1 Jun 2023 03:30:33 GMT", "version": "v1" }, { "created": "Tue, 15 Aug 2023 15:38:27 GMT", "version": "v2" } ]
2023-08-16
[ [ "Li", "Jia", "" ], [ "Nejati", "Shiva", "" ], [ "Sabetzadeh", "Mehrdad", "" ] ]
Self-adaptation solutions need to periodically monitor, reason about, and adapt a running system. The adaptation step involves generating an adaptation strategy and applying it to the running system whenever an anomaly arises. In this article, we argue that, rather than generating individual adaptation strategies, the goal should be to adapt the control logic of the running system in such a way that the system itself would learn how to steer clear of future anomalies, without triggering self-adaptation too frequently. While the need for adaptation is never eliminated, especially noting the uncertain and evolving environment of complex systems, reducing the frequency of adaptation interventions is advantageous for various reasons, e.g., to increase performance and to make a running system more robust. We instantiate and empirically examine the above idea for software-defined networking -- a key enabling technology for modern data centres and Internet of Things applications. Using genetic programming (GP), we propose a self-adaptation solution that continuously learns and updates the control constructs in the data-forwarding logic of a software-defined network. Our evaluation, performed using open-source synthetic and industrial data, indicates that, compared to a baseline adaptation technique that attempts to generate individual adaptations, our GP-based approach is more effective in resolving network congestion, and further, reduces the frequency of adaptation interventions over time. In addition, we show that, for networks with the same topology, reusing over larger networks the knowledge that is learned on smaller networks leads to significant improvements in the performance of our GP-based adaptation approach. Finally, we compare our approach against a standard data-forwarding algorithm from the network literature, demonstrating that our approach significantly reduces packet loss.
2403.17231
Saad Abdul Ghani
Saad Abdul Ghani, Zizhao Wang, Peter Stone, Xuesu Xiao
Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination
Submitted to International Conference on Intelligent Robots and Systems (IROS) 2024
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents a self-supervised learning method to safely learn a motion planner for ground robots to navigate environments with dense and dynamic obstacles. When facing highly-cluttered, fast-moving, hard-to-predict obstacles, classical motion planners may not be able to keep up with limited onboard computation. For learning-based planners, high-quality demonstrations are difficult to acquire for imitation learning while reinforcement learning becomes inefficient due to the high probability of collision during exploration. To safely and efficiently provide training data, the Learning from Hallucination (LfH) approaches synthesize difficult navigation environments based on past successful navigation experiences in relatively easy or completely open ones, but unfortunately cannot address dynamic obstacles. In our new Dynamic Learning from Learned Hallucination (Dyna-LfLH), we design and learn a novel latent distribution and sample dynamic obstacles from it, so the generated training data can be used to learn a motion planner to navigate in dynamic environments. Dyna-LfLH is evaluated on a ground robot in both simulated and physical environments and achieves up to 25% better success rate compared to baselines.
[ { "created": "Mon, 25 Mar 2024 22:17:51 GMT", "version": "v1" } ]
2024-03-27
[ [ "Ghani", "Saad Abdul", "" ], [ "Wang", "Zizhao", "" ], [ "Stone", "Peter", "" ], [ "Xiao", "Xuesu", "" ] ]
This paper presents a self-supervised learning method to safely learn a motion planner for ground robots to navigate environments with dense and dynamic obstacles. When facing highly-cluttered, fast-moving, hard-to-predict obstacles, classical motion planners may not be able to keep up with limited onboard computation. For learning-based planners, high-quality demonstrations are difficult to acquire for imitation learning while reinforcement learning becomes inefficient due to the high probability of collision during exploration. To safely and efficiently provide training data, the Learning from Hallucination (LfH) approaches synthesize difficult navigation environments based on past successful navigation experiences in relatively easy or completely open ones, but unfortunately cannot address dynamic obstacles. In our new Dynamic Learning from Learned Hallucination (Dyna-LfLH), we design and learn a novel latent distribution and sample dynamic obstacles from it, so the generated training data can be used to learn a motion planner to navigate in dynamic environments. Dyna-LfLH is evaluated on a ground robot in both simulated and physical environments and achieves up to 25% better success rate compared to baselines.
1802.01021
Jonathan Raiman
Jonathan Raiman and Olivier Raiman
DeepType: Multilingual Entity Linking by Neural Type System Evolution
Presented at AAAI 2018
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The wealth of structured (e.g. Wikidata) and unstructured data about the world available today presents an incredible opportunity for tomorrow's Artificial Intelligence. So far, integration of these two different modalities is a difficult process, involving many decisions concerning how best to represent the information so that it will be captured or useful, and hand-labeling large amounts of data. DeepType overcomes this challenge by explicitly integrating symbolic information into the reasoning process of a neural network with a type system. First we construct a type system, and second, we use it to constrain the outputs of a neural network to respect the symbolic structure. We achieve this by reformulating the design problem into a mixed integer problem: create a type system and subsequently train a neural network with it. In this reformulation discrete variables select which parent-child relations from an ontology are types within the type system, while continuous variables control a classifier fit to the type system. The original problem cannot be solved exactly, so we propose a 2-step algorithm: 1) heuristic search or stochastic optimization over discrete variables that define a type system informed by an Oracle and a Learnability heuristic, 2) gradient descent to fit classifier parameters. We apply DeepType to the problem of Entity Linking on three standard datasets (i.e. WikiDisamb30, CoNLL (YAGO), TAC KBP 2010) and find that it outperforms all existing solutions by a wide margin, including approaches that rely on a human-designed type system or recent deep learning-based entity embeddings, while explicitly using symbolic information lets it integrate new entities without retraining.
[ { "created": "Sat, 3 Feb 2018 20:13:42 GMT", "version": "v1" } ]
2018-02-06
[ [ "Raiman", "Jonathan", "" ], [ "Raiman", "Olivier", "" ] ]
The wealth of structured (e.g. Wikidata) and unstructured data about the world available today presents an incredible opportunity for tomorrow's Artificial Intelligence. So far, integration of these two different modalities is a difficult process, involving many decisions concerning how best to represent the information so that it will be captured or useful, and hand-labeling large amounts of data. DeepType overcomes this challenge by explicitly integrating symbolic information into the reasoning process of a neural network with a type system. First we construct a type system, and second, we use it to constrain the outputs of a neural network to respect the symbolic structure. We achieve this by reformulating the design problem into a mixed integer problem: create a type system and subsequently train a neural network with it. In this reformulation discrete variables select which parent-child relations from an ontology are types within the type system, while continuous variables control a classifier fit to the type system. The original problem cannot be solved exactly, so we propose a 2-step algorithm: 1) heuristic search or stochastic optimization over discrete variables that define a type system informed by an Oracle and a Learnability heuristic, 2) gradient descent to fit classifier parameters. We apply DeepType to the problem of Entity Linking on three standard datasets (i.e. WikiDisamb30, CoNLL (YAGO), TAC KBP 2010) and find that it outperforms all existing solutions by a wide margin, including approaches that rely on a human-designed type system or recent deep learning-based entity embeddings, while explicitly using symbolic information lets it integrate new entities without retraining.
2102.11395
Ghani Lawal Mr.
Ghani O. Lawal and Michael Greenspan
Procam Calibration from a Single Pose of a Planar Target
11 pages, 9 figures, 10 tables. Submitted to the VISAPP Conference. Stored in the SciTepress Digital Library: https://www.scitepress.org/PublicationsDetail.aspx?ID=rGG70YCQyOs=&t=1
In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP, pages 817-827
10.5220/0010327708170827
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel user-friendly method is proposed for calibrating a procam system from a single pose of a planar chessboard target. The user simply needs to orient the chessboard in a single appropriate pose. A sequence of Gray Code patterns is projected onto the chessboard, which allows correspondences between the camera, projector and the chessboard to be automatically extracted. These correspondences are fed as input to a nonlinear optimization method that models the projection of the principal points onto the chessboard, and accurately calculates the intrinsic and extrinsic parameters of both the camera and the projector, as well as the camera's distortion coefficients. The method is experimentally validated on the procam system, which is shown to be comparable in accuracy with existing multi-pose approaches. The impact of the orientation of the chessboard with respect to the procam imaging planes is also explored through extensive simulation.
[ { "created": "Mon, 22 Feb 2021 22:53:29 GMT", "version": "v1" } ]
2021-02-24
[ [ "Lawal", "Ghani O.", "" ], [ "Greenspan", "Michael", "" ] ]
A novel user-friendly method is proposed for calibrating a procam system from a single pose of a planar chessboard target. The user simply needs to orient the chessboard in a single appropriate pose. A sequence of Gray Code patterns is projected onto the chessboard, which allows correspondences between the camera, projector and the chessboard to be automatically extracted. These correspondences are fed as input to a nonlinear optimization method that models the projection of the principal points onto the chessboard, and accurately calculates the intrinsic and extrinsic parameters of both the camera and the projector, as well as the camera's distortion coefficients. The method is experimentally validated on the procam system, which is shown to be comparable in accuracy with existing multi-pose approaches. The impact of the orientation of the chessboard with respect to the procam imaging planes is also explored through extensive simulation.
1807.09380
Jue Wang
Jue Wang and Anoop Cherian
Contrastive Video Representation Learning via Adversarial Perturbations
Revised version of ECCV 2018 Paper: Learning Discriminative Video Representations Using Adversarial Perturbations
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Adversarial perturbations are noise-like patterns that can subtly change the data, while failing an otherwise accurate classifier. In this paper, we propose to use such perturbations within a novel contrastive learning setup to build negative samples, which are then used to produce improved video representations. To this end, given a well-trained deep model for per-frame video recognition, we first generate adversarial noise adapted to this model. Positive and negative bags are produced using the original data features from the full video sequence and their perturbed counterparts, respectively. Unlike the classic contrastive learning methods, we develop a binary classification problem that learns a set of discriminative hyperplanes -- as a subspace -- that will separate the two bags from each other. This subspace is then used as a descriptor for the video, dubbed \emph{discriminative subspace pooling}. As the perturbed features belong to data classes that are likely to be confused with the original features, the discriminative subspace will characterize parts of the feature space that are more representative of the original data, and thus may provide robust video representations. To learn such descriptors, we formulate a subspace learning objective on the Stiefel manifold and resort to Riemannian optimization methods for solving it efficiently. We provide experiments on several video datasets and demonstrate state-of-the-art results.
[ { "created": "Tue, 24 Jul 2018 22:46:42 GMT", "version": "v1" }, { "created": "Thu, 26 Jul 2018 13:46:24 GMT", "version": "v2" }, { "created": "Thu, 16 Apr 2020 00:03:53 GMT", "version": "v3" } ]
2020-04-17
[ [ "Wang", "Jue", "" ], [ "Cherian", "Anoop", "" ] ]
Adversarial perturbations are noise-like patterns that can subtly change the data, while failing an otherwise accurate classifier. In this paper, we propose to use such perturbations within a novel contrastive learning setup to build negative samples, which are then used to produce improved video representations. To this end, given a well-trained deep model for per-frame video recognition, we first generate adversarial noise adapted to this model. Positive and negative bags are produced using the original data features from the full video sequence and their perturbed counterparts, respectively. Unlike the classic contrastive learning methods, we develop a binary classification problem that learns a set of discriminative hyperplanes -- as a subspace -- that will separate the two bags from each other. This subspace is then used as a descriptor for the video, dubbed \emph{discriminative subspace pooling}. As the perturbed features belong to data classes that are likely to be confused with the original features, the discriminative subspace will characterize parts of the feature space that are more representative of the original data, and thus may provide robust video representations. To learn such descriptors, we formulate a subspace learning objective on the Stiefel manifold and resort to Riemannian optimization methods for solving it efficiently. We provide experiments on several video datasets and demonstrate state-of-the-art results.
2211.01812
Hadi Hajieghrary
Sevag Tafnakaji and Hadi Hajieghrary and Quentin Teixeira and Yasemin Bekiroglu
Benchmarking local motion planners for navigation of mobile manipulators
Accepted to be presented at 2023 IEEE/SICE International Symposium on System Integration
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
There are various trajectory planners for mobile manipulators. It is often challenging to compare their performance under similar circumstances due to differences in hardware, dissimilarity of tasks and objectives, as well as uncertainties in measurements and operating environments. In this paper, we propose a simulation framework to evaluate the performance of the local trajectory planners to generate smooth, and dynamically and kinematically feasible trajectories for mobile manipulators in the same environment. We focus on local planners as they are key components that provide smooth trajectories while carrying a load, react to dynamic obstacles, and avoid collisions. We evaluate two prominent local trajectory planners, Dynamic-Window Approach (DWA) and Time Elastic Band (TEB) using the metrics that we introduce. Moreover, our software solution is applicable to any other local planners used in the Robot Operating System (ROS) framework, without additional programming effort.
[ { "created": "Thu, 3 Nov 2022 13:45:55 GMT", "version": "v1" } ]
2022-11-04
[ [ "Tafnakaji", "Sevag", "" ], [ "Hajieghrary", "Hadi", "" ], [ "Teixeira", "Quentin", "" ], [ "Bekiroglu", "Yasemin", "" ] ]
There are various trajectory planners for mobile manipulators. It is often challenging to compare their performance under similar circumstances due to differences in hardware, dissimilarity of tasks and objectives, as well as uncertainties in measurements and operating environments. In this paper, we propose a simulation framework to evaluate the performance of the local trajectory planners to generate smooth, and dynamically and kinematically feasible trajectories for mobile manipulators in the same environment. We focus on local planners as they are key components that provide smooth trajectories while carrying a load, react to dynamic obstacles, and avoid collisions. We evaluate two prominent local trajectory planners, Dynamic-Window Approach (DWA) and Time Elastic Band (TEB) using the metrics that we introduce. Moreover, our software solution is applicable to any other local planners used in the Robot Operating System (ROS) framework, without additional programming effort.
2205.09837
Keming Lu
Keming Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen
Summarization as Indirect Supervision for Relation Extraction
Accepted by EMNLP 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relation extraction (RE) models have been challenged by their reliance on training data with expensive annotations. Considering that summarization tasks aim at acquiring concise expressions of synoptical information from the longer context, these tasks naturally align with the objective of RE, i.e., extracting a kind of synoptical information that describes the relation of entity mentions. We present SuRE, which converts RE into a summarization formulation. SuRE leads to more precise and resource-efficient RE based on indirect supervision from summarization tasks. To achieve this goal, we develop sentence and relation conversion techniques that essentially bridge the formulation of summarization and RE tasks. We also incorporate constraint decoding techniques with Trie scoring to further enhance summarization-based RE with robust inference. Experiments on three RE datasets demonstrate the effectiveness of SuRE in both full-dataset and low-resource settings, showing that summarization is a promising source of indirect supervision to improve RE models.
[ { "created": "Thu, 19 May 2022 20:25:29 GMT", "version": "v1" }, { "created": "Fri, 21 Oct 2022 04:31:52 GMT", "version": "v2" } ]
2022-10-24
[ [ "Lu", "Keming", "" ], [ "Hsu", "I-Hung", "" ], [ "Zhou", "Wenxuan", "" ], [ "Ma", "Mingyu Derek", "" ], [ "Chen", "Muhao", "" ] ]
Relation extraction (RE) models have been challenged by their reliance on training data with expensive annotations. Considering that summarization tasks aim at acquiring concise expressions of synoptical information from the longer context, these tasks naturally align with the objective of RE, i.e., extracting a kind of synoptical information that describes the relation of entity mentions. We present SuRE, which converts RE into a summarization formulation. SuRE leads to more precise and resource-efficient RE based on indirect supervision from summarization tasks. To achieve this goal, we develop sentence and relation conversion techniques that essentially bridge the formulation of summarization and RE tasks. We also incorporate constraint decoding techniques with Trie scoring to further enhance summarization-based RE with robust inference. Experiments on three RE datasets demonstrate the effectiveness of SuRE in both full-dataset and low-resource settings, showing that summarization is a promising source of indirect supervision to improve RE models.
1201.1650
Matthew Patitz
Sarah Cannon, Erik D. Demaine, Martin L. Demaine, Sarah Eisenstat, Matthew J. Patitz, Robert Schweller, Scott M. Summers, Andrew Winslow
Two Hands Are Better Than One (up to constant factors)
null
null
null
null
cs.CC cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the difference between the standard seeded model of tile self-assembly, and the "seedless" two-handed model of tile self-assembly. Most of our results suggest that the two-handed model is more powerful. In particular, we show how to simulate any seeded system with a two-handed system that is essentially just a constant factor larger. We exhibit finite shapes with a busy-beaver separation in the number of distinct tiles required by seeded versus two-handed, and exhibit an infinite shape that can be constructed two-handed but not seeded. Finally, we show that verifying whether a given system uniquely assembles a desired supertile is co-NP-complete in the two-handed model, while it was known to be polynomially solvable in the seeded model.
[ { "created": "Sun, 8 Jan 2012 18:51:32 GMT", "version": "v1" } ]
2015-03-20
[ [ "Cannon", "Sarah", "" ], [ "Demaine", "Erik D.", "" ], [ "Demaine", "Martin L.", "" ], [ "Eisenstat", "Sarah", "" ], [ "Patitz", "Matthew J.", "" ], [ "Schweller", "Robert", "" ], [ "Summers", "Scott M.", "" ], [ "Winslow", "Andrew", "" ] ]
We study the difference between the standard seeded model of tile self-assembly, and the "seedless" two-handed model of tile self-assembly. Most of our results suggest that the two-handed model is more powerful. In particular, we show how to simulate any seeded system with a two-handed system that is essentially just a constant factor larger. We exhibit finite shapes with a busy-beaver separation in the number of distinct tiles required by seeded versus two-handed, and exhibit an infinite shape that can be constructed two-handed but not seeded. Finally, we show that verifying whether a given system uniquely assembles a desired supertile is co-NP-complete in the two-handed model, while it was known to be polynomially solvable in the seeded model.
2011.10776
Suping Wu
Lei Li, Suping Wu
DmifNet:3D Shape Reconstruction Based on Dynamic Multi-Branch Information Fusion
ICPR 2020 (Oral)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
3D object reconstruction from a single-view image is a long-standing challenging problem. Previous works have had difficulty accurately reconstructing 3D shapes with a complex topology that has rich details at the edges and corners. Moreover, previous works used synthetic data to train their networks, but domain adaptation problems occurred when tested on real data. In this paper, we propose a Dynamic Multi-branch Information Fusion Network (DmifNet) which can recover a high-fidelity 3D shape of arbitrary topology from a 2D image. Specifically, we design several side branches from the intermediate layers to make the network produce more diverse representations to improve the generalization ability of the network. In addition, we utilize DoG (Difference of Gaussians) to extract edge geometry and corners information from input images. Then, we use a separate side branch network to process the extracted data to better capture edge geometry and corners feature information. Finally, we dynamically fuse the information of all branches to gain the final predicted probability. Extensive qualitative and quantitative experiments on a large-scale publicly available dataset demonstrate the validity and efficiency of our method. Code and models are publicly available at https://github.com/leilimaster/DmifNet.
[ { "created": "Sat, 21 Nov 2020 11:31:27 GMT", "version": "v1" } ]
2020-11-24
[ [ "Li", "Lei", "" ], [ "Wu", "Suping", "" ] ]
3D object reconstruction from a single-view image is a long-standing challenging problem. Previous works have had difficulty accurately reconstructing 3D shapes with a complex topology that has rich details at the edges and corners. Moreover, previous works used synthetic data to train their networks, but domain adaptation problems occurred when tested on real data. In this paper, we propose a Dynamic Multi-branch Information Fusion Network (DmifNet) which can recover a high-fidelity 3D shape of arbitrary topology from a 2D image. Specifically, we design several side branches from the intermediate layers to make the network produce more diverse representations to improve the generalization ability of the network. In addition, we utilize DoG (Difference of Gaussians) to extract edge geometry and corners information from input images. Then, we use a separate side branch network to process the extracted data to better capture edge geometry and corners feature information. Finally, we dynamically fuse the information of all branches to gain the final predicted probability. Extensive qualitative and quantitative experiments on a large-scale publicly available dataset demonstrate the validity and efficiency of our method. Code and models are publicly available at https://github.com/leilimaster/DmifNet.
2210.16160
Melissa Antonelli
Melissa Antonelli
Some Remarks on Counting Propositional Logic
joint work with Ugo Dal Lago and Paolo Pistone
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Counting propositional logic was recently introduced in relation to randomized computation and shown able to logically characterize the full counting hierarchy. In this paper we aim to clarify the intuitive meaning and expressive power of its univariate fragment. On the one hand, we provide an effective procedure to measure the probability of counting formulas. On the other, we make the connection between this logic and stochastic experiments explicit, proving that the counting language can simulate any (and only) event associated with dyadic distributions.
[ { "created": "Fri, 28 Oct 2022 14:34:22 GMT", "version": "v1" }, { "created": "Wed, 16 Nov 2022 17:14:33 GMT", "version": "v2" } ]
2022-11-17
[ [ "Antonelli", "Melissa", "" ] ]
Counting propositional logic was recently introduced in relation to randomized computation and shown able to logically characterize the full counting hierarchy. In this paper we aim to clarify the intuitive meaning and expressive power of its univariate fragment. On the one hand, we provide an effective procedure to measure the probability of counting formulas. On the other, we make the connection between this logic and stochastic experiments explicit, proving that the counting language can simulate any (and only) event associated with dyadic distributions.
1808.02941
Xiangyi Chen
Xiangyi Chen, Sijia Liu, Ruoyu Sun, Mingyi Hong
On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization
null
null
null
null
cs.LG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the "Adam-type", includes the popular algorithms such as the Adam, AMSGrad and AdaGrad. Despite their popularity in training deep neural networks, the convergence of these algorithms for solving nonconvex problems remains an open question. This paper provides a set of mild sufficient conditions that guarantee the convergence for the Adam-type methods. We prove that under our derived conditions, these methods can achieve the convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization. We show the conditions are essential in the sense that violating them may make the algorithm diverge. Moreover, we propose and analyze a class of (deterministic) incremental adaptive gradient algorithms, which has the same $O(\log{T}/\sqrt{T})$ convergence rate. Our study could also be extended to a broader class of adaptive gradient methods in machine learning and optimization.
[ { "created": "Wed, 8 Aug 2018 21:14:07 GMT", "version": "v1" }, { "created": "Sun, 10 Mar 2019 00:48:35 GMT", "version": "v2" } ]
2019-03-12
[ [ "Chen", "Xiangyi", "" ], [ "Liu", "Sijia", "" ], [ "Sun", "Ruoyu", "" ], [ "Hong", "Mingyi", "" ] ]
This paper studies a class of adaptive gradient based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the "Adam-type", includes the popular algorithms such as the Adam, AMSGrad and AdaGrad. Despite their popularity in training deep neural networks, the convergence of these algorithms for solving nonconvex problems remains an open question. This paper provides a set of mild sufficient conditions that guarantee the convergence for the Adam-type methods. We prove that under our derived conditions, these methods can achieve the convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization. We show the conditions are essential in the sense that violating them may make the algorithm diverge. Moreover, we propose and analyze a class of (deterministic) incremental adaptive gradient algorithms, which has the same $O(\log{T}/\sqrt{T})$ convergence rate. Our study could also be extended to a broader class of adaptive gradient methods in machine learning and optimization.
2405.15556
Chong Xiang
Chong Xiang, Tong Wu, Zexuan Zhong, David Wagner, Danqi Chen, Prateek Mittal
Certifiably Robust RAG against Retrieval Corruption
null
null
null
null
cs.LG cs.CL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieval-augmented generation (RAG) has been shown vulnerable to retrieval corruption attacks: an attacker can inject malicious passages into retrieval results to induce inaccurate responses. In this paper, we propose RobustRAG as the first defense framework against retrieval corruption attacks. The key insight of RobustRAG is an isolate-then-aggregate strategy: we get LLM responses from each passage in isolation and then securely aggregate these isolated responses. To instantiate RobustRAG, we design keyword-based and decoding-based algorithms for securely aggregating unstructured text responses. Notably, RobustRAG can achieve certifiable robustness: we can formally prove and certify that, for certain queries, RobustRAG can always return accurate responses, even when the attacker has full knowledge of our defense and can arbitrarily inject a small number of malicious passages. We evaluate RobustRAG on open-domain QA and long-form text generation datasets and demonstrate its effectiveness and generalizability across various tasks and datasets.
[ { "created": "Fri, 24 May 2024 13:44:25 GMT", "version": "v1" } ]
2024-05-27
[ [ "Xiang", "Chong", "" ], [ "Wu", "Tong", "" ], [ "Zhong", "Zexuan", "" ], [ "Wagner", "David", "" ], [ "Chen", "Danqi", "" ], [ "Mittal", "Prateek", "" ] ]
Retrieval-augmented generation (RAG) has been shown vulnerable to retrieval corruption attacks: an attacker can inject malicious passages into retrieval results to induce inaccurate responses. In this paper, we propose RobustRAG as the first defense framework against retrieval corruption attacks. The key insight of RobustRAG is an isolate-then-aggregate strategy: we get LLM responses from each passage in isolation and then securely aggregate these isolated responses. To instantiate RobustRAG, we design keyword-based and decoding-based algorithms for securely aggregating unstructured text responses. Notably, RobustRAG can achieve certifiable robustness: we can formally prove and certify that, for certain queries, RobustRAG can always return accurate responses, even when the attacker has full knowledge of our defense and can arbitrarily inject a small number of malicious passages. We evaluate RobustRAG on open-domain QA and long-form text generation datasets and demonstrate its effectiveness and generalizability across various tasks and datasets.
2308.03299
Aimen Gaba
Aimen Gaba, Zhanna Kaufman, Jason Chueng, Marie Shvakel, Kyle Wm. Hall, Yuriy Brun, and Cindy Xiong Bearfield
My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
11 pages, 6 figures, to appear in IEEE Transactions on Visualization and Computer Graphics (Also in proceedings of IEEE VIS 2023)
IEEE TVCG 30(1):327-337
10.1109/TVCG.2023.3327192
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer "Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?" Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.
[ { "created": "Mon, 7 Aug 2023 05:01:39 GMT", "version": "v1" } ]
2024-01-12
[ [ "Gaba", "Aimen", "" ], [ "Kaufman", "Zhanna", "" ], [ "Chueng", "Jason", "" ], [ "Shvakel", "Marie", "" ], [ "Hall", "Kyle Wm.", "" ], [ "Brun", "Yuriy", "" ], [ "Bearfield", "Cindy Xiong", "" ] ]
Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer "Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?" Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.
1002.4057
Julien Langou
Emmanuel Agullo, Henricus Bouwmeester, Jack Dongarra, Jakub Kurzak, Julien Langou, and Lee Rosenberg
Towards an Efficient Tile Matrix Inversion of Symmetric Positive Definite Matrices on Multicore Architectures
8 pages, extended abstract submitted to VecPar10 on 12/11/09, notification of acceptance received on 02/05/10. See: http://vecpar.fe.up.pt/2010/
null
null
null
cs.MS cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The algorithms in the current sequential numerical linear algebra libraries (e.g., LAPACK) do not parallelize well on multicore architectures. A new family of algorithms, the tile algorithms, has recently been introduced. Previous research has shown that it is possible to write efficient and scalable tile algorithms for performing a Cholesky factorization, a (pseudo) LU factorization, and a QR factorization. In this extended abstract, we attack the problem of the computation of the inverse of a symmetric positive definite matrix. We observe that, using a dynamic task scheduler, it is relatively painless to translate existing LAPACK code to obtain a ready-to-be-executed tile algorithm. However, we demonstrate that non-trivial compiler techniques (array renaming, loop reversal and pipelining) then need to be applied to further increase the parallelism of our application. We present preliminary experimental results.
[ { "created": "Mon, 22 Feb 2010 06:11:41 GMT", "version": "v1" } ]
2010-02-23
[ [ "Agullo", "Emmanuel", "" ], [ "Bouwmeester", "Henricus", "" ], [ "Dongarra", "Jack", "" ], [ "Kurzak", "Jakub", "" ], [ "Langou", "Julien", "" ], [ "Rosenberg", "Lee", "" ] ]
The algorithms in the current sequential numerical linear algebra libraries (e.g., LAPACK) do not parallelize well on multicore architectures. A new family of algorithms, the tile algorithms, has recently been introduced. Previous research has shown that it is possible to write efficient and scalable tile algorithms for performing a Cholesky factorization, a (pseudo) LU factorization, and a QR factorization. In this extended abstract, we attack the problem of the computation of the inverse of a symmetric positive definite matrix. We observe that, using a dynamic task scheduler, it is relatively painless to translate existing LAPACK code to obtain a ready-to-be-executed tile algorithm. However, we demonstrate that non-trivial compiler techniques (array renaming, loop reversal and pipelining) then need to be applied to further increase the parallelism of our application. We present preliminary experimental results.
2206.01441
Paula Delgado-Santos
Paula Delgado-Santos, Ruben Tolosana, Richard Guest, Farzin Deravi, Ruben Vera-Rodriguez
Exploring Transformers for Behavioural Biometrics: A Case Study in Gait Recognition
null
null
null
null
cs.CV cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Biometrics on mobile devices has attracted a lot of attention in recent years as it is considered a user-friendly authentication method. This interest has also been motivated by the success of Deep Learning (DL). Architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been established to be convenient for the task, improving the performance and robustness in comparison to traditional machine learning techniques. However, some aspects must still be revisited and improved. To the best of our knowledge, this is the first article that intends to explore and propose novel gait biometric recognition systems based on Transformers, which currently obtain state-of-the-art performance in many applications. Several state-of-the-art architectures (Vanilla, Informer, Autoformer, Block-Recurrent Transformer, and THAT) are considered in the experimental framework. In addition, new configurations of the Transformers are proposed to further increase the performance. Experiments are carried out using two popular public databases, whuGAIT and OU-ISIR. The results achieved prove the high ability of the proposed Transformers, outperforming state-of-the-art CNN and RNN architectures.
[ { "created": "Fri, 3 Jun 2022 08:08:40 GMT", "version": "v1" } ]
2022-06-06
[ [ "Delgado-Santos", "Paula", "" ], [ "Tolosana", "Ruben", "" ], [ "Guest", "Richard", "" ], [ "Deravi", "Farzin", "" ], [ "Vera-Rodriguez", "Ruben", "" ] ]
Biometrics on mobile devices has attracted a lot of attention in recent years as it is considered a user-friendly authentication method. This interest has also been motivated by the success of Deep Learning (DL). Architectures based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been established to be convenient for the task, improving the performance and robustness in comparison to traditional machine learning techniques. However, some aspects must still be revisited and improved. To the best of our knowledge, this is the first article that intends to explore and propose novel gait biometric recognition systems based on Transformers, which currently obtain state-of-the-art performance in many applications. Several state-of-the-art architectures (Vanilla, Informer, Autoformer, Block-Recurrent Transformer, and THAT) are considered in the experimental framework. In addition, new configurations of the Transformers are proposed to further increase the performance. Experiments are carried out using two popular public databases, whuGAIT and OU-ISIR. The results achieved prove the high ability of the proposed Transformers, outperforming state-of-the-art CNN and RNN architectures.
2403.15472
Boxuan Ma Dr.
Boxuan Ma, Li Chen and Shin'ichi Konomi
Enhancing Programming Education with ChatGPT: A Case Study on Student Perceptions and Interactions in a Python Course
null
null
null
null
cs.CY cs.AI cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The integration of ChatGPT as a supportive tool in education, notably in programming courses, addresses the unique challenges of programming education by providing assistance with debugging, code generation, and explanations. Despite existing research validating ChatGPT's effectiveness, its application in university-level programming education and a detailed understanding of student interactions and perspectives remain limited. This paper explores ChatGPT's impact on learning in a Python programming course tailored for first-year students over eight weeks. By analyzing responses from surveys, open-ended questions, and student-ChatGPT dialog data, we aim to provide a comprehensive view of ChatGPT's utility and identify both its advantages and limitations as perceived by students. Our study uncovers a generally positive reception toward ChatGPT and offers insights into its role in enhancing the programming education experience. These findings contribute to the broader discourse on AI's potential in education, suggesting paths for future research and application.
[ { "created": "Wed, 20 Mar 2024 15:47:28 GMT", "version": "v1" }, { "created": "Wed, 27 Mar 2024 06:22:41 GMT", "version": "v2" }, { "created": "Fri, 5 Apr 2024 11:32:24 GMT", "version": "v3" } ]
2024-04-08
[ [ "Ma", "Boxuan", "" ], [ "Chen", "Li", "" ], [ "Konomi", "Shin'ichi", "" ] ]
The integration of ChatGPT as a supportive tool in education, notably in programming courses, addresses the unique challenges of programming education by providing assistance with debugging, code generation, and explanations. Despite existing research validating ChatGPT's effectiveness, its application in university-level programming education and a detailed understanding of student interactions and perspectives remain limited. This paper explores ChatGPT's impact on learning in a Python programming course tailored for first-year students over eight weeks. By analyzing responses from surveys, open-ended questions, and student-ChatGPT dialog data, we aim to provide a comprehensive view of ChatGPT's utility and identify both its advantages and limitations as perceived by students. Our study uncovers a generally positive reception toward ChatGPT and offers insights into its role in enhancing the programming education experience. These findings contribute to the broader discourse on AI's potential in education, suggesting paths for future research and application.
1107.2781
Rami C.
Rami Cohen
Face Recognition using Curvelet Transform
24 pages
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/3.0/
Face recognition has been studied extensively for more than 20 years now. Since the beginning of the 90s, the subject has become a major issue. This technology is used in many important real-world applications, such as video surveillance, smart cards, database security, and internet and intranet access. This report reviews two recent algorithms for face recognition which take advantage of a relatively new multiscale geometric analysis tool - the Curvelet transform - for facial processing and feature extraction. This transform proves to be efficient, especially due to its good ability to detect the curves and lines that characterize the human face. An algorithm based on the two algorithms mentioned above is proposed, and its performance is evaluated on three databases of faces: AT&T (ORL), Essex Grimace and Georgia-Tech. k-nearest neighbour (k-NN) and Support Vector Machine (SVM) classifiers are used, along with Principal Component Analysis (PCA) for dimensionality reduction. This algorithm shows good results, and it even outperforms other algorithms in some cases.
[ { "created": "Thu, 14 Jul 2011 10:44:01 GMT", "version": "v1" } ]
2011-07-15
[ [ "Cohen", "Rami", "" ] ]
Face recognition has been studied extensively for more than 20 years now. Since the beginning of the 90s, the subject has become a major issue. This technology is used in many important real-world applications, such as video surveillance, smart cards, database security, and internet and intranet access. This report reviews two recent algorithms for face recognition which take advantage of a relatively new multiscale geometric analysis tool - the Curvelet transform - for facial processing and feature extraction. This transform proves to be efficient, especially due to its good ability to detect the curves and lines that characterize the human face. An algorithm based on the two algorithms mentioned above is proposed, and its performance is evaluated on three databases of faces: AT&T (ORL), Essex Grimace and Georgia-Tech. k-nearest neighbour (k-NN) and Support Vector Machine (SVM) classifiers are used, along with Principal Component Analysis (PCA) for dimensionality reduction. This algorithm shows good results, and it even outperforms other algorithms in some cases.
2109.12052
Ajith Anil Meera
Fred Bos, Ajith Anil Meera, Dennis Benders and Martijn Wisse
Free Energy Principle for State and Input Estimation of a Quadcopter Flying in Wind
Submitted manuscript under review
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The free energy principle from neuroscience provides a brain-inspired perception scheme through a data-driven model learning algorithm called Dynamic Expectation Maximization (DEM). This paper aims at introducing an experimental design to provide the first experimental confirmation of the usefulness of DEM as a state and input estimator for real robots. Through a series of quadcopter flight experiments under unmodelled wind dynamics, we prove that DEM can leverage the information from colored noise for accurate state and input estimation through the use of generalized coordinates. We demonstrate the superior performance of DEM for state estimation under colored noise with respect to other benchmarks like State Augmentation, SMIKF and Kalman Filtering through its minimal estimation error. We demonstrate the similarities in the performance of DEM and Unknown Input Observer (UIO) for input estimation. The paper concludes by showing the influence of prior beliefs in shaping the accuracy-complexity trade-off during DEM's estimation.
[ { "created": "Fri, 24 Sep 2021 16:18:04 GMT", "version": "v1" } ]
2021-09-27
[ [ "Bos", "Fred", "" ], [ "Meera", "Ajith Anil", "" ], [ "Benders", "Dennis", "" ], [ "Wisse", "Martijn", "" ] ]
The free energy principle from neuroscience provides a brain-inspired perception scheme through a data-driven model learning algorithm called Dynamic Expectation Maximization (DEM). This paper aims at introducing an experimental design to provide the first experimental confirmation of the usefulness of DEM as a state and input estimator for real robots. Through a series of quadcopter flight experiments under unmodelled wind dynamics, we prove that DEM can leverage the information from colored noise for accurate state and input estimation through the use of generalized coordinates. We demonstrate the superior performance of DEM for state estimation under colored noise with respect to other benchmarks like State Augmentation, SMIKF and Kalman Filtering through its minimal estimation error. We demonstrate the similarities in the performance of DEM and Unknown Input Observer (UIO) for input estimation. The paper concludes by showing the influence of prior beliefs in shaping the accuracy-complexity trade-off during DEM's estimation.
1511.03383
Weimin Wang
Xue Dong, Kun Wang, Chong Xu, Weimin Wang
Information Rate Decomposition for Feedback Systems with Output Disturbance
5 pages, technical note
null
null
null
cs.IT cs.SY math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This technical note considers the problem of resource allocation in linear feedback control systems with output disturbance. By decomposing the information rate in the feedback communication channel, the channel resource allocation is thoroughly analyzed. The results show that a certain amount of resource is used to transmit the output disturbance and that this resource allocation is independent of the feedback controller design.
[ { "created": "Wed, 11 Nov 2015 04:22:21 GMT", "version": "v1" } ]
2015-11-12
[ [ "Dong", "Xue", "" ], [ "Wang", "Kun", "" ], [ "Xu", "Chong", "" ], [ "Wang", "Weimin", "" ] ]
This technical note considers the problem of resource allocation in linear feedback control systems with output disturbance. By decomposing the information rate in the feedback communication channel, the channel resource allocation is thoroughly analyzed. The results show that a certain amount of resource is used to transmit the output disturbance and that this resource allocation is independent of the feedback controller design.
2304.06287
Chen Yang
Chen Yang, Peihao Li, Zanwei Zhou, Shanxin Yuan, Bingbing Liu, Xiaokang Yang, Weichao Qiu, Wei Shen
NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds
10 pages, 7 figures
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room. NeRF achieves impressive performance in rendering images for novel views similar to the input views while suffering for novel views that are significantly different from the training views. To address this issue, we utilize the holistic priors, including pseudo depth maps and view coverage information, from neural reconstruction to guide the learning of implicit neural representations of 3D indoor scenes. Concretely, an off-the-shelf neural reconstruction method is leveraged to generate a geometry scaffold. Then, two loss functions based on the holistic priors are proposed to improve the learning of NeRF: 1) A robust depth loss that can tolerate the error of the pseudo depth map to guide the geometry learning of NeRF; 2) A variance loss to regularize the variance of implicit neural representations to reduce the geometry and color ambiguity in the learning procedure. These two loss functions are modulated during NeRF optimization according to the view coverage information to reduce the negative influence brought by the view coverage imbalance. Extensive results demonstrate that our NeRFVS outperforms state-of-the-art view synthesis methods quantitatively and qualitatively on indoor scenes, achieving high-fidelity free navigation results.
[ { "created": "Thu, 13 Apr 2023 06:40:08 GMT", "version": "v1" }, { "created": "Tue, 23 May 2023 12:49:17 GMT", "version": "v2" } ]
2023-05-24
[ [ "Yang", "Chen", "" ], [ "Li", "Peihao", "" ], [ "Zhou", "Zanwei", "" ], [ "Yuan", "Shanxin", "" ], [ "Liu", "Bingbing", "" ], [ "Yang", "Xiaokang", "" ], [ "Qiu", "Weichao", "" ], [ "Shen", "Wei", "" ] ]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room. NeRF achieves impressive performance in rendering images for novel views similar to the input views while suffering for novel views that are significantly different from the training views. To address this issue, we utilize the holistic priors, including pseudo depth maps and view coverage information, from neural reconstruction to guide the learning of implicit neural representations of 3D indoor scenes. Concretely, an off-the-shelf neural reconstruction method is leveraged to generate a geometry scaffold. Then, two loss functions based on the holistic priors are proposed to improve the learning of NeRF: 1) A robust depth loss that can tolerate the error of the pseudo depth map to guide the geometry learning of NeRF; 2) A variance loss to regularize the variance of implicit neural representations to reduce the geometry and color ambiguity in the learning procedure. These two loss functions are modulated during NeRF optimization according to the view coverage information to reduce the negative influence brought by the view coverage imbalance. Extensive results demonstrate that our NeRFVS outperforms state-of-the-art view synthesis methods quantitatively and qualitatively on indoor scenes, achieving high-fidelity free navigation results.
2105.12374
Khadija Shaheen
Khadija Shaheen, Muhammad Abdullah Hanif, Osman Hasan, Muhammad Shafique
Continual Learning for Real-World Autonomous Systems: Algorithms, Challenges and Frameworks
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Continual learning is essential for all real-world applications, as frozen pre-trained models cannot effectively deal with non-stationary data distributions. The purpose of this study is to review the state-of-the-art methods that allow continuous learning of computational models over time. We primarily focus on the learning algorithms that perform continuous learning in an online fashion from considerably large (or infinite) sequential data and require substantially low computational and memory resources. We critically analyze the key challenges associated with continual learning for autonomous real-world systems and compare current methods in terms of computations, memory, and network/model complexity. We also briefly describe the implementations of continuous learning algorithms under three main autonomous systems, i.e., self-driving vehicles, unmanned aerial vehicles, and urban robots. The learning methods of these autonomous systems and their strengths and limitations are extensively explored in this article.
[ { "created": "Wed, 26 May 2021 07:38:20 GMT", "version": "v1" }, { "created": "Fri, 25 Feb 2022 00:08:20 GMT", "version": "v2" } ]
2022-02-28
[ [ "Shaheen", "Khadija", "" ], [ "Hanif", "Muhammad Abdullah", "" ], [ "Hasan", "Osman", "" ], [ "Shafique", "Muhammad", "" ] ]
Continual learning is essential for all real-world applications, as frozen pre-trained models cannot effectively deal with non-stationary data distributions. The purpose of this study is to review the state-of-the-art methods that allow continuous learning of computational models over time. We primarily focus on the learning algorithms that perform continuous learning in an online fashion from considerably large (or infinite) sequential data and require substantially low computational and memory resources. We critically analyze the key challenges associated with continual learning for autonomous real-world systems and compare current methods in terms of computations, memory, and network/model complexity. We also briefly describe the implementations of continuous learning algorithms under three main autonomous systems, i.e., self-driving vehicles, unmanned aerial vehicles, and urban robots. The learning methods of these autonomous systems and their strengths and limitations are extensively explored in this article.
1806.07586
Andreas Bjorklund
Andreas Bj\"orklund and Thore Husfeldt
Counting Shortest Two Disjoint Paths in Cubic Planar Graphs with an NC Algorithm
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an undirected graph and two disjoint vertex pairs $s_1,t_1$ and $s_2,t_2$, the Shortest two disjoint paths problem (S2DP) asks for the minimum total length of two vertex disjoint paths connecting $s_1$ with $t_1$, and $s_2$ with $t_2$, respectively. We show that for cubic planar graphs there are NC algorithms, uniform circuits of polynomial size and polylogarithmic depth, that compute the S2DP and moreover also output the number of such minimum length path pairs. Previously, to the best of our knowledge, no deterministic polynomial time algorithm was known for S2DP in cubic planar graphs with arbitrary placement of the terminals. In contrast, the randomized polynomial time algorithm by Bj\"orklund and Husfeldt, ICALP 2014, for general graphs is much slower, is serial in nature, and cannot count the solutions. Our results are built on an approach by Hirai and Namba, Algorithmica 2017, for a generalisation of S2DP, and fast algorithms for counting perfect matchings in planar graphs.
[ { "created": "Wed, 20 Jun 2018 07:26:37 GMT", "version": "v1" } ]
2018-06-21
[ [ "Björklund", "Andreas", "" ], [ "Husfeldt", "Thore", "" ] ]
Given an undirected graph and two disjoint vertex pairs $s_1,t_1$ and $s_2,t_2$, the Shortest two disjoint paths problem (S2DP) asks for the minimum total length of two vertex disjoint paths connecting $s_1$ with $t_1$, and $s_2$ with $t_2$, respectively. We show that for cubic planar graphs there are NC algorithms, uniform circuits of polynomial size and polylogarithmic depth, that compute the S2DP and moreover also output the number of such minimum length path pairs. Previously, to the best of our knowledge, no deterministic polynomial time algorithm was known for S2DP in cubic planar graphs with arbitrary placement of the terminals. In contrast, the randomized polynomial time algorithm by Bj\"orklund and Husfeldt, ICALP 2014, for general graphs is much slower, is serial in nature, and cannot count the solutions. Our results are built on an approach by Hirai and Namba, Algorithmica 2017, for a generalisation of S2DP, and fast algorithms for counting perfect matchings in planar graphs.
2204.01731
Shuang Liang
Shuang Liang, Yinan Zou, and Yong Zhou
GAN-Based Joint Activity Detection and Channel Estimation for Grant-Free Random Access
5 pages, 5 figures, IEEE ICASSP 2022
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Joint activity detection and channel estimation (JADCE) for grant-free random access is a critical issue that needs to be addressed to support massive connectivity in IoT networks. However, the existing model-free learning method can only achieve either activity detection or channel estimation, but not both. In this paper, we propose a novel model-free learning method based on generative adversarial network (GAN) to tackle the JADCE problem. We adopt the U-net architecture to build the generator rather than the standard GAN architecture, where a pre-estimated value that contains the activity information is adopted as input to the generator. By leveraging the properties of the pseudoinverse, the generator is refined by using an affine projection and a skip connection to ensure the output of the generator is consistent with the measurement. Moreover, we build a two-layer fully-connected neural network to design pilot matrix for reducing the impact of receiver noise. Simulation results show that the proposed method outperforms the existing methods in high SNR regimes, as both data consistency projection and pilot matrix optimization improve the learning ability.
[ { "created": "Mon, 4 Apr 2022 12:35:37 GMT", "version": "v1" } ]
2022-04-06
[ [ "Liang", "Shuang", "" ], [ "Zou", "Yinan", "" ], [ "Zhou", "Yong", "" ] ]
Joint activity detection and channel estimation (JADCE) for grant-free random access is a critical issue that needs to be addressed to support massive connectivity in IoT networks. However, the existing model-free learning method can only achieve either activity detection or channel estimation, but not both. In this paper, we propose a novel model-free learning method based on generative adversarial network (GAN) to tackle the JADCE problem. We adopt the U-net architecture to build the generator rather than the standard GAN architecture, where a pre-estimated value that contains the activity information is adopted as input to the generator. By leveraging the properties of the pseudoinverse, the generator is refined by using an affine projection and a skip connection to ensure the output of the generator is consistent with the measurement. Moreover, we build a two-layer fully-connected neural network to design pilot matrix for reducing the impact of receiver noise. Simulation results show that the proposed method outperforms the existing methods in high SNR regimes, as both data consistency projection and pilot matrix optimization improve the learning ability.
2106.14139
Zhongyun Hua
Zhongyun Hua and Yanxiang Wang and Shuang Yi and Yicong Zhou and Xiaohua Jia
Secure Reversible Data Hiding in Encrypted Images Using Cipher-Feedback Secret Sharing
14 pages
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reversible data hiding in encrypted images (RDH-EI) has attracted increasing attention, since it can protect the privacy of original images while the embedded data can be exactly extracted. Recently, some RDH-EI schemes with multiple data hiders have been proposed using the secret sharing technique. However, these schemes protect the contents of the original images with only a lightweight security level. In this paper, we propose a high-security RDH-EI scheme with multiple data hiders. First, we introduce a cipher-feedback secret sharing (CFSS) technique. It follows the cryptography standards by introducing the cipher-feedback strategy of AES. Then, using the CFSS technique, we devise a new (r,n)-threshold (r<=n) RDH-EI scheme with multiple data hiders called CFSS-RDHEI. It can encrypt an original image into n encrypted images with reduced size using an encryption key and sends each encrypted image to one data hider. Each data hider can independently embed secret data into the encrypted image to obtain the corresponding marked encrypted image. The original image can be completely recovered from r marked encrypted images and the encryption key. Performance evaluations show that our CFSS-RDHEI scheme has a high embedding rate and that its generated encrypted images are much smaller, compared to existing secret sharing-based RDH-EI schemes. Security analysis demonstrates that it can achieve high security to defend against some commonly used security attacks.
[ { "created": "Sun, 27 Jun 2021 04:03:56 GMT", "version": "v1" } ]
2021-06-29
[ [ "Hua", "Zhongyun", "" ], [ "Wang", "Yanxiang", "" ], [ "Yi", "Shuang", "" ], [ "Zhou", "Yicong", "" ], [ "Jia", "Xiaohua", "" ] ]
Reversible data hiding in encrypted images (RDH-EI) has attracted increasing attention, since it can protect the privacy of original images while the embedded data can be exactly extracted. Recently, some RDH-EI schemes with multiple data hiders have been proposed using the secret sharing technique. However, these schemes protect the contents of the original images at only a lightweight security level. In this paper, we propose a high-security RDH-EI scheme with multiple data hiders. First, we introduce a cipher-feedback secret sharing (CFSS) technique. It follows cryptography standards by adopting the cipher-feedback strategy of AES. Then, using the CFSS technique, we devise a new (r,n)-threshold (r<=n) RDH-EI scheme with multiple data hiders, called CFSS-RDHEI. It can encrypt an original image into n encrypted images of reduced size using an encryption key and sends each encrypted image to one data hider. Each data hider can independently embed secret data into the encrypted image to obtain the corresponding marked encrypted image. The original image can be completely recovered from r marked encrypted images and the encryption key. Performance evaluations show that our CFSS-RDHEI scheme has a high embedding rate and that its generated encrypted images are much smaller than those of existing secret sharing-based RDH-EI schemes. Security analysis demonstrates that it achieves high security and can defend against commonly used attacks.
2403.07118
Ameeta Agrawal
Atharva Phatak, Vijay K. Mago, Ameeta Agrawal, Aravind Inbasekaran, Philippe J. Giabbanelli
Narrating Causal Graphs with Large Language Models
HICSS '24
Proceedings of the 57th Hawaii International Conference on System Sciences 2024
null
https://hdl.handle.net/10125/107290
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
The use of generative AI to create text descriptions from graphs has mostly focused on knowledge graphs, which connect concepts using facts. In this work we explore the capability of large pretrained language models to generate text from causal graphs, where salient concepts are represented as nodes and causality is represented via directed, typed edges. The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing. Using two publicly available causal graph datasets, we empirically investigate the performance of four GPT-3 models under various settings. Our results indicate that while causal text descriptions improve with training data, compared to fact-based graphs, they are harder to generate under zero-shot settings. Results further suggest that users of generative AI can deploy future applications faster since similar performances are obtained when training a model with only a few examples as compared to fine-tuning via a large curated dataset.
[ { "created": "Mon, 11 Mar 2024 19:19:59 GMT", "version": "v1" } ]
2024-04-09
[ [ "Phatak", "Atharva", "" ], [ "Mago", "Vijay K.", "" ], [ "Agrawal", "Ameeta", "" ], [ "Inbasekaran", "Aravind", "" ], [ "Giabbanelli", "Philippe J.", "" ] ]
The use of generative AI to create text descriptions from graphs has mostly focused on knowledge graphs, which connect concepts using facts. In this work we explore the capability of large pretrained language models to generate text from causal graphs, where salient concepts are represented as nodes and causality is represented via directed, typed edges. The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing. Using two publicly available causal graph datasets, we empirically investigate the performance of four GPT-3 models under various settings. Our results indicate that while causal text descriptions improve with training data, compared to fact-based graphs, they are harder to generate under zero-shot settings. Results further suggest that users of generative AI can deploy future applications faster since similar performances are obtained when training a model with only a few examples as compared to fine-tuning via a large curated dataset.
2111.14018
Raja Karmakar
Priyanka Bothra, Raja Karmakar, Sanjukta Bhattacharya, Sayantani De
How Can Applications of Blockchain and Artificial Intelligence Improve Performance of Internet of Things? -- A Survey
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
In the era of the Internet of Things (IoT), massive numbers of computing devices around us operate and interact with each other to provide significant services in industry, medicine, and daily life activities at home, in the office, in education, and so on. The participating devices in an IoT network usually have resource constraints and are prone to various cyber attacks, leading to loopholes in security and authentication. As a revolutionary and innovative technology, blockchain, which is applied in cryptocurrency, market prediction, etc., uses a distributed ledger that records transactions securely and efficiently. To utilize the great potential of blockchain, both industry and academia have paid significant attention to integrating it with the IoT, as reported in the existing literature. On the other hand, Artificial Intelligence (AI) can embed intelligence in a system, and thus AI can be integrated with IoT devices to automatically cope with different environments according to demand. Furthermore, both blockchain and AI can be integrated with the IoT to design an automated, secure, and robust IoT model, as shown by numerous existing works. In this survey, we present a discussion of the IoT, blockchain, and AI, along with descriptions of several research works that apply blockchain and AI to the IoT. In this direction, we point out the strengths and limitations of the related existing research. We also discuss different open challenges in exploiting the full capacities of blockchain and AI in designing an IoT-based model. The highlighted challenges can thus open the door for the development of future IoT models that are intelligent and secure, based on the integration of blockchain and AI with the IoT.
[ { "created": "Sun, 28 Nov 2021 01:45:15 GMT", "version": "v1" } ]
2021-11-30
[ [ "Bothra", "Priyanka", "" ], [ "Karmakar", "Raja", "" ], [ "Bhattacharya", "Sanjukta", "" ], [ "De", "Sayantani", "" ] ]
In the era of the Internet of Things (IoT), massive numbers of computing devices around us operate and interact with each other to provide significant services in industry, medicine, and daily life activities at home, in the office, in education, and so on. The participating devices in an IoT network usually have resource constraints and are prone to various cyber attacks, leading to loopholes in security and authentication. As a revolutionary and innovative technology, blockchain, which is applied in cryptocurrency, market prediction, etc., uses a distributed ledger that records transactions securely and efficiently. To utilize the great potential of blockchain, both industry and academia have paid significant attention to integrating it with the IoT, as reported in the existing literature. On the other hand, Artificial Intelligence (AI) can embed intelligence in a system, and thus AI can be integrated with IoT devices to automatically cope with different environments according to demand. Furthermore, both blockchain and AI can be integrated with the IoT to design an automated, secure, and robust IoT model, as shown by numerous existing works. In this survey, we present a discussion of the IoT, blockchain, and AI, along with descriptions of several research works that apply blockchain and AI to the IoT. In this direction, we point out the strengths and limitations of the related existing research. We also discuss different open challenges in exploiting the full capacities of blockchain and AI in designing an IoT-based model. The highlighted challenges can thus open the door for the development of future IoT models that are intelligent and secure, based on the integration of blockchain and AI with the IoT.
2309.01949
Berthy Feng
Berthy T. Feng, Katherine L. Bouman
Efficient Bayesian Computational Imaging with a Surrogate Score-Based Prior
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a surrogate function for efficient use of score-based priors for Bayesian inverse imaging. Recent work turned score-based diffusion models into probabilistic priors for solving ill-posed imaging problems by appealing to an ODE-based log-probability function. However, evaluating this function is computationally inefficient and inhibits posterior estimation of high-dimensional images. Our proposed surrogate prior is based on the evidence lower-bound of a score-based diffusion model. We demonstrate the surrogate prior on variational inference for efficient approximate posterior sampling of large images. Compared to the exact prior in previous work, our surrogate prior accelerates optimization of the variational image distribution by at least two orders of magnitude. We also find that our principled approach achieves higher-fidelity images than non-Bayesian baselines that involve hyperparameter-tuning at inference. Our work establishes a practical path forward for using score-based diffusion models as general-purpose priors for imaging.
[ { "created": "Tue, 5 Sep 2023 04:55:10 GMT", "version": "v1" } ]
2023-09-06
[ [ "Feng", "Berthy T.", "" ], [ "Bouman", "Katherine L.", "" ] ]
We propose a surrogate function for efficient use of score-based priors for Bayesian inverse imaging. Recent work turned score-based diffusion models into probabilistic priors for solving ill-posed imaging problems by appealing to an ODE-based log-probability function. However, evaluating this function is computationally inefficient and inhibits posterior estimation of high-dimensional images. Our proposed surrogate prior is based on the evidence lower-bound of a score-based diffusion model. We demonstrate the surrogate prior on variational inference for efficient approximate posterior sampling of large images. Compared to the exact prior in previous work, our surrogate prior accelerates optimization of the variational image distribution by at least two orders of magnitude. We also find that our principled approach achieves higher-fidelity images than non-Bayesian baselines that involve hyperparameter-tuning at inference. Our work establishes a practical path forward for using score-based diffusion models as general-purpose priors for imaging.
2402.04168
Daniel Bogdoll
Daniel Bogdoll, Jing Qin, Moritz Nekolla, Ahmed Abouelazm, Tim Joseph, J. Marius Z\"ollner
Informed Reinforcement Learning for Situation-Aware Traffic Rule Exceptions
Daniel Bogdoll and Jing Qin contributed equally. Accepted for publication at ICRA 2024
null
null
null
cs.LG cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Reinforcement Learning is a highly active research field with promising advancements. In the field of autonomous driving, however, often only very simple scenarios are examined. Common approaches use non-interpretable control commands as the action space and reward designs that lack structure. In this work, we introduce Informed Reinforcement Learning, where a structured rulebook is integrated as a knowledge source. We learn trajectories and assess them with a situation-aware reward design, leading to a dynamic reward that allows the agent to learn situations that require controlled traffic rule exceptions. Our method is applicable to arbitrary RL models. We successfully demonstrate high completion rates of complex scenarios with recent model-based agents.
[ { "created": "Tue, 6 Feb 2024 17:24:06 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 11:34:30 GMT", "version": "v2" } ]
2024-06-13
[ [ "Bogdoll", "Daniel", "" ], [ "Qin", "Jing", "" ], [ "Nekolla", "Moritz", "" ], [ "Abouelazm", "Ahmed", "" ], [ "Joseph", "Tim", "" ], [ "Zöllner", "J. Marius", "" ] ]
Reinforcement Learning is a highly active research field with promising advancements. In the field of autonomous driving, however, often only very simple scenarios are examined. Common approaches use non-interpretable control commands as the action space and reward designs that lack structure. In this work, we introduce Informed Reinforcement Learning, where a structured rulebook is integrated as a knowledge source. We learn trajectories and assess them with a situation-aware reward design, leading to a dynamic reward that allows the agent to learn situations that require controlled traffic rule exceptions. Our method is applicable to arbitrary RL models. We successfully demonstrate high completion rates of complex scenarios with recent model-based agents.
2310.05471
Siddharth Gupta
Siddharth Gupta, Guy Sa'ar, Meirav Zehavi
Drawn Tree Decomposition: New Approach for Graph Drawing Problems
A preliminary version of this paper will appear in the Proceedings of IPEC 2023
null
null
null
cs.DS cs.CG
http://creativecommons.org/licenses/by/4.0/
Over the past decade, we have witnessed increasing interest in the design of exact exponential-time and parameterized algorithms for problems in Graph Drawing. Unfortunately, we still lack knowledge of general methods to develop such algorithms. An even more serious issue is that, here, "standard" parameters very often yield intractability. In particular, for the most common structural parameter, namely treewidth, we frequently observe NP-hardness even when the input graphs are restricted to have constant (often just $1$ or $2$) treewidth. Our work deals with both drawbacks simultaneously. We introduce a novel form of tree decomposition that, roughly speaking, decomposes not (only) a graph, but an entire drawing. As such, its bags and separators are of geometric (rather than only combinatorial) nature. While the corresponding parameter -- like treewidth -- can be arbitrarily smaller than the height (and width) of the drawing, we show that -- unlike treewidth -- it gives rise to efficient algorithms. Specifically, we obtain slice-wise polynomial (XP) time algorithms parameterized by our parameter. We present a general scheme for the design of such algorithms, and apply it to several central problems in Graph Drawing, including the recognition of grid graphs, minimization of crossings and bends, and compaction. Beyond the class of problems discussed in the paper, we believe that our decomposition and scheme are of independent interest and can be further extended or generalized to suit an even wider class of problems. Additionally, we discuss classes of drawings where our parameter is bounded by $O(\sqrt{n})$ (where $n$ is the number of vertices of the graph), yielding subexponential-time algorithms. Lastly, we prove the relations that hold between drawn treewidth and other width measures, including treewidth, pathwidth, (dual) carving-width and embedded-width.
[ { "created": "Mon, 9 Oct 2023 07:27:17 GMT", "version": "v1" } ]
2023-10-10
[ [ "Gupta", "Siddharth", "" ], [ "Sa'ar", "Guy", "" ], [ "Zehavi", "Meirav", "" ] ]
Over the past decade, we have witnessed increasing interest in the design of exact exponential-time and parameterized algorithms for problems in Graph Drawing. Unfortunately, we still lack knowledge of general methods to develop such algorithms. An even more serious issue is that, here, "standard" parameters very often yield intractability. In particular, for the most common structural parameter, namely treewidth, we frequently observe NP-hardness even when the input graphs are restricted to have constant (often just $1$ or $2$) treewidth. Our work deals with both drawbacks simultaneously. We introduce a novel form of tree decomposition that, roughly speaking, decomposes not (only) a graph, but an entire drawing. As such, its bags and separators are of geometric (rather than only combinatorial) nature. While the corresponding parameter -- like treewidth -- can be arbitrarily smaller than the height (and width) of the drawing, we show that -- unlike treewidth -- it gives rise to efficient algorithms. Specifically, we obtain slice-wise polynomial (XP) time algorithms parameterized by our parameter. We present a general scheme for the design of such algorithms, and apply it to several central problems in Graph Drawing, including the recognition of grid graphs, minimization of crossings and bends, and compaction. Beyond the class of problems discussed in the paper, we believe that our decomposition and scheme are of independent interest and can be further extended or generalized to suit an even wider class of problems. Additionally, we discuss classes of drawings where our parameter is bounded by $O(\sqrt{n})$ (where $n$ is the number of vertices of the graph), yielding subexponential-time algorithms. Lastly, we prove the relations that hold between drawn treewidth and other width measures, including treewidth, pathwidth, (dual) carving-width and embedded-width.
1810.01172
Yousef Alnagar
Yousef AlNagar, Sameh Hosny and Amr A. El-Sherif
Towards Mobility-Aware Proactive Caching for Vehicular Ad hoc Networks
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Harnessing information about user mobility patterns and daily demand can enhance the network's ability to improve the quality of experience (QoE) in Vehicular Ad-Hoc Networks (VANETs). Proactive caching, one of the key features offered by 5G networks, has lately received much interest. However, more research is still needed to convey large-sized multimedia content, including video, audio and pictures, to high-speed moving vehicles. In this paper, we study the gains achieved by proactive caching in Roadside Units (RSUs), where we take into consideration the effect of vehicle velocity on the optimal caching decision. Information about user demand and mobility is harnessed to cache files in RSUs, which communicate with vehicles traversing the visited roads before the actual demand. Our main objective is to minimize the total network latency. Towards this objective, we formulate two optimization problems, for non-cooperative and cooperative caching schemes, to find the optimal caching policy deciding which files are cached by the RSUs. Due to the complexity of these problems, we propose a sub-optimal caching policy for each scheme and compare its performance to that of the optimal caching policy. Numerical results show that proactive caching has a significant performance gain compared to the baseline reactive scenario. Moreover, the results reveal that the cooperative caching scheme is more efficient than the non-cooperative scheme.
[ { "created": "Tue, 2 Oct 2018 11:19:30 GMT", "version": "v1" } ]
2018-10-03
[ [ "AlNagar", "Yousef", "" ], [ "Hosny", "Sameh", "" ], [ "El-Sherif", "Amr A.", "" ] ]
Harnessing information about user mobility patterns and daily demand can enhance the network's ability to improve the quality of experience (QoE) in Vehicular Ad-Hoc Networks (VANETs). Proactive caching, one of the key features offered by 5G networks, has lately received much interest. However, more research is still needed to convey large-sized multimedia content, including video, audio and pictures, to high-speed moving vehicles. In this paper, we study the gains achieved by proactive caching in Roadside Units (RSUs), where we take into consideration the effect of vehicle velocity on the optimal caching decision. Information about user demand and mobility is harnessed to cache files in RSUs, which communicate with vehicles traversing the visited roads before the actual demand. Our main objective is to minimize the total network latency. Towards this objective, we formulate two optimization problems, for non-cooperative and cooperative caching schemes, to find the optimal caching policy deciding which files are cached by the RSUs. Due to the complexity of these problems, we propose a sub-optimal caching policy for each scheme and compare its performance to that of the optimal caching policy. Numerical results show that proactive caching has a significant performance gain compared to the baseline reactive scenario. Moreover, the results reveal that the cooperative caching scheme is more efficient than the non-cooperative scheme.
1607.04629
Aronee Dasgupta Mr
Aronee Dasgupta, Sahil Chakraborty, Astha Nachrani and Pritam Gajkumar Shah
Lightweight Security Protocol for WiSense based Wireless Sensor Network
5 pages, 4 figures, 2 tables. Published with International Journal of Computer Applications (IJCA)
International Journal of Computer Applications 145(3):6-10, July 2016
10.5120/ijca2016910172
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wireless Sensor Networks have emerged as one of the leading technologies. These networks are designed to monitor crucial environmental parameters such as humidity, temperature, wind speed, soil moisture content, UV index, and sound, and then transfer the required information to the base station. However, security remains the key challenge of such networks, as critical data is being transferred. Most sensor nodes currently deployed have constraints on memory and processing power and hence operate without an efficient security protocol. Here, a lightweight and secure protocol for wireless sensor applications is proposed.
[ { "created": "Fri, 15 Jul 2016 19:46:36 GMT", "version": "v1" } ]
2016-07-18
[ [ "Dasgupta", "Aronee", "" ], [ "Chakraborty", "Sahil", "" ], [ "Nachrani", "Astha", "" ], [ "Shah", "Pritam Gajkumar", "" ] ]
Wireless Sensor Networks have emerged as one of the leading technologies. These networks are designed to monitor crucial environmental parameters such as humidity, temperature, wind speed, soil moisture content, UV index, and sound, and then transfer the required information to the base station. However, security remains the key challenge of such networks, as critical data is being transferred. Most sensor nodes currently deployed have constraints on memory and processing power and hence operate without an efficient security protocol. Here, a lightweight and secure protocol for wireless sensor applications is proposed.
2011.04702
Zhiqian Qiao
Josiah Coad, Zhiqian Qiao, John M. Dolan
Safe Trajectory Planning Using Reinforcement Learning for Self Driving
7 pages, 5 figures
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-driving vehicles must be able to act intelligently in diverse and difficult environments, marked by high-dimensional state spaces, a myriad of optimization objectives, and complex behaviors. Traditionally, classical optimization and search techniques have been applied to the problem of self-driving, but they do not fully address operation in environments with high-dimensional states and complex behaviors. Recently, imitation learning has been proposed for the task of self-driving, but it is labor-intensive to obtain enough training data. Reinforcement learning has been proposed as a way to directly control the car, but this raises safety and comfort concerns. We propose using model-free reinforcement learning for the trajectory planning stage of self-driving and show that this approach allows us to operate the car in a safer, more general, and more comfortable manner, as required for the task of self-driving.
[ { "created": "Mon, 9 Nov 2020 19:29:14 GMT", "version": "v1" } ]
2020-11-11
[ [ "Coad", "Josiah", "" ], [ "Qiao", "Zhiqian", "" ], [ "Dolan", "John M.", "" ] ]
Self-driving vehicles must be able to act intelligently in diverse and difficult environments, marked by high-dimensional state spaces, a myriad of optimization objectives, and complex behaviors. Traditionally, classical optimization and search techniques have been applied to the problem of self-driving, but they do not fully address operation in environments with high-dimensional states and complex behaviors. Recently, imitation learning has been proposed for the task of self-driving, but it is labor-intensive to obtain enough training data. Reinforcement learning has been proposed as a way to directly control the car, but this raises safety and comfort concerns. We propose using model-free reinforcement learning for the trajectory planning stage of self-driving and show that this approach allows us to operate the car in a safer, more general, and more comfortable manner, as required for the task of self-driving.
2006.12063
Perttu H\"am\"al\"ainen
Perttu H\"am\"al\"ainen and Martin Trapp and Tuure Saloheimo and Arno Solin
Deep Residual Mixture Models
Code and examples can be found at https://github.com/PerttuHamalainen/DRMM
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose Deep Residual Mixture Models (DRMMs), a novel deep generative model architecture. Compared to other deep models, DRMMs allow more flexible conditional sampling: the model can be trained once with all variables and then used for sampling with arbitrary combinations of conditioning variables, Gaussian priors, and (in)equality constraints. This provides new opportunities for interactive and exploratory machine learning, where the time a user waits for a model to be retrained should be minimized. We demonstrate DRMMs on constrained multi-limb inverse kinematics and controllable generation of animations.
[ { "created": "Mon, 22 Jun 2020 08:25:41 GMT", "version": "v1" }, { "created": "Fri, 20 Nov 2020 14:35:15 GMT", "version": "v2" }, { "created": "Wed, 21 Jul 2021 06:26:14 GMT", "version": "v3" } ]
2021-07-22
[ [ "Hämäläinen", "Perttu", "" ], [ "Trapp", "Martin", "" ], [ "Saloheimo", "Tuure", "" ], [ "Solin", "Arno", "" ] ]
We propose Deep Residual Mixture Models (DRMMs), a novel deep generative model architecture. Compared to other deep models, DRMMs allow more flexible conditional sampling: the model can be trained once with all variables and then used for sampling with arbitrary combinations of conditioning variables, Gaussian priors, and (in)equality constraints. This provides new opportunities for interactive and exploratory machine learning, where the time a user waits for a model to be retrained should be minimized. We demonstrate DRMMs on constrained multi-limb inverse kinematics and controllable generation of animations.
1512.00659
Sanjay Sahay
Aruna Govada, Bhavul Gauri and S.K.Sahay
Centroid Based Binary Tree Structured SVM for Multi Classification
Presented in ICACCI, Kochi, India, 2015
IEEE Xplore, Advances in Computing, Communications and Informatics (ICACCI), p.258 - 262, 2015
10.1109/ICACCI.2015.7275618
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Support Vector Machines (SVMs) were primarily designed for 2-class classification, but they have been extended to N-class classification to meet the need for multiple classes in practical applications. Although N-class classification using SVMs has received considerable research attention, minimizing the number of classifiers needed at training and testing time is still an ongoing research question. We propose a new algorithm, CBTS-SVM (Centroid-based Binary Tree Structured SVM), which addresses this issue. In it, we build a binary tree of SVM models based on the similarity of the class labels, determined from their distance to the corresponding centroids at the root level. The experimental results demonstrate accuracy comparable to OVO for CBTS with reasonable gamma and cost values. On the other hand, when CBTS is compared with OVA, it gives better accuracy with reduced training and testing time. Furthermore, CBTS is scalable, as it is able to handle large data sets.
[ { "created": "Wed, 2 Dec 2015 11:48:38 GMT", "version": "v1" } ]
2015-12-03
[ [ "Govada", "Aruna", "" ], [ "Gauri", "Bhavul", "" ], [ "Sahay", "S. K.", "" ] ]
Support Vector Machines (SVMs) were primarily designed for 2-class classification, but they have been extended to N-class classification to meet the need for multiple classes in practical applications. Although N-class classification using SVMs has received considerable research attention, minimizing the number of classifiers needed at training and testing time is still an ongoing research question. We propose a new algorithm, CBTS-SVM (Centroid-based Binary Tree Structured SVM), which addresses this issue. In it, we build a binary tree of SVM models based on the similarity of the class labels, determined from their distance to the corresponding centroids at the root level. The experimental results demonstrate accuracy comparable to OVO for CBTS with reasonable gamma and cost values. On the other hand, when CBTS is compared with OVA, it gives better accuracy with reduced training and testing time. Furthermore, CBTS is scalable, as it is able to handle large data sets.
1911.03904
Deli Chen
Deli Chen, Xiaoqian Liu, Yankai Lin, Peng Li, Jie Zhou, Qi Su, Xu Sun
HighwayGraph: Modelling Long-distance Node Relations for Improving General Graph Neural Network
8 pages
null
null
null
cs.LG cs.CL cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Neural Networks (GNNs) are efficient approaches for processing graph-structured data. Modelling long-distance node relations is essential for GNN training and applications. However, conventional GNNs suffer from poor performance in modelling long-distance node relations due to limited-layer information propagation. Existing studies focus on building deep GNN architectures, which face the over-smoothing issue and cannot model node relations at particularly long distances. To address this issue, we propose to model long-distance node relations while relying only on shallow GNN architectures, with two solutions: (1) implicit modelling, by learning to predict node pair relations; and (2) explicit modelling, by adding edges between nodes that potentially have the same label. To combine our two solutions, we propose a model-agnostic training framework named HighwayGraph, which overcomes the challenge of insufficient labeled nodes by sampling node pairs from the training set and adopting the self-training method. Extensive experimental results show that HighwayGraph achieves consistent and significant improvements over four representative GNNs on three benchmark datasets.
[ { "created": "Sun, 10 Nov 2019 11:23:37 GMT", "version": "v1" }, { "created": "Sun, 17 May 2020 05:18:55 GMT", "version": "v2" } ]
2020-05-19
[ [ "Chen", "Deli", "" ], [ "Liu", "Xiaoqian", "" ], [ "Lin", "Yankai", "" ], [ "Li", "Peng", "" ], [ "Zhou", "Jie", "" ], [ "Su", "Qi", "" ], [ "Sun", "Xu", "" ] ]
Graph Neural Networks (GNNs) are efficient approaches for processing graph-structured data. Modelling long-distance node relations is essential for GNN training and applications. However, conventional GNNs suffer from poor performance in modelling long-distance node relations due to limited-layer information propagation. Existing studies focus on building deep GNN architectures, which face the over-smoothing issue and cannot model node relations at particularly long distances. To address this issue, we propose to model long-distance node relations while relying only on shallow GNN architectures, with two solutions: (1) implicit modelling, by learning to predict node pair relations; and (2) explicit modelling, by adding edges between nodes that potentially have the same label. To combine our two solutions, we propose a model-agnostic training framework named HighwayGraph, which overcomes the challenge of insufficient labeled nodes by sampling node pairs from the training set and adopting the self-training method. Extensive experimental results show that HighwayGraph achieves consistent and significant improvements over four representative GNNs on three benchmark datasets.
1401.0734
Megasthenis Asteris
Megasthenis Asteris, Alexandros G. Dimakis
Repairable Fountain Codes
To appear in IEEE Journal on Selected Areas in Communications, Issue on Communication Methodologies for Next-Generation Storage Systems 2013, 11 pages, 2 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new family of Fountain codes that are systematic and also have sparse parities. Given an input of $k$ symbols, our codes produce an unbounded number of output symbols, generating each parity independently by linearly combining a logarithmic number of randomly selected input symbols. The construction guarantees that for any $\epsilon>0$, accessing a random subset of $(1+\epsilon)k$ encoded symbols asymptotically suffices to recover the $k$ input symbols with high probability. Our codes have the additional benefit of logarithmic locality: a single lost symbol can be repaired by accessing a subset of $O(\log k)$ of the remaining encoded symbols. This is a desired property for distributed storage systems where symbols are spread over a network of storage nodes. Beyond recovery upon loss, local reconstruction provides an efficient alternative for reading symbols that cannot be accessed directly. In our code, a logarithmic number of disjoint local groups is associated with each systematic symbol, allowing multiple parallel reads. Our main mathematical contribution involves analyzing the rank of sparse random matrices with specific structure over finite fields. We rely on establishing that a new family of sparse random bipartite graphs has perfect matchings with high probability.
[ { "created": "Fri, 3 Jan 2014 21:18:12 GMT", "version": "v1" } ]
2014-01-07
[ [ "Asteris", "Megasthenis", "" ], [ "Dimakis", "Alexandros G.", "" ] ]
We introduce a new family of Fountain codes that are systematic and also have sparse parities. Given an input of $k$ symbols, our codes produce an unbounded number of output symbols, generating each parity independently by linearly combining a logarithmic number of randomly selected input symbols. The construction guarantees that for any $\epsilon>0$, accessing a random subset of $(1+\epsilon)k$ encoded symbols asymptotically suffices to recover the $k$ input symbols with high probability. Our codes have the additional benefit of logarithmic locality: a single lost symbol can be repaired by accessing a subset of $O(\log k)$ of the remaining encoded symbols. This is a desired property for distributed storage systems where symbols are spread over a network of storage nodes. Beyond recovery upon loss, local reconstruction provides an efficient alternative for reading symbols that cannot be accessed directly. In our code, a logarithmic number of disjoint local groups is associated with each systematic symbol, allowing multiple parallel reads. Our main mathematical contribution involves analyzing the rank of sparse random matrices with specific structure over finite fields. We rely on establishing that a new family of sparse random bipartite graphs has perfect matchings with high probability.
2306.07618
Lingfeng Wen
Lingfeng Wen, Xuan Tang, Mingjie Ouyang, Xiangxiang Shen, Jian Yang, Daxin Zhu, Mingsong Chen, Xian Wei
Hyperbolic Graph Diffusion Model
accepted by AAAI 2024
null
null
null
cs.LG cs.AI q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion generative models (DMs) have achieved promising results in image and graph generation. However, real-world graphs, such as social networks, molecular graphs, and traffic graphs, generally share non-Euclidean topologies and hidden hierarchies. For example, the degree distributions of graphs are mostly power-law distributions. The current latent diffusion model embeds the hierarchical data in a Euclidean space, which leads to distortions and interferes with modeling the distribution. Instead, hyperbolic space has been found to be more suitable for capturing complex hierarchical structures due to its exponential growth property. In order to simultaneously utilize the data generation capabilities of diffusion models and the ability of hyperbolic embeddings to extract latent hierarchical distributions, we propose a novel graph generation method called the Hyperbolic Graph Diffusion Model (HGDM), which consists of an auto-encoder to encode nodes into successive hyperbolic embeddings, and a DM that operates in the hyperbolic latent space. HGDM captures the crucial graph structure distributions by constructing a hyperbolic potential node space that incorporates edge information. Extensive experiments show that HGDM achieves better performance in generic graph and molecule generation benchmarks, with a $48\%$ improvement in the quality of graph generation with highly hierarchical structures.
[ { "created": "Tue, 13 Jun 2023 08:22:18 GMT", "version": "v1" }, { "created": "Thu, 15 Jun 2023 06:25:24 GMT", "version": "v2" }, { "created": "Wed, 3 Jan 2024 11:22:21 GMT", "version": "v3" } ]
2024-01-04
[ [ "Wen", "Lingfeng", "" ], [ "Tang", "Xuan", "" ], [ "Ouyang", "Mingjie", "" ], [ "Shen", "Xiangxiang", "" ], [ "Yang", "Jian", "" ], [ "Zhu", "Daxin", "" ], [ "Chen", "Mingsong", "" ], [ "Wei", "Xian", "" ] ]
Diffusion generative models (DMs) have achieved promising results in image and graph generation. However, real-world graphs, such as social networks, molecular graphs, and traffic graphs, generally share non-Euclidean topologies and hidden hierarchies. For example, the degree distributions of graphs are mostly power-law distributions. The current latent diffusion model embeds the hierarchical data in a Euclidean space, which leads to distortions and interferes with modeling the distribution. Instead, hyperbolic space has been found to be more suitable for capturing complex hierarchical structures due to its exponential growth property. In order to simultaneously utilize the data generation capabilities of diffusion models and the ability of hyperbolic embeddings to extract latent hierarchical distributions, we propose a novel graph generation method called the Hyperbolic Graph Diffusion Model (HGDM), which consists of an auto-encoder to encode nodes into successive hyperbolic embeddings, and a DM that operates in the hyperbolic latent space. HGDM captures the crucial graph structure distributions by constructing a hyperbolic potential node space that incorporates edge information. Extensive experiments show that HGDM achieves better performance in generic graph and molecule generation benchmarks, with a $48\%$ improvement in the quality of graph generation with highly hierarchical structures.
2312.16210
James Davenport
James H. Davenport and Matthew England and Scott McCallum and Ali K. Uncu
Iterated Resultants and Rational Functions in Real Quantifier Elimination
To be submitted to Mathematics in Computer Science
null
null
null
cs.SC math.AG
http://creativecommons.org/licenses/by/4.0/
This paper builds on and extends the authors' previous work related to the algorithmic tool, Cylindrical Algebraic Decomposition (CAD), and one of its core applications, Real Quantifier Elimination (QE). These topics are at the heart of symbolic computation and were first implemented in computer algebra systems decades ago, but have recently received renewed interest as part of the ongoing development of SMT solvers for non-linear real arithmetic. First, we consider the use of iterated univariate resultants in traditional CAD, and how this leads to inefficiencies, especially in the case of an input with multiple equational constraints. We reproduce the workshop paper [Davenport \& England, 2023], adding important clarifications to our suggestions first made there to make use of multivariate resultants in the projection phase of CAD. We then consider an alternative approach to this problem first documented in [McCallum \& Brown, 2009] which redefines the actual object under construction, albeit only in the case of two equational constraints. We correct an important typo and provide a missing proof in that paper. We finish by revisiting the topic of how to deal with SMT or Real QE problems expressed using rational functions (as opposed to the usual polynomial ones), noting that these are often found in industrial applications. We revisit a proposal made in [Uncu, Davenport and England, 2023] for doing this in the case of satisfiability, explaining why such an approach does not trivially extend to more complicated quantification structure and giving a suitable alternative.
[ { "created": "Sat, 23 Dec 2023 17:32:45 GMT", "version": "v1" } ]
2023-12-29
[ [ "Davenport", "James H.", "" ], [ "England", "Matthew", "" ], [ "McCallum", "Scott", "" ], [ "Uncu", "Ali K.", "" ] ]
This paper builds on and extends the authors' previous work related to the algorithmic tool, Cylindrical Algebraic Decomposition (CAD), and one of its core applications, Real Quantifier Elimination (QE). These topics are at the heart of symbolic computation and were first implemented in computer algebra systems decades ago, but have recently received renewed interest as part of the ongoing development of SMT solvers for non-linear real arithmetic. First, we consider the use of iterated univariate resultants in traditional CAD, and how this leads to inefficiencies, especially in the case of an input with multiple equational constraints. We reproduce the workshop paper [Davenport \& England, 2023], adding important clarifications to our suggestions first made there to make use of multivariate resultants in the projection phase of CAD. We then consider an alternative approach to this problem first documented in [McCallum \& Brown, 2009] which redefines the actual object under construction, albeit only in the case of two equational constraints. We correct an important typo and provide a missing proof in that paper. We finish by revisiting the topic of how to deal with SMT or Real QE problems expressed using rational functions (as opposed to the usual polynomial ones), noting that these are often found in industrial applications. We revisit a proposal made in [Uncu, Davenport and England, 2023] for doing this in the case of satisfiability, explaining why such an approach does not trivially extend to more complicated quantification structure and giving a suitable alternative.
2302.09079
Biplav Srivastava
Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco Valtorta
Advances in Automatically Rating the Trustworthiness of Text Processing Services
9 pages, Accepted at 2023 Spring Symposium on AI Trustworthiness Assessment
null
null
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
AI services are known to have unstable behavior when subjected to changes in data, models or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black box setting, where the consumer does not have access to the AI's source code or training data, is limited. The consumer has to rely on the AI developer's documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is exposed to the risks introduced by the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in the food industry to promote health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems so that the consumer is informed about the risks and can make an informed decision. In this paper, we will first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising in user studies. Then, we will outline challenges and a vision for principled, multi-modal, causality-based rating methodologies and their implications for decision support in real-world scenarios such as health and food recommendation.
[ { "created": "Sat, 4 Feb 2023 14:27:46 GMT", "version": "v1" } ]
2023-02-21
[ [ "Srivastava", "Biplav", "" ], [ "Lakkaraju", "Kausik", "" ], [ "Bernagozzi", "Mariana", "" ], [ "Valtorta", "Marco", "" ] ]
AI services are known to have unstable behavior when subjected to changes in data, models or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black box setting, where the consumer does not have access to the AI's source code or training data, is limited. The consumer has to rely on the AI developer's documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is exposed to the risks introduced by the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in the food industry to promote health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems so that the consumer is informed about the risks and can make an informed decision. In this paper, we will first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising in user studies. Then, we will outline challenges and a vision for principled, multi-modal, causality-based rating methodologies and their implications for decision support in real-world scenarios such as health and food recommendation.
2204.07724
Dongxiao Zhang
Hao Xu, Yuntian Chen, Dongxiao Zhang
Semantic interpretation for convolutional neural networks: What makes a cat a cat?
33 pages, 11 figures
Advanced Science, 2022
10.1002/advs.202204723
null
cs.LG cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been created to interpret the "black box" model. Fundamental limitations remain, however, that impede the pace of understanding the networks, especially the extraction of understandable semantic spaces. In this work, we introduce the framework of semantic explainable AI (S-XAI), which utilizes row-centered principal component analysis to obtain the common traits from the best combination of superpixels discovered by a genetic algorithm, and extracts understandable semantic spaces on the basis of discovered semantically sensitive neurons and visualization techniques. Statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed for the first time. Our experimental results demonstrate that S-XAI is effective in providing a semantic interpretation for the CNN, and offers broad usage, including trustworthiness assessment and semantic sample searching.
[ { "created": "Sat, 16 Apr 2022 05:25:17 GMT", "version": "v1" } ]
2023-12-05
[ [ "Xu", "Hao", "" ], [ "Chen", "Yuntian", "" ], [ "Zhang", "Dongxiao", "" ] ]
The interpretability of deep neural networks has attracted increasing attention in recent years, and several methods have been created to interpret the "black box" model. Fundamental limitations remain, however, that impede the pace of understanding the networks, especially the extraction of understandable semantic spaces. In this work, we introduce the framework of semantic explainable AI (S-XAI), which utilizes row-centered principal component analysis to obtain the common traits from the best combination of superpixels discovered by a genetic algorithm, and extracts understandable semantic spaces on the basis of discovered semantically sensitive neurons and visualization techniques. Statistical interpretation of the semantic space is also provided, and the concept of semantic probability is proposed for the first time. Our experimental results demonstrate that S-XAI is effective in providing a semantic interpretation for the CNN, and offers broad usage, including trustworthiness assessment and semantic sample searching.
2301.10761
Tanvi Dinkar
Tanvi Dinkar, Chlo\'e Clavel, Ioana Vasilescu
Fillers in Spoken Language Understanding: Computational and Psycholinguistic Perspectives
\footnote{This article has been published in the journal "Traitement Automatique des Langues" 63(3): 37-62, 2022,@ATALA. The original manuscript is available on the web site www.atala.org}
null
null
null
cs.CL cs.HC
http://creativecommons.org/licenses/by/4.0/
Disfluencies (i.e. interruptions in the regular flow of speech) are ubiquitous in spoken discourse. Fillers ("uh", "um") are the disfluencies that occur most frequently compared to other kinds of disfluencies. Yet, to the best of our knowledge, there isn't a resource that brings together the research perspectives influencing Spoken Language Understanding (SLU) on these speech events. The aim of this article is to survey a breadth of perspectives in a holistic way; i.e. from considering underlying (psycho)linguistic theory, to their annotation and consideration in Automatic Speech Recognition (ASR) and SLU systems, to, lastly, their study from a generation standpoint. This article aims to present the perspectives in an approachable way to the SLU and Conversational AI community, and to discuss, moving forward, what we believe are the trends and challenges in each area.
[ { "created": "Wed, 25 Jan 2023 18:55:05 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2023 19:10:39 GMT", "version": "v2" }, { "created": "Fri, 10 Mar 2023 11:04:26 GMT", "version": "v3" }, { "created": "Fri, 24 Mar 2023 15:35:49 GMT", "version": "v4" } ]
2023-03-27
[ [ "Dinkar", "Tanvi", "" ], [ "Clavel", "Chloé", "" ], [ "Vasilescu", "Ioana", "" ] ]
Disfluencies (i.e. interruptions in the regular flow of speech) are ubiquitous in spoken discourse. Fillers ("uh", "um") are the disfluencies that occur most frequently compared to other kinds of disfluencies. Yet, to the best of our knowledge, there isn't a resource that brings together the research perspectives influencing Spoken Language Understanding (SLU) on these speech events. The aim of this article is to survey a breadth of perspectives in a holistic way; i.e. from considering underlying (psycho)linguistic theory, to their annotation and consideration in Automatic Speech Recognition (ASR) and SLU systems, to, lastly, their study from a generation standpoint. This article aims to present the perspectives in an approachable way to the SLU and Conversational AI community, and to discuss, moving forward, what we believe are the trends and challenges in each area.
2110.09443
Benjamin Chamberlain Dr
Benjamin Paul Chamberlain, James Rowbottom, Davide Eynard, Francesco Di Giovanni, Xiaowen Dong, Michael M Bronstein
Beltrami Flow and Neural Diffusion on Graphs
21 pages, 5 figures. Proceedings of the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS) 2021
null
null
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
We propose a novel class of graph neural networks based on the discretised Beltrami flow, a non-Euclidean diffusion PDE. In our model, node features are supplemented with positional encodings derived from the graph topology and jointly evolved by the Beltrami flow, producing simultaneously continuous feature learning and topology evolution. The resulting model generalises many popular graph neural networks and achieves state-of-the-art results on several benchmarks.
[ { "created": "Mon, 18 Oct 2021 16:23:38 GMT", "version": "v1" } ]
2021-10-19
[ [ "Chamberlain", "Benjamin Paul", "" ], [ "Rowbottom", "James", "" ], [ "Eynard", "Davide", "" ], [ "Di Giovanni", "Francesco", "" ], [ "Dong", "Xiaowen", "" ], [ "Bronstein", "Michael M", "" ] ]
We propose a novel class of graph neural networks based on the discretised Beltrami flow, a non-Euclidean diffusion PDE. In our model, node features are supplemented with positional encodings derived from the graph topology and jointly evolved by the Beltrami flow, producing simultaneously continuous feature learning and topology evolution. The resulting model generalises many popular graph neural networks and achieves state-of-the-art results on several benchmarks.
2309.08871
Yanhao Yang
Yanhao Yang, Capprin Bass, Ross L. Hatton
Towards Geometric Motion Planning for High-Dimensional Systems: Gait-Based Coordinate Optimization and Local Metrics
7 pages, 6 figures, accepted to the 2024 IEEE International Conference on Robotics and Automation (ICRA 2024)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometric motion planning offers effective and interpretable gait analysis and optimization tools for locomoting systems. However, due to the curse of dimensionality in coordinate optimization, a key component of geometric motion planning, it is almost infeasible to apply current geometric motion planning to high-dimensional systems. In this paper, we propose a gait-based coordinate optimization method that overcomes the curse of dimensionality. We also identify a unified geometric representation of locomotion by generalizing various nonholonomic constraints into local metrics. By combining these two approaches, we take a step towards geometric motion planning for high-dimensional systems. We test our method in two classes of high-dimensional systems - low Reynolds number swimmers and free-falling Cassie - with up to 11-dimensional shape variables. The resulting optimal gait in the high-dimensional system shows better efficiency compared to that of the reduced-order model. Furthermore, we provide a geometric optimality interpretation of the optimal gait.
[ { "created": "Sat, 16 Sep 2023 04:28:12 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2024 04:51:34 GMT", "version": "v2" } ]
2024-03-08
[ [ "Yang", "Yanhao", "" ], [ "Bass", "Capprin", "" ], [ "Hatton", "Ross L.", "" ] ]
Geometric motion planning offers effective and interpretable gait analysis and optimization tools for locomoting systems. However, due to the curse of dimensionality in coordinate optimization, a key component of geometric motion planning, it is almost infeasible to apply current geometric motion planning to high-dimensional systems. In this paper, we propose a gait-based coordinate optimization method that overcomes the curse of dimensionality. We also identify a unified geometric representation of locomotion by generalizing various nonholonomic constraints into local metrics. By combining these two approaches, we take a step towards geometric motion planning for high-dimensional systems. We test our method in two classes of high-dimensional systems - low Reynolds number swimmers and free-falling Cassie - with up to 11-dimensional shape variables. The resulting optimal gait in the high-dimensional system shows better efficiency compared to that of the reduced-order model. Furthermore, we provide a geometric optimality interpretation of the optimal gait.
2403.15458
Daniel Fesalbon
Daniel Fesalbon, Arvin De La Cruz, Marvin Mallari, and Nelson Rodelas
Fine-Tuning Pre-trained Language Models to Detect In-Game Trash Talks
null
IJFMR Volume 6, Issue 2, March-April 2024
10.36948/ijfmr.2024.v06i02.14927
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Common problems in playing online mobile and computer games are related to toxic behavior and abusive communication among players. Drawing on different reports and studies, this study also discusses the impact of online hate speech and toxicity on players' in-game performance and overall well-being. This study investigates the capability of pre-trained language models to classify or detect trash talk or toxic in-game messages. The study employs and evaluates the performance of pre-trained BERT and GPT language models in detecting toxicity within in-game chats. Using publicly available APIs, in-game chat data from DOTA 2 game matches were collected, processed, reviewed, and labeled as non-toxic, mild (toxicity), and toxic. The study was able to collect around two thousand in-game chats to train and test BERT (Base-uncased), BERT (Large-uncased), and GPT-3 models. Based on the three models' state-of-the-art performance, this study concludes that pre-trained language models have promising potential for addressing online hate speech and insulting in-game trash talk.
[ { "created": "Tue, 19 Mar 2024 11:36:53 GMT", "version": "v1" } ]
2024-03-28
[ [ "Fesalbon", "Daniel", "" ], [ "De La Cruz", "Arvin", "" ], [ "Mallari", "Marvin", "" ], [ "Rodelas", "Nelson", "" ] ]
Common problems in playing online mobile and computer games are related to toxic behavior and abusive communication among players. Drawing on different reports and studies, this study also discusses the impact of online hate speech and toxicity on players' in-game performance and overall well-being. This study investigates the capability of pre-trained language models to classify or detect trash talk or toxic in-game messages. The study employs and evaluates the performance of pre-trained BERT and GPT language models in detecting toxicity within in-game chats. Using publicly available APIs, in-game chat data from DOTA 2 game matches were collected, processed, reviewed, and labeled as non-toxic, mild (toxicity), and toxic. The study was able to collect around two thousand in-game chats to train and test BERT (Base-uncased), BERT (Large-uncased), and GPT-3 models. Based on the three models' state-of-the-art performance, this study concludes that pre-trained language models have promising potential for addressing online hate speech and insulting in-game trash talk.
2303.08225
Hao Tang
Hao Tang, Zhenyu Zhang, Humphrey Shi, Bo Li, Ling Shao, Nicu Sebe, Radu Timofte, Luc Van Gool
Graph Transformer GANs for Graph-Constrained House Generation
CVPR 2023
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task. The proposed graph-Transformer-based generator includes a novel graph Transformer encoder that combines graph convolutions and self-attentions in a Transformer to model both local and global interactions across connected and non-connected graph nodes. Specifically, the proposed connected node attention (CNA) and non-connected node attention (NNA) aim to capture the global relations across connected nodes and non-connected nodes in the input graph, respectively. The proposed graph modeling block (GMB) aims to exploit local vertex interactions based on a house layout topology. Moreover, we propose a new node classification-based discriminator to preserve the high-level semantic and discriminative node features for different house components. Finally, we propose a novel graph-based cycle-consistency loss that aims at maintaining the relative spatial relationships between ground truth and predicted graphs. Experiments on two challenging graph-constrained house generation tasks (i.e., house layout and roof generation) with two public datasets demonstrate the effectiveness of GTGAN in terms of objective quantitative scores and subjective visual realism. New state-of-the-art results are established by large margins on both tasks.
[ { "created": "Tue, 14 Mar 2023 20:35:45 GMT", "version": "v1" } ]
2023-03-16
[ [ "Tang", "Hao", "" ], [ "Zhang", "Zhenyu", "" ], [ "Shi", "Humphrey", "" ], [ "Li", "Bo", "" ], [ "Shao", "Ling", "" ], [ "Sebe", "Nicu", "" ], [ "Timofte", "Radu", "" ], [ "Van Gool", "Luc", "" ] ]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task. The proposed graph-Transformer-based generator includes a novel graph Transformer encoder that combines graph convolutions and self-attentions in a Transformer to model both local and global interactions across connected and non-connected graph nodes. Specifically, the proposed connected node attention (CNA) and non-connected node attention (NNA) aim to capture the global relations across connected nodes and non-connected nodes in the input graph, respectively. The proposed graph modeling block (GMB) aims to exploit local vertex interactions based on a house layout topology. Moreover, we propose a new node classification-based discriminator to preserve the high-level semantic and discriminative node features for different house components. Finally, we propose a novel graph-based cycle-consistency loss that aims at maintaining the relative spatial relationships between ground truth and predicted graphs. Experiments on two challenging graph-constrained house generation tasks (i.e., house layout and roof generation) with two public datasets demonstrate the effectiveness of GTGAN in terms of objective quantitative scores and subjective visual realism. New state-of-the-art results are established by large margins on both tasks.
2110.03246
Jannik Vierling
Stefan Hetzl, Jannik Vierling
Unprovability results for clause set cycles
Revised version
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
The notion of clause set cycle abstracts a family of methods for automated inductive theorem proving based on the detection of cyclic dependencies between clause sets. By discerning the underlying logical features of clause set cycles, we are able to characterize clause set cycles by a logical theory. We make use of this characterization to provide practically relevant unprovability results for clause set cycles that exploit different logical features.
[ { "created": "Thu, 7 Oct 2021 08:00:11 GMT", "version": "v1" }, { "created": "Thu, 4 Aug 2022 15:12:47 GMT", "version": "v2" } ]
2022-08-05
[ [ "Hetzl", "Stefan", "" ], [ "Vierling", "Jannik", "" ] ]
The notion of clause set cycle abstracts a family of methods for automated inductive theorem proving based on the detection of cyclic dependencies between clause sets. By discerning the underlying logical features of clause set cycles, we are able to characterize clause set cycles by a logical theory. We make use of this characterization to provide practically relevant unprovability results for clause set cycles that exploit different logical features.
0808.0247
Danilo Gligoroski
Danilo Gligoroski and Smile Markovski and Svein Johan Knapskog
A Public Key Block Cipher Based on Multivariate Quadratic Quasigroups
This is an extended and updated version of a paper "Multivariate Quadratic Trapdoor Functions Based on Multivariate Quadratic Quasigroups", Proceedings of the American Conference On Applied Mathematics (MATH '08), Cambridge, Massachusetts, USA, March 24-26, 2008
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have designed a new class of public key algorithms based on quasigroup string transformations using a specific class of quasigroups called multivariate quadratic quasigroups (MQQ). Our public key algorithm is a bijective mapping, it does not perform message expansion, and it can be used both for encryption and signatures. The public key consists of n quadratic polynomials in n variables, where n=140, 160, ... . A particular characteristic of our public key algorithm is that it is very fast and highly parallelizable. More concretely, it has the speed of a typical modern symmetric block cipher - the reason for the phrase "A Public Key Block Cipher" in the title of this paper. Namely, the reference C code for the 160-bit variant of the algorithm performs decryption in less than 11,000 cycles (on an Intel Core 2 Duo -- using only one processor core), and in around 6,000 cycles using two CPU cores and the OpenMP 2.0 library. Moreover, implemented in a Xilinx Virtex-5 FPGA running at 249.4 MHz, it achieves a decryption throughput of 399 Mbps, and implemented on four Xilinx Virtex-5 chips running at 276.7 MHz, it achieves an encryption throughput of 44.27 Gbps. Compared to the fastest RSA implementations on similar FPGA platforms, the MQQ algorithm is more than 10,000 times faster.
[ { "created": "Sat, 2 Aug 2008 09:48:16 GMT", "version": "v1" } ]
2008-08-05
[ [ "Gligoroski", "Danilo", "" ], [ "Markovski", "Smile", "" ], [ "Knapskog", "Svein Johan", "" ] ]
We have designed a new class of public key algorithms based on quasigroup string transformations using a specific class of quasigroups called multivariate quadratic quasigroups (MQQ). Our public key algorithm is a bijective mapping, does not perform message expansion, and can be used both for encryption and signatures. The public key consists of n quadratic polynomials in n variables, where n = 140, 160, ... . A particular characteristic of our public key algorithm is that it is very fast and highly parallelizable. More concretely, it has the speed of a typical modern symmetric block cipher - the reason for the phrase "A Public Key Block Cipher" in the title of this paper. Namely, the reference C code for the 160-bit variant of the algorithm performs decryption in less than 11,000 cycles (on an Intel Core 2 Duo, using only one processor core), and in around 6,000 cycles using two CPU cores and the OpenMP 2.0 library. Moreover, implemented on a Xilinx Virtex-5 FPGA running at 249.4 MHz it achieves a decryption throughput of 399 Mbps, and implemented on four Xilinx Virtex-5 chips running at 276.7 MHz it achieves an encryption throughput of 44.27 Gbps. Compared to the fastest RSA implementations on similar FPGA platforms, the MQQ algorithm is more than 10,000 times faster.
2305.14936
Ivan Habernal
Cleo Matzken, Steffen Eger, Ivan Habernal
Trade-Offs Between Fairness and Privacy in Language Modeling
Findings of ACL 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Protecting privacy in contemporary NLP models is gaining in importance. So does the need to mitigate social biases of such models. But can we have both at the same time? Existing research suggests that privacy preservation comes at the price of worsening biases in classification tasks. In this paper, we explore the extent to which this tradeoff really holds when we incorporate both privacy preservation and de-biasing techniques into training text generation models. How does improving the model along one dimension affect the other dimension as well as the utility of the model? We conduct an extensive set of experiments that include bias detection, privacy attacks, language modeling, and performance on downstream tasks.
[ { "created": "Wed, 24 May 2023 09:18:28 GMT", "version": "v1" } ]
2023-05-25
[ [ "Matzken", "Cleo", "" ], [ "Eger", "Steffen", "" ], [ "Habernal", "Ivan", "" ] ]
Protecting privacy in contemporary NLP models is gaining in importance, as is the need to mitigate the social biases of such models. But can we have both at the same time? Existing research suggests that privacy preservation comes at the price of worsening biases in classification tasks. In this paper, we explore the extent to which this trade-off really holds when we incorporate both privacy preservation and de-biasing techniques into the training of text generation models. How does improving the model along one dimension affect the other dimension as well as the utility of the model? We conduct an extensive set of experiments that include bias detection, privacy attacks, language modeling, and performance on downstream tasks.
2403.10698
Minh-Hao Van
Minh-Hao Van, Alycia N. Carey, Xintao Wu
Robust Influence-based Training Methods for Noisy Brain MRI
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Correctly classifying brain tumors is imperative to the prompt and accurate treatment of a patient. While several classification algorithms based on classical image processing or deep learning methods have been proposed to rapidly classify tumors in MR images, most assume the unrealistic setting of noise-free training data. In this work, we study a difficult but realistic setting of training a deep learning model on noisy MR images to classify brain tumors. We propose two training methods that are robust to noisy MRI training data, Influence-based Sample Reweighing (ISR) and Influence-based Sample Perturbation (ISP), which are based on influence functions from robust statistics. Using the influence functions, in ISR, we adaptively reweigh training examples according to how helpful/harmful they are to the training process, while in ISP, we craft and inject helpful perturbation proportional to the influence score. Both ISR and ISP harden the classification model against noisy training data without significantly affecting the generalization ability of the model on test data. We conduct empirical evaluations over a common brain tumor dataset and compare ISR and ISP to three baselines. Our empirical results show that ISR and ISP can efficiently train deep learning models robust against noisy training data.
[ { "created": "Fri, 15 Mar 2024 21:30:25 GMT", "version": "v1" }, { "created": "Thu, 9 May 2024 22:38:25 GMT", "version": "v2" } ]
2024-05-13
[ [ "Van", "Minh-Hao", "" ], [ "Carey", "Alycia N.", "" ], [ "Wu", "Xintao", "" ] ]
Correctly classifying brain tumors is imperative to the prompt and accurate treatment of a patient. While several classification algorithms based on classical image processing or deep learning methods have been proposed to rapidly classify tumors in MR images, most assume the unrealistic setting of noise-free training data. In this work, we study a difficult but realistic setting of training a deep learning model on noisy MR images to classify brain tumors. We propose two training methods that are robust to noisy MRI training data, Influence-based Sample Reweighing (ISR) and Influence-based Sample Perturbation (ISP), which are based on influence functions from robust statistics. Using the influence functions, in ISR, we adaptively reweigh training examples according to how helpful/harmful they are to the training process, while in ISP, we craft and inject helpful perturbation proportional to the influence score. Both ISR and ISP harden the classification model against noisy training data without significantly affecting the generalization ability of the model on test data. We conduct empirical evaluations over a common brain tumor dataset and compare ISR and ISP to three baselines. Our empirical results show that ISR and ISP can efficiently train deep learning models robust against noisy training data.
2002.06765
Teppei Suzuki
Teppei Suzuki
Superpixel Segmentation via Convolutional Neural Networks with Regularized Information Maximization
To appear in ICASSP 2020
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an unsupervised superpixel segmentation method by optimizing a randomly-initialized convolutional neural network (CNN) in inference time. Our method generates superpixels via CNN from a single image without any labels by minimizing a proposed objective function for superpixel segmentation in inference time. There are three advantages to our method compared with many of existing methods: (i) leverages an image prior of CNN for superpixel segmentation, (ii) adaptively changes the number of superpixels according to the given images, and (iii) controls the property of superpixels by adding an auxiliary cost to the objective function. We verify the advantages of our method quantitatively and qualitatively on BSDS500 and SBD datasets.
[ { "created": "Mon, 17 Feb 2020 04:32:03 GMT", "version": "v1" }, { "created": "Thu, 9 Apr 2020 02:12:09 GMT", "version": "v2" }, { "created": "Fri, 26 Jun 2020 14:02:13 GMT", "version": "v3" } ]
2020-06-29
[ [ "Suzuki", "Teppei", "" ] ]
We propose an unsupervised superpixel segmentation method that optimizes a randomly-initialized convolutional neural network (CNN) at inference time. Our method generates superpixels via a CNN from a single image, without any labels, by minimizing a proposed objective function for superpixel segmentation at inference time. Our method has three advantages over many existing methods: (i) it leverages the image prior of a CNN for superpixel segmentation, (ii) it adaptively changes the number of superpixels according to the given image, and (iii) it controls the properties of the superpixels by adding an auxiliary cost to the objective function. We verify the advantages of our method quantitatively and qualitatively on the BSDS500 and SBD datasets.
1603.08853
Jeffrey Liu
Jeffrey Liu, Saurabh Amin, Galina Schwartz
Effects of Information Heterogeneity in Bayesian Routing Games
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article studies the value of information in route choice decisions when a fraction of players have access to high accuracy information about traffic incidents relative to others. To model such environments, we introduce a Bayesian congestion game, in which players have private information about incidents, and each player chooses her route on a network of parallel links. The links are prone to incidents that occur with an ex-ante known probability. The demand is comprised of two player populations: one with access to high accuracy incident information and another with low accuracy information, i.e. the populations differ only by their access to information. The common knowledge includes: (i) the demand and route cost functions, (ii) the fraction of highly-informed players, (iii) the incident probability, and (iv) the marginal type distributions induced by the information structure of the game. We present a full characterization of the Bayesian Wardrop Equilibrium of this game under the assumption that low information players receive no additional information beyond common knowledge. We also compute the cost to individual players and the social cost as a function of the fraction of highly-informed players when they receive perfectly accurate information. Our first result suggests that below a certain threshold of highly-informed players, both populations experience a reduction in individual cost, with the highly-informed players receiving a greater reduction. However, above this threshold, both populations realize the same equilibrium cost. Secondly, there exists another (lower or equal) threshold above which a further increase in the fraction of highly-informed players does not reduce the expected social costs. Thus, once a sufficiently large number of players are highly informed, wider distribution of more accurate information is ineffective at best, and otherwise socially harmful.
[ { "created": "Tue, 29 Mar 2016 17:22:31 GMT", "version": "v1" } ]
2016-03-30
[ [ "Liu", "Jeffrey", "" ], [ "Amin", "Saurabh", "" ], [ "Schwartz", "Galina", "" ] ]
This article studies the value of information in route choice decisions when a fraction of players have access to high-accuracy information about traffic incidents relative to others. To model such environments, we introduce a Bayesian congestion game, in which players have private information about incidents, and each player chooses her route on a network of parallel links. The links are prone to incidents that occur with an ex-ante known probability. The demand comprises two player populations: one with access to high-accuracy incident information and another with low-accuracy information, i.e., the populations differ only in their access to information. The common knowledge includes: (i) the demand and route cost functions, (ii) the fraction of highly-informed players, (iii) the incident probability, and (iv) the marginal type distributions induced by the information structure of the game. We present a full characterization of the Bayesian Wardrop Equilibrium of this game under the assumption that low-information players receive no additional information beyond common knowledge. We also compute the cost to individual players and the social cost as a function of the fraction of highly-informed players when they receive perfectly accurate information. Our first result suggests that below a certain threshold of highly-informed players, both populations experience a reduction in individual cost, with the highly-informed players receiving a greater reduction. However, above this threshold, both populations realize the same equilibrium cost. Secondly, there exists another (lower or equal) threshold above which a further increase in the fraction of highly-informed players does not reduce the expected social costs. Thus, once a sufficiently large number of players are highly informed, wider distribution of more accurate information is ineffective at best, and otherwise socially harmful.
2306.09048
Shubhada Agrawal
Shubhada Agrawal, Sandeep Juneja, Karthikeyan Shanmugam, Arun Sai Suggala
Optimal Best-Arm Identification in Bandits with Access to Offline Data
45 pages, 5 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning paradigms based purely on offline data as well as those based solely on sequential online learning have been well-studied in the literature. In this paper, we consider combining offline data with online learning, an area less studied but of obvious practical importance. We consider the stochastic $K$-armed bandit problem, where our goal is to identify the arm with the highest mean in the presence of relevant offline data, with confidence $1-\delta$. We conduct a lower bound analysis on policies that provide such $1-\delta$ probabilistic correctness guarantees. We develop algorithms that match the lower bound on sample complexity when $\delta$ is small. Our algorithms are computationally efficient with an average per-sample acquisition cost of $\tilde{O}(K)$, and rely on a careful characterization of the optimality conditions of the lower bound problem.
[ { "created": "Thu, 15 Jun 2023 11:12:35 GMT", "version": "v1" } ]
2023-06-16
[ [ "Agrawal", "Shubhada", "" ], [ "Juneja", "Sandeep", "" ], [ "Shanmugam", "Karthikeyan", "" ], [ "Suggala", "Arun Sai", "" ] ]
Learning paradigms based purely on offline data as well as those based solely on sequential online learning have been well-studied in the literature. In this paper, we consider combining offline data with online learning, an area less studied but of obvious practical importance. We consider the stochastic $K$-armed bandit problem, where our goal is to identify the arm with the highest mean in the presence of relevant offline data, with confidence $1-\delta$. We conduct a lower bound analysis on policies that provide such $1-\delta$ probabilistic correctness guarantees. We develop algorithms that match the lower bound on sample complexity when $\delta$ is small. Our algorithms are computationally efficient with an average per-sample acquisition cost of $\tilde{O}(K)$, and rely on a careful characterization of the optimality conditions of the lower bound problem.
1212.2865
Martin Kasparick
Gerhard Wunder, Robert F. H. Fischer, Holger Boche, Simon Litsyn, Jong-Seon No
The PAPR Problem in OFDM Transmission: New Directions for a Long-Lasting Problem
Accepted for publication in IEEE Signal Processing Magazine
null
10.1109/MSP.2012.2218138
null
cs.IT math.IT math.MG math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peak power control for multicarrier communications has been a long-lasting problem in signal processing and communications. However, industry and academia are confronted with new challenges regarding energy efficient system design. Particularly, the envisioned boost in network energy efficiency (e.g. at least by a factor of 1000 in the Green Touch consortium) will tighten the requirements on component level so that the efficiency gap with respect to single-carrier transmission must considerably diminish. This paper reflects these challenges together with a unified framework and new directions in this field. The combination of large deviation theory, de-randomization and selected elements of Banach space geometry will offer a novel approach and will provide ideas and concepts for researchers with a background in industry as well as those from academia.
[ { "created": "Wed, 12 Dec 2012 16:25:56 GMT", "version": "v1" }, { "created": "Tue, 18 Dec 2012 16:48:50 GMT", "version": "v2" } ]
2016-11-18
[ [ "Wunder", "Gerhard", "" ], [ "Fischer", "Robert F. H.", "" ], [ "Boche", "Holger", "" ], [ "Litsyn", "Simon", "" ], [ "No", "Jong-Seon", "" ] ]
Peak power control for multicarrier communications has been a long-lasting problem in signal processing and communications. However, industry and academia are confronted with new challenges regarding energy-efficient system design. In particular, the envisioned boost in network energy efficiency (e.g. by at least a factor of 1000 in the GreenTouch consortium) will tighten the requirements at the component level, so that the efficiency gap with respect to single-carrier transmission must considerably diminish. This paper reflects on these challenges and presents a unified framework and new directions in this field. The combination of large deviation theory, de-randomization, and selected elements of Banach space geometry offers a novel approach and provides ideas and concepts for researchers with a background in industry as well as those from academia.
2108.09108
Hyeongseok Son
Hyeongseok Son, Junyong Lee, Sunghyun Cho, Seungyong Lee
Single Image Defocus Deblurring Using Kernel-Sharing Parallel Atrous Convolutions
Accepted to ICCV 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel deep learning approach for single image defocus deblurring based on inverse kernels. In a defocused image, the blur shapes are similar among pixels although the blur sizes can spatially vary. To utilize the property with inverse kernels, we exploit the observation that when only the size of a defocus blur changes while keeping the shape, the shape of the corresponding inverse kernel remains the same and only the scale changes. Based on the observation, we propose a kernel-sharing parallel atrous convolutional (KPAC) block specifically designed by incorporating the property of inverse kernels for single image defocus deblurring. To effectively simulate the invariant shapes of inverse kernels with different scales, KPAC shares the same convolutional weights among multiple atrous convolution layers. To efficiently simulate the varying scales of inverse kernels, KPAC consists of only a few atrous convolution layers with different dilations and learns per-pixel scale attentions to aggregate the outputs of the layers. KPAC also utilizes the shape attention to combine the outputs of multiple convolution filters in each atrous convolution layer, to deal with defocus blur with a slightly varying shape. We demonstrate that our approach achieves state-of-the-art performance with a much smaller number of parameters than previous methods.
[ { "created": "Fri, 20 Aug 2021 11:06:19 GMT", "version": "v1" } ]
2021-08-23
[ [ "Son", "Hyeongseok", "" ], [ "Lee", "Junyong", "" ], [ "Cho", "Sunghyun", "" ], [ "Lee", "Seungyong", "" ] ]
This paper proposes a novel deep learning approach for single image defocus deblurring based on inverse kernels. In a defocused image, the blur shapes are similar among pixels although the blur sizes can spatially vary. To utilize the property with inverse kernels, we exploit the observation that when only the size of a defocus blur changes while keeping the shape, the shape of the corresponding inverse kernel remains the same and only the scale changes. Based on the observation, we propose a kernel-sharing parallel atrous convolutional (KPAC) block specifically designed by incorporating the property of inverse kernels for single image defocus deblurring. To effectively simulate the invariant shapes of inverse kernels with different scales, KPAC shares the same convolutional weights among multiple atrous convolution layers. To efficiently simulate the varying scales of inverse kernels, KPAC consists of only a few atrous convolution layers with different dilations and learns per-pixel scale attentions to aggregate the outputs of the layers. KPAC also utilizes the shape attention to combine the outputs of multiple convolution filters in each atrous convolution layer, to deal with defocus blur with a slightly varying shape. We demonstrate that our approach achieves state-of-the-art performance with a much smaller number of parameters than previous methods.
1208.2261
Sudarshan Nandy
Sudarshan Nandy, Partha Pratim Sarkar and Achintya Das
Analysis of Statistical Hypothesis based Learning Mechanism for Faster Crawling
14 pages, 7 figures. This paper has been withdrawn by the author due to a crucial sign error on pages 3, 4, 7, and 11. The error is also observed with an equation number on page 10
International Journal of Artificial Intelligence & Applications (IJAIA), Vol.3, No.4, July 2012, 117-130
10.5121/ijaia.2012.3409
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growth of world-wide-web (WWW) spreads its wings from an intangible quantities of web-pages to a gigantic hub of web information which gradually increases the complexity of crawling process in a search engine. A search engine handles a lot of queries from various parts of this world, and the answers of it solely depend on the knowledge that it gathers by means of crawling. The information sharing becomes a most common habit of the society, and it is done by means of publishing structured, semi-structured and unstructured resources on the web. This social practice leads to an exponential growth of web-resource, and hence it became essential to crawl for continuous updating of web-knowledge and modification of several existing resources in any situation. In this paper one statistical hypothesis based learning mechanism is incorporated for learning the behavior of crawling speed in different environment of network, and for intelligently control of the speed of crawler. The scaling technique is used to compare the performance proposed method with the standard crawler. The high speed performance is observed after scaling, and the retrieval of relevant web-resource in such a high speed is analyzed.
[ { "created": "Fri, 10 Aug 2012 19:43:43 GMT", "version": "v1" }, { "created": "Mon, 13 Aug 2012 05:40:17 GMT", "version": "v2" } ]
2012-08-14
[ [ "Nandy", "Sudarshan", "" ], [ "Sarkar", "Partha Pratim", "" ], [ "Das", "Achintya", "" ] ]
The growth of the World Wide Web (WWW) has spread from an intangible quantity of web pages to a gigantic hub of web information, which gradually increases the complexity of the crawling process in a search engine. A search engine handles a large number of queries from all parts of the world, and its answers depend solely on the knowledge it gathers by means of crawling. Information sharing has become a common habit of society, carried out by publishing structured, semi-structured, and unstructured resources on the web. This social practice leads to an exponential growth of web resources, and hence it has become essential to crawl continuously to keep web knowledge up to date and to track modifications of existing resources. In this paper, a statistical-hypothesis-based learning mechanism is incorporated to learn the behavior of crawling speed in different network environments and to intelligently control the speed of the crawler. A scaling technique is used to compare the performance of the proposed method with that of a standard crawler. High-speed performance is observed after scaling, and the retrieval of relevant web resources at such high speed is analyzed.
2208.04125
Haoye Tian
Haoye Tian, Xunzhu Tang, Andrew Habib, Shangwen Wang, Kui Liu, Xin Xia, Jacques Klein, Tegawend\'e F. Bissyand\'e
Is this Change the Answer to that Problem? Correlating Descriptions of Bug and Code Changes for Evaluating Patch Correctness
null
null
10.1145/3551349.3556914
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose a novel perspective to the problem of patch correctness assessment: a correct patch implements changes that "answer" to a problem posed by buggy behaviour. Concretely, we turn the patch correctness assessment into a Question Answering problem. To tackle this problem, our intuition is that natural language processing can provide the necessary representations and models for assessing the semantic correlation between a bug (question) and a patch (answer). Specifically, we consider as inputs the bug reports as well as the natural language description of the generated patches. Our approach, Quatrain, first considers state of the art commit message generation models to produce the relevant inputs associated to each generated patch. Then we leverage a neural network architecture to learn the semantic correlation between bug reports and commit messages. Experiments on a large dataset of 9135 patches generated for three bug datasets (Defects4j, Bugs.jar and Bears) show that Quatrain can achieve an AUC of 0.886 on predicting patch correctness, and recalling 93% correct patches while filtering out 62% incorrect patches. Our experimental results further demonstrate the influence of inputs quality on prediction performance. We further perform experiments to highlight that the model indeed learns the relationship between bug reports and code change descriptions for the prediction. Finally, we compare against prior work and discuss the benefits of our approach.
[ { "created": "Mon, 8 Aug 2022 13:32:58 GMT", "version": "v1" }, { "created": "Thu, 1 Sep 2022 08:37:38 GMT", "version": "v2" } ]
2022-09-02
[ [ "Tian", "Haoye", "" ], [ "Tang", "Xunzhu", "" ], [ "Habib", "Andrew", "" ], [ "Wang", "Shangwen", "" ], [ "Liu", "Kui", "" ], [ "Xia", "Xin", "" ], [ "Klein", "Jacques", "" ], [ "Bissyandé", "Tegawendé F.", "" ] ]
In this work, we propose a novel perspective on the problem of patch correctness assessment: a correct patch implements changes that "answer" a problem posed by buggy behaviour. Concretely, we turn patch correctness assessment into a question answering problem. To tackle this problem, our intuition is that natural language processing can provide the necessary representations and models for assessing the semantic correlation between a bug (question) and a patch (answer). Specifically, we consider as inputs the bug reports as well as the natural language descriptions of the generated patches. Our approach, Quatrain, first uses state-of-the-art commit message generation models to produce the relevant inputs associated with each generated patch. Then we leverage a neural network architecture to learn the semantic correlation between bug reports and commit messages. Experiments on a large dataset of 9135 patches generated for three bug datasets (Defects4J, Bugs.jar, and Bears) show that Quatrain achieves an AUC of 0.886 on predicting patch correctness, recalling 93% of correct patches while filtering out 62% of incorrect patches. Our experimental results further demonstrate the influence of input quality on prediction performance. We further perform experiments to highlight that the model indeed learns the relationship between bug reports and code change descriptions for the prediction. Finally, we compare against prior work and discuss the benefits of our approach.