Dataset schema (column, type, min/max):

column          type            min      max
id              stringlengths   9        10
submitter       stringlengths   1        64
authors         stringlengths   4        20.7k
title           stringlengths   4        246
comments        stringlengths   1        523
journal-ref     stringlengths   4        404
doi             stringlengths   11       153
report-no       stringlengths   2        254
categories      stringlengths   5        98
license         stringclasses   9 values
orig_abstract   stringlengths   14       3.35k
versions        listlengths     1        60
update_date     stringlengths   10       10
authors_parsed  listlengths     1        1.35k
abstract        stringlengths   11       3.34k
id: 2407.06110
submitter: Yashwardhan Chaudhuri
authors: Yashwardhan Chaudhuri, Ankit Kumar, Arun Balaji Buduru, Adel Alshamrani
title: FGA: Fourier-Guided Attention Network for Crowd Count Estimation
comments: Accepted to IJCNN'24
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Crowd counting is gaining societal relevance, particularly in domains of Urban Planning, Crowd Management, and Public Safety. This paper introduces Fourier-guided attention (FGA), a novel attention mechanism for crowd count estimation designed to address the inefficient full-scale global pattern capture in existing works on convolution-based attention networks. FGA efficiently captures multi-scale information, including full-scale global patterns, by utilizing Fast-Fourier Transformations (FFT) along with spatial attention for global features and convolutions with channel-wise attention for semi-global and local features. The architecture of FGA involves a dual-path approach: (1) a path for processing full-scale global features through FFT, allowing for efficient extraction of information in the frequency domain, and (2) a path for processing remaining feature maps for semi-global and local features using traditional convolutions and channel-wise attention. This dual-path architecture enables FGA to seamlessly integrate frequency and spatial information, enhancing its ability to capture diverse crowd patterns. We apply FGA in the last layers of two popular crowd-counting works, CSRNet and CANNet, to evaluate the module's performance on benchmark datasets such as ShanghaiTech-A, ShanghaiTech-B, UCF-CC-50, and JHU++ crowd. The experiments demonstrate a notable improvement across all datasets based on Mean-Squared-Error (MSE) and Mean-Absolute-Error (MAE) metrics, showing comparable performance to recent state-of-the-art methods. Additionally, we illustrate the interpretability using qualitative analysis, leveraging Grad-CAM heatmaps, to show the effectiveness of FGA in capturing crowd patterns.
versions: [ { "created": "Mon, 8 Jul 2024 16:47:19 GMT", "version": "v1" } ]
update_date: 2024-07-09
authors_parsed: [ [ "Chaudhuri", "Yashwardhan", "" ], [ "Kumar", "Ankit", "" ], [ "Buduru", "Arun Balaji", "" ], [ "Alshamrani", "Adel", "" ] ]
abstract: Crowd counting is gaining societal relevance, particularly in domains of Urban Planning, Crowd Management, and Public Safety. This paper introduces Fourier-guided attention (FGA), a novel attention mechanism for crowd count estimation designed to address the inefficient full-scale global pattern capture in existing works on convolution-based attention networks. FGA efficiently captures multi-scale information, including full-scale global patterns, by utilizing Fast-Fourier Transformations (FFT) along with spatial attention for global features and convolutions with channel-wise attention for semi-global and local features. The architecture of FGA involves a dual-path approach: (1) a path for processing full-scale global features through FFT, allowing for efficient extraction of information in the frequency domain, and (2) a path for processing remaining feature maps for semi-global and local features using traditional convolutions and channel-wise attention. This dual-path architecture enables FGA to seamlessly integrate frequency and spatial information, enhancing its ability to capture diverse crowd patterns. We apply FGA in the last layers of two popular crowd-counting works, CSRNet and CANNet, to evaluate the module's performance on benchmark datasets such as ShanghaiTech-A, ShanghaiTech-B, UCF-CC-50, and JHU++ crowd. The experiments demonstrate a notable improvement across all datasets based on Mean-Squared-Error (MSE) and Mean-Absolute-Error (MAE) metrics, showing comparable performance to recent state-of-the-art methods. Additionally, we illustrate the interpretability using qualitative analysis, leveraging Grad-CAM heatmaps, to show the effectiveness of FGA in capturing crowd patterns.
id: 2207.02013
submitter: Jiahao Ma
authors: Jiahao Ma, Zicheng Duan, Liang Zheng, Chuong Nguyen
title: Multiview Detection with Cardboard Human Modeling
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Multiview detection uses multiple calibrated cameras with overlapping fields of views to locate occluded pedestrians. In this field, existing methods typically adopt a ``human modeling - aggregation'' strategy. To find robust pedestrian representations, some intuitively incorporate 2D perception results from each frame, while others use entire frame features projected to the ground plane. However, the former does not consider the human appearance and leads to many ambiguities, and the latter suffers from projection errors due to the lack of accurate height of the human torso and head. In this paper, we propose a new pedestrian representation scheme based on human point clouds modeling. Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds on the ground. Then, we aggregate the point clouds of the pedestrian cardboard across multiple views for a final decision. Compared with existing representations, the proposed method explicitly leverages human appearance and reduces projection errors significantly by relatively accurate height estimation. On four standard evaluation benchmarks, the proposed method achieves very competitive results. Our code and data will be released at https://github.com/ZichengDuan/MvCHM.
versions: [ { "created": "Tue, 5 Jul 2022 12:47:26 GMT", "version": "v1" }, { "created": "Sun, 10 Jul 2022 11:26:29 GMT", "version": "v2" }, { "created": "Thu, 28 Jul 2022 23:38:32 GMT", "version": "v3" }, { "created": "Tue, 16 Aug 2022 12:17:23 GMT", "version": "v4" }, { "created": "Fri, 6 Jan 2023 00:55:01 GMT", "version": "v5" } ]
update_date: 2023-01-09
authors_parsed: [ [ "Ma", "Jiahao", "" ], [ "Duan", "Zicheng", "" ], [ "Zheng", "Liang", "" ], [ "Nguyen", "Chuong", "" ] ]
abstract: Multiview detection uses multiple calibrated cameras with overlapping fields of views to locate occluded pedestrians. In this field, existing methods typically adopt a ``human modeling - aggregation'' strategy. To find robust pedestrian representations, some intuitively incorporate 2D perception results from each frame, while others use entire frame features projected to the ground plane. However, the former does not consider the human appearance and leads to many ambiguities, and the latter suffers from projection errors due to the lack of accurate height of the human torso and head. In this paper, we propose a new pedestrian representation scheme based on human point clouds modeling. Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds on the ground. Then, we aggregate the point clouds of the pedestrian cardboard across multiple views for a final decision. Compared with existing representations, the proposed method explicitly leverages human appearance and reduces projection errors significantly by relatively accurate height estimation. On four standard evaluation benchmarks, the proposed method achieves very competitive results. Our code and data will be released at https://github.com/ZichengDuan/MvCHM.
id: 2207.07883
submitter: Wei Liu
authors: Zhilu Lai, Wei Liu, Xudong Jian, Kiran Bacsa, Limin Sun, Eleni Chatzi
title: Neural modal ordinary differential equations: Integrating physics-based modeling with neural ordinary differential equations for modeling high-dimensional monitored structures
comments: Accepted for publication in Data-Centric Engineering
journal-ref: Data-Centric Engineering, Volume 3, 2022, e34
doi: 10.1017/dce.2022.35
report-no: null
categories: cs.LG cs.CE physics.data-an
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The order/dimension of models derived on the basis of data is commonly restricted by the number of observations, or in the context of monitored systems, sensing nodes. This is particularly true for structural systems (e.g., civil or mechanical structures), which are typically high-dimensional in nature. In the scope of physics-informed machine learning, this paper proposes a framework -- termed Neural Modal ODEs -- to integrate physics-based modeling with deep learning for modeling the dynamics of monitored and high-dimensional engineered systems. Neural Ordinary Differential Equations -- Neural ODEs are exploited as the deep learning operator. In this initiating exploration, we restrict ourselves to linear or mildly nonlinear systems. We propose an architecture that couples a dynamic version of variational autoencoders with physics-informed Neural ODEs (Pi-Neural ODEs). An encoder, as a part of the autoencoder, learns the abstract mappings from the first few items of observational data to the initial values of the latent variables, which drive the learning of embedded dynamics via physics-informed Neural ODEs, imposing a modal model structure on that latent space. The decoder of the proposed model adopts the eigenmodes derived from an eigen-analysis applied to the linearized portion of a physics-based model: a process implicitly carrying the spatial relationship between degrees-of-freedom (DOFs). The framework is validated on a numerical example, and an experimental dataset of a scaled cable-stayed bridge, where the learned hybrid model is shown to outperform a purely physics-based approach to modeling. We further show the functionality of the proposed scheme within the context of virtual sensing, i.e., the recovery of generalized response quantities in unmeasured DOFs from spatially sparse data.
versions: [ { "created": "Sat, 16 Jul 2022 09:30:20 GMT", "version": "v1" }, { "created": "Wed, 30 Nov 2022 05:58:25 GMT", "version": "v2" } ]
update_date: 2022-12-01
authors_parsed: [ [ "Lai", "Zhilu", "" ], [ "Liu", "Wei", "" ], [ "Jian", "Xudong", "" ], [ "Bacsa", "Kiran", "" ], [ "Sun", "Limin", "" ], [ "Chatzi", "Eleni", "" ] ]
abstract: The order/dimension of models derived on the basis of data is commonly restricted by the number of observations, or in the context of monitored systems, sensing nodes. This is particularly true for structural systems (e.g., civil or mechanical structures), which are typically high-dimensional in nature. In the scope of physics-informed machine learning, this paper proposes a framework -- termed Neural Modal ODEs -- to integrate physics-based modeling with deep learning for modeling the dynamics of monitored and high-dimensional engineered systems. Neural Ordinary Differential Equations -- Neural ODEs are exploited as the deep learning operator. In this initiating exploration, we restrict ourselves to linear or mildly nonlinear systems. We propose an architecture that couples a dynamic version of variational autoencoders with physics-informed Neural ODEs (Pi-Neural ODEs). An encoder, as a part of the autoencoder, learns the abstract mappings from the first few items of observational data to the initial values of the latent variables, which drive the learning of embedded dynamics via physics-informed Neural ODEs, imposing a modal model structure on that latent space. The decoder of the proposed model adopts the eigenmodes derived from an eigen-analysis applied to the linearized portion of a physics-based model: a process implicitly carrying the spatial relationship between degrees-of-freedom (DOFs). The framework is validated on a numerical example, and an experimental dataset of a scaled cable-stayed bridge, where the learned hybrid model is shown to outperform a purely physics-based approach to modeling. We further show the functionality of the proposed scheme within the context of virtual sensing, i.e., the recovery of generalized response quantities in unmeasured DOFs from spatially sparse data.
id: 2001.03671
submitter: Harsh Mehta
authors: Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, Piotr Mirowski
title: Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.
versions: [ { "created": "Fri, 10 Jan 2020 21:35:28 GMT", "version": "v1" } ]
update_date: 2020-01-14
authors_parsed: [ [ "Mehta", "Harsh", "" ], [ "Artzi", "Yoav", "" ], [ "Baldridge", "Jason", "" ], [ "Ie", "Eugene", "" ], [ "Mirowski", "Piotr", "" ] ]
abstract: The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.
id: 2209.03945
submitter: Tanujit Chakraborty
authors: Lena Sasal, Tanujit Chakraborty, Abdenour Hadid
title: W-Transformers : A Wavelet-based Transformer Framework for Univariate Time Series Forecasting
comments: null
journal-ref: 2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)
doi: 10.1109/ICMLA55696.2022.00111
report-no: Pages 671-676
categories: cs.LG econ.EM eess.SP stat.ML
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Deep learning utilizing transformers has recently achieved a lot of success in many vital areas such as natural language processing, computer vision, anomaly detection, and recommendation systems, among many others. Among several merits of transformers, the ability to capture long-range temporal dependencies and interactions is desirable for time series forecasting, leading to its progress in various time series applications. In this paper, we build a transformer model for non-stationary time series. The problem is challenging yet crucially important. We present a novel framework for univariate time series representation learning based on the wavelet-based transformer encoder architecture and call it W-Transformer. The proposed W-Transformers utilize a maximal overlap discrete wavelet transformation (MODWT) to the time series data and build local transformers on the decomposed datasets to vividly capture the nonstationarity and long-range nonlinear dependencies in the time series. Evaluating our framework on several publicly available benchmark time series datasets from various domains and with diverse characteristics, we demonstrate that it performs, on average, significantly better than the baseline forecasters for short-term and long-term forecasting, even for datasets that consist of only a few hundred training samples.
versions: [ { "created": "Thu, 8 Sep 2022 17:39:38 GMT", "version": "v1" } ]
update_date: 2023-12-05
authors_parsed: [ [ "Sasal", "Lena", "" ], [ "Chakraborty", "Tanujit", "" ], [ "Hadid", "Abdenour", "" ] ]
abstract: Deep learning utilizing transformers has recently achieved a lot of success in many vital areas such as natural language processing, computer vision, anomaly detection, and recommendation systems, among many others. Among several merits of transformers, the ability to capture long-range temporal dependencies and interactions is desirable for time series forecasting, leading to its progress in various time series applications. In this paper, we build a transformer model for non-stationary time series. The problem is challenging yet crucially important. We present a novel framework for univariate time series representation learning based on the wavelet-based transformer encoder architecture and call it W-Transformer. The proposed W-Transformers utilize a maximal overlap discrete wavelet transformation (MODWT) to the time series data and build local transformers on the decomposed datasets to vividly capture the nonstationarity and long-range nonlinear dependencies in the time series. Evaluating our framework on several publicly available benchmark time series datasets from various domains and with diverse characteristics, we demonstrate that it performs, on average, significantly better than the baseline forecasters for short-term and long-term forecasting, even for datasets that consist of only a few hundred training samples.
id: 2301.13060
submitter: Sam Adam-Day
authors: Sam Adam-Day, Theodor Mihai Iliant, \.Ismail \.Ilkan Ceylan
title: Zero-One Laws of Graph Neural Networks
comments: NeurIPS '23 camera-ready version; 10 pages + references + 10 pages appendices, 7 figures
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs, by answering the question: how do GNNs behave as the number of graph nodes become very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erd\H{o}s-R\'enyi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends to either zero or to one. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs, and analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are evident already on relatively small graphs.
versions: [ { "created": "Mon, 30 Jan 2023 17:02:23 GMT", "version": "v1" }, { "created": "Fri, 10 Mar 2023 12:53:09 GMT", "version": "v2" }, { "created": "Tue, 14 Mar 2023 11:30:48 GMT", "version": "v3" }, { "created": "Wed, 24 May 2023 14:00:04 GMT", "version": "v4" }, { "created": "Tue, 24 Oct 2023 09:28:08 GMT", "version": "v5" } ]
update_date: 2023-10-25
authors_parsed: [ [ "Adam-Day", "Sam", "" ], [ "Iliant", "Theodor Mihai", "" ], [ "Ceylan", "İsmail İlkan", "" ] ]
abstract: Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs, by answering the question: how do GNNs behave as the number of graph nodes become very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erd\H{o}s-R\'enyi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends to either zero or to one. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs, and analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are evident already on relatively small graphs.
id: 1903.02885
submitter: Matthias Frey
authors: Navneet Agrawal, Matthias Frey and Slawomir Stanczak
title: A Scalable Max-Consensus Protocol For Noisy Ultra-Dense Networks
comments: v2: Revised version after reviews; slightly extended w.r.t. version submitted to SPAWC
journal-ref: 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2019, pp. 1-5
doi: 10.1109/SPAWC.2019.8815597
report-no: null
categories: cs.IT cs.SY eess.SY math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We introduce \emph{ScalableMax}, a novel communication scheme for achieving max-consensus in a network of multiple agents which harnesses the interference in the wireless channel as well as its multicast capabilities. In a sufficiently dense network, the amount of communication resources required grows logarithmically with the number of nodes, while in state-of-the-art approaches, this growth is at least linear. ScalableMax can handle additive noise and works well in a high SNR regime. For medium and low SNR, we propose the \emph{ScalableMax-EC} scheme, which extends the ideas of ScalableMax by introducing a novel error correction scheme. It achieves lower error rates at the cost of using more channel resources. However, it preserves the logarithmic growth with the number of agents in the system.
versions: [ { "created": "Thu, 7 Mar 2019 12:50:00 GMT", "version": "v1" }, { "created": "Tue, 14 May 2019 08:19:24 GMT", "version": "v2" } ]
update_date: 2021-03-01
authors_parsed: [ [ "Agrawal", "Navneet", "" ], [ "Frey", "Matthias", "" ], [ "Stanczak", "Slawomir", "" ] ]
abstract: We introduce \emph{ScalableMax}, a novel communication scheme for achieving max-consensus in a network of multiple agents which harnesses the interference in the wireless channel as well as its multicast capabilities. In a sufficiently dense network, the amount of communication resources required grows logarithmically with the number of nodes, while in state-of-the-art approaches, this growth is at least linear. ScalableMax can handle additive noise and works well in a high SNR regime. For medium and low SNR, we propose the \emph{ScalableMax-EC} scheme, which extends the ideas of ScalableMax by introducing a novel error correction scheme. It achieves lower error rates at the cost of using more channel resources. However, it preserves the logarithmic growth with the number of agents in the system.
id: 2212.13679
submitter: Hao Zhang
authors: Hao Zhang, Tingting Wu, Siyao Cheng, Jie Liu
title: CC-FedAvg: Computationally Customized Federated Averaging
comments: 16 pages, 10 figures
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Federated learning (FL) is an emerging paradigm to train model with distributed data from numerous Internet of Things (IoT) devices. It inherently assumes a uniform capacity among participants. However, due to different conditions such as differing energy budgets or executing parallel unrelated tasks, participants have diverse computational resources in practice. Participants with insufficient computation budgets must plan for the use of restricted computational resources appropriately, otherwise they would be unable to complete the entire training procedure, resulting in model performance decline. To address this issue, we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Averaging (CC-FedAvg), which allows participants to determine whether to perform traditional local training or model estimation in each round based on their current computational budgets. Both theoretical analysis and exhaustive experiments indicate that CC-FedAvg has the same convergence rate and comparable performance as FedAvg without resource constraints. Furthermore, CC-FedAvg can be viewed as a computation-efficient version of FedAvg that retains model performance while considerably lowering computation overhead.
versions: [ { "created": "Wed, 28 Dec 2022 03:32:29 GMT", "version": "v1" }, { "created": "Sat, 22 Apr 2023 09:10:56 GMT", "version": "v2" }, { "created": "Sat, 1 Jul 2023 08:36:44 GMT", "version": "v3" } ]
update_date: 2023-07-04
authors_parsed: [ [ "Zhang", "Hao", "" ], [ "Wu", "Tingting", "" ], [ "Cheng", "Siyao", "" ], [ "Liu", "Jie", "" ] ]
abstract: Federated learning (FL) is an emerging paradigm to train model with distributed data from numerous Internet of Things (IoT) devices. It inherently assumes a uniform capacity among participants. However, due to different conditions such as differing energy budgets or executing parallel unrelated tasks, participants have diverse computational resources in practice. Participants with insufficient computation budgets must plan for the use of restricted computational resources appropriately, otherwise they would be unable to complete the entire training procedure, resulting in model performance decline. To address this issue, we propose a strategy for estimating local models without computationally intensive iterations. Based on it, we propose Computationally Customized Federated Averaging (CC-FedAvg), which allows participants to determine whether to perform traditional local training or model estimation in each round based on their current computational budgets. Both theoretical analysis and exhaustive experiments indicate that CC-FedAvg has the same convergence rate and comparable performance as FedAvg without resource constraints. Furthermore, CC-FedAvg can be viewed as a computation-efficient version of FedAvg that retains model performance while considerably lowering computation overhead.
id: 1609.07228
submitter: Deng Cai
authors: Cong Fu and Deng Cai
title: EFANNA : An Extremely Fast Approximate Nearest Neighbor Search Algorithm Based on kNN Graph
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that \emph{a neighbor of a neighbor is also likely to be a neighbor}, which we refer as \emph{NN-expansion}. These methods construct a $k$-nearest neighbor ($k$NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way (\eg, random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a $k$NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on $k$NN Graph. Efanna nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
versions: [ { "created": "Fri, 23 Sep 2016 04:34:54 GMT", "version": "v1" }, { "created": "Fri, 18 Nov 2016 08:08:58 GMT", "version": "v2" }, { "created": "Sat, 3 Dec 2016 06:31:31 GMT", "version": "v3" } ]
update_date: 2016-12-06
authors_parsed: [ [ "Fu", "Cong", "" ], [ "Cai", "Deng", "" ] ]
abstract: Approximate nearest neighbor (ANN) search is a fundamental problem in many areas of data mining, machine learning and computer vision. The performance of traditional hierarchical structure (tree) based methods decreases as the dimensionality of data grows, while hashing based methods usually lack efficiency in practice. Recently, the graph based methods have drawn considerable attention. The main idea is that \emph{a neighbor of a neighbor is also likely to be a neighbor}, which we refer as \emph{NN-expansion}. These methods construct a $k$-nearest neighbor ($k$NN) graph offline. And at online search stage, these methods find candidate neighbors of a query point in some way (\eg, random selection), and then check the neighbors of these candidate neighbors for closer ones iteratively. Despite some promising results, there are mainly two problems with these approaches: 1) These approaches tend to converge to local optima. 2) Constructing a $k$NN graph is time consuming. We find that these two problems can be nicely solved when we provide a good initialization for NN-expansion. In this paper, we propose EFANNA, an extremely fast approximate nearest neighbor search algorithm based on $k$NN Graph. Efanna nicely combines the advantages of hierarchical structure based methods and nearest-neighbor-graph based methods. Extensive experiments have shown that EFANNA outperforms the state-of-art algorithms both on approximate nearest neighbor search and approximate nearest neighbor graph construction. To the best of our knowledge, EFANNA is the fastest algorithm so far both on approximate nearest neighbor graph construction and approximate nearest neighbor search. A library EFANNA based on this research is released on Github.
id: 2001.09956
submitter: Zhe Xu
authors: Zhe Xu, Yuxin Chen and Ufuk Topcu
title: Adaptive Teaching of Temporal Logic Formulas to Learners with Preferences
comments: 25 pages
journal-ref: null
doi: null
report-no: null
categories: cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Machine teaching is an algorithmic framework for teaching a target hypothesis via a sequence of examples or demonstrations. We investigate machine teaching for temporal logic formulas -- a novel and expressive hypothesis class amenable to time-related task specifications. In the context of teaching temporal logic formulas, an exhaustive search even for a myopic solution takes exponential time (with respect to the time span of the task). We propose an efficient approach for teaching parametric linear temporal logic formulas. Concretely, we derive a necessary condition for the minimal time length of a demonstration to eliminate a set of hypotheses. Utilizing this condition, we propose a myopic teaching algorithm by solving a sequence of integer programming problems. We further show that, under two notions of teaching complexity, the proposed algorithm has near-optimal performance. The results strictly generalize the previous results on teaching preference-based version space learners. We evaluate our algorithm extensively under a variety of learner types (i.e., learners with different preference models) and interactive protocols (e.g., batched and adaptive). The results show that the proposed algorithms can efficiently teach a given target temporal logic formula under various settings, and that there are significant gains of teaching efficacy when the teacher adapts to the learner's current hypotheses or uses oracles.
versions: [ { "created": "Mon, 27 Jan 2020 18:22:53 GMT", "version": "v1" } ]
update_date: 2020-01-28
authors_parsed: [ [ "Xu", "Zhe", "" ], [ "Chen", "Yuxin", "" ], [ "Topcu", "Ufuk", "" ] ]
abstract: Machine teaching is an algorithmic framework for teaching a target hypothesis via a sequence of examples or demonstrations. We investigate machine teaching for temporal logic formulas -- a novel and expressive hypothesis class amenable to time-related task specifications. In the context of teaching temporal logic formulas, an exhaustive search even for a myopic solution takes exponential time (with respect to the time span of the task). We propose an efficient approach for teaching parametric linear temporal logic formulas. Concretely, we derive a necessary condition for the minimal time length of a demonstration to eliminate a set of hypotheses. Utilizing this condition, we propose a myopic teaching algorithm by solving a sequence of integer programming problems. We further show that, under two notions of teaching complexity, the proposed algorithm has near-optimal performance. The results strictly generalize the previous results on teaching preference-based version space learners. We evaluate our algorithm extensively under a variety of learner types (i.e., learners with different preference models) and interactive protocols (e.g., batched and adaptive). The results show that the proposed algorithms can efficiently teach a given target temporal logic formula under various settings, and that there are significant gains of teaching efficacy when the teacher adapts to the learner's current hypotheses or uses oracles.
1702.02369
Daniel Dietsch
Marius Greitschus, Daniel Dietsch, Andreas Podelski
Refining Trace Abstraction using Abstract Interpretation
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by-sa/4.0/
The CEGAR loop in software model checking notoriously diverges when the abstraction refinement procedure does not derive a loop invariant. An abstraction refinement procedure based on an SMT solver is applied to a trace, i.e., a restricted form of a program (without loops). In this paper, we present a new abstraction refinement procedure that aims at circumventing this restriction whenever possible. We apply abstract interpretation to a program that we derive from the given trace. If the program contains a loop, we are guaranteed to obtain a loop invariant. We call an SMT solver only in the case where the abstract interpretation returns an indefinite answer. That is, the idea is to use abstract interpretation and an SMT solver in tandem. An experimental evaluation in the setting of trace abstraction indicates the practical potential of this idea.
[ { "created": "Wed, 8 Feb 2017 11:04:12 GMT", "version": "v1" } ]
2017-02-09
[ [ "Greitschus", "Marius", "" ], [ "Dietsch", "Daniel", "" ], [ "Podelski", "Andreas", "" ] ]
The CEGAR loop in software model checking notoriously diverges when the abstraction refinement procedure does not derive a loop invariant. An abstraction refinement procedure based on an SMT solver is applied to a trace, i.e., a restricted form of a program (without loops). In this paper, we present a new abstraction refinement procedure that aims at circumventing this restriction whenever possible. We apply abstract interpretation to a program that we derive from the given trace. If the program contains a loop, we are guaranteed to obtain a loop invariant. We call an SMT solver only in the case where the abstract interpretation returns an indefinite answer. That is, the idea is to use abstract interpretation and an SMT solver in tandem. An experimental evaluation in the setting of trace abstraction indicates the practical potential of this idea.
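As an illustration of the abstract-interpretation half of the tandem, the sketch below runs a textbook interval analysis with widening (plus one narrowing pass) on the loop `i = init; while i < bound: i += 1` and derives the loop invariant init <= i <= bound. This is a minimal didactic sketch of interval analysis in general, not the analysis implemented in the paper's tool:

```python
def interval_join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    # keep a stable bound, jump unstable bounds to infinity
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def analyze_counter_loop(init, bound):
    """Iterate the abstract transformer of the loop body with widening
    until a fixed point; the result over-approximates every reachable
    value of i at the loop head, i.e. it is a loop invariant."""
    inv = (init, init)                               # state at loop head
    while True:
        body_in = (inv[0], min(inv[1], bound - 1))   # filter: i < bound
        body_out = (body_in[0] + 1, body_in[1] + 1)  # transformer: i += 1
        new = widen(inv, interval_join((init, init), body_out))
        if new == inv:
            break
        inv = new
    # one narrowing pass without widening recovers the precise bound
    body_in = (inv[0], min(inv[1], bound - 1))
    return interval_join((init, init), (body_in[0] + 1, body_in[1] + 1))

assert analyze_counter_loop(0, 10) == (0, 10)
```

On this loop the SMT solver would never be consulted: the interval domain alone proves the invariant 0 <= i <= 10.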
2101.00531
Baiming Chen
Baiming Chen, Zuxin Liu, Jiacheng Zhu, Mengdi Xu, Wenhao Ding, Ding Zhao
Context-Aware Safe Reinforcement Learning for Non-Stationary Environments
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety is a critical concern when deploying reinforcement learning agents for realistic tasks. Recently, safe reinforcement learning algorithms have been developed to optimize the agent's performance while avoiding violations of safety constraints. However, few studies have addressed the non-stationary disturbances in the environments, which may cause catastrophic outcomes. In this paper, we propose the context-aware safe reinforcement learning (CASRL) method, a meta-learning framework to realize safe adaptation in non-stationary environments. We use a probabilistic latent variable model to achieve fast inference of the posterior environment transition distribution given the context data. Safety constraints are then evaluated with uncertainty-aware trajectory sampling. The high cost of safety violations leads to the rareness of unsafe records in the dataset. We address this issue by enabling prioritized sampling during model training and formulating prior safety constraints with domain knowledge during constrained planning. The algorithm is evaluated in realistic safety-critical environments with non-stationary disturbances. Results show that the proposed algorithm significantly outperforms existing baselines in terms of safety and robustness.
[ { "created": "Sat, 2 Jan 2021 23:52:22 GMT", "version": "v1" } ]
2021-01-05
[ [ "Chen", "Baiming", "" ], [ "Liu", "Zuxin", "" ], [ "Zhu", "Jiacheng", "" ], [ "Xu", "Mengdi", "" ], [ "Ding", "Wenhao", "" ], [ "Zhao", "Ding", "" ] ]
Safety is a critical concern when deploying reinforcement learning agents for realistic tasks. Recently, safe reinforcement learning algorithms have been developed to optimize the agent's performance while avoiding violations of safety constraints. However, few studies have addressed the non-stationary disturbances in the environments, which may cause catastrophic outcomes. In this paper, we propose the context-aware safe reinforcement learning (CASRL) method, a meta-learning framework to realize safe adaptation in non-stationary environments. We use a probabilistic latent variable model to achieve fast inference of the posterior environment transition distribution given the context data. Safety constraints are then evaluated with uncertainty-aware trajectory sampling. The high cost of safety violations leads to the rareness of unsafe records in the dataset. We address this issue by enabling prioritized sampling during model training and formulating prior safety constraints with domain knowledge during constrained planning. The algorithm is evaluated in realistic safety-critical environments with non-stationary disturbances. Results show that the proposed algorithm significantly outperforms existing baselines in terms of safety and robustness.
2002.10733
Alexander Levine
Alexander Levine, Soheil Feizi
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks
NeurIPS 2020
null
null
null
cs.LG cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Patch adversarial attacks on images, in which the attacker can distort pixels within a region of bounded size, are an important threat model since they provide a quantitative model for physical adversarial attacks. In this paper, we introduce a certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist. Our method is related to the broad class of randomized smoothing robustness schemes which provide high-confidence probabilistic robustness certificates. By exploiting the fact that patch attacks are more constrained than general sparse attacks, we derive meaningfully large robustness certificates against them. Additionally, in contrast to smoothing-based defenses against L_p and sparse attacks, our defense method against patch attacks is de-randomized, yielding improved, deterministic certificates. Compared to the existing patch certification method proposed by Chiang et al. (2020), which relies on interval bound propagation, our method can be trained significantly faster, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at ImageNet scale. For example, for a 5-by-5 patch attack on CIFAR-10, our method achieves up to around 57.6% certified accuracy (with a classifier with around 83.8% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy). Our results effectively establish a new state of the art in certifiable defense against patch attacks on CIFAR-10 and ImageNet. Code is available at https://github.com/alevine0/patchSmoothing.
[ { "created": "Tue, 25 Feb 2020 08:39:46 GMT", "version": "v1" }, { "created": "Tue, 8 Dec 2020 19:09:10 GMT", "version": "v2" }, { "created": "Fri, 8 Jan 2021 06:36:56 GMT", "version": "v3" } ]
2021-01-11
[ [ "Levine", "Alexander", "" ], [ "Feizi", "Soheil", "" ] ]
Patch adversarial attacks on images, in which the attacker can distort pixels within a region of bounded size, are an important threat model since they provide a quantitative model for physical adversarial attacks. In this paper, we introduce a certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist. Our method is related to the broad class of randomized smoothing robustness schemes which provide high-confidence probabilistic robustness certificates. By exploiting the fact that patch attacks are more constrained than general sparse attacks, we derive meaningfully large robustness certificates against them. Additionally, in contrast to smoothing-based defenses against L_p and sparse attacks, our defense method against patch attacks is de-randomized, yielding improved, deterministic certificates. Compared to the existing patch certification method proposed by Chiang et al. (2020), which relies on interval bound propagation, our method can be trained significantly faster, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at ImageNet scale. For example, for a 5-by-5 patch attack on CIFAR-10, our method achieves up to around 57.6% certified accuracy (with a classifier with around 83.8% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy). Our results effectively establish a new state of the art in certifiable defense against patch attacks on CIFAR-10 and ImageNet. Code is available at https://github.com/alevine0/patchSmoothing.
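The derandomized certificate has a simple counting core, sketched below under simplifying assumptions (column bands at every offset, ties ignored): a patch of width m can intersect at most m + band_size - 1 bands, so a vote margin above twice that bound certifies the smoothed prediction. The vote vector and parameters are illustrative, not the paper's exact procedure:

```python
import numpy as np

def certify_patch(votes, band_size, patch_width):
    """votes[i] is the base classifier's class on the i-th column band
    (-1 = abstain). A patch of width m intersects at most
    delta = m + band_size - 1 bands, so the majority class is certified
    when it leads every other class by more than 2 * delta votes."""
    delta = patch_width + band_size - 1
    classes, counts = np.unique(votes[votes >= 0], return_counts=True)
    order = np.argsort(counts)[::-1]
    top = counts[order[0]]
    runner_up = counts[order[1]] if len(order) > 1 else 0
    return int(classes[order[0]]), bool(top - runner_up > 2 * delta)

votes = np.array([3] * 20 + [1] * 4 + [-1] * 2)   # 26 bands, mostly class 3
pred, certified = certify_patch(votes, band_size=2, patch_width=3)
assert pred == 3 and certified                    # margin 16 > 2 * delta = 8
```

The determinism is the point: because every band is evaluated exactly once, the margin check is exact rather than a probabilistic bound.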
2012.01940
Ben Sattelberg
Ben Sattelberg, Renzo Cavalieri, Michael Kirby, Chris Peterson, Ross Beveridge
Locally Linear Attributes of ReLU Neural Networks
18 pages, 12 figures, 2 tables, submitted to SIMODS
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A ReLU neural network determines a continuous piecewise linear map from an input space to an output space. The weights of the neural network determine a decomposition of the input space into convex polytopes, and on each of these polytopes the network can be described by a single affine mapping. The structure of the decomposition, together with the affine map attached to each polytope, can be analyzed to investigate the behavior of the associated neural network.
[ { "created": "Mon, 30 Nov 2020 19:31:23 GMT", "version": "v1" } ]
2020-12-04
[ [ "Sattelberg", "Ben", "" ], [ "Cavalieri", "Renzo", "" ], [ "Kirby", "Michael", "" ], [ "Peterson", "Chris", "" ], [ "Beveridge", "Ross", "" ] ]
A ReLU neural network determines a continuous piecewise linear map from an input space to an output space. The weights of the neural network determine a decomposition of the input space into convex polytopes, and on each of these polytopes the network can be described by a single affine mapping. The structure of the decomposition, together with the affine map attached to each polytope, can be analyzed to investigate the behavior of the associated neural network.
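The affine map attached to an input's polytope can be recovered directly from its activation pattern: fixing which ReLUs are active at x freezes the network into a single map A x + b that is valid on the whole polytope containing x. A minimal sketch with a hypothetical toy two-layer network:

```python
import numpy as np

def local_affine_map(weights, biases, x):
    """Compose the layers with each ReLU frozen to x's activation
    pattern, yielding the single affine map A z + b that the network
    computes on the convex polytope containing x."""
    A, b = np.eye(x.shape[0]), np.zeros(x.shape[0])
    h = x.astype(float)
    for i, (W, c) in enumerate(zip(weights, biases)):
        A, b, h = W @ A, W @ b + c, W @ h + c
        if i < len(weights) - 1:              # ReLU on hidden layers only
            mask = (h > 0).astype(float)      # activation pattern at x
            A, b, h = mask[:, None] * A, mask * b, mask * h
    return A, b

# sanity check: on x's polytope the affine map equals the network
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
biases = [rng.standard_normal(4), rng.standard_normal(2)]
x = rng.standard_normal(3)
A, b = local_affine_map(weights, biases, x)
y = weights[1] @ np.maximum(weights[0] @ x + biases[0], 0.0) + biases[1]
assert np.allclose(A @ x + b, y)
```

Enumerating the distinct activation patterns reachable over a region enumerates the polytopes of the decomposition restricted to that region.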
2303.12238
Yanshen Sun
Yanshen Sun, Kaiqun Fu, and Chang-Tien Lu
DG-Trans: Dual-level Graph Transformer for Spatiotemporal Incident Impact Prediction on Traffic Networks
null
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
The prompt estimation of traffic incident impacts can guide commuters in their trip planning and improve the resilience of transportation agencies' decision-making. However, this task is more challenging than node-level and graph-level forecasting, as it requires extracting the anomaly subgraph or sub-time-series from dynamic graphs. In this paper, we propose DG-Trans, a novel traffic incident impact prediction framework that foresees the impact of traffic incidents through dynamic graph learning. The proposed framework contains a dual-level spatial transformer and an importance-score-based temporal transformer, and its performance is validated on two newly constructed benchmark datasets. The dual-level spatial transformer removes unnecessary edges between nodes to isolate the affected subgraph from the other nodes. Meanwhile, the importance-score-based temporal transformer identifies abnormal changes in node features, causing the predictions to rely more on measurement changes after the incident occurs. DG-Trans is therefore equipped with dual abilities: it extracts spatiotemporal dependencies and identifies anomaly nodes affected by incidents while removing noise introduced by benign nodes. Extensive experiments on real-world datasets verify that DG-Trans outperforms existing state-of-the-art methods, especially in extracting spatiotemporal dependency patterns and predicting traffic incident impacts. It offers promising potential for traffic incident management systems.
[ { "created": "Tue, 21 Mar 2023 23:44:09 GMT", "version": "v1" } ]
2023-03-23
[ [ "Sun", "Yanshen", "" ], [ "Fu", "Kaiqun", "" ], [ "Lu", "Chang-Tien", "" ] ]
The prompt estimation of traffic incident impacts can guide commuters in their trip planning and improve the resilience of transportation agencies' decision-making. However, this task is more challenging than node-level and graph-level forecasting, as it requires extracting the anomaly subgraph or sub-time-series from dynamic graphs. In this paper, we propose DG-Trans, a novel traffic incident impact prediction framework that foresees the impact of traffic incidents through dynamic graph learning. The proposed framework contains a dual-level spatial transformer and an importance-score-based temporal transformer, and its performance is validated on two newly constructed benchmark datasets. The dual-level spatial transformer removes unnecessary edges between nodes to isolate the affected subgraph from the other nodes. Meanwhile, the importance-score-based temporal transformer identifies abnormal changes in node features, causing the predictions to rely more on measurement changes after the incident occurs. DG-Trans is therefore equipped with dual abilities: it extracts spatiotemporal dependencies and identifies anomaly nodes affected by incidents while removing noise introduced by benign nodes. Extensive experiments on real-world datasets verify that DG-Trans outperforms existing state-of-the-art methods, especially in extracting spatiotemporal dependency patterns and predicting traffic incident impacts. It offers promising potential for traffic incident management systems.
2211.11798
Rafal Kocielnik
Rafal Kocielnik, Sara Kangaslahti, Shrimai Prabhumoye, Meena Hari, R. Michael Alvarez, Anima Anandkumar
Can You Label Less by Using Out-of-Domain Data? Active & Transfer Learning with Few-shot Instructions
Accepted to NeurIPS Workshop on Transfer Learning for Natural Language Processing, 2022, New Orleans
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from over-fitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimal labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer, achieving a mean AUC gain of 10.5% compared to no transfer with a large 22b-parameter PLM. We further show that annotation of just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort (26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLM's initial performance on the target-domain task.
[ { "created": "Mon, 21 Nov 2022 19:03:31 GMT", "version": "v1" } ]
2022-11-23
[ [ "Kocielnik", "Rafal", "" ], [ "Kangaslahti", "Sara", "" ], [ "Prabhumoye", "Shrimai", "" ], [ "Hari", "Meena", "" ], [ "Alvarez", "R. Michael", "" ], [ "Anandkumar", "Anima", "" ] ]
Labeling social-media data for custom dimensions of toxicity and social bias is challenging and labor-intensive. Existing transfer and active learning approaches meant to reduce annotation effort require fine-tuning, which suffers from over-fitting to noise and can cause domain shift with small sample sizes. In this work, we propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning. ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information from existing pre-labeled datasets (source-domain task) with minimal labeling effort on unlabeled target data (target-domain task). Our strategy can yield positive transfer, achieving a mean AUC gain of 10.5% compared to no transfer with a large 22b-parameter PLM. We further show that annotation of just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort (26% drop in gain between 100 and 2000 annotated examples). Finally, we find that not all transfer scenarios yield a positive gain, which seems related to the PLM's initial performance on the target-domain task.
1711.01894
Erwan Le Merrer
Erwan Le Merrer, Patrick Perez, Gilles Tr\'edan
Adversarial Frontier Stitching for Remote Neural Network Watermarking
To appear in the journal of Neural Computing and Applications, 2019
Neural Computing and Applications, 2020, 32(13), 9233-9244
10.1007/s00521-019-04434-z
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The state-of-the-art performance of deep learning models comes at a high cost for companies and institutions, due to the tedious data collection and the heavy processing requirements. Recently, [35, 22] proposed to watermark convolutional neural networks for image classification, by embedding information into their weights. While this is clear progress towards model protection, this technique solely allows for extracting the watermark from a network that one accesses locally and entirely. Instead, we aim at allowing the extraction of the watermark from a neural network (or any other machine learning model) that is operated remotely, and available through a service API. To this end, we propose to mark the model's action itself, slightly tweaking its decision frontiers so that a set of specific queries convey the desired information. In the present paper, we formally introduce the problem and propose a novel zero-bit watermarking algorithm that makes use of adversarial model examples. While limiting the loss of performance of the protected model, this algorithm allows subsequent extraction of the watermark using only a few queries. We experimented with the approach on three neural networks designed for image classification, in the context of the MNIST digit recognition task.
[ { "created": "Mon, 6 Nov 2017 13:57:08 GMT", "version": "v1" }, { "created": "Wed, 7 Aug 2019 07:22:53 GMT", "version": "v2" } ]
2021-04-14
[ [ "Merrer", "Erwan Le", "" ], [ "Perez", "Patrick", "" ], [ "Trédan", "Gilles", "" ] ]
The state-of-the-art performance of deep learning models comes at a high cost for companies and institutions, due to the tedious data collection and the heavy processing requirements. Recently, [35, 22] proposed to watermark convolutional neural networks for image classification, by embedding information into their weights. While this is clear progress towards model protection, this technique solely allows for extracting the watermark from a network that one accesses locally and entirely. Instead, we aim at allowing the extraction of the watermark from a neural network (or any other machine learning model) that is operated remotely, and available through a service API. To this end, we propose to mark the model's action itself, slightly tweaking its decision frontiers so that a set of specific queries convey the desired information. In the present paper, we formally introduce the problem and propose a novel zero-bit watermarking algorithm that makes use of adversarial model examples. While limiting the loss of performance of the protected model, this algorithm allows subsequent extraction of the watermark using only a few queries. We experimented with the approach on three neural networks designed for image classification, in the context of the MNIST digit recognition task.
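The extraction step reduces to querying the remote API on a secret key set and checking the marked labels, as in this toy sketch where a lookup table stands in for the remote model; in the paper the key set is built from adversarial and "false adversarial" examples near the tweaked decision frontier, which this sketch does not construct:

```python
def verify_watermark(remote_predict, key_set, max_errors=1):
    """Zero-bit watermark check: query the remote model on a secret key
    set of (input, marked label) pairs and accept ownership if the
    marked labels are reproduced up to a small error tolerance."""
    errors = sum(remote_predict(x) != y for x, y in key_set)
    return errors <= max_errors

# toy 'remote' models: dict lookups standing in for a service API
key_set = [(0, "a"), (1, "b"), (2, "a"), (3, "b")]
marked_model = {0: "a", 1: "b", 2: "a", 3: "b"}.get
unrelated_model = {0: "b", 1: "a", 2: "b", 3: "b"}.get
assert verify_watermark(marked_model, key_set)
assert not verify_watermark(unrelated_model, key_set)
```

Only the owner knows the key set, so verification costs a handful of API queries and reveals nothing about the model's weights.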
2001.07155
Vladimir Berikov
Vladimir Berikov
Heterogeneous Transfer Learning in Ensemble Clustering
10 pages, 5 figures
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes an ensemble clustering method using a transfer learning approach. We consider a clustering problem in which, in addition to the data under consideration, "similar" labeled data are available. The datasets can be described with different features. The method is based on constructing meta-features, which describe structural characteristics of the data, and transferring them from the source to the target domain. An experimental study of the method using Monte Carlo modeling has confirmed its efficiency. In comparison with other similar methods, the proposed one is able to work under arbitrary feature descriptions of the source and target domains and has lower complexity.
[ { "created": "Mon, 20 Jan 2020 16:03:38 GMT", "version": "v1" } ]
2020-01-22
[ [ "Berikov", "Vladimir", "" ] ]
This work proposes an ensemble clustering method using a transfer learning approach. We consider a clustering problem in which, in addition to the data under consideration, "similar" labeled data are available. The datasets can be described with different features. The method is based on constructing meta-features, which describe structural characteristics of the data, and transferring them from the source to the target domain. An experimental study of the method using Monte Carlo modeling has confirmed its efficiency. In comparison with other similar methods, the proposed one is able to work under arbitrary feature descriptions of the source and target domains and has lower complexity.
2003.10325
Antoine Boutet
Claude Rosin Ngueveu (UQAM), Antoine Boutet (PRIVATICS), Carole Frindel (CREATIS), S\'ebastien Gambs (UQAM), Th\'eo Jourdan (CREATIS, PRIVATICS), Claude Rosin
DYSAN: Dynamically sanitizing motion sensor data against sensitive inferences through adversarial networks
null
null
null
null
cs.CR cs.AI cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the widespread adoption of the quantified self movement, an increasing number of users rely on mobile applications to monitor their physical activity through their smartphones. Granting applications direct access to sensor data exposes users to privacy risks. Indeed, these motion sensor data are usually transmitted to cloud-hosted analytics applications that leverage machine learning models to provide users with feedback on their health. However, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. In this paper, we present DySan, a privacy-preserving framework to sanitize motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy on physical activity monitoring (i.e., maintaining data utility). To ensure a good trade-off between utility and privacy, DySan leverages the Generative Adversarial Network (GAN) framework to sanitize the sensor data. More precisely, by learning several networks in a competitive manner, DySan is able to build models that sanitize motion data against inferences on a specified sensitive attribute (e.g., gender) while maintaining a high accuracy on activity recognition. In addition, DySan dynamically selects the sanitizing model that maximizes privacy according to the incoming data. Experiments conducted on real datasets demonstrate that DySan can drastically limit gender inference accuracy to 47% while only reducing the accuracy of activity recognition by 3%.
[ { "created": "Mon, 23 Mar 2020 15:16:43 GMT", "version": "v1" }, { "created": "Thu, 8 Oct 2020 13:57:46 GMT", "version": "v2" } ]
2020-10-09
[ [ "Ngueveu", "Claude Rosin", "", "UQAM" ], [ "Boutet", "Antoine", "", "PRIVATICS" ], [ "Frindel", "Carole", "", "CREATIS" ], [ "Gambs", "Sébastien", "", "UQAM" ], [ "Jourdan", "Théo", "", "CREATIS,\n PRIVATICS" ], [ "Rosin", "Claude", "" ] ]
With the widespread adoption of the quantified self movement, an increasing number of users rely on mobile applications to monitor their physical activity through their smartphones. Granting applications direct access to sensor data exposes users to privacy risks. Indeed, these motion sensor data are usually transmitted to cloud-hosted analytics applications that leverage machine learning models to provide users with feedback on their health. However, nothing prevents the service provider from inferring private and sensitive information about a user, such as health or demographic attributes. In this paper, we present DySan, a privacy-preserving framework to sanitize motion sensor data against unwanted sensitive inferences (i.e., improving privacy) while limiting the loss of accuracy on physical activity monitoring (i.e., maintaining data utility). To ensure a good trade-off between utility and privacy, DySan leverages the Generative Adversarial Network (GAN) framework to sanitize the sensor data. More precisely, by learning several networks in a competitive manner, DySan is able to build models that sanitize motion data against inferences on a specified sensitive attribute (e.g., gender) while maintaining a high accuracy on activity recognition. In addition, DySan dynamically selects the sanitizing model that maximizes privacy according to the incoming data. Experiments conducted on real datasets demonstrate that DySan can drastically limit gender inference accuracy to 47% while only reducing the accuracy of activity recognition by 3%.
2305.18888
Zhiyu Liang
Zhiyu Liang, Jianfeng Zhang, Chen Liang, Hongzhi Wang, Zheng Liang, Lujia Pan
Contrastive Shapelet Learning for Unsupervised Multivariate Time Series Representation Learning
null
PVLDB, 17(3): 386-399, 2023
10.14778/3632093.3632103
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have shown great promise in unsupervised representation learning (URL) for multivariate time series, because URL can learn generalizable representations for many downstream tasks without using inaccessible labels. However, existing approaches usually adopt models originally designed for other domains (e.g., computer vision) to encode the time series data and rely on strong assumptions to design learning objectives, which limits their ability to perform well. To deal with these problems, we propose a novel URL framework for multivariate time series that learns a time-series-specific shapelet-based representation through a popular contrastive learning paradigm. To the best of our knowledge, this is the first work to explore shapelet-based embeddings for unsupervised general-purpose representation learning. A unified shapelet-based encoder and a novel learning objective with multi-grained contrasting and multi-scale alignment are specifically designed to achieve our goal, and a data augmentation library is employed to improve generalization. We conduct extensive experiments using tens of real-world datasets to assess the representation quality on many downstream tasks, including classification, clustering, and anomaly detection. The results demonstrate the superiority of our method against not only URL competitors, but also techniques specially designed for downstream tasks. Our code has been made publicly available at https://github.com/real2fish/CSL.
[ { "created": "Tue, 30 May 2023 09:31:57 GMT", "version": "v1" }, { "created": "Thu, 1 Jun 2023 17:16:55 GMT", "version": "v2" }, { "created": "Fri, 2 Jun 2023 04:23:40 GMT", "version": "v3" } ]
2024-04-09
[ [ "Liang", "Zhiyu", "" ], [ "Zhang", "Jianfeng", "" ], [ "Liang", "Chen", "" ], [ "Wang", "Hongzhi", "" ], [ "Liang", "Zheng", "" ], [ "Pan", "Lujia", "" ] ]
Recent studies have shown great promise in unsupervised representation learning (URL) for multivariate time series, because URL can learn generalizable representations for many downstream tasks without using inaccessible labels. However, existing approaches usually adopt models originally designed for other domains (e.g., computer vision) to encode the time series data and rely on strong assumptions to design learning objectives, which limits their ability to perform well. To deal with these problems, we propose a novel URL framework for multivariate time series that learns a time-series-specific shapelet-based representation through a popular contrastive learning paradigm. To the best of our knowledge, this is the first work to explore shapelet-based embeddings for unsupervised general-purpose representation learning. A unified shapelet-based encoder and a novel learning objective with multi-grained contrasting and multi-scale alignment are specifically designed to achieve our goal, and a data augmentation library is employed to improve generalization. We conduct extensive experiments using tens of real-world datasets to assess the representation quality on many downstream tasks, including classification, clustering, and anomaly detection. The results demonstrate the superiority of our method against not only URL competitors, but also techniques specially designed for downstream tasks. Our code has been made publicly available at https://github.com/real2fish/CSL.
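The shapelet-based representation itself is easy to sketch: each feature is the minimum sliding-window distance between the series and one shapelet. In the paper the shapelets are learned contrastively inside the encoder; in this hedged sketch they are fixed toy arrays and the series is univariate:

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between the shapelet and any
    equal-length sliding window of the series."""
    m = len(shapelet)
    return min(np.linalg.norm(series[i:i + m] - shapelet)
               for i in range(len(series) - m + 1))

def shapelet_embedding(series, shapelets):
    """Shapelet-based representation: one distance feature per shapelet."""
    return np.array([shapelet_distance(series, s) for s in shapelets])

series = np.array([0.0, 0.1, 1.0, 2.0, 1.0, 0.1, 0.0])
shapelets = [np.array([1.0, 2.0, 1.0]), np.array([0.0, 0.5, 0.0])]
emb = shapelet_embedding(series, shapelets)
assert emb[0] == 0.0    # the spike shapelet occurs exactly in the series
```

A downstream classifier or clustering algorithm then operates on these fixed-length embeddings instead of the raw variable-length series.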
1803.11245
Travis Gagie
Christina Boucher, Travis Gagie, Alan Kuhnle, Ben Langmead, Giovanni Manzini and Taher Mun
Prefix-Free Parsing for Building Big BWTs
Preliminary version appeared at WABI '18; full version submitted to a journal
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
High-throughput sequencing technologies have led to explosive growth of genomic databases, one of which will soon reach hundreds of terabytes. For many applications we want to build and store indexes of these databases, but constructing such indexes is a challenge. Fortunately, many of these genomic databases are highly repetitive---a characteristic that can be exploited to ease the computation of the Burrows-Wheeler Transform (BWT), which underlies many popular indexes. In this paper, we introduce a preprocessing algorithm, referred to as {\em prefix-free parsing}, that takes a text $T$ as input, and in one pass generates a dictionary $D$ and a parse $P$ of $T$ with the property that the BWT of $T$ can be constructed from $D$ and $P$ using workspace proportional to their total size and $O(|T|)$ time. Our experiments show that $D$ and $P$ are significantly smaller than $T$ in practice, and thus can fit in a reasonable internal memory even when $T$ is very large. In particular, we show that with prefix-free parsing we can build a 131-megabyte run-length compressed FM-index (restricted to support only counting and not locating) for 1000 copies of human chromosome 19 in 2 hours using 21 gigabytes of memory, suggesting that we can build a 6.73-gigabyte index for 1000 complete human-genome haplotypes in approximately 102 hours using about 1 terabyte of memory.
[ { "created": "Thu, 29 Mar 2018 20:36:11 GMT", "version": "v1" }, { "created": "Fri, 13 Apr 2018 17:07:15 GMT", "version": "v2" }, { "created": "Mon, 14 May 2018 15:05:06 GMT", "version": "v3" }, { "created": "Fri, 16 Nov 2018 16:35:53 GMT", "version": "v4" } ]
2018-11-19
[ [ "Boucher", "Christina", "" ], [ "Gagie", "Travis", "" ], [ "Kuhnle", "Alan", "" ], [ "Langmead", "Ben", "" ], [ "Manzini", "Giovanni", "" ], [ "Mun", "Taher", "" ] ]
High-throughput sequencing technologies have led to explosive growth of genomic databases, one of which will soon reach hundreds of terabytes. For many applications we want to build and store indexes of these databases, but constructing such indexes is a challenge. Fortunately, many of these genomic databases are highly repetitive---a characteristic that can be exploited to ease the computation of the Burrows-Wheeler Transform (BWT), which underlies many popular indexes. In this paper, we introduce a preprocessing algorithm, referred to as {\em prefix-free parsing}, that takes a text $T$ as input, and in one pass generates a dictionary $D$ and a parse $P$ of $T$ with the property that the BWT of $T$ can be constructed from $D$ and $P$ using workspace proportional to their total size and $O(|T|)$ time. Our experiments show that $D$ and $P$ are significantly smaller than $T$ in practice, and thus can fit in a reasonable internal memory even when $T$ is very large. In particular, we show that with prefix-free parsing we can build a 131-megabyte run-length compressed FM-index (restricted to support only counting and not locating) for 1000 copies of human chromosome 19 in 2 hours using 21 gigabytes of memory, suggesting that we can build a 6.73-gigabyte index for 1000 complete human-genome haplotypes in approximately 102 hours using about 1 terabyte of memory.
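A toy version of prefix-free parsing fits in a few lines: slide a width-w window over the sentinel-padded text, end a phrase whenever a Karp-Rabin-style hash of the window is 0 mod p, and let consecutive phrases overlap by w characters so that $D$ and $P$ together determine $T$. The hash and the parameters below are illustrative choices, not the paper's implementation (which uses a rolling hash and handles very large inputs):

```python
def karp_rabin(s, p):
    # deterministic polynomial hash (Python's built-in hash() is salted)
    h = 0
    for ch in s:
        h = (h * 256 + ord(ch)) % p
    return h

def prefix_free_parse(text, w=2, p=7):
    """Cut the sentinel-padded text at every 'trigger' window whose
    hash is 0 mod p; consecutive phrases overlap by w characters."""
    t = "$" * w + text + "$" * w
    phrases, start = [], 0
    for i in range(w, len(t) - w):
        if karp_rabin(t[i:i + w], p) == 0:
            phrases.append(t[start:i + w])   # phrase ends w past the cut
            start = i                        # next phrase starts at the cut
    phrases.append(t[start:])                # final phrase runs to the end
    dictionary = sorted(set(phrases))
    parse = [dictionary.index(ph) for ph in phrases]
    return dictionary, parse

text = "mississippi" * 3
D, P = prefix_free_parse(text)
# reconstruct T from D and P: glue phrases with their w-character overlap
rebuilt = D[P[0]]
for pid in P[1:]:
    rebuilt += D[pid][2:]
assert rebuilt.strip("$") == text
```

On repetitive text the same phrases recur, so the dictionary stays small while the parse compresses the repetition, which is what makes the subsequent BWT construction fit in memory.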
1108.2482
Saurabh Shivale Mr
Saurabh Anandrao Shivale
Cryptovirology: Virus Approach
null
International Journal of Network Security & Its Applications (IJNSA), Vol.3, No.4, July 2011
10.5121/ijnsa.2011.3404
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditionally, "Cryptography" is a benediction to information processing and communications: it helps people store information securely and communicate privately over long distances. Cryptovirology is the study of applications of cryptography to building malicious software. It investigates how modern cryptographic tools and paradigms can be used to strengthen, develop, and improve new malicious-software attacks. Cryptovirology attacks have been categorized as follows: first, give malware enhanced privacy and make it more robust against reverse engineering; second, give the attacker enhanced anonymity while communicating with deployed malware. This paper presents the idea of "Cryptovirology", which introduces a twist on how cryptography can also be used offensively. Being offensive means it can be used to mount extortion-based attacks that cause loss of access to information, loss of confidentiality, and information leakage: tasks which cryptography usually prevents. We also analyze the threats and attacks that misuse of cryptography can cause when combined with fraudulent software (viruses, Trojans). Public-key cryptography is essential for attacks based on cryptovirology. This paper also suggests some countermeasures and mechanisms to cope with and prevent such attacks. Even if the attacker's actions on the host machine are being monitored, it still cannot be proven beyond reasonable doubt that he or she is the attacker; it is an "originator-concealing attack". Evidence should be collected from the author's own system which was used for the attack. These attacks have implications for how the use of cryptographic tools and techniques should be audited and managed in general-purpose computing environments, and imply that access to cryptographic tools should be under careful control of the system (such as API routines).
[ { "created": "Thu, 11 Aug 2011 18:37:05 GMT", "version": "v1" } ]
2011-08-12
[ [ "Shivale", "Saurabh Anandrao", "" ] ]
Traditionally, "Cryptography" is a benediction to information processing and communications: it helps people store information securely and communicate privately over long distances. Cryptovirology is the study of applications of cryptography to building malicious software. It investigates how modern cryptographic tools and paradigms can be used to strengthen, develop, and improve new malicious-software attacks. Cryptovirology attacks have been categorized as follows: first, give malware enhanced privacy and make it more robust against reverse engineering; second, give the attacker enhanced anonymity while communicating with deployed malware. This paper presents the idea of "Cryptovirology", which introduces a twist on how cryptography can also be used offensively. Being offensive means it can be used to mount extortion-based attacks that cause loss of access to information, loss of confidentiality, and information leakage: tasks which cryptography usually prevents. We also analyze the threats and attacks that misuse of cryptography can cause when combined with fraudulent software (viruses, Trojans). Public-key cryptography is essential for attacks based on cryptovirology. This paper also suggests some countermeasures and mechanisms to cope with and prevent such attacks. Even if the attacker's actions on the host machine are being monitored, it still cannot be proven beyond reasonable doubt that he or she is the attacker; it is an "originator-concealing attack". Evidence should be collected from the author's own system which was used for the attack. These attacks have implications for how the use of cryptographic tools and techniques should be audited and managed in general-purpose computing environments, and imply that access to cryptographic tools should be under careful control of the system (such as API routines).
1808.03405
Wenhan Luo
Wenhan Luo, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, Yizhou Wang
End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning
To appear in Transactions on Pattern Analysis and Machine Intelligence. arXiv admin note: text overlap with arXiv:1705.10561
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study active object tracking, where a tracker takes visual observations (i.e., frame sequences) as input and produces the corresponding camera control signals as output (e.g., move forward, turn left, etc.). Conventional methods tackle tracking and camera control tasks separately, and the resulting system is difficult to tune jointly. These methods also require significant human effort for image labeling and expensive trial-and-error system tuning in the real world. To address these issues, we propose, in this paper, an end-to-end solution via deep reinforcement learning. A ConvNet-LSTM function approximator is adopted for the direct frame-to-action prediction. We further propose an environment augmentation technique and a customized reward function, which are crucial for successful training. The tracker trained in simulators (ViZDoom and Unreal Engine) demonstrates good generalization behaviors in the case of unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. The system is robust and can restore tracking after occasional loss of the target being tracked. We also find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios. We demonstrate successful examples of such transfer, via experiments over the VOT dataset and the deployment of a real-world robot using the proposed active tracker trained in simulation.
[ { "created": "Fri, 10 Aug 2018 04:04:19 GMT", "version": "v1" }, { "created": "Tue, 12 Feb 2019 09:20:10 GMT", "version": "v2" } ]
2019-02-14
[ [ "Luo", "Wenhan", "" ], [ "Sun", "Peng", "" ], [ "Zhong", "Fangwei", "" ], [ "Liu", "Wei", "" ], [ "Zhang", "Tong", "" ], [ "Wang", "Yizhou", "" ] ]
We study active object tracking, where a tracker takes visual observations (i.e., frame sequences) as input and produces the corresponding camera control signals as output (e.g., move forward, turn left, etc.). Conventional methods tackle tracking and camera control tasks separately, and the resulting system is difficult to tune jointly. These methods also require significant human effort for image labeling and expensive trial-and-error system tuning in the real world. To address these issues, we propose, in this paper, an end-to-end solution via deep reinforcement learning. A ConvNet-LSTM function approximator is adopted for the direct frame-to-action prediction. We further propose an environment augmentation technique and a customized reward function, which are crucial for successful training. The tracker trained in simulators (ViZDoom and Unreal Engine) demonstrates good generalization behaviors in the case of unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. The system is robust and can restore tracking after occasional loss of the target being tracked. We also find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios. We demonstrate successful examples of such transfer, via experiments over the VOT dataset and the deployment of a real-world robot using the proposed active tracker trained in simulation.
2001.03814
Kunping Huang
Kunping Huang, Paul Siegel, Anxiao (Andrew) Jiang
Functional Error Correction for Robust Neural Networks
24 pages, 22 figures, submitted to JSAIT journal
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When neural networks (NeuralNets) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the NeuralNet's performance will degrade. This paper studies how to use error correcting codes (ECCs) to protect the weights. Different from classic error correction in data storage, the optimization objective is to optimize the NeuralNet's performance after error correction, instead of minimizing the Uncorrectable Bit Error Rate in the protected bits. That is, by seeing the NeuralNet as a function of its input, the error correction scheme is function-oriented. A main challenge is that a deep NeuralNet often has millions to hundreds of millions of weights, causing a large redundancy overhead for ECCs, and the relationship between the weights and its NeuralNet's performance can be highly complex. To address the challenge, we propose a Selective Protection (SP) scheme, which chooses only a subset of important bits for ECC protection. To find such bits and achieve an optimized tradeoff between ECC's redundancy and NeuralNet's performance, we present an algorithm based on deep reinforcement learning. Experimental results verify that compared to the natural baseline scheme, the proposed algorithm achieves substantially better performance for the functional error correction task.
[ { "created": "Sun, 12 Jan 2020 00:40:49 GMT", "version": "v1" } ]
2020-01-14
[ [ "Huang", "Kunping", "" ], [ "Siegel", "Paul", "" ], [ "Jiang", "Anxiao", "" ] ]
When neural networks (NeuralNets) are implemented in hardware, their weights need to be stored in memory devices. As noise accumulates in the stored weights, the NeuralNet's performance will degrade. This paper studies how to use error correcting codes (ECCs) to protect the weights. Different from classic error correction in data storage, the optimization objective is to optimize the NeuralNet's performance after error correction, instead of minimizing the Uncorrectable Bit Error Rate in the protected bits. That is, by seeing the NeuralNet as a function of its input, the error correction scheme is function-oriented. A main challenge is that a deep NeuralNet often has millions to hundreds of millions of weights, causing a large redundancy overhead for ECCs, and the relationship between the weights and its NeuralNet's performance can be highly complex. To address the challenge, we propose a Selective Protection (SP) scheme, which chooses only a subset of important bits for ECC protection. To find such bits and achieve an optimized tradeoff between ECC's redundancy and NeuralNet's performance, we present an algorithm based on deep reinforcement learning. Experimental results verify that compared to the natural baseline scheme, the proposed algorithm achieves substantially better performance for the functional error correction task.
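The intuition behind Selective Protection can be illustrated with a toy fixed-point weight: if the ECC budget covers only some of the bits, protecting the high-order bits bounds the worst-case error to the magnitude of the unprotected low-order bits. The sketch below uses a hand-picked, position-based selection purely for illustration; the paper selects the important bits with deep reinforcement learning, not by position.

```python
import random

def store_with_noise(bits, protected, flip_prob, rng):
    """Simulate noisy memory: unprotected bits flip with probability `flip_prob`;
    bits whose indices are in `protected` are assumed perfectly restored by the ECC."""
    return [
        b if (i in protected or rng.random() >= flip_prob) else 1 - b
        for i, b in enumerate(bits)
    ]

def to_bits(x, width=8):
    # Most-significant bit first.
    return [(x >> (width - 1 - k)) & 1 for k in range(width)]

def from_bits(bits):
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

# Protect the four high-order bits of an 8-bit "weight": even if every
# unprotected bit flips, the error stays below 2**4 = 16.
rng = random.Random(0)
weight = 0b10110101
noisy = from_bits(store_with_noise(to_bits(weight), protected={0, 1, 2, 3},
                                   flip_prob=1.0, rng=rng))
assert abs(noisy - weight) < 16
```

The function-oriented twist in the paper is that "important" is measured by impact on the network's accuracy rather than by raw bit error rate, which is why learned selection can beat this positional baseline.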
1211.2620
Corentin Burnay
Corentin Burnay, Ivan Jureta, St\'ephane Faulkner
Context-Driven Elicitation of Default Requirements: an Empirical Validation
Currently under review
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list of types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
[ { "created": "Mon, 12 Nov 2012 14:04:38 GMT", "version": "v1" }, { "created": "Wed, 4 Dec 2013 14:03:11 GMT", "version": "v2" } ]
2016-11-26
[ [ "Burnay", "Corentin", "" ], [ "Jureta", "Ivan", "" ], [ "Faulkner", "Stéphane", "" ] ]
In Requirements Engineering, requirements elicitation aims at the acquisition of information from the stakeholders of a system-to-be. An important task during elicitation is to identify and render explicit the stakeholders' implicit assumptions about the system-to-be and its environment. The purpose of doing so is to identify omissions in, and conflicts between, requirements. This paper offers a conceptual framework for the identification and documentation of default requirements that stakeholders may be using. The framework is relevant for practice, as it forms a check-list of types of questions to use during elicitation. An empirical validation is described, and guidelines for elicitation are drawn.
2110.04175
Thijs Vogels
Thijs Vogels and Lie He and Anastasia Koloskova and Tao Lin and Sai Praneeth Karimireddy and Sebastian U. Stich and Martin Jaggi
RelaySum for Decentralized Deep Learning on Heterogeneous Data
Presented at NeurIPS 2021
Advances in Neural Information Processing Systems 34, 2021
null
null
cs.LG cs.DC math.OC stat.ML
http://creativecommons.org/licenses/by/4.0/
In decentralized machine learning, workers compute model updates on their local data. Because the workers only communicate with few neighbors without central coordination, these updates propagate progressively over the network. This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers. A key challenge, primarily in decentralized deep learning, remains the handling of differences between the workers' local data distributions. To tackle this challenge, we introduce the RelaySum mechanism for information propagation in decentralized learning. RelaySum uses spanning trees to distribute information exactly uniformly across all workers with finite delays depending on the distance between nodes. In contrast, the typical gossip averaging mechanism only distributes data uniformly asymptotically while using the same communication volume per step as RelaySum. We prove that RelaySGD, based on this mechanism, is independent of data heterogeneity and scales to many workers, enabling highly accurate decentralized deep learning on heterogeneous data. Our code is available at http://github.com/epfml/relaysgd.
[ { "created": "Fri, 8 Oct 2021 14:55:32 GMT", "version": "v1" }, { "created": "Mon, 31 Jan 2022 13:00:46 GMT", "version": "v2" } ]
2022-02-01
[ [ "Vogels", "Thijs", "" ], [ "He", "Lie", "" ], [ "Koloskova", "Anastasia", "" ], [ "Lin", "Tao", "" ], [ "Karimireddy", "Sai Praneeth", "" ], [ "Stich", "Sebastian U.", "" ], [ "Jaggi", "Martin", "" ] ]
In decentralized machine learning, workers compute model updates on their local data. Because the workers only communicate with few neighbors without central coordination, these updates propagate progressively over the network. This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers. A key challenge, primarily in decentralized deep learning, remains the handling of differences between the workers' local data distributions. To tackle this challenge, we introduce the RelaySum mechanism for information propagation in decentralized learning. RelaySum uses spanning trees to distribute information exactly uniformly across all workers with finite delays depending on the distance between nodes. In contrast, the typical gossip averaging mechanism only distributes data uniformly asymptotically while using the same communication volume per step as RelaySum. We prove that RelaySGD, based on this mechanism, is independent of data heterogeneity and scales to many workers, enabling highly accurate decentralized deep learning on heterogeneous data. Our code is available at http://github.com/epfml/relaysgd.
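The relay mechanism described in the abstract can be sketched on the simplest spanning tree, a chain: each node repeatedly forwards to one neighbor its own value plus the last message received from the opposite side, so after a number of steps equal to the chain's diameter every node holds the exact sum, delayed by its distance from the other endpoints. This is a minimal illustrative sketch of the relay idea, not the authors' RelaySGD implementation.

```python
def relaysum_chain(values, steps):
    """Relay partial sums along a chain of workers.

    After `steps >= len(values) - 1` rounds, every node's estimate equals the
    exact total, distributed uniformly (unlike gossip, which only converges
    to the average asymptotically)."""
    n = len(values)
    left_msg = [0.0] * n   # last message received from the left neighbor
    right_msg = [0.0] * n  # last message received from the right neighbor
    for _ in range(steps):
        new_left = [0.0] * n
        new_right = [0.0] * n
        for i in range(n - 1):   # node i relays rightward to node i+1
            new_left[i + 1] = values[i] + left_msg[i]
        for i in range(1, n):    # node i relays leftward to node i-1
            new_right[i - 1] = values[i] + right_msg[i]
        left_msg, right_msg = new_left, new_right
    # Each node's estimate: everything relayed from the left, itself,
    # and everything relayed from the right.
    return [left_msg[i] + values[i] + right_msg[i] for i in range(n)]
```

With `values = [1.0, 2.0, 3.0, 4.0]` and `steps = 3` (the chain diameter), every node returns exactly `10.0`; fewer steps leave distant contributions still in flight, which is the finite-delay behavior the abstract refers to.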
2005.10634
Manish Shukla
Rajan M A, Manish Shukla, Sachin Lodha
A Note on Cryptographic Algorithms for Private Data Analysis in Contact Tracing Applications
12 Pages, 3 Figures
null
null
null
cs.CR cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contact tracing is an important measure to counter the COVID-19 pandemic. In the early phase, many countries employed manual contact tracing to contain the rate of disease spread; however, it has many issues. The manual approach is cumbersome, time consuming, and requires the active participation of a large number of people to realize it. In order to overcome these drawbacks, digital contact tracing has been proposed, which typically involves deploying a contact tracing application on people's mobile devices that can track their movements and close social interactions. While studies suggest that digital contact tracing is more effective than manual contact tracing, it has been observed that higher adoption rates of the contact tracing app may result in a better controlled epidemic. This also increases confidence in the accuracy of the collected data and the subsequent analytics. One key reason for the low adoption rate of contact tracing applications is the concern about individual privacy. In fact, several studies report that contact tracing applications deployed in multiple countries are not privacy friendly and have the potential to be used for mass surveillance by the concerned governments. Hence, a privacy-respecting contact tracing application is the need of the hour, as it can lead to highly effective, efficient contact tracing. As part of this study, we focus on various cryptographic techniques that can help in addressing the Private Set Intersection problem which lies at the heart of privacy-respecting contact tracing. We analyze the computation and communication complexities of these techniques under the typical client-server architecture utilized by contact tracing applications. Further, we evaluate those computation and communication complexity expressions for the India scenario and thus identify cryptographic techniques that can be more suitably deployed there.
[ { "created": "Tue, 19 May 2020 06:18:13 GMT", "version": "v1" } ]
2020-05-22
[ [ "A", "Rajan M", "" ], [ "Shukla", "Manish", "" ], [ "Lodha", "Sachin", "" ] ]
Contact tracing is an important measure to counter the COVID-19 pandemic. In the early phase, many countries employed manual contact tracing to contain the rate of disease spread; however, it has many issues. The manual approach is cumbersome, time consuming, and requires the active participation of a large number of people to realize it. In order to overcome these drawbacks, digital contact tracing has been proposed, which typically involves deploying a contact tracing application on people's mobile devices that can track their movements and close social interactions. While studies suggest that digital contact tracing is more effective than manual contact tracing, it has been observed that higher adoption rates of the contact tracing app may result in a better controlled epidemic. This also increases confidence in the accuracy of the collected data and the subsequent analytics. One key reason for the low adoption rate of contact tracing applications is the concern about individual privacy. In fact, several studies report that contact tracing applications deployed in multiple countries are not privacy friendly and have the potential to be used for mass surveillance by the concerned governments. Hence, a privacy-respecting contact tracing application is the need of the hour, as it can lead to highly effective, efficient contact tracing. As part of this study, we focus on various cryptographic techniques that can help in addressing the Private Set Intersection problem which lies at the heart of privacy-respecting contact tracing. We analyze the computation and communication complexities of these techniques under the typical client-server architecture utilized by contact tracing applications. Further, we evaluate those computation and communication complexity expressions for the India scenario and thus identify cryptographic techniques that can be more suitably deployed there.
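The Private Set Intersection problem at the heart of this analysis can be illustrated with a deliberately naive salted-hash scheme: the server publishes commitments to its identifiers, and the client checks its own identifiers against them, learning only the intersection. This sketch is for illustration only; all names are hypothetical, and salted hashing leaks information under brute-force guessing of low-entropy identifiers, which is why deployable PSI protocols rely on stronger primitives (e.g. Diffie-Hellman- or OPRF-based constructions).

```python
import hashlib

def _commit(identifier, salt):
    # Salted SHA-256 commitment (toy; brute-forceable over small identifier spaces).
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

def server_prepare(server_ids, salt):
    """Server publishes commitments instead of raw contact identifiers."""
    return {_commit(x, salt) for x in server_ids}

def client_intersect(client_ids, commitments, salt):
    """Client learns which of its own identifiers the server also holds,
    and nothing about the server's other identifiers."""
    return {x for x in client_ids if _commit(x, salt) in commitments}
```

Even in this toy form, the client-server asymmetry the abstract analyzes is visible: the server's communication cost grows with its set size (one commitment per identifier), while the client's computation grows with its own set size.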
2304.05084
Xin Chen
Xin Chen, Yuwen Qin, Weidong Zhao, Qiming Yang, Ningbo Cai, Kai Wu
A Self-attention Knowledge Domain Adaptation Network for Commercial Lithium-ion Batteries State-of-health Estimation under Shallow Cycles
null
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate state-of-health (SOH) estimation is critical to guarantee the safety, efficiency and reliability of battery-powered applications. Most SOH estimation methods focus on the 0-100\% full state-of-charge (SOC) range, which has similar distributions. However, batteries in real-world applications usually work in a partial SOC range under shallow-cycle conditions and follow different degradation profiles with no labeled data available, thus making SOH estimation challenging. To estimate shallow-cycle battery SOH, a novel unsupervised deep transfer learning method is proposed to bridge different domains using a self-attention distillation module and the multi-kernel maximum mean discrepancy technique. The proposed method automatically extracts domain-variant features from charge curves to transfer knowledge from the large-scale labeled full cycles to the unlabeled shallow cycles. The CALCE and SNL battery datasets are employed to verify the effectiveness of the proposed method in estimating battery SOH for different SOC ranges, temperatures, and discharge rates. The proposed method achieves a root-mean-square error within 2\% and outperforms other transfer learning methods for different SOC ranges. When applied to batteries with different operating conditions and from different manufacturers, the proposed method still exhibits superior SOH estimation performance. The proposed method is the first attempt at accurately estimating battery SOH under shallow-cycle conditions without needing a full-cycle characteristic test.
[ { "created": "Tue, 11 Apr 2023 09:28:48 GMT", "version": "v1" } ]
2023-04-12
[ [ "Chen", "Xin", "" ], [ "Qin", "Yuwen", "" ], [ "Zhao", "Weidong", "" ], [ "Yang", "Qiming", "" ], [ "Cai", "Ningbo", "" ], [ "Wu", "Kai", "" ] ]
Accurate state-of-health (SOH) estimation is critical to guarantee the safety, efficiency and reliability of battery-powered applications. Most SOH estimation methods focus on the 0-100\% full state-of-charge (SOC) range, which has similar distributions. However, batteries in real-world applications usually work in a partial SOC range under shallow-cycle conditions and follow different degradation profiles with no labeled data available, thus making SOH estimation challenging. To estimate shallow-cycle battery SOH, a novel unsupervised deep transfer learning method is proposed to bridge different domains using a self-attention distillation module and the multi-kernel maximum mean discrepancy technique. The proposed method automatically extracts domain-variant features from charge curves to transfer knowledge from the large-scale labeled full cycles to the unlabeled shallow cycles. The CALCE and SNL battery datasets are employed to verify the effectiveness of the proposed method in estimating battery SOH for different SOC ranges, temperatures, and discharge rates. The proposed method achieves a root-mean-square error within 2\% and outperforms other transfer learning methods for different SOC ranges. When applied to batteries with different operating conditions and from different manufacturers, the proposed method still exhibits superior SOH estimation performance. The proposed method is the first attempt at accurately estimating battery SOH under shallow-cycle conditions without needing a full-cycle characteristic test.
2110.02691
Dongho Lee
Dongho Lee, Valentin Perrelle, Beno\^it Valiron and Zhaowei Xu
Concrete Categorical Model of a Quantum Circuit Description Language with Measurement
accepted for publication in FSTTCS 2021
null
10.4230/LIPIcs.FSTTCS.2021.51
null
cs.LO quant-ph
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce dynamic lifting to a quantum circuit-description language, following the Proto-Quipper language approach. Dynamic lifting allows programs to transfer the result of measuring quantum data -- qubits -- into classical data -- booleans -- . We propose a type system and an operational semantics for the language and we state safety properties. Next, we introduce a concrete categorical semantics for the proposed language, basing our approach on a recent model from Rios\&Selinger for Proto-Quipper-M. Our approach is to construct on top of a concrete category of circuits with measurements a Kleisli category, capturing as a side effect the action of retrieving classical content out of a quantum memory. We then show a soundness result for this semantics.
[ { "created": "Wed, 6 Oct 2021 12:29:03 GMT", "version": "v1" } ]
2022-02-04
[ [ "Lee", "Dongho", "" ], [ "Perrelle", "Valentin", "" ], [ "Valiron", "Benoît", "" ], [ "Xu", "Zhaowei", "" ] ]
In this paper, we introduce dynamic lifting to a quantum circuit-description language, following the Proto-Quipper language approach. Dynamic lifting allows programs to transfer the result of measuring quantum data -- qubits -- into classical data -- booleans -- . We propose a type system and an operational semantics for the language and we state safety properties. Next, we introduce a concrete categorical semantics for the proposed language, basing our approach on a recent model from Rios\&Selinger for Proto-Quipper-M. Our approach is to construct on top of a concrete category of circuits with measurements a Kleisli category, capturing as a side effect the action of retrieving classical content out of a quantum memory. We then show a soundness result for this semantics.
2307.00599
Zihong Yan
Zihong Yan, Xiaoyi Wu, Zhuozhu Jian, Bin Lan, Xueqian Wang, and Bin Liang
RH-Map: Online Map Construction Framework of Dynamic Objects Removal Based on Region-wise Hash Map Structure
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile robots navigating in outdoor environments frequently encounter the issue of undesired traces left by dynamic objects, which manifest as obstacles on the map, impeding robots from achieving accurate localization and effective navigation. To tackle this problem, a novel map construction framework based on a 3D region-wise hash map structure (RH-Map) is proposed, consisting of front-end scan fresher and back-end removal modules, which realizes real-time map construction and online dynamic object removal (DOR). First, a two-layer 3D region-wise hash map structure for map management is proposed for effective online DOR. Then, in the scan fresher, region-wise ground plane estimation (R-GPE) is adopted for estimating and preserving ground information, and Scan-to-Map Removal (S2M-R) is proposed to discriminate and remove dynamic regions. Moreover, a lightweight back-end removal module maintaining keyframes is proposed for further DOR. As experimentally verified on SemanticKITTI, our proposed framework yields promising performance on online DOR during map construction compared with state-of-the-art methods. We also validate the proposed framework in real-world environments.
[ { "created": "Sun, 2 Jul 2023 15:50:36 GMT", "version": "v1" }, { "created": "Tue, 25 Jul 2023 00:44:59 GMT", "version": "v2" } ]
2023-07-26
[ [ "Yan", "Zihong", "" ], [ "Wu", "Xiaoyi", "" ], [ "Jian", "Zhuozhu", "" ], [ "Lan", "Bin", "" ], [ "Wang", "Xueqian", "" ], [ "Liang", "Bin", "" ] ]
Mobile robots navigating in outdoor environments frequently encounter the issue of undesired traces left by dynamic objects, which manifest as obstacles on the map, impeding robots from achieving accurate localization and effective navigation. To tackle this problem, a novel map construction framework based on a 3D region-wise hash map structure (RH-Map) is proposed, consisting of front-end scan fresher and back-end removal modules, which realizes real-time map construction and online dynamic object removal (DOR). First, a two-layer 3D region-wise hash map structure for map management is proposed for effective online DOR. Then, in the scan fresher, region-wise ground plane estimation (R-GPE) is adopted for estimating and preserving ground information, and Scan-to-Map Removal (S2M-R) is proposed to discriminate and remove dynamic regions. Moreover, a lightweight back-end removal module maintaining keyframes is proposed for further DOR. As experimentally verified on SemanticKITTI, our proposed framework yields promising performance on online DOR during map construction compared with state-of-the-art methods. We also validate the proposed framework in real-world environments.
2303.12096
Martin Schuetz
Martin J. A. Schuetz, J. Kyle Brubaker, Helmut G. Katzgraber
Reply to: Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems
Manuscript: 2 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:2302.03602
Nature Machine Intelligence 5, 26 (2023)
10.1038/s42256-022-00588-z
null
cs.LG cond-mat.dis-nn cs.AI math.OC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a comprehensive reply to the comment written by Stefan Boettcher [arXiv:2210.00623] and argue that the comment singles out one particular non-representative example problem, entirely focusing on the maximum cut problem (MaxCut) on sparse graphs, for which greedy algorithms are expected to perform well. Conversely, we highlight the broader algorithmic development underlying our original work, and (within our original framework) provide additional numerical results showing sizable improvements over our original data, thereby refuting the comment's original performance statements. Furthermore, it has already been shown that physics-inspired graph neural networks (PI-GNNs) can outperform greedy algorithms, in particular on hard, dense instances. We also argue that the internal (parallel) anatomy of graph neural networks is very different from the (sequential) nature of greedy algorithms, and (based on their usage at the scale of real-world social networks) point out that graph neural networks have demonstrated their potential for superior scalability compared to existing heuristics such as extremal optimization. Finally, we conclude highlighting the conceptual novelty of our work and outline some potential extensions.
[ { "created": "Fri, 3 Feb 2023 17:32:17 GMT", "version": "v1" } ]
2023-03-23
[ [ "Schuetz", "Martin J. A.", "" ], [ "Brubaker", "J. Kyle", "" ], [ "Katzgraber", "Helmut G.", "" ] ]
We provide a comprehensive reply to the comment written by Stefan Boettcher [arXiv:2210.00623] and argue that the comment singles out one particular non-representative example problem, entirely focusing on the maximum cut problem (MaxCut) on sparse graphs, for which greedy algorithms are expected to perform well. Conversely, we highlight the broader algorithmic development underlying our original work, and (within our original framework) provide additional numerical results showing sizable improvements over our original data, thereby refuting the comment's original performance statements. Furthermore, it has already been shown that physics-inspired graph neural networks (PI-GNNs) can outperform greedy algorithms, in particular on hard, dense instances. We also argue that the internal (parallel) anatomy of graph neural networks is very different from the (sequential) nature of greedy algorithms, and (based on their usage at the scale of real-world social networks) point out that graph neural networks have demonstrated their potential for superior scalability compared to existing heuristics such as extremal optimization. Finally, we conclude highlighting the conceptual novelty of our work and outline some potential extensions.
2102.08360
Anirudh Som
Ella Y. Wang, Anirudh Som, Ankita Shukla, Hongjun Choi, Pavan Turaga
Interpretable COVID-19 Chest X-Ray Classification via Orthogonality Constraint
Accepted in the 2021 ACM CHIL Workshop track. An extended version of this work is under consideration at Pattern Recognition Letters
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep neural networks have increasingly been used as an auxiliary tool in healthcare applications, due to their ability to improve the performance of several diagnosis tasks. However, these methods are not widely adopted in clinical settings due to practical limitations in the reliability, generalizability, and interpretability of deep learning based systems. As a result, methods have been developed that impose additional constraints during network training to gain more control as well as improve interpretability, facilitating their acceptance in the healthcare community. In this work, we investigate the benefit of using an Orthogonal Spheres (OS) constraint for the classification of COVID-19 cases from chest X-ray images. The OS constraint can be written as a simple orthonormality term which is used in conjunction with the standard cross-entropy loss during classification network training. Previous studies have demonstrated significant benefits in applying such constraints to deep learning models. Our findings corroborate these observations, indicating that the orthonormality loss function effectively produces improved semantic localization via GradCAM visualizations, enhanced classification performance, and reduced model calibration error. Our approach achieves an improvement in accuracy of 1.6% and 4.8% for two- and three-class classification, respectively; similar results are found for models with data augmentation applied. In addition to these findings, our work also presents a new application of the OS regularizer in healthcare, increasing the post-hoc interpretability and performance of deep learning models for COVID-19 classification to facilitate the adoption of these methods in clinical settings. We also identify the limitations of our strategy that can be explored in future research.
[ { "created": "Tue, 2 Feb 2021 11:35:28 GMT", "version": "v1" }, { "created": "Wed, 2 Jun 2021 22:40:17 GMT", "version": "v2" }, { "created": "Wed, 22 Dec 2021 03:09:56 GMT", "version": "v3" } ]
2021-12-24
[ [ "Wang", "Ella Y.", "" ], [ "Som", "Anirudh", "" ], [ "Shukla", "Ankita", "" ], [ "Choi", "Hongjun", "" ], [ "Turaga", "Pavan", "" ] ]
Deep neural networks have increasingly been used as an auxiliary tool in healthcare applications, due to their ability to improve the performance of several diagnosis tasks. However, these methods are not widely adopted in clinical settings due to practical limitations in the reliability, generalizability, and interpretability of deep learning based systems. As a result, methods have been developed that impose additional constraints during network training to gain more control as well as improve interpretability, facilitating their acceptance in the healthcare community. In this work, we investigate the benefit of using an Orthogonal Spheres (OS) constraint for the classification of COVID-19 cases from chest X-ray images. The OS constraint can be written as a simple orthonormality term which is used in conjunction with the standard cross-entropy loss during classification network training. Previous studies have demonstrated significant benefits in applying such constraints to deep learning models. Our findings corroborate these observations, indicating that the orthonormality loss function effectively produces improved semantic localization via GradCAM visualizations, enhanced classification performance, and reduced model calibration error. Our approach achieves an improvement in accuracy of 1.6% and 4.8% for two- and three-class classification, respectively; similar results are found for models with data augmentation applied. In addition to these findings, our work also presents a new application of the OS regularizer in healthcare, increasing the post-hoc interpretability and performance of deep learning models for COVID-19 classification to facilitate the adoption of these methods in clinical settings. We also identify the limitations of our strategy that can be explored in future research.
1904.06292
George Kesidis
David J. Miller, Zhen Xiang, and George Kesidis
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
null
Proceedings of the IEEE, March. 2020
null
null
cs.LG cs.CR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is great potential for damage from adversarial learning (AL) attacks on machine-learning based systems. In this paper, we provide a contemporary survey of AL, focused particularly on defenses against attacks on statistical classifiers. After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse engineering (RE) attacks, and particularly defenses against them. In so doing, we distinguish robust classification from anomaly detection (AD), unsupervised from supervised, and statistical hypothesis-based defenses from ones that do not have an explicit null (no attack) hypothesis; we identify the hyperparameters a particular method requires, its computational complexity, as well as the performance measures on which it was evaluated and the obtained quality. We then dig deeper, providing novel insights that challenge conventional AL wisdom and that target unresolved issues, including: 1) robust classification versus AD as a defense strategy; 2) the belief that attack success increases with attack strength, which ignores susceptibility to AD; 3) small perturbations for test-time evasion attacks: a fallacy or a requirement?; 4) validity of the universal assumption that a TTE attacker knows the ground-truth class for the example to be attacked; 5) black, grey, or white box attacks as the standard for defense evaluation; 6) susceptibility of query-based RE to an AD defense. We also discuss attacks on the privacy of training data. We then present benchmark comparisons of several defenses against TTE, RE, and backdoor DP attacks on images. The paper concludes with a discussion of future work.
[ { "created": "Fri, 12 Apr 2019 16:05:21 GMT", "version": "v1" }, { "created": "Mon, 13 May 2019 17:15:49 GMT", "version": "v2" }, { "created": "Mon, 2 Dec 2019 22:49:28 GMT", "version": "v3" } ]
2020-03-11
[ [ "Miller", "David J.", "" ], [ "Xiang", "Zhen", "" ], [ "Kesidis", "George", "" ] ]
There is great potential for damage from adversarial learning (AL) attacks on machine-learning based systems. In this paper, we provide a contemporary survey of AL, focused particularly on defenses against attacks on statistical classifiers. After introducing relevant terminology and the goals and range of possible knowledge of both attackers and defenders, we survey recent work on test-time evasion (TTE), data poisoning (DP), and reverse engineering (RE) attacks, and particularly defenses against them. In so doing, we distinguish robust classification from anomaly detection (AD), unsupervised from supervised, and statistical hypothesis-based defenses from ones that do not have an explicit null (no attack) hypothesis; we identify the hyperparameters a particular method requires, its computational complexity, as well as the performance measures on which it was evaluated and the obtained quality. We then dig deeper, providing novel insights that challenge conventional AL wisdom and that target unresolved issues, including: 1) robust classification versus AD as a defense strategy; 2) the belief that attack success increases with attack strength, which ignores susceptibility to AD; 3) small perturbations for test-time evasion attacks: a fallacy or a requirement?; 4) validity of the universal assumption that a TTE attacker knows the ground-truth class for the example to be attacked; 5) black, grey, or white box attacks as the standard for defense evaluation; 6) susceptibility of query-based RE to an AD defense. We also discuss attacks on the privacy of training data. We then present benchmark comparisons of several defenses against TTE, RE, and backdoor DP attacks on images. The paper concludes with a discussion of future work.
2401.08695
Zhengqing Fang
Zhengqing Fang, Shuowen Zhou, Zhouhang Yuan, Yuxuan Si, Mengze Li, Jinxu Li, Yesheng Xu, Wenjia Xie, Kun Kuang, Yingming Li, Fei Wu, and Yu-Feng Yao
Enabling Collaborative Clinical Diagnosis of Infectious Keratitis by Integrating Expert Knowledge and Interpretable Data-driven Intelligence
33 pages
null
null
null
cs.AI cs.CV cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Although data-driven artificial intelligence (AI) in medical image diagnosis has shown impressive performance in silico, the lack of interpretability makes it difficult to incorporate the "black box" into clinicians' workflows. To make the diagnostic patterns learned from data understandable by clinicians, we develop an interpretable model, the knowledge-guided diagnosis model (KGDM), that provides a visualized reasoning process containing AI-based biomarkers and retrieved cases with the same diagnostic patterns. It embraces clinicians' prompts into the interpreted reasoning through human-AI interaction, leading to potentially enhanced safety and more accurate predictions. This study investigates the performance, interpretability, and clinical utility of KGDM in the diagnosis of infectious keratitis (IK), the leading cause of corneal blindness. The classification performance of KGDM is evaluated on a prospective validation dataset, an external testing dataset, and a publicly available testing dataset. The diagnostic odds ratios (DOR) of the interpreted AI-based biomarkers are effective, ranging from 3.011 to 35.233, and exhibit diagnostic patterns consistent with clinical experience. Moreover, a human-AI collaborative diagnosis test is conducted, and participants working in collaboration achieved performance exceeding that of both humans and AI alone. By synergistically integrating interpretability and interaction, this study facilitates the convergence of clinicians' expertise and data-driven intelligence. The improvement of inexperienced ophthalmologists with the aid of AI-based biomarkers, as well as improved AI predictions through intervention by experienced ones, demonstrate a promising diagnostic paradigm for infectious keratitis using KGDM, which holds the potential for extension to other diseases where experienced medical practitioners are scarce and the safety of AI is a concern.
[ { "created": "Sun, 14 Jan 2024 02:10:54 GMT", "version": "v1" } ]
2024-01-18
[ [ "Fang", "Zhengqing", "" ], [ "Zhou", "Shuowen", "" ], [ "Yuan", "Zhouhang", "" ], [ "Si", "Yuxuan", "" ], [ "Li", "Mengze", "" ], [ "Li", "Jinxu", "" ], [ "Xu", "Yesheng", "" ], [ "Xie", "Wenjia", "" ], [ "Kuang", "Kun", "" ], [ "Li", "Yingming", "" ], [ "Wu", "Fei", "" ], [ "Yao", "Yu-Feng", "" ] ]
Although data-driven artificial intelligence (AI) in medical image diagnosis has shown impressive performance in silico, the lack of interpretability makes it difficult to incorporate the "black box" into clinicians' workflows. To make the diagnostic patterns learned from data understandable by clinicians, we develop an interpretable model, the knowledge-guided diagnosis model (KGDM), that provides a visualized reasoning process containing AI-based biomarkers and retrieved cases with the same diagnostic patterns. It embraces clinicians' prompts into the interpreted reasoning through human-AI interaction, leading to potentially enhanced safety and more accurate predictions. This study investigates the performance, interpretability, and clinical utility of KGDM in the diagnosis of infectious keratitis (IK), the leading cause of corneal blindness. The classification performance of KGDM is evaluated on a prospective validation dataset, an external testing dataset, and a publicly available testing dataset. The diagnostic odds ratios (DOR) of the interpreted AI-based biomarkers are effective, ranging from 3.011 to 35.233, and exhibit diagnostic patterns consistent with clinical experience. Moreover, a human-AI collaborative diagnosis test is conducted, and participants working in collaboration achieved performance exceeding that of both humans and AI alone. By synergistically integrating interpretability and interaction, this study facilitates the convergence of clinicians' expertise and data-driven intelligence. The improvement of inexperienced ophthalmologists with the aid of AI-based biomarkers, as well as improved AI predictions through intervention by experienced ones, demonstrate a promising diagnostic paradigm for infectious keratitis using KGDM, which holds the potential for extension to other diseases where experienced medical practitioners are scarce and the safety of AI is a concern.
2210.01370
Yunsung Lee
Yunsung Lee, Gyuseong Lee, Kwangrok Ryoo, Hyojun Go, Jihye Park, and Seungryong Kim
Towards Flexible Inductive Bias via Progressive Reparameterization Scheduling
Accepted at VIPriors ECCVW 2022, camera-ready version
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
There are two de facto standard architectures in recent computer vision: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). The strong inductive biases of convolutions help a model learn sample-efficiently, but such strong biases also limit the upper bound of CNNs when sufficient data are available. Conversely, ViTs are inferior to CNNs on small data but superior with sufficient data. Recent approaches attempt to combine the strengths of these two architectures. However, by comparing various models' accuracy on subsets of ImageNet sampled at different ratios, we show that these approaches overlook that the optimal inductive bias also changes with the target data scale. In addition, through Fourier analysis of feature maps, which reveals the model's response patterns across signal frequencies, we observe which inductive bias is advantageous at each data scale. The more convolution-like inductive bias a model includes, the smaller the data scale at which the ViT-like model outperforms ResNet. To obtain a model whose inductive bias is flexible in the data scale, we show that reparameterization can interpolate the inductive bias between convolution and self-attention. By adjusting the number of epochs the model stays in convolution form, we show that reparameterization from convolution to self-attention interpolates the Fourier analysis pattern between CNNs and ViTs. Building on these findings, we propose Progressive Reparameterization Scheduling (PRS), in which reparameterization adjusts the required amount of convolution-like or self-attention-like inductive bias per layer. For small-scale datasets, PRS performs reparameterization from convolution to self-attention faster at later-stage layers, following a linear schedule. PRS outperforms previous studies on small-scale datasets, e.g., CIFAR-100.
[ { "created": "Tue, 4 Oct 2022 04:20:20 GMT", "version": "v1" } ]
2022-10-05
[ [ "Lee", "Yunsung", "" ], [ "Lee", "Gyuseong", "" ], [ "Ryoo", "Kwangrok", "" ], [ "Go", "Hyojun", "" ], [ "Park", "Jihye", "" ], [ "Kim", "Seungryong", "" ] ]
There are two de facto standard architectures in recent computer vision: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). The strong inductive biases of convolutions help a model learn sample-efficiently, but such strong biases also limit the upper bound of CNNs when sufficient data are available. Conversely, ViTs are inferior to CNNs on small data but superior with sufficient data. Recent approaches attempt to combine the strengths of these two architectures. However, by comparing various models' accuracy on subsets of ImageNet sampled at different ratios, we show that these approaches overlook that the optimal inductive bias also changes with the target data scale. In addition, through Fourier analysis of feature maps, which reveals the model's response patterns across signal frequencies, we observe which inductive bias is advantageous at each data scale. The more convolution-like inductive bias a model includes, the smaller the data scale at which the ViT-like model outperforms ResNet. To obtain a model whose inductive bias is flexible in the data scale, we show that reparameterization can interpolate the inductive bias between convolution and self-attention. By adjusting the number of epochs the model stays in convolution form, we show that reparameterization from convolution to self-attention interpolates the Fourier analysis pattern between CNNs and ViTs. Building on these findings, we propose Progressive Reparameterization Scheduling (PRS), in which reparameterization adjusts the required amount of convolution-like or self-attention-like inductive bias per layer. For small-scale datasets, PRS performs reparameterization from convolution to self-attention faster at later-stage layers, following a linear schedule. PRS outperforms previous studies on small-scale datasets, e.g., CIFAR-100.
2307.10184
Yudong Gao
Yudong Gao, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
10 pages, 7 figures. Submit to ACM MM 2023
null
null
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Backdoor attacks pose serious security threats to deep neural networks (DNNs). Backdoored models make arbitrarily (targeted) incorrect predictions on inputs embedded with well-designed triggers while behaving normally on clean inputs. Many works have explored the invisibility of backdoor triggers to improve attack stealthiness. However, most of them only consider invisibility in the spatial domain without explicitly accounting for the generation of invisible triggers in the frequency domain, making the generated poisoned images easily detectable by recent defense methods. To address this issue, in this paper, we propose a DUal stealthy BAckdoor attack method named DUBA, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains, to achieve desirable attack performance while ensuring strong stealthiness. Specifically, we first use the Discrete Wavelet Transform to embed the high-frequency information of the trigger image into the clean image to ensure attack effectiveness. Then, to attain strong stealthiness, we incorporate the Fourier Transform and Discrete Cosine Transform to mix the poisoned image and clean image in the frequency domain. Moreover, the proposed DUBA adopts a novel attack strategy, in which the model is trained with weak triggers and attacked with strong triggers to further enhance attack performance and stealthiness. We extensively evaluate DUBA against popular image classifiers on four datasets. The results demonstrate that it significantly outperforms state-of-the-art backdoor attacks in terms of attack success rate and stealthiness.
[ { "created": "Mon, 3 Jul 2023 12:28:44 GMT", "version": "v1" } ]
2023-07-21
[ [ "Gao", "Yudong", "" ], [ "Chen", "Honglong", "" ], [ "Sun", "Peng", "" ], [ "Li", "Junjian", "" ], [ "Zhang", "Anqing", "" ], [ "Wang", "Zhibo", "" ] ]
Backdoor attacks pose serious security threats to deep neural networks (DNNs). Backdoored models make arbitrarily (targeted) incorrect predictions on inputs embedded with well-designed triggers while behaving normally on clean inputs. Many works have explored the invisibility of backdoor triggers to improve attack stealthiness. However, most of them only consider invisibility in the spatial domain without explicitly accounting for the generation of invisible triggers in the frequency domain, making the generated poisoned images easily detectable by recent defense methods. To address this issue, in this paper, we propose a DUal stealthy BAckdoor attack method named DUBA, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains, to achieve desirable attack performance while ensuring strong stealthiness. Specifically, we first use the Discrete Wavelet Transform to embed the high-frequency information of the trigger image into the clean image to ensure attack effectiveness. Then, to attain strong stealthiness, we incorporate the Fourier Transform and Discrete Cosine Transform to mix the poisoned image and clean image in the frequency domain. Moreover, the proposed DUBA adopts a novel attack strategy, in which the model is trained with weak triggers and attacked with strong triggers to further enhance attack performance and stealthiness. We extensively evaluate DUBA against popular image classifiers on four datasets. The results demonstrate that it significantly outperforms state-of-the-art backdoor attacks in terms of attack success rate and stealthiness.
2203.15580
Yingjie Cai
Yingjie Cai, Kwan-Yee Lin, Chao Zhang, Qiang Wang, Xiaogang Wang and Hongsheng Li
Learning a Structured Latent Space for Unsupervised Point Cloud Completion
8 pages, 5 figures, CVPR 2022
CVPR2022
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Unsupervised point cloud completion aims at estimating the corresponding complete point cloud of a partial point cloud in an unpaired manner. It is a crucial but challenging problem since there is no paired partial-complete supervision that can be exploited directly. In this work, we propose a novel framework which learns a unified and structured latent space encoding both partial and complete point clouds. Specifically, we map a series of related partial point clouds into multiple complete shape and occlusion code pairs and fuse the codes to obtain their representations in the unified latent space. To enforce the learning of such a structured latent space, the proposed method adopts a series of constraints including a structured ranking regularization, a latent code swapping constraint, and distribution supervision on the related partial point clouds. By establishing such a unified and structured latent space, better partial-complete geometry consistency and shape completion accuracy can be achieved. Extensive experiments show that our proposed method consistently outperforms state-of-the-art unsupervised methods on both the synthetic ShapeNet and real-world KITTI, ScanNet, and Matterport3D datasets.
[ { "created": "Tue, 29 Mar 2022 13:58:44 GMT", "version": "v1" } ]
2022-03-30
[ [ "Cai", "Yingjie", "" ], [ "Lin", "Kwan-Yee", "" ], [ "Zhang", "Chao", "" ], [ "Wang", "Qiang", "" ], [ "Wang", "Xiaogang", "" ], [ "Li", "Hongsheng", "" ] ]
Unsupervised point cloud completion aims at estimating the corresponding complete point cloud of a partial point cloud in an unpaired manner. It is a crucial but challenging problem since there is no paired partial-complete supervision that can be exploited directly. In this work, we propose a novel framework which learns a unified and structured latent space encoding both partial and complete point clouds. Specifically, we map a series of related partial point clouds into multiple complete shape and occlusion code pairs and fuse the codes to obtain their representations in the unified latent space. To enforce the learning of such a structured latent space, the proposed method adopts a series of constraints including a structured ranking regularization, a latent code swapping constraint, and distribution supervision on the related partial point clouds. By establishing such a unified and structured latent space, better partial-complete geometry consistency and shape completion accuracy can be achieved. Extensive experiments show that our proposed method consistently outperforms state-of-the-art unsupervised methods on both the synthetic ShapeNet and real-world KITTI, ScanNet, and Matterport3D datasets.
1908.04017
Emanuel Laci\'c
Dominik Kowald, Matthias Traub, Dieter Theiler, Heimo Gursch, Emanuel Lacic, Stefanie Lindstaedt, Roman Kern, Elisabeth Lex
Using the Open Meta Kaggle Dataset to Evaluate Tripartite Recommendations in Data Markets
REVEAL workshop @ RecSys'2019, Copenhagen, Denmark
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
This work addresses the problem of providing and evaluating recommendations in data markets. Since most of the research in recommender systems is focused on the bipartite relationship between users and items (e.g., movies), we extend this view to the tripartite relationship between users, datasets and services, which is present in data markets. Between these entities, we identify four use cases for recommendations: (i) recommendation of datasets for users, (ii) recommendation of services for users, (iii) recommendation of services for datasets, and (iv) recommendation of datasets for services. Using the open Meta Kaggle dataset, we evaluate the recommendation accuracy of a popularity-based as well as a collaborative filtering-based algorithm for these four use cases and find that the recommendation accuracy strongly depends on the given use case. The presented work contributes to the tripartite recommendation problem in general and to the under-researched portfolio of evaluating recommender systems for data markets in particular.
[ { "created": "Mon, 12 Aug 2019 06:15:44 GMT", "version": "v1" }, { "created": "Tue, 27 Aug 2019 09:21:36 GMT", "version": "v2" } ]
2019-08-28
[ [ "Kowald", "Dominik", "" ], [ "Traub", "Matthias", "" ], [ "Theiler", "Dieter", "" ], [ "Gursch", "Heimo", "" ], [ "Lacic", "Emanuel", "" ], [ "Lindstaedt", "Stefanie", "" ], [ "Kern", "Roman", "" ], [ "Lex", "Elisabeth", "" ] ]
This work addresses the problem of providing and evaluating recommendations in data markets. Since most of the research in recommender systems is focused on the bipartite relationship between users and items (e.g., movies), we extend this view to the tripartite relationship between users, datasets and services, which is present in data markets. Between these entities, we identify four use cases for recommendations: (i) recommendation of datasets for users, (ii) recommendation of services for users, (iii) recommendation of services for datasets, and (iv) recommendation of datasets for services. Using the open Meta Kaggle dataset, we evaluate the recommendation accuracy of a popularity-based as well as a collaborative filtering-based algorithm for these four use cases and find that the recommendation accuracy strongly depends on the given use case. The presented work contributes to the tripartite recommendation problem in general and to the under-researched portfolio of evaluating recommender systems for data markets in particular.
1811.03401
Zhenyue Qin
Jiaxu Zuo and Tom Gedeon and Zhenyue Qin
Your Eyes Say You're Lying: An Eye Movement Pattern Analysis for Face Familiarity and Deceptive Cognition
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Eye movement patterns reflect human latent internal cognitive activities. We aim to discover eye movement patterns during face recognition under different cognitive conditions of information concealment. These conditions include degrees of face familiarity and whether deception occurs, namely telling the truth when observing familiar and unfamiliar faces, and deceiving in front of familiar faces. We apply Hidden Markov models with Gaussian emissions to generalize regions and trajectories of eye fixation points under the above three conditions. Our results show that both eye movement patterns and eye gaze regions become significantly different during deception compared with truth-telling. We show the feasibility of detecting deception and further cognitive activity classification using eye movement patterns.
[ { "created": "Thu, 8 Nov 2018 13:32:53 GMT", "version": "v1" } ]
2018-11-09
[ [ "Zuo", "Jiaxu", "" ], [ "Gedeon", "Tom", "" ], [ "Qin", "Zhenyue", "" ] ]
Eye movement patterns reflect human latent internal cognitive activities. We aim to discover eye movement patterns during face recognition under different cognitive conditions of information concealment. These conditions include degrees of face familiarity and whether deception occurs, namely telling the truth when observing familiar and unfamiliar faces, and deceiving in front of familiar faces. We apply Hidden Markov models with Gaussian emissions to generalize regions and trajectories of eye fixation points under the above three conditions. Our results show that both eye movement patterns and eye gaze regions become significantly different during deception compared with truth-telling. We show the feasibility of detecting deception and further cognitive activity classification using eye movement patterns.
2201.11182
Mariam Kiran Dr.
Mariam Kiran and Melis Ozyildirim
Hyperparameter Tuning for Deep Reinforcement Learning Applications
11 pages, 6 figures
null
null
null
cs.LG cs.NE
http://creativecommons.org/licenses/by/4.0/
Reinforcement learning (RL) applications, where an agent can learn optimal behaviors simply by interacting with the environment, are quickly gaining tremendous success in a wide variety of domains, from controlling simple pendulums to complex data centers. However, setting the right hyperparameters can have a huge impact on the performance and reliability of the deployed solution in the inference models produced via RL and used for decision-making. Hyperparameter search itself is a laborious process that requires many iterations and is computationally expensive, making it hard to find the settings that produce the best neural network architectures. Compared to other neural network architectures, deep RL has not witnessed much hyperparameter tuning, due to its algorithmic complexity and the simulation platforms needed. In this paper, we propose a distributed variable-length genetic algorithm framework to systematically tune hyperparameters for various RL applications, improving training time and robustness of the architecture via evolution. We demonstrate the scalability of our approach on many RL problems (from simple gyms to complex applications) and compare it with a Bayesian approach. Our results show that, with more generations, we find optimal solutions that require fewer training episodes, are computationally cheaper, and are more robust for deployment. Our results are imperative for advancing deep reinforcement learning controllers for real-world problems.
[ { "created": "Wed, 26 Jan 2022 20:43:13 GMT", "version": "v1" } ]
2022-01-28
[ [ "Kiran", "Mariam", "" ], [ "Ozyildirim", "Melis", "" ] ]
Reinforcement learning (RL) applications, where an agent can learn optimal behaviors simply by interacting with the environment, are quickly gaining tremendous success in a wide variety of domains, from controlling simple pendulums to complex data centers. However, setting the right hyperparameters can have a huge impact on the performance and reliability of the deployed solution in the inference models produced via RL and used for decision-making. Hyperparameter search itself is a laborious process that requires many iterations and is computationally expensive, making it hard to find the settings that produce the best neural network architectures. Compared to other neural network architectures, deep RL has not witnessed much hyperparameter tuning, due to its algorithmic complexity and the simulation platforms needed. In this paper, we propose a distributed variable-length genetic algorithm framework to systematically tune hyperparameters for various RL applications, improving training time and robustness of the architecture via evolution. We demonstrate the scalability of our approach on many RL problems (from simple gyms to complex applications) and compare it with a Bayesian approach. Our results show that, with more generations, we find optimal solutions that require fewer training episodes, are computationally cheaper, and are more robust for deployment. Our results are imperative for advancing deep reinforcement learning controllers for real-world problems.
2406.15694
Zhuo Zheng
Zhuo Zheng, Yanfei Zhong, Ailong Ma, Liangpei Zhang
Single-Temporal Supervised Learning for Universal Remote Sensing Change Detection
IJCV 2024. arXiv admin note: text overlap with arXiv:2108.07002
null
10.1007/s11263-024-02141-4
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bitemporal supervised learning paradigm always dominates remote sensing change detection using numerous labeled bitemporal image pairs, especially for high spatial resolution (HSR) remote sensing imagery. However, it is very expensive and labor-intensive to label change regions in large-scale bitemporal HSR remote sensing image pairs. In this paper, we propose single-temporal supervised learning (STAR) for universal remote sensing change detection from a new perspective of exploiting changes between unpaired images as supervisory signals. STAR enables us to train a high-accuracy change detector only using unpaired labeled images and can generalize to real-world bitemporal image pairs. To demonstrate the flexibility and scalability of STAR, we design a simple yet unified change detector, termed ChangeStar2, capable of addressing binary change detection, object change detection, and semantic change detection in one architecture. ChangeStar2 achieves state-of-the-art performances on eight public remote sensing change detection datasets, covering above two supervised settings, multiple change types, multiple scenarios. The code is available at https://github.com/Z-Zheng/pytorch-change-models.
[ { "created": "Sat, 22 Jun 2024 00:03:21 GMT", "version": "v1" } ]
2024-06-25
[ [ "Zheng", "Zhuo", "" ], [ "Zhong", "Yanfei", "" ], [ "Ma", "Ailong", "" ], [ "Zhang", "Liangpei", "" ] ]
The bitemporal supervised learning paradigm dominates remote sensing change detection, relying on numerous labeled bitemporal image pairs, especially for high spatial resolution (HSR) remote sensing imagery. However, labeling change regions in large-scale bitemporal HSR remote sensing image pairs is very expensive and labor-intensive. In this paper, we propose single-temporal supervised learning (STAR) for universal remote sensing change detection from a new perspective: exploiting changes between unpaired images as supervisory signals. STAR enables us to train a high-accuracy change detector using only unpaired labeled images, and the detector generalizes to real-world bitemporal image pairs. To demonstrate the flexibility and scalability of STAR, we design a simple yet unified change detector, termed ChangeStar2, capable of addressing binary change detection, object change detection, and semantic change detection in one architecture. ChangeStar2 achieves state-of-the-art performance on eight public remote sensing change detection datasets, covering both supervised settings, multiple change types, and multiple scenarios. The code is available at https://github.com/Z-Zheng/pytorch-change-models.
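The core single-temporal idea, deriving change supervision from unpaired images, can be sketched as a logical XOR of two single-temporal object masks (a simplified reading of the abstract; `pseudo_change_labels` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def pseudo_change_labels(mask_a, mask_b):
    # STAR-style supervision sketch: treat two *unpaired* single-temporal
    # object masks as if they formed a bitemporal pair; a pixel counts as
    # "changed" when exactly one of the two masks marks an object there.
    return np.logical_xor(mask_a.astype(bool), mask_b.astype(bool))

# tiny 2x2 example: two unpaired building masks
a = np.array([[1, 0],
              [0, 1]])
b = np.array([[1, 1],
              [0, 0]])
labels = pseudo_change_labels(a, b)   # True where the masks disagree
```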
2005.00306
Hao Dou
Hao Dou, Chen Chen, Xiyuan Hu, Zuxing Xuan, Zhisen Hu, Silong Peng
PCA-SRGAN: Incremental Orthogonal Projection Discrimination for Face Super-resolution
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Adversarial Networks (GAN) have been employed for face super resolution but they bring distorted facial details easily and still have weakness on recovering realistic texture. To further improve the performance of GAN based models on super-resolving face images, we propose PCA-SRGAN which pays attention to the cumulative discrimination in the orthogonal projection space spanned by PCA projection matrix of face data. By feeding the principal component projections ranging from structure to details into the discriminator, the discrimination difficulty will be greatly alleviated and the generator can be enhanced to reconstruct clearer contour and finer texture, helpful to achieve the high perception and low distortion eventually. This incremental orthogonal projection discrimination has ensured a precise optimization procedure from coarse to fine and avoids the dependence on the perceptual regularization. We conduct experiments on CelebA and FFHQ face datasets. The qualitative visual effect and quantitative evaluation have demonstrated the overwhelming performance of our model over related works.
[ { "created": "Fri, 1 May 2020 10:40:57 GMT", "version": "v1" }, { "created": "Fri, 28 Aug 2020 10:26:21 GMT", "version": "v2" } ]
2020-08-31
[ [ "Dou", "Hao", "" ], [ "Chen", "Chen", "" ], [ "Hu", "Xiyuan", "" ], [ "Xuan", "Zuxing", "" ], [ "Hu", "Zhisen", "" ], [ "Peng", "Silong", "" ] ]
Generative Adversarial Networks (GANs) have been employed for face super-resolution, but they easily introduce distorted facial details and remain weak at recovering realistic texture. To further improve the performance of GAN-based models at super-resolving face images, we propose PCA-SRGAN, which attends to cumulative discrimination in the orthogonal projection space spanned by the PCA projection matrix of face data. By feeding the principal component projections, ranging from structure to details, into the discriminator, the discrimination difficulty is greatly alleviated and the generator is enhanced to reconstruct clearer contours and finer texture, ultimately helping to achieve high perceptual quality with low distortion. This incremental orthogonal projection discrimination ensures a precise coarse-to-fine optimization procedure and avoids dependence on perceptual regularization. We conduct experiments on the CelebA and FFHQ face datasets. Qualitative visual results and quantitative evaluation demonstrate the superior performance of our model over related works.
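A rough sketch of projecting samples onto a growing number of principal components, which is the quantity an incremental orthogonal projection discriminator would see, from coarse structure (few components) to fine detail (many components). Names and shapes here are illustrative:

```python
import numpy as np

def pca_basis(data):
    # columns of V are orthonormal principal directions of the centered data
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt.T

def incremental_projection(x, basis, k):
    # project x onto the first k principal components; small k captures
    # coarse structure, large k adds fine detail
    return x @ basis[:, :k]

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 8))     # stand-in for flattened face vectors
V = pca_basis(data)

x = rng.normal(size=(1, 8))
coarse = incremental_projection(x, V, 2)   # early training: structure only
fine = incremental_projection(x, V, 8)     # late training: full detail
```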
2403.00584
Ghazal Fazelnia Ph.D.
Ghazal Fazelnia, Sanket Gupta, Claire Keum, Mark Koh, Ian Anderson, and Mounia Lalmas
Generalized User Representations for Transfer Learning
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel framework for user representation in large-scale recommender systems, aiming at effectively representing diverse user taste in a generalized manner. Our approach employs a two-stage methodology combining representation learning and transfer learning. The representation learning model uses an autoencoder that compresses various user features into a representation space. In the second stage, downstream task-specific models leverage user representations via transfer learning instead of curating user features individually. We further augment this methodology on the representation's input features to increase flexibility and enable reaction to user events, including new user experiences, in Near-Real Time. Additionally, we propose a novel solution to manage deployment of this framework in production models, allowing downstream models to work independently. We validate the performance of our framework through rigorous offline and online experiments within a large-scale system, showcasing its remarkable efficacy across multiple evaluation tasks. Finally, we show how the proposed framework can significantly reduce infrastructure costs compared to alternative approaches.
[ { "created": "Fri, 1 Mar 2024 15:05:21 GMT", "version": "v1" } ]
2024-03-04
[ [ "Fazelnia", "Ghazal", "" ], [ "Gupta", "Sanket", "" ], [ "Keum", "Claire", "" ], [ "Koh", "Mark", "" ], [ "Anderson", "Ian", "" ], [ "Lalmas", "Mounia", "" ] ]
We present a novel framework for user representation in large-scale recommender systems, aiming to represent diverse user tastes effectively and in a generalized manner. Our approach employs a two-stage methodology combining representation learning and transfer learning. The representation learning model uses an autoencoder that compresses various user features into a representation space. In the second stage, downstream task-specific models leverage the user representations via transfer learning instead of curating user features individually. We further augment the representation's input features to increase flexibility and enable reaction to user events, including new user experiences, in near-real time. Additionally, we propose a novel solution for managing the deployment of this framework in production models, allowing downstream models to work independently. We validate the performance of our framework through rigorous offline and online experiments within a large-scale system, showcasing its remarkable efficacy across multiple evaluation tasks. Finally, we show how the proposed framework can significantly reduce infrastructure costs compared to alternative approaches.
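The two-stage shape of such a framework can be sketched with linear stand-ins for the trained models (all weights below are random placeholders, purely illustrative of the data flow, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# stage 1 (sketch): an "autoencoder" encoder that compresses raw user
# features into a shared representation space; in practice these weights
# come from training the autoencoder
n_features, n_embed = 32, 8
W_enc = rng.normal(size=(n_features, n_embed))

def encode(user_features):
    return user_features @ W_enc

# stage 2 (sketch): a downstream task-specific model consumes the shared
# embedding via transfer learning instead of re-curating raw features
W_task = rng.normal(size=(n_embed, 1))

def downstream_score(user_features):
    return (encode(user_features) @ W_task).item()

user = rng.normal(size=(1, n_features))
score = downstream_score(user)
```

The point of the split is that many downstream models share one `encode`, so user features are curated once rather than per task.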
2101.04904
Ali Ayub
Ali Ayub, Alan R. Wagner
EEC: Learning to Encode and Regenerate Images for Continual Learning
Added link to the code in the paper. A preliminary version of this work was presented at ICML 2020 Workshop on Lifelong Machine Learning: arXiv:2007.06637
International Conference on Learning Representations (ICLR) 2021
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The two main impediments to continual learning are catastrophic forgetting and memory limitations on the storage of data. To cope with these challenges, we propose a novel, cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images. During training on a new task, reconstructed images from encoded episodes are replayed in order to avoid catastrophic forgetting. The loss function for the reconstructed images is weighted to reduce its effect during classifier training to cope with image degradation. When the system runs out of memory the encoded episodes are converted into centroids and covariance matrices, which are used to generate pseudo-images during classifier training, keeping classifier performance stable while using less memory. Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
[ { "created": "Wed, 13 Jan 2021 06:43:10 GMT", "version": "v1" }, { "created": "Thu, 14 Jan 2021 09:16:24 GMT", "version": "v2" }, { "created": "Mon, 5 Apr 2021 05:05:05 GMT", "version": "v3" }, { "created": "Sun, 2 May 2021 05:45:03 GMT", "version": "v4" } ]
2021-05-04
[ [ "Ayub", "Ali", "" ], [ "Wagner", "Alan R.", "" ] ]
The two main impediments to continual learning are catastrophic forgetting and memory limitations on the storage of data. To cope with these challenges, we propose a novel, cognitively inspired approach that trains autoencoders with Neural Style Transfer to encode and store images. During training on a new task, reconstructed images from encoded episodes are replayed in order to avoid catastrophic forgetting. The loss function for the reconstructed images is weighted to reduce its effect during classifier training, coping with image degradation. When the system runs out of memory, the encoded episodes are converted into centroids and covariance matrices, which are used to generate pseudo-images during classifier training, keeping classifier performance stable while using less memory. Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
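The memory fallback described above, replacing stored encodings with centroids and covariance matrices and then sampling pseudo-data from them, might look roughly like this (a sketch of the idea, not the paper's code; the Gaussian summary per class is the assumption):

```python
import numpy as np

def summarize_episodes(encodings):
    # when memory runs out: replace a class's stored encodings with a
    # centroid and covariance matrix
    mu = encodings.mean(axis=0)
    cov = np.cov(encodings, rowvar=False)
    return mu, cov

def sample_pseudo_encodings(mu, cov, n, rng):
    # pseudo-samples drawn from the Gaussian summary stand in for the
    # discarded encodings during classifier training
    return rng.multivariate_normal(mu, cov, size=n)

rng = np.random.default_rng(0)
stored = rng.normal(loc=3.0, size=(500, 4))   # stand-in for 4-d encodings of one class
mu, cov = summarize_episodes(stored)
pseudo = sample_pseudo_encodings(mu, cov, 100, rng)
```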
2406.18051
Zhengqing Yuan
Zhengqing Yuan, Rong Zhou, Hongyi Wang, Lifang He, Yanfang Ye, Lichao Sun
ViT-1.58b: Mobile Vision Transformers in the 1-bit Era
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Vision Transformers (ViTs) have achieved remarkable performance in various image classification tasks by leveraging the attention mechanism to process image patches as tokens. However, the high computational and memory demands of ViTs pose significant challenges for deployment in resource-constrained environments. This paper introduces ViT-1.58b, a novel 1.58-bit quantized ViT model designed to drastically reduce memory and computational overhead while preserving competitive performance. ViT-1.58b employs ternary quantization, which refines the balance between efficiency and accuracy by constraining weights to {-1, 0, 1} and quantizing activations to 8-bit precision. Our approach ensures efficient scaling in terms of both memory and computation. Experiments on CIFAR-10 and ImageNet-1k demonstrate that ViT-1.58b maintains comparable accuracy to full-precision Vit, with significant reductions in memory usage and computational costs. This paper highlights the potential of extreme quantization techniques in developing sustainable AI solutions and contributes to the broader discourse on efficient model deployment in practical applications. Our code and weights are available at https://github.com/DLYuanGod/ViT-1.58b.
[ { "created": "Wed, 26 Jun 2024 04:01:19 GMT", "version": "v1" } ]
2024-06-27
[ [ "Yuan", "Zhengqing", "" ], [ "Zhou", "Rong", "" ], [ "Wang", "Hongyi", "" ], [ "He", "Lifang", "" ], [ "Ye", "Yanfang", "" ], [ "Sun", "Lichao", "" ] ]
Vision Transformers (ViTs) have achieved remarkable performance in various image classification tasks by leveraging the attention mechanism to process image patches as tokens. However, the high computational and memory demands of ViTs pose significant challenges for deployment in resource-constrained environments. This paper introduces ViT-1.58b, a novel 1.58-bit quantized ViT model designed to drastically reduce memory and computational overhead while preserving competitive performance. ViT-1.58b employs ternary quantization, which refines the balance between efficiency and accuracy by constraining weights to {-1, 0, 1} and quantizing activations to 8-bit precision. Our approach ensures efficient scaling in terms of both memory and computation. Experiments on CIFAR-10 and ImageNet-1k demonstrate that ViT-1.58b maintains comparable accuracy to the full-precision ViT, with significant reductions in memory usage and computational costs. This paper highlights the potential of extreme quantization techniques in developing sustainable AI solutions and contributes to the broader discourse on efficient model deployment in practical applications. Our code and weights are available at https://github.com/DLYuanGod/ViT-1.58b.
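A minimal sketch of ternary weight quantization with 8-bit activations as described in the abstract (the absmean scaling rule is an assumption borrowed from common 1.58-bit recipes, not necessarily the paper's exact choice):

```python
import numpy as np

def ternarize(w, eps=1e-8):
    # constrain weights to {-1, 0, 1} with a per-tensor scale (absmean)
    scale = np.abs(w).mean() + eps
    return np.clip(np.round(w / scale), -1, 1), scale

def quantize_activations_8bit(x, eps=1e-8):
    # symmetric 8-bit quantization of activations
    s = np.abs(x).max() / 127.0 + eps
    return np.clip(np.round(x / s), -127, 127).astype(np.int8), s

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
wq, ws = ternarize(w)

x = rng.normal(size=(1, 4))
xq, xs = quantize_activations_8bit(x)

# dequantized matmul approximates the full-precision product x @ w
approx = (xq.astype(float) * xs) @ (wq * ws)
```

Because the ternary weights need only log2(3) ≈ 1.58 bits each, storage drops by roughly 20x versus float32 before any activation savings.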
2304.12125
Iacopo Catalano
Iacopo Catalano, Ha Sier, Xianjia Yu, Tomi Westerlund, Jorge Pena Queralta
UAV Tracking with Solid-State Lidars: Dynamic Multi-Frequency Scan Integration
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing use of drones across various industries, the navigation and tracking of these unmanned aerial vehicles (UAVs) in challenging environments (such as GNSS-denied environments) have become critical issues. In this paper, we propose a novel method for a ground-based UAV tracking system using a solid-state LiDAR, which dynamically adjusts the LiDAR frame integration time based on the distance to the UAV and its speed. Our method fuses two simultaneous scan integration frequencies for high accuracy and persistent tracking, enabling reliable estimates of the UAV state even in challenging scenarios. The use of the Inverse Covariance Intersection method and Kalman filters allow for better tracking accuracy and can handle challenging tracking scenarios. We have performed a number of experiments for evaluating the performance of the proposed tracking system and identifying its limitations. Our experimental results demonstrate that the proposed method achieves comparable tracking performance to the established baseline method, while also providing more reliable and accurate tracking when only one of the frequencies is available or unreliable.
[ { "created": "Mon, 24 Apr 2023 14:30:20 GMT", "version": "v1" }, { "created": "Tue, 18 Jul 2023 12:05:55 GMT", "version": "v2" } ]
2023-07-19
[ [ "Catalano", "Iacopo", "" ], [ "Sier", "Ha", "" ], [ "Yu", "Xianjia", "" ], [ "Westerlund", "Tomi", "" ], [ "Queralta", "Jorge Pena", "" ] ]
With the increasing use of drones across various industries, navigating and tracking these unmanned aerial vehicles (UAVs) in challenging environments (such as GNSS-denied environments) have become critical issues. In this paper, we propose a novel method for a ground-based UAV tracking system using a solid-state LiDAR, which dynamically adjusts the LiDAR frame integration time based on the distance to the UAV and its speed. Our method fuses two simultaneous scan integration frequencies for high accuracy and persistent tracking, enabling reliable estimates of the UAV state even in challenging scenarios. The use of the Inverse Covariance Intersection method and Kalman filters allows for better tracking accuracy and handles challenging tracking scenarios. We have performed a number of experiments to evaluate the performance of the proposed tracking system and identify its limitations. Our experimental results demonstrate that the proposed method achieves tracking performance comparable to the established baseline method, while providing more reliable and accurate tracking when only one of the frequencies is available or unreliable.
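Fusing the two scan-integration tracks can be illustrated with standard covariance intersection, used here as a simplified stand-in for the Inverse Covariance Intersection method the paper employs (all numbers are toy values; a fixed omega replaces the usual optimization over it):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    # fuse two track estimates whose cross-correlation is unknown: blend
    # the inverse covariances, then the information-weighted states
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1i + (1 - omega) * P2i)
    x = P @ (omega * P1i @ x1 + (1 - omega) * P2i @ x2)
    return x, P

# UAV position estimates from the fast and slow scan-integration tracks
x_fast, P_fast = np.array([1.0, 2.0]), np.diag([0.5, 0.5])   # noisier
x_slow, P_slow = np.array([1.2, 1.8]), np.diag([0.1, 0.1])   # more accurate

x_fused, P_fused = covariance_intersection(x_fast, P_fast, x_slow, P_slow)
```

The fused state lands between the two inputs but closer to the low-covariance (slow, accurate) track, which is the behavior that keeps tracking reliable when one frequency degrades.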
1805.04015
Zheng Li Ms
Zheng Li, Sheng Yang, Thierry Clessienne
Exploiting Location Information to Enhance Throughput in Downlink V2I Systems
This work has been submitted to Globecom 2018 for possible publication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vehicle-to-Infrastructure (V2I) technology, combined with millimeter wave (mmW) networks, may support high data rates for vehicular communication and therefore provides a whole new set of services. However, in dense urban environment, pedestrians or buildings cause strong blockage to the narrow beams at mmW, severely deteriorating the transmission rate. In this work, we model the downlink mmW V2I system as a simple erasure broadcast channel where the erasure (blockage) event is considered as the state of the channel. While state feedback can be obtained through protocols such as Automatic Repeat reQuest (ARQ), we also assume that the current state can be estimated through the location information shared by the communication peers. We evaluate, through an information-theoretic approach, the achievable downlink rate in such a system. Despite its highly theoretical nature, our result sheds light on how much the location information can contribute to improve the downlink date rate, e.g., as a function of the mobility (velocity) of the vehicles.
[ { "created": "Thu, 10 May 2018 15:05:05 GMT", "version": "v1" } ]
2018-05-11
[ [ "Li", "Zheng", "" ], [ "Yang", "Sheng", "" ], [ "Clessienne", "Thierry", "" ] ]
Vehicle-to-Infrastructure (V2I) technology, combined with millimeter-wave (mmW) networks, may support high data rates for vehicular communication and therefore enable a whole new set of services. However, in dense urban environments, pedestrians or buildings cause strong blockage of the narrow beams at mmW frequencies, severely deteriorating the transmission rate. In this work, we model the downlink mmW V2I system as a simple erasure broadcast channel where the erasure (blockage) event is considered the state of the channel. While state feedback can be obtained through protocols such as Automatic Repeat reQuest (ARQ), we also assume that the current state can be estimated through the location information shared by the communication peers. We evaluate, through an information-theoretic approach, the achievable downlink rate in such a system. Despite its highly theoretical nature, our result sheds light on how much the location information can contribute to improving the downlink data rate, e.g., as a function of the mobility (velocity) of the vehicles.
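A toy simulation of why knowing the blockage state helps: with two users under independent blockage, a scheduler that knows the current state serves an unblocked user and approaches a throughput of 1 - e^2 per slot, versus 1 - e for a blind one. This is only an illustration of the intuition, not the paper's information-theoretic model:

```python
import random

def simulate_throughput(e, know_state, n_slots=20000, seed=0):
    # erasure broadcast toy: each of two users is blocked independently with
    # probability e in every slot; "know_state" mimics location-based
    # blockage prediction available at the transmitter
    rng = random.Random(seed)
    delivered = 0
    for _ in range(n_slots):
        blocked = [rng.random() < e, rng.random() < e]
        if know_state:
            pick = 0 if not blocked[0] else 1   # schedule an unblocked user if any
        else:
            pick = rng.randrange(2)             # blind scheduling
        if not blocked[pick]:
            delivered += 1
    return delivered / n_slots

blind = simulate_throughput(0.3, know_state=False)     # expected ~ 1 - e = 0.70
informed = simulate_throughput(0.3, know_state=True)   # expected ~ 1 - e^2 = 0.91
```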
1807.01182
Ayushi Dalmia
Ayushi Dalmia, Sachindra Joshi, Raghavendra Singh, Vikas Raykar
Styling with Attention to Details
null
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Fashion as characterized by its nature, is driven by style. In this paper, we propose a method that takes into account the style information to complete a given set of selected fashion items with a complementary fashion item. Complementary items are those items that can be worn along with the selected items according to the style. Addressing this problem facilitates in automatically generating stylish fashion ensembles leading to a richer shopping experience for users. Recently, there has been a surge of online social websites where fashion enthusiasts post the outfit of the day and other users can like and comment on them. These posts contain a gold-mine of information about style. In this paper, we exploit these posts to train a deep neural network which captures style in an automated manner. We pose the problem of predicting complementary fashion items as a sequence to sequence problem where the input is the selected set of fashion items and the output is a complementary fashion item based on the style information learned by the model. We use the encoder decoder architecture to solve this problem of completing the set of fashion items. We evaluate the goodness of the proposed model through a variety of experiments. We empirically observe that our proposed model outperforms competitive baseline like apriori algorithm by ~28 in terms of accuracy for top-1 recommendation to complete the fashion ensemble. We also perform retrieval based experiments to understand the ability of the model to learn style and rank the complementary fashion items and find that using attention in our encoder decoder model helps in improving the mean reciprocal rank by ~24. Qualitatively we find the complementary fashion items generated by our proposed model are richer than the apriori algorithm.
[ { "created": "Tue, 3 Jul 2018 13:38:20 GMT", "version": "v1" } ]
2018-07-04
[ [ "Dalmia", "Ayushi", "" ], [ "Joshi", "Sachindra", "" ], [ "Singh", "Raghavendra", "" ], [ "Raykar", "Vikas", "" ] ]
Fashion, by its nature, is driven by style. In this paper, we propose a method that takes style information into account to complete a given set of selected fashion items with a complementary fashion item. Complementary items are those that can be worn along with the selected items according to the style. Addressing this problem facilitates automatically generating stylish fashion ensembles, leading to a richer shopping experience for users. Recently, there has been a surge of online social websites where fashion enthusiasts post the outfit of the day and other users can like and comment on them. These posts contain a gold mine of information about style. In this paper, we exploit these posts to train a deep neural network that captures style in an automated manner. We pose the prediction of complementary fashion items as a sequence-to-sequence problem where the input is the selected set of fashion items and the output is a complementary fashion item based on the style information learned by the model. We use an encoder-decoder architecture to solve this problem of completing the set of fashion items. We evaluate the goodness of the proposed model through a variety of experiments. We empirically observe that our proposed model outperforms a competitive baseline, the apriori algorithm, by ~28% in accuracy for top-1 recommendation to complete the fashion ensemble. We also perform retrieval-based experiments to understand the model's ability to learn style and rank complementary fashion items, and find that using attention in our encoder-decoder model helps improve the mean reciprocal rank by ~24%. Qualitatively, we find that the complementary fashion items generated by our proposed model are richer than those of the apriori algorithm.
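The attention step inside such an encoder-decoder can be sketched as dot-product attention over per-item encoder states (shapes, names, and the random embeddings are illustrative; the paper's network learns these):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attend(decoder_state, encoder_states):
    # dot-product attention over encoder states (one state per selected
    # fashion item); the context vector summarizes the partial outfit
    scores = encoder_states @ decoder_state
    weights = softmax(scores)
    context = weights @ encoder_states
    return weights, context

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(3, 4))   # 3 selected items, 4-d states
query = rng.normal(size=4)                  # decoder state while predicting the complement

weights, context = attend(query, item_embeddings)
```

The decoder would combine `context` with its own state to score candidate complementary items at each step.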
1911.05611
Junjiao Tian
Junjiao Tian, Wesley Cheung, Nathan Glaser, Yen-Cheng Liu, Zsolt Kira
UNO: Uncertainty-aware Noisy-Or Multimodal Fusion for Unanticipated Input Degradation
IEEE International Conference on Robotics and Automation (ICRA), 2020. IROS Workshop on the Importance of Uncertainty in Deep Learning for Robotics, 2019
null
null
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The fusion of multiple sensor modalities, especially through deep learning architectures, has been an active area of study. However, an under-explored aspect of such work is whether the methods can be robust to degradations across their input modalities, especially when they must generalize to degradations not seen during training. In this work, we propose an uncertainty-aware fusion scheme to effectively fuse inputs that might suffer from a range of known and unknown degradations. Specifically, we analyze a number of uncertainty measures, each of which captures a different aspect of uncertainty, and we propose a novel way to fuse degraded inputs by scaling modality-specific output softmax probabilities. We additionally propose a novel data-dependent spatial temperature scaling method to complement these existing uncertainty measures. Finally, we integrate the uncertainty-scaled output from each modality using a probabilistic noisy-or fusion method. In a photo-realistic simulation environment (AirSim), we show that our method achieves significantly better results on a semantic segmentation task, compared to state-of-art fusion architectures, on a range of degradations (e.g. fog, snow, frost, and various other types of noise), some of which are unknown during training. We specifically improve upon the state-of-art[1] by 28% in mean IoU on various degradations. [1] Abhinav Valada, Rohit Mohan, and Wolfram Burgard. Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. In: arXiv e-prints, arXiv:1808.03833 (Aug. 2018), arXiv:1808.03833. arXiv: 1808.03833 [cs.CV].
[ { "created": "Wed, 6 Nov 2019 09:42:04 GMT", "version": "v1" }, { "created": "Wed, 4 Mar 2020 03:39:54 GMT", "version": "v2" } ]
2020-03-05
[ [ "Tian", "Junjiao", "" ], [ "Cheung", "Wesley", "" ], [ "Glaser", "Nathan", "" ], [ "Liu", "Yen-Cheng", "" ], [ "Kira", "Zsolt", "" ] ]
The fusion of multiple sensor modalities, especially through deep learning architectures, has been an active area of study. However, an under-explored aspect of such work is whether the methods can be robust to degradations across their input modalities, especially when they must generalize to degradations not seen during training. In this work, we propose an uncertainty-aware fusion scheme to effectively fuse inputs that might suffer from a range of known and unknown degradations. Specifically, we analyze a number of uncertainty measures, each of which captures a different aspect of uncertainty, and we propose a novel way to fuse degraded inputs by scaling modality-specific output softmax probabilities. We additionally propose a novel data-dependent spatial temperature scaling method to complement these existing uncertainty measures. Finally, we integrate the uncertainty-scaled output from each modality using a probabilistic noisy-or fusion method. In a photo-realistic simulation environment (AirSim), we show that our method achieves significantly better results on a semantic segmentation task, compared to state-of-the-art fusion architectures, on a range of degradations (e.g., fog, snow, frost, and various other types of noise), some of which are unknown during training. We specifically improve upon the state of the art [1] by 28% in mean IoU across various degradations. [1] Abhinav Valada, Rohit Mohan, and Wolfram Burgard. Self-Supervised Model Adaptation for Multimodal Semantic Segmentation. arXiv e-prints, arXiv:1808.03833 (Aug. 2018). arXiv: 1808.03833 [cs.CV].
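Temperature-scaled softmax outputs combined by a probabilistic noisy-or can be sketched as follows (the logits and temperatures are toy values; the paper's uncertainty measures that set the temperatures are more elaborate):

```python
import numpy as np

def scaled_softmax(logits, temperature):
    # a higher temperature (more uncertainty) flattens a modality's prediction,
    # reducing its influence on the fused result
    z = logits / temperature
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def noisy_or(probs_per_modality):
    # probabilistic noisy-or: a class "fires" if any modality supports it
    p = 1.0 - np.prod(1.0 - np.stack(probs_per_modality), axis=0)
    return p / p.sum()   # renormalize to a distribution over classes

rgb_logits = np.array([2.0, 0.5, 0.1])
depth_logits = np.array([0.2, 0.3, 2.5])

p_rgb = scaled_softmax(rgb_logits, temperature=1.0)      # confident modality
p_depth = scaled_softmax(depth_logits, temperature=5.0)  # degraded, down-weighted

fused = noisy_or([p_rgb, p_depth])
```

With the degraded depth branch flattened by its temperature, the confident RGB branch dominates the fused prediction.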
2408.06021
Xu Long
Long Xu, Shanghong Li, Yongquan Chen, Junkang Chen, Rui Huang, Feng Wu
ClickAttention: Click Region Similarity Guided Interactive Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive segmentation algorithms based on click points have garnered significant attention from researchers in recent years. However, existing studies typically use sparse click maps as model inputs to segment specific target objects, which primarily affect local regions and have limited abilities to focus on the whole target object, leading to increased times of clicks. In addition, most existing algorithms can not balance well between high performance and efficiency. To address this issue, we propose a click attention algorithm that expands the influence range of positive clicks based on the similarity between positively-clicked regions and the whole input. We also propose a discriminative affinity loss to reduce the attention coupling between positive and negative click regions to avoid an accuracy decrease caused by mutual interference between positive and negative clicks. Extensive experiments demonstrate that our approach is superior to existing methods and achieves cutting-edge performance in fewer parameters. An interactive demo and all reproducible codes will be released at https://github.com/hahamyt/ClickAttention.
[ { "created": "Mon, 12 Aug 2024 09:21:15 GMT", "version": "v1" }, { "created": "Tue, 13 Aug 2024 02:26:09 GMT", "version": "v2" } ]
2024-08-14
[ [ "Xu", "Long", "" ], [ "Li", "Shanghong", "" ], [ "Chen", "Yongquan", "" ], [ "Chen", "Junkang", "" ], [ "Huang", "Rui", "" ], [ "Wu", "Feng", "" ] ]
Interactive segmentation algorithms based on click points have garnered significant attention from researchers in recent years. However, existing studies typically use sparse click maps as model inputs to segment specific target objects; these maps primarily affect local regions and have a limited ability to focus on the whole target object, requiring more clicks. In addition, most existing algorithms cannot balance high performance and efficiency well. To address this issue, we propose a click-attention algorithm that expands the influence range of positive clicks based on the similarity between positively-clicked regions and the whole input. We also propose a discriminative affinity loss to reduce the attention coupling between positive and negative click regions, avoiding the accuracy decrease caused by mutual interference between positive and negative clicks. Extensive experiments demonstrate that our approach is superior to existing methods and achieves cutting-edge performance with fewer parameters. An interactive demo and all reproducible code will be released at https://github.com/hahamyt/ClickAttention.
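Expanding a positive click's influence by feature similarity, the intuition behind the click attention above, might be sketched as a cosine-similarity map between the clicked pixel's feature and every other pixel (`click_similarity_map` is a hypothetical helper, not the paper's module):

```python
import numpy as np

def click_similarity_map(features, click_yx):
    # cosine similarity between the positively-clicked pixel's feature and
    # every pixel, letting the click influence far-away but similar regions
    h, w, c = features.shape
    q = features[click_yx]                       # feature vector at the click
    flat = features.reshape(-1, c)
    sims = flat @ q / (np.linalg.norm(flat, axis=1) * np.linalg.norm(q) + 1e-8)
    return sims.reshape(h, w)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))
feats[3, 3] = feats[0, 0]                        # a distant pixel with a matching feature

sim = click_similarity_map(feats, (0, 0))        # click at the top-left pixel
```

Even though pixel (3, 3) is far from the click, its matching feature gives it a similarity near 1, so the click's influence reaches it.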
1804.07655
Andreas Steyven
Emma Hart, Andreas S.W. Steyven, Ben Paechter
Evolution of a Functionally Diverse Swarm via a Novel Decentralised Quality-Diversity Algorithm
In GECCO 2018
null
10.1145/3205455.3205481
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The presence of functional diversity within a group has been demonstrated to lead to greater robustness, higher performance and increased problem-solving ability in a broad range of studies that includes insect groups, human groups and swarm robotics. Evolving group diversity however has proved challenging within Evolutionary Robotics, requiring reproductive isolation and careful attention to population size and selection mechanisms. To tackle this issue, we introduce a novel, decentralised, variant of the MAP-Elites illumination algorithm which is hybridised with a well-known distributed evolutionary algorithm (mEDEA). The algorithm simultaneously evolves multiple diverse behaviours for multiple robots, with respect to a simple token-gathering task. Each robot in the swarm maintains a local archive defined by two pre-specified functional traits which is shared with robots it come into contact with. We investigate four different strategies for sharing, exploiting and combining local archives and compare results to mEDEA. Experimental results show that in contrast to previous claims, it is possible to evolve a functionally diverse swarm without geographical isolation, and that the new method outperforms mEDEA in terms of the diversity, coverage and precision of the evolved swarm.
[ { "created": "Fri, 20 Apr 2018 14:57:26 GMT", "version": "v1" } ]
2018-04-23
[ [ "Hart", "Emma", "" ], [ "Steyven", "Andreas S. W.", "" ], [ "Paechter", "Ben", "" ] ]
The presence of functional diversity within a group has been demonstrated to lead to greater robustness, higher performance and increased problem-solving ability in a broad range of studies that includes insect groups, human groups and swarm robotics. Evolving group diversity, however, has proved challenging within Evolutionary Robotics, requiring reproductive isolation and careful attention to population size and selection mechanisms. To tackle this issue, we introduce a novel decentralised variant of the MAP-Elites illumination algorithm which is hybridised with a well-known distributed evolutionary algorithm (mEDEA). The algorithm simultaneously evolves multiple diverse behaviours for multiple robots, with respect to a simple token-gathering task. Each robot in the swarm maintains a local archive defined by two pre-specified functional traits, which is shared with robots it comes into contact with. We investigate four different strategies for sharing, exploiting and combining local archives and compare results to mEDEA. Experimental results show that, in contrast to previous claims, it is possible to evolve a functionally diverse swarm without geographical isolation, and that the new method outperforms mEDEA in terms of the diversity, coverage and precision of the evolved swarm.
1602.04921
Weiyao Lin
Weiyao Lin, Yang Mi, Weiyue Wang, Jianxin Wu, Jingdong Wang, Tao Mei
A diffusion and clustering-based approach for finding coherent motions and understanding crowd scenes
This manuscript is the accepted version for TIP (IEEE Transactions on Image Processing), 2016
null
10.1109/TIP.2016.2531281
null
cs.CV cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of detecting coherent motions in crowd scenes and presents its two applications in crowd scene understanding: semantic region detection and recurrent activity mining. It processes input motion fields (e.g., optical flow fields) and produces a coherent motion field, named the thermal energy field. The thermal energy field is able to capture both the motion correlation among particles and the motion trends of individual particles, which are helpful for discovering coherency among them. We further introduce a two-step clustering process to construct stable semantic regions from the extracted time-varying coherent motions. These semantic regions can be used to recognize pre-defined activities in crowd scenes. Finally, we introduce a cluster-and-merge process which automatically discovers recurrent activities in crowd scenes by clustering and merging the extracted coherent motions. Experiments on various videos demonstrate the effectiveness of our approach.
[ { "created": "Tue, 16 Feb 2016 06:25:30 GMT", "version": "v1" } ]
2016-04-20
[ [ "Lin", "Weiyao", "" ], [ "Mi", "Yang", "" ], [ "Wang", "Weiyue", "" ], [ "Wu", "Jianxin", "" ], [ "Wang", "Jingdong", "" ], [ "Mei", "Tao", "" ] ]
This paper addresses the problem of detecting coherent motions in crowd scenes and presents its two applications in crowd scene understanding: semantic region detection and recurrent activity mining. It processes input motion fields (e.g., optical flow fields) and produces a coherent motion field, named the thermal energy field. The thermal energy field is able to capture both the motion correlation among particles and the motion trends of individual particles, which are helpful for discovering coherency among them. We further introduce a two-step clustering process to construct stable semantic regions from the extracted time-varying coherent motions. These semantic regions can be used to recognize pre-defined activities in crowd scenes. Finally, we introduce a cluster-and-merge process which automatically discovers recurrent activities in crowd scenes by clustering and merging the extracted coherent motions. Experiments on various videos demonstrate the effectiveness of our approach.
2101.03552
Andreas Kirsch
Andreas Kirsch, Yarin Gal
PowerEvaluationBALD: Efficient Evaluation-Oriented Deep (Bayesian) Active Learning with Stochastic Acquisition Functions
null
null
null
null
cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop BatchEvaluationBALD, a new acquisition function for deep Bayesian active learning, as an expansion of BatchBALD that takes into account an evaluation set of unlabeled data, for example, the pool set. We also develop a variant for the non-Bayesian setting, which we call Evaluation Information Gain. To reduce computational requirements and allow these methods to scale to larger acquisition batch sizes, we introduce stochastic acquisition functions that use importance sampling of tempered acquisition scores. We call this method PowerEvaluationBALD. We show in a few initial experiments that PowerEvaluationBALD works on par with BatchEvaluationBALD, which outperforms BatchBALD on Repeated MNIST (MNISTx2), while massively reducing the computational requirements compared to BatchBALD or BatchEvaluationBALD.
[ { "created": "Sun, 10 Jan 2021 13:46:45 GMT", "version": "v1" }, { "created": "Mon, 10 May 2021 19:27:20 GMT", "version": "v2" } ]
2021-05-12
[ [ "Kirsch", "Andreas", "" ], [ "Gal", "Yarin", "" ] ]
We develop BatchEvaluationBALD, a new acquisition function for deep Bayesian active learning, as an expansion of BatchBALD that takes into account an evaluation set of unlabeled data, for example, the pool set. We also develop a variant for the non-Bayesian setting, which we call Evaluation Information Gain. To reduce computational requirements and allow these methods to scale to larger acquisition batch sizes, we introduce stochastic acquisition functions that use importance sampling of tempered acquisition scores. We call this method PowerEvaluationBALD. We show in a few initial experiments that PowerEvaluationBALD works on par with BatchEvaluationBALD, which outperforms BatchBALD on Repeated MNIST (MNISTx2), while massively reducing the computational requirements compared to BatchBALD or BatchEvaluationBALD.
1610.05009
Saurav Gupta
Saurav Gupta, Nitin Anand Shrivastava, Abbas Khosravi, Bijaya Ketan Panigrahi
Wind ramp event prediction with parallelized Gradient Boosted Regression Trees
IJCNN 2016
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Accurate prediction of wind ramp events is critical for ensuring the reliability and stability of power systems with high penetration of wind energy. This paper proposes a classification-based approach for estimating the future class of a wind ramp event based on certain thresholds. A parallelized gradient boosted regression tree based technique has been proposed to accurately classify both normal and rare extreme wind power ramp events. The model has been validated using wind power data obtained from the National Renewable Energy Laboratory database. Performance comparison with several benchmark techniques indicates the superiority of the proposed technique in terms of classification accuracy.
[ { "created": "Mon, 17 Oct 2016 08:29:57 GMT", "version": "v1" } ]
2016-10-18
[ [ "Gupta", "Saurav", "" ], [ "Shrivastava", "Nitin Anand", "" ], [ "Khosravi", "Abbas", "" ], [ "Panigrahi", "Bijaya Ketan", "" ] ]
Accurate prediction of wind ramp events is critical for ensuring the reliability and stability of power systems with high penetration of wind energy. This paper proposes a classification-based approach for estimating the future class of a wind ramp event based on certain thresholds. A parallelized gradient boosted regression tree based technique has been proposed to accurately classify both normal and rare extreme wind power ramp events. The model has been validated using wind power data obtained from the National Renewable Energy Laboratory database. Performance comparison with several benchmark techniques indicates the superiority of the proposed technique in terms of classification accuracy.
2001.03482
Te Sun Han
Te Sun Han and Masahide Sasaki
Wiretap channels with causal and non-causal state information: revisited
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The coding problem for wiretap channels with causal channel state information (CSI) available at the encoder (Alice) and/or the decoder (Bob) is studied. We are concerned here particularly with the problem of achievable secret-message secret-key rate pairs under the semantic security criterion. Our main result extends all the previous results on achievable rates as given by Chia and El Gamal [10], Fujita [11], and Han and Sasaki [23]. In order to do this, we first derive a unifying theorem (Theorem 2) with causal CSI at Alice, which follows immediately by leveraging the unifying seminal theorem for wiretap channels with non-causal CSI at Alice as recently established by Bunin et al. [22]. The only thing to do here is to re-interpret the latter non-causal one in a causal manner. A prominent feature of this approach is that we are able to dispense with the block-Markov encoding scheme as used in the previous works. Also, the exact secret-message key capacity region for wiretap channels with non-causal CSI at both Alice and Bob is given.
[ { "created": "Fri, 10 Jan 2020 14:44:10 GMT", "version": "v1" }, { "created": "Tue, 25 May 2021 18:31:06 GMT", "version": "v10" }, { "created": "Sun, 11 Jul 2021 11:18:59 GMT", "version": "v11" }, { "created": "Sun, 8 Aug 2021 19:09:59 GMT", "version": "v12" }, { "created": "Wed, 22 Jan 2020 05:08:15 GMT", "version": "v2" }, { "created": "Wed, 11 Mar 2020 06:24:14 GMT", "version": "v3" }, { "created": "Sun, 23 Aug 2020 17:27:14 GMT", "version": "v4" }, { "created": "Tue, 13 Oct 2020 10:14:56 GMT", "version": "v5" }, { "created": "Wed, 4 Nov 2020 16:30:26 GMT", "version": "v6" }, { "created": "Thu, 5 Nov 2020 05:45:10 GMT", "version": "v7" }, { "created": "Thu, 3 Dec 2020 05:24:47 GMT", "version": "v8" }, { "created": "Tue, 23 Feb 2021 09:12:50 GMT", "version": "v9" } ]
2021-08-10
[ [ "Han", "Te Sun", "" ], [ "Sasaki", "Masahide", "" ] ]
The coding problem for wiretap channels with causal channel state information (CSI) available at the encoder (Alice) and/or the decoder (Bob) is studied. We are concerned here particularly with the problem of achievable secret-message secret-key rate pairs under the semantic security criterion. Our main result extends all the previous results on achievable rates as given by Chia and El Gamal [10], Fujita [11], and Han and Sasaki [23]. In order to do this, we first derive a unifying theorem (Theorem 2) with causal CSI at Alice, which follows immediately by leveraging the unifying seminal theorem for wiretap channels with non-causal CSI at Alice as recently established by Bunin et al. [22]. The only thing to do here is to re-interpret the latter non-causal one in a causal manner. A prominent feature of this approach is that we are able to dispense with the block-Markov encoding scheme as used in the previous works. Also, the exact secret-message key capacity region for wiretap channels with non-causal CSI at both Alice and Bob is given.
2108.10118
Thomas Wendler
Markus Kr\"onke, Christine Eilers, Desislava Dimova, Melanie K\"ohler, Gabriel Buschner, Lilit Mirzojan, Lemonia Konstantinidou, Marcus R. Makowski, James Nagarajah, Nassir Navab, Wolfgang Weber, Thomas Wendler
Tracked 3D Ultrasound and Deep Neural Network-based Thyroid Segmentation reduce Interobserver Variability in Thyroid Volumetry
7 figures, 19 pages, under review
null
10.1371/journal.pone.0268550
null
cs.CV cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Thyroid volumetry is crucial in the diagnosis, treatment and monitoring of thyroid diseases. However, conventional thyroid volumetry with 2D ultrasound is highly operator-dependent. This study compares 2D ultrasound and tracked 3D ultrasound with an automatic thyroid segmentation based on a deep neural network regarding inter- and intraobserver variability, time and accuracy. The volume reference was MRI. Methods: 28 healthy volunteers were scanned with 2D and 3D ultrasound as well as by MRI. Three physicians (MD 1, 2, 3) with different levels of experience (6, 4 and 1 years) performed three 2D ultrasound and three tracked 3D ultrasound scans on each volunteer. In the 2D scans the thyroid lobe volumes were calculated with the ellipsoid formula. A convolutional deep neural network (CNN) segmented the 3D thyroid lobes automatically. On MRI (T1 VIBE sequence) the thyroid was manually segmented by an experienced medical doctor. Results: The CNN was trained to obtain a Dice score of 0.94. The interobserver variability comparing two MDs showed mean differences for 2D and 3D respectively of 0.58 ml to 0.52 ml (MD1 vs. 2), -1.33 ml to -0.17 ml (MD1 vs. 3) and -1.89 ml to -0.70 ml (MD2 vs. 3). Paired samples t-tests showed significant differences in two comparisons for 2D and none for 3D. Intraobserver variability was similar for 2D and 3D ultrasound. Comparison of ultrasound volumes and MRI volumes by paired samples t-tests showed a significant difference for the 2D volumetry of all MDs, and no significant difference for 3D ultrasound. Acquisition time was significantly shorter for 3D ultrasound. Conclusion: Tracked 3D ultrasound combined with a CNN segmentation significantly reduces interobserver variability in thyroid volumetry and increases the accuracy of the measurements with shorter acquisition times.
[ { "created": "Tue, 10 Aug 2021 23:28:27 GMT", "version": "v1" } ]
2022-10-12
[ [ "Krönke", "Markus", "" ], [ "Eilers", "Christine", "" ], [ "Dimova", "Desislava", "" ], [ "Köhler", "Melanie", "" ], [ "Buschner", "Gabriel", "" ], [ "Mirzojan", "Lilit", "" ], [ "Konstantinidou", "Lemonia", "" ], [ "Makowski", "Marcus R.", "" ], [ "Nagarajah", "James", "" ], [ "Navab", "Nassir", "" ], [ "Weber", "Wolfgang", "" ], [ "Wendler", "Thomas", "" ] ]
Background: Thyroid volumetry is crucial in the diagnosis, treatment and monitoring of thyroid diseases. However, conventional thyroid volumetry with 2D ultrasound is highly operator-dependent. This study compares 2D ultrasound and tracked 3D ultrasound with an automatic thyroid segmentation based on a deep neural network regarding inter- and intraobserver variability, time and accuracy. The volume reference was MRI. Methods: 28 healthy volunteers were scanned with 2D and 3D ultrasound as well as by MRI. Three physicians (MD 1, 2, 3) with different levels of experience (6, 4 and 1 years) performed three 2D ultrasound and three tracked 3D ultrasound scans on each volunteer. In the 2D scans the thyroid lobe volumes were calculated with the ellipsoid formula. A convolutional deep neural network (CNN) segmented the 3D thyroid lobes automatically. On MRI (T1 VIBE sequence) the thyroid was manually segmented by an experienced medical doctor. Results: The CNN was trained to obtain a Dice score of 0.94. The interobserver variability comparing two MDs showed mean differences for 2D and 3D respectively of 0.58 ml to 0.52 ml (MD1 vs. 2), -1.33 ml to -0.17 ml (MD1 vs. 3) and -1.89 ml to -0.70 ml (MD2 vs. 3). Paired samples t-tests showed significant differences in two comparisons for 2D and none for 3D. Intraobserver variability was similar for 2D and 3D ultrasound. Comparison of ultrasound volumes and MRI volumes by paired samples t-tests showed a significant difference for the 2D volumetry of all MDs, and no significant difference for 3D ultrasound. Acquisition time was significantly shorter for 3D ultrasound. Conclusion: Tracked 3D ultrasound combined with a CNN segmentation significantly reduces interobserver variability in thyroid volumetry and increases the accuracy of the measurements with shorter acquisition times.
2408.05797
Mahta Zamanizadeh
Mandana Farhang Ghahfarokhi, Seyed Hossein Sonbolestan, Mahta Zamanizadeh
A Comparative Study of Convolutional and Recurrent Neural Networks for Storm Surge Prediction in Tampa Bay
null
null
null
null
cs.LG physics.ao-ph
http://creativecommons.org/licenses/by/4.0/
In this paper, we compare the performance of three common deep learning architectures, CNN-LSTM, LSTM, and 3D-CNN, in the context of surrogate storm surge modeling. The study site for this paper is the Tampa Bay area in Florida. Using high-resolution atmospheric data from the reanalysis models and historical water level data from NOAA tide stations, we trained and tested these models to evaluate their performance. Our findings indicate that the CNN-LSTM model outperforms the other architectures, achieving a test loss of 0.010 and an R-squared (R2) score of 0.84. The LSTM model, although it achieved the lowest training loss of 0.007 and the highest training R2 of 0.88, exhibited poorer generalization with a test loss of 0.014 and an R2 of 0.77. The 3D-CNN model showed reasonable performance with a test loss of 0.011 and an R2 of 0.82 but displayed instability under extreme conditions. A case study on Hurricane Ian, which caused a significant negative surge of -1.5 meters in Tampa Bay, indicates the CNN-LSTM model's robustness and accuracy in extreme scenarios.
[ { "created": "Sun, 11 Aug 2024 15:12:21 GMT", "version": "v1" } ]
2024-08-13
[ [ "Ghahfarokhi", "Mandana Farhang", "" ], [ "Sonbolestan", "Seyed Hossein", "" ], [ "Zamanizadeh", "Mahta", "" ] ]
In this paper, we compare the performance of three common deep learning architectures, CNN-LSTM, LSTM, and 3D-CNN, in the context of surrogate storm surge modeling. The study site for this paper is the Tampa Bay area in Florida. Using high-resolution atmospheric data from the reanalysis models and historical water level data from NOAA tide stations, we trained and tested these models to evaluate their performance. Our findings indicate that the CNN-LSTM model outperforms the other architectures, achieving a test loss of 0.010 and an R-squared (R2) score of 0.84. The LSTM model, although it achieved the lowest training loss of 0.007 and the highest training R2 of 0.88, exhibited poorer generalization with a test loss of 0.014 and an R2 of 0.77. The 3D-CNN model showed reasonable performance with a test loss of 0.011 and an R2 of 0.82 but displayed instability under extreme conditions. A case study on Hurricane Ian, which caused a significant negative surge of -1.5 meters in Tampa Bay, indicates the CNN-LSTM model's robustness and accuracy in extreme scenarios.
0810.1823
Christophe Paul
Emeric Gioan and Christophe Paul
Split decomposition and graph-labelled trees: characterizations and fully-dynamic algorithms for totally decomposable graphs
extended abstract appeared in ISAAC 2007: Dynamic distance hereditary graphs using split decompositon. In International Symposium on Algorithms and Computation - ISAAC. Number 4835 in Lecture Notes, pages 41-51, 2007
null
null
null
cs.DM cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we revisit the split decomposition of graphs and give new combinatorial and algorithmic results for the class of totally decomposable graphs, also known as the distance hereditary graphs, and for two non-trivial subclasses, namely the cographs and the 3-leaf power graphs. Precisely, we give structural and incremental characterizations, leading to optimal fully-dynamic recognition algorithms for vertex and edge modifications, for each of these classes. These results rely on a new framework to represent the split decomposition, namely the graph-labelled trees, which also captures the modular decomposition of graphs and thereby unifies these two decomposition techniques. The point of the paper is to use bijections between these graph classes and trees whose nodes are labelled by cliques and stars. Doing so, we are also able to derive an intersection model for distance hereditary graphs, which answers an open problem.
[ { "created": "Fri, 10 Oct 2008 07:49:30 GMT", "version": "v1" }, { "created": "Mon, 18 Apr 2011 12:53:57 GMT", "version": "v2" } ]
2011-04-19
[ [ "Gioan", "Emeric", "" ], [ "Paul", "Christophe", "" ] ]
In this paper, we revisit the split decomposition of graphs and give new combinatorial and algorithmic results for the class of totally decomposable graphs, also known as the distance hereditary graphs, and for two non-trivial subclasses, namely the cographs and the 3-leaf power graphs. Precisely, we give structural and incremental characterizations, leading to optimal fully-dynamic recognition algorithms for vertex and edge modifications, for each of these classes. These results rely on a new framework to represent the split decomposition, namely the graph-labelled trees, which also captures the modular decomposition of graphs and thereby unifies these two decomposition techniques. The point of the paper is to use bijections between these graph classes and trees whose nodes are labelled by cliques and stars. Doing so, we are also able to derive an intersection model for distance hereditary graphs, which answers an open problem.
2311.02520
Jeremy Fineman
Jeremy T. Fineman
Single-Source Shortest Paths with Negative Real Weights in $\tilde{O}(mn^{8/9})$ Time
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a randomized algorithm for the problem of single-source shortest paths on directed graphs with real (both positive and negative) edge weights. Given an input graph with $n$ vertices and $m$ edges, the algorithm completes in $\tilde{O}(mn^{8/9})$ time with high probability. For real-weighted graphs, this result constitutes the first asymptotic improvement over the classic $O(mn)$-time algorithm variously attributed to Shimbel, Bellman, Ford, and Moore.
[ { "created": "Sat, 4 Nov 2023 22:35:39 GMT", "version": "v1" }, { "created": "Mon, 13 Nov 2023 15:54:07 GMT", "version": "v2" } ]
2023-11-14
[ [ "Fineman", "Jeremy T.", "" ] ]
This paper presents a randomized algorithm for the problem of single-source shortest paths on directed graphs with real (both positive and negative) edge weights. Given an input graph with $n$ vertices and $m$ edges, the algorithm completes in $\tilde{O}(mn^{8/9})$ time with high probability. For real-weighted graphs, this result constitutes the first asymptotic improvement over the classic $O(mn)$-time algorithm variously attributed to Shimbel, Bellman, Ford, and Moore.
cs/0601047
Juan J. Merelo Pr.
Lourdes Araujo and Juan J. Merelo
Automatic Detection of Trends in Dynamical Text: An Evolutionary Approach
22 pages, submitted to Journal of Information Retrieval
null
null
null
cs.IR cs.NE
null
This paper presents an evolutionary algorithm for modeling the arrival dates of document streams, a document stream being any time-stamped collection of documents, such as newscasts, e-mails, IRC conversations, scientific journal archives and weblog postings. This algorithm assigns frequencies (number of document arrivals per time unit) to time intervals so that it produces an optimal fit to the data. The optimization is a trade-off between accurately fitting the data and avoiding too many frequency changes; this way the analysis is able to find fits which ignore the noise. Classical dynamic programming algorithms are limited by memory and efficiency requirements, which can be a problem when dealing with long streams. This suggests exploring alternative search methods which allow for some degree of uncertainty to achieve tractability. Experiments have shown that the designed evolutionary algorithm is able to reach the same solution quality as those classical dynamic programming algorithms in a shorter time. We have also explored different probabilistic models to optimize the fitting of the date streams, and applied these algorithms to infer whether a new arrival increases or decreases {\em interest} in the topic the document stream is about.
[ { "created": "Thu, 12 Jan 2006 20:23:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Araujo", "Lourdes", "" ], [ "Merelo", "Juan J.", "" ] ]
This paper presents an evolutionary algorithm for modeling the arrival dates of document streams, a document stream being any time-stamped collection of documents, such as newscasts, e-mails, IRC conversations, scientific journal archives and weblog postings. This algorithm assigns frequencies (number of document arrivals per time unit) to time intervals so that it produces an optimal fit to the data. The optimization is a trade-off between accurately fitting the data and avoiding too many frequency changes; this way the analysis is able to find fits which ignore the noise. Classical dynamic programming algorithms are limited by memory and efficiency requirements, which can be a problem when dealing with long streams. This suggests exploring alternative search methods which allow for some degree of uncertainty to achieve tractability. Experiments have shown that the designed evolutionary algorithm is able to reach the same solution quality as those classical dynamic programming algorithms in a shorter time. We have also explored different probabilistic models to optimize the fitting of the date streams, and applied these algorithms to infer whether a new arrival increases or decreases {\em interest} in the topic the document stream is about.
2303.09377
Julian Hazell
Markus Anderljung and Julian Hazell
Protecting Society from AI Misuse: When are Restrictions on Capabilities Warranted?
14 pages, 1 figure
null
null
null
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to be used to automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can be traced back to their user, and the resources needed to develop them. We also contend that some restrictions on non-AI capabilities needed to cause harm will be required. Though capability restrictions risk reducing use more than misuse (facing an unfavorable Misuse-Use Tradeoff), we argue that interventions on capabilities are warranted when other interventions are insufficient, the potential harm from misuse is high, and there are targeted ways to intervene on capabilities. We provide a taxonomy of interventions that can reduce AI misuse, focusing on the specific steps required for a misuse to cause harm (the Misuse Chain), and a framework to determine if an intervention is warranted. We apply this reasoning to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.
[ { "created": "Thu, 16 Mar 2023 15:05:59 GMT", "version": "v1" }, { "created": "Fri, 17 Mar 2023 13:57:28 GMT", "version": "v2" }, { "created": "Wed, 29 Mar 2023 14:46:46 GMT", "version": "v3" } ]
2023-03-30
[ [ "Anderljung", "Markus", "" ], [ "Hazell", "Julian", "" ] ]
Artificial intelligence (AI) systems will increasingly be used to cause harm as they grow more capable. In fact, AI systems are already starting to be used to automate fraudulent activities, violate human rights, create harmful fake images, and identify dangerous toxins. To prevent some misuses of AI, we argue that targeted interventions on certain capabilities will be warranted. These restrictions may include controlling who can access certain types of AI models, what they can be used for, whether outputs are filtered or can be traced back to their user, and the resources needed to develop them. We also contend that some restrictions on non-AI capabilities needed to cause harm will be required. Though capability restrictions risk reducing use more than misuse (facing an unfavorable Misuse-Use Tradeoff), we argue that interventions on capabilities are warranted when other interventions are insufficient, the potential harm from misuse is high, and there are targeted ways to intervene on capabilities. We provide a taxonomy of interventions that can reduce AI misuse, focusing on the specific steps required for a misuse to cause harm (the Misuse Chain), and a framework to determine if an intervention is warranted. We apply this reasoning to three examples: predicting novel toxins, creating harmful images, and automating spear phishing campaigns.
2407.18470
Fan Xu
DaiFeng Li, Fan Xu
Synergizing Knowledge Graphs with Large Language Models: A Comprehensive Review and Future Prospects
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements have witnessed the ascension of Large Language Models (LLMs), endowed with prodigious linguistic capabilities, albeit marred by shortcomings including factual inconsistencies and opacity. Conversely, Knowledge Graphs (KGs) harbor verifiable knowledge and symbolic reasoning prowess, thereby complementing LLMs' deficiencies. Against this backdrop, the synergy between KGs and LLMs emerges as a pivotal research direction. Our contribution in this paper is a comprehensive dissection of the latest developments in integrating KGs with LLMs. Through meticulous analysis of their confluence points and methodologies, we introduce a unifying framework designed to elucidate and stimulate further exploration among scholars engaged in cognate disciplines. This framework serves a dual purpose: it consolidates extant knowledge while simultaneously delineating novel avenues for real-world deployment, thereby amplifying the translational impact of academic research.
[ { "created": "Fri, 26 Jul 2024 02:39:30 GMT", "version": "v1" } ]
2024-07-29
[ [ "Li", "DaiFeng", "" ], [ "Xu", "Fan", "" ] ]
Recent advancements have witnessed the ascension of Large Language Models (LLMs), endowed with prodigious linguistic capabilities, albeit marred by shortcomings including factual inconsistencies and opacity. Conversely, Knowledge Graphs (KGs) harbor verifiable knowledge and symbolic reasoning prowess, thereby complementing LLMs' deficiencies. Against this backdrop, the synergy between KGs and LLMs emerges as a pivotal research direction. Our contribution in this paper is a comprehensive dissection of the latest developments in integrating KGs with LLMs. Through meticulous analysis of their confluence points and methodologies, we introduce a unifying framework designed to elucidate and stimulate further exploration among scholars engaged in cognate disciplines. This framework serves a dual purpose: it consolidates extant knowledge while simultaneously delineating novel avenues for real-world deployment, thereby amplifying the translational impact of academic research.
2208.14024
Robert Schmier
Robert Schmier, Ullrich K\"othe, Christoph-Nikolas Straehle
Positive Difference Distribution for Image Outlier Detection using Normalizing Flows and Contrastive Data
null
Transactions on Machine Learning Research (04/2023)
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting test data deviating from training data is a central problem for safe and robust machine learning. Likelihoods learned by a generative model, e.g., a normalizing flow via standard log-likelihood training, perform poorly as an outlier score. We propose to use an unlabelled auxiliary dataset and a probabilistic outlier score for outlier detection. We use a self-supervised feature extractor trained on the auxiliary dataset and train a normalizing flow on the extracted features by maximizing the likelihood on in-distribution data and minimizing the likelihood on the contrastive dataset. We show that this is equivalent to learning the normalized positive difference between the in-distribution and the contrastive feature density. We conduct experiments on benchmark datasets and compare to the likelihood, the likelihood ratio and state-of-the-art anomaly detection methods.
[ { "created": "Tue, 30 Aug 2022 07:00:46 GMT", "version": "v1" }, { "created": "Wed, 26 Apr 2023 07:46:01 GMT", "version": "v2" } ]
2023-04-28
[ [ "Schmier", "Robert", "" ], [ "Köthe", "Ullrich", "" ], [ "Straehle", "Christoph-Nikolas", "" ] ]
Detecting test data deviating from training data is a central problem for safe and robust machine learning. Likelihoods learned by a generative model, e.g., a normalizing flow via standard log-likelihood training, perform poorly as an outlier score. We propose to use an unlabelled auxiliary dataset and a probabilistic outlier score for outlier detection. We use a self-supervised feature extractor trained on the auxiliary dataset and train a normalizing flow on the extracted features by maximizing the likelihood on in-distribution data and minimizing the likelihood on the contrastive dataset. We show that this is equivalent to learning the normalized positive difference between the in-distribution and the contrastive feature density. We conduct experiments on benchmark datasets and compare to the likelihood, the likelihood ratio and state-of-the-art anomaly detection methods.
2408.04045
Chang Han
Chang Han, Justin Lieffers, Clayton Morrison, and Katherine E. Isaacs
An Overview + Detail Layout for Visualizing Compound Graphs
5 pages, 7 figures. To appear at VIS2024
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. In several applications, including biological workflows, chemical equations, and computational data flow analysis, these graphs often exhibit a tree-like nesting structure, where sibling clusters are disjoint. Common compound graph layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. Leveraging the additional structure of the tree-like nesting, we contribute an overview+detail layout for this class of compound graphs that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. Finally, we discuss network parameters and analysis situations in which our layout is well suited.
[ { "created": "Wed, 7 Aug 2024 18:54:13 GMT", "version": "v1" } ]
2024-08-09
[ [ "Han", "Chang", "" ], [ "Lieffers", "Justin", "" ], [ "Morrison", "Clayton", "" ], [ "Isaacs", "Katherine E.", "" ] ]
Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. In several applications, including biological workflows, chemical equations, and computational data flow analysis, these graphs often exhibit a tree-like nesting structure, where sibling clusters are disjoint. Common compound graph layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. Leveraging the additional structure of the tree-like nesting, we contribute an overview+detail layout for this class of compound graphs that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. Finally, we discuss network parameters and analysis situations in which our layout is well suited.
2005.08708
Istv\'an Koren
Dominik Adam Kus, Istv\'an Koren, Ralf Klamma
A Link Generator for Increasing the Utility of OpenAPI-to-GraphQL Translations
WWW2020 Developer Track
null
10.13140/RG.2.2.33982.92488
null
cs.DC cs.DB cs.SE
http://creativecommons.org/licenses/by/4.0/
Standardized interfaces are the connecting link of today's distributed systems, facilitating access to data services in the cloud. REST APIs have been prevalent in recent years, despite several issues such as over- and underfetching of resources. GraphQL enjoys rapid adoption, resolving these problems by using statically typed queries. However, redeveloping services for the new paradigm is costly. Therefore, several approaches for the successive migration from REST to GraphQL have been proposed, many leveraging OpenAPI service descriptions. In this article, we present the findings of our empirical evaluation on the APIs.guru directory and identify several schema translation challenges. These include less expressive schema types in GraphQL, as well as missing meta information about related resources in OpenAPI. To this end, we developed the open source Link Generator, which analyzes OpenAPI documents and automatically adds links to increase translation utility. This fundamentally benefits around 34% of APIs in the APIs.guru directory. Our findings and tool support contribute to the ongoing discussion about the migration of REST APIs to GraphQL, and provide developers with valuable insights into common pitfalls, to reduce friction during API transformation.
[ { "created": "Mon, 18 May 2020 13:35:22 GMT", "version": "v1" } ]
2020-05-20
[ [ "Kus", "Dominik Adam", "" ], [ "Koren", "István", "" ], [ "Klamma", "Ralf", "" ] ]
Standardized interfaces are the connecting link of today's distributed systems, facilitating access to data services in the cloud. REST APIs have been prevalent in recent years, despite several issues such as over- and underfetching of resources. GraphQL enjoys rapid adoption, resolving these problems by using statically typed queries. However, redeveloping services for the new paradigm is costly. Therefore, several approaches for the successive migration from REST to GraphQL have been proposed, many leveraging OpenAPI service descriptions. In this article, we present the findings of our empirical evaluation on the APIs.guru directory and identify several schema translation challenges. These include less expressive schema types in GraphQL, as well as missing meta information about related resources in OpenAPI. To this end, we developed the open source Link Generator, which analyzes OpenAPI documents and automatically adds links to increase translation utility. This fundamentally benefits around 34% of APIs in the APIs.guru directory. Our findings and tool support contribute to the ongoing discussion about the migration of REST APIs to GraphQL, and provide developers with valuable insights into common pitfalls, to reduce friction during API transformation.
2109.04677
Shuai Han
Shuai D. Han and Jingjin Yu
Optimizing Space Utilization for More Effective Multi-Robot Path Planning
Submitting to ICRA 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We perform a systematic exploration of the principle of Space Utilization Optimization (SUO) as a heuristic for planning better individual paths in a decoupled multi-robot path planner, with applications to both one-shot and life-long multi-robot path planning problems. We show that the decentralized heuristic set, SU-I, preserves single path optimality and significantly reduces congestion that naturally happens when many paths are planned without coordination. Integration of SU-I into complete planners brings dramatic reductions in computation time due to the significantly reduced number of conflicts and leads to sizable solution optimality gains in diverse evaluation scenarios with medium and large maps, for both one-shot and life-long problem settings.
[ { "created": "Fri, 10 Sep 2021 05:51:35 GMT", "version": "v1" } ]
2021-09-13
[ [ "Han", "Shuai D.", "" ], [ "Yu", "Jingjin", "" ] ]
We perform a systematic exploration of the principle of Space Utilization Optimization (SUO) as a heuristic for planning better individual paths in a decoupled multi-robot path planner, with applications to both one-shot and life-long multi-robot path planning problems. We show that the decentralized heuristic set, SU-I, preserves single path optimality and significantly reduces congestion that naturally happens when many paths are planned without coordination. Integration of SU-I into complete planners brings dramatic reductions in computation time due to the significantly reduced number of conflicts and leads to sizable solution optimality gains in diverse evaluation scenarios with medium and large maps, for both one-shot and life-long problem settings.
2304.09675
Bertrand Teguia Tabuguia
Bertrand Teguia Tabuguia
Operations for D-Algebraic Functions
4.5 pages + 14 references. ISSAC'23 software demonstration. To appear in ACM Communications in Computer Algebra
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A function is differentially algebraic (or simply D-algebraic) if there is a polynomial relationship between some of its derivatives and the indeterminate variable. Many functions in the sciences, such as Mathieu functions, the Weierstrass elliptic functions, and holonomic or D-finite functions, are D-algebraic. These functions form a field and are closed under composition, taking functional inverses, and derivation. We present an implementation of each underlying operation. We also give a systematic way of computing an algebraic differential equation from a linear differential equation with D-finite function coefficients. Each command is a feature of our Maple package $NLDE$, available at https://mathrepo.mis.mpg.de/OperationsForDAlgebraicFunctions.
[ { "created": "Wed, 19 Apr 2023 14:06:19 GMT", "version": "v1" }, { "created": "Mon, 10 Jul 2023 13:35:27 GMT", "version": "v2" } ]
2023-07-11
[ [ "Tabuguia", "Bertrand Teguia", "" ] ]
A function is differentially algebraic (or simply D-algebraic) if there is a polynomial relationship between some of its derivatives and the indeterminate variable. Many functions in the sciences, such as Mathieu functions, the Weierstrass elliptic functions, and holonomic or D-finite functions, are D-algebraic. These functions form a field and are closed under composition, taking functional inverses, and derivation. We present an implementation of each underlying operation. We also give a systematic way of computing an algebraic differential equation from a linear differential equation with D-finite function coefficients. Each command is a feature of our Maple package $NLDE$, available at https://mathrepo.mis.mpg.de/OperationsForDAlgebraicFunctions.
2011.13116
Li Wei
Li Wei, Chongwen Huang, George C. Alexandropoulos, Zhaohui Yang, Chau Yuen, and Zhaoyang Zhang
Joint Channel Estimation and Signal Recovery in RIS-Assisted Multi-User MISO Communications
arXiv admin note: text overlap with arXiv:2001.09413
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconfigurable Intelligent Surfaces (RISs) have recently been considered as an energy-efficient solution for future wireless networks. Their dynamic and low-power configuration enables coverage extension, massive connectivity, and low-latency communications. Channel estimation and signal recovery in RIS-based systems are among the most critical technical challenges, due to the large number of unknown variables referring to the RIS unit elements and the transmitted signals. In this paper, we focus on the downlink of a RIS-assisted multi-user Multiple Input Single Output (MISO) communication system and present a joint channel estimation and signal recovery scheme based on the PARAllel FACtor (PARAFAC) decomposition. This decomposition unfolds the cascaded channel model and facilitates signal recovery using the Bilinear Generalized Approximate Message Passing (BiG-AMP) algorithm. The proposed method includes an alternating least squares algorithm to iteratively estimate the equivalent matrix, which consists of the transmitted signals and the channels between the base station and the RIS, as well as the channels between the RIS and the multiple users. Our selective simulation results show that the proposed scheme outperforms a benchmark scheme that uses genie-aided knowledge. We also provide insights on the impact of different RIS parameter settings on the proposed scheme.
[ { "created": "Thu, 26 Nov 2020 04:02:01 GMT", "version": "v1" } ]
2020-11-30
[ [ "Wei", "Li", "" ], [ "Huang", "Chongwen", "" ], [ "Alexandropoulos", "George C.", "" ], [ "Yang", "Zhaohui", "" ], [ "Yuen", "Chau", "" ], [ "Zhang", "Zhaoyang", "" ] ]
Reconfigurable Intelligent Surfaces (RISs) have recently been considered as an energy-efficient solution for future wireless networks. Their dynamic and low-power configuration enables coverage extension, massive connectivity, and low-latency communications. Channel estimation and signal recovery in RIS-based systems are among the most critical technical challenges, due to the large number of unknown variables referring to the RIS unit elements and the transmitted signals. In this paper, we focus on the downlink of a RIS-assisted multi-user Multiple Input Single Output (MISO) communication system and present a joint channel estimation and signal recovery scheme based on the PARAllel FACtor (PARAFAC) decomposition. This decomposition unfolds the cascaded channel model and facilitates signal recovery using the Bilinear Generalized Approximate Message Passing (BiG-AMP) algorithm. The proposed method includes an alternating least squares algorithm to iteratively estimate the equivalent matrix, which consists of the transmitted signals and the channels between the base station and the RIS, as well as the channels between the RIS and the multiple users. Our selective simulation results show that the proposed scheme outperforms a benchmark scheme that uses genie-aided knowledge. We also provide insights on the impact of different RIS parameter settings on the proposed scheme.
2109.12234
Avnish Gupta
Avnish Gupta, Akash Jadhav and Pradyot VN Korupolu
Low Cost Bin Picking Solution for E-Commerce Warehouse Fulfillment Centers
null
Australasian Conference on Robotics and Automation 2019
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
In recent years, the throughput requirements of e-commerce fulfillment warehouses have seen a steep increase. This has resulted in various automation solutions being developed for item picking and movement. In this paper, we address the problem of manipulators picking heterogeneous items placed randomly in a bin. Traditional solutions require that the items to be picked be placed in an orderly manner in the bin and that the exact dimensions of the items be known beforehand. Such solutions do not perform well in the real world, since the items in a bin are seldom placed in an orderly manner and new products are added almost every day by e-commerce suppliers. We propose a cost-effective solution that handles both of the aforementioned challenges. Our solution comprises a dual-sensor system: a regular RGB camera and a 3D ToF depth sensor. We propose a novel algorithm that fuses data from both these sensors to improve object segmentation while maintaining the accuracy of pose estimation, especially in occluded environments and tightly packed bins. We experimentally verify the performance of our system by picking boxes using an ABB IRB 1200 robot. We also show that our system maintains a high level of accuracy in pose estimation that is independent of the dimensions of the box, texture, occlusion, or orientation. We further show that our system is computationally less expensive and maintains a consistent detection time of 1 second. We also discuss how this approach can be easily extended to objects of all shapes.
[ { "created": "Fri, 24 Sep 2021 23:33:42 GMT", "version": "v1" } ]
2021-09-28
[ [ "Gupta", "Avnish", "" ], [ "Jadhav", "Akash", "" ], [ "Korupolu", "Pradyot VN", "" ] ]
In recent years, the throughput requirements of e-commerce fulfillment warehouses have seen a steep increase. This has resulted in various automation solutions being developed for item picking and movement. In this paper, we address the problem of manipulators picking heterogeneous items placed randomly in a bin. Traditional solutions require that the items to be picked be placed in an orderly manner in the bin and that the exact dimensions of the items be known beforehand. Such solutions do not perform well in the real world, since the items in a bin are seldom placed in an orderly manner and new products are added almost every day by e-commerce suppliers. We propose a cost-effective solution that handles both of the aforementioned challenges. Our solution comprises a dual-sensor system: a regular RGB camera and a 3D ToF depth sensor. We propose a novel algorithm that fuses data from both these sensors to improve object segmentation while maintaining the accuracy of pose estimation, especially in occluded environments and tightly packed bins. We experimentally verify the performance of our system by picking boxes using an ABB IRB 1200 robot. We also show that our system maintains a high level of accuracy in pose estimation that is independent of the dimensions of the box, texture, occlusion, or orientation. We further show that our system is computationally less expensive and maintains a consistent detection time of 1 second. We also discuss how this approach can be easily extended to objects of all shapes.
2207.09332
Hualian Sheng
Hualian Sheng, Sijia Cai, Na Zhao, Bing Deng, Jianqiang Huang, Xian-Sheng Hua, Min-Jian Zhao, Gim Hee Lee
Rethinking IoU-based Optimization for Single-stage 3D Object Detection
Accepted by ECCV2022. The code is available at https://github.com/hlsheng1/RDIoU
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Since Intersection-over-Union (IoU) based optimization maintains the consistency of the final IoU prediction metric and losses, it has been widely used in both regression and classification branches of single-stage 2D object detectors. Recently, several 3D object detection methods adopt IoU-based optimization and directly replace the 2D IoU with 3D IoU. However, such a direct computation in 3D is very costly due to the complex implementation and inefficient backward operations. Moreover, 3D IoU-based optimization is sub-optimal as it is sensitive to rotation and thus can cause training instability and detection performance deterioration. In this paper, we propose a novel Rotation-Decoupled IoU (RDIoU) method that can mitigate the rotation-sensitivity issue, and produce more efficient optimization objectives compared with 3D IoU during the training stage. Specifically, our RDIoU simplifies the complex interactions of regression parameters by decoupling the rotation variable as an independent term, yet preserving the geometry of 3D IoU. By incorporating RDIoU into both the regression and classification branches, the network is encouraged to learn more precise bounding boxes and concurrently overcome the misalignment issue between classification and regression. Extensive experiments on the benchmark KITTI and Waymo Open Dataset validate that our RDIoU method can bring substantial improvement for the single-stage 3D object detection.
[ { "created": "Tue, 19 Jul 2022 15:35:23 GMT", "version": "v1" }, { "created": "Wed, 20 Jul 2022 06:27:31 GMT", "version": "v2" } ]
2022-07-21
[ [ "Sheng", "Hualian", "" ], [ "Cai", "Sijia", "" ], [ "Zhao", "Na", "" ], [ "Deng", "Bing", "" ], [ "Huang", "Jianqiang", "" ], [ "Hua", "Xian-Sheng", "" ], [ "Zhao", "Min-Jian", "" ], [ "Lee", "Gim Hee", "" ] ]
Since Intersection-over-Union (IoU) based optimization maintains the consistency of the final IoU prediction metric and losses, it has been widely used in both regression and classification branches of single-stage 2D object detectors. Recently, several 3D object detection methods adopt IoU-based optimization and directly replace the 2D IoU with 3D IoU. However, such a direct computation in 3D is very costly due to the complex implementation and inefficient backward operations. Moreover, 3D IoU-based optimization is sub-optimal as it is sensitive to rotation and thus can cause training instability and detection performance deterioration. In this paper, we propose a novel Rotation-Decoupled IoU (RDIoU) method that can mitigate the rotation-sensitivity issue, and produce more efficient optimization objectives compared with 3D IoU during the training stage. Specifically, our RDIoU simplifies the complex interactions of regression parameters by decoupling the rotation variable as an independent term, yet preserving the geometry of 3D IoU. By incorporating RDIoU into both the regression and classification branches, the network is encouraged to learn more precise bounding boxes and concurrently overcome the misalignment issue between classification and regression. Extensive experiments on the benchmark KITTI and Waymo Open Dataset validate that our RDIoU method can bring substantial improvement for the single-stage 3D object detection.
2008.13255
Anas Blasi
Mohammed Alsuwaiket, Anas H. Blasi, Ra'Fat Al-Msie'deen
Formulating Module Assessment for Improved Academic Performance Predictability in Higher Education
null
null
null
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various studies have shown that students tend to get higher marks when assessed through coursework-based assessment methods, which include either modules that are fully assessed through coursework or a mixture of coursework and examinations, than when assessed by examination alone. A large number of educational data mining (EDM) studies preprocess data through conventional data mining processes, including the data preparation process, but they use transcript data as it stands, without considering the weighting of examination and coursework results, which could affect prediction accuracy. This paper proposes a different data preparation process, investigating more than 230,000 student records in order to prepare students' marks based on the assessment methods of enrolled modules. The data have been processed through different stages in order to extract a categorical factor through which students' module marks are refined during the data preparation process. The results of this work show that students' final marks should not be isolated from the nature of the enrolled modules' assessment methods; rather, these methods must be investigated thoroughly and considered during EDM's data preprocessing phases. More generally, it is concluded that educational data should not be prepared in the same way as other data types, due to differences in data sources, applications, and the types of errors they contain. Therefore, an attribute, the coursework assessment ratio (CAR), is proposed to take the different modules' assessment methods into account while preparing student transcript data. The effect of CAR on the prediction process using the random forest classification technique has been investigated. It is shown that considering CAR as an attribute increases the accuracy of predicting students' second-year averages based on their first-year results.
[ { "created": "Sun, 30 Aug 2020 19:42:31 GMT", "version": "v1" } ]
2020-09-01
[ [ "Alsuwaiket", "Mohammed", "" ], [ "Blasi", "Anas H.", "" ], [ "Al-Msie'deen", "Ra'Fat", "" ] ]
Various studies have shown that students tend to get higher marks when assessed through coursework-based assessment methods, which include either modules that are fully assessed through coursework or a mixture of coursework and examinations, than when assessed by examination alone. A large number of educational data mining (EDM) studies preprocess data through conventional data mining processes, including the data preparation process, but they use transcript data as it stands, without considering the weighting of examination and coursework results, which could affect prediction accuracy. This paper proposes a different data preparation process, investigating more than 230,000 student records in order to prepare students' marks based on the assessment methods of enrolled modules. The data have been processed through different stages in order to extract a categorical factor through which students' module marks are refined during the data preparation process. The results of this work show that students' final marks should not be isolated from the nature of the enrolled modules' assessment methods; rather, these methods must be investigated thoroughly and considered during EDM's data preprocessing phases. More generally, it is concluded that educational data should not be prepared in the same way as other data types, due to differences in data sources, applications, and the types of errors they contain. Therefore, an attribute, the coursework assessment ratio (CAR), is proposed to take the different modules' assessment methods into account while preparing student transcript data. The effect of CAR on the prediction process using the random forest classification technique has been investigated. It is shown that considering CAR as an attribute increases the accuracy of predicting students' second-year averages based on their first-year results.
1411.6135
Franck Delaplace
Franck Delaplace
Analogous Dynamics of Boolean Network
28 pages, 6 figures
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different Boolean networks may reveal similar dynamics although their definitions differ, thus preventing their distinction from observations. This raises the question of whether a particular Boolean network is sufficient for properly reproducing a modeled phenomenon and making realistic predictions. The answer actually depends on the invariant properties of behaviorally similar Boolean networks. In this article, we address this issue by considering a similarity formalized by isomorphism on the graphs modeling their dynamics. The similarity also depends on the parameter governing the updating policy, called the mode. We define a general characterization of the group of isomorphisms preserving the mode. From this characterization, we deduce invariant structural properties of the interaction graph and conditions for maintaining an equivalence through mode variation.
[ { "created": "Sat, 22 Nov 2014 15:23:31 GMT", "version": "v1" } ]
2014-11-25
[ [ "Delaplace", "Franck", "" ] ]
Different Boolean networks may reveal similar dynamics although their definitions differ, thus preventing their distinction from observations. This raises the question of whether a particular Boolean network is sufficient for properly reproducing a modeled phenomenon and making realistic predictions. The answer actually depends on the invariant properties of behaviorally similar Boolean networks. In this article, we address this issue by considering a similarity formalized by isomorphism on the graphs modeling their dynamics. The similarity also depends on the parameter governing the updating policy, called the mode. We define a general characterization of the group of isomorphisms preserving the mode. From this characterization, we deduce invariant structural properties of the interaction graph and conditions for maintaining an equivalence through mode variation.
2009.06725
Mahdi Esmaily-Moghadam
Chenwei Meng and Anirban Bhattacharjee and Mahdi Esmaily
A scalable spectral Stokes solver for simulation of time-periodic flows in complex geometries
null
null
10.1016/j.jcp.2021.110601
null
cs.CE physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simulation of unsteady creeping flows in complex geometries has traditionally required the use of a time-stepping procedure, which is typically costly and unscalable. To reduce the cost and allow for computations at much larger scales, we propose an alternative approach that is formulated based on the unsteady Stokes equation expressed in the time-spectral domain. This transformation results in a boundary value problem with an imaginary source term proportional to the computed mode that is discretized and solved in a complex-valued finite element solver using Bubnov-Galerkin formulation. This transformed spatio-spectral formulation presents several advantages over the traditional spatio-temporal techniques. Firstly, for cases with boundary conditions varying smoothly in time, it provides a significant saving in computational cost as it can resolve time-variation of the solution using a few modes rather than thousands of time steps. Secondly, in contrast to the traditional time integration scheme with a finite order of accuracy, this method exhibits a super convergence behavior versus the number of computed modes. Thirdly, in contrast to the stabilized finite element methods for fluid, no stabilization term is employed in our formulation, producing a solution that is consistent and more accurate. Fourthly, the proposed approach is embarrassingly parallelizable owing to the independence of the solution modes, thus enabling scalable calculations at a much larger number of processors. The comparison of the proposed technique against a standard stabilized finite element solver is performed using two- and three-dimensional canonical and complex geometries. The results show that the proposed method can produce more accurate results at 1% to 11% of the cost of the standard technique for the studied cases.
[ { "created": "Fri, 11 Sep 2020 17:17:27 GMT", "version": "v1" }, { "created": "Fri, 2 Jul 2021 20:38:30 GMT", "version": "v2" } ]
2021-09-15
[ [ "Meng", "Chenwei", "" ], [ "Bhattacharjee", "Anirban", "" ], [ "Esmaily", "Mahdi", "" ] ]
Simulation of unsteady creeping flows in complex geometries has traditionally required the use of a time-stepping procedure, which is typically costly and unscalable. To reduce the cost and allow for computations at much larger scales, we propose an alternative approach that is formulated based on the unsteady Stokes equation expressed in the time-spectral domain. This transformation results in a boundary value problem with an imaginary source term proportional to the computed mode that is discretized and solved in a complex-valued finite element solver using Bubnov-Galerkin formulation. This transformed spatio-spectral formulation presents several advantages over the traditional spatio-temporal techniques. Firstly, for cases with boundary conditions varying smoothly in time, it provides a significant saving in computational cost as it can resolve time-variation of the solution using a few modes rather than thousands of time steps. Secondly, in contrast to the traditional time integration scheme with a finite order of accuracy, this method exhibits a super convergence behavior versus the number of computed modes. Thirdly, in contrast to the stabilized finite element methods for fluid, no stabilization term is employed in our formulation, producing a solution that is consistent and more accurate. Fourthly, the proposed approach is embarrassingly parallelizable owing to the independence of the solution modes, thus enabling scalable calculations at a much larger number of processors. The comparison of the proposed technique against a standard stabilized finite element solver is performed using two- and three-dimensional canonical and complex geometries. The results show that the proposed method can produce more accurate results at 1% to 11% of the cost of the standard technique for the studied cases.
2306.16612
Minsoo Kang
Minsoo Kang, Suhyun Kim
GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps
Published at AAAI2023 (Oral)
Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 2023, 1096-1104
10.1609/aaai.v37i1.25191
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data augmentation is now an essential part of the image training process, as it effectively prevents overfitting and makes the model more robust against noisy datasets. Recent mixing augmentation strategies have advanced to generate the mixup mask that can enrich the saliency information, which is a supervisory signal. However, these methods incur a significant computational burden to optimize the mixup mask. Motivated by this, we propose a novel saliency-aware mixup method, GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead. We develop an efficient pairing algorithm that seeks to minimize the conflict of salient regions of paired images and achieve rich saliency in mixup images. Moreover, GuidedMixup controls the mixup ratio for each pixel to better preserve the salient region by interpolating two paired images smoothly. The experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance on classification datasets. In addition, our method shows good performance in experiments with corrupted or reduced datasets.
[ { "created": "Thu, 29 Jun 2023 00:55:51 GMT", "version": "v1" } ]
2023-06-30
[ [ "Kang", "Minsoo", "" ], [ "Kim", "Suhyun", "" ] ]
Data augmentation is now an essential part of the image training process, as it effectively prevents overfitting and makes the model more robust against noisy datasets. Recent mixing augmentation strategies have advanced to generate the mixup mask that can enrich the saliency information, which is a supervisory signal. However, these methods incur a significant computational burden to optimize the mixup mask. Motivated by this, we propose a novel saliency-aware mixup method, GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead. We develop an efficient pairing algorithm that seeks to minimize the conflict of salient regions of paired images and achieve rich saliency in mixup images. Moreover, GuidedMixup controls the mixup ratio for each pixel to better preserve the salient region by interpolating two paired images smoothly. The experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance on classification datasets. In addition, our method shows good performance in experiments with corrupted or reduced datasets.
2110.15087
Benedek Rozemberczki
Benedek Rozemberczki and Anna Gogleva and Sebastian Nilsson and Gavin Edwards and Andriy Nikolov and Eliseo Papa
MOOMIN: Deep Molecular Omics Network for Anti-Cancer Drug Combination Therapy
null
null
null
null
cs.LG cs.AI cs.SI
http://creativecommons.org/licenses/by/4.0/
We propose the molecular omics network (MOOMIN), a multimodal graph neural network used by AstraZeneca oncologists to predict the synergy of drug combinations for cancer treatment. Our model learns drug representations at multiple scales based on a drug-protein interaction network and metadata. Structural properties of compounds and proteins are encoded to create vertex features for a message-passing scheme that operates on the bipartite interaction graph. Propagated messages form multi-resolution drug representations which we utilize to create drug pair descriptors. By conditioning the drug combination representations on the cancer cell type we define a synergy scoring function that can inductively score unseen pairs of drugs. Experimental results on the synergy scoring task demonstrate that MOOMIN outperforms state-of-the-art graph fingerprinting, proximity preserving node embedding, and existing deep learning approaches. Further results establish that the predictive performance of our model is robust to hyperparameter changes. We demonstrate that the model makes high-quality predictions over a wide range of cancer cell line tissues, out-of-sample predictions can be validated with external synergy databases, and that the proposed model is data efficient at learning.
[ { "created": "Thu, 28 Oct 2021 13:10:25 GMT", "version": "v1" }, { "created": "Wed, 20 Apr 2022 13:01:17 GMT", "version": "v2" }, { "created": "Mon, 8 Aug 2022 14:15:44 GMT", "version": "v3" } ]
2022-08-09
[ [ "Rozemberczki", "Benedek", "" ], [ "Gogleva", "Anna", "" ], [ "Nilsson", "Sebastian", "" ], [ "Edwards", "Gavin", "" ], [ "Nikolov", "Andriy", "" ], [ "Papa", "Eliseo", "" ] ]
We propose the molecular omics network (MOOMIN), a multimodal graph neural network used by AstraZeneca oncologists to predict the synergy of drug combinations for cancer treatment. Our model learns drug representations at multiple scales based on a drug-protein interaction network and metadata. Structural properties of compounds and proteins are encoded to create vertex features for a message-passing scheme that operates on the bipartite interaction graph. Propagated messages form multi-resolution drug representations which we utilize to create drug pair descriptors. By conditioning the drug combination representations on the cancer cell type we define a synergy scoring function that can inductively score unseen pairs of drugs. Experimental results on the synergy scoring task demonstrate that MOOMIN outperforms state-of-the-art graph fingerprinting, proximity preserving node embedding, and existing deep learning approaches. Further results establish that the predictive performance of our model is robust to hyperparameter changes. We demonstrate that the model makes high-quality predictions over a wide range of cancer cell line tissues, out-of-sample predictions can be validated with external synergy databases, and that the proposed model is data efficient at learning.
1404.5165
Kian Hsiang Low
Nuo Xu, Kian Hsiang Low, Jie Chen, Keng Kiat Lim, Etkin Baris Ozgul
GP-Localize: Persistent Mobile Robot Localization using Online Sparse Gaussian Process Observation Model
28th AAAI Conference on Artificial Intelligence (AAAI 2014), Extended version with proofs, 10 pages
null
null
null
cs.RO cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Central to robot exploration and mapping is the task of persistent localization in environmental fields characterized by spatially correlated measurements. This paper presents a Gaussian process localization (GP-Localize) algorithm that, in contrast to existing works, can exploit the spatially correlated field measurements taken during a robot's exploration (instead of relying on prior training data) for efficiently and scalably learning the GP observation model online through our proposed novel online sparse GP. As a result, GP-Localize is capable of achieving constant time and memory (i.e., independent of the size of the data) per filtering step, which demonstrates the practical feasibility of using GPs for persistent robot localization and autonomy. Empirical evaluation via simulated experiments with real-world datasets and a real robot experiment shows that GP-Localize outperforms existing GP localization algorithms.
[ { "created": "Mon, 21 Apr 2014 10:28:00 GMT", "version": "v1" }, { "created": "Tue, 22 Apr 2014 08:03:33 GMT", "version": "v2" } ]
2014-04-23
[ [ "Xu", "Nuo", "" ], [ "Low", "Kian Hsiang", "" ], [ "Chen", "Jie", "" ], [ "Lim", "Keng Kiat", "" ], [ "Ozgul", "Etkin Baris", "" ] ]
Central to robot exploration and mapping is the task of persistent localization in environmental fields characterized by spatially correlated measurements. This paper presents a Gaussian process localization (GP-Localize) algorithm that, in contrast to existing works, can exploit the spatially correlated field measurements taken during a robot's exploration (instead of relying on prior training data) for efficiently and scalably learning the GP observation model online through our proposed novel online sparse GP. As a result, GP-Localize is capable of achieving constant time and memory (i.e., independent of the size of the data) per filtering step, which demonstrates the practical feasibility of using GPs for persistent robot localization and autonomy. Empirical evaluation via simulated experiments with real-world datasets and a real robot experiment shows that GP-Localize outperforms existing GP localization algorithms.
1802.04557
Benjamin Wild
Benjamin Wild, Leon Sixt, Tim Landgraf
Automatic localization and decoding of honeybee markers using deep convolutional neural networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The honeybee is a fascinating model animal to investigate how collective behavior emerges from (inter-)actions of thousands of individuals. Bees may acquire unique memories throughout their lives. These experiences affect social interactions even over large time frames. Tracking and identifying all bees in the colony over their lifetimes therefore may likely shed light on the interplay of individual differences and colony behavior. This paper proposes a software pipeline based on two deep convolutional neural networks for the localization and decoding of custom binary markers that honeybees carry from the first to the last day of their lives. We show that this approach outperforms similar systems proposed in recent literature. By opening this software for the public, we hope that the resulting datasets will help advance the understanding of honeybee collective intelligence.
[ { "created": "Tue, 13 Feb 2018 11:03:30 GMT", "version": "v1" }, { "created": "Wed, 14 Feb 2018 14:27:21 GMT", "version": "v2" } ]
2018-02-15
[ [ "Wild", "Benjamin", "" ], [ "Sixt", "Leon", "" ], [ "Landgraf", "Tim", "" ] ]
The honeybee is a fascinating model animal to investigate how collective behavior emerges from (inter-)actions of thousands of individuals. Bees may acquire unique memories throughout their lives. These experiences affect social interactions even over large time frames. Tracking and identifying all bees in the colony over their lifetimes therefore may likely shed light on the interplay of individual differences and colony behavior. This paper proposes a software pipeline based on two deep convolutional neural networks for the localization and decoding of custom binary markers that honeybees carry from the first to the last day of their lives. We show that this approach outperforms similar systems proposed in recent literature. By opening this software for the public, we hope that the resulting datasets will help advance the understanding of honeybee collective intelligence.
1012.0232
Thomas Hugel
Thomas Hugel
Kolmogorov-Loveland Sets and Advice Complexity Classes
11 pages - typos
null
null
null
cs.CC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Loveland complexity is a variant of Kolmogorov complexity, where it is asked to output separately the bits of the desired string, instead of the string itself. Similarly to the resource-bounded Kolmogorov sets we define Loveland sets. We highlight a structural connection between resource-bounded Loveland sets and some advice complexity classes. This structural connection enables us to map to advice complexity classes some properties of Kolmogorov sets first noticed by Hartmanis and thoroughly investigated in Longpr\'e's thesis: 1. Non-inclusion properties of Loveland sets result in hierarchy properties on the corresponding advice complexity classes; 2. Immunity properties of Loveland sets result in the non-existence of natural proofs between the corresponding advice complexity classes, in the sense of Razborov & Rudich.
[ { "created": "Wed, 1 Dec 2010 15:57:48 GMT", "version": "v1" }, { "created": "Wed, 16 Feb 2011 16:26:25 GMT", "version": "v2" } ]
2011-02-17
[ [ "Hugel", "Thomas", "" ] ]
Loveland complexity is a variant of Kolmogorov complexity, where it is asked to output separately the bits of the desired string, instead of the string itself. Similarly to the resource-bounded Kolmogorov sets we define Loveland sets. We highlight a structural connection between resource-bounded Loveland sets and some advice complexity classes. This structural connection enables us to map to advice complexity classes some properties of Kolmogorov sets first noticed by Hartmanis and thoroughly investigated in Longpr\'e's thesis: 1. Non-inclusion properties of Loveland sets result in hierarchy properties on the corresponding advice complexity classes; 2. Immunity properties of Loveland sets result in the non-existence of natural proofs between the corresponding advice complexity classes, in the sense of Razborov & Rudich.
2008.00994
Sheng Zhou
Ruichen Jiang, Sheng Zhou
Cluster-Based Cooperative Digital Over-the-Air Aggregation for Wireless Federated Edge Learning
null
null
null
null
cs.IT cs.LG math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study a federated learning system at the wireless edge that uses over-the-air computation (AirComp). In such a system, users transmit their messages over a multi-access channel concurrently to achieve fast model aggregation. Recently, an AirComp scheme based on digital modulation has been proposed featuring one-bit gradient quantization and truncated channel inversion at users and a majority-voting based decoder at the fusion center (FC). We propose an improved digital AirComp scheme to relax its requirements on the transmitters, where users perform phase correction and transmit with full power. To characterize the decoding failure probability at the FC, we introduce the normalized detection signal-to-noise ratio (SNR), which can be interpreted as the effective participation rate of users. To mitigate wireless fading, we further propose a cluster-based system and design the relay selection scheme based on the normalized detection SNR. By local data fusion within each cluster and relay selection, our scheme can fully exploit spatial diversity to increase the effective number of voting users and accelerate model convergence.
[ { "created": "Mon, 3 Aug 2020 16:29:52 GMT", "version": "v1" } ]
2020-08-04
[ [ "Jiang", "Ruichen", "" ], [ "Zhou", "Sheng", "" ] ]
In this paper, we study a federated learning system at the wireless edge that uses over-the-air computation (AirComp). In such a system, users transmit their messages over a multi-access channel concurrently to achieve fast model aggregation. Recently, an AirComp scheme based on digital modulation has been proposed featuring one-bit gradient quantization and truncated channel inversion at users and a majority-voting based decoder at the fusion center (FC). We propose an improved digital AirComp scheme to relax its requirements on the transmitters, where users perform phase correction and transmit with full power. To characterize the decoding failure probability at the FC, we introduce the normalized detection signal-to-noise ratio (SNR), which can be interpreted as the effective participation rate of users. To mitigate wireless fading, we further propose a cluster-based system and design the relay selection scheme based on the normalized detection SNR. By local data fusion within each cluster and relay selection, our scheme can fully exploit spatial diversity to increase the effective number of voting users and accelerate model convergence.
2204.09746
Zhixiong Chen
Zhixiong Chen, Wenqiang Yi, Arumugam Nallanathan, Geoffrey Ye Li
Efficient Wireless Federated Learning with Partial Model Aggregation
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The data heterogeneity across devices and the limited communication resources, e.g., bandwidth and energy, are two of the main bottlenecks for wireless federated learning (FL). To tackle these challenges, we first devise a novel FL framework with partial model aggregation (PMA). This approach aggregates the lower layers of neural networks, responsible for feature extraction, at the parameter server while keeping the upper layers, responsible for complex pattern recognition, at devices for personalization. The proposed PMA-FL is able to address the data heterogeneity and reduce the transmitted information in wireless channels. Then, we derive a convergence bound of the framework under a non-convex loss function setting to reveal the role of unbalanced data size in the learning performance. On this basis, we maximize the scheduled data size to minimize the global loss function by jointly optimizing the device scheduling, bandwidth allocation, and computation and communication time division policies with the assistance of Lyapunov optimization. Our analysis reveals that the optimal time division is achieved when the communication and computation parts of PMA-FL have the same power. We also develop a bisection method to solve the optimal bandwidth allocation policy and use the set expansion algorithm to address the device scheduling policy. Compared with the benchmark schemes, the proposed PMA-FL improves accuracy by 3.13\% and 11.8\% on two typical datasets with heterogeneous data distribution settings, i.e., MNIST and CIFAR-10, respectively. In addition, the proposed joint dynamic device scheduling and resource management approach achieves slightly higher accuracy than the considered benchmarks while providing a satisfactory energy and time reduction: 29\% energy or 20\% time reduction on the MNIST; and 25\% energy or 12.5\% time reduction on the CIFAR-10.
[ { "created": "Wed, 20 Apr 2022 19:09:52 GMT", "version": "v1" }, { "created": "Tue, 26 Jul 2022 09:15:00 GMT", "version": "v2" }, { "created": "Sun, 19 Feb 2023 13:47:59 GMT", "version": "v3" } ]
2023-02-21
[ [ "Chen", "Zhixiong", "" ], [ "Yi", "Wenqiang", "" ], [ "Nallanathan", "Arumugam", "" ], [ "Li", "Geoffrey Ye", "" ] ]
The data heterogeneity across devices and the limited communication resources, e.g., bandwidth and energy, are two of the main bottlenecks for wireless federated learning (FL). To tackle these challenges, we first devise a novel FL framework with partial model aggregation (PMA). This approach aggregates the lower layers of neural networks, responsible for feature extraction, at the parameter server while keeping the upper layers, responsible for complex pattern recognition, at devices for personalization. The proposed PMA-FL is able to address the data heterogeneity and reduce the transmitted information in wireless channels. Then, we derive a convergence bound of the framework under a non-convex loss function setting to reveal the role of unbalanced data size in the learning performance. On this basis, we maximize the scheduled data size to minimize the global loss function by jointly optimizing the device scheduling, bandwidth allocation, and computation and communication time division policies with the assistance of Lyapunov optimization. Our analysis reveals that the optimal time division is achieved when the communication and computation parts of PMA-FL have the same power. We also develop a bisection method to solve the optimal bandwidth allocation policy and use the set expansion algorithm to address the device scheduling policy. Compared with the benchmark schemes, the proposed PMA-FL improves accuracy by 3.13\% and 11.8\% on two typical datasets with heterogeneous data distribution settings, i.e., MNIST and CIFAR-10, respectively. In addition, the proposed joint dynamic device scheduling and resource management approach achieves slightly higher accuracy than the considered benchmarks while providing a satisfactory energy and time reduction: 29\% energy or 20\% time reduction on the MNIST; and 25\% energy or 12.5\% time reduction on the CIFAR-10.
2303.16806
Yichen Yang
Yichen Yang, Martin Rinard
Emergence of Locally Suboptimal Behavior in Finitely Repeated Games
null
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the emergence of locally suboptimal behavior in finitely repeated games. Locally suboptimal behavior refers to players playing suboptimally in some rounds of the repeated game (i.e., not maximizing their payoffs in those rounds) while maximizing their total payoffs in the whole repeated game. The central research question we aim to answer is when locally suboptimal behavior can arise from rational play in finitely repeated games. In this research, we focus on the emergence of locally suboptimal behavior in subgame-perfect equilibria (SPE) of finitely repeated games with complete information. We prove the first sufficient and necessary condition on the stage game G that ensures that, for all T and all subgame-perfect equilibria of the repeated game G(T), the strategy profile at every round of G(T) forms a Nash equilibrium of the stage game G. We prove the sufficient and necessary conditions for three cases: 1) only pure strategies are allowed, 2) the general case where mixed strategies are allowed, and 3) one player can only use pure strategies and the other player can use mixed strategies. Based on these results, we obtain complete characterizations on when allowing players to play mixed strategies will change whether local suboptimality can ever occur in some repeated game. Furthermore, we present an algorithm for the computational problem of, given an arbitrary stage game, deciding if locally suboptimal behavior can arise in the corresponding finitely repeated games. This addresses the practical side of the research question.
[ { "created": "Wed, 29 Mar 2023 15:50:21 GMT", "version": "v1" } ]
2023-03-30
[ [ "Yang", "Yichen", "" ], [ "Rinard", "Martin", "" ] ]
We study the emergence of locally suboptimal behavior in finitely repeated games. Locally suboptimal behavior refers to players playing suboptimally in some rounds of the repeated game (i.e., not maximizing their payoffs in those rounds) while maximizing their total payoffs in the whole repeated game. The central research question we aim to answer is when locally suboptimal behavior can arise from rational play in finitely repeated games. In this research, we focus on the emergence of locally suboptimal behavior in subgame-perfect equilibria (SPE) of finitely repeated games with complete information. We prove the first sufficient and necessary condition on the stage game G that ensures that, for all T and all subgame-perfect equilibria of the repeated game G(T), the strategy profile at every round of G(T) forms a Nash equilibrium of the stage game G. We prove the sufficient and necessary conditions for three cases: 1) only pure strategies are allowed, 2) the general case where mixed strategies are allowed, and 3) one player can only use pure strategies and the other player can use mixed strategies. Based on these results, we obtain complete characterizations on when allowing players to play mixed strategies will change whether local suboptimality can ever occur in some repeated game. Furthermore, we present an algorithm for the computational problem of, given an arbitrary stage game, deciding if locally suboptimal behavior can arise in the corresponding finitely repeated games. This addresses the practical side of the research question.
2101.04266
Yi Liu
Yi Liu, Shuiwang Ji
CleftNet: Augmented Deep Learning for Synaptic Cleft Detection from Brain Electron Microscopy
10 pages, 3 figures, 6 tables
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting synaptic clefts is a crucial step to investigate the biological function of synapses. Volume electron microscopy (EM) allows the identification of synaptic clefts by capturing EM images with high resolution and fine details. Machine learning approaches have been employed to automatically predict synaptic clefts from EM images. In this work, we propose a novel and augmented deep learning model, known as CleftNet, for improving synaptic cleft detection from brain EM images. We first propose two novel network components, known as the feature augmentor and the label augmentor, for augmenting features and labels to improve cleft representations. The feature augmentor can fuse global information from inputs and learn common morphological patterns in clefts, leading to augmented cleft features. In addition, it can generate outputs with varying dimensions, making it flexible to be integrated in any deep network. The proposed label augmentor augments the label of each voxel from a value to a vector, which contains both the segmentation label and boundary label. This allows the network to learn important shape information and to produce more informative cleft representations. Based on the proposed feature augmentor and label augmentor, we build the CleftNet as a U-Net like network. The effectiveness of our methods is evaluated on both online and offline tasks. Our CleftNet currently ranks \#1 on the online task of the CREMI open challenge. In addition, both quantitative and qualitative results in the offline tasks show that our method outperforms the baseline approaches significantly.
[ { "created": "Tue, 12 Jan 2021 02:45:53 GMT", "version": "v1" } ]
2021-01-13
[ [ "Liu", "Yi", "" ], [ "Ji", "Shuiwang", "" ] ]
Detecting synaptic clefts is a crucial step to investigate the biological function of synapses. Volume electron microscopy (EM) allows the identification of synaptic clefts by capturing EM images with high resolution and fine details. Machine learning approaches have been employed to automatically predict synaptic clefts from EM images. In this work, we propose a novel and augmented deep learning model, known as CleftNet, for improving synaptic cleft detection from brain EM images. We first propose two novel network components, known as the feature augmentor and the label augmentor, for augmenting features and labels to improve cleft representations. The feature augmentor can fuse global information from inputs and learn common morphological patterns in clefts, leading to augmented cleft features. In addition, it can generate outputs with varying dimensions, making it flexible to be integrated in any deep network. The proposed label augmentor augments the label of each voxel from a value to a vector, which contains both the segmentation label and boundary label. This allows the network to learn important shape information and to produce more informative cleft representations. Based on the proposed feature augmentor and label augmentor, we build the CleftNet as a U-Net like network. The effectiveness of our methods is evaluated on both online and offline tasks. Our CleftNet currently ranks \#1 on the online task of the CREMI open challenge. In addition, both quantitative and qualitative results in the offline tasks show that our method outperforms the baseline approaches significantly.
2008.07723
Huang Hu
Xiaoyu Kou, Bingfeng Luo, Huang Hu and Yan Zhang
NASE: Learning Knowledge Graph Embedding for Link Prediction via Neural Architecture Search
Accepted by CIKM 2020, short paper
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Link prediction is the task of predicting missing connections between entities in the knowledge graph (KG). While various forms of models are proposed for the link prediction task, most of them are designed based on a few known relation patterns in several well-known datasets. Due to the diverse and complex nature of real-world KGs, it is inherently difficult to design a model that fits all datasets well. To address this issue, previous work has tried to use Automated Machine Learning (AutoML) to search for the best model for a given dataset. However, their search space is limited only to bilinear model families. In this paper, we propose a novel Neural Architecture Search (NAS) framework for the link prediction task. First, the embeddings of the input triplet are refined by the Representation Search Module. Then, the prediction score is searched within the Score Function Search Module. This framework entails a more general search space, which enables us to take advantage of several mainstream model families, and thus it can potentially achieve better performance. We relax the search space to be continuous so that the architecture can be optimized efficiently using gradient-based search strategies. Experimental results on several benchmark datasets demonstrate the effectiveness of our method compared with several state-of-the-art approaches.
[ { "created": "Tue, 18 Aug 2020 03:34:09 GMT", "version": "v1" } ]
2020-08-19
[ [ "Kou", "Xiaoyu", "" ], [ "Luo", "Bingfeng", "" ], [ "Hu", "Huang", "" ], [ "Zhang", "Yan", "" ] ]
Link prediction is the task of predicting missing connections between entities in the knowledge graph (KG). While various forms of models are proposed for the link prediction task, most of them are designed based on a few known relation patterns in several well-known datasets. Due to the diverse and complex nature of real-world KGs, it is inherently difficult to design a model that fits all datasets well. To address this issue, previous work has tried to use Automated Machine Learning (AutoML) to search for the best model for a given dataset. However, their search space is limited only to bilinear model families. In this paper, we propose a novel Neural Architecture Search (NAS) framework for the link prediction task. First, the embeddings of the input triplet are refined by the Representation Search Module. Then, the prediction score is searched within the Score Function Search Module. This framework entails a more general search space, which enables us to take advantage of several mainstream model families, and thus it can potentially achieve better performance. We relax the search space to be continuous so that the architecture can be optimized efficiently using gradient-based search strategies. Experimental results on several benchmark datasets demonstrate the effectiveness of our method compared with several state-of-the-art approaches.
1311.5376
Valtteri Tervo
Valtteri Tervo, Antti T\"olli, Juha Karjalainen, Tad Matsumoto
PAPR Constrained Power Allocation for Iterative Frequency Domain Multiuser SIMO Detector
Presented in IEEE International Conference on Communications (ICC) 2014
null
10.1109/ICC.2014.6884069
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Peak to average power ratio (PAPR) constrained power allocation in single carrier multiuser (MU) single-input multiple-output (SIMO) systems with iterative frequency domain (FD) soft cancelation (SC) minimum mean squared error (MMSE) equalization is considered in this paper. To obtain full benefit of the iterative receiver, its convergence properties need to be taken into account also at the transmitter side. In this paper, we extend the existing results on the area of convergence constrained power allocation (CCPA) to consider the instantaneous PAPR at the transmit antenna of each user. In other words, we will introduce a constraint that PAPR cannot exceed a predetermined threshold. By adding the aforementioned constraint into the CCPA optimization framework, the power efficiency of a power amplifier (PA) can be significantly enhanced by enabling it to operate on its linear operation range. Hence, PAPR constraint is especially beneficial for power limited cell-edge users. In this paper, we will derive the instantaneous PAPR constraint as a function of transmit power allocation. Furthermore, successive convex approximation is derived for the PAPR constrained problem. Numerical results show that the proposed method can achieve the objectives described above.
[ { "created": "Thu, 21 Nov 2013 12:22:07 GMT", "version": "v1" }, { "created": "Mon, 14 Jul 2014 12:46:03 GMT", "version": "v2" } ]
2016-11-17
[ [ "Tervo", "Valtteri", "" ], [ "Tölli", "Antti", "" ], [ "Karjalainen", "Juha", "" ], [ "Matsumoto", "Tad", "" ] ]
Peak to average power ratio (PAPR) constrained power allocation in single carrier multiuser (MU) single-input multiple-output (SIMO) systems with iterative frequency domain (FD) soft cancelation (SC) minimum mean squared error (MMSE) equalization is considered in this paper. To obtain full benefit of the iterative receiver, its convergence properties need to be taken into account also at the transmitter side. In this paper, we extend the existing results on the area of convergence constrained power allocation (CCPA) to consider the instantaneous PAPR at the transmit antenna of each user. In other words, we will introduce a constraint that PAPR cannot exceed a predetermined threshold. By adding the aforementioned constraint into the CCPA optimization framework, the power efficiency of a power amplifier (PA) can be significantly enhanced by enabling it to operate on its linear operation range. Hence, PAPR constraint is especially beneficial for power limited cell-edge users. In this paper, we will derive the instantaneous PAPR constraint as a function of transmit power allocation. Furthermore, successive convex approximation is derived for the PAPR constrained problem. Numerical results show that the proposed method can achieve the objectives described above.
2111.04228
Lei Sun
Lei Sun
Practical, Fast and Robust Point Cloud Registration for 3D Scene Stitching and Object Localization
null
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
3D point cloud registration ranks among the most fundamental problems in remote sensing, photogrammetry, robotics and geometric computer vision. Due to the limited accuracy of 3D feature matching techniques, outliers may exist, sometimes even in very large numbers, among the correspondences. Since existing robust solvers may encounter high computational cost or restricted robustness, we propose a novel, fast and highly robust solution, named VOCRA (VOting with Cost function and Rotating Averaging), for the point cloud registration problem with extreme outlier rates. Our first contribution is to employ Tukey's Biweight robust cost to introduce a new voting and correspondence sorting technique, which proves to be rather effective in distinguishing true inliers from outliers even with extreme (99%) outlier rates. Our second contribution consists of designing a time-efficient consensus maximization paradigm based on robust rotation averaging, serving to seek inlier candidates among the correspondences. Finally, we apply Graduated Non-Convexity with Tukey's Biweight (GNC-TB) to estimate the correct transformation with the inlier candidates obtained, which is then used to find the complete inlier set. Both standard benchmarking and realistic experiments with application to two real-data problems are conducted, and we show that our solver VOCRA is robust against over 99% outliers and more time-efficient than the state-of-the-art competitors.
[ { "created": "Mon, 8 Nov 2021 01:49:04 GMT", "version": "v1" } ]
2021-11-09
[ [ "Sun", "Lei", "" ] ]
3D point cloud registration ranks among the most fundamental problems in remote sensing, photogrammetry, robotics and geometric computer vision. Due to the limited accuracy of 3D feature matching techniques, outliers may exist, sometimes even in very large numbers, among the correspondences. Since existing robust solvers may encounter high computational cost or restricted robustness, we propose a novel, fast and highly robust solution, named VOCRA (VOting with Cost function and Rotating Averaging), for the point cloud registration problem with extreme outlier rates. Our first contribution is to employ Tukey's Biweight robust cost to introduce a new voting and correspondence sorting technique, which proves to be rather effective in distinguishing true inliers from outliers even with extreme (99%) outlier rates. Our second contribution consists of designing a time-efficient consensus maximization paradigm based on robust rotation averaging, serving to seek inlier candidates among the correspondences. Finally, we apply Graduated Non-Convexity with Tukey's Biweight (GNC-TB) to estimate the correct transformation with the inlier candidates obtained, which is then used to find the complete inlier set. Both standard benchmarking and realistic experiments with application to two real-data problems are conducted, and we show that our solver VOCRA is robust against over 99% outliers and more time-efficient than the state-of-the-art competitors.
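The Tukey's Biweight cost named in the abstract above can be illustrated with a minimal sketch. This is the standard bisquare loss function only, not the paper's full voting and correspondence-sorting pipeline; the tuning constant `c` is an assumed free parameter.

```python
def tukey_biweight(residual, c=1.0):
    """Tukey's biweight (bisquare) loss.

    Behaves roughly quadratically for small residuals but saturates at
    the constant c**2/6 for |residual| >= c, so gross outliers stop
    contributing additional cost to the fit.
    """
    r = abs(residual)
    if r >= c:
        return c * c / 6.0
    t = 1.0 - (r / c) ** 2
    return (c * c / 6.0) * (1.0 - t ** 3)
```

The saturation is what makes the cost robust: a residual of 2c and a residual of 100c incur exactly the same penalty.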
2209.08197
Kartik Pant
Kartik Anand Pant, Amod Hegde, and K. V. Srinivas
Thompson Sampling with Virtual Helping Agents
14 pages, 8 figures
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
We address the problem of online sequential decision making, i.e., balancing the trade-off between exploiting the current knowledge to maximize immediate performance and exploring the new information to gain long-term benefits using the multi-armed bandit framework. Thompson sampling is one of the heuristics for choosing actions that address this exploration-exploitation dilemma. We first propose a general framework that helps heuristically tune the exploration versus exploitation trade-off in Thompson sampling using multiple samples from the posterior distribution. Utilizing this framework, we propose two algorithms for the multi-armed bandit problem and provide theoretical bounds on the cumulative regret. Next, we demonstrate the empirical improvement in the cumulative regret performance of the proposed algorithm over Thompson Sampling. We also show the effectiveness of the proposed algorithm on real-world datasets. Contrary to the existing methods, our framework provides a mechanism to vary the amount of exploration/exploitation based on the task at hand. Towards this end, we extend our framework for two additional problems, i.e., best arm identification and time-sensitive learning in bandits, and compare our algorithm with existing methods.
[ { "created": "Fri, 16 Sep 2022 23:34:44 GMT", "version": "v1" } ]
2022-09-20
[ [ "Pant", "Kartik Anand", "" ], [ "Hegde", "Amod", "" ], [ "Srinivas", "K. V.", "" ] ]
We address the problem of online sequential decision making, i.e., balancing the trade-off between exploiting the current knowledge to maximize immediate performance and exploring the new information to gain long-term benefits using the multi-armed bandit framework. Thompson sampling is one of the heuristics for choosing actions that address this exploration-exploitation dilemma. We first propose a general framework that helps heuristically tune the exploration versus exploitation trade-off in Thompson sampling using multiple samples from the posterior distribution. Utilizing this framework, we propose two algorithms for the multi-armed bandit problem and provide theoretical bounds on the cumulative regret. Next, we demonstrate the empirical improvement in the cumulative regret performance of the proposed algorithm over Thompson Sampling. We also show the effectiveness of the proposed algorithm on real-world datasets. Contrary to the existing methods, our framework provides a mechanism to vary the amount of exploration/exploitation based on the task at hand. Towards this end, we extend our framework for two additional problems, i.e., best arm identification and time-sensitive learning in bandits, and compare our algorithm with existing methods.
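The multi-sample idea in the abstract above can be sketched for Bernoulli bandits as follows. This is a minimal illustration, not the paper's exact algorithms: taking the maximum of `m` posterior samples per arm is one assumed way to tilt the trade-off toward exploration, and `m = 1` recovers standard Thompson sampling.

```python
import random

def thompson_step(successes, failures, m=1):
    """Pick an arm by sampling from each arm's Beta posterior.

    For each arm, draw m samples from Beta(s+1, f+1) (uniform prior)
    and keep the largest; then play the arm with the highest score.
    """
    scores = []
    for s, f in zip(successes, failures):
        scores.append(max(random.betavariate(s + 1, f + 1) for _ in range(m)))
    return max(range(len(scores)), key=scores.__getitem__)

def run_bandit(probs, horizon=2000, m=1, seed=0):
    """Simulate a Bernoulli bandit; return the pull count of each arm."""
    random.seed(seed)
    k = len(probs)
    successes, failures, pulls = [0] * k, [0] * k, [0] * k
    for _ in range(horizon):
        a = thompson_step(successes, failures, m)
        pulls[a] += 1
        if random.random() < probs[a]:
            successes[a] += 1
        else:
            failures[a] += 1
    return pulls
```

Over a few thousand rounds the sampler concentrates its pulls on the arm with the highest success probability.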
2311.16090
Tsung-Han Wu
Tsung-Han Wu, Long Lian, Joseph E. Gonzalez, Boyi Li, Trevor Darrell
Self-correcting LLM-controlled Diffusion Models
16 pages, 10 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text-to-image generation has witnessed significant progress with the advent of diffusion models. Despite the ability to generate photorealistic images, current text-to-image diffusion models still often struggle to accurately interpret and follow complex input text prompts. In contrast to existing models that aim to generate images only with their best effort, we introduce Self-correcting LLM-controlled Diffusion (SLD). SLD is a framework that generates an image from the input prompt, assesses its alignment with the prompt, and performs self-corrections on the inaccuracies in the generated image. Steered by an LLM controller, SLD turns text-to-image generation into an iterative closed-loop process, ensuring correctness in the resulting image. SLD is not only training-free but can also be seamlessly integrated with diffusion models behind API access, such as DALL-E 3, to further boost the performance of state-of-the-art diffusion models. Experimental results show that our approach can rectify a majority of incorrect generations, particularly in generative numeracy, attribute binding, and spatial relationships. Furthermore, by simply adjusting the instructions to the LLM, SLD can perform image editing tasks, bridging the gap between text-to-image generation and image editing pipelines. We will make our code available for future research and applications.
[ { "created": "Mon, 27 Nov 2023 18:56:37 GMT", "version": "v1" } ]
2023-11-28
[ [ "Wu", "Tsung-Han", "" ], [ "Lian", "Long", "" ], [ "Gonzalez", "Joseph E.", "" ], [ "Li", "Boyi", "" ], [ "Darrell", "Trevor", "" ] ]
Text-to-image generation has witnessed significant progress with the advent of diffusion models. Despite the ability to generate photorealistic images, current text-to-image diffusion models still often struggle to accurately interpret and follow complex input text prompts. In contrast to existing models that aim to generate images only with their best effort, we introduce Self-correcting LLM-controlled Diffusion (SLD). SLD is a framework that generates an image from the input prompt, assesses its alignment with the prompt, and performs self-corrections on the inaccuracies in the generated image. Steered by an LLM controller, SLD turns text-to-image generation into an iterative closed-loop process, ensuring correctness in the resulting image. SLD is not only training-free but can also be seamlessly integrated with diffusion models behind API access, such as DALL-E 3, to further boost the performance of state-of-the-art diffusion models. Experimental results show that our approach can rectify a majority of incorrect generations, particularly in generative numeracy, attribute binding, and spatial relationships. Furthermore, by simply adjusting the instructions to the LLM, SLD can perform image editing tasks, bridging the gap between text-to-image generation and image editing pipelines. We will make our code available for future research and applications.
2312.14836
Quentin Cappart
Augustin Parjadis, Quentin Cappart, Bistra Dilkina, Aaron Ferber, Louis-Martin Rousseau
Learning Lagrangian Multipliers for the Travelling Salesman Problem
null
null
null
null
cs.AI cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lagrangian relaxation is a versatile mathematical technique employed to relax constraints in an optimization problem, enabling the generation of dual bounds to prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure, aiming to generate accurate Lagrangian multipliers efficiently. We apply this technique to the well-known Held-Karp Lagrangian relaxation for the travelling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to employ them as a warm start for generating Held-Karp relaxation bounds. These bounds are subsequently utilized to enhance the filtering process carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality. We conduct experiments across various distributions of the metric travelling salesman problem, considering instances with up to 200 cities. The results illustrate that our approach can improve the filtering level of the weighted circuit global constraint, reduce the optimality gap by a factor of two for unsolved instances up to a timeout, and reduce the execution time for solved instances by 10%.
[ { "created": "Fri, 22 Dec 2023 17:09:34 GMT", "version": "v1" } ]
2023-12-25
[ [ "Parjadis", "Augustin", "" ], [ "Cappart", "Quentin", "" ], [ "Dilkina", "Bistra", "" ], [ "Ferber", "Aaron", "" ], [ "Rousseau", "Louis-Martin", "" ] ]
Lagrangian relaxation is a versatile mathematical technique employed to relax constraints in an optimization problem, enabling the generation of dual bounds to prove the optimality of feasible solutions and the design of efficient propagators in constraint programming (such as the weighted circuit constraint). However, the conventional process of deriving Lagrangian multipliers (e.g., using subgradient methods) is often computationally intensive, limiting its practicality for large-scale or time-sensitive problems. To address this challenge, we propose an innovative unsupervised learning approach that harnesses the capabilities of graph neural networks to exploit the problem structure, aiming to generate accurate Lagrangian multipliers efficiently. We apply this technique to the well-known Held-Karp Lagrangian relaxation for the travelling salesman problem. The core idea is to predict accurate Lagrangian multipliers and to employ them as a warm start for generating Held-Karp relaxation bounds. These bounds are subsequently utilized to enhance the filtering process carried out by branch-and-bound algorithms. In contrast to much of the existing literature, which primarily focuses on finding feasible solutions, our approach operates on the dual side, demonstrating that learning can also accelerate the proof of optimality. We conduct experiments across various distributions of the metric travelling salesman problem, considering instances with up to 200 cities. The results illustrate that our approach can improve the filtering level of the weighted circuit global constraint, reduce the optimality gap by a factor of two for unsolved instances up to a timeout, and reduce the execution time for solved instances by 10%.
2103.15217
Adam Polak
Adam Polak, Adrian Siwiec, Micha{\l} Stobierski
Euler Meets GPU: Practical Graph Algorithms with Theoretical Guarantees
IPDPS 2021
null
null
null
cs.DS cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Euler tour technique is a classical tool for designing parallel graph algorithms, originally proposed for the PRAM model. We ask whether it can be adapted to run efficiently on GPU. We focus on two established applications of the technique: (1) the problem of finding lowest common ancestors (LCA) of pairs of nodes in trees, and (2) the problem of finding bridges in undirected graphs. In our experiments, we compare theoretically optimal algorithms using the Euler tour technique against simpler heuristics supposed to perform particularly well on typical instances. We show that the Euler tour-based algorithms not only fulfill their theoretical promises and outperform practical heuristics on hard instances, but also perform on par with them on easy instances.
[ { "created": "Sun, 28 Mar 2021 21:02:12 GMT", "version": "v1" } ]
2021-03-30
[ [ "Polak", "Adam", "" ], [ "Siwiec", "Adrian", "" ], [ "Stobierski", "Michał", "" ] ]
The Euler tour technique is a classical tool for designing parallel graph algorithms, originally proposed for the PRAM model. We ask whether it can be adapted to run efficiently on GPU. We focus on two established applications of the technique: (1) the problem of finding lowest common ancestors (LCA) of pairs of nodes in trees, and (2) the problem of finding bridges in undirected graphs. In our experiments, we compare theoretically optimal algorithms using the Euler tour technique against simpler heuristics supposed to perform particularly well on typical instances. We show that the Euler tour-based algorithms not only fulfill their theoretical promises and outperform practical heuristics on hard instances, but also perform on par with them on easy instances.
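The Euler tour technique underlying both applications above can be sketched sequentially. This is a minimal CPU illustration of the classical reduction from LCA to a range-minimum query over the tour, not the paper's GPU implementation; the tree is assumed to be given as children adjacency lists, and the naive `min` over the slice stands in for the sparse-table RMQ that makes queries O(1) after preprocessing.

```python
def euler_tour(adj, root=0):
    """Iterative DFS recording (depth, node) at every visit of the tour."""
    tour, first, depth = [(0, root)], {root: 0}, {root: 0}
    stack = [(root, iter(adj[root]))]
    while stack:
        node, it = stack[-1]
        child = next(it, None)
        if child is None:
            stack.pop()
            if stack:  # re-visit the parent on the way back up
                parent = stack[-1][0]
                tour.append((depth[parent], parent))
        else:
            depth[child] = depth[node] + 1
            first[child] = len(tour)
            tour.append((depth[child], child))
            stack.append((child, iter(adj[child])))
    return tour, first

def lca(tour, first, u, v):
    """LCA = shallowest node on the tour between the first visits of u, v."""
    i, j = sorted((first[u], first[v]))
    return min(tour[i:j + 1])[1]
```

Because the tour visits the lowest common ancestor exactly once between the first occurrences of the two query nodes, the minimum-depth entry of that slice is the answer.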
1609.00452
Yanlun Wu
Yanlun Wu and Jun Fang
Large-Scale Antenna-Assisted Grant-free Non-Orthogonal Multiple Access via Compressed Sensing
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supporting massive connectivity is an important requirement in 5G wireless communication systems. In the massive Machine Type Communication (MTC) scenario, since the network is expected to accommodate a massive number of MTC devices with sparse, short messages, multiple access schemes like the current LTE uplink would not be suitable. In order to reduce the signaling overhead, we consider a grant-free multiple access system, which requires the receiver to perform activity detection, channel estimation, and data decoding in "one shot" and without knowledge of the active users' pilots. However, most of the "one shot" communication research has not considered the massive MIMO scenario. In this work we propose a Multiple Measurement Vector (MMV) model based approach for massive MIMO and exploit the covariance matrix of the measurements to achieve a high activity detection rate.
[ { "created": "Fri, 2 Sep 2016 02:51:25 GMT", "version": "v1" }, { "created": "Tue, 10 Jan 2017 15:35:39 GMT", "version": "v2" } ]
2017-01-11
[ [ "Wu", "Yanlun", "" ], [ "Fang", "Jun", "" ] ]
Supporting massive connectivity is an important requirement in 5G wireless communication systems. In the massive Machine Type Communication (MTC) scenario, since the network is expected to accommodate a massive number of MTC devices with sparse, short messages, multiple access schemes like the current LTE uplink would not be suitable. In order to reduce the signaling overhead, we consider a grant-free multiple access system, which requires the receiver to perform activity detection, channel estimation, and data decoding in "one shot" and without knowledge of the active users' pilots. However, most of the "one shot" communication research has not considered the massive MIMO scenario. In this work we propose a Multiple Measurement Vector (MMV) model based approach for massive MIMO and exploit the covariance matrix of the measurements to achieve a high activity detection rate.
2404.06417
Matthew Fickus
Matthew Fickus, Enrique Gomez-Leos, Joseph W. Iverson
Radon-Hurwitz Grassmannian codes
null
null
null
null
cs.IT math.FA math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Every equi-isoclinic tight fusion frame (EITFF) is a type of optimal code in a Grassmannian, consisting of subspaces of a finite-dimensional Hilbert space for which the smallest principal angle between any pair of them is as large as possible. EITFFs yield dictionaries with minimal block coherence and so are ideal for certain types of compressed sensing. By refining classical arguments of Lemmens and Seidel that rely upon Radon-Hurwitz theory, we fully characterize EITFFs in the special case where the dimension of the subspaces is exactly one-half of that of the ambient space. We moreover show that each such "Radon-Hurwitz EITFF" is highly symmetric.
[ { "created": "Tue, 9 Apr 2024 16:07:00 GMT", "version": "v1" } ]
2024-04-10
[ [ "Fickus", "Matthew", "" ], [ "Gomez-Leos", "Enrique", "" ], [ "Iverson", "Joseph W.", "" ] ]
Every equi-isoclinic tight fusion frame (EITFF) is a type of optimal code in a Grassmannian, consisting of subspaces of a finite-dimensional Hilbert space for which the smallest principal angle between any pair of them is as large as possible. EITFFs yield dictionaries with minimal block coherence and so are ideal for certain types of compressed sensing. By refining classical arguments of Lemmens and Seidel that rely upon Radon-Hurwitz theory, we fully characterize EITFFs in the special case where the dimension of the subspaces is exactly one-half of that of the ambient space. We moreover show that each such "Radon-Hurwitz EITFF" is highly symmetric.
2211.13813
Zhichao Yang
Zhichao Yang, Sunjae Kwon, Zonghai Yao, Hong Yu
Multi-label Few-shot ICD Coding as Autoregressive Generation with Prompt
To appear in AAAI 2023
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge: many ICD codes are infrequently assigned, yet infrequent ICD codes are clinically important. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. Specifically, we first introduce a novel pretraining objective to generate free-text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting in the high-dimensional space of ICD codes, our model generates lower-dimensional text descriptions, from which ICD codes are then inferred. Third, we designed a novel prompt template for multi-label classification. We evaluate our Generation with Prompt model with the benchmark of all code assignment (MIMIC-III-full) and the few-shot ICD code assignment evaluation benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model achieves a macro F1 of 30.2, which substantially outperforms the previous MIMIC-III-full SOTA model (macro F1 4.3) and the model specifically designed for the few-/zero-shot setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA and our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.
[ { "created": "Thu, 24 Nov 2022 22:10:50 GMT", "version": "v1" }, { "created": "Tue, 29 Nov 2022 15:49:33 GMT", "version": "v2" } ]
2022-11-30
[ [ "Yang", "Zhichao", "" ], [ "Kwon", "Sunjae", "" ], [ "Yao", "Zonghai", "" ], [ "Yu", "Hong", "" ] ]
Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note with an average of 3,000+ tokens. This task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge: many ICD codes are infrequently assigned, yet infrequent ICD codes are clinically important. This study addresses the long-tail challenge by transforming this multi-label classification task into an autoregressive generation task. Specifically, we first introduce a novel pretraining objective to generate free-text diagnoses and procedures using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting in the high-dimensional space of ICD codes, our model generates lower-dimensional text descriptions, from which ICD codes are then inferred. Third, we designed a novel prompt template for multi-label classification. We evaluate our Generation with Prompt model with the benchmark of all code assignment (MIMIC-III-full) and the few-shot ICD code assignment evaluation benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model achieves a macro F1 of 30.2, which substantially outperforms the previous MIMIC-III-full SOTA model (macro F1 4.3) and the model specifically designed for the few-/zero-shot setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate previous SOTA and our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.
2110.11246
Johannes M\"uller
Johannes M\"uller, Jan Strohbeck, Martin Herrmann and Michael Buchholz
Motion Planning for Connected Automated Vehicles at Occluded Intersections With Infrastructure Sensors
12 pages, 8 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion planning at urban intersections that accounts for the situation context, handles occlusions, and deals with measurement and prediction uncertainty is a major challenge on the way to urban automated driving. In this work, we address this challenge with a sampling-based optimization approach. For this, we formulate an optimal control problem that optimizes for low risk and high passenger comfort. The risk is calculated on the basis of the perception information and the respective uncertainty using a risk model. The risk model combines set-based methods and probabilistic approaches. Thus, the approach provides safety guarantees in a probabilistic sense, while for a vanishing risk, the formal safety guarantees of the set-based methods are inherited. By exploring all available behavior options, our approach solves decision making and longitudinal trajectory planning in one step. The available behavior options are provided by a formal representation of the situation context, which is also used to reduce calculation efforts. Occlusions are resolved using the external perception of infrastructure-mounted sensors. Yet, instead of merging external and ego perception with track-to-track fusion, the information is used in parallel. The motion planning scheme is validated through real-world experiments.
[ { "created": "Thu, 21 Oct 2021 16:15:49 GMT", "version": "v1" } ]
2021-10-22
[ [ "Müller", "Johannes", "" ], [ "Strohbeck", "Jan", "" ], [ "Herrmann", "Martin", "" ], [ "Buchholz", "Michael", "" ] ]
Motion planning at urban intersections that accounts for the situation context, handles occlusions, and deals with measurement and prediction uncertainty is a major challenge on the way to urban automated driving. In this work, we address this challenge with a sampling-based optimization approach. For this, we formulate an optimal control problem that optimizes for low risk and high passenger comfort. The risk is calculated on the basis of the perception information and the respective uncertainty using a risk model. The risk model combines set-based methods and probabilistic approaches. Thus, the approach provides safety guarantees in a probabilistic sense, while for a vanishing risk, the formal safety guarantees of the set-based methods are inherited. By exploring all available behavior options, our approach solves decision making and longitudinal trajectory planning in one step. The available behavior options are provided by a formal representation of the situation context, which is also used to reduce calculation efforts. Occlusions are resolved using the external perception of infrastructure-mounted sensors. Yet, instead of merging external and ego perception with track-to-track fusion, the information is used in parallel. The motion planning scheme is validated through real-world experiments.
2307.09481
Xi Chen
Xi Chen, Lianghua Huang, Yu Liu, Yujun Shen, Deli Zhao, Hengshuang Zhao
AnyDoor: Zero-shot Object-level Image Customization
CVPR2024
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents AnyDoor, a diffusion-based image generator with the power to teleport target objects to new scenes at user-specified locations in a harmonious way. Instead of tuning parameters for each object, our model is trained only once and effortlessly generalizes to diverse object-scene combinations at the inference stage. Such a challenging zero-shot setting requires an adequate characterization of a certain object. To this end, we complement the commonly used identity feature with detail features, which are carefully designed to maintain texture details yet allow versatile local variations (e.g., lighting, orientation, posture, etc.), supporting the object in favorably blending with different surroundings. We further propose to borrow knowledge from video datasets, where we can observe various forms (i.e., along the time axis) of a single object, leading to stronger model generalizability and robustness. Extensive experiments demonstrate the superiority of our approach over existing alternatives as well as its great potential in real-world applications, such as virtual try-on and object moving. Project page is https://damo-vilab.github.io/AnyDoor-Page/.
[ { "created": "Tue, 18 Jul 2023 17:59:02 GMT", "version": "v1" }, { "created": "Wed, 8 May 2024 03:21:34 GMT", "version": "v2" } ]
2024-05-09
[ [ "Chen", "Xi", "" ], [ "Huang", "Lianghua", "" ], [ "Liu", "Yu", "" ], [ "Shen", "Yujun", "" ], [ "Zhao", "Deli", "" ], [ "Zhao", "Hengshuang", "" ] ]
This work presents AnyDoor, a diffusion-based image generator with the power to teleport target objects to new scenes at user-specified locations in a harmonious way. Instead of tuning parameters for each object, our model is trained only once and effortlessly generalizes to diverse object-scene combinations at the inference stage. Such a challenging zero-shot setting requires an adequate characterization of a certain object. To this end, we complement the commonly used identity feature with detail features, which are carefully designed to maintain texture details yet allow versatile local variations (e.g., lighting, orientation, posture, etc.), supporting the object in favorably blending with different surroundings. We further propose to borrow knowledge from video datasets, where we can observe various forms (i.e., along the time axis) of a single object, leading to stronger model generalizability and robustness. Extensive experiments demonstrate the superiority of our approach over existing alternatives as well as its great potential in real-world applications, such as virtual try-on and object moving. Project page is https://damo-vilab.github.io/AnyDoor-Page/.
2307.14507
Hengjie Yang
Hengjie Yang and Richard D. Wesel
Systematic Transmission With Fountain Parity Checks for Erasure Channels With Stop Feedback
7 pages, double column, 4 figures; comments are welcome! changes in v2: corrected 2 typos in v1. arXiv admin note: substantial text overlap with arXiv:2205.15399
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size $M = 2^k$. We provide new bounds for VLSF codes with zero error, infinite decoding times and with nonzero error, finite decoding times. Both new achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first $k$ bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for BEC by Devassy \emph{et al.} in 2016. We also give a negative answer to the open question Devassy \emph{et al.} put forward on whether the $23.4\%$ backoff to capacity at $k = 3$ is fundamental. For VLSF codes with finite decoding times, numerical evaluations show that the achievable rate for VLSF codes with a moderate number of decoding times closely approaches that for VLSF codes with infinite decoding times.
[ { "created": "Wed, 26 Jul 2023 21:12:18 GMT", "version": "v1" }, { "created": "Fri, 28 Jul 2023 05:52:28 GMT", "version": "v2" } ]
2023-07-31
[ [ "Yang", "Hengjie", "" ], [ "Wesel", "Richard D.", "" ] ]
In this paper, we present new achievability bounds on the maximal achievable rate of variable-length stop-feedback (VLSF) codes operating over a binary erasure channel (BEC) at a fixed message size $M = 2^k$. We provide new bounds for VLSF codes with zero error, infinite decoding times and with nonzero error, finite decoding times. Both new achievability bounds are proved by constructing a new VLSF code that employs systematic transmission of the first $k$ bits followed by random linear fountain parity bits decoded with a rank decoder. For VLSF codes with infinite decoding times, our new bound outperforms the state-of-the-art result for BEC by Devassy \emph{et al.} in 2016. We also give a negative answer to the open question Devassy \emph{et al.} put forward on whether the $23.4\%$ backoff to capacity at $k = 3$ is fundamental. For VLSF codes with finite decoding times, numerical evaluations show that the achievable rate for VLSF codes with a moderate number of decoding times closely approaches that for VLSF codes with infinite decoding times.
2110.05477
Shashank Reddy Vadyala
Shashank Reddy Vadyala, Sai Nethra Betgeri
Predicting the spread of COVID-19 in Delhi, India using Deep Residual Recurrent Neural Networks
10 pages,3 figures. arXiv admin note: text overlap with arXiv:2104.14034 by other authors
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Detecting the spread of coronavirus will go a long way toward reducing human and economic loss. Unfortunately, existing epidemiological models used for COVID-19 prediction are too slow and fail to capture the development of COVID-19 in detail. This research uses Partial Differential Equations (PDEs) to improve the processing speed and accuracy of COVID-19 forecasting governed by the SEIRD model equations. The dynamics of COVID-19 were extracted using Convolutional Neural Networks and Deep Residual Recurrent Neural Networks (DR-RNNs) from data simulated using PDEs. The DR-RNNs' accuracy is measured using Mean Squared Error. The DR-RNN COVID-19 prediction model has been shown to produce accurate COVID-19 predictions. In addition, we conclude that DR-RNNs can significantly advance the ability to support decision-making in real-time COVID-19 prediction.
[ { "created": "Sat, 9 Oct 2021 19:16:36 GMT", "version": "v1" } ]
2021-10-13
[ [ "Vadyala", "Shashank Reddy", "" ], [ "Betgeri", "Sai Nethra", "" ] ]
Detecting the spread of coronavirus will go a long way toward reducing human and economic loss. Unfortunately, existing epidemiological models used for COVID-19 prediction are too slow and fail to capture the development of COVID-19 in detail. This research uses Partial Differential Equations (PDEs) to improve the processing speed and accuracy of COVID-19 forecasting governed by the SEIRD model equations. The dynamics of COVID-19 were extracted using Convolutional Neural Networks and Deep Residual Recurrent Neural Networks (DR-RNNs) from data simulated using PDEs. The DR-RNNs' accuracy is measured using Mean Squared Error. The DR-RNN COVID-19 prediction model has been shown to produce accurate COVID-19 predictions. In addition, we conclude that DR-RNNs can significantly advance the ability to support decision-making in real-time COVID-19 prediction.
cs/0105027
Leonid Peshkin
Leonid Peshkin and Sayan Mukherjee
Bounds on sample size for policy evaluation in Markov environments
14 pages
COLT 2001: The Fourteenth Annual Conference on Computational Learning Theory
null
null
cs.LG cs.AI cs.CC
null
Reinforcement learning means finding the optimal course of action in Markovian environments without knowledge of the environment's dynamics. Stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from results of simulating that very policy in the environment. This approach requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that utilize data gathered when using one policy to estimate the value of using another policy, resulting in much more data-efficient algorithms. We consider the question of accumulating a sufficient experience and give PAC-style bounds.
[ { "created": "Thu, 17 May 2001 18:33:56 GMT", "version": "v1" } ]
2017-05-25
[ [ "Peshkin", "Leonid", "" ], [ "Mukherjee", "Sayan", "" ] ]
Reinforcement learning means finding the optimal course of action in Markovian environments without knowledge of the environment's dynamics. Stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from results of simulating that very policy in the environment. This approach requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that utilize data gathered when using one policy to estimate the value of using another policy, resulting in much more data-efficient algorithms. We consider the question of accumulating a sufficient experience and give PAC-style bounds.
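As an illustrative aside: the off-policy value estimators this abstract describes reuse data gathered under one policy to estimate the value of another, classically via importance-sampled returns weighted by the product of per-step probability ratios. The toy two-state MDP and the function names below are hypothetical, only sketching that weighting; the paper's actual estimators and PAC bounds are more refined.

```python
import random

def rollout(policy, rng, horizon=5):
    """Tiny two-state MDP for illustration (hypothetical).
    policy[s] is P(action 0 | state s); action 1 toggles the state,
    and reward 1 is earned by taking action 0 in state 1."""
    s, traj = 0, []
    for _ in range(horizon):
        a = 0 if rng.random() < policy[s] else 1
        r = 1.0 if (s == 1 and a == 0) else 0.0
        traj.append((s, a, r))
        s = (s + a) % 2
    return traj

def is_estimate(trajs, behavior, target):
    """Importance-sampled value of `target` from trajectories drawn
    under `behavior`: weight each return by prod_t pi_target/pi_behavior.
    Requires behavior to give nonzero probability wherever target does."""
    total = 0.0
    for traj in trajs:
        w, ret = 1.0, 0.0
        for (s, a, r) in traj:
            p_b = behavior[s] if a == 0 else 1.0 - behavior[s]
            p_t = target[s] if a == 0 else 1.0 - target[s]
            w *= p_t / p_b          # per-step likelihood ratio
            ret += r
        total += w * ret            # unbiased weighted return
    return total / len(trajs)
```

With enough trajectories, the weighted estimate agrees with directly simulating the target policy, which is exactly the data reuse the abstract is quantifying with PAC-style sample-size bounds.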
2407.05874
Georg Arbesser-Rastburg
Georg Arbesser-Rastburg, Thomas Olip, Johanna Pirker
Investigating Trading Mechanisms as a Driver for User Experience in Racing Games
4 pages, 2 figures, to be published in the conference proceedings of IEEE Conference on Games 2024
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
The exchange of digital goods has become a significant aspect of the global economy, with digital products offering inexpensive reproduction and distribution. In-game objects, a type of digital currency, have emerged as tradable commodities within gaming ecosystems. Despite extensive research on various aspects of digital goods, little attention has been given to the impact of in-game trading mechanisms on user experience. This paper presents a study aimed at evaluating the influence of trading systems on user experience in a racing game context. We developed a simple racing game featuring an in-game market for buying and selling car variants and conducted an A/B study comparing user experiences between groups utilizing the trading system and those unlocking cars through race completion. Our findings suggest that while the trading system did not significantly alter the overall user experience, further exploration of diverse trading approaches may offer insights into their impact on user engagement.
[ { "created": "Mon, 8 Jul 2024 12:33:51 GMT", "version": "v1" } ]
2024-07-09
[ [ "Arbesser-Rastburg", "Georg", "" ], [ "Olip", "Thomas", "" ], [ "Pirker", "Johanna", "" ] ]
The exchange of digital goods has become a significant aspect of the global economy, with digital products offering inexpensive reproduction and distribution. In-game objects, a type of digital currency, have emerged as tradable commodities within gaming ecosystems. Despite extensive research on various aspects of digital goods, little attention has been given to the impact of in-game trading mechanisms on user experience. This paper presents a study aimed at evaluating the influence of trading systems on user experience in a racing game context. We developed a simple racing game featuring an in-game market for buying and selling car variants and conducted an A/B study comparing user experiences between groups utilizing the trading system and those unlocking cars through race completion. Our findings suggest that while the trading system did not significantly alter the overall user experience, further exploration of diverse trading approaches may offer insights into their impact on user engagement.
1412.4353
Aridj Mohamed
Aridj Mohamed
LH*TH: New fast Scalable Distributed Data Structures (SDDS)
null
International Journal of Computer Science Issues,(IJCSI) Volume 11, Issue 6, No 2, pp 123-128 November 2014
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proposed in 1993, Scalable Distributed Data Structures (SDDSs) have become a basic framework for data management on multicomputers. In this paper we propose an organization of the LH* bucket based on trie hashing in order to improve the response times of the different access requests.
[ { "created": "Sun, 14 Dec 2014 13:01:39 GMT", "version": "v1" } ]
2014-12-16
[ [ "Mohamed", "Aridj", "" ] ]
Proposed in 1993, Scalable Distributed Data Structures (SDDSs) have become a basic framework for data management on multicomputers. In this paper we propose an organization of the LH* bucket based on trie hashing in order to improve the response times of the different access requests.
2102.08904
Nima Mahmoudi
Nima Mahmoudi, Hamzeh Khazaei
SimFaaS: A Performance Simulator for Serverless Computing Platforms
to be published in "The 11th IEEE International Conference on Cloud Computing and Services Science (CLOSER 2021)"
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Developing accurate and extendable performance models for serverless platforms, aka Function-as-a-Service (FaaS) platforms, is a very challenging task. Also, implementation and experimentation on real serverless platforms is both costly and time-consuming. However, at the moment, there is no comprehensive simulation tool or framework to be used instead of the real platform. As a result, in this paper, we fill this gap by proposing a simulation platform, called SimFaaS, which assists serverless application developers to develop optimized Function-as-a-Service applications in terms of cost and performance. On the other hand, SimFaaS can be leveraged by FaaS providers to tailor their platforms to be workload-aware so that they can increase profit and quality of service at the same time. Also, serverless platform providers can evaluate new designs, implementations, and deployments on SimFaaS in a timely and cost-efficient manner. SimFaaS is open-source, well-documented, and publicly available, making it easily usable and extendable to incorporate more use case scenarios in the future. Besides, it provides performance engineers with a set of tools that can calculate several characteristics of serverless platform internal states, which is otherwise hard (mostly impossible) to extract from real platforms. We show how SimFaaS facilitates the prediction of essential performance metrics such as average response time, probability of cold start, and the average number of instances reflecting the infrastructure cost incurred by the serverless computing provider. We evaluate the accuracy and applicability of SimFaaS by comparing the prediction results with real-world traces from Amazon AWS Lambda.
[ { "created": "Wed, 17 Feb 2021 17:50:48 GMT", "version": "v1" } ]
2021-02-18
[ [ "Mahmoudi", "Nima", "" ], [ "Khazaei", "Hamzeh", "" ] ]
Developing accurate and extendable performance models for serverless platforms, aka Function-as-a-Service (FaaS) platforms, is a very challenging task. Also, implementation and experimentation on real serverless platforms is both costly and time-consuming. However, at the moment, there is no comprehensive simulation tool or framework to be used instead of the real platform. As a result, in this paper, we fill this gap by proposing a simulation platform, called SimFaaS, which assists serverless application developers to develop optimized Function-as-a-Service applications in terms of cost and performance. On the other hand, SimFaaS can be leveraged by FaaS providers to tailor their platforms to be workload-aware so that they can increase profit and quality of service at the same time. Also, serverless platform providers can evaluate new designs, implementations, and deployments on SimFaaS in a timely and cost-efficient manner. SimFaaS is open-source, well-documented, and publicly available, making it easily usable and extendable to incorporate more use case scenarios in the future. Besides, it provides performance engineers with a set of tools that can calculate several characteristics of serverless platform internal states, which is otherwise hard (mostly impossible) to extract from real platforms. We show how SimFaaS facilitates the prediction of essential performance metrics such as average response time, probability of cold start, and the average number of instances reflecting the infrastructure cost incurred by the serverless computing provider. We evaluate the accuracy and applicability of SimFaaS by comparing the prediction results with real-world traces from Amazon AWS Lambda.
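As an illustrative aside: one of the metrics the abstract says SimFaaS predicts, the probability of a cold start, can be approximated in a few lines under a deliberately crude model. The sketch below is not SimFaaS's API or model — it assumes Poisson arrivals and a single instance that stays warm for a fixed keep-alive window after each request, so a request is "cold" exactly when the inter-arrival gap exceeds the window.

```python
import math
import random

def cold_start_prob(lam, keep_alive, n=100_000, rng=None):
    """Monte Carlo fraction of requests arriving after the warm
    instance expired. Poisson(lam) arrivals => exponential gaps;
    a gap longer than `keep_alive` means a cold start."""
    rng = rng or random.Random(0)
    cold = 0
    for _ in range(n):
        gap = rng.expovariate(lam)
        if gap > keep_alive:
            cold += 1
    return cold / n

# Closed form for this toy model: P(cold) = exp(-lam * keep_alive)
analytic = math.exp(-1.0 * 0.5)
```

Real serverless platforms add concurrency, scaling, and provider-specific expiration policies, which is precisely why a full simulator such as SimFaaS is needed instead of a one-line formula.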