Dataset schema (column: type, value-length range)
id: string, 9–10
submitter: string, 1–64
authors: string, 4–20.7k
title: string, 4–246
comments: string, 1–523
journal-ref: string, 4–404
doi: string, 11–153
report-no: string, 2–254
categories: string, 5–98
license: string, 9 distinct values
orig_abstract: string, 14–3.35k
versions: list, 1–60
update_date: string, 10–10
authors_parsed: list, 1–1.35k
abstract: string, 11–3.34k
2406.08085
Yiqin Wang
Haoji Zhang, Yiqin Wang, Yansong Tang, Yong Liu, Jiashi Feng, Jifeng Dai, Xiaojie Jin
Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Benefiting from advances in large language models and cross-modal alignment, existing multi-modal video understanding methods have achieved prominent performance in offline scenarios. However, online video streams, one of the most common media forms in the real world, have seldom received attention. Compared to offline videos, the 'dynamic' nature of online video streams poses challenges for the direct application of existing models and introduces new problems, such as the storage of extremely long-term information and the interaction between continuous visual content and 'asynchronous' user questions. Therefore, in this paper we present Flash-VStream, a video-language model that simulates the memory mechanism of humans. Our model is able to process extremely long video streams in real time and respond to user queries simultaneously. Compared to existing models, Flash-VStream achieves significant reductions in inference latency and VRAM consumption, which is crucial for understanding online streaming video. In addition, given that existing video understanding benchmarks predominantly concentrate on offline scenarios, we propose VStream-QA, a novel question answering benchmark specifically designed for online video stream understanding. Comparisons with popular existing methods on the proposed benchmark demonstrate the superiority of our method in this challenging setting. To verify the generalizability of our approach, we further evaluate it on existing video understanding benchmarks and achieve state-of-the-art performance in offline scenarios as well. All code, models, and datasets are available at https://invinciblewyq.github.io/vstream-page/
[ { "created": "Wed, 12 Jun 2024 11:07:55 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2024 05:39:46 GMT", "version": "v2" } ]
2024-07-02
[ [ "Zhang", "Haoji", "" ], [ "Wang", "Yiqin", "" ], [ "Tang", "Yansong", "" ], [ "Liu", "Yong", "" ], [ "Feng", "Jiashi", "" ], [ "Dai", "Jifeng", "" ], [ "Jin", "Xiaojie", "" ] ]
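The bounded-memory idea behind streaming models like this one can be illustrated with a toy consolidation loop. This is a generic sketch assumed for illustration (1-D "features", closest-pair averaging), not Flash-VStream's actual memory mechanism:

```python
# Toy fixed-capacity memory (illustrative assumption, NOT Flash-VStream's
# actual mechanism): when the memory is full, merge the two most similar
# entries, so memory stays bounded however long the stream runs.

def consolidate(memory, new_feat, capacity):
    """Append new_feat; while over capacity, average the closest pair."""
    memory = memory + [new_feat]
    while len(memory) > capacity:
        # Find the pair of entries with the smallest distance (1-D toy features).
        i, j = min(
            ((a, b) for a in range(len(memory)) for b in range(a + 1, len(memory))),
            key=lambda p: abs(memory[p[0]] - memory[p[1]]),
        )
        merged = (memory[i] + memory[j]) / 2
        memory = [m for k, m in enumerate(memory) if k not in (i, j)] + [merged]
    return memory

mem = []
for feat in [0.0, 10.0, 0.1, 20.0, 10.2]:  # stand-in for an unbounded stream
    mem = consolidate(mem, feat, capacity=3)
# mem holds exactly 3 consolidated entries regardless of stream length
```

Because the memory size is capped, per-query cost stays constant no matter how long the stream has run, which is the property the abstract's latency and VRAM claims depend on.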
2008.10316
Maximilian Böther
Thomas Bläsius and Maximilian Böther and Philipp Fischbeck and Tobias Friedrich and Alina Gries and Falk Hüffner and Otto Kißig and Pascal Lenzner and Louise Molitor and Leon Schiller and Armin Wells and Simon Wietheger
A Strategic Routing Framework and Algorithms for Computing Alternative Paths
19 pages, 7 figures, full version of paper accepted at ATMOS 2020
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional navigation services find the fastest route for a single driver. Though always using the fastest route seems desirable for every individual, selfish behavior can have undesirable effects such as higher energy consumption and avoidable congestion, even leading to higher overall and individual travel times. In contrast, strategic routing aims at optimizing the traffic for all agents regarding a global optimization goal. We introduce a framework to formalize real-world strategic routing scenarios as algorithmic problems and study one of them, which we call Single Alternative Path (SAP), in detail. There, we are given an original route between a single origin--destination pair. The goal is to suggest an alternative route to all agents that optimizes the overall travel time under the assumption that the agents distribute among both routes according to a psychological model, for which we introduce the concept of Pareto-conformity. We show that the SAP problem is NP-complete, even for such models. Nonetheless, assuming Pareto-conformity, we give multiple algorithms for different variants of SAP, using multi-criteria shortest path algorithms as subroutines. Moreover, we prove that several natural models are in fact Pareto-conform. The implementation of our algorithms serves as a proof of concept, showing that SAP can be solved in reasonable time even though the algorithms have exponential running time in the worst case.
[ { "created": "Mon, 24 Aug 2020 10:56:41 GMT", "version": "v1" } ]
2020-08-25
[ [ "Bläsius", "Thomas", "" ], [ "Böther", "Maximilian", "" ], [ "Fischbeck", "Philipp", "" ], [ "Friedrich", "Tobias", "" ], [ "Gries", "Alina", "" ], [ "Hüffner", "Falk", "" ], [ "Kißig", "Otto", "" ], [ "Lenzner", "Pascal", "" ], [ "Molitor", "Louise", "" ], [ "Schiller", "Leon", "" ], [ "Wells", "Armin", "" ], [ "Wietheger", "Simon", "" ] ]
2004.05575
Koteswar Rao Jerripothula
Koteswar Rao Jerripothula, Jianfei Cai, Jiangbo Lu, Junsong Yuan
Image Co-skeletonization via Co-segmentation
13 pages, 12 figures, Submitted to IEEE Transactions on Image Processing (TIP)
null
null
null
cs.CV cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in the joint processing of images have certainly shown its advantages over individual processing. Different from existing works geared towards co-segmentation or co-localization, in this paper we explore a new joint processing topic: image co-skeletonization, which is defined as the joint skeleton extraction of objects in an image collection. Object skeletonization in a single natural image is a challenging problem because there is hardly any prior knowledge about the object. Therefore, we resort to the idea of object co-skeletonization, hoping that the commonness prior that exists across the images may help, just as it does for other joint processing problems such as co-segmentation. We observe that the skeleton can provide good scribbles for segmentation, and skeletonization, in turn, needs good segmentation. Therefore, we propose a coupled framework for the co-skeletonization and co-segmentation tasks so that they are well informed by each other and benefit each other synergistically. Since this is a new problem, we also construct a benchmark dataset by annotating nearly 1.8k images spread across 38 categories. Extensive experiments demonstrate that the proposed method achieves promising results in all three possible joint-processing scenarios: weakly supervised, supervised, and unsupervised.
[ { "created": "Sun, 12 Apr 2020 09:35:54 GMT", "version": "v1" } ]
2020-04-14
[ [ "Jerripothula", "Koteswar Rao", "" ], [ "Cai", "Jianfei", "" ], [ "Lu", "Jiangbo", "" ], [ "Yuan", "Junsong", "" ] ]
1910.11141
Alexey Radul
Alexey Radul, Brian Patton, Dougal Maclaurin, Matthew D. Hoffman and Rif A. Saurous
Automatically Batching Control-Intensive Programs for Modern Accelerators
10 pages; Machine Learning and Systems 2020
null
null
null
cs.DC cs.LG cs.PL
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a general approach to batching arbitrary computations for accelerators such as GPUs. We show orders-of-magnitude speedups using our method on the No U-Turn Sampler (NUTS), a workhorse algorithm in Bayesian statistics. The central challenge of batching NUTS and other Markov chain Monte Carlo algorithms is data-dependent control flow and recursion. We overcome this by mechanically transforming a single-example implementation into a form that explicitly tracks the current program point for each batch member, and only steps forward those in the same place. We present two different batching algorithms: a simpler, previously published one that inherits recursion from the host Python, and a more complex, novel one that implements recursion directly and can batch across it. We implement these batching methods as a general program transformation on Python source. Both the batching system and the NUTS implementation presented here are available as part of the popular TensorFlow Probability software package.
[ { "created": "Wed, 23 Oct 2019 14:06:18 GMT", "version": "v1" }, { "created": "Thu, 12 Mar 2020 15:56:56 GMT", "version": "v2" } ]
2020-03-13
[ [ "Radul", "Alexey", "" ], [ "Patton", "Brian", "" ], [ "Maclaurin", "Dougal", "" ], [ "Hoffman", "Matthew D.", "" ], [ "Saurous", "Rif A.", "" ] ]
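The program-point mechanism this abstract describes can be sketched in a few lines. This is a toy illustration under assumed simplifications (one loop, two program points), not the paper's source transformation:

```python
# Toy sketch of program-point batching (not the paper's transformation):
# each batch member carries an explicit program counter, and one batched
# step advances only the members that sit at the same program point.
# Here the data-dependent control flow is a while-loop whose members
# exit at different times.

LOOP, DONE = "loop", "done"

def batched_halve(values):
    """Halve each value until it reaches 1, batching the loop body."""
    state = [{"pc": LOOP if v > 1 else DONE, "val": v} for v in values]
    steps = 0
    while any(m["pc"] == LOOP for m in state):
        # Advance only members currently at the LOOP program point.
        for m in state:
            if m["pc"] == LOOP:
                m["val"] //= 2          # the (vectorizable) loop body
                if m["val"] <= 1:
                    m["pc"] = DONE      # this member leaves the loop
        steps += 1                      # one batched step for the whole group
    return [m["val"] for m in state], steps

results, steps = batched_halve([1, 8, 5, 32])
```

The batch finishes in as many batched steps as the slowest member needs, while members that exit early simply stop being advanced.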
1811.06287
Michael Werman
Levi Offen and Michael Werman
Sketch based Reduced Memory Hough Transform
5 pages
2018 25th IEEE International Conference on Image Processing (ICIP)
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes using sketch algorithms to represent the votes in Hough transforms. Replacing the accumulator array with a sketch (Sketch Hough Transform - SHT) significantly reduces the memory needed to compute a Hough transform. We also present a new sketch, Count Median Update, which works better than known sketch methods for replacing the accumulator array in the Hough Transform.
[ { "created": "Thu, 15 Nov 2018 10:44:35 GMT", "version": "v1" } ]
2018-11-16
[ [ "Offen", "Levi", "" ], [ "Werman", "Michael", "" ] ]
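The accumulator-as-sketch idea can be illustrated with a standard count-min sketch standing in for the paper's novel Count Median Update sketch (which differs); the voting scheme below is the usual rho-theta line Hough transform:

```python
import math
import hashlib

# Illustration only: a standard count-min sketch replaces the dense Hough
# accumulator. The paper's Count Median Update sketch is a different, novel
# sketch; this just shows how votes go into a sketch instead of an array.

class CountMinSketch:
    def __init__(self, depth=4, width=1024):
        self.depth, self.width = depth, width
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, key):
        for r in range(self.depth):
            self.rows[r][self._index(key, r)] += 1

    def query(self, key):  # never under-counts, may over-count
        return min(self.rows[r][self._index(key, r)] for r in range(self.depth))

def hough_vote(points, sketch, n_theta=180):
    """Each point votes for the (theta, rho) bin of every line through it."""
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            sketch.add((t, rho))

sketch = CountMinSketch()
hough_vote([(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)], sketch)  # points on y = x
strongest = sketch.query((135, 0))  # the y = x line: theta = 135 deg, rho = 0
```

The sketch needs only depth × width counters, independent of the rho range and angular resolution, which is where the memory reduction comes from.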
2407.17738
Haoran Zhu
Haoran Zhu, Yifan Zhou, Chang Xu, Ruixiang Zhang, and Wen Yang
Enhancing Fine-grained Object Detection in Aerial Images via Orthogonal Mapping
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine-Grained Object Detection (FGOD) is a critical task in high-resolution aerial image analysis. This letter introduces Orthogonal Mapping (OM), a simple yet effective method aimed at addressing the challenge of semantic confusion inherent in FGOD. OM introduces orthogonal constraints in the feature space by decoupling features from the last layer of the classification branch with a class-wise orthogonal vector basis. This effectively mitigates semantic confusion and enhances classification accuracy. Moreover, OM can be seamlessly integrated into mainstream object detectors. Extensive experiments conducted on three FGOD datasets (FAIR1M, ShipRSImageNet, and MAR20) demonstrate the effectiveness and superiority of the proposed approach. Notably, with just one line of code, OM achieves a 4.08% improvement in mean Average Precision (mAP) over FCOS on the ShipRSImageNet dataset. Codes are released at https://github.com/ZhuHaoranEIS/Orthogonal-FGOD.
[ { "created": "Thu, 25 Jul 2024 03:26:41 GMT", "version": "v1" } ]
2024-07-26
[ [ "Zhu", "Haoran", "" ], [ "Zhou", "Yifan", "" ], [ "Xu", "Chang", "" ], [ "Zhang", "Ruixiang", "" ], [ "Yang", "Wen", "" ] ]
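The class-wise orthogonal basis idea can be sketched in miniature. This assumes a trivial one-axis-per-class orthonormal basis for illustration and is not the paper's implementation:

```python
# Toy sketch of the orthogonality idea behind OM (assumed basis, not the
# paper's code): each class gets a vector from an orthonormal basis, and
# class scores are projections onto these mutually orthogonal vectors,
# so one class's direction contributes nothing to another's score.

def orthonormal_basis(num_classes, dim):
    """Trivial orthonormal class basis: one axis per class (needs dim >= C)."""
    assert dim >= num_classes
    return [[1.0 if j == i else 0.0 for j in range(dim)] for i in range(num_classes)]

def logits(basis, feat):
    # Score for each class = projection of the feature onto its class vector.
    return [sum(b * f for b, f in zip(row, feat)) for row in basis]

W = orthonormal_basis(3, 4)
feat = [0.9, 0.1, 0.0, 0.2]   # made-up feature leaning toward class 0
scores = logits(W, feat)      # class 0 dominates; other scores stay small
```

Because distinct class vectors have zero dot product, a feature aligned with one class's direction cannot inflate a confusable class's score, which is the semantic-confusion mitigation the abstract refers to.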
1312.0718
Yong Zeng
Yong Zeng, Rui Zhang, and Zhi Ning Chen
Electromagnetic Lens-focusing Antenna Enabled Massive MIMO: Performance Improvement and Cost Reduction
30 pages, 9 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Massive multiple-input multiple-output (MIMO) techniques have been recently advanced to tremendously improve the performance of wireless communication networks. However, the use of very large antenna arrays at the base stations (BSs) brings new issues, such as the significantly increased hardware and signal processing costs. In order to reap the enormous gain of massive MIMO and yet reduce its cost to an affordable level, this paper proposes a novel system design by integrating an electromagnetic (EM) lens with the large antenna array, termed the EM-lens enabled MIMO. The EM lens has the capability of focusing the power of an incident wave to a small area of the antenna array, while the location of the focal area varies with the angle of arrival (AoA) of the wave. Therefore, in practical scenarios where the arriving signals from geographically separated users have different AoAs, the EM-lens enabled system provides two new benefits, namely energy focusing and spatial interference rejection. By taking into account the effects of imperfect channel estimation via pilot-assisted training, in this paper we analytically show that the average received signal-to-noise ratio (SNR) in both the single-user and multiuser uplink transmissions can be strictly improved by the EM-lens enabled system. Furthermore, we demonstrate that the proposed design makes it possible to considerably reduce the hardware and signal processing costs with only slight degradations in performance. To this end, two complexity/cost reduction schemes are proposed, which are small-MIMO processing with parallel receiver filtering applied over subgroups of antennas to reduce the computational complexity, and channel covariance based antenna selection to reduce the required number of radio frequency (RF) chains. Numerical results are provided to corroborate our analysis.
[ { "created": "Tue, 3 Dec 2013 07:14:23 GMT", "version": "v1" }, { "created": "Wed, 26 Mar 2014 14:15:44 GMT", "version": "v2" } ]
2014-03-27
[ [ "Zeng", "Yong", "" ], [ "Zhang", "Rui", "" ], [ "Chen", "Zhi Ning", "" ] ]
1008.1438
Ji King
Ji King
Harmonic Analysis and Qualitative Uncertainty Principle
108 pages, no figures
null
null
null
cs.IT math-ph math.CA math.IT math.MP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the mathematical nature of the qualitative uncertainty principle (QUP), which plays an important role in mathematics, physics, and engineering. Consider a 3-tuple (K, H1, H2) such that K: H1 -> H2 is an integral operator. Suppose a signal f is in H1, and Ω1 and Ω2 are the domains on which f and Kf are defined, respectively. Does this signal f vanish if |Σ(f)| < |Ω1| and |Σ(Kf)| < |Ω2|? The excesses and deficiencies of the integral kernel K(ω, t) turn out to be closely related to this general formulation of the QUP. A complete-point theory of integral kernels is thus established to deal with the QUP. This theory addresses the density and linear independence of integral kernels. Some algebraic and geometric properties of complete points are presented. It is shown that the satisfaction of the QUP depends on the existence of certain complete points. By recognizing the complete points of the corresponding integral kernels, the QUP is studied for the Fourier transform, the Wigner-Ville distribution, the Gabor transform, and wavelets. It is shown that the QUP holds only for well-behaved integral operators. An investigation of the full violation of the QUP shows that the L2 space is large for high-resolution harmonic analysis, and that invertible linear integral transforms whose kernels are complete in L2 probably lead to the satisfaction of the QUP. This indicates a performance limitation of linear integral transforms in harmonic analysis. Two possible ways of bypassing the uncertainty principle, nonlinear methods and sparse representation, are thus suggested. The notion of an operator family is developed and applied to understand the remarkable performance of recent sparse representation methods.
[ { "created": "Mon, 9 Aug 2010 01:59:49 GMT", "version": "v1" } ]
2010-08-10
[ [ "King", "Ji", "" ] ]
2308.10959
Sijin Wu
Sijin Wu, Dan Zhang, Teng Hu, Shikun Feng
DocPrompt: Large-scale continue pretrain for zero-shot and few-shot document question answering
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose DocPrompt for document question answering tasks with powerful zero-shot and few-shot performance. We propose a novel weakly supervised data generation method, a novel multi-stage training method, and a novel ensemble of an understanding model and a generation model. We achieve state-of-the-art performance on four document question answering tasks. This method greatly improves the delivery efficiency and model performance of document question answering customer projects, reducing annotation and labor costs. Our demo can be found at https://huggingface.co/spaces/PaddlePaddle/ERNIE-Layout.
[ { "created": "Mon, 21 Aug 2023 18:14:00 GMT", "version": "v1" }, { "created": "Thu, 31 Aug 2023 09:14:17 GMT", "version": "v2" } ]
2023-09-01
[ [ "Wu", "Sijin", "" ], [ "Zhang", "Dan", "" ], [ "Hu", "Teng", "" ], [ "Feng", "Shikun", "" ] ]
2402.02055
Yiping Wang
Yiping Wang, Yifang Chen, Wendan Yan, Kevin Jamieson, Simon Shaolei Du
Variance Alignment Score: A Simple But Tough-to-Beat Data Selection Method for Multimodal Contrastive Learning
17 pages, 4 figures
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, data selection has emerged as a core issue for large-scale visual-language model pretraining, especially on noisy web-curated datasets. One widely adopted strategy assigns quality scores such as CLIP similarity for each sample and retains the data pairs with the highest scores. However, these approaches are agnostic of data distribution and always fail to select the most informative samples. To solve this problem, we propose a simple yet theoretically principled metric named Variance Alignment Score (VAS), which has the form $\langle \Sigma_{\text{test}}, \Sigma_i\rangle$. Here, $\Sigma_{\text{test}}$ represents the target (cross-)covariance matrix we aim to align, potentially based on prior knowledge, while $\Sigma_i$ denotes the tensor product of single or multi-modal representations for the $i$-th sample. We further design a new data selection method that maximizes the total VAS. We provide theoretical analysis in a simplified setting to demonstrate the theoretical advantage of VAS over random or other existing data selection. Experimentally, applying VAS and CLIP scores together can outperform baselines by a margin of $1.3\%$ average on 38 evaluation sets for noisy dataset DataComp and $2.5\%$ on VTAB for high-quality dataset CC12M. Additionally, our ablation study also shows visual features are better than text for calculating VAS, and the related classical experimental design methods may fail under this context.
[ { "created": "Sat, 3 Feb 2024 06:29:04 GMT", "version": "v1" } ]
2024-02-06
[ [ "Wang", "Yiping", "" ], [ "Chen", "Yifang", "" ], [ "Yan", "Wendan", "" ], [ "Jamieson", "Kevin", "" ], [ "Du", "Simon Shaolei", "" ] ]
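The score $\langle \Sigma_{\text{test}}, \Sigma_i\rangle$ above reduces to $f_i^\top \Sigma_{\text{test}} f_i$ when $\Sigma_i = f_i f_i^\top$ (Frobenius inner product with a rank-one matrix). A minimal sketch with made-up numbers:

```python
# Toy VAS computation (illustrative numbers, not from the paper):
# for sample representation f_i, take Sigma_i = f_i f_i^T and score
# <Sigma_test, Sigma_i> = f_i^T Sigma_test f_i; selection keeps the
# samples with the highest scores.

def vas(sigma_test, f):
    """<Sigma_test, f f^T> computed directly as f^T Sigma_test f."""
    n = len(f)
    return sum(f[i] * sigma_test[i][j] * f[j]
               for i in range(n) for j in range(n))

# Made-up target (cross-)covariance favouring the first coordinate.
sigma_test = [[2.0, 0.0],
              [0.0, 0.5]]

samples = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
scores = {k: vas(sigma_test, f) for k, f in samples.items()}
# samples aligned with the high-variance target direction score highest
```

Here "c" and "a" would be kept first: their representations point along the direction the target covariance weights most, which is the alignment the metric measures.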
2006.13202
Oleh Rybkin
Oleh Rybkin, Kostas Daniilidis, Sergey Levine
Simple and Effective VAE Training with Calibrated Decoders
International Conference on Machine Learning (ICML), 2021. Project website is at https://orybkin.github.io/sigma-vae/
null
null
null
cs.LG cs.CV eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions. However, training VAEs often requires considerable hyperparameter tuning to determine the optimal amount of information retained by the latent variable. We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution and can determine this amount of information automatically, on VAE performance. While many methods for learning calibrated decoders have been proposed, many recent papers that employ VAEs rely on heuristic hyperparameters and ad-hoc modifications instead. We perform the first comprehensive comparative analysis of calibrated decoders and provide recommendations for simple and effective VAE training. Our analysis covers a range of image and video datasets and several single-image and sequential VAE models. We further propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically. We observe empirically that using heuristic modifications is not necessary with our method. Project website is at https://orybkin.github.io/sigma-vae/
[ { "created": "Tue, 23 Jun 2020 17:57:47 GMT", "version": "v1" }, { "created": "Sun, 16 Aug 2020 01:09:15 GMT", "version": "v2" }, { "created": "Mon, 12 Jul 2021 04:06:41 GMT", "version": "v3" } ]
2021-07-13
[ [ "Rybkin", "Oleh", "" ], [ "Daniilidis", "Kostas", "" ], [ "Levine", "Sergey", "" ] ]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions. However, training VAEs often requires considerable hyperparameter tuning to determine the optimal amount of information retained by the latent variable. We study the impact on VAE performance of calibrated decoders, which learn the uncertainty of the decoding distribution and can determine this amount of information automatically. While many methods for learning calibrated decoders have been proposed, many recent papers that employ VAEs rely on heuristic hyperparameters and ad-hoc modifications instead. We perform the first comprehensive comparative analysis of calibrated decoders and provide recommendations for simple and effective VAE training. Our analysis covers a range of image and video datasets and several single-image and sequential VAE models. We further propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically. We observe empirically that heuristic modifications are not necessary with our method. Project website is at https://orybkin.github.io/sigma-vae/
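The "analytic prediction variance" idea can be sketched as follows: for a Gaussian decoder with one shared scalar variance, the maximum-likelihood value of that variance is simply the reconstruction MSE, which can be plugged back into the Gaussian negative log-likelihood instead of being tuned as a hyperparameter. A NumPy sketch of this idea (a simplification of the paper's decoder, not the full model):

```python
import numpy as np

def gaussian_nll_analytic_sigma(x, x_hat):
    # Maximum-likelihood shared variance of a Gaussian decoder is the MSE
    # between the reconstruction and the target.
    sigma2 = np.mean((x - x_hat) ** 2)
    d = x.size
    # Gaussian negative log-likelihood with that variance plugged back in.
    nll = 0.5 * d * np.log(2 * np.pi * sigma2) \
        + 0.5 * np.sum((x - x_hat) ** 2) / sigma2
    return nll, sigma2
```

Because the variance equals the MSE, the second term always evaluates to $d/2$, so the NLL depends on the reconstruction error only through the log term.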
2007.13639
Pengcheng Xia
Pengcheng Xia, Haoyu Wang, Xiapu Luo, Lei Wu, Yajin Zhou, Guangdong Bai, Guoai Xu, Gang Huang, Xuanzhe Liu
Don't Fish in Troubled Waters! Characterizing Coronavirus-themed Cryptocurrency Scams
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As COVID-19 has been spreading across the world since early 2020, a growing number of malicious campaigns are capitalizing on the topic of COVID-19. COVID-19-themed cryptocurrency scams have become increasingly popular during the pandemic. However, these newly emerging scams are poorly understood by our community. In this paper, we present the first measurement study of COVID-19-themed cryptocurrency scams. We first create a comprehensive taxonomy of COVID-19 scams by manually analyzing the existing scams reported by users from online resources. Then, we propose a hybrid approach to perform the investigation by: 1) collecting reported scams in the wild; and 2) detecting undisclosed ones based on information collected from suspicious entities (e.g., domains, tweets, etc.). We have collected 195 confirmed COVID-19 cryptocurrency scams in total, including 91 token scams, 19 giveaway scams, 9 blackmail scams, 14 crypto malware scams, 9 Ponzi scheme scams, and 53 donation scams. We then identified over 200 blockchain addresses associated with these scams, which led to at least 330K US dollars in losses from 6,329 victims. For each type of scam, we further investigated the tricks and social engineering techniques they used. To facilitate future research, we have released all the well-labelled scams to the research community.
[ { "created": "Mon, 27 Jul 2020 15:40:05 GMT", "version": "v1" }, { "created": "Sun, 1 Nov 2020 12:43:09 GMT", "version": "v2" } ]
2020-11-03
[ [ "Xia", "Pengcheng", "" ], [ "Wang", "Haoyu", "" ], [ "Luo", "Xiapu", "" ], [ "Wu", "Lei", "" ], [ "Zhou", "Yajin", "" ], [ "Bai", "Guangdong", "" ], [ "Xu", "Guoai", "" ], [ "Huang", "Gang", "" ], [ "Liu", "Xuanzhe", "" ] ]
As COVID-19 has been spreading across the world since early 2020, a growing number of malicious campaigns are capitalizing on the topic of COVID-19. COVID-19-themed cryptocurrency scams have become increasingly popular during the pandemic. However, these newly emerging scams are poorly understood by our community. In this paper, we present the first measurement study of COVID-19-themed cryptocurrency scams. We first create a comprehensive taxonomy of COVID-19 scams by manually analyzing the existing scams reported by users from online resources. Then, we propose a hybrid approach to perform the investigation by: 1) collecting reported scams in the wild; and 2) detecting undisclosed ones based on information collected from suspicious entities (e.g., domains, tweets, etc.). We have collected 195 confirmed COVID-19 cryptocurrency scams in total, including 91 token scams, 19 giveaway scams, 9 blackmail scams, 14 crypto malware scams, 9 Ponzi scheme scams, and 53 donation scams. We then identified over 200 blockchain addresses associated with these scams, which led to at least 330K US dollars in losses from 6,329 victims. For each type of scam, we further investigated the tricks and social engineering techniques they used. To facilitate future research, we have released all the well-labelled scams to the research community.
2207.08256
Fajrian Yunus
Fajrian Yunus, Chlo\'e Clavel, Catherine Pelachaud
Representation Learning of Image Schema
null
null
null
null
cs.HC cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image schema is a recurrent pattern of reasoning where one entity is mapped into another. Image schema is similar to conceptual metaphor and is also related to metaphoric gesture. Our main goal is to generate metaphoric gestures for an Embodied Conversational Agent. We propose a technique to learn the vector representation of image schemas. As far as we are aware, this is the first work to address this problem. Our technique uses Ravenet et al.'s algorithm to compute the image schemas from the text input, and BERT and SenseBERT as the base word embedding techniques to calculate the final vector representation of the image schema. Our representation learning technique works by clustering: word embedding vectors which belong to the same image schema should be relatively close to each other, and thus form a cluster. With the image schemas representable as vectors, it also becomes possible to have a notion that some image schemas are closer or more similar to each other than to others, because the distance between the vectors is a proxy for the dissimilarity between the corresponding image schemas. Therefore, after obtaining the vector representation of the image schemas, we calculate the distances between those vectors. Based on these, we create visualizations to illustrate the relative distances between the different image schemas.
[ { "created": "Sun, 17 Jul 2022 18:42:37 GMT", "version": "v1" } ]
2022-07-19
[ [ "Yunus", "Fajrian", "" ], [ "Clavel", "Chloé", "" ], [ "Pelachaud", "Catherine", "" ] ]
Image schema is a recurrent pattern of reasoning where one entity is mapped into another. Image schema is similar to conceptual metaphor and is also related to metaphoric gesture. Our main goal is to generate metaphoric gestures for an Embodied Conversational Agent. We propose a technique to learn the vector representation of image schemas. As far as we are aware, this is the first work to address this problem. Our technique uses Ravenet et al.'s algorithm to compute the image schemas from the text input, and BERT and SenseBERT as the base word embedding techniques to calculate the final vector representation of the image schema. Our representation learning technique works by clustering: word embedding vectors which belong to the same image schema should be relatively close to each other, and thus form a cluster. With the image schemas representable as vectors, it also becomes possible to have a notion that some image schemas are closer or more similar to each other than to others, because the distance between the vectors is a proxy for the dissimilarity between the corresponding image schemas. Therefore, after obtaining the vector representation of the image schemas, we calculate the distances between those vectors. Based on these, we create visualizations to illustrate the relative distances between the different image schemas.
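The distance computation described above (cluster embedding vectors per image schema, then compare schemas via the distance between their vector representations) can be sketched as follows; the centroid representation and Euclidean metric are illustrative assumptions, not necessarily the authors' exact choices:

```python
import numpy as np

def schema_centroids(embeddings, labels):
    # embeddings: (n, d) word-embedding vectors; labels[i] names the image
    # schema each word belongs to. One centroid (mean vector) per schema.
    schemas = sorted(set(labels))
    return schemas, np.stack([
        np.mean([e for e, l in zip(embeddings, labels) if l == s], axis=0)
        for s in schemas
    ])

def pairwise_distances(C):
    # Euclidean distance matrix between schema centroids; larger distance
    # is taken as a proxy for greater dissimilarity between schemas.
    diff = C[:, None, :] - C[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))
```

The resulting matrix is exactly what a relative-distance visualization (e.g. a heatmap or MDS plot) would be built from.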
1404.0408
Emil Bj\"ornson
Emil Bj\"ornson, Mats Bengtsson, and Bj\"orn Ottersten
Optimal Multiuser Transmit Beamforming: A Difficult Problem with a Simple Solution Structure
Accepted for publication as lecture note in IEEE Signal Processing Magazine, 11 pages, 3 figures. The results can be reproduced using the following Matlab code: https://github.com/emilbjornson/optimal-beamforming
IEEE Signal Processing Magazine, vol. 31, no. 4, pp. 142-148, July 2014
10.1109/MSP.2014.2312183
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transmit beamforming is a versatile technique for signal transmission from an array of $N$ antennas to one or multiple users [1]. In wireless communications, the goal is to increase the signal power at the intended user and reduce interference to non-intended users. A high signal power is achieved by transmitting the same data signal from all antennas, but with different amplitudes and phases, such that the signal components add coherently at the user. Low interference is accomplished by making the signal components add destructively at non-intended users. This corresponds mathematically to designing beamforming vectors (that describe the amplitudes and phases) to have large inner products with the vectors describing the intended channels and small inner products with non-intended user channels. While it is fairly easy to design a beamforming vector that maximizes the signal power at the intended user, it is difficult to strike a perfect balance between maximizing the signal power and minimizing the interference leakage. In fact, the optimization of multiuser transmit beamforming is generally a nondeterministic polynomial-time (NP) hard problem [2]. Nevertheless, this lecture shows that the optimal transmit beamforming has a simple structure with very intuitive properties and interpretations. This structure provides a theoretical foundation for practical low-complexity beamforming schemes. (See this lecture note for the complete abstract/introduction)
[ { "created": "Tue, 1 Apr 2014 22:01:02 GMT", "version": "v1" }, { "created": "Wed, 23 Apr 2014 09:54:41 GMT", "version": "v2" } ]
2014-07-22
[ [ "Björnson", "Emil", "" ], [ "Bengtsson", "Mats", "" ], [ "Ottersten", "Björn", "" ] ]
Transmit beamforming is a versatile technique for signal transmission from an array of $N$ antennas to one or multiple users [1]. In wireless communications, the goal is to increase the signal power at the intended user and reduce interference to non-intended users. A high signal power is achieved by transmitting the same data signal from all antennas, but with different amplitudes and phases, such that the signal components add coherently at the user. Low interference is accomplished by making the signal components add destructively at non-intended users. This corresponds mathematically to designing beamforming vectors (that describe the amplitudes and phases) to have large inner products with the vectors describing the intended channels and small inner products with non-intended user channels. While it is fairly easy to design a beamforming vector that maximizes the signal power at the intended user, it is difficult to strike a perfect balance between maximizing the signal power and minimizing the interference leakage. In fact, the optimization of multiuser transmit beamforming is generally a nondeterministic polynomial-time (NP) hard problem [2]. Nevertheless, this lecture shows that the optimal transmit beamforming has a simple structure with very intuitive properties and interpretations. This structure provides a theoretical foundation for practical low-complexity beamforming schemes. (See this lecture note for the complete abstract/introduction)
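The "simple structure" referred to above is that every optimal beamforming vector lies in the family $\mathbf{w}_k \propto (\mathbf{I}_N + \sum_i \lambda_i \mathbf{h}_i \mathbf{h}_i^H)^{-1} \mathbf{h}_k$ for some nonnegative parameters $\lambda_i$. A NumPy sketch of this family (the channel setup and the choice of $\lambda$ are illustrative):

```python
import numpy as np

def optimal_structure_beamforming(H, lam):
    # H: (K, N) channel matrix, row k = h_k^H (channel of user k).
    # lam: (K,) nonnegative parameters; the optimal beamformers always lie
    # in this family for some choice of lam.
    K, N = H.shape
    A = np.eye(N, dtype=complex)
    for i in range(K):
        h = H[i].conj()[:, None]            # h_i as a column vector
        A += lam[i] * (h @ h.conj().T)      # I + sum_i lam_i h_i h_i^H
    W = np.linalg.solve(A, H.conj().T)      # column k: A^{-1} h_k
    return W / np.linalg.norm(W, axis=0)    # unit-norm beamforming vectors
```

Setting all $\lambda_i = 0$ recovers maximum ratio transmission, while large $\lambda_i$ push the solution toward zero-forcing; the hard part the lecture note discusses is choosing the $\lambda_i$ optimally.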
1411.6741
Chaitanya Ahuja
Chaitanya Ahuja, Karan Nathwani and Rajesh M. Hegde
A Complex Matrix Factorization approach to Joint Modeling of Magnitude and Phase for Source Separation
5 pages, 3 figures
null
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conventional NMF methods for source separation factorize the matrix of spectral magnitudes. The spectral phase is not included in the decomposition process of these methods. However, the phase of the speech mixture is generally used in reconstructing the target speech signal. This results in undesired traces of interfering sources in the target signal. In this paper the spectral phase is incorporated in the decomposition process itself. Additionally, the complex matrix factorization problem is reduced to an NMF problem using simple transformations. This results in effective separation of speech mixtures, since both magnitude and phase are utilized jointly in the separation process. Improvements in source separation are demonstrated using objective quality evaluations on the GRID corpus.
[ { "created": "Tue, 25 Nov 2014 06:18:45 GMT", "version": "v1" } ]
2014-11-26
[ [ "Ahuja", "Chaitanya", "" ], [ "Nathwani", "Karan", "" ], [ "Hegde", "Rajesh M.", "" ] ]
Conventional NMF methods for source separation factorize the matrix of spectral magnitudes. The spectral phase is not included in the decomposition process of these methods. However, the phase of the speech mixture is generally used in reconstructing the target speech signal. This results in undesired traces of interfering sources in the target signal. In this paper the spectral phase is incorporated in the decomposition process itself. Additionally, the complex matrix factorization problem is reduced to an NMF problem using simple transformations. This results in effective separation of speech mixtures, since both magnitude and phase are utilized jointly in the separation process. Improvements in source separation are demonstrated using objective quality evaluations on the GRID corpus.
1902.06007
Andrew Silva
Andrew Silva, Matthew Gombolay
Neural-encoding Human Experts' Domain Knowledge to Warm Start Reinforcement Learning
null
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep reinforcement learning has been successful in a variety of tasks, such as game playing and robotic manipulation. However, attempting to learn \textit{tabula rasa} disregards the logical structure of many domains as well as the wealth of readily available knowledge from domain experts that could help "warm start" the learning process. We present a novel reinforcement learning technique that allows for intelligent initialization of a neural network's weights and architecture. Our approach permits encoding domain knowledge directly into a neural decision tree, and improves upon that knowledge with policy gradient updates. We empirically validate our approach on two OpenAI Gym tasks and two modified StarCraft 2 tasks, showing that our novel architecture outperforms multilayer-perceptron and recurrent architectures. Our knowledge-based framework finds superior policies compared to imitation learning-based and prior knowledge-based approaches. Importantly, we demonstrate that our approach can be used by untrained humans to initially provide a >80% increase in expected reward relative to baselines prior to training (p < 0.001), which results in a >60% increase in expected reward after policy optimization (p = 0.011).
[ { "created": "Fri, 15 Feb 2019 23:28:59 GMT", "version": "v1" }, { "created": "Tue, 2 Jul 2019 14:23:30 GMT", "version": "v2" }, { "created": "Mon, 2 Dec 2019 17:47:06 GMT", "version": "v3" }, { "created": "Wed, 23 Sep 2020 22:17:29 GMT", "version": "v4" } ]
2020-09-25
[ [ "Silva", "Andrew", "" ], [ "Gombolay", "Matthew", "" ] ]
Deep reinforcement learning has been successful in a variety of tasks, such as game playing and robotic manipulation. However, attempting to learn \textit{tabula rasa} disregards the logical structure of many domains as well as the wealth of readily available knowledge from domain experts that could help "warm start" the learning process. We present a novel reinforcement learning technique that allows for intelligent initialization of a neural network's weights and architecture. Our approach permits encoding domain knowledge directly into a neural decision tree, and improves upon that knowledge with policy gradient updates. We empirically validate our approach on two OpenAI Gym tasks and two modified StarCraft 2 tasks, showing that our novel architecture outperforms multilayer-perceptron and recurrent architectures. Our knowledge-based framework finds superior policies compared to imitation learning-based and prior knowledge-based approaches. Importantly, we demonstrate that our approach can be used by untrained humans to initially provide a >80% increase in expected reward relative to baselines prior to training (p < 0.001), which results in a >60% increase in expected reward after policy optimization (p = 0.011).
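The general idea of encoding an expert rule into initial network weights can be illustrated with a toy differentiable decision node: a rule such as "if observation feature $f$ exceeds a threshold, prefer action 1" becomes the initial weights of a steep sigmoid gate that blends two leaf action distributions, which policy gradients can then refine. This is a hypothetical sketch of the warm-start idea, not the authors' architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_from_rule(feature, threshold, alpha=100.0, obs_dim=4):
    # Encode "if obs[feature] > threshold, prefer action 1, else action 0"
    # as initial weights: a steep sigmoid gate on one feature blending two
    # leaf action distributions. alpha controls the gate's initial sharpness.
    w = np.zeros(obs_dim)
    w[feature] = alpha                       # gate weight vector
    b = -alpha * threshold                   # gate bias
    leaves = np.array([[1.0, 0.0],           # gate off -> action 0
                       [0.0, 1.0]])          # gate on  -> action 1
    def policy(obs):
        g = sigmoid(w @ obs + b)             # ~0 or ~1 at initialization
        return (1 - g) * leaves[0] + g * leaves[1]
    return policy
```

Because every component (gate weights, bias, leaves) is a differentiable parameter, the encoded rule is only a starting point that later updates can overwrite.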
2312.03567
Joel Stremmel
Joel Stremmel, Ardavan Saeedi, Hamid Hassanzadeh, Sanjit Batra, Jeffrey Hertzberg, Jaime Murillo, Eran Halperin
XAIQA: Explainer-Based Data Augmentation for Extractive Question Answering
Extended Abstract presented at Machine Learning for Health (ML4H) symposium 2023, December 10th, 2023, New Orleans, United States, 8 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Extractive question answering (QA) systems can enable physicians and researchers to query medical records, a foundational capability for designing clinical studies and understanding patient medical history. However, building these systems typically requires expert-annotated QA pairs. Large language models (LLMs), which can perform extractive QA, depend on high quality data in their prompts, specialized for the application domain. We introduce a novel approach, XAIQA, for generating synthetic QA pairs at scale from data naturally available in electronic health records. Our method uses the idea of a classification model explainer to generate questions and answers about medical concepts corresponding to medical codes. In an expert evaluation with two physicians, our method identifies $2.2\times$ more semantic matches and $3.8\times$ more clinical abbreviations than two popular approaches that use sentence transformers to create QA pairs. In an ML evaluation, adding our QA pairs improves performance of GPT-4 as an extractive QA model, including on difficult questions. In both the expert and ML evaluations, we examine trade-offs between our method and sentence transformers for QA pair generation depending on question difficulty.
[ { "created": "Wed, 6 Dec 2023 15:59:06 GMT", "version": "v1" } ]
2023-12-07
[ [ "Stremmel", "Joel", "" ], [ "Saeedi", "Ardavan", "" ], [ "Hassanzadeh", "Hamid", "" ], [ "Batra", "Sanjit", "" ], [ "Hertzberg", "Jeffrey", "" ], [ "Murillo", "Jaime", "" ], [ "Halperin", "Eran", "" ] ]
Extractive question answering (QA) systems can enable physicians and researchers to query medical records, a foundational capability for designing clinical studies and understanding patient medical history. However, building these systems typically requires expert-annotated QA pairs. Large language models (LLMs), which can perform extractive QA, depend on high quality data in their prompts, specialized for the application domain. We introduce a novel approach, XAIQA, for generating synthetic QA pairs at scale from data naturally available in electronic health records. Our method uses the idea of a classification model explainer to generate questions and answers about medical concepts corresponding to medical codes. In an expert evaluation with two physicians, our method identifies $2.2\times$ more semantic matches and $3.8\times$ more clinical abbreviations than two popular approaches that use sentence transformers to create QA pairs. In an ML evaluation, adding our QA pairs improves performance of GPT-4 as an extractive QA model, including on difficult questions. In both the expert and ML evaluations, we examine trade-offs between our method and sentence transformers for QA pair generation depending on question difficulty.
2205.13280
Chengyu Qiao
Chengyu Qiao, Zhiyu Xiang and Xinglu Wang
Objects Matter: Learning Object Relation Graph for Robust Camera Relocalization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual relocalization aims to estimate the pose of a camera from one or more images. In recent years, deep learning based pose regression methods have attracted much attention. They feature predicting absolute poses without relying on any pre-built maps or stored images, making relocalization very efficient. However, robust relocalization under environments with complex appearance changes and real dynamics remains very challenging. In this paper, we propose to enhance the distinctiveness of the image features by extracting the deep relationships among objects. In particular, we extract objects in the image and construct a deep object relation graph (ORG) to incorporate the semantic connections and relative spatial cues of the objects. We integrate our ORG module into several popular pose regression models. Extensive experiments on various public indoor and outdoor datasets demonstrate that our method improves performance significantly and outperforms previous approaches.
[ { "created": "Thu, 26 May 2022 11:37:11 GMT", "version": "v1" } ]
2022-05-27
[ [ "Qiao", "Chengyu", "" ], [ "Xiang", "Zhiyu", "" ], [ "Wang", "Xinglu", "" ] ]
Visual relocalization aims to estimate the pose of a camera from one or more images. In recent years, deep learning based pose regression methods have attracted much attention. They feature predicting absolute poses without relying on any pre-built maps or stored images, making relocalization very efficient. However, robust relocalization under environments with complex appearance changes and real dynamics remains very challenging. In this paper, we propose to enhance the distinctiveness of the image features by extracting the deep relationships among objects. In particular, we extract objects in the image and construct a deep object relation graph (ORG) to incorporate the semantic connections and relative spatial cues of the objects. We integrate our ORG module into several popular pose regression models. Extensive experiments on various public indoor and outdoor datasets demonstrate that our method improves performance significantly and outperforms previous approaches.
2004.11568
Ryan Mann
Ryan L. Mann, Tyler Helmuth
Efficient Algorithms for Approximating Quantum Partition Functions
7 pages, 0 figures, published version
Journal of Mathematical Physics 62, 022201 (2021)
10.1063/5.0013689
null
cs.DS cs.CC math.CO quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We establish a polynomial-time approximation algorithm for partition functions of quantum spin models at high temperature. Our algorithm is based on the quantum cluster expansion of Neto\v{c}n\'y and Redig and the cluster expansion approach to designing algorithms due to Helmuth, Perkins, and Regts. Similar results have previously been obtained by related methods, and our main contribution is a simple and slightly sharper analysis for the case of pairwise interactions on bounded-degree graphs.
[ { "created": "Fri, 24 Apr 2020 07:21:43 GMT", "version": "v1" }, { "created": "Mon, 1 Feb 2021 13:59:44 GMT", "version": "v2" } ]
2021-02-02
[ [ "Mann", "Ryan L.", "" ], [ "Helmuth", "Tyler", "" ] ]
We establish a polynomial-time approximation algorithm for partition functions of quantum spin models at high temperature. Our algorithm is based on the quantum cluster expansion of Neto\v{c}n\'y and Redig and the cluster expansion approach to designing algorithms due to Helmuth, Perkins, and Regts. Similar results have previously been obtained by related methods, and our main contribution is a simple and slightly sharper analysis for the case of pairwise interactions on bounded-degree graphs.
1601.07932
Keehwan Park
Keehwan Park and Jean Honorio
Information-Theoretic Lower Bounds for Recovery of Diffusion Network Structures
ISIT'16
International Symposium on Information Theory (ISIT) 2016
null
null
cs.LG cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the information-theoretic lower bound of the sample complexity of the correct recovery of diffusion network structures. We introduce a discrete-time diffusion model based on the Independent Cascade model for which we obtain a lower bound of order $\Omega(k \log p)$, for directed graphs of $p$ nodes, and at most $k$ parents per node. Next, we introduce a continuous-time diffusion model, for which a similar lower bound of order $\Omega(k \log p)$ is obtained. Our results show that the algorithm of Pouget-Abadie et al. is statistically optimal for the discrete-time regime. Our work also opens the question of whether it is possible to devise an optimal algorithm for the continuous-time regime.
[ { "created": "Thu, 28 Jan 2016 22:12:06 GMT", "version": "v1" }, { "created": "Mon, 23 May 2016 23:29:19 GMT", "version": "v2" } ]
2019-05-28
[ [ "Park", "Keehwan", "" ], [ "Honorio", "Jean", "" ] ]
We study the information-theoretic lower bound of the sample complexity of the correct recovery of diffusion network structures. We introduce a discrete-time diffusion model based on the Independent Cascade model for which we obtain a lower bound of order $\Omega(k \log p)$, for directed graphs of $p$ nodes, and at most $k$ parents per node. Next, we introduce a continuous-time diffusion model, for which a similar lower bound of order $\Omega(k \log p)$ is obtained. Our results show that the algorithm of Pouget-Abadie et al. is statistically optimal for the discrete-time regime. Our work also opens the question of whether it is possible to devise an optimal algorithm for the continuous-time regime.
1902.07762
Ondrej Skopek
Lukas Jendele, Ondrej Skopek, Anton S. Becker, Ender Konukoglu
Adversarial Augmentation for Enhancing Classification of Mammography Images
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Supervised deep learning relies on the assumption that enough training data is available, which presents a problem for its application to several fields, like medical imaging. On the example of a binary image classification task (breast cancer recognition), we show that pretraining a generative model for meaningful image augmentation helps enhance the performance of the resulting classifier. By augmenting the data, performance on downstream classification tasks could be improved even with a relatively small training set. We show that this "adversarial augmentation" yields promising results compared to classical image augmentation on the example of breast cancer classification.
[ { "created": "Wed, 20 Feb 2019 20:13:24 GMT", "version": "v1" } ]
2019-02-22
[ [ "Jendele", "Lukas", "" ], [ "Skopek", "Ondrej", "" ], [ "Becker", "Anton S.", "" ], [ "Konukoglu", "Ender", "" ] ]
Supervised deep learning relies on the assumption that enough training data is available, which presents a problem for its application to several fields, like medical imaging. On the example of a binary image classification task (breast cancer recognition), we show that pretraining a generative model for meaningful image augmentation helps enhance the performance of the resulting classifier. By augmenting the data, performance on downstream classification tasks could be improved even with a relatively small training set. We show that this "adversarial augmentation" yields promising results compared to classical image augmentation on the example of breast cancer classification.
2301.05466
Liwang Zhu
Liwang Zhu and Zhongzhi Zhang
A Nearly-Linear Time Algorithm for Minimizing Risk of Conflict in Social Networks
null
Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2022, pp.2648-2656
10.1145/3534678.3539469
null
cs.SI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the tremendous prevalence of online social media platforms, interactions among individuals have been enhanced to an unprecedented degree. People are free to interact with acquaintances and to express and exchange their own opinions through commenting, liking, and retweeting on online social media, leading to resistance, controversy and other important phenomena over controversial social issues, which have been the subject of many recent works. In this paper, we study the problem of minimizing the risk of conflict in social networks by modifying the initial opinions of a small number of nodes. We show that the objective function of the combinatorial optimization problem is monotone and supermodular. We then propose a na\"{\i}ve greedy algorithm with a $(1-1/e)$ approximation ratio that solves the problem in cubic time. To overcome the computational challenge for large networks, we further integrate several effective approximation strategies to provide a nearly-linear time algorithm with a $(1-1/e-\epsilon)$ approximation ratio for any error parameter $\epsilon>0$. Extensive experiments on various real-world datasets demonstrate both the efficiency and effectiveness of our algorithms. In particular, the faster algorithm scales to large networks with more than two million nodes and achieves up to $20\times$ speed-up over the state-of-the-art algorithm.
[ { "created": "Fri, 13 Jan 2023 10:32:12 GMT", "version": "v1" } ]
2023-01-16
[ [ "Zhu", "Liwang", "" ], [ "Zhang", "Zhongzhi", "" ] ]
With the tremendous prevalence of online social media platforms, interactions among individuals have been enhanced to an unprecedented degree. People are free to interact with acquaintances and to express and exchange their own opinions through commenting, liking, and retweeting on online social media, leading to resistance, controversy and other important phenomena over controversial social issues, which have been the subject of many recent works. In this paper, we study the problem of minimizing the risk of conflict in social networks by modifying the initial opinions of a small number of nodes. We show that the objective function of the combinatorial optimization problem is monotone and supermodular. We then propose a na\"{\i}ve greedy algorithm with a $(1-1/e)$ approximation ratio that solves the problem in cubic time. To overcome the computational challenge for large networks, we further integrate several effective approximation strategies to provide a nearly-linear time algorithm with a $(1-1/e-\epsilon)$ approximation ratio for any error parameter $\epsilon>0$. Extensive experiments on various real-world datasets demonstrate both the efficiency and effectiveness of our algorithms. In particular, the faster algorithm scales to large networks with more than two million nodes and achieves up to $20\times$ speed-up over the state-of-the-art algorithm.
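The cubic-time greedy baseline mentioned above follows the standard pattern for selection under a monotone supermodular risk: repeatedly commit to the element with the best marginal improvement. A generic sketch with a toy risk function (the paper's actual opinion-dynamics objective and its nearly-linear accelerations are not reproduced here):

```python
def greedy_minimize(risk, nodes, k):
    # Standard greedy: at each of k steps, add the node whose inclusion
    # lowers risk(S) the most. With a monotone supermodular risk, this is
    # the pattern behind the (1 - 1/e)-style guarantee discussed above.
    S = []
    for _ in range(k):
        candidates = [v for v in nodes if v not in S]
        best = min(candidates, key=lambda v: risk(S + [v]))
        S.append(best)
    return S

# Toy risk (illustrative only): total "conflict weight" of the nodes whose
# initial opinion was NOT modified.
weights = {0: 3.0, 1: 1.0, 2: 2.0}

def toy_risk(S):
    return sum(weights.values()) - sum(weights[v] for v in S)
```

Each greedy step re-evaluates the risk for every remaining candidate, which is where the cubic cost and, in turn, the motivation for the paper's approximation strategies come from.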
2403.07593
Juan Jos\'e Cabrera Mora
J.J. Cabrera, A. Santo, A. Gil, C. Viegas and L. Pay\'a
MinkUNeXt: Point Cloud-based Large-scale Place Recognition using 3D Sparse Convolutions
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents MinkUNeXt, an effective and efficient architecture for place recognition from point clouds, entirely based on the new 3D MinkNeXt Block, a residual block composed of 3D sparse convolutions that follows the philosophy established by recent Transformers but purely using simple 3D convolutions. Feature extraction is performed at different scales by a U-Net encoder-decoder network, and the aggregation of those features into a single descriptor is carried out by Generalized Mean Pooling (GeM). The proposed architecture demonstrates that it is possible to surpass the current state-of-the-art by relying only on conventional 3D sparse convolutions, without making use of more complex and sophisticated proposals such as Transformers, Attention layers or Deformable Convolutions. A thorough assessment of the proposal has been carried out using the Oxford RobotCar and the In-house datasets. As a result, MinkUNeXt proves to outperform other state-of-the-art methods.
[ { "created": "Tue, 12 Mar 2024 12:25:54 GMT", "version": "v1" }, { "created": "Wed, 13 Mar 2024 09:39:14 GMT", "version": "v2" } ]
2024-03-14
[ [ "Cabrera", "J. J.", "" ], [ "Santo", "A.", "" ], [ "Gil", "A.", "" ], [ "Viegas", "C.", "" ], [ "Payá", "L.", "" ] ]
This paper presents MinkUNeXt, an effective and efficient architecture for place recognition from point clouds, entirely based on the new 3D MinkNeXt Block, a residual block composed of 3D sparse convolutions that follows the philosophy established by recent Transformers while purely using simple 3D convolutions. Feature extraction is performed at different scales by a U-Net encoder-decoder network, and the aggregation of those features into a single descriptor is carried out by Generalized Mean Pooling (GeM). The proposed architecture demonstrates that it is possible to surpass the current state-of-the-art by relying only on conventional 3D sparse convolutions, without making use of more complex and sophisticated proposals such as Transformers, attention layers, or deformable convolutions. A thorough assessment of the proposal has been carried out using the Oxford RobotCar and the In-house datasets. As a result, MinkUNeXt proves to outperform other state-of-the-art methods.
1603.08978
William Waites
William Waites, James Sweet, Roger Baig, Peter Buneman, Marwan Fayed, Gordon Hughes, Michael Fourman, Richard Simmons
RemIX: A Distributed Internet Exchange for Remote and Rural Networks
null
null
10.1145/2940157.2940162
null
cs.NI
http://creativecommons.org/licenses/by-sa/4.0/
The concept of the IXP, an Ethernet fabric central to the structure of the global Internet, is largely absent from the development of community-driven collaborative network infrastructure. The reasons for this are two-fold. IXPs exist in central, typically urban, environments where strong network infrastructure ensures high levels of connectivity. Between rural and remote regions, where networks are separated by distance and terrain, no such infrastructure exists. In this paper we present RemIX, a distributed IXP architecture designed for the community network environment. We examine this praxis using an implementation in Scotland, with suggestions for future development and research.
[ { "created": "Tue, 29 Mar 2016 21:51:02 GMT", "version": "v1" } ]
2020-06-24
[ [ "Waites", "William", "" ], [ "Sweet", "James", "" ], [ "Baig", "Roger", "" ], [ "Buneman", "Peter", "" ], [ "Fayed", "Marwan", "" ], [ "Hughes", "Gordon", "" ], [ "Fourman", "Michael", "" ], [ "Simmons", "Richard", "" ] ]
The concept of the IXP, an Ethernet fabric central to the structure of the global Internet, is largely absent from the development of community-driven collaborative network infrastructure. The reasons for this are two-fold. IXPs exist in central, typically urban, environments where strong network infrastructure ensures high levels of connectivity. Between rural and remote regions, where networks are separated by distance and terrain, no such infrastructure exists. In this paper we present RemIX, a distributed IXP architecture designed for the community network environment. We examine this praxis using an implementation in Scotland, with suggestions for future development and research.
1802.07021
Yuehong Huang
Yuehong Huang, Yu-Chee Tseng
Fusing Video and Inertial Sensor Data for Walking Person Identification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An autonomous computer system (such as a robot) typically needs to identify, locate, and track persons appearing in its sight. However, most solutions have their limitations regarding efficiency, practicability, or environmental constraints. In this paper, we propose an effective and practical system which combines video and inertial sensors for person identification (PID). Persons who perform different activities are easy to identify. To show the robustness and potential of our system, we propose a walking person identification (WPID) method to identify persons who are walking at the same time. By comparing features derived from both video and inertial sensor data, we can associate sensors in smartphones with human objects in videos. Results show that the correct identification rate of our WPID method can reach up to 76% within 2 seconds.
[ { "created": "Tue, 20 Feb 2018 09:16:21 GMT", "version": "v1" } ]
2018-02-21
[ [ "Huang", "Yuehong", "" ], [ "Tseng", "Yu-Chee", "" ] ]
An autonomous computer system (such as a robot) typically needs to identify, locate, and track persons appearing in its sight. However, most solutions have their limitations regarding efficiency, practicability, or environmental constraints. In this paper, we propose an effective and practical system which combines video and inertial sensors for person identification (PID). Persons who perform different activities are easy to identify. To show the robustness and potential of our system, we propose a walking person identification (WPID) method to identify persons who are walking at the same time. By comparing features derived from both video and inertial sensor data, we can associate sensors in smartphones with human objects in videos. Results show that the correct identification rate of our WPID method can reach up to 76% within 2 seconds.
2401.11694
Patrick Cook
Patrick Cook, Danny Jammooa, Morten Hjorth-Jensen, Daniel D. Lee, Dean Lee
Parametric Matrix Models
Exact same content as previous version (v4); corrected author email
null
null
null
cs.LG cond-mat.dis-nn nucl-th physics.comp-ph quant-ph
http://creativecommons.org/licenses/by-sa/4.0/
We present a general class of machine learning algorithms called parametric matrix models. In contrast with most existing machine learning models that imitate the biology of neurons, parametric matrix models use matrix equations that emulate the physics of quantum systems. Similar to how physics problems are usually solved, parametric matrix models learn the governing equations that lead to the desired outputs. Parametric matrix models can be efficiently trained from empirical data, and the equations may use algebraic, differential, or integral relations. While originally designed for scientific computing, we prove that parametric matrix models are universal function approximators that can be applied to general machine learning problems. After introducing the underlying theory, we apply parametric matrix models to a series of different challenges that show their performance for a wide range of problems. For all the challenges tested here, parametric matrix models produce accurate results within an efficient and interpretable computational framework that allows for input feature extrapolation.
[ { "created": "Mon, 22 Jan 2024 05:26:18 GMT", "version": "v1" }, { "created": "Tue, 23 Jan 2024 20:06:38 GMT", "version": "v2" }, { "created": "Mon, 8 Jul 2024 19:55:41 GMT", "version": "v3" }, { "created": "Fri, 12 Jul 2024 20:08:17 GMT", "version": "v4" }, { "created": "Tue, 30 Jul 2024 21:43:28 GMT", "version": "v5" } ]
2024-08-01
[ [ "Cook", "Patrick", "" ], [ "Jammooa", "Danny", "" ], [ "Hjorth-Jensen", "Morten", "" ], [ "Lee", "Daniel D.", "" ], [ "Lee", "Dean", "" ] ]
We present a general class of machine learning algorithms called parametric matrix models. In contrast with most existing machine learning models that imitate the biology of neurons, parametric matrix models use matrix equations that emulate the physics of quantum systems. Similar to how physics problems are usually solved, parametric matrix models learn the governing equations that lead to the desired outputs. Parametric matrix models can be efficiently trained from empirical data, and the equations may use algebraic, differential, or integral relations. While originally designed for scientific computing, we prove that parametric matrix models are universal function approximators that can be applied to general machine learning problems. After introducing the underlying theory, we apply parametric matrix models to a series of different challenges that show their performance for a wide range of problems. For all the challenges tested here, parametric matrix models produce accurate results within an efficient and interpretable computational framework that allows for input feature extrapolation.
2210.07795
Tiannan Wang
Tiannan Wang, Wangchunshu Zhou, Yan Zeng, Xinsong Zhang
EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning
work in progress
null
null
null
cs.CL cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained vision-language models (VLMs) have achieved impressive results in a range of vision-language tasks. However, popular VLMs usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and deployment in real-world applications due to space, memory, and latency constraints. In this work, we introduce a distilling-then-pruning framework to compress large vision-language models into smaller, faster, and more accurate ones. We first shrink the size of a pre-trained large VLM and apply knowledge distillation in the vision-language pre-training stage to obtain a task-agnostic compact VLM. Then we propose a modal-adaptive pruning algorithm to automatically infer the importance of the vision and language modalities for different downstream tasks and adaptively remove redundant structures and neurons in different encoders with controllable target sparsity. We apply our framework to train EfficientVLM, a fast and accurate vision-language model consisting of 6 vision layers, 3 text layers, and 3 cross-modal fusion layers, accounting for only 93 million parameters in total, which is 44.3% of the teacher model. EfficientVLM retains 98.4% of the performance of the teacher model and accelerates its inference speed by 2.2x. EfficientVLM achieves a large absolute improvement over previous SoTA efficient VLMs of similar sizes on various vision-language tasks, including VQAv2 (+4.9%), NLVR2 (+5.6%), ITR (R@1 on TR +17.2%, on IR +15.6%), and COCO caption generation (CIDEr +6.5), demonstrating a large potential for training lightweight VLMs.
[ { "created": "Fri, 14 Oct 2022 13:26:41 GMT", "version": "v1" } ]
2022-10-17
[ [ "Wang", "Tiannan", "" ], [ "Zhou", "Wangchunshu", "" ], [ "Zeng", "Yan", "" ], [ "Zhang", "Xinsong", "" ] ]
Pre-trained vision-language models (VLMs) have achieved impressive results in a range of vision-language tasks. However, popular VLMs usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and deployment in real-world applications due to space, memory, and latency constraints. In this work, we introduce a distilling-then-pruning framework to compress large vision-language models into smaller, faster, and more accurate ones. We first shrink the size of a pre-trained large VLM and apply knowledge distillation in the vision-language pre-training stage to obtain a task-agnostic compact VLM. Then we propose a modal-adaptive pruning algorithm to automatically infer the importance of the vision and language modalities for different downstream tasks and adaptively remove redundant structures and neurons in different encoders with controllable target sparsity. We apply our framework to train EfficientVLM, a fast and accurate vision-language model consisting of 6 vision layers, 3 text layers, and 3 cross-modal fusion layers, accounting for only 93 million parameters in total, which is 44.3% of the teacher model. EfficientVLM retains 98.4% of the performance of the teacher model and accelerates its inference speed by 2.2x. EfficientVLM achieves a large absolute improvement over previous SoTA efficient VLMs of similar sizes on various vision-language tasks, including VQAv2 (+4.9%), NLVR2 (+5.6%), ITR (R@1 on TR +17.2%, on IR +15.6%), and COCO caption generation (CIDEr +6.5), demonstrating a large potential for training lightweight VLMs.
1303.6017
Zhouyun Wu
Zhouyun Wu, Aiping Huang, and Hsiao-Hwa Chen
Scrambling Code Planning in TD-SCDMA Systems
This paper has been withdrawn
null
null
null
cs.IT cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper has been withdrawn by the author due to a crucial sign error in equation 2.
[ { "created": "Mon, 25 Mar 2013 02:31:32 GMT", "version": "v1" }, { "created": "Sat, 30 Mar 2013 10:05:47 GMT", "version": "v2" } ]
2013-04-02
[ [ "Wu", "Zhouyun", "" ], [ "Huang", "Aiping", "" ], [ "Chen", "Hsiao-Hwa", "" ] ]
This paper has been withdrawn by the author due to a crucial sign error in equation 2.
2005.06645
Michael Vaughn
Michael Vaughn and Thomas Reps
A Generating-Extension-Generator for Machine Code
21 pages, 8 figures. Fixed inclusion of LaTeX macro in plaintext abstract
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of "debloating" programs for security and performance purposes has begun to see increased attention. Of particular interest in many environments is debloating commodity off-the-shelf (COTS) software, which is most commonly made available to end users as stripped binaries (i.e., neither source code nor symbol-table/debugging information is available). Toward this end, we created a system, called GenXGen[MC], that specializes stripped binaries. Many aspects of the debloating problem can be addressed via techniques from the literature on partial evaluation. However, applying such techniques to real-world programs, particularly stripped binaries, involves non-trivial state-management manipulations that have never been addressed in a completely satisfactory manner in previous systems. In particular, a partial evaluator needs to be able to (i) save and restore arbitrary program states, and (ii) determine whether a program state is equal to one that arose earlier. Moreover, to specialize stripped binaries, the system must also be able to handle program states consisting of memory that is undifferentiated beyond the standard coarse division into regions for the stack, the heap, and global data. This paper presents a new approach to state management in a program specializer. The technique has been incorporated into GenXGen[MC], a novel tool for producing machine-code generating extensions. Our experiments show that our solution to issue (i) significantly decreases the space required to represent program states, and our solution to issue (ii) drastically improves the time for producing a specialized program (as much as 13,000x speedup).
[ { "created": "Wed, 13 May 2020 22:19:04 GMT", "version": "v1" }, { "created": "Fri, 15 May 2020 00:53:30 GMT", "version": "v2" } ]
2020-05-18
[ [ "Vaughn", "Michael", "" ], [ "Reps", "Thomas", "" ] ]
The problem of "debloating" programs for security and performance purposes has begun to see increased attention. Of particular interest in many environments is debloating commodity off-the-shelf (COTS) software, which is most commonly made available to end users as stripped binaries (i.e., neither source code nor symbol-table/debugging information is available). Toward this end, we created a system, called GenXGen[MC], that specializes stripped binaries. Many aspects of the debloating problem can be addressed via techniques from the literature on partial evaluation. However, applying such techniques to real-world programs, particularly stripped binaries, involves non-trivial state-management manipulations that have never been addressed in a completely satisfactory manner in previous systems. In particular, a partial evaluator needs to be able to (i) save and restore arbitrary program states, and (ii) determine whether a program state is equal to one that arose earlier. Moreover, to specialize stripped binaries, the system must also be able to handle program states consisting of memory that is undifferentiated beyond the standard coarse division into regions for the stack, the heap, and global data. This paper presents a new approach to state management in a program specializer. The technique has been incorporated into GenXGen[MC], a novel tool for producing machine-code generating extensions. Our experiments show that our solution to issue (i) significantly decreases the space required to represent program states, and our solution to issue (ii) drastically improves the time for producing a specialized program (as much as 13,000x speedup).
2303.04249
Brandon Clark
Brandon Clark, Alec Kerrigan, Parth Parag Kulkarni, Vicente Vivanco Cepeda, Mubarak Shah
Where We Are and What We're Looking At: Query Based Worldwide Image Geo-localization Using Hierarchies and Scenes
CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Determining the exact latitude and longitude at which a photo was taken is a useful and widely applicable task, yet it remains exceptionally difficult despite the accelerated progress of other computer vision tasks. Most previous approaches have opted to learn a single representation of query images, which are then classified at different levels of geographic granularity. These approaches fail to exploit the different visual cues that give context to different hierarchies, such as the country, state, and city level. To this end, we introduce an end-to-end transformer-based architecture that exploits the relationship between different geographic levels (which we refer to as hierarchies) and the corresponding visual scene information in an image through hierarchical cross-attention. We achieve this by learning a query for each geographic hierarchy and scene type. Furthermore, we learn a separate representation for different environmental scenes, as different scenes in the same location are often defined by completely different visual features. We achieve state-of-the-art street-level accuracy on 4 standard geo-localization datasets: Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, and qualitatively demonstrate how our method learns different representations for different visual hierarchies and scenes, which has not been demonstrated in previous methods. These previous testing datasets mostly consist of iconic landmarks or images taken from social media, which makes them either a memorization task or biased towards certain places. To address this issue we introduce a much harder testing dataset, Google-World-Streets-15k, comprised of images taken from Google Streetview covering the whole planet, and present state-of-the-art results. Our code will be made available in the camera-ready version.
[ { "created": "Tue, 7 Mar 2023 21:47:58 GMT", "version": "v1" } ]
2023-03-09
[ [ "Clark", "Brandon", "" ], [ "Kerrigan", "Alec", "" ], [ "Kulkarni", "Parth Parag", "" ], [ "Cepeda", "Vicente Vivanco", "" ], [ "Shah", "Mubarak", "" ] ]
Determining the exact latitude and longitude at which a photo was taken is a useful and widely applicable task, yet it remains exceptionally difficult despite the accelerated progress of other computer vision tasks. Most previous approaches have opted to learn a single representation of query images, which are then classified at different levels of geographic granularity. These approaches fail to exploit the different visual cues that give context to different hierarchies, such as the country, state, and city level. To this end, we introduce an end-to-end transformer-based architecture that exploits the relationship between different geographic levels (which we refer to as hierarchies) and the corresponding visual scene information in an image through hierarchical cross-attention. We achieve this by learning a query for each geographic hierarchy and scene type. Furthermore, we learn a separate representation for different environmental scenes, as different scenes in the same location are often defined by completely different visual features. We achieve state-of-the-art street-level accuracy on 4 standard geo-localization datasets: Im2GPS, Im2GPS3k, YFCC4k, and YFCC26k, and qualitatively demonstrate how our method learns different representations for different visual hierarchies and scenes, which has not been demonstrated in previous methods. These previous testing datasets mostly consist of iconic landmarks or images taken from social media, which makes them either a memorization task or biased towards certain places. To address this issue we introduce a much harder testing dataset, Google-World-Streets-15k, comprised of images taken from Google Streetview covering the whole planet, and present state-of-the-art results. Our code will be made available in the camera-ready version.
2310.20703
Noam Razin
Noam Razin, Hattie Zhou, Omid Saremi, Vimal Thilak, Arwen Bradley, Preetum Nakkiran, Joshua Susskind, Etai Littwin
Vanishing Gradients in Reinforcement Finetuning of Language Models
Accepted to ICLR 2024
null
null
null
cs.LG cs.AI cs.CL stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pretrained language models are commonly aligned with human preferences and downstream tasks via reinforcement finetuning (RFT), which refers to maximizing a (possibly learned) reward function using policy gradient algorithms. This work identifies a fundamental optimization obstacle in RFT: we prove that the expected gradient for an input vanishes when its reward standard deviation under the model is small, even if the expected reward is far from optimal. Through experiments on an RFT benchmark and controlled environments, as well as a theoretical analysis, we then demonstrate that vanishing gradients due to small reward standard deviation are prevalent and detrimental, leading to extremely slow reward maximization. Lastly, we explore ways to overcome vanishing gradients in RFT. We find the common practice of an initial supervised finetuning (SFT) phase to be the most promising candidate, which sheds light on its importance in an RFT pipeline. Moreover, we show that a relatively small number of SFT optimization steps on as few as 1% of the input samples can suffice, indicating that the initial SFT phase need not be expensive in terms of compute and data labeling efforts. Overall, our results emphasize that being mindful of inputs whose expected gradient vanishes, as measured by the reward standard deviation, is crucial for successful execution of RFT.
[ { "created": "Tue, 31 Oct 2023 17:59:05 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2024 12:39:06 GMT", "version": "v2" }, { "created": "Thu, 14 Mar 2024 08:05:18 GMT", "version": "v3" } ]
2024-03-15
[ [ "Razin", "Noam", "" ], [ "Zhou", "Hattie", "" ], [ "Saremi", "Omid", "" ], [ "Thilak", "Vimal", "" ], [ "Bradley", "Arwen", "" ], [ "Nakkiran", "Preetum", "" ], [ "Susskind", "Joshua", "" ], [ "Littwin", "Etai", "" ] ]
Pretrained language models are commonly aligned with human preferences and downstream tasks via reinforcement finetuning (RFT), which refers to maximizing a (possibly learned) reward function using policy gradient algorithms. This work identifies a fundamental optimization obstacle in RFT: we prove that the expected gradient for an input vanishes when its reward standard deviation under the model is small, even if the expected reward is far from optimal. Through experiments on an RFT benchmark and controlled environments, as well as a theoretical analysis, we then demonstrate that vanishing gradients due to small reward standard deviation are prevalent and detrimental, leading to extremely slow reward maximization. Lastly, we explore ways to overcome vanishing gradients in RFT. We find the common practice of an initial supervised finetuning (SFT) phase to be the most promising candidate, which sheds light on its importance in an RFT pipeline. Moreover, we show that a relatively small number of SFT optimization steps on as few as 1% of the input samples can suffice, indicating that the initial SFT phase need not be expensive in terms of compute and data labeling efforts. Overall, our results emphasize that being mindful of inputs whose expected gradient vanishes, as measured by the reward standard deviation, is crucial for successful execution of RFT.
2305.16943
Hayeon Lee
Sohyun An, Hayeon Lee, Jaehyeong Jo, Seanie Lee, Sung Ju Hwang
DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models
Accepted to ICLR 2024
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Existing NAS methods suffer from an excessive amount of time spent on repetitive sampling and training of many task-irrelevant architectures. To tackle such limitations of existing NAS methods, we propose a paradigm shift from NAS to a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG. Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them. Moreover, with the guidance of parameterized predictors, DiffusionNAG can flexibly generate task-optimal architectures with the desired properties for diverse tasks, by sampling from a region that is more likely to satisfy the properties. This conditional NAG scheme is significantly more efficient than previous NAS schemes which sample the architectures and filter them using the property predictors. We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS. DiffusionNAG achieves superior performance with speedups of up to 35 times when compared to the baselines on Transferable NAS benchmarks. Furthermore, when integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset. Code is available at https://github.com/CownowAn/DiffusionNAG.
[ { "created": "Fri, 26 May 2023 13:58:18 GMT", "version": "v1" }, { "created": "Sun, 31 Dec 2023 00:30:53 GMT", "version": "v2" }, { "created": "Fri, 19 Jan 2024 21:38:42 GMT", "version": "v3" }, { "created": "Sun, 24 Mar 2024 22:00:04 GMT", "version": "v4" } ]
2024-03-26
[ [ "An", "Sohyun", "" ], [ "Lee", "Hayeon", "" ], [ "Jo", "Jaehyeong", "" ], [ "Lee", "Seanie", "" ], [ "Hwang", "Sung Ju", "" ] ]
Existing NAS methods suffer from an excessive amount of time spent on repetitive sampling and training of many task-irrelevant architectures. To tackle such limitations of existing NAS methods, we propose a paradigm shift from NAS to a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG. Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them. Moreover, with the guidance of parameterized predictors, DiffusionNAG can flexibly generate task-optimal architectures with the desired properties for diverse tasks, by sampling from a region that is more likely to satisfy the properties. This conditional NAG scheme is significantly more efficient than previous NAS schemes which sample the architectures and filter them using the property predictors. We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS. DiffusionNAG achieves superior performance with speedups of up to 35 times when compared to the baselines on Transferable NAS benchmarks. Furthermore, when integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset. Code is available at https://github.com/CownowAn/DiffusionNAG.
1702.05939
Andr\'e Gr\"uning
Joseph Chrol-Cannon and Yaochu Jin and Andr\'e Gr\"uning
An Efficient Method for Online Detection of Polychronous Patterns in Spiking Neural Networks
17 pages, 8 figures
null
10.1016/j.neucom.2017.06.025
null
cs.NE q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Polychronous neural groups are effective structures for the recognition of precise spike-timing patterns but the detection method is an inefficient multi-stage brute force process that works off-line on pre-recorded simulation data. This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation. In this scheme, each neuron is assigned a randomized code that is used to tag the post-synaptic neurons whenever a spike is transmitted. This creates a polychronous code that preserves the order of pre-synaptic activity and can be registered in a hash table when the post-synaptic neuron spikes. A polychronous code is a sub-component of a polychronous group that will occur, along with others, when the group is active. We demonstrate the representational and pattern recognition ability of polychronous codes on a direction selective visual task involving moving bars that is typical of a computation performed by simple cells in the cortex. The computational efficiency of the proposed algorithm far exceeds existing polychronous group detection methods and is well suited for online detection.
[ { "created": "Mon, 20 Feb 2017 12:02:50 GMT", "version": "v1" } ]
2017-07-12
[ [ "Chrol-Cannon", "Joseph", "" ], [ "Jin", "Yaochu", "" ], [ "Grüning", "André", "" ] ]
Polychronous neural groups are effective structures for the recognition of precise spike-timing patterns but the detection method is an inefficient multi-stage brute force process that works off-line on pre-recorded simulation data. This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation. In this scheme, each neuron is assigned a randomized code that is used to tag the post-synaptic neurons whenever a spike is transmitted. This creates a polychronous code that preserves the order of pre-synaptic activity and can be registered in a hash table when the post-synaptic neuron spikes. A polychronous code is a sub-component of a polychronous group that will occur, along with others, when the group is active. We demonstrate the representational and pattern recognition ability of polychronous codes on a direction selective visual task involving moving bars that is typical of a computation performed by simple cells in the cortex. The computational efficiency of the proposed algorithm far exceeds existing polychronous group detection methods and is well suited for online detection.
2005.10460
Md. Redowan Mahmud
Redowan Mahmud, Kotagiri Ramamohanarao and Rajkumar Buyya
Application Management in Fog Computing Environments: A Taxonomy, Review and Future Directions
null
ACM Computing Surveys, 2020
10.1145/3403955
Volume: 53, Issue: 4
cs.DC eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Internet of Things (IoT) paradigm is being rapidly adopted for the creation of smart environments in various domains. The IoT-enabled Cyber-Physical Systems (CPSs) associated with smart city, healthcare, Industry 4.0 and Agtech handle a huge volume of data and require data processing services from different types of applications in real-time. The Cloud-centric execution of IoT applications barely meets such requirements as the Cloud datacentres reside at a multi-hop distance from the IoT devices. \textit{Fog computing}, an extension of Cloud at the edge network, can execute these applications closer to data sources. Thus, Fog computing can improve application service delivery time and resist network congestion. However, the Fog nodes are highly distributed, heterogeneous and most of them are constrained in resources and spatial sharing. Therefore, efficient management of applications is necessary to fully exploit the capabilities of Fog nodes. In this work, we investigate the existing application management strategies in Fog computing and review them in terms of architecture, placement and maintenance. Additionally, we propose a comprehensive taxonomy and highlight the research gaps in Fog-based application management. We also discuss a perspective model and provide future research directions for further improvement of application management in Fog computing.
[ { "created": "Thu, 21 May 2020 04:43:44 GMT", "version": "v1" } ]
2020-07-28
[ [ "Mahmud", "Redowan", "" ], [ "Ramamohanarao", "Kotagiri", "" ], [ "Buyya", "Rajkumar", "" ] ]
The Internet of Things (IoT) paradigm is being rapidly adopted for the creation of smart environments in various domains. The IoT-enabled Cyber-Physical Systems (CPSs) associated with smart city, healthcare, Industry 4.0 and Agtech handle a huge volume of data and require data processing services from different types of applications in real-time. The Cloud-centric execution of IoT applications barely meets such requirements as the Cloud datacentres reside at a multi-hop distance from the IoT devices. \textit{Fog computing}, an extension of Cloud at the edge network, can execute these applications closer to data sources. Thus, Fog computing can improve application service delivery time and resist network congestion. However, the Fog nodes are highly distributed, heterogeneous and most of them are constrained in resources and spatial sharing. Therefore, efficient management of applications is necessary to fully exploit the capabilities of Fog nodes. In this work, we investigate the existing application management strategies in Fog computing and review them in terms of architecture, placement and maintenance. Additionally, we propose a comprehensive taxonomy and highlight the research gaps in Fog-based application management. We also discuss a perspective model and provide future research directions for further improvement of application management in Fog computing.
2308.04526
Jord\~ao Bragantini
Jord\~ao Bragantini, Merlin Lange, Lo\"ic Royer
Large-Scale Multi-Hypotheses Cell Tracking Using Ultrametric Contours Maps
13 pages, 7 figures, 4 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this work, we describe a method for large-scale 3D cell-tracking through a segmentation selection approach. The proposed method is effective at tracking cells across large microscopy datasets on two fronts: (i) It can solve problems containing millions of segmentation instances in terabyte-scale 3D+t datasets; (ii) It achieves competitive results with or without deep learning, which requires 3D annotated data that is scarce in the fluorescence microscopy field. The proposed method computes cell tracks and segments using a hierarchy of segmentation hypotheses and selects disjoint segments by maximizing the overlap between adjacent frames. We show that this method achieves state-of-the-art results in 3D images from the cell tracking challenge and has a faster integer linear programming formulation. Moreover, our framework is flexible and supports segmentations from off-the-shelf cell segmentation models and can combine them into an ensemble that improves tracking. The code is available at https://github.com/royerlab/ultrack.
[ { "created": "Tue, 8 Aug 2023 18:41:38 GMT", "version": "v1" }, { "created": "Thu, 11 Apr 2024 23:50:32 GMT", "version": "v2" } ]
2024-04-15
[ [ "Bragantini", "Jordão", "" ], [ "Lange", "Merlin", "" ], [ "Royer", "Loïc", "" ] ]
In this work, we describe a method for large-scale 3D cell-tracking through a segmentation selection approach. The proposed method is effective at tracking cells across large microscopy datasets on two fronts: (i) It can solve problems containing millions of segmentation instances in terabyte-scale 3D+t datasets; (ii) It achieves competitive results with or without deep learning, which requires 3D annotated data that is scarce in the fluorescence microscopy field. The proposed method computes cell tracks and segments using a hierarchy of segmentation hypotheses and selects disjoint segments by maximizing the overlap between adjacent frames. We show that this method achieves state-of-the-art results in 3D images from the cell tracking challenge and has a faster integer linear programming formulation. Moreover, our framework is flexible and supports segmentations from off-the-shelf cell segmentation models and can combine them into an ensemble that improves tracking. The code is available at https://github.com/royerlab/ultrack.
1412.6141
Song-Ju Kim Dr.
Song-Ju Kim, Masashi Aono, and Etsushi Nameda
Efficient Decision-Making by Volume-Conserving Physical Object
5 pages, 3 figures
null
10.1088/1367-2630/17/8/083023
null
cs.AI cs.LG nlin.AO physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate that any physical object, as long as its volume is conserved when coupled with suitable operations, provides a sophisticated decision-making capability. We consider the problem of finding, as accurately and quickly as possible, the most profitable option from a set of options that give stochastic rewards. These decisions are made as dictated by a physical object, which is moved in a manner similar to the fluctuations of a rigid body in a tug-of-war game. Our analytical calculations validate the statistical reasons why our method exhibits higher efficiency than conventional algorithms.
[ { "created": "Thu, 30 Oct 2014 08:23:13 GMT", "version": "v1" } ]
2015-09-02
[ [ "Kim", "Song-Ju", "" ], [ "Aono", "Masashi", "" ], [ "Nameda", "Etsushi", "" ] ]
We demonstrate that any physical object, as long as its volume is conserved when coupled with suitable operations, provides a sophisticated decision-making capability. We consider the problem of finding, as accurately and quickly as possible, the most profitable option from a set of options that give stochastic rewards. These decisions are made as dictated by a physical object, which is moved in a manner similar to the fluctuations of a rigid body in a tug-of-war game. Our analytical calculations validate the statistical reasons why our method exhibits higher efficiency than conventional algorithms.
2112.01998
Hariprasad Kodamana
Dibyendu Ghosh, Srija Chakraborty, Hariprasad Kodamana, Supriya Chakraborty
Application of Machine Learning in understanding plant virus pathogenesis: Trends and perspectives on emergence, diagnosis, host-virus interplay and management
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Inclusion of high throughput technologies in the field of biology has generated massive amounts of biological data in recent years. Now, transforming these huge volumes of data into knowledge is the primary challenge in computational biology. The traditional methods of data analysis have failed to carry out the task. Hence, researchers are turning to machine learning based approaches for the analysis of high-dimensional big data. In machine learning, once a model is trained with a training dataset, it can be applied to an independent testing dataset. In current times, deep learning algorithms further promote the application of machine learning in several fields of biology, including plant virology. Considering the significant progress in the application of machine learning in understanding plant virology, this review highlights an introductory note on machine learning and comprehensively discusses the trends and prospects of machine learning in the diagnosis of viral diseases, understanding host-virus interplay, and the emergence of plant viruses.
[ { "created": "Fri, 3 Dec 2021 16:25:26 GMT", "version": "v1" } ]
2021-12-06
[ [ "Ghosh", "Dibyendu", "" ], [ "Chakraborty", "Srija", "" ], [ "Kodamana", "Hariprasad", "" ], [ "Chakraborty", "Supriya", "" ] ]
Inclusion of high throughput technologies in the field of biology has generated massive amounts of biological data in recent years. Now, transforming these huge volumes of data into knowledge is the primary challenge in computational biology. The traditional methods of data analysis have failed to carry out the task. Hence, researchers are turning to machine learning based approaches for the analysis of high-dimensional big data. In machine learning, once a model is trained with a training dataset, it can be applied to an independent testing dataset. In current times, deep learning algorithms further promote the application of machine learning in several fields of biology, including plant virology. Considering the significant progress in the application of machine learning in understanding plant virology, this review highlights an introductory note on machine learning and comprehensively discusses the trends and prospects of machine learning in the diagnosis of viral diseases, understanding host-virus interplay, and the emergence of plant viruses.
2208.14966
Andrew Bai
Andrew Bai, Chih-Kuan Yeh, Pradeep Ravikumar, Neil Y. C. Lin, Cho-Jui Hsieh
Concept Gradient: Concept-based Interpretation Without Linear Assumption
21 pages, 7 figures, published in ICLR 2023
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Concept-based interpretations of black-box models are often more intuitive for humans to understand. The most widely adopted approach for concept-based interpretation is Concept Activation Vector (CAV). CAV relies on learning a linear relation between some latent representation of a given model and concepts. Linear separability is usually implicitly assumed but does not hold true in general. In this work, we started from the original intent of concept-based interpretation and proposed Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions. We showed that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change of concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space. We demonstrated empirically that CG outperforms CAV in both toy examples and real-world datasets.
[ { "created": "Wed, 31 Aug 2022 17:06:46 GMT", "version": "v1" }, { "created": "Mon, 5 Feb 2024 21:27:45 GMT", "version": "v2" } ]
2024-02-07
[ [ "Bai", "Andrew", "" ], [ "Yeh", "Chih-Kuan", "" ], [ "Ravikumar", "Pradeep", "" ], [ "Lin", "Neil Y. C.", "" ], [ "Hsieh", "Cho-Jui", "" ] ]
Concept-based interpretations of black-box models are often more intuitive for humans to understand. The most widely adopted approach for concept-based interpretation is Concept Activation Vector (CAV). CAV relies on learning a linear relation between some latent representation of a given model and concepts. Linear separability is usually implicitly assumed but does not hold true in general. In this work, we started from the original intent of concept-based interpretation and proposed Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions. We showed that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change of concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space. We demonstrated empirically that CG outperforms CAV in both toy examples and real-world datasets.
2405.09409
Markus Ralf Bujotzek
Markus R. Bujotzek, \"Unal Ak\"unal, Stefan Denner, Peter Neher, Maximilian Zenk, Eric Frodl, Astha Jaiswal, Moon Kim, Nicolai R. Krekiehn, Manuel Nickel, Richard Ruppel, Marcus Both, Felix D\"ollinger, Marcel Opitz, Thorsten Persigehl, Jens Kleesiek, Tobias Penzkofer, Klaus Maier-Hein, Rickmer Braren, Andreas Bucher
Real-World Federated Learning in Radiology: Hurdles to overcome and Benefits to gain
null
null
null
null
cs.CV cs.DC
http://creativecommons.org/licenses/by/4.0/
Objective: Federated Learning (FL) enables collaborative model training while keeping data locally. Currently, most FL studies in radiology are conducted in simulated environments due to numerous hurdles impeding its translation into practice. The few existing real-world FL initiatives rarely communicate specific measures taken to overcome these hurdles, leaving behind a significant knowledge gap. Considering the effort required to implement real-world FL, there is a notable lack of comprehensive assessments comparing FL to less complex alternatives. Materials & Methods: We extensively reviewed the FL literature, categorizing insights along with our findings according to their nature and phase in establishing an FL initiative, summarized into a comprehensive guide. We developed our own FL infrastructure within the German Radiological Cooperative Network (RACOON) and demonstrated its functionality by training FL models on lung pathology segmentation tasks across six university hospitals. We extensively evaluated FL against less complex alternatives in three distinct evaluation scenarios. Results: The proposed guide outlines essential steps, identified hurdles, and proposed solutions for establishing successful FL initiatives conducting real-world experiments. Our experimental results show that FL outperforms less complex alternatives in all evaluation scenarios, justifying the effort required to translate FL into real-world applications. Discussion & Conclusion: Our proposed guide aims to aid future FL researchers in circumventing pitfalls and accelerating the translation of FL into radiological applications. Our results underscore the value of the effort needed to translate FL into real-world applications by demonstrating advantageous performance over alternatives, and emphasize the importance of strategic organization and robust management of distributed data and infrastructure in real-world settings.
[ { "created": "Wed, 15 May 2024 15:04:27 GMT", "version": "v1" } ]
2024-05-16
[ [ "Bujotzek", "Markus R.", "" ], [ "Akünal", "Ünal", "" ], [ "Denner", "Stefan", "" ], [ "Neher", "Peter", "" ], [ "Zenk", "Maximilian", "" ], [ "Frodl", "Eric", "" ], [ "Jaiswal", "Astha", "" ], [ "Kim", "Moon", "" ], [ "Krekiehn", "Nicolai R.", "" ], [ "Nickel", "Manuel", "" ], [ "Ruppel", "Richard", "" ], [ "Both", "Marcus", "" ], [ "Döllinger", "Felix", "" ], [ "Opitz", "Marcel", "" ], [ "Persigehl", "Thorsten", "" ], [ "Kleesiek", "Jens", "" ], [ "Penzkofer", "Tobias", "" ], [ "Maier-Hein", "Klaus", "" ], [ "Braren", "Rickmer", "" ], [ "Bucher", "Andreas", "" ] ]
Objective: Federated Learning (FL) enables collaborative model training while keeping data locally. Currently, most FL studies in radiology are conducted in simulated environments due to numerous hurdles impeding its translation into practice. The few existing real-world FL initiatives rarely communicate specific measures taken to overcome these hurdles, leaving behind a significant knowledge gap. Considering the effort required to implement real-world FL, there is a notable lack of comprehensive assessments comparing FL to less complex alternatives. Materials & Methods: We extensively reviewed the FL literature, categorizing insights along with our findings according to their nature and phase in establishing an FL initiative, summarized into a comprehensive guide. We developed our own FL infrastructure within the German Radiological Cooperative Network (RACOON) and demonstrated its functionality by training FL models on lung pathology segmentation tasks across six university hospitals. We extensively evaluated FL against less complex alternatives in three distinct evaluation scenarios. Results: The proposed guide outlines essential steps, identified hurdles, and proposed solutions for establishing successful FL initiatives conducting real-world experiments. Our experimental results show that FL outperforms less complex alternatives in all evaluation scenarios, justifying the effort required to translate FL into real-world applications. Discussion & Conclusion: Our proposed guide aims to aid future FL researchers in circumventing pitfalls and accelerating the translation of FL into radiological applications. Our results underscore the value of the effort needed to translate FL into real-world applications by demonstrating advantageous performance over alternatives, and emphasize the importance of strategic organization and robust management of distributed data and infrastructure in real-world settings.
2311.07453
Mubashara Akhtar
Mubashara Akhtar, Nikesh Subedi, Vivek Gupta, Sahar Tahmasebi, Oana Cocarascu, Elena Simperl
ChartCheck: Explainable Fact-Checking over Real-World Chart Images
null
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Whilst fact verification has attracted substantial interest in the natural language processing community, verifying misinforming statements against data visualizations such as charts has so far been overlooked. Charts are commonly used in the real world to summarize and communicate key information, but they can also be easily misused to spread misinformation and promote certain agendas. In this paper, we introduce ChartCheck, a novel, large-scale dataset for explainable fact-checking against real-world charts, consisting of 1.7k charts and 10.5k human-written claims and explanations. We systematically evaluate ChartCheck using vision-language and chart-to-table models, and propose a baseline to the community. Finally, we study chart reasoning types and visual attributes that pose a challenge to these models.
[ { "created": "Mon, 13 Nov 2023 16:35:29 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 12:14:05 GMT", "version": "v2" } ]
2024-02-19
[ [ "Akhtar", "Mubashara", "" ], [ "Subedi", "Nikesh", "" ], [ "Gupta", "Vivek", "" ], [ "Tahmasebi", "Sahar", "" ], [ "Cocarascu", "Oana", "" ], [ "Simperl", "Elena", "" ] ]
Whilst fact verification has attracted substantial interest in the natural language processing community, verifying misinforming statements against data visualizations such as charts has so far been overlooked. Charts are commonly used in the real world to summarize and communicate key information, but they can also be easily misused to spread misinformation and promote certain agendas. In this paper, we introduce ChartCheck, a novel, large-scale dataset for explainable fact-checking against real-world charts, consisting of 1.7k charts and 10.5k human-written claims and explanations. We systematically evaluate ChartCheck using vision-language and chart-to-table models, and propose a baseline to the community. Finally, we study chart reasoning types and visual attributes that pose a challenge to these models.
1711.02484
Elie Ngomseu Mambou
Ebenezer Esenogho and Elie Ngomseu Mambou
Evaluation of Handover Exchange Schemes Between Two Cognitive Radio Base Stations with and without Buffers
5 pages, 7 figures, conference
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article investigates and evaluates a handover exchange scheme between two secondary users (SUs) moving in different directions across the handover region of neighbouring cells in a cognitive radio network. More specifically, this investigation compares the performance of SUs in a cellular cognitive radio network with and without channel exchange systems. The investigation shows reduced handover failure, blocking, forced and access probabilities for the handover exchange scheme with a buffer, as compared to the other scenario.
[ { "created": "Tue, 7 Nov 2017 14:27:16 GMT", "version": "v1" } ]
2017-11-08
[ [ "Esenogho", "Ebenezer", "" ], [ "Mambou", "Elie Ngomseu", "" ] ]
This article investigates and evaluates a handover exchange scheme between two secondary users (SUs) moving in different directions across the handover region of neighbouring cells in a cognitive radio network. More specifically, this investigation compares the performance of SUs in a cellular cognitive radio network with and without channel exchange systems. The investigation shows reduced handover failure, blocking, forced and access probabilities for the handover exchange scheme with a buffer, as compared to the other scenario.
2305.14329
Fivos Kalogiannis
Fivos Kalogiannis, Ioannis Panageas
Zero-sum Polymatrix Markov Games: Equilibrium Collapse and Efficient Computation of Nash Equilibria
Added missing proofs for the infinite-horizon
null
null
null
cs.GT cs.MA cs.SI
http://creativecommons.org/licenses/by/4.0/
The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al., 2023) indicate that computing Nash equilibria in multi-player Markov games is a computationally hard task. This fact raises the question of whether or not computational intractability can be circumvented if one focuses on specific classes of Markov games. One such example is two-player zero-sum Markov games, in which efficient ways to compute a Nash equilibrium are known. Inspired by zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of zero-sum multi-agent Markov games in which there are only pairwise interactions described by a graph that changes per state. For this class of Markov games, we show that an $\epsilon$-approximate Nash equilibrium can be found efficiently. To do so, we generalize the techniques of (Cai et al., 2016), by showing that the set of coarse-correlated equilibria collapses to the set of Nash equilibria. Afterwards, it is possible to use any algorithm in the literature that computes approximate coarse-correlated equilibrium Markovian policies to get an approximate Nash equilibrium.
[ { "created": "Tue, 23 May 2023 17:56:45 GMT", "version": "v1" }, { "created": "Mon, 29 May 2023 17:57:58 GMT", "version": "v2" } ]
2023-05-30
[ [ "Kalogiannis", "Fivos", "" ], [ "Panageas", "Ioannis", "" ] ]
The works of (Daskalakis et al., 2009, 2022; Jin et al., 2022; Deng et al., 2023) indicate that computing Nash equilibria in multi-player Markov games is a computationally hard task. This fact raises the question of whether or not computational intractability can be circumvented if one focuses on specific classes of Markov games. One such example is two-player zero-sum Markov games, in which efficient ways to compute a Nash equilibrium are known. Inspired by zero-sum polymatrix normal-form games (Cai et al., 2016), we define a class of zero-sum multi-agent Markov games in which there are only pairwise interactions described by a graph that changes per state. For this class of Markov games, we show that an $\epsilon$-approximate Nash equilibrium can be found efficiently. To do so, we generalize the techniques of (Cai et al., 2016), by showing that the set of coarse-correlated equilibria collapses to the set of Nash equilibria. Afterwards, it is possible to use any algorithm in the literature that computes approximate coarse-correlated equilibrium Markovian policies to get an approximate Nash equilibrium.
2303.10288
Jun Zhao
Terence Jie Chua, Wenhan Yu, Jun Zhao
Mobile Edge Adversarial Detection for Digital Twinning to the Metaverse with Deep Reinforcement Learning
This paper appears in IEEE International Conference on Communications, 2023
null
null
null
cs.NI cs.AI
http://creativecommons.org/licenses/by/4.0/
Real-time Digital Twinning of physical world scenes onto the Metaverse is necessary for a myriad of applications such as augmented-reality (AR) assisted driving. In AR assisted driving, physical environment scenes are first captured by Internet of Vehicles (IoVs) and are uploaded to the Metaverse. A central Metaverse Map Service Provider (MMSP) will aggregate information from all IoVs to develop a central Metaverse Map. Information from the Metaverse Map can then be downloaded into individual IoVs on demand and be delivered as AR scenes to the driver. However, the growing interest in developing AR assisted driving applications which rely on digital twinning invites adversaries. These adversaries may place physical adversarial patches on physical world objects such as cars, signboards, or on roads, seeking to contort the virtual world digital twin. Hence, there is a need to detect these physical world adversarial patches. Nevertheless, as real-time, accurate detection of adversarial patches is compute-intensive, these physical world scenes have to be offloaded to the Metaverse Map Base Stations (MMBS) for computation. Hence in our work, we considered an environment with moving Internet of Vehicles (IoV), uploading real-time physical world scenes to the MMBSs. We formulated a realistic joint variable optimization problem where the MMSP's objective is to maximize adversarial patch detection mean average precision (mAP), while minimizing the computed AR scene up-link transmission latency and IoVs' up-link transmission idle count, through optimizing the IoV-MMBS allocation and IoV up-link scene resolution selection. We proposed a Heterogeneous Action Proximal Policy Optimization (HAPPO) (discrete-continuous) algorithm to tackle the proposed problem. Extensive experiments show that HAPPO outperforms baseline models when compared against key metrics.
[ { "created": "Sat, 18 Mar 2023 00:03:50 GMT", "version": "v1" } ]
2023-03-21
[ [ "Chua", "Terence Jie", "" ], [ "Yu", "Wenhan", "" ], [ "Zhao", "Jun", "" ] ]
Real-time Digital Twinning of physical world scenes onto the Metaverse is necessary for a myriad of applications such as augmented-reality (AR) assisted driving. In AR assisted driving, physical environment scenes are first captured by Internet of Vehicles (IoVs) and are uploaded to the Metaverse. A central Metaverse Map Service Provider (MMSP) will aggregate information from all IoVs to develop a central Metaverse Map. Information from the Metaverse Map can then be downloaded into individual IoVs on demand and be delivered as AR scenes to the driver. However, the growing interest in developing AR assisted driving applications which rely on digital twinning invites adversaries. These adversaries may place physical adversarial patches on physical world objects such as cars, signboards, or on roads, seeking to contort the virtual world digital twin. Hence, there is a need to detect these physical world adversarial patches. Nevertheless, as real-time, accurate detection of adversarial patches is compute-intensive, these physical world scenes have to be offloaded to the Metaverse Map Base Stations (MMBS) for computation. Hence in our work, we considered an environment with moving Internet of Vehicles (IoV), uploading real-time physical world scenes to the MMBSs. We formulated a realistic joint variable optimization problem where the MMSP's objective is to maximize adversarial patch detection mean average precision (mAP), while minimizing the computed AR scene up-link transmission latency and IoVs' up-link transmission idle count, through optimizing the IoV-MMBS allocation and IoV up-link scene resolution selection. We proposed a Heterogeneous Action Proximal Policy Optimization (HAPPO) (discrete-continuous) algorithm to tackle the proposed problem. Extensive experiments show that HAPPO outperforms baseline models when compared against key metrics.
2306.07512
Wang Ruijie
Ruijie Wang, Baoyu Li, Yichen Lu, Dachun Sun, Jinning Li, Yuchen Yan, Shengzhong Liu, Hanghang Tong, Tarek F. Abdelzaher
Noisy Positive-Unlabeled Learning with Self-Training for Speculative Knowledge Graph Reasoning
This paper is accepted by ACL-Findings 2023
null
null
null
cs.LG cs.AI cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
This paper studies the speculative reasoning task on real-world knowledge graphs (KGs) that contain both the \textit{false negative issue} (i.e., potential true facts being excluded) and the \textit{false positive issue} (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in the speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in the KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call the \textit{label posterior}) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds of reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KGs and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph.
[ { "created": "Tue, 13 Jun 2023 02:43:21 GMT", "version": "v1" } ]
2023-06-14
[ [ "Wang", "Ruijie", "" ], [ "Li", "Baoyu", "" ], [ "Lu", "Yichen", "" ], [ "Sun", "Dachun", "" ], [ "Li", "Jinning", "" ], [ "Yan", "Yuchen", "" ], [ "Liu", "Shengzhong", "" ], [ "Tong", "Hanghang", "" ], [ "Abdelzaher", "Tarek F.", "" ] ]
This paper studies the speculative reasoning task on real-world knowledge graphs (KGs) that contain both the \textit{false negative issue} (i.e., potential true facts being excluded) and the \textit{false positive issue} (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in the speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in the KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call the \textit{label posterior}) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds of reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KGs and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph.
2301.07868
Bowen Zhang
Xiaojie Jin, Bowen Zhang, Weibo Gong, Kai Xu, XueQing Deng, Peng Wang, Zhao Zhang, Xiaohui Shen, Jiashi Feng
MV-Adapter: Multimodal Video Transfer Learning for Video Text Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art video-text retrieval (VTR) methods typically involve fully fine-tuning a pre-trained model (e.g. CLIP) on specific datasets. However, this can result in significant storage costs in practical applications, as a separate model per task must be stored. To address this issue, we present our pioneering work that enables parameter-efficient VTR using a pre-trained model, with only a small number of tunable parameters during training. Towards this goal, we propose a new method dubbed Multimodal Video Adapter (MV-Adapter) for efficiently transferring the knowledge in the pre-trained CLIP from image-text to video-text. Specifically, MV-Adapter utilizes bottleneck structures in both video and text branches, along with two novel components. The first is a Temporal Adaptation Module that is incorporated in the video branch to introduce global and local temporal contexts. We also train weight calibrations to adjust to dynamic variations across frames. The second is Cross Modality Tying, which generates weights for the video/text branches through sharing cross modality factors, for better alignment between modalities. Thanks to the above innovations, MV-Adapter can achieve comparable or better performance than standard full fine-tuning with negligible parameter overhead. Notably, MV-Adapter consistently outperforms various competing methods in V2T/T2V tasks with large margins on five widely used VTR benchmarks (MSR-VTT, MSVD, LSMDC, DiDemo, and ActivityNet).
[ { "created": "Thu, 19 Jan 2023 03:42:56 GMT", "version": "v1" }, { "created": "Thu, 11 Apr 2024 06:21:29 GMT", "version": "v2" } ]
2024-04-12
[ [ "Jin", "Xiaojie", "" ], [ "Zhang", "Bowen", "" ], [ "Gong", "Weibo", "" ], [ "Xu", "Kai", "" ], [ "Deng", "XueQing", "" ], [ "Wang", "Peng", "" ], [ "Zhang", "Zhao", "" ], [ "Shen", "Xiaohui", "" ], [ "Feng", "Jiashi", "" ] ]
State-of-the-art video-text retrieval (VTR) methods typically involve fully fine-tuning a pre-trained model (e.g. CLIP) on specific datasets. However, this can result in significant storage costs in practical applications, as a separate model per task must be stored. To address this issue, we present our pioneering work that enables parameter-efficient VTR using a pre-trained model, with only a small number of tunable parameters during training. Towards this goal, we propose a new method dubbed Multimodal Video Adapter (MV-Adapter) for efficiently transferring the knowledge in the pre-trained CLIP from image-text to video-text. Specifically, MV-Adapter utilizes bottleneck structures in both video and text branches, along with two novel components. The first is a Temporal Adaptation Module that is incorporated in the video branch to introduce global and local temporal contexts. We also train weight calibrations to adjust to dynamic variations across frames. The second is Cross Modality Tying, which generates weights for the video/text branches through sharing cross modality factors, for better alignment between modalities. Thanks to the above innovations, MV-Adapter can achieve comparable or better performance than standard full fine-tuning with negligible parameter overhead. Notably, MV-Adapter consistently outperforms various competing methods in V2T/T2V tasks with large margins on five widely used VTR benchmarks (MSR-VTT, MSVD, LSMDC, DiDemo, and ActivityNet).
2202.03575
Peiying Zhang
Peiying Zhang, Chao Wang, Chunxiao Jiang, and Zhu Han
Deep Reinforcement Learning Assisted Federated Learning Algorithm for Data Management of IIoT
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The continuously expanding scale of the industrial Internet of Things (IIoT) means that IIoT equipment generates massive amounts of user data every moment. Owing to the different requirements of end users, these data are usually highly heterogeneous and private, and most users are reluctant to expose them to public view. How to manage such time series data in an efficient and safe way in the field of IIoT remains an open issue, and it has attracted extensive attention from academia and industry. As a new machine learning (ML) paradigm, federated learning (FL) has great advantages in training on heterogeneous and private data. This paper studies the application of FL technology to manage IIoT equipment data in wireless network environments. In order to increase the model aggregation rate and reduce communication costs, we apply deep reinforcement learning (DRL) to the IIoT equipment selection process, specifically to select those IIoT equipment nodes with accurate models. Therefore, we propose a DRL-assisted FL algorithm, which takes into account the privacy and efficiency of IIoT equipment data training. By analyzing the data characteristics of IIoT equipment, we use the MNIST, Fashion-MNIST and CIFAR-10 datasets to represent the data generated by IIoT. During the experiments, we employ a deep neural network (DNN) model to train the data, and the experimental results show that the accuracy can reach more than 97\%, which corroborates the effectiveness of the proposed algorithm.
[ { "created": "Thu, 3 Feb 2022 07:12:36 GMT", "version": "v1" } ]
2022-02-09
[ [ "Zhang", "Peiying", "" ], [ "Wang", "Chao", "" ], [ "Jiang", "Chunxiao", "" ], [ "Han", "Zhu", "" ] ]
The continuously expanding scale of the industrial Internet of Things (IIoT) means that IIoT equipment generates massive amounts of user data every moment. Owing to the different requirements of end users, these data are usually highly heterogeneous and private, and most users are reluctant to expose them to public view. How to manage such time series data in an efficient and safe way in the field of IIoT remains an open issue, and it has attracted extensive attention from academia and industry. As a new machine learning (ML) paradigm, federated learning (FL) has great advantages in training on heterogeneous and private data. This paper studies the application of FL technology to manage IIoT equipment data in wireless network environments. In order to increase the model aggregation rate and reduce communication costs, we apply deep reinforcement learning (DRL) to the IIoT equipment selection process, specifically to select those IIoT equipment nodes with accurate models. Therefore, we propose a DRL-assisted FL algorithm, which takes into account the privacy and efficiency of IIoT equipment data training. By analyzing the data characteristics of IIoT equipment, we use the MNIST, Fashion-MNIST and CIFAR-10 datasets to represent the data generated by IIoT. During the experiments, we employ a deep neural network (DNN) model to train the data, and the experimental results show that the accuracy can reach more than 97\%, which corroborates the effectiveness of the proposed algorithm.
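The two building blocks this abstract combines, accuracy-driven client selection and federated aggregation, can be sketched as follows. The selector here is a simple greedy stand-in for the paper's DRL policy (picking the k nodes with the most accurate local models), and all names and numbers are hypothetical; the aggregation step is standard FedAvg-style weighted averaging.

```python
def select_clients(accuracies, k):
    """Greedy stand-in for the DRL selector: choose the k IIoT nodes
    whose local models are currently most accurate."""
    ranked = sorted(accuracies, key=accuracies.get, reverse=True)
    return ranked[:k]

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average per-client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes[c] for c in client_weights)
    dim = len(next(iter(client_weights.values())))
    agg = [0.0] * dim
    for c, w in client_weights.items():
        frac = client_sizes[c] / total
        for i, wi in enumerate(w):
            agg[i] += frac * wi
    return agg

# Hypothetical round: three nodes report local accuracies; two are chosen.
accuracies = {"node_a": 0.97, "node_b": 0.62, "node_c": 0.91}
chosen = select_clients(accuracies, k=2)
weights = {c: [1.0, 2.0] if c == "node_a" else [3.0, 4.0] for c in chosen}
sizes = {c: 100 for c in chosen}
global_model = fedavg(weights, sizes)
```

Selecting only well-performing nodes per round is what reduces communication cost: fewer uploads are needed to move the aggregated model by the same amount.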
2103.06742
Qianhao Wang
Qianhao Wang, Yuman Gao, Jialin Ji, Chao Xu, and Fei Gao
Visibility-aware Trajectory Optimization with Application to Aerial Tracking
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The visibility of targets determines the performance and even the success rate of various applications, such as active SLAM, exploration, and target tracking. Therefore, it is crucial to take the visibility of targets into explicit account in trajectory planning. In this paper, we propose a general metric for target visibility, considering the observation distance and angle as well as occlusion effects. We formulate this metric into a differentiable visibility cost function, with which the spatial trajectory and yaw can be jointly optimized. Furthermore, this visibility-aware trajectory optimization handles the dynamic feasibility of position and yaw simultaneously. To validate that our method is practical and generic, we integrate it into a customized quadrotor tracking system. The experimental results show that our visibility-aware planner performs more robustly and observes targets better. In order to benefit related research, we release our code to the public.
[ { "created": "Thu, 11 Mar 2021 15:43:13 GMT", "version": "v1" } ]
2021-03-12
[ [ "Wang", "Qianhao", "" ], [ "Gao", "Yuman", "" ], [ "Ji", "Jialin", "" ], [ "Xu", "Chao", "" ], [ "Gao", "Fei", "" ] ]
The visibility of targets determines the performance and even the success rate of various applications, such as active SLAM, exploration, and target tracking. Therefore, it is crucial to take the visibility of targets into explicit account in trajectory planning. In this paper, we propose a general metric for target visibility, considering the observation distance and angle as well as occlusion effects. We formulate this metric into a differentiable visibility cost function, with which the spatial trajectory and yaw can be jointly optimized. Furthermore, this visibility-aware trajectory optimization handles the dynamic feasibility of position and yaw simultaneously. To validate that our method is practical and generic, we integrate it into a customized quadrotor tracking system. The experimental results show that our visibility-aware planner performs more robustly and observes targets better. In order to benefit related research, we release our code to the public.
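A visibility cost combining distance, viewing angle, and occlusion, as the abstract describes, can be sketched in 2-D. This is not the paper's formulation; the functional form, the preferred distance d_star, and the weights are all assumptions made for illustration. It penalizes deviating from a preferred observation distance, penalizes yaw error relative to the target bearing, and adds a scalar occlusion penalty.

```python
import math

def visibility_cost(p_drone, p_target, d_star=3.0,
                    w_dist=1.0, w_angle=0.5, occlusion=0.0):
    """Toy visibility cost (hypothetical form): penalize deviation from a
    preferred observation distance d_star, penalize not facing the target,
    and add an occlusion term (0 = fully visible)."""
    dx = p_target[0] - p_drone[0]
    dy = p_target[1] - p_drone[1]
    dist = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx)          # yaw that looks at the target
    yaw = p_drone[2]
    # Wrapped yaw error in (-pi, pi].
    angle_err = abs(math.atan2(math.sin(yaw - bearing),
                               math.cos(yaw - bearing)))
    return (w_dist * (dist - d_star) ** 2
            + w_angle * angle_err ** 2
            + occlusion)

# Facing the target at the preferred distance costs nothing;
# facing away from it is penalized.
best = visibility_cost((0.0, 0.0, 0.0), (3.0, 0.0))
worse = visibility_cost((0.0, 0.0, math.pi), (3.0, 0.0))
```

Because every term is a smooth function of position and yaw, a cost of this shape can be dropped into a gradient-based trajectory optimizer, which is the property the abstract's "differentiable" claim is about.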
2403.05066
Hongjoon Ahn
Hongjoon Ahn, Jinu Hyeon, Youngmin Oh, Bosun Hwang, and Taesup Moon
Reset & Distill: A Recipe for Overcoming Negative Transfer in Continual Reinforcement Learning
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
We argue that the negative transfer problem, which occurs when a new task to learn arrives, is an important issue that must not be overlooked when developing effective Continual Reinforcement Learning (CRL) algorithms. Through comprehensive experimental validation, we demonstrate that this issue frequently arises in CRL and cannot be effectively addressed by several recent works on mitigating the plasticity loss of RL agents. To that end, we develop Reset & Distill (R&D), a simple yet highly effective method to overcome the negative transfer problem in CRL. R&D combines a strategy of resetting the agent's online actor and critic networks to learn a new task with an offline learning step for distilling the knowledge from the online actor and the previous expert's action probabilities. We carried out extensive experiments on a long sequence of Meta-World tasks and show that our method consistently outperforms recent baselines, achieving significantly higher success rates across a range of tasks. Our findings highlight the importance of considering negative transfer in CRL and emphasize the need for robust strategies like R&D to mitigate its detrimental effects.
[ { "created": "Fri, 8 Mar 2024 05:37:59 GMT", "version": "v1" }, { "created": "Wed, 14 Aug 2024 06:32:11 GMT", "version": "v2" } ]
2024-08-15
[ [ "Ahn", "Hongjoon", "" ], [ "Hyeon", "Jinu", "" ], [ "Oh", "Youngmin", "" ], [ "Hwang", "Bosun", "" ], [ "Moon", "Taesup", "" ] ]
We argue that the negative transfer problem, which occurs when a new task to learn arrives, is an important issue that must not be overlooked when developing effective Continual Reinforcement Learning (CRL) algorithms. Through comprehensive experimental validation, we demonstrate that this issue frequently arises in CRL and cannot be effectively addressed by several recent works on mitigating the plasticity loss of RL agents. To that end, we develop Reset & Distill (R&D), a simple yet highly effective method to overcome the negative transfer problem in CRL. R&D combines a strategy of resetting the agent's online actor and critic networks to learn a new task with an offline learning step for distilling the knowledge from the online actor and the previous expert's action probabilities. We carried out extensive experiments on a long sequence of Meta-World tasks and show that our method consistently outperforms recent baselines, achieving significantly higher success rates across a range of tasks. Our findings highlight the importance of considering negative transfer in CRL and emphasize the need for robust strategies like R&D to mitigate its detrimental effects.
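The distillation half of an R&D-style recipe, matching a freshly reset policy to a teacher's action probabilities, reduces to a cross-entropy objective. The sketch below is a generic illustration of that objective, not the paper's exact loss; the probability vectors are invented for the example.

```python
import math

def distill_loss(teacher_probs, student_probs, eps=1e-8):
    """Cross-entropy distillation target: the loss is minimized when the
    student's action distribution matches the teacher's. Used here as a
    generic stand-in for distilling a reset policy from the online actor
    or a previous expert's action probabilities."""
    return -sum(t * math.log(s + eps)
                for t, s in zip(teacher_probs, student_probs))

teacher = [0.7, 0.2, 0.1]                 # hypothetical expert policy
perfect = distill_loss(teacher, [0.7, 0.2, 0.1])
wrong = distill_loss(teacher, [0.1, 0.2, 0.7])
```

The reset step wipes out whatever the old network had learned (sidestepping negative transfer), and this offline loss then recovers the useful behavior from stored teacher outputs rather than from scratch.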
1311.2677
Raman Singh Mr.
Raman Singh, Harish Kumar and R.K. Singla
Sampling Based Approaches to Handle Imbalances in Network Traffic Dataset for Machine Learning Techniques
12 pages
null
10.5121/csit.2013.3704
null
cs.NI cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Network traffic data is huge, varying and imbalanced because the various classes are not equally distributed. Machine learning (ML) algorithms for traffic analysis use samples from this data both for training and to recommend actions to be taken by network administrators. Due to imbalances in the dataset, it is difficult to train machine learning algorithms for traffic analysis, and they may give biased or false results, leading to serious degradation in the performance of these algorithms. Various techniques can be applied during sampling to minimize the effect of imbalanced instances. In this paper, various sampling techniques have been analysed in order to compare the decrease in the variation of imbalances of network traffic datasets sampled for these algorithms. Various parameters, such as missing classes in samples and the sampling probability of different instances, have been considered for comparison.
[ { "created": "Tue, 12 Nov 2013 05:32:48 GMT", "version": "v1" } ]
2013-11-13
[ [ "Singh", "Raman", "" ], [ "Kumar", "Harish", "" ], [ "Singla", "R. K.", "" ] ]
Network traffic data is huge, varying and imbalanced because the various classes are not equally distributed. Machine learning (ML) algorithms for traffic analysis use samples from this data both for training and to recommend actions to be taken by network administrators. Due to imbalances in the dataset, it is difficult to train machine learning algorithms for traffic analysis, and they may give biased or false results, leading to serious degradation in the performance of these algorithms. Various techniques can be applied during sampling to minimize the effect of imbalanced instances. In this paper, various sampling techniques have been analysed in order to compare the decrease in the variation of imbalances of network traffic datasets sampled for these algorithms. Various parameters, such as missing classes in samples and the sampling probability of different instances, have been considered for comparison.
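One of the simplest sampling techniques of the kind the abstract compares is random oversampling: duplicating minority-class instances until every class reaches the majority-class count, which also guarantees no class goes missing from the sample. The labels below are a hypothetical 80/20 traffic mix, not data from the paper.

```python
import random

def oversample(labels, seed=0):
    """Random oversampling: duplicate minority-class indices until every
    class matches the majority-class count, so no class is missing from
    the resulting sample."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    target = max(len(idx) for idx in by_class.values())
    sample = []
    for y, idx in by_class.items():
        sample.extend(idx)                          # keep all originals
        sample.extend(rng.choices(idx, k=target - len(idx)))  # pad minority
    return sample

labels = ["http"] * 8 + ["dns"] * 2      # hypothetical 80/20 imbalance
balanced = [labels[i] for i in oversample(labels)]
```

Undersampling is the mirror image (discard majority instances down to the minority count); it avoids duplicates at the price of throwing information away, which is exactly the kind of trade-off a comparison like this paper's has to quantify.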
1709.04579
Behzad Ghazanfari
Behzad Ghazanfari and Matthew E. Taylor
Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high-dimensional spaces. The autonomous decomposition of tasks and the use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks by leveraging association rule mining, which discovers hidden relationships among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract the Hierarchical Structure of Tasks in Reinforcement Learning). It extracts the temporal and structural relationships of sub-goals in RL and multi-task RL. In particular, it finds sub-goals and the relationships among them. The significant efficiency and performance of the proposed method are shown in two main topics of RL.
[ { "created": "Thu, 14 Sep 2017 01:43:13 GMT", "version": "v1" }, { "created": "Fri, 15 Sep 2017 16:21:03 GMT", "version": "v2" } ]
2017-09-18
[ [ "Ghazanfari", "Behzad", "" ], [ "Taylor", "Matthew E.", "" ] ]
Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high-dimensional spaces. The autonomous decomposition of tasks and the use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks by leveraging association rule mining, which discovers hidden relationships among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract the Hierarchical Structure of Tasks in Reinforcement Learning). It extracts the temporal and structural relationships of sub-goals in RL and multi-task RL. In particular, it finds sub-goals and the relationships among them. The significant efficiency and performance of the proposed method are shown in two main topics of RL.
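Association rule mining rests on two measures, support and confidence, that could be computed over episode traces to surface relationships between sub-goals. The sketch below shows only those two primitives on a tiny set of hypothetical sub-goal traces; it is not ARM-HSTRL itself, whose extraction procedure is more involved.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(lhs, rhs, transactions):
    """Confidence of the rule lhs -> rhs: of the transactions containing
    lhs, how many also contain rhs."""
    return support(lhs | rhs, transactions) / support(lhs, transactions)

# Hypothetical sub-goal traces: each set is the sub-goals visited in
# one episode of a toy navigation task.
traces = [{"get_key", "open_door"},
          {"get_key", "open_door", "reach_goal"},
          {"get_key"},
          {"open_door", "reach_goal"}]
conf = confidence({"get_key"}, {"open_door"}, traces)
```

A high-confidence rule like `get_key -> open_door` is the kind of mined relationship that a hierarchical method can then promote into an ordered pair of sub-goals.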
2203.16428
Stuart Millar Mr
Stuart Millar
Vulnerability Detection in Open Source Software: An Introduction
This version dated March 26th 2017
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
This paper is an introductory discussion of the causes of open source software vulnerabilities, their importance in the cybersecurity ecosystem, and a selection of detection methods. A recent application security report showed that 44% of applications contain critical vulnerabilities in an open source component, a concerning proportion. Most companies do not have a reliable way of being directly and promptly notified when zero-day vulnerabilities are found and when patches subsequently become available. This means attack vectors in open source exist longer than necessary. Conventional approaches to vulnerability detection are outlined alongside some newer research trends. We conclude that it may not be possible to entirely replace expert human inspection of open source software, although it can be effectively augmented with techniques such as machine learning, IDE plug-ins and repository linking to make implementation and review less time-intensive. Underpinning any technological advances should be better knowledge at the human level. Development teams need to be trained, coached and improved so they can implement open source more securely and know what vulnerabilities to look for and how to handle them. It is the use of this blended approach to detection which is key.
[ { "created": "Sun, 6 Mar 2022 16:46:58 GMT", "version": "v1" } ]
2022-03-31
[ [ "Millar", "Stuart", "" ] ]
This paper is an introductory discussion of the causes of open source software vulnerabilities, their importance in the cybersecurity ecosystem, and a selection of detection methods. A recent application security report showed that 44% of applications contain critical vulnerabilities in an open source component, a concerning proportion. Most companies do not have a reliable way of being directly and promptly notified when zero-day vulnerabilities are found and when patches subsequently become available. This means attack vectors in open source exist longer than necessary. Conventional approaches to vulnerability detection are outlined alongside some newer research trends. We conclude that it may not be possible to entirely replace expert human inspection of open source software, although it can be effectively augmented with techniques such as machine learning, IDE plug-ins and repository linking to make implementation and review less time-intensive. Underpinning any technological advances should be better knowledge at the human level. Development teams need to be trained, coached and improved so they can implement open source more securely and know what vulnerabilities to look for and how to handle them. It is the use of this blended approach to detection which is key.
2103.13447
Seunghun Lee
Seunghun Lee, Sunghyun Cho, Sunghoon Im
DRANet: Disentangling Representation and Adaptation Networks for Unsupervised Cross-Domain Adaptation
Accepted to CVPR 2021
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present DRANet, a network architecture that disentangles image representations and transfers the visual attributes in a latent space for unsupervised cross-domain adaptation. Unlike the existing domain adaptation methods that learn associated features sharing a domain, DRANet preserves the distinctiveness of each domain's characteristics. Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images. Then, it adapts the domain by incorporating the transferred style factor into the content factor along with learnable weights specified for each domain. This learning framework allows bi-/multi-directional domain adaptation with a single encoder-decoder network and aligns their domain shift. Additionally, we propose a content-adaptive domain transfer module that helps retain scene structure while transferring style. Extensive experiments show our model successfully separates content-style factors and synthesizes visually pleasing domain-transferred images. The proposed method demonstrates state-of-the-art performance on standard digit classification tasks as well as semantic segmentation tasks.
[ { "created": "Wed, 24 Mar 2021 18:54:23 GMT", "version": "v1" }, { "created": "Sun, 28 Mar 2021 07:14:37 GMT", "version": "v2" } ]
2021-03-30
[ [ "Lee", "Seunghun", "" ], [ "Cho", "Sunghyun", "" ], [ "Im", "Sunghoon", "" ] ]
In this paper, we present DRANet, a network architecture that disentangles image representations and transfers the visual attributes in a latent space for unsupervised cross-domain adaptation. Unlike the existing domain adaptation methods that learn associated features sharing a domain, DRANet preserves the distinctiveness of each domain's characteristics. Our model encodes individual representations of content (scene structure) and style (artistic appearance) from both source and target images. Then, it adapts the domain by incorporating the transferred style factor into the content factor along with learnable weights specified for each domain. This learning framework allows bi-/multi-directional domain adaptation with a single encoder-decoder network and aligns their domain shift. Additionally, we propose a content-adaptive domain transfer module that helps retain scene structure while transferring style. Extensive experiments show our model successfully separates content-style factors and synthesizes visually pleasing domain-transferred images. The proposed method demonstrates state-of-the-art performance on standard digit classification tasks as well as semantic segmentation tasks.
1502.06732
Zhiwen Zeng
Zhiwen Zeng, Xiangke Wang, Zhiqiang Zheng
Convergence Analysis using the Edge Laplacian: Robust Consensus of Nonlinear Multi-agent Systems via ISS Method
22 pages, 10 figures; Submitted to International Journal of Robust and Nonlinear Control
null
null
null
cs.SY cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This study develops an original and innovative matrix representation of the information flow in networked multi-agent systems. To begin with, the general concept of the edge Laplacian of a digraph is proposed, along with its algebraic properties. Benefiting from this novel graph-theoretic tool, we can build a bridge between the consensus problem and the edge agreement problem; we also show that the edge Laplacian sheds new light on solving the leaderless consensus problem. Based on the edge agreement framework, the technical challenges caused by unknown but bounded disturbances and inherently nonlinear dynamics can be well handled. In particular, we design an integrated procedure for a new robust consensus protocol that is based on a blend of algebraic graph theory and the newly developed cyclic-small-gain theorem. Besides, to highlight the intricate relationship between the original graph and the cyclic-small-gain theorem, the concept of an edge-interconnection graph is introduced for the first time. Finally, simulation results are provided to verify the theoretical analysis.
[ { "created": "Tue, 24 Feb 2015 09:52:52 GMT", "version": "v1" } ]
2015-02-25
[ [ "Zeng", "Zhiwen", "" ], [ "Wang", "Xiangke", "" ], [ "Zheng", "Zhiqiang", "" ] ]
This study develops an original and innovative matrix representation of the information flow in networked multi-agent systems. To begin with, the general concept of the edge Laplacian of a digraph is proposed, along with its algebraic properties. Benefiting from this novel graph-theoretic tool, we can build a bridge between the consensus problem and the edge agreement problem; we also show that the edge Laplacian sheds new light on solving the leaderless consensus problem. Based on the edge agreement framework, the technical challenges caused by unknown but bounded disturbances and inherently nonlinear dynamics can be well handled. In particular, we design an integrated procedure for a new robust consensus protocol that is based on a blend of algebraic graph theory and the newly developed cyclic-small-gain theorem. Besides, to highlight the intricate relationship between the original graph and the cyclic-small-gain theorem, the concept of an edge-interconnection graph is introduced for the first time. Finally, simulation results are provided to verify the theoretical analysis.
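One common way to construct an edge Laplacian, sketched here for concreteness, starts from the node-edge incidence matrix E and takes L_e = EᵀE, an (#edges × #edges) matrix. This is the standard graph-theoretic construction; whether the paper's digraph definition coincides with it in every detail is not stated in the abstract, so treat the sketch as illustrative.

```python
def incidence(n_nodes, edges):
    """Node-edge incidence matrix of a digraph: +1 at the tail and -1 at
    the head of each directed edge (tail, head)."""
    E = [[0] * len(edges) for _ in range(n_nodes)]
    for k, (tail, head) in enumerate(edges):
        E[tail][k] = 1
        E[head][k] = -1
    return E

def edge_laplacian(n_nodes, edges):
    """Edge Laplacian L_e = E^T E, indexed by edges rather than nodes."""
    E = incidence(n_nodes, edges)
    m = len(edges)
    return [[sum(E[v][i] * E[v][j] for v in range(n_nodes))
             for j in range(m)] for i in range(m)]

# Path digraph 0 -> 1 -> 2: two edges sharing node 1.
Le = edge_laplacian(3, [(0, 1), (1, 2)])
```

Each diagonal entry is 2 (every edge touches two nodes), and the off-diagonal entries record how edges share endpoints; this edge-indexed view is what lets consensus questions be restated as edge agreement questions.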
2103.08698
Zden\v{e}k Dvo\v{r}\'ak
Zden\v{e}k Dvo\v{r}\'ak
Approximation metatheorems for classes with bounded expansion
35 pages, no figures; revised the presentation
null
null
null
cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a number of approximation metatheorems for monotone maximization problems expressible in first-order logic, in substantially more general settings than previously known. We obtain * a constant-factor approximation algorithm in any class of graphs with bounded expansion, * a QPTAS in any class with strongly sublinear separators, and * a PTAS in any fractionally treewidth-fragile class (which includes all common classes with strongly sublinear separators). Moreover, our tools also give an exact subexponential-time algorithm in any class with strongly sublinear separators.
[ { "created": "Mon, 15 Mar 2021 20:26:05 GMT", "version": "v1" }, { "created": "Sat, 11 Sep 2021 16:12:58 GMT", "version": "v2" }, { "created": "Sat, 9 Oct 2021 23:06:38 GMT", "version": "v3" } ]
2021-10-12
[ [ "Dvořák", "Zdeněk", "" ] ]
We give a number of approximation metatheorems for monotone maximization problems expressible in first-order logic, in substantially more general settings than previously known. We obtain * a constant-factor approximation algorithm in any class of graphs with bounded expansion, * a QPTAS in any class with strongly sublinear separators, and * a PTAS in any fractionally treewidth-fragile class (which includes all common classes with strongly sublinear separators). Moreover, our tools also give an exact subexponential-time algorithm in any class with strongly sublinear separators.
2302.13399
Lingjie Kong
Lingjie Kong and Yun Liao
Path Integral Based Convolution and Pooling for Heterogeneous Graph Neural Networks
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph neural networks (GNNs) extend deep learning to graph-structured datasets. Similar to Convolutional Neural Networks (CNNs) used in image prediction, convolutional and pooling layers are the foundation of GNNs' success on graph prediction tasks. The initial PAN paper uses path integral based graph neural networks for graph prediction. Specifically, it uses a convolution operation that involves every path linking the message sender and receiver, with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. It further generalizes such a convolution operation to a new transition matrix called the maximal entropy transition (MET) matrix. Because the diagonal entries of the MET matrix are directly related to the subgraph centrality, it provides a natural mechanism for pooling based on the centrality score. However, the initial PAN paper only considers node features. We further extend its capability to handle complex heterogeneous graphs including both node and edge features.
[ { "created": "Sun, 26 Feb 2023 20:05:23 GMT", "version": "v1" } ]
2023-02-28
[ [ "Kong", "Lingjie", "" ], [ "Liao", "Yun", "" ] ]
Graph neural networks (GNNs) extend deep learning to graph-structured datasets. Similar to Convolutional Neural Networks (CNNs) used in image prediction, convolutional and pooling layers are the foundation of GNNs' success on graph prediction tasks. The initial PAN paper uses path integral based graph neural networks for graph prediction. Specifically, it uses a convolution operation that involves every path linking the message sender and receiver, with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. It further generalizes such a convolution operation to a new transition matrix called the maximal entropy transition (MET) matrix. Because the diagonal entries of the MET matrix are directly related to the subgraph centrality, it provides a natural mechanism for pooling based on the centrality score. However, the initial PAN paper only considers node features. We further extend its capability to handle complex heterogeneous graphs including both node and edge features.
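The path-integral convolution above can be caricatured as a weighted sum of adjacency-matrix powers (paths of length 0, 1, 2, ...), row-normalized into a transition matrix. This is a deliberate simplification: the true MET matrix ties the per-length weights to the maximal entropy random walk, whereas the uniform weights below are an assumption made purely for illustration.

```python
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def met_like(adj, max_len, weights):
    """Simplified PAN-style operator: weighted sum of adjacency powers
    A^0 .. A^max_len (paths up to length max_len), row-normalized into a
    transition matrix. Uniform weights here stand in for the maximal
    entropy weighting of the real MET matrix."""
    n = len(adj)
    power = [[float(i == j) for j in range(n)] for i in range(n)]  # A^0 = I
    total = [[0.0] * n for _ in range(n)]
    for k in range(max_len + 1):
        for i in range(n):
            for j in range(n):
                total[i][j] += weights[k] * power[i][j]
        power = matmul(power, adj)
    # Row-normalize so each row is a probability distribution.
    return [[v / sum(row) for v in row] for row in total]

adj = [[0, 1], [1, 0]]                    # a single undirected edge
T = met_like(adj, max_len=2, weights=[1.0, 1.0, 1.0])
```

Even in this toy, the diagonal entries aggregate closed paths through each node, which hints at why the MET diagonal connects to subgraph centrality and hence to centrality-based pooling.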
2310.12274
Chen Jin
Chen Jin, Ryutaro Tanno, Amrutha Saseendran, Tom Diethe, Philip Teare
An Image is Worth Multiple Words: Discovering Object Level Concepts using Multi-Concept Prompt Learning
ICML 2024; project page: https://astrazeneca.github.io/mcpl.github.io
null
null
null
cs.CV cs.AI cs.CL cs.GR cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Textual Inversion, a prompt learning method, learns a singular text embedding for a new "word" to represent image style and appearance, allowing it to be integrated into natural language sentences to generate novel synthesised images. However, identifying multiple unknown object-level concepts within one scene remains a complex challenge. While recent methods have resorted to cropping or masking individual images to learn multiple concepts, these techniques often require prior knowledge of new concepts and are labour-intensive. To address this challenge, we introduce Multi-Concept Prompt Learning (MCPL), where multiple unknown "words" are simultaneously learned from a single sentence-image pair, without any imagery annotations. To enhance the accuracy of word-concept correlation and refine attention mask boundaries, we propose three regularisation techniques: Attention Masking, Prompts Contrastive Loss, and Bind Adjective. Extensive quantitative comparisons with both real-world categories and biomedical images demonstrate that our method can learn new semantically disentangled concepts. Our approach emphasises learning solely from textual embeddings, using less than 10% of the storage space compared to others. The project page, code, and data are available at https://astrazeneca.github.io/mcpl.github.io.
[ { "created": "Wed, 18 Oct 2023 19:18:19 GMT", "version": "v1" }, { "created": "Sat, 25 May 2024 00:01:46 GMT", "version": "v2" } ]
2024-05-28
[ [ "Jin", "Chen", "" ], [ "Tanno", "Ryutaro", "" ], [ "Saseendran", "Amrutha", "" ], [ "Diethe", "Tom", "" ], [ "Teare", "Philip", "" ] ]
Textual Inversion, a prompt learning method, learns a singular text embedding for a new "word" to represent image style and appearance, allowing it to be integrated into natural language sentences to generate novel synthesised images. However, identifying multiple unknown object-level concepts within one scene remains a complex challenge. While recent methods have resorted to cropping or masking individual images to learn multiple concepts, these techniques often require prior knowledge of new concepts and are labour-intensive. To address this challenge, we introduce Multi-Concept Prompt Learning (MCPL), where multiple unknown "words" are simultaneously learned from a single sentence-image pair, without any imagery annotations. To enhance the accuracy of word-concept correlation and refine attention mask boundaries, we propose three regularisation techniques: Attention Masking, Prompts Contrastive Loss, and Bind Adjective. Extensive quantitative comparisons with both real-world categories and biomedical images demonstrate that our method can learn new semantically disentangled concepts. Our approach emphasises learning solely from textual embeddings, using less than 10% of the storage space compared to others. The project page, code, and data are available at https://astrazeneca.github.io/mcpl.github.io.
1304.6501
Evmorfia Argyriou N.
Evmorfia N. Argyriou and Aikaterini A. Sotiraki and Antonios Symvonis
Occupational Fraud Detection Through Visualization
null
null
null
null
cs.CY cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Occupational fraud affects many companies worldwide, causing them economic loss and liability issues towards their customers and other involved entities. Detecting internal fraud in a company requires significant effort and, unfortunately, such fraud cannot be entirely prevented. Internal auditors have to process a huge amount of data produced by diverse systems, which is in most cases in textual form, with little automated support. In this paper, we exploit the advantages of information visualization and present a system that aims to detect occupational fraud in systems which involve a pair of entities (e.g., an employee and a client) and periodic activity. The main visualization is based on a spiral system on which the events are drawn appropriately according to their time-stamp. Suspicious events are considered to be those which appear along the same radius or on close radii of the spiral. Before producing the visualization, the system ranks both involved entities according to the specifications of the internal auditor and generates a video file of the activity such that events with strong evidence of fraud appear first in the video. The system is also equipped with several different visualizations and mechanisms in order to meet the requirements of an internal fraud detection system.
[ { "created": "Wed, 24 Apr 2013 07:57:53 GMT", "version": "v1" } ]
2013-04-25
[ [ "Argyriou", "Evmorfia N.", "" ], [ "Sotiraki", "Aikaterini A.", "" ], [ "Symvonis", "Antonios", "" ] ]
Occupational fraud affects many companies worldwide, causing them economic loss and liability issues towards their customers and other involved entities. Detecting internal fraud in a company requires significant effort and, unfortunately, such fraud cannot be entirely prevented. Internal auditors have to process a huge amount of data produced by diverse systems, which is in most cases in textual form, with little automated support. In this paper, we exploit the advantages of information visualization and present a system that aims to detect occupational fraud in systems which involve a pair of entities (e.g., an employee and a client) and periodic activity. The main visualization is based on a spiral system on which the events are drawn appropriately according to their time-stamp. Suspicious events are considered to be those which appear along the same radius or on close radii of the spiral. Before producing the visualization, the system ranks both involved entities according to the specifications of the internal auditor and generates a video file of the activity such that events with strong evidence of fraud appear first in the video. The system is also equipped with several different visualizations and mechanisms in order to meet the requirements of an internal fraud detection system.
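The spiral mapping at the heart of such a visualization can be sketched directly: one full turn per period, radius growing with elapsed time, so events recurring at the same phase of the period line up along the same radius. The period length and growth rate below are hypothetical parameters, not values from the paper.

```python
import math

def spiral_point(timestamp, period, growth=1.0):
    """Map an event timestamp onto an Archimedean spiral: one full turn
    per `period`, radius growing with elapsed time. Periodic events
    (same phase each cycle) land along the same radius line."""
    turns = timestamp / period
    theta = 2.0 * math.pi * turns
    r = growth * turns
    return (r * math.cos(theta), r * math.sin(theta))

# Two events exactly one period apart share the same angular direction.
period = 30.0                      # hypothetical monthly cycle, in days
a = spiral_point(15.0, period)     # mid-cycle, first turn
b = spiral_point(45.0, period)     # mid-cycle, second turn
```

Angular alignment across turns is what makes suspicious periodic activity visually obvious: a fraudulent transaction repeated every cycle traces a straight radial line through the spiral.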
2404.08850
Amit Sharma
Amit Sharma, Teodor-Dumitru Ene, Kishor Kunal, Mingjie Liu, Zafar Hasan and Haoxing Ren
Assessing Economic Viability: A Comparative Analysis of Total Cost of Ownership for Domain-Adapted Large Language Models versus State-of-the-art Counterparts in Chip Design Coding Assistance
Paper accepted in IEEE-ACM conference: 2024 IEEE LLM-Aided Design Workshop (LAD)
null
null
null
cs.AI cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a comparative analysis of total cost of ownership (TCO) and performance between domain-adapted large language models (LLMs) and state-of-the-art (SoTA) LLMs, with a particular emphasis on tasks related to coding assistance for chip design. We examine the TCO and performance metrics of a domain-adapted LLM, ChipNeMo, against two leading LLMs, Claude 3 Opus and ChatGPT-4 Turbo, to assess their efficacy in chip design code generation. Through a detailed evaluation of model accuracy, training methodologies, and operational expenditures, this study aims to provide stakeholders with critical information to select the most economically viable and performance-efficient solutions for their specific needs. Our results underscore the benefits of employing domain-adapted models, such as ChipNeMo, which demonstrate improved performance at significantly reduced costs compared to their general-purpose counterparts. In particular, we reveal the potential of domain-adapted LLMs to decrease TCO by approximately 90%-95%; these cost advantages become increasingly pronounced as the deployment scale expands, making domain-adapted LLMs an attractive option for organizations with substantial coding needs supported by LLMs.
[ { "created": "Fri, 12 Apr 2024 23:37:56 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 17:11:44 GMT", "version": "v2" } ]
2024-05-29
[ [ "Sharma", "Amit", "" ], [ "Ene", "Teodor-Dumitru", "" ], [ "Kunal", "Kishor", "" ], [ "Liu", "Mingjie", "" ], [ "Hasan", "Zafar", "" ], [ "Ren", "Haoxing", "" ] ]
This paper presents a comparative analysis of total cost of ownership (TCO) and performance between domain-adapted large language models (LLMs) and state-of-the-art (SoTA) LLMs, with a particular emphasis on tasks related to coding assistance for chip design. We examine the TCO and performance metrics of a domain-adapted LLM, ChipNeMo, against two leading LLMs, Claude 3 Opus and ChatGPT-4 Turbo, to assess their efficacy in chip design code generation. Through a detailed evaluation of model accuracy, training methodologies, and operational expenditures, this study aims to provide stakeholders with critical information to select the most economically viable and performance-efficient solutions for their specific needs. Our results underscore the benefits of employing domain-adapted models, such as ChipNeMo, which demonstrate improved performance at significantly reduced costs compared to their general-purpose counterparts. In particular, we reveal the potential of domain-adapted LLMs to decrease TCO by approximately 90%-95%; these cost advantages become increasingly pronounced as the deployment scale expands, making domain-adapted LLMs an attractive option for organizations with substantial coding needs supported by LLMs.
0908.1077
Ali Tajer
Ali Tajer, Narayan Prasad, and Xiaodong Wang
Beamforming and Rate Allocation in MISO Cognitive Radio Networks
32 pages, 6 figures
null
10.1109/TSP.2009.2031280
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider decentralized multi-antenna cognitive radio networks where secondary (cognitive) users are granted simultaneous spectrum access along with license-holding (primary) users. We treat the problem of distributed beamforming and rate allocation for the secondary users such that the minimum weighted secondary rate is maximized. Such an optimization is subject to (1) a limited weighted sum-power budget for the secondary users and (2) guaranteed protection for the primary users in the sense that the interference level imposed on each primary receiver does not exceed a specified level. Based on the decoding method deployed by the secondary receivers, we consider three scenarios for solving this problem. In the first scenario each secondary receiver decodes only its designated transmitter while suppressing the rest as Gaussian interferers (single-user decoding). In the second case each secondary receiver employs the maximum likelihood decoder (MLD) to jointly decode all secondary transmissions, and in the third one each secondary receiver uses the unconstrained group decoder (UGD). By deploying the UGD, each secondary user is allowed to decode any arbitrary subset of users (which contains its designated user) after suppressing or canceling the remaining users.
[ { "created": "Fri, 7 Aug 2009 15:46:00 GMT", "version": "v1" } ]
2015-05-13
[ [ "Tajer", "Ali", "" ], [ "Prasad", "Narayan", "" ], [ "Wang", "Xiaodong", "" ] ]
We consider decentralized multi-antenna cognitive radio networks where secondary (cognitive) users are granted simultaneous spectrum access along with license-holding (primary) users. We treat the problem of distributed beamforming and rate allocation for the secondary users such that the minimum weighted secondary rate is maximized. Such an optimization is subject to (1) a limited weighted sum-power budget for the secondary users and (2) guaranteed protection for the primary users in the sense that the interference level imposed on each primary receiver does not exceed a specified level. Based on the decoding method deployed by the secondary receivers, we consider three scenarios for solving this problem. In the first scenario each secondary receiver decodes only its designated transmitter while suppressing the rest as Gaussian interferers (single-user decoding). In the second case each secondary receiver employs the maximum likelihood decoder (MLD) to jointly decode all secondary transmissions, and in the third one each secondary receiver uses the unconstrained group decoder (UGD). By deploying the UGD, each secondary user is allowed to decode any arbitrary subset of users (which contains its designated user) after suppressing or canceling the remaining users.
1612.00414
Farzad Salehisadaghiani
Farzad Salehisadaghiani and Lacra Pavel
Distributed Nash Equilibrium Seeking via the Alternating Direction Method of Multipliers
null
null
null
null
cs.SY cs.GT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, the problem of finding a Nash equilibrium of a multi-player game is considered. The players are only aware of their own cost functions as well as the action space of all players. We develop a relatively fast algorithm within the framework of inexact-ADMM. It requires a communication graph for the information exchange between the players as well as a few mild assumptions on cost functions. The convergence proof of the algorithm to a Nash equilibrium of the game is then provided. Moreover, the convergence rate is investigated via simulations.
[ { "created": "Thu, 1 Dec 2016 20:23:48 GMT", "version": "v1" } ]
2017-05-09
[ [ "Salehisadaghiani", "Farzad", "" ], [ "Pavel", "Lacra", "" ] ]
In this paper, the problem of finding a Nash equilibrium of a multi-player game is considered. The players are only aware of their own cost functions as well as the action space of all players. We develop a relatively fast algorithm within the framework of inexact-ADMM. It requires a communication graph for the information exchange between the players as well as a few mild assumptions on cost functions. The convergence proof of the algorithm to a Nash equilibrium of the game is then provided. Moreover, the convergence rate is investigated via simulations.
1810.11112
Ammar Ahmad Awan
Ammar Ahmad Awan, Jeroen Bedorf, Ching-Hsiang Chu, Hari Subramoni, and Dhabaleswar K. Panda
Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation
10 pages, 9 figures, submitted to IEEE IPDPS 2019 for peer-review
IEEE CCGrid, 2019
10.1109/CCGRID.2019.00064
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
TensorFlow has been the most widely adopted Machine/Deep Learning framework. However, little exists in the literature that provides a thorough understanding of the capabilities which TensorFlow offers for the distributed training of large ML/DL models that need computation and communication at scale. Most commonly used distributed training approaches for TF can be categorized as follows: 1) Google Remote Procedure Call (gRPC), 2) gRPC+X: X=(InfiniBand Verbs, Message Passing Interface, and GPUDirect RDMA), and 3) No-gRPC: Baidu Allreduce with MPI, Horovod with MPI, and Horovod with NVIDIA NCCL. In this paper, we provide an in-depth performance characterization and analysis of these distributed training approaches on various GPU clusters including the Piz Daint system (No. 6 on Top500). We perform experiments to gain novel insights along the following vectors: 1) Application-level scalability of DNN training, 2) Effect of Batch Size on scaling efficiency, 3) Impact of the MPI library used for no-gRPC approaches, and 4) Type and size of DNN architectures. Based on these experiments, we present two key insights: 1) Overall, No-gRPC designs achieve better performance compared to gRPC-based approaches for most configurations, and 2) The performance of No-gRPC is heavily influenced by the gradient aggregation using Allreduce. Finally, we propose a truly CUDA-Aware MPI Allreduce design that exploits CUDA kernels and pointer caching to perform large reductions efficiently. Our proposed designs offer 5-17X better performance than NCCL2 for small and medium messages, and reduce latency by 29% for large messages. The proposed optimizations help Horovod-MPI to achieve approximately 90% scaling efficiency for ResNet-50 training on 64 GPUs. Further, Horovod-MPI achieves 1.8X and 3.2X higher throughput than the native gRPC method for ResNet-50 and MobileNet, respectively, on the Piz Daint cluster.
[ { "created": "Thu, 25 Oct 2018 21:25:26 GMT", "version": "v1" } ]
2019-11-14
[ [ "Awan", "Ammar Ahmad", "" ], [ "Bedorf", "Jeroen", "" ], [ "Chu", "Ching-Hsiang", "" ], [ "Subramoni", "Hari", "" ], [ "Panda", "Dhabaleswar K.", "" ] ]
TensorFlow has been the most widely adopted Machine/Deep Learning framework. However, little exists in the literature that provides a thorough understanding of the capabilities which TensorFlow offers for the distributed training of large ML/DL models that need computation and communication at scale. Most commonly used distributed training approaches for TF can be categorized as follows: 1) Google Remote Procedure Call (gRPC), 2) gRPC+X: X=(InfiniBand Verbs, Message Passing Interface, and GPUDirect RDMA), and 3) No-gRPC: Baidu Allreduce with MPI, Horovod with MPI, and Horovod with NVIDIA NCCL. In this paper, we provide an in-depth performance characterization and analysis of these distributed training approaches on various GPU clusters including the Piz Daint system (No. 6 on Top500). We perform experiments to gain novel insights along the following vectors: 1) Application-level scalability of DNN training, 2) Effect of Batch Size on scaling efficiency, 3) Impact of the MPI library used for no-gRPC approaches, and 4) Type and size of DNN architectures. Based on these experiments, we present two key insights: 1) Overall, No-gRPC designs achieve better performance compared to gRPC-based approaches for most configurations, and 2) The performance of No-gRPC is heavily influenced by the gradient aggregation using Allreduce. Finally, we propose a truly CUDA-Aware MPI Allreduce design that exploits CUDA kernels and pointer caching to perform large reductions efficiently. Our proposed designs offer 5-17X better performance than NCCL2 for small and medium messages, and reduce latency by 29% for large messages. The proposed optimizations help Horovod-MPI to achieve approximately 90% scaling efficiency for ResNet-50 training on 64 GPUs. Further, Horovod-MPI achieves 1.8X and 3.2X higher throughput than the native gRPC method for ResNet-50 and MobileNet, respectively, on the Piz Daint cluster.
2311.03078
Md. Shahad Mahmud Chowdhury
Sadia Afrin, Md. Shahad Mahmud Chowdhury, Md. Ekramul Islam, Faisal Ahamed Khan, Labib Imam Chowdhury, MD. Motahar Mahtab, Nazifa Nuha Chowdhury, Massud Forkan, Neelima Kundu, Hakim Arif, Mohammad Mamun Or Rashid, Mohammad Ruhul Amin, Nabeel Mohammed
BanLemma: A Word Formation Dependent Rule and Dictionary Based Bangla Lemmatizer
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Lemmatization holds significance in both natural language processing (NLP) and linguistics, as it effectively decreases data density and aids in comprehending contextual meaning. However, due to its highly inflected nature and morphological richness, lemmatization of Bangla text poses a complex challenge. In this study, we propose linguistic rules for lemmatization and utilize a dictionary along with the rules to design a lemmatizer specifically for Bangla. Our system aims to lemmatize words based on their part-of-speech class within a given sentence. Unlike previous rule-based approaches, we analyzed suffix marker occurrence according to the morpho-syntactic values and then utilized sequences of suffix markers instead of entire suffixes. To develop our rules, we analyze a large corpus of Bangla text from various domains, sources, and time periods to observe the word formation of inflected words. The lemmatizer achieves an accuracy of 96.36% when tested against a manually annotated test dataset by trained linguists and demonstrates competitive performance on three previously published Bangla lemmatization datasets. We are making the code and datasets publicly available at https://github.com/eblict-gigatech/BanLemma in order to contribute to the further advancement of Bangla NLP.
[ { "created": "Mon, 6 Nov 2023 13:02:07 GMT", "version": "v1" } ]
2023-11-07
[ [ "Afrin", "Sadia", "" ], [ "Chowdhury", "Md. Shahad Mahmud", "" ], [ "Islam", "Md. Ekramul", "" ], [ "Khan", "Faisal Ahamed", "" ], [ "Chowdhury", "Labib Imam", "" ], [ "Mahtab", "MD. Motahar", "" ], [ "Chowdhury", "Nazifa Nuha", "" ], [ "Forkan", "Massud", "" ], [ "Kundu", "Neelima", "" ], [ "Arif", "Hakim", "" ], [ "Rashid", "Mohammad Mamun Or", "" ], [ "Amin", "Mohammad Ruhul", "" ], [ "Mohammed", "Nabeel", "" ] ]
Lemmatization holds significance in both natural language processing (NLP) and linguistics, as it effectively decreases data density and aids in comprehending contextual meaning. However, due to its highly inflected nature and morphological richness, lemmatization of Bangla text poses a complex challenge. In this study, we propose linguistic rules for lemmatization and utilize a dictionary along with the rules to design a lemmatizer specifically for Bangla. Our system aims to lemmatize words based on their part-of-speech class within a given sentence. Unlike previous rule-based approaches, we analyzed suffix marker occurrence according to the morpho-syntactic values and then utilized sequences of suffix markers instead of entire suffixes. To develop our rules, we analyze a large corpus of Bangla text from various domains, sources, and time periods to observe the word formation of inflected words. The lemmatizer achieves an accuracy of 96.36% when tested against a manually annotated test dataset by trained linguists and demonstrates competitive performance on three previously published Bangla lemmatization datasets. We are making the code and datasets publicly available at https://github.com/eblict-gigatech/BanLemma in order to contribute to the further advancement of Bangla NLP.
2405.09942
Siliang Ma
Siliang Ma, Yong Xu
FPDIoU Loss: A Loss Function for Efficient Bounding Box Regression of Rotated Object Detection
arXiv admin note: text overlap with arXiv:2307.07662, text overlap with arXiv:1902.09630 by other authors
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bounding box regression is one of the important steps of object detection. However, rotation detectors often involve a more complicated loss based on SkewIoU which is unfriendly to gradient-based training. Most of the existing loss functions for rotated object detection calculate the difference between two bounding boxes by focusing only on the deviation of area or the distance between points (e.g., $\mathcal{L}_{Smooth-\ell 1}$, $\mathcal{L}_{RotatedIoU}$ and $\mathcal{L}_{PIoU}$). The calculation process of some loss functions is extremely complex (e.g., $\mathcal{L}_{KFIoU}$). In order to improve the efficiency and accuracy of bounding box regression for rotated object detection, we propose a novel metric for arbitrary shape comparison based on minimum points distance, which takes into account most of the factors from existing loss functions for rotated object detection, i.e., the overlapping or non-overlapping area, the central points distance and the rotation angle. We also propose a loss function called $\mathcal{L}_{FPDIoU}$ based on four points distance for accurate bounding box regression, focusing on faster and higher quality anchor boxes. In the experiments, $FPDIoU$ loss has been applied to state-of-the-art rotated object detection models (e.g., RTMDET, H2RBox) trained on three popular rotated object detection benchmarks, DOTA, DIOR and HRSC2016, and two benchmarks of arbitrarily oriented scene text detection, ICDAR 2017 RRC-MLT and ICDAR 2019 RRC-MLT, achieving better performance than existing loss functions.
[ { "created": "Thu, 16 May 2024 09:44:00 GMT", "version": "v1" }, { "created": "Sun, 19 May 2024 04:32:53 GMT", "version": "v2" } ]
2024-05-21
[ [ "Ma", "Siliang", "" ], [ "Xu", "Yong", "" ] ]
Bounding box regression is one of the important steps of object detection. However, rotation detectors often involve a more complicated loss based on SkewIoU which is unfriendly to gradient-based training. Most of the existing loss functions for rotated object detection calculate the difference between two bounding boxes by focusing only on the deviation of area or the distance between points (e.g., $\mathcal{L}_{Smooth-\ell 1}$, $\mathcal{L}_{RotatedIoU}$ and $\mathcal{L}_{PIoU}$). The calculation process of some loss functions is extremely complex (e.g., $\mathcal{L}_{KFIoU}$). In order to improve the efficiency and accuracy of bounding box regression for rotated object detection, we propose a novel metric for arbitrary shape comparison based on minimum points distance, which takes into account most of the factors from existing loss functions for rotated object detection, i.e., the overlapping or non-overlapping area, the central points distance and the rotation angle. We also propose a loss function called $\mathcal{L}_{FPDIoU}$ based on four points distance for accurate bounding box regression, focusing on faster and higher quality anchor boxes. In the experiments, $FPDIoU$ loss has been applied to state-of-the-art rotated object detection models (e.g., RTMDET, H2RBox) trained on three popular rotated object detection benchmarks, DOTA, DIOR and HRSC2016, and two benchmarks of arbitrarily oriented scene text detection, ICDAR 2017 RRC-MLT and ICDAR 2019 RRC-MLT, achieving better performance than existing loss functions.
2007.15951
Brian Kenji Iwana
Brian Kenji Iwana, Seiichi Uchida
An Empirical Survey of Data Augmentation for Time Series Classification with Neural Networks
null
null
10.1371/journal.pone.0254841
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent times, deep artificial neural networks have achieved many successes in pattern recognition. Part of this success can be attributed to the reliance on big data to increase generalization. However, in the field of time series recognition, many datasets are often very small. One method of addressing this problem is through the use of data augmentation. In this paper, we survey data augmentation techniques for time series and their application to time series classification with neural networks. We propose a taxonomy and outline the four families in time series data augmentation, including transformation-based methods, pattern mixing, generative models, and decomposition methods. Furthermore, we empirically evaluate 12 time series data augmentation methods on 128 time series classification datasets with six different types of neural networks. Through the results, we are able to analyze the characteristics, advantages, and disadvantages of each data augmentation method and provide recommendations for its use. This survey aims to help in the selection of time series data augmentation for neural network applications.
[ { "created": "Fri, 31 Jul 2020 10:33:54 GMT", "version": "v1" }, { "created": "Thu, 4 Feb 2021 09:58:48 GMT", "version": "v2" }, { "created": "Mon, 24 May 2021 07:40:30 GMT", "version": "v3" }, { "created": "Fri, 2 Jul 2021 09:15:08 GMT", "version": "v4" } ]
2021-09-15
[ [ "Iwana", "Brian Kenji", "" ], [ "Uchida", "Seiichi", "" ] ]
In recent times, deep artificial neural networks have achieved many successes in pattern recognition. Part of this success can be attributed to the reliance on big data to increase generalization. However, in the field of time series recognition, many datasets are often very small. One method of addressing this problem is through the use of data augmentation. In this paper, we survey data augmentation techniques for time series and their application to time series classification with neural networks. We propose a taxonomy and outline the four families in time series data augmentation, including transformation-based methods, pattern mixing, generative models, and decomposition methods. Furthermore, we empirically evaluate 12 time series data augmentation methods on 128 time series classification datasets with six different types of neural networks. Through the results, we are able to analyze the characteristics, advantages, and disadvantages of each data augmentation method and provide recommendations for its use. This survey aims to help in the selection of time series data augmentation for neural network applications.
2203.01927
Jannis Vamvas
Jannis Vamvas and Rico Sennrich
As Little as Possible, as Much as Necessary: Detecting Over- and Undertranslations with Contrastive Conditioning
ACL 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Omission and addition of content is a typical issue in neural machine translation. We propose a method for detecting such phenomena with off-the-shelf translation models. Using contrastive conditioning, we compare the likelihood of a full sequence under a translation model to the likelihood of its parts, given the corresponding source or target sequence. This allows us to pinpoint superfluous words in the translation and untranslated words in the source, even in the absence of a reference translation. The accuracy of our method is comparable to a supervised method that requires a custom quality estimation model.
[ { "created": "Thu, 3 Mar 2022 18:59:02 GMT", "version": "v1" } ]
2022-03-04
[ [ "Vamvas", "Jannis", "" ], [ "Sennrich", "Rico", "" ] ]
Omission and addition of content is a typical issue in neural machine translation. We propose a method for detecting such phenomena with off-the-shelf translation models. Using contrastive conditioning, we compare the likelihood of a full sequence under a translation model to the likelihood of its parts, given the corresponding source or target sequence. This allows us to pinpoint superfluous words in the translation and untranslated words in the source, even in the absence of a reference translation. The accuracy of our method is comparable to a supervised method that requires a custom quality estimation model.
2206.14846
Kaixuan Huang
Kaixuan Huang, Yu Wu, Xuezhou Zhang, Shenyinying Tu, Qingyun Wu, Mengdi Wang, Huazheng Wang
Provably Efficient Reinforcement Learning for Online Adaptive Influence Maximization
null
null
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online influence maximization aims to maximize the influence spread of content in a social network with an unknown network model by selecting a few seed nodes. Recent studies followed a non-adaptive setting, where the seed nodes are selected before the start of the diffusion process and network parameters are updated when the diffusion stops. We consider an adaptive version of the content-dependent online influence maximization problem where the seed nodes are sequentially activated based on real-time feedback. In this paper, we formulate the problem as an infinite-horizon discounted MDP under a linear diffusion process and present a model-based reinforcement learning solution. Our algorithm maintains a network model estimate and selects seed users adaptively, exploring the social network while improving the optimal policy optimistically. We establish an $\widetilde O(\sqrt{T})$ regret bound for our algorithm. Empirical evaluations on a synthetic network demonstrate the efficiency of our algorithm.
[ { "created": "Wed, 29 Jun 2022 18:17:28 GMT", "version": "v1" } ]
2022-07-01
[ [ "Huang", "Kaixuan", "" ], [ "Wu", "Yu", "" ], [ "Zhang", "Xuezhou", "" ], [ "Tu", "Shenyinying", "" ], [ "Wu", "Qingyun", "" ], [ "Wang", "Mengdi", "" ], [ "Wang", "Huazheng", "" ] ]
Online influence maximization aims to maximize the influence spread of content in a social network with an unknown network model by selecting a few seed nodes. Recent studies followed a non-adaptive setting, where the seed nodes are selected before the start of the diffusion process and network parameters are updated when the diffusion stops. We consider an adaptive version of the content-dependent online influence maximization problem where the seed nodes are sequentially activated based on real-time feedback. In this paper, we formulate the problem as an infinite-horizon discounted MDP under a linear diffusion process and present a model-based reinforcement learning solution. Our algorithm maintains a network model estimate and selects seed users adaptively, exploring the social network while improving the optimal policy optimistically. We establish an $\widetilde O(\sqrt{T})$ regret bound for our algorithm. Empirical evaluations on a synthetic network demonstrate the efficiency of our algorithm.
1907.09029
Bestoun Ahmed Dr.
Bestoun S. Ahmed and Angelo Gargantini and Kamal Z. Zamli and Cemal Yilmaz and Miroslav Bures and Marek Szeles
Code-Aware Combinatorial Interaction Testing
28 pages
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Combinatorial interaction testing (CIT) is a useful testing technique to address the interaction of input parameters in software systems. In many applications, the technique has been used as a systematic sampling technique to sample the enormous space of possible test cases. In the last decade, most research activities focused on the generation of CIT test suites, as it is a computationally complex problem. Although promising, less effort has been devoted to the application of CIT. In general, to apply CIT, practitioners must identify the input parameters of the software under test (SUT), feed these parameters to the CIT tool to generate the test suite, and then run those tests on the application with some pass and fail criteria for verification. Using this approach, CIT is used as a black-box testing technique without knowledge of the internal code. Although useful, in practice not all parameters have the same impact on the SUT. This paper introduces a different approach that uses CIT as a gray-box testing technique by considering the internal code structure of the SUT to determine the impact of each input parameter and then uses this impact in the test generation stage. We applied our approach to five reliable case studies. The results showed that this approach helps detect new faults compared to the equal-impact-parameter approach.
[ { "created": "Sun, 21 Jul 2019 20:27:28 GMT", "version": "v1" } ]
2019-07-23
[ [ "Ahmed", "Bestoun S.", "" ], [ "Gargantini", "Angelo", "" ], [ "Zamli", "Kamal Z.", "" ], [ "Yilmaz", "Cemal", "" ], [ "Bures", "Miroslav", "" ], [ "Szeles", "Marek", "" ] ]
Combinatorial interaction testing (CIT) is a useful testing technique to address the interaction of input parameters in software systems. In many applications, the technique has been used as a systematic sampling technique to sample the enormous space of possible test cases. In the last decade, most research activities focused on the generation of CIT test suites, as it is a computationally complex problem. Although promising, less effort has been devoted to the application of CIT. In general, to apply CIT, practitioners must identify the input parameters of the software under test (SUT), feed these parameters to the CIT tool to generate the test suite, and then run those tests on the application with some pass and fail criteria for verification. Using this approach, CIT is used as a black-box testing technique without knowledge of the internal code. Although useful, in practice not all parameters have the same impact on the SUT. This paper introduces a different approach that uses CIT as a gray-box testing technique by considering the internal code structure of the SUT to determine the impact of each input parameter and then uses this impact in the test generation stage. We applied our approach to five reliable case studies. The results showed that this approach helps detect new faults compared to the equal-impact-parameter approach.
2104.01772
Haimin Luo
Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, and Jingyi Yu
Convolutional Neural Opacity Radiance Fields
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Photo-realistic modeling and rendering of fuzzy objects with complex opacity are critical for numerous immersive VR/AR applications, but suffer from strong view-dependent brightness and color. In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects, which is the first to combine both explicit opacity supervision and a convolutional mechanism in the neural radiance field framework so as to enable high-quality appearance and globally consistent alpha matte generation in arbitrary novel views. More specifically, we propose an efficient sampling strategy along both the camera rays and the image plane, which enables efficient radiance field sampling and learning in a patch-wise manner, as well as a novel volumetric feature integration scheme that generates per-patch hybrid feature embeddings to reconstruct view-consistent, fine-detailed appearance and opacity output. We further adopt a patch-wise adversarial training scheme to preserve both high-frequency appearance and opacity details in a self-supervised framework. We also introduce an effective multi-view image capture system to capture high-quality color and alpha maps for challenging fuzzy objects. Extensive experiments on existing datasets and our new challenging fuzzy object dataset demonstrate that our method achieves photo-realistic, globally consistent, and fine-detailed appearance and opacity in free-viewpoint rendering for various fuzzy objects.
[ { "created": "Mon, 5 Apr 2021 04:46:46 GMT", "version": "v1" } ]
2021-04-06
[ [ "Luo", "Haimin", "" ], [ "Chen", "Anpei", "" ], [ "Zhang", "Qixuan", "" ], [ "Pang", "Bai", "" ], [ "Wu", "Minye", "" ], [ "Xu", "Lan", "" ], [ "Yu", "Jingyi", "" ] ]
Photo-realistic modeling and rendering of fuzzy objects with complex opacity are critical for numerous immersive VR/AR applications, but suffer from strong view-dependent brightness and color. In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects, which is the first to combine both explicit opacity supervision and a convolutional mechanism in the neural radiance field framework so as to enable high-quality appearance and globally consistent alpha matte generation in arbitrary novel views. More specifically, we propose an efficient sampling strategy along both the camera rays and the image plane, which enables efficient radiance field sampling and learning in a patch-wise manner, as well as a novel volumetric feature integration scheme that generates per-patch hybrid feature embeddings to reconstruct view-consistent, fine-detailed appearance and opacity output. We further adopt a patch-wise adversarial training scheme to preserve both high-frequency appearance and opacity details in a self-supervised framework. We also introduce an effective multi-view image capture system to capture high-quality color and alpha maps for challenging fuzzy objects. Extensive experiments on existing datasets and our new challenging fuzzy object dataset demonstrate that our method achieves photo-realistic, globally consistent, and fine-detailed appearance and opacity in free-viewpoint rendering for various fuzzy objects.
1705.06457
Augustin Speyer
Augustin Speyer, Robin Lemke
Information Density as a Factor for Variation in the Embedding of Relative Clauses
10 pages. To be submitted in a German version to 'Sprachwissenschaft'
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In German, relative clauses can be positioned in-situ or extraposed. A potential factor for the variation might be information density. In this study, this hypothesis is tested with a corpus of 17th century German funeral sermons. For each referent in the relative clauses and their matrix clauses, the attention state was determined (first calculation). In a second calculation, for each word the surprisal values were determined, using a bi-gram language model. In a third calculation, the surprisal values were accommodated as to whether it is the first occurrence of the word in question or not. All three calculations pointed in the same direction: With in-situ relative clauses, the rate of new referents was lower and the average surprisal values were lower, especially the accommodated surprisal values, than with extraposed relative clauses. This indicated that in-formation density is a factor governing the choice between in-situ and extraposed relative clauses. The study also sheds light on the intrinsic relation-ship between the information theoretic concept of information density and in-formation structural concepts such as givenness which are used under a more linguistic perspective.
[ { "created": "Thu, 18 May 2017 08:16:20 GMT", "version": "v1" } ]
2017-05-19
[ [ "Speyer", "Augustin", "" ], [ "Lemke", "Robin", "" ] ]
In German, relative clauses can be positioned in-situ or extraposed. A potential factor for the variation might be information density. In this study, this hypothesis is tested with a corpus of 17th century German funeral sermons. For each referent in the relative clauses and their matrix clauses, the attention state was determined (first calculation). In a second calculation, for each word the surprisal values were determined, using a bi-gram language model. In a third calculation, the surprisal values were accommodated as to whether it is the first occurrence of the word in question or not. All three calculations pointed in the same direction: With in-situ relative clauses, the rate of new referents was lower and the average surprisal values were lower, especially the accommodated surprisal values, than with extraposed relative clauses. This indicated that information density is a factor governing the choice between in-situ and extraposed relative clauses. The study also sheds light on the intrinsic relationship between the information theoretic concept of information density and information structural concepts such as givenness which are used under a more linguistic perspective.
1507.06199
Rani Izsak
Moran Feldman and Rani Izsak
Building a Good Team: Secretary Problems and the Supermodular Degree
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Secretary Problem, one has to hire the best among n candidates. The candidates are interviewed, one at a time, at a random order, and one has to decide on the spot, whether to hire a candidate or continue interviewing. It is well known that the best candidate can be hired with a probability of 1/e (Dynkin, 1963). Recent works extend this problem to settings in which multiple candidates can be hired, subject to some constraint. Here, one wishes to hire a set of candidates maximizing a given set function. Almost all extensions considered in the literature assume the objective set function is either linear or submodular. Unfortunately, real world functions might not have either of these properties. Consider, for example, a scenario where one hires researchers for a project. Indeed, it can be that some researchers can substitute others for that matter. However, it can also be that some combinations of researchers result in synergy (see, e.g, Woolley et al., Science 2010, for a research about collective intelligence). The first phenomenon can be modeled by a submoudlar set function, while the latter cannot. In this work, we study the secretary problem with an arbitrary non-negative monotone function, subject to a general matroid constraint. It is not difficult to prove that, generally, only very poor results can be obtained for this class of objective functions. We tackle this hardness by combining the following: 1.Parametrizing our algorithms by the supermodular degree of the objective function (defined by Feige and Izsak, ITCS 2013), which, roughly speaking, measures the distance of a function from being submodular. 2.Suggesting an (arguably) natural model that permits approximation guarantees that are polynomial in the supermodular degree (as opposed to the standard model which allows only exponential guarantees).
[ { "created": "Wed, 22 Jul 2015 14:15:10 GMT", "version": "v1" } ]
2015-07-23
[ [ "Feldman", "Moran", "" ], [ "Izsak", "Rani", "" ] ]
In the Secretary Problem, one has to hire the best among n candidates. The candidates are interviewed, one at a time, in a random order, and one has to decide on the spot whether to hire a candidate or continue interviewing. It is well known that the best candidate can be hired with a probability of 1/e (Dynkin, 1963). Recent works extend this problem to settings in which multiple candidates can be hired, subject to some constraint. Here, one wishes to hire a set of candidates maximizing a given set function. Almost all extensions considered in the literature assume the objective set function is either linear or submodular. Unfortunately, real world functions might not have either of these properties. Consider, for example, a scenario where one hires researchers for a project. Indeed, it can be that some researchers can substitute others for that matter. However, it can also be that some combinations of researchers result in synergy (see, e.g., Woolley et al., Science 2010, for a research about collective intelligence). The first phenomenon can be modeled by a submodular set function, while the latter cannot. In this work, we study the secretary problem with an arbitrary non-negative monotone function, subject to a general matroid constraint. It is not difficult to prove that, generally, only very poor results can be obtained for this class of objective functions. We tackle this hardness by combining the following: 1. Parametrizing our algorithms by the supermodular degree of the objective function (defined by Feige and Izsak, ITCS 2013), which, roughly speaking, measures the distance of a function from being submodular. 2. Suggesting an (arguably) natural model that permits approximation guarantees that are polynomial in the supermodular degree (as opposed to the standard model which allows only exponential guarantees).
2012.09790
Yilun Du
Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, Jiajun Wu
Neural Radiance Flow for 4D View Synthesis and Video Processing
ICCV 2021. Website: https://yilundu.github.io/nerflow/
null
null
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method, Neural Radiance Flow (NeRFlow),to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when inputs images are captured with only one camera. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.
[ { "created": "Thu, 17 Dec 2020 17:54:32 GMT", "version": "v1" }, { "created": "Sun, 5 Sep 2021 16:39:21 GMT", "version": "v2" } ]
2021-09-07
[ [ "Du", "Yilun", "" ], [ "Zhang", "Yinan", "" ], [ "Yu", "Hong-Xing", "" ], [ "Tenenbaum", "Joshua B.", "" ], [ "Wu", "Jiajun", "" ] ]
We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when input images are captured with only one camera. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.
2204.02822
Ritesh Kumar
Ritesh Kumar, Bornini Lahiri
Language Resources and Technologies for Non-Scheduled and Endangered Indian Languages
To appear in Proceedings of Conference on Sanskrit and Indian Languages: Technology
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present paper, we will present a survey of the language resources and technologies available for the non-scheduled and endangered languages of India. While there have been different estimates from different sources about the number of languages in India, it could be assumed that there are more than 1,000 languages currently being spoken in India. However barring some of the 22 languages included in the 8th Schedule of the Indian Constitution (called the scheduled languages), there is hardly any substantial resource or technology available for the rest of the languages. Nonetheless there have been some individual attempts at developing resources and technologies for the different languages across the country. Of late, some financial support has also become available for the endangered languages. In this paper, we give a summary of the resources and technologies for those Indian languages which are not included in the 8th schedule of the Indian Constitution and/or which are endangered.
[ { "created": "Wed, 6 Apr 2022 13:33:24 GMT", "version": "v1" } ]
2022-04-07
[ [ "Kumar", "Ritesh", "" ], [ "Lahiri", "Bornini", "" ] ]
In the present paper, we present a survey of the language resources and technologies available for the non-scheduled and endangered languages of India. While there have been different estimates from different sources about the number of languages in India, it could be assumed that there are more than 1,000 languages currently being spoken in India. However, barring some of the 22 languages included in the 8th Schedule of the Indian Constitution (called the scheduled languages), there is hardly any substantial resource or technology available for the rest of the languages. Nonetheless, there have been some individual attempts at developing resources and technologies for the different languages across the country. Of late, some financial support has also become available for the endangered languages. In this paper, we give a summary of the resources and technologies for those Indian languages which are not included in the 8th Schedule of the Indian Constitution and/or which are endangered.
2211.03509
Gang Cao
Zijie Lou, Gang Cao, Man Lin
Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation
null
null
null
null
cs.CV cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visually realistic GAN-generated facial images raise obvious concerns on potential misuse. Many effective forensic algorithms have been developed to detect such synthetic images in recent years. It is significant to assess the vulnerability of such forensic detectors against adversarial attacks. In this paper, we propose a new black-box attack method against GAN-generated image detectors. A novel contrastive learning strategy is adopted to train the encoder-decoder network based anti-forensic model under a contrastive loss function. GAN images and their simulated real counterparts are constructed as positive and negative samples, respectively. Leveraging on the trained attack model, imperceptible contrastive perturbation could be applied to input synthetic images for removing GAN fingerprint to some extent. As such, existing GAN-generated image detectors are expected to be deceived. Extensive experimental results verify that the proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs. High visual quality of the attacked images is also achieved. The source code will be available at https://github.com/ZXMMD/BAttGAND.
[ { "created": "Mon, 7 Nov 2022 12:56:14 GMT", "version": "v1" } ]
2022-11-08
[ [ "Lou", "Zijie", "" ], [ "Cao", "Gang", "" ], [ "Lin", "Man", "" ] ]
Visually realistic GAN-generated facial images raise obvious concerns on potential misuse. Many effective forensic algorithms have been developed to detect such synthetic images in recent years. It is important to assess the vulnerability of such forensic detectors against adversarial attacks. In this paper, we propose a new black-box attack method against GAN-generated image detectors. A novel contrastive learning strategy is adopted to train the encoder-decoder network based anti-forensic model under a contrastive loss function. GAN images and their simulated real counterparts are constructed as positive and negative samples, respectively. Leveraging the trained attack model, imperceptible contrastive perturbation could be applied to input synthetic images to remove the GAN fingerprint to some extent. As such, existing GAN-generated image detectors are expected to be deceived. Extensive experimental results verify that the proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs. High visual quality of the attacked images is also achieved. The source code will be available at https://github.com/ZXMMD/BAttGAND.
1409.1045
Uwe Aickelin
Josie C. McCullochy, Chris J. Hinde, Christian Wagner and Uwe Aickelin
A Fuzzy Directional Distance Measure
Proceedings of the 2014 World Congress on Computational Intelligence (WCCI 2014), pp. 141-148, 2014
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The measure of distance between two fuzzy sets is a fundamental tool within fuzzy set theory, however, distance measures currently within the literature use a crisp value to represent the distance between fuzzy sets. A real valued distance measure is developed into a fuzzy distance measure which better reflects the uncertainty inherent in fuzzy sets and a fuzzy directional distance measure is presented, which accounts for the direction of change between fuzzy sets. A multiplicative version is explored as a full maximal assignment is computationally intractable so an intermediate solution is offered.
[ { "created": "Wed, 3 Sep 2014 11:48:23 GMT", "version": "v1" } ]
2014-09-04
[ [ "McCullochy", "Josie C.", "" ], [ "Hinde", "Chris J.", "" ], [ "Wagner", "Christian", "" ], [ "Aickelin", "Uwe", "" ] ]
The measure of distance between two fuzzy sets is a fundamental tool within fuzzy set theory; however, distance measures currently within the literature use a crisp value to represent the distance between fuzzy sets. A real valued distance measure is developed into a fuzzy distance measure which better reflects the uncertainty inherent in fuzzy sets, and a fuzzy directional distance measure is presented, which accounts for the direction of change between fuzzy sets. A multiplicative version is explored, as a full maximal assignment is computationally intractable, so an intermediate solution is offered.
1712.01329
Dana Kianfar
Mircea Mironenco, Dana Kianfar, Ke Tran, Evangelos Kanoulas, Efstratios Gavves
Examining Cooperation in Visual Dialog Models
9 pages, 5 figures, 2 tables, code at http://github.com/danakianfar/Examining-Cooperation-in-VDM/
null
null
null
cs.CV cs.AI cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose a blackbox intervention method for visual dialog models, with the aim of assessing the contribution of individual linguistic or visual components. Concretely, we conduct structured or randomized interventions that aim to impair an individual component of the model, and observe changes in task performance. We reproduce a state-of-the-art visual dialog model and demonstrate that our methodology yields surprising insights, namely that both dialog and image information have minimal contributions to task performance. The intervention method presented here can be applied as a sanity check for the strength and robustness of each component in visual dialog systems.
[ { "created": "Mon, 4 Dec 2017 20:16:52 GMT", "version": "v1" } ]
2017-12-06
[ [ "Mironenco", "Mircea", "" ], [ "Kianfar", "Dana", "" ], [ "Tran", "Ke", "" ], [ "Kanoulas", "Evangelos", "" ], [ "Gavves", "Efstratios", "" ] ]
In this work we propose a blackbox intervention method for visual dialog models, with the aim of assessing the contribution of individual linguistic or visual components. Concretely, we conduct structured or randomized interventions that aim to impair an individual component of the model, and observe changes in task performance. We reproduce a state-of-the-art visual dialog model and demonstrate that our methodology yields surprising insights, namely that both dialog and image information have minimal contributions to task performance. The intervention method presented here can be applied as a sanity check for the strength and robustness of each component in visual dialog systems.
1909.05569
Michal Kleinbort
Michal Kleinbort, Edgar Granados, Kiril Solovey, Riccardo Bonalli, Kostas E. Bekris, Dan Halperin
Refined Analysis of Asymptotically-Optimal Kinodynamic Planning in the State-Cost Space
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel analysis of AO-RRT: a tree-based planner for motion planning with kinodynamic constraints, originally described by Hauser and Zhou (AO-X, 2016). AO-RRT explores the state-cost space and has been shown to efficiently obtain high-quality solutions in practice without relying on the availability of a computationally-intensive two-point boundary-value solver. Our main contribution is an optimality proof for the single-tree version of the algorithm---a variant that was not analyzed before. Our proof only requires a mild and easily-verifiable set of assumptions on the problem and system: Lipschitz-continuity of the cost function and the dynamics. In particular, we prove that for any system satisfying these assumptions, any trajectory having a piecewise-constant control function and positive clearance from the obstacles can be approximated arbitrarily well by a trajectory found by AO-RRT. We also discuss practical aspects of AO-RRT and present experimental comparisons of variants of the algorithm.
[ { "created": "Thu, 12 Sep 2019 11:18:55 GMT", "version": "v1" }, { "created": "Fri, 13 Sep 2019 04:58:54 GMT", "version": "v2" }, { "created": "Mon, 9 Mar 2020 14:43:35 GMT", "version": "v3" } ]
2020-03-10
[ [ "Kleinbort", "Michal", "" ], [ "Granados", "Edgar", "" ], [ "Solovey", "Kiril", "" ], [ "Bonalli", "Riccardo", "" ], [ "Bekris", "Kostas E.", "" ], [ "Halperin", "Dan", "" ] ]
We present a novel analysis of AO-RRT: a tree-based planner for motion planning with kinodynamic constraints, originally described by Hauser and Zhou (AO-X, 2016). AO-RRT explores the state-cost space and has been shown to efficiently obtain high-quality solutions in practice without relying on the availability of a computationally-intensive two-point boundary-value solver. Our main contribution is an optimality proof for the single-tree version of the algorithm---a variant that was not analyzed before. Our proof only requires a mild and easily-verifiable set of assumptions on the problem and system: Lipschitz-continuity of the cost function and the dynamics. In particular, we prove that for any system satisfying these assumptions, any trajectory having a piecewise-constant control function and positive clearance from the obstacles can be approximated arbitrarily well by a trajectory found by AO-RRT. We also discuss practical aspects of AO-RRT and present experimental comparisons of variants of the algorithm.
1511.08987
Napas Udomsak
Napas Udomsak
How do the naive Bayes classifier and the Support Vector Machine compare in their ability to forecast the Stock Exchange of Thailand?
16 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This essay investigates the question of how the naive Bayes classifier and the support vector machine compare in their ability to forecast the Stock Exchange of Thailand. The theory behind the SVM and the naive Bayes classifier is explored. The algorithms are trained using data from the month of January 2010, extracted from the MarketWatch.com website. Input features are selected based on previous studies of the SET100 Index. The Weka 3 software is used to create models from the labeled training data. Mean squared error and proportion of correctly classified instances, and a number of other error measurements are the used to compare the two algorithms. This essay shows that these two algorithms are currently not advanced enough to accurately model the stock exchange. Nevertheless, the naive Bayes is better than the support vector machine at predicting the Stock Exchange of Thailand.
[ { "created": "Sun, 29 Nov 2015 09:57:42 GMT", "version": "v1" } ]
2015-12-01
[ [ "Udomsak", "Napas", "" ] ]
This essay investigates the question of how the naive Bayes classifier and the support vector machine compare in their ability to forecast the Stock Exchange of Thailand. The theory behind the SVM and the naive Bayes classifier is explored. The algorithms are trained using data from the month of January 2010, extracted from the MarketWatch.com website. Input features are selected based on previous studies of the SET100 Index. The Weka 3 software is used to create models from the labeled training data. Mean squared error, proportion of correctly classified instances, and a number of other error measurements are then used to compare the two algorithms. This essay shows that these two algorithms are currently not advanced enough to accurately model the stock exchange. Nevertheless, the naive Bayes is better than the support vector machine at predicting the Stock Exchange of Thailand.
2401.15688
Zhenyu Wang
Zhenyu Wang, Enze Xie, Aoxue Li, Zhongdao Wang, Xihui Liu, Zhenguo Li
Divide and Conquer: Language Models can Plan and Self-Correct for Compositional Text-to-Image Generation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite significant advancements in text-to-image models for generating high-quality images, these methods still struggle to ensure the controllability of text prompts over images in the context of complex text prompts, especially when it comes to retaining object attributes and relationships. In this paper, we propose CompAgent, a training-free approach for compositional text-to-image generation, with a large language model (LLM) agent as its core. The fundamental idea underlying CompAgent is premised on a divide-and-conquer methodology. Given a complex text prompt containing multiple concepts including objects, attributes, and relationships, the LLM agent initially decomposes it, which entails the extraction of individual objects, their associated attributes, and the prediction of a coherent scene layout. These individual objects can then be independently conquered. Subsequently, the agent performs reasoning by analyzing the text, plans and employs the tools to compose these isolated objects. The verification and human feedback mechanism is finally incorporated into our agent to further correct the potential attribute errors and refine the generated images. Guided by the LLM agent, we propose a tuning-free multi-concept customization model and a layout-to-image generation model as the tools for concept composition, and a local image editing method as the tool to interact with the agent for verification. The scene layout controls the image generation process among these tools to prevent confusion among multiple objects. Extensive experiments demonstrate the superiority of our approach for compositional text-to-image generation: CompAgent achieves more than 10\% improvement on T2I-CompBench, a comprehensive benchmark for open-world compositional T2I generation. The extension to various related tasks also illustrates the flexibility of our CompAgent for potential applications.
[ { "created": "Sun, 28 Jan 2024 16:18:39 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2024 13:05:13 GMT", "version": "v2" } ]
2024-01-31
[ [ "Wang", "Zhenyu", "" ], [ "Xie", "Enze", "" ], [ "Li", "Aoxue", "" ], [ "Wang", "Zhongdao", "" ], [ "Liu", "Xihui", "" ], [ "Li", "Zhenguo", "" ] ]
Despite significant advancements in text-to-image models for generating high-quality images, these methods still struggle to ensure the controllability of text prompts over images in the context of complex text prompts, especially when it comes to retaining object attributes and relationships. In this paper, we propose CompAgent, a training-free approach for compositional text-to-image generation, with a large language model (LLM) agent as its core. The fundamental idea underlying CompAgent is premised on a divide-and-conquer methodology. Given a complex text prompt containing multiple concepts including objects, attributes, and relationships, the LLM agent initially decomposes it, which entails the extraction of individual objects, their associated attributes, and the prediction of a coherent scene layout. These individual objects can then be independently conquered. Subsequently, the agent performs reasoning by analyzing the text, plans and employs the tools to compose these isolated objects. The verification and human feedback mechanism is finally incorporated into our agent to further correct the potential attribute errors and refine the generated images. Guided by the LLM agent, we propose a tuning-free multi-concept customization model and a layout-to-image generation model as the tools for concept composition, and a local image editing method as the tool to interact with the agent for verification. The scene layout controls the image generation process among these tools to prevent confusion among multiple objects. Extensive experiments demonstrate the superiority of our approach for compositional text-to-image generation: CompAgent achieves more than 10\% improvement on T2I-CompBench, a comprehensive benchmark for open-world compositional T2I generation. The extension to various related tasks also illustrates the flexibility of our CompAgent for potential applications.
1601.06043
Junaid Qadir
Sana Habib, Junaid Qadir, Anwaar Ali, Durdana Habib, Ming Li, Arjuna Sathiaseelan
The Past, Present, and Future of Transport-Layer Multipath
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multipathing in communication networks is gaining momentum due to its attractive features of increased reliability, throughput, fault tolerance, and load balancing capabilities. In particular, wireless environments and datacenters are envisioned to become largely dependent on the power of multipathing for seamless handovers, virtual machine (VM) migration and in general, pooling less proficient resources together for achieving overall high proficiency. The transport layer, with its knowledge about end-to-end path characteristics, is well placed to enhance performance through better utilization of multiple paths. Realizing the importance of transport-layer multipath, this paper investigates the modernization of traditional connection establishment, flow control, sequence number splitting, acknowledgement, and flow scheduling mechanisms for use with multiple paths. Since congestion control defines a fundamental feature of the transport layer, we study the working of multipath rate control and analyze its stability and convergence. We also discuss how various multipath congestion control algorithms differ in their window increase and decrease functions, their TCP-friendliness, and responsiveness. To the best of our knowledge, this is the first in-depth survey paper that has chronicled the evolution of the transport layer of the Internet from the traditional single-path TCP to the recent development of the modern multipath TCP (MPTCP) protocol. Along with describing the history of this evolution, we also highlight in this paper the remaining challenges and research issues.
[ { "created": "Fri, 22 Jan 2016 15:38:11 GMT", "version": "v1" } ]
2016-01-25
[ [ "Habib", "Sana", "" ], [ "Qadir", "Junaid", "" ], [ "Ali", "Anwaar", "" ], [ "Habib", "Durdana", "" ], [ "Li", "Ming", "" ], [ "Sathiaseelan", "Arjuna", "" ] ]
Multipathing in communication networks is gaining momentum due to its attractive features of increased reliability, throughput, fault tolerance, and load balancing capabilities. In particular, wireless environments and datacenters are envisioned to become largely dependent on the power of multipathing for seamless handovers, virtual machine (VM) migration and in general, pooling less proficient resources together for achieving overall high proficiency. The transport layer, with its knowledge about end-to-end path characteristics, is well placed to enhance performance through better utilization of multiple paths. Realizing the importance of transport-layer multipath, this paper investigates the modernization of traditional connection establishment, flow control, sequence number splitting, acknowledgement, and flow scheduling mechanisms for use with multiple paths. Since congestion control defines a fundamental feature of the transport layer, we study the working of multipath rate control and analyze its stability and convergence. We also discuss how various multipath congestion control algorithms differ in their window increase and decrease functions, their TCP-friendliness, and responsiveness. To the best of our knowledge, this is the first in-depth survey paper that has chronicled the evolution of the transport layer of the Internet from the traditional single-path TCP to the recent development of the modern multipath TCP (MPTCP) protocol. Along with describing the history of this evolution, we also highlight in this paper the remaining challenges and research issues.
1010.4603
Aravind Iyengar
Aravind R. Iyengar, Paul H. Siegel, Jack K. Wolf
Write Channel Model for Bit-Patterned Media Recording
11 pages, 12 figures, journal
null
10.1109/TMAG.2010.2080667
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new write channel model for bit-patterned media recording that reflects the data dependence of write synchronization errors. It is shown that this model accommodates both substitution-like errors and insertion-deletion errors whose statistics are determined by an underlying channel state process. We study information theoretic properties of the write channel model, including the capacity, symmetric information rate, Markov-1 rate and the zero-error capacity.
[ { "created": "Fri, 22 Oct 2010 02:28:04 GMT", "version": "v1" } ]
2011-06-02
[ [ "Iyengar", "Aravind R.", "" ], [ "Siegel", "Paul H.", "" ], [ "Wolf", "Jack K.", "" ] ]
We propose a new write channel model for bit-patterned media recording that reflects the data dependence of write synchronization errors. It is shown that this model accommodates both substitution-like errors and insertion-deletion errors whose statistics are determined by an underlying channel state process. We study information theoretic properties of the write channel model, including the capacity, symmetric information rate, Markov-1 rate and the zero-error capacity.
2202.04620
Md Morshed Alam
Md Morshed Alam, Md Sajidul Islam Sajid, Weichao Wang, Jinpeng Wei (Department of Software and Information Systems, University of North Carolina at Charlotte, Charlotte, USA)
IoTMonitor: A Hidden Markov Model-based Security System to Identify Crucial Attack Nodes in Trigger-action IoT Platforms
This paper appears in the 2022 IEEE Wireless Communications and Networking Conference (WCNC 2022). Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
With the emergence and fast development of trigger-action platforms in IoT settings, security vulnerabilities caused by the interactions among IoT devices become more prevalent. An event occurrence at one device triggers an action in another device, which may eventually contribute to the creation of a chain of events in a network. Adversaries exploit this chain effect to compromise IoT devices and trigger actions of interest remotely just by injecting malicious events into the chain. To address security vulnerabilities caused by trigger-action scenarios, existing research efforts focus on validating the security properties of devices or verifying the occurrence of certain events based on their physical fingerprints on a device. We propose IoTMonitor, a security analysis system that discerns the most probable underlying chain of event occurrences from the physical evidence collected by sensors. We use the Baum-Welch algorithm to estimate transition and emission probabilities and the Viterbi algorithm to discern the event sequence. We can then identify the crucial nodes in the trigger-action sequence whose compromise allows attackers to reach their final goals. Experimental results of our system on the PEEVES datasets show that we can rebuild the event occurrence sequence with high accuracy from the observations and identify the crucial nodes on the attack paths.
[ { "created": "Wed, 9 Feb 2022 18:36:42 GMT", "version": "v1" } ]
2022-02-10
[ [ "Alam", "Md Morshed", "", "Department of Software and Information Systems, University of North Carolina\n at Charlotte, Charlotte, USA" ], [ "Sajid", "Md Sajidul Islam", "", "Department of Software and Information Systems, University of North Carolina\n at Charlotte, Charlotte, USA" ], [ "Wang", "Weichao", "", "Department of Software and Information Systems, University of North Carolina\n at Charlotte, Charlotte, USA" ], [ "Wei", "Jinpeng", "", "Department of Software and Information Systems, University of North Carolina\n at Charlotte, Charlotte, USA" ] ]
With the emergence and fast development of trigger-action platforms in IoT settings, security vulnerabilities caused by the interactions among IoT devices become more prevalent. An event occurrence at one device triggers an action in another device, which may eventually contribute to the creation of a chain of events in a network. Adversaries exploit this chain effect to compromise IoT devices and trigger actions of interest remotely just by injecting malicious events into the chain. To address security vulnerabilities caused by trigger-action scenarios, existing research efforts focus on validating the security properties of devices or verifying the occurrence of certain events based on their physical fingerprints on a device. We propose IoTMonitor, a security analysis system that discerns the most probable underlying chain of event occurrences from the physical evidence collected by sensors. We use the Baum-Welch algorithm to estimate transition and emission probabilities and the Viterbi algorithm to discern the event sequence. We can then identify the crucial nodes in the trigger-action sequence whose compromise allows attackers to reach their final goals. Experimental results of our system on the PEEVES datasets show that we can rebuild the event occurrence sequence with high accuracy from the observations and identify the crucial nodes on the attack paths.
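The decoding step described above can be sketched with a minimal Viterbi implementation in the log domain. The state names, observation symbols, and probabilities below are invented for illustration; in IoTMonitor the transition and emission probabilities would come from Baum-Welch estimation on sensor evidence.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence (log domain)."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), [s])
          for s in states}]
    for t in range(1, len(obs)):
        layer = {}
        for s in states:
            best_lp, best_prev = max(
                (V[t - 1][ps][0] + math.log(trans_p[ps][s]), ps) for ps in states)
            layer[s] = (best_lp + math.log(emit_p[s][obs[t]]),
                        V[t - 1][best_prev][1] + [s])
        V.append(layer)
    return max(V[-1].values())

# Hypothetical two-state example: is a device idle or triggered?
states = ("idle", "triggered")
start = {"idle": 0.8, "triggered": 0.2}
trans = {"idle": {"idle": 0.7, "triggered": 0.3},
         "triggered": {"idle": 0.4, "triggered": 0.6}}
emit = {"idle": {"quiet": 0.9, "vibration": 0.1},
        "triggered": {"quiet": 0.2, "vibration": 0.8}}
log_p, path = viterbi(["quiet", "vibration", "vibration"],
                      states, start, trans, emit)
```

Working in log probabilities avoids numerical underflow on long observation sequences, which matters when reconstructing extended attack chains.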
2406.07497
Judith Dineley Dr
Nicholas Cummins, Lauren L. White, Zahia Rahman, Catriona Lucas, Tian Pan, Ewan Carr, Faith Matcham, Johnny Downs, Richard J. Dobson and Judith Dineley
A pilot protocol and cohort for the investigation of non-pathological variability in speech
29 pages. Pre-peer-review version
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Background: Speech-based biomarkers have potential as a means for regular, objective assessment of symptom severity, remotely and in-clinic, in combination with advanced analytical models. However, the complex nature of speech and the often subtle changes associated with health mean that findings are highly dependent on methodological and cohort choices. These are often not reported adequately in studies investigating speech-based health assessment. Objective: To develop and apply an exemplar protocol to generate a pilot dataset of healthy speech with detailed metadata for the assessment of factors in the speech recording-analysis pipeline, including device choice, speech elicitation task and non-pathological variability. Methods: We developed our collection protocol and choice of exemplar speech features based on a thematic literature review. Our protocol includes the elicitation of three different speech types. With a focus on remote applications, we also chose to collect speech with three different microphone types. We developed a pipeline to extract a set of 14 exemplar speech features. Results: We collected speech from 28 individuals three times in one day, repeated at the same times 8-11 weeks later, and from 25 healthy individuals three times in one week. Participant characteristics collected included sex, age, native language status and voice use habits. A preliminary set of 14 speech features covering timing, prosody, voice quality, articulation and spectral moment characteristics was extracted, providing a resource of normative values. Conclusions: There are multiple methodological factors involved in the collection, processing and analysis of speech recordings. Consistent reporting and greater harmonisation of study protocols are urgently required to aid the translation of speech processing into clinical research and practice.
[ { "created": "Tue, 11 Jun 2024 17:32:28 GMT", "version": "v1" } ]
2024-06-12
[ [ "Cummins", "Nicholas", "" ], [ "White", "Lauren L.", "" ], [ "Rahman", "Zahia", "" ], [ "Lucas", "Catriona", "" ], [ "Pan", "Tian", "" ], [ "Carr", "Ewan", "" ], [ "Matcham", "Faith", "" ], [ "Downs", "Johnny", "" ], [ "Dobson", "Richard J.", "" ], [ "Dineley", "Judith", "" ] ]
Background: Speech-based biomarkers have potential as a means for regular, objective assessment of symptom severity, remotely and in-clinic, in combination with advanced analytical models. However, the complex nature of speech and the often subtle changes associated with health mean that findings are highly dependent on methodological and cohort choices. These are often not reported adequately in studies investigating speech-based health assessment. Objective: To develop and apply an exemplar protocol to generate a pilot dataset of healthy speech with detailed metadata for the assessment of factors in the speech recording-analysis pipeline, including device choice, speech elicitation task and non-pathological variability. Methods: We developed our collection protocol and choice of exemplar speech features based on a thematic literature review. Our protocol includes the elicitation of three different speech types. With a focus on remote applications, we also chose to collect speech with three different microphone types. We developed a pipeline to extract a set of 14 exemplar speech features. Results: We collected speech from 28 individuals three times in one day, repeated at the same times 8-11 weeks later, and from 25 healthy individuals three times in one week. Participant characteristics collected included sex, age, native language status and voice use habits. A preliminary set of 14 speech features covering timing, prosody, voice quality, articulation and spectral moment characteristics was extracted, providing a resource of normative values. Conclusions: There are multiple methodological factors involved in the collection, processing and analysis of speech recordings. Consistent reporting and greater harmonisation of study protocols are urgently required to aid the translation of speech processing into clinical research and practice.
2005.07427
Chen Luo
Lei Cai, Zhengzhang Chen, Chen Luo, Jiaping Gui, Jingchao Ni, Ding Li, Haifeng Chen
Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs
null
null
null
null
cs.LG cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting anomalies in dynamic graphs is a vital task, with numerous practical applications in areas such as security, finance, and social media. Previous network embedding based methods have been mostly focusing on learning good node representations, whereas largely ignoring the subgraph structural changes related to the target nodes in dynamic graphs. In this paper, we propose StrGNN, an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs. In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose the node labeling function to identify the role of each node in the subgraph. Then, we leverage graph convolution operation and Sortpooling layer to extract the fixed-size feature from each snapshot/timestamp. Based on the extracted features, we utilize Gated recurrent units (GRUs) to capture the temporal information for anomaly detection. Extensive experiments on six benchmark datasets and a real enterprise security system demonstrate the effectiveness of StrGNN.
[ { "created": "Fri, 15 May 2020 09:17:08 GMT", "version": "v1" }, { "created": "Mon, 25 May 2020 08:38:54 GMT", "version": "v2" } ]
2020-05-26
[ [ "Cai", "Lei", "" ], [ "Chen", "Zhengzhang", "" ], [ "Luo", "Chen", "" ], [ "Gui", "Jiaping", "" ], [ "Ni", "Jingchao", "" ], [ "Li", "Ding", "" ], [ "Chen", "Haifeng", "" ] ]
Detecting anomalies in dynamic graphs is a vital task, with numerous practical applications in areas such as security, finance, and social media. Previous network embedding based methods have been mostly focusing on learning good node representations, whereas largely ignoring the subgraph structural changes related to the target nodes in dynamic graphs. In this paper, we propose StrGNN, an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs. In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose the node labeling function to identify the role of each node in the subgraph. Then, we leverage graph convolution operation and Sortpooling layer to extract the fixed-size feature from each snapshot/timestamp. Based on the extracted features, we utilize Gated recurrent units (GRUs) to capture the temporal information for anomaly detection. Extensive experiments on six benchmark datasets and a real enterprise security system demonstrate the effectiveness of StrGNN.
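The h-hop enclosing-subgraph extraction step can be sketched in plain Python. The adjacency-list representation and the distance-pair labeling below are a simplification of the paper's node labeling function, intended only to show the mechanics of centering a subgraph on a target edge.

```python
from collections import deque

def bfs_dist(adj, src, h):
    """Shortest-path distances from src, truncated at h hops."""
    dist = {src: 0}
    q = deque([src])
    while q:
        n = q.popleft()
        if dist[n] == h:
            continue
        for m in adj.get(n, ()):
            if m not in dist:
                dist[m] = dist[n] + 1
                q.append(m)
    return dist

def enclosing_subgraph(adj, u, v, h):
    """Nodes within h hops of edge (u, v), labeled by distances to u and v."""
    du, dv = bfs_dist(adj, u, h), bfs_dist(adj, v, h)
    # h + 1 marks "not reachable within h hops of that endpoint"
    return {n: (du.get(n, h + 1), dv.get(n, h + 1)) for n in set(du) | set(dv)}

# Tiny hypothetical graph: nodes 4 and 5 lie outside the enclosing subgraph
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1], 4: [5], 5: [4]}
labels = enclosing_subgraph(adj, 0, 1, 1)
```

The distance-pair labels identify each node's structural role relative to the target edge, which is the information the graph convolution layers then consume.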
2005.02470
Zein Shaheen
Zein Shaheen, Gerhard Wohlgenannt, Bassel Zaity, Dmitry Mouromtsev, Vadim Pak
Russian Natural Language Generation: Creation of a Language Modelling Dataset and Evaluation with Modern Neural Architectures
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating coherent, grammatically correct, and meaningful text is very challenging; however, it is crucial to many modern NLP systems. So far, research has mostly focused on the English language; for other languages, both standardized datasets and experiments with state-of-the-art models are rare. In this work, we i) provide a novel reference dataset for Russian language modeling, and ii) experiment with popular modern methods for text generation, namely variational autoencoders and generative adversarial networks, which we trained on the new dataset. We evaluate the generated text with respect to metrics such as perplexity, grammatical correctness and lexical diversity.
[ { "created": "Tue, 5 May 2020 20:20:25 GMT", "version": "v1" } ]
2020-05-07
[ [ "Shaheen", "Zein", "" ], [ "Wohlgenannt", "Gerhard", "" ], [ "Zaity", "Bassel", "" ], [ "Mouromtsev", "Dmitry", "" ], [ "Pak", "Vadim", "" ] ]
Generating coherent, grammatically correct, and meaningful text is very challenging; however, it is crucial to many modern NLP systems. So far, research has mostly focused on the English language; for other languages, both standardized datasets and experiments with state-of-the-art models are rare. In this work, we i) provide a novel reference dataset for Russian language modeling, and ii) experiment with popular modern methods for text generation, namely variational autoencoders and generative adversarial networks, which we trained on the new dataset. We evaluate the generated text with respect to metrics such as perplexity, grammatical correctness and lexical diversity.
1906.01069
Deepan Muthirayan
Deepan Muthirayan, Dileep Kalathil, Sen Li, Kameshwar Poolla and Pravin Varaiya
Selling Demand Response Using Options
null
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wholesale electricity markets in many jurisdictions use a two-settlement structure: a day-ahead market for bulk power transactions and a real-time market for fine-grain supply-demand balancing. This paper explores trading demand response assets within this two-settlement market structure. We consider two approaches for trading demand response assets: (a) an intermediate spot market with contingent pricing, and (b) an over-the-counter options contract. In the first case, we characterize the competitive equilibrium of the spot market, and show that it is socially optimal. Economic orthodoxy advocates spot markets, but these require expensive infrastructure and regulatory blessing. In the second case, we characterize competitive equilibria and compare their efficiency with the idealized spot market. Options contracts are private bilateral over-the-counter transactions and do not require regulatory approval. We show that the optimal social welfare is, in general, not supported. We then design optimal option prices that minimize the social welfare gap. This optimal design serves to approximate the ideal spot market for demand response using options with modest loss of efficiency. Our results are validated through numerical simulations.
[ { "created": "Mon, 3 Jun 2019 20:34:13 GMT", "version": "v1" }, { "created": "Thu, 12 Mar 2020 04:33:52 GMT", "version": "v2" }, { "created": "Fri, 13 Mar 2020 00:35:07 GMT", "version": "v3" }, { "created": "Mon, 20 Jul 2020 23:13:45 GMT", "version": "v4" }, { "created": "Mon, 3 Aug 2020 01:16:47 GMT", "version": "v5" } ]
2020-08-04
[ [ "Muthirayan", "Deepan", "" ], [ "Kalathil", "Dileep", "" ], [ "Li", "Sen", "" ], [ "Poolla", "Kameshwar", "" ], [ "Varaiya", "Pravin", "" ] ]
Wholesale electricity markets in many jurisdictions use a two-settlement structure: a day-ahead market for bulk power transactions and a real-time market for fine-grain supply-demand balancing. This paper explores trading demand response assets within this two-settlement market structure. We consider two approaches for trading demand response assets: (a) an intermediate spot market with contingent pricing, and (b) an over-the-counter options contract. In the first case, we characterize the competitive equilibrium of the spot market, and show that it is socially optimal. Economic orthodoxy advocates spot markets, but these require expensive infrastructure and regulatory blessing. In the second case, we characterize competitive equilibria and compare their efficiency with the idealized spot market. Options contracts are private bilateral over-the-counter transactions and do not require regulatory approval. We show that the optimal social welfare is, in general, not supported. We then design optimal option prices that minimize the social welfare gap. This optimal design serves to approximate the ideal spot market for demand response using options with modest loss of efficiency. Our results are validated through numerical simulations.
2008.01365
Zehao Huang
Zehao Huang, Zehui Chen, Qiaofei Li, Hongkai Zhang, Naiyan Wang
1st Place Solutions of Waymo Open Dataset Challenge 2020 -- 2D Object Detection Track
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this technical report, we present our solutions for the Waymo Open Dataset (WOD) Challenge 2020 - 2D Object Detection Track. We adopt FPN as our basic framework. Cascade R-CNN, a stacked PAFPN neck, and Double-Head are used for performance improvements. To handle the small-object detection problem in WOD, we use very large image scales for both training and testing. Using these methods, our team RW-TSDet achieved 1st place in the 2D Object Detection Track.
[ { "created": "Tue, 4 Aug 2020 06:46:28 GMT", "version": "v1" } ]
2020-08-05
[ [ "Huang", "Zehao", "" ], [ "Chen", "Zehui", "" ], [ "Li", "Qiaofei", "" ], [ "Zhang", "Hongkai", "" ], [ "Wang", "Naiyan", "" ] ]
In this technical report, we present our solutions for the Waymo Open Dataset (WOD) Challenge 2020 - 2D Object Detection Track. We adopt FPN as our basic framework. Cascade R-CNN, a stacked PAFPN neck, and Double-Head are used for performance improvements. To handle the small-object detection problem in WOD, we use very large image scales for both training and testing. Using these methods, our team RW-TSDet achieved 1st place in the 2D Object Detection Track.
2111.08919
Yaya Shi
Yaya Shi, Xu Yang, Haiyang Xu, Chunfeng Yuan, Bing Li, Weiming Hu, Zheng-Jun Zha
EMScore: Evaluating Video Captioning via Coarse-Grained and Fine-Grained Embedding Matching
CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current metrics for video captioning are mostly based on text-level comparison between reference and candidate captions. However, they have some insuperable drawbacks, e.g., they cannot handle videos without references, and they may result in biased evaluation due to the one-to-many nature of video-to-text and the neglect of visual relevance. From a human evaluator's viewpoint, a high-quality caption should be consistent with the provided video, but not necessarily similar to the reference in wording or semantics. Inspired by human evaluation, we propose EMScore (Embedding Matching-based score), a novel reference-free metric for video captioning, which directly measures the similarity between the video and a candidate caption. Benefiting from the recent development of large-scale pre-training models, we exploit a well pre-trained vision-language model to extract visual and linguistic embeddings for computing EMScore. Specifically, EMScore combines matching scores at both coarse-grained (video and caption) and fine-grained (frames and words) levels, which takes both the overall understanding and the detailed characteristics of the video into account. Furthermore, considering the potential information gain, EMScore can be flexibly extended to conditions where human-labeled references are available. Last but not least, we collect the VATEX-EVAL and ActivityNet-FOIL datasets to systematically evaluate existing metrics. VATEX-EVAL experiments demonstrate that EMScore has higher human correlation and lower reference dependency. The ActivityNet-FOIL experiment verifies that EMScore can effectively identify "hallucinating" captions. The datasets will be released to facilitate the development of video captioning metrics. The code is available at: https://github.com/ShiYaya/emscore.
[ { "created": "Wed, 17 Nov 2021 06:02:43 GMT", "version": "v1" }, { "created": "Sun, 17 Jul 2022 04:35:18 GMT", "version": "v2" } ]
2022-07-19
[ [ "Shi", "Yaya", "" ], [ "Yang", "Xu", "" ], [ "Xu", "Haiyang", "" ], [ "Yuan", "Chunfeng", "" ], [ "Li", "Bing", "" ], [ "Hu", "Weiming", "" ], [ "Zha", "Zheng-Jun", "" ] ]
Current metrics for video captioning are mostly based on text-level comparison between reference and candidate captions. However, they have some insuperable drawbacks, e.g., they cannot handle videos without references, and they may result in biased evaluation due to the one-to-many nature of video-to-text and the neglect of visual relevance. From a human evaluator's viewpoint, a high-quality caption should be consistent with the provided video, but not necessarily similar to the reference in wording or semantics. Inspired by human evaluation, we propose EMScore (Embedding Matching-based score), a novel reference-free metric for video captioning, which directly measures the similarity between the video and a candidate caption. Benefiting from the recent development of large-scale pre-training models, we exploit a well pre-trained vision-language model to extract visual and linguistic embeddings for computing EMScore. Specifically, EMScore combines matching scores at both coarse-grained (video and caption) and fine-grained (frames and words) levels, which takes both the overall understanding and the detailed characteristics of the video into account. Furthermore, considering the potential information gain, EMScore can be flexibly extended to conditions where human-labeled references are available. Last but not least, we collect the VATEX-EVAL and ActivityNet-FOIL datasets to systematically evaluate existing metrics. VATEX-EVAL experiments demonstrate that EMScore has higher human correlation and lower reference dependency. The ActivityNet-FOIL experiment verifies that EMScore can effectively identify "hallucinating" captions. The datasets will be released to facilitate the development of video captioning metrics. The code is available at: https://github.com/ShiYaya/emscore.
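The coarse-plus-fine matching idea can be sketched in a few lines. This is a toy illustration, not the official EMScore code: the coarse-grained term compares whole-video and whole-caption embeddings, and the fine-grained term greedily matches each word embedding to its best-matching frame embedding. The embeddings here are tiny made-up vectors; in the paper they come from a pre-trained vision-language model.

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def em_score(video_emb, caption_emb, frame_embs, word_embs):
    """Average of a coarse video-caption score and a fine word-to-frame score."""
    coarse = cosine(video_emb, caption_emb)
    fine = sum(max(cosine(w, f) for f in frame_embs)
               for w in word_embs) / len(word_embs)
    return 0.5 * (coarse + fine)

# Tiny made-up embeddings: a perfectly matching caption scores 1.0
match = em_score([1, 0], [1, 0], [[1, 0], [0, 1]], [[1, 0], [0, 1]])
mismatch = em_score([1, 0], [0, 1], [[1, 0]], [[0, 1]])
```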
1709.08521
Omar Al-Harbi
Omar Al-Harbi
Using objective words in the reviews to improve the colloquial arabic sentiment analysis
14 pages, 1 figure, International Journal on Natural Language Computing (IJNLC)
International Journal on Natural Language Computing (IJNLC) Vol. 6, No.3, June 2017
10.5121/ijnlc
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main difficulties in sentiment analysis of the Arabic language is the presence of colloquialism. In this paper, we examine the effect of using objective words in conjunction with sentimental words on sentiment classification for colloquial Arabic reviews, specifically Jordanian colloquial reviews. The reviews often include both sentimental and objective words; however, most existing sentiment analysis models ignore the objective words, as they are considered useless. In this work, we created two lexicons: the first includes colloquial sentimental words and compound phrases, while the other contains objective words associated with values of sentiment tendency based on a particular estimation method. We used these lexicons to extract sentiment features that serve as training input to Support Vector Machines (SVM) to classify the sentiment polarity of the reviews. The review dataset was collected manually from the JEERAN website. The experimental results show that the proposed approach improves polarity classification in comparison to two baseline models, achieving an accuracy of 95.6%.
[ { "created": "Mon, 25 Sep 2017 14:40:28 GMT", "version": "v1" } ]
2017-09-26
[ [ "Al-Harbi", "Omar", "" ] ]
One of the main difficulties in sentiment analysis of the Arabic language is the presence of colloquialism. In this paper, we examine the effect of using objective words in conjunction with sentimental words on sentiment classification for colloquial Arabic reviews, specifically Jordanian colloquial reviews. The reviews often include both sentimental and objective words; however, most existing sentiment analysis models ignore the objective words, as they are considered useless. In this work, we created two lexicons: the first includes colloquial sentimental words and compound phrases, while the other contains objective words associated with values of sentiment tendency based on a particular estimation method. We used these lexicons to extract sentiment features that serve as training input to Support Vector Machines (SVM) to classify the sentiment polarity of the reviews. The review dataset was collected manually from the JEERAN website. The experimental results show that the proposed approach improves polarity classification in comparison to two baseline models, achieving an accuracy of 95.6%.
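The two-lexicon feature idea can be illustrated as follows. All lexicon entries and tendency weights below are invented for the sketch, and the paper feeds such features into an SVM, which this toy replaces with a simple score sum and sign threshold.

```python
# Invented English stand-in lexicons; the paper's lexicons are built from
# Jordanian colloquial Arabic reviews.
SENTIMENT_LEX = {"excellent": 1.0, "terrible": -1.0}
OBJECTIVE_LEX = {"queue": -0.3, "garden": 0.4}   # estimated sentiment-tendency values

def review_score(tokens):
    """Sum sentimental-word weights plus objective-word tendency values."""
    sentimental = sum(SENTIMENT_LEX.get(t, 0.0) for t in tokens)
    objective = sum(OBJECTIVE_LEX.get(t, 0.0) for t in tokens)
    return sentimental + objective

def classify(tokens):
    return "positive" if review_score(tokens) >= 0 else "negative"
```

The point of the sketch is the objective lexicon: words like "queue" carry no explicit sentiment yet shift the score, which is the signal the paper shows improves classification.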
2002.01078
Mordechai Guri
Mordechai Guri, Dima Bykhovsky, Yuval Elovici
BRIGHTNESS: Leaking Sensitive Data from Air-Gapped Workstations via Screen Brightness
2019 12th CMI Conference on Cybersecurity and Privacy (CMI)
null
10.1109/CMI48017.2019.8962137
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Air-gapped computers are systems that are kept isolated from the Internet because they store or process sensitive information. In this paper, we introduce an optical covert channel in which an attacker can leak (or, exfiltrate) sensitive information from air-gapped computers through manipulations of the screen brightness. This covert channel is invisible, and it works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys and passwords) and modulate it within the screen brightness, invisible to users. The small changes in brightness are invisible to humans but can be recovered from video streams taken by cameras such as a local security camera, smartphone camera or webcam. We present related work and discuss the technical and scientific background of this covert channel. We examined the channel's boundaries under various parameters, with different types of computer and TV screens, and at several distances. We also tested different types of camera receivers to demonstrate the covert channel. Lastly, we present relevant countermeasures to this type of attack.
[ { "created": "Tue, 4 Feb 2020 01:25:44 GMT", "version": "v1" } ]
2020-02-05
[ [ "Guri", "Mordechai", "" ], [ "Bykhovsky", "Dima", "" ], [ "Elovici", "Yuval", "" ] ]
Air-gapped computers are systems that are kept isolated from the Internet because they store or process sensitive information. In this paper, we introduce an optical covert channel in which an attacker can leak (or, exfiltrate) sensitive information from air-gapped computers through manipulations of the screen brightness. This covert channel is invisible, and it works even while the user is working on the computer. Malware on a compromised computer can obtain sensitive data (e.g., files, images, encryption keys and passwords) and modulate it within the screen brightness, invisible to users. The small changes in brightness are invisible to humans but can be recovered from video streams taken by cameras such as a local security camera, smartphone camera or webcam. We present related work and discuss the technical and scientific background of this covert channel. We examined the channel's boundaries under various parameters, with different types of computer and TV screens, and at several distances. We also tested different types of camera receivers to demonstrate the covert channel. Lastly, we present relevant countermeasures to this type of attack.
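The brightness-modulation idea can be sketched as on-off keying around a base brightness level, with the receiver thresholding the average of each bit's frames. The offset, base level, and frames-per-bit values are illustrative, not the paper's measured parameters.

```python
def modulate(bits, base=128, delta=2, frames_per_bit=3):
    """Encode bits as tiny brightness offsets around a base level."""
    levels = []
    for b in bits:
        levels.extend([base + delta if b else base - delta] * frames_per_bit)
    return levels

def demodulate(levels, base=128, frames_per_bit=3):
    """Recover bits by thresholding the mean brightness of each frame group."""
    bits = []
    for i in range(0, len(levels), frames_per_bit):
        chunk = levels[i:i + frames_per_bit]
        bits.append(1 if sum(chunk) / len(chunk) > base else 0)
    return bits

recovered = demodulate(modulate([1, 0, 1, 1]))
```

Averaging over several frames per bit is what lets a camera recover offsets far too small for a human observer to notice.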
2312.10504
Chi Zhang
Chi Zhang (1), Wenkai Xiang (1), Xingzhi Guo (2), Baojian Zhou (1), Deqing Yang (1) ((1) Fudan University, Shanghai Key Laboratory of Data Science, (2) Stony Brook University)
SubAnom: Efficient Subgraph Anomaly Detection Framework over Dynamic Graphs
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a dynamic graph, efficiently tracking anomalous subgraphs via their node embeddings poses a significant challenge. Addressing this issue necessitates an effective scoring mechanism and an innovative anomalous-subgraph identification strategy. Existing methods predominantly focus on designing scoring strategies or employing graph structures that consider nodes in isolation, resulting in ineffective capture of anomalous subgraph structure information. In this paper, we introduce SUBANOM, a novel framework for subgraph anomaly detection that is adept at identifying anomalous subgraphs. SUBANOM has three key components: 1) we implement current state-of-the-art dynamic embedding methods to efficiently calculate node embeddings, thereby successfully capturing all node-level anomalies; 2) we devise novel subgraph identification strategies, including k-hop and triadic-closure strategies, which form the crucial component that can proficiently differentiate between strong and weak neighbors, thus effectively capturing anomaly structure information; 3) to score the anomalous subgraphs, we propose Lp-norm-based score aggregation functions. These iterative steps enable us to process large-scale dynamic graphs effectively. Experiments conducted on a real-world dynamic graph underscore the efficacy of our framework in detecting anomalous subgraphs, outperforming state-of-the-art methods. The experimental results further signify that our framework is a potent tool for identifying anomalous subgraphs in real-world scenarios. For instance, the F1 score under the optimal subgraph identification strategy can peak at 0.6679, while the highest achievable score using the corresponding baseline method is 0.5677.
[ { "created": "Sat, 16 Dec 2023 17:18:30 GMT", "version": "v1" } ]
2023-12-19
[ [ "Zhang", "Chi", "" ], [ "Xiang", "Wenkai", "" ], [ "Guo", "Xingzhi", "" ], [ "Zhou", "Baojian", "" ], [ "Yang", "Deqing", "" ] ]
Given a dynamic graph, efficiently tracking anomalous subgraphs via their node embeddings poses a significant challenge. Addressing this issue necessitates an effective scoring mechanism and an innovative anomalous-subgraph identification strategy. Existing methods predominantly focus on designing scoring strategies or employing graph structures that consider nodes in isolation, resulting in ineffective capture of anomalous subgraph structure information. In this paper, we introduce SUBANOM, a novel framework for subgraph anomaly detection that is adept at identifying anomalous subgraphs. SUBANOM has three key components: 1) we implement current state-of-the-art dynamic embedding methods to efficiently calculate node embeddings, thereby successfully capturing all node-level anomalies; 2) we devise novel subgraph identification strategies, including k-hop and triadic-closure strategies, which form the crucial component that can proficiently differentiate between strong and weak neighbors, thus effectively capturing anomaly structure information; 3) to score the anomalous subgraphs, we propose Lp-norm-based score aggregation functions. These iterative steps enable us to process large-scale dynamic graphs effectively. Experiments conducted on a real-world dynamic graph underscore the efficacy of our framework in detecting anomalous subgraphs, outperforming state-of-the-art methods. The experimental results further signify that our framework is a potent tool for identifying anomalous subgraphs in real-world scenarios. For instance, the F1 score under the optimal subgraph identification strategy can peak at 0.6679, while the highest achievable score using the corresponding baseline method is 0.5677.
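The Lp-norm score aggregation in component 3) can be sketched directly; the choice of p and the per-node anomaly scores below are illustrative.

```python
def lp_aggregate(scores, p=2):
    """Aggregate per-node anomaly scores into a single subgraph score
    via the Lp norm: (sum |s_i|^p)^(1/p)."""
    return sum(abs(s) ** p for s in scores) ** (1.0 / p)
```

Larger p weights the subgraph score toward its most anomalous node (p -> infinity gives the max score), while p = 1 simply sums the node scores.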
1810.09294
Michael Taynnan Barros
Michael Taynnan Barros and Walisson Silva and Carlos Danilo Miranda Regis
The Multi-Scale Impact of the Alzheimer's Disease in the Topology Diversity of Astrocytes Molecular Communications Nanonetworks
Submitted for journal publication
null
null
null
cs.ET q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The Internet of Bio-Nano-Things is a new paradigm that can bring novel remotely controlled actuation and sensing techniques inside the human body. Towards precise bio-nano sensing techniques in the brain, we investigate the challenges of modelling the spatial distribution of astrocyte networks, developing a mathematical framework that lays the groundwork for future early-detection techniques for neurodegenerative disease. In this paper, we investigate the effect of $\beta$-amyloid plaques in astrocytes with Alzheimer's disease. We developed a computational model of healthy and Alzheimer's-diseased astrocyte networks from state-of-the-art models and results that account for the intracellular pathways, IP$_3$ dynamics, gap junctions, voltage-gated calcium channels and astrocyte volumes. We also implemented different types of astrocyte network topologies, including shortcut networks, regular degree networks, Erd\"os R\'enyi networks and link radius networks. A proposed multi-scale stochastic computational model captures the relationship between the intracellular and intercellular scales. Lastly, we designed and evaluated a single-hop communication system with frequency modulation using metrics such as propagation extent, molecular delay and channel gain. The results show that the more unstable yet lower-level oscillations of Alzheimer's astrocyte networks can create a multi-scale effect on communication between astrocytes, with increased molecular delay and lower channel gain compared to healthy astrocytes, and an elevated impact on Erd\"os R\'enyi and link radius network topologies.
[ { "created": "Mon, 22 Oct 2018 13:54:43 GMT", "version": "v1" } ]
2018-10-23
[ [ "Barros", "Michael Taynnan", "" ], [ "Silva", "Walisson", "" ], [ "Regis", "Carlos Danilo Miranda", "" ] ]
The Internet of Bio-Nano-Things is a new paradigm that can bring novel remotely controlled actuation and sensing techniques inside the human body. Towards precise bio-nano sensing techniques in the brain, we investigate the challenges of modelling the spatial distribution of astrocyte networks, developing a mathematical framework that lays the groundwork for future early-detection techniques for neurodegenerative disease. In this paper, we investigate the effect of $\beta$-amyloid plaques in astrocytes with Alzheimer's disease. We developed a computational model of healthy and Alzheimer's-diseased astrocyte networks from state-of-the-art models and results that account for the intracellular pathways, IP$_3$ dynamics, gap junctions, voltage-gated calcium channels and astrocyte volumes. We also implemented different types of astrocyte network topologies, including shortcut networks, regular degree networks, Erd\"os R\'enyi networks and link radius networks. A proposed multi-scale stochastic computational model captures the relationship between the intracellular and intercellular scales. Lastly, we designed and evaluated a single-hop communication system with frequency modulation using metrics such as propagation extent, molecular delay and channel gain. The results show that the more unstable yet lower-level oscillations of Alzheimer's astrocyte networks can create a multi-scale effect on communication between astrocytes, with increased molecular delay and lower channel gain compared to healthy astrocytes, and an elevated impact on Erd\"os R\'enyi and link radius network topologies.
1704.02738
Xin Tao
Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, Jiaya Jia
Detail-revealing Deep Video Super-resolution
9 pages, submitted to conference
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous CNN-based video super-resolution approaches need to align multiple frames to the reference. In this paper, we show that proper frame alignment and motion compensation are crucial for achieving high-quality results. We accordingly propose a `sub-pixel motion compensation' (SPMC) layer in a CNN framework. Analysis and experiments show the suitability of this layer in video SR. The final end-to-end, scalable CNN framework effectively incorporates the SPMC layer and fuses multiple frames to reveal image details. Our implementation can generate visually and quantitatively high-quality results, superior to the current state of the art, without the need for parameter tuning.
[ { "created": "Mon, 10 Apr 2017 07:28:27 GMT", "version": "v1" } ]
2017-04-11
[ [ "Tao", "Xin", "" ], [ "Gao", "Hongyun", "" ], [ "Liao", "Renjie", "" ], [ "Wang", "Jue", "" ], [ "Jia", "Jiaya", "" ] ]
Previous CNN-based video super-resolution approaches need to align multiple frames to the reference. In this paper, we show that proper frame alignment and motion compensation are crucial for achieving high-quality results. We accordingly propose a `sub-pixel motion compensation' (SPMC) layer in a CNN framework. Analysis and experiments show the suitability of this layer in video SR. The final end-to-end, scalable CNN framework effectively incorporates the SPMC layer and fuses multiple frames to reveal image details. Our implementation can generate visually and quantitatively high-quality results, superior to the current state of the art, without the need for parameter tuning.
2306.00356
Hyunsu Kim
Hyunsu Kim, Hyungi Lee, Hongseok Yang, and Juho Lee
Regularizing Towards Soft Equivariance Under Mixed Symmetries
Proceedings of the International Conference on Machine Learning (ICML), 2023
null
null
null
cs.LG cs.AI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Datasets often have their intrinsic symmetries, and particular deep-learning models called equivariant or invariant models have been developed to exploit these symmetries. However, if some or all of these symmetries are only approximate, which frequently happens in practice, these models may be suboptimal due to the architectural restrictions imposed on them. We tackle this issue of approximate symmetries in a setup where symmetries are mixed, i.e., they are symmetries not of a single type but of multiple different types, and the degree of approximation varies across these types. Instead of proposing a new architectural restriction as in most previous approaches, we present a regularizer-based method for building a model for a dataset with mixed approximate symmetries. The key component of our method is what we call the equivariance regularizer for a given type of symmetry, which measures how equivariant a model is with respect to the symmetries of that type. Our method is trained with these regularizers, one for each symmetry type, and the strength of the regularizers is automatically tuned during training, leading to the discovery of the approximation levels of some candidate symmetry types without explicit supervision. Using synthetic function approximation and motion forecasting tasks, we demonstrate that our method achieves better accuracy than prior approaches while discovering the approximate symmetry levels correctly.
[ { "created": "Thu, 1 Jun 2023 05:33:41 GMT", "version": "v1" } ]
2023-06-02
[ [ "Kim", "Hyunsu", "" ], [ "Lee", "Hyungi", "" ], [ "Yang", "Hongseok", "" ], [ "Lee", "Juho", "" ] ]
Datasets often have their intrinsic symmetries, and particular deep-learning models called equivariant or invariant models have been developed to exploit these symmetries. However, if some or all of these symmetries are only approximate, which frequently happens in practice, these models may be suboptimal due to the architectural restrictions imposed on them. We tackle this issue of approximate symmetries in a setup where symmetries are mixed, i.e., they are symmetries not of a single type but of multiple different types, and the degree of approximation varies across these types. Instead of proposing a new architectural restriction as in most previous approaches, we present a regularizer-based method for building a model for a dataset with mixed approximate symmetries. The key component of our method is what we call the equivariance regularizer for a given type of symmetry, which measures how equivariant a model is with respect to the symmetries of that type. Our method is trained with these regularizers, one for each symmetry type, and the strength of the regularizers is automatically tuned during training, leading to the discovery of the approximation levels of some candidate symmetry types without explicit supervision. Using synthetic function approximation and motion forecasting tasks, we demonstrate that our method achieves better accuracy than prior approaches while discovering the approximate symmetry levels correctly.
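The equivariance-regularizer idea in this abstract can be illustrated with a toy penalty for 2D rotation symmetry (a hedged sketch, not the paper's formulation: the functions, data, and sampling scheme are illustrative, and the learnable per-type regularizer strengths are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def rotation(theta):
    """2D rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def equivariance_penalty(f, xs, n_samples=8):
    """Average ||f(g x) - g f(x)||^2 over randomly sampled rotations g.
    The penalty is (near) zero iff f commutes with the sampled rotations."""
    total = 0.0
    for _ in range(n_samples):
        R = rotation(rng.uniform(0.0, 2.0 * np.pi))
        diff = f(xs @ R.T) - f(xs) @ R.T
        total += np.mean(np.sum(diff ** 2, axis=1))
    return total / n_samples

xs = rng.normal(size=(64, 2))
equivariant = lambda x: 3.0 * x                              # commutes with rotations
non_equivariant = lambda x: x @ np.diag([1.0, 2.0])          # does not commute

pen_eq = equivariance_penalty(equivariant, xs)
pen_non = equivariance_penalty(non_equivariant, xs)
print(pen_eq, pen_non)
```

In a training loop, penalties of this form (one per candidate symmetry type) would be added to the task loss, with their weights tuned to reflect how exactly each symmetry holds.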
1707.00825
Juan Colmenares
Juan A. Colmenares, Reza Dorrigiv and Daniel G. Waddington
Ingestion, Indexing and Retrieval of High-Velocity Multidimensional Sensor Data on a Single Node
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multidimensional data are becoming more prevalent, partly due to the rise of the Internet of Things (IoT), and with that the need to ingest and analyze data streams at rates higher than before. Some industrial IoT applications require ingesting millions of records per second, while processing queries on recently ingested and historical data. Unfortunately, existing database systems suited to multidimensional data exhibit low per-node ingestion performance, and even if they can scale horizontally in distributed settings, they require a large number of nodes to meet such ingest demands. For this reason, in this paper we evaluate a single-node multidimensional data store for high-velocity sensor data. Its design centers around a two-level indexing structure, wherein the global index is an in-memory R*-tree and the local indices are serialized kd-trees. This study is confined to records with numerical indexing fields and range queries, and covers ingest throughput, query response time, and storage footprint. We show that the adopted design streamlines data ingestion and offers ingress rates two orders of magnitude higher than those of Percona Server, SQLite, and Druid. Our prototype also reports query response times comparable to or better than those of Percona Server and Druid, and compares favorably in terms of storage footprint. In addition, we evaluate a kd-tree partitioning based scheme for grouping incoming streamed data records. Compared to a random scheme, this scheme produces less overlap between groups of streamed records, but contrary to what we expected, such reduced overlap does not translate into better query performance. By contrast, the local indices prove much more beneficial to query performance. We believe the experience reported in this paper is valuable to practitioners and researchers alike interested in building database systems for high-velocity multidimensional data.
[ { "created": "Tue, 4 Jul 2017 06:30:37 GMT", "version": "v1" } ]
2017-07-05
[ [ "Colmenares", "Juan A.", "" ], [ "Dorrigiv", "Reza", "" ], [ "Waddington", "Daniel G.", "" ] ]
Multidimensional data are becoming more prevalent, partly due to the rise of the Internet of Things (IoT), and with that the need to ingest and analyze data streams at rates higher than before. Some industrial IoT applications require ingesting millions of records per second, while processing queries on recently ingested and historical data. Unfortunately, existing database systems suited to multidimensional data exhibit low per-node ingestion performance, and even if they can scale horizontally in distributed settings, they require a large number of nodes to meet such ingest demands. For this reason, in this paper we evaluate a single-node multidimensional data store for high-velocity sensor data. Its design centers around a two-level indexing structure, wherein the global index is an in-memory R*-tree and the local indices are serialized kd-trees. This study is confined to records with numerical indexing fields and range queries, and covers ingest throughput, query response time, and storage footprint. We show that the adopted design streamlines data ingestion and offers ingress rates two orders of magnitude higher than those of Percona Server, SQLite, and Druid. Our prototype also reports query response times comparable to or better than those of Percona Server and Druid, and compares favorably in terms of storage footprint. In addition, we evaluate a kd-tree partitioning based scheme for grouping incoming streamed data records. Compared to a random scheme, this scheme produces less overlap between groups of streamed records, but contrary to what we expected, such reduced overlap does not translate into better query performance. By contrast, the local indices prove much more beneficial to query performance. We believe the experience reported in this paper is valuable to practitioners and researchers alike interested in building database systems for high-velocity multidimensional data.
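The two-level design this abstract describes can be caricatured in a few lines (a toy sketch, not the evaluated prototype: plain per-block bounding boxes stand in for the R*-tree, and a linear scan stands in for the serialized kd-trees):

```python
class TwoLevelIndex:
    """Toy two-level index: the global level prunes whole blocks by
    bounding box; the local level checks records within surviving blocks."""

    def __init__(self, block_size=4):
        self.block_size = block_size
        self.blocks = []   # sealed groups of ingested points
        self.bboxes = []   # (mins, maxs) per sealed block
        self.buffer = []   # in-flight points not yet sealed

    def insert(self, point):
        self.buffer.append(point)
        if len(self.buffer) == self.block_size:
            self.flush()

    def flush(self):
        """Seal the buffer into a block and record its bounding box."""
        if not self.buffer:
            return
        dims = range(len(self.buffer[0]))
        mins = tuple(min(p[d] for p in self.buffer) for d in dims)
        maxs = tuple(max(p[d] for p in self.buffer) for d in dims)
        self.blocks.append(self.buffer)
        self.bboxes.append((mins, maxs))
        self.buffer = []

    def range_query(self, lo, hi):
        hits = []
        for (mins, maxs), block in zip(self.bboxes, self.blocks):
            # Global level: skip blocks whose bounding box misses the query.
            if any(maxs[d] < lo[d] or mins[d] > hi[d] for d in range(len(lo))):
                continue
            # Local level: check each record in the surviving block.
            hits.extend(p for p in block
                        if all(lo[d] <= p[d] <= hi[d] for d in range(len(lo))))
        return hits

idx = TwoLevelIndex(block_size=2)
for p in [(1, 1), (2, 2), (8, 9), (9, 8)]:
    idx.insert(p)
print(idx.range_query((0, 0), (3, 3)))
```

The partitioning question the paper studies corresponds to how points are grouped into blocks: tighter, less overlapping bounding boxes let the global level prune more aggressively.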
1910.06437
Sebastiano Vigna
Sebastiano Vigna
It is high time we let go of the Mersenne Twister
null
null
null
null
cs.DS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When the Mersenne Twister made its first appearance in 1997 it was a powerful example of how linear maps on $\mathbf F_2$ could be used to generate pseudorandom numbers. In particular, the ease with which generators with long periods could be defined gave the Mersenne Twister a large following, in spite of the fact that such long periods are not a measure of quality, and they require a large amount of memory. Even at the time of its publication, several defects of the Mersenne Twister were predictable, but they were somewhat obscured by other interesting properties. Today the Mersenne Twister is the default generator in C compilers, the Python language, the Maple mathematical computation system, and in many other environments. Nonetheless, knowledge accumulated in the last $20$ years suggests that the Mersenne Twister has, in fact, severe defects, and should never be used as a general-purpose pseudorandom number generator. Many of these results are folklore, or are scattered through very specialized literature. This paper surveys these results for the non-specialist, providing new, simple, understandable examples, and is intended as a guide for the final user, or for language implementors, so that they can make an informed decision about whether to use the Mersenne Twister or not.
[ { "created": "Mon, 14 Oct 2019 21:44:14 GMT", "version": "v1" }, { "created": "Thu, 14 Nov 2019 14:50:14 GMT", "version": "v2" } ]
2019-11-15
[ [ "Vigna", "Sebastiano", "" ] ]
When the Mersenne Twister made its first appearance in 1997 it was a powerful example of how linear maps on $\mathbf F_2$ could be used to generate pseudorandom numbers. In particular, the ease with which generators with long periods could be defined gave the Mersenne Twister a large following, in spite of the fact that such long periods are not a measure of quality, and they require a large amount of memory. Even at the time of its publication, several defects of the Mersenne Twister were predictable, but they were somewhat obscured by other interesting properties. Today the Mersenne Twister is the default generator in C compilers, the Python language, the Maple mathematical computation system, and in many other environments. Nonetheless, knowledge accumulated in the last $20$ years suggests that the Mersenne Twister has, in fact, severe defects, and should never be used as a general-purpose pseudorandom number generator. Many of these results are folklore, or are scattered through very specialized literature. This paper surveys these results for the non-specialist, providing new, simple, understandable examples, and is intended as a guide for the final user, or for language implementors, so that they can make an informed decision about whether to use the Mersenne Twister or not.
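One well-known consequence of the linear structure discussed above is easy to demonstrate: because MT19937's tempering transform is invertible, 624 consecutive 32-bit outputs fully reveal the internal state, after which every future output is predictable. A minimal sketch of this folklore state-recovery trick using Python's `random` module (which uses MT19937):

```python
import random

MASK32 = 0xFFFFFFFF

def unshift_right(y, shift):
    """Invert y ^= y >> shift for a 32-bit word."""
    res = y
    for _ in range(32 // shift + 1):
        res = y ^ (res >> shift)
    return res & MASK32

def unshift_left(y, shift, mask):
    """Invert y ^= (y << shift) & mask for a 32-bit word."""
    res = y
    for _ in range(32 // shift + 1):
        res = y ^ ((res << shift) & mask)
    return res & MASK32

def untemper(y):
    """Invert the MT19937 tempering transform, recovering a state word."""
    y = unshift_right(y, 18)
    y = unshift_left(y, 15, 0xEFC60000)
    y = unshift_left(y, 7, 0x9D2C5680)
    y = unshift_right(y, 11)
    return y

# Observe 624 outputs, rebuild the internal state, and clone the generator.
outputs = [random.getrandbits(32) for _ in range(624)]
state = [untemper(y) for y in outputs]
clone = random.Random()
clone.setstate((3, tuple(state) + (624,), None))

predicted = [clone.getrandbits(32) for _ in range(5)]
actual = [random.getrandbits(32) for _ in range(5)]
print(predicted == actual)
```

No seed recovery is needed: the observed outputs, untempered, *are* the state, which is why the Mersenne Twister is unsuitable wherever unpredictability matters.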
1510.00225
Frederick Benaben
Matthieu Lauras, Frederick Benaben, Sebastien Truptil, Aurelie Charles (UL2)
Event-Cloud Platform to Support Decision- Making in Emergency Management
null
Information Systems Frontiers, Springer Verlag (Germany), 2015, 17 (4), pp.857-869
10.1007/s10796-013-9475-0
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The challenge of this paper is to underline the capability of an Event-Cloud Platform to efficiently support an emergency situation. We chose to focus on a nuclear crisis use case. The proposed approach consists of modeling the business processes of crisis response on the one hand, and supporting the orchestration and execution of these processes by using an Event-Cloud Platform on the other hand. This paper shows how the use of Event-Cloud techniques can support crisis management stakeholders by automating non-value-added tasks and by directing decision-makers to what really requires their decision-making capabilities. While Event-Cloud technology is a very interesting and topical subject, very few research works have considered it as a means to improve emergency management. This paper tries to fill this gap by considering and applying these technologies to a nuclear crisis use case.
[ { "created": "Thu, 1 Oct 2015 13:34:44 GMT", "version": "v1" } ]
2015-10-02
[ [ "Lauras", "Matthieu", "", "UL2" ], [ "Benaben", "Frederick", "", "UL2" ], [ "Truptil", "Sebastien", "", "UL2" ], [ "Charles", "Aurelie", "", "UL2" ] ]
The challenge of this paper is to underline the capability of an Event-Cloud Platform to efficiently support an emergency situation. We chose to focus on a nuclear crisis use case. The proposed approach consists of modeling the business processes of crisis response on the one hand, and supporting the orchestration and execution of these processes by using an Event-Cloud Platform on the other hand. This paper shows how the use of Event-Cloud techniques can support crisis management stakeholders by automating non-value-added tasks and by directing decision-makers to what really requires their decision-making capabilities. While Event-Cloud technology is a very interesting and topical subject, very few research works have considered it as a means to improve emergency management. This paper tries to fill this gap by considering and applying these technologies to a nuclear crisis use case.
2209.08353
Fengxin Li
Zhaoxin Fan, Fengxin Li, Hongyan Liu, Jun He, Xiaoyong Du
Human Pose Driven Object Effects Recommendation
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the new topic of object effects recommendation in micro-video platforms, which is a challenging but important task for many practical applications such as advertisement insertion. To avoid introducing background bias caused by directly learning video content from image frames, we propose to utilize the meaningful body language hidden in 3D human pose for recommendation. To this end, a novel human pose driven object effects recommendation network termed PoseRec is introduced. PoseRec leverages the advantages of 3D human pose detection and learns information from multi-frame 3D human poses for video-item registration, resulting in high-quality object effects recommendation performance. Moreover, to solve the inherent ambiguity and sparsity issues that exist in object effects recommendation, we further propose a novel item-aware implicit prototype learning module and a novel pose-aware transductive hard-negative mining module to better learn pose-item relationships. Furthermore, to benchmark methods for this new research topic, we build a new dataset for object effects recommendation named Pose-OBE. Extensive experiments on Pose-OBE demonstrate that our method achieves superior performance over strong baselines.
[ { "created": "Sat, 17 Sep 2022 15:32:54 GMT", "version": "v1" } ]
2022-09-20
[ [ "Fan", "Zhaoxin", "" ], [ "Li", "Fengxin", "" ], [ "Liu", "Hongyan", "" ], [ "He", "Jun", "" ], [ "Du", "Xiaoyong", "" ] ]
In this paper, we study the new topic of object effects recommendation in micro-video platforms, which is a challenging but important task for many practical applications such as advertisement insertion. To avoid introducing background bias caused by directly learning video content from image frames, we propose to utilize the meaningful body language hidden in 3D human pose for recommendation. To this end, a novel human pose driven object effects recommendation network termed PoseRec is introduced. PoseRec leverages the advantages of 3D human pose detection and learns information from multi-frame 3D human poses for video-item registration, resulting in high-quality object effects recommendation performance. Moreover, to solve the inherent ambiguity and sparsity issues that exist in object effects recommendation, we further propose a novel item-aware implicit prototype learning module and a novel pose-aware transductive hard-negative mining module to better learn pose-item relationships. Furthermore, to benchmark methods for this new research topic, we build a new dataset for object effects recommendation named Pose-OBE. Extensive experiments on Pose-OBE demonstrate that our method achieves superior performance over strong baselines.
2010.01243
Yae Jee Cho
Yae Jee Cho and Jianyu Wang and Gauri Joshi
Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies
null
null
null
null
cs.LG cs.DC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, and partial client participation. However, they assume unbiased client participation, where clients are selected at random or in proportion to their data sizes. In this paper, we present the first convergence analysis of federated optimization for biased client selection strategies, and quantify how the selection bias affects convergence speed. We reveal that biasing client selection towards clients with higher local loss achieves faster error convergence. Using this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that can flexibly span the trade-off between convergence speed and solution bias. Our experiments demonstrate that Power-of-Choice strategies converge up to 3$\times$ faster and give $10$% higher test accuracy than the baseline random selection.
[ { "created": "Sat, 3 Oct 2020 01:04:17 GMT", "version": "v1" } ]
2020-10-06
[ [ "Cho", "Yae Jee", "" ], [ "Wang", "Jianyu", "" ], [ "Joshi", "Gauri", "" ] ]
Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, and partial client participation. However, they assume unbiased client participation, where clients are selected at random or in proportion to their data sizes. In this paper, we present the first convergence analysis of federated optimization for biased client selection strategies, and quantify how the selection bias affects convergence speed. We reveal that biasing client selection towards clients with higher local loss achieves faster error convergence. Using this insight, we propose Power-of-Choice, a communication- and computation-efficient client selection framework that can flexibly span the trade-off between convergence speed and solution bias. Our experiments demonstrate that Power-of-Choice strategies converge up to 3$\times$ faster and give $10$% higher test accuracy than the baseline random selection.
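The Power-of-Choice selection rule described in this abstract can be sketched in a few lines (an illustrative reading, not the paper's code: it assumes the server knows each client's current local loss, and the client names and loss values are made up; the candidate-set size d spans the random-selection/greedy trade-off):

```python
import random

def power_of_choice(client_losses, d, m, rng=random):
    """Sample a candidate set of d clients uniformly at random, then
    select the m candidates with the highest local loss for this round."""
    candidates = rng.sample(list(client_losses), d)
    candidates.sort(key=lambda c: client_losses[c], reverse=True)
    return candidates[:m]

rng = random.Random(0)
losses = {f"client{i}": loss for i, loss in
          enumerate([0.2, 1.5, 0.9, 3.1, 0.4, 2.2, 0.7, 1.1])}
selected = power_of_choice(losses, d=6, m=2, rng=rng)
print(selected)
```

With d equal to m the rule reduces to unbiased random selection; larger d biases rounds toward high-loss clients, which is the bias the paper's analysis quantifies.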
1704.03822
Wenzhen Yuan
Wenzhen Yuan, Shaoxiong Wang, Siyuan Dong, Edward Adelson
Connecting Look and Feel: Associating the visual and tactile properties of physical materials
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For machines to interact with the physical world, they must understand the physical properties of objects and materials they encounter. We use fabrics as an example of a deformable material with a rich set of mechanical properties. A thin flexible fabric, when draped, tends to look different from a heavy stiff fabric. It also feels different when touched. Using a collection of 118 fabric samples, we captured color and depth images of draped fabrics along with tactile data from a high-resolution touch sensor. We then sought to associate the information from vision and touch by jointly training CNNs across the three modalities. Through the CNN, each input, regardless of the modality, generates an embedding vector that records the fabric's physical properties. By comparing the embeddings, our system is able to look at a fabric image and predict how it will feel, and vice versa. We also show that a system jointly trained on vision and touch data can outperform a similar system trained only on visual data when tested purely with visual inputs.
[ { "created": "Wed, 12 Apr 2017 16:28:14 GMT", "version": "v1" } ]
2017-04-13
[ [ "Yuan", "Wenzhen", "" ], [ "Wang", "Shaoxiong", "" ], [ "Dong", "Siyuan", "" ], [ "Adelson", "Edward", "" ] ]
For machines to interact with the physical world, they must understand the physical properties of objects and materials they encounter. We use fabrics as an example of a deformable material with a rich set of mechanical properties. A thin flexible fabric, when draped, tends to look different from a heavy stiff fabric. It also feels different when touched. Using a collection of 118 fabric samples, we captured color and depth images of draped fabrics along with tactile data from a high-resolution touch sensor. We then sought to associate the information from vision and touch by jointly training CNNs across the three modalities. Through the CNN, each input, regardless of the modality, generates an embedding vector that records the fabric's physical properties. By comparing the embeddings, our system is able to look at a fabric image and predict how it will feel, and vice versa. We also show that a system jointly trained on vision and touch data can outperform a similar system trained only on visual data when tested purely with visual inputs.
2404.18239
Jinghan Jia
Jinghan Jia, Yihua Zhang, Yimeng Zhang, Jiancheng Liu, Bharat Runwal, James Diffenderfer, Bhavya Kailkhura, Sijia Liu
SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
null
null
null
null
cs.LG cs.CL
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims at removing undesired data influences and associated model capabilities without compromising utility beyond the scope of unlearning. While interest in studying LLM unlearning is growing, the impact of the optimizer choice for LLM unlearning remains unexplored. In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach using influence functions to update the model for data influence removal). This insight propels us to develop a second-order optimization-based LLM unlearning framework, termed Second-Order UnLearning (SOUL), which extends the static, one-shot model update using influence unlearning to a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, indicating that second-order optimization offers an effective and broadly applicable solution for LLM unlearning. Code is available at https://github.com/OPTML-Group/SOUL.
[ { "created": "Sun, 28 Apr 2024 16:31:32 GMT", "version": "v1" }, { "created": "Fri, 31 May 2024 17:38:51 GMT", "version": "v2" }, { "created": "Mon, 3 Jun 2024 01:10:53 GMT", "version": "v3" }, { "created": "Mon, 24 Jun 2024 20:24:53 GMT", "version": "v4" } ]
2024-06-26
[ [ "Jia", "Jinghan", "" ], [ "Zhang", "Yihua", "" ], [ "Zhang", "Yimeng", "" ], [ "Liu", "Jiancheng", "" ], [ "Runwal", "Bharat", "" ], [ "Diffenderfer", "James", "" ], [ "Kailkhura", "Bhavya", "" ], [ "Liu", "Sijia", "" ] ]
Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims at removing undesired data influences and associated model capabilities without compromising utility beyond the scope of unlearning. While interest in studying LLM unlearning is growing, the impact of the optimizer choice for LLM unlearning remains unexplored. In this work, we shed light on the significance of optimizer selection in LLM unlearning for the first time, establishing a clear connection between second-order optimization and influence unlearning (a classical approach using influence functions to update the model for data influence removal). This insight propels us to develop a second-order optimization-based LLM unlearning framework, termed Second-Order UnLearning (SOUL), which extends the static, one-shot model update using influence unlearning to a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, indicating that second-order optimization offers an effective and broadly applicable solution for LLM unlearning. Code is available at https://github.com/OPTML-Group/SOUL.