Add Batch 1d543312-7422-4e08-b714-fa41043f55e0 data
- .gitattributes +6 -0
- 2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_content_list.json +0 -0
- 2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_model.json +0 -0
- 2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_origin.pdf +3 -0
- 2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/full.md +359 -0
- 2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/images.zip +3 -0
- 2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/layout.json +0 -0
- 2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_content_list.json +0 -0
- 2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_model.json +0 -0
- 2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_origin.pdf +3 -0
- 2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/full.md +466 -0
- 2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/images.zip +3 -0
- 2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/layout.json +0 -0
- 2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_content_list.json +0 -0
- 2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_model.json +0 -0
- 2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_origin.pdf +3 -0
- 2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/full.md +0 -0
- 2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/images.zip +3 -0
- 2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/layout.json +0 -0
- 2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/78080855-33d6-4037-9b8c-edc307a2e575_content_list.json +0 -0
- 2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/78080855-33d6-4037-9b8c-edc307a2e575_model.json +0 -0
- 2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/78080855-33d6-4037-9b8c-edc307a2e575_origin.pdf +3 -0
- 2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/full.md +0 -0
- 2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/images.zip +3 -0
- 2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/layout.json +0 -0
- 2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_content_list.json +0 -0
- 2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_model.json +0 -0
- 2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_origin.pdf +3 -0
- 2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/full.md +0 -0
- 2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/images.zip +3 -0
- 2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/layout.json +0 -0
- 2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_content_list.json +0 -0
- 2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_model.json +0 -0
- 2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_origin.pdf +3 -0
- 2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/full.md +495 -0
- 2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/images.zip +3 -0
- 2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/layout.json +0 -0

.gitattributes CHANGED

@@ -3742,3 +3742,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2025/gRNAde_[[:space:]]Geometric[[:space:]]Deep[[:space:]]Learning[[:space:]]for[[:space:]]3D[[:space:]]RNA[[:space:]]inverse[[:space:]]design/c299f76d-d075-4d18-96c9-c5ab59a25415_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/u-$_mu$P_[[:space:]]The[[:space:]]Unit-Scaled[[:space:]]Maximal[[:space:]]Update[[:space:]]Parametrization/8f19909b-5011-4030-bfce-534524dc855c_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/uniINF_[[:space:]]Best-of-Both-Worlds[[:space:]]Algorithm[[:space:]]for[[:space:]]Parameter-Free[[:space:]]Heavy-Tailed[[:space:]]MABs/7c6dea0d-64f2-4b07-8426-d6f9deaeadcd_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/TimeKAN_[[:space:]]KAN-based[[:space:]]Frequency[[:space:]]Decomposition[[:space:]]Learning[[:space:]]Architecture[[:space:]]for[[:space:]]Long-term[[:space:]]Time[[:space:]]Series[[:space:]]Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/TimeSuite_[[:space:]]Improving[[:space:]]MLLMs[[:space:]]for[[:space:]]Long[[:space:]]Video[[:space:]]Understanding[[:space:]]via[[:space:]]Grounded[[:space:]]Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Timer-XL_[[:space:]]Long-Context[[:space:]]Transformers[[:space:]]for[[:space:]]Unified[[:space:]]Time[[:space:]]Series[[:space:]]Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/To[[:space:]]CoT[[:space:]]or[[:space:]]not[[:space:]]to[[:space:]]CoT_[[:space:]]Chain-of-thought[[:space:]]helps[[:space:]]mainly[[:space:]]on[[:space:]]math[[:space:]]and[[:space:]]symbolic[[:space:]]reasoning/78080855-33d6-4037-9b8c-edc307a2e575_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/To[[:space:]]Code[[:space:]]or[[:space:]]Not[[:space:]]To[[:space:]]Code_[[:space:]]Exploring[[:space:]]Impact[[:space:]]of[[:space:]]Code[[:space:]]in[[:space:]]Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/To[[:space:]]Tackle[[:space:]]Adversarial[[:space:]]Transferability_[[:space:]]A[[:space:]]Novel[[:space:]]Ensemble[[:space:]]Training[[:space:]]Method[[:space:]]with[[:space:]]Fourier[[:space:]]Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_model.json
ADDED
The diff for this file is too large to render. See raw diff.

2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/c4acf521-4bf1-41df-95d9-c37b967e30fc_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1fe83e24893205002db28e925d90ff60c6939578ff5298ce5c8ac380d6e2279
+size 562757

2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/full.md
ADDED
@@ -0,0 +1,359 @@

# TIMEKAN: KAN-BASED FREQUENCY DECOMPOSITION LEARNING ARCHITECTURE FOR LONG-TERM TIME SERIES FORECASTING

Songtao Huang $^{1,2}$ , Zhen Zhao $^{1}$ , Can Li $^{3}$ , Lei Bai $^{4}$

$^{1}$ Shanghai Artificial Intelligence Laboratory, Shanghai, China

$^{2}$ School of Information Science and Engineering, Lanzhou University, Lanzhou, China

$^{3}$ The Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai, China

huangsongtao@pjlab.org.cn, zhen.zhao@outlook.com, lchelen1005@gmail.com, baisanshi@gmail.com

# ABSTRACT

Real-world time series often contain multiple frequency components that are intertwined with each other, making accurate time series forecasting challenging. Decomposing the mixed frequency components into multiple single-frequency components is a natural choice. However, the information density of patterns varies across frequencies, and employing a uniform modeling approach for different frequency components can lead to inaccurate characterization. To address this challenge, inspired by the flexibility of the recent Kolmogorov-Arnold Network (KAN), we propose a KAN-based Frequency Decomposition Learning architecture (TimeKAN) for the complex forecasting challenges caused by multiple frequency mixtures. Specifically, TimeKAN mainly consists of three components: Cascaded Frequency Decomposition (CFD) blocks, Multi-order KAN Representation Learning (M-KAN) blocks and Frequency Mixing blocks. CFD blocks adopt a bottom-up cascading approach to obtain series representations for each frequency band. Benefiting from the high flexibility of KAN, we design a novel M-KAN block to learn and represent specific temporal patterns within each frequency band. Finally, Frequency Mixing blocks are used to recombine the frequency bands into the original format. Extensive experimental results across multiple real-world time series datasets demonstrate that TimeKAN achieves state-of-the-art performance as an extremely lightweight architecture. Code is available at https://github.com/huangst21/TimeKAN.

# 1 INTRODUCTION

Time series forecasting (TSF) has garnered significant interest due to its wide range of applications, including finance (Huang et al., 2024), energy management (Yin et al., 2023), traffic flow planning (Jiang & Luo, 2022), and weather forecasting (Lam et al., 2023). Recently, deep learning has led to substantial advancements in TSF, with the most state-of-the-art performances achieved by CNN-based methods (Wang et al., 2023; donghao & wang xue, 2024), Transformer-based methods (Nie et al., 2023; Liu et al., 2024b) and MLP-based methods (Zeng et al., 2023; Wang et al., 2024a).

Due to the complex nature of the real world, observed multivariate time series are often nonstationary and exhibit diverse patterns. These intertwined patterns complicate the internal relationships within the time series, making it challenging to capture and establish connections between historical observations and future targets. To address the complex temporal patterns in time series, an increasing number of studies focus on leveraging prior knowledge to decompose time series into simpler components that provide a basis for forecasting. For instance, Autoformer (Wu et al., 2021) decomposes time series into seasonal and trend components. This idea is also adopted by DLinear (Zeng et al., 2023) and FEDFormer (Zhou et al., 2022b). Building on this foundation, TimeMixer (Wang et al., 2024a) further introduces multi-scale seasonal-trend decomposition and highlights the importance of interactions between different scales. Recent models like TimesNet (Wu et al., 2023), PDF (Dai et al., 2024), and SparseTSF (Lin et al., 2024) emphasize the inherent periodicity in time series and decompose long sequences into multiple shorter ones based on the period length, thereby enabling the separate modeling of inter-period and intra-period dependencies within temporal patterns. In summary, these different decomposition methods share a common goal: utilizing the simplified subsequences to provide critical information for future predictions, thereby achieving accurate forecasting.

It is worth noting that time series are often composed of multiple frequency components, where the low-frequency components represent long-term periodic variations and the high-frequency components capture certain abrupt events. The mixture of different frequency components makes accurate forecasting particularly challenging. The aforementioned decomposition approaches motivate us to design a frequency decomposition framework that decouples the different frequency components in a time series and independently learns the temporal patterns associated with each frequency. However, this introduces another challenge: the information density of patterns varies across frequencies, and employing a uniform modeling approach for different frequency components can lead to inaccurate characterizations, resulting in sub-optimal results. Fortunately, a new neural network architecture, known as Kolmogorov-Arnold Networks (KAN) (Liu et al., 2024c), has recently gained significant attention in the deep learning community due to its outstanding data-fitting capabilities and flexibility, showing potential as a substitute for the traditional MLP. Compared to the MLP, KAN offers optional kernels and allows the kernel order to be adjusted to control its fitting capacity. This leads us to explore the use of Multi-order KANs to represent temporal patterns across different frequencies, thereby providing more accurate information for forecasting.

Motivated by these observations, we propose a KAN-based Frequency Decomposition Learning architecture (TimeKAN) to address the complex prediction challenges caused by multiple frequency mixtures. Specifically, TimeKAN first employs moving averages to progressively remove relatively high-frequency components from the sequence. Subsequently, Cascaded Frequency Decomposition (CFD) blocks adopt a bottom-up cascading approach to obtain sequence representations for each frequency band. Multi-order KAN Representation Learning (M-KAN) blocks leverage the high flexibility of KAN to learn and represent specific temporal patterns within each frequency band. Finally, Frequency Mixing blocks recombine the frequency bands into the original format, ensuring that this Decomposition-Learning-Mixing process is repeatable and thereby modeling the different temporal patterns at various frequencies more accurately. The final high-level sequence is then mapped to the desired forecasting output via a simple linear mapping. With our meticulously designed architecture, TimeKAN achieves state-of-the-art performance across multiple long-term time series forecasting tasks, while also being a lightweight architecture that outperforms complex TSF models with fewer computational resources.

Our contributions are summarized as follows:

- We revisit time series forecasting from the perspective of frequency decoupling, effectively disentangling time series characteristics through a frequency Decomposition-Learning-Mixing architecture to address the challenges caused by complex information coupling in time series.
- We introduce TimeKAN as a lightweight yet effective forecasting model and design novel M-KAN blocks that effectively model and represent patterns at different frequencies by maximizing the flexibility of KAN.
- TimeKAN demonstrates superior performance across multiple TSF prediction tasks, while having a parameter count significantly lower than that of state-of-the-art TSF models.

# 2 RELATED WORK

# 2.1 KOLMOGOROV-ARNOLD NETWORK

The Kolmogorov-Arnold representation theorem states that any multivariate continuous function can be expressed as a combination of univariate functions and addition operations. The Kolmogorov-Arnold Network (KAN) (Liu et al., 2024c) leverages this theorem to propose an innovative alternative to the traditional MLP. Unlike MLPs, which use fixed activation functions at the nodes, KAN introduces learnable activation functions along the edges. Due to this flexibility and adaptability, KAN is considered a promising alternative to the MLP.

The original KAN was parameterized using spline functions. However, due to the inherent complexity of spline functions, the speed and scalability of the original KAN were not satisfactory. Consequently, subsequent research explored the use of simpler basis functions to replace splines, thereby achieving higher efficiency. ChebyshevKAN (SS, 2024) incorporates Chebyshev polynomials to parametrize the learnable functions. FastKAN (Li, 2024) uses faster Gaussian radial basis functions to approximate third-order B-spline functions.

Moreover, KAN has been applied as an alternative to the MLP in various domains. Convolutional KAN (Bodner et al., 2024) replaces the linear weight matrices in traditional convolutional networks with learnable spline function matrices. U-KAN (Li et al., 2024) integrates KAN layers into the U-Net architecture, demonstrating impressive accuracy and efficiency in several medical image segmentation tasks. KAN has also been used to bridge the gap between AI and science. Works such as PIKAN (Shukla et al., 2024) and PINN (Wang et al., 2024b) utilize KAN to build physics-informed machine learning models. This paper aims to introduce KAN into TSF and demonstrate the strong potential of KAN for representing time series data.

# 2.2 TIME SERIES FORECASTING

Traditional time series forecasting (TSF) methods, such as ARIMA (Zhang, 2003), can provide sufficient interpretability for the forecasting results but often fail to achieve satisfactory accuracy. In recent years, deep learning methods have dominated the field of TSF, mainly comprising CNN-based, Transformer-based, and MLP-based approaches. CNN-based models primarily apply convolution operations along the temporal dimension to extract temporal patterns. For example, MICN (Wang et al., 2023) and TimesNet (Wu et al., 2023) enhance the precision of sequence modeling by adjusting the receptive field to capture both short-term and long-term views within the sequences. ModernTCN (donghao & wang xue, 2024) advocates using large convolution kernels along the temporal dimension to capture both cross-time and cross-variable dependencies. Compared to CNN-based methods, which have limited receptive fields, Transformer-based methods offer global modeling capabilities, making them more suitable for handling long and complex sequence data; they have become a cornerstone of modern time series forecasting. Informer (Zhou et al., 2021) is one of the early implementations of Transformer models in TSF, making efficient forecasting possible by carefully modifying the internal Transformer architecture. PatchTST (Nie et al., 2023) divides the sequence into multiple patches along the temporal dimension, which are then fed into the Transformer, establishing it as an important benchmark in the time series domain. In contrast, iTransformer (Liu et al., 2024b) treats each variable as an independent token to capture cross-variable dependencies in multivariate time series. However, Transformer-based methods face challenges due to their large number of parameters and high memory consumption. Recent research on MLP-based methods has shown that, with appropriately designed architectures leveraging prior knowledge, simple MLPs can outperform complex Transformer-based methods. DLinear (Zeng et al., 2023), for instance, preprocesses sequences using a trend-season decomposition strategy. FITS (Xu et al., 2024b) performs linear transformations in the frequency domain, while TimeMixer (Wang et al., 2024a) uses MLPs to facilitate information interaction at different scales. These MLP-based methods have demonstrated strong performance in both forecasting accuracy and efficiency. Unlike the aforementioned methods, this paper introduces the novel KAN to TSF to represent time series data more accurately, and proposes a well-designed Decomposition-Learning-Mixing architecture to fully unlock the potential of KAN for time series forecasting.

# 2.3 TIME SERIES DECOMPOSITION

Real-world time series often consist of various underlying patterns. To leverage the characteristics of different patterns, recent approaches tend to decompose the series into multiple subcomponents, including trend-seasonal decomposition, multi-scale decomposition, and multi-period decomposition. DLinear (Zeng et al., 2023) employs moving averages to decouple the seasonal and trend components. SCINet (Liu et al., 2022) uses a hierarchical downsampling tree to iteratively extract and exchange information at multiple temporal resolutions. TimeMixer (Wang et al., 2024a) follows a fine-to-coarse principle to decompose the sequence into multiple scales across different time spans and further splits each scale into seasonal and periodic components. TimesNet (Wu et al., 2023) and PDF (Dai et al., 2024) utilize Fourier periodic analysis to decouple the sequence into multiple sub-period sequences based on the calculated period. Inspired by these works, this paper proposes a novel Decomposition-Learning-Mixing architecture, which examines time series from a multi-frequency perspective to accurately model the complex patterns within time series.

Figure 1: The architecture of TimeKAN, which mainly consists of the Cascaded Frequency Decomposition block, the Multi-order KAN Representation Learning block, and the Frequency Mixing block. Here, we divide the frequency range of the time series into three frequency bands as an example.

# 3 TIMEKAN

# 3.1 OVERALL ARCHITECTURE

Given a historical multivariate time series input $\mathbf{X} \in \mathbb{R}^{N \times T}$, the aim of time series forecasting is to predict the future output series $\mathbf{X}_O \in \mathbb{R}^{N \times F}$, where $T$ and $F$ are the look-back window length and the future window length, and $N$ is the number of variates. In this paper, we propose TimeKAN to tackle the challenges arising from the complex mixture of multi-frequency components in time series. The overall architecture of TimeKAN is shown in Figure 1. We adopt a variate-independent manner (Nie et al., 2023) and predict each univariate series independently. Each univariate input time series is denoted as $X \in \mathbb{R}^T$, and we treat a univariate series as the instance in the following calculations. In TimeKAN, the first step is to progressively remove the relatively high-frequency components using moving averages and generate multi-level sequences, followed by projecting each sequence into a high-dimensional space. Next, adhering to the Decomposition-Learning-Mixing design principle, we first design Cascaded Frequency Decomposition (CFD) blocks to obtain sequence representations for each frequency band, adopting a bottom-up cascading approach. Then, we propose Multi-order KAN Representation Learning (M-KAN) blocks to learn and represent specific temporal patterns within each frequency band. Finally, Frequency Mixing blocks recombine the frequency bands into the original format, ensuring that the Decomposition-Learning-Mixing process is repeatable. More details of TimeKAN are described as follows.

# 3.2 HIERARCHICAL SEQUENCE PREPROCESSING

Assume that we divide the frequency range of the raw time series $X$ into $k$ predefined frequency bands. We first use a moving average to progressively remove the relatively high-frequency components and generate multi-level sequences $\{x_{1},\dots ,x_{k}\}$, where $x_{i}\in \mathbb{R}^{\frac{T}{d^{i - 1}}}\ (i\in \{1,\dots ,k\})$. $x_{1}$ is equal to the input series $X$, and $d$ denotes the length of the moving average window. The process of producing the multi-level sequences is:

$$
x_{i} = \operatorname{AvgPool}(\operatorname{Padding}(x_{i-1})) \tag{1}
$$

After obtaining the multi-level sequences, each sequence is independently embedded into a higher dimension through a Linear layer:

$$
x_{i} = \operatorname{Linear}(x_{i}) \tag{2}
$$

where $x_{i} \in \mathbb{R}^{\frac{T}{d^{i-1}} \times D}$ and $D$ is the embedding dimension. We define $x_{1}$ as the highest-level sequence and $x_{k}$ as the lowest-level sequence. Notably, each lower-level sequence is derived from the sequence one level higher by removing a portion of the high-frequency information. This preprocessing occurs only once in TimeKAN.
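
As a concrete illustration, here is a minimal PyTorch sketch of this preprocessing under our own assumptions: the helper name `build_levels`, the per-time-step `Linear(1, D)` embedding, and the example shapes are illustrative, not taken from the paper's repository.

```python
import torch
import torch.nn as nn

def build_levels(x: torch.Tensor, k: int, d: int) -> list:
    """Eq. (1): progressively strip high frequencies with a moving average.
    x has shape (batch, T); level i has length T / d**(i-1)."""
    pool = nn.AvgPool1d(kernel_size=d, stride=d)
    levels = [x]
    for _ in range(k - 1):
        # AvgPool over time acts as a crude low-pass filter; the paper pads
        # first so lengths stay divisible by the window d.
        levels.append(pool(levels[-1].unsqueeze(1)).squeeze(1))
    return levels

# Eq. (2): embed each level per time step into D dimensions, one Linear per level.
batch, T, k, d, D = 8, 96, 3, 2, 32
x = torch.randn(batch, T)
embed = nn.ModuleList([nn.Linear(1, D) for _ in range(k)])
embedded = [emb(lvl.unsqueeze(-1))            # shape (batch, T / d**i, D)
            for emb, lvl in zip(embed, build_levels(x, k, d))]
```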

# 3.3 CASCADED FREQUENCY DECOMPOSITION

Real-world time series are often composed of multiple frequency components, with the low-frequency components representing long-term changes in the time series and the high-frequency components representing short-term fluctuations or unexpected events. These different frequency components complement each other and provide a comprehensive perspective for accurately modeling time series. Therefore, we design the Cascaded Frequency Decomposition (CFD) block to accurately decompose each frequency component in a cascaded way, laying the foundation for accurately modeling the different frequency components.

The aim of the CFD block is to obtain the representation of each frequency component. Here, we take obtaining the representation of the $i$-th frequency band as an example. To achieve this, we first employ the Fast Fourier Transform (FFT) to obtain the representation of $x_{i+1}$ in the frequency domain. Then, Zero-Padding is used to extend the length of the frequency-domain sequence so that it has the same length as the upper sequence $x_i$ after transforming back to the time domain. Next, we use the Inverse Fast Fourier Transform (IFFT) to transform it back into the time domain. We refer to this upsampling process as Frequency Upsampling, which ensures that the frequency information remains unchanged before and after the upsampling. The process of Frequency Upsampling can be described as:

$$
\hat{x}_{i} = \operatorname{IFFT}(\operatorname{Padding}(\operatorname{FFT}(x_{i+1}))) \tag{3}
$$

Here, $\hat{x}_i$ and $x_i$ have the same sequence length. Notably, compared to $x_i$, $\hat{x}_i$ lacks the $i$-th frequency component. The reason is that $x_{i+1}$ was originally formed by removing the $i$-th frequency component from $x_i$ in the hierarchical sequence preprocessing, and $x_{i+1}$ is now transformed into $\hat{x}_i$ through a lossless frequency conversion process, thereby aligning its length with $x_i$ in the time domain. Therefore, to get the series representation of the $i$-th frequency component $f_i$ in the time domain, we only need the residual between $x_i$ and $\hat{x}_i$:

$$
f_{i} = x_{i} - \hat{x}_{i} \tag{4}
$$
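
Frequency Upsampling can be sketched with PyTorch's real FFT as follows; the amplitude rescaling that compensates for `irfft`'s length-dependent normalization is our own detail, not spelled out in the paper.

```python
import torch

def frequency_upsample(x: torch.Tensor, target_len: int) -> torch.Tensor:
    """Eq. (3): FFT -> zero-pad the spectrum -> IFFT along the time axis, so
    the longer series carries exactly the frequency content of the short one.
    x: (batch, L, D) with L <= target_len."""
    L = x.shape[1]
    spec = torch.fft.rfft(x, dim=1)                     # (batch, L//2 + 1, D)
    n_pad = target_len // 2 + 1 - spec.shape[1]
    pad = torch.zeros(x.shape[0], n_pad, x.shape[2],
                      dtype=spec.dtype, device=spec.device)
    spec = torch.cat([spec, pad], dim=1)                # zero-pad the high bins
    # rescale so amplitudes survive the change in transform length
    return torch.fft.irfft(spec, n=target_len, dim=1) * (target_len / L)

# Eq. (4): the i-th band is then the residual between adjacent levels,
# f_i = x_i - frequency_upsample(x_next, x_i.shape[1])
```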

# 3.4 MULTI-ORDER KAN REPRESENTATION LEARNING

Given the multi-level frequency component representations $\{f_1, \dots, f_k\}$ generated by the CFD blocks, we propose Multi-order KAN Representation Learning (M-KAN) blocks to learn specific representations and temporal dependencies at each frequency. M-KAN adopts a dual-branch parallel architecture that separates temporal representation learning from temporal dependency learning in a frequency-specific way, using Multi-order KANs to learn the representation of each frequency component and employing Depthwise Convolution to capture the temporal dependencies. The details of Depthwise Convolution and Multi-order KANs are given below.

Depthwise Convolution To separate the modeling of temporal dependencies from learning the sequence representation, we adopt a specific type of group convolution known as Depthwise Convolution, in which the number of groups matches the embedding dimension. Depthwise Convolution employs $D$ groups of convolution kernels to perform independent convolution operations on the series of each channel. This allows the model to focus on capturing temporal patterns without interference from inter-channel relationships. The process of Depthwise Convolution is:

$$
f_{i,1} = \operatorname{Conv}_{D \rightarrow D}(f_{i}, \text{group} = D) \tag{5}
$$
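
In PyTorch terms, this branch is a grouped 1D convolution with `groups` set to the channel count; the kernel size below is a hypothetical choice, since the paper does not state it here.

```python
import torch.nn as nn

D = 32  # embedding dimension
# One kernel per channel (groups=D): nothing mixes across the D channels,
# so this branch models temporal dependencies only, as in Eq. (5).
depthwise = nn.Conv1d(in_channels=D, out_channels=D,
                      kernel_size=3, padding=1, groups=D)
# applies to tensors of shape (batch, D, L), i.e. f_i with time last
```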

Multi-order KANs Compared with a traditional MLP, KAN replaces linear weights with learnable univariate functions, allowing complex nonlinear relationships to be modeled with fewer parameters and greater interpretability (Xu et al., 2024a). Assume that a KAN is composed of $L + 1$ layers of neurons and the number of neurons in layer $l$ is $n_{l}$. The transmission relationship between the $j$-th neuron in layer $l + 1$ and all neurons in layer $l$ can be expressed as $z_{l + 1,j} = \sum_{i = 1}^{n_l}\phi_{l,j,i}(z_{l,i})$, where $z_{l + 1,j}$ is the $j$-th neuron at layer $l + 1$ and $z_{l,i}$ is the $i$-th neuron at layer $l$. We can simply understand that each neuron is connected to the neurons in the previous layer through a learnable univariate function $\phi$. The vanilla KAN (Liu et al., 2024c) employs spline functions as the learnable univariate basis functions $\phi$, but suffers from a complex recursive computation process that hinders its efficiency. Here, we adopt ChebyshevKAN (SS, 2024) to learn the representation of each frequency component, i.e., channel learning. ChebyshevKAN is constructed from linear combinations of Chebyshev polynomials: the learnable univariate functions $\phi$ are generated as linear combinations of Chebyshev polynomials of different orders. The Chebyshev polynomial is defined by:

$$
T_{n}(x) = \cos(n \arccos(x)) \tag{6}
$$

where $n$ is the highest order of the Chebyshev polynomials, and the complexity of a Chebyshev polynomial increases with its order. A 1-layer ChebyshevKAN applied to the channel dimension can be expressed as:

$$
\phi_{o}(x) = \sum_{j = 1}^{D} \sum_{i = 0}^{n} \Theta_{o,j,i} T_{i}(\tanh(x_{j})) \tag{7}
$$

$$
\operatorname{KAN}(x) = \left\{ \begin{array}{c} \phi_{1}(x) \\ \vdots \\ \phi_{D}(x) \end{array} \right\} \tag{8}
$$

where $o$ is the index of the output neuron and $\Theta \in \mathbb{R}^{D\times D\times (n + 1)}$ holds the learnable coefficients used to linearly combine the Chebyshev polynomials. It is worth noting that the frequency components within the time series exhibit increasingly complex temporal dynamics as the frequency increases, necessitating a network with stronger representation capabilities to learn these characteristics. ChebyshevKAN allows the highest order of the Chebyshev polynomials $n$ to be adjusted to enhance its representation ability. Therefore, from the low-frequency to the high-frequency components, we adopt an increasing order of Chebyshev polynomials to align each frequency component with the complexity of its KAN, thereby accurately learning the representations of the different frequency components. We refer to this group of KANs with varying highest Chebyshev polynomial orders as Multi-order KANs. We set a lower-bound order $b$, and the representation learning process for $f_{i}$ can be expressed as:

$$
f_{i,2} = \operatorname{KAN}(f_{i}, \text{order} = b + k - i) \tag{9}
$$

The final output of the M-KAN block is the sum of the outputs from the Multi-order KANs and the Depthwise Convolution:

$$
\hat{f}_{i} = f_{i,1} + f_{i,2} \tag{10}
$$
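
A self-contained sketch of a 1-layer ChebyshevKAN and the multi-order assignment of Eq. (9), assuming PyTorch; the class name, the initialization scale, and the `einsum` layout are our illustration rather than the reference implementation.

```python
import torch
import torch.nn as nn

class ChebyKANLayer(nn.Module):
    """1-layer ChebyshevKAN over the channel dimension (Eqs. 6-8): each output
    channel is a learnable linear combination of Chebyshev polynomials
    T_0..T_n evaluated on tanh-squashed inputs."""
    def __init__(self, dim: int, order: int):
        super().__init__()
        self.order = order
        # Theta in R^{D x D x (n+1)}: one coefficient per (output, input, degree)
        self.theta = nn.Parameter(torch.randn(dim, dim, order + 1) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.tanh(x)                                  # into [-1, 1]
        degrees = torch.arange(self.order + 1, device=x.device)
        # Eq. (6): T_n(x) = cos(n * arccos(x)), stacked over all degrees
        T = torch.cos(degrees * torch.acos(x).unsqueeze(-1))    # (..., D, n+1)
        return torch.einsum('...ji,oji->...o', T, self.theta)   # Eq. (7)

# Eq. (9): level i (1 = highest frequency) gets a KAN of order b + k - i,
# so higher-frequency bands receive higher-order, more expressive KANs.
b, k, D = 2, 3, 32
kans = nn.ModuleList([ChebyKANLayer(D, order=b + k - i) for i in range(1, k + 1)])
```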

# 3.5 FREQUENCY MIXING

After specifically learning the representation of each frequency component, we need to re-transform the frequency representations into the form of multi-level sequences before entering the next CFD block, ensuring that the Decomposition-Learning-Mixing process is repeatable. We therefore design Frequency Mixing blocks to convert the frequency component at the $i$-th level, $\hat{f}_i$, back into the multi-level sequence $x_i$, enabling it to serve as input for the next CFD block. To do so, we simply need to supplement the frequency information from levels $i + 1$ to $k$ back into the $i$-th level. Thus, we employ Frequency Upsampling again to incrementally reintegrate the information into the higher-frequency components:

$$
x_{i} = \operatorname{IFFT}(\operatorname{Padding}(\operatorname{FFT}(x_{i+1}))) + \hat{f}_{i} \tag{11}
$$

For the last Frequency Mixing block, we extract the highest-level sequence $x_{1}$ and use a simple linear layer to produce the forecasting results $X_{O}$:

$$
X_{O} = \operatorname{Linear}(x_{1}) \tag{12}
$$

Due to the use of a variate-independent strategy, we also need to stack the predicted results of all variables together to obtain the final multivariate prediction $\mathbf{X}_{O}$.
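
The mixing pass can be sketched by reusing the `frequency_upsample` helper from the CFD sketch above; treating the lowest level as passing through unchanged is our assumption.

```python
def frequency_mix(f_hat: list) -> list:
    """Eq. (11): rebuild the multi-level sequences bottom-up by adding each
    learned band back onto the frequency-upsampled lower level."""
    k = len(f_hat)
    x = [None] * k
    x[-1] = f_hat[-1]                      # lowest band: nothing below to add
    for i in range(k - 2, -1, -1):         # from level k-1 up to level 1
        x[i] = frequency_upsample(x[i + 1], f_hat[i].shape[1]) + f_hat[i]
    return x
```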

Table 1: Full results of the multivariate long-term forecasting comparison. The input sequence length is set to 96 for all baselines and the prediction lengths are $F \in \{96, 192, 336, 720\}$. Avg denotes the average over all four prediction lengths.

<table><tr><td colspan="2">Models</td><td colspan="2">TimeKAN Ours</td><td colspan="2">TimeMixer 2024a</td><td colspan="2">iTransformer 2024b</td><td colspan="2">Time-FFM 2024a</td><td colspan="2">PatchTST 2023</td><td colspan="2">TimesNet 2023</td><td colspan="2">MICN 2023</td><td colspan="2">DLinear 2023</td><td colspan="2">FreTS 2024</td><td colspan="2">FiLM 2022a</td><td colspan="2">FEDformer 2022b</td><td colspan="2">Autoformer 2021</td></tr><tr><td colspan="2">Metric</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td></tr>
<tr><td rowspan="5">ETTh1</td><td>96</td><td>0.367</td><td>0.395</td><td>0.385</td><td>0.402</td><td>0.386</td><td>0.405</td><td>0.385</td><td>0.400</td><td>0.460</td><td>0.447</td><td>0.384</td><td>0.402</td><td>0.426</td><td>0.446</td><td>0.397</td><td>0.412</td><td>0.395</td><td>0.407</td><td>0.438</td><td>0.433</td><td>0.395</td><td>0.424</td><td>0.449</td><td>0.459</td></tr><tr><td>192</td><td>0.414</td><td>0.420</td><td>0.443</td><td>0.430</td><td>0.441</td><td>0.436</td><td>0.439</td><td>0.430</td><td>0.512</td><td>0.477</td><td>0.436</td><td>0.429</td><td>0.454</td><td>0.464</td><td>0.446</td><td>0.441</td><td>0.490</td><td>0.477</td><td>0.494</td><td>0.466</td><td>0.469</td><td>0.470</td><td>0.500</td><td>0.482</td></tr><tr><td>336</td><td>0.445</td><td>0.434</td><td>0.512</td><td>0.470</td><td>0.487</td><td>0.458</td><td>0.480</td><td>0.449</td><td>0.546</td><td>0.496</td><td>0.638</td><td>0.469</td><td>0.493</td><td>0.487</td><td>0.489</td><td>0.467</td><td>0.510</td><td>0.480</td><td>0.547</td><td>0.495</td><td>0.490</td><td>0.477</td><td>0.521</td><td>0.496</td></tr><tr><td>720</td><td>0.444</td><td>0.459</td><td>0.497</td><td>0.476</td><td>0.503</td><td>0.491</td><td>0.462</td><td>0.456</td><td>0.544</td><td>0.517</td><td>0.521</td><td>0.500</td><td>0.526</td><td>0.526</td><td>0.513</td><td>0.510</td><td>0.568</td><td>0.538</td><td>0.586</td><td>0.538</td><td>0.598</td><td>0.544</td><td>0.514</td><td>0.512</td></tr><tr><td>Avg</td><td>0.417</td><td>0.427</td><td>0.459</td><td>0.444</td><td>0.454</td><td>0.447</td><td>0.442</td><td>0.434</td><td>0.516</td><td>0.484</td><td>0.495</td><td>0.450</td><td>0.475</td><td>0.480</td><td>0.461</td><td>0.457</td><td>0.491</td><td>0.475</td><td>0.516</td><td>0.483</td><td>0.498</td><td>0.484</td><td>0.496</td><td>0.487</td></tr>
<tr><td rowspan="5">ETTh2</td><td>96</td><td>0.290</td><td>0.340</td><td>0.289</td><td>0.342</td><td>0.297</td><td>0.349</td><td>0.301</td><td>0.351</td><td>0.308</td><td>0.355</td><td>0.340</td><td>0.374</td><td>0.372</td><td>0.424</td><td>0.340</td><td>0.394</td><td>0.332</td><td>0.387</td><td>0.322</td><td>0.364</td><td>0.358</td><td>0.397</td><td>0.346</td><td>0.388</td></tr><tr><td>192</td><td>0.375</td><td>0.392</td><td>0.378</td><td>0.397</td><td>0.380</td><td>0.400</td><td>0.378</td><td>0.397</td><td>0.393</td><td>0.405</td><td>0.402</td><td>0.414</td><td>0.492</td><td>0.492</td><td>0.482</td><td>0.479</td><td>0.451</td><td>0.457</td><td>0.405</td><td>0.414</td><td>0.429</td><td>0.439</td><td>0.456</td><td>0.452</td></tr><tr><td>336</td><td>0.423</td><td>0.435</td><td>0.432</td><td>0.434</td><td>0.428</td><td>0.432</td><td>0.422</td><td>0.431</td><td>0.427</td><td>0.436</td><td>0.452</td><td>0.452</td><td>0.607</td><td>0.555</td><td>0.591</td><td>0.541</td><td>0.466</td><td>0.473</td><td>0.435</td><td>0.445</td><td>0.496</td><td>0.487</td><td>0.482</td><td>0.486</td></tr><tr><td>720</td><td>0.443</td><td>0.449</td><td>0.464</td><td>0.464</td><td>0.427</td><td>0.445</td><td>0.427</td><td>0.444</td><td>0.436</td><td>0.450</td><td>0.462</td><td>0.468</td><td>0.824</td><td>0.655</td><td>0.839</td><td>0.661</td><td>0.485</td><td>0.471</td><td>0.445</td><td>0.457</td><td>0.463</td><td>0.474</td><td>0.515</td><td>0.511</td></tr><tr><td>Avg</td><td>0.383</td><td>0.404</td><td>0.390</td><td>0.409</td><td>0.383</td><td>0.407</td><td>0.382</td><td>0.406</td><td>0.391</td><td>0.411</td><td>0.414</td><td>0.427</td><td>0.574</td><td>0.531</td><td>0.563</td><td>0.519</td><td>0.433</td><td>0.446</td><td>0.402</td><td>0.420</td><td>0.437</td><td>0.449</td><td>0.450</td><td>0.459</td></tr>
<tr><td rowspan="5">ETTm1</td><td>96</td><td>0.322</td><td>0.361</td><td>0.317</td><td>0.356</td><td>0.334</td><td>0.368</td><td>0.336</td><td>0.369</td><td>0.352</td><td>0.374</td><td>0.338</td><td>0.375</td><td>0.365</td><td>0.387</td><td>0.346</td><td>0.374</td><td>0.337</td><td>0.374</td><td>0.353</td><td>0.370</td><td>0.379</td><td>0.419</td><td>0.505</td><td>0.475</td></tr><tr><td>192</td><td>0.357</td><td>0.383</td><td>0.367</td><td>0.384</td><td>0.377</td><td>0.391</td><td>0.378</td><td>0.389</td><td>0.390</td><td>0.393</td><td>0.374</td><td>0.387</td><td>0.403</td><td>0.408</td><td>0.382</td><td>0.391</td><td>0.382</td><td>0.398</td><td>0.387</td><td>0.426</td><td>0.441</td><td>0.553</td><td>0.496</td><td></td></tr><tr><td>336</td><td>0.382</td><td>0.401</td><td>0.391</td><td>0.406</td><td>0.426</td><td>0.420</td><td>0.411</td><td>0.410</td><td>0.421</td><td>0.414</td><td>0.410</td><td>0.411</td><td>0.436</td><td>0.431</td><td>0.415</td><td>0.415</td><td>0.420</td><td>0.423</td><td>0.421</td><td>0.408</td><td>0.445</td><td>0.459</td><td>0.621</td><td>0.537</td></tr><tr><td>720</td><td>0.445</td><td>0.435</td><td>0.454</td><td>0.441</td><td>0.491</td><td>0.459</td><td>0.469</td><td>0.441</td><td>0.462</td><td>0.449</td><td>0.478</td><td>0.450</td><td>0.489</td><td>0.462</td><td>0.473</td><td>0.451</td><td>0.490</td><td>0.471</td><td>0.481</td><td>0.441</td><td>0.543</td><td>0.490</td><td>0.671</td><td>0.561</td></tr><tr><td>Avg</td><td>0.376</td><td>0.395</td><td>0.382</td><td>0.397</td><td>0.407</td><td>0.410</td><td>0.399</td><td>0.402</td><td>0.406</td><td>0.407</td><td>0.400</td><td>0.406</td><td>0.423</td><td>0.422</td><td>0.404</td><td>0.408</td><td>0.407</td><td>0.417</td><td>0.412</td><td>0.402</td><td>0.448</td><td>0.452</td><td>0.588</td><td>0.517</td></tr>
<tr><td rowspan="5">ETTm2</td><td>96</td><td>0.174</td><td>0.255</td><td>0.175</td><td>0.257</td><td>0.180</td><td>0.264</td><td>0.181</td><td>0.267</td><td>0.183</td><td>0.270</td><td>0.187</td><td>0.267</td><td>0.197</td><td>0.296</td><td>0.193</td><td>0.293</td><td>0.186</td><td>0.275</td><td>0.183</td><td>0.266</td><td>0.203</td><td>0.287</td><td>0.255</td><td>0.339</td></tr><tr><td>192</td><td>0.239</td><td>0.299</td><td>0.240</td><td>0.302</td><td>0.250</td><td>0.309</td><td>0.247</td><td>0.308</td><td>0.255</td><td>0.314</td><td>0.249</td><td>0.309</td><td>0.284</td><td>0.361</td><td>0.284</td><td>0.361</td><td>0.259</td><td>0.323</td><td>0.248</td><td>0.305</td><td>0.269</td><td>0.328</td><td>0.281</td><td>0.340</td></tr><tr><td>336</td><td>0.301</td><td>0.340</td><td>0.303</td><td>0.343</td><td>0.311</td><td>0.348</td><td>0.309</td><td>0.347</td><td>0.309</td><td>0.347</td><td>0.321</td><td>0.351</td><td>0.381</td><td>0.429</td><td>0.382</td><td>0.429</td><td>0.349</td><td>0.386</td><td>0.309</td><td>0.343</td><td>0.325</td><td>0.366</td><td>0.339</td><td>0.372</td></tr><tr><td>720</td><td>0.395</td><td>0.396</td><td>0.392</td><td>0.396</td><td>0.412</td><td>0.407</td><td>0.406</td><td>0.404</td><td>0.412</td><td>0.404</td><td>0.408</td><td>0.403</td><td>0.549</td><td>0.522</td><td>0.558</td><td>0.525</td><td>0.559</td><td>0.511</td><td>0.410</td><td>0.400</td><td>0.421</td><td>0.415</td><td>0.433</td><td>0.432</td></tr><tr><td>Avg</td><td>0.277</td><td>0.322</td><td>0.277</td><td>0.324</td><td>0.288</td><td>0.332</td><td>0.286</td><td>0.332</td><td>0.290</td><td>0.334</td><td>0.291</td><td>0.333</td><td>0.353</td><td>0.402</td><td>0.354</td><td>0.402</td><td>0.339</td><td>0.374</td><td>0.288</td><td>0.328</td><td>0.305</td><td>0.349</td><td>0.327</td><td>0.371</td></tr>
<tr><td rowspan="5">Weather</td><td>96</td><td>0.162</td><td>0.208</td><td>0.163</td><td>0.209</td><td>0.174</td><td>0.214</td><td>0.191</td><td>0.230</td><td>0.186</td><td>0.227</td><td>0.172</td><td>0.220</td><td>0.198</td><td>0.261</td><td>0.195</td><td>0.252</td><td>0.171</td><td>0.227</td><td>0.195</td><td>0.236</td><td>0.217</td><td>0.296</td><td>0.266</td><td>0.336</td></tr><tr><td>192</td><td>0.207</td><td>0.249</td><td>0.211</td><td>0.254</td><td>0.221</td><td>0.254</td><td>0.236</td><td>0.267</td><td>0.234</td><td>0.265</td><td>0.219</td><td>0.261</td><td>0.239</td><td>0.299</td><td>0.237</td><td>0.295</td><td>0.218</td><td>0.280</td><td>0.239</td><td>0.271</td><td>0.276</td><td>0.336</td><td>0.307</td><td>0.367</td></tr><tr><td>336</td><td>0.263</td><td>0.290</td><td>0.263</td><td>0.293</td><td>0.278</td><td>0.296</td><td>0.289</td><td>0.303</td><td>0.284</td><td>0.301</td><td>0.246</td><td>0.337</td><td>0.285</td><td>0.336</td><td>0.282</td><td>0.331</td><td>0.265</td><td>0.317</td><td>0.289</td><td>0.306</td><td>0.339</td><td>0.380</td><td>0.359</td><td></td></tr><tr><td>720</td><td>0.338</td><td>0.340</td><td>0.344</td><td>0.348</td><td>0.358</td><td>0.347</td><td>0.362</td><td>0.350</td><td>0.356</td><td>0.349</td><td>0.365</td><td>0.359</td><td>0.351</td><td>0.388</td><td>0.345</td><td>0.382</td><td>0.326</td><td>0.351</td><td>0.360</td><td>0.351</td><td>0.403</td><td>0.428</td><td>0.419</td><td>0.428</td></tr><tr><td>Avg</td><td>0.242</td><td>0.272</td><td>0.245</td><td>0.276</td><td>0.258</td><td>0.278</td><td>0.270</td><td>0.288</td><td>0.265</td><td>0.285</td><td>0.251</td><td>0.294</td><td>0.268</td><td>0.321</td><td>0.265</td><td>0.315</td><td>0.245</td><td>0.294</td><td>0.271</td><td>0.290</td><td>0.309</td><td>0.360</td><td>0.338</td><td>0.382</td></tr>
<tr><td rowspan="5">Electricity</td><td>96</td><td>0.174</td><td>0.266</td><td>0.153</td><td>0.245</td><td>0.148</td><td>0.240</td><td>0.198</td><td>0.282</td><td>0.190</td><td>0.296</td><td>0.168</td><td>0.272</td><td>0.180</td><td>0.293</td><td>0.210</td><td>0.302</td><td>0.171</td><td>0.260</td><td>0.198</td><td>0.274</td><td>0.193</td><td>0.308</td><td>0.201</td><td>0.317</td></tr><tr><td>192</td><td>0.182</td><td>0.273</td><td>0.166</td><td>0.257</td><td>0.162</td><td>0.253</td><td>0.199</td><td>0.285</td><td>0.199</td><td>0.304</td><td>0.184</td><td>0.322</td><td>0.189</td><td>0.302</td><td>0.210</td><td>0.305</td><td>0.177</td><td>0.268</td><td>0.198</td><td>0.278</td><td>0.201</td><td>0.315</td><td>0.222</td><td>0.334</td></tr><tr><td>336</td><td>0.197</td><td>0.286</td><td>0.185</td><td>0.275</td><td>0.178</td><td>0.269</td><td>0.212</td><td>0.298</td><td>0.217</td><td>0.319</td><td>0.198</td><td>0.300</td><td>0.198</td><td>0.312</td><td>0.223</td><td>0.319</td><td>0.190</td><td>0.284</td><td>0.217</td><td>0.300</td><td>0.214</td><td>0.329</td><td>0.231</td><td>0.443</td></tr><tr><td>720</td><td>0.236</td><td>0.320</td><td>0.224</td><td>0.312</td><td>0.225</td><td>0.317</td><td>0.253</td><td>0.330</td><td>0.258</td><td>0.352</td><td>0.220</td><td>0.320</td><td>0.217</td><td>0.330</td><td>0.258</td><td>0.350</td><td>0.228</td><td>0.316</td><td>0.278</td><td>0.356</td><td>0.246</td><td>0.355</td><td>0.254</td><td>0.361</td></tr><tr><td>Avg</td><td>0.197</td><td>0.286</td><td>0.182</td><td>0.272</td><td>0.178</td><td>0.270</td><td>0.270</td><td>0.288</td><td>0.216</td><td>0.318</td><td>0.193</td><td>0.304</td><td>0.196</td><td>0.309</td><td>0.225</td><td>0.319</td><td>0.192</td><td>0.282</td><td>0.223</td><td>0.302</td><td>0.214</td><td>0.327</td><td>0.227</td><td>0.338</td></tr></table>

# 4 EXPERIMENTS

Datasets We conduct extensive experiments on six real-world time series datasets, including Weather, ETTh1, ETTh2, ETTm1, ETTm2 and Electricity, for long-term forecasting. Following previous work (Wu et al., 2021), we split the ETT-series datasets into training, validation, and test sets in a ratio of 6:2:2. For the remaining datasets, we adopt a split ratio of 7:1:2.

Baseline We carefully select eleven well-acknowledged methods in the field of long-term time series forecasting as our baselines, including (1) Transformer-based methods: Autoformer (2021), FEDformer (2022b), PatchTST (2023), iTransformer (2024b); (2) MLP-based methods: DLinear (2023) and TimeMixer (2024a); (3) CNN-based methods: MICN (2023), TimesNet (2023); (4) frequency-based methods: FreTS (2024) and FiLM (2022a); and a time series foundation model, Time-FFM (2024a).

Experimental Settings To ensure fair comparisons, we adopt the same look-back window length $T = 96$ and the same prediction lengths $F \in \{96, 192, 336, 720\}$. We utilize the L2 loss for model training and use Mean Square Error (MSE) and Mean Absolute Error (MAE) metrics to evaluate the performance of each method.

# 4.1 MAIN RESULTS

The comprehensive forecasting results are presented in Table 1, where the best results are highlighted in bold red and the second-best are underlined in blue. A lower MSE/MAE indicates a more accurate prediction. We observe that TimeKAN demonstrates superior predictive performance across all datasets, except for the Electricity dataset, where iTransformer achieves the best result. This is due to iTransformer's use of channel-wise self-attention mechanisms to model inter-variable dependencies, which is particularly effective for high-dimensional datasets like Electricity. Additionally, both TimeKAN and TimeMixer perform consistently well in long-term forecasting tasks, showcasing the generalizability of well-designed time-series decomposition architectures for accurate predictions. Compared with other state-of-the-art methods, TimeKAN introduces a novel

Table 2: Ablation study of the Frequency Upsampling. The best results are in bold.

<table><tr><td rowspan="2">Datasets / Metric</td><td colspan="2">ETTh1</td><td colspan="2">ETTh2</td><td colspan="2">ETTm1</td><td colspan="2">ETTm2</td><td colspan="2">Weather</td><td colspan="2">Electricity</td></tr><tr><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td></tr><tr><td>Linear Mapping</td><td>0.401</td><td>0.413</td><td>0.312</td><td>0.362</td><td>0.328</td><td>0.365</td><td>0.180</td><td>0.263</td><td>0.164</td><td>0.211</td><td>0.184</td><td>0.275</td></tr><tr><td>Linear Interpolation</td><td>0.383</td><td>0.398</td><td>0.296</td><td>0.347</td><td>0.336</td><td>0.370</td><td>0.181</td><td>0.263</td><td>0.165</td><td>0.210</td><td>0.196</td><td>0.277</td></tr><tr><td>Transposed Convolution</td><td>0.377</td><td>0.407</td><td>0.290</td><td>0.344</td><td>0.326</td><td>0.366</td><td>0.178</td><td>0.261</td><td>0.163</td><td>0.211</td><td>0.188</td><td>0.274</td></tr><tr><td>Frequency Upsampling</td><td>0.367</td><td>0.395</td><td>0.290</td><td>0.340</td><td>0.322</td><td>0.361</td><td>0.174</td><td>0.255</td><td>0.162</td><td>0.208</td><td>0.174</td><td>0.266</td></tr></table>

Table 3: Ablation study of the Multi-order KANs. The best results are in bold.

<table><tr><td rowspan="2">Datasets / Metric</td><td colspan="2">ETTh1</td><td colspan="2">ETTh2</td><td colspan="2">ETTm1</td><td colspan="2">ETTm2</td><td colspan="2">Weather</td></tr><tr><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td></tr><tr><td>MLPs</td><td>0.376</td><td>0.397</td><td>0.298</td><td>0.348</td><td>0.319</td><td>0.361</td><td>0.178</td><td>0.264</td><td>0.162</td><td>0.211</td></tr><tr><td>Fixed Low-order KANs</td><td>0.376</td><td>0.398</td><td>0.292</td><td>0.341</td><td>0.327</td><td>0.366</td><td>0.175</td><td>0.257</td><td>0.164</td><td>0.211</td></tr><tr><td>Fixed High-order KANs</td><td>0.380</td><td>0.407</td><td>0.310</td><td>0.363</td><td>0.327</td><td>0.269</td><td>0.176</td><td>0.257</td><td>0.164</td><td>0.212</td></tr><tr><td>Multi-order KANs</td><td>0.367</td><td>0.395</td><td>0.290</td><td>0.340</td><td>0.322</td><td>0.361</td><td>0.174</td><td>0.255</td><td>0.162</td><td>0.208</td></tr></table>

Decomposition-Learning-Mixing framework, closely integrating the characteristics of Multi-order KANs with this hierarchical architecture, enabling superior performance in a wide range of long-term forecasting tasks.

# 4.2 ABLATION STUDY

In this section, we investigate several key components of TimeKAN, including Frequency Upsampling, Depthwise Convolution and Multi-order KANs.

Frequency Upsampling To investigate the effectiveness of Frequency Upsampling, we compare it with three alternative upsampling methods that may not preserve frequency information before and after transformation: (1) Linear Mapping; (2) Linear Interpolation; and (3) Transposed Convolution. As shown in Table 2, replacing Frequency Upsampling with any of these three methods results in a decline in performance. This indicates that these upsampling techniques fail to maintain the integrity of the frequency information after the transformation, rendering the Decomposition-Learning-Mixing framework ineffective. This strongly demonstrates that Frequency Upsampling, as a non-parametric method, is an irreplaceable component of the TimeKAN framework.

Multi-order KANs We design the following variants to investigate the effectiveness of Multi-order KANs: (1) MLPs, which replace each KAN with an MLP; (2) Fixed Low-order KANs, which use a KAN of order 2 at each frequency level; and (3) Fixed High-order KANs, which use a KAN of order 5 at each frequency level. The comparison results are shown in Table 3. Overall, Multi-order KANs achieve the best performance. Compared to MLPs, Multi-order KANs perform significantly better, demonstrating that well-designed KANs possess stronger representation capabilities than MLPs and are a compelling alternative. Both Fixed Low-order KANs and Fixed High-order KANs perform worse than Multi-order KANs, validating our design choice of incrementally increasing the order of the KANs to adapt to the representation of different frequency components. Thus, the learnable functions of KANs are indeed a double-edged sword; achieving satisfactory results requires selecting the appropriate level of function complexity for the specific task.

Depthwise Convolution To assess the effectiveness of Depthwise Convolution, we replace it with the following choices: (1) w/o Depthwise Convolution; (2) Standard Convolution; (3) Multi-head Self-Attention. The results are shown in Table 4. Overall, Depthwise Convolution is the best choice. We clearly observe that removing Depthwise Convolution or replacing it with Multi-head Self-Attention leads to a significant drop in performance, highlighting the effectiveness of using convolution to learn temporal dependencies. When Depthwise Convolution is replaced with Standard
Table 4: Ablation study of the Depthwise Convolution. The best results are in bold.

<table><tr><td rowspan="2">Datasets / Metric</td><td colspan="2">ETTh1</td><td colspan="2">ETTh2</td><td colspan="2">ETTm1</td><td colspan="2">ETTm2</td><td colspan="2">Weather</td></tr><tr><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td></tr><tr><td>w/o Depthwise Conv</td><td>0.379</td><td>0.397</td><td>0.296</td><td>0.343</td><td>0.337</td><td>0.373</td><td>0.180</td><td>0.263</td><td>0.168</td><td>0.211</td></tr><tr><td>Standard Conv</td><td>0.364</td><td>0.393</td><td>0.295</td><td>0.345</td><td>0.323</td><td>0.364</td><td>0.180</td><td>0.264</td><td>0.162</td><td>0.210</td></tr><tr><td>Self-Attention</td><td>0.377</td><td>0.406</td><td>0.293</td><td>0.342</td><td>0.329</td><td>0.365</td><td>0.184</td><td>0.272</td><td>0.174</td><td>0.225</td></tr><tr><td>Depthwise Conv</td><td>0.367</td><td>0.395</td><td>0.290</td><td>0.340</td><td>0.322</td><td>0.361</td><td>0.174</td><td>0.255</td><td>0.162</td><td>0.208</td></tr></table>
Figure 2: Comparison of forecasting performance between TimeKAN and three other models with varying look-back windows on the ETTm2 and Weather datasets. The look-back windows are selected as $T \in \{48,96,192,336,512,720\}$, and the prediction length is fixed to $F = 96$.

Convolution, performance declines on most metrics, implying that extracting temporal dependencies for each channel individually with Depthwise Convolution, free from interference from inter-channel relationships, is a sound design.
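The distinction between the two convolution variants boils down to the `groups` argument; a minimal sketch (the hyperparameters are illustrative, not the paper's settings):

```python
import torch.nn as nn

channels = 64
# Depthwise: groups == channels, so each channel gets its own temporal filter
# and no inter-channel mixing occurs.
depthwise = nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=channels)
# Standard: groups == 1, mixing all channels and using `channels` times more weights.
standard = nn.Conv1d(channels, channels, kernel_size=3, padding=1, groups=1)
```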
Varying Look-back Window In principle, extending the look-back window provides more information for predicting the future, leading to a potential improvement in forecasting performance. An effective long-term TSF method equipped with a strong temporal relation extraction capability should improve its forecasting performance as the look-back window length increases (Zeng et al., 2023). As a model based on frequency decomposition learning, TimeKAN should achieve better predictive performance as the look-back window lengthens, since more incremental frequency information becomes available for prediction. To demonstrate that TimeKAN benefits from a larger look-back window, we select look-back window lengths from $T \in \{48,96,192,336,512,720\}$ while keeping the prediction length fixed at 96. As demonstrated in Figure 2, our TimeKAN consistently reduces the MSE as the look-back window increases, indicating that TimeKAN can effectively learn from long time series.
# 4.3 MODEL EFFICIENCY

We compare TimeKAN with the MLP-based method TimeMixer and the Transformer-based methods iTransformer and PatchTST in terms of model parameters and Multiply-Accumulate Operations (MACs), to validate that TimeKAN is a lightweight and efficient architecture. To ensure a fair comparison, we fix the prediction length $F = 96$ and input length $T = 96$, and set the input batch size to 32. The comparison results are summarized in Table 5. It is clear that our TimeKAN demonstrates significant advantages in both model parameter size and MACs, particularly when compared to Transformer-based models. For instance, on the Electricity dataset, the parameter count of PatchTST is nearly 295 times that of TimeKAN, and its MACs are almost 118 times greater. Even when compared to the relatively lightweight MLP-based method TimeMixer, TimeKAN shows superior efficiency. On the Weather dataset, TimeKAN requires only $20.05\%$ of the parameters needed by TimeMixer and only $36.14\%$ of the MACs. This remarkable efficiency advantage is primarily attributed to the lightweight architectural design. The main computations of the TimeKAN model are concentrated
Table 5: A comparison of model parameters (Params) and multiply-accumulate operations (MACs) for TimeKAN and three other models. To ensure a fair comparison, we fix the prediction length $F = 96$ and the input length $T = 96$, and set the input batch size to 32. The lowest computational cost is highlighted in bold.

<table><tr><td rowspan="2">Datasets / Metric</td><td colspan="2">ETTh1</td><td colspan="2">ETTh2</td><td colspan="2">ETTm1</td><td colspan="2">ETTm2</td><td colspan="2">Weather</td><td colspan="2">Electricity</td></tr><tr><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td></tr><tr><td>TimeMixer</td><td>75.50K</td><td>20.37M</td><td>75.50K</td><td>20.37M</td><td>75.50K</td><td>20.37M</td><td>77.77K</td><td>24.18M</td><td>104.43K</td><td>82.62M</td><td>106.83K</td><td>1.26G</td></tr><tr><td>iTransformer</td><td>841.57K</td><td>77.46M</td><td>224.22K</td><td>19.86M</td><td>224.22K</td><td>19.86M</td><td>224.22K</td><td>19.86M</td><td>4.83M</td><td>1.16G</td><td>4.83M</td><td>16.29G</td></tr><tr><td>PatchTST</td><td>3.75M</td><td>5.90G</td><td>10.06M</td><td>17.66G</td><td>3.75M</td><td>5.90G</td><td>10.06M</td><td>17.66G</td><td>6.90M</td><td>35.30G</td><td>6.90M</td><td>539.38G</td></tr><tr><td>TimeKAN</td><td>12.84K</td><td>7.63M</td><td>15.00K</td><td>8.02M</td><td>14.38K</td><td>7.63M</td><td>38.12K</td><td>16.66M</td><td>20.94K</td><td>29.86M</td><td>23.34K</td><td>456.50M</td></tr></table>
in the M-KAN block, and the Depthwise Convolution we employ significantly reduces the number of parameters through grouped operations. Additionally, the powerful representation capabilities afforded by Multi-order KANs allow us to represent time series with very few neurons. In summary, TimeKAN achieves outstanding forecasting performance while requiring minimal computational resources.
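Comparisons like Table 5 are typically produced with an off-the-shelf MACs counter; below is a sketch using `thop` (our choice of tool, not named by the paper) on a stand-in model:

```python
import torch
import torch.nn as nn
from thop import profile  # common Params/MACs counter; the paper does not name its tool

model = nn.Sequential(nn.Flatten(), nn.Linear(96 * 7, 96 * 7))  # stand-in forecaster
x = torch.randn(32, 96, 7)          # batch 32, look-back T = 96, 7 variables
macs, params = profile(model, inputs=(x,))
print(f"Params: {params / 1e3:.2f}K  MACs: {macs / 1e6:.2f}M")
```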
# 5 CONCLUSION

We proposed TimeKAN, an efficient KAN-based frequency decomposition learning architecture for long-term time series forecasting. Following the Decomposition-Learning-Mixing architecture, TimeKAN obtains series representations for each frequency band using Cascaded Frequency Decomposition blocks. Multi-order KAN Representation Learning blocks then leverage the high flexibility of KANs to learn and represent the specific temporal patterns within each frequency band. Finally, Frequency Mixing blocks recombine the frequency bands into the original format. Extensive experiments on real-world datasets demonstrate that TimeKAN achieves state-of-the-art forecasting performance with extremely lightweight computational consumption.
# ACKNOWLEDGEMENTS

This work is supported by Shanghai Artificial Intelligence Laboratory. This work was done during Songtao Huang's internship at Shanghai Artificial Intelligence Laboratory.
# REFERENCES

Alexander Dylan Bodner, Antonio Santiago Tepsich, Jack Natan Spolski, and Santiago Pourteau. Convolutional Kolmogorov-Arnold networks. arXiv preprint arXiv:2406.13155, 2024.

Tao Dai, Beiliang Wu, Peiyuan Liu, Naiqi Li, Jigang Bao, Yong Jiang, and Shu-Tao Xia. Periodicity decoupling framework for long-term series forecasting. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=dp27P5HBBt.

Luo donghao and wang xue. ModernTCN: A modern pure convolution structure for general time series analysis. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=vpJMJerXHU.

Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, and Artur Dubrawski. MOMENT: A family of open time-series foundation models. In ICML, 2024. URL https://openreview.net/forum?id=FVvf69a5rx.

Hongbin Huang, Minghua Chen, and Xiao Qiao. Generative learning for financial time series with irregular and scale-invariant patterns. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=CdjnzWsQax.

Weiwei Jiang and Jiayun Luo. Graph neural network for traffic forecasting: A survey. Expert Systems with Applications, 207:117921, 2022. ISSN 0957-4174. doi: https://doi.org/10.1016/j.eswa.2022.117921. URL https://www.sciencedirect.com/science/article/pii/S0957417422011654.

Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri, Timo Ewalds, Zach Eaton-Rosen, Weihua Hu, Alexander Merose, Stephan Hoyer, George Holland, Oriol Vinyals, Jacklynn Stott, Alexander Pritzel, Shakir Mohamed, and Peter Battaglia. Learning skillful medium-range global weather forecasting. Science, 382(6677):1416-1421, 2023. doi: 10.1126/science.adi2336. URL https://www.science.org/doi/abs/10.1126/science.adi2336.

Chenxin Li, Xinyu Liu, Wuyang Li, Cheng Wang, Hengyu Liu, and Yixuan Yuan. U-KAN makes strong backbone for medical image segmentation and generation. arXiv preprint arXiv:2406.02918, 2024.

Ziyao Li. Kolmogorov-Arnold networks are radial basis function networks. arXiv preprint arXiv:2405.06721, 2024.

Shengsheng Lin, Weiwei Lin, Wentai Wu, Haojun Chen, and Junjie Yang. SparseTSF: Modeling long-term time series forecasting with *1k* parameters. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=54NSHO01Fe.

Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, and Qiang Xu. SCINet: Time series modeling and forecasting with sample convolution and interaction. Advances in Neural Information Processing Systems, 35:5816-5828, 2022.

Qingxiang Liu, Xu Liu, Chenghao Liu, Qingsong Wen, and Yuxuan Liang. Time-FFM: Towards LM-empowered federated foundation model for time series forecasting. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a. URL https://openreview.net/forum?id=HS0faHRhWD.

Yong Liu, Tengge Hu, Haoran Zhang, Haixu Wu, Shiyu Wang, Lintao Ma, and Mingsheng Long. iTransformer: Inverted transformers are effective for time series forecasting. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=JePfAI8fah.

Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y Hou, and Max Tegmark. KAN: Kolmogorov-Arnold networks. arXiv preprint arXiv:2404.19756, 2024c.

Yuqi Nie, Nam H Nguyen, Phanwadee Sinthong, and Jayant Kalagnanam. A time series is worth 64 words: Long-term forecasting with transformers. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=Jbdc0vTOcol.

Khemraj Shukla, Juan Diego Toscano, Zhicheng Wang, Zongren Zou, and George Em Karniadakis. A comprehensive and fair comparison between MLP and KAN representations for differential equations and operator networks. arXiv preprint arXiv:2406.02917, 2024.

Sidharth SS. Chebyshev polynomial-based Kolmogorov-Arnold networks: An efficient architecture for nonlinear function approximation. arXiv preprint arXiv:2405.07200, 2024.

Huiqiang Wang, Jian Peng, Feihu Huang, Jince Wang, Junhui Chen, and Yifei Xiao. MICN: Multi-scale local and global context modeling for long-term series forecasting. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=zt53IDUR1U.

Shiyu Wang, Haixu Wu, Xiaoming Shi, Tengge Hu, Huakun Luo, Lintao Ma, James Y. Zhang, and Jun Zhou. TimeMixer: Decomposable multiscale mixing for time series forecasting. In The Twelfth International Conference on Learning Representations, 2024a. URL https://openreview.net/forum?id=7oLshfEIC2.

Yizheng Wang, Jia Sun, Jinshuai Bai, Cosmin Anitescu, Mohammad Sadegh Eshaghi, Xiaoying Zhuang, Timon Rabczuk, and Yinghua Liu. Kolmogorov-Arnold-informed neural network: A physics-informed deep learning framework for solving PDEs based on Kolmogorov-Arnold networks. arXiv preprint arXiv:2406.11045, 2024b.

Haixu Wu, Jiehui Xu, Jianmin Wang, and Mingsheng Long. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P. S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 22419-22430. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/bcc0d400288793e8bcdcd7c19a8ac0c2b-Paper.pdf.

Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, and Mingsheng Long. TimesNet: Temporal 2D-variation modeling for general time series analysis. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ju_Uqw384Oq.

Kunpeng Xu, Lifei Chen, and Shengrui Wang. Are KAN effective for identifying and tracking concept drift in time series? arXiv preprint arXiv:2410.10041, 2024a.

Zhijian Xu, Ailing Zeng, and Qiang Xu. FITS: Modeling time series with $10k$ parameters. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=bWcvvZ3qMb.

Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Ning An, Defu Lian, Longbing Cao, and Zhendong Niu. Frequency-domain MLPs are more effective learners in time series forecasting. Advances in Neural Information Processing Systems, 36, 2024.

Linfei Yin, Xinghui Cao, and Dongduan Liu. Weighted fully-connected regression networks for one-day-ahead hourly photovoltaic power forecasting. Applied Energy, 332:120527, 2023. ISSN 0306-2619. doi: https://doi.org/10.1016/j.apenergy.2022.120527. URL https://www.sciencedirect.com/science/article/pii/S0306261922017846.

Ailing Zeng, Muxi Chen, Lei Zhang, and Qiang Xu. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 11121-11128, 2023.

G. Peter Zhang. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing, 50:159-175, 2003. ISSN 0925-2312. doi: https://doi.org/10.1016/S0925-2312(01)00702-0. URL https://www.sciencedirect.com/science/article/pii/S0925231201007020.

Haoyi Zhou, Shanghang Zhang, Jieqi Peng, Shuai Zhang, Jianxin Li, Hui Xiong, and Wancai Zhang. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11106-11115, 2021.

Tian Zhou, Ziqing Ma, Qingsong Wen, Liang Sun, Tao Yao, Wotao Yin, Rong Jin, et al. FiLM: Frequency improved Legendre memory model for long-term time series forecasting. Advances in Neural Information Processing Systems, 35:12677-12690, 2022a.

Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, and Rong Jin. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In International Conference on Machine Learning, pp. 27268-27286. PMLR, 2022b.
# A ADDITIONAL MODEL ANALYSIS

Table 6: Full comparison results of model parameters (Params) and multiply-accumulate operations (MACs) for TimeKAN and other models. To ensure a fair comparison, we fix the prediction length $F = 96$ and the input length $T = 96$, and set the input batch size to 32. The lowest computational cost is highlighted in bold.

<table><tr><td rowspan="2">Datasets / Metric</td><td colspan="2">ETTh1</td><td colspan="2">ETTh2</td><td colspan="2">ETTm1</td><td colspan="2">ETTm2</td><td colspan="2">Weather</td><td colspan="2">Electricity</td></tr><tr><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td><td>Params</td><td>MACs</td></tr><tr><td>TimeMixer</td><td>75.50K</td><td>20.37M</td><td>75.50K</td><td>20.37M</td><td>75.50K</td><td>20.37M</td><td>77.77K</td><td>24.18M</td><td>104.43K</td><td>82.62M</td><td>106.83K</td><td>1.26G</td></tr><tr><td>iTransformer</td><td>841.57K</td><td>77.46M</td><td>224.22K</td><td>19.86M</td><td>224.22K</td><td>19.86M</td><td>224.22K</td><td>19.86M</td><td>4.83M</td><td>1.16G</td><td>4.83M</td><td>16.29G</td></tr><tr><td>PatchTST</td><td>3.75M</td><td>5.90G</td><td>10.06M</td><td>17.66G</td><td>3.75M</td><td>5.90G</td><td>10.06M</td><td>17.66G</td><td>6.90M</td><td>35.30G</td><td>6.90M</td><td>539.38G</td></tr><tr><td>TimesNet</td><td>605.48K</td><td>18.13G</td><td>1.19M</td><td>36.28G</td><td>4.71M</td><td>144G</td><td>1.19M</td><td>36.28G</td><td>1.19M</td><td>36.28G</td><td>150.30M</td><td>4.61T</td></tr><tr><td>MICN</td><td>25.20M</td><td>71.95G</td><td>25.20M</td><td>71.95G</td><td>25.20M</td><td>71.95G</td><td>25.20M</td><td>71.95G</td><td>111.03K</td><td>295.07M</td><td>6.64M</td><td>19.5G</td></tr><tr><td>DLinear</td><td>18.62K</td><td>0.6M</td><td>18.62K</td><td>0.6M</td><td>18.62K</td><td>0.6M</td><td>18.62K</td><td>0.6M</td><td>18.62K</td><td>0.6M</td><td>18.62K</td><td>0.6M</td></tr><tr><td>FreTS</td><td>3.24M</td><td>101.46M</td><td>3.24M</td><td>101.46M</td><td>3.24M</td><td>101.46M</td><td>3.24M</td><td>101.46M</td><td>3.24M</td><td>101.46M</td><td>3.24M</td><td>101.46M</td></tr><tr><td>FiLM</td><td>12.58M</td><td>2.82G</td><td>12.58M</td><td>2.82G</td><td>12.58M</td><td>2.82G</td><td>12.58M</td><td>2.82G</td><td>12.58M</td><td>8.46G</td><td>12.58M</td><td>8.46G</td></tr><tr><td>FEDformer</td><td>23.38M</td><td>24.96G</td><td>23.38M</td><td>24.96G</td><td>23.38M</td><td>24.96G</td><td>23.38M</td><td>24.96G</td><td>23.45M</td><td>25.23G</td><td>24.99M</td><td>30.89G</td></tr><tr><td>Autoformer</td><td>10.54M</td><td>22.82G</td><td>10.54M</td><td>22.82G</td><td>10.54M</td><td>22.82G</td><td>10.54M</td><td>22.82G</td><td>10.61M</td><td>23.08G</td><td>12.14M</td><td>28.75G</td></tr><tr><td>TimeKAN</td><td>12.84K</td><td>7.63M</td><td>15.00K</td><td>8.02M</td><td>14.38K</td><td>7.63M</td><td>38.12K</td><td>16.66M</td><td>20.94K</td><td>29.86M</td><td>23.34K</td><td>456.50M</td></tr></table>
# A.1 COMPUTATIONAL COMPLEXITY ANALYSIS

In TimeKAN, the main computational cost lies in the Fast Fourier Transform (FFT), the Depthwise Convolution block, and the Multi-order KAN block. Consider a time series of length $L$ where the hidden state of each time point has dimension $D$. For the FFT, the computational complexity is $\mathcal{O}(L\log L)$. For the Depthwise Convolution block, with a convolutional kernel of size $M$ and stride 1, the complexity is $\mathcal{O}(LDM)$. Finally, assuming that the highest order of the Chebyshev polynomials is $K$, the complexity of the Multi-order KAN block is $\mathcal{O}(LD^2K)$. Since $M, D, K$ are constants independent of the input length $L$, the computational complexity of both the Depthwise Convolution block and the Multi-order KAN block reduces to $\mathcal{O}(L)$, which is linear in the sequence length. In summary, the overall computational complexity is $\max(\mathcal{O}(L\log L), \mathcal{O}(L)) = \mathcal{O}(L\log L)$. When the input is a multivariate sequence with $V$ variables, the computational complexity expands to $\mathcal{O}(VL\log L)$ due to our variable-independent strategy (we write $V$ here to avoid clashing with the kernel size $M$ above).
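Collecting the three per-module terms (a restatement of the analysis above, using $V$ for the number of variables):

$$
\underbrace{\mathcal{O}(L\log L)}_{\text{FFT}} + \underbrace{\mathcal{O}(LDM)}_{\text{Depthwise Conv}} + \underbrace{\mathcal{O}(LD^2K)}_{\text{Multi-order KAN}} = \mathcal{O}(L\log L), \quad \text{and } \mathcal{O}(VL\log L) \text{ for } V \text{ variables}.
$$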
# A.2 MODEL EFFICIENCY

Here, we provide the complete results on model efficiency in terms of parameters and MACs in Table 6. As can be seen, except for DLinear, our TimeKAN consistently demonstrates a significant advantage in both parameter count and MACs compared to every other model. DLinear consists of only a single linear layer, which makes it the most lightweight in terms of parameters and MACs; however, its performance shows a significant gap compared to state-of-the-art methods. Therefore, our TimeKAN achieves superior overall performance in both forecasting accuracy and efficiency.
# A.3 ERROR BARS

To evaluate the robustness of TimeKAN, we repeat the experiments with three randomly selected seeds and compare against the second-best model (TimeMixer). We report the mean and standard deviation of the results across the three runs, as well as the confidence level with which TimeKAN outperforms TimeMixer. The results are averaged over four prediction horizons (96, 192, 336, and 720). As shown in Table 7, in most cases we have over $90\%$ confidence that TimeKAN outperforms the second-best model, demonstrating the good robustness of TimeKAN.
Table 7: Standard deviation and statistical tests for our TimeKAN method and the second-best method (TimeMixer) on five datasets.

<table><tr><td>Metric</td><td colspan="3">MSE</td><td colspan="3">MAE</td></tr><tr><td>Dataset</td><td>TimeKAN</td><td>TimeMixer</td><td>Confidence</td><td>TimeKAN</td><td>TimeMixer</td><td>Confidence</td></tr><tr><td>ETTh1</td><td>0.422±0.004</td><td>0.462±0.006</td><td>99%</td><td>0.430±0.002</td><td>0.448±0.004</td><td>99%</td></tr><tr><td>ETTh2</td><td>0.387±0.003</td><td>0.392±0.003</td><td>99%</td><td>0.408±0.003</td><td>0.412±0.004</td><td>90%</td></tr><tr><td>ETTm1</td><td>0.378±0.002</td><td>0.386±0.003</td><td>99%</td><td>0.396±0.001</td><td>0.399±0.001</td><td>99%</td></tr><tr><td>ETTm2</td><td>0.278±0.001</td><td>0.278±0.001</td><td>—</td><td>0.324±0.001</td><td>0.325±0.001</td><td>90%</td></tr><tr><td>Weather</td><td>0.243±0.001</td><td>0.245±0.001</td><td>99%</td><td>0.273±0.001</td><td>0.276±0.001</td><td>99%</td></tr></table>
Table 8: Comparison on the Electricity dataset when the look-back window is expanded to 512.

<table><tr><td rowspan="2">Models</td><td colspan="2">96</td><td colspan="2">192</td><td colspan="2">336</td><td colspan="2">720</td></tr><tr><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td></tr><tr><td>MOMENT</td><td>0.136</td><td>0.233</td><td>0.152</td><td>0.247</td><td>0.167</td><td>0.264</td><td>0.205</td><td>0.295</td></tr><tr><td>TimeMixer</td><td>0.135</td><td>0.231</td><td>0.149</td><td>0.245</td><td>0.172</td><td>0.268</td><td>0.203</td><td>0.295</td></tr><tr><td>TimeKAN</td><td>0.133</td><td>0.230</td><td>0.149</td><td>0.247</td><td>0.165</td><td>0.261</td><td>0.203</td><td>0.294</td></tr></table>
# A.4 FREQUENCY LEARNING WITH LONGER WINDOW

In Table 1, TimeKAN performs relatively poorly on the Electricity dataset. We infer that this is due to the overly short look-back window ($T = 96$), which cannot provide sufficient frequency information. To verify this, we compare the average number of effective frequency components under specific look-back windows. Specifically, we randomly select a sequence of length $T$ from the Electricity dataset and transform it into the frequency domain using the FFT. We define effective frequencies as those with amplitudes greater than 0.1 times the maximum amplitude. We then average the number of effective frequencies across all variables to reflect the amount of effective frequency information provided by the sequence. When $T = 96$ (the setting in this paper), the average number of effective frequencies is 10.69. When we extend the sequence length to 512, the average number of effective frequencies becomes 19.74. Therefore, a 512-step window provides nearly twice the effective frequency information of a 96-step window, indicating that $T = 96$ loses a substantial amount of effective information.
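The counting procedure above can be sketched in a few lines; the function name is ours, and the random array is a stand-in for an actual Electricity slice (so it will not reproduce the 10.69/19.74 figures):

```python
import numpy as np

def avg_effective_freqs(x: np.ndarray, ratio: float = 0.1) -> float:
    """x: (variables, T). Count rFFT bins whose amplitude exceeds
    `ratio` * max amplitude, then average the count over variables."""
    amp = np.abs(np.fft.rfft(x, axis=-1))               # (variables, T//2 + 1)
    threshold = ratio * amp.max(axis=-1, keepdims=True)
    return float((amp > threshold).sum(axis=-1).mean())

x = np.random.randn(321, 96)      # stand-in: 321 variables, T = 96
print(avg_effective_freqs(x))
```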
To validate whether $T = 512$ allows us to leverage more frequency information, we extend the look-back window of TimeKAN to 512 on the Electricity dataset and compare it with the state-of-the-art method TimeMixer and the time series foundation model MOMENT (Goswami et al., 2024). The results are shown in Table 8. Although TimeKAN performs significantly worse than TimeMixer when $T = 96$, it achieves the best performance on the Electricity dataset when the look-back window is extended to 512. This demonstrates that TimeKAN can benefit significantly from richer frequency information.
# A.5 IMPACT OF NUMBER OF FREQUENCY BANDS

To explore the impact of the number of frequency bands on performance, we set the number of frequency bands to 2, 3, 4, and 5. The effects of the different frequency band divisions are shown in Table 9. In most cases, dividing the spectrum into 3 or 4 bands yields the best performance. This aligns with our prior intuition: dividing into two bands results in excessive frequency overlap, while dividing into five bands leaves too little information within each band, making it difficult to accurately model the information within that frequency range.
Table 9: Impact of the number of frequency bands on performance under the 96-to-96 prediction setting.

<table><tr><td rowspan="2">Number of Frequency Bands</td><td colspan="2">ETTh2</td><td colspan="2">Weather</td><td colspan="2">Electricity</td></tr><tr><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td><td>MSE</td><td>MAE</td></tr><tr><td>2</td><td>0.292</td><td>0.340</td><td>0.164</td><td>0.209</td><td>0.183</td><td>0.270</td></tr><tr><td>3</td><td>0.290</td><td>0.339</td><td>0.163</td><td>0.209</td><td>0.177</td><td>0.268</td></tr><tr><td>4</td><td>0.290</td><td>0.340</td><td>0.162</td><td>0.208</td><td>0.174</td><td>0.266</td></tr><tr><td>5</td><td>0.295</td><td>0.346</td><td>0.164</td><td>0.211</td><td>0.177</td><td>0.273</td></tr></table>
# B MATHEMATICAL DETAILS

# B.1 KOLMOGOROV-ARNOLD NETWORK

The Kolmogorov-Arnold representation theorem states that any multivariate continuous function can be expressed as a combination of univariate functions and addition operations. More specifically, a multivariate continuous function $g:[0,1]^n \to \mathbb{R}$ can be written as:
$$
g(x) = g(x_1, \dots, x_n) = \sum_{i=1}^{2n+1} \Phi_i \left( \sum_{j=1}^{n} \phi_{ij}(x_j) \right) \tag{13}
$$
where $\phi_{ij}$ and $\Phi_i$ are univariate functions. Following the pattern of the MLP, the Kolmogorov-Arnold Network (KAN) (Liu et al., 2024c) extends the Kolmogorov-Arnold theorem to deep representations, i.e., stacked multilayer Kolmogorov-Arnold representations. Assume that a KAN is composed of $L + 1$ layers of neurons and that the number of neurons in layer $l$ is $n_l$. The transmission relationship between the $j$-th neuron in layer $l + 1$ and all neurons in layer $l$ can be expressed as:
$$
x_{l+1, j} = \sum_{i=1}^{n_l} \phi_{l, j, i}(x_{l, i}) \tag{14}
$$
Intuitively, each neuron is connected to the neurons in the previous layer through univariate functions $\phi$. Similar to the MLP, the computation of all neurons at layer $l$ can be reorganized as multiplication by a function matrix $\Phi_{l-1}$. Therefore, given an input vector $x \in \mathbb{R}^{n_0}$, the final output of the KAN network is:
$$
\mathrm{KAN}(x) = \left( \Phi_{L-1} \circ \dots \circ \Phi_1 \circ \Phi_0 \right) x \tag{15}
$$
In the vanilla KAN (Liu et al., 2024c), the univariate function $\phi_{l,j,i}$ is parametrized using B-splines, a class of smooth curves constructed from piecewise polynomial basis functions. To ensure stability and enhance representational capacity, KAN superimposes the spline function on a fixed basis function $b$, typically the SiLU function:
$$
\phi(x) = w_b\, b(x) + w_s\, \mathrm{spline}(x) \tag{16}
$$
$$
\mathrm{spline}(x) = \sum_i c_i B_i(x) \tag{17}
$$
where $w_b$ and $w_s$ are learnable weights and $\mathrm{spline}(x)$ is the spline function constructed as a linear combination of B-spline basis functions $B_i$. However, the complex recursive computation of high-order B-spline functions hinders the efficiency of KAN. Therefore, in this work, we adopt the simpler Chebyshev polynomials as the univariate functions in place of the B-spline functions (SS, 2024). The univariate function defined by the Chebyshev polynomial is given as follows:
$$
T_k(x) = \cos(k \arccos(x)) \tag{18}
$$
Here, $k$ represents the order of the polynomial. We then model each univariate function $\phi$ as a linear combination of Chebyshev polynomials of different orders:
$$
x_{l+1, j} = \sum_{i=1}^{n_l} \phi_{l, j, i}(x_{l, i}) = \sum_{i=1}^{n_l} \sum_{k=0}^{K} \Theta_{i, k}\, T_k(\tanh(x_{l, i})) \tag{19}
$$
where $\Theta_{i,k}$ are the coefficients of the $k$-th order Chebyshev polynomial acting on $x_{l,i}$, and $\tanh$ is the activation used to normalize the inputs into $[-1, 1]$. By adjusting the highest order $K$ of the Chebyshev polynomials, we can control the fitting capability of the KAN. This also inspires our design of the Multi-order KAN to dynamically represent different frequencies.
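For concreteness, the first few Chebyshev polynomials follow from the standard three-term recurrence (a known identity, not specific to this paper):

$$
T_0(x) = 1, \quad T_1(x) = x, \quad T_{k+1}(x) = 2x\, T_k(x) - T_{k-1}(x),
$$

so $T_2(x) = 2x^2 - 1$ and $T_3(x) = 4x^3 - 3x$. Higher orders oscillate faster on $[-1, 1]$, which is why larger $K$ suits higher-frequency components.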
# B.2 FOURIER TRANSFORM

Time series are often composed of multiple superimposed frequency components, and it is difficult to observe these individual components directly in the time domain. Therefore, transforming a time series from the time domain to the frequency domain is often necessary for analysis. The Discrete Fourier Transform (DFT) is a commonly used transformation that converts a discrete-time signal from the time domain to the complex frequency domain. Mathematically, given a sequence of real numbers $x[n]$ in the time domain, where $n = 0, 1, \dots, N - 1$, the DFT is defined as:
$$
X[k] = \sum_{n=0}^{N-1} x[n] \cdot e^{-i \frac{2\pi}{N} k n} = \sum_{n=0}^{N-1} x[n] \left( \cos\left( \frac{2\pi}{N} k n \right) - i \sin\left( \frac{2\pi}{N} k n \right) \right), \quad k = 0, 1, \dots, N - 1 \tag{20}
$$
where $X[k]$ is the $k$-th frequency component of the frequency-domain signal and $i$ is the imaginary unit. Similarly, we can use the inverse DFT (iDFT) to convert a frequency-domain signal back to the time domain:
$$
x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] \cdot e^{i \frac{2\pi}{N} k n} = \frac{1}{N} \sum_{k=0}^{N-1} X[k] \left( \cos\left( \frac{2\pi}{N} k n \right) + i \sin\left( \frac{2\pi}{N} k n \right) \right) \tag{21}
$$
The computational complexity of the DFT is $\mathcal{O}(N^2)$ (Zhou et al., 2022b). In practice, we use the Fast Fourier Transform (FFT) to compute the DFT efficiently, which reduces the computational complexity to $\mathcal{O}(N\log N)$. Additionally, by employing the real FFT (rFFT), we can compress an input sequence of $N$ real numbers into a complex frequency-domain sequence containing $N / 2 + 1$ frequency components.
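A quick numerical check of the rFFT bin count and invertibility (NumPy usage; the example is ours):

```python
import numpy as np

x = np.random.randn(96)                 # length-N real signal, N = 96
spec = np.fft.rfft(x)                   # complex frequency-domain representation
assert spec.shape[0] == 96 // 2 + 1     # N/2 + 1 = 49 frequency components
x_rec = np.fft.irfft(spec, n=96)        # inverse rFFT recovers the signal
assert np.allclose(x, x_rec)
```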
2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba76e8d0cfed4082f01f49bcbedc4ca9e37d41a9e2079888d455b4e18c0476a4
+size 913973
2025/TimeKAN_ KAN-based Frequency Decomposition Learning Architecture for Long-term Time Series Forecasting/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_content_list.json
ADDED
The diff for this file is too large to render. See raw diff
2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_model.json
ADDED
The diff for this file is too large to render. See raw diff
2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/f48bb6a8-358b-46f9-aa7b-783937ea3be0_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:729f39f07e09ff57a7f0878770743fcd3d64eaf43f02b1a20dbcaf97e1c97901
+size 2802554
2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/full.md
ADDED
@@ -0,0 +1,466 @@
# TIMESUITE: IMPROVING MLLMS FOR LONG VIDEO UNDERSTANDING VIA GROUNDED TUNING
Xiangyu Zeng $^{1,2}$ Kunchang Li $^{3,2}$ Chenting Wang $^{6,2}$ Xinhao Li $^{1,2}$ Tianxiang Jiang $^{5,2}$
Ziang Yan $^{4,2}$ Songze Li $^{7,2}$ Yansong Shi $^{5,2}$ Zhengrong Yue $^{6,2}$ Yi Wang $^{2,8}$
Yali Wang $^{3,2}$ Yu Qiao $^{2}$ Limin Wang $^{1,2,\dagger}$

$^{1}$ Nanjing University $^{2}$ Shanghai AI Laboratory $^{3}$ SIAT, Chinese Academy of Sciences $^{4}$ Zhejiang University
$^{5}$ University of Science and Technology of China $^{6}$ Shanghai Jiao Tong University $^{7}$ Fudan University
$^{8}$ Shanghai Innovation Institute

XiangyuZeng2001@outlook.com lmwang@nju.edu.cn
Figure 1: VideoChat-T demonstrates high performance on both long-form video question answering and temporal grounding. Our TimeSuite presents a collection of new designs to enhance the long video understanding capability of MLLMs. It implicitly endows the MLLM with the ability to attend to the correct visual segments when generating answers, thus relieving hallucinations.
# ABSTRACT

Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in short video understanding. However, understanding long-form videos remains challenging for MLLMs. This paper proposes TimeSuite, a collection of new designs to adapt existing short-form video MLLMs for long video understanding, including a simple yet efficient framework to process long video sequences, a high-quality video dataset for grounded tuning of MLLMs, and a carefully designed instruction tuning task to explicitly incorporate grounding supervision into the traditional QA format. Specifically, based on VideoChat, we propose our long-video MLLM, coined VideoChat-T, by implementing token shuffling to compress long video tokens and introducing Temporal Adaptive Position Encoding (TAPE) to enhance the temporal awareness of visual representations. Meanwhile, we introduce TimePro, a comprehensive grounding-centric instruction tuning dataset composed of 9 tasks and 349k high-quality grounded annotations. Notably, we design a new instruction tuning task type, called Temporal Grounded Caption, to perform detailed video description with corresponding timestamp prediction. This explicit temporal location prediction guides the MLLM to correctly attend to the visual content when generating descriptions, and thus reduces the hallucination risk caused by the LLM. Experimental results demonstrate that our TimeSuite provides a successful solution to enhance the long video understanding capability of short-form MLLMs, achieving improvements of $5.6\%$ and $6.8\%$ on the benchmarks of Egoschema and VideoMME, respectively. In addition, VideoChat-T exhibits robust zero-shot temporal grounding capabilities, significantly outperforming existing state-of-the-art MLLMs. After fine-tuning, it performs on par with traditional supervised expert models. Our code and dataset are available at https://github.com/OpenGVLab/TimeSuite.
# 1 INTRODUCTION

Multimodal Large Language Models (MLLMs) have demonstrated impressive video understanding performance by following general human instructions to interpret visual content (Li et al., 2023b; Zhang et al., 2023; Lin et al., 2023a; Jin et al., 2024; Wang et al., 2024e). However, these MLLMs still struggle with long video understanding, as a long video sequence may contain various dynamic actions and complex temporal relationships, making it difficult for MLLMs to effectively locate the key segments related to a question. When humans watch long videos, their attention is consciously focused on prominent segments, which may occur within a few seconds. NExT-GQA (Xiao et al., 2024) has also verified the relevance of temporal grounding for accurately answering video QA tasks. Therefore, a natural question arises: can we enhance long video understanding by using temporal grounding as an auxiliary task?
Previously, some works have made progress on the temporal grounding task using general MLLMs. They often enhance the temporal grounding capability of video MLLMs by designing specialized modules and performing task-specific supervised fine-tuning (Ren et al., 2024; Huang et al., 2024a,b). However, these overly specialized designs significantly impair the general QA capabilities of video MLLMs, resulting in a large performance drop on video QA tasks (as illustrated by TimeChat in Figure 1). Meanwhile, current research on long video understanding primarily focuses on architecture design, such as long-context LLMs (Liu et al., 2024a) and token compression (Song et al., 2024a). These models can only capture holistic semantics in videos, without the ability to localize fine-grained information, leading to poor performance on temporal grounding tasks (as illustrated by MovieChat in Figure 1). So far, it remains challenging to build a video MLLM that is good at both temporal grounding and long video QA. We argue that long video understanding can be assisted by explicitly performing temporal grounding, as grounding supervision enables the MLLM to establish detailed correspondence between visual segments and fine-grained semantics. This fine-grained alignment guides the MLLM to attend to the correct video segments when generating answers and thus relieves the hallucination risk caused by the LLM.
Based on the above analysis, in this paper we propose TimeSuite, a collection of new designs to improve the long video understanding capability of existing short-form MLLMs, with a focus on incorporating grounding supervision into the instruction tuning process. First, to address the high computational cost caused by the excessive number of visual tokens in long videos, we propose a simple Token Shuffle scheme to compress visual tokens, allowing the LLM to process more frame inputs. We also propose TAPE to generate adaptive position encodings, enhancing the temporal awareness of visual representations. The proposed structure introduces no overly complex proprietary designs and can be efficiently initialized with the parameters of short video MLLMs, without damaging the original performance of the pre-trained MLLM. Second, to naturally incorporate grounding ability into our MLLM while preserving its original general QA capability, we design a new instruction tuning task, called Temporal Grounded Caption. This task requires generating detailed segment-level descriptions with corresponding timestamp predictions. Tuning on this new task not only endows the MLLM with the extra grounding ability but also enhances its original long video QA performance, thanks to the requirement of building correspondence between grounded segments and detailed captions. Finally, we collect a comprehensive grounding-centric instruction tuning dataset for post-training our designed MLLM, composed of 349K high-quality annotations covering 9 tasks. Based on this new dataset, we perform grounded tuning with detailed captions on our proposed MLLM (coined VideoChat-T).

We verify the effectiveness of the TimeSuite design through extensive experiments on long video understanding and temporal grounding tasks. VideoChat-T demonstrates a significant improvement in accuracy over the baseline for long video understanding, with a $5.6\%$ increase on Egoschema (Mangalam et al., 2023) and a $6.8\%$ increase on VideoMME (Fu et al., 2024). Additionally, VideoChat-T exhibits robust zero-shot temporal localization capabilities on Charades-STA (Gao et al., 2017) and QVHighlights (Lei et al., 2021a). VideoChat-T outperforms TimeChat, the state-of-the-art temporal grounding MLLM, by $50\%$ to $100\%$ across different metrics. After fine-tuning on the training sets of the temporal grounding benchmarks, the performance of VideoChat-T is on par with state-of-the-art supervised expert models. These experiments demonstrate that VideoChat-T is the first end-to-end MLLM that performs well on both temporal grounding and general video QA. In particular, we show that grounded tuning with explicit location prediction facilitates long video understanding and relieves the hallucination risk.
# 2 RELATED WORK

Video MLLMs. With the advancement of open-sourced LLMs (Chiang et al., 2023; Touvron et al., 2023; Jiang et al., 2023), video MLLMs have emerged that utilize projection bridges to link vision foundation models with LLMs (Li et al., 2023b; 2024b; Zhang et al., 2023; Li et al., 2024a). Limited by the training context length, these methods perform well with a small number of frame inputs but meet significant challenges when processing long videos. A longer video usually implies longer temporal relationships and more redundancy, making it difficult to extract key clues (Zhou et al., 2024). Recently, several methods for handling long videos have been proposed, such as exploiting long-context LLMs (Liu et al., 2024a; Zhang et al., 2024b; Xue et al., 2024; Wang et al., 2024d) and token compression (Li et al., 2023d; Song et al., 2024a; Zhang et al., 2024a) to enable more visual inputs, as well as agents for task decomposition or retrieval (Fan et al., 2024; Wang et al., 2024c;h). MovieChat (Song et al., 2024a) supports more frames by applying short-term and long-term memory to merge similar visual tokens. Yet, learning objectives for long videos remain less explored, making it difficult to alleviate the frequent hallucinations of LLMs in long-context reasoning. Our proposed TimeSuite leverages temporally-centric tasks to unlock the temporal perception potential of MLLMs, anchoring responses to the most relevant video segments.

Temporal Grounding. Temporal grounding is a fundamental capability in video understanding, associating semantics with specific clips and their corresponding timestamps. Typical expert models (Lei et al., 2021b; Moon et al., 2023a;b; Lin et al., 2023b; Zeng et al., 2024) formulate it as timestamp regression from visual inputs and user queries. Most existing video MLLMs fail to address it well compared with expert models, while some remedy their temporal grounding with specially designed architectures and data (Huang et al., 2024a; Wang et al., 2024f; Li et al., 2024c; Wang et al., 2024g; Huang et al., 2024b; Qu et al., 2024). TimeChat (Ren et al., 2024) binds the visual features of frames with timestamps and uses a sliding window to handle variable token lengths; from the training data perspective, it constructs an instruction-tuning dataset, TimeIT. Despite impressive improvements in temporal performance, these MLLMs still lag behind expert models and compromise general video dialogue capabilities. In this paper, we explore how to enhance the temporal grounding of MLLMs while preserving their original capabilities.
# 3 METHOD

In this section, we detail the proposed TimeSuite, a new collection of designs for improving short video MLLMs. Specifically, TimeSuite includes a long video modeling framework, a high-quality video dataset for grounded tuning, and a carefully designed instruction tuning task. With this new design, we adapt the short-form video MLLM and obtain significant performance improvements on two types of long video understanding tasks: traditional long video QA and temporal video grounding.
# 3.1 VIDEOCHAT-T

We first describe the architecture of our proposed long video modeling framework. Built upon VideoChat2 (Li et al., 2024b), we devise its long-video version, VideoChat-T. VideoChat-T is composed of a video backbone for extracting visual representations, a visual-language connector that compresses visual tokens and bridges the visual and language modalities, and an LLM that follows human instructions to interpret the video content.
The architecture of VideoChat-T is illustrated in Figure 2. Its workflow has three stages. In the first stage, long videos are evenly segmented into clips, and the clips are embedded by the Video Encoder and Q-Former (Li et al., 2023a). Then, to compress the number of visual tokens and highlight crucial ones, token shuffling is employed to merge adjacent tokens, and TAPE adds temporal adaptive positional encodings. Finally, the compressed video token sequence is fed to the LLM to generate accurate responses that adhere to user requirements.

Figure 2: Overall Architecture of VideoChat-T. First, long videos are segmented into clips, which are then transformed into feature embeddings by the video encoder and time-aware Q-Former. Next, all visual tokens undergo Token Shuffle to compress the overly long token sequence, and adaptive positional encodings are generated through TAPE. Finally, the long video tokens are concatenated with the user query and serve as the input to the LLM, thereby generating appropriate responses.
# 3.1.1 BACKBONE DESIGN

Video clip encoding. For a given long video, we perform uniform sampling (Wang et al., 2019) to obtain $K \times T$ frames. We divide these frames into $K$ video segments in chronological order, sampling $T$ frames from each segment. Next, we use the video encoder and its visual-linguistic connector (a Q-Former here) to encode each segment into $N$ tokens. After this processing, the entire video is encoded into a sequence of visual tokens, denoted by $\mathbf{V}_q \in \mathbb{R}^{L \times C_q}$, where $C_q$ is the dimension of the tokens output by the Q-Former and $L = K \times N$ is the total number of tokens for the entire video.
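To make the bookkeeping concrete, a tiny shape walkthrough (the specific values are ours, not the paper's configuration):

```python
K, T, N, C_q = 8, 4, 32, 768   # segments, frames per segment, tokens per segment, Q-Former dim
L = K * N                      # total visual tokens for the whole video: 256
# V_q has shape (L, C_q) = (256, 768) before Token Shuffle and TAPE.
```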
Large Language Model. Following previous research, images and visual cues are projected into the feature space of the LLM. The LLM acts as the interaction interface of the MLLM: it processes multimodal inputs, parses user instructions, and generates appropriate responses. To afford the processing of long video sequences, we need an efficient compression module between the visual encoder and the LLM.
# 3.1.2 VL-CONNECTOR: TOKEN SHUFFLE

The increased number of sampled frames in long videos leads to a larger number of encoded visual tokens, causing a significant rise in the computational complexity and memory consumption of LLMs. Therefore, it is crucial to keep the number of visual tokens within an acceptable range. Prior works have proposed various token compression schemes, such as clustering (Jin et al., 2024) and pooling (Huang et al., 2024b). However, clustering methods often struggle to maintain temporal consistency, and pooling methods usually incur a certain loss in overall performance.
To address this, we propose a simple token shuffling compression scheme that ensures the temporal consistency of video tokens before and after compression while avoiding excessive performance loss. Previous methods often use a projector to achieve dimensional conversion; however, projecting visual encoding vectors from a low to a high dimension does not increase information density. Therefore, we propose to rearrange multiple visual tokens along the channel dimension. Specifically, for the long video $\mathbf{V}_q = [v_q^1, v_q^2, \dots, v_q^L] \in \mathbb{R}^{L \times C_q}$, we concatenate $m$ adjacent tokens along the channel dimension to obtain the reshaped visual feature $\mathbf{V}_m = [v_m^1, v_m^2, \dots, v_m^{L/m}] \in \mathbb{R}^{\frac{L}{m} \times m C_q}$, where each merged token $v_m^i$ is represented as:
$$
v_m^i = \mathrm{Concat}\left( v_q^{(i-1) m + 1}, v_q^{(i-1) m + 2}, \dots, v_q^{i m} \right) \quad \forall i = 1, 2, \dots, \frac{L}{m}.
$$
Next, a linear projection layer is applied to the merged visual feature $\mathbf{V}_m$, generating the visual token sequence $\mathbf{V}_l \in \mathbb{R}^{\frac{L}{m} \times C_l}$ that is fed into the LLM, where $C_l$ represents the token channel dimension of the LLM. This scheme effectively reuses the projector of the base model by replicating the original linear layer parameters $m$ times along the channel dimension, achieving an initialization equivalent to mean pooling with a window length of $m$. This design avoids introducing additional randomly initialized parameters that might disturb the original model, thus preserving its original capabilities. Additionally, compared to directly using pooling, this method offers higher flexibility for fine-tuning to achieve better results (see the ablation study, Table 4).
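A minimal sketch of the shuffle-and-reuse idea (function names and dimensions are ours; the scaling by $1/m$ makes the expanded projector exactly reproduce mean pooling at initialization):

```python
import torch
import torch.nn as nn

def token_shuffle(v_q: torch.Tensor, m: int) -> torch.Tensor:
    """(B, L, C_q) -> (B, L/m, m*C_q): concatenate every m adjacent
    tokens along the channel dimension, preserving temporal order."""
    B, L, C = v_q.shape
    return v_q.reshape(B, L // m, m * C)

def expand_projector(proj: nn.Linear, m: int) -> nn.Linear:
    """Replicate the base projector's weight m times (scaled by 1/m), so the
    new layer initially equals the base projector on mean-pooled tokens."""
    new = nn.Linear(proj.in_features * m, proj.out_features)
    with torch.no_grad():
        new.weight.copy_(proj.weight.repeat(1, m) / m)
        new.bias.copy_(proj.bias)
    return new

v_q = torch.randn(2, 256, 768)                 # (batch, L = K*N tokens, C_q)
proj = nn.Linear(768, 4096)                    # base-model projector: C_q -> C_l
v_l = expand_projector(proj, 4)(token_shuffle(v_q, 4))   # (2, 64, 4096)
```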
# 3.1.3 TEMPORAL ADAPTIVE POSITION ENCODING

To bind temporal positional information to the visual tokens, we propose an adapter called Temporal Adaptive Position Encoding (TAPE). Inspired by CPVT (Chu et al., 2021), TAPE uses zero padding at both ends of a convolution as anchors and gradually propagates relative positional information. Without adding any special time tokens, TAPE automatically perceives the relative temporal positions of the token sequence and generates temporal embeddings.
Specifically, the long video token sequence $\mathbf{V}_q$ is first compressed along the channel dimension by a linear layer and further compressed in sequence length by a pooling layer. Next, we use a U-Net-like structure composed of one-dimensional depthwise separable convolutions to progressively downsample the sequence, obtaining three one-dimensional temporal feature sequences at different resolutions. Subsequently, a convolution with a sufficiently long window is applied to the shortest temporal feature sequence, using zero padding at both ends as anchors to encode the relative temporal position of each token in the sequence (Chu et al., 2021). Then, we progressively upsample the temporal feature sequences from short to long, using residual connections to retain temporal features at different scales. Finally, the temporal feature sequence is restored to the same length as $\mathbf{V}_l$ and aligned in the channel dimension by a linear layer, yielding the temporal features $\mathbf{V}_t$ output by TAPE. For the detailed implementation of TAPE, please refer to Appendix A.
Our proposed TAPE is a plug-and-play module that can be easily integrated into the network via residual connections, adding temporal position information to video tokens without disrupting the distribution of other trainable parameters. With appropriate training strategies, TAPE effectively preserves the model's generalization capabilities while enhancing its temporal sensitivity (see the ablation study, Table 3), which is important for the temporal grounding task.
# 3.2 TIMEPRO: TEMPORAL GROUNDED INSTRUCTION DATA

Traditional temporal grounding datasets contain only monotonous ground truth, i.e., the start and end times of the target period. This data format works well for training classic expert models but struggles to unleash the potential of LLMs. Although several temporal grounding-centric datasets have been released for MLLM fine-tuning (Ren et al., 2024; Huang et al., 2024b), they still have deficiencies in data quantity, data quality, and task diversity. Thus, it is necessary to build a more comprehensive temporal dataset designed for the tuning of MLLMs.
|
| 84 |
+
|
| 85 |
+
Based on the criteria of diversity, length, and difficulty, we collect and clean several existing high-quality grounding-centric datasets (Ren et al., 2024; Huang et al., 2024a,b), and create two new datasets, resulting in the TimePro. Compared to previous temporal grounding-centric datasets, TimePro offers a larger volume of data, a broader distribution, and a higher task diversity, facilitating the learning of more generalizable temporal representations for MLLMs.
As shown in Figure 3(a), TimePro contains 9 task types from 15 datasets that are highly relevant to temporal grounding, comprising approximately 349K high-quality temporal grounding annotations. The 9 tasks are specified as follows. Temporal Video Grounding involves identifying the start and end times of video content based on a natural language query (Anne Hendricks et al., 2017; Oncescu et al., 2021; Zala et al., 2023). Dense Video Captioning requires detecting events within a video and providing corresponding timestamps and descriptions (Krishna et al., 2017; Huang et al., 2020; Zhou et al., 2018). Video Summarization focuses on determining key frames or clips in the form of timestamps rather than semantic summaries (Song et al., 2015; Gygli et al., 2014). Step Localization aims to segment and describe important steps in a long video (Tang et al., 2019; Zala et al., 2023). Transcribed Speech Generation predicts speech content and its timestamps from visual signals (Zellers et al., 2022). Reasoning Temporal Localization combines timestamps with explanatory answers (Huang et al., 2024b). Multi-format Temporal Grounding includes single-turn and multi-turn dialogues with diverse question types (Huang et al., 2024a).

(a) Tasks of TimePro
Figure 3: (a) The proposed temporal-centric instruction-tuning dataset, TimePro. This dataset contains approximately 349K high-quality and strongly temporally correlated data. (b) The proposed Temporal Grounded Caption fine-tuning data paradigm, which effectively reduces the occurrence of hallucinations. We employ a 4-stage processing pipeline to ensure the quality of the generated data.

(b) Details of Temporal Grounded Caption
Highlight Detection identifies the most significant moments in a video based on a query (Lei et al., 2021a). Temporal Grounded Caption uses a brief scene title to output both the time period and a fine-grained description of the scene. More detailed information about TimePro is available in Appendix B. It should be noted that Temporal Grounded Caption is our newly designed task, which helps our model establish fine-grained correspondence between visual segments and linguistic descriptions.
# 3.3 TEMPORAL GROUNDED CAPTION TASK
Some studies have shown that MLLMs often exhibit severe hallucinations when dealing with fine-grained perception tasks (Ji et al., 2023; Huang et al., 2023; Golkar et al., 2023). Since our VideoChat-T directly regresses the timestamps corresponding to text queries using the MLLM itself, it is more susceptible to hallucinations than methods that use external expert models as decoders (Wu et al., 2024). By forcing the video MLLM to predict the event occurrence time and simultaneously describe the visual content as evidence, we attempt to anchor these queries to the relevant time segments within the video, rather than generating hallucinations originating from the LLM itself. Based on this analysis, we design the Temporal Grounded Caption task.
The top of Figure 3(b) illustrates the definition of Temporal Grounded Caption. We use a brief scene title of the video segment as the query, requiring the model to simultaneously respond with the precise start and end times of the segment and provide a detailed description of it. While some content in the scene title may leak into the detailed caption response, most of the missing detailed information must be correctly described by attending to the corresponding segment. Moreover, temporal grounding and detailed captioning serve as regularization tasks for each other, preventing the captioning model from hallucinating based on unrelated visual or linguistic contexts and helping the grounding model regress timestamps more accurately.
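To make the paradigm concrete, a single training sample might look like the following. This is an assumed illustration of the query/response structure; the actual prompt wording and annotation format of TimePro may differ.

```python
# Hypothetical Temporal Grounded Caption sample: the scene title serves as
# the query, and the answer pairs the time span with a detailed description.
tgc_sample = {
    "video": "example_video.mp4",  # placeholder file name
    "query": "Scene title: 'whisking eggs in a bowl'. When does this scene "
             "occur, and what happens in it in detail?",
    "answer": "The scene occurs from 12.0s to 27.5s. A person cracks two eggs "
              "into a glass bowl, adds a pinch of salt, and whisks until the "
              "mixture turns uniformly pale yellow.",
}
```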
The process for collecting our Temporal Grounded Caption data is described at the bottom of Figure 3(b). In the first stage, we use a detailed caption dataset with timestamps as our data source. We remove data whose target grounding time intervals are too short or too long and ensure that the scenes in the video are as diverse as possible. In the second stage, we use an LLM to summarize scene titles. To prevent excessive semantics of the video segments from leaking through the query to the MLLM, we retain only the minimal subset of key features sufficient to distinguish the video segments. In the third stage, to avoid overly similar or identical content appearing at different temporal intervals in the video, we perform similarity filtering on the data annotations: based on the scene titles and video features, we calculate the similarity between different segments of the same video and remove data with excessively high similarity. In the fourth stage, we randomly sample the generated data and manually assess its quality. Based on human feedback, we refine the threshold parameters for the data filtering used in the first three stages to yield the final Temporal Grounded Caption dataset. This new dataset plays an important role in our grounded tuning.
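The third-stage similarity filter can be sketched as follows. This is an assumed reconstruction of the step described above; the paper does not publish the exact procedure, and the 0.9 threshold stands in for the value refined from human feedback in the fourth stage.

```python
import itertools
import torch
import torch.nn.functional as F

def video_passes_filter(segment_embs: torch.Tensor, threshold: float = 0.9) -> bool:
    # segment_embs: one embedding per annotated segment of the same video,
    # e.g. pooled features of the segment frames and its scene title.
    embs = F.normalize(segment_embs, dim=-1)
    for i, j in itertools.combinations(range(len(embs)), 2):
        if (embs[i] @ embs[j]).item() > threshold:
            return False  # two segments are near-duplicates: drop this video
    return True
```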
<table><tr><td rowspan="2">Method</td><td rowspan="2">LLM Size</td><td colspan="3">Charades-STA</td><td colspan="2">QVHighlight</td></tr><tr><td>R@1(IOU=0.3)</td><td>R@1(IOU=0.5)</td><td>R@1(IOU=0.7)</td><td>mAP</td><td>HIT@1</td></tr><tr><td>MovieChat (Song et al., 2024a)</td><td>7B</td><td>8.8</td><td>2.9</td><td>1.3</td><td>11.7</td><td>16.1</td></tr><tr><td>GroundingGPT (Li et al., 2024c)</td><td>7B</td><td>-</td><td>29.6</td><td>11.9</td><td>-</td><td>-</td></tr><tr><td>VTimeLLM (Huang et al., 2024a)</td><td>7B</td><td>51.0</td><td>27.5</td><td>11.4</td><td>-</td><td>-</td></tr><tr><td>HawkEye (Wang et al., 2024f)</td><td>7B</td><td>50.6</td><td>31.4</td><td>14.5</td><td>-</td><td>-</td></tr><tr><td>TimeChat (Ren et al., 2024)</td><td>7B</td><td>-</td><td>32.2</td><td>13.4</td><td>14.5</td><td>23.9</td></tr><tr><td>ChatVTG (Qu et al., 2024)</td><td>7B</td><td>52.7</td><td>33.0</td><td>15.9</td><td>-</td><td>-</td></tr><tr><td>VideoChat2 (Li et al., 2024b)</td><td>7B</td><td>9.6</td><td>3.4</td><td>1.4</td><td>13.4</td><td>18.6</td></tr><tr><td>VideoChat-T</td><td>7B</td><td>69.9 (+60.3)</td><td>48.7 (+45.3)</td><td>24.0 (+22.6)</td><td>26.5 (+13.1)</td><td>54.1 (+35.5)</td></tr><tr><td>QD-DETR※ (FT) (Moon et al., 2023b)</td><td>-</td><td>-</td><td>57.3</td><td>32.6</td><td>38.9</td><td>64.2</td></tr><tr><td>UnLoc-L※ (FT) (Yan et al., 2023)</td><td>-</td><td>-</td><td>60.8</td><td>38.4</td><td>-</td><td>-</td></tr><tr><td>HawkEye (FT) (Wang et al., 2024f)</td><td>7B</td><td>72.5</td><td>58.3</td><td>28.8</td><td>-</td><td>-</td></tr><tr><td>Timechat (FT) (Ren et al., 2024)</td><td>7B</td><td>-</td><td>46.7</td><td>23.7</td><td>21.7</td><td>37.9</td></tr><tr><td>VideoChat-T (FT)</td><td>7B</td><td>79.4</td><td>67.1</td><td>43.0</td><td>27.0</td><td>55.3</td></tr></table>
Table 1: Performance of VideoChat-T on temporal grounding and highlight detection tasks. (FT) indicates the model fine-tuned on training set of the evaluation benchmark, with the respective text marked in gray. Classic supervised expert models are marked with $\text{※}$ .
# 4 EXPERIMENTS
# 4.1 IMPLEMENTATION DETAILS
Built upon VideoChat2, we use UMT-L (Li et al., 2023c) and Mistral-7B (Jiang et al., 2023) as the video encoder and LLM, respectively. Except for the TAPE, all components are initialized from the pre-trained model of VideoChat2-Mistral. For the TAPE, we use random initialization, set the initial values of the final linear layer to zero, and freeze it during the first epoch of training. We set the frame count $T$ for each clip to 8, so the number of clips $K$ for a long video equals the total frame count divided by $T$. We fine-tune the model for 3 epochs using TimePro with 349K instances and a general QA task dataset with 82K instances. To ensure the stability of model training, we use 192-frame inputs for the first epoch. In the second and third epochs, we unfreeze the TAPE and adjust the model input to 128 frames. All experiments are conducted on 16 A100 GPUs.
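The staged schedule above can be expressed as a short sketch; `DummyVideoMLLM` and its `tape` attribute are hypothetical stand-ins for the actual VideoChat-T implementation.

```python
import torch.nn as nn

class DummyVideoMLLM(nn.Module):
    # Hypothetical stand-in for VideoChat-T; only the TAPE attribute matters.
    def __init__(self):
        super().__init__()
        self.tape = nn.Linear(64, 64)  # placeholder for the real TAPE module

def set_tape_trainable(model: nn.Module, trainable: bool) -> None:
    for p in model.tape.parameters():
        p.requires_grad_(trainable)

model = DummyVideoMLLM()
set_tape_trainable(model, False)  # epoch 1: 192-frame input, TAPE frozen
# ... train epoch 1 ...
set_tape_trainable(model, True)   # epochs 2-3: 128-frame input, TAPE unfrozen
```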
# 4.2 PERFORMANCE ON TEMPORAL GROUNDING
We evaluate our method on two commonly used temporal localization tasks, i.e., Temporal Grounding and Highlight Detection. The performance comparison between VideoChat-T and other models is shown in Table 1. Our method's zero-shot performance surpasses all previous LLM-based methods, and after fine-tuning, VideoChat-T even exceeds some classic expert models on the temporal grounding task.
Temporal Grounding. This task aims to identify the start and end timestamps of the video content described by the query sentence, using Charades-STA as the evaluation benchmark. VideoChat-T achieves an accuracy of 48.7 on the R@1 (IOU=0.5) metric, significantly surpassing the previous state-of-the-art MLLM method, TimeChat, by 16.5 points. It even outperforms the version of TimeChat fine-tuned on the training set of the evaluation benchmark by 2.0 points. Furthermore, when fine-tuned on the evaluation benchmark's training set, VideoChat-T reaches 67.1 R@1 (IOU=0.5), surpassing most state-of-the-art classic supervised expert models.
Highlight Detection. We use QVHighlights as the evaluation benchmark. For a given query, this task requires outputting all timestamps of highlight moments and their corresponding saliency scores. Since a video may contain many sparse highlight moments, this task requires finer-grained video understanding at the frame level. VideoChat-T achieves an mAP of 26.5, significantly surpassing the previous MLLM method TimeChat by 12.0 points, and also outperforming its fine-tuned version by 4.8 points. We observe that after fine-tuning on the corresponding training set, VideoChat-T shows almost no performance improvement. This may be due to a bottleneck in the language representation of LLMs: the Highlight Detection task requires outputting a (timestamp, saliency score) pair for each highlight moment, and a video may contain dozens of discrete highlight moments, making it challenging for the model to correctly respond with dozens to hundreds of numbers in a language format. Outputting precise numerical saliency scores is very difficult for LLMs, and VideoChat-T can only respond well to queries with fewer highlight moments.
<table><tr><td rowspan="3">Method</td><td rowspan="3">LLM Size</td><td colspan="4">Long Video</td><td>Short Video</td></tr><tr><td colspan="2">Egoschema</td><td colspan="2">VideoMME</td><td>MVbench</td></tr><tr><td>Subset</td><td>Full</td><td>w/o subs</td><td>w/o subs (Long)</td><td>Avg</td></tr><tr><td>VideoAgent (Wang et al., 2024c)</td><td>GPT-4</td><td>60.2</td><td>54.1</td><td>-</td><td>-</td><td>-</td></tr><tr><td>VideoAgent (Fan et al., 2024)</td><td>GPT-4</td><td>62.8</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>TimeChat (Ren et al., 2024)</td><td>7B</td><td>-</td><td>33.0</td><td>30.2</td><td>26.1</td><td>38.5</td></tr><tr><td>LLAMA-Vid (Li et al., 2023d)</td><td>7B</td><td>-</td><td>38.5</td><td>-</td><td>-</td><td>41.9</td></tr><tr><td>MovieChat (Song et al., 2024a)</td><td>7B</td><td>-</td><td>53.5</td><td>38.2</td><td>33.4</td><td>55.1</td></tr><tr><td>MovieChat+ (Song et al., 2024b)</td><td>7B</td><td>-</td><td>56.4</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Chat-UniVi (Jin et al., 2024)</td><td>7B</td><td>-</td><td>-</td><td>40.6</td><td>35.8</td><td>-</td></tr><tr><td>VideoChat2 (Li et al., 2024b)</td><td>7B</td><td>63.6</td><td>54.4</td><td>39.5</td><td>33.2</td><td>60.4</td></tr><tr><td>VideoChat-T</td><td>7B</td><td>68.4 (+4.8)</td><td>60.0 (+5.6)</td><td>46.3 (+6.8)</td><td>41.9 (+8.7)</td><td>59.9 (-0.5)</td></tr></table>
Table 2: Performance of VideoChat-T and other methods on video question answering tasks. By upgrading VideoChat2 with TimeSuite, VideoChat-T demonstrates significant improvements across multiple long video benchmarks.
Due to their specific architectural design, classic supervised expert models have a natural advantage in handling such tasks, and VideoChat-T still has a performance gap compared to these expert models.
# 4.3 PERFORMANCE ON GENERAL VIDEO QA
In addition to testing the grounding ability of VideoChat-T, we also verify its general video question answering performance. Following mainstream evaluation standards, we use both long video and short video QA to assess the general video understanding capability of VideoChat-T. Table 2 shows the performance of VideoChat-T on the video QA evaluation benchmarks.
Long Video QA. We use Egoschema (Mangalam et al., 2023) and VideoMME (Fu et al., 2024) to evaluate the long video capabilities of VideoChat-T. In conjunction with our proposed architectural improvements, we incrementally fine-tune VideoChat2 using only 432K data points. VideoChat-T demonstrates outstanding performance on Egoschema, achieving an accuracy of $68.4\%$ on the test subset and $60.0\%$ on the full test set. Compared to VideoChat2, VideoChat-T obtains improvements of $4.8\%$ and $5.6\%$ on the subset and the full test set, respectively. Additionally, on the VideoMME benchmark, VideoChat-T achieves an accuracy of $46.3\%$ by solely analyzing the visual content without using subtitles, representing a $6.8\%$ improvement over VideoChat2. On the long video division of VideoMME, VideoChat-T achieves an accuracy of $41.9\%$, an $8.7\%$ improvement over VideoChat2. The upgraded VideoChat-T thus demonstrates significant performance improvements on long video QA benchmarks, indicating the potential of leveraging grounding-centric video tasks to enhance the temporal awareness of MLLMs and thereby further improve long video understanding.
Short Video QA. We use MVBench (Li et al., 2024b) to evaluate the general short video understanding capabilities of VideoChat-T. VideoChat-T achieves an overall average accuracy of $59.9\%$ on MVBench, a $0.5\%$ decrease compared to VideoChat2. It is important to note that achieving such minimal performance loss is challenging: according to previous experience in incremental learning (Van de Ven et al., 2022), models inevitably forget old knowledge while learning new knowledge. VideoChat2 is fine-tuned with 2M samples, whereas VideoChat-T is fine-tuned with only 432K samples, of which 349K annotations are temporal grounding-centric, yet it incurs only a $0.5\%$ accuracy loss. Previous temporal MLLMs like TimeChat (Ren et al., 2024), although achieving strong temporal localization capabilities, yield much weaker general video QA capability, with an accuracy of only $38.5\%$ on MVBench. This demonstrates that the design of our TimeSuite endows the model with new capabilities while still preserving its original general video understanding. For a detailed analysis of the performance degradation on MVBench, please refer to Appendix F.2.
# 4.4 QUALITATIVE ANALYSIS
Figure 4 presents a qualitative comparison between our model and other methods. In the example on the left, VideoChat-T is capable of answering more complex long video reasoning questions. Our model accurately identifies the temporal location of the "light a cigarette" event and determines the correct key clue, "the person in a white coat", based on the video content.

Figure 4: Qualitative comparison between VideoChat-T and other methods. VideoChat-T not only possesses fine-grained temporal perception capabilities but can also perform accurate long video reasoning. Green text indicates correct answers, while red text indicates inappropriate answers.

<table><tr><td>Model</td><td>Egoschema Full</td><td>VideoMME w/o subs</td><td>Charades-STA R@1 IOU=0.5</td><td>QVHighlights Hit@1</td></tr><tr><td>VideoChat-T (Ours)</td><td>60.0</td><td>46.3</td><td>48.7</td><td>54.1</td></tr><tr><td>w/o TAPE</td><td>59.1</td><td>45.9</td><td>47.1</td><td>50.4</td></tr><tr><td>w/o frz</td><td>59.0</td><td>45.2</td><td>52.4</td><td>53.7</td></tr></table>
Table 3: Performance results of the ablation study on the TAPE. Here, w/o TAPE refers to removing our proposed TAPE, and w/o frz refers to not freezing the TAPE during the first epoch of training.
<table><tr><td>Model</td><td>Egoschema Full</td><td>VideoMME w/o subs</td><td>Charades-STA R@1 IOU=0.5</td><td>QVHighlights Hit@1</td></tr><tr><td>VideoChat-T (Ours)</td><td>60.0</td><td>46.3</td><td>48.7</td><td>54.1</td></tr><tr><td>r/w pooling</td><td>59.8</td><td>44.8</td><td>40.3</td><td>47.3</td></tr><tr><td>r/w clustering</td><td>59.5</td><td>45.0</td><td>39.8</td><td>40.1</td></tr><tr><td>w/o init</td><td>57.4</td><td>43.4</td><td>42.0</td><td>53.9</td></tr></table>
Table 4: Performance results of the ablation study on Token Shuffle. Here, r/w refers to replacing Token Shuffle with another component, and w/o init refers to removing the efficient initialization.
This leads to the inference that "playing the piano very fast and pressing the keys very hard" are the true reasons. The example on the right demonstrates our model's fine-grained perception ability. The appearance of "money in the briefcase" is very brief, and most models easily overlook this detail. Thanks to its strong fine-grained perception ability, our model precisely captures this visual content.
# 4.5 ABLATION STUDY
Role of TAPE. To verify the performance improvement brought by TAPE, we conduct ablation experiments; Table 3 lists the results. When the TAPE is removed, the model's performance on long video understanding and temporal grounding benchmarks decreases: TAPE adaptively embeds positional encodings into video tokens, and its absence causes a certain loss in temporal awareness. When we unfreeze the TAPE in the first epoch, performance improves on the temporal grounding task but declines on long video QA. This is because the TAPE is highly suited to tasks with strong temporal dependencies; if unfrozen too early, the model may become biased towards fitting temporal grounding tasks. Freezing the TAPE during the first epoch allows the model to first learn a relatively generalized feature representation, thereby balancing performance across different tasks.
Effectiveness of Token Shuffle. To verify the effectiveness of token shuffle, we conduct ablation experiments, with results in Table 4. We compare token shuffle with conventional methods such as pooling and clustering, and also observe the results after removing the efficient initialization. When we replace token shuffle with pooling or clustering, the model's performance declines. The efficient initialization of the linear layer in token shuffle makes the module initially equivalent to average pooling, from which training gradually finds better solutions; our method therefore starts on par with pooling and improves beyond it. Clustering, on the other hand, often fails to maintain the spatial/temporal consistency of the video, leading to temporal confusion. When we remove the efficient initialization of the linear layer, the negative impact of random initialization severely damages the model's original performance.
Effect of TimePro. We conducted ablation studies to evaluate the effectiveness of the TimePro data components. As shown in Table 5, by gradually adding subsets of TimePro, we observed the model's performance changes across various temporal grounding-centric instruction-tuning data.
<table><tr><td>Normal</td><td>TimeIT</td><td>TGC</td><td>HD</td><td>MTG</td><td>RTL</td><td>Egoschema Full</td><td>VideoMME w/o subs</td><td>Charades-STA R@1 IOU=0.5</td><td>QVHighlights Hit@1</td></tr><tr><td>✓</td><td></td><td></td><td></td><td></td><td></td><td>56.6</td><td>42.6</td><td>8.0</td><td>24.4</td></tr><tr><td>✓</td><td>✓</td><td></td><td></td><td></td><td></td><td>57.8</td><td>43.6</td><td>32.2</td><td>25.2</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td></td><td></td><td>58.3</td><td>44.0</td><td>39.1</td><td>33.9</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td></td><td></td><td>59.8</td><td>44.9</td><td>41.9</td><td>43.8</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td></td><td>60.0</td><td>45.1</td><td>45.8</td><td>48.3</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>60.0</td><td>46.3</td><td>48.7</td><td>54.1</td></tr></table>
Table 5: Performance results of the ablation study on different components of TimePro. We use 82K normal training data as the baseline. TimeIT refers to the training data with five task types from Ren et al. (2024), TGC refers to Temporal Grounded Caption, HD refers to Highlight Detection, MTG refers to Multi-format Temporal Grounding, and RTL refers to Reasoning Temporal Localization.

Figure 5: Performance of VideoChat-T with varying input frame numbers. As the number of input frames increases, the performance of VideoChat-T shows an upward trend in both long video QA and temporal grounding tasks. Due to the excessively low temporal grounding performance of VideoChat2, its curve is omitted.
As we progressively added subsets of TimePro, not only did the model's performance on temporal grounding tasks show a stable and significant improvement, but we also observed a noticeable upward trend in performance on long video benchmarks. This corroborates, to some extent, that temporal grounding-centric tasks have a positive impact on long video understanding.
Impact of frames. To investigate the impact of input frame count on model performance, we conduct an ablation study. Figure 5 illustrates the scalability of our model's performance with respect to input frame count. VideoChat-T demonstrates good stability as the input frame count varies, and its performance in long video QA and temporal grounding tasks improves as the frame count increases. In contrast, the baseline model, VideoChat2, exhibits catastrophic performance degradation when the frame count is significantly increased. As the input frame count grows, the number of visual encoding tokens grows linearly, and excessive visual token input imposes an additional computational burden on the temporal modeling of the LLM. TimeSuite mitigates this by employing Token Shuffle to reduce the number of tokens, ensuring the stable operation of the model.
# 5 CONCLUSION
In this paper, we have introduced TimeSuite, a collection of new designs spanning an efficient architecture, high-quality data, and a new instruction-tuning task, to achieve long video understanding by fine-tuning short video MLLMs with temporal grounding-centric data. We address the computational challenges of processing long videos by introducing token shuffle to compress visual tokens. We also propose the TAPE for adaptive position encoding, enhancing the temporal awareness of visual representations. Additionally, our Temporal Grounded Caption training task enables MLLMs to build correspondence between grounded segments and detailed captions, while the TimePro dataset provides comprehensive instruction-tuning data for learning more effective temporal perception. Experimental results demonstrate that VideoChat-T significantly improves long video understanding, with notable performance gains on Egoschema and VideoMME. Furthermore, VideoChat-T exhibits strong zero-shot temporal grounding capabilities, significantly outperforming previous MLLMs on temporal grounding. Overall, our TimeSuite provides effective designs for short video MLLMs to enhance their performance on temporal grounding and long video QA. We hope TimeSuite can offer some insights into designing long video MLLMs.
# ACKNOWLEDGEMENT
This work is supported by the National Key R&D Program of China (No. 2022ZD0160900), the Fundamental Research Funds for the Central Universities (No. 020214380119), Jiangsu Frontier Technology Research and Development Program (No. BF2024076), and the Collaborative Innovation Center of Novel Software Technology and Industrialization.
# REFERENCES
Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE international conference on computer vision, pp. 5803-5812, 2017.
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023.
Lin Chen, Xin Wei, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Bin Lin, Zhenyu Tang, et al. Sharegpt4video: Improving video understanding and generation with better captions. arXiv preprint arXiv:2406.04325, 2024.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with $90\%$ chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2(3):6, 2023.
Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, and Chunhua Shen. Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882, 2021.
Yue Fan, Xiaojian Ma, Rujie Wu, Yuntao Du, Jiaqi Li, Zhi Gao, and Qing Li. Videoagent: A memory-augmented multimodal agent for video understanding. arXiv preprint arXiv:2403.11481, 2024.
Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, et al. Video-mme: The first-ever comprehensive evaluation benchmark of multi-modal llms in video analysis. arXiv preprint arXiv:2405.21075, 2024.
Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. Tall: Temporal activity localization via language query. In Proceedings of the IEEE international conference on computer vision, pp. 5267-5275, 2017.
Siavash Golkar, Mariel Pettee, Michael Eickenberg, Alberto Bietti, Miles Cranmer, Geraud Krawezik, Francois Lanusse, Michael McCabe, Ruben Ohana, Liam Parker, et al. xval: A continuous number encoding for large language models. arXiv preprint arXiv:2310.02989, 2023.
Michael Gygli, Helmut Grabner, Hayko Riemenschneider, and Luc Van Gool. Creating summaries from user videos. In Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part VII 13, pp. 505-520. Springer, 2014.
Bin Huang, Xin Wang, Hong Chen, Zihan Song, and Wenwu Zhu. Vtimellm: Empower llm to grasp video moments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14271-14280, 2024a.
De-An Huang, Shijia Liao, Subhashree Radhakrishnan, Hongxu Yin, Pavlo Molchanov, Zhiding Yu, and Jan Kautz. Lita: Language instructed temporal-localization assistant. arXiv preprint arXiv:2403.19046, 2024b.
Gabriel Huang, Bo Pang, Zhenhai Zhu, Clara Rivera, and Radu Soricut. Multimodal pretraining for dense video captioning. arXiv preprint arXiv:2011.11760, 2020.
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. arXiv preprint arXiv:2311.05232, 2023.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38, 2023.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
Peng Jin, Ryuichi Takanobu, Wancai Zhang, Xiaochun Cao, and Li Yuan. Chat-univi: Unified visual representation empowers large language models with image and video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13700-13710, 2024.
Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pp. 706-715, 2017.
Jie Lei, Tamara L Berg, and Mohit Bansal. Detecting moments and highlights in videos via natural language queries. Advances in Neural Information Processing Systems, 34:11846-11858, 2021a.
Jie Lei, Tamara L Berg, and Mohit Bansal. Detecting moments and highlights in videos via natural language queries. Advances in Neural Information Processing Systems, 34:11846-11858, 2021b.
Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024a.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pp. 19730–19742. PMLR, 2023a.
KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355, 2023b.
Kunchang Li, Yali Wang, Yizhuo Li, Yi Wang, Yinan He, Limin Wang, and Yu Qiao. Unmasked teacher: Towards training-efficient video foundation models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 19948-19960, 2023c.
Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 22195-22206, 2024b.
Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. arXiv preprint arXiv:2311.17043, 2023d.
Zhaowei Li, Qi Xu, Dong Zhang, Hang Song, Yiqing Cai, Qi Qi, Ran Zhou, Junting Pan, Zefeng Li, Van Tu Vu, et al. Groundinggpt: Language enhanced multi-modal grounding model. CoRR, 2024c.
Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. arXiv preprint arXiv:2311.10122, 2023a.
Kevin Qinghong Lin, Pengchuan Zhang, Joya Chen, Shraman Pramanick, Difei Gao, Alex Jinpeng Wang, Rui Yan, and Mike Zheng Shou. Univtg: Towards unified video-language temporal grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2794-2804, 2023b.
Hao Liu, Wilson Yan, Matei Zaharia, and Pieter Abbeel. World model on million-length video and language with ringattention. arXiv preprint arXiv:2402.08268, 2024a.
Ruyang Liu, Chen Li, Haoran Tang, Yixiao Ge, Ying Shan, and Ge Li. St-llm: Large language models are effective temporal learners. arXiv preprint arXiv:2404.00308, 2024b.
Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. arXiv preprint arXiv:2306.05424, 2023.
Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems, 36, 2023.
WonJun Moon, Sangeek Hyun, SuBeen Lee, and Jae-Pil Heo. Correlation-guided query-dependency calibration in video representation learning for temporal grounding. arXiv preprint arXiv:2311.08835, 2023a.
WonJun Moon, Sangeek Hyun, SangUk Park, Dongchan Park, and Jae-Pil Heo. Query-dependent video representation for moment retrieval and highlight detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 23023-23033, 2023b.
Andreea-Maria Oncescu, Joao F Henriques, Yang Liu, Andrew Zisserman, and Samuel Albanie. Queryd: A video dataset with high-quality text and audio narrations. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2265-2269. IEEE, 2021.
Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, and Yao Zhao. Chatvtg: Video temporal grounding via chat with video dialogue large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1847-1856, 2024.
Shuhuai Ren, Linli Yao, Shicheng Li, Xu Sun, and Lu Hou. Timechat: A time-sensitive multimodal large language model for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14313-14323, 2024.
Share. Sharegemini: Scaling up video caption data for multimodal large language models, June 2024. URL https://github.com/Share14/ShareGemini.
Enxin Song, Wenhao Chai, Guanhong Wang, Yucheng Zhang, Haoyang Zhou, Feiyang Wu, Haozhe Chi, Xun Guo, Tian Ye, Yanting Zhang, et al. Moviechat: From dense token to sparse memory for long video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 18221-18232, 2024a.
Enxin Song, Wenhao Chai, Tian Ye, Jenq-Neng Hwang, Xi Li, and Gaoang Wang. Moviechat+: Question-aware sparse memory for long video question answering. arXiv preprint arXiv:2404.17176, 2024b.
Yale Song, Jordi Vallmitjana, Amanda Stent, and Alejandro Jaimes. Tvsum: Summarizing web videos using titles. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5179-5187, 2015.
Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. Coin: A large-scale dataset for comprehensive instructional video analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1207-1216, 2019.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Gido M Van de Ven, Tinne Tuytelaars, and Andreas S Tolias. Three types of incremental learning. Nature Machine Intelligence, 4(12):1185-1197, 2022.
Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks for action recognition in videos. IEEE Trans. Pattern Anal. Mach. Intell., 41(11):2740-2755, 2019.
Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024a.
Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. Advances in Neural Information Processing Systems, 36, 2024b.
Xiaohan Wang, Yuhui Zhang, Orr Zohar, and Serena Yeung-Levy. Videoagent: Long-form video understanding with large language model as agent. arXiv preprint arXiv:2403.10517, 2024c.
Xidong Wang, Dingjie Song, Shunian Chen, Chen Zhang, and Benyou Wang. Longllava: Scaling multi-modal llms to 1000 images efficiently via hybrid architecture. arXiv preprint arXiv:2409.02889, 2024d.
Yi Wang, Kunchang Li, Xinhao Li, Jiashuo Yu, Yinan He, Guo Chen, Baoqi Pei, Rongkun Zheng, Jilan Xu, Zun Wang, et al. Internvideo2: Scaling video foundation models for multimodal video understanding. arXiv preprint arXiv:2403.15377, 2024e.
Yueqian Wang, Xiaojun Meng, Jianxin Liang, Yuxuan Wang, Qun Liu, and Dongyan Zhao. Hawkeye: Training video-text llms for grounding text in videos. arXiv preprint arXiv:2403.10228, 2024f.
Yuxuan Wang, Yueqian Wang, Pengfei Wu, Jianxin Liang, Dongyan Zhao, Yang Liu, and Zilong Zheng. Efficient temporal extrapolation of multimodal large language models with temporal grounding bridge. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 9972-9987, 2024g.
Ziyang Wang, Shoubin Yu, Elias Stengel-Eskin, Jaehong Yoon, Feng Cheng, Gedas Bertasius, and Mohit Bansal. Videotree: Adaptive tree-based video representation for llm reasoning on long videos. arXiv preprint arXiv:2405.19209, 2024h.
Jiannan Wu, Muyan Zhong, Sen Xing, Zeqiang Lai, Zhaoyang Liu, Wenhai Wang, Zhe Chen, Xizhou Zhu, Lewei Lu, Tong Lu, et al. Visionllm v2: An end-to-end generalist multimodal large language model for hundreds of vision-language tasks. arXiv preprint arXiv:2406.08394, 2024.
Junbin Xiao, Angela Yao, Yicong Li, and Tat-Seng Chua. Can i trust your answer? visually grounded video question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13204-13214, 2024.
Fuzhao Xue, Yukang Chen, Dacheng Li, Qinghao Hu, Ligeng Zhu, Xiuyu Li, Yunhao Fang, Haotian Tang, Shang Yang, Zhijian Liu, et al. Longvila: Scaling long-context visual language models for long videos. arXiv preprint arXiv:2408.10188, 2024.
Shen Yan, Xuehan Xiong, Arsha Nagrani, Anurag Arnab, Zhonghao Wang, Weina Ge, David Ross, and Cordelia Schmid. Unloc: A unified framework for video localization tasks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 13623-13633, 2023.
En Yu, Liang Zhao, Yana Wei, Jinrong Yang, Dongming Wu, Lingyu Kong, Haoran Wei, Tiancai Wang, Zheng Ge, Xiangyu Zhang, et al. Merlin: Empowering multimodal llms with foresight minds. arXiv preprint arXiv:2312.00589, 2023.
Abhay Zala, Jaemin Cho, Satwik Kottur, Xilun Chen, Barlas Oguz, Yashar Mehdad, and Mohit Bansal. Hierarchical video-moment retrieval and step-captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 23056-23065, 2023.
Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hessel, Ali Farhadi, and Yejin Choi. Merlot reserve: Neural script knowledge through vision and language and sound. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16375-16387, 2022.
Xiangyu Zeng, Mingzhu Xu, Yijun Hu, Haoyu Tang, Yupeng Hu, and Liqiang Nie. Adaptive edge-aware semantic interaction network for salient object detection in optical remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 2023.
Yingsen Zeng, Yujie Zhong, Chengjian Feng, and Lin Ma. Unimd: Towards unifying moment retrieval and temporal action detection. arXiv preprint arXiv:2404.04933, 2024.
Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858, 2023.
Haoji Zhang, Yiqin Wang, Yansong Tang, Yong Liu, Jiashi Feng, Jifeng Dai, and Xiaojie Jin. Flash-vstream: Memory-based real-time understanding for long video streams. arXiv preprint arXiv:2406.08085, 2024a.
Peiyuan Zhang, Kaichen Zhang, Bo Li, Guangtao Zeng, Jingkang Yang, Yuanhan Zhang, Ziyue Wang, Haoran Tan, Chunyuan Li, and Ziwei Liu. Long context transfer from language to vision. arXiv preprint arXiv:2406.16852, 2024b.
Junjie Zhou, Yan Shu, Bo Zhao, Boya Wu, Shitao Xiao, Xi Yang, Yongping Xiong, Bo Zhang, Tiejun Huang, and Zheng Liu. Mlvu: A comprehensive benchmark for multi-task long video understanding. arXiv preprint arXiv:2406.04264, 2024.
Luowei Zhou, Chenliang Xu, and Jason Corso. Towards automatic learning of procedures from web instructional videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
# A IMPLEMENTATION OF TAPE
# Algorithm 1 PyTorch snippet of TAPE.
```python
# initialize related packages
import torch
import torch.nn as nn


class TransposeLayerNorm(nn.Module):
    # Helper assumed by the original snippet (not defined there): LayerNorm
    # applied over the channel dimension of a (B, C, T) tensor.
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        return self.norm(x.transpose(1, 2)).transpose(1, 2)


class TemporalAdapter(nn.Module):
    def __init__(self, merge_len, clip_num, input_dim, mid_dim, output_dim, sample_rate):
        super().__init__()
        # Compress sequence length; upsampling restores it in the decoder.
        self.AvgPool = nn.AvgPool1d(merge_len, stride=merge_len)
        self.upsample = nn.Upsample(scale_factor=sample_rate)
        # Channel compression in, channel alignment out. The output layer is
        # zero-initialized so TAPE starts as an identity on the residual path.
        self.linear_input = nn.Linear(input_dim, mid_dim)
        self.linear_output = nn.Linear(mid_dim, output_dim)
        nn.init.constant_(self.linear_output.weight, 0)
        nn.init.constant_(self.linear_output.bias, 0)
        # U-Net-like encoder: two depthwise separable downsampling stages.
        self.Downsample_Depthwise_Separable_Conv1 = nn.Sequential(
            nn.Conv1d(mid_dim, mid_dim, merge_len * 2 + 1, stride=sample_rate,
                      padding=merge_len, groups=mid_dim),
            nn.Conv1d(mid_dim, mid_dim, 1),
            TransposeLayerNorm(mid_dim),
            nn.GELU(),
        )
        self.Downsample_Depthwise_Separable_Conv2 = nn.Sequential(
            nn.Conv1d(mid_dim, mid_dim, merge_len * 2 + 1, stride=sample_rate,
                      padding=merge_len, groups=mid_dim),
            nn.Conv1d(mid_dim, mid_dim, 1),
            TransposeLayerNorm(mid_dim),
            nn.GELU(),
        )
        # Long-window convolution on the shortest sequence; its zero padding
        # at both ends serves as the anchors for relative position encoding.
        self.fc = nn.Sequential(
            nn.Conv1d(mid_dim, mid_dim, clip_num + 1, stride=1,
                      padding=clip_num // 2),
            TransposeLayerNorm(mid_dim),
            nn.GELU(),
        )
        # Decoder convolutions applied after each upsampling step.
        self.Conv2 = nn.Sequential(
            nn.Conv1d(mid_dim, mid_dim, merge_len + 1, stride=1,
                      padding=merge_len // 2, groups=mid_dim),
            nn.Conv1d(mid_dim, mid_dim, 1),
            TransposeLayerNorm(mid_dim),
            nn.GELU(),
        )
        self.Conv1 = nn.Sequential(
            nn.Conv1d(mid_dim, mid_dim, merge_len + 1, stride=1,
                      padding=merge_len // 2, groups=mid_dim),
            nn.Conv1d(mid_dim, mid_dim, 1),
            TransposeLayerNorm(mid_dim),
            nn.GELU(),
        )

    def forward(self, input_tokens):
        # (B, L, input_dim) -> (B, mid_dim, L)
        time_ad = self.linear_input(input_tokens).transpose(1, 2)
        time_ad1 = self.AvgPool(time_ad)
        time_ad2 = self.Downsample_Depthwise_Separable_Conv1(time_ad1)
        time_ad3 = self.Downsample_Depthwise_Separable_Conv2(time_ad2)
        time_ad3 = self.fc(time_ad3)
        # Upsample from short to long, with residual connections keeping
        # temporal features at each scale.
        time_ad2 = self.upsample(time_ad3) + time_ad2
        time_ad2 = self.Conv2(time_ad2)
        time_ad1 = self.upsample(time_ad2) + time_ad1
        time_ad1 = self.Conv1(time_ad1)
        # (B, mid_dim, L') -> (B, L', output_dim), aligned with the shuffled
        # video tokens in the channel dimension.
        time_ad_out = self.linear_output(time_ad1.transpose(1, 2))
        return time_ad_out
```
Algorithm 1 details the implementation process of TAPE in code form. Specifically, the long video token sequence input_tokens is first compressed in the channel dimension by a linear layer to obtain time_ad, and the sequence length is compressed through a pooling layer. Next, we use a U-Net-like structure composed of one-dimensional depthwise separable convolutions to progressively down-sample the sequence, obtaining three one-dimensional temporal feature sequences with different time resolutions, namely time_ad1, time_ad2, and time_ad3. Subsequently, a convolution with a sufficiently long window is applied to the shortest temporal feature sequence time_ad3, using zero padding at both ends as anchors to encode the relative temporal position of each token in the sequence. Then, we progressively upsample the temporal feature sequences from short to long, using residual connections to preserve temporal features at different scales. Finally, the temporal feature sequence time_ad_out is restored to the same length as the video features after token shuffling and aligned in the channel dimension through a linear layer.
# B INSTRUCTION-TUNING DATA
We fine-tuned VideoChat-T using 432K instances, comprising 349K from TimePro and 82K from normal data. All videos were sampled from existing open-source video datasets, with detailed information provided in Table 6.
<table><tr><td>Set</td><td>Task</td><td>Source</td><td>Instance Num</td></tr><tr><td rowspan="15">TimePro</td><td rowspan="3">Temporal Video Grounding</td><td>DiDeMo</td><td>32,944</td></tr><tr><td>QuerYD</td><td>14,602</td></tr><tr><td>HiREST-grounding</td><td>459</td></tr><tr><td rowspan="3">Dense Video Captioning</td><td>ActivityNet-Captions</td><td>10,009</td></tr><tr><td>ViTT</td><td>5,086</td></tr><tr><td>YouCook2</td><td>8,700</td></tr><tr><td rowspan="2">Video Summarization</td><td>TVSum</td><td>50</td></tr><tr><td>SumMe</td><td>25</td></tr><tr><td rowspan="2">Step Localization and Captioning</td><td>COIN</td><td>9,026</td></tr><tr><td>HiREST-step</td><td>459</td></tr><tr><td>Transcribed Speech Generation</td><td>YT-Temporal</td><td>31,190</td></tr><tr><td>Reasoning Temporal Localization</td><td>ActivityNet-RTL</td><td>33,557</td></tr><tr><td>Multi-format Temporal Grounding</td><td>InternVid-VTime</td><td>100,000</td></tr><tr><td>Highlight Detection</td><td>ActivityNet-HL</td><td>10,340</td></tr><tr><td>Temporal Grounded Caption</td><td>CosMo-TGC</td><td>93,118</td></tr><tr><td rowspan="6">Normal</td><td rowspan="2">Conversation</td><td>VideoChatGPT</td><td>13,303</td></tr><tr><td>VideoChat</td><td>13,884</td></tr><tr><td rowspan="2">Video QA</td><td>EgoQA</td><td>7,813</td></tr><tr><td>MovieChat-QA</td><td>808</td></tr><tr><td>Reasoning</td><td>STAR</td><td>45,731</td></tr><tr><td>Caption</td><td>MovieChat-Caption</td><td>808</td></tr></table>
Table 6: The complete instruction fine-tuning data used for training. We utilized a total of approximately 432K data points, which can be divided into 349K instances of TimePro and 82K instances of regular video data, covering 13 tasks across 21 datasets.
We evaluate the quality of the data from three perspectives: diversity, length, and difficulty. We strive to include different datasets for various tasks, and the distribution of videos in the datasets is as broad as possible. The length of the videos should be controlled within an appropriate range, as excessively long or short videos may pose challenges for training. Each query should clearly describe the video content of the target time segment and avoid corresponding to multiple time segments in the video. Based on these principles, we have screened and integrated existing high-quality datasets, which significantly contribute to enhancing the model's temporal awareness capabilities.
TimePro encompasses a series of open-source temporal grounding datasets that we have integrated, cleaned, and refined, such as TimeIT (Ren et al., 2024), ANet-RTL (Huang et al., 2024b), and InternVid-VTime (Huang et al., 2024a). These high-quality open-source datasets have been experimentally validated by us. We also added two new self-made datasets, ANet-HL and CosMo-TGC.
Temporal Video Grounding. This task involves providing a natural language query and requires outputting the corresponding video's start and end times. The datasets include DiDeMo (Anne Hendricks et al., 2017), QuerYD (Oncescu et al., 2021), and HiREST-grounding (Zala et al., 2023), aiming to achieve precise temporal localization during user interaction with natural language.
Dense Video Captioning. This task requires the model to detect a series of events occurring in a given video and output the corresponding timestamps and coarse-grained descriptions. The datasets for this part include ActivityNet-Captions (Krishna et al., 2017), ViTT (Huang et al., 2020), and YouCook2 (Zhou et al., 2018), which help the model learn the temporal relationships between different events within the video.
Video Summarization. The goal of this task is not to summarize at the semantic level of natural language, but to determine a set of compressed frames or clips in the form of timestamps, representing the most informative content in a given video. Our datasets include TVSum (Song et al., 2015) and SumMe (Gygli et al., 2014), which effectively combine the model's temporal perception capabilities with its semantic content inference abilities.
Step Localization and Captioning. This task differs from dense video captioning as it is designed to segment and describe the important steps within a long video. We have integrated two datasets, COIN (Tang et al., 2019) and HiREST-step (Zala et al., 2023), which can help the model learn the procedural temporal logic relationships of different steps within a single event.
Transcribed Speech Generation. The purpose of this task is to predict speech content and its corresponding start and end timestamps based on visual signals in the video. Built on the YT-Temporal (Zellers et al., 2022) dataset, this task can be viewed as a weakly supervised event localization and description task.
Reasoning Temporal Localization. The answers to the questions in this task include both timestamps and explanations. We used the ANet-RTL (Huang et al., 2024b) dataset as training data for this task. By combining temporal localization and reasoning, we can more specifically enhance the model's temporal perception capabilities.
Multi-format Temporal Grounding. This task includes both single-turn and multi-turn dialogues, with a variety of question types. We use the InternVid-VTime (Huang et al., 2024a) dataset for training this task. The broader range of task types and more diverse output formats can effectively enhance the model's temporal generalization capabilities.
Highlight Detection. Unlike video summarization, this task identifies only the most salient moments of a video in response to a natural language query, without covering the entire scope of the original video (Lei et al., 2021a). We used a custom dataset, ANet-HL, derived from temporal localization data. We extract video segments between the start and end times of the target's appearance and use CLIP to calculate the similarity between each frame's scene and the target. This is converted into discrete saliency levels ranging from 1 to 5, at intervals of 0.5. This task effectively enhances the model's temporal perception capabilities for specific events.
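The mapping from CLIP similarity to discrete saliency can be sketched as follows; the min-max normalization over the segment is an assumed detail, since the paper specifies only the output scale of 1 to 5 in steps of 0.5.

```python
import torch

def clip_sim_to_saliency(similarities: torch.Tensor) -> torch.Tensor:
    # Normalize per-frame CLIP similarities to [0, 1], then quantize onto the
    # nine discrete levels 1.0, 1.5, ..., 5.0 used by ANet-HL.
    s = (similarities - similarities.min()) / (similarities.max() - similarities.min() + 1e-8)
    return 1.0 + torch.round(s * 8) / 2.0

# e.g. clip_sim_to_saliency(torch.tensor([0.18, 0.25, 0.31])) -> tensor([1., 3., 5.])
```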
Temporal Grounded Caption. This task involves using scene titles as queries, requiring the model to output both the time segments in which the scenes appear and fine-grained captions for those segments. We used our custom dataset, CosMo-TGC. This task format, which combines temporal localization and semantic understanding, can effectively prevent large language models from focusing on irrelevant video segments, thereby improving the quality of the model's responses.
We also used normal data comprising four tasks and six different data sources. These general data help prevent the model from overfitting to temporal grounding-related tasks during training, thereby preserving the model's original capabilities.
# C COMPUTATIONAL EFFICIENCY
By applying Token Shuffle, we further reduce the computational cost of VideoChat-T, giving it a significant computational advantage over high-performance models such as LLaVA-OneVision (Li et al., 2024a) and Qwen2-VL (Wang et al., 2024a). Under the same settings, VideoChat-T uses only 3 tokens per frame, with FLOPs consumption at just $5.1\%$ of LLaVA-OneVision's. Its inference time on a single A100 is only 0.63 seconds, reaching real-time response levels and making it highly suitable for applications requiring rapid response, such as online video understanding.
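The 3-tokens-per-frame figure follows directly from the hyperparameters listed in Appendix D: each 8-frame clip yields 96 QFormer tokens, and token shuffle with merge length 4 compresses them to $96 / 4 = 24$ tokens per clip, i.e., $24 / 8 = 3$ tokens per frame, so a 128-frame input costs only $(128 / 8) \times 24 = 384$ visual tokens in total.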
<table><tr><td>Method</td><td>Token num per frame</td><td>FLOPs (128 frames)</td><td>Inference Time (128 frames, single A100 GPU)</td><td>Charades-STA R@1 IOU=0.5</td><td>QVHighlight mAP</td><td>MVBench Avg</td><td>Egoschema Full</td><td>VideoMME w/o subs</td></tr><tr><td>Qwen2-VL (Wang et al., 2024a)</td><td>138</td><td>929.8 T</td><td>Out Of Memory</td><td>15.0</td><td>13.0</td><td>67.0</td><td>66.7</td><td>63.3</td></tr><tr><td>LLaVA-OneVision (Li et al., 2024a)</td><td>196</td><td>693.7 T</td><td>4.95 s</td><td>7.3</td><td>14.98</td><td>56.7</td><td>60.1</td><td>58.2</td></tr><tr><td>VideoChat-T (Ours)</td><td>3</td><td>35.5 T</td><td>0.63 s</td><td>48.7</td><td>26.5</td><td>59.9</td><td>60.0</td><td>46.3</td></tr></table>

Table 7: Comparison of the computational efficiency and performance of VideoChat-T with other methods. Our approach achieves relatively impressive performance at extremely low computational cost.
In terms of performance, VideoChat-T significantly outperforms LLaVA-OneVision in temporal grounding tasks. It has a slight advantage on MVBench; both perform comparably on Egoschema; but VideoChat-T performs worse on VideoMME. Given the substantial savings in computational resources with VideoChat-T, we consider the disadvantages on some datasets to be acceptable.
Moreover, our model's ability to maintain reasonable performance under high compression ratios suggests that the token embedding spaces of contemporary models may be characterized by considerable feature redundancy. This observation presents a promising avenue for future research, as efficient techniques for compressing or discarding redundant features could substantially reduce computational costs without sacrificing model performance, enabling longer context reasoning.
# D DETAILS OF HYPERPARAMETERS
<table><tr><td>config</td><td>epoch 1</td><td>epochs 2&3</td></tr><tr><td>input frame</td><td>192</td><td>128</td></tr><tr><td>max text length</td><td>1536</td><td>1024</td></tr><tr><td>freeze TAPE</td><td>True</td><td>False</td></tr><tr><td>learning rate</td><td>2e-5</td><td>1.5e-5</td></tr><tr><td>input resolution</td><td colspan="2">224</td></tr><tr><td>clip frame</td><td colspan="2">8</td></tr><tr><td>merge length</td><td colspan="2">4</td></tr><tr><td>QFormer token (per clip)</td><td colspan="2">96</td></tr><tr><td>lora rank</td><td colspan="2">16</td></tr><tr><td>lora alpha</td><td colspan="2">32</td></tr><tr><td>lora dropout</td><td colspan="2">0.1</td></tr><tr><td>batch size (per GPU)</td><td colspan="2">2</td></tr><tr><td>optimizer</td><td colspan="2">AdamW</td></tr><tr><td>optimizer momentum</td><td colspan="2">0.9, 0.999</td></tr><tr><td>weight decay</td><td colspan="2">0.02</td></tr><tr><td>learning rate schedule</td><td colspan="2">cosine decay</td></tr></table>

Table 8: Hyper-parameter settings during the training process of VideoChat-T.
Table 8 lists the hyperparameters used during the different epochs of training. In the first epoch, we use a larger number of input frames and freeze the TAPE. At the beginning of the second epoch, we unfreeze the TAPE and fix the model's input to 128 frames. Following the settings of VideoChat2, we integrate the LoRA module into the LLM and apply FlashAttention to accelerate training.
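For concreteness, the LoRA hyperparameters from Table 8 map onto a standard peft configuration as sketched below; the `target_modules` list is an assumption, since the paper does not specify which projection layers are adapted.

```python
from peft import LoraConfig, get_peft_model

# LoRA settings taken from Table 8; target_modules is an assumed detail.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# llm = get_peft_model(llm, lora_config)  # wrap the Mistral-7B backbone
```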
# E FULL PERFORMANCES
<table><tr><td>Model</td><td>LLM</td><td>Avg</td><td>AS</td><td>AP</td><td>AA</td><td>FA</td><td>UA</td><td>OE</td><td>OI</td><td>OS</td><td>MD</td><td>AL</td><td>ST</td><td>AC</td><td>MC</td><td>MA</td><td>SC</td><td>FP</td><td>CO</td><td>EN</td><td>ER</td><td>CI</td></tr><tr><td>VideoChatGPT (Maaz et al., 2023)</td><td>7B</td><td>32.7</td><td>23.5</td><td>26.0</td><td>62.0</td><td>22.5</td><td>26.5</td><td>54.0</td><td>28.0</td><td>40.0</td><td>23.0</td><td>20.0</td><td>31.0</td><td>30.5</td><td>25.5</td><td>39.5</td><td>48.5</td><td>29.0</td><td>33.0</td><td>29.5</td><td>26.0</td><td>35.5</td></tr><tr><td>VideoLLaMA (Zhang et al., 2023)</td><td>7B</td><td>34.1</td><td>27.5</td><td>25.5</td><td>51.0</td><td>29.0</td><td>39.0</td><td>48.0</td><td>40.5</td><td>38.0</td><td>22.5</td><td>22.5</td><td>43.0</td><td>34.0</td><td>22.5</td><td>32.5</td><td>45.5</td><td>32.5</td><td>40.0</td><td>30.0</td><td>21.0</td><td>37.0</td></tr><tr><td>VideoChat (Li et al., 2023b)</td><td>7B</td><td>35.5</td><td>33.5</td><td>26.5</td><td>56.0</td><td>33.5</td><td>40.5</td><td>53.0</td><td>40.5</td><td>30.0</td><td>25.5</td><td>27.0</td><td>48.5</td><td>35.0</td><td>20.5</td><td>42.5</td><td>46.0</td><td>26.5</td><td>41.0</td><td>23.5</td><td>23.5</td><td>36.0</td></tr><tr><td>ST-LLM (Liu et al., 2024b)</td><td>7B</td><td>54.9</td><td>66.0</td><td>53.5</td><td>84.0</td><td>44.0</td><td>58.5</td><td>80.5</td><td>73.5</td><td>38.5</td><td>42.5</td><td>31.0</td><td>86.5</td><td>36.5</td><td>56.5</td><td>78.5</td><td>43.0</td><td>44.5</td><td>46.5</td><td>34.5</td><td>41.5</td><td>58.5</td></tr><tr><td>VideoChat2 (Li et al., 2024b)</td><td>7B</td><td>60.4</td><td>75.5</td><td>58.0</td><td>83.5</td><td>50.5</td><td>60.5</td><td>87.5</td><td>74.5</td><td>45.0</td><td>47.5</td><td>44.0</td><td>82.5</td><td>37.0</td><td>64.5</td><td>87.5</td><td>51.0</td><td>66.5</td><td>47.0</td><td>35.0</td><td>37.0</td><td>72.5</td></tr><tr><td>VideoChat-T</td><td>7B</td><td>59.9</td><td>83.5</td><td>68.5</td><td>80.5</td><td>44.0</td><td>61.0</td><td>71.0</td><td>84.0</td><td>35.5</td><td>48.0</td><td>56.5</td><td>87.0</td><td>46.0</td><td>56.5</td><td>78.0</td><td>49.5</td><td>59.0</td><td>46.0</td><td>37.0</td><td>40.0</td><td>66.5</td></tr></table>
|
| 329 |
+
|
| 330 |
+
Table 9: The full performance of VideoChat-T on MVBench. VideoChat-T still demonstrates strong performance, effectively preventing catastrophic forgetting caused by incremental fine-tuning.
|
| 331 |
+
|
| 332 |
+
The performance of VideoChat-T on MVBench is shown in Table 9. Compared to VideoChat2, VideoChat-T experienced only a $0.5\%$ accuracy loss. This indicates that our method effectively preserves the capabilities of the base model, preventing catastrophic forgetting caused by incremental fine-tuning. For a detailed analysis of the performance degradation on MVBench, please refer to Appendix F.2. For the Action Localization (AL) task, which requires the model to determine the coarse-grained temporal position of events, the test accuracy improved from $44.0\%$ to $56.5\%$. This indirectly confirms that our method significantly enhances the model's temporal awareness capabilities.
|
| 333 |
+
|
| 334 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">LLM size</td><td colspan="2">Overall (%)</td><td colspan="2">Short Video (%)</td><td colspan="2">Medium Video (%)</td><td colspan="2">Long Video (%)</td></tr><tr><td>w/o subs</td><td>w subs</td><td>w/o subs</td><td>w subs</td><td>w/o subs</td><td>w subs</td><td>w/o subs</td><td>w subs</td></tr><tr><td>ST-LLM (Liu et al., 2024b)</td><td>7B</td><td>37.9</td><td>42.3</td><td>45.7</td><td>48.4</td><td>36.8</td><td>41.4</td><td>31.3</td><td>36.9</td></tr><tr><td>Video-LLaVA (Lin et al., 2023a)</td><td>7B</td><td>39.9</td><td>41.6</td><td>45.3</td><td>46.1</td><td>38.0</td><td>40.7</td><td>36.2</td><td>38.1</td></tr><tr><td>ShareGPT4Video (Chen et al., 2024)</td><td>8B</td><td>39.9</td><td>43.6</td><td>48.3</td><td>53.6</td><td>36.3</td><td>39.3</td><td>35.0</td><td>37.9</td></tr><tr><td>Chat-UniVi-v1.5 (Jin et al., 2024)</td><td>7B</td><td>40.6</td><td>45.9</td><td>45.7</td><td>51.2</td><td>40.3</td><td>44.6</td><td>35.8</td><td>41.8</td></tr><tr><td>Qwen-VL-Chat (Bai et al., 2023)</td><td>7B</td><td>41.1</td><td>41.9</td><td>46.9</td><td>47.3</td><td>38.7</td><td>40.4</td><td>37.8</td><td>37.9</td></tr><tr><td>ShareGemini (Share, 2024)</td><td>7B</td><td>43.2</td><td>47.9</td><td>49.1</td><td>52.8</td><td>41.3</td><td>47.3</td><td>39.1</td><td>43.4</td></tr><tr><td>VideoChat2 (Li et al., 2024b)</td><td>7B</td><td>39.5</td><td>43.8</td><td>48.3</td><td>52.8</td><td>37.0</td><td>39.4</td><td>33.2</td><td>39.2</td></tr><tr><td>VideoChat-T</td><td>7B</td><td>46.3</td><td>55.8</td><td>53.3</td><td>59.9</td><td>43.8</td><td>54.0</td><td>41.9</td><td>53.4</td></tr></table>
|
| 335 |
+
|
| 336 |
+
Table 10: The full performance of VideoChat-T on VideoMME. VideoChat-T achieved significant performance improvements, particularly in the long video subset.
|
| 337 |
+
|
| 338 |
+
The overall performance of our model on VideoMME is presented in Table 10. VideoChat-T achieved significant improvements under both VideoMME evaluation settings, i.e., videos only and videos with subtitles. The improvements are particularly notable in the long video subset.
|
| 339 |
+
|
| 340 |
+
# F EXTRA ABLATION
|
| 341 |
+
|
| 342 |
+
# F.1 DOMAIN CORRELATION OF DATA
|
| 343 |
+
|
| 344 |
+
<table><tr><td>Model</td><td>Charades-STA(R@1 IOU=0.5)</td><td>MVBench(avg)</td></tr><tr><td>VideoChat-T</td><td>48.7</td><td>59.9</td></tr><tr><td>w/o STAR</td><td>47.5 (-1.2)</td><td>59.4 (-0.5)</td></tr></table>
|
| 345 |
+
|
| 346 |
+
Table 11: The performance changes of the model after removing STAR. Although the video sources of STAR may have some domain correlation with those of Charades-STA and MVBench, the performance of our model is only minimally affected by its removal.
|
| 347 |
+
|
| 348 |
+
We found that the video sources in the STAR dataset might have some domain correlation with the video sources in MVBench and Charades-STA. Therefore, we removed STAR from the training set while keeping the other training settings unchanged. The performance on the benchmarks whose video sources might have domain correlation is shown in Table 11. The model's accuracy on Charades-STA (R@1 IOU=0.5) decreased by $1.2\%$, and the average accuracy on MVBench decreased by $0.5\%$. This indicates that the domain correlation of video sources did not significantly inflate our model's performance. Notably, after removing STAR, our normal data volume was reduced to approximately 36K. This implies that, with parameter-efficient initialization and appropriate training strategies, only a small amount of high-quality normal data is sufficient to retain the model's original capabilities.
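For reference, the Charades-STA metric quoted above is R@1 at IoU=0.5: the top-1 predicted segment counts as correct if its temporal IoU with the ground-truth segment is at least 0.5. A minimal sketch of the metric (function names are ours):

```python
def temporal_iou(pred, gt):
    """IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(preds, gts, threshold=0.5):
    """Fraction of queries whose top-1 prediction reaches the IoU threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(preds, gts))
    return hits / len(gts)
```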
|
| 349 |
+
|
| 350 |
+
# F.2 DEEPER INVESTIGATION OF THE PERFORMANCE DROP ON MVBENCH
|
| 351 |
+
|
| 352 |
+
We conducted a deeper investigation into the performance decline on MVBench. Through additional ablation experiments (as shown in Table 12), we identified two main factors contributing to the performance drop.
|
| 353 |
+
|
| 354 |
+
Architectural Discrepancy: The original VideoChat2 model was designed to process only 16 frames, leading to a mismatch between its learned feature distribution and the architecture of VideoChat-T. As shown in the first two rows of the table, increasing the input frame number for VideoChat2
|
| 355 |
+
|
| 356 |
+
<table><tr><td>Method</td><td>post ft data</td><td>data size</td><td>frame num</td><td>token num (per frame)</td><td>MVBench (Avg)</td></tr><tr><td>VideoChat2</td><td>-</td><td>-</td><td>16</td><td>12</td><td>60.4</td></tr><tr><td>VideoChat2</td><td>-</td><td>-</td><td>128</td><td>12</td><td>42.1</td></tr><tr><td>VideoChat-T (Common_Init)</td><td>-</td><td>-</td><td>128</td><td>3</td><td>25.3</td></tr><tr><td>VideoChat-T (Ours)</td><td>-</td><td>-</td><td>128</td><td>3</td><td>48.6</td></tr><tr><td>VideoChat-T (Ours)</td><td>TimePro+Normal (Ours)</td><td>0.43M</td><td>128</td><td>3</td><td>59.9</td></tr><tr><td>VideoChat-T (Ours)</td><td>TimePro+FullVideoChat2</td><td>2M</td><td>128</td><td>3</td><td>62.9</td></tr></table>
|
| 357 |
+
|
| 358 |
+
Table 12: Performance of VideoChat2 and VideoChat-T on MVBench under different settings.
|
| 359 |
+
|
| 360 |
+
resulted in a significant performance drop (from 60.4 to 42.1). When initializing VideoChat-T directly from VideoChat2, performance was close to random (25.3) due to the newly introduced, randomly initialized layers. By applying efficient initialization to these new layers, we partially recovered the original capabilities of the model, bringing the MVBench performance of the untrained VideoChat-T back to 48.6, an improvement of 6.5 over the 128-frame VideoChat2. After further fine-tuning, the short-video processing capability of VideoChat-T improved significantly, reaching 59.9.
|
| 361 |
+
|
| 362 |
+
Fine-tuning Data Discrepancy: We fine-tuned VideoChat-T with only 432K samples, significantly fewer than the 2M non-grounded regular samples used for training VideoChat2. The fine-tuning data for VideoChat2 primarily consisted of short videos of around ten seconds, which closely matched the length distribution of the MVBench evaluation videos and played a crucial role in improving MVBench performance. To validate this hypothesis, we conducted additional experiments by training our VideoChat-T model on TimePro plus the full VideoChat2 training data. VideoChat-T then showed a further improvement on MVBench, achieving an accuracy of 62.9, an increase of 2.5 over the original VideoChat2.
|
| 363 |
+
|
| 364 |
+
Based on the above, we can identify the fundamental factors affecting the model's foundational generalization capabilities. When a model's architecture is adjusted, the learned original distribution may not perfectly match the new architecture, making efficient initialization of the new layers crucial. The features learned from the original dataset might be forgotten due to changes in various parameters. Utilizing a more comprehensive and diverse dataset for fine-tuning can restore and even further enhance performance.
|
| 365 |
+
|
| 366 |
+
# F.3 ASSOCIATION BETWEEN PERFORMANCE AND MODEL DESIGN
|
| 367 |
+
|
| 368 |
+
<table><tr><td>Method</td><td>FT Data</td><td>Charades-STA IOU0.5</td><td>QVHighlight mAP</td><td>MVBench Avg</td><td>Egoschema Full</td><td>VideoMME w/o subs</td></tr><tr><td>TimeChat</td><td>TimeIT+Valley</td><td>32.2</td><td>14.5</td><td>38.5</td><td>33.0</td><td>30.2</td></tr><tr><td>TimeChat</td><td>TimePro+Normal</td><td>34.2</td><td>16.3</td><td>41.6</td><td>38.9</td><td>33.4</td></tr><tr><td>VideoChat-T</td><td>TimePro+Normal</td><td>48.7</td><td>26.5</td><td>59.9</td><td>60.0</td><td>46.3</td></tr></table>
|
| 369 |
+
|
| 370 |
+
Table 13: Comparison of other model architectures trained on our dataset with our method, demonstrating the impact of the overall model structure design.
|
| 371 |
+
|
| 372 |
+
To eliminate the influence of training data and auxiliary tasks, and to more clearly evaluate the association between performance and model design, we fine-tuned TimeChat using the full set of fine-tuning data and auxiliary tasks from VideoChat-T. Table 13 presents the performance of TimeChat fine-tuned with our data across five datasets. TimeChat shows improvements across all benchmarks, but its performance still lags significantly behind VideoChat-T. This indicates that an efficient fine-tuning architecture design and high-quality, diverse datasets are both essential and complementary.
|
| 373 |
+
|
| 374 |
+
# F.4 VALIDATION OF TRANSFERABILITY
|
| 375 |
+
|
| 376 |
+
To verify the robustness of our TimeSuite for other MLLMs, we transferred our method to Llava-OneVision (Li et al., 2024a). Table 14 shows the performance changes of Llava-OneVision after applying our TimeSuite. It can be seen that when we apply the full set of methods in TimeSuite to Llava-OneVision, the model's performance on two different long-video evaluation benchmarks
|
| 377 |
+
|
| 378 |
+
<table><tr><td>Method</td><td>Charades-STA IOU0.5</td><td>QVHighlight mAP</td><td>VideoMME w/o subs</td><td>MLVU Avg</td><td>MVBench Avg</td></tr><tr><td>Llava-OneVision (baseline)</td><td>7.3</td><td>15.0</td><td>58.2</td><td>64.7</td><td>56.7</td></tr><tr><td>Llava-OneVision-T (Ours)</td><td>42.5</td><td>21.7</td><td>61.4</td><td>69.4</td><td>56.1</td></tr></table>
|
| 379 |
+
|
| 380 |
+
improves (+3.2 on VideoMME and +4.7 on MLVU), effectively demonstrating the robustness of our TimeSuite for different MLLMs.
|
| 381 |
+
|
| 382 |
+
Table 14: Performance comparison of TimeSuite when migrated to other MLLMs. Applying our method yields a clear improvement in long video comprehension, demonstrating the transferability of our approach.
|
| 383 |
+
|
| 384 |
+
# F.5 EXPLORATIONS OF DATA CONFIGURATIONS OF TIMEPRO
|
| 385 |
+
|
| 386 |
+
<table><tr><td>Method</td><td>MVBench Avg</td><td>Egoschema Full</td><td>VideoMME w/o subs</td><td>Charades-STA IOU=0.5</td><td>QVHighlight mAP</td></tr><tr><td>TimePro615K+Normal82K (old version)</td><td>60.0</td><td>61.0</td><td>46.3</td><td>45.4</td><td>25.7</td></tr><tr><td>TimePro349K+Normal82K (Ours)</td><td>59.9</td><td>60.0</td><td>46.3</td><td>48.7</td><td>26.5</td></tr></table>
|
| 387 |
+
|
| 388 |
+
Table 15: Comparison of different versions of our proposed TimePro. More data does not necessarily lead to higher overall performance, highlighting the importance of data quality.
|
| 389 |
+
|
| 390 |
+
In the early version of TimePro, we employed datasets comprising 309K Multi-format Temporal Grounding instances, 150K Temporal Grounded Caption instances, and other data. Through extensive experimentation (as shown in Table 15), we discovered that removing low-quality data while retaining high-quality instances could significantly reduce training time without compromising performance. Consequently, we pruned these two subsets to 100K and 93K instances, respectively. The data distribution presented in the paper represents the optimized and relatively balanced configuration we arrived at.
|
| 391 |
+
|
| 392 |
+
# G DISCUSSION
|
| 393 |
+
|
| 394 |
+
# G.1 CAN THE OVERALL PERFORMANCE OF MLLMS BE ENHANCED BY CONTINUOUSLY INTEGRATING EXPERT TASKS?
|
| 395 |
+
|
| 396 |
+
By appropriately fine-tuning the Multimodal Large Language Model (MLLM), we have developed a general MLLM with powerful zero-shot temporal grounding capabilities. Its performance, after fine-tuning on the training sets of the evaluation benchmarks, can rival current state-of-the-art supervised expert models. Based on these results, we boldly speculate about whether it is possible to internalize the capabilities of expert models, such as spatial grounding, tracking, and detection (Zeng et al., 2023), into the MLLM itself, without any external expert decoders, to enhance the comprehensive understanding performance of the MLLM and achieve a unified generalist MLLM for multiple tasks.
|
| 397 |
+
|
| 398 |
+
Merlin (Yu et al., 2023) and VisionLLM (Wang et al., 2024b) have already attempted something similar, but their performance is limited by the reasoning capabilities and language representation bottlenecks of the LLM. There is still a significant gap between their performance and that of expert models on various tasks. We observed similar phenomena in our experiments. The temporal grounding task only requires outputting two timestamps, and the task format is relatively simple, so our model achieved good results. However, the highlight detection task requires outputting multiple discrete timestamps and their corresponding saliency scores; the model needs to accurately predict dozens of numbers in language form to answer correctly. Our model performed well only on data with fewer timestamps. Therefore, how to simplify the complex output formats of expert tasks into the language representation of LLMs, or to design special processing procedures that simplify complex expert tasks, is a question worth exploring.
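As a concrete illustration of why the two-timestamp format is manageable for an LLM, such an answer can be recovered from free-form text with a simple pattern; the phrasing below is an assumed output style, not the exact template used by VideoChat-T.

```python
import re

# Assumed answer style, e.g. "The event happens from 12.5 to 30.0 seconds."
TIME_SPAN = re.compile(r"from\s+(\d+(?:\.\d+)?)\s+to\s+(\d+(?:\.\d+)?)")

def parse_span(answer: str):
    """Extract a (start, end) pair in seconds from a model answer, if present."""
    m = TIME_SPAN.search(answer)
    return (float(m.group(1)), float(m.group(2))) if m else None

print(parse_span("The event happens from 12.5 to 30.0 seconds."))  # (12.5, 30.0)
```

A highlight-detection answer, by contrast, would require dozens of such numbers plus saliency scores, which is where prediction in language form becomes brittle.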
|
| 399 |
+
|
| 400 |
+
Moreover, designing diverse data formats is also crucial for enhancing the expert capabilities of MLLMs. Compared to classic expert models, MLLMs have a natural advantage in task type diversity and can enhance their performance through diverse task variants of a single capability.
|
| 401 |
+
|
| 402 |
+
For temporal grounding tasks, we found that enhancing task diversity has a significant effect on improving the model's temporal perception generalization ability. We can boldly speculate that, given sufficiently diverse training task types, most tasks with relatively simple output formats can achieve results comparable to expert models through appropriate instruction fine-tuning.
|
| 403 |
+
|
| 404 |
+
Through the integration of diverse expert tasks and the optimization of language representations, MLLMs can achieve substantial improvements in their overall capabilities. This allows them to effectively comprehend and address complex tasks, rivaling or even exceeding the performance of specialized expert models within specific domains. Looking ahead, MLLMs have the potential to evolve into highly versatile AI models, transcending traditional conversational and QA capabilities. They will be equipped to handle a wide range of complex expert tasks across various domains, such as vision, language, and reasoning.
|
| 405 |
+
|
| 406 |
+
# G.2 WHY DOES TEMPORAL GROUNDING DATA LEAD TO ACCURACY LOSS IN SHORT-TERM VIDEOS?
|
| 407 |
+
|
| 408 |
+
We conducted ablation experiments using different combinations of temporal grounding data and regular data. The accuracy of VideoChat-T on MVBench after fine-tuning with various data combinations is shown in Table 16.
|
| 409 |
+
|
| 410 |
+
<table><tr><td>FT Data</td><td>MVBench (AVG)</td></tr><tr><td>TimeIT</td><td>54.7</td></tr><tr><td>TimeIT+Normal</td><td>55.3</td></tr><tr><td>Normal</td><td>56.1</td></tr><tr><td>TimePro</td><td>57.4</td></tr><tr><td>TimePro+Normal (Ours)</td><td>59.9</td></tr></table>
|
| 411 |
+
|
| 412 |
+
Table 16: Performance of VideoChat-T on MVBench under different fine-tuning data settings.
|
| 413 |
+
|
| 414 |
+
The diversity of grounding data formats has often been limited in the past, which can lead to overfitting on temporal grounding tasks and cause the model to lose its general question-answering capability. We compared the TimeIT dataset proposed in TimeChat (Ren et al., 2024) with our TimePro dataset on MVBench. As shown in Table 16, fine-tuning with only TimeIT resulted in the lowest accuracy, and the combined use of TimeIT+Normal also performed slightly worse than using Normal alone. This indicates that monotonous grounding data indeed damages the model's original performance (as shown in Figure 1 at the beginning of the paper, TimeChat loses some of its general question-answering capability after fine-tuning, outputting localization times for general questions).
|
| 415 |
+
|
| 416 |
+
In contrast, our TimePro dataset includes diverse data, encompassing 9 different task types from 15 datasets, which helps mitigate the generalization loss caused by homogeneous grounding data types. Additionally, our dataset integrates grounding with various general tasks. For instance, Grounded Caption requires detailed descriptions of the corresponding video segments, while Reasoning Temporal Localization requires the model to reason about the question. This approach significantly enhances the model's generalization ability and minimizes the impact on its original capability (e.g., short video accuracy). As demonstrated in Table 16, using only TimePro exceeds using Normal alone, and the combined use of TimePro and Normal far surpasses all other combinations. This also confirms that our TimePro effectively preserves the model's original performance.
|
| 417 |
+
|
| 418 |
+
Overall, using a single type of expert task training data can easily lead to model overfitting, resulting in significant loss of the model's original capabilities. To preserve the model's foundational generalization abilities, it is essential to use diversified training data. Additionally, incorporating data of various types and distributions, such as text, images, and videos, can further enhance the model's generalization capabilities.
|
| 419 |
+
|
| 420 |
+
# G.3 COULD TRAINING THE MODEL ON BOTH TEMPORAL AND NON-TEMPORAL GROUNDING DATA MITIGATE PERFORMANCE LOSS IN SHORT-TERM VIDEOS?
|
| 421 |
+
|
| 422 |
+
To address this question, we conducted additional ablation experiments. By training VideoChat-T with different combinations of temporal and non-temporal grounding data, we were able to clearly observe the effects of both types of data on the model's performance. The results are shown in Table 17.
|
| 423 |
+
|
| 424 |
+
<table><tr><td>FT Data</td><td>MVBench Avg</td><td>VideoMME w/o subs</td><td>Charades-STA R1@0.5</td></tr><tr><td>Normal</td><td>56.1</td><td>42.6</td><td>8.0</td></tr><tr><td>TimePro</td><td>57.4</td><td>46.0</td><td>45.6</td></tr><tr><td>TimePro+Normal (Ours)</td><td>59.9</td><td>46.3</td><td>48.7</td></tr></table>
|
| 425 |
+
|
| 426 |
+
Table 17: Performance comparison of VideoChat-T using different combinations of temporal grounding and non-temporal grounding data.
|
| 427 |
+
|
| 428 |
+
It can be observed that the combined use of TimePro+Normal for VideoChat-T achieves the highest performance in short video QA, long video QA, and temporal grounding tasks. This not only demonstrates that using both temporal grounding and non-temporal grounding data can reduce performance loss in short videos, but also reveals that the effects of temporal and non-temporal grounding data are complementary across various tasks. The distinct differences between temporal grounding and non-temporal grounding tasks can respectively compensate for the model's shortcomings in different task perspectives and feature distributions. The simultaneous use of both types of data can effectively enhance the model's overall capabilities.
|
| 429 |
+
|
| 430 |
+
# H CASE STUDY
|
| 431 |
+
|
| 432 |
+
# H.1 MORE QUALITATIVE ANALYSIS
|
| 433 |
+
|
| 434 |
+
To further qualitatively analyze our model, we provide three additional types of examples, covering long video QA, short video QA, and captioning tasks, all of which involve temporal grounding.
|
| 435 |
+
|
| 436 |
+
More qualitative comparisons about long video QA are shown in Figure 6. VideoChat-T effectively handles various questions across different domains. By better perceiving the temporal relationships of different events occurring in long videos, it can more accurately and deeply understand the detailed content of the entire video.
|
| 437 |
+
|
| 438 |
+
More qualitative comparisons about short video QA are shown in Figure 7. VideoChat-T effectively retains the original capabilities of the base model. Through parameter-efficient initialization methods and appropriate training strategies, we minimize the damage to the base model's capabilities caused by new architectures and data.
|
| 439 |
+
|
| 440 |
+
More qualitative comparisons about captioning are shown in Figure 8. Although VideoChat2 describes more local details in some scenarios compared to VideoChat-T, VideoChat-T focuses more on a series of temporal events, which aligns better with how humans typically describe videos.
|
| 441 |
+
|
| 442 |
+
# H.2 SHORTCOMINGS
|
| 443 |
+
|
| 444 |
+
We also conducted a qualitative analysis of the shortcomings of VideoChat-T through examples. As shown in Figure 9, VideoChat-T performs poorly on examples with complex logic. In the left example, although VideoChat-T accurately identified the timing of the event, it failed to fully explain the motivation behind the man opening the isolation door, which was "to fight the hijackers of the space elevator, seize the controller, and thus save the people in the entire space elevator." In the right example, VideoChat-T correctly identified the event where Mr. Bean reached out to touch his desk mate's table, but it incorrectly explained the true reason for this action, which was "to cover up the fact that he was copying his desk mate's exam by pretending to wipe dust off the desk."
|
| 445 |
+
|
| 446 |
+

|
| 447 |
+
Figure 6: More qualitative comparisons in temporal grounding & long video QA.
|
| 448 |
+
|
| 449 |
+

|
| 450 |
+
|
| 451 |
+

|
| 452 |
+
Figure 7: More qualitative comparisons in temporal grounding & short video QA.
|
| 453 |
+
|
| 454 |
+

|
| 455 |
+
|
| 456 |
+
Due to the preponderance of single-turn, perceptual questions in our training data and the lack of multi-step reasoning data with complex logic, our model struggles to handle more challenging scenarios that demand intricate logical reasoning. To address this limitation, we propose constructing data in a chain-of-thought format to guide the model through multi-step reasoning, enabling it to delve deeper into the underlying motivations and causal relationships within a video.
|
| 457 |
+
|
| 458 |
+

|
| 459 |
+
Figure 8: More qualitative comparisons in temporal grounding & captioning.
|
| 460 |
+
|
| 461 |
+

|
| 462 |
+
|
| 463 |
+

|
| 464 |
+
Figure 9: Examples of poor performance by VideoChat-T. While it accurately identifies the time of events, it struggles to answer questions that involve more complex logic.
|
| 465 |
+
|
| 466 |
+

|
2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b782f04dcd92cc31cd3f2b034c92e0c3cbedc5645deb51bc881d56598ccf1d85
|
| 3 |
+
size 1168134
|
2025/TimeSuite_ Improving MLLMs for Long Video Understanding via Grounded Tuning/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/1000abc3-3f82-4c7b-a0aa-1b66e4569e7b_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:72063f7bd029c41ece5f644e1c1a854af41d2402bab1e85bf549f3e5bd178b59
|
| 3 |
+
size 3991391
|
2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8dc527dbf5bd882acce1f9ec585f3d8af98d55758e88b34d59beb2fb018d1354
|
| 3 |
+
size 2690738
|
2025/Timer-XL_ Long-Context Transformers for Unified Time Series Forecasting/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/78080855-33d6-4037-9b8c-edc307a2e575_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/78080855-33d6-4037-9b8c-edc307a2e575_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/78080855-33d6-4037-9b8c-edc307a2e575_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0ab9c6f9845d42f37cc96961bce07a0ce78e3c381f00c9324516b0107c9c9793
|
| 3 |
+
size 6536138
|
2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1d4f298dbfd9b299124d53b13a228437fcb920af5f7a23859b24e75b54199435
|
| 3 |
+
size 4483096
|
2025/To CoT or not to CoT_ Chain-of-thought helps mainly on math and symbolic reasoning/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/e6b439cb-3b05-45ee-8c52-561b8f255560_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0779f8250e8940b5029009c6deeea602f1294e2940724568af370949db32d07e
|
| 3 |
+
size 1978501
|
2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/full.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d39af3ce46b37253ee1aee3532681174c4697c1af494a4244235381cff0a5a43
|
| 3 |
+
size 452604
|
2025/To Code or Not To Code_ Exploring Impact of Code in Pre-training/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/11b8de53-d193-4b48-bf31-fc86f1bab485_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0b33a94141a686bef595afe7b18417a17973e68cac348dc36f131eebb891c194
|
| 3 |
+
size 4657239
|
2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/full.md
ADDED
|
@@ -0,0 +1,495 @@
|
| 1 |
+
# TO TACKLE ADVERSARIAL TRANSFERABILITY: A NOVEL ENSEMBLE TRAINING METHOD WITH FOURIER TRANSFORMATION
|
| 2 |
+
|
| 3 |
+
Wanlin Zhang $^{1,3}$ , Weichen Lin $^{2}$ , Ruomin Huang $^{4}$ , Shihong Song $^{1}$ , Hu Ding $^{1*}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ School of Computer Science and Technology, University of Science and Technology of China
|
| 6 |
+
$^{2}$ School of Artificial Intelligence and Data Science, University of Science and Technology of China
|
| 7 |
+
$^{3}$ Shanghai Innovation Institute $^{4}$ Department of Computer Science, Duke University
|
| 8 |
+
{ideven, linweichen, shihongsong}@mail.ustc.edu.cn
|
| 9 |
+
ruomin.huang@duke.edu,HUDING@ustc.edu.cn
|
| 10 |
+
|
| 11 |
+
# ABSTRACT
|
| 12 |
+
|
| 13 |
+
Ensemble methods are commonly used for enhancing robustness in machine learning. However, due to the "transferability" of adversarial examples, the performance of an ensemble model can be seriously affected even if it contains a set of independently trained sub-models. To address this issue, we propose an efficient data transformation method based on a cute "weakness allocation" strategy, to diversify non-robust features. Our approach relies on a fine-grained analysis of the relation between non-robust features and adversarial attack directions. Moreover, our approach enjoys several other advantages, e.g., it does not require any communication between sub-models and its construction complexity is quite low. We conduct a set of experiments to evaluate the performance of our proposed method and compare it with several popular baselines. The results suggest that our approach can achieve significantly improved robust accuracy over most existing ensemble methods, while preserving high clean accuracy.
|
| 14 |
+
|
| 15 |
+
# 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
In the past decade, deep neural networks (DNNs) have achieved prominent performance on a broad range of real-world tasks (Goodfellow et al., 2016). However, a number of previous works show that DNNs are susceptible to carefully crafted manipulations, where the manipulated data are called "adversarial examples" (Szegedy et al., 2014; Zhou et al., 2018; Heaven, 2019). The existence of adversarial examples severely impedes the application of DNNs in security-conscious scenarios, such as self-driving cars (Rossolini et al., 2023; Zhu et al., 2021) and health care (Newaz et al., 2020).
|
| 18 |
+
|
| 19 |
+
The adversarial training approach (Wang et al., 2023a; Madry et al., 2018) has gained significant attention due to its great effectiveness for defending against adversarial examples. However, adversarial training often necessitates considerable training time and large training datasets (Gowal et al., 2021; Carmon et al., 2019). Moreover, it has been observed that adversarial training is likely to incur a certain decline in accuracy on clean data, which also hinders the trained model from being applied to many practical tasks (Tsipras et al., 2018; Zhang et al., 2019).
|
| 20 |
+
|
| 21 |
+
Another important approach to enhance adversarial robustness is ensemble training (Tramér et al., 2018). But recent studies (Yang et al., 2025; Gao et al., 2022; Waseda et al., 2023) demonstrated that an adversarial example can attack different models even if they are trained independently; this phenomenon is the so-called "transferability" of adversarial examples. Hence, the strategy of simply integrating different models trained on the same original dataset is not sufficient to guarantee overall robustness. To resolve this issue, different approaches have been proposed for maximizing the "diversity" among sub-models; in general, these approaches can be categorized into two classes: "simultaneous training" and "individual training" (Pang et al., 2019).
|
| 22 |
+
|
| 23 |
+
To reduce the similarity among sub-models, most existing "simultaneous training" methods attempt to incorporate some penalty during each epoch of parameter updates. Kariyappa & Qureshi (2019)
|
| 24 |
+
|
| 25 |
+
proposed the "Gradient Alignment Loss (GAL)" method to minimize the gradient similarity between sub-models directly. Further, Yang et al. (2021) proposed the "Transferability Reduced Smooth (TRS)" method to improve GAL by adding a regularization term to increase the smoothness, as the models with a smoother loss function can reduce the "transferability" of attacks. Yang et al. (2020) aimed to isolate the adversarial vulnerability in each sub-model by distilling non-robust features, where the sub-models can then generate diverse outputs being resilient against transfer attacks. Despite their effectiveness for defending adversarial attacks, the simultaneous training methods often require a substantial amount of memory since all the sub-models need to be stored in the GPUs in the training stage, which could be prohibitive if the number of sub-models is not small (say, more than 10) and/or their sizes are large. Additionally, the information interaction in parallel training can also cause extra large communication cost.
|
| 26 |
+
|
| 27 |
+
Different from simultaneous training, most "individual training" methods train each sub-model independently on a randomly transformed version of the given training dataset (Pang et al., 2019; AprilPyone & Kiya, 2021). This "random transformation" strategy yields diverse datasets, and thus different sub-models trained on these datasets can present diverse performances when confronting an adversarial attack. The individual training approach has higher flexibility and also requires less GPU memory, because the sub-models do not need to be stored simultaneously. Since there is no communication between sub-models, individual training methods are more suitable for parallel training with multiple GPUs. But unfortunately, recent studies showed that the commonly used random transformations (e.g. image cropping and rescaling) are not that effective under adversarial attacks (Athalye et al., 2018). The major cause of suppressing the performance of individual training is that the "transferability" problem is still not well addressed.
|
| 28 |
+
|
| 29 |
+
Our contributions. To tackle the transferability obstacle, we consider developing a new data transformation method for ensemble training. Our main contributions are summarized as follows:
|
| 30 |
+
|
| 31 |
+
- First, we propose a fine-grained analysis on the relation between non-robust features and adversarial attack directions (Section 3). Being different from the previous analysis on non-robust features, our new analysis provides us the hints that are particularly useful to allocate the potential vulnerability directions to a set of sub-models, and therefore paves the way for designing our ensemble training strategy.
|
| 32 |
+
|
| 33 |
+
- Second, we propose a data transform framework that can effectively promote the diversity of training data for robust ensemble training. The framework consists of two steps: "frequency selection" and "frequency transformation", where the frequency is based on the Fourier transformation on the images. We propose two efficient frequency transformations with low complexities on the identified non-robust features. The first one is based on simple random noise, and the second one is a cute "targeted attack transformation" that can modify the non-robust features more effectively (Section 4.2).
|
| 34 |
+
|
| 35 |
+
- Finally, we conduct a set of experiments to evaluate the adversarial robustness of our approach on several benchmark datasets under the widely used attack algorithms. We also compare our approach with several open-source ensemble methods, such as ADP (Pang et al., 2019), GAL (Kariyappa & Qureshi, 2019), DVERGE (Yang et al., 2020), and TRS (Yang et al., 2021). Compared with those baselines, the experimental results suggest that our proposed approach can significantly outperform most of them in robust accuracy and also preserve comparable high clean accuracy.
|
| 36 |
+
|
| 37 |
+
# 1.1 OTHER RELATED WORKS
|
| 38 |
+
|
| 39 |
+
Data transformation for ensemble training. Guo et al. (2018) and Raff et al. (2019) proposed the transformations that preserve semantic information to reduce the impact of adversarial perturbation. AprilPyone & Kiya (2021) developed a training method that employs block-wise data transformations, where the input image is partitioned into blocks based on some private key. LINAC (Rusu et al., 2022) uses a predetermined random seed (private key) to initialize and train a DNN to encode the input data, serving as an encrypted input transformation.
|
| 40 |
+
|
| 41 |
+
Adversarial attack from frequency perspective. Wang et al. (2020) explained that the model's vulnerability to small distortions may be due to its dependence on high-frequency features. Yucel et al. (2023) proposed a data augmentation method that reduces the reliance on high-frequency components, so as to improve model's robustness while maintaining clean accuracy. Maiya et al. (2021) and Bernhard et al. (2021) respectively showed that to fully understand the vulnerability, we should consider the distribution of the entire dataset with high and low frequencies.
|
| 42 |
+
|
| 43 |
+
# 2 PRELIMINARIES
|
| 44 |
+
|
| 45 |
+
Some notations. We consider the $k$ -classification task: $\mathcal{X} \to \mathcal{Y}$ where $\mathcal{X}$ is the input data space and $\mathcal{Y} = \{1,2,\dots,k\}$ is the set of labels. A soft-classification model $f(\cdot;\beta)$ maps each $x \in \mathcal{X}$ to a vector $f(x;\beta) \in \mathbb{R}^k$ , where $\beta$ is the parameter vector that needs to be trained. Its associated hard-classification model is $F(x;\beta) = \arg \max_i [f(x;\beta)]_i$ where $[\cdot]_i$ stands for the $i$ -th coordinate. The model $f$ is usually equipped with a loss function $\ell(f(x;\beta), y)$ , $x \in \mathcal{X}$ and $y \in \mathcal{Y}$ , which is differentiable on $\beta$ (e.g., cross-entropy loss). We refer to the accuracy on the original dataset as "clean accuracy" and the accuracy on adversarial examples as "robust accuracy". We denote the one-hot $k$ -dimensional vector that corresponds to the target label $y$ as $h(y)$ .
|
| 46 |
+
|
| 47 |
+
Definition 2.1 (Ensemble Model) Let $\mathcal{M} = \{f_1, \dots, f_M\}$ be a set of sub-models for a $k$ -classification task. We build the ensemble model with the following function:
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
f _ {\mathrm {E}} (x; \beta_ {[ 1: M ]}) = \frac {1}{M} \sum_ {m \in [ M ]} \widehat {F} _ {m} (x; \beta_ {m}), \tag {1}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
where $\beta_{[1:M]} = \{\beta_m \mid 1 \leq m \leq M\}$ , and $\widehat{F}_m(x; \beta_m)$ is the one-hot $k$ -dimensional vector of the hard-classification model $F_m(x; \beta_m)$ of $f_m$ .
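In code, the ensemble of Definition 2.1 averages one-hot hard votes rather than raw logits; a minimal sketch:

```python
import numpy as np

def ensemble_predict(logits_per_model):
    """logits_per_model: shape (M, k), one row of sub-model outputs per f_m.
    Returns f_E(x) of Equation (1): the average of one-hot hard votes."""
    M, k = logits_per_model.shape
    votes = np.zeros(k)
    for logits in logits_per_model:
        votes[np.argmax(logits)] += 1.0  # one-hot vector of F_m(x)
    return votes / M  # the ensemble label is the argmax of this vector
```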
|
| 54 |
+
|
| 55 |
+
Definition 2.2 (Adversarial Attack and Targeted Attack) Given a model $f(\cdot; \beta)$ and an input $(x, y) \in \mathcal{X} \times \mathcal{Y}$ , the adversarial attack algorithm $\mathcal{A}$ returns a perturbed data $x'$ inside the $l_p$ ball of radius $\epsilon > 0$ , which maximizes the loss function $\ell(f(\cdot; \beta), \cdot)$ , or minimizes the loss function $\ell(f(\cdot; \beta), y_t)$ if given a target label $y_t \neq y$ . For the latter one, we say it is a "targeted attack from $y$ to $y_t$ ". Usually we set $p = 2$ or $p = \infty$ .
|
| 56 |
+
|
| 57 |
+
As mentioned in Section 1, because our proposed approach is based on Fourier transform, we introduce several necessary notations below. Given an image $x$ of size $L \times N$ , the corresponding two-dimensional discrete Fourier transform can be written as: for any $0 \leq u \leq L - 1$ and $0 \leq v \leq N - 1$ ,
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
\tilde{x}[u, v] = \sum_{s = 0}^{L - 1} \sum_{t = 0}^{N - 1} x[s, t] \cdot e^{-2\mathrm{j}\pi \left(\frac{us}{L} + \frac{vt}{N}\right)}, \tag{2}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
where “$\mathrm{j}$” denotes the imaginary unit, and “$\tilde{x}[u,v]$” is the entry in the $u$-th column and $v$-th row of the Fourier matrix $\tilde{x}$ (“$x[s,t]$” is defined similarly for the original image $x$). The pixels of the image $x$ form the time domain, and the entries of $\tilde{x}$ form the frequency domain. For a frequency $(u,v)$, the amplitude is the absolute value $|\tilde{x}[u,v]|$. We refer to a frequency $(u,v)$ as a frequency feature.
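In practice, the transform and the amplitudes of Equation (2) can be computed directly; a minimal NumPy sketch:

```python
import numpy as np

x = np.random.rand(64, 64)           # a toy grayscale image of size L x N
x_tilde = np.fft.fft2(x)             # two-dimensional DFT, Equation (2)
amplitude = np.abs(x_tilde)          # |x~[u, v]| for each frequency feature (u, v)
x_back = np.fft.ifft2(x_tilde).real  # the inverse transform recovers the image
assert np.allclose(x, x_back)
```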
|
| 64 |
+
|
| 65 |
+
# 3 FINE-GRAINED ANALYSIS ON ENSEMBLE MODEL VULNERABILITY
|
| 66 |
+
|
| 67 |
+
The previous work (Ilyas et al., 2019) categorizes the features learned by a model into robust and non-robust features. It shows that adversarial vulnerability is a natural consequence of the presence of highly predictive but non-robust features. Moreover, different models trained on the same dataset often have similar non-robust features, and therefore an adversarial example usually exhibits the "transferability" property among them. Several other works also presented detailed discussions on the impact of non-robust features (Benz et al., 2021; Springer et al., 2021). Following those studies, a natural idea for tackling the transferability issue is to ensure that the sub-models should have diverse non-robust features. In this section, we provide a fine-grained analysis on the vulnerability of ensemble models and then conclude two important hints for achieving this "diversity" goal.
|
| 68 |
+
|
| 69 |
+
The following definitions are inspired by (Ilyas et al., 2019). Note that, different from the term "feature" used in their article, we use "feature extractor" in our paper, since "feature" will be used particularly for referring to image features in the time or frequency domain. Specifically, we define a "feature extractor" as a function that maps the input $x \in \mathcal{X}$ to a vector in $\mathbb{R}^k$. A model $f$ is composed of a set of different feature extractors, with each feature extractor focusing on a distinct feature. The combination of the outputs of these feature extractors forms the model's final output. We then further define the "useful feature extractors".
|
| 70 |
+
|
| 71 |
+
Definition 3.1 (Useful feature extractor) For a given data distribution $\mathcal{D} = \mathcal{X}\times \mathcal{Y}$ , a feature extractor $\theta :\mathcal{X}\to \mathbb{R}^k$ is useful, if we have
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
\mathbb {E} _ {(x, y) \sim \mathcal {D}} [ h (y) ^ {\top} \theta (x) ] > \frac {1}{k}. \tag {3}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
Recall that $h(y)$ is the one-hot $k$ -dimensional vector of the label $y$ . Roughly speaking, the inequality (3) implies that the expected contribution of a useful feature extractor to the model's correct prediction is higher than the average contribution over all the $k$ classes.
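The condition (3) can be estimated empirically by averaging the extractor's output at the true label; a sketch under the assumption that the extractor and labeled samples are available as Python objects:

```python
import numpy as np

def is_useful(theta, samples, k):
    """Empirical check of inequality (3): theta(x) is a length-k vector,
    samples is a list of (x, y) pairs, and h(y)^T theta(x) = theta(x)[y]."""
    score = np.mean([theta(x)[y] for x, y in samples])
    return score > 1.0 / k
```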
|
| 78 |
+
|
| 79 |
+
Definition 3.2 (robust and non-robust feature extractor) We use $\mathcal{A}(x)$ to denote the adversarial example of a data item $x$ as described in Definition 2.2. Let $\theta$ be a useful feature extractor. (1) We say $\theta$ is robust if the following condition holds for any $i$ ( $1 \leq i \leq k$ ):
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\mathbb {E} _ {(x, y) \sim \mathcal {D} _ {i}} \left[ \theta (\mathcal {A} (x)) \right] _ {i} > \frac {1}{k}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
where $\mathcal{D}_i$ represents the $i$ -th class data. We denote the set of these robust feature extractors as $\Theta_{R}$ .
|
| 86 |
+
|
| 87 |
+
(2) The remaining useful feature extractors are non-robust. We assign these non-robust extractors to $k(k - 1)$ sets: $\{\Theta_{i,j} \mid 1 \leq i \neq j \leq k\}$ as follows. Initially, all these $k(k - 1)$ sets are empty. Then we go through all the non-robust feature extractors. For each non-robust $\theta$, there must exist at least one index $i$ such that
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\mathbb {E} _ {(x, y) \sim \mathcal {D} _ {i}} [ \theta (\mathcal {A} (x)) ] _ {i} \leq 1 / k;
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
we let $j = \arg \max_{s} \mathbb{E}_{(x,y) \sim \mathcal{D}_i}[\theta(\mathcal{A}(x))]_s$ and assign $\theta$ to $\Theta_{i,j}$ (note that $j$ must be different from $i$; if multiple indices achieve the maximum expectation, at least one of them is not equal to $i$, since otherwise $\sum_{s=1}^{k} \mathbb{E}_{(x,y) \sim \mathcal{D}_i}[\theta(\mathcal{A}(x))]_s$ would be less than 1). Eventually, these $k(k-1)$ sets are constructed, where each $\Theta_{i,j}$ contains the feature extractors that are not robust to the attack from $i$ to $j$.
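The assignment rule of Definition 3.2(2) is itself a small algorithm; the sketch below assumes we can estimate the per-class expectations $\mathbb{E}_{\mathcal{D}_i}[\theta(\mathcal{A}(x))]$ as length-$k$ vectors:

```python
import numpy as np
from collections import defaultdict

def assign_non_robust(extractors, expectations, k):
    """expectations[theta][i] ~ E_{D_i}[theta(A(x))], a length-k vector.
    Returns the sets Theta_{i,j} of Definition 3.2, keyed by (i, j)."""
    theta_sets = defaultdict(list)
    for theta in extractors:
        for i in range(k):
            e = expectations[theta][i]
            if e[i] <= 1.0 / k:          # theta fails to stay useful on class i
                j = int(np.argmax(e))    # the class the attack pushes i towards
                if j == i:               # tie case: pick the best index != i
                    j = int(np.argsort(e)[-2])
                theta_sets[(i, j)].append(theta)
    return theta_sets
```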
|
| 94 |
+
|
| 95 |
+
Remark 3.3 Intuitively, if a feature extractor is robust, it should preserve its contribution to the correct prediction even under perturbation. It is also worth noting that a non-robust feature extractor $\theta$ could be assigned to multiple sets $\Theta_{i,j}$.
|
| 96 |
+
|
| 97 |
+
Assume we have a standardly trained model $f$ consisting of a set of useful feature extractors, which we denote as $\Theta_f$. Each of them can be classified as robust or non-robust as in Definition 3.2. Similar to the formulation proposed in (Ilyas et al., 2019), we can represent the model as
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
f (x) = \sum_ {\theta \in \Theta_ {R} \cap \Theta_ {f}} w _ {\theta} \theta (x) + \sum_ {i, j = 1, i \neq j} ^ {k} \sum_ {\theta \in \Theta_ {i, j} \cap \Theta_ {f}} w _ {\theta} \theta (x), \tag {4}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
where each $\theta$ has a coefficient $w_{\theta} \in \mathbb{R}$. We then conduct our analysis based on Equation (4). Some recent works reveal that the adversarial training method can obtain a robust model by reducing the dependence on non-robust feature extractors (Allen-Zhu & Li, 2022; Tsipras et al., 2018). However, this strategy may cause a certain degradation in clean accuracy (because the non-robust feature extractors also contribute to obtaining correct predictions). Fortunately, we are able to avoid this dilemma in the context of ensemble training. Namely, we just need to keep the non-robust features as diverse as possible, instead of entirely eliminating the dependence on those non-robust feature extractors. To pave the way for realizing this goal, we introduce the definition of the vulnerability of an ensemble model below.
|
| 104 |
+
|
| 105 |
+
Definition 3.4 (Vulnerability of ensemble model) Suppose $f_{\mathrm{E}}$ is an ensemble model as described in Definition 2.1, and its associated hard-classification model is denoted by $F_{\mathrm{E}}$ : $\forall x$ , $F_{\mathrm{E}}(x) = \arg \max_{i}[f_{\mathrm{E}}(x)]_{i}$ . Given the data distribution $\mathcal{D} = \mathcal{X} \times \mathcal{Y}$ , the vulnerability of $F_{\mathrm{E}}$ is defined as:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\operatorname {V r} \left(F _ {\mathrm {E}}\right) = \mathbb {E} _ {(x, y) \sim \mathcal {D}} \left[ \mathbb {I} \left\{F _ {\mathrm {E}} (x) = y \wedge F _ {\mathrm {E}} (\mathcal {A} (x)) \neq y \right\} \right], \tag {5}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
where $\mathbb{I}(\cdot)$ represents the indicator function. Furthermore, for any target class $y_{t}$ , we can define the vulnerability towards $y_{t}$ as $\mathrm{Vr}(F_{\mathrm{E}}, y_{t}) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\mathbb{I}\{F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_{t}\}\right]$ .
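Empirically, $\mathrm{Vr}(F_{\mathrm{E}})$ in Equation (5) is simply the fraction of inputs that the ensemble classifies correctly in the clean case but not under attack; a Monte-Carlo sketch:

```python
import numpy as np

def vulnerability(F_E, attack, data):
    """Estimate Equation (5): F_E maps an input to a label, and
    attack(x, y) returns the adversarial example A(x)."""
    flips = [F_E(x) == y and F_E(attack(x, y)) != y for x, y in data]
    return float(np.mean(flips))
```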
|
| 112 |
+
|
| 113 |
+
The vulnerability of Definition 3.4 describes the success probability of an attack $\mathcal{A}$ to the ensemble model $F_{\mathrm{E}}$ . We have the following key inequality, which indicates that $\operatorname{Vr}(F_{\mathrm{E}})$ is bounded by considering all the attack directions, i.e.,
|
| 114 |
+
|
| 115 |
+
$$
|
| 116 |
+
\operatorname {V r} \left(F _ {\mathrm {E}}\right) \leq \sum_ {y _ {t} \in \mathcal {Y}} \operatorname {V r} \left(F _ {\mathrm {E}}, y _ {t}\right). \tag {6}
|
| 117 |
+
$$
|
| 118 |
+
|
| 119 |
+
The proof of Inequality (6) is placed in Appendix A.1. Moreover, if $F_{\mathrm{E}}(\mathcal{A}(x)) = y_t$ , there are at least $M / k$ sub-models returning the wrong label $y_t$ due to the pigeonhole principle. Namely, " $\sum_{m=1}^{M} \mathbb{I}\left([f_m(\mathcal{A}(x))]_y < [f_m(\mathcal{A}(x))]_{y_t}\right) > \frac{M}{k}$ " should be a necessary condition for successfully attacking from $y$ to $y_t$ . So it implies
|
| 120 |
+
|
| 121 |
+
$$
|
| 122 |
+
\operatorname {V r} \left(F _ {\mathrm {E}}, y _ {t}\right) \leq \mathbb {E} _ {(x, y) \sim \mathcal {D}} \left[ \mathbb {I} \left(\sum_ {m = 1} ^ {M} \mathbb {I} \left([ f _ {m} (\mathcal {A} (x)) ] _ {y} < [ f _ {m} (\mathcal {A} (x)) ] _ {y _ {t}}\right) > \frac {M}{k}\right) \right]. \tag {7}
|
| 123 |
+
$$
|
| 124 |
+
|
| 125 |
+
From the upper bound (6), we can decrease the total vulnerability by reducing $\mathrm{Vr}(F_{\mathrm{E}},y_t)$ for each $y_{t}$. Also, from (7) we know that $\mathrm{Vr}(F_{\mathrm{E}},y_{t})$ can be reduced by decreasing the chance of “$[f_m(\mathcal{A}(x))]_y < [f_m(\mathcal{A}(x))]_{y_t}$” over $m\in \{1,2,\dots ,M\}$. According to Equation (4), the inequality “$\left[f_m(\mathcal{A}(x))\right]_y < \left[f_m(\mathcal{A}(x))\right]_{y_t}$” can be rewritten as
|
| 126 |
+
|
| 127 |
+
$$
|
| 128 |
+
\left[ \sum_ {\theta \in \Theta_ {R} ^ {m}} w _ {\theta} \theta (\mathcal {A} (x)) + \sum_ {i, j = 1, i \neq j} ^ {k} \sum_ {\theta \in \Theta_ {i, j} ^ {m}} w _ {\theta} \theta (\mathcal {A} (x)) \right] _ {y} < \left[ \sum_ {\theta \in \Theta_ {R} ^ {m}} w _ {\theta} \theta (\mathcal {A} (x)) + \sum_ {i, j = 1, i \neq j} ^ {k} \sum_ {\theta \in \Theta_ {i, j} ^ {m}} w _ {\theta} \theta (\mathcal {A} (x)) \right] _ {y _ {t}}, \tag {8}
|
| 129 |
+
$$
|
| 130 |
+
|
| 131 |
+
where $\Theta_R^m$ and $\Theta_{i,j}^{m}$ respectively denote the sets of robust and non-robust feature extractors for the sub-model $f_{m}$. Moreover, the set $\Theta_{y,y_t}^m$ should have a relatively larger influence on the right-hand side of (8) than the other feature extractor sets $\Theta_{y,j}^{m}$ with $j\neq y_{t}$, due to the outer operator “$[\cdot ]_{y_t}$”. Therefore, we conclude our first hint as an intuition for reducing $\mathrm{Vr}(F_{\mathrm{E}})$.
|
| 132 |
+
|
| 133 |
+
Hint (i): To decrease the vulnerability in the attack direction $y_{t}$ (i.e., each term $\mathrm{Vr}(F_{\mathrm{E}},y_{t})$ in the upper bound of (6)), it is reasonable to decrease the influence from the non-robust feature extractors of $\Theta_{y,y_t}^m$ .
|
| 134 |
+
|
| 135 |
+
In Hint (i), a major difference from the previous analyses (Ilyas et al., 2019; Allen-Zhu & Li, 2022) is that we in particular relate each attack direction $y_{t}$ to some specific non-robust feature extractors; the benefit is that these correspondences can effectively help us to build a diverse ensemble model. Moreover, according to the principle of ensemble methods, as long as at least $M / 2 + 1$ sub-models are not successfully attacked, the ensemble model will successfully defend against the attack. So we conclude the second hint that is also important for designing our approach.
|
| 136 |
+
|
| 137 |
+
Hint (ii): For each attack direction $y_{t}$ , we only need to consider manipulating the training data of $M / 2 + 1$ sub-models instead of all the $M$ sub-models.
|
| 138 |
+
|
| 139 |
+
Overall, the above Hint (i) & (ii) play the key roles for inspiring our data transformation method in Section 4.
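Hint (ii) suggests a simple allocation scheme: each attack direction $y_t$ only needs to be covered by $M/2 + 1$ of the $M$ sub-models. A sketch of one such assignment (the round-robin choice below is ours, for illustration only):

```python
def allocate_directions(num_models, num_classes):
    """For each attack direction y_t, pick the M // 2 + 1 sub-models whose
    training data will be transformed to suppress that direction (Hint ii)."""
    cover = num_models // 2 + 1
    return {y_t: [(y_t + m) % num_models for m in range(cover)]
            for y_t in range(num_classes)}

# Example: 5 sub-models, 10 classes -> each direction covered by 3 sub-models.
print(allocate_directions(5, 10)[0])  # [0, 1, 2]
```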
|
| 140 |
+
|
| 141 |
+
# 4 OUR ENSEMBLE TRAINING METHOD
|
| 142 |
+
|
| 143 |
+
We first introduce our model and high-level idea in Section 4.1, and then elaborate on the technical details for the data transformations in Section 4.2.
|
| 144 |
+
|
| 145 |
+
# 4.1 OVERVIEW OF OUR FRAMEWORK
|
| 146 |
+
|
| 147 |
+
Note that the feature extractors of a model depend on the given training data. Namely, any modification on the features of the training data can implicitly influence the model. Thus, in this section we follow the Hint (i) & (ii) of Section 3 to design an effective data transformation method. The transformation is expected to modify the features of the training data, so as to enhance the robustness of the trained ensemble model. We train a set of distinct sub-models on the transformed training data; these sub-models can be integrated into an ensemble model being robust against adversarial attacks, while preserving the clean accuracy of each sub-model as much as possible. We use “ $\pi_{m}$ ” to denote the transformation for the $m$ -th sub-model, $1 \leq m \leq M$ , and formulate the following problem by slightly modifying Definition 2.1 (replace $x$ by the adversarial example $\mathcal{A}(x)$ for each sub-model):
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
\min \mathbb {E} _ {(x, y) \sim \mathcal {X} \times \mathcal {Y}} \ell \left(\frac {1}{M} \sum_ {m \in [ M ]} \widehat {F} _ {m} (\mathcal {A} (x); \beta_ {m}), y\right) \tag {9}
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
where $\beta_{m}$ is obtained by training on the transformed data, i.e., $\beta_{m} = \operatorname*{argmin}_{\beta}\mathbb{E}_{(x,y)\sim \mathcal{X}\times \mathcal{Y}}\ell (f_{m}(\pi_{m}(x);\beta),y)$ for each $m\in \{1,2,\dots ,M\}$ .
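Since each $\beta_m$ is an ordinary training run on its own transformed dataset, the outer loop is embarrassingly parallel; a sketch with a hypothetical `train` routine:

```python
def train_ensemble(sub_models, transforms, dataset, train):
    """Train each sub-model f_m independently on pi_m(X); `train(model, data)`
    is a hypothetical standard training routine returning fitted parameters.
    No communication between sub-models is required."""
    params = []
    for f_m, pi_m in zip(sub_models, transforms):
        transformed = [(pi_m(x), y) for x, y in dataset]
        params.append(train(f_m, transformed))  # beta_m = argmin of the loss
    return params
```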
The major challenge for solving the above problem (9) is how to design a set of appropriate transformations $\{\pi_m\mid 1\leq m\leq M\}$, so that the obtained parameters $\beta_{[1:M]}$ can yield sufficiently diverse sub-models. To address this issue, we leverage transformations in the frequency domain to guide the non-robust features of each sub-model to be as diverse as possible. Specifically, we introduce a method called "Frequency Domain Transformation (FDT)" for constructing the set of diverse training datasets $\{\pi_1(\mathcal{X}),\pi_2(\mathcal{X}),\dots ,\pi_M(\mathcal{X})\}$. FDT relies on a key "weakness allocation" strategy. Roughly speaking, the strategy aims to promote the diversity of the constructed datasets while ensuring that the overall clean accuracy of the ensemble is not sacrificed. The details are presented in the next section.

# 4.2 FREQUENCY DOMAIN TRANSFORMATION
Before performing the transformation, we need to select a set of non-robust features. In the time domain, a simple observation is that an image feature is usually invariant under spatial translation, e.g., it can appear at different positions in images. This property makes it challenging to directly identify and represent non-robust features in the time domain. Thus we turn our attention to the frequency domain. Moreover, some previous studies on robust learning have already revealed that robust and non-robust features are often deeply related to the frequency domain (Wang et al., 2020; Bernhard et al., 2021; Maiya et al., 2021).

Amplitude-based selection. To identify the non-robust frequencies, a straightforward idea is to test the robustness of each individual frequency and select the non-robust ones. Nevertheless, this may incur a large computational cost since the number of frequencies is large (e.g., if the input image is $64 \times 64$, the number of frequencies is also $64 \times 64 \approx 4 \times 10^{3}$). We propose an easy-to-implement selection idea based on the amplitudes, since the amplitudes can be directly obtained via the Fourier transformation with low complexity. According to previous research (Ilyas et al., 2019; Benz et al., 2021; Springer et al., 2021), a feature can be regarded as "robust" if it cannot be easily manipulated by small perturbations. We observe that high-amplitude frequency features usually dominate the ground truth of an image. Figure 4 in our Appendix illustrates an example showing that, if we keep high-amplitude frequencies and remove low-amplitude ones, the image changes only slightly even after adding certain noise (i.e., we can still recognize the ground truth from the modified image). This observation suggests that high-amplitude frequency features are more strongly related to the semantic information of an image. So in our following approach, we maintain high-amplitude frequency features as "robust features", and select the frequencies with low amplitudes (by setting a threshold "$\tau$") to transform. Moreover, we can conveniently observe the performance change by varying the threshold $\tau$ in our experiments. Figure 1 illustrates the amplitude-based selection.


Figure 1: We use a $5 \times 5$ image as a toy example, where the intensity of the color indicates the magnitude of the amplitude. In our amplitude-based selection, we retain the high-amplitude frequencies (i.e., the darker regions) and perform data transformations on the low-amplitude frequencies (i.e., the white regions).
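
As a concrete illustration of the amplitude-based selection, the sketch below builds the mask of candidate non-robust frequencies with numpy. Reading the threshold $\tau$ as an amplitude quantile (consistent with the "20% changed" wording of Figure 4) and the function name are our assumptions.

```python
import numpy as np

def low_amplitude_mask(image, tau):
    """Boolean mask that is True on the frequencies whose amplitude lies
    in the lowest tau fraction of the 2-D Fourier spectrum; these are the
    candidate non-robust frequencies selected for transformation."""
    spectrum = np.fft.fft2(image)            # 2-D discrete Fourier transform
    amplitude = np.abs(spectrum)
    threshold = np.quantile(amplitude, tau)  # e.g. tau = 0.2
    return amplitude < threshold
```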
Following our frequency selection, we propose two transformation methods for promoting diversity by using the identified non-robust features. Our first approach follows a straightforward idea, which is simply to replace the non-robust features by random noise (due to the space limit, we leave the details to Appendix C). This method is very easy to implement in practice. Though it can achieve a certain degree of improvement upon previous ensemble training methods, the performance is still not very promising (as shown in our experiments). To further improve the effectiveness, we propose a more sophisticated approach called "targeted-attack transformation", which constructs a set of different "substitute" features through attacking the images to different targeted classes, and then uses them to replace the selected non-robust frequencies.

Targeted-attack transformation: We briefly explain our intuition first. It has been shown that adversarial attacks have the capability to manipulate non-robust features (Ilyas et al., 2019; Yang et al., 2020). In particular, a targeted attack, as introduced in Definition 2.2, aims at modifying non-robust features that are associated with a specific target label. For instance, let us consider a data point $(x,y)$ in the original dataset $\mathcal{X} \times \mathcal{Y}$; we set the target label as $y_{t}$ and obtain the corresponding adversarial example $x^{\prime}$ ($x^{\prime}$ contains the modified non-robust features that are associated with $y_{t}$). When training a model using $(x^{\prime}, y)$, this can intuitively be viewed as an "immunization" against the attack from $y$ to $y_{t}$; consequently, the chance of obtaining the wrong label $y_{t}$ for data with label $y$ decreases. In other words, it becomes more difficult to attack images with label $y$ towards $y_{t}$ than towards the other classes. We call the modified non-robust feature a "substitute" feature derived by the targeted attack.

Motivated by this observation, we can construct different transformations by using $k \times (k - 1)$ targeted attacks (since each label can be attacked towards the other $k - 1$ labels); these attacks yield different substitute features, and we use these features to replace the corresponding non-robust features in the original dataset (based on Hint (i) in Section 3); finally, the $M$ transformed datasets are obtained via an allocation algorithm, where each substitute feature is captured by at least $M / 2 + 1$ datasets (based on Hint (ii) in Section 3). Overall, due to the completeness of the $k \times (k - 1)$ targeted attacks, the $M$ sub-models trained on those datasets can guarantee the robustness of the final ensemble solution. We introduce some definitions for our transformation first.

Definition 4.1 (Strengthen a dataset) Let $y_{1} \neq y_{2} \in \mathcal{Y}$. If a given training dataset $P$ contains at least one adversarial example that has the original label $y_{1}$ but is misclassified as $y_{2}$, we say that $P$ has been strengthened by the attack direction from $y_{1}$ to $y_{2}$ ("$\overrightarrow{y_{1}y_{2}}$-direction" for short).

In other words, if $P$ is not strengthened in the $\overrightarrow{y_1y_2}$-direction, the model trained on $P$ is more likely to be fragile to targeted attacks from $y_1$ to $y_2$. Also, the dataset $P$ may not have been strengthened in multiple different directions. So we define its "weakness set" $\mathcal{W} = \{\overrightarrow{y_1y_2} \mid 1 \leq y_1, y_2 \leq k, y_1 \neq y_2, \text{ and } P \text{ has not been strengthened in the } \overrightarrow{y_1y_2}\text{-direction}\}$.

Definition 4.2 (Diversity of weakness sets) Given $M$ datasets $\{P_1, P_2, \dots, P_M\}$ with their corresponding weakness sets $\{\mathcal{W}_1, \mathcal{W}_2, \dots, \mathcal{W}_M\}$, we define their diversity as

$$
\mathbf{Div}(P_1, P_2, \dots, P_M) = 1 - \frac{|\mathcal{W}_1 \cap \mathcal{W}_2 \cap \cdots \cap \mathcal{W}_M|}{\max\{|\mathcal{W}_1|, |\mathcal{W}_2|, \dots, |\mathcal{W}_M|\}}.
$$
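
With attack directions encoded as ordered label pairs, Definition 4.2 can be computed directly; the short sketch below (function name ours) assumes non-empty weakness sets.

```python
def diversity(weakness_sets):
    """Div(P_1, ..., P_M) from Definition 4.2; each weakness set is a
    Python set of (y1, y2) attack directions."""
    common = set.intersection(*weakness_sets)     # directions shared by all sets
    largest = max(len(w) for w in weakness_sets)  # normalizer
    return 1.0 - len(common) / largest

# Two weakness sets without a common direction reach the maximum diversity 1.0.
print(diversity([{(1, 2), (2, 1)}, {(1, 3)}]))  # -> 1.0
```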
It is easy to see that the higher the value of $\mathbf{Div}(P_1, P_2, \dots, P_M)$, the more diverse the corresponding weakness sets. A higher diversity suggests that the vulnerabilities of the $M$ sub-models trained on those datasets are more likely to be different. To achieve good performance in terms of both accuracy and robustness, we need to take the diversity function "Div" into account when designing the transformations. The basic principle is:

On the one hand, our transformed datasets should have a sufficiently large number of diverse substitute features, so that one adversarial attack cannot easily capture more than half of the $M$ sub-models. On the other hand, the datasets should also maintain the major information of the original input as much as possible, since otherwise the clean accuracy may decline due to the added substitute features.
To provide an appropriate trade-off, we propose the following constrained optimization objective: let $\mathbb{C}$ be the set of all the $\binom{M}{\lceil M/2 \rceil}$ combinations of $\lceil M/2 \rceil$ -size subsets from $\{1, 2, \dots, M\}$ , and then
$$
\max_{P_1, P_2, \dots, P_M} \quad \min\left\{ |\mathcal{W}_1|, |\mathcal{W}_2|, \dots, |\mathcal{W}_M| \right\} \tag{10}
$$

$$
\mathrm{s.t.} \quad \forall\, \{i_1, i_2, \dots, i_{\lceil M/2 \rceil}\} \in \mathbb{C}, \quad \mathbf{Div}\left(P_{i_1}, P_{i_2}, \dots, P_{i_{\lceil M/2 \rceil}}\right) = 1. \tag{11}
$$

We maximize the objective function of (10) because we want to minimize the modification degree for each transformed dataset. Intuitively, a large weakness set indicates that the corresponding dataset is not changed significantly by the transformation, and thus the clean accuracy is likely to be well preserved. The constraint (11) guarantees that any $\lceil M/2\rceil$ datasets have the intersection $\mathcal{W}_{i_1} \cap \mathcal{W}_{i_2} \cap \dots \cap \mathcal{W}_{i_{\lceil M/2 \rceil}} = \emptyset$ , that is, they do not share any common direction. Consequently, the ensemble solution should be robust to any attack direction. To achieve this twofold goal, we design an efficient allocation strategy together with an attack-guided transformation on the training data. Specifically, the procedure consists of the following two stages.
Stage (1): allocating the weakness sets to the sub-models. For each $\overrightarrow{y_1y_2}$-direction, $1 \leq y_1 \neq y_2 \leq k$, there are at most $\lceil \frac{M}{2} \rceil - 1$ sets that contain this direction (due to the constraint (11)), so the sum $\sum_{1 \leq i \leq M} |\mathcal{W}_i|$ is no larger than $k(k - 1) \cdot (\lceil \frac{M}{2} \rceil - 1)$. Therefore, by the pigeonhole principle, the maximum value of Eq. (10) is no larger than

$$
k(k - 1) \cdot \left( \left\lceil \frac{M}{2} \right\rceil - 1 \right) \Big/ M. \tag{12}
$$

We assign the total $k(k - 1) \cdot (\lceil \frac{M}{2}\rceil -1)$ directions (each direction is duplicated into $\lceil \frac{M}{2}\rceil -1$ copies) to the $M$ sets in a round-robin way, where the number of directions assigned to each set is no larger than the upper bound (12). Please refer to Figure 2 for an example.


Figure 2: Assign the attack directions to five sub-models for a three-class classification task.
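
A minimal sketch of this round-robin dealing is given below; the function name, the 0-indexed labels, and the exact dealing order are our assumptions (the paper's concrete assignment is the one shown in Figure 2). For $k = 3$ and $M = 5$, each of the 6 directions is duplicated $\lceil 5/2 \rceil - 1 = 2$ times, and the 12 copies are dealt to the 5 weakness sets.

```python
import math
from itertools import product

def allocate_weakness_sets(k, M):
    """Stage (1) sketch: deal ceil(M/2)-1 copies of each of the k*(k-1)
    attack directions to M weakness sets in a round-robin fashion, so no
    direction ends up in ceil(M/2) or more sets (constraint (11))."""
    copies = math.ceil(M / 2) - 1
    weakness_sets = [set() for _ in range(M)]
    m = 0
    for y1, y2 in product(range(k), repeat=2):  # labels are 0-indexed here
        if y1 == y2:
            continue
        for _ in range(copies):
            # advance past sets that already hold this direction, so the
            # copies of one direction land in distinct weakness sets
            while (y1, y2) in weakness_sets[m % M]:
                m += 1
            weakness_sets[m % M].add((y1, y2))
            m += 1
    return weakness_sets

# Example matching Figure 2: three classes, five sub-models.
for w in allocate_weakness_sets(k=3, M=5):
    print(sorted(w))
```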
Stage (2): constructing the new datasets. Following the allocation, we transform the original dataset, denoted by $P_{\mathrm{ori}}$, to align with the assigned weakness sets of the $M$ sub-models. Using the same notation as Definition 4.2, we denote the waiting-to-construct dataset for the $m$-th sub-model as $P_{m}$ (initialized to $\emptyset$), $1 \leq m \leq M$. First, we divide $P_{\mathrm{ori}}$ into $k$ subsets $C_1, C_2, \dots, C_k$, where each $C_j$ corresponds to the label $j$, for $1 \leq j \leq k$; further, each $C_j$ is equally partitioned into $k - 1$ disjoint parts $\{C_{j,1}, C_{j,2}, \dots, C_{j,k - 1}\}$ at random. For each data point $(x,j)$ in $C_{j,i}$, we attack it from $j$ to $h$ (where $h = (i + j) \bmod k$) to obtain the adversarial perturbation; then we only substitute the low-amplitude frequencies of $x$ with the perturbation, and the other frequencies (whose amplitudes are higher than the aforementioned threshold $\tau$) remain unchanged. We denote the new dataset as $C_{j,i}'$. Finally, we add $C_{j,i}'$ to $P_{m}$ if the $\overrightarrow{jh}$-direction is not in the weakness set $\mathcal{W}_m$. From the construction method of the weakness sets, we know that the $\overrightarrow{jh}$-direction can appear in at most $\lceil \frac{M}{2} \rceil - 1$ weakness sets. So, the set $C_{j,i}'$ can be added to at least $\lceil \frac{M}{2} \rceil$ different $P_m$'s. Consequently, the completeness of defending against the $k(k - 1)$ attack directions is guaranteed, i.e., the constraint (11) is satisfied. Figure 3 shows the schematic diagram of the construction process, and the full details are shown in Appendix D.


Figure 3: A schematic diagram of the construction process. In the allocation stage, each $C_{j,i}^{\prime}$ is added to $P_{m}$ if the $\overrightarrow{jh}$-direction is not in the weakness set $\mathcal{W}_{m}$, where $h = (i + j) \bmod k$.
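
The dataset-assembly step of Stage (2) can be sketched as follows; `parts`, `transform`, and the 0-indexed labels are our notational assumptions (`transform` stands for the targeted attack towards $h$ followed by the low-amplitude frequency substitution).

```python
def build_datasets(parts, weakness_sets, k, M, transform):
    """Stage (2) sketch: parts[j][i] holds C_{j,i} (true label j, part
    index i) as a list of examples. C'_{j,i} joins every P_m whose
    weakness set does not contain the attacked direction (j, h)."""
    P = [[] for _ in range(M)]
    for j in range(k):
        for i in range(1, k):
            h = (i + j) % k  # target label; h != j since 1 <= i <= k-1
            c_prime = transform(parts[j][i], h)
            for m in range(M):
                if (j, h) not in weakness_sets[m]:
                    P[m].extend(c_prime)
    return P
```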
Remark 4.3 We are aware of some previous robust learning approaches that also depend on data modification (Allen-Zhu & Li, 2022; Tsipras et al., 2018). But those approaches usually tend to completely eliminate non-robust features. Our method is quite different: the goal is to leverage the carefully selected non-robust features to weaken the transferability among sub-models. For each sub-model, we only modify the non-robust features corresponding to certain directions, rather than all non-robust features, and therefore the modification has a relatively lower impact on clean accuracy. Moreover, we partition each class $C_j$ into $k - 1$ subsets $\{C_{j,1}, C_{j,2}, \dots, C_{j,k - 1}\}$, with each subset being attacked towards one specified class. This step eliminates the need to attack each data point towards all classes, thereby reducing the computational complexity of constructing the new datasets.

# 5 EXPERIMENTS
We conduct our experiments on the widely used image datasets CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), and Tiny-ImageNet-200 (Deng et al., 2009). As for the baselines, we reproduce the existing ensemble models including ADP (Pang et al., 2019), GAL (Kariyappa & Qureshi, 2019), DVERGE (Yang et al., 2020), and TRS (Yang et al., 2021), with their released code and recommended hyperparameter settings. As for our approach, "FDT-random" and "FDT-target" respectively denote the methods utilizing the random-noise-based transformation and the targeted-attack transformation; "FDT-hybrid" represents the method that combines both, that is, we set two frequency selection thresholds $\tau_{1}$ and $\tau_{2}$ ($\tau_{1} < \tau_{2}$), and perform the random and targeted-attack transformations on the frequencies below $\tau_{1}$ and the frequencies between $\tau_{1}$ and $\tau_{2}$, respectively (due to the space limit, more details are shown in Appendix E). Our code will be available at https://github.com/ideven123/FDT.

We train each sub-model based on ResNet-20 (He et al., 2016) and use the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001 for 200 epochs. To further test the performance on neural networks of larger scale, we also use WideResNet-28-10 (Zagoruyko & Komodakis, 2016) to train the sub-models, and the results are placed in our supplement. All the experiments are implemented with PyTorch (Paszke et al., 2017) on a single NVIDIA GeForce RTX 3090 with 24GB of memory and 1TB of storage. We assess the performance of our models through 5 repeated runs and compute error bars: utilizing the numpy library, we calculate the standard deviation and subsequently derive the standard error of the mean (SEM).
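
For completeness, the reported error bars can be reproduced with a few lines of numpy; the function name and the use of the sample standard deviation (`ddof=1`) are our assumptions.

```python
import numpy as np

def standard_error_of_mean(run_accuracies):
    """SEM over repeated runs: sample standard deviation / sqrt(#runs)."""
    runs = np.asarray(run_accuracies, dtype=float)
    return runs.std(ddof=1) / np.sqrt(len(runs))

# Example with 5 repeated runs of a hypothetical robust accuracy:
print(standard_error_of_mean([58.1, 57.9, 58.3, 58.0, 57.9]))
```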
Varying the number of sub-models. We take the ResNet-20 model trained on CIFAR-10 as an example and test the performance of FDT with different numbers of sub-models in the ensemble. In this experiment, we set the frequency selection thresholds $\tau_{1} = 0.2$ and $\tau_{2} = 0.8$. Then we evaluate the performance of FDT-hybrid under the FGSM (Madry et al., 2018), PGD (Carlini et al., 2019), and AutoAttack (AA) (Croce & Hein, 2020) attack methods with $l_{\infty}$ perturbations of size $\epsilon = 0.02$. The results in Table 1 indicate that our clean accuracy changes relatively little as the number of sub-models increases, while the robust accuracy improves substantially from 3 to 20 sub-models.

Table 1: Performance of FDT-hybrid with different sub-model numbers on CIFAR-10.
<table><tr><td>Sub-model numbers</td><td>3</td><td>5</td><td>8</td><td>12</td><td>20</td></tr><tr><td>Clean accuracy</td><td>90.20 ± 0.03</td><td>90.75±0.03</td><td>91.35±0.05</td><td>91.51 ± 0.06</td><td>91.86±0.07</td></tr><tr><td>FGSM (ε=0.02)</td><td>58.04 ± 0.13</td><td>61.66± 0.15</td><td>62.41 ± 0.11</td><td>63.96 ± 0.12</td><td>64.27 ± 0.14</td></tr><tr><td>PGD (ε=0.02)</td><td>20.01 ± 0.04</td><td>26.10± 0.07</td><td>29.20± 0.05</td><td>29.78 ± 0.08</td><td>29.71± 0.07</td></tr><tr><td>AutoAttack (ε=0.02)</td><td>19.42± 0.04</td><td>25.37± 0.05</td><td>27.33± 0.04</td><td>28.12± 0.07</td><td>28.92± 0.07</td></tr></table>
Results for white-box attack. To maintain consistency with the baseline ensemble methods from the literature, we ensemble three ResNet-20 sub-models here and evaluate the robust accuracy using $\epsilon = 0.01$ and $\epsilon = 0.02$. In this experiment, we set the frequency selection thresholds $\tau_{1} = 0.2$ and $\tau_{2} = 0.8$. In the white-box attack setting, the attacker has full knowledge of the models, including model parameters, architecture, and the ensemble training strategy. To evaluate the adversarial robustness of the ensemble, we conduct the following white-box attacks: PGD, FGSM, BIM (Goodfellow et al., 2015), MIM (Dong et al., 2018), C&W (Carlini & Wagner, 2017) and AutoAttack (AA). The attacks are implemented using AdverTorch (Ding et al., 2019). We take the robust and clean accuracies, and the average training time per epoch, as the evaluation metrics.

Table 2 presents the robust accuracies of the baseline ensemble methods on CIFAR-10 and CIFAR-100. In addition, we show the average training time per epoch of the different ensemble methods. The experimental results suggest that our FDT-random method can achieve higher adversarial robustness than the other baselines on both CIFAR-10 and CIFAR-100, with a training time only higher than ADP (and much lower than the other baselines).

Table 2: Robust and clean accuracy (%) and average training time of different ensemble methods against white-box attacks on CIFAR-10 and CIFAR-100. "$\epsilon$" and "$\lambda$" stand for the $l_{\infty}$ norm of the adversarial perturbation and the coefficient of the C&W attack, respectively. The TRS results are taken from the original paper (Yang et al., 2021), with "-" indicating results not provided.

<table><tr><td>CIFAR-10</td><td>ADP</td><td>GAL</td><td>DVERGE</td><td>TRS</td><td>FDT-random</td><td>FDT-target</td><td>FDT-hybrid</td></tr><tr><td>Clean accuracy</td><td>91.84</td><td>91.81</td><td>91.37</td><td>-</td><td>89.88±0.02</td><td>90.16±0.04</td><td>90.20±0.03</td></tr><tr><td>FGSM (ε=0.01)</td><td>59.48</td><td>44.97</td><td>70.05</td><td>-</td><td>66.96±0.12</td><td>72.88±0.12</td><td>72.24±0.12</td></tr><tr><td>FGSM (ε=0.02)</td><td>53.38</td><td>30.58</td><td>56.33</td><td>44.2</td><td>46.28±0.10</td><td>55.54±0.09</td><td>58.04±0.13</td></tr><tr><td>PGD (ε=0.01)</td><td>14.45</td><td>1.35</td><td>40.55</td><td>50.5</td><td>45.42±0.09</td><td>46.58±0.07</td><td>48.48±0.09</td></tr><tr><td>PGD (ε=0.02)</td><td>2.95</td><td>0.34</td><td>11.49</td><td>15.1</td><td>12.24±0.03</td><td>15.08±0.05</td><td>20.01±0.04</td></tr><tr><td>BIM (ε=0.01)</td><td>14.15</td><td>1.37</td><td>40.51</td><td>50.6</td><td>45.24±0.03</td><td>46.86±0.04</td><td>48.57±0.05</td></tr><tr><td>BIM (ε=0.02)</td><td>3.01</td><td>0.27</td><td>10.65</td><td>15.8</td><td>11.68±0.03</td><td>14.86±0.03</td><td>16.63±0.02</td></tr><tr><td>MIM (ε=0.01)</td><td>20.38</td><td>2.05</td><td>44.74</td><td>51.5</td><td>47.73±0.05</td><td>49.97±0.06</td><td>51.50±0.07</td></tr><tr><td>MIM (ε=0.02)</td><td>5.11</td><td>0.69</td><td>14.76</td><td>17.2</td><td>15.14±0.04</td><td>18.27±0.02</td><td>20.09±0.03</td></tr><tr><td>AA (ε=0.01)</td><td>1.80</td><td>0.00</td><td>43.34</td><td>-</td><td>46.09±0.09</td><td>48.83±0.08</td><td>51.56±0.08</td></tr><tr><td>AA (ε=0.02)</td><td>0.00</td><td>0.00</td><td>13.72</td><td>-</td><td>9.38±0.05</td><td>15.70±0.05</td><td>19.42±0.04</td></tr><tr><td>C&W (λ=0.1)</td><td>20.96</td><td>31.57</td><td>52.35</td><td>58.1</td><td>45.01±0.10</td><td>55.48±0.10</td><td>56.08±0.11</td></tr><tr><td>CIFAR-100</td><td>ADP</td><td>GAL</td><td>DVERGE</td><td>TRS</td><td>FDT-random</td><td>FDT-target</td><td>FDT-hybrid</td></tr><tr><td>Clean accuracy</td><td>67.04</td><td>67.70</td><td>66.16</td><td>-</td><td>66.29±0.11</td><td>67.64±0.08</td><td>66.70±0.09</td></tr><tr><td>FGSM (ε=0.01)</td><td>17.82</td><td>16.89</td><td>33.94</td><td>-</td><td>35.42±0.12</td><td>40.46±0.14</td><td>39.85±0.14</td></tr><tr><td>FGSM (ε=0.02)</td><td>10.53</td><td>7.80</td><td>26.61</td><td>19.3</td><td>22.40±0.05</td><td>32.30±0.06</td><td>30.27±0.08</td></tr><tr><td>PGD (ε=0.01)</td><td>0.80</td><td>0.11</td><td>14.62</td><td>23.0</td><td>21.54±0.06</td><td>22.19±0.05</td><td>24.93±0.05</td></tr><tr><td>PGD (ε=0.02)</td><td>0.01</td><td>0.02</td><td>4.25</td><td>5.3</td><td>4.84±0.03</td><td>7.27±0.02</td><td>8.63±0.03</td></tr><tr><td>BIM (ε=0.01)</td><td>0.68</td><td>0.23</td><td>14.85</td><td>22.9</td><td>21.10±0.09</td><td>22.39±0.07</td><td>24.35±0.06</td></tr><tr><td>BIM (ε=0.02)</td><td>0.02</td><td>0.0</td><td>4.07</td><td>5.4</td><td>4.80±0.04</td><td>6.80±0.05</td><td>8.40±0.05</td></tr><tr><td>MIM (ε=0.01)</td><td>0.78</td><td>0.12</td><td>16.82</td><td>23.4</td><td>23.14±0.06</td><td>24.68±0.10</td><td>27.09±0.09</td></tr><tr><td>MIM (ε=0.02)</td><td>0.01</td><td>0.02</td><td>5.31</td><td>6.2</td><td>6.47±0.03</td><td>8.87±0.04</td><td>10.19±0.05</td></tr><tr><td>AA (ε=0.01)</td><td>0.01</td><td>0.00</td><td>11.23</td><td>-</td><td>16.02±0.09</td><td>16.03±0.09</td><td>16.41±0.12</td></tr><tr><td>AA (ε=0.02)</td><td>0.00</td><td>0.00</td><td>2.72</td><td>-</td><td>3.12±0.04</td><td>4.54±0.05</td><td>5.47±0.07</td></tr><tr><td>C&W (λ=0.1)</td><td>0.74</td><td>3.70</td><td>10.68</td><td>26.9</td><td>25.07±0.10</td><td>29.43±0.09</td><td>30.66±0.13</td></tr></table>
<table><tr><td>Time (s)</td><td>ADP</td><td>GAL</td><td>DVERGE</td><td>TRS</td><td>FDT-random</td><td>FDT-target</td><td>FDT-hybrid</td></tr><tr><td>CIFAR-10</td><td>30.15</td><td>69.92</td><td>134.33</td><td>350.42</td><td>37.04</td><td>108.22</td><td>114.23</td></tr><tr><td>CIFAR-100</td><td>30.34</td><td>69.71</td><td>129.25</td><td>344.92</td><td>37.12</td><td>108.43</td><td>113.87</td></tr></table>
Furthermore, the FDT-hybrid ensemble method achieves even better robustness than FDT-random, though its running time is higher since it needs to perform the targeted-attack transformation.

Summary of other experimental results placed in Appendix F. We also conduct experiments to examine the performance of FDT under black-box attacks, and assess the transferability of adversarial examples across our sub-models. The results indicate the competitive robustness of our method in defending against black-box attacks. Then, we evaluate the trade-off between clean accuracy and robust accuracy by varying the frequency selection threshold $\tau$. The result shows that the ensemble model has lower clean accuracy and higher robust accuracy as $\tau$ increases. Moreover, we include some ablation studies on datasets and model architectures. These experiments demonstrate that our method performs the best among the ensemble-based baseline methods.

# 6 CONCLUSION AND FUTURE WORK
In this paper, we present a novel data transformation approach to improve the robustness of ensemble models against adversarial attacks. By leveraging frequency-based features and strategically allocating adversarial examples, we demonstrate the effectiveness of our method in enhancing adversarial robustness while maintaining high accuracy on clean data. As for future work, one can consider other types of transformation methods (e.g., beyond using frequency) to improve ensemble robustness. It is also interesting to consider more complicated scenarios for ensemble training, such as federated learning with privacy concerns.

# ACKNOWLEDGMENTS
The authors would like to thank the reviewers for their constructive comments and suggestions. This work was partially supported by the National Natural Science Foundation of China (No. 62272432 and No. 62432016), the National Key Research and Development Program of China (No. 2021YFA1000900), and the Natural Science Foundation of Anhui Province (No. 2208085MF163).
# REFERENCES
Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pp. 977-988. IEEE, 2022.

MaungMaung AprilPyone and Hitoshi Kiya. Block-wise image transformation with secret key for adversarially robust defense. IEEE Transactions on Information Forensics and Security, 16:2709-2723, 2021.

Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, pp. 274-283. PMLR, 2018.

Philipp Benz, Chaoning Zhang, and In So Kweon. Batch normalization increases adversarial vulnerability and decreases adversarial transferability: A non-robust feature perspective. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7818-7827, 2021.

Rémi Bernhard, Pierre-Alain Moëllic, Martial Mermillod, Yannick Bourrier, Romain Cohendet, Miguel Solinas, and Marina Reyboz. Impact of spatial frequency based constraints on adversarial robustness. In 2021 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2021.

Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017.

Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, and Alexey Kurakin. On evaluating adversarial robustness. arXiv preprint arXiv:1902.06705, 2019.

Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. Advances in Neural Information Processing Systems, 32, 2019.

Francesco Croce and Matthias Hein. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning, pp. 2206-2216. PMLR, 2020.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.

Gavin Weiguang Ding, Luyu Wang, and Xiaomeng Jin. AdverTorch v0.1: An adversarial robustness toolbox based on PyTorch. arXiv preprint arXiv:1902.07623, 2019.

Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185-9193, 2018.

Xitong Gao, Cheng-Zhong Xu, et al. MORA: Improving ensemble robustness evaluation with model reweighing attack. Advances in Neural Information Processing Systems, 35:26955-26965, 2022.

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR 2015, 2015.

Ian J. Goodfellow, Yoshua Bengio, and Aaron C. Courville. Deep Learning. Adaptive Computation and Machine Learning. MIT Press, 2016. ISBN 978-0-262-03561-3.

Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A Mann. Improving robustness using generated data. Advances in Neural Information Processing Systems, 34:4218-4233, 2021.

Chuan Guo, Mayank Rana, Moustapha Cissé, and Laurens van der Maaten. Countering adversarial images using input transformations. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.

Douglas Heaven. Why deep-learning AIs are so easy to fool. Nature, 574(7777):163-166, 2019.

Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. Advances in Neural Information Processing Systems, 32, 2019.

Sanjay Kariyappa and Moinuddin K Qureshi. Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981, 2019.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.

Shishira R. Maiya, Max Ehrlich, Vatsal Agarwal, Ser-Nam Lim, Tom Goldstein, and Abhinav Shrivastava. A frequency perspective of adversarial robustness. CoRR, abs/2111.00861, 2021.

AKM Iqtidar Newaz, Nur Imtiazul Haque, Amit Kumar Sikder, Mohammad Ashiqur Rahman, and A Selcuk Uluagac. Adversarial attacks to machine learning-based smart healthcare systems. In GLOBECOM 2020 - 2020 IEEE Global Communications Conference, pp. 1-6. IEEE, 2020.

Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning, pp. 4970-4979. PMLR, 2019.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS 2017 Autodiff Workshop, 2017.

Rahul Rade and Seyed-Mohsen Moosavi-Dezfooli. Helper-based adversarial training: Reducing excessive margin to achieve a better accuracy vs. robustness trade-off. In ICML 2021 Workshop on Adversarial Machine Learning, 2021.

Edward Raff, Jared Sylvester, Steven Forsyth, and Mark McLean. Barrage of random transforms for adversarially robust defense. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6528-6537, 2019.

Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair, Alessandro Biondi, and Giorgio Buttazzo. On the real-world adversarial robustness of real-time semantic segmentation models for autonomous driving. IEEE Transactions on Neural Networks and Learning Systems, pp. 1-15, 2023. doi: 10.1109/TNNLS.2023.3314512.

Andrei A. Rusu, Dan Andrei Calian, Sven Gowal, and Raia Hadsell. Hindering adversarial attacks with implicit neural representations. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 18910-18934. PMLR, 2022.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.

James C Spall. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Transactions on Automatic Control, 37(3):332-341, 1992.

Jacob Springer, Melanie Mitchell, and Garrett Kenyon. A little robustness goes a long way: Leveraging robust features for targeted transfer attacks. Advances in Neural Information Processing Systems, 34:9759-9773, 2021.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, 2014.

Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations, 2018.

Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In International Conference on Learning Representations, 2018.

Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, and Shuicheng Yan. Better diffusion models further improve adversarial training. In Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (eds.), International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pp. 36246-36263. PMLR, 2023.

Zifan Wang, Yilin Yang, Ankit Shrivastava, Varun Rawal, and Zihao Ding. Towards frequency-based explanation for robust CNN. arXiv preprint arXiv:2005.03141, 2020.

Futa Waseda, Sosuke Nishikawa, Trung-Nghia Le, Huy H Nguyen, and Isao Echizen. Closer look at the transferability of adversarial examples: How they fool different models differently. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 1360-1368, 2023.

Sven-Ake Wegner. Lecture notes on high-dimensional data. arXiv preprint arXiv:2101.05841, 2021.

Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein, and Furong Huang. Exploring and exploiting decision boundary dynamics for adversarial robustness. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023, 2023.

Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, and Hai Li. DVERGE: Diversifying vulnerabilities for enhanced robust generation of ensembles. Advances in Neural Information Processing Systems, 33:5505-5515, 2020.

Ruijie Yang, Yuanfang Guo, Junfu Wang, Jiantao Zhou, and Yunhong Wang. Common knowledge learning for generating transferable adversarial examples. Frontiers Comput. Sci., 19(10):1910359, 2025. doi: 10.1007/S11704-024-40533-4.

Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin Rubinstein, Ce Zhang, and Bo Li. TRS: Transferability reduced ensemble via promoting gradient diversity and model smoothness. Advances in Neural Information Processing Systems, 34:17642-17655, 2021.

Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, and Pinar Duygulu. HybridAugment++: Unified frequency spectra perturbations for model robustness. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5718-5728, 2023.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016.

Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pp. 7472-7482. PMLR, 2019.

Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, and Yong Yang. Transferable adversarial perturbations. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 452-467, 2018.

Yi Zhu, Chenglin Miao, Tianhang Zheng, Foad Hajiaghajani, Lu Su, and Chunming Qiao. Can we use arbitrary objects to attack LiDAR perception in autonomous driving? In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 1945-1960, 2021.

# A OMITTED PROOFS
# A.1 PROOF OF INEQUALITY (6)
$$
\begin{aligned} \sum_{y_t \in \mathcal{Y}} \operatorname{Vr}(F_{\mathrm{E}}, y_t) &= \sum_{y_t \in \mathcal{Y}} \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_t \right\} \right] \\ &= \sum_{y_t \in \mathcal{Y}} \sum_{(x, y) \in \mathcal{D}} p_{(x, y)} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_t \right\} \right]. \end{aligned}
$$

Then, we interchange the order of summation, and so the above equation is equal to
$$
\begin{aligned} \sum_{(x, y) \in \mathcal{D}} p_{(x, y)} \sum_{y_t \in \mathcal{Y}} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_t \right\} \right] = \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \sum_{y_t \in \mathcal{Y}} \mathbb{I}\big\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_t \big\} \Big]. \end{aligned}
$$

For each $(x,y)$ , without loss of generality, let $F_{\mathrm{E}}(\mathcal{A}(x)) = y_0$ . For $y_t \neq y_0$ , $\mathbb{I}\big\{F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_t\big\} = 0$ . For $y_t = y_0$ , $\mathbb{I}\big\{F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y_t\big\} = \mathbb{I}\big\{F_{\mathrm{E}}(x) = y\big\}$ . So the above equation is equal to
$$
\begin{aligned} & \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \right\} \right] \\ &= \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge \left( F_{\mathrm{E}}(\mathcal{A}(x)) \neq y \vee F_{\mathrm{E}}(\mathcal{A}(x)) = y \right) \right\} \right] \\ &= \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) \neq y \right\} + \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y \right\} \right]. \end{aligned}
$$

We can split the indicator $\mathbb{I}\{\cdot\}$ because the events $F_{\mathrm{E}}(\mathcal{A}(x)) \neq y$ and $F_{\mathrm{E}}(\mathcal{A}(x)) = y$ are mutually exclusive. Then, the above equation is equal to

$$
\begin{aligned} & \operatorname{Vr}(F_{\mathrm{E}}) + \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[ \mathbb{I}\left\{ F_{\mathrm{E}}(x) = y \wedge F_{\mathrm{E}}(\mathcal{A}(x)) = y \right\} \right] \\ &\geq \operatorname{Vr}(F_{\mathrm{E}}). \end{aligned}
$$

Overall, we obtain the inequality (6): $\sum_{y_t\in \mathcal{Y}}\mathrm{Vr}(F_{\mathrm{E}},y_t)\geq \mathrm{Vr}(F_{\mathrm{E}})$.

# B FREQUENCY SELECTION
Figure 4 illustrates an example showing that, if we keep high-amplitude frequencies and remove low-amplitude ones, the image changes only slightly even after adding certain noise (i.e., we can still recognize the ground truth from the modified image). On the other hand, if we keep the low-amplitude frequencies only, the semantic information is almost entirely lost. This observation suggests that high-amplitude frequency features are more strongly related to the semantic information of an image.

# C RANDOM NOISE BASED TRANSFORMATION
Random-noise-based transformation: This approach substitutes the identified non-robust frequencies with Gaussian noise. For an $N \times N$ image, we take the non-robust frequencies based on the pre-specified threshold $\tau$, and replace them with a random vector for each sub-model in our experiment. In particular, to further increase the randomness, we perform this transformation at each epoch of the training stage. If we select the top $s$ non-robust frequencies, the overall dimensionality of the injected random features is $s \times E$ (we concatenate those $s$-dimensional features together), where $E$ is the number of epochs. For example, if $N = 32$, $s = N^2 / 2$, and $E = 200$, the overall dimensionality can be as large as $10^5$. Because these $M$ features are random and high-dimensional, they are very likely to be nearly orthogonal to each other (this phenomenon in high-dimensional geometry can be proved via the central limit theorem (Wegner, 2021)). As a consequence, they tend to yield diverse training results for the sub-models.


Figure 4: The first and second rows are the images obtained by adding random noise to the high-amplitude and low-amplitude frequencies, respectively. "20% changed" for the first row means we remove the 20% lowest-amplitude frequencies and add small noise to the remaining high-amplitude frequencies. "20% changed" for the second row means we remove the 20% highest-amplitude frequencies and add small noise to the remaining low-amplitude frequencies. "50% changed" and "80% changed" follow the same procedure as "20% changed".

The implementation details are as follows. Given an image $x$, we perform the Fourier transform on $x$ and also on a generated Gaussian noise $n_0$. Then, we can obtain the low-amplitude and high-amplitude frequencies of $x$ by setting an amplitude threshold. Next, we generate two masks ($M_1$ and $M_2$) to select the high-amplitude and low-amplitude frequencies, respectively. We add the low-amplitude frequencies of $n_0$ (i.e., $M_2(n_0)$) to the high-amplitude frequencies of $x$ (i.e., $M_1(x)$), and obtain the transformation of $x$ (denoted as $\pi(x)$). Finally, we transform $\pi(x)$ back to the time domain by the inverse Fourier transform and train the model with $\pi(x)$.
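
A minimal numpy sketch of this $\pi(x)$ is given below; the function name `fdt_random` and the quantile reading of the amplitude threshold are our assumptions.

```python
import numpy as np

def fdt_random(x, tau, rng=None):
    """pi(x) for the random-noise-based FDT: keep the high-amplitude
    frequencies of x (mask M1) and take the remaining frequencies from a
    Gaussian noise image n0 (mask M2), then return to the time domain."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.fft2(x)
    N0 = np.fft.fft2(rng.standard_normal(x.shape))
    keep = np.abs(X) >= np.quantile(np.abs(X), tau)  # M1: robust frequencies
    return np.real(np.fft.ifft2(np.where(keep, X, N0)))
```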
# D ALGORITHM OF FDT
Algorithm 1 shows the overall framework of training an ensemble model with FDT. It illustrates that our data transformation is performed at each iteration.
Algorithm 1 Training ensemble model with FDT
```txt
Input: dataset $\mathcal{X}\times \mathcal{Y}$, the number of sub-models $M$, and the epoch number $E$
Output: sub-models $\beta_{1},\beta_{2},\dots ,\beta_{M}$
for $i = 1$ to $E$ do
    Run Targeted-attack Transformation and obtain $P_{1},P_{2},\dots ,P_{M}$
    for $j = 1$ to $M$ do
        train $\beta_{j}$ on $P_{j}$
    end for
end for
```
Algorithm 2 shows the details of the targeted-attack transformation on the whole dataset. For each specific image $x$, we obtain the targeted class according to the allocation scheme described in Stage (1). Then, we use a targeted PGD attack to obtain the adversarial example $x'$. After that, we perform the Fourier transform on $x$ and $x'$, and obtain the low-amplitude and high-amplitude frequencies of $x$ by setting an amplitude threshold. Next, we generate two masks ($M_1$ and $M_2$) to select the high-amplitude and low-amplitude frequencies, respectively. We add the low-amplitude frequencies of $x'$ (i.e., $M_2(x')$) to the high-amplitude frequencies of $x$ (i.e., $M_1(x)$), and obtain the transformation of $x$ (denoted as $\pi(x)$). Finally, we transform $\pi(x)$ back to the time domain by the inverse Fourier transform.
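
For reference, a minimal PyTorch sketch of the targeted $l_{\infty}$ PGD step is shown below; the function name and the step size `alpha` are our assumptions, and `model` would be, e.g., the pre-trained VGG-11 surrogate mentioned in Appendix E.

```python
import torch

def targeted_pgd(model, x, target, eps=0.02, alpha=0.005, steps=10):
    """Targeted l_inf PGD sketch: descend the loss w.r.t. the target label
    so the input is pushed towards the target class, projecting back into
    the eps-ball around x after every step (images assumed in [0, 1])."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # move towards the target
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv.detach()
```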
Algorithm 2 Targeted-attack Transformation
```txt
Input: dataset $P_{ori}$, the number of sub-models $M$, attack steps $s$, class number $k$
Output: transformed datasets $P_{1},P_{2},\dots ,P_{M}$
Divide the dataset $P_{ori}$ into $k$ parts $\{C_1,C_2,\dots ,C_k\}$ according to labels
Randomly partition each $C_j$ equally into $k - 1$ disjoint parts $\{C_{j,1},C_{j,2},\dots ,C_{j,k - 1}\}$
Initialize $P_{1},P_{2},\dots ,P_{M}$ with the empty set
$m\gets 0$
for $j = 1$ to $k$ do
    for $i = 1$ to $k - 1$ do
        $C_{j,i}^{\prime}\gets$ targeted attack examples computed on $C_{j,i}$ with target label $(i + j) \bmod k$, followed by the data transformation on each image
        for $t = 1$ to $\lceil \frac{M}{2}\rceil + 1$ do
            $m\gets (m + 1) \bmod M$
            Append $C_{j,i}^{\prime}$ to $P_{m}$
        end for
    end for
end for
```
# E IMPLEMENTATION DETAILS
In this section, we provide more experimental details. In our work, we utilize CIFAR-10 (Krizhevsky & Hinton, 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and Tiny-ImageNet-200 (Deng et al., 2009). In the testing process, the primary reason for selecting FGSM (Madry et al., 2018), PGD (Carlini et al., 2019), BIM (Goodfellow et al., 2015), MIM (Dong et al., 2018), and C&W (Carlini & Wagner, 2017) as attack methods is to keep consistent with the baseline methods from the literature. Further, we select AA (Croce & Hein, 2020) because it is also a popular attack method and more powerful than those base methods. To reduce the computational complexity of the targeted attacks, we leverage the transferability of adversarial examples and utilize a simple pre-trained network (VGG-11 (Simonyan & Zisserman, 2015)) for the targeted attacks.

Further, we introduce the implementations of "FDT-random", "FDT-target", and "FDT-hybrid" here. For "FDT-random", we perform the Fourier transform on $x$ and also on a randomly sampled standard Gaussian noise $n_0$. Then, we can obtain the low-amplitude and high-amplitude frequencies of $x$ by setting an amplitude threshold. Next, we generate two masks ($M_1$ and $M_2$) to select the high-amplitude and low-amplitude frequencies, respectively. We add the low-amplitude frequencies of $n_0$ (i.e., $M_2(n_0)$) to the high-amplitude frequencies of $x$ (i.e., $M_1(x)$), and obtain the transformation of $x$ (denoted as $\pi(x)$). Finally, we transform $\pi(x)$ back to the time domain by the inverse Fourier transform and train the model with $\pi(x)$. For "FDT-target", we obtain the targeted class according to the allocation scheme described in Stage (1). Then, we use a targeted PGD attack to obtain the adversarial example $x'$. After that, we perform the same steps as for FDT-random (substituting $x'$ for $n_0$). For FDT-hybrid, we set two frequency selection thresholds $\tau_1$ and $\tau_2$ ($\tau_1 < \tau_2$), and generate three masks to select the frequencies: $M_1$ for the high-amplitude frequencies (amplitude $> \tau_2$), $M_2$ for the middle part ($\tau_1 <$ amplitude $< \tau_2$), and $M_3$ for the low part (amplitude $< \tau_1$). Next, we combine $M_1(x)$, $M_2(x')$ and $M_3(n_0)$ to obtain the transformation $\pi(x)$.
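
The three-mask combination of FDT-hybrid can be sketched in a few lines; the function name and the quantile reading of $\tau_1$ and $\tau_2$ are our assumptions.

```python
import numpy as np

def fdt_hybrid(x, x_adv, noise, tau1, tau2):
    """pi(x) for FDT-hybrid: combine M1(x) (high amplitude), M2(x')
    (middle), and M3(n0) (low amplitude) in the frequency domain."""
    X, Xa, N0 = (np.fft.fft2(v) for v in (x, x_adv, noise))
    amp = np.abs(X)
    t1, t2 = np.quantile(amp, tau1), np.quantile(amp, tau2)
    out = np.where(amp >= t2, X,                 # M1: keep robust frequencies
                   np.where(amp >= t1, Xa, N0))  # M2: substitute; M3: noise
    return np.real(np.fft.ifft2(out))
```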
# F ADDITIONAL EXPERIMENTAL RESULTS
In this section, we provide more experimental results. First, we extend our experiments to SVHN, Tiny-ImageNet-200, and WideResNet-28-10 in Appendix F.1, where we also conduct ablation studies on the weakness set allocation method, the amplitude-based selection threshold, and the model architecture. Then, we evaluate the performance of FDT under black-box attacks on CIFAR-10 and CIFAR-100 in Appendix F.2. Next, we present the trade-off between clean accuracy and robust accuracy on CIFAR-100 using the FDT method in Appendix F.3; this sheds light on the effectiveness of FDT as the trade-off parameter changes. Additionally, in Appendix F.4, we compare the transferability across various sub-models with the baseline methods. Furthermore, we compare our method with more related methods in Appendix F.5.

# F.1 ABLATION STUDIES
In this section, we extend our experiments to additional datasets (SVHN and Tiny-ImageNet-200) and an additional architecture (WideResNet-28-10). We also conduct ablation studies on the weakness set allocation method, the amplitude-based selection threshold, and the model architecture.

Table 3 presents the performance of ensemble methods trained with ResNet-20 on SVHN against several widely used white-box attacks. The experimental results demonstrate that all ensemble models achieve comparable levels of clean accuracy. Specifically, the FDT approach exhibits better robust accuracy than the other methods. These observations highlight the effectiveness of FDT in achieving favorable clean accuracy and robustness of ensemble models.
Table 3: Robust Accuracy (%) of different ensemble methods against white-box attacks on SVHN. The $\epsilon$ and $\lambda$ stand for the $l_{\infty}$ norm of the adversarial perturbation and the coefficient of C&W attack respectively. The last column is the ensemble model trained with FDT-hybrid.
<table><tr><td>SVHN</td><td>ADP</td><td>GAL</td><td>DVERGE</td><td>TRS</td><td>FDT-hybrid</td></tr><tr><td>clean accuracy</td><td>96.83</td><td>94.66</td><td>96.28</td><td>94.52</td><td>96.73 ± 0.12</td></tr><tr><td>FGSM (ε=0.01)</td><td>84.38</td><td>80.2</td><td>85.6</td><td>72.87</td><td>90.13 ± 0.09</td></tr><tr><td>FGSM (ε=0.02)</td><td>78.08</td><td>41.5</td><td>81.4</td><td>53.9</td><td>86.78± 0.07</td></tr><tr><td>PGD (ε= 0.01)</td><td>51.01</td><td>50.1</td><td>53.31</td><td>54.43</td><td>59.42 ± 0.07</td></tr><tr><td>PGD (ε = 0.02)</td><td>17.74</td><td>8.24</td><td>17.42</td><td>18.86</td><td>22.74 ± 0.04</td></tr><tr><td>BIM (ε= 0.01)</td><td>54.38</td><td>47.73</td><td>52.08</td><td>53.71</td><td>57.91± 0.08</td></tr><tr><td>BIM (ε = 0.02)</td><td>21.26</td><td>8.1</td><td>14.58</td><td>18.05</td><td>20.23 ± 0.05</td></tr><tr><td>MIM (ε= 0.01)</td><td>61.24</td><td>51.96</td><td>58.51</td><td>56.32</td><td>62.14± 0.08</td></tr><tr><td>MIM (ε= 0.02)</td><td>24.84</td><td>5.14</td><td>23.22</td><td>21.95</td><td>25.37 ± 0.04</td></tr><tr><td>AA (ε= 0.01)</td><td>49.92</td><td>48.39</td><td>52.02</td><td>52.83</td><td>57.54± 0.09</td></tr><tr><td>AA (ε= 0.02)</td><td>16.13</td><td>6.90</td><td>16.95</td><td>17.48</td><td>20.12 ± 0.05</td></tr><tr><td>C&W (λ = 0.1)</td><td>55.81</td><td>49.94</td><td>66.82</td><td>52.74</td><td>72.14± 0.11</td></tr></table>
We also extend our experiment to sub-models trained with WideResNet-28-10 on CIFAR-10. Table 4 shows the performance of the models under various white-box attacks. The results indicate that FDT maintains good performance even on more complex network structures. We also evaluate the robustness of an ensemble of eight sub-models, with the results presented in Table 5.

Table 4: Robust Accuracy $(\%)$ of different ensemble methods against white-box attacks on CIFAR-10. The $\epsilon$ and $\lambda$ stand for the $l_{\infty}$ norm of the adversarial perturbation and the coefficient of C&W attack respectively. The architecture of sub-model is WRN-28-10.
<table><tr><td>CIFAR-10</td><td>ADP</td><td>GAL</td><td>DVERGE</td><td>FDT-hybrid</td></tr><tr><td>clean accuracy</td><td>92.99</td><td>82.14</td><td>94.32</td><td>94.18 ± 0.06</td></tr><tr><td>FGSM (ε=0.01)</td><td>60.04</td><td>44.94</td><td>71.01</td><td>80.64 ± 0.05</td></tr><tr><td>FGSM (ε=0.02)</td><td>51.69</td><td>36.83</td><td>50.43</td><td>60.09 ± 0.05</td></tr><tr><td>PGD (ε=0.01)</td><td>11.09</td><td>22.10</td><td>44.25</td><td>64.64 ± 0.07</td></tr><tr><td>PGD (ε=0.02)</td><td>2.54</td><td>5.06</td><td>13.27</td><td>26.0 ± 0.03</td></tr><tr><td>BIM (ε=0.01)</td><td>15.81</td><td>22.62</td><td>46.53</td><td>67.36 ± 0.10</td></tr><tr><td>BIM (ε=0.02)</td><td>4.50</td><td>5.43</td><td>17.38</td><td>32.36 ± 0.06</td></tr><tr><td>MIM (ε=0.01)</td><td>18.18</td><td>25.97</td><td>44.21</td><td>64.36 ± 0.08</td></tr><tr><td>MIM (ε=0.02)</td><td>4.72</td><td>7.81</td><td>12.83</td><td>25.64 ± 0.05</td></tr><tr><td>AA (ε=0.01)</td><td>9.38</td><td>19.34</td><td>43.23</td><td>63.45 ± 0.08</td></tr><tr><td>AA (ε=0.02)</td><td>1.17</td><td>3.93</td><td>12.49</td><td>25.23 ± 0.04</td></tr><tr><td>C&W (λ=0.1)</td><td>37.81</td><td>19.05</td><td>46.32</td><td>47.23 ± 0.10</td></tr></table>
Table 6 reports the results of ensemble methods trained with WideResNet-28-10 on Tiny-ImageNet-200, where we test the robustness of the different methods under widely used white-box attacks.

Table 5: Robust Accuracy (\%) of an ensemble of eight sub-models against white-box attacks on CIFAR-10. The $\epsilon$ and $\lambda$ stand for the $l_{\infty}$ norm of the adversarial perturbation and the coefficient of C&W attack respectively. The architecture of sub-model is WRN-28-10.
<table><tr><td>CIFAR-10</td><td>FDT-hybrid</td></tr><tr><td>clean accuracy</td><td>93.72 ± 0.11</td></tr><tr><td>FGSM (ε = 0.01)</td><td>86.31 ± 0.07</td></tr><tr><td>FGSM (ε = 0.02)</td><td>67.29 ± 0.06</td></tr><tr><td>PGD (ε = 0.01)</td><td>72.02 ± 0.07</td></tr><tr><td>PGD (ε = 0.02)</td><td>45.42 ± 0.05</td></tr><tr><td>BIM (ε = 0.01)</td><td>73.68 ± 0.10</td></tr><tr><td>BIM (ε = 0.02)</td><td>44.53 ± 0.06</td></tr><tr><td>MIM (ε = 0.01)</td><td>71.36 ± 0.06</td></tr><tr><td>MIM (ε = 0.02)</td><td>45.24 ± 0.06</td></tr><tr><td>AA (ε = 0.01)</td><td>70.45 ± 0.08</td></tr><tr><td>AA (ε = 0.02)</td><td>44.23 ± 0.07</td></tr><tr><td>C&W (λ = 0.1)</td><td>72.37 ± 0.11</td></tr></table>
Due to the high time complexity of TRS, we do not compare with it here. The experimental results show that all ensemble models achieve comparable levels of clean accuracy, while FDT-hybrid achieves better robust accuracy than the other methods.

Table 6: Robust Accuracy (%) of different ensemble methods against white-box attacks on Tiny-ImageNet-200. The $\epsilon$ and $\lambda$ stand for the $l_{\infty}$ norm of the adversarial perturbation and the coefficient of the C&W attack, respectively. The last column is the ensemble model trained with FDT-hybrid.
<table><tr><td>Tiny-ImageNet-200</td><td>ADP</td><td>GAL</td><td>DVERGE</td><td>FDT-hybrid</td></tr><tr><td>clean accuracy</td><td>49.88</td><td>45.7</td><td>51.46</td><td>64.21 ± 0.06</td></tr><tr><td>FGSM (ε = 0.01)</td><td>10.46</td><td>1.24</td><td>22.82</td><td>21.73 ± 0.04</td></tr><tr><td>FGSM (ε = 0.02)</td><td>4.38</td><td>0.59</td><td>18.42</td><td>19.28 ± 0.04</td></tr><tr><td>PGD (ε = 0.01)</td><td>0.02</td><td>0.02</td><td>3.6</td><td>4.76 ± 0.02</td></tr><tr><td>PGD (ε = 0.02)</td><td>0.02</td><td>0.01</td><td>0.34</td><td>0.45 ± 0.01</td></tr><tr><td>BIM (ε = 0.01)</td><td>0.07</td><td>0.02</td><td>3.35</td><td>4.81 ± 0.03</td></tr><tr><td>BIM (ε = 0.02)</td><td>0.03</td><td>0.01</td><td>0.28</td><td>0.32 ± 0.00</td></tr><tr><td>MIM (ε = 0.01)</td><td>0.11</td><td>0.02</td><td>4.36</td><td>6.13 ± 0.03</td></tr><tr><td>MIM (ε = 0.02)</td><td>0.03</td><td>0.01</td><td>0.41</td><td>0.48 ± 0.00</td></tr><tr><td>AA (ε = 0.01)</td><td>0</td><td>0</td><td>0</td><td>2.66 ± 0.02</td></tr><tr><td>AA (ε = 0.02)</td><td>0</td><td>0</td><td>0</td><td>0.02 ± 0.00</td></tr><tr><td>C&W (λ = 0.01)</td><td>2.36</td><td>0.13</td><td>9.54</td><td>19.47 ± 0.06</td></tr></table>
Ablation study on model architectures. Table 7 presents the results across different model architectures, including ResNet20, ResNet50, WRN28-10, and WRN34-10. While larger models generally achieve higher clean and robust accuracy, the results suggest that our method consistently enhances robustness under various attack scenarios, demonstrating its applicability across diverse architectures.
Ablation study on allocation methods. Table 8 compares the performance of FDT-hybrid with different weakness set allocation methods on CIFAR-10. The results indicate that our proposed allocation method achieves better clean accuracy and robustness under various attack scenarios than uniform random allocation.
Ablation study on $\tau_{1}$ and $\tau_{2}$. Table 9 presents the results of FDT-hybrid with various combinations of the selection thresholds $\tau_{1}$ and $\tau_{2}$ on CIFAR-10. The experiments reveal the impact of different thresholds on both clean accuracy and robustness under adversarial attacks. As $\tau_{2}$ increases, robustness improves across all metrics, but clean accuracy decreases. For a fixed $\tau_{2}$, increasing $\tau_{1}$ generally leads to a trade-off between clean accuracy and robustness. Setting $\tau_{1} = 0.2$ and $\tau_{2} = 0.8$ achieves a relatively balanced performance, maintaining both competitive clean accuracy and robust accuracy under various attacks.
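To make the role of the thresholds concrete, the following sketch partitions an image's Fourier spectrum by amplitude. It encodes one plausible reading: $\tau_{1}$ and $\tau_{2}$ are assumed to act as cutoffs on the amplitude spectrum normalized by its mean (consistent with $\tau_{2}$ values above 1 appearing in Appendix F.3); the paper's exact selection rule may differ.

```python
import numpy as np

def split_by_amplitude(img, tau1=0.2, tau2=0.8):
    """Split a grayscale image (H, W) into low-, mid-, and high-amplitude parts.

    Assumption: tau1/tau2 threshold the amplitude spectrum normalized by its
    mean; this is an illustrative criterion, not the paper's exact one.
    """
    spec = np.fft.fft2(img)
    amp = np.abs(spec)
    rel = amp / (amp.mean() + 1e-12)  # normalized amplitude per frequency
    low = np.fft.ifft2(spec * (rel < tau1)).real
    mid = np.fft.ifft2(spec * ((rel >= tau1) & (rel < tau2))).real
    high = np.fft.ifft2(spec * (rel >= tau2)).real
    return low, mid, high
```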
Table 7: Robust Accuracy (%) of different model architectures against white-box attacks on CIFAR-10. The $\epsilon$ and $\lambda$ stand for the $l_{\infty}$ norm of the adversarial perturbation and the coefficient of the C&W attack respectively.
<table><tr><td>CIFAR-10</td><td>ResNet20</td><td>ResNet50</td><td>WRN28-10</td><td>WRN34-10</td></tr><tr><td>clean accuracy</td><td>90.02</td><td>93.23</td><td>94.18</td><td>94.63</td></tr><tr><td>FGSM (ε = 0.01)</td><td>72.24</td><td>76.65</td><td>80.64</td><td>81.04</td></tr><tr><td>FGSM (ε = 0.02)</td><td>58.04</td><td>58.59</td><td>60.09</td><td>60.92</td></tr><tr><td>PGD (ε = 0.01)</td><td>48.48</td><td>60.23</td><td>64.64</td><td>65.38</td></tr><tr><td>PGD (ε = 0.02)</td><td>20.01</td><td>24.35</td><td>26.00</td><td>27.42</td></tr><tr><td>BIM (ε = 0.01)</td><td>48.57</td><td>60.43</td><td>67.36</td><td>68.29</td></tr><tr><td>BIM (ε = 0.02)</td><td>16.63</td><td>23.57</td><td>32.36</td><td>33.86</td></tr><tr><td>MIM (ε = 0.01)</td><td>51.48</td><td>60.81</td><td>64.36</td><td>64.71</td></tr><tr><td>MIM (ε = 0.02)</td><td>20.09</td><td>24.54</td><td>25.64</td><td>26.42</td></tr><tr><td>AA (ε = 0.01)</td><td>51.56</td><td>60.48</td><td>63.45</td><td>64.01</td></tr><tr><td>AA (ε = 0.02)</td><td>19.42</td><td>24.21</td><td>25.23</td><td>26.39</td></tr><tr><td>C&W (λ = 0.01)</td><td>56.08</td><td>56.55</td><td>57.23</td><td>57.52</td></tr></table>
Table 8: Performance of FDT-hybrid with different weakness set allocation methods on CIFAR-10. The other settings are consistent with those in Table 1.
<table><tr><td>Allocation method</td><td>Clean accuracy</td><td>FGSM (ε = 0.02)</td><td>PGD (ε = 0.02)</td><td>AutoAttack (ε = 0.02)</td></tr><tr><td>Uniform Random</td><td>89.32</td><td>56.20</td><td>18.24</td><td>17.89</td></tr><tr><td>Ours</td><td>90.20</td><td>58.04</td><td>20.01</td><td>19.42</td></tr></table>
Table 9: Performance of FDT-hybrid with different selection thresholds $\tau_{1}$ and $\tau_{2}$ on CIFAR-10. The other settings are consistent with those in Table 1.
<table><tr><td>Thresholds</td><td>Clean accuracy</td><td>FGSM (ε = 0.02)</td><td>PGD (ε = 0.02)</td><td>AutoAttack (ε = 0.02)</td></tr><tr><td>τ1 = 0.2, τ2 = 0.7</td><td>91.03</td><td>56.62</td><td>17.74</td><td>17.60</td></tr><tr><td>τ1 = 0.2, τ2 = 0.8</td><td>90.20</td><td>58.04</td><td>20.01</td><td>19.42</td></tr><tr><td>τ1 = 0.2, τ2 = 0.9</td><td>89.46</td><td>58.48</td><td>20.12</td><td>19.57</td></tr><tr><td>τ1 = 0.4, τ2 = 0.7</td><td>89.75</td><td>56.89</td><td>17.93</td><td>17.82</td></tr><tr><td>τ1 = 0.4, τ2 = 0.8</td><td>89.08</td><td>58.21</td><td>20.01</td><td>19.47</td></tr><tr><td>τ1 = 0.4, τ2 = 0.9</td><td>88.44</td><td>58.53</td><td>20.09</td><td>19.61</td></tr><tr><td>τ1 = 0.6, τ2 = 0.7</td><td>89.62</td><td>53.35</td><td>15.27</td><td>14.63</td></tr><tr><td>τ1 = 0.6, τ2 = 0.8</td><td>88.84</td><td>55.33</td><td>15.42</td><td>15.24</td></tr><tr><td>τ1 = 0.6, τ2 = 0.9</td><td>88.12</td><td>55.46</td><td>15.83</td><td>15.47</td></tr></table>
# F.2 RESULTS FOR BLACK-BOX ATTACK
In the black-box setting, the attacker typically has access only to the original training dataset and no information about the target model. This setting represents a more practical attack scenario. The attacker can train a surrogate model to generate transferable adversarial examples and transfer them to the target ensemble model. We utilize a single ResNet-20 model as the surrogate model. Adversarial examples are generated on the surrogate model using the SPSA algorithm (Spall, 1992). Figure 5 shows the robust accuracy of ensemble models against black-box attacks under different degrees of perturbation. As we can see, the FDT-hybrid ensemble training strategy outperforms the other ensemble training strategies against black-box attacks on both CIFAR-10 and CIFAR-100.
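A minimal sketch of this surrogate-based pipeline is shown below. The update follows the simultaneous-perturbation gradient estimate of Spall (1992); the hyperparameters (`delta`, `lr`, sample count) are illustrative assumptions, not the paper's settings. The resulting examples are then evaluated on each target ensemble.

```python
import torch

@torch.no_grad()  # gradient-free: the attack never backpropagates
def spsa_attack(surrogate, x, y, eps=0.01, lr=0.005, delta=0.01,
                steps=100, samples=32):
    """l_inf attack that estimates the loss gradient on the surrogate via SPSA."""
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    x_adv = x.clone()
    for _ in range(steps):
        grad_est = torch.zeros_like(x_adv)
        for _ in range(samples):
            v = torch.randint_like(x_adv, 2) * 2 - 1  # Rademacher ±1 direction
            f_plus = loss_fn(surrogate(x_adv + delta * v), y)
            f_minus = loss_fn(surrogate(x_adv - delta * v), y)
            # since v is ±1 elementwise, dividing by v equals multiplying by v
            grad_est += ((f_plus - f_minus) / (2 * delta)).view(-1, 1, 1, 1) * v
        x_adv = x_adv + lr * (grad_est / samples).sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # stay in the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```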

(a)

(b)
Figure 5: Robust Accuracy for different ensemble models against black-box attack with different perturbation scale $\epsilon$ .
# F.3 TRADE-OFF BETWEEN CLEAN AND ROBUST ACCURACY
In this section, we explore the trade-off between clean accuracy and robust accuracy by varying the frequency selection threshold $\tau_{2}$ (as mentioned in Section 4.2), with $\tau_{1}$ fixed at 0.1. To assess adversarial robustness, we use the PGD attack under $l_{\infty}$ perturbations of size $\epsilon = 0.01$ as a benchmark. We train a set of ResNet-20 FDT-hybrid models on CIFAR-10 and CIFAR-100 with various frequency selection thresholds $\tau_{2} \in \{0.4, 0.6, 0.8, 1.0, 1.2, 1.6\}$. Figure 6 shows that the ensemble model attains lower clean accuracy and higher robust accuracy as $\tau_{2}$ increases.

(a)

(b)

Figure 6: (a) shows the trade-off on CIFAR-10, while (b) shows it on CIFAR-100. From left to right, we decrease the trade-off parameter $\tau_{2}$ for FDT.
# F.4 TRANSFERABILITY ACROSS VARIOUS SUB-MODELS
To further investigate the diversity between sub-models, we generate adversarial examples using one sub-model and evaluate their accuracy on the other sub-models. The transferability of these adversarial examples among sub-models is visualized in Figure 7 for different ensemble training methods on the CIFAR-10 dataset. We generate adversarial examples from the "base model" and test the accuracy of the "target model". The experimental results indicate that FDT performs comparably to DVERGE and TRS in reducing the transferability of adversarial examples across different sub-models. This demonstrates that FDT not only enhances the diversity of weaknesses within the dataset but also weakens the transferability of adversarial examples between sub-models.
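The measurement behind Figure 7 can be sketched as follows, reusing a white-box attack such as the `pgd_attack` helper sketched earlier; counting any misclassification of an adversarial example as a successful attack is a simplifying assumption.

```python
import torch

def transfer_matrix(models, loader, attack):
    """success[i][j]: % of examples crafted on models[i] that fool models[j]."""
    n = len(models)
    success = torch.zeros(n, n)
    total = 0
    for x, y in loader:
        total += y.numel()
        for i, base in enumerate(models):
            x_adv = attack(base, x, y)        # craft on the base sub-model
            with torch.no_grad():
                for j, target in enumerate(models):
                    preds = target(x_adv).argmax(dim=1)
                    success[i, j] += (preds != y).sum().item()
    return 100.0 * success / total            # success rate in percent
```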

(a)

(b)

(c)

(d)

(e)

(f)
Figure 7: Pair-wise adversarial transferability between sub-models against PGD attack with $\epsilon = 0.02$ on CIFAR-10. The value represents the success rate of adversarial examples generated by the base model in attacking the target model.
# F.5 COMPARE WITH ADVERSARIAL TRAINING
We use targeted attacks in our data transformation, which differs significantly from adversarial training. First, we employ a simple pre-trained network (VGG11 in our experiments) to compute adversarial examples, thereby accelerating the training process. Second, we only utilize the low-amplitude part of the adversarial examples for data transformation, which helps maintain the model's clean accuracy. We compare our method with several popular approaches (Wang et al., 2023b; Rade & Moosavi-Dezfooli, 2021; Xu et al., 2023) on CIFAR-10 using AutoAttack under $l_{\infty}$ perturbations ($\epsilon = 8/255$). Wang et al. (2023b) generated training data with a diffusion model and then performed adversarial training on it; for fairness, we compare against their version that uses 50k generated images. Rade & Moosavi-Dezfooli (2021) used "helper examples" to aid adversarial training. Xu et al. (2023) proposed Dynamics-Aware Robust Training, which encourages the decision boundary to adjust in a way that prioritizes increasing smaller margins. We use WideResNet-28-10 as the sub-model and ensemble eight sub-models without using generated data. The results in Table 10 indicate that, although the robustness of our method is not the highest, it maintains clean accuracy with almost no decline. Moreover, our method requires neither additional generated data nor adversarial training, and even though it ensembles multiple sub-models, training efficiency remains relatively high. This suggests a potential way to enhance robustness while minimizing the decrease in clean accuracy.
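To make the transformation concrete, here is a hedged sketch: craft a targeted adversarial example on the pre-trained surrogate, then graft only its low-amplitude frequency content onto the clean training image. The amplitude cutoff `tau` and the grafting rule are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def low_amplitude_transform(clean, adv, tau=0.2):
    """Replace the low-amplitude frequencies of `clean` with those of `adv`.

    `clean` and `adv` are float arrays in [0, 1] with shape (H, W) or
    (H, W, C); the FFT is taken over the two spatial axes. The mean-relative
    cutoff `tau` is an assumption for illustration.
    """
    spec_clean = np.fft.fft2(clean, axes=(0, 1))
    spec_adv = np.fft.fft2(adv, axes=(0, 1))
    amp = np.abs(spec_adv)
    low = amp / (amp.mean() + 1e-12) < tau        # low-amplitude frequencies
    mixed = np.where(low, spec_adv, spec_clean)   # graft adv content there
    return np.fft.ifft2(mixed, axes=(0, 1)).real.clip(0.0, 1.0)
```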
To further illustrate our method's advantage, we conduct additional experiments comparing the robustness-clean accuracy trade-off curves of our method and adversarial training (AT) under different settings. Fig. 8 compares the trade-off curve of HAT (Rade & Moosavi-Dezfooli, 2021) with that of FDT-hybrid. For HAT, we fix $\gamma = 0.25$ and vary $\beta \in \{0.1, 0.5, 2.5, 3.0, 4.0, 5.0\}$ ($\beta$ is the coefficient of the robustness loss; higher $\beta$ yields higher robust accuracy). For FDT-hybrid, we fix $\tau_{1} = 0.2$ and vary $\tau_{2} \in \{0.5, 0.7, 0.9, 1.1, 1.3, 1.5\}$. We observe that HAT's robustness declines rapidly when $\beta$ is small (i.e., as clean accuracy increases). This result shows the significant advantage of our method when a clean accuracy above $90\%$ is required.

Figure 8: Trade-off between clean accuracy and robust accuracy on CIFAR-10. From left to right, we decrease the trade-off parameter $\tau_{2}$ for FDT and the trade-off parameter $\beta$ for HAT.
Table 10: Clean accuracy and robust accuracy (%) of different methods against AutoAttack under $l_{\infty}$ perturbations ($\epsilon = 8/255$) on CIFAR-10.
<table><tr><td>CIFAR-10</td><td>clean accuracy</td><td>robust accuracy</td></tr><tr><td>(Wang et al., 2023b)</td><td>86.15</td><td>55.71</td></tr><tr><td>(Rade & Moosavi-Dezfooli, 2021)</td><td>84.90</td><td>49.08</td></tr><tr><td>(Xu et al., 2023)</td><td>85.55</td><td>54.69</td></tr><tr><td>OURS (FDT-hybrid)</td><td>93.72</td><td>34.61</td></tr></table>
2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:53bf42e85414c10a31e7c455a66adc7a8b470a9aa608bee4b6c9e154f0cb538a
size 1222151
2025/To Tackle Adversarial Transferability_ A Novel Ensemble Training Method with Fourier Transformation/layout.json
ADDED
The diff for this file is too large to render.