Add Batch a4d01250-768a-4d97-bf0e-f0804d7efe43
This view is limited to 50 files because it contains too many changes. See raw diff
- ablatingconceptsintexttoimagediffusionmodels/9c021565-b1c8-4251-9401-e3558164b323_content_list.json +3 -0
- ablatingconceptsintexttoimagediffusionmodels/9c021565-b1c8-4251-9401-e3558164b323_model.json +3 -0
- ablatingconceptsintexttoimagediffusionmodels/9c021565-b1c8-4251-9401-e3558164b323_origin.pdf +3 -0
- ablatingconceptsintexttoimagediffusionmodels/full.md +358 -0
- ablatingconceptsintexttoimagediffusionmodels/images.zip +3 -0
- ablatingconceptsintexttoimagediffusionmodels/layout.json +3 -0
- accflowbackwardaccumulationforlongrangeopticalflow/e5896194-f826-44fe-b36b-39eda9c29a37_content_list.json +3 -0
- accflowbackwardaccumulationforlongrangeopticalflow/e5896194-f826-44fe-b36b-39eda9c29a37_model.json +3 -0
- accflowbackwardaccumulationforlongrangeopticalflow/e5896194-f826-44fe-b36b-39eda9c29a37_origin.pdf +3 -0
- accflowbackwardaccumulationforlongrangeopticalflow/full.md +369 -0
- accflowbackwardaccumulationforlongrangeopticalflow/images.zip +3 -0
- accflowbackwardaccumulationforlongrangeopticalflow/layout.json +3 -0
- accurate3dfacereconstructionwithfacialcomponenttokens/640e260a-327e-4872-9b6d-5d5661b519dc_content_list.json +3 -0
- accurate3dfacereconstructionwithfacialcomponenttokens/640e260a-327e-4872-9b6d-5d5661b519dc_model.json +3 -0
- accurate3dfacereconstructionwithfacialcomponenttokens/640e260a-327e-4872-9b6d-5d5661b519dc_origin.pdf +3 -0
- accurate3dfacereconstructionwithfacialcomponenttokens/full.md +336 -0
- accurate3dfacereconstructionwithfacialcomponenttokens/images.zip +3 -0
- accurate3dfacereconstructionwithfacialcomponenttokens/layout.json +3 -0
- accurateandfastcompressedvideocaptioning/4f832d5d-d350-44d5-9f62-3e0bf6e26a99_content_list.json +3 -0
- accurateandfastcompressedvideocaptioning/4f832d5d-d350-44d5-9f62-3e0bf6e26a99_model.json +3 -0
- accurateandfastcompressedvideocaptioning/4f832d5d-d350-44d5-9f62-3e0bf6e26a99_origin.pdf +3 -0
- accurateandfastcompressedvideocaptioning/full.md +318 -0
- accurateandfastcompressedvideocaptioning/images.zip +3 -0
- accurateandfastcompressedvideocaptioning/layout.json +3 -0
- achievementbasedtrainingprogressbalancingformultitasklearning/272271d0-f734-4249-a347-749bac4b8015_content_list.json +3 -0
- achievementbasedtrainingprogressbalancingformultitasklearning/272271d0-f734-4249-a347-749bac4b8015_model.json +3 -0
- achievementbasedtrainingprogressbalancingformultitasklearning/272271d0-f734-4249-a347-749bac4b8015_origin.pdf +3 -0
- achievementbasedtrainingprogressbalancingformultitasklearning/full.md +294 -0
- achievementbasedtrainingprogressbalancingformultitasklearning/images.zip +3 -0
- achievementbasedtrainingprogressbalancingformultitasklearning/layout.json +3 -0
- actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/b76da01a-6915-4f20-95e0-470a21a063d9_content_list.json +3 -0
- actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/b76da01a-6915-4f20-95e0-470a21a063d9_model.json +3 -0
- actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/b76da01a-6915-4f20-95e0-470a21a063d9_origin.pdf +3 -0
- actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/full.md +284 -0
- actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/images.zip +3 -0
- actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/layout.json +3 -0
- actionsensitivitylearningfortemporalactionlocalization/07053da8-e3f8-425e-9d4f-338c187ce557_content_list.json +3 -0
- actionsensitivitylearningfortemporalactionlocalization/07053da8-e3f8-425e-9d4f-338c187ce557_model.json +3 -0
- actionsensitivitylearningfortemporalactionlocalization/07053da8-e3f8-425e-9d4f-338c187ce557_origin.pdf +3 -0
- actionsensitivitylearningfortemporalactionlocalization/full.md +408 -0
- actionsensitivitylearningfortemporalactionlocalization/images.zip +3 -0
- actionsensitivitylearningfortemporalactionlocalization/layout.json +3 -0
- activateandrejecttowardssafedomaingeneralizationundercategoryshift/1ff30265-9cf5-4ec1-a982-4f0c39c9392c_content_list.json +3 -0
- activateandrejecttowardssafedomaingeneralizationundercategoryshift/1ff30265-9cf5-4ec1-a982-4f0c39c9392c_model.json +3 -0
- activateandrejecttowardssafedomaingeneralizationundercategoryshift/1ff30265-9cf5-4ec1-a982-4f0c39c9392c_origin.pdf +3 -0
- activateandrejecttowardssafedomaingeneralizationundercategoryshift/full.md +447 -0
- activateandrejecttowardssafedomaingeneralizationundercategoryshift/images.zip +3 -0
- activateandrejecttowardssafedomaingeneralizationundercategoryshift/layout.json +3 -0
- activeneuralmapping/45e08854-ae2d-43e4-bc22-971696065335_content_list.json +3 -0
- activeneuralmapping/45e08854-ae2d-43e4-bc22-971696065335_model.json +3 -0
ablatingconceptsintexttoimagediffusionmodels/9c021565-b1c8-4251-9401-e3558164b323_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b1561e7641eb924b76f189f2474add00ba37d80960a0cfba17f6ec7cb6b5190
+size 82591
ablatingconceptsintexttoimagediffusionmodels/9c021565-b1c8-4251-9401-e3558164b323_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42c7465339f5161d673008edb9b651534ea5ee45af97f0220639b063f25b0f06
+size 106517
ablatingconceptsintexttoimagediffusionmodels/9c021565-b1c8-4251-9401-e3558164b323_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0de3b6d0b7e83efe69813d8ed7c06c09fea5d091b9b2924f8397a5c1b034ed9
+size 13852194
ablatingconceptsintexttoimagediffusionmodels/full.md
ADDED
|
@@ -0,0 +1,358 @@
| 1 |
+
# Ablating Concepts in Text-to-Image Diffusion Models
|
| 2 |
+
|
| 3 |
+
Nupur Kumari<sup>1</sup>   Bingliang Zhang<sup>2</sup>   Sheng-Yu Wang<sup>1</sup>   Eli Shechtman<sup>3</sup>   Richard Zhang<sup>3</sup>   Jun-Yan Zhu<sup>1</sup>
|
| 10 |
+
|
| 11 |
+
$^{1}$ Carnegie Mellon University
|
| 12 |
+
|
| 13 |
+
$^{2}$ Tsinghua University
|
| 14 |
+
|
| 15 |
+
$^{3}$ Adobe Research
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Remove Van Gogh
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
|
| 22 |
+

|
| 23 |
+
|
| 24 |
+

|
| 25 |
+
Remove Grumpy Cat
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
Remove Memorized Image
|
| 33 |
+
|
| 34 |
+

|
| 35 |
+
Remove Monet
|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
Remove R2D2
|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
|
| 44 |
+

|
| 45 |
+
Figure 1: Our method can ablate copyrighted materials and memorized images from pretrained text-to-image diffusion models. Our method learns to change the image distribution of a target concept to match an anchor concept, e.g., Van Gogh painting $\rightarrow$ paintings (first row), or Grumpy cat $\rightarrow$ Cat (second row). Furthermore, we extend our method to prevent the generation of memorized images (third row).
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
Remove Memorized Image
|
| 49 |
+
|
| 50 |
+
# Abstract
|
| 51 |
+
|
| 52 |
+
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful compositional ability. However, these models are typically trained on an enormous amount of Internet data, often containing copyrighted material, licensed images, and personal photos. Furthermore, they have been found to replicate the style of various living artists or memorize exact training samples. How can we remove such copyrighted concepts or images without retraining the model from scratch? To achieve this goal, we propose an efficient method of ablating concepts in the pretrained model, i.e., preventing the generation of a target concept. Our algorithm learns to match the image distribution for a target style, instance, or text prompt we wish to ablate to the distribution corresponding to an anchor concept. This prevents the model from generating target concepts given its text condition. Extensive experiments show that our method can successfully prevent the generation of the ablated concept while preserving closely related concepts in the model.
|
| 53 |
+
|
| 54 |
+
# 1. Introduction
|
| 55 |
+
|
| 56 |
+
Large-scale text-to-image models have demonstrated remarkable ability in synthesizing photorealistic images [51, 43, 56, 54, 76, 14]. In addition to algorithms and compute resources, this technological advancement is powered by the use of massive datasets scraped from the web [59]. Unfortunately, these datasets often consist of copyrighted materials, the artistic oeuvre of creators, and personal photos [64, 10, 61].
|
| 57 |
+
|
| 58 |
+
We believe that every creator should have the right to opt out from large-scale models at any time for any image they have created. However, fulfilling such requests poses new computational challenges, as re-training a model from scratch for every user request can be computationally intensive. Here, we ask: how can we prevent the model from generating such content? How can we achieve this efficiently without re-training the model from scratch? And how can we make sure that the model still preserves related concepts?
|
| 59 |
+
|
| 60 |
+
These questions motivate our work on ablation (removal) of concepts from text-conditioned diffusion models [54, 3]. We perform concept ablation by modifying generated images for the target concept $(\mathbf{c}^{*})$ to match a broader anchor concept (c), e.g., overwriting Grumpy Cat with cat or Van Gogh paintings with painting, as shown in Figure 1. Thus, given the text prompt painting of olive trees in the style of Van Gogh, the ablated model generates a normal painting of olive trees even though the prompt mentions Van Gogh. Similarly, it prevents the generation of specific instances/objects like Grumpy Cat and generates a random cat given the prompt.
|
| 61 |
+
|
| 62 |
+
Our method aims at modifying the conditional distribution of the model given a target concept $p_{\Phi}(\mathbf{x}|\mathbf{c}^*)$ to match a distribution $p(\mathbf{x}|\mathbf{c})$ defined by the anchor concept $\mathbf{c}$ . This is achieved by minimizing the Kullback-Leibler divergence between the two distributions. We propose two different target distributions that lead to different training objectives. In the first case, we fine-tune the model to match the model prediction between two text prompts containing the target and corresponding anchor concepts, e.g., A cute little Grumpy Cat and A cute little cat. In the second objective, the conditional distribution $p(\mathbf{x}|\mathbf{c})$ is defined by the modified text-image pairs of: a target concept prompt, paired with images of anchor concepts, e.g., the prompt a cute little Grumpy Cat with a random cat image. We show that both objectives can effectively ablate concepts.
|
| 63 |
+
|
| 64 |
+
We evaluate our method on 16 concept ablation tasks, including specific object instances, artistic styles, and memorized images, using various evaluation metrics. Our method can successfully ablate target concepts while minimally affecting closely related surrounding concepts that should be preserved (e.g., other cat breeds when ablating Grumpy Cat). Our method takes around five minutes per concept. Furthermore, we perform an extensive ablation study regarding different algorithmic design choices, such as the objective function variants, the choice of parameter subsets to fine-tune, the choice of anchor concepts, the number of fine-tuning steps, and the robustness of our method to misspelling in the text prompt. Finally, we show that our method can ablate multiple concepts at once and discuss the current limitations. The full version of the paper is available at https://arxiv.org/abs/2303.13516. Our code, data, and models are available at https://www.cs.cmu.edu/~concept-ablation/.
|
| 65 |
+
|
| 66 |
+
# 2. Related Work
|
| 67 |
+
|
| 68 |
+
Text-to-image synthesis has advanced significantly since the seminal works [82, 37], thanks to improvements in model architectures [77, 81, 68, 75, 28, 15, 74, 29, 57, 16], generative modeling techniques [52, 27, 54, 56, 4, 43, 14, 66], and availability of large-scale datasets [59]. Current methods can synthesize high-quality images with remarkable generalization ability, capable of composing different instances, styles,
|
| 69 |
+
|
| 70 |
+
and concepts in unseen contexts. However, as these models are often trained on copyrighted images, they learn to mimic the styles of various artists [64, 61] and other copyrighted content [10]. In this work, we aim to modify the pretrained models to prevent the generation of such images. To remove data from pre-trained GANs, Kong et al. [32] add the redacted data to the fake data, apply the standard adversarial loss, and show results on MNIST and CIFAR. Unlike their method, which requires time-consuming model re-training on the entire dataset, our method can efficiently remove concepts without going through the original training set. Furthermore, we focus on large-scale text-based diffusion models. The recent work of Schramowski et al. [58] modifies the inference process to prevent certain concepts from being generated, whereas we aim to ablate the concept from the model weights. Concurrent with our work, Gandikota et al. [20] aim to remove concepts using a score-based formulation. The reader is encouraged to review their work.
|
| 71 |
+
|
| 72 |
+
Training data memorization and unlearning. Several works have studied training data leakage [62, 12, 13, 11], which can pose a greater security and privacy risk, especially with the use of web-scale uncurated datasets in deep learning. Recent works [64, 10] have also shown that text-to-image models are susceptible to generating exact or similar copies of training samples for certain text conditions. Another line of work in machine unlearning [9, 21, 23, 22, 42, 8, 67, 60] explores data deletion at a user's request after model training. However, existing unlearning methods [23, 67] typically require calculating quantities such as the Fisher Information Matrix, making them computationally infeasible for large-scale models with billions of parameters trained on billions of images. In contrast, our method can directly update model weights and ablate a target concept in as little as five minutes.
|
| 73 |
+
|
| 74 |
+
Generative model fine-tuning and editing. Fine-tuning aims to adapt the weights of a pretrained generative model to new domains [73, 46, 72, 41, 79, 34, 47, 80, 30, 35, 24, 44], downstream tasks [71, 54, 78], and test images [6, 53, 48, 31, 25, 49]. Several recent works also explore fine-tuning text-to-image models to learn personalized or unseen concepts [33, 17, 55, 18] given a few exemplar images. Similarly, model editing [5, 70, 19, 69, 45, 38, 40, 39] aims to modify specific model weights based on users' instructions to incorporate new computational rules or new visual effects. Unlike the above approaches, our method reduces the possible space by ablating specific concepts in the pretrained model.
|
| 75 |
+
|
| 76 |
+
# 3. Method
|
| 77 |
+
|
| 78 |
+
Here, we first provide a brief overview of text-to-image diffusion models [63, 27] in Section 3.1. We then propose our concept ablation formulation and explore two variants in Section 3.2. Finally, in Section 3.3, we discuss the training details for each type of ablation task.
|
| 79 |
+
|
| 80 |
+

|
| 81 |
+
Figure 2: Overview. We update model weights to modify the generated image distribution on the target concept, e.g., Grumpy Cat, to match an anchor distribution, e.g., Cat. We propose two variants. Left: The anchor distribution is generated by the model itself, conditioned on the anchor concept. Right: The anchor distribution is defined by the modified pairs of <target prompt, anchor image>. An input image x is generated with anchor concept c. Adding randomly sampled noise $\epsilon$ results in noisy image $\mathbf{x}_t$ at time-step $t$ . Target prompt $\mathbf{c}^*$ is produced by appropriately modifying c. In experiments, we find the model-based variant to be more effective.
|
| 82 |
+
|
| 83 |
+
# 3.1. Diffusion Models
|
| 84 |
+
|
| 85 |
+
Diffusion models [63] learn to reverse a forward Markov chain process where noise is gradually added to the input image over multiple timesteps $t \in [0, T]$. The noisy image $\mathbf{x}_t$ at any time-step $t$ is given by $\sqrt{\alpha_t}\, \mathbf{x}_0 + \sqrt{1 - \alpha_t}\, \epsilon$, where $\mathbf{x}_0$ is a random real image and $\alpha_t$ determines the strength of the Gaussian noise $\epsilon$, decreasing gradually with the timestep such that $\mathbf{x}_T \sim N(0, I)$. The denoising network $\Phi(\mathbf{x}_t, \mathbf{c}, t)$ is trained to denoise the noisy image to obtain $\mathbf{x}_{t-1}$, and can also be conditioned on other modalities such as text $\mathbf{c}$. The training objective can be reduced to predicting the noise $\epsilon$:
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
\mathcal{L}(\mathbf{x}, \mathbf{c}) = \mathbb{E}_{\epsilon, \mathbf{x}, \mathbf{c}, t} \big[ w_t \, \| \epsilon - \Phi(\mathbf{x}_t, \mathbf{c}, t) \| \big], \tag{1}
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
where $w_{t}$ is a time-dependent weight on the loss. To synthesize an image during inference, given the text condition $\mathbf{c}$ , we iteratively denoise a Gaussian noise image $\mathbf{x}_T \sim N(0, I)$ for a fixed number of timesteps [65, 36].
|
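To make this objective concrete, here is a minimal PyTorch-style sketch (our own illustration, not the authors' code) of the forward noising step and the noise-prediction loss of Eqn. 1; the denoiser interface, schedule tensor, and shapes are assumptions.

```python
import torch

def diffusion_loss(model, x0, cond, alphas_cumprod, w=None):
    """Eqn. 1 sketch. x0: clean images [B,C,H,W]; cond: text conditioning; alphas_cumprod: [T] noise schedule."""
    B = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x0.device)  # random timestep per sample
    a_t = alphas_cumprod[t].view(B, 1, 1, 1)                           # alpha_t
    eps = torch.randn_like(x0)                                         # Gaussian noise epsilon
    x_t = a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * eps                   # forward-diffused image x_t
    eps_pred = model(x_t, cond, t)                                     # denoiser predicts the noise
    w_t = 1.0 if w is None else w[t].view(B, 1, 1, 1)                  # time-dependent weight w_t
    return (w_t * (eps - eps_pred) ** 2).mean()
```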
| 92 |
+
|
| 93 |
+
# 3.2. Concept Ablation
|
| 94 |
+
|
| 95 |
+
We define concept ablation as the task of preventing the generation of images corresponding to a given target concept that needs to be ablated. As re-training the model on a new dataset with the concept removed is impractical, this is a challenging task. We also need to ensure that editing the model to ablate a particular concept does not affect its performance on other closely related concepts.
|
| 96 |
+
|
| 97 |
+
A naive approach. Our first attempt is to simply maximize the diffusion model training loss [67, 32] on the text-image pairs for the target concept while imposing regularizations on the weights. Unfortunately, this method leads to worse results on close surrounding concepts of the target concept. We compare our method with this baseline in Section 4.2 (Figure 3) and show that it performs sub-optimally.
|
| 98 |
+
|
| 99 |
+
Our formulation. Since concept ablation prevents the generation of the target concept, the question arises: what should be generated instead? In this work, we assume that the user provides the desired anchor concept, e.g., Cat for Grumpy Cat. The anchor concept overwrites the target concept and should be a superset of or similar to the target concept. Thus, given a set of text prompts $\{\mathbf{c}^*\}$ describing the target concept, we aim to match the following two distributions via the Kullback-Leibler (KL) divergence:
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
\underset{\hat{\Phi}}{\arg\min}\; \mathcal{D}_{\mathcal{KL}} \big( p(\mathbf{x}_{(0..T)} \mid \mathbf{c}) \,\|\, p_{\hat{\Phi}}(\mathbf{x}_{(0..T)} \mid \mathbf{c}^{*}) \big), \tag{2}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
where $p(\mathbf{x}_{(0..T)}|\mathbf{c})$ is some target distribution on the $\{\mathbf{x}_t\}$ , $\mathbf{t} \in [0,T]$ , defined by the anchor concept $\mathbf{c}$ and $p_{\hat{\Phi}}(\mathbf{x}_{(0..T)}|\mathbf{c}^*)$ is the model's distribution for the target concept. Intuitively, we want to associate text prompts $\{\mathbf{c}^*\}$ with the images corresponding to anchor prompts $\{\mathbf{c}\}$ . Defining different anchor concept distributions leads to different objective functions, as we discuss next.
|
| 106 |
+
|
| 107 |
+
To accomplish the above objective, we first create a small dataset that consists of $(\mathbf{x},\mathbf{c},\mathbf{c}^{*})$ tuple, where $\mathbf{c}$ is a random prompt for the anchor concept, $\mathbf{x}$ is the generated image with that condition, and $\mathbf{c}^*$ is modified from $\mathbf{c}$ to include the target concept. For example, if $\mathbf{c}$ is photo of a cat, $\mathbf{c}^*$ will be photo of a Grumpy Cat, and $\mathbf{x}$ will be a generated image with text prompt $\mathbf{c}$ . For brevity, we use the same notation $\mathbf{x}$ to denote these generated images.
|
| 108 |
+
|
| 109 |
+
Model-based concept ablation. Here, we match the distribution of the target concept $p_{\hat{\Phi}}(\mathbf{x}_{(0\dots T)}|\mathbf{c}^*)$ to the pretrained model's distribution $p_{\Phi}(\mathbf{x}_{(0\dots T)}|\mathbf{c})$ given the anchor concept. The fine-tuned network should have a similar distribution of generated images given $\mathbf{c}^*$ as that of $\mathbf{c}$, which can be expressed as minimizing the KL divergence between the two. This is similar to the standard diffusion model training objective,
|
| 110 |
+
|
| 111 |
+
except that the target distribution is defined by the pretrained model instead of the training data. Eqn. 2 can be expanded as
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
\underset{\hat{\Phi}}{\arg\min} \sum_{t=1}^{T} \mathbb{E}_{p_{\Phi}(\mathbf{x}_0 \dots \mathbf{x}_T \mid \mathbf{c})} \left[ \log \frac{p_{\Phi}(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{c})}{p_{\hat{\Phi}}(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{c}^{*})} \right] \tag{3}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
where the noisy intermediate latent $\mathbf{x}_t \sim p_\Phi(\mathbf{x}_t|\mathbf{c})$, $\Phi$ is the original network, and $\hat{\Phi}$ is the new network we aim to learn. We can optimize the KL divergence by minimizing the following equivalent objective:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\underset{\hat{\Phi}}{\arg\min}\; \mathbb{E}_{\epsilon, \mathbf{x}_t, \mathbf{c}^{*}, \mathbf{c}, t} \big[ w_t \, \| \Phi(\mathbf{x}_t, \mathbf{c}, t) - \hat{\Phi}(\mathbf{x}_t, \mathbf{c}^{*}, t) \| \big]. \tag{4}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
We show the full derivation in our arXiv version. We initialize $\hat{\Phi}$ with the pretrained model. Unfortunately, optimizing the above objective requires us to sample from $p_{\Phi}(\mathbf{x}_t|\mathbf{c})$ and to keep copies of two large networks $\Phi$ and $\hat{\Phi}$, which is time- and memory-intensive. To bypass these issues, we sample $\mathbf{x}_t$ using the forward diffusion process and assume that the model remains similar for the anchor concept during fine-tuning. Therefore, we use the network $\hat{\Phi}$ with stopgrad to get the anchor concept prediction. Thus, our final training objective is
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\mathcal{L}_{\text{model}}(\mathbf{x}, \mathbf{c}, \mathbf{c}^{*}) = \mathbb{E}_{\epsilon, \mathbf{x}, \mathbf{c}^{*}, \mathbf{c}, t} \big[ w_t \, \| \hat{\Phi}(\mathbf{x}_t, \mathbf{c}, t).\mathrm{sg}() - \hat{\Phi}(\mathbf{x}_t, \mathbf{c}^{*}, t) \| \big], \tag{5}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where $\mathbf{x}_t = \sqrt{\alpha_t}\,\mathbf{x} + \sqrt{1 - \alpha_t}\,\epsilon$. As shown in Figure 2 (left), this objective minimizes the difference in the model's prediction given the target prompt and the anchor prompt. It is also possible to optimize an approximation to the reverse KL divergence, which we discuss in Section 4.3.
|
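A minimal sketch of the model-based objective in Eqn. 5, assuming the same PyTorch-style denoiser interface as in the Eqn. 1 sketch above: the fine-tuned model's prediction on the target prompt $\mathbf{c}^*$ is pulled towards its own stop-gradient prediction on the anchor prompt $\mathbf{c}$, evaluated on noised anchor-concept images.

```python
import torch

def model_based_ablation_loss(model, x, emb_anchor, emb_target, alphas_cumprod):
    """Eqn. 5 sketch. x: anchor-concept images; emb_anchor / emb_target: conditioning for c and c*."""
    B = x.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x.device)
    a_t = alphas_cumprod[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a_t.sqrt() * x + (1.0 - a_t).sqrt() * eps        # forward-diffused anchor image
    with torch.no_grad():                                   # .sg(): anchor prediction acts as a fixed target
        target = model(x_t, emb_anchor, t)
    pred = model(x_t, emb_target, t)                        # prediction conditioned on the target prompt c*
    return ((target - pred) ** 2).mean()
```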
| 130 |
+
|
| 131 |
+
Noise-based concept ablation. Alternatively, we can redefine the ground truth text-image pairs as $<$ a target concept text prompt, the generated image of the corresponding anchor concept text prompt>, e.g., <photo of Grumpy Cat, random cat image>. We fine-tune the model on these redefined pairs with the standard diffusion training loss:
|
| 132 |
+
|
| 133 |
+
$$
|
| 134 |
+
\mathcal{L}_{\text{noise}}(\mathbf{x}, \mathbf{c}, \mathbf{c}^{*}) = \mathbb{E}_{\epsilon, \mathbf{x}, \mathbf{c}^{*}, t} \big[ w_t \, \| \epsilon - \hat{\Phi}(\mathbf{x}_t, \mathbf{c}^{*}, t) \| \big], \tag{6}
|
| 135 |
+
$$
|
| 136 |
+
|
| 137 |
+
where the generated image $\mathbf{x}$ is sampled from the conditional distribution $p_{\Phi}(\mathbf{x}|\mathbf{c})$. We then create the noisy version $\mathbf{x}_t = \sqrt{\alpha_t}\,\mathbf{x} + \sqrt{1 - \alpha_t}\,\epsilon$. As shown in Figure 2, the first objective (Eqn. 5) aims to match the model's predicted noises, while the second objective (Eqn. 6) aims to match the Gaussian noises $\epsilon$. We evaluate the above two objectives in Section 4.
|
| 138 |
+
|
| 139 |
+
Regularization loss. We also add the standard diffusion loss on the $(\mathbf{x},\mathbf{c})$ anchor concept pairs as a regularization [55, 33]. Thus, our final objective is $\lambda \mathcal{L}(\mathbf{x},\mathbf{c}) + \mathcal{L}(\mathbf{x},\mathbf{c},\mathbf{c}^{*})$, where the losses are as defined in Eqn. 1 and Eqn. 5 (or 6), respectively. The regularization loss is needed because the target text prompt can contain the anchor concept, e.g., Cat in Grumpy Cat.
|
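A sketch of one fine-tuning step with this regularization term, reusing the `diffusion_loss` and `model_based_ablation_loss` helpers sketched above; the noise-based variant (Eqn. 6) simply applies the standard diffusion loss with the target prompt on anchor images. Function names and the default $\lambda$ are illustrative, not the authors' exact setup.

```python
def ablation_step(model, opt, x, emb_anchor, emb_target, alphas_cumprod,
                  lam=1.0, variant="model"):
    if variant == "model":
        abl = model_based_ablation_loss(model, x, emb_anchor, emb_target, alphas_cumprod)  # Eqn. 5
    else:
        abl = diffusion_loss(model, x, emb_target, alphas_cumprod)  # Eqn. 6: anchor image, target prompt c*
    reg = diffusion_loss(model, x, emb_anchor, alphas_cumprod)      # Eqn. 1 on the (x, c) anchor pair
    loss = lam * reg + abl                                           # final objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```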
| 140 |
+
|
| 141 |
+
Parameter subset to update. We experiment with three variations where we fine-tune different network parts: (1) Cross-Attention: fine-tune key and value projection matrices in the diffusion model's U-Net [33], (2) Embedding: fine-tune the text embedding in the text transformer [17], and (3) Full Weights: fine-tune all parameters of the U-Net [55].
|
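For the Cross-Attention variant, a minimal sketch (assuming the Hugging Face diffusers naming for Stable Diffusion's U-Net, where cross-attention modules are called `attn2` and their key/value projections `to_k`/`to_v`) of freezing everything except those projection matrices:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet")

trainable_params = []
for name, param in unet.named_parameters():
    # only the key/value projections of the cross-attention layers consume the text embedding
    if "attn2.to_k" in name or "attn2.to_v" in name:
        param.requires_grad = True
        trainable_params.append(param)
    else:
        param.requires_grad = False
```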
| 142 |
+
|
| 143 |
+

|
| 144 |
+
|
| 145 |
+

|
| 146 |
+
|
| 147 |
+

|
| 148 |
+
Figure 3: Comparison of different learning objectives. The model-based concept ablation converges faster than the noise-based variant while maintaining better performance on surrounding concepts. Maximizing the loss on the target concept dataset leads to the deterioration of surrounding concepts (top row).
|
| 149 |
+
|
| 150 |
+

|
| 151 |
+
|
| 152 |
+

|
| 153 |
+
|
| 154 |
+

|
| 155 |
+
|
| 156 |
+
# 3.3. Training Details
|
| 157 |
+
|
| 158 |
+
Instance. Given the target and the anchor concept, such as Grumpy Cat and Cat, we first use ChatGPT [1] to generate 200 random prompts $\{\mathbf{c}\}$ containing the anchor concept. We generate 1,000 images from the pretrained diffusion model using the 200 prompts and replace the word Cat with Grumpy Cat to get target text prompts $\{\mathbf{c}^*\}$ .
|
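A minimal sketch of this dataset construction using the diffusers Stable Diffusion pipeline; the two example prompts and the word replacement stand in for the ~200 ChatGPT prompts and are illustrative only.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")

anchor_prompts = ["photo of a cat", "a cat sitting on a sofa"]   # e.g., 200 ChatGPT-generated prompts
dataset = []                                                     # tuples (x, c, c*)
for c in anchor_prompts:
    images = pipe(c, num_images_per_prompt=5).images             # ~1,000 generated images in total
    c_star = c.replace("cat", "Grumpy Cat")                      # target prompt c*
    dataset += [(img, c, c_star) for img in images]
```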
| 159 |
+
|
| 160 |
+
Style. When removing a style, we use generic painting styles as the anchor concept. We use clip-retrieval [2] to obtain a set of 200 text prompts {c} similar to the word painting in the CLIP feature space. We then generate 1,000 images from the pretrained model using these prompts. To get the target prompts $\{\mathbf{c}^*\}$, we append in the style of {target style} and similar variations to the anchor prompts c.
|
| 161 |
+
|
| 162 |
+
Memorized images. Recent methods for detecting training set memorization can identify both the memorized image and the corresponding text prompt $\mathbf{c}^*$ [10]. We then use ChatGPT to generate five anchor prompts $\{\mathbf{c}\}$ that can generate content similar to the memorized image. In many cases, these anchor prompts still generate the memorized image. Therefore, we first generate several more paraphrases of the anchor prompts using ChatGPT, add the three prompts that most often lead to the memorized image to the target prompts, and use the ten prompts that least often lead to it as anchor prompts. Thus, $\mathbf{c}^*$ and $\mathbf{c}$ for ablating the target memorized image consist of four and ten prompts, respectively. We then similarly generate 1,000 images using the anchor prompts and use
|
| 163 |
+
|
| 164 |
+

|
| 165 |
+
|
| 166 |
+

|
| 167 |
+
Figure 4: Quantitative evaluation for ablating instances (top row) and styles (bottom row). We show the performance of our final model-based concept ablation method across training steps and on updating different subsets of parameters. All metrics are averaged across four target concepts. Both embedding and cross-attention fine-tuning converge early. Fine-tuning cross-attention layers performs slightly worse for surrounding concepts but remains more robust to small spelling mistakes (third column).
|
| 168 |
+
Figure 5: Qualitative samples when ablating specific object instances. We show samples from different variations of our method in each row. The noise-based method performs worse on the Nemo and R2D2 instances compared to the model-based variant. With the model-based variant, fine-tuning different subsets of parameters performs comparably. As shown in Figure 4 (third column) and Figure 6, fine-tuning only the embedding is less robust to small spelling mistakes.
|
| 169 |
+
|
| 170 |
+

|
| 171 |
+
Figure 6: Robustness of the model-based variant to spelling mistakes in the text prompt. Fine-tuning only the embedding makes the model less robust to slight spelling mistakes, which makes it easy to circumvent the method and still generate the target concept, whereas fine-tuning the cross-attention parameters is robust to such changes.
|
| 172 |
+
|
| 173 |
+
image similarity metrics [50, 10] to filter out the memorized images and use the remaining ones for training.
|
| 174 |
+
|
| 175 |
+
# 4. Experiments
|
| 176 |
+
|
| 177 |
+
In this section, we show the results of our method on ablating various instances, styles, and memorized images. All our experiments are based on the Stable Diffusion model [3]. Please refer to the appendix of our arXiv version for more training details.
|
| 178 |
+
|
| 179 |
+
# 4.1. Evaluation metrics and baselines
|
| 180 |
+
|
| 181 |
+
Baseline. We compare our method with a loss maximization baseline inspired by Tanno et al. [67]:
|
| 182 |
+
|
| 183 |
+
$$
|
| 184 |
+
\underset{\hat{\Phi}}{\arg\min}\; \max\big(1 - \mathcal{L}(\mathbf{x}^{*}, \mathbf{c}^{*}),\, 0\big) + \lambda \, \| \hat{\Phi} - \Phi \|_{2} \tag{7}
|
| 185 |
+
$$
|
| 186 |
+
|
| 187 |
+
where $\mathbf{x}^*$ is the set of generated images with condition $\mathbf{c}^*$ and $\mathcal{L}$ is the diffusion training loss as defined in Eqn. 1. We compare our method with this baseline on ablating instances.
|
| 188 |
+
|
| 189 |
+
Evaluation metrics. We use CLIP Score and CLIP accuracy [26] to evaluate whether the model can ablate the target concept. CLIP Score measures the similarity of the generated image to the target concept text, e.g., Grumpy Cat, in CLIP feature space. Similarly, CLIP accuracy measures, for each generated image, the accuracy of the binary ablated-vs.-anchor-concept classification using cosine distance in CLIP feature space. For both metrics, lower values indicate more successful ablation. We further evaluate the performance under small spelling mistakes in the ablated text prompts. We also use the same metrics to evaluate the model on related
|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
Figure 7: Qualitative comparison between baseline and ours. Model fine-tuned by our method generates images that are relatively more similar to the ones generated by the pretrained model on the BB8 instance, which should be preserved while ablating R2D2. Cross-Attention parameters are fine-tuned in both methods.
|
| 193 |
+
|
| 194 |
+
surrounding concepts (e.g., similar cat breeds for Grumpy Cat), which should be preserved. Similar to before, CLIP accuracy is measured between the surrounding concept and anchor concept, and the higher, the better. Similarly, CLIP Score measures the similarity of the generated image with the surrounding concept text, and the higher, the better.
|
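As a rough sketch of these two metrics (assuming OpenAI's `clip` package; the model choice is ours, not necessarily the paper's exact setup): CLIP Score is the mean image-text similarity with the concept text, and CLIP accuracy is the fraction of images whose CLIP similarity to the target text exceeds that to the anchor text.

```python
import torch
import clip

model, preprocess = clip.load("ViT-B/32", device="cpu")

def clip_metrics(pil_images, target_text, anchor_text):
    with torch.no_grad():
        imgs = torch.stack([preprocess(im) for im in pil_images])
        img_f = model.encode_image(imgs)
        txt_f = model.encode_text(clip.tokenize([target_text, anchor_text]))
        img_f = img_f / img_f.norm(dim=-1, keepdim=True)
        txt_f = txt_f / txt_f.norm(dim=-1, keepdim=True)
        sim = img_f @ txt_f.T                                   # [N, 2] cosine similarities
    clip_score = sim[:, 0].mean().item()                        # similarity to the concept text
    clip_acc = (sim[:, 0] > sim[:, 1]).float().mean().item()    # fraction classified as the target concept
    return clip_score, clip_acc
```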
| 195 |
+
|
| 196 |
+
Furthermore, to test whether the fine-tuned model retains existing concepts, we calculate the KID [7] between the sets of images generated by the fine-tuned model and the pretrained model. A higher KID is better for the target concept, while a lower KID is better for the anchor and surrounding concepts. We generate 200 images each for the ablated, anchor, and surrounding concepts using 10 prompts and 50 steps of the DDPM sampler. The prompts are generated through ChatGPT for object instances and manually created for styles by captioning real images corresponding to each style.
|
| 197 |
+
|
| 198 |
+
To measure the effectiveness of our method in ablating memorized images, following previous works [50, 10], we use the SSCD [50] model to measure the percentage of generated images whose similarity with the memorized image is greater than a threshold.
|
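A minimal sketch of this memorization-rate metric; `gen_feats` and `mem_feat` stand for SSCD embeddings computed elsewhere (the SSCD feature extractor itself is not shown), and the 0.5 threshold matches Table 1.

```python
import torch

def memorization_rate(gen_feats, mem_feat, thresh=0.5):
    """gen_feats: [N, D] SSCD features of generated images; mem_feat: [D] feature of the memorized image."""
    gen = gen_feats / gen_feats.norm(dim=-1, keepdim=True)
    mem = mem_feat / mem_feat.norm()
    sim = gen @ mem                                      # cosine similarity per generated image
    return (sim > thresh).float().mean().item() * 100    # percentage, as reported in Table 1
```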
| 199 |
+
|
| 200 |
+
# 4.2. Comparisons and main results
|
| 201 |
+
|
| 202 |
+
Instances. We show results on four concepts and replace them with anchor concepts, namely, (1) Grumpy Cat $\rightarrow$ Cat, (2) Snoopy $\rightarrow$ Dog, (3) Nemo $\rightarrow$ Fish, and (4) R2D2 $\rightarrow$ Robot. Figure 3 compares our two proposed methods and the loss maximization baseline with Cross-Attention fine-tuning. As the baseline method maximizes the norm between the ground truth and predicted noise, it gradually
|
| 203 |
+
|
| 204 |
+

|
| 205 |
+
Figure 8: Ablating styles with the model-based variant. The ablated model generates similar content as the pretrained model but without the unique style. More samples for target and surrounding concepts are shown in the appendix of our arXiv version.
|
| 206 |
+
|
| 207 |
+
generates noisy images when trained longer. This also leads to worse performance on surrounding concepts than our method, as shown by the quantitative metrics in Figure 3. Qualitative samples for the target concept R2D2 and its surrounding concept BB8 are also shown in Figure 7. Between our two methods, the model-based variant, i.e., minimizing the difference in prediction with respect to the pretrained model's anchor concept, leads to faster convergence and is better than or on par with the noise-based variant. The qualitative comparison in Figure 5 also shows this, particularly on the Nemo instance. Thus, we use the model-based variant for all later experiments. In Figure 4, we show the performance comparison when fine-tuning different subsets of the model weights.
|
| 208 |
+
|
| 209 |
+
As shown in Figure 5, the fine-tuned model successfully maps the target concept to the anchor concept. Fine-tuning only the text embedding performs on par with fine-tuning cross-attention layers. However, it is less robust to minor
|
| 210 |
+
|
| 211 |
+
spelling errors that still generate the same instance in the pretrained model as shown in Figure 4 (third column) and Figure 6. We show more results of ablated target and its surrounding concepts in the appendix of our arXiv version.
|
| 212 |
+
|
| 213 |
+
Style. For ablating styles, we consider four artists: (1) Van Gogh, (2) Salvador Dali, (3) Claude Monet, and (4) Greg Rutkowski, with the anchor concept as generic painting styles. Figures 4 and 8 show our method's quantitative and qualitative performance when different subsets of parameters are fine-tuned. We successfully ablate specific styles while minimally affecting related surrounding styles.
|
| 214 |
+
|
| 215 |
+
Memorized images. We select eight image memorization examples from the recent works [64, 10], four of which are shown in Figure 9. It also shows the sample generations before and after fine-tuning. The fine-tuned model generates various outputs given the same text prompt instead of the
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
Figure 9: Ablating memorized images with the model-based variant. Text-to-image diffusion models often learn to generate exact or near-exact copies of real images. We fine-tune the model to map the generated image distribution for the given text prompt to images generated with its variations. This results in the fine-tuned model generating different variations instead of copying the real image. We show more samples in the appendix of our arXiv version.
|
| 219 |
+
|
| 220 |
+
| Target Prompt | Pretrained Model (%) | Ours, Full Weights (%) |
|---|---|---|
| New Orleans House Galaxy Case | 65.5 | 0.0 |
| Portrait of Tiger in black and white by Lukas Holas | 50.0 | 0.0 |
| VAN GOGH CAFE TERASSE copy.jpg | 56.5 | 1.5 |
| Captain Marvel Exclusive Cxcp Poster Released Online By Marvel | 95.0 | 0.5 |
| Sony Boss Confirms Bloodborne Expansion is Coming | 83.5 | 0.5 |
| Ann Graham Lotz | 26.5 | 0.0 |
| < i> The Long Dark</ i> Gets First Trailer, Steam Early Access | 100.0 | 0.0 |
| A painting with letter M written on it Canvas Wall Art Print | 4.0 | 0.0 |
| Average | 60.1 | 0.3 |
|
| 221 |
+
|
| 222 |
+
Table 1: Memorization rate. We show the percentage of generated samples that are highly similar ( $\geq 0.5$ cosine similarity on SSCD) to a "memorized" image.
|
| 223 |
+
|
| 224 |
+
memorized sample. Among different parameter settings, we find that fine-tuning Full Weights gives the best results. We show the percentage of samples with $\geq 0.5$ similarity to the memorized image in Table 1. We show more sample generations and the initial set of anchor prompts for each case in the appendix of our arXiv version.
|
| 225 |
+
|
| 226 |
+
# 4.3. Additional Analysis
|
| 227 |
+
|
| 228 |
+
Single model with multiple concepts ablated. Our method can also remove multiple concepts by training on the union of the individual datasets for more training steps. We show the results of one model with all instances ablated and one model with all styles ablated in Figure 10. We use the model-based variant of our method and cross-attention fine-tuning. More samples are shown in the appendix of our arXiv version. The drop in accuracy for the ablated concepts is similar to Figure 5, while the accuracy on surrounding concepts is maintained.
|
| 229 |
+
|
| 230 |
+
The role of the anchor category. In all the above experiments, we assume that an anchor category $\mathbf{c}$ is given to overwrite the target concept. Here, we investigate the role of choosing different anchor categories for ablating Grumpy Cat and show results with the anchor concept as British Shorthair Cat
|
| 231 |
+
|
| 232 |
+

|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
Figure 10: Ablating multiple instances (left) and styles (right). Top: quantitative results show the drop in the CLIP Accuracy of the target concepts, which have been ablated, whereas the accuracy for surrounding concepts remains the same. Bottom: one sample image corresponding to each ablated target concept.
|
| 236 |
+
|
| 237 |
+
and Felidae in Figure 11. Both anchor concepts work well.

Reverse KL divergence. In our model-based concept ablation, we optimize the KL divergence between the anchor concept and target concept distributions. Here, we compare it with optimizing an approximation to the reverse KL divergence, i.e., $\mathbb{E}_{\epsilon, \mathbf{x}^*, \mathbf{c}^*, \mathbf{c}, t}[w_t \| \hat{\Phi}(\mathbf{x}_t^*, \mathbf{c}, t).\mathrm{sg}() - \hat{\Phi}(\mathbf{x}_t^*, \mathbf{c}^*, t) \|]$. Thus, the expectation of the loss is over target concept images. Figure 12 shows the quantitative comparison on ablating instance and style concepts. As we can see, it performs
|
| 238 |
+
|
| 239 |
+

|
| 240 |
+
Grumpy Cat → British Shorthair Cat (left); Grumpy Cat → Felidae (right)
|
| 241 |
+
Figure 11: The choice of anchor concepts. Our method is robust to the choice of anchor concepts. With both British shorthair cat and Felidae as anchor concepts, our method can ablate the target Grumpy Cat concept.
|
| 242 |
+
|
| 243 |
+

|
| 244 |
+
|
| 245 |
+

|
| 246 |
+
Figure 12: Reverse KL divergence objective. We show the results of optimizing the loss over target concept images for ablating instances (top) and style (bottom). Compared to using anchor concept images as training images, this performs slightly worse on ablating instances with lower CLIP Score on surrounding concepts while having similar CLIP Score on the target concept. It performs marginally better on ablating styles.
|
| 247 |
+
|
| 248 |
+
marginally better on ablating style concepts but worse on instances. In Figure 13, we show sample generations for a case where it qualitatively outperforms the forward KL divergence-based objective, namely ablating Van Gogh.
|
| 249 |
+
|
| 250 |
+
# 5. Discussion and Limitations
|
| 251 |
+
|
| 252 |
+
Although we can ablate concepts efficiently for a wide range of object instances, styles, and memorized images, our method is still limited in several ways. First, while our method overwrites a target concept, this does not guarantee that the target concept cannot be generated through a different, distant text prompt. We show an example in Figure 14 (a), where, after ablating Van Gogh, the model can still generate the Starry Night painting. However, upon discovery, one can resolve this by explicitly ablating the target concept starry night painting. Secondly, when ablating a target concept, we still sometimes observe slight degradation in its surrounding concepts, as shown in Figure 14 (c).
|
| 253 |
+
|
| 254 |
+
Our method does not prevent a downstream user with full access to model weights from re-introducing the ablated
|
| 255 |
+
|
| 256 |
+

|
| 257 |
+
|
| 258 |
+

|
| 259 |
+
|
| 260 |
+

|
| 261 |
+
Figure 13: Qualitative samples with the reverse KL divergence objective. It performs better on certain styles and can also successfully ablate famous paintings, which is not achievable with the forward KL divergence-based objective and requires additional steps, as shown in Figure 14.
|
| 262 |
+
Figure 14: Limitations. Top: (a) our method fails to remove certain paintings generated via the paintings' titles. (b) We can further ablate these concepts explicitly. Bottom: though our method is better than the baseline at preserving surrounding concepts, as shown in Figure 7, the generated samples still sometimes show degradation for surrounding concepts, e.g., Monet (c), when ablating Van Gogh, as compared to the pretrained model (d).
|
| 263 |
+
|
| 264 |
+
concept [55, 33, 17]. Even without access to the model weights, one may be able to iteratively optimize for a text prompt that elicits a particular target concept. Though that may be much more difficult than optimizing the model weights, our work does not guarantee that this is impossible.
|
| 265 |
+
|
| 266 |
+
Nevertheless, we believe every creator should have an "opt-out" capability. We take a small step towards this goal, creating a computational tool to remove copyrighted images and artworks from large-scale image generative models.
|
| 267 |
+
|
| 268 |
+
Acknowledgment. We are grateful to Gaurav Parmar, Daohan Lu, Muyang Li, Songwei Ge, Jingwan Lu, Sylvain Paris, and Bryan Russell for their helpful discussion, and to Aniruddha Mahapatra and Kangle Deng for paper proofreading. The work is partly supported by Adobe and NSF IIS-2239076.
|
| 269 |
+
|
| 270 |
+
# References
|
| 271 |
+
|
| 272 |
+
[1] ChatGPT. https://chat.openai.com/chat, 2022. 4
|
| 273 |
+
[2] CLIP retrieval. https://github.com/rom1504/clip-retrieval, 2022. 4
|
| 274 |
+
[3] Stable Diffusion. https://huggingface.co/CompVis/stable-diffusion-v-1-4-original, 2022. 2, 6
|
| 275 |
+
[4] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022. 2
|
| 276 |
+
[5] David Bau, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, and Antonio Torralba. Rewriting a deep generative model. In European Conference on Computer Vision (ECCV), 2020. 2
|
| 277 |
+
[6] David Bau, Hendrik Strobelt, William Peebles, Jonas Wulff, Bolei Zhou, Jun-Yan Zhu, and Antonio Torralba. Semantic photo manipulation with a generative image prior. arXiv preprint arXiv:2005.07727, 2020. 2
|
| 278 |
+
[7] Mikolaj Binkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. In International Conference on Learning Representations (ICLR), 2018. 6
|
| 279 |
+
[8] Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pages 141-159. IEEE, 2021. 2
|
| 280 |
+
[9] Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In 2015 IEEE symposium on security and privacy, pages 463-480. IEEE, 2015. 2
|
| 281 |
+
[10] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramér, Borja Balle, Daphne Ippolito, and Eric Wallace. Extracting training data from diffusion models. arXiv preprint arXiv:2301.13188, 2023. 1, 2, 4, 6, 7
|
| 282 |
+
[11] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646, 2022. 2
|
| 283 |
+
[12] Nicholas Carlini, Chang Liu, Ülfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX Security Symposium, volume 267, 2019. 2
|
| 284 |
+
[13] Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In USENIX Security Symposium, volume 6, 2021. 2
|
| 285 |
+
[14] Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, Jose Lezama, Lu Jiang, Ming-Hsuan Yang, Kevin Murphy, William T Freeman, Michael Rubinstein, et al. Muse: Text-to-image generation via masked generative transformers. arXiv preprint arXiv:2301.00704, 2023. 1, 2
|
| 286 |
+
[15] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. In Conference on Neural Information Processing Systems (NeurIPS), 2021. 2
|
| 287 |
+
|
| 288 |
+
[16] Ming Ding, Wendi Zheng, Wenyi Hong, and Jie Tang. Cogview2: Faster and better text-to-image generation via hierarchical transformers. arXiv preprint arXiv:2204.14217, 2022. 2
|
| 289 |
+
[17] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022. 2, 4, 9
|
| 290 |
+
[18] Rinon Gal, Moab Arar, Yuval Atzmon, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Designing an encoder for fast personalization of text-to-image models. arXiv preprint arXiv:2302.12228, 2023. 2
|
| 291 |
+
[19] Rinon Gal, Or Patashnik, Haggai Maron, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. Stylegan-nada: Clip-guided domain adaptation of image generators. ACM Transactions on Graphics (TOG), 41(4):1-13, 2022. 2
|
| 292 |
+
[20] Rohit Gandikota, Joanna Materzynska, Jaden Fiotto-Kaufman, and David Bau. Erasing concepts from diffusion models. arXiv preprint arXiv:2303.07345, 2023. 2
|
| 293 |
+
[21] Antonio Ginart, Melody Guan, Gregory Valiant, and James Y Zou. Making ai forget you: Data deletion in machine learning. Advances in neural information processing systems, 32, 2019. 2
|
| 294 |
+
[22] Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, and Stefano Soatto. Mixed-privacy forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 792–801, 2021. 2
|
| 295 |
+
[23] Aditya Golatkar, Alessandro Achille, and Stefano Soatto. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9304–9312, 2020. 2
|
| 296 |
+
[24] Zheng Gu, Wenbin Li, Jing Huo, Lei Wang, and Yang Gao. Lofgan: Fusing local representations for few-shot image generation. In IEEE International Conference on Computer Vision (ICCV), 2021. 2
|
| 297 |
+
[25] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022. 2
|
| 298 |
+
[26] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In EMNLP, 2021. 6
|
| 299 |
+
[27] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Conference on Neural Information Processing Systems (NeurIPS), 2020. 2
|
| 300 |
+
[28] Xun Huang, Arun Mallya, Ting-Chun Wang, and Ming-Yu Liu. Multimodal conditional image synthesis with product-of-experts GANs. In European Conference on Computer Vision, pages 91-109. Springer, 2022. 2
|
| 301 |
+
[29] Minguk Kang, Jun-Yan Zhu, Richard Zhang, Jaesik Park, Eli Shechtman, Sylvain Paris, and Taesung Park. Scaling up gans for text-to-image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 2
|
| 302 |
+
|
| 303 |
+
[30] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In Conference on Neural Information Processing Systems (NeurIPS), 2020. 2
|
| 304 |
+
[31] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. arXiv preprint arXiv:2210.09276, 2022. 2
|
| 305 |
+
[32] Zhifeng Kong and Kamalika Chaudhuri. Data redaction from pre-trained gans. In Workshop on Trustworthy and Socially Responsible Machine Learning, NeurIPS 2022, 2022. 2, 3
|
| 306 |
+
[33] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. arXiv preprint arXiv:2212.04488, 2022. 2, 4, 9
|
| 307 |
+
[34] Yijun Li, Richard Zhang, Jingwan Lu, and Eli Shechtman. Few-shot image generation with elastic weight consolidation. In Conference on Neural Information Processing Systems (NeurIPS), 2020. 2
|
| 308 |
+
[35] Bingchen Liu, Yizhe Zhu, Kunpeng Song, and Ahmed Elgamal. Towards faster and stabilized gan training for high-fidelity few-shot image synthesis. In International Conference on Learning Representations (ICLR), 2021. 2
|
| 309 |
+
[36] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. arXiv preprint arXiv:2206.00927, 2022. 3
|
| 310 |
+
[37] Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. In International Conference on Learning Representations (ICLR), 2016. 2
|
| 311 |
+
[38] Kevin Meng, David Bau, Alex J Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. In Advances in Neural Information Processing Systems, 2022. 2
|
| 312 |
+
[39] Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau. Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229, 2022. 2
|
| 313 |
+
[40] Eric Mitchell, Charles Lin, Antoine Bosselut, Chelsea Finn, and Christopher D Manning. Fast model editing at scale. arXiv preprint arXiv:2110.11309, 2021. 2
|
| 314 |
+
[41] Sangwoo Mo, Minsu Cho, and Jinwoo Shin. Freeze the discriminator: a simple baseline for fine-tuning gans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop, 2020. 2
|
| 315 |
+
[42] Quoc Phong Nguyen, Bryan Kian Hsiang Low, and Patrick Jaillet. Variational bayesian unlearning. Advances in Neural Information Processing Systems, 33:16025-16036, 2020. 2
|
| 316 |
+
[43] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning (ICML), 2022. 1, 2
|
| 317 |
+
[44] Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, and Daniel Cohen-Or. Mystyle: A personalized generative prior. In SIGGRAPH ASIA, 2022. 2
|
| 318 |
+
|
| 319 |
+
[45] Yotam Nitzan, Michael Gharbi, Richard Zhang, Taesung Park, Jun-Yan Zhu, Daniel Cohen-Or, and Eli Shechtman. Domain expansion of image generators. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023. 2
|
| 320 |
+
[46] Atsuhiro Noguchi and Tatsuya Harada. Image generation from small datasets via batch statistics adaptation. In IEEE International Conference on Computer Vision (ICCV), 2019. 2
|
| 321 |
+
[47] Utkarsh Ojha, Yijun Li, Jingwan Lu, Alexei A Efros, Yong Jae Lee, Eli Shechtman, and Richard Zhang. Few-shot image generation via cross-domain correspondence. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2
|
| 322 |
+
[48] Xingang Pan, Xiaohang Zhan, Bo Dai, Dahua Lin, Chen Change Loy, and Ping Luo. Exploiting deep generative prior for versatile image restoration and manipulation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(11):7474-7489, 2021. 2
|
| 323 |
+
[49] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. arXiv preprint arXiv:2302.03027, 2023. 2
|
| 324 |
+
[50] Ed Pizzi, Sreya Dutta Roy, Sugosh Nagavara Ravindra, Priya Goyal, and Matthijs Douze. A self-supervised descriptor for image copy detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14532-14542, 2022. 6
|
| 325 |
+
[51] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022. 1
|
| 326 |
+
[52] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In International Conference on Machine Learning (ICML), 2016. 2
|
| 327 |
+
[53] Daniel Roich, Ron Mokady, Amit H Bermano, and Daniel Cohen-Or. Pivotal tuning for latent-based editing of real images. ACM Transactions on Graphics (TOG), 42(1):1-13, 2022. 2
|
| 328 |
+
[54] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 1, 2
|
| 329 |
+
[55] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242, 2022. 2, 4, 9
|
| 330 |
+
[56] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. In NeurIPS, 2022. 1, 2
|
| 331 |
+
[57] Axel Sauer, Tero Karras, Samuli Laine, Andreas Geiger, and Timo Aila. Stylegan-t: Unlocking the power of gans for fast large-scale text-to-image synthesis. arXiv preprint arXiv:2301.09515, 2023. 2
|
| 332 |
+
|
| 333 |
+
[58] Patrick Schramowski, Manuel Brack, Björn Deiseroth, and Kristian Kersting. Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. 2023. 2
|
| 334 |
+
[59] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. 1, 2
|
| 335 |
+
[60] Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34:18075-18086, 2021. 2
|
| 336 |
+
[61] Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y Zhao. Glaze: Protecting artists from style mimicry by text-to-image models. arXiv preprint arXiv:2302.04222, 2023. 1, 2
|
| 337 |
+
[62] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE symposium on security and privacy (SP), pages 3-18. IEEE, 2017. 2
|
| 338 |
+
[63] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML), 2015. 2, 3
|
| 339 |
+
[64] Gowthami Somepalli, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Diffusion art or digital forgery? investigating data replication in diffusion models. arXiv preprint arXiv:2212.03860, 2022. 1, 2, 7
|
| 340 |
+
[65] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations (ICLR), 2021. 3
|
| 341 |
+
[66] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021. 2
|
| 342 |
+
[67] Ryutaro Tanno, Melanie F Pradier, Aditya Nori, and Yingzhen Li. Repairing neural networks by leaving the right past behind. arXiv preprint arXiv:2207.04806, 2022. 2, 3, 6
|
| 343 |
+
[68] Ming Tao, Hao Tang, Songsong Wu, Nicu Sebe, Xiao-Yuan Jing, Fei Wu, and Bingkun Bao. Df-gan: Deep fusion generative adversarial networks for text-to-image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 2
|
| 344 |
+
[69] Sheng-Yu Wang, David Bau, and Jun-Yan Zhu. Sketch your own gan. In IEEE International Conference on Computer Vision (ICCV), 2021. 2
|
| 345 |
+
[70] Sheng-Yu Wang, David Bau, and Jun-Yan Zhu. Rewriting geometric rules of a gan. ACM SIGGRAPH, 2022. 2
|
| 346 |
+
[71] Tengfei Wang, Ting Zhang, Bo Zhang, Hao Ouyang, Dong Chen, Qifeng Chen, and Fang Wen. Pretraining is all you need for image-to-image translation. arXiv preprint arXiv:2205.12952, 2022. 2
|
| 347 |
+
[72] Yaxing Wang, Abel Gonzalez-Garcia, David Berga, Luis Herranz, Fahad Shahbaz Khan, and Joost van de Weijer. Minegan: effective knowledge transfer from gans to target domains with few images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2
|
| 348 |
+
|
| 349 |
+
[73] Yaxing Wang, Chenshen Wu, Luis Herranz, Joost van de Weijer, Abel Gonzalez-Garcia, and Bogdan Raducanu. Transferring gans: generating images from limited data. In European Conference on Computer Vision (ECCV), 2018. 2
|
| 350 |
+
[74] Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, and Nan Duan. Nüwa: Visual synthesis pretraining for neural visual world creation. In European Conference on Computer Vision (ECCV), 2022. 2
|
| 351 |
+
[75] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
|
| 352 |
+
[76] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022. 1
|
| 353 |
+
[77] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In IEEE International Conference on Computer Vision (ICCV), 2017. 2
|
| 354 |
+
[78] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543, 2023. 2
|
| 355 |
+
[79] Miaoyun Zhao, Yulai Cong, and Lawrence Carin. On leveraging pretrained gans for generation with limited data. In International Conference on Machine Learning (ICML), 2020. 2
|
| 356 |
+
[80] Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, and Song Han. Differentiable augmentation for data-efficient gan training. In Conference on Neural Information Processing Systems (NeurIPS), volume 33, 2020. 2
|
| 357 |
+
[81] Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
|
| 358 |
+
[82] Xiaojin Zhu, Andrew B Goldberg, Mohamed Eldawy, Charles R Dyer, and Bradley Strock. A text-to-picture synthesis system for augmenting communication. In The AAAI Conference on Artificial Intelligence, 2007. 2
|
ablatingconceptsintexttoimagediffusionmodels/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:619a1bdcf8c5f6af32ece415c543b2535bc6100c400d9ba7bd549d356c5fc898
size 1490782
ablatingconceptsintexttoimagediffusionmodels/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:123c0d9386930138667eb30475c9be27fe946a29592a5ff1da4420ee93d2b584
size 449041
accflowbackwardaccumulationforlongrangeopticalflow/e5896194-f826-44fe-b36b-39eda9c29a37_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f1bc8eff3046df2ec0dbcc8e19d2fe1e078c29d6d9e0aa02e1c38a93ae3a5b32
size 84845
accflowbackwardaccumulationforlongrangeopticalflow/e5896194-f826-44fe-b36b-39eda9c29a37_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24cd8774b6239cfc31ac24b231a7206bab1c1b70df50b082f62221a5ad2f9b4a
size 105965
accflowbackwardaccumulationforlongrangeopticalflow/e5896194-f826-44fe-b36b-39eda9c29a37_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee019ba5d950b170fba841304132f13486998a8f3a3aacb566b95b1f6574a065
size 1937070
accflowbackwardaccumulationforlongrangeopticalflow/full.md
ADDED
@@ -0,0 +1,369 @@
# AccFlow: Backward Accumulation for Long-Range Optical Flow

Guangyang Wu¹,²* Xiaohong Liu¹† Kunming Luo⁵‡ Xi Liu²‡ Qingqing Zheng³

Shuaicheng Liu² Xinyang Jiang⁴ Guangtao Zhai¹ Wenyi Wang²†

¹Shanghai Jiao Tong University ²University of Electronic Science and Technology of China

³Shenzhen Institute of Advanced Technology ⁴Microsoft Research Asia

⁵Hong Kong University of Science and Technology

# Abstract

Recent deep learning-based optical flow estimators have exhibited impressive performance in generating local flows between consecutive frames. However, the estimation of long-range flows between distant frames, particularly under complex object deformation and large motion occlusion, remains a challenging task. One promising solution is to accumulate local flows explicitly or implicitly to obtain the desired long-range flow. Nevertheless, accumulation errors and flow misalignment can hinder the effectiveness of this approach. This paper proposes a novel recurrent framework called AccFlow, which recursively accumulates local flows backward using a deformable module called AccPlus. In addition, an adaptive blending module is designed along with AccPlus to alleviate the occlusion effect during accumulation and to rectify the accumulation error. Notably, we demonstrate the superiority of backward accumulation over conventional forward accumulation, which to the best of our knowledge has not been explicitly established before. To train and evaluate the proposed AccFlow, we have constructed a large-scale, high-quality dataset named CVO, which provides ground-truth optical flow labels between adjacent and distant frames. Extensive experiments validate the effectiveness of AccFlow in handling long-range optical flow estimation. Code is available at https://github.com/mulns/AccFlow.
# 1. Introduction

Optical flow is ideally a dense field of motion vectors that depicts the pixel-wise correspondence of two video frames. Since a variety of downstream applications (e.g., video editing [3, 14, 54], action recognition [42], and object tracking [1]) benefit significantly from the accuracy of flow estimation, optical flow estimation has long been a fundamental task in computer vision [41, 39, 40, 27, 26, 55, 25, 13].

Figure 1: Comparisons of our method with RAFT [48] and GMA [19] on the HS-Sintel dataset [18] (panels: Ground Truth, RAFT, GMA, Ours). Zoom-in regions are annotated in red boxes. Our method outperforms the other methods, especially in occluded areas.
Recent advances [6, 47, 48] resort to deep learning to estimate optical flow and achieve promising accuracy. Although remarkable performance has been achieved in local flow estimation between two adjacent frames, it is nontrivial to estimate the long-range flow that records the pixel correspondence between two distant frames.

Long-range optical flow is a well-grounded research topic with plenty of practical applications. For instance, in video completion [8], long-range optical flow benefits detail compensation between distant frames; in video key-point propagation [11], since long-range optical flow performs holistic pixel tracking by nature, it removes the limit on the number of tracked pixels; in video super-resolution [31], it enables better inter-frame alignment within one sliding window; and in segmentation mask propagation [51], it provides an explicit way to propagate masks to distant frames, improving interpretability compared to implicit matching. These examples offer only a glimpse of the wide applications of long-range optical flow. More significantly, success on this task has the potential to break through the performance bottleneck of related tasks.

Surprisingly, even though long-range optical flow is significant and can benefit many related tasks, few works have pursued this research line. One possible reason is the lack of public datasets that provide ground-truth bidirectional cross-frame optical flows for training and validation. In the literature, an early attempt to address this task is Lim et al. [22], who proposed a method based on forward flow accumulation, in which the flows of adjacent frames are added successively along the motion trajectories. The recent work [18] follows this idea and reasons about occlusion regions from high-frame-rate frames. Apart from these, one can simply estimate the long-range flow by employing methods designed for local flow [19, 48]. As shown in Figure 1, since the influence of occlusion grows with the time interval between two frames, the accuracy of flow estimation from these methods deteriorates severely, or even becomes unacceptable, once the time interval exceeds a threshold. In addition, one can traverse all pixels in a frame and employ pixel-tracking methods [38, 11] to produce long-range dense flow, which incurs huge computational overhead and cannot be used in applications requiring dense flow. To sum up, a practical long-range optical flow method should address the following challenging issues:

1) Occlusion. As the time interval increases, flow estimation between two distant frames suffers significant degradation owing to inter-frame occlusion. Therefore, without a specific design, common methods that target local flows perform poorly. Janai et al. [18] formulate it as an energy minimization problem, found it highly non-convex, and thus exploit the linearity of small motions and reason about occlusions from multiple frames. However, this strategy relies on high-frame-rate videos ($\geq 240$ FPS) and is not applicable to regular videos.

2) Accumulation error. Although flow accumulation is a promising solution to long-range flow estimation, it also introduces accumulation error, resulting in inaccurate estimates in non-occluded regions. Therefore, effective compensation of the accumulation error is critical. Lim et al. [22] and Janai et al. [18] constrain the photo consistency of warped frames to reduce the accumulated error. However, the photo consistency loss is not a comprehensive criterion for flow estimation, as revealed in [17, 24].

3) Efficiency. The computational complexity of long-range optical flow should be kept at an appropriate level to support downstream tasks in practice. Therefore, pixel-tracking methods [38, 11], which iteratively estimate the per-pixel long-range displacement, do not satisfy this requirement.

To address the above issues, we propose a novel framework, named AccFlow, to estimate long-range optical flow by progressively accumulating local flows backward with effective corrections. More specifically, to alleviate the occlusion effect, we propose backward accumulation, a new accumulation strategy distinct from the forward accumulation pipeline, and design a corresponding deep module, named AccPlus. More details about the difference between backward and forward accumulation can be found in Sections 3.1 and 3.2. The AccFlow framework consists of three components: an arbitrary optical flow estimator, the AccPlus module, and an adaptive blending module. The arbitrary optical flow estimator is used to estimate local flows and an initial long-range flow. AccPlus performs the backward accumulation in the feature domain. The adaptive blending module rectifies the accumulated error. Furthermore, to train and validate AccFlow, we carefully build a large-scale synthetic dataset, named CVO (cross-frame video optical flows). Different from other synthetic flow datasets [5, 6], CVO includes comprehensive cross-frame bidirectional flow annotations. CVO also includes more challenging cases with large pixel displacement and severe occlusion.

The contributions of this paper can be summarized as follows:

- We propose a novel backward accumulation strategy to alleviate long-range occlusion.
- We build CVO, a new large-scale synthetic dataset with comprehensive cross-frame optical flow annotations.
- We propose the AccFlow framework, which is simple yet effective for predicting long-range optical flow and achieves state-of-the-art results on several benchmarks.
# 2. Related Works

# 2.1. Adjacent Frame Optical Flow Estimation

Optical flow methods can be categorized into two-frame and multi-frame methods according to the number of input frames. For two-frame methods, traditional algorithms [4, 45, 36] obtain optical flow by minimizing well-designed energy functions based on the brightness constancy assumption. By training a convolutional network on a synthetic dataset, FlowNet [6] first established a deep learning approach for optical flow estimation. After that, the performance of optical flow estimation was gradually improved by various works, such as FlowNet2 [16], PWC-Net [47], and IRR-PWC [15]. Recently, RAFT [48] proposed a new paradigm for optical flow estimation by introducing a 4D correlation volume and a recurrent network. Following RAFT, graph reasoning [30], global motion aggregation [19], kernel patch attention [29], and cross-attention transformers [44] were further proposed to improve accuracy and efficiency.

The purpose of multi-frame optical flow estimation is to estimate the optical flow of adjacent frames by utilizing the temporal information of multiple video frames. Traditional methods achieve this through phase-based representations of local image structure [7, 12], spatial-temporal regularization terms [18, 52, 43], constant-velocity priors [18, 49, 37, 46, 50], constant-acceleration assumptions [2, 20], and directional priors [32]. Recently, deep learning-based multi-frame methods have been proposed to fuse flow predictions [35] or features [34, 9] from the previous frame pair into the current estimation process.

Although these optical flow methods have achieved remarkable performance, they mainly focus on estimating the optical flow of two adjacent frames, leaving the long-range optical flow of non-adjacent frames rarely explored.

# 2.2. Non-adjacent Frame Optical Flow Estimation

Lim et al. [23] proposed an early work on cross-frame optical flow, where the Lucas-Kanade method [28] is used to produce optical flow at a high frame rate and an accumulation strategy is designed to generate optical flow at a standard frame rate. After that, this accumulation method was improved by accumulation error modeling and correction [21, 22, 18]. Janai et al. [18] cast this task as an energy minimization problem and opt for a data-driven hypothesis generation strategy for optimization. Recently, Harley et al. [11] proposed a deep network, PIPs, to estimate cross-frame sparse optical flow from the perspective of per-pixel tracking over the video sequence. Although PIPs has achieved state-of-the-art performance for video pixel tracking, it is difficult to obtain long-range dense optical flow from it due to the lack of spatial coherence information. In this paper, we deeply analyze the drawbacks of existing accumulation strategies and propose a new accumulation framework for obtaining long-range dense optical flow.
# 3. Methods

Let $\mathcal{I} = \{\mathbf{I}_1, \ldots, \mathbf{I}_N\}$ denote a video sequence with $N$ image frames $\mathbf{I}_t \in \mathbb{R}^{w \times h \times 3}$ of size $w \times h$ with 3 color channels. Let $\mathbf{F}_{i,j} \in \mathbb{R}^{w \times h \times 2}$ denote the optical flow field from the reference image $\mathbf{I}_i$ to the target image $\mathbf{I}_j$. Specifically, for each pixel $\mathbf{x} \in \Omega_i = \{1, \ldots, w\} \times \{1, \ldots, h\}$ in reference image $\mathbf{I}_i$, $\mathbf{F}_{i,j}(\mathbf{x}) \in \mathbb{R}^2$ describes the apparent motion from frame $\mathbf{I}_i$ to $\mathbf{I}_j$.

Our goal is to estimate the long-range optical flow field $\mathbf{F}_{1,N}$ by accumulating all intermediate local flow fields $\{\mathbf{F}_{1,2},\dots,\mathbf{F}_{N-1,N}\}$. To achieve this, Lim et al. [22] and Janai et al. [18] formulate it as a dense pixel tracking task and obtain the long-range flow by tracking through pixel trajectories. In this paper, we refer to these approaches as forward accumulation. In Section 3.1, we revisit the forward accumulation process and provide a formalization of it. The essential problem inherent in this process is analyzed, and a solution referred to as backward accumulation is proposed in Section 3.2. Subsequently, in Section 3.3 we introduce the proposed AccFlow framework, which realizes the aforementioned backward accumulation to mitigate the occlusion effect and rectify the accumulated error. Additionally, in Section 3.4 we introduce the proposed CVO dataset, which provides synthesized videos with ground-truth long-range optical flow between distant frames.
# 3.1. Revisiting the Forward Accumulation

Generally, the accumulation process is a recursive procedure that fuses all intermediate local flows together. For brevity, we define the fusion of two adjacent optical flows $\mathbf{F}_{i,k}$ and $\mathbf{F}_{k,j}$ as $\oplus$, and we write the fused flow $\mathbf{F}_{i,j}$ as:

$$
\mathbf{F}_{i,j} = \mathbf{F}_{i,k} \oplus \mathbf{F}_{k,j} \tag{1}
$$

where $i,k,j\in [1,N]$ denote three time stamps satisfying $i < k < j$. Since the adjacent flows $\mathbf{F}_{i,k}$ and $\mathbf{F}_{k,j}$ start at different frames (i.e., frames $\mathbf{I}_i$ and $\mathbf{I}_k$), in order to obtain the target flow $\mathbf{F}_{i,j}$ which starts at frame $\mathbf{I}_i$, we need to warp the start point of each motion vector in $\mathbf{F}_{k,j}$ to align it with $\mathbf{F}_{i,k}$, and then add the two flows pixel-wise. Let $\widetilde{\mathbf{F}}_{k,j}^{i}$ denote the warped $\mathbf{F}_{k,j}$ starting at frame $\mathbf{I}_i$; we have:

$$
\widetilde{\mathbf{F}}_{k,j}^{i}(\mathbf{x}) = \mathbf{F}_{k,j}\left(\mathbf{x} + \mathbf{F}_{i,k}(\mathbf{x})\right) \tag{2}
$$

for each pixel $\mathbf{x}$ in reference image $\mathbf{I}_i$. Then we obtain the target flow $\mathbf{F}_{i,j}$ by:

$$
\mathbf{F}_{i,j}(\mathbf{x}) = \mathbf{F}_{i,k}(\mathbf{x}) + \widetilde{\mathbf{F}}_{k,j}^{i}(\mathbf{x}). \tag{3}
$$
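To make the composition operator $\oplus$ concrete, the following is a minimal NumPy/OpenCV sketch of Equations (1)-(3); it is illustrative only (not the authors' implementation) and ignores occlusion, which is handled next.

```python
# Sketch of Eqs. (1)-(3): compose F_{i,j} from F_{i,k} and F_{k,j} by sampling
# F_{k,j} at the end points of F_{i,k} and adding the two fields pixel-wise.
# Function and variable names are illustrative, not taken from the paper's code.
import numpy as np
import cv2

def compose_flows(flow_ik: np.ndarray, flow_kj: np.ndarray) -> np.ndarray:
    """flow_ik, flow_kj: (H, W, 2) float32 arrays of (dx, dy) motion vectors."""
    h, w = flow_ik.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # End point of each motion vector of F_{i,k}: where pixel x of I_i lands in I_k.
    map_x = grid_x + flow_ik[..., 0]
    map_y = grid_y + flow_ik[..., 1]
    # Bilinearly sample F_{k,j} at those end points, giving the aligned flow (Eq. 2).
    warped_kj = cv2.remap(flow_kj, map_x, map_y, interpolation=cv2.INTER_LINEAR)
    # Pixel-wise addition (Eq. 3); only valid where x is not occluded in I_k.
    return flow_ik + warped_kj
```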
However, as Janai et al. [18] revealed, the reference pixel $\mathbf{x} \in \Omega_{i}$ can be forward occluded in frame $\mathbf{I}_{k}$, which leads to wrong warping results in Equations (1)-(3). Therefore, researchers usually reason about an occlusion mask and handle the occluded regions by estimation. For brevity, we define the binary occlusion mask $\mathbf{O}_{i,k}$, where $\mathbf{O}_{i,k}(\mathbf{x}) \in \{0,1\}$ specifies whether pixel $\mathbf{x} \in \Omega_{i}$ is forward occluded from frame $\mathbf{I}_{i}$ to $\mathbf{I}_{k}$. Equations (1)-(3) are valid only when pixel $\mathbf{x} \in \Omega_{i}$ is not occluded in frame $\mathbf{I}_{k}$ (i.e., $\mathbf{O}_{i,k}(\mathbf{x}) = 0$). As for occluded pixels (i.e., $\mathbf{O}_{i,k}(\mathbf{x}) = 1$), their optical flow has to be estimated by a carefully designed occlusion solver. For ease of notation, the function $\operatorname{solveOcc}$ denotes occlusion solvers in general, and $\mathbf{P}_{i,j} \in \mathbb{R}^{w \times h \times 2}$ denotes the estimated flows in the occluded region, where

$$
\mathbf{P}_{i,j} = \operatorname{solveOcc}\left(\mathbf{F}_{i,k}, \mathbf{F}_{k,j}, \mathbf{O}_{i,k}\right). \tag{4}
$$

Therefore, Equation (3) can be re-formulated as:

$$
\mathbf{F}_{i,j}(\mathbf{x}) = \begin{cases} \mathbf{F}_{i,k}(\mathbf{x}) + \widetilde{\mathbf{F}}_{k,j}^{i}(\mathbf{x}) & \text{if } \mathbf{O}_{i,k}(\mathbf{x}) = 0, \\ \mathbf{P}_{i,j}(\mathbf{x}) & \text{if } \mathbf{O}_{i,k}(\mathbf{x}) = 1. \end{cases} \tag{5}
$$

Algorithm 1: The Forward Accumulation

Input: $\{\mathbf{F}_{t,t+1} \mid t \in [1, N-1]\}$
Output: $\mathbf{F}_{1,N}$

for $t \gets 2$ to $N-1$:
  $\mathbf{O}_{1,t} \gets \operatorname{getOcc}(\mathbf{F}_{1,t}, \mathbf{F}_{t,t+1})$
  $\mathbf{P}_{1,t+1} \gets \operatorname{solveOcc}(\mathbf{F}_{1,t}, \mathbf{F}_{t,t+1}, \mathbf{O}_{1,t})$
  for $\mathbf{x} \in \Omega_1$:
    $\widetilde{\mathbf{F}}_{t,t+1}^{1}(\mathbf{x}) \gets \mathbf{F}_{t,t+1}(\mathbf{x} + \mathbf{F}_{1,t}(\mathbf{x}))$
    if $\mathbf{O}_{1,t}(\mathbf{x}) = 0$: $\mathbf{F}_{1,t+1}(\mathbf{x}) \gets \mathbf{F}_{1,t}(\mathbf{x}) + \widetilde{\mathbf{F}}_{t,t+1}^{1}(\mathbf{x})$
    elif $\mathbf{O}_{1,t}(\mathbf{x}) = 1$: $\mathbf{F}_{1,t+1}(\mathbf{x}) \gets \mathbf{P}_{1,t+1}(\mathbf{x})$
The forward accumulation process performs the above operations recursively. Specifically, as the time index $t$ increases from 2 to $N-1$, we recursively produce $\mathbf{F}_{1,t+1}$ by fusing the previously obtained flow $\mathbf{F}_{1,t}$ and the local flow $\mathbf{F}_{t,t+1}$ as follows:

$$
\mathbf{F}_{1,t+1} = \mathbf{F}_{1,t} \oplus \mathbf{F}_{t,t+1}, \tag{6}
$$

where for each pixel $\mathbf{x} \in \Omega_1$ in reference image $\mathbf{I}_1$, we have

$$
\mathbf{F}_{1,t+1}(\mathbf{x}) = \begin{cases} \mathbf{F}_{1,t}(\mathbf{x}) + \widetilde{\mathbf{F}}_{t,t+1}^{1}(\mathbf{x}) & \text{if } \mathbf{O}_{1,t}(\mathbf{x}) = 0, \\ \mathbf{P}_{1,t+1}(\mathbf{x}) & \text{if } \mathbf{O}_{1,t}(\mathbf{x}) = 1, \end{cases} \tag{7}
$$

where the occlusion mask $\mathbf{O}_{1,t}$ is usually estimated as well. We denote occlusion reasoning methods by $\operatorname{getOcc}$ in general:

$$
\mathbf{O}_{1,t} = \operatorname{getOcc}\left(\mathbf{F}_{1,t}, \mathbf{F}_{t,t+1}\right). \tag{8}
$$

For clarity, we present the pseudocode of the forward accumulation process in Algorithm 1.
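Algorithm 1 can also be paraphrased in a few lines of NumPy, reusing `compose_flows()` from the sketch above; `get_occ` and `solve_occ` stand in for whatever occlusion reasoning and occlusion solver one chooses (hypothetical helpers, not the paper's implementation):

```python
# Rough sketch of Algorithm 1 (forward accumulation). Reuses compose_flows()
# from the previous sketch; get_occ / solve_occ are placeholder callables.
import numpy as np

def forward_accumulate(local_flows, get_occ, solve_occ):
    """local_flows: [F_{1,2}, ..., F_{N-1,N}], each an (H, W, 2) array."""
    flow_1t = local_flows[0]                         # F_{1,2}
    for flow_next in local_flows[1:]:                # F_{t,t+1}, t = 2, ..., N-1
        occ = get_occ(flow_1t, flow_next)            # O_{1,t}, (H, W) binary mask
        fused = compose_flows(flow_1t, flow_next)    # valid where O_{1,t} = 0
        filled = solve_occ(flow_1t, flow_next, occ)  # P_{1,t+1} where O_{1,t} = 1
        flow_1t = np.where(occ[..., None] == 0, fused, filled)
    return flow_1t                                   # F_{1,N}
```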
# 3.2. Backward Accumulation

Previous research [18] has shown that forward accumulation can generate high-quality motion hypotheses for visible regions, but the occluded regions limit its performance. In this subsection, we first analyze the occluded area in the forward accumulation process, and then propose a new solution to alleviate the occlusion effect.

Let $\Delta = |k - i| \geq 1$ denote the time interval; we define the proportion of the occluded area of $\mathbf{O}_{i,k}$ as:

$$
\alpha_{\Delta}^{i} = \frac{\sum_{\mathbf{x} \in \Omega_{i}} \mathbf{O}_{i,k}(\mathbf{x})}{h \times w}, \tag{9}
$$
where $\alpha_{\Delta}^{i}\in [0,1]$. We begin by analyzing the case of a one-dimensional object moving with constant velocity, assuming that the object has length $\delta w$ pixels, the canvas length is $M\gg \delta w$, the velocity of the object is $v$ pixels per frame, and the background is fixed. From time $t = 1$ to $t = k$, the proportion of the forward occluded area is calculated as:

$$
\alpha_{|k - 1|}^{1} = \frac{\min \{v \times |k - 1|, \delta w\}}{M}, \tag{10}
$$

which is positively correlated with the time interval $|k - 1|$. Similar conclusions can be extended to the two-dimensional case. Thus, the inequality

$$
\alpha_{\Delta + 1}^{i} \geq \alpha_{\Delta}^{i}, \tag{11}
$$

holds for linear motion.

Algorithm 2: The Backward Accumulation

Input: $\{\mathbf{F}_{t,t+1} \mid t \in [1, N-1]\}$
Output: $\mathbf{F}_{1,N}$

for $t \gets N-1$ down to 2:
  $\mathbf{O}_{t-1,t} \gets \operatorname{getOcc}(\mathbf{F}_{t-1,t}, \mathbf{F}_{t,N})$
  $\mathbf{P}_{t-1,N} \gets \operatorname{solveOcc}(\mathbf{F}_{t-1,t}, \mathbf{F}_{t,N}, \mathbf{O}_{t-1,t})$
  for $\mathbf{x} \in \Omega_{t-1}$:
    $\widetilde{\mathbf{F}}_{t,N}^{t-1}(\mathbf{x}) \gets \mathbf{F}_{t,N}(\mathbf{x} + \mathbf{F}_{t-1,t}(\mathbf{x}))$
    if $\mathbf{O}_{t-1,t}(\mathbf{x}) = 0$: $\mathbf{F}_{t-1,N}(\mathbf{x}) \gets \mathbf{F}_{t-1,t}(\mathbf{x}) + \widetilde{\mathbf{F}}_{t,N}^{t-1}(\mathbf{x})$
    elif $\mathbf{O}_{t-1,t}(\mathbf{x}) = 1$: $\mathbf{F}_{t-1,N}(\mathbf{x}) \gets \mathbf{P}_{t-1,N}(\mathbf{x})$
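Both accumulation algorithms depend on a getOcc estimator and on the occlusion proportion of Equation (9). As an illustration, the sketch below uses a generic forward-backward consistency check as getOcc (one common choice; the paper's own getOcc is a simple warping-based test, see Section 3.3) and computes $\alpha_{\Delta}^{i}$ from the resulting mask:

```python
# Illustrative occlusion reasoning and the occlusion proportion of Eq. (9).
# get_occ_fb is a generic forward-backward consistency check, not the paper's
# exact getOcc; it reuses compose_flows() from the earlier sketch.
import numpy as np

def get_occ_fb(flow_fw, flow_bw, tol=1.5):
    """flow_fw: F_{i,k}; flow_bw: F_{k,i}. Returns a binary (H, W) occlusion mask."""
    round_trip = compose_flows(flow_fw, flow_bw)   # ~0 wherever x stays visible
    err = np.linalg.norm(round_trip, axis=-1)
    return (err > tol).astype(np.uint8)            # 1 = forward occluded / mismatched

def occlusion_proportion(occ):
    """Eq. (9): fraction of occluded pixels in the reference frame."""
    return float(occ.mean())
```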
While the assumption of linear motion may not always hold in practical scenarios, our experiments show that Equation (11) remains valid over a large number of samples. Statistics over 5000 samples are provided as a box-plot in Figure 2, which demonstrates that $\alpha_{\Delta}^{i}$ is positively correlated with $\Delta$, as Equation (10) indicates. This conclusion is important for the following analysis.

Figure 2: Box-plot of the occlusion proportion $\alpha_{\Delta}^{1}$ over 5000 samples; the occlusion proportion (Y-axis) increases as the time interval $\Delta$ (X-axis) increases.

Figure 3: Visualization of occlusion masks during accumulation. White regions denote occluded areas.

Algorithm 1 shows that the occlusion proportion $\alpha_{t-1}^{1}$ of $\mathbf{O}_{1,t}$ grows progressively as $t$ increases, which significantly burdens the occlusion solver. Although existing techniques [19, 48] can powerfully solve occlusion with deep neural networks (DNNs), the constant growth of the occlusion proportion is still a challenge that can consume substantial computational resources.

To address this critical issue, we propose a simple solution, named backward accumulation, in which we reverse the accumulation order without introducing extra computational complexity. As analyzed in Equations (3)-(4), the alignment operation introduces errors in the forward occluded regions, and as revealed in Equation (11), these errors are positively correlated with the time interval. Each step of the accumulation process can be simplified as the alignment of two optical flows, one with a larger magnitude (obtained from the previous step) and one with a smaller magnitude (the local flow). Forward accumulation aligns the two flows along the larger one, which essentially leads to a larger occlusion area. Therefore, we propose to align the two flows along the smaller one. Specifically, as the time variable $t$ decreases from $N - 1$ to 2, we recursively produce the long-range flow $\mathbf{F}_{t-1,N}$ by fusing the previously obtained flow $\mathbf{F}_{t,N}$ and the local flow $\mathbf{F}_{t-1,t}$ as follows:
$$
\mathbf{F}_{t-1,N} = \mathbf{F}_{t-1,t} \oplus \mathbf{F}_{t,N}, \tag{12}
$$

where for each pixel $\mathbf{x} \in \Omega_{t-1}$ in reference image $\mathbf{I}_{t-1}$, we have

$$
\mathbf{F}_{t-1,N}(\mathbf{x}) = \begin{cases} \mathbf{F}_{t-1,t}(\mathbf{x}) + \widetilde{\mathbf{F}}_{t,N}^{t-1}(\mathbf{x}) & \text{if } \mathbf{O}_{t-1,t}(\mathbf{x}) = 0, \\ \mathbf{P}_{t-1,N}(\mathbf{x}) & \text{if } \mathbf{O}_{t-1,t}(\mathbf{x}) = 1, \end{cases} \tag{13}
$$

and the occlusion mask is obtained by:

$$
\mathbf{O}_{t-1,t} = \operatorname{getOcc}\left(\mathbf{F}_{t-1,t}, \mathbf{F}_{t,N}\right). \tag{14}
$$

By doing this, we form the backward accumulation process presented in Algorithm 2.
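For symmetry with the earlier forward sketch, Algorithm 2 can be paraphrased as follows; the only change is the accumulation order, so each getOcc call only sees an adjacent-frame (and therefore small) occlusion mask. This is again a sketch with placeholder helpers, not the authors' code.

```python
# Rough sketch of Algorithm 2 (backward accumulation), mirroring
# forward_accumulate(); compose_flows(), get_occ and solve_occ as before.
import numpy as np

def backward_accumulate(local_flows, get_occ, solve_occ):
    """local_flows: [F_{1,2}, ..., F_{N-1,N}], each an (H, W, 2) array."""
    flow_tN = local_flows[-1]                        # F_{N-1,N}
    for flow_prev in reversed(local_flows[:-1]):     # F_{t-1,t}, t = N-1, ..., 2
        occ = get_occ(flow_prev, flow_tN)            # O_{t-1,t}: local occlusion only
        fused = compose_flows(flow_prev, flow_tN)    # align along the *smaller* flow
        filled = solve_occ(flow_prev, flow_tN, occ)  # P_{t-1,N} where O_{t-1,t} = 1
        flow_tN = np.where(occ[..., None] == 0, fused, filled)
    return flow_tN                                   # F_{1,N}
```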
As evident from the recursive process, the occluded regions at each step are the pixels with $\mathbf{O}_{t-1,t}(\mathbf{x}) = 1, \mathbf{x}\in \Omega_{t-1}$. The occlusion proportion defined in Equation (9) is $\alpha_1^{t-1}$ here. During backward accumulation, although the reference image changes, the occluded region remains at a minimum level, particularly when compared with forward accumulation, where the occluded region progressively increases. We visualize this observation in Figure 3. The reduced occluded area enables the occlusion solver to handle occlusion more efficiently.

# 3.3. AccFlow Framework

In this section, we present AccFlow, a deep framework that employs backward accumulation to estimate accurate long-range optical flow. The framework consists of three components: an arbitrary optical flow estimator OFNet (e.g., RAFT, GMA), the AccPlus module, and the adaptive blending module. Initially, local flows $\{\mathbf{F}_{t,t+1} \mid t \in [1,N-1]\}$ are obtained from the pretrained OFNet as inputs to AccFlow. AccFlow recursively produces the long-range flow $\mathbf{F}_{t-1,N}$ as $t$ decreases from $N-1$ to 2; the recurrent structure is shown in Figure 4a.

The AccPlus module. Following Algorithm 2, we implement backward accumulation in the AccPlus module, which performs flow fusion in the feature domain as shown in Figure 4b. At each stage, given the local flow $\mathbf{F}_{t-1,t}$ and the previously obtained flow $\mathbf{F}_{t,N}$, we encode them into motion features $f_{t-1,t}$ and $f_{t,N}$ with a motion encoder. The motion encoder spatially downscales features to 1/4 of the input resolution. The occlusion mask $\mathbf{O}_{t-1,t}$ is determined by getOcc, which is a simple warping operation in this paper. More details about the encoder and getOcc are provided in the appendix. Afterwards, we warp the motion features $f_{t,N}$ to align them with $f_{t-1,t}$ by deformable convolution, producing $\widetilde{f}_{t,N}$. In AccPlus, we implement solveOcc of Algorithm 2 with a set of convolutional layers. Specifically, we concatenate $\widetilde{f}_{t,N}$ and $f_{t-1,t}$ along the channel dimension, where $f_{t-1,t}$ provides the spatial coherence information for handling occlusion. The concatenated feature is then processed by multiple convolutional layers. The resulting output features, denoted as $p_{t-1,N}$, are then merged with $\widetilde{f}_{t,N}$ and $f_{t-1,t}$ to produce the final target motion feature $f_{t-1,N}$.

The adaptive blending module. Directly decoding the output features $f_{t-1,N}$ of AccPlus and passing them to the next stage may propagate accumulation error. To mitigate this issue, an adaptive blending module is added to suppress the accumulation error by using a directly estimated long-range flow as prior information. Specifically, we first establish an initial long-range optical flow $\mathbf{F}_{t-1,N}^{ini}$ with the pretrained OFNet, and then encode it into a motion feature $f_{t-1,N}^{ini}$ with the motion encoder (sharing parameters with the one in AccPlus). Subsequently, the adaptive blending module takes the two motion features (i.e., $f_{t-1,N}^{ini}$ and $f_{t-1,N}$) and the corresponding video frames as inputs to calculate an adaptive confidence mask. The confidence mask is then used to fuse them with an attention mechanism, and the output motion features are decoded into the optical flow $\mathbf{F}_{t-1,N}$ with a motion decoder. Details of the motion decoder are provided in the appendix.
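As a rough picture of how these pieces fit together at one recurrence step, the PyTorch-style sketch below wires an OFNet, a shared motion encoder, AccPlus, the adaptive blending module, and a motion decoder. The module classes and their interfaces are hypothetical placeholders inferred from the description above; the authors' repository (https://github.com/mulns/AccFlow) is the reference implementation.

```python
# Hypothetical sketch of one AccFlow recurrence step (Sec. 3.3). The modules
# passed in (ofnet, encoder, accplus, blending, decoder) are assumed to follow
# the interfaces described in the text; this is not the authors' code.
import torch.nn as nn

class AccFlowStage(nn.Module):
    def __init__(self, ofnet, encoder, accplus, blending, decoder):
        super().__init__()
        self.ofnet = ofnet          # pretrained local/long-range flow estimator
        self.encoder = encoder      # shared motion encoder (1/4-resolution features)
        self.accplus = accplus      # deformable backward-accumulation module
        self.blending = blending    # adaptive blending with a confidence mask
        self.decoder = decoder      # motion decoder back to a flow field

    def forward(self, frame_prev, frame_last, flow_local, flow_acc):
        # Backward-accumulate F_{t-1,t} with the running F_{t,N} in feature space.
        f_acc = self.accplus(self.encoder(flow_local), self.encoder(flow_acc))
        # A directly estimated long-range flow serves as a prior for error correction.
        flow_ini = self.ofnet(frame_prev, frame_last)
        f_ini = self.encoder(flow_ini)
        # Blend the two motion features adaptively and decode F_{t-1,N}.
        return self.decoder(self.blending(f_acc, f_ini, frame_prev, frame_last))
```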
Figure 4: Illustration of the network structure. (a) The AccFlow framework: time $t$ decreases from $N-1$ to 2 to obtain the long-range flow $\mathbf{F}_{1,N}$; OFNet is an arbitrary flow estimator. (b) The AccPlus module, an efficient module that implements backward accumulation in the feature domain. The red arrows signify the encoding of images into context features by a context encoder, which adheres to the structure outlined in [48].

# 3.4. CVO Dataset

Existing optical flow datasets only provide local optical flow annotations. In order to provide ground-truth (GT) long-range optical flows, we construct a cross-frame video optical flow dataset (CVO), consisting of 12K synthetic video sequences and GT optical flow labels across different frame intervals. This dataset is essential for research on long-range optical flow estimation and other related tasks.
Dataset Collection. We generate the CVO dataset using Kubric [10], a data generation pipeline for creating semi-realistic synthetic multi-object videos. We first simulate the movement of multiple objects, and then render frames along with optical flow annotations. For each video sequence, we render 7 frames of size $512 \times 512$ at 60 FPS (frames per second) together with the bidirectional optical flow of adjacent frames. In addition, we provide cross-frame bidirectional optical flows across different frame intervals. All cross-frame flows take the first frame as reference. We further render the RGB video frames with and without random motion blur, denoted as the Clean and Final sets, respectively. We partition the video sequences into two subsets of 11K and 500 sequences, which serve as the training and validation splits, respectively.

Comparisons with Existing Datasets. The CVO dataset contains richer annotations than existing optical flow datasets [5, 6], since it provides cross-frame bidirectional optical flow annotations. Moreover, CVO contains more challenging samples with large motion and complex occlusion. We compare the flow magnitudes among the datasets by plotting statistical histograms in Figure 5. Even though FlyingThings3D [6] has a flow magnitude distribution similar to CVO, CVO contains more extremely large motions (flow magnitude $\geq 125$ pixels). According to our experiments, the proposed CVO is sufficient to support research on long-range optical flow estimation and other related tasks.

Figure 5: Histogram comparisons of the flow magnitude between the training set of CVO and public datasets such as MPI Sintel [5] and FlyingThings3D [33].
# 4. Experiments

# 4.1. Validation Benchmarks

CVO: We adopt the CVO testing set, which consists of the Clean and Final splits, as one of our validation benchmarks. Each split contains 500 sequences for evaluation. Each sequence contains 7 frames of size $512 \times 512$ and the default GT optical flow $\mathbf{F}_{1,7}^{gt}$. For experiments on other frame intervals, we provide the corresponding GT flow $\mathbf{F}_{1,i}^{gt}, i \in [2,6]$ (denoted as CVO-$i$).

HS-Sintel: MPI Sintel [5] is a commonly used optical flow benchmark generated from a realistic animated film. However, it only provides GT flows at 24 FPS. Therefore, we use the High-Speed Sintel videos [18], namely HS-Sintel, as an alternative. Specifically, Janai et al. [18] selected a subset of 19 sequences from the MPI Sintel training set (clean pass) and re-rendered them from 24 FPS to 1008 FPS at $4 \times$ resolution. Unfortunately, the GT flows at other frame rates of HS-Sintel are not publicly available. Therefore, we use the GT flows at 24 FPS of MPI Sintel as labels to evaluate the estimates obtained from the 1008 FPS video sequences of HS-Sintel.

# 4.2. Implementation Details
<table><tr><td rowspan="2">Method</td><td colspan="3">HS-Sintel</td><td colspan="3">CVO (Clean)</td><td colspan="3">CVO (Final)</td><td rowspan="2">Inference time (s)</td></tr><tr><td>ALL</td><td>NOC</td><td>OCC</td><td>ALL</td><td>NOC</td><td>OCC</td><td>ALL</td><td>NOC</td><td>OCC</td></tr><tr><td>RAFT</td><td>2.141</td><td>1.124</td><td>7.169</td><td>5.687</td><td>2.798</td><td>13.233</td><td>6.653</td><td>3.812</td><td>13.891</td><td>0.129</td></tr><tr><td>RAFT-Lim</td><td>3.868</td><td>1.845</td><td>12.63</td><td>11.96</td><td>6.573</td><td>31.10</td><td>12.34</td><td>6.938</td><td>31.45</td><td>0.956</td></tr><tr><td>RAFT-w</td><td>1.921</td><td>1.004</td><td>6.623</td><td>5.259</td><td>2.274</td><td>12.59</td><td>5.508</td><td>2.493</td><td>12.90</td><td>0.525</td></tr><tr><td>Acc+RAFT (ours)</td><td>1.709</td><td>1.163</td><td>5.639</td><td>3.170</td><td>1.623</td><td>8.113</td><td>3.283</td><td>1.714</td><td>8.261</td><td>0.813</td></tr><tr><td>GMA</td><td>2.291</td><td>1.330</td><td>7.139</td><td>5.757</td><td>2.775</td><td>13.58</td><td>6.265</td><td>3.530</td><td>13.71</td><td>0.234</td></tr><tr><td>GMA-Lim</td><td>3.871</td><td>1.764</td><td>12.79</td><td>12.22</td><td>6.708</td><td>31.40</td><td>12.42</td><td>7.038</td><td>31.61</td><td>2.159</td></tr><tr><td>GMA-w</td><td>1.924</td><td>1.043</td><td>6.458</td><td>5.136</td><td>2.137</td><td>12.49</td><td>5.515</td><td>2.502</td><td>12.81</td><td>1.167</td></tr><tr><td>Acc+GMA (ours)</td><td>1.568</td><td>1.091</td><td>5.003</td><td>3.583</td><td>1.807</td><td>8.868</td><td>3.752</td><td>1.979</td><td>9.030</td><td>1.499</td></tr><tr><td>RAFT*</td><td>2.567</td><td>1.426</td><td>7.717</td><td>4.445</td><td>1.948</td><td>11.73</td><td>4.537</td><td>2.003</td><td>11.70</td><td>0.129</td></tr><tr><td>RAFT*-Lim</td><td>3.657</td><td>1.611</td><td>12.36</td><td>23.34</td><td>6.543</td><td>32.90</td><td>13.02</td><td>7.033</td><td>33.82</td><td>0.956</td></tr><tr><td>RAFT*-w</td><td>2.139</td><td>1.059</td><td>6.963</td><td>3.738</td><td>1.052</td><td>10.41</td><td>3.808</td><td>1.162</td><td>10.14</td><td>0.525</td></tr><tr><td>Acc+RAFT* (ours)</td><td>1.383</td><td>0.930</td><td>4.546</td><td>2.634</td><td>1.155</td><td>7.302</td><td>2.707</td><td>1.249</td><td>7.295</td><td>0.813</td></tr><tr><td>GMA*</td><td>2.520</td><td>1.469</td><td>7.600</td><td>4.638</td><td>2.342</td><td>11.33</td><td>4.633</td><td>2.114</td><td>11.36</td><td>0.234</td></tr><tr><td>GMA*-Lim</td><td>3.306</td><td>1.381</td><td>11.70</td><td>11.39</td><td>5.833</td><td>31.28</td><td>11.68</td><td>6.130</td><td>31.35</td><td>2.159</td></tr><tr><td>GMA*-w</td><td>1.888</td><td>0.946</td><td>6.516</td><td>3.832</td><td>1.082</td><td>10.38</td><td>3.807</td><td>1.159</td><td>10.10</td><td>1.167</td></tr><tr><td>Acc+GMA* (ours)</td><td>1.434</td><td>0.950</td><td>4.770</td><td>2.732</td><td>1.181</td><td>7.438</td><td>2.808</td><td>1.261</td><td>7.495</td><td>1.499</td></tr><tr><td>SlowFlow</td><td>2.58†</td><td>0.87†</td><td>9.45†</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>≥ 500</td></tr><tr><td>PIPs</td><td>-</td><td>-</td><td>-</td><td>8.568</td><td>6.351</td><td>21.55</td><td>8.954</td><td>6.718</td><td>22.06</td><td>≥ 500</td></tr><tr><td>GMFlow</td><td>2.055</td><td>1.024</td><td>7.132</td><td>5.801</td><td>2.680</td><td>13.521</td><td>6.506</td><td>3.402</td><td>14.21</td><td>0.341</td></tr></table>
Table 1: Comparisons of the AccFlow framework with other methods on two benchmarks in terms of EPE $\downarrow$ over all regions (ALL), non-occluded regions (NOC), and occluded regions (OCC). The best and second-best results are marked in red and blue, respectively. '-Lim' denotes the flow accumulation method in [22]. '-w' denotes the warm-start method (details in Section 4.3). For SlowFlow [18], we refer to the data in their paper (denoted with $\dagger$). We report the inference time for 7 frames of size $512\times 512$ per sample on an NVIDIA RTX 3090 GPU.

Loss function: During the recurrent process to obtain the target flow $\mathbf{F}_{1,N}$, AccFlow also produces intermediate flows $\mathbf{F}_{t,N}, t \in [1, N-2]$. Therefore, we train the network by supervising all the flow outputs with an L1 loss:
$$
\mathcal{L} = \frac{1}{N - 2} \sum_{i = 1}^{N - 2} \left\| \mathbf{F}_{i,N} - \mathbf{F}_{i,N}^{gt} \right\|_{1}. \tag{15}
$$
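In code, this supervision is just an L1 penalty averaged over the flow outputs of every recurrence step, for example (a minimal PyTorch sketch with illustrative names, not the authors' training script):

```python
# Minimal sketch of Eq. (15): average L1 error over all accumulated flow outputs.
import torch

def accumulation_loss(pred_flows, gt_flows):
    """pred_flows, gt_flows: lists of (B, 2, H, W) tensors for F_{i,N}, i = 1..N-2."""
    terms = [torch.mean(torch.abs(pred - gt)) for pred, gt in zip(pred_flows, gt_flows)]
    return sum(terms) / len(terms)
```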
Training details: We train AccFlow on a mixture of the Clean and Final passes of the CVO training set. We augment the training data by randomly cropping the input frames into patches of size $256 \times 256$. Other training hyperparameters (e.g., learning rate and batch size) follow the default settings of [48]. By replacing OFNet with different existing optical flow estimators, we train four models for comparison. Specifically, we embed the officially pretrained RAFT [48] and GMA [19] into the AccFlow framework, respectively. On the one hand, we fix the parameters of OFNet and train the other parameters from scratch, producing Acc+RAFT and Acc+GMA. On the other hand, we fine-tune the parameters of OFNet, producing Acc+RAFT* and Acc+GMA*.

# 4.3. Alternative Approaches

Several previous works [22, 18] have focused on optical flow accumulation. For more comprehensive comparisons, we also consider other alternative approaches to estimating long-range optical flow.

Direct estimation. A naive approach is to directly estimate the long-range flow from the two distant reference images. Besides RAFT and GMA, we also compare against GMFlow [53], which formulates optical flow as a global matching problem to handle large motion. For fair comparison, we also fine-tune RAFT and GMA on the training set of CVO, denoted as RAFT* and GMA*, respectively.

Pixel tracking. Another intuitive way is to use a pixel-tracking method to iteratively estimate the per-pixel long-range displacement. We use the state-of-the-art pixel-tracking method PIPs [11] for this purpose. This process is time-consuming, so we only test it on the CVO testing set.
Warm start. Teed and Deng [48] propose to estimate optical flow with a warm start. This idea can also be applied to flow accumulation: we use the previously obtained $\mathbf{F}_{1,t}$ as the initialization when estimating $\mathbf{F}_{1,t+1}$. This procedure is essentially an implicit forward accumulation process, so we include it in the comparisons.

# 4.4. Comparisons with Existing Methods

We compare against existing methods in terms of the average end-point error (EPE) computed over all pixels (ALL) and over occluded regions (OCC). In Table 1, we compare our AccFlow with previous methods on two benchmarks; AccFlow outperforms all previous methods by a large margin, especially in occluded regions. Specifically, we notice that it is challenging for the direct methods (RAFT, GMA, RAFT*, GMA*, and GMFlow in Table 1) to produce long-range optical flow due to the extremely large motion and occlusion. For forward accumulation, the explicit methods (i.e., [22] and [18]) fail to handle the constantly increasing occlusion, which results in inferior performance. PIPs can accurately estimate sparse motion but suffers from the lack of spatial coherence information for dense flow estimation.
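For reference, the EPE numbers in Tables 1 and 2 are mean end-point errors; a minimal sketch of the metric, with an optional mask to restrict it to occluded (OCC) or non-occluded (NOC) pixels, is given below (illustrative code, not the official evaluation script):

```python
# End-point error (EPE): mean Euclidean distance between predicted and
# ground-truth flow vectors, optionally restricted to a pixel mask.
import numpy as np

def end_point_error(pred, gt, mask=None):
    """pred, gt: (H, W, 2) arrays; mask: optional boolean (H, W) region selector."""
    err = np.linalg.norm(pred - gt, axis=-1)
    return float(err[mask].mean()) if mask is not None else float(err.mean())
```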
Figure 6: Visual quality comparisons on the CVO dataset. Two small objects with large motions are highlighted with red boxes. More results can be found in the supplementary material.
<table><tr><td colspan="2">Acc+RAFT</td><td rowspan="2">AB</td><td colspan="3">HS-Sintel</td><td colspan="3">CVO (Final)</td></tr><tr><td>F.</td><td>B.</td><td>ALL</td><td>NOC</td><td>OCC</td><td>ALL</td><td>NOC</td><td>OCC</td></tr><tr><td>✓</td><td></td><td></td><td>2.238</td><td>5.758</td><td>5.758</td><td>3.328</td><td>1.914</td><td>7.716</td></tr><tr><td></td><td>✓</td><td></td><td>1.740</td><td>1.303</td><td>4.711</td><td>2.709</td><td>1.252</td><td>7.299</td></tr><tr><td>✓</td><td></td><td>✓</td><td>1.716</td><td>0.936</td><td>5.895</td><td>3.229</td><td>0.873</td><td>8.823</td></tr><tr><td></td><td>✓</td><td>✓</td><td>1.383</td><td>0.930</td><td>4.546</td><td>2.707</td><td>1.249</td><td>7.295</td></tr></table>
Table 2: Ablation study of the AccFlow framework (reported in EPE $\downarrow$). 'F' denotes a modified AccPlus that accumulates the local flows in a forward manner, 'B' is the proposed AccPlus with backward accumulation, and 'AB' denotes the adaptive blending module.

Moreover, the implicit forward accumulation method (i.e., warm start) is not specially designed for this task and falls short in tackling the occlusion problem, yet it still brings a certain performance gain over the direct methods. Compared with all these methods, the AccFlow framework decreases the average EPE by a large margin, which justifies the effectiveness of our framework for occlusion correction and for enhancing non-occluded correspondences.

Qualitative comparisons are shown in Figure 6, where two small objects with large motion are annotated with red boxes. Our AccFlow produces accurate optical flows, while the compared methods suffer from significant errors, especially in occluded areas.
# 4.5. Ablation Study

Backward vs. forward accumulation: In Section 3.2, we demonstrate that backward accumulation is less susceptible to the occlusion effect than forward accumulation. To compare the two fairly, we design a modified AccPlus module that implements forward accumulation (denoted as 'F' in Table 2). It is worth noting that this modification only changes the inputs of the network; no additional computational complexity is introduced. The detailed structure of the forward version of AccPlus is provided in the appendix. In Table 2, we compare backward accumulation with forward accumulation in terms of EPE under the same experimental settings. The backward version deals with the occluded area far more effectively than the forward version. This is because backward accumulation maintains a stable and minimal occlusion proportion at each iteration step.

Figure 7: Average EPE $\downarrow$ (ALL) of long-range flows from the compared methods over different estimation ranges.
Adaptive blending module: In Section 3.3, we design the AccFlow framework not only to address the occlusion problem but also to suppress the accumulation error. Specifically, the adaptive blending module takes a directly estimated long-range flow as a prior to rectify the accumulated flow. To evaluate this, we train networks with and without the adaptive blending module (denoted as 'AB') in Table 2. The EPE is reduced by a large margin, especially in non-occluded areas (NOC), which demonstrates the necessity of the adaptive blending module for mitigating the accumulated error.

Accumulation for different frame ranges: In Figure 7, we show the results of long-range optical flow estimation over different estimation ranges. As the range increases, the EPE of the flows from our proposed AccFlow (Acc+GMA*) increases more slowly than that of direct estimation and the warm-start method. This observation shows the robustness of our proposed framework across different estimation ranges.

# 5. Conclusion

We propose a backward accumulation strategy for long-range optical flow estimation that surpasses prior methods. AccFlow performs backward accumulation in the feature domain and corrects accumulation errors with a DNN-based adaptive blending module. Experimental results show that it effectively addresses occlusion and accumulation errors, ablation studies confirm the superiority of backward accumulation and the necessity of adaptive blending, and AccFlow notably reduces EPE on several benchmarks. In conclusion, AccFlow offers a simple, potent, and scalable solution for flow accumulation.
# 6. Acknowledgment

The work was supported in part by the Shanghai Pujiang Program under Grant 22PJ1406800, in part by the Guangdong Basic and Applied Basic Research Foundation under Project No. 2023A1515010644, and in part by the Sichuan Provincial Key Laboratory of Intelligent Terminals under Grant SCITLAB-20016.
# References
|
| 310 |
+
|
| 311 |
+
[1] Aseem Behl, Omid Hosseini Jafari, Siva Karthik Mustikovela, Hassan Abu Alhaija, Carsten Rother, and Andreas Geiger. Bounding boxes, segmentations and object coordinates: How important is recognition for 3d scene flow estimation in autonomous driving scenarios? In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 2574-2583, 2017. 1
|
| 312 |
+
[2] Michael J Black and Padmanabhan Anandan. Robust dynamic motion estimation over time. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 296-203, 1991. 3
|
| 313 |
+
[3] Nicolas Bonneel, James Tompkin, Kalyan Sunkavalli, Deqing Sun, Sylvain Paris, and Hanspeter Pfister. Blind video temporal consistency. ACM Trans. Graph., 34:196:1-196:9, 2015. 1
|
| 314 |
+
[4] Thomas Brox, Andres Bruhn, Nils Papenberg, and Joachim Weickert. High accuracy optical flow etimation based on a theory for warping. In Proc. Eur. Conf. Comput. Vis. (ECCV), pages 25-36, 2004. 2
|
| 315 |
+
[5] Daniel J Butler, Jonas Wulff, Garrett B Stanley, and Michael J Black. A naturalistic open source movie for optical flow evaluation. In Proc. Eur. Conf. Comput. Vis. (ECCV), pages 611-625, 2012. 2, 6
|
| 316 |
+
[6] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 2758-2766, 2015. 1, 2, 6
|
| 317 |
+
[7] David J Fleet and Allan D Jepson. Computation of component image velocity from local phase information. Int. J. Comput. Vis., 5:77-104, 1990. 3
|
| 318 |
+
[8] Chen Gao, Ayush Saraf, Jia-Bin Huang, and Johannes Kopf. Flow-edge guided video completion. In Proc. Eur. Conf. Comput. Vis. (ECCV), pages 713-729, 2020. 1
|
| 319 |
+
[9] Pierre Godet, Alexandre Boulch, Aurélien Plyer, and Guy Le Besnerais. Starflow: A spatiotemporal recurrent cell for lightweight multi-frame optical flow estimation. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 2462-2469, 2021. 3
|
| 320 |
+
[10] Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, et al. Kubric: A scalable dataset generator. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 3749-3761, 2022. 6
|
| 321 |
+
[11] Adam W. Harley, Zhaoyuan Fang, and Katerina Fragkiadaki. Particle video revisited: Tracking through occlusions using point trajectories. In Proc. Eur. Conf. Comput. Vis. (ECCV), 2022. 1, 2, 3, 7
|
| 322 |
+
[12] David J. Heeger. Optical flow using spatiotemporal filters. Int. J. Comput. Vis., 1:279-302, 1988. 3
|
| 323 |
+
[13] Shan Huang, Xiaohong Liu, Tao Tan, Menghan Hu, Xiaoer Wei, Tingli Chen, and Bin Sheng. TransMRSR: Transformer-based self-distilled generative prior for brain MRI super-resolution. arXiv preprint arXiv:2306.06669, 2023. 1
|
| 324 |
+
|
| 325 |
+
[14] Zhewei Huang, Tianyuan Zhang, Wen Heng, Boxin Shi, and Shuchang Zhou. Real-time intermediate flow estimation for video frame interpolation. In Proc. Eur. Conf. Comput. Vis. (ECCV), pages 624-642, 2022. 1
|
| 326 |
+
[15] Junhwa Hur and Stefan Roth. Iterative residual refinement for joint optical flow and occlusion estimation. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 5754-5763, 2019. 2
|
| 327 |
+
[16] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2462-2470, 2017. 2
|
| 328 |
+
[17] Joel Janai, Fatma Guney, Anurag Ranjan, Michael Black, and Andreas Geiger. Unsupervised learning of multi-frame optical flow with occlusions. In Proc. Eur. Conf. Comput. Vis. (ECCV), pages 690-706, 2018. 2
|
| 329 |
+
[18] Joel Janai, Fatma Guney, Jonas Wulff, Michael J Black, and Andreas Geiger. Slow flow: Exploiting high-speed cameras for accurate and diverse optical flow reference data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3597-3607, 2017. 1, 2, 3, 4, 6, 7
|
| 330 |
+
[19] Shihao Jiang, Dylan Campbell, Yao Lu, Hongdong Li, and Richard Hartley. Learning to estimate hidden motions with global motion aggregation. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 9772-9781, 2021. 1, 2, 4, 7
|
| 331 |
+
[20] Ryan Kennedy and Camillo J Taylor. Optical flow with geometric occlusion estimation and fusion of multiple frames. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 364-377, 2015. 3
|
| 332 |
+
[21] SukHwan Lim, John Apostolopoulos, and AE Gamal. Benefits of temporal oversampling in optical flow estimation. In Proc. IEEE Int. Conf. Image Process. (ICIP), pages 2567-2570, 2004. 3
|
| 333 |
+
[22] SukHwan Lim, John G. Apostolopoulos, and Abbas El Gamal. Optical flow estimation using temporally oversampled video. IEEE Trans. Image Process., 14:1074-1087, 2005. 2, 3, 7
|
| 334 |
+
[23] SukHwan Lim and Abbas El Gamal. Optical flow estimation using high frame rate sequences. In Proc. IEEE Int. Conf. Image Process. (ICIP), pages 925-928, 2001. 3
|
| 335 |
+
[24] Pengpeng Liu, Michael Lyu, Irwin King, and Jia Xu. Selfflow: self-supervised learning of optical flow. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 4571-4580, 2019. 2
|
| 336 |
+
[25] Xiaohong Liu, Lei Chen, Wenyi Wang, and Jiying Zhao. Robust multi-frame super-resolution based on spatially weighted half-quadratic estimation and adaptive BTV regularization. IEEE Trans. Image Process., 27(10):4971-4986, 2018. 1
|
| 337 |
+
[26] Xiaohong Liu, Lingshi Kong, Yang Zhou, Jiying Zhao, and Jun Chen. End-to-end trainable video super-resolution based on a new mechanism for implicit motion estimation and compensation. In IEEE Winter Conference on Applications of Computer Vision, WACV 2020, Snowmass Village, CO, USA, March 1-5, 2020, pages 2405-2414. IEEE, 2020. 1
|
| 338 |
+
|
| 339 |
+
[27] Xiaohong Liu, Kangdi Shi, Zhe Wang, and Jun Chen. Exploit camera raw data for video super-resolution via hidden markov model inference. IEEE Trans. Image Process., 30:2127-2140, 2021. 1
|
| 340 |
+
[28] Bruce D. Lucas and Takeo Kanade. An iterative image registration technique with an application to stereo vision. In Proc. Int. Joint Conf. Artificial Intelligence (IJCAI), pages 674-679, Vancouver, 1981. 3
|
| 341 |
+
[29] Ao Luo, Fan Yang, Xin Li, and Shuaicheng Liu. Learning optical flow with kernel patch attention. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 8906-8915, 2022. 2
|
| 342 |
+
[30] Ao Luo, Fan Yang, Kunming Luo, Xin Li, Haoqiang Fan, and Shuaicheng Liu. Learning optical flow with adaptive graph reasoning. In AAAI, pages 1890-1898, 2022. 2
|
| 343 |
+
[31] Ziwei Luo, Youwei Li, Shen Cheng, Lei Yu, Qi Wu, Zhihong Wen, Haoqiang Fan, Jian Sun, and Shuaicheng Liu. BSRT: improving burst super-resolution with Swin Transformer and flow-guided deformable alignment. In IEEE Conf. Comput. Vis. Pattern Recog. Worksh. (CVPRW), pages 997-1007, 2022. 1
|
| 344 |
+
[32] Daniel Maurer, Michael Stoll, and Andrés Bruhn. Directional priors for multi-frame optical flow. In Proc. Brit. Mach. Vis. Conf. (BMVC), page 106, 2018. 3
|
| 345 |
+
[33] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 4040-4048, 2016. 6
|
| 346 |
+
[34] Michal Neoral, Jan Šochman, and Jiri Matas. Continual occlusion and optical flow estimation. In ACCV, pages 159-174, 2018. 3
|
| 347 |
+
[35] Zhile Ren, Orazio Gallo, Deqing Sun, Ming-Hsuan Yang, Erik B Sudderth, and Jan Kautz. A fusion approach for multiframe optical flow estimation. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 2077-2086, 2019. 3
|
| 348 |
+
[36] Jerome Revaud, Philippe Weinzaepfel, Zaid Harchaoui, and Cordelia Schmid. Epicflow: Edge-preserving interpolation of correspondences for optical flow. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 1164-1172, 2015. 2
|
| 349 |
+
[37] Agustín Salgado and Javier Sánchez. Temporal constraints in large optical flow estimation. In International Conference on Computer Aided Systems Theory, pages 709-716, 2007. 3
|
| 350 |
+
[38] Peter Sand and Seth J. Teller. Particle video: Long-range motion estimation using point trajectories. Int. J. Comput. Vis., 80:72-91, 2008. 2
|
| 351 |
+
[39] Zhihao Shi, Xiaohong Liu, Chengqi Li, Linhui Dai, Jun Chen, Timothy N. Davidson, and Jiying Zhao. Learning for unconstrained space-time video super-resolution. IEEE Trans. Broadcast., 68(2):345-358, 2022. 1
|
| 352 |
+
[40] Zhihao Shi, Xiaohong Liu, Kangdi Shi, Linhui Dai, and Jun Chen. Video frame interpolation via generalized deformable convolution. IEEE Trans. Multim., 24:426-439, 2022. 1
|
| 353 |
+
[41] Zhihao Shi, Xiangyu Xu, Xiaohong Liu, Jun Chen, and Ming-Hsuan Yang. Video frame interpolation transformer.
|
| 354 |
+
|
| 355 |
+
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 17461-17470. IEEE, 2022. 1
|
| 356 |
+
[42] K Simonyan and A Zisserman. Two-stream convolutional networks for action recognition in videos. In Proc. Adv. Neural Inform. Process. Syst. (NeurIPS), pages 568-576, 2014. 1
|
| 357 |
+
[43] Michael Stoll, Sebastian Volz, and Andrés Bruhn. Joint trilateral filtering for multiframe optical flow. In Proc. IEEE Int. Conf. Image Process. (ICIP), pages 3845-3849, 2013. 3
|
| 358 |
+
[44] Xiuchao Sui, Shaohua Li, Xue Geng, Yan Wu, Xinxing Xu, Yong Liu, Rick Goh, and Hongyuan Zhu. Craft: Cross-attentional flow transformer for robust optical flow. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 17602-17611, 2022. 2
|
| 359 |
+
[45] Deqing Sun, Stefan Roth, and Michael J Black. Secrets of optical flow estimation and their principles. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2432-2439, 2010. 2
|
| 360 |
+
[46] Deqing Sun, Erik Sudderth, and Michael Black. Layered image motion with explicit occlusions, temporal consistency, and depth ordering. Proc. Adv. Neural Inform. Process. Syst. (NeurIPS), 23:2226-2234, 2010. 3
|
| 361 |
+
[47] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 8934-8943, 2018. 1, 2
|
| 362 |
+
[48] Zachary Teed and Jia Deng. Raft: Recurrent all-pairs field transforms for optical flow. In Proc. Eur. Conf. Comput. Vis. (ECCV), pages 402-419, 2020. 1, 2, 4, 6, 7
|
| 363 |
+
[49] Sebastian Volz, Andres Bruhn, Levi Valgaerts, and Henning Zimmer. Modeling temporal coherence for optical flow. In Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pages 1116-1123, 2011. 3
|
| 364 |
+
[50] Chia-Ming Wang, Kuo-Chin Fan, Cheng-Tzu Wang, and Tong-Yee Lee. Estimating optical flow by integrating multiframe information. Journal of Information Science & Engineering, 24:1719-1731, 2008. 3
|
| 365 |
+
[51] Xiaolong Wang, Allan Jabri, and Alexei A Efros. Learning correspondence from the cycle-consistency of time. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pages 2566-2576, 2019. 1
|
| 366 |
+
[52] Joachim Weickert and Christoph Schnörr. Variational optic flow computation with a spatio-temporal smoothness constraint. Journal of mathematical imaging and vision, 14:245-255, 2001. 3
|
| 367 |
+
[53] Haofei Xu, Jing Zhang, Jianfei Cai, Hamid Rezatofighi, and Dacheng Tao. Gmflow: Learning optical flow via global matching. In CVPR, pages 8121-8130, 2022. 7
|
| 368 |
+
[54] Yichao Yan, Bingbing Ni, Wendong Zhang, Jun Tang, and Xiaokang Yang. Cross-modality motion parameterization for fine-grained video prediction. Computer Vision and Image Understanding, 183:11-19, 2019. 1
|
| 369 |
+
[55] Guanghao Yin, Zefan Qu, Xinyang Jiang, Shan Jiang, Zhenhua Han, Ningxin Zheng, Xiaohong Liu, Huan Yang, Yuqing Yang, Dongsheng Li, and Lili Qiu. Online video streaming super-resolution with adaptive look-up table fusion. arXiv preprint arXiv:2303.00334. 1
|
accflowbackwardaccumulationforlongrangeopticalflow/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:62a88fbd259483383d1d1629b234a86b22124a0aa493b59eebb88104cdcf25d5
|
| 3 |
+
size 394392
|
accflowbackwardaccumulationforlongrangeopticalflow/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f9ba51676c0a12603f96a4205ea749fdbefb4290e59ffbf1389a7e8357423cf0
|
| 3 |
+
size 476420
|
accurate3dfacereconstructionwithfacialcomponenttokens/640e260a-327e-4872-9b6d-5d5661b519dc_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e78ed1d9c622962fd43a919be69cdd9acf01cf3c1dd57c6861ac551fd35dc80f
|
| 3 |
+
size 78844
|
accurate3dfacereconstructionwithfacialcomponenttokens/640e260a-327e-4872-9b6d-5d5661b519dc_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4a3620b3a79e065e2ac76fe1d893a1a1783160e221804d63bf40285fca3271c9
|
| 3 |
+
size 97268
|
accurate3dfacereconstructionwithfacialcomponenttokens/640e260a-327e-4872-9b6d-5d5661b519dc_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:613a753679859cc78840494bc34a50d4c777db9d84d84d2cf0962cc9a8393cd7
|
| 3 |
+
size 5613904
|
accurate3dfacereconstructionwithfacialcomponenttokens/full.md
ADDED
|
@@ -0,0 +1,336 @@
| 1 |
+
# Accurate 3D Face Reconstruction with Facial Component Tokens
|
| 2 |
+
|
| 3 |
+
Tianke Zhang$^{1,2*}$, Xuangeng Chu$^{2}$, Yunfei Liu$^{2}$, Lijian Lin$^{2}$, Zhendong Yang$^{1,2}$, Zhengzhuo Xu$^{1,2}$, Chengkun Cao$^{2}$, Fei Yu$^{3}$, Changyin Zhou$^{3}$, Chun Yuan$^{1\dagger}$, Yu Li$^{2\dagger}$
$^{1}$Tsinghua Shenzhen International Graduate School, $^{2}$International Digital Economy Academy (IDEA), $^{3}$Vstring Inc.
|
| 4 |
+
|
| 5 |
+

|
| 6 |
+
Figure 1: Exemplar 3D face reconstruction results of our method named TokenFace (first row: inputs, second row: results). Our approach achieves faithful reconstruction results for challenging cases, including varying sizes, ages, poses, races, and partial occlusions. Additionally, our method enables accurate transferring a target expression to the persons (third row).
|
| 7 |
+
|
| 8 |
+

|
| 9 |
+
Target Expressions
|
| 10 |
+
|
| 11 |
+

|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Accurately reconstructing 3D faces from monocular images and videos is crucial for various applications, such as digital avatar creation. However, the current deep learning-based methods face significant challenges in achieving accurate reconstruction with disentangled facial parameters and ensuring temporal stability in single-frame methods for 3D face tracking on video data. In this paper, we propose TokenFace, a transformer-based monocular 3D face reconstruction model. TokenFace uses separate tokens for different facial components to capture information about different facial parameters and employs temporal transformers to capture temporal information from video data. This design can naturally disentangle different facial components and is flexible to both 2D and 3D training data. Trained on hybrid 2D and 3D data, our model shows its power in accurately reconstructing faces from images and producing stable results for video data. Experimental results on popular benchmarks NoW and Stirling demonstrate that TokenFace achieves state-of-the-art performance, outperforming existing methods on all metrics by a large margin.
|
| 16 |
+
|
| 17 |
+
# 1. Introduction
|
| 18 |
+
|
| 19 |
+
The analysis and reconstruction of human faces from images are critical research topics in computer vision due to their vast range of applications. A vital technology is the creation of a detailed 3D model that accurately captures both the geometry and appearance of the face from visual data. However, creating such a model is challenging when working with monocular input where there is no access to 3D information from multiple views or sensors.
|
| 20 |
+
|
| 21 |
+
In this work, we focus on the task of 3D face reconstruction using a 3D deformable face model, where the problem can be expressed as estimating the parameters of the face model. While optimization-based approaches have been proposed to solve this model-fitting problem, recent advances in deep learning have made it possible for neural network-based methods to predict these parameters from training data. However, unlike 2D tasks that are easy to annotate, the lack of 3D ground-truth data makes 3D face reconstruction challenging. As a result, many existing methods [17, 14, 50] only supervise the 2D rendering results during training, leading to sub-optimal performance in 3D space. For instance, DECA [17] and Deep3dFaceRecon [14] leverage self-supervised training on 2D images, while MICA [50] uses 2D-3D data pairs but only reconstructs the face shape
|
| 22 |
+
|
| 23 |
+
without expression, pose, and other information. Dense Landmark [43], on the other hand, requires a large amount of annotated data for training and relies on a dense landmark for 3D face fitting.
|
| 24 |
+
|
| 25 |
+
Most previous deep learning-based methods use Convolutional Neural Networks (CNNs) to regress all parameters from the input image as a single vector. However, these approaches suffer from entanglement between different facial parameters, which limits further improvements in performance. Some methods [10, 4] have also tried transformer models for face reconstruction, but they either rely on conditional GANs or follow the original transformer structure, so their reconstructions do not faithfully reflect the input face. In this work, we propose TokenFace, a transformer-based 3D face reconstruction model that utilizes six tokens, for shape, expression, jaw pose, camera, texture, and lighting, to effectively distinguish different image features and decouple the corresponding parameters. This improves the disentanglement of individual components and leads to more accurate reconstruction. Our model is trained with large-scale hybrid datasets consisting of both 3D scanned data and 2D in-the-wild face images using a multi-stage training pipeline. During training, we can flexibly adapt the supervision to the data type, using rendered 2D images for the 2D image datasets and mesh vertex supervision for datasets with 3D ground truth. To capture temporal information in video datasets, we introduce a temporal transformer between adjacent frames. Our loss functions are designed for the specific data types and target vertices, identity, image consistency, and other factors. Our method outperforms previous approaches in 3D face reconstruction, as evidenced by its performance on the NoW and Stirling benchmarks, with at least a $10\%$ improvement in accuracy. Example results of our method are illustrated in Fig. 1. Our model also exhibits stable performance in video reconstruction thanks to our temporal modeling, enabling practical applications such as facial expression transfer between different avatars, as shown in the last row of Fig. 1.
|
| 26 |
+
|
| 27 |
+
Our contribution can be summarized as follows.
|
| 28 |
+
|
| 29 |
+
- We present a framework for 3D face reconstruction from monocular images based on transformers. Our approach uses separate tokens to improve the disentanglement of individual components for more accurate reconstruction.
|
| 30 |
+
- We train our network on large-scale hybrid dataset of both 2D and 3D data containing a large variety of faces and expressions using our training pipeline.
|
| 31 |
+
- Our framework can be naturally extended to 3D face reconstruction in videos with straightforward temporal modeling to improve temporal stability.
|
| 32 |
+
- Our model surpasses other methods by achieving no
|
| 33 |
+
|
| 34 |
+
tably lower reconstruction errors on benchmarks. For example, we achieve mean errors of 0.95 on NoW benchmark (previous best is 1.11) and 0.95 on Stirling benchmark (previous best is 1.16).
|
| 35 |
+
|
| 36 |
+
# 2. Related Work
|
| 37 |
+
|
| 38 |
+
3D Face Morphable Model. The 3D face morphable model (3DMM) [5, 32, 6, 7, 28, 45, 3, 27] has long been studied in 3D face research; it usually defines a linear model to represent the geometric structure and texture of a human face. Recently, more and more 3D face datasets have been proposed, providing more identities and expressions. These data make it possible to construct 3DMM models with better generalization ability [6] and better expression deformability [41, 7, 28, 45, 3]. Some 3DMM methods [27, 45, 3] have also achieved higher reconstruction accuracy with the development of data-capturing systems. Besides the increasing quality of data, many works [41, 7, 45, 31, 1, 38, 27, 20] improve 3DMM models from other aspects. Several of these works [7, 45, 41] design bi-linear or multi-linear models that decompose the identity and expression of faces. Nonlinear models have also been used to improve the accuracy of facial deformation: Neumann et al. [31] decompose captured face mesh sequences into sparse and localized deformation components, and generative adversarial networks (GANs) have also been used to build non-linear 3DMMs [1, 38, 27, 20].
|
| 39 |
+
|
| 40 |
+
Monocular Face Reconstruction Based on 3DMM. 3DMM-based methods play a crucial role in 3D reconstruction from monocular images. Following the proposal of the FLAME model [28], many FLAME-based reconstruction methods [18, 49, 14, 21, 17, 50] achieve promising results in monocular face reconstruction. These methods generally follow a self-supervised or weakly supervised training framework, and the 3D face reconstruction task is reduced to a model-fitting problem with the help of a 3DMM. Early methods [34, 32, 37] regress the 3DMM parameters from facial landmarks, while current deep models [9, 45, 17, 3] usually predict the parameters directly from the input image. Furthermore, to reconstruct facial details that are not parameterized by the 3DMM more accurately, some works [9, 23, 45, 17, 3] use a two-stage framework that first predicts a rough face mesh and then refines the facial details through depth and displacement maps.
|
| 41 |
+
|
| 42 |
+
Transformer for Face Analysis. Following the success of transformers in natural language processing (NLP) tasks [40, 15], vision transformers [16, 29] have also achieved the state-of-the-art performance in many computer vision tasks. ViT [16] splits an image into patches and flattens the patches as a token sequence. Zhong et al. [48] modifies ViT and shows competitive performance on face recognition. Based on the
|
| 43 |
+
|
| 44 |
+

|
| 45 |
+
Figure 2: A comparison of existing methods and ours. Previous works typically rely on a CNN to predict a single 1D vector and segment it into different facial model parameters (colored boxes). Instead, we use separate tokens to encode independent facial components and aggregate this information with image data to obtain updated tokens, which are then used to predict the face parameters.
|
| 46 |
+
|
| 47 |
+
large-scale pre-training and transfer ability of transformers, FaRL [47] achieves superior performance on facial analysis tasks including face parsing and face alignment.
|
| 48 |
+
|
| 49 |
+
# 3. Method
|
| 50 |
+
|
| 51 |
+
# 3.1. Building TokenFace
|
| 52 |
+
|
| 53 |
+
Traditional methods for 3D face reconstruction typically rely on Convolutional Neural Networks (CNNs) to extract facial features and recover 3D information. These methods take a face image as input and regress a 1D vector, which is later segmented into different facial model parameters such as shape, expression, and pose, as shown in Fig. 2 (left). However, these approaches interleave the different facial components from the beginning, making it difficult to disentangle their effects and limiting reconstruction accuracy, because CNNs can only extract these features from the last layer using global pooling and linear layers. As a result, the coupling between pose and expression can lead to inaccurate 3D face reconstruction.
|
| 54 |
+
|
| 55 |
+
To improve the disentanglement between different facial components in 3D face reconstruction, we propose a novel network structure based on the Vision Transformer (ViT) architecture [16]. Recent studies have shown that ViT's feature extraction ability often outperforms that of CNN networks when trained on large-scale data. We directly replace the ResNet50 backbone used in [17] with ViT-Base [16] and use a global token, i.e. a vector, as the task token to predict the facial parameters, similar to the behavior of the CNN-based model. As shown in Table 1, changing the backbone does improve reconstruction accuracy, but only marginally (the test setup is the same as the ones in our ablation study whose details are presented later in Sec. 5). Therefore, simply replacing the backbone is insufficient to achieve significant improvements in results.
|
| 56 |
+
|
| 57 |
+
To adapt ViT to 3D face reconstruction task, we introduce six tokens to represent shape, expression, jaw pose, camera pose, texture, and lighting, respectively. These tokens are combined with image tokens to create a comprehensive input,
|
| 58 |
+
|
| 59 |
+
<table><tr><td rowspan="2">Backbone</td><td colspan="3">Reconstruction Error</td></tr><tr><td>Median</td><td>Mean</td><td>Std</td></tr><tr><td>ResNet50 [22]</td><td>1.10</td><td>1.41</td><td>1.19</td></tr><tr><td>ViT-Base [16]</td><td>1.08</td><td>1.35</td><td>1.15</td></tr></table>
|
| 60 |
+
|
| 61 |
+
Table 1: Results of naively changing backbone from CNN to ViT. We test the same training and testing scheme using both ResNet50 and ViT-Base. The model with ViT-Base achieves slightly better performance.
|
| 62 |
+
|
| 63 |
+
which enables greater independence between different facial component parameters, reducing mutual influence. Fig. 2 (right) illustrates our method, while Fig. 3 shows the main structure of our TokenFace. We first divide the input image into patches, flatten them, and add position embeddings to build image tokens. Then, we append the six learnable facial component tokens to the image token and feed them into the transformer blocks together. Specifically, the six tokens are denoted as: $\beta$ for shape, $\psi$ for expression, $\theta$ for jaw pose, $C$ for the camera's affine matrix, $\alpha$ for texture, and $\iota$ for light. We use six FLAME headers corresponding to our FLAME tokens to estimate the FLAME parameters after encoding. The resulting output comprises 300-dimensional shape parameters, 100-dimensional expression parameters, 3-dimensional jaw pose parameters, 7-dimensional camera parameters, 50-dimensional texture parameters, and 27-dimensional light parameters. The camera parameters $C$ consist of scale (1-dim), rotation (3-dim), and translation (3-dim). Finally, we reconstruct the 3D face mesh based on the FLAME model and our predicted parameters.
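The sketch below illustrates this token design in PyTorch: six learnable component tokens are appended to the image patch tokens, processed jointly by a transformer, and mapped to FLAME parameters with per-token heads. The output dimensions are the ones listed above; the module names and layer sizes are assumptions for illustration, not the paper's released implementation.

```python
import torch
import torch.nn as nn

# output dimensions of the six FLAME heads, as listed in the text above
PARAM_DIMS = {"shape": 300, "exp": 100, "jaw": 3, "cam": 7, "tex": 50, "light": 27}

class ComponentTokenHead(nn.Module):
    """Sketch (names/sizes are assumptions): append six learnable facial
    component tokens to the image patch tokens, encode them jointly, and read
    FLAME parameters from the updated component tokens with per-token FC heads."""
    def __init__(self, embed_dim=768, depth=12, num_heads=12):
        super().__init__()
        self.tokens = nn.Parameter(torch.zeros(1, len(PARAM_DIMS), embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.heads = nn.ModuleDict({k: nn.Linear(embed_dim, d) for k, d in PARAM_DIMS.items()})

    def forward(self, image_tokens):                      # image_tokens: (B, N, C)
        b = image_tokens.shape[0]
        x = torch.cat([image_tokens, self.tokens.expand(b, -1, -1)], dim=1)
        x = self.encoder(x)
        comp = x[:, -len(PARAM_DIMS):]                    # the six updated component tokens
        return {k: head(comp[:, i]) for i, (k, head) in enumerate(self.heads.items())}
```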
|
| 64 |
+
|
| 65 |
+
There are two more interesting advantages of using individual face component tokens. First, we can generate a metrical face in 3D using shape, expression, and jaw pose and set other tokens to zero. With additional camera pose, texture, and lighting, the face can be projected into the 2D image plane. This is particularly useful when training on a large-scale hybrid dataset of both 2D and 3D data. This hybrid dataset can contain a larger variety of faces and facial expressions, allowing for training a better 3D face reconstruction model. The second advantage of using separate tokens for each facial component is the ability to create a simple temporal modeling. This approach can be used to build a temporal transformer on top of our TokenFace model, which can aggregate temporal information from different frames at token level and generate more stable results for video prediction.
|
| 66 |
+
|
| 67 |
+
# 3.2. Hybrid Training Strategy
|
| 68 |
+
|
| 69 |
+
To take full advantage of publicly available 2D and 3D datasets, we design a hybrid training strategy that enables model learning on different types of data. Specifically, 3D data provide precise FLAME meshes for full supervision. Meanwhile, 2D images provide a large amount of photometric
|
| 70 |
+
|
| 71 |
+

|
| 72 |
+
Figure 3: Illustration of our pipeline. We partition input image into patches, flatten them, and add position embedding to build the image tokens. The learnable facial component tokens, including shape, expression, jaw pose, camera pose, texture, and lighting, are appended to the image token sequence and fed to the transformer blocks together. The output facial component tokens at the end of the transformer are converted to FLAME parameters using FLAME heads (FC layers). 3D face mesh is recovered from the FLAME model and our predicted parameters. For 2D and 3D data, different losses are used.
|
| 73 |
+
|
| 74 |
+
information (e.g., texture, lighting, etc.) for self-supervision. Our main motivation is to simultaneously learn these two types of supervision for a better 3D face reconstruction. Mathematically, the overall training target is to minimize
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
\mathcal{L}_{all} = \lambda_{3D} \cdot \mathcal{L}_{3D} + \lambda_{2D} \cdot \mathcal{L}_{2D}, \tag{1}
|
| 78 |
+
$$
|
| 79 |
+
|
| 80 |
+
where $\lambda_{3D},\lambda_{2D}$ are two hyper-parameters to balance loss terms. $\mathcal{L}_{2D}$ and $\mathcal{L}_{3D}$ denote the loss for 2D data and 3D data, respectively.
|
| 81 |
+
|
| 82 |
+
# 3.2.1 Full Supervision for 3D Data
|
| 83 |
+
|
| 84 |
+
For the 3D dataset, we can fully supervise the vertex of mesh directly. Therefore, the 3D-related loss $\mathcal{L}_{3D}$ is defined as
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
\mathcal{L}_{3D} = \lambda_{\text{mesh}} \mathcal{L}_{\text{mesh}} + \lambda_{vc} \mathcal{L}_{vc}, \tag{2}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
where $\mathcal{L}_{mesh}$ is mesh loss, $\mathcal{L}_{vc}$ is vertex consistency loss.
|
| 91 |
+
|
| 92 |
+
Mesh Loss. For data with non-neutral expressions, we reconstruct the face based on the estimated shape $\beta$, expression $\psi$, and jaw pose $\theta$ parameters, while for data with only neutral expressions, we reconstruct the face using the shape parameters $\beta$ alone. The $\mathcal{L}_{mesh}$ is defined as
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
\mathcal{L}_{\text{mesh}} = w \left| \mathcal{V}^{i} - \mathcal{V}_{gt}^{i} \right|_{1}, \tag{3}
|
| 96 |
+
$$
|
| 97 |
+
|
| 98 |
+
where $w$ is the region-dependent weight of the FLAME vertices, $\mathcal{V}_{gt}^{i}$ denotes the ground-truth 3D vertices, and $\mathcal{V}$ is extracted from the face mesh $\mathcal{M}$, which is
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
\mathcal{M} = \operatorname{FLAME}(\beta, \psi, \theta). \tag{4}
|
| 102 |
+
$$
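As a rough illustration, the region-weighted L1 vertex loss of Eq. (3) could be written as follows; the tensor layout and function name are assumptions, not the paper's code.

```python
def mesh_loss(pred_vertices, gt_vertices, region_weights):
    """Sketch of Eq. (3): region-weighted L1 distance between predicted and
    ground-truth FLAME vertices. pred/gt: (B, V, 3) tensors, region_weights: (V,).
    Normalization (mean vs. sum) is an assumption."""
    per_vertex_l1 = (pred_vertices - gt_vertices).abs().sum(dim=-1)   # (B, V)
    return (region_weights.unsqueeze(0) * per_vertex_l1).mean()
```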
|
| 103 |
+
|
| 104 |
+
Vertex Consistency Loss. To further disentangle the shape and expression parameters, we train our model on meshes of the same identity (which should therefore share the same shape parameter) but with different expressions. Similar to DECA [17], we adopt a vertex consistency loss $\mathcal{L}_{vc}$ to enforce this constraint. $\mathcal{L}_{vc}$ is defined as
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
\mathcal{L}_{vc} = \sum^{M} w \left| \mathcal{V}^{b \rightarrow a} - \mathcal{V}_{gt}^{b} \right|_{1}, \tag{5}
|
| 108 |
+
$$
|
| 109 |
+
|
| 110 |
+
$$
|
| 111 |
+
\mathcal{M}^{b \rightarrow a} = \operatorname{FLAME}\left(\beta_{a}, \psi_{b}, \theta_{b}\right),
|
| 112 |
+
$$
|
| 113 |
+
|
| 114 |
+
where $\mathcal{V}^{b\to a}$ denotes the vertices generated after exchanging shape $\beta_{b}$ for $\beta_{a}$, and $M$ denotes the number of samples of the same identity. The shapes, expressions, and jaw poses are taken from different images of the same identity.
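A minimal sketch of the shape-swap consistency in Eq. (5) is given below; `flame_decode` is a hypothetical FLAME decoder standing in for the actual model, and the tensor shapes are assumptions.

```python
def vertex_consistency_loss(shape_a, exps_b, jaws_b, gt_verts_b, region_weights, flame_decode):
    """Sketch of Eq. (5): reuse identity a's shape with the expressions and jaw
    poses predicted for other samples b of the same identity, then compare the
    decoded vertices against b's ground truth. Each verts tensor is (V, 3)."""
    loss = 0.0
    for exp_b, jaw_b, gt_b in zip(exps_b, jaws_b, gt_verts_b):
        verts_swapped = flame_decode(shape_a, exp_b, jaw_b)          # V^{b->a}
        per_vertex_l1 = (verts_swapped - gt_b).abs().sum(-1)         # (V,)
        loss = loss + (region_weights * per_vertex_l1).mean()
    return loss
```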
|
| 115 |
+
|
| 116 |
+
# 3.2.2 Self-Supervision for 2D Data
|
| 117 |
+
|
| 118 |
+
For the 2D Dataset, we need to perform self-supervised training at the image level. The 2D-related loss $\mathcal{L}_{2D}$ is defined as
|
| 119 |
+
|
| 120 |
+
$$
|
| 121 |
+
\begin{array}{l} \mathcal{L}_{2D} = \lambda_{eyes} \mathcal{L}_{eyes} + \lambda_{lips} \mathcal{L}_{lips} + \lambda_{sc} \mathcal{L}_{sc} + \lambda_{id} \mathcal{L}_{id} \\ \qquad + \omega_{lmk} \mathcal{L}_{lmk} + \omega_{photo} \mathcal{L}_{photo} + \mathcal{L}_{reg}, \end{array} \tag{6}
|
| 122 |
+
$$
|
| 123 |
+
|
| 124 |
+
which includes 7 loss terms. Here we introduce each of the loss terms in $\mathcal{L}_{2D}$ for learning on 2D data.
|
| 125 |
+
|
| 126 |
+
Landmark Loss. We calculate the $L1$ distance between the projected landmarks of the mesh and the pre-processed ground-truth landmarks. Note that non-visible keypoints are not involved in the landmark loss.
|
| 127 |
+
|
| 128 |
+
Eyelids & Lips Loss. Inspired by DECA [17], we also introduce an eyelids & lips loss to add additional constraints on the lip and eye regions, which helps the model perform better in scenarios such as video:
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
\mathcal{L}_{eyes} = \sum_{(i, j) \in E} \| \mathbf{k}_{i} - \mathbf{k}_{j} - s \Pi (V_{i} - V_{j}) \|_{1}, \tag{7}
|
| 132 |
+
$$
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
\mathcal{L}_{lips} = \sum_{(i, j) \in L} \| \mathbf{k}_{i} - \mathbf{k}_{j} - s \Pi (V_{i} - V_{j}) \|_{1},
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
where $E$ and $L$ are the sets of upper/lower eyelid and lip landmark pairs, respectively. $\mathbf{k}_i$ and $\mathbf{k}_j$ denote 2D keypoints detected by a face landmark detector*. $V_{i}$ and $V_{j}$ denote the 3D face landmarks on the reconstructed mesh. $s$ is the scale component of the camera parameters, and $s\Pi$ denotes projecting the 3D points to image coordinates.
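A sketch of this relative-offset loss from Eq. (7) follows; `project` is a hypothetical weak-perspective projection and `pairs` the index pairs, both assumptions for illustration.

```python
def relative_keypoint_loss(kpts_2d, verts_3d, pairs, scale, project):
    """Sketch of Eq. (7): compare 2D keypoint offsets with the scaled, projected
    offsets of the corresponding mesh landmarks. `pairs` lists (upper, lower)
    landmark index pairs (e.g., eyelid or lip pairs)."""
    loss = 0.0
    for i, j in pairs:
        d2d = kpts_2d[i] - kpts_2d[j]                         # offset between detected keypoints
        d3d = scale * (project(verts_3d[i]) - project(verts_3d[j]))
        loss = loss + (d2d - d3d).abs().sum()
    return loss
```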
|
| 139 |
+
|
| 140 |
+
Image Photometric Loss. As two-dimensional visual supervision, we also enforce consistency between the rendered image $\mathcal{I}_r$ and the input image $\mathcal{I}$:
|
| 141 |
+
|
| 142 |
+
$$
|
| 143 |
+
\mathcal{L}_{\text{photo}} = \left\| m_{\mathcal{I}} \cdot \left(\mathcal{I} - \mathcal{I}_{r}\right) \right\|_{1}, \tag{8}
|
| 144 |
+
$$
|
| 145 |
+
|
| 146 |
+
where $\mathcal{I}_r = \operatorname{FLAME-Render}(\beta, \psi, \theta, C, \alpha, \iota)$ and $m_{\mathcal{I}}$ is the mask of the face.
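A minimal sketch of this masked photometric term is shown below; the mean normalization (rather than a plain sum) is an assumption.

```python
def photometric_loss(image, rendered, face_mask):
    """Sketch of Eq. (8): masked L1 difference between the input image and the
    rendered image. All tensors are (B, 3, H, W); face_mask is broadcastable.
    Uses a mean instead of a sum, which only rescales the term."""
    return (face_mask * (image - rendered)).abs().mean()
```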
|
| 147 |
+
|
| 148 |
+
Shape Consistency Loss. As with the 2D-3D data, on 2D datasets we also need to keep the shape (i.e., the FLAME shape parameter) consistent across different images of the same identity. In contrast to the vertex-level loss above, we define $\mathcal{L}_{sc}$ with the following equation:
|
| 149 |
+
|
| 150 |
+
$$
|
| 151 |
+
\mathcal{L}_{sc} = \sum^{M} m_{\mathcal{I}} \| \mathcal{I}^{b \rightarrow a} - \mathcal{I}_{gt}^{b} \|_{1}, \tag{9}
|
| 152 |
+
$$
|
| 153 |
+
|
| 154 |
+
$$
|
| 155 |
+
\mathcal{I}^{b \rightarrow a} = \operatorname{FLAME-Render}\left(\beta_{a}, \psi_{b}, \theta_{b}, C_{b}, \alpha_{b}, \iota_{b}\right),
|
| 156 |
+
$$
|
| 157 |
+
|
| 158 |
+
where $m_{\mathcal{I}}$ is the face mask of the image $\mathcal{I}$, and $\mathcal{I}^{b\to a}$ is the image rendered after exchanging shape $\beta_{b}$ for $\beta_{a}$. $M$ denotes the number of images of the same identity.
|
| 159 |
+
|
| 160 |
+
Identity Loss. To better reconstruct the details of the input image, we optimize our model with a perceptual loss based on an advanced face recognition model [13]:
|
| 161 |
+
|
| 162 |
+
$$
|
| 163 |
+
\mathcal{L}_{id} = 1 - \frac{f(\mathcal{I}) f\left(\mathcal{I}_{r}\right)}{\| f(\mathcal{I}) \|_{2} \cdot \| f\left(\mathcal{I}_{r}\right) \|_{2}}, \tag{10}
|
| 164 |
+
$$
|
| 165 |
+
|
| 166 |
+
where $f(\cdot)$ is the feature extractor.
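This is a cosine-similarity loss on recognition embeddings; a sketch follows, where `face_embedder` stands in for a frozen ArcFace-style extractor (an assumption).

```python
import torch.nn.functional as F

def identity_loss(image, rendered, face_embedder):
    """Sketch of Eq. (10): one minus the cosine similarity between the
    recognition embeddings of the input face and the rendered face."""
    f_img = face_embedder(image)
    f_ren = face_embedder(rendered)
    return 1.0 - F.cosine_similarity(f_img, f_ren, dim=-1).mean()
```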
|
| 167 |
+
|
| 168 |
+
Regularization. To avoid over-fitting of the facial parameters, we add a regularization loss on the regressed 3DMM coefficients:
|
| 169 |
+
|
| 170 |
+
$$
|
| 171 |
+
\mathcal{L}_{reg} = \omega_{\alpha} \cdot \| \alpha \|^{2} + \omega_{\beta} \cdot \| \beta \|^{2} + \omega_{\psi} \cdot \| \psi \|^{2}, \tag{11}
|
| 172 |
+
$$
|
| 173 |
+
|
| 174 |
+
\*http://dlib.net/
|
| 175 |
+
|
| 176 |
+

|
| 177 |
+
Figure 4: Temporal model of our method. We obtain facial component tokens for each frame independently and append them together and send to an additional temporal transformer to predict the aggregated facial component tokens for the middle frame.
|
| 178 |
+
|
| 179 |
+
where $\alpha, \beta, \psi$ are the estimated FLAME parameters, and $\omega_{\alpha}, \omega_{\beta}, \omega_{\psi}$ are hyper-parameters balancing the loss scales.
|
| 180 |
+
|
| 181 |
+
Pose-Aware Loss Function. To address inaccurate landmark estimation under large head poses, we propose the Pose-Aware Loss (PAL) function to adaptively balance the weights of $\mathcal{L}_{lmk}$ and $\mathcal{L}_{photo}$ in Eq. (6). In detail, we first pre-process the training data to obtain the face orientation (pitch and yaw angles). We then measure how extreme the head pose is by the maximum of the azimuth and elevation angles, i.e. $\xi = \max (\zeta_1,\zeta_2) - \pi /4$, where $\zeta_{1}$ and $\zeta_{2}$ denote the azimuth and the elevation, respectively. Next, we take $\xi$ as input and adjust the values of $\omega_{lmk}$ and $\omega_{photo}$ using a designed linear function:
|
| 182 |
+
|
| 183 |
+
$$
|
| 184 |
+
\omega_{lmk} = \left\{ \begin{array}{ll} 1.6, & \xi \leq 0, \\ a \xi + b, & \xi > 0. \end{array} \right. \tag{12}
|
| 185 |
+
$$
|
| 186 |
+
|
| 187 |
+
$$
|
| 188 |
+
\omega_{\text{photo}} = \left\{ \begin{array}{ll} 2.2, & \xi \leq 0, \\ c \xi + d, & \xi > 0. \end{array} \right. \tag{13}
|
| 189 |
+
$$
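The sketch below implements Eqs. (12)-(13) with the hyper-parameter values reported later in the implementation details ($a=-0.76$, $b=1.6$, $c=0.38$, $d=2.2$); the assumption that the input angles are in radians is ours.

```python
import math

def pose_aware_weights(azimuth, elevation, a=-0.76, b=1.6, c=0.38, d=2.2):
    """Sketch of Eqs. (12)-(13): beyond roughly 45 degrees, the landmark weight
    decreases and the photometric weight increases with the pose magnitude.
    Angles are assumed to be in radians."""
    xi = max(azimuth, elevation) - math.pi / 4.0
    if xi <= 0:
        return 1.6, 2.2                      # frontal or mildly rotated face
    return a * xi + b, c * xi + d            # large-pose face

# e.g. a frontal face keeps (1.6, 2.2); a strongly rotated face shifts weight
# from the landmark term toward the photometric term.
```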
|
| 190 |
+
|
| 191 |
+
# 3.3. Temporal Supervision for Video Data
|
| 192 |
+
|
| 193 |
+
In addition to monocular image reconstruction, we aim for our model to perform well on videos. However, existing methods often require additional smoothing processing after frame-by-frame reconstruction to reduce jitters in videos. Instead of relying on additional processing, we propose an end-to-end method that achieves superior results on videos. By inputting three adjacent frames $t - 1$ , $t$ , and $t + 1$ into a three-layer temporal Transformer, we obtain a set of estimated FLAME tokens. These tokens are then used to reconstruct the face of frame $t$ , improving the continuity of facial motion between adjacent frames. The corresponding framework is illustrated in Fig. 4.
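A minimal sketch of this token-level temporal aggregation is shown below; the module name, depth, and head count are assumptions, and only the layout (six component tokens per frame, middle-frame output) follows the description above.

```python
import torch
import torch.nn as nn

class TemporalTokenAggregator(nn.Module):
    """Sketch (names/sizes are assumptions): concatenate the facial component
    tokens of frames t-1, t, t+1, run a small temporal transformer, and return
    refined tokens for the middle frame, which are then decoded into its FLAME
    parameters."""
    def __init__(self, embed_dim=768, num_tokens=6, depth=3, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.num_tokens = num_tokens

    def forward(self, tokens_prev, tokens_cur, tokens_next):        # each (B, 6, C)
        x = torch.cat([tokens_prev, tokens_cur, tokens_next], dim=1)  # (B, 18, C)
        x = self.encoder(x)
        # take the slots corresponding to the middle frame as its refined tokens
        return x[:, self.num_tokens:2 * self.num_tokens]
```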
|
| 194 |
+
|
| 195 |
+
# 4. Experiment
|
| 196 |
+
|
| 197 |
+
In this section, we present the results of our method and compare it with other approaches. Due to page limitations, we provide additional content, including more results and video demos, in the supplementary materials.
|
| 198 |
+
|
| 199 |
+
# 4.1. Datasets
|
| 200 |
+
|
| 201 |
+
3D Datasets. In order to reconstruct accurate metric 3D faces, we collect the currently available open-source 3D scan datasets (FLAME topology) for supervised learning of estimating shape parameters from RGB images. As in MICA [50], we use neutral face data (RGB images and the corresponding neutral face shape parameters) in unified metric space from Stirling [19], Florence [2], LYHM [12], FaceWarehouse [7], and FRGC [33]. In addition, we include 3D face datasets with expressions from FaceWarehouse [7] and WCPA [24].
|
| 202 |
+
|
| 203 |
+
2D Datasets. Besides 3D data, we use common 2D face datasets (images only) in our self-supervised training, including FFHQ [25], FaceScape [45], BUPT-Balanced [42], VoxCeleb2 [11], CelebA [30], and VGGFace2 [8]. We adopt AFLW2000 [46] as the validation set.
|
| 204 |
+
|
| 205 |
+
# 4.2. Implementation Details
|
| 206 |
+
|
| 207 |
+
Training Details. We implement our method with PyTorch[14]. We use Adam [26] as the optimizer, in which the learning rate is set to $1 \times 10^{-4}$ . We resize the input image to the size of $224 \times 224$ . The model is trained in 10 epochs with batch size 8. We use FaRL [47] as the pretrained weight to initialize the transformer backbone. The facial component tokens are all initialized to 0. In our experiments, we set hyper-parameters $\lambda_{3D} = 0.6$ and $\lambda_{2D} = 0.4$ in Eq. (1). We set $\lambda_{mesh} = 2.0$ and $\lambda_{vc} = 1.2$ in Eq. (2). In Eq. (6), we set $\lambda_{eyes} = 0.8$ , $\lambda_{lips} = 1.0$ . In Eq. (11), we set $\omega_{\alpha} = \omega_{\beta} = \omega_{\psi} = 1 \times 10^{-4}$ . In Eq. (12) and Eq. (13), we empirically set $a = -0.76$ , $b = 1.6$ , $c = 0.38$ , and $d = 2.2$ .
|
| 208 |
+
|
| 209 |
+
Evaluation Dataset. In monocular 3D face reconstruction, performance is usually characterized by the median, mean, and standard deviation of per-point distances, computed after aligning the mesh predicted from a given image with the corresponding ground-truth scan. The two commonly used benchmarks are NoW [35] and Stirling [19].
|
| 210 |
+
|
| 211 |
+
# 4.3. Comparison with Existing Methods
|
| 212 |
+
|
| 213 |
+
Quantitative Results. We conduct a quantitative evaluation of our method on the NoW and Stirling benchmarks. The results on the NoW benchmark are presented in Table 2, while the results on the Stirling benchmark are shown in Table 3. It is worth noting that, for Table 3, we exclude the Stirling data from training and train a new model from the
|
| 214 |
+
|
| 215 |
+
beginning for a fair comparison. Our method achieves the top performance with the least errors across all metrics on both benchmarks, as demonstrated in the tables. Notably, our method outperforms the previous best method (MICA) by a remarkable margin.
|
| 216 |
+
|
| 217 |
+
Qualitative Results. In Fig. 5, we present a visual comparison of 3D face reconstruction results from different representative methods that are publicly available. Our method demonstrates robustness in accurately reconstructing faces across different shapes, races, and ages, as shown in the figure. Notably, our method achieves the best mesh recovery of face shapes and accurately captures expressions.
|
| 218 |
+
|
| 219 |
+
# 4.4. Face Tracking Results
|
| 220 |
+
|
| 221 |
+
We conduct a comparison on video data to validate that our method can produce faithful 3D face reconstruction and temporally coherent results with our temporal module. Specifically, we select a sample video clip and compare the photometric reconstruction errors against the original video frames. To ensure consistency in the comparison, we use the FLAME texture for mapping, so that the consistency of the face at the mesh level and the coherence of movement can be compared. Fig. 6 presents the error curves of video reconstruction for the different approaches. Our method outperforms the others in terms of the accuracy and coherence of the reconstructed video.
|
| 222 |
+
|
| 223 |
+
# 5. Ablation Study
|
| 224 |
+
|
| 225 |
+
To ensure a fair comparison of model performance in our ablation experiments, we employ the validation dataset of the NoW dataset [35] as quantitative test data to evaluate the reconstruction performance. We use the 3D face reconstruction error as the evaluation metric.
|
| 226 |
+
|
| 227 |
+
Ablation Study on the Effect of Separate Tokens. Previous 3D face reconstruction models typically use a single, long code and split it into different component parameters, resulting in coupling between different parameters and decreased reconstruction accuracy. In our study, we investigate the effectiveness of using separate tokens for different facial parameters in our TokenFace model. We perform an ablation study on the number of tokens by merging closely related tokens, and present the results in Table 4. Our experiment reveals that merging different parameter types into the same token increases the coupling effect between parameters and leads to poorer reconstruction. Conversely, increasing the number of tokens improves decoupling and reconstruction accuracy. Our findings demonstrate the strong decoupling effect of TokenFace's separate tokens for different FLAME parameters, resulting in greater independence and specificity in expressing different components.
|
| 228 |
+
|
| 229 |
+
Ablation Study of Dataset Types for Training. To take advantage of the flexibility of our proposed network structure
|
| 230 |
+
|
| 231 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">Non-Metrical</td><td colspan="3">Metrical</td></tr><tr><td>Median↓</td><td>Mean↓</td><td>Std↓</td><td>Median↓</td><td>Mean↓</td><td>Std↓</td></tr><tr><td>3DMM-CNN [39]</td><td>1.84</td><td>2.33</td><td>2.05</td><td>3.91</td><td>4.84</td><td>4.02</td></tr><tr><td>FLAME 2020 template [28]</td><td>1.21</td><td>1.53</td><td>1.31</td><td>1.49</td><td>1.92</td><td>1.68</td></tr><tr><td>PRNet [18]</td><td>1.5</td><td>1.98</td><td>1.88</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Deep3DFaceRecon [Tensorflow] [14]</td><td>1.23</td><td>1.54</td><td>1.29</td><td>2.26</td><td>2.90</td><td>2.51</td></tr><tr><td>RingNet [35]</td><td>1.21</td><td>1.53</td><td>1.31</td><td>1.50</td><td>1.98</td><td>1.77</td></tr><tr><td>Deep3DFaceRecon [PyTorch] [14]</td><td>1.11</td><td>1.41</td><td>1.21</td><td>1.62</td><td>2.21</td><td>2.08</td></tr><tr><td>MGCNet [36]</td><td>1.31</td><td>1.87</td><td>2.63</td><td>1.70</td><td>2.47</td><td>3.02</td></tr><tr><td>3DDFA-v2 [21]</td><td>1.23</td><td>1.57</td><td>1.39</td><td>1.53</td><td>2.06</td><td>1.95</td></tr><tr><td>SynergyNet [44]</td><td>1.27</td><td>1.59</td><td>1.31</td><td>2.28</td><td>2.86</td><td>2.39</td></tr><tr><td>DECA [17]</td><td>1.09</td><td>1.38</td><td>1.18</td><td>1.35</td><td>1.80</td><td>1.64</td></tr><tr><td>Dense Landmark [43]</td><td>1.02</td><td>1.28</td><td>1.08</td><td>1.36</td><td>1.73</td><td>1.47</td></tr><tr><td>MICA [50]</td><td>0.90</td><td>1.11</td><td>0.92</td><td>1.08</td><td>1.37</td><td>1.17</td></tr><tr><td>TokenFace (Ours)</td><td>0.76</td><td>0.95</td><td>0.82</td><td>0.97</td><td>1.24</td><td>1.07</td></tr></table>
|
| 232 |
+
|
| 233 |
+

|
| 234 |
+
|
| 235 |
+

|
| 236 |
+
Figure 5: Visual comparison of 3D face reconstruction quality of ours and some other representative methods. From top to bottom are input image, Deep3DFaceRecon [14], 3DDFAv2 [21], DECA [17], and TokenFace (Ours).
|
| 237 |
+
|
| 238 |
+
Table 2: Quantitative comparisons on NoW benchmark [35]. The metric is the 3D face reconstruction error. Best results are highlighted in bold.
|
| 239 |
+
|
| 240 |
+
<table><tr><td>Method</td><td>Median↓</td><td>Mean↓</td><td>Std↓</td></tr><tr><td>FLAME 2020 template [28]</td><td>1.22</td><td>1.55</td><td>1.35</td></tr><tr><td>RingNet [35]</td><td>1.15</td><td>1.46</td><td>1.27</td></tr><tr><td>Deep3DFaceRecon [TensorFlow] [14]</td><td>1.13</td><td>1.43</td><td>1.25</td></tr><tr><td>Deep3DFaceRecon [Pytorch] [14]</td><td>0.99</td><td>1.27</td><td>1.15</td></tr><tr><td>3DDFA-v2 [21]</td><td>1.20</td><td>1.55</td><td>1.45</td></tr><tr><td>DECA [17]</td><td>1.03</td><td>1.32</td><td>1.18</td></tr><tr><td>MICA [50]</td><td>0.92</td><td>1.16</td><td>1.04</td></tr><tr><td>TokenFace (Ours)</td><td>0.88</td><td>0.95</td><td>0.96</td></tr></table>
|
| 241 |
+
|
| 242 |
+
Table 3: Quantitative results on Stirling benchmark [19].
|
| 243 |
+
|
| 244 |
+
in selecting different types of tokens, we conducted mixed training using a hybrid dataset that contains 2D and 3D data. To investigate the impact of dataset type on performance, we conduct ablation experiments using 2D-only, 3D-only, and mixed datasets for training. In the case of 3D-only datasets, we use a loss function at the vertex level, with camera pose,
|
| 245 |
+
|
| 246 |
+
texture, and light values set to zero in the six tokens. Results presented in Table 5 show that the models trained using only 3D data outperformed those trained using only 2D data on the NoW validation set due to the higher accuracy of ground truths at the vertex level. Moreover, models trained with mixed datasets outperform those trained with only 3D datasets, attributed to the increased quantity of data and the multiple types of supervision provided by the mixed dataset.
|
| 247 |
+
|
| 248 |
+
Ablation Study on Weighting between 2D and 3D Data. Since our training dataset consists of both 2D and 3D data, we want to investigate the impact of adjusting the weight between them. To this end, we conduct experiments with different weightings while keeping other conditions fixed and the results are shown in Table 6. As shown, the weight balance between 2D and 3D data does have an effect on the
|
| 249 |
+
|
| 250 |
+

|
| 251 |
+
Figure 6: Visualization of face tracking error on a video clip. As there is no ground truth 3D face mesh, we calculate the photometric error. TokenFace-T denotes TokenFace with the temporal module.
|
| 252 |
+
|
| 253 |
+
<table><tr><td rowspan="2">#Tokens</td><td rowspan="2">Merging Detail</td><td colspan="3">Reconstruction Error</td></tr><tr><td>Median</td><td>Mean</td><td>Std</td></tr><tr><td>1</td><td>[S, E, J, C, T, L]</td><td>1.08</td><td>1.41</td><td>1.19</td></tr><tr><td>3</td><td>[S, E, J, C], T, L</td><td>1.05</td><td>1.31</td><td>1.10</td></tr><tr><td>5</td><td>[S, E], J, C, T, L</td><td>0.96</td><td>1.15</td><td>1.02</td></tr><tr><td>6</td><td>S, E, J, C, T, L</td><td>0.79</td><td>0.99</td><td>0.85</td></tr></table>
|
| 254 |
+
|
| 255 |
+
Table 4: Ablation study on the number of tokens. Reducing the number of tokens by merging leads to a drop in reconstruction performance. The six tokens, denoted as S (shape), E (expression), J (jaw pose), C (camera pose), T (texture), and L (lighting), derived from the specific parameters of the FLAME model, are crucial for the model's ability to decouple different facial parameters and achieve accurate reconstructions. [·] represents the merging operation.
|
| 256 |
+
|
| 257 |
+
<table><tr><td rowspan="2">2D</td><td rowspan="2">3D</td><td colspan="6">Reconstruction Error</td></tr><tr><td>Median ↓</td><td>Δ</td><td>Mean ↓</td><td>Δ</td><td>Std. ↓</td><td>Δ</td></tr><tr><td>✓</td><td></td><td>1.03</td><td>+0.24</td><td>1.31</td><td>+0.32</td><td>1.11</td><td>+0.26</td></tr><tr><td></td><td>✓</td><td>0.85</td><td>+0.06</td><td>1.08</td><td>+0.09</td><td>0.89</td><td>+0.04</td></tr><tr><td>✓</td><td>✓</td><td>0.79</td><td>-</td><td>0.99</td><td>-</td><td>0.85</td><td>-</td></tr></table>
|
| 258 |
+
|
| 259 |
+
final results, and careful selection of the weight is required. We find that a weighting of (0.4, 0.6) results in the highest performance, and this is the setting used when reporting the main results.
|
| 260 |
+
|
| 261 |
+
Ablation Study on Adaptive Loss Function. To improve the effectiveness of large pose face reconstruction during training on 2D data, we introduce a pose-aware loss function in Sec. 3.2.2. To demonstrate the effectiveness of this loss function, we conduct an ablation study by comparing the models trained with and without the proposed loss, and then test the model on large pose examples. As shown in Fig. 7,
|
| 262 |
+
|
| 263 |
+
Table 5: Ablation study on using different types of datasets. We train our model on three types of datasets: 2D data only, 3D data only, and hybrid datasets of 2D and 3D. The model trained on mixed datasets of 2D and 3D outperforms those trained on either 2D or 3D data alone.
|
| 264 |
+
|
| 265 |
+
<table><tr><td rowspan="2">ω2D</td><td rowspan="2">ω3D</td><td colspan="3">Reconstruction Error</td></tr><tr><td>Median</td><td>Mean</td><td>Std</td></tr><tr><td>0.3</td><td>0.7</td><td>0.83</td><td>1.04</td><td>0.89</td></tr><tr><td>0.4</td><td>0.6</td><td>0.79</td><td>0.99</td><td>0.85</td></tr><tr><td>0.5</td><td>0.5</td><td>0.81</td><td>1.02</td><td>0.87</td></tr><tr><td>0.6</td><td>0.4</td><td>0.88</td><td>1.10</td><td>0.93</td></tr></table>
|
| 266 |
+
|
| 267 |
+
Table 6: Ablation study on different balance weights on 2D and 3D data. Based on this table, we select the best weighting parameter (0.4, 0.6) in our experiments.
|
| 268 |
+
|
| 269 |
+

|
| 270 |
+
Figure 7: Visualization of Effects of using Adaptive Weights. We test two large pose cases with side face and head tilt. It can be seen that the model with adaptive loss function weights has more accurate alignment for large pose face reconstruction.
|
| 271 |
+
|
| 272 |
+
despite achieving similar reconstruction effects for shape and expression, the model trained with adaptive loss function weights produces more accurate reconstructions in pose for faces with large poses, such as side faces and head tilts.
|
| 273 |
+
|
| 274 |
+
# 6. Conclusion
|
| 275 |
+
|
| 276 |
+
In this paper, we introduce TokenFace, a transformer-based method for reconstructing 3D faces from monocular images. By using six independent facial component tokens to estimate six parameters that represent the reconstructed face, we are able to disentangle the different parameters of FLAME. Additionally, we design a temporal transformer that captures temporal information in videos, resulting in significantly improved accuracy and continuity in video face reconstruction. Our TokenFace achieves state-of-the-art performance in the challenging NoW Benchmark [35] and Stirling Benchmark [19], with a large improvement over previous methods. Furthermore, our method demonstrates stable and accurate performance in video face tracking with our temporal modeling.
|
| 277 |
+
|
| 278 |
+
Acknowledgement. This work was supported by the National Key R&D Program of China(2022YFB4701400 / 4701402), the SZSTC Grant(JCYJ20190809172201639, WDZC20200820200655001), the Shenzhen Key Laboratory(ZDSYS20210623092001004).
|
| 279 |
+
|
| 280 |
+
# References
|
| 281 |
+
|
| 282 |
+
[1] Timur Bagautdinov, Chenglei Wu, Jason Saragih, Pascal Fua, and Yaser Sheikh. Modeling facial geometry using compositional vaes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3877-3886, 2018. 2
|
| 283 |
+
[2] Andrew D. Bagdanov, Alberto Del Bimbo, and Iacopo Masi. The florence 2d/3d hybrid face dataset. In Proceedings of the 2011 Joint ACM Workshop on Human Gesture and Behavior Understanding, J-HGBU '11, page 79-80, New York, NY, USA, 2011. ACM. 6
|
| 284 |
+
[3] Linchao Bao, Xiangkai Lin, Yajing Chen, Haoxian Zhang, Sheng Wang, Xuefei Zhe, Di Kang, Haozhi Huang, Xinwei Jiang, Jue Wang, et al. High-fidelity 3d digital human head creation from rgb-d selfies. ACM Transactions on Graphics (TOG), 41(1):1-21, 2021. 2
|
| 285 |
+
[4] Shubhajit Basak, Peter Corcoran, Rachel McDonnell, and Michael Schukat. 3d face-model reconstruction from a single image: A feature aggregation approach using hierarchical transformer with weak supervision. Neural Networks, 156:108-122, 2022. 2
|
| 286 |
+
[5] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187-194, 1999. 2
|
| 287 |
+
[6] James Booth, Anastasios Roussos, Allan Ponniah, David Dunaway, and Stefanos Zafeiriou. Large scale 3d morphable models. International Journal of Computer Vision, 126(2):233-254, 2018. 2
|
| 288 |
+
[7] Chen Cao, Yanlin Weng, Shun Zhou, Yiying Tong, and Kun Zhou. Facewarehouse: A 3d facial expression database for visual computing. IEEE Transactions on Visualization and Computer Graphics, 20(3):413-425, 2013. 2, 6
|
| 289 |
+
[8] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 67-74. IEEE, 2018. 6
|
| 290 |
+
[9] Anpei Chen, Zhang Chen, Guli Zhang, Kenny Mitchell, and Jingyi Yu. Photo-realistic facial details synthesis from single image. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9429-9439, 2019. 2
|
| 291 |
+
[10] Zhuo Chen, Yuesong Wang, Tao Guan, Luoyuan Xu, and Wenkai Liu. Transformer-based 3d face reconstruction with end-to-end shape-preserved domain transfer. IEEE Transactions on Circuits and Systems for Video Technology, 32(12):8383-8393, 2022. 2
|
| 292 |
+
[11] Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. Voxceleb2: Deep speaker recognition. Cornell University - arXiv, 2018. 6
|
| 293 |
+
[12] Hang Dai, Nick Pears, William Smith, and Christian Duncan. Statistical modeling of craniofacial shape and texture. International Journal of Computer Vision, 128:547-571, 2020. 6
|
| 294 |
+
[13] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4690-4699, 2019. 5
|
| 297 |
+
[14] Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, and Xin Tong. Accurate 3d face reconstruction with weakly-supervised learning: From single image to image set. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. 1, 2, 6, 7
|
| 298 |
+
[15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 2
|
| 299 |
+
[16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. 2, 3
|
| 300 |
+
[17] Yao Feng, Haiwen Feng, Michael J Black, and Timo Bolkart. Learning an animatable detailed 3d face model from in-the-wild images. ACM Transactions on Graphics (ToG), 40(4):1-13, 2021. 1, 2, 3, 4, 5, 7
|
| 301 |
+
[18] Yao Feng, Fan Wu, Xiaohu Shao, Yanfeng Wang, and Xi Zhou. Joint 3d face reconstruction and dense alignment with position map regression network. In Proceedings of the European conference on computer vision (ECCV), pages 534-551, 2018. 2, 7
|
| 302 |
+
[19] Zhen-Hua Feng, Patrik Huber, Josef Kittler, Peter Hancock, Xiao-Jun Wu, Qijun Zhao, Paul Koppen, and Matthias Ratsch. Evaluation of dense 3d reconstruction from 2d face images in the wild. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 780-786. IEEE, 2018. 6, 7, 8
|
| 303 |
+
[20] Leonardo Galteri, Claudio Ferrari, Giuseppe Lisanti, Stefano Berretti, and Alberto Del Bimbo. Deep 3d morphable model refinement via progressive growing of conditional generative adversarial networks. Computer Vision and Image Understanding, 185:31-42, 2019. 2
|
| 304 |
+
[21] Jianzhu Guo, Xiangyu Zhu, Yang Yang, Fan Yang, Zhen Lei, and Stan Z Li. Towards fast, accurate and stable 3d dense face alignment. In European Conference on Computer Vision, pages 152-168. Springer, 2020. 2, 7
|
| 305 |
+
[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 3
|
| 306 |
+
[23] Loc Huynh, Weikai Chen, Shunsuke Saito, Jun Xing, Koki Nagano, Andrew Jones, Paul Debevec, and Hao Li. Mesoscopic facial geometry inference using deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8407-8416, 2018. 2
|
| 307 |
+
[24] Yueying Kao, Bowen Pan, Miao Xu, Jiangjing Lyu, Xiangyu Zhu, Yuanzhang Chang, Xiaobo Li, Zhen Lei, and Zixiong Qin. Single-image 3d face reconstruction under perspective projection. arXiv preprint arXiv:2205.04126, 2022. 6
|
| 308 |
+
[25] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401-4410, 2019. 6
|
| 311 |
+
[26] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
|
| 312 |
+
[27] Ruilong Li, Karl Bladin, Yajie Zhao, Chinmay Chinara, Owen Ingraham, Pengda Xiang, Xinglei Ren, Pratusha Prasad, Bipin Kishore, Jun Xing, et al. Learning formation of physically-based face attributes. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3410-3419, 2020. 2
|
| 313 |
+
[28] Tianye Li, Timo Bolkart, Michael J. Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4d scans. ACM Transactions on Graphics, 2017. 2, 7
|
| 314 |
+
[29] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021. 2
|
| 315 |
+
[30] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaou Tang. Large-scale celebfaces attributes (celeba) dataset. Retrieved August, 15(2018):11, 2018. 6
|
| 316 |
+
[31] Thomas Neumann, Kiran Varanasi, Stephan Wenger, Markus Wacker, Marcus Magnor, and Christian Theobalt. Sparse localized deformation components. ACM Transactions on Graphics (TOG), 32(6):1-10, 2013. 2
|
| 317 |
+
[32] Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter. A 3d face model for pose and illumination invariant face recognition. In 2009 sixth IEEE international conference on advanced video and signal based surveillance, pages 296-301. IEEE, 2009. 2
|
| 318 |
+
[33] P Jonathon Phillips, Patrick J Flynn, Todd Scruggs, Kevin W Bowyer, Jin Chang, Kevin Hoffman, Joe Marques, Jaesik Min, and William Worek. Overview of the face recognition grand challenge. In 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR'05), volume 1, pages 947-954. IEEE, 2005. 6
|
| 319 |
+
[34] Sami Romdhani and Thomas Vetter. Estimating 3d shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 2, pages 986-993. IEEE, 2005. 2
|
| 320 |
+
[35] Soubhik Sanyal, Timo Bolkart, Haiwen Feng, and Michael Black. Learning to regress 3D face shape and expression from an image without 3D supervision. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pages 7763-7772, June 2019. 6, 7, 8
|
| 321 |
+
[36] Jiaxiang Shang, Tianwei Shen, Shiwei Li, Lei Zhou, Mingmin Zhen, Tian Fang, and Long Quan. Self-supervised monocular 3d face reconstruction by occlusion-aware multi-view geometry consistency. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XV, pages 53-70. Springer, 2020. 7
|
| 322 |
+
[37] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2387-2395, 2016. 2
|
| 323 |
+
|
| 324 |
+
[38] Luan Tran, Feng Liu, and Xiaoming Liu. Towards high-fidelity nonlinear 3d face morphable model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1126-1135, 2019. 2
|
| 325 |
+
[39] Anh Tuan Tran, Tal Hassner, Iacopo Masi, and Gerard Medioni. Regressing robust and discriminative 3d morphable models with a very deep neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5163-5172, 2017. 7
|
| 326 |
+
[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017. 2
|
| 327 |
+
[41] Daniel Vlasic, Matthew Brand, Hanspeter Pfister, and Jovan Popovic. Face transfer with multilinear models. In ACM SIGGRAPH 2006 Courses, pages 24-es. 2006. 2
|
| 328 |
+
[42] Mei Wang, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. Racial faces in the wild: Reducing racial bias by information maximization adaptation network. International Conference on Computer Vision, 2019. 6
|
| 329 |
+
[43] Erroll Wood, Tadas Baltrusaitis, Charlie Hewitt, Matthew Johnson, Jingjing Shen, Nikola Milosavljevic, Daniel Wilde, Stephan Garbin, Toby Sharp, Ivan Stojiljkovic, et al. 3d face reconstruction with dense landmarks. In European Conference on Computer Vision, pages 160-177. Springer, 2022. 2, 7
|
| 330 |
+
[44] Cho-Ying Wu, Qiangeng Xu, and Ulrich Neumann. Synergy between 3dmm and 3d landmarks for accurate 3d facial geometry. In 2021 International Conference on 3D Vision (3DV), pages 453-463. IEEE, 2021. 7
|
| 331 |
+
[45] Haotian Yang, Hao Zhu, Yanru Wang, Mingkai Huang, Qiu Shen, Ruigang Yang, and Xun Cao. Facescape: a large-scale high quality 3d face dataset and detailed riggable 3d face prediction. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 601-610, 2020. 2, 6
|
| 332 |
+
[46] Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, and Manmohan Chandraker. Towards large-posed face frontalization in the wild. In In Proceeding of International Conference on Computer Vision, Venice, Italy, October 2017. 6
|
| 333 |
+
[47] Yinglin Zheng, Hao Yang, Ting Zhang, Jianmin Bao, Dongdong Chen, Yangyu Huang, Lu Yuan, Dong Chen, Ming Zeng, and Fang Wen. General facial representation learning in a visual-linguistic manner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18697–18709, 2022. 3, 6
|
| 334 |
+
[48] Yaoyao Zhong and Weihong Deng. Face transformer for recognition. arXiv preprint arXiv:2103.14803, 2021. 2
|
| 335 |
+
[49] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z Li. Face alignment across large poses: A 3d solution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 146-155, 2016. 2
|
| 336 |
+
[50] Wojciech Zielonka, Timo Bolkart, and Justus Thies. Towards metrical reconstruction of human faces. In European Conference on Computer Vision (ECCV). Springer International Publishing, Oct. 2022. 1, 2, 6, 7
|
accurate3dfacereconstructionwithfacialcomponenttokens/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:563cc997466db686b788a6bd7c8c73a232aea9bb25b1acc8e36fd458341b4610
|
| 3 |
+
size 584461
|
accurate3dfacereconstructionwithfacialcomponenttokens/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5e85c99f65a5256390fee7567e2850c6ed83b158408066ff28af701d025d7efe
|
| 3 |
+
size 390548
|
accurateandfastcompressedvideocaptioning/4f832d5d-d350-44d5-9f62-3e0bf6e26a99_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4b662992173bbff70011c134d39729f778a6ceaad7ee046553a26c1cb62f5f4d
|
| 3 |
+
size 76686
|
accurateandfastcompressedvideocaptioning/4f832d5d-d350-44d5-9f62-3e0bf6e26a99_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:02d9a1df9c737d7355a319f7a43770665965c79938190ac117efc812ed27908d
|
| 3 |
+
size 91520
|
accurateandfastcompressedvideocaptioning/4f832d5d-d350-44d5-9f62-3e0bf6e26a99_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:dcc3883dfaab0bb73e88472381ee2f65b0b064b3d1c45d1c57ec8ff0da7e0981
|
| 3 |
+
size 5073120
|
accurateandfastcompressedvideocaptioning/full.md
ADDED
|
@@ -0,0 +1,318 @@
| 1 |
+
# Accurate and Fast Compressed Video Captioning
|
| 2 |
+
|
| 3 |
+
Yaojie Shen $^{1,2*}$ , Xin Gu $^{1,2*}$ , Kai Xu $^{3}$ , Heng Fan $^{4}$ , Longyin Wen $^{3}$ , Libo Zhang $^{1,2\dagger}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Institute of Software, Chinese Academy of Sciences, Beijing, China
|
| 6 |
+
|
| 7 |
+
$^{2}$ University of Chinese Academy of Sciences, Beijing, China
|
| 8 |
+
|
| 9 |
+
$^{3}$ ByteDance Inc., San Jose, USA
|
| 10 |
+
|
| 11 |
+
$^{4}$ Department of Computer Science and Engineering, University of North Texas, Denton TX, USA
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Existing video captioning approaches typically require first sampling video frames from a decoded video and then conducting subsequent processing (e.g., feature extraction and/or captioning model learning). In this pipeline, manual frame sampling may ignore key information in videos and thus degrade performance. Additionally, redundant information in the sampled frames may result in low efficiency at inference time. Addressing this, we study video captioning from a different perspective in the compressed domain, which brings multi-fold advantages over the existing pipeline: 1) Compared to raw images from the decoded video, the compressed video, consisting of I-frames, motion vectors and residuals, is highly distinguishable, which allows us to leverage the entire video for learning without manual sampling through a specialized model design; 2) The captioning model is more efficient at inference as it processes a smaller amount of less redundant information. We propose a simple yet effective end-to-end transformer in the compressed domain that learns to caption directly from the compressed video. We show that even with a simple design, our method can achieve state-of-the-art performance on different benchmarks while running almost $2 \times$ faster than existing approaches. Code is available at https://github.com/acherstyx/CoCap.
|
| 16 |
+
|
| 17 |
+
# 1. Introduction
|
| 18 |
+
|
| 19 |
+
Video captioning is a representative example of applying deep learning to the fields of computer vision and natural language processing with a long list of applications, such as blind navigation, video event commentary, and human-computer interaction. To generate captions for a video, the model needs to not only identify objects and actions in the video, but also be able to express them accurately in natural
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1. Comparing our method with prior methods for video captioning. Prior works are all based on decoding video frames. The difference among them is that some methods use multiple offline-extracted features as input to generate captions, while others directly take dense video frames as input. By avoiding heavy redundant information and offline multi-feature extraction, our method speeds up caption generation while maintaining high-quality results.
|
| 23 |
+
|
| 24 |
+
language. Despite significant progress, accurate and fast video captioning remains a challenge.
|
| 25 |
+
|
| 26 |
+
Video captioning requires both 2D appearance information, which reflects the objects in the video, and 3D action information, which reflects the actions. The interaction between these two types of information is crucial for accurately captioning the actions of objects in the video. Most of the existing methods [36, 38, 22], shown in Fig. 1 (the upper branch), mainly involve three steps: (1) decoding the video and densely sampling frames; (2) extracting the 2D/3D features of the video frames offline; (3) training the model based on these 2D/3D features. In these methods, densely sampled video frames result in significant redundancy, which in turn increases the computation and inference time of the model. This is because the model needs to extract features from each video frame and use all
|
| 27 |
+
|
| 28 |
+

|
| 29 |
+
Figure 2. Comparison of model inference speed and CIDEr score on the MSRVTT dataset. I, MV and Res refer to I-frame, motion vector and residual, respectively. The test is run on a single NVIDIA V100 GPU with batch size set to 1.
|
| 30 |
+
|
| 31 |
+
of these features as input. Furthermore, extracting 2D appearance features, 3D action features, and region features for each video frame requires additional time. To address this issue and improve inference speed, some recent works [18, 29] have adopted an end-to-end approach that avoids extracting multiple visual features offline. As shown in Fig. 1 (the middle branch), the flow of their method is as follows: (1) decoding the video and densely sampling frames; (2) taking video frames directly as input and training the model end-to-end. These approaches involve a trainable visual feature extractor, rather than relying on multiple offline 2D/3D feature extractors. For example, SwinBERT [18] uses VidSwin [19] as the trainable feature extractor, while MV-GPT [29] uses ViViT [1]. While these two-step methods address the time consumption associated with offline feature extraction, they do not alleviate the computational burden and time required to handle the redundant information.
|
| 32 |
+
|
| 33 |
+
To address the above problems, we propose an end-to-end video captioning method based on compressed video. Our work significantly simplifies the video captioning pipeline by eliminating time-consuming video decoding and feature extraction steps. As in Fig. 1 (the lower branch), unlike previous methods, we take compressed video information as input and directly output a natural language description of the video. Compressed video is mainly composed of I-frames, motion vectors and residuals; there is no redundant information between them, and they are all refined information. Therefore, the model needs less computation to process compressed-domain information, and model inference is faster. At the same time, the end-to-end network structure in our proposed method also avoids the time consumption caused by extracting multiple features. Besides, our model is better at understanding the content of videos by utilizing the refined information in the compressed domain, including the 2D feature from the I-frame and the 3D action feature extracted from the motion vector and residual.
|
| 34 |
+
|
| 35 |
+
As shown in Fig. 2, compared with other two-step and three-step methods, such as SwinBERT [18], HMN [36] and SGN [27], our method is not only faster but also achieves competitive performance. Our model comprises two parts, as depicted in Fig. 4. One part consists of three encoders that extract features and an action encoder that fuses them, while the other part comprises a multimodal decoder that generates video captions. Specifically, we first extract the context feature, motion vector feature and residual feature of the compressed video through the I-frame Encoder, Motion Encoder, and Residual Encoder, respectively. The context feature contains information about objects in the video, but action information is missing. To extract the action feature of the video, we fuse the motion vector feature, residual feature, and context feature through the action encoder. Then we use the context feature and action feature as the visual input of the multimodal decoder to generate video captions.
|
| 36 |
+
|
| 37 |
+
The contributions of this paper are summarized below:
|
| 38 |
+
|
| 39 |
+
1. We propose a simple and effective transformer that can take compressed video as input and directly generate a video description.
|
| 40 |
+
2. Our experimental results demonstrate that our method is nearly $2 \times$ faster in inference than the fastest existing state-of-the-art method, while maintaining competitive results on three challenging video captioning datasets, i.e., MSVD, MSRVTT and VATEX.
|
| 41 |
+
|
| 42 |
+
# 2. Related Work
|
| 43 |
+
|
| 44 |
+
Compressed vision task. The main idea of introducing compressed video into computer vision tasks is to utilize the motion vectors and residuals in the compressed domain to avoid fully decoding all frames from the video, while saving storage space at the same time. Early work is mainly based on the MPEG-4 video codec [33, 16, 12, 4]. CoViAR [33] proposed a back-tracking technique to trace motion vectors back to the I-frame, which works on MPEG-4. MM-ViT [4] proposed a multi-modal transformer to process the I-frame, motion vector, residual and audio in the compressed video. Since the MPEG-4 codec is outdated, other works, e.g., MVCGC [13] and ATTP [14], are designed to work on other codecs such as H.264 and H.265 to ensure generalizability. Compared with MPEG-4, H.264 and H.265 allow a more flexible yet complicated compression, which makes it more challenging to learn from the compressed domain. MVCGC [13] proposed a self-supervised method to learn video representations by utilizing the mutual information between RGB video frames and motion vectors. ATTP [14] designed a lightweight deep neural network to process compressed video and achieve real-time action recognition on embedded AI devices. Similarly, our work
|
| 45 |
+
|
| 46 |
+
is conducted on the H.264 video codec, which is currently one of the most popular video codecs.
|
| 47 |
+
|
| 48 |
+
Video captioning. Video captioning aims to convert the content of videos into natural language descriptions, which requires the model to understand the objects in the video and their behavior. Some works focus on the design of the model structure. These methods usually extract features offline, and the models then use these features to generate captions through different network architectures. HMN [36] proposed a hierarchical modular network that serves as a strong video encoder, bridging videos and languages. ORG-TRL [38] proposes an object relational graph based encoder, which captures more detailed interaction features to enrich the visual representation. SGN [27] designed a semantic grouping network to group video frames with discriminating word phrases of the partially decoded caption. Some works explore additional information to help the model generate more accurate video captions. TextKG [9] proposes a two-stream network capable of knowledge-assisted video description using knowledge graphs. UniVL [20] learns powerful vision-and-language representations by pre-training the model on large-scale datasets, e.g., HowTo100M [21] and WebVid2M [2]. Some other works focus more on end-to-end video caption generation. SwinBERT [18] proposed an end-to-end transformer-based model, which takes video frame patches directly as inputs and then uses VidSwin to extract visual features. MV-GPT [29] designed an end-to-end encoder-decoder model to generate video captions directly from video frames and transcribed speech. We propose an end-to-end video captioning model based on the compressed domain, without decoding video frames or extracting features offline, which not only accelerates caption generation but also performs favorably against the state-of-the-art methods.
|
| 49 |
+
|
| 50 |
+
# 3. Methods
|
| 51 |
+
|
| 52 |
+
As mentioned above, our method aims to take the refined information (including I-frames, motion vectors and residuals) in the compressed domain as input to accelerate inference and improve performance for video captioning. To this end, we design an end-to-end transformer-based network as shown in Fig. 4. In this section, we first detail the information in the compressed video in Sec. 3.1, then introduce the model network in Secs. 3.2 and 3.3, and finally introduce the training strategy of the model in Sec. 3.4.
|
| 53 |
+
|
| 54 |
+
# 3.1. The Structure of Compressed Video
|
| 55 |
+
|
| 56 |
+
Modern video codecs utilize the temporal redundancy of successive video frames to compress raw video. As shown in Fig. 3, most modern codecs (e.g., H.264 and H.265) divide video frames into three different types according to their dependencies on other frames: I-frame
|
| 57 |
+
|
| 58 |
+

|
| 59 |
+
Figure 3. The GOP structure in compressed video. In each GOP, the first frame must be an I-frame, followed by several B/P-frames.
|
| 60 |
+
|
| 61 |
+
(intra-coded frame), P-frame (predictive-coded frame) and B-frame (bi-predictive coded frame). An I-frame is fully encoded independently using intra-prediction, without relying on other frames. Other frames, i.e., B-frames and P-frames, are encoded by referring to other frames using inter-prediction, which is stored in the form of motion vectors. A motion vector describes the movement of a group of pixels from a source (reference frame) to a destination (current B- or P-frame), and thus contains highly compressed motion information of successive video frames. The difference between P-frames and B-frames is that a B-frame can refer to frames before or after it, while a P-frame only refers to frames before it. Since predicting a frame from neighboring frames can be inaccurate, an additional residual error between the current frame and the prediction is calculated. We denote $\mathcal{I}_I$, $\mathcal{I}_P$ and $\mathcal{I}_B$ as the decoded I-frame, P-frame, and B-frame, and $\mathcal{I}_{mv}$ and $\Delta_{res}$ as the motion vector and residual of a P-/B-frame, respectively. In the compressed domain, a P-frame or B-frame can be reconstructed by
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
\mathcal{I}_{B/P} = \operatorname{Pred}\left(\mathcal{I}_{mv}, \mathcal{I}_{ref}\right) + \Delta_{res} \tag{1}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
where $\mathcal{I}_{ref}$ is the referenced frame, and Pred is the prediction method that reconstructs the current frame from the motion vector and the referenced frame. Since the reconstruction process is time-consuming, our model takes highly compressed information from the compressed domain directly as input to achieve end-to-end video captioning.
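For intuition, a simplified NumPy sketch of the block-wise reconstruction in Eq. (1) is shown below; the 16×16 block size and the assumption that the motion vector stores one integer (dy, dx) displacement per block are illustrative simplifications of what a real H.264 decoder does.

```python
import numpy as np

def reconstruct_frame(mv, residual, ref_frame, block=16):
    """Approximate I_{B/P} = Pred(I_mv, I_ref) + Delta_res from Eq. (1).

    mv:        (H//block, W//block, 2) integer per-block displacements (dy, dx).
    residual:  (H, W, C) residual error added on top of the motion-compensated prediction.
    ref_frame: (H, W, C) previously decoded reference frame.
    """
    h, w, _ = ref_frame.shape
    pred = np.zeros_like(ref_frame)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = mv[by // block, bx // block]
            sy = int(np.clip(by + dy, 0, h - block))
            sx = int(np.clip(bx + dx, 0, w - block))
            pred[by:by + block, bx:bx + block] = ref_frame[sy:sy + block, sx:sx + block]
    return pred + residual
```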
|
| 68 |
+
|
| 69 |
+
Moreover, successive frames are divided into several groups, called Groups of Pictures (GOPs). A GOP is an independent encoding and decoding unit, which means that frames in one GOP do not refer to any frames in other GOPs. Each GOP starts with an I-frame, followed by several P-frames or B-frames. For each GOP, we take one I-frame and $M$ B-/P-frames as inputs. The B-/P-frames are uniformly sampled from each GOP, and we use only their motion vectors and residuals as replacements. Therefore, the visual input of our model is
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
\begin{array}{l} X = \left[ \mathcal{I}_{I}^{(1)}, \mathcal{I}_{mv}^{(1,1)}, \Delta_{res}^{(1,1)}, \dots, \mathcal{I}_{mv}^{(M,1)}, \Delta_{res}^{(M,1)} \right], \\ \dots, \left[ \mathcal{I}_{I}^{(N)}, \mathcal{I}_{mv}^{(1,N)}, \Delta_{res}^{(1,N)}, \dots, \mathcal{I}_{mv}^{(M,N)}, \Delta_{res}^{(M,N)} \right] \end{array}
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
where $N$ is the number of GOPs sampled from each video and $M$ is the total number of P-/B-frames sampled from each GOP. We set $N$ according to the average GOP number,
|
| 76 |
+
|
| 77 |
+
and $M$ is equal to the maximum number of P-/B-frames in each GOP, which is a hyper-parameter during encoding.
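The uniform sampling of $N$ GOPs and $M$ motion-vector/residual pairs per GOP described above can be sketched as follows; the per-GOP dictionary layout returned by the (hypothetical) compressed-video reader is an assumption made for illustration.

```python
import numpy as np

def build_visual_input(gops, num_gops=8, frames_per_gop=59):
    """Assemble the model input X from a parsed compressed video.

    `gops` is assumed to be a list of dicts with keys 'iframe', 'mv' and 'res',
    where 'mv' and 'res' are lists over the B-/P-frames of that GOP.
    """
    gop_idx = np.linspace(0, len(gops) - 1, num=min(num_gops, len(gops)), dtype=int)
    samples = []
    for g in gop_idx:
        gop = gops[g]
        # Uniformly sample up to M B-/P-frames from this GOP.
        bp_idx = np.linspace(0, len(gop['mv']) - 1,
                             num=min(frames_per_gop, len(gop['mv'])), dtype=int)
        samples.append({
            'iframe': gop['iframe'],
            'mv': [gop['mv'][j] for j in bp_idx],
            'res': [gop['res'][j] for j in bp_idx],
        })
    return samples
```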
|
| 78 |
+
|
| 79 |
+
# 3.2. Model Architecture for Compressed Domain
|
| 80 |
+
|
| 81 |
+
Based on the GOP structure mentioned above, we propose a transformer-based structure to utilize the information from the compressed domain. Fig. 4 (left) shows the main framework of our proposed compressed video transformer. The model takes all information of the compressed video as input, including I-frames, motion vectors and residuals, while maintaining a fast inference speed. Specifically, we use three different Vision Transformers [8] (ViT) as encoders to extract the visual features of the I-frame, motion vector and residual. We adopt a pretrained Vision Transformer as the encoder to extract the context feature from the I-frame:
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
\mathcal{F}_{\mathrm{ctx}}^{(n)} = \operatorname{Encoder}_{\mathrm{I}}\left(\mathcal{I}_{I}^{(n)}\right).
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
For each B-frame or P-frame, we get a motion vector and a residual from the compressed domain. We use two lightweight Vision Transformers as encoders to extract features from motion vectors and residuals. The motion and residual features are added together to generate the B-/P-frame feature $\mathcal{F}_{\mathrm{BP}}^{(m,n)}$:
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\mathcal{F}_{\mathrm{BP}}^{(m,n)} = \operatorname{Encoder}_{\mathrm{mv}}\left(\mathcal{I}_{mv}^{(m,n)}\right) + \operatorname{Encoder}_{\mathrm{res}}\left(\Delta_{res}^{(m,n)}\right).
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
In this way, for each GOP we obtain $M$ B-/P-frame features
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
\mathcal{F}_{\mathrm{BP}}^{(n)} = \left[ \mathcal{F}_{\mathrm{BP}}^{(1,n)}, \dots, \mathcal{F}_{\mathrm{BP}}^{(M,n)} \right].
|
| 97 |
+
$$
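A compact PyTorch sketch of this per-GOP feature extraction is given below; `iframe_encoder`, `mv_encoder` and `res_encoder` stand in for the actual Vision Transformer backbones, whose exact configurations are not reproduced here.

```python
import torch.nn as nn

class GOPFeatureExtractor(nn.Module):
    """Compute the context feature F_ctx and B-/P-frame features F_BP for one GOP."""

    def __init__(self, iframe_encoder, mv_encoder, res_encoder):
        super().__init__()
        self.enc_i, self.enc_mv, self.enc_res = iframe_encoder, mv_encoder, res_encoder

    def forward(self, iframe, mvs, residuals):
        # iframe: (B, 3, 224, 224); mvs: (B, M, 4, 56, 56); residuals: (B, M, 3, 224, 224)
        f_ctx = self.enc_i(iframe)                     # context feature from the I-frame
        b, m = mvs.shape[:2]
        f_mv = self.enc_mv(mvs.flatten(0, 1))          # encode each motion vector
        f_res = self.enc_res(residuals.flatten(0, 1))  # encode each residual
        f_bp = (f_mv + f_res).view(b, m, -1)           # F_BP = Encoder_mv(.) + Encoder_res(.)
        return f_ctx, f_bp
```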
|
| 98 |
+
|
| 99 |
+
As motion vectors and residuals lack fine-grained context information, we use their features as queries to retrieve the rich context information in RGB frames instead of simply fusing them. We employ an action encoder to integrate the object information of the I-frame into the action information of the motion vectors and residuals; it takes the B-/P-frame features $\mathcal{F}_{\mathrm{BP}}^{(n)}$ of the current GOP and the context feature $\mathcal{F}_{\mathrm{ctx}}^{(n)}$ as input to generate the action feature $\mathcal{F}_{\mathrm{act}}^{(n)}$ of the current GOP. The action encoder is constructed from $N_{a}$ sets of alternately stacked self-attention and cross-attention blocks.
|
| 100 |
+
|
| 101 |
+
Specifically, the workflow of the action encoder is as follows. First, following the reconstruction process described in Eq. 1, we utilize the self-attention module to fuse the temporal representation of successive frames and obtain $\mathcal{F}_{\mathrm{att}}^{(n)}$:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
X = \mathcal{F}_{\mathrm{BP}}^{(n)} + \mathrm{Emb}_{\mathrm{p}} + \mathrm{Emb}_{\mathrm{t}},
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
Q = W_{q} * X, \quad K = W_{k} * X, \quad V = W_{v} * X,
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
\mathcal{F}_{\mathrm{att}}^{(n)} = \operatorname{SelfAttention}(Q, K, V),
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
where $\mathrm{Emb_p}$ denotes the positional embeddings, $\mathrm{Emb_t}$ denotes the type embeddings, and $W_{q}, W_{k}, W_{v}$ are learnable matrices. The type embeddings are added to distinguish B-frames from P-frames. We then use cross-attention to integrate the context feature $\mathcal{F}_{\mathrm{ctx}}^{(n)}$ from the I-frame into the feature $\mathcal{F}_{\mathrm{att}}^{(n)}$ from the motion vectors and residuals. Finally, the action feature $\mathcal{F}_{\mathrm{act}}^{(n)}$ is obtained by averaging:
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
Q^{\prime} = W_{q}^{\prime} * \mathcal{F}_{\mathrm{att}}^{(n)}, \quad K^{\prime} = W_{k}^{\prime} * \mathcal{F}_{\mathrm{ctx}}^{(n)}, \quad V^{\prime} = W_{v}^{\prime} * \mathcal{F}_{\mathrm{ctx}}^{(n)},
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
$$
|
| 122 |
+
\mathcal{F}_{\mathrm{att}}^{(n)^{\prime}} = \operatorname{CrossAttention}\left(Q^{\prime}, K^{\prime}, V^{\prime}\right),
|
| 123 |
+
$$
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\mathcal{F}_{\mathrm{act}}^{(n)} = \operatorname{Mean}\left(\mathcal{F}_{\mathrm{att}}^{(n)^{\prime}}\right),
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where $W_{q}^{\prime}, W_{k}^{\prime}, W_{v}^{\prime}$ are learnable matrices and $\operatorname{Mean}(\cdot)$ computes the average feature.
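One self-attention/cross-attention block of the action encoder can be sketched with `torch.nn.MultiheadAttention` as below; the fixed maximum sequence length, the learnable position/type embeddings, and taking the mean after a single block (rather than after $N_a$ stacked blocks) are simplifications.

```python
import torch
import torch.nn as nn

class ActionEncoderBlock(nn.Module):
    """Self-attention over B-/P-frame features, then cross-attention into the I-frame context."""

    def __init__(self, dim=768, heads=12, max_len=64):
        super().__init__()
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, dim))   # Emb_p
        self.type_emb = nn.Parameter(torch.zeros(1, max_len, dim))  # Emb_t (B- vs. P-frame)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f_bp, f_ctx):
        # f_bp: (B, M, dim) B-/P-frame features; f_ctx: (B, T, dim) I-frame context tokens.
        x = f_bp + self.pos_emb[:, :f_bp.size(1)] + self.type_emb[:, :f_bp.size(1)]
        f_att, _ = self.self_attn(x, x, x)               # fuse the temporal representation
        f_att, _ = self.cross_attn(f_att, f_ctx, f_ctx)  # retrieve context from the I-frame
        return f_att.mean(dim=1)                         # F_act = Mean(.)
```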
|
| 130 |
+
|
| 131 |
+
# 3.3. Multimodal Decoder for Video Captioning
|
| 132 |
+
|
| 133 |
+
The context features $\mathcal{F}_{\mathrm{ctx}}^{(n)}$ and action features $\mathcal{F}_{\mathrm{act}}^{(n)}$ of all GOPs are concatenated to form the visual representation:
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
\mathcal{V} = \left[ \mathcal{F}_{\mathrm{ctx}}^{(1)}, \mathcal{F}_{\mathrm{act}}^{(1)}, \dots, \mathcal{F}_{\mathrm{ctx}}^{(N)}, \mathcal{F}_{\mathrm{act}}^{(N)} \right].
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
Then we design a multimodal decoder to predict the video caption based on the visual representation $\mathcal{V}$. The multimodal decoder is composed of $N_{m}$ stacked masked self-attention modules, as shown in Fig. 4 (right), and its workflow is as follows:
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\mathcal{T}_{<\mathrm{t}} = \operatorname{Embedding}\left(Y_{<\mathrm{t}}\right),
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
$$
|
| 146 |
+
\mathcal{X} = \operatorname{Concat}(\mathcal{V}, \mathcal{T}_{<\mathrm{t}}),
|
| 147 |
+
$$
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
\mathcal{X}^{\prime} = \mathcal{X} + \mathrm{Emb}_{\mathrm{p}}^{\prime} + \mathrm{Emb}_{\mathrm{t}}^{\prime},
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
Q^{\prime\prime} = W_{q}^{\prime\prime} * \mathcal{X}^{\prime}, \quad K^{\prime\prime} = W_{k}^{\prime\prime} * \mathcal{X}^{\prime}, \quad V^{\prime\prime} = W_{v}^{\prime\prime} * \mathcal{X}^{\prime},
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
$$
|
| 158 |
+
h_{\mathrm{t}} = \operatorname{MaskedSelfAttention}\left(Q^{\prime\prime}, K^{\prime\prime}, V^{\prime\prime}\right),
|
| 159 |
+
$$
|
| 160 |
+
|
| 161 |
+
$$
|
| 162 |
+
p\left(y_{\mathrm{t}} \mid \mathcal{V}, \mathcal{T}_{<\mathrm{t}}\right) = \operatorname{softmax}\left(\operatorname{Linear}\left(h_{\mathrm{t}}\right)\right),
|
| 163 |
+
$$
|
| 164 |
+
|
| 165 |
+
where $Y_{<t}$ are the words generated in the previous $t - 1$ steps, Embedding() converts one-hot word vectors into word embeddings, $\mathrm{Emb}_{\mathrm{p}}'$ denotes the positional embeddings, $\mathrm{Emb}_{\mathrm{t}}'$ is used to distinguish the different input modalities, $W_{q}''$, $W_{k}''$, $W_{v}''$ are learnable matrices, and $y_{\mathrm{t}}$ is the prediction at the current step. In the multimodal decoder, the position embeddings and type embeddings are added to distinguish the order and type of features, respectively.
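To make the decoding process concrete, a greedy autoregressive inference sketch is given below; the decoder interface (returning next-token logits for the last position, with the causal mask applied internally) and the special-token ids are assumptions for illustration.

```python
import torch

@torch.no_grad()
def greedy_caption(decoder, visual_feats, bos_id, eos_id, max_len=22):
    """Generate one caption token-by-token from the visual representation V."""
    tokens = torch.tensor([[bos_id]], device=visual_feats.device)
    for _ in range(max_len - 1):
        logits = decoder(visual_feats, tokens)          # p(y_t | V, T_<t), shape (1, vocab)
        next_id = logits.argmax(dim=-1, keepdim=True)   # greedy choice of the next word
        tokens = torch.cat([tokens, next_id], dim=1)
        if next_id.item() == eos_id:                    # stop at the end-of-sentence token
            break
    return tokens.squeeze(0).tolist()
```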
|
| 166 |
+
|
| 167 |
+
# 3.4. Optimization
|
| 168 |
+
|
| 169 |
+
We train our model using the cross-entropy loss function. Given the ground-truth indices of the previous $(t-1)$ words and the visual representation $\mathcal{V}$, we obtain the prediction for the current $t$-th word $y_{t}^{*}$. The training loss is then computed as
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
L = -\sum_{t=1}^{l} \log p\left(y_{t}^{*} \mid y_{:t-1}^{*}, \mathcal{V}\right),
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+

|
| 176 |
+
Figure 4. The architecture of our proposed Compressed Video Captioner. Left: the Compressed Video Transformer, which extracts a video representation for each GOP. A large visual backbone is used to extract visual representations from the I-frame, and two small Vision Transformers are used to extract residual and motion representations from the compressed domain. After that, an action encoder is used to fuse the features. Right: the Multimodal Decoder. We use a multimodal decoder with a causal mask to learn the caption.
|
| 177 |
+
|
| 178 |
+
where $y_{1:l}^{*}$ is the ground-truth sequence and $l$ is the total length of the predicted caption. Notably, we add label smoothing in the implementation to mitigate overconfidence.
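With PyTorch's built-in label smoothing (available in `F.cross_entropy` since version 1.10), the objective above can be written compactly; the padding index and smoothing value are assumptions.

```python
import torch.nn.functional as F

def caption_loss(logits, targets, pad_id=0, smoothing=0.1):
    """Label-smoothed cross-entropy over all decoding steps.

    logits:  (B, L, vocab) predicted distributions for each step.
    targets: (B, L) ground-truth token ids; padding positions are ignored.
    """
    return F.cross_entropy(
        logits.flatten(0, 1), targets.flatten(),
        ignore_index=pad_id, label_smoothing=smoothing,
    )
```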
|
| 179 |
+
|
| 180 |
+
# 4. Experiments
|
| 181 |
+
|
| 182 |
+
# 4.1. Datasets
|
| 183 |
+
|
| 184 |
+
MSRVTT [34] is a generic video captioning dataset that comprises 10,000 video clips, with each clip annotated with 20 captions. On average, each video clip lasts about 15 seconds. The standard split involves the use of 6,513 clips for training, 497 clips for validation, and 2,990 clips for testing.
|
| 185 |
+
|
| 186 |
+
MSVD [3] contains 1,970 videos, with each video clip having 40 captions. The average duration of each video clip is around 10 seconds. We adopt the standard split, which involves using 1,200 videos for training, 100 videos for validation, and 670 videos for testing.
|
| 187 |
+
|
| 188 |
+
VATEX [32] is a large-scale dataset which contains about 41,250 video clips. Each video clip is about 10 seconds long, and 10 English captions are manually annotated per clip. We use the official training set for training and evaluate the results on the public test set.
|
| 189 |
+
|
| 190 |
+
# 4.2. Evaluation Metrics
|
| 191 |
+
|
| 192 |
+
To evaluate the effectiveness of our approach, we use the standard metrics for video captioning: BLEU@4 (B4) [23],
|
| 193 |
+
|
| 194 |
+
METEOR (M) [7], ROUGE (R) [17], and CIDEr (C) [31]. Each metric provides a unique perspective on the quality of the generated captions. BLEU@4 evaluates sentence fluency, METEOR assesses semantic accuracy, ROUGE measures word order, and CIDEr evaluates the degree to which the caption conveys key information. By considering these different metrics, we can comprehensively evaluate the performance of our model.
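These metrics are commonly computed with the pycocoevalcap toolkit; a minimal sketch of scoring CIDEr with it is shown below, assuming the usual `compute_score` interface and pre-tokenized captions.

```python
from pycocoevalcap.cider.cider import Cider

def cider(references, hypotheses):
    """references / hypotheses: dict mapping video id -> list of caption strings."""
    score, per_video_scores = Cider().compute_score(references, hypotheses)
    return score
```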
|
| 195 |
+
|
| 196 |
+
# 4.3. Implementation Details
|
| 197 |
+
|
| 198 |
+
Our model is implemented using PyTorch, and to read motion vectors and residuals from the compressed video we utilize the x264 library in FFmpeg. Before training and testing, the videos are resized to 240 on their smallest edge and compressed using the H.264 codec with KeyInt set to 60. For each video, we sample a fixed number of 8 GOPs, each of which contains 1 I-frame, 59 motion vectors, and 59 residuals. The size of the I-frame and residual is $3 \times 224 \times 224$, and the size of the motion vector is $4 \times 56 \times 56$. We use Adam with an initial learning rate of 1e-4, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, and a warmup strategy is adopted during training. The maximum length of the caption sentence is set to 22, which includes two special tokens, i.e., the [CLS] token and the [EOS] token. The feature dimension in each block is set to 768, and the number of heads in the multi-head attention is set to 12 for all layers. The batch size is set to 64 and the number of training epochs to 20. The I-frame encoder has 12 layers and is
|
| 199 |
+
|
| 200 |
+
<table><tr><td rowspan="2">Method</td><td rowspan="2">Decoding</td><td rowspan="2">E2E</td><td colspan="3">Features</td><td colspan="4">MSVD</td><td colspan="4">MSRVTT</td></tr><tr><td>2D Appearance</td><td>3D Action</td><td>Object Detection</td><td>B4</td><td>M</td><td>R</td><td>C</td><td>B4</td><td>M</td><td>R</td><td>C</td></tr><tr><td>SAAT [39]</td><td>✓</td><td>-</td><td>IncepResnetV2</td><td>C3D</td><td>-</td><td>46.5</td><td>33.5</td><td>69.4</td><td>81.0</td><td>39.9</td><td>27.7</td><td>61.2</td><td>51</td></tr><tr><td>STG-KD [22]</td><td>✓</td><td>-</td><td>ResNet101</td><td>I3D</td><td>FasterRCNN</td><td>52.2</td><td>36.9</td><td>73.9</td><td>93.0</td><td>40.5</td><td>28.3</td><td>60.9</td><td>47.1</td></tr><tr><td>PMI-CAP [5]</td><td>✓</td><td>-</td><td>IncepResnetV2</td><td>C3D</td><td>-</td><td>54.6</td><td>36.4</td><td>-</td><td>95.1</td><td>42.1</td><td>28.7</td><td>-</td><td>49.4</td></tr><tr><td>ORG-TRL [38]</td><td>✓</td><td>-</td><td>IncepResnetV2</td><td>C3D</td><td>FasterRCNN</td><td>54.3</td><td>36.4</td><td>73.9</td><td>95.2</td><td>43.6</td><td>28.8</td><td>62.1</td><td>50.9</td></tr><tr><td>OpenBook [37]</td><td>✓</td><td>-</td><td>IncepResnetV2</td><td>C3D</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>42.8</td><td>29.3</td><td>61.7</td><td>52.9</td></tr><tr><td>SGN [27]</td><td>✓</td><td>-</td><td>ResNet101</td><td>C3D</td><td>-</td><td>52.8</td><td>35.5</td><td>72.9</td><td>94.3</td><td>40.8</td><td>28.3</td><td>60.8</td><td>49.5</td></tr><tr><td>MGRMP [6]</td><td>✓</td><td>-</td><td>IncepResnetV2</td><td>C3D</td><td>-</td><td>55.8</td><td>36.9</td><td>74.5</td><td>98.5</td><td>41.7</td><td>28.9</td><td>62.1</td><td>51.4</td></tr><tr><td>HMN [36]</td><td>✓</td><td>-</td><td>IncepResnetV2</td><td>C3D</td><td>FasterRCNN</td><td>59.2</td><td>37.7</td><td>75.1</td><td>104</td><td>43.5</td><td>29</td><td>62.7</td><td>51.5</td></tr><tr><td>UniVL [20]</td><td>✓</td><td>-</td><td></td><td>S3D</td><td></td><td>-</td><td>-</td><td>-</td><td>-</td><td>42.2</td><td>28.8</td><td>61.2</td><td>49.9</td></tr><tr><td>SwinBERT [18]</td><td>✓</td><td>✓</td><td></td><td>VidSwin</td><td></td><td>58.2</td><td>41.3</td><td>77.5</td><td>120.6</td><td>41.9</td><td>29.9</td><td>62.1</td><td>53.8</td></tr><tr><td>MV-GPT [29]</td><td>✓</td><td>✓</td><td></td><td>ViViT</td><td></td><td>-</td><td>-</td><td>-</td><td>-</td><td>48.9</td><td>38.7</td><td>64</td><td>60</td></tr><tr><td>Ours</td><td>-</td><td>✓</td><td></td><td>CLIP</td><td></td><td>55.9</td><td>39.9</td><td>76.8</td><td>113.0</td><td>43.1</td><td>29.8</td><td>62.7</td><td>56.2</td></tr><tr><td>Ours(ViT/L14)</td><td>-</td><td>✓</td><td></td><td>CLIP</td><td></td><td>60.1</td><td>41.4</td><td>78.2</td><td>121.5</td><td>44.4</td><td>30.3</td><td>63.4</td><td>57.2</td></tr></table>
|
| 201 |
+
|
| 202 |
+
Table 1. Comparison with state-of-the-art methods on the test split of MSVD and MSRVTT. Decoding means decoding video frames, and E2E means end-to-end training without offline feature extraction. For a fair comparison, we gray out models that pre-train on large-scale datasets.
|
| 203 |
+
|
| 204 |
+
<table><tr><td></td><td>B4</td><td>M</td><td>R</td><td>C</td></tr><tr><td>NITS-VC [30]</td><td>20.0</td><td>18.0</td><td>42.0</td><td>24.0</td></tr><tr><td>VATEX [32]</td><td>28.4</td><td>21.7</td><td>47</td><td>45.1</td></tr><tr><td>ORG-TRL [38]</td><td>32.1</td><td>22.2</td><td>48.9</td><td>49.7</td></tr><tr><td>Support-set [24]</td><td>32.8</td><td>24.4</td><td>49.1</td><td>51.2</td></tr><tr><td>SwinBERT [18]</td><td>38.7</td><td>26.2</td><td>53.2</td><td>73</td></tr><tr><td>VideoCoCa [35]</td><td>39.7</td><td>-</td><td>54.5</td><td>77.8</td></tr><tr><td>Ours</td><td>31.4</td><td>23.2</td><td>49.4</td><td>52.7</td></tr><tr><td>Ours(ViT/L14)</td><td>35.8</td><td>25.3</td><td>52.0</td><td>64.8</td></tr></table>
|
| 205 |
+
|
| 206 |
+
Table 2. Comparison with state-of-the-art methods on the test split of VATEX. For a fair comparison, we gray out models that pretrain on large-scale datasets.
|
| 207 |
+
|
| 208 |
+
initialized with pre-trained weights from the CLIP [25] visual encoder, while the other encoders and the multimodal decoder are randomly initialized. The layers for the motion encoder, residual encoder and action encoder are 2, 2 and 1, respectively. Lastly, we set the hyperparameters $M$ , $N$ , $N_{a}$ , and $N_{m}$ to 60, 8, 2 and 2.
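The video preprocessing described above (re-encoding with H.264 and a fixed keyframe interval, with the shorter edge scaled to 240) can be approximated with FFmpeg; the exact filter expression and encoder flags used by the authors may differ, so treat this as a hedged sketch.

```python
import subprocess

def reencode_for_captioning(src, dst, keyint=60):
    """Re-encode a video with H.264 and a fixed GOP size (keyframe interval)."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        # Scale the shorter edge to 240 while keeping the aspect ratio.
        "-vf", "scale='if(gt(iw,ih),-2,240)':'if(gt(iw,ih),240,-2)'",
        "-c:v", "libx264", "-g", str(keyint), "-keyint_min", str(keyint),
        dst,
    ], check=True)
```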
|
| 209 |
+
|
| 210 |
+
# 4.4. Performance Comparison with SOTA Methods
|
| 211 |
+
|
| 212 |
+
In order to verify the effectiveness of the method, we evaluated the proposed model against state-of-the-art methods on three public benchmark datasets.
|
| 213 |
+
|
| 214 |
+
MSVD dataset. The evaluation results on the MSVD dataset are reported in Table 1 (left). We conducted experiments using two sizes of the I-frame encoder, namely $B / 16$ and $L / 14$ , with the results reported in the article based on $B / 16$ , unless otherwise stated. Our method using the $L / 14$ I-frame encoder achieves the best performance on all metrics, with only SwinBERT [18] performing better than our
|
| 215 |
+
|
| 216 |
+
method using $B / 16$ . Our approach stands out by being able to directly utilize compressed domain information and extract visual features in real-time. The result shows that our model can efficiently extract information from the refined compressed domain information.
|
| 217 |
+
|
| 218 |
+
MSRVTT dataset. On the MSRVTT benchmark, our method outperforms other approaches on all metrics, as shown in Table 1 (right). Specifically, both the $B / 16$-based and the $L / 14$-based models achieve higher scores than other methods. In particular, our method achieves a CIDEr score of $56.2 / 57.2$, which represents a significant improvement of $+2.4 / +3.4$. This result demonstrates that our approach can generate captions with higher semantic accuracy than other methods based on video decoding. CIDEr [31] is particularly effective at capturing human consensus, which makes our achievement on this metric even more notable.
|
| 219 |
+
|
| 220 |
+
VATEX dataset. Our method is evaluated on a large-scale dataset, as shown in Table 2. We achieve the second-best results on all metrics, falling behind SwinBERT [18]. Our approach involves extracting visual features using three Vision Transformer encoders, while the I-frame encoder is initialized with the pre-trained CLIP [25] model on LAION-400M [28]. In contrast, SwinBERT uses the VidSwin backbone [19], which is pre-trained on the Kinetic-600 dataset [15]. It is worth noting that LAION-400M is a large image-text dataset, while Kinetics-600 is a video-text dataset, and VATEX dataset is a subset of Kinetics-600 videos. SwinBERT outperforms our method on VATEX due to its backbone pre-trained on Kinetics-600.
|
| 221 |
+
|
| 222 |
+
<table><tr><td rowspan="2">Method</td><td rowspan="2">Data Type</td><td colspan="3">Inference Time ↓</td><td rowspan="2">CIDEr ↑</td></tr><tr><td>Feature Extraction</td><td>Model Time</td><td>Total</td></tr><tr><td>SGN</td><td>RGB Video Frames</td><td>303 ms</td><td>275 ms</td><td>578 ms</td><td>49.5</td></tr><tr><td>HMN</td><td>RGB Video Frames</td><td>2,710 ms</td><td>108 ms</td><td>2,818 ms</td><td>51.5</td></tr><tr><td>SwinBERT</td><td>RGB Video Frames</td><td colspan="2">339 ms</td><td>339 ms</td><td>53.8</td></tr><tr><td>Ours</td><td>I-frame</td><td colspan="2">146 ms</td><td>146 ms</td><td>54.1</td></tr><tr><td>Ours</td><td>I-frame+MV</td><td colspan="2">153 ms</td><td>153 ms</td><td>55.3</td></tr><tr><td>Ours</td><td>I-frame+MV+Res</td><td colspan="2">178 ms</td><td>178 ms</td><td>56.2</td></tr></table>
|
| 223 |
+
|
| 224 |
+
# 4.5. Speed Comparison with the SOTA Methods
|
| 225 |
+
|
| 226 |
+
To evaluate the speed of our method, we compare it to three representative methods, namely SGN [27], HMN [36], and SwinBERT [18], as reported in Table 3. SGN is a three-step method that first decodes video frames and densely samples them, then extracts the 2D appearance and 3D action features offline based on ResNet101 [11] and C3D [10] (consuming $303~\mathrm{ms}$), and finally uses the visual features as the input of the model (consuming $275~\mathrm{ms}$). Therefore, the total time for SGN to generate a video caption is $578~\mathrm{ms}$. HMN achieves the best results among the three-step models, but it is relatively slow as it requires offline region feature extraction based on Faster RCNN [26] (consuming $2,520~\mathrm{ms}$), leading to a total time of $2,818~\mathrm{ms}$. SwinBERT, on the other hand, is an end-to-end method that does not extract multiple features offline, using only $339~\mathrm{ms}$.
|
| 227 |
+
|
| 228 |
+
Compared to these methods, our proposed method does not require a dense sampling of video frames or the extraction of multiple features offline. As shown in Table 3, our baseline method only considers the I-frame of the entire video, achieving a CIDEr score of 54.1 and a total time of $146\mathrm{ms}$ . By integrating the motion vector, we improved the CIDEr to 55.3, demonstrating that the action information in the motion vector helps the model generate captions. Furthermore, by incorporating residual information, the CIDEr score is further improved by 0.9 to reach 56.2. Although considering three inputs increases our total inference time, our speed is still nearly 2 times faster than SwinBERT, 3 times faster than SGN, and 15 times faster than HMN.
|
| 229 |
+
|
| 230 |
+
# 4.6. Ablation Study
|
| 231 |
+
|
| 232 |
+
Impact of input information. To evaluate the effectiveness of different input information in our method, we conduct several experiments on the MSRVTT dataset, as shown in Table 4. To investigate the roles of the I-frame, motion vector, and residual, we first experiment with using only one of them. As shown in Table 4, using only the I-frame, motion vector, or residual achieves CIDEr scores of 54.1, 19.4, and 13.0, respectively. This indicates that the model can directly use the I-frame instead of the motion vector and residual.
|
| 233 |
+
|
| 234 |
+
Table 3. A detailed comparison of speed with other methods on the test split of the MSRVTT dataset. During the test, the model runs on an NVIDIA Tesla V100 GPU and the batch size is set to 1. The time cost is measured on the overall MSRVTT test split.
|
| 235 |
+
|
| 236 |
+
<table><tr><td colspan="3">Input</td><td rowspan="2">Module
|
| 237 |
+
En_A</td><td rowspan="2">B4</td><td rowspan="2">M</td><td rowspan="2">R</td><td rowspan="2">C</td></tr><tr><td>I1</td><td>Imv</td><td>Δres</td></tr><tr><td>✓</td><td>-</td><td>-</td><td>-</td><td>41.6</td><td>29.7</td><td>62.3</td><td>54.1</td></tr><tr><td>-</td><td>✓</td><td>-</td><td>-</td><td>27.3</td><td>21.6</td><td>52.6</td><td>19.4</td></tr><tr><td>-</td><td>-</td><td>✓</td><td>-</td><td>23.9</td><td>20.5</td><td>51.0</td><td>13.0</td></tr><tr><td>✓</td><td>✓</td><td>-</td><td>✓</td><td>43.4</td><td>29.9</td><td>62.6</td><td>55.3</td></tr><tr><td>✓</td><td>-</td><td>✓</td><td>✓</td><td>42.2</td><td>30.0</td><td>62.5</td><td>54.9</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>-</td><td>42.1</td><td>30.1</td><td>62.4</td><td>54.3</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>43.1</td><td>29.8</td><td>62.7</td><td>56.2</td></tr></table>
|
| 238 |
+
|
| 239 |
+
Table 4. Ablation study of different inputs on the test subset of MSRVTT. $\mathcal{I}_I$, $\mathcal{I}_{mv}$ and $\Delta_{res}$ denote the decoded I-frame, motion vector and residual, respectively, and En_A denotes the action encoder.
|
| 240 |
+
|
| 241 |
+
<table><tr><td>KeyInt (M)</td><td>GOP Nums (N)</td><td>Inference Time</td><td>B4</td><td>M</td><td>R</td><td>C</td></tr><tr><td>250</td><td>2</td><td>153 ms</td><td>39.6</td><td>28.7</td><td>60.8</td><td>49.5</td></tr><tr><td>60</td><td>2</td><td>131 ms</td><td>41.6</td><td>29.3</td><td>61.7</td><td>52.4</td></tr><tr><td>60</td><td>4</td><td>139 ms</td><td>42.8</td><td>29.9</td><td>62.6</td><td>55.3</td></tr><tr><td>60</td><td>8</td><td>178 ms</td><td>43.1</td><td>29.8</td><td>62.7</td><td>56.2</td></tr><tr><td>60</td><td>10</td><td>187 ms</td><td>42.7</td><td>29.8</td><td>62.6</td><td>55.5</td></tr></table>
|
| 242 |
+
|
| 243 |
+
Table 5. Ablation study of GOP numbers on the MSRVTT test subset.
|
| 244 |
+
|
| 245 |
+
By jointly using the I-frame and motion vector and fusing their information through the action encoder, we achieve a CIDEr score of 55.3. Similarly, using the I-frame and residual achieves a score of 54.9. This demonstrates that the motion vector and residual help the model generate more accurate captions. The performance of the model is further improved by inputting all three types of information, achieving a CIDEr score of 56.2, an improvement of 1.7. Removing the action encoder from the proposed method results in a slight drop in CIDEr score, from 56.2 to 54.3. This demonstrates that the action encoder helps the model integrate the object information of the I-frame into the action information of the motion vector and residual.
|
| 246 |
+
|
| 247 |
+
Impact of GOP numbers. The GOP is a fundamental unit in compressed video that affects the compression rate. A larger GOP size results in fewer GOPs and commonly a higher compression rate. In video encoders (e.g., FFmpeg with x264), the GOP size is determined by the KeyInt parameter. To
|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
Figure 5. Qualitative results on the MSRVTT, MSVD and VATEX dataset. We show the input of our model, which is in compressed domain. The red, green and blue borders indicate I-frame, motion vector and residual, respectively.
|
| 251 |
+
|
| 252 |
+
<table><tr><td>En_I</td><td>En_M</td><td>En_R</td><td>En_A</td><td>De_M</td><td>CIDEr</td></tr><tr><td>12</td><td>2</td><td>2</td><td>1</td><td>2</td><td>56.2</td></tr><tr><td>24</td><td>2</td><td>2</td><td>1</td><td>2</td><td>57.2</td></tr><tr><td>12</td><td>4</td><td>4</td><td>2</td><td>2</td><td>55.2</td></tr><tr><td>12</td><td>2</td><td>2</td><td>1</td><td>4</td><td>54.9</td></tr><tr><td>12</td><td>4</td><td>4</td><td>2</td><td>4</td><td>55.4</td></tr></table>
|
| 253 |
+
|
| 254 |
+
Table 6. Ablation study about module layers on the MSRVTT test subset. En_I, En_M, En_R, En_A and De_M refer to the I-frame encoder, motion encoder, residual encoder, action encoder and multimodal decoder of the model respectively.
|
| 255 |
+
|
| 256 |
+
investigate the impact of GOP size on our video captioning model, we experiment with different GOP numbers and KeyInt values, as shown in Table 5. Comparing KeyInt values of 250 and 60, we observe that a smaller GOP size leads to better model performance (49.5 CIDEr vs. 52.4 CIDEr). Sampling different GOP numbers under the same KeyInt, the best performance is achieved by setting the GOP number to 8 with KeyInt set to 60. While performance improves with more GOPs, speed decreases due to the increased computation as more information is included.
|
| 257 |
+
|
| 258 |
+
Impact of model layers. To investigate the impact of different model layers on our proposed method, we conduct an ablation study on the MSRVTT test subset, as shown in Table 6. Given that the I-frame contains more complex information, we design a deep encoder with more layers for the I-frame, while using shallow encoders for the motion vector and residual. Our results show that the performance of the model improves with an increase in the number of layers in the I-frame encoder (56.2 CIDEr to 57.2 CIDEr). However, adding more layers to other modules did not result in further
|
| 259 |
+
|
| 260 |
+
improvements in model performance.
|
| 261 |
+
|
| 262 |
+
# 4.7. Qualitative Results
|
| 263 |
+
|
| 264 |
+
As shown in Fig. 5, we present the qualitative results of our proposed method on three datasets (i.e., MSVD, MSRVTT, and VATEX). Specifically, we visualize the input I-frame, motion vector, and residual, and compare the predicted description with the ground truth. Our method consistently produces semantically consistent descriptions that closely align with the ground truth across all three datasets. Furthermore, the results demonstrate a strong ability to capture motion behavior in the videos.
|
| 265 |
+
|
| 266 |
+
# 5. Conclusion
|
| 267 |
+
|
| 268 |
+
In this paper, we introduce an end-to-end transformer-based model for video captioning that takes compressed video as input to eliminate redundant information. Our method is evaluated on three challenging datasets, and the results demonstrate that it is not only fast but also competitive with state-of-the-art methods. In the future, we plan to further improve our method in two ways: (1) adding additional modalities such as audio, text, and knowledge graphs to enhance the quality of the generated captions; (2) pre-training the model on a large-scale dataset to further boost the overall performance in the compressed domain.
|
| 269 |
+
|
| 270 |
+
# Acknowledgement
|
| 271 |
+
|
| 272 |
+
Libo Zhang was supported by the Youth Innovation Promotion Association, CAS (2020111). Heng Fan and his employer received no financial support for the research, authorship, and/or publication of this article. This work was done during an internship at ByteDance Inc.
|
| 273 |
+
|
| 274 |
+
# References
|
| 275 |
+
|
| 276 |
+
[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. Vivit: A video vision transformer. In IEEE/CVF International Conference on Computer Vision, pages 6816-6826, 2021. 2
|
| 277 |
+
[2] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In IEEE/CVF International Conference on Computer Vision, pages 1708-1718, 2021. 3
|
| 278 |
+
[3] David L. Chen and William B. Dolan. Collecting highly parallel data for paraphrase evaluation. In Annual Meeting of the Association for Computational Linguistics, 2011. 5
|
| 279 |
+
[4] Jiawei Chen and Chiu Man Ho. Mm-vit: Multi-modal video transformer for compressed video action recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1910-1921, 2022. 2
|
| 280 |
+
[5] Shaoxiang Chen, Wenhao Jiang, Wei Liu, and Yu-Gang Jiang. Learning modality interaction for temporal sentence localization and event captioning in videos. In European Conference on Computer Vision, pages 333-351, 2020. 6
|
| 281 |
+
[6] Shaoxiang Chen and Yu-Gang Jiang. Motion guided region message passing for video captioning. In IEEE/CVF International Conference on Computer Vision, pages 1523-1532, 2021. 6
|
| 282 |
+
[7] Michael J. Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376-380, 2014. 5
|
| 283 |
+
[8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. 4
|
| 284 |
+
[9] Xin Gu, Guang Chen, Yufei Wang, Libo Zhang, Tiejian Luo, and Longyin Wen. Text with knowledge graph augmented transformer for video captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18941-18951, 2023. 3
|
| 285 |
+
[10] Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6546-6555, 2018. 7
|
| 286 |
+
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 7
|
| 287 |
+
[12] Lianghua Huang, Yu Liu, Bin Wang, Pan Pan, Yinghui Xu, and Rong Jin. Self-supervised video representation learning by context and motion decoupling. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13886-13895, 2021. 2
|
| 288 |
+
[13] Yuqi Huo, Mingyu Ding, Haoyu Lu, Nanyi Fei, Zhiwu Lu, Ji-Rong Wen, and Ping Luo. Compressed video contrastive learning. Advances in Neural Information Processing Systems, 34:14176-14187, 2021. 2
|
| 289 |
+
|
| 290 |
+
[14] Yuqi Huo, Xiaoli Xu, Yao Lu, Yulei Niu, Mingyu Ding, Zhiwu Lu, Tao Xiang, and Ji-rong Wen. Lightweight action recognition in compressed videos. In Computer Vision-ECCV 2020 Workshops: Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16, pages 337-352. Springer, 2020. 2
|
| 291 |
+
[15] Ang Li, Meghan Thotakuri, David A Ross, João Carreira, Alexander Vostrikov, and Andrew Zisserman. The AVA-Kinetics localized human actions video dataset. arXiv preprint arXiv:2005.00214, 2020. 6
|
| 292 |
+
[16] Jiapeng Li, Ping Wei, Yongchi Zhang, and Nanning Zheng. A slow-i-fast-p architecture for compressed video action recognition. In ACM International Conference on Multimedia, pages 2039-2047, 2020. 2
|
| 293 |
+
[17] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Annual Meeting of the Association for Computational Linguistics, pages 74-81, 2004. 5
|
| 294 |
+
[18] Kevin Lin, Linjie Li, Chung-Ching Lin, Faisal Ahmed, Zhe Gan, Zicheng Liu, Yumao Lu, and Lijuan Wang. Swinbert: End-to-end transformers with sparse attention for video captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17949-17958, 2022. 2, 3, 6, 7
|
| 295 |
+
[19] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video swin transformer. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3192-3201, 2022. 2, 6
|
| 296 |
+
[20] Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, and Ming Zhou. Univl: A unified video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353, 2020. 3, 6
|
| 297 |
+
[21] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips. In IEEE/CVF International Conference on Computer Vision, pages 2630-2640, 2019. 3
|
| 298 |
+
[22] Boxiao Pan, Haoye Cai, De-An Huang, Kuan-Hui Lee, Adrien Gaidon, Ehsan Adeli, and Juan Carlos Niebles. Spatio-temporal graph for video captioning with knowledge distillation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10867-10876, 2020. 1, 6
|
| 299 |
+
[23] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics, pages 311-318, 2002. 5
|
| 300 |
+
[24] Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, Joao Henriques, and Andrea Vedaldi. Support-set bottlenecks for video-text representation learning. arXiv preprint arXiv:2010.02824, 2020. 6
|
| 301 |
+
[25] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763, 2021. 6
|
| 302 |
+
[26] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region
|
| 303 |
+
|
| 304 |
+
proposal networks. Advances in Neural Information Processing Systems, 28, 2015. 7
|
| 305 |
+
[27] Hobin Ryu, Sunghun Kang, Haeyong Kang, and Chang D. Yoo. Semantic grouping network for video captioning. In Association for the Advancement of Artificial Intelligence, pages 2514-2522, 2021. 2, 3, 6, 7
|
| 306 |
+
[28] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021. 6
|
| 307 |
+
[29] Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, and Cordelia Schmid. End-to-end generative pretraining for multimodal video captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17959-17968, 2022. 2, 3, 6
|
| 308 |
+
[30] Alok Singh, Thoudam Doren Singh, and Sivaji Bandyopadhyay. NITS-VC system for VATEX video captioning challenge 2020. arXiv preprint arXiv:2006.04058, 2020. 6
|
| 309 |
+
[31] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4566-4575, 2015. 5, 6
|
| 310 |
+
[32] Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, and William Yang Wang. Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In IEEE/CVF International Conference on Computer Vision, 2019. 5, 6
|
| 311 |
+
[33] Chao-Yuan Wu, Manzil Zaheer, Hexiang Hu, R Manmatha, Alexander J Smola, and Philipp Krahenbuhl. Compressed video action recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6026-6035, 2018. 2
|
| 312 |
+
[34] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. MSR-VTT: A large video description dataset for bridging video and language. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5288-5296, 2016. 5
|
| 313 |
+
[35] Shen Yan, Tao Zhu, Zirui Wang, Yuan Cao, Mi Zhang, Soham Ghosh, Yonghui Wu, and Jiahui Yu. Video-text modeling with zero-shot transfer from contrastive captioners. arXiv preprint arXiv:2212.04979, 2022. 6
|
| 314 |
+
[36] Hanhua Ye, Guorong Li, Yuankai Qi, Shuhui Wang, Qingming Huang, and Ming-Hsuan Yang. Hierarchical modular network for video captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17939-17948, 2022. 1, 2, 3, 6, 7
|
| 315 |
+
[37] Ziqi Zhang, Zhongang Qi, Chunfeng Yuan, Ying Shan, Bing Li, Ying Deng, and Weiming Hu. Open-book video captioning with retrieve-copy-generate network. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9837-9846, 2021. 6
|
| 316 |
+
[38] Ziqi Zhang, Yaya Shi, Chunfeng Yuan, Bing Li, Peijin Wang, Weiming Hu, and Zheng-Jun Zha. Object relational graph with teacher-recommended learning for video captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13275-13285, 2020. 1, 3, 6
|
| 317 |
+
|
| 318 |
+
[39] Qi Zheng, Chaoyue Wang, and Dacheng Tao. Syntax-aware action targeting for video captioning. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13093-13102, 2020. 6
|
accurateandfastcompressedvideocaptioning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6cc914a25ad6dcb58f43939ada13aa4ebbe5a904755bf52dfe2a0865adcc7e2a
|
| 3 |
+
size 625163
|
accurateandfastcompressedvideocaptioning/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:edf0d07ea79ca274b87db6d239f6f405427305ba86efd737997d9d9f32a46cea
|
| 3 |
+
size 340128
|
achievementbasedtrainingprogressbalancingformultitasklearning/272271d0-f734-4249-a347-749bac4b8015_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fc5a965155f4bdc5d22cba989ac34ac4720d77bad781afbfe32507adee86b8b8
|
| 3 |
+
size 78406
|
achievementbasedtrainingprogressbalancingformultitasklearning/272271d0-f734-4249-a347-749bac4b8015_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:186656d57f7faa2284ba328cf0bab57a232964cb2a801cdc6c8e28f0d67d2f3a
|
| 3 |
+
size 94689
|
achievementbasedtrainingprogressbalancingformultitasklearning/272271d0-f734-4249-a347-749bac4b8015_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:03d6e89bd094519d52dde03dd79316ab72ea6f7ee66f7822c8d6e8854ea1fc27
|
| 3 |
+
size 735323
|
achievementbasedtrainingprogressbalancingformultitasklearning/full.md
ADDED
|
@@ -0,0 +1,294 @@
| 1 |
+
# Achievement-based Training Progress Balancing for Multi-Task Learning
|
| 2 |
+
|
| 3 |
+
Hayoung Yun and Hanjoo Cho*
|
| 4 |
+
Samsung Research
|
| 5 |
+
{hayoung.yun, hanjoo.cho}@samsung.com
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Multi-task learning faces two challenging issues: (1) the high cost of annotating labels for all tasks and (2) balancing the training progress of various tasks with different natures. To resolve the label annotation issue, we construct a large-scale "partially annotated" multi-task dataset by combining task-specific datasets. However, the numbers of annotations for individual tasks are imbalanced, which may escalate an imbalance in training progress. To balance the training progress, we propose an achievement-based multi-task loss to modulate training speed based on the "achievement," defined as the ratio of current accuracy to single-task accuracy. Then, we formulate the multitask loss as a weighted geometric mean of individual task losses instead of a weighted sum to prevent any task from dominating the loss. In experiments, we evaluated the accuracy and training speed of the proposed multi-task loss on the large-scale multi-task dataset against recent multitask losses. The proposed loss achieved the best multi-task accuracy without incurring training time overhead. Compared to single-task models, the proposed one achieved $1.28\%$ , $1.65\%$ , and $1.18\%$ accuracy improvement in object detection, semantic segmentation, and depth estimation, respectively, while reducing computations to $33.73\%$ . Source code is available at https://github.com/samsung/Achievement-based-MTL.
|
| 10 |
+
|
| 11 |
+
# 1. Introduction
|
| 12 |
+
|
| 13 |
+
Cooperation of various vision tasks is often required for high-level vision applications such as autonomous driving and surveillance cameras [6, 16, 41, 8, 12, 38]. Vision task models typically consist of two parts, a feature extractor and a prediction head, and most computations are concentrated in the feature extractor. Hence, sharing the feature extractor among different tasks (i.e., multi-task learning) can significantly expedite inference and enhance the
|
| 14 |
+
|
| 15 |
+
feature extractor to produce more general representations [25, 18, 3, 12, 38]. However, multi-task learning faces two major challenges: balancing the training progress of various tasks with different natures and the cost of annotating the labels of all tasks for plenty of images.
|
| 16 |
+
|
| 17 |
+
There are two major approaches to balancing the training progress: loss scale-based [25, 5, 20] and gradients-based [4, 31, 40, 24]. Primitive multi-task losses [25, 5] address the difference in loss scale among individual tasks due to their distinct loss functions (e.g., cross entropy for classification and L1 loss for regression). However, simply matching the loss scales is insufficient to balance the gradients because the derivatives of distinct functions can differ.
|
| 18 |
+
|
| 19 |
+
Recent multi-task losses have directly adjusted backpropagated gradients [4, 31, 40, 24, 23]. The gradient-based methods seek to equalize the task gradients at the last shared layer [4, 24]. However, achieving balance in task gradients does not guarantee balance in the training progress because the difficulty of tasks may differ. Easy tasks quickly converge, while difficult ones are trained slowly [13]. Hence, it is insufficient to consider only gradients to balance the training progress, but task difficulty should also be regarded.
|
| 20 |
+
|
| 21 |
+
Annotating labels for all tasks on plenty of images is expensive and time-consuming. Thus, multi-task datasets [33, 7, 10] suffer from a lack of annotations, while task-specific datasets have become larger and larger [29, 22, 32]. Some previous works [18, 38] construct a union dataset composed of task-specific datasets to resolve this issue. Images of the union dataset are partially annotated. Because task losses are only produced for existing labels, multi-task models can be easily biased toward the dominant task if the numbers of labels for individual tasks differ significantly. Moreover, the gradient of each task is also heavily influenced by the number of task labels presented in a batch, and thus gradient-based multi-task losses are significantly disturbed on a partially annotated dataset.
|
| 22 |
+
|
| 23 |
+
In this paper, we propose a novel multi-task loss that can balance the training progress of different tasks effectively, without using task gradients. The proposed loss controls the training progress based on accuracy achievement, defined as the ratio of current accuracy to single-task accuracy.
|
| 24 |
+
|
| 25 |
+

|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
(a) Multi-Task
|
| 31 |
+
(c) Semantic Segmentation
|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
(b) Object Detection
|
| 35 |
+
(d) Depth Estimation
|
| 36 |
+
Figure 1. Achievement and task weight curves for (a) multi-task, (b) object detection, (c) semantic segmentation, and (d) depth estimation on the PASCAL VOC [9] + NYU v2 [33] dataset. The blue and orange lines are the proposed method and IMTL-G [24], respectively. The solid lines show achievements (left y-axis), and the dotted lines denote task weights (right y-axis). Delivering the same amount of task gradients to the shared feature extractor, IMTL-G learned the easy tasks (segmentation and depth) quickly while the difficult one (detection) suffered from under-fitting. The proposed method focused more on the challenging task (detection), which has a lower achievement than the others, and as a result demonstrated better multi-task accuracy than IMTL-G.
|
| 37 |
+
|
| 38 |
+
Furthermore, instead of the common weighted-sum formulation, the proposed loss is composed as a weighted geometric mean to exploit its scale-invariant property.
|
| 39 |
+
|
| 40 |
+
The main contributions of this paper are as follows:
|
| 41 |
+
|
| 42 |
+
1. We propose an achievement-based multi-task loss that employs a weighted geometric mean in multi-task learning. The proposed loss effectively balances the training progress and prevents any task from dominating the loss. Moreover, the proposed weights and the weighted geometric mean each substantially improve the accuracy of other multi-task losses.
|
| 43 |
+
2. We conduct a robust evaluation for multi-task losses on a large-scale partially annotated multi-task dataset.
|
| 44 |
+
3. We empirically validate that multi-task learning on a partially annotated dataset can achieve better accuracy than filling in absent labels using single-task models.
|
| 45 |
+
|
| 46 |
+
# 2. Related Works
|
| 47 |
+
|
| 48 |
+
# 2.1. Multi-Task Loss
|
| 49 |
+
|
| 50 |
+
Recent research on multi-task learning has focused on developing effective multi-task losses to train all tasks in balance and improve the accuracy of each task as much as possible. Most multi-task losses are generally represented as the weighted sum of task losses as follows:
|
| 51 |
+
|
| 52 |
+
$$
|
| 53 |
+
L _ {t o t a l} = \sum_ {t = 1} ^ {N _ {T}} w _ {t} L _ {t}, \tag {1}
|
| 54 |
+
$$
|
| 55 |
+
|
| 56 |
+
where $w_{t}$ and $L_{t}$ denote the task weight and task loss of the $t$-th task, $N_{T}$ is the number of tasks, and $L_{total}$ denotes the total multi-task loss. The task weight directly affects the accuracy of the corresponding task [17]. Hence, finding optimal weights is crucial for achieving good accuracy, but manual tuning of task weights is prohibitively expensive. Thus, extensive research has been conducted to determine the task weights automatically.
|
| 57 |
+
|
| 58 |
+
The first approach is the learning-based method [17], which defines task weights as learnable parameters based on the task-agnostic homoscedastic uncertainty of task losses. This method can be easily applied by simply adding the learnable parameters and a regularization term. However, it is only applicable when the task loss can be derived from the uncertainty of the output distribution [24].
|
| 59 |
+
|
| 60 |
+
Loss scale-based methods [25, 20, 5] address the scale difference in task losses. RLW [20] chooses random task weights, while DWA [25] modulates task weights to decrease task losses evenly. Simply defining multi-task losses as the geometric mean of task losses, GLS [5] effectively addresses the scale variance. However, matching loss scales does not guarantee balance in task gradients because the derivatives of different functions are distinct, even if their scales are similar.
|
| 61 |
+
|
| 62 |
+
Gradient-based methods adjust task weights to control task gradients directly [31, 40, 23, 4, 24]. MGDA [31] employs an iterative optimization process to find the task weights so that the gradient vector of shared parameters is aligned toward the minimum norm points of the convex hull. PCGrad [40] and CAGrad [23] directly modulate task gradients, without using task weights, to avoid conflicts in their directions, which prevents destructive interference. In contrast, GradNorm [4] and IMTL [24] focus on task gradients at the last shared layer. While GradNorm controls task weights to make the magnitudes of the task gradients close to each other, IMTL [24] adjusts the task weights so that the gradient has an identical length when projected onto each task gradient. However, all gradient-based methods are sensitive to popular regularization modules that drop out units or layers during training, such as dropout or stochastic depth [15]. In addition, selecting shared and task-specific parameters incorrectly can significantly degrade accuracy. Furthermore, even if the same amount of gradient is delivered for all tasks, the training speed may vary depending on the difficulty of the tasks.
|
| 63 |
+
|
| 64 |
+
DTP [13], an accuracy-based method, introduces task difficulty to multi-task learning. DTP estimates task difficulty based on current task accuracy. Regarding tasks with low accuracy as difficult, DTP increases their task weights to expedite their training, and vice versa. However, estimating training progress based on current accuracy alone is insufficient. If an easy and a difficult task have the same accuracy, DTP assumes their training progress is the same, regardless of how much task accuracy can be improved further. Moreover, DTP does not address the imbalanced scales of individual task losses.
|
| 65 |
+
|
| 66 |
+
In this paper, we rediscover and improve GLS [5] and DTP [13], which were developed earlier but have not received much attention. Building on DTP, which uses current accuracy alone, we refine accuracy-based task weights by introducing the "achievement," defined as the ratio of current to single-task accuracy. Employing the proposed achievement-based task weight, we propose a novel multi-task loss that consists of a weighted geometric mean of individual task losses to effectively address the training imbalance caused by the different derivatives and scales of distinct loss functions.
|
| 67 |
+
|
| 68 |
+
# 2.2. Annotating Multi-Task Labels
|
| 69 |
+
|
| 70 |
+
The biggest challenge of multi-task learning for practical usage is data collection and annotation [11]. In particular, the effort to annotate labels for all tasks is linearly proportional to the number of tasks. Thus, representative multi-task datasets [7, 33, 10] are orders of magnitude smaller than conventional single-task datasets, and most research on multi-task learning [4, 31, 24, 25, 40] has used these small datasets for training and evaluation.
|
| 71 |
+
|
| 72 |
+
UberNet [18] attempts to construct a union dataset by combining different single-task datasets. To handle the issue of imbalanced data sizes across tasks, it also proposes a training method that delays parameter updates until sufficient data is accumulated. However, it focuses on the task-specific part of multi-task models, so the label imbalance issue still exists for the shared one.
|
| 73 |
+
|
| 74 |
+
MuST [11] applies self-training [39] to relieve the effort of annotating multi-task labels, constructing a fully annotated multi-task dataset from partially annotated images by creating pseudo labels for label-absent tasks with pre-trained single-task teachers.
|
| 75 |
+
|
| 76 |
+
KD-MTL [19] adopts pre-trained single-task teachers in the training phase. Balancing the training progress of the tasks with different difficulties, KD-MTL trains a multi-task model to generate shared features similar to what the task-specific teachers produce.
|
| 77 |
+
|
| 78 |
+
In this paper, we empirically validate that multi-task learning on a partially annotated dataset can provide better multi-task accuracy than methods leveraging single-task models, by learning general representations for multiple tasks. Moreover, we also provide robust accuracy comparisons for various multi-task losses on a large-scale partially annotated dataset.
|
| 79 |
+
|
| 80 |
+
# 3. Achievement-based Multi-Task Loss
|
| 81 |
+
|
| 82 |
+
The proposed multi-task loss is inspired by focal loss [21], which was introduced to resolve the class imbalance in object detection. Images generally contain numerous background samples but only a few foreground objects. As a result, most of the detection loss comes from the easily detected background, even though the hard-to-detect foreground objects are critical. To focus on objects, focal loss modulates cross-entropy with the focal weighting term $(1 - p_c)^{\gamma}$:
|
| 83 |
+
|
| 84 |
+
$$
|
| 85 |
+
F L \left(p _ {c}; \gamma\right) = \left(1 - p _ {c}\right) ^ {\gamma} C E = - \left(1 - p _ {c}\right) ^ {\gamma} \log \left(p _ {c}\right), \tag {2}
|
| 86 |
+
$$
|
| 87 |
+
|
| 88 |
+
where $\gamma$ means the focusing factor and $p_c$ denotes the probability that the prediction is correct (i.e., $p_c$ is $p$ for foreground samples and $1 - p$ for background). Through focal weighting, focal loss diminishes the contribution of easy samples while enhancing the influence of difficult ones.
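As a reference point, a minimal PyTorch-style sketch of this binary focal weighting is given below; it is illustrative only and not taken from the authors' code.

```python
import torch

def binary_focal_loss(p: torch.Tensor, target: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    # p: predicted foreground probability in [0, 1]; target: 1 for foreground, 0 for background.
    p_c = torch.where(target == 1, p, 1.0 - p)      # probability assigned to the correct class
    ce = -torch.log(p_c.clamp(min=1e-8))            # plain cross-entropy term
    return ((1.0 - p_c) ** gamma * ce).mean()       # focal weighting down-weights easy samples
```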
|
| 89 |
+
|
| 90 |
+
We introduce focal weighting, $(1 - p_{c})^{\gamma}$ , as task weights for multi-task learning to address the imbalance of training progress across tasks. We define the achievement of each task as the ratio of current and single-task accuracy, and use it instead of $p_{c}$ as follows:
|
| 91 |
+
|
| 92 |
+
$$
|
| 93 |
+
w _ {t} \left(A c c _ {t}; \gamma\right) = \left(1 - A c c _ {t} / p _ {t}\right) ^ {\gamma}, \tag {3}
|
| 94 |
+
$$
|
| 95 |
+
|
| 96 |
+
where $Acc_{t}$ denotes the current accuracy of task $t$, and $p_t$ is the task potential, defined as the single-task accuracy. Like the focal loss, the achievement-based task weight encourages
|
| 97 |
+
|
| 98 |
+
tasks with low achievement to expedite their training while slowing down the early converged ones.
|
| 99 |
+
|
| 100 |
+
Learning multiple tasks can enhance feature extractors to learn more general representations than single-task learning. Hence, a multi-task model often outperforms its single-task counterparts. During training, the achievement-based task weights decrease as the task accuracy of the multi-task model approaches the single-task accuracy. However, the task weights can unintentionally increase if the accuracy of the multi-task model surpasses the single-task ones. To prevent the unintended increase, we introduce a slight margin, $m > 1$ , to the potential:
|
| 101 |
+
|
| 102 |
+
$$
|
| 103 |
+
w _ {t} = \left(1 - \frac {\overline {{A c c}} _ {t}}{m \cdot p _ {t}}\right) ^ {\gamma}. \tag {4}
|
| 104 |
+
$$
|
| 105 |
+
|
| 106 |
+
As the accuracy of a task improves during training, its task weight is decreased. Decreasing task weights is theoretically identical to reducing the learning rate, inducing the under-fitting of the corresponding task. To avoid underfitting, we normalize the task weights using softmax.
|
| 107 |
+
|
| 108 |
+
Finally, to resolve the scale imbalance of individual task losses, the proposed achievement-based multi-task loss employs the weighted geometric mean instead of the conventional weighted sum as follows:
|
| 109 |
+
|
| 110 |
+
$$
|
| 111 |
+
L _ {\text {t o t a l}} = \prod_ {t = 1} ^ {N _ {T}} L _ {t} ^ {w _ {t}}. \tag {5}
|
| 112 |
+
$$
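A compact sketch of how Eqs. (3)-(5) could fit together during training is shown below. The function and variable names, the use of a moving-average accuracy, and the softmax normalization over all tasks follow our reading of the text; this is not the authors' released implementation.

```python
import torch

def amtl_loss(task_losses: dict, avg_acc: dict, single_task_acc: dict,
              gamma: float = 2.0, margin: float = 1.1) -> torch.Tensor:
    """task_losses: {task: scalar loss tensor}; avg_acc: moving-average accuracy per task;
    single_task_acc: pre-measured single-task accuracy (the task "potential")."""
    tasks = list(task_losses)
    # Eq. (4): achievement-based weight with a margin m > 1 on the potential.
    raw = torch.tensor([(1.0 - avg_acc[t] / (margin * single_task_acc[t])) ** gamma
                        for t in tasks])
    # Softmax normalization keeps decreasing weights from acting like a shrinking learning rate.
    weights = torch.softmax(raw, dim=0).tolist()
    # Eq. (5): weighted geometric mean of the task losses (invariant to loss scale).
    total = None
    for w, t in zip(weights, tasks):
        term = task_losses[t].clamp(min=1e-8) ** w
        total = term if total is None else total * term
    return total
```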
|
| 113 |
+
|
| 114 |
+
# 4. Experimental Results
|
| 115 |
+
|
| 116 |
+
# 4.1. Experimental Setup
|
| 117 |
+
|
| 118 |
+
# 4.1.1 Preprocessing
|
| 119 |
+
|
| 120 |
+
We applied both geometric and photometric augmentations to improve accuracy. We conducted random scaling, resizing, and random horizontal flipping as geometric augmentation, and then performed SSD's photometric distortions [26] and random sharpness adjustment as photometric augmentation.
|
| 121 |
+
|
| 122 |
+
# 4.1.2 Evaluation Metrics
|
| 123 |
+
|
| 124 |
+
A popular metric for multi-task accuracy is the average per-task accuracy drop [36]:
|
| 125 |
+
|
| 126 |
+
$$
|
| 127 |
+
\Delta_ {M T L} = \frac {1}{N _ {T}} \sum_ {t = 1} ^ {N _ {T}} S _ {t} \frac {M _ {m , t} - M _ {b , t}}{M _ {b , t}}, \tag {6}
|
| 128 |
+
$$
|
| 129 |
+
|
| 130 |
+
where $m$ and $b$ denote the multi-task model and single-task baseline, respectively, and $M_{t}$ is the accuracy metric for task $t$. $S_{t}$ is 1 if a higher $M_{t}$ is better and -1 otherwise. We slightly modify this metric to handle multiple metrics per task:
|
| 131 |
+
|
| 132 |
+
$$
|
| 133 |
+
\Delta_{MTL} = \frac{1}{N_{T}} \sum_{t=1}^{N_{T}} \frac{1}{N_{t}} \sum_{i=1}^{N_{t}} S_{t,i} \frac{M_{m,t,i} - M_{b,t,i}}{M_{b,t,i}}, \tag{7}
|
| 134 |
+
$$
|
| 135 |
+
|
| 136 |
+
where $N_{t}$ denotes the number of metrics for task $t$ .
|
| 137 |
+
|
| 138 |
+
The average per-task accuracy drop depends on the single-task baseline and is therefore influenced by the accuracy of the single-task models. Hence, we propose a new multi-task accuracy metric, independent of the quality of the single-task models, based on the geometric mean:
|
| 139 |
+
|
| 140 |
+
$$
|
| 141 |
+
Acc_{MTL} = \sqrt[N_{T}]{\prod_{t=1}^{N_{T}} \sqrt[N_{t}]{\prod_{i=1}^{N_{t}} M_{m,t,i}^{S_{t,i}}}}. \tag{8}
|
| 142 |
+
$$
|
| 143 |
+
|
| 144 |
+
For example, the multi-task accuracy metric for segmentation, depth estimation, and surface normal is as follows:
|
| 145 |
+
|
| 146 |
+
$$
|
| 147 |
+
Acc_{MTL} = \sqrt[3]{mIoU \cdot \sqrt{\frac{\delta_{1}}{rmse}} \cdot \sqrt[3]{\frac{11.25}{mean \cdot median}}}. \tag{9}
|
| 148 |
+
$$
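As a sanity check of Eq. (9), the sketch below recomputes the single-task $Acc_{MTL}$ from the metric values reported in Table 1; the helper function and data layout are ours, not part of the paper.

```python
import math

def acc_mtl(task_metrics):
    """task_metrics: one list per task of (value, sign) pairs, where
    sign = +1 if higher is better and -1 if lower is better (Eqs. 8-9)."""
    per_task = [math.prod(v ** s for v, s in metrics) ** (1.0 / len(metrics))
                for metrics in task_metrics]                 # geometric mean within each task
    return math.prod(per_task) ** (1.0 / len(per_task))      # geometric mean across tasks

# Single-task metrics from Table 1: segmentation (mIoU), depth (delta_1, rmse),
# surface normal (11.25, mean, median).
print(acc_mtl([[(0.4437, +1)],
               [(0.8087, +1), (0.5814, -1)],
               [(0.4553, +1), (19.3462, -1), (13.2045, -1)]]))
# -> approximately 0.3989, matching the single-task Acc_MTL reported in Table 1
```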
|
| 149 |
+
|
| 150 |
+
# 4.2. Comparison to Recent Multi-Task Losses
|
| 151 |
+
|
| 152 |
+
In this subsection, the accuracy and training speed of the proposed multi-task loss are compared to recent multi-task losses. As the multi-task baseline, we simply added all task losses (uniform task weights). Then, as benchmark methods, we used loss scale-based methods (RLW [20], DWA [25], and GLS [5]), gradient-based methods (MGDA [31], PCGrad [40], CAGrad [23], GradNorm [4], IMTL-G, and IMTL [24]), and an accuracy-based method (DTP [13]). All benchmark methods were implemented on the same code base for a fair comparison; the same augmentation, optimizer, search range of learning rates, and LR scheduler were applied to all benchmark and proposed methods. No manual scalers were used for any task loss.
|
| 153 |
+
|
| 154 |
+
# 4.2.1 Comparison on the NYU v2 Dataset
|
| 155 |
+
|
| 156 |
+
We evaluated the performance of the proposed and benchmark multi-task losses on the NYU v2 dataset for semantic segmentation, depth estimation, and surface normal estimation. We used DeepLabV3 [2] as the baseline architecture and ResNet50 [14] as the feature extractor. The single-task and multi-task models were trained 10 times for learning rates of 8e-4, 4e-4, 2e-4, 1e-4, and 8e-5. The representative metric values were obtained by averaging the results of all trials, excluding the maximum and minimum of $Acc_{MTL}$ (an average of 8 trials). The metric values for the learning rate with the best average $Acc_{MTL}$ are presented in Table 1. More training details are available in Appendix A.
|
| 157 |
+
|
| 158 |
+
Although the multi-task models learned general features, none of the multi-task losses surpassed the single-task baseline because surface normal estimation requires significantly different features, causing conflicts in training. Learning only segmentation and depth estimation improved accuracy over the single-task baseline, as presented in Appendix B.
|
| 159 |
+
|
| 160 |
+
<table><tr><td rowspan="2" colspan="2">methods</td><td>segmentation</td><td colspan="2">depth estimation</td><td colspan="3">surface normal</td><td colspan="3">total</td></tr><tr><td>mIoU ↑</td><td>δ1 ↑</td><td>rmse ↓</td><td>mean ↓</td><td>median ↓</td><td>11.25 ↑</td><td>AccMTL ↑</td><td>ΔMTL ↑</td><td>time</td></tr><tr><td colspan="2">Single-Task</td><td>0.4437</td><td>0.8087</td><td>0.5814</td><td>19.3462</td><td>13.2045</td><td>0.4553</td><td>0.3989</td><td>0.00%</td><td>-</td></tr><tr><td>Constant</td><td>Uniform</td><td>0.4446(0.20%)</td><td>0.8091(0.05%)</td><td>0.5776(0.66%)</td><td>22.8531(-18.13%)</td><td>17.7271(-34.25%)</td><td>0.3322(-27.05%)</td><td>0.3666(-8.10%)</td><td>-8.64%</td><td>31.07</td></tr><tr><td rowspan="3">Scale-based</td><td>RLW [20]</td><td>0.4447(0.23%)</td><td>0.8082(-0.06%)</td><td>0.5759(0.94%)</td><td>22.8410(-18.06%)</td><td>17.6180(-33.42%)</td><td>0.3350(-26.43%)</td><td>0.3673(-7.91%)</td><td>-8.43%</td><td>30.78</td></tr><tr><td>DWA [25]</td><td>0.4465(0.62%)</td><td>0.8093(0.07%)</td><td>0.5751(1.08%)</td><td>22.7934(-17.82%)</td><td>17.6902(-33.97%)</td><td>0.3330(-26.85%)</td><td>0.3676(-7.82%)</td><td>-8.34%</td><td>30.84</td></tr><tr><td>GLS [5]</td><td>0.4321(-2.61%)</td><td>0.8221(1.65%)</td><td>0.5665(2.56%)</td><td>20.7032(-7.01%)</td><td>15.0512(-13.99%)</td><td>0.3982(-12.54%)</td><td>0.3837(-3.80%)</td><td>-3.90%</td><td>30.07</td></tr><tr><td rowspan="6">Gradient-based</td><td>MGDA [31]</td><td>0.2511(-43.41%)</td><td>0.7636(-5.58%)</td><td>0.6266(-7.77%)</td><td>19.2796(0.34%)</td><td>13.1962(0.06%)</td><td>0.4553(-0.01%)</td><td>0.3229(-19.05%)</td><td>-16.65%</td><td>76.23</td></tr><tr><td>PCGrad [40]</td><td>0.4435(-0.06%)</td><td>0.8017(-0.87%)</td><td>0.5825(-0.19%)</td><td>24.2444(-25.32%)</td><td>19.3005(-46.17%)</td><td>0.3038(-33.27%)</td><td>0.3558(-10.79%)</td><td>-11.83%</td><td>58.06</td></tr><tr><td>CAGrad [23]</td><td>0.4448(0.24%)</td><td>0.8001(-1.07%)</td><td>0.5854(-0.68%)</td><td>24.2759(-25.48%)</td><td>19.3395(-46.46%)</td><td>0.3033(-33.38%)</td><td>0.3556(-10.85%)</td><td>-11.91%</td><td>59.08</td></tr><tr><td>GradNorm [4]</td><td>0.4458(0.46%)</td><td>0.7888(-2.46%)</td><td>0.5928(-1.96%)</td><td>22.3488(-15.52%)</td><td>16.9259(-28.18%)</td><td>0.3524(-22.60%)</td><td>0.3690(-7.50%)</td><td>-7.95%</td><td>35.56</td></tr><tr><td>IMTL-G [24]</td><td>0.4361(-1.72%)</td><td>0.8021(-0.82%)</td><td>0.5788(0.45%)</td><td>20.5248(-6.09%)</td><td>14.6814(-11.19%)</td><td>0.4097(-10.01%)</td><td>0.3846(-3.58%)</td><td>-3.67%</td><td>35.84</td></tr><tr><td>IMTL [24]</td><td>0.4162(-6.20%)</td><td>0.7876(-2.61%)</td><td>0.5930(-2.00%)</td><td>20.8134(-7.58%)</td><td>14.8333(-12.34%)</td><td>0.4064(-10.74%)</td><td>0.3746(-6.08%)</td><td>-6.24%</td><td>58.98</td></tr><tr><td rowspan="2">Accuracy-based</td><td>DTP [13]</td><td>0.4458(0.46%)</td><td>0.7648(-5.42%)</td><td>0.6140(-5.61%)</td><td>22.2556(-15.04%)</td><td>16.7507(-26.86%)</td><td>0.3568(-21.63%)</td><td>0.3660(-8.23%)</td><td>-8.74%</td><td>31.07</td></tr><tr><td>AMTL(proposed)</td><td>0.4377(-1.35%)</td><td>0.8205(1.46%)</td><td>0.5667(2.54%)</td><td>20.7974(-7.50%)</td><td>15.1003(-14.36%)</td><td>0.3969(-12.82%)</td><td>0.3847(-3.54%)</td><td>-3.64%</td><td>31.05</td></tr></table>
|
| 161 |
+
|
| 162 |
+
Table 1. Comparison to recent multi-task losses on the NYU v2 dataset. $mIoU$, $\delta_1$, 11.25, $Acc_{MTL}$, and $\Delta_{MTL}$ are better when higher, while rmse, mean, and median are better when lower. time denotes the average training time per epoch in seconds. The best and runner-up results for each metric are highlighted in bold and underline, respectively.
|
| 163 |
+
|
| 164 |
+
RLW [20], randomly choosing task weights, showed similar accuracy to the multi-task baseline. Modulating task weights to decrease task losses evenly, DWA [25] provided slightly better multi-task accuracy than the baseline. GLS [5], based on a scale-invariant geometric mean, demonstrated the best accuracy among scale-based methods even though it did not use task weights. All scale-based losses did not incur additional training time.
|
| 165 |
+
|
| 166 |
+
There are three types of gradient-based multi-task losses: using optimization (MGDA [31]), resolving gradient conflict (PCGrad [40] and CAGrad [23]), and modulating the task gradients at the last shared layer (GradNorm [4], IMTL-G, and IMTL [24]). When task gradients conflict, MGDA [31] excessively enhanced the task whose gradient magnitude was the smallest. As a result, while MGDA provided the best accuracy in surface normal estimation, its multi-task accuracy suffered seriously. PCGrad [40] and CAGrad [23] directly modified task gradients to resolve gradient conflicts, which makes them easily affected by dropout or stochastic depth. Thus, they did not improve accuracy compared to the multi-task baseline. We discuss the influence of dropout on
|
| 167 |
+
|
| 168 |
+
gradient-based losses in Appendix B.5. GradNorm [4] controlled task weights so that all task losses decreased evenly, which improved multi-task accuracy compared to the baseline. IMTL-G [24] achieved the best multi-task accuracy among gradient-based methods by adjusting task weights so that the gradient at the last shared layer has the same length when projected onto each task gradient.
|
| 169 |
+
|
| 170 |
+
All gradient-based losses incurred overhead in training time for computing task gradients. GradNorm and IMTL-G have the smallest overhead because they use task gradients only to determine task weights. Resolving conflicts (PCGrad and CAGrad) and the additional back-propagation through task-specific parameters (IMTL) further increased training time. The iterative optimization process of MGDA induced the most significant overhead in training time.
|
| 171 |
+
|
| 172 |
+
The proposed multi-task loss effectively addressed the scale difference among task losses by employing a weighted geometric mean, which is invariant to loss scale. It also balanced the training progress of the various tasks by modulating the achievement-based task weights. As a result, it achieved the best multi-task accuracy without impeding training.
|
| 173 |
+
|
| 174 |
+
<table><tr><td rowspan="3" colspan="2">Methods</td><td colspan="6">Shared ASPP</td><td colspan="6">Individual ASPP</td></tr><tr><td colspan="3">DeepLabV3(197 GMAC)</td><td colspan="3">DeepLabV3+(207 GMAC)</td><td colspan="3">DeepLabV3(334 GMAC)</td><td colspan="3">DeepLabV3+(343 GMAC)</td></tr><tr><td>AccMTL</td><td>ΔMTL</td><td>time</td><td>AccMTL</td><td>ΔMTL</td><td>time</td><td>AccMTL</td><td>ΔMTL</td><td>time</td><td>AccMTL</td><td>ΔMTL</td><td>time</td></tr><tr><td colspan="2">Single-Task</td><td>0.3989</td><td>-</td><td>-</td><td>0.3986</td><td>-</td><td>-</td><td>0.3989</td><td>-</td><td>-</td><td>0.3986</td><td>-</td><td>-</td></tr><tr><td>Constant</td><td>Uniform</td><td>0.3666</td><td>-8.64%</td><td>30.24</td><td>0.3683</td><td>-8.07%</td><td>30.98</td><td>0.3763</td><td>-6.00%</td><td>43.31</td><td>0.3762</td><td>-5.96%</td><td>45.61</td></tr><tr><td rowspan="3">Scale-based</td><td>RLW [20]</td><td>0.3673</td><td>-8.43%</td><td>30.50</td><td>0.3665</td><td>-8.55%</td><td>31.55</td><td>0.3774</td><td>-5.71%</td><td>43.22</td><td>0.3764</td><td>-5.88%</td><td>43.37</td></tr><tr><td>DWA [25]</td><td>0.3676</td><td>-8.34%</td><td>30.65</td><td>0.3677</td><td>-8.25%</td><td>30.77</td><td>0.3768</td><td>-5.87%</td><td>43.01</td><td>0.3761</td><td>-5.97%</td><td>43.83</td></tr><tr><td>GLS [5]</td><td>0.3784</td><td>-5.33%</td><td>31.08</td><td>0.3761</td><td>-5.83%</td><td>31.27</td><td>0.3786</td><td>-5.25%</td><td>44.16</td><td>0.3798</td><td>-4.80%</td><td>43.70</td></tr><tr><td rowspan="6">Gradient-based</td><td>MGDA [31]</td><td>0.3229</td><td>-16.65%</td><td>74.99</td><td>0.3394</td><td>-13.41%</td><td>76.78</td><td>0.3770</td><td>-5.34%</td><td>88.97</td><td>0.3793</td><td>-4.75%</td><td>90.15</td></tr><tr><td>PCGrad [40]</td><td>0.3558</td><td>-11.83%</td><td>58.05</td><td>0.3581</td><td>-11.04%</td><td>58.02</td><td>0.3697</td><td>-7.95%</td><td>63.91</td><td>0.3710</td><td>-7.44%</td><td>64.25</td></tr><tr><td>CAGrad [23]</td><td>0.3556</td><td>-11.91%</td><td>57.63</td><td>0.3584</td><td>-10.94%</td><td>58.99</td><td>0.3689</td><td>-8.15%</td><td>63.01</td><td>0.3708</td><td>-7.51%</td><td>63.75</td></tr><tr><td>GradNorm [4]</td><td>0.3690</td><td>-7.95%</td><td>35.12</td><td>0.3677</td><td>-8.21%</td><td>36.19</td><td>0.3773</td><td>-5.74%</td><td>57.90</td><td>0.3760</td><td>-6.02%</td><td>59.13</td></tr><tr><td>IMTL-G [24]</td><td>0.3846</td><td>-3.67%</td><td>35.22</td><td>0.3838</td><td>-3.78%</td><td>35.33</td><td>0.3917</td><td>-1.79%</td><td>57.89</td><td>0.3910</td><td>-1.93%</td><td>58.67</td></tr><tr><td>IMTL [24]</td><td>0.3746</td><td>-6.24%</td><td>57.31</td><td>0.3746</td><td>-6.18%</td><td>57.85</td><td>0.3815</td><td>-4.43%</td><td>99.27</td><td>0.3807</td><td>-4.57%</td><td>103.50</td></tr><tr><td rowspan="2">Accuracy-based</td><td>DTP [13]</td><td>0.3660</td><td>-8.74%</td><td>29.88</td><td>0.3625</td><td>-9.63%</td><td>31.46</td><td>0.3739</td><td>-6.63%</td><td>43.43</td><td>0.3751</td><td>-6.24%</td><td>43.10</td></tr><tr><td>AMTL</td><td>0.3847</td><td>-3.64%</td><td>32.08</td><td>0.3831</td><td>-3.98%</td><td>30.65</td><td>0.3899</td><td>-2.29%</td><td>44.08</td><td>0.3883</td><td>-2.59%</td><td>43.39</td></tr></table>
|
| 175 |
+
|
| 176 |
+
Table 2. Comparison of multi-task accuracy and training time for various DeepLab prediction heads.
|
| 177 |
+
|
| 178 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="2">MobileNetV2 [30]</td><td colspan="2">EfficientNetV2-S [34]</td></tr><tr><td>AccMTL</td><td>ΔMTL</td><td>AccMTL</td><td>ΔMTL</td></tr><tr><td>single-task</td><td>0.3581</td><td>-</td><td>0.3877</td><td>-</td></tr><tr><td>Uniform</td><td>0.3313</td><td>-7.91%</td><td>0.3868</td><td>-0.20%</td></tr><tr><td>RLW [20]</td><td>0.328</td><td>-8.92%</td><td>0.383</td><td>-1.20%</td></tr><tr><td>DWA [25]</td><td>0.3311</td><td>-7.94%</td><td>0.387</td><td>-0.14%</td></tr><tr><td>GLS [5]</td><td>0.3464</td><td>-3.30%</td><td>0.3958</td><td>2.06%</td></tr><tr><td>MGDA [31]</td><td>0.3109</td><td>-11.93%</td><td>0.3268</td><td>-14.08%</td></tr><tr><td>PCGrad [40]</td><td>0.3204</td><td>-11.44%</td><td>0.3743</td><td>-3.51%</td></tr><tr><td>CAGrad [23]</td><td>0.3202</td><td>-11.48%</td><td>0.3742</td><td>-3.52%</td></tr><tr><td>GradNorm [4]</td><td>0.3346</td><td>-6.87%</td><td>0.3873</td><td>-0.06%</td></tr><tr><td>IMTL-G [24]</td><td>0.3513</td><td>-1.88%</td><td>0.3991</td><td>2.90%</td></tr><tr><td>IMTL [24]</td><td>0.3445</td><td>-3.82%</td><td>0.3936</td><td>1.54%</td></tr><tr><td>DTP [13]</td><td>0.3289</td><td>-8.58%</td><td>0.3837</td><td>-0.97%</td></tr><tr><td>AMTL</td><td>0.3476</td><td>-2.95%</td><td>0.3989</td><td>2.85%</td></tr></table>
|
| 179 |
+
|
| 180 |
+
Robustness. We evaluated the robustness of the proposed method for various prediction heads and backbones. First, we estimated the performance of the proposed and benchmark methods for various DeepLab heads (Table 2). The details of the architectures are described in Appendix B. No remarkable accuracy improvement was achieved by exploiting high-resolution features (DeepLabV3+) since we adopted the dilated ResNet50 [1], as in MTI-Net [37]. However, using an individual ASPP for each task significantly increased GMAC. The proposed multi-task loss provided stable and excellent accuracy across all prediction heads.
|
| 181 |
+
|
| 182 |
+
Next, we evaluated the accuracy of the benchmark and proposed losses with other backbones: MobileNet-V2 [30]
|
| 183 |
+
|
| 184 |
+
Table 3. Comparison of multi-task accuracy for MobileNetV2 and EfficientNetV2-S backbones.
|
| 185 |
+
|
| 186 |
+
<table><tr><td></td><td>AccMTL</td><td>ΔMTL</td></tr><tr><td>DTP [13]</td><td>0.3660</td><td>-8.74%</td></tr><tr><td>+ achievement-based weight</td><td>0.3745</td><td>-6.11%</td></tr><tr><td>+ weighted geometric mean</td><td>0.3847</td><td>-3.64%</td></tr></table>
|
| 187 |
+
|
| 188 |
+
Table 4. Ablation study for the proposed multi-task loss.
|
| 189 |
+
|
| 190 |
+
<table><tr><td rowspan="2"></td><td colspan="2">Uniform</td><td colspan="2">Achievement-based</td></tr><tr><td>AccMTL</td><td>ΔMTL</td><td>AccMTL</td><td>ΔMTL</td></tr><tr><td>PCGrad [40]</td><td>0.3558</td><td>-11.83%</td><td>0.3662</td><td>-8.73%</td></tr><tr><td>CAGrad [23]</td><td>0.3556</td><td>-11.91%</td><td>0.3653</td><td>-8.98%</td></tr></table>
|
| 191 |
+
|
| 192 |
+
Table 5. Comparison of multi-task accuracy for the uniform and achievement-base weights.
|
| 193 |
+
|
| 194 |
+
<table><tr><td></td><td colspan="2">Arithmetic</td><td colspan="2">Geometric</td></tr><tr><td></td><td>AccMTL</td><td>ΔMTL</td><td>AccMTL</td><td>ΔMTL</td></tr><tr><td>RLW [20]</td><td>0.3673</td><td>-8.43%</td><td>0.3774</td><td>-5.59%</td></tr><tr><td>DWA [25]</td><td>0.3676</td><td>-8.34%</td><td>0.3811</td><td>-4.60%</td></tr><tr><td>DTP [13]</td><td>0.3660</td><td>-8.74%</td><td>0.3800</td><td>-4.81%</td></tr></table>
|
| 195 |
+
|
| 196 |
+
Table 6. Comparison of multi-task accuracy for the weighted arithmetic and geometric means.
|
| 197 |
+
|
| 198 |
+
and EfficientNetV2-S [34] (Table 3). We used the shared ASPP and DeepLabV3+ architecture in this comparison. The results showed patterns similar to those with the ResNet50 backbone. The proposed method achieved runner-up accuracy for both the MobileNetV2 and EfficientNetV2-S backbones.
|
| 199 |
+
|
| 200 |
+
<table><tr><td rowspan="2" colspan="2">methods</td><td>detection</td><td>segmentation</td><td colspan="2">depth estimation</td><td colspan="2">total</td><td></td></tr><tr><td>\(mAP@50:95\uparrow\)</td><td>\(mIoU\uparrow\)</td><td>\(\delta_1\uparrow\)</td><td>\(rmse\downarrow\)</td><td>\(Acc_{MTL}\uparrow\)</td><td>\(\Delta_{MTL}\uparrow\)</td><td>time</td></tr><tr><td colspan="2">Single-Task</td><td>0.5795</td><td>0.7895</td><td>0.8882</td><td>0.4393</td><td>0.8665</td><td>-</td><td>-</td></tr><tr><td>Constant</td><td>Uniform</td><td>0.5922(2.19%)</td><td>0.7823(-0.91%)</td><td>0.8731(-1.69%)</td><td>0.4498(-2.40%)</td><td>0.8642(-0.26%)</td><td>-0.25%</td><td>713.65</td></tr><tr><td rowspan="3">Scale-based</td><td>RLW [20]</td><td>0.5900(1.81%)</td><td>0.7835(-0.76%)</td><td>0.8716(-1.87%)</td><td>0.4587(-4.42%)</td><td>0.8605(-0.69%)</td><td>-0.70%</td><td>707.60</td></tr><tr><td>DWA [25]</td><td>0.5853(1.01%)</td><td>0.7835(-0.75%)</td><td>0.8621(-2.93%)</td><td>0.4565(-3.92%)</td><td>0.8574(-1.05%)</td><td>-1.06%</td><td>737.70</td></tr><tr><td>GLS [5]</td><td>0.5833(0.65%)</td><td>0.8007(1.42%)</td><td>0.8917(0.39%)</td><td>0.4329(1.45%)</td><td>0.8752(1.00%)</td><td>1.00%</td><td>733.23</td></tr><tr><td rowspan="6">Gradient-based</td><td>MGDA [31]</td><td>0.4064(-29.88%)</td><td>0.7714(-2.29%)</td><td>0.8880(-0.02%)</td><td>0.4453(-1.36%)</td><td>0.7621(-12.04%)</td><td>-10.95%</td><td>1475.13</td></tr><tr><td>PCGrad [40]</td><td>0.5898(1.78%)</td><td>0.7799(-1.22%)</td><td>0.8428(-5.11%)</td><td>0.4829(-9.92%)</td><td>0.8470(-2.25%)</td><td>-2.32%</td><td>1120.39</td></tr><tr><td>CAGrad [23]</td><td>0.5877(1.41%)</td><td>0.7785(-1.39%)</td><td>0.8461(-4.73%)</td><td>0.4781(-8.84%)</td><td>0.8474(-2.20%)</td><td>-2.26%</td><td>1081.03</td></tr><tr><td>GradNorm [4]</td><td>0.5881(1.47%)</td><td>0.7884(-0.13%)</td><td>0.8722(-1.80%)</td><td>0.4462(-1.57%)</td><td>0.8654(-0.12%)</td><td>-0.12%</td><td>839.57</td></tr><tr><td>IMTL-G [24]</td><td>0.5740(-0.95%)</td><td>0.8080(2.35%)</td><td>0.8916(0.39%)</td><td>0.4295(2.24%)</td><td>0.8743(0.90%)</td><td>0.90%</td><td>820.15</td></tr><tr><td>IMTL [24]</td><td>0.5891(1.65%)</td><td>0.8005(1.39%)</td><td>0.8908(0.30%)</td><td>0.4392(0.02%)</td><td>0.8757(1.07%)</td><td>1.07%</td><td>1271.22</td></tr><tr><td rowspan="2">Accuracy-based</td><td>DTP [13]</td><td>0.5853(1.00%)</td><td>0.7666(-2.90%)</td><td>0.8265(-6.94%)</td><td>0.5025(-14.38%)</td><td>0.8318(-4.01%)</td><td>-4.19%</td><td>705.87</td></tr><tr><td>AMTL(proposed)</td><td>0.5870(1.28%)</td><td>0.8025(1.65%)</td><td>0.8903(0.24%)</td><td>0.4300(2.11%)</td><td>0.8784(1.37%)</td><td>1.37%</td><td>707.61</td></tr></table>
|
| 201 |
+
|
| 202 |
+
Table 7. Comparison to recent multi-task losses on the partially annotated VOC+NYU dataset. $mAP$, $mIoU$, $\delta_{1}$, $Acc_{MTL}$, and $\Delta_{MTL}$ are better when higher, while rmse is better when lower. time denotes the average training time per epoch in seconds. The best and runner-up results for each metric are highlighted in bold and underline, respectively.
|
| 203 |
+
|
| 204 |
+
Effectiveness. We evaluated the effectiveness of the proposed achievement-based task weights and the weighted geometric mean (Table 4). Compared to DTP, which does not consider task potential, the achievement-based weight improved multi-task accuracy from 0.3660 to 0.3745. The weighted geometric mean further improved accuracy to 0.3847.
|
| 205 |
+
|
| 206 |
+
We also applied the proposed weight and the weighted geometric mean to compatible benchmark methods to validate their effectiveness. We applied the proposed achievement-based weight to PCGrad and CAGrad, which do not use task weights. As described in Table 5, the proposed weights greatly improved the multi-task accuracy of both losses.
|
| 207 |
+
|
| 208 |
+
Furthermore, we applied the weighted geometric mean to RLW, DWA, and DTP, which use task weights but not task gradients (Table 6). The weighted geometric mean significantly improved the multi-task accuracy of all three.
|
| 209 |
+
|
| 210 |
+
# 4.2.2 Comparison on the VOC + NYU Dataset
|
| 211 |
+
|
| 212 |
+
In the following experiments, the multi-task accuracy and training time of the proposed and benchmark multi-task losses were evaluated on the large-scale partially annotated multi-task dataset that consists of task-specific datasets (PASCAL VOC [9] and NYU depth [33]). The dataset has abundant training images compared to existing fully annotated multi-task datasets such as NYU v2 [33] (795 training images), Cityscapes [7] (2,975 training images), and KITTI [10] (200 training images for multi-task). Its total number of training images is 39,446, of which 15,215 $(38.57\%)$ are annotated for object detection, 10,477 $(26.56\%)$ for semantic segmentation, and 24,231 $(61.43\%)$ for depth estimation. Some images from PASCAL VOC have labels for both detection and segmentation. More details on the partially annotated dataset are described in Appendix C.
|
| 213 |
+
|
| 214 |
+
<table><tr><td colspan="2">method</td><td>AccMTL↑</td><td>ΔMTL↑</td><td>time</td></tr><tr><td colspan="2">Single-Task</td><td>0.8665</td><td>-</td><td>-</td></tr><tr><td rowspan="3">MuST [11]</td><td>Uniform</td><td>0.8332</td><td>-3.78%</td><td>717.28</td></tr><tr><td>IMTL-G [24]</td><td>0.8365</td><td>-3.46%</td><td>869.29</td></tr><tr><td>AMTL</td><td>0.8431</td><td>-2.60%</td><td>728.85</td></tr><tr><td rowspan="3">NoisyStudent[39]</td><td>Uniform</td><td>0.8596</td><td>-0.68%</td><td>939.07</td></tr><tr><td>IMTL-G [24]</td><td>0.8714</td><td>0.69%</td><td>1081.99</td></tr><tr><td>AMTL</td><td>0.8726</td><td>0.83%</td><td>940.03</td></tr><tr><td rowspan="3">KD-MTL [19]</td><td>Uniform</td><td>0.8673</td><td>0.22%</td><td>946.63</td></tr><tr><td>IMTL-G [24]</td><td>0.8736</td><td>0.94%</td><td>1055.44</td></tr><tr><td>AMTL</td><td>0.8742</td><td>1.02%</td><td>940.03</td></tr><tr><td rowspan="3">w/o Teachers (partially annotated)</td><td>Uniform</td><td>0.8642</td><td>-0.25%</td><td>713.65</td></tr><tr><td>IMTL-G [24]</td><td>0.8743</td><td>0.90%</td><td>820.15</td></tr><tr><td>AMTL</td><td>0.8784</td><td>1.37%</td><td>707.61</td></tr></table>
|
| 215 |
+
|
| 216 |
+
Table 8. Comparison to methods leveraging single-task teachers.
|
| 217 |
+
|
| 218 |
+
In previous works using VOC [43, 44, 27], $mAP@50$ was used to evaluate detection accuracy. However, it is too loose to capture the improvement in regression quality achieved by recent regression losses such as the cIoU and gIoU losses [28, 42]. Hence, we adopted $mAP@50:95$, the standard MS COCO metric, instead of $mAP@50$.
|
| 219 |
+
|
| 220 |
+
We used EfficientDet [35] as the baseline architecture to address object detection, and EfficientNetV2-S [34] as the feature extractor. More details on the network architecture and training are presented in Appendix C.
|
| 221 |
+
|
| 222 |
+
The accuracy of the benchmark and proposed losses on the partially annotated dataset is presented in Table 7. When using the partially annotated dataset, a task loss is only produced for existing labels. Hence, the gradient of each task was heavily influenced by the number of labels present in each batch, which seriously degraded the accuracy of methods that directly use task gradients (MGDA, PCGrad, and CAGrad). However, IMTL and IMTL-G still performed well on the partially annotated dataset because they compensate for the absence of task labels while balancing the effective task gradients at the last shared layer. Remarkably, despite its simplicity, GLS demonstrated superior multi-task accuracy. By additionally employing the achievement-based task weights, the proposed multi-task loss improved further and achieved the best multi-task accuracy, without the additional computations required for task gradients.
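The handling of missing labels described above can be sketched as follows; this is our own minimal reading, assuming each batch element carries a per-task label mask, and is not taken from the authors' code.

```python
import torch

def masked_task_loss(per_sample_loss: torch.Tensor, has_label: torch.Tensor) -> torch.Tensor:
    # per_sample_loss: (B,) losses for one task; has_label: (B,) bool mask of annotated samples.
    if has_label.any():
        return per_sample_loss[has_label].mean()   # average only over annotated samples
    return per_sample_loss.sum() * 0.0             # task absent from this batch: zero loss
```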
|
| 223 |
+
|
| 224 |
+
Finally, to verify the effectiveness of multi-task learning on partially annotated datasets, we compared multi-task accuracy using various methods that leverage single-task models: hard pseudo labels (MuST [11]), soft-pseudo labels (NoisyStudent [39]), and knowledge distillation (KD-MTL [19]) (Table 8).
|
| 225 |
+
|
| 226 |
+
Multi-task self-training (MuST) [11] constructs a complete multi-task dataset by producing hard pseudo labels for label-absent tasks before conducting multi-task learning. However, this method suffers from out-of-distribution and false-positive issues and, as a result, demonstrated lower
|
| 227 |
+
|
| 228 |
+
accuracy than training directly on the partially annotated dataset.
|
| 229 |
+
|
| 230 |
+
NoisyStudent [39] encourages a student model to learn beyond its teacher. The teacher produces predictions on images without augmentation, and the student is then trained to generate the same predictions as its teacher on difficult (augmented) images. Using soft pseudo labels, NoisyStudent successfully relieved the accuracy degradation caused by unreliable teacher predictions. As a result, NoisyStudent greatly improved multi-task accuracy over MuST, but it remained lower than training on the partially annotated dataset.
|
| 231 |
+
|
| 232 |
+
KD-MTL [19] minimizes the difference between the shared features of the multi-task model and the projected features of single-task teachers. By learning from features instead of pseudo labels, KD-MTL does not suffer from out-of-distribution or false-positive issues. However, KD-MTL still showed lower multi-task accuracy than training on the partially annotated dataset. Being trained for multiple tasks, a multi-task model learns more general and powerful representations than its single-task counterparts; hence, imitating single-task teachers actually hindered multi-task learning.
|
| 233 |
+
|
| 234 |
+
# 5. Conclusion
|
| 235 |
+
|
| 236 |
+
In this paper, we proposed a novel achievement-based multi-task loss (AMTL) to balance the training progress of various tasks with different natures. To focus on how much accuracy can still be improved, we assess the potential of each task in advance using its single-task accuracy. Then, we estimate the training progress as the ratio of current accuracy to this potential. Furthermore, to prevent any task from dominating the loss, we formulate the proposed multi-task loss as the weighted geometric mean of task losses instead of the conventional weighted sum.
|
| 237 |
+
|
| 238 |
+
In experiments, we conducted comprehensive evaluations of the proposed loss against various recent benchmark multi-task losses. We demonstrated that the proposed loss achieved excellent multi-task accuracy regardless of backbones and prediction heads. Moreover, to validate the effectiveness of the proposed achievement-based task weights and the weighted geometric mean, we applied each of them to compatible benchmark methods and observed significant accuracy improvements in both cases.
|
| 239 |
+
|
| 240 |
+
Further, we constructed a large-scale partially annotated multi-task dataset composed of task-specific datasets and performed an accuracy comparison. Without using task gradients, the proposed loss outperformed the benchmark losses, including sophisticated gradient-based ones, on the partially annotated dataset while incurring no training-time overhead.
|
| 241 |
+
|
| 242 |
+
Finally, because it learns more general representations, multi-task learning on the partially annotated dataset can produce higher accuracy than methods leveraging single-task teachers. We hope that experiments using such large-scale partially annotated datasets become a new experimental baseline for further multi-task learning research.
|
| 243 |
+
|
| 244 |
+
# References
|
| 245 |
+
|
| 246 |
+
[1] Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark A Hasegawa-Johnson, and Thomas S Huang. Dilated recurrent neural networks. Advances in neural information processing systems, 30, 2017.
|
| 247 |
+
[2] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801-818, 2018.
|
| 248 |
+
[3] Po-Yi Chen, Alexander H Liu, Yen-Cheng Liu, and Yu-Chiang Frank Wang. Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2624-2632, 2019.
|
| 249 |
+
[4] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning, pages 794-803. PMLR, 2018.
|
| 250 |
+
[5] Sumanth Chennupati, Ganesh Sistu, Senthil Yogamani, and Samir A Rawashdeh. Multinet++: Multi-stream feature aggregation and geometric loss strategy for multi-task learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR) Workshops, pages 0-0, 2019.
|
| 251 |
+
[6] Sauhaarda Chowdhuri, Tushar Pankaj, and Karl Zipser. Multinet: Multi-modal multi-task learning for autonomous driving. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1496-1504. IEEE, 2019.
|
| 252 |
+
[7] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2016.
|
| 253 |
+
[8] Keval Doshi and Yasin Yilmaz. Multi-task learning for video surveillance with limited data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3889-3899, 2022.
|
| 254 |
+
[9] Mark Everingham, SM Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98-136, 2015.
|
| 255 |
+
[10] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2012.
|
| 256 |
+
[11] Golnaz Ghiasi, Barret Zoph, Ekin D Cubuk, Quoc V Le, and Tsung-Yi Lin. Multi-task self-training for learning general representations. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8856-8865, 2021.
|
| 257 |
+
[12] Kratarth Goel, Praveen Srinivasan, Sarah Tariq, and James Philbin. Quadronet: Multi-task learning for real-time semantic depth aware instance segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 315-324, 2021.
|
| 258 |
+
|
| 259 |
+
|
| 260 |
+
[13] Michelle Guo, Albert Haque, De-An Huang, Serena Yeung, and Li Fei-Fei. Dynamic task prioritization for multitask learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 270–287, 2018.
|
| 261 |
+
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
|
| 262 |
+
[15] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV 14, pages 646-661. Springer, 2016.
|
| 263 |
+
[16] Keishi Ishihara, Anssi Kanervisto, Jun Miura, and Ville Hautamaki. Multi-task learning with attention for end-to-end autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2902-2911, 2021.
|
| 264 |
+
[17] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7482-7491, 2018.
|
| 265 |
+
[18] Iasonas Kokkinos. Ubernet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6129-6138, 2017.
|
| 266 |
+
[19] Wei-Hong Li and Hakan Bilen. Knowledge distillation for multi-task learning. In European Conference on Computer Vision, pages 163-176. Springer, 2020.
|
| 267 |
+
[20] Baijiong Lin, YE Feiyang, Yu Zhang, and Ivor Tsang. Reasonable effectiveness of random weighting: A litmus test for multi-task learning. Transactions on Machine Learning Research, 2022.
|
| 268 |
+
[21] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2980-2988, 2017.
|
| 269 |
+
[22] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), pages 740-755. Springer, 2014.
|
| 270 |
+
[23] Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. Advances in Neural Information Processing Systems, 34:18878-18890, 2021.
|
| 271 |
+
[24] Liyang Liu, Yi Li, Zhanghui Kuang, J Xue, Yimin Chen, Wenming Yang, Qingmin Liao, and Wayne Zhang. Towards impartial multi-task learning. In International Conference on Learning Representations, 2021.
|
| 272 |
+
[25] Shikun Liu, Edward Johns, and Andrew J Davison. End-to-end multi-task learning with attention. In Proceedings of
|
| 273 |
+
|
| 274 |
+
the IEEE/CVF conference on computer vision and pattern recognition, pages 1871-1880, 2019.
|
| 275 |
+
[26] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), pages 21-37. Springer, 2016.
|
| 276 |
+
[27] Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7263-7271, 2017.
|
| 277 |
+
[28] Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 658-666, 2019.
|
| 278 |
+
[29] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
|
| 279 |
+
[30] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4510-4520, 2018.
|
| 280 |
+
[31] Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in Neural Information Processing Systems, 31, 2018.
|
| 281 |
+
[32] Shuai Shao, Zeming Li, Tianyuan Zhang, Chao Peng, Gang Yu, Xiangyu Zhang, Jing Li, and Jian Sun. Objects365: A large-scale, high-quality dataset for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8430-8439, 2019.
|
| 282 |
+
[33] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 746-760. Springer, 2012.
|
| 283 |
+
[34] Mingxing Tan and Quoc Le. Efficientnetv2: Smaller models and faster training. In International conference on machine learning, pages 10096-10106. PMLR, 2021.
|
| 284 |
+
[35] Mingxing Tan, Ruoming Pang, and Quoc V Le. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10781-10790, 2020.
|
| 285 |
+
[36] Simon Vandenhende, Stamatios Georgoulis, Wouter Van Gansbeke, Marc Proesmans, Dengxin Dai, and Luc Van Gool. Multi-task learning for dense prediction tasks: A survey. IEEE transactions on pattern analysis and machine intelligence, 2021.
|
| 286 |
+
[37] Simon Vandenhende, Stamatios Georgoulis, and Luc Van Gool. Mti-net: Multi-scale task interaction networks for multi-task learning. In Proceedings of the European Conference on Computer Vision (ECCV), 2020.
|
| 287 |
+
[38] Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In Proceedings of the European Conference on Computer Vision (ECCV), pages 418-434, 2018.
|
| 288 |
+
|
| 289 |
+
[39] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10687-10698, 2020.
|
| 290 |
+
[40] Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33:5824-5836, 2020.
|
| 291 |
+
[41] Mingliang Zhai, Xuezhi Xiang, Ning Lv, and Abdulmotaleb El Saddik. Multi-task learning in autonomous driving scenarios via adaptive feature refinement networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2323-2327. IEEE, 2020.
|
| 292 |
+
[42] Zhaohui Zheng, Ping Wang, Wei Liu, Jinze Li, Rongguang Ye, and Dongwei Ren. Distance-iou loss: Faster and better learning for bounding box regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 12993-13000, 2020.
|
| 293 |
+
[43] Xingyi Zhou, Dequan Wang, and Philipp Krahenbuhl. Objects as points. arXiv preprint arXiv:1904.07850, 2019.
|
| 294 |
+
[44] Yousong Zhu, Chaoyang Zhao, Jinqiao Wang, Xu Zhao, Yi Wu, and Hanqing Lu. Couplenet: Coupling global structure with local parts for object detection. In Proceedings of the IEEE international conference on computer vision, pages 4126-4134, 2017.
|
achievementbasedtrainingprogressbalancingformultitasklearning/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:fac6c47b9aa52cc3fa653ea673abf4c1a60cb8b31cb9b6da6299f4b84fc157c5
|
| 3 |
+
size 817099
|
achievementbasedtrainingprogressbalancingformultitasklearning/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:da944e3c85e4741d2fe26a573366234aedb0809b8990cf7326733a01004b3ff8
|
| 3 |
+
size 320512
|
actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/b76da01a-6915-4f20-95e0-470a21a063d9_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5d5bb7f6ce1af4f339dac43b03bdea14c757e247a6af2fcadcea146cb14992ca
|
| 3 |
+
size 77075
|
actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/b76da01a-6915-4f20-95e0-470a21a063d9_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:41fd5b39a1114d59f178b3685bd4cc626205b898a8b6fb55eff79d835e10806e
|
| 3 |
+
size 95671
|
actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/b76da01a-6915-4f20-95e0-470a21a063d9_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4399964a9a2a504e65c63fda0d46e81c4af32a0047f8b2ff48a4dee33a9707c4
|
| 3 |
+
size 10563225
|
actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/full.md
ADDED
|
@@ -0,0 +1,284 @@
| 1 |
+
# ActFormer: A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation
|
| 2 |
+
|
| 3 |
+
Liang Xu $^{2,3*}$ Ziyang Song $^{5*}$ Dongliang Wang $^{1,6}$ Jing Su $^{1}$ Zhicheng Fang $^{1}$ Chenjing Ding $^{1}$
|
| 4 |
+
|
| 5 |
+
Weihao Gan$^{7}$ Yichao Yan$^{2}$ Xin Jin$^{3,4}$ Xiaokang Yang$^{2}$ Wenjun Zeng$^{3,4}$ Wei Wu$^{1,6}$
|
| 6 |
+
|
| 7 |
+
$^{1}$ SenseTime Research $^{2}$ Shanghai Jiao Tong University $^{3}$ Eastern Institute of Technology, Ningbo
|
| 8 |
+
|
| 9 |
+
$^{4}$ Ningbo Institute of Digital Twin $^{5}$ The Hong Kong Polytechnic University $^{6}$ Shanghai AI Laboratory
|
| 10 |
+
|
| 11 |
+
$^{7}$ Mashang Consumer Finance Co., Ltd.
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
We present a GAN-based Transformer for general action-conditioned 3D human motion generation, including not only single-person actions but also multi-person interactive actions. Our approach consists of a powerful Action-conditioned motion TransFormer (ActFormer) under a GAN training scheme, equipped with a Gaussian Process latent prior. Such a design combines the strong spatiotemporal representation capacity of Transformer, superiority in generative modeling of GAN, and inherent temporal correlations from the latent prior. Furthermore, ActFormer can be naturally extended to multi-person motions by alternately modeling temporal correlations and human interactions with Transformer encoders. To further facilitate research on multi-person motion generation, we introduce a new synthetic dataset of complex multi-person combat behaviors. Extensive experiments on NTU-13, NTU RGB+D 120, BABEL and the proposed combat dataset show that our method can adapt to various human motion representations and achieve superior performance over the state-of-the-art methods on both single-person and multi-person motion generation tasks, demonstrating a promising step towards a general human motion generator. The project website can be found at https://liangxuy.github.io/actformer/.
|
| 16 |
+
|
| 17 |
+
# 1. Introduction
|
| 18 |
+
|
| 19 |
+
This work aims to tackle the action-conditioned motion generation task: given a semantic action label as input, generate corresponding 3D human motions. The technique is key to applications like character
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1. Towards general action-conditioned 3D human motion generation. Our framework adapts to more action categories, various human motion representations (e.g., SMPL body models, skeleton joint coordinates), and multi-person interactive actions.
|
| 23 |
+
|
| 24 |
+
animation creation, humanoid robot interaction, and data synthesis for computer vision tasks related to human actions.
|
| 25 |
+
|
| 26 |
+
Human motion synthesis has been a long-standing research topic. However, most prior works are closer to a prediction task, in which future motions are generated from previous motions [16, 26, 41, 8, 22, 11, 55, 34]. In recent years, some works have started to focus on motion generation from action labels [20, 45, 49]. Despite some impressive generation results, these works are still limited in two aspects. Firstly, most of them are biased towards motion data represented by SMPL pose parameters while performing poorly on skeleton joint coordinates, which limits their generalization. A solution adaptable to various human motion representations is thus expected. Secondly, prior works only focus on single-person motion generation while neglecting multi-person interactive actions, which are integral parts of daily human motions. In general, prior works fail to cover the complete domain of human motions and stand far from a general human motion generator.
|
| 27 |
+
|
| 28 |
+
This paper explores a solution towards general action-conditioned human motion generation, as shown in Fig. 1. The very first challenge lies in generating long motion sequences with realism and diversity. Many prior works assume a Markovian dependency in temporal motions and adopt an auto-regressive model [58, 20, 5, 6, 31]. However, these methods are subject to the "mean-pose" problem, in which the model starts to generate the mean pose continuously after a few frames. In contrast, CSGN [53] and ACTOR [45] sample from a sequence-level latent prior and produce the whole sequence altogether. Specifically, CSGN samples from a Gaussian Process (GP) latent prior and stacks convolutions in the generator to enforce temporal correlations. On the other hand, ACTOR samples a single vector as the sequence-level embedding and produces multiple frames by querying through different positional encodings. We argue that both are sub-optimal solutions, and we seek a better trade-off between inductive bias and representation capacity. Our proposed Action-Conditioned Motion TransFormer (ActFormer) leverages the GP prior for the inherent temporal correlations. Meanwhile, we adopt a Transformer architecture for its simple structure and strong power in encoding non-local correlations proved in many other tasks. The Transformer model naturally regards a latent vector sequence from the GP prior as a sequence of tokens, leading to a seamless conjunction. We incorporate the Transformer-based motion generator into GAN, known for high-quality generative modeling. These designs jointly contribute to significant advantages of our framework in the single-person motion generation task.
|
| 29 |
+
|
| 30 |
+
Another challenge lies in handling human interactions when multi-person interactive actions are included. Human interactions have been explored by some motion prediction algorithms, in which pooling or self-attention modules are adopted to encode the interactions [21, 3, 4, 56]. However, they have not been considered in the motion generation task. To our knowledge, our approach is the first to tackle multi-person motion generation. We share the same latent vector sequence from the GP among multiple persons in a group to enforce their synchronization over time. Meanwhile, different persons are distinguished through positional encodings. Our ActFormer can be easily extended to the multi-person scenario by alternately modeling temporal correlations and human interactions. The generation results show impressive realism in both motions and multi-person interactions.
|
| 31 |
+
|
| 32 |
+
The strong demand for motion capture (MoCap) data with action labels also poses a challenge. Prior methods rely on datasets with $\sim 10$ categories, which can hardly drive a general motion generator. MoCap datasets with multi-person interactive actions are even rarer. We leverage NTU RGB+D 120 [37] and the newly-released BABEL dataset [47], both including more than 100 action categories. To facilitate research on multi-person
|
| 33 |
+
|
| 34 |
+
motion generation, we further construct a GTA Combat dataset through the Grand Theft Auto V (GTA-V) [1] gaming engine. We collect $\sim 7\mathrm{K}$ motion sequences of combat behavior, one of the most complex types of human interaction. Experiments on these datasets verify the effectiveness of our approach.
|
| 35 |
+
|
| 36 |
+
Our three-fold contributions are summarized as follows: (i) We propose ActFormer, a GAN-based Transformer framework, which adapts to various human motion representations and achieves leading results on the single-person motion generation task; (ii) Our ActFormer takes a faithful early step towards solving the multi-person motion generation problem; (iii) We contribute a GTA Combat dataset with plentiful and complex multi-person interactive motions.
|
| 37 |
+
|
| 38 |
+
# 2. Related Work
|
| 39 |
+
|
| 40 |
+
We review the literature on human motion prediction and generation tasks and MoCap datasets. We also review Transformers in GANs which are relevant to our approach.
|
| 41 |
+
|
| 42 |
+
# 2.1. Motion Prediction
|
| 43 |
+
|
| 44 |
+
Motion prediction aims to predict motions of future frames, given one or several frames of past motions. Recurrent Neural Networks have been predominantly adopted to model sequence learning [16, 26, 41, 22, 11, 55, 8]. Especially, generative models like VAEs and GANs are incorporated in [22, 11, 55]. Recently, the powerful Transformer architecture is utilized by [34] to predict dance motions conditioned on music.
|
| 45 |
+
|
| 46 |
+
Multi-person interactions have also been considered in motion prediction. [21] and [3] adopt pooling modules to aggregate information across multiple persons. [4] proposes a graph-based message passing mechanism to model both human-human and human-object interactions. Recently, [56] uses the self-attention module to entangle multi-person motions. Unlike the works above, we aim to tackle the motion generation task without relying on past motions.
|
| 47 |
+
|
| 48 |
+
# 2.2. Motion Generation
|
| 49 |
+
|
| 50 |
+
Compared to future motion prediction, motion generation from scratch is a new and less explored field. CSGN [53] generates unconstrained motions with a graph convolution-based GAN framework. [58] further explores generating ever-changing motions for unbounded durations.
|
| 51 |
+
|
| 52 |
+
More works tend to generate motions from various conditions. [31, 33, 24] generate dance motions corresponding to given music. [35, 17, 5, 6, 13, 19, 10, 46, 29] synthesize motions from language descriptions. More recently, diffusion models have also been proposed for text-driven human motion generation in [57, 49].
|
| 53 |
+
|
| 54 |
+
Our work is dedicated to the task of action-conditioned motion generation, which uses semantic action labels as the
|
| 55 |
+
|
| 56 |
+
condition. Action2Motion [20] and ACTOR [45] are the works most similar to ours. Action2Motion proposes a temporal VAE to generate motions frame by frame, based on GRU architecture. ACTOR also adopts a VAE framework while leveraging the Transformer architecture and learning a sequence-level latent distribution, which differs from Action2Motion. Our ActFormer also receives a sequence-level latent prior as input. Unlike ACTOR, our input is a latent vector sequence sampled from a GP prior, inherently enforcing temporal correlations and thus reducing the difficulty of generating realistic motions.
|
| 57 |
+
|
| 58 |
+
Another stream of works lies in simulation-based character control. For example, [36] learns an autoregressive conditional VAE and then applies task-specific control policies based on this model. [48] guides characters to achieve specific character-scene interaction goals with their motions. Recently, [52] and [51] seek general approaches for physics-based character control.
|
| 59 |
+
|
| 60 |
+
# 2.3. MoCap Dataset
|
| 61 |
+
|
| 62 |
+
Large-scale MoCap data is critical for driving a general motion generator. There have been many MoCap datasets [25, 40, 44, 7], and AMASS [39] contributes a large collection by unifying MoCap data from different sources into a common representation based on the SMPL body model [38]. Despite a large amount of high-quality and diverse human motion data, semantic action labels are missing in AMASS. Fortunately, the newly-released BABEL [47] provides fine-grained, frame-level action labels for AMASS, resulting in a challenging and practically valuable benchmark for our task.
|
| 63 |
+
|
| 64 |
+
Some other datasets provide action labels, while their motion data is represented by skeleton joint coordinates. Among them, NTU RGB+D 120 [37] is a large-scale and representative one. Owing to the scalability of our method, we can learn from MoCap data with various motion representations. The NTU RGB+D data is used for both single-person and multi-person motion generation. As a complement to the daily interactive actions in NTU RGB+D, we contribute another GTA Combat dataset with more complex multi-person motions and interactions.
|
| 65 |
+
|
| 66 |
+
# 2.4. Transformer in GANs
|
| 67 |
+
|
| 68 |
+
The success of Transformers [50] in visual recognition tasks inspires their application in generation tasks. TransGAN [27] is the first pure Transformer-based GAN architecture. Later, ViTGAN [32] proposes novel regularization methods to improve the stability of Transformer-based GAN training. [14] combines a Transformer-based generator and a CNN-based discriminator to form a robust model without cumbersome design choices. We follow [14] to avoid tricky designs in the discriminator and pay more attention to the generator.
|
| 69 |
+
|
| 70 |
+
# 3. Approach
|
| 71 |
+
|
| 72 |
+
The problem to be tackled is action-conditioned motion generation. Formally speaking, given a semantic action label $a$ and a seed $z$ sampled from the latent prior, our Action-Conditioned Motion TransFormer (ActFormer) generates a sequence of human motions $M = \{M_t|t\in \{1,\dots,T\}\}$ corresponding to the action label. Each frame $M_{t}$ contains the motions of $P$ persons, i.e., $M_{t} = \{M_{t}^{p}|p\in \{1,\dots,P\}\}$. The motion of one person in a frame, $M_t^p$, is composed of a root translation $l_t^p$ in a global coordinate frame and local body poses $\theta_t^p$. The latter can be either SMPL-based parameters or other formats like skeleton joint coordinates. In this section, we first introduce our approach to single-person motion generation and then show how it can be extended to the multi-person setting.
|
| 73 |
+
|
| 74 |
+
# 3.1. Single-person Motion Generation
|
| 75 |
+
|
| 76 |
+
The generation starts with sampling a random seed from the latent prior. Since temporal correlation is critical to a realistic motion sequence, we select the Gaussian Process as our latent prior and sample a $(T,C_0)$ latent vector sequence $z$ for each generation. The time length $T$ is the same as that of the motion sequence to be generated, and the latent vector at each time step has $C_0$ channels. A 1-d vector sequence with $T$ time steps is sampled independently from a GP on each of $C_0$ channels, with the characteristic length-scales on different channels spanning a spectrum of values. As suggested by [53], this models a composition of correlations at various time scales into the latent vector sequence.
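For illustration, a minimal sketch of such a prior is given below; the RBF kernel and the log-spaced range of length-scales are assumptions for concreteness rather than the exact configuration used in the paper:

```python
import math
import torch

def sample_gp_latent(T, C0, min_scale=1.0, max_scale=None):
    """Sample a (T, C0) latent vector sequence whose channels are independent
    zero-mean Gaussian Processes over time with RBF kernels, and whose
    characteristic length-scales are spread log-uniformly across channels."""
    max_scale = max_scale if max_scale is not None else T / 2
    t = torch.arange(T, dtype=torch.float32)
    scales = torch.logspace(math.log10(min_scale), math.log10(max_scale), C0)
    z = torch.empty(T, C0)
    for c, ls in enumerate(scales):
        diff = (t[:, None] - t[None, :]) / ls
        K = torch.exp(-0.5 * diff ** 2) + 1e-5 * torch.eye(T)  # RBF kernel + jitter
        L = torch.linalg.cholesky(K)                            # K = L L^T
        z[:, c] = L @ torch.randn(T)                            # temporally correlated draw
    return z
```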
|
| 77 |
+
|
| 78 |
+
The ActFormer employs a Transformer-based generator to transform the latent vector sequence and the given action label into a human motion sequence. As shown in Fig. 2, at the input stage, the $(T,C_0)$ latent vector sequence $z$ is regarded as a list of $T$ tokens and passed through an MLP layer for input embedding. To incorporate the semantic condition, we append a class token embedding the action category $a$, resulting in $T + 1$ tokens in total. Since the Transformer encoder is data-dependent, we add learnable positional encoding (PE) to the input tokens to maintain location information. After that, $L$ layers of Temporal-transFormer (T-Former) model the temporal correlations among the time steps represented by the tokens. Finally, the class token is discarded while each of the remaining $T$ tokens is projected by an output layer into a $C$-d vector. The vector is regarded as a concatenation of a person's root translation and local body poses at a specific time step.
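The following PyTorch sketch mirrors this single-person pipeline (input embedding, class token, learnable PE, stacked T-Former layers, output projection); all layer sizes and hyper-parameters are placeholders rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class SinglePersonGenerator(nn.Module):
    """Minimal sketch of the single-person ActFormer-style generator."""
    def __init__(self, c0, c_out, n_actions, d_model=256, n_layers=4,
                 n_heads=8, max_t=64):
        super().__init__()
        self.embed = nn.Linear(c0, d_model)                  # MLP input embedding
        self.action_token = nn.Embedding(n_actions, d_model) # class token per action
        self.pos_enc = nn.Parameter(torch.zeros(max_t + 1, d_model))  # learnable PE
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.t_former = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, c_out)                 # output projection to C

    def forward(self, z, action):
        # z: (B, T, C0) latent sequence, action: (B,) long tensor of labels
        B, T, _ = z.shape
        tokens = torch.cat([self.action_token(action)[:, None], self.embed(z)], dim=1)
        tokens = tokens + self.pos_enc[: T + 1]
        tokens = self.t_former(tokens)
        return self.out(tokens[:, 1:])  # drop the class token -> (B, T, C)
```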
|
| 79 |
+
|
| 80 |
+
# 3.2. Multi-person Motion Generation
|
| 81 |
+
|
| 82 |
+
Transferring from the single-person to the multi-person setting induces an additional person-wise dimension $P$. Fortunately, our approach introduced above can accommodate such an additional dimension $P$ with slight adjustments and thus scales to the multi-person case.
|
| 83 |
+
|
| 84 |
+

|
| 85 |
+
Figure 2. Overview of the proposed ActFormer framework. Given a latent vector sequence $z$ sampled from Gaussian Process (GP) prior and an action label $a$ , the model can synthesize either a single-person (top stream) or a multi-person (bottom stream) motion sequence. The model is trained under a GAN scheme.
|
| 86 |
+
|
| 87 |
+
In the single-person setting, a $T$ -frame motion sequence is encoded from a list of $T$ tokens. Therefore, another $T$ -frame motion sequence with $P$ persons requires $P$ lists of $T$ tokens. Considering that motions of these $P$ persons are highly correlated and synchronized at each time step, we regard them as an entity and sample only one latent vector sequence to share among them. Specifically, the $T$ input embedding tokens along with the class token are shared $P$ times to produce the input. This strategy enforces the synchronization among multiple persons in a group from the input stage. Meanwhile, different persons need to be distinguished from each other. We turn to learnable positional encoding (PE) again to achieve this goal. Here $T + 1$ temporal positional encodings (TPE) and $P$ person-wise positional encodings (PPE) are separately learned. Then they are tiled to generate $P \cdot (T + 1)$ positional encodings to match input tokens. Specifically, the PE for a $(t,p)$ -indexed token $PE(t,p) = \text{concat}(TPE(t), PPE(p))$ . Completely independent PE for each token is also feasible, while such a 2D combination provides better results.
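A small sketch of this 2D positional encoding is shown below; the split of channels between the temporal and person-wise parts is an illustrative assumption:

```python
import torch
import torch.nn as nn

class PersonTimePE(nn.Module):
    """2D positional encoding sketch: T+1 temporal encodings and P person-wise
    encodings are learned separately, then tiled and concatenated so that the
    (t, p)-indexed token receives concat(TPE(t), PPE(p))."""
    def __init__(self, max_t, max_p, d_time, d_person):
        super().__init__()
        self.tpe = nn.Parameter(torch.zeros(max_t + 1, d_time))
        self.ppe = nn.Parameter(torch.zeros(max_p, d_person))

    def forward(self, T, P):
        tpe = self.tpe[: T + 1].unsqueeze(0).expand(P, -1, -1)  # (P, T+1, d_time)
        ppe = self.ppe[:P].unsqueeze(1).expand(-1, T + 1, -1)   # (P, T+1, d_person)
        return torch.cat([tpe, ppe], dim=-1)                    # (P, T+1, d_time+d_person)
```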
|
| 88 |
+
|
| 89 |
+
The ActFormer generator can be extended to the multi-person setting with slight adjustments, owing to the flexibility of Transformer encoders in modeling correlations along various dimensions. As Fig. 2 shows, an Interaction-transFormer (I-Former) first models the interactions among different persons at each time step independently. Then a Temporal-transFormer (T-Former) follows to model the temporal correlations for each person independently. Such a module, which alternately encodes human interactions and temporal correlations, is stacked for $L$ layers. Similar to the single-person case, all class tokens are finally discarded. The remaining $P \cdot T$ tokens are projected into the final $(P, T, C)$ output, in which the $C$-d vector from each token represents the motion of a specific person at a particular time step.
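One such alternating layer could look like the following sketch, which reshapes a $(B, P, T, D)$ token tensor so that self-attention runs first over persons and then over time; sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class InteractionTemporalBlock(nn.Module):
    """Sketch of one I-Former/T-Former layer: attention across the P persons
    at each time step, followed by attention across the T steps per person."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.i_former = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.t_former = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)

    def forward(self, x):
        B, P, T, D = x.shape
        # Interactions: attend over persons at every time step independently.
        x = self.i_former(x.permute(0, 2, 1, 3).reshape(B * T, P, D))
        x = x.reshape(B, T, P, D).permute(0, 2, 1, 3)
        # Temporal correlations: attend over time for every person independently.
        x = self.t_former(x.reshape(B * P, T, D))
        return x.reshape(B, P, T, D)
```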
|
| 90 |
+
|
| 91 |
+
# 3.3. Generative Adversarial Training
|
| 92 |
+
|
| 93 |
+
Our ActFormer is learned under the conditional generative adversarial training framework [18, 42]. During
|
| 94 |
+
|
| 95 |
+
training, the ActFormer generator synthesizes human motion sequences conditioned on given action labels. Besides, a discriminator receives human motion sequences and action labels as inputs, trying to discriminate the generated human motions from real ones of the specified actions. The generator learns from the discriminator's feedback to make its generation results closer to real ones. Conditional Wasserstein GAN loss functions [42, 9] are adopted for training, formulated as,
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\begin{aligned} L_{D} &= \mathbb{E}\left[ D(G(z, a), a) - D(\tilde{M}, a) \right], \\ L_{G} &= \mathbb{E}\left[ -D(G(z, a), a) \right], \end{aligned} \tag{1}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
where $\tilde{M}$ represents the sequence sampled from real motion data belonging to the action category $a$ . $D$ denotes the discriminator and $G$ denotes the ActFormer generator.
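A minimal sketch of these objectives, assuming `D(motion, action)` and `G(z, action)` callables with the interfaces implied by Eq. (1):

```python
def wgan_losses(D, G, real_motion, action, z):
    """Conditional Wasserstein GAN objectives from Eq. (1) (a sketch):
    the discriminator minimizes D(fake) - D(real), and the generator
    maximizes the critic score of its samples."""
    fake_motion = G(z, action)
    loss_d = (D(fake_motion.detach(), action) - D(real_motion, action)).mean()
    loss_g = (-D(fake_motion, action)).mean()
    return loss_d, loss_g
```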
|
| 102 |
+
|
| 103 |
+
An ST-GCN [54] is adopted as the discriminator. The $C$ -d vector of a person's motion at each time step is decomposed into a $(K, D)$ part-wise representation. The $K$ -node Graph is constructed according to the skeleton topology behind the local body poses $\theta_t^p$ , with the root translation $l_t^p$ also connected. Therefore, a $(T, C)$ motion sequence is re-organized into a $(T, K, D)$ spatial-temporal graph. In the multi-person setting, the part-wise motions of multiple persons are directly concatenated on the $D$ dimension. In other words, the $(P, T, C)$ multi-person motion sequence becomes a $(T, K, P \cdot D)$ graph. Finally, the graph-based motion sequence is input to the GCN discriminator to compute a score, with the semantic condition integrated by projection [43].
|
| 104 |
+
|
| 105 |
+
By concatenating the part-wise motions of multiple persons, the GCN can model their interactions at various spatial and temporal scales. However, the concatenation operator is not permutation-invariant. In other words, it cannot ensure to output the same score for the same motion sequence with various person-wise permutations. We adopt a simple data augmentation strategy to compensate for this. For each sample from the MoCap dataset, the persons inside are randomly permuted in every training iteration. In
|
| 106 |
+
|
| 107 |
+

|
| 108 |
+
Figure 3. Sample RGB images and human pose annotations in the GTA Combat dataset.
|
| 109 |
+
|
| 110 |
+
this way, we encourage the ActFormer to regard the same sample with various permutations as different samples and model all of them into the learned distribution. We find such a simple data augmentation more robust than importing symmetric functions into the discriminator network, since the latter often destabilizes the GAN training.
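The augmentation itself is a one-liner; a sketch:

```python
import torch

def permute_persons(motion):
    """Training-time augmentation (a sketch): randomly permute the person
    dimension of a (P, T, C) sample so the discriminator sees the same
    interaction under different person orderings."""
    return motion[torch.randperm(motion.shape[0])]
```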
|
| 111 |
+
|
| 112 |
+
# 4. Experiments
|
| 113 |
+
|
| 114 |
+
In this section, we evaluate the proposed ActFormer on both single-person and multi-person motion generation tasks. We firstly introduce the datasets and quantitative metrics used for evaluation. Next, the ActFormer is compared with baseline methods from prior works. Then we conduct an ablation study to investigate various components in ActFormer. Finally, qualitative results are shown.
|
| 115 |
+
|
| 116 |
+
# 4.1. Datasets and Evaluation Metrics
|
| 117 |
+
|
| 118 |
+
NTU-13 [37, 20] is a subset of NTU RGB+D 120 with only 13 action categories and we choose it to be able to compare to previous works [20, 45]. The SMPL-based motion data are obtained through VIBE [30], as adopted in [20, 45].
|
| 119 |
+
|
| 120 |
+
NTU RGB+D 120 [37] contains 114,480 motion clips belonging to 120 action categories. Among them, 94 categories are single-person actions, and the remaining 26 categories are two-person interactive actions. The dataset provides MoCap data in the format of skeleton joint coordinates, which are captured by Kinect [59] and severely noisy. We fill in missing detections and perform temporal smoothing to improve the data quality. Different parts of the dataset are used for single-person and multi-person motion generation (referred to as NTU-1P and NTU-2P, respectively, in the following). Moreover, to evaluate the methods' adaptability to different motion representations, we apply a motion reconstruction pipeline similar to [34] to multi-view images from NTU-1P. This results in a new version of the NTU-1P data with SMPL parameters, named NTURecon-1P.
|
| 121 |
+
|
| 122 |
+
BABEL [47] is another large-scale motion dataset with semantic action labels. The SMPL-based motion data is
|
| 123 |
+
|
| 124 |
+
derived from AMASS with higher quality. The whole dataset contains more than 250 action categories but is remarkably long-tailed. Therefore, we follow the BABEL-120 benchmark in [47] and select motion data belonging to the most frequent 120 action categories. The data is used for single-person motion generation.
|
| 125 |
+
|
| 126 |
+
GTA Combat is a synthetic multi-person MoCap dataset collected by us. As there are few full-body MoCap datasets qualified for multi-person interactive action generation, we turn to the game GTA-V [1] for synthetic data collection. We find that combat behavior with 2 or more persons is a partially procedural generation process in GTA-V. The combat actions are randomly triggered by combining more than 10 atomic actions in real time, with more variations when the characters are equipped with different combat arms. The attacked character reacts randomly, driven by the ragdoll physics [2] of the game engine. Our policy for combat behavior is that each character picks a random opponent to attack, without any order. This setting resembles the chaotic fighting scenes in some fighting games. It thus simulates parts of real-world human social interaction, while its complexity and diversity go beyond real-world fights, which makes it a suitable benchmark to confirm that our framework can work well on more complex interactive actions (more than 2 persons). We sample 2 to 5 actors in each run and make each actor fight with one of the others, randomly picked. For data collection, we extend JTA [15], which helps to extract 3D human skeletons, with this random combat logic. Thereby, we obtain a high-quality multi-person interaction MoCap dataset ranging from 2 to 5 persons. Fig. 3 shows some samples of the GTA Combat dataset. More samples are provided in the supplementary. According to the number of persons in each sequence, the dataset is divided into 4 splits. The splits contain $\sim 2.3 / 1.9 / 1.5 / 1.2\mathrm{K}$ sequences with motions of $2/3/4/5$ persons, respectively.
|
| 127 |
+
|
| 128 |
+
Evaluation Metrics. We use action recognition accuracy and the FID [23] score as quantitative metrics. Note that these metrics require a pre-trained action recognition model. We adopt ST-GCN [54], sharing a similar configuration with the discriminator used in the ActFormer's training. Different from previous works [20, 45], the root translation is treated as an extra node when training the action recognition model, since the root translation is significant for measuring the quality of the generated motions, especially in multi-person scenarios.
|
| 129 |
+
|
| 130 |
+
Previous works [20, 45, 53] consider statistics of the whole real/generated sample set when computing FID, regardless of action categories. We denote this as whole FID $(\mathrm{FID}_w)$, which we find not entirely reasonable for the action-conditional generation case. As a supplement, we adopt another mean FID $(\mathrm{FID}_m)$, in which FID is independently measured on samples of each action category and then averaged. $\mathrm{FID}_m$ assumes the conditional distributions of
|
| 131 |
+
|
| 132 |
+
<table><tr><td rowspan="2">Method</td><td colspan="3">NTU-13</td><td colspan="3">NTURecon-1P</td><td colspan="3">NTU-1P</td><td colspan="3">BABEL</td></tr><tr><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td></tr><tr><td>Action2Motion [20]</td><td>94.9</td><td>4.40</td><td>2.01</td><td>41.97</td><td>23.86</td><td>18.35</td><td>8.34</td><td>56.13</td><td>46.13</td><td>6.04</td><td>17.03</td><td>6.05</td></tr><tr><td>CSGN [53]</td><td>85.9</td><td>8.07</td><td>3.64</td><td>20.02</td><td>36.02</td><td>27.51</td><td>38.99</td><td>17.01</td><td>7.14</td><td>7.02</td><td>20.11</td><td>9.17</td></tr><tr><td>ACTOR [45]</td><td>97.1</td><td>5.35</td><td>1.18</td><td>39.69</td><td>37.87</td><td>19.58</td><td>35.56</td><td>32.89</td><td>9.63</td><td>3.44</td><td>55.66</td><td>36.27</td></tr><tr><td>Ours</td><td>99.9</td><td>4.28</td><td>1.11</td><td>49.12</td><td>20.35</td><td>8.86</td><td>42.01</td><td>12.53</td><td>3.57</td><td>13.49</td><td>11.21</td><td>2.55</td></tr></table>
|
| 133 |
+
|
| 134 |
+
Table 1. State-of-the-art comparison on single-person motion generation. Higher Acc. and lower FIDs are better.
|
| 135 |
+
|
| 136 |
+
<table><tr><td>Method</td><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td><td>FIDma↓</td><td>FIDwa↓</td><td>Split</td><td>CSGN [53]</td><td>Ours</td></tr><tr><td>Action2Motion [20]</td><td>16.85</td><td>15.07</td><td>10.12</td><td>26.25</td><td>20.31</td><td>2p</td><td>1.15</td><td>1.04</td></tr><tr><td>CSGN [53]</td><td>55.12</td><td>6.10</td><td>3.88</td><td>8.96</td><td>3.20</td><td>3p</td><td>1.73</td><td>1.13</td></tr><tr><td>ACTOR [45]</td><td>63.04</td><td>13.56</td><td>6.72</td><td>19.14</td><td>8.16</td><td>4p</td><td>2.09</td><td>1.58</td></tr><tr><td>Ours</td><td>69.65</td><td>3.77</td><td>3.27</td><td>7.12</td><td>2.46</td><td>5p</td><td>4.02</td><td>2.23</td></tr></table>
|
| 137 |
+
|
| 138 |
+
Table 2. State-of-the-art comparison on multi-person motion generation. Left: Performance on NTU-2P dataset. Right: $\mathrm{FID}_a$ performance on different splits of the GTA Combat dataset.
|
| 139 |
+
|
| 140 |
+
per-class samples to be individual Gaussians, thus better describing the similarity between two mixture distributions in this task. We adopt $\mathrm{FID}_m$ and $\mathrm{FID}_w$ for all the experiments.
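A sketch of how $\mathrm{FID}_m$ differs from $\mathrm{FID}_w$; the `fid_fn` callable and the per-class feature dictionaries are assumed inputs:

```python
def mean_fid(real_feats_by_class, fake_feats_by_class, fid_fn):
    """FID_m (a sketch): compute FID independently for each action category
    and average the scores, instead of pooling all samples as in FID_w.
    `fid_fn(real_features, fake_features)` is an assumed FID implementation."""
    scores = [fid_fn(real_feats_by_class[c], fake_feats_by_class[c])
              for c in real_feats_by_class]
    return sum(scores) / len(scores)
```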
|
| 141 |
+
|
| 142 |
+
When extending to the multi-person setting, we measure FID in two ways to observe generation results from different perspectives. The first is to regard a group of persons as a whole. Specifically, the recognition model receives concatenated multi-person motions as input, and features for calculating FID are extracted directly from it. Data augmentation by random person-wise permutation is also applied when training the recognition model to compensate for its lack of permutation invariance. The second is to construct multi-person features by aggregating single-person features. To this end, we train a single-person action recognition model. During evaluation, we extract features of each person in a group independently and aggregate them by channel-wise max pooling. Such feature extraction is applied to both real and generated samples for alignment. FID calculated in this way is called Aggregated FID $(\mathrm{FID}^a)$.
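A sketch of the feature aggregation behind $\mathrm{FID}^a$, assuming per-person features have already been extracted by the single-person recognition model:

```python
import torch

def aggregate_group_features(per_person_feats):
    """FID^a feature construction (a sketch): given a (P, D) tensor of features
    extracted independently for each person, aggregate them into one group
    feature by channel-wise max pooling."""
    return per_person_feats.max(dim=0).values
```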
|
| 143 |
+
|
| 144 |
+
Implementation Details. For each dataset, we use different data splits to train the ActFormer and the action recognition models. On NTU RGB+D 120, we follow its cross-subject split. On BABEL, we also follow the split provided by [47]. The data distribution over action categories in BABEL is remarkably long-tailed. Therefore, we adopt a square-root sampling [28] strategy when training the ActFormer, the action recognition models, and all the compared baselines. GTA Combat contains only one action category, making it infeasible to train an action recognition model on it. Also, to our knowledge, there are no public datasets whose motions contain $2 \sim 5$ participants and at the same time have annotated action categories. Therefore, we leverage an action recognition model trained on NTU-1P by aligning the motion data in NTU and GTA Combat into a unified skeleton topology. In this way, $\mathrm{FID}^a$ can be measured on GTA Combat. Please refer to the supplementary for more details about our network
|
| 145 |
+
|
| 146 |
+
architecture and training/evaluation configurations.
|
| 147 |
+
|
| 148 |
+
# 4.2. Comparison to State-of-the-Arts
|
| 149 |
+
|
| 150 |
+
We compare the proposed ActFormer with the following baseline methods: Action2Motion [20], ACTOR [45], and CSGN [53], on both single- and multi-person motion generation tasks. Action2Motion and ACTOR can be directly evaluated on the specified datasets by simply adapting motion representations if needed. CSGN is an unconditional motion generation method, and we extend it to conditional generation by incorporating conditional BatchNorm [12] in the generator and projection [43] in the discriminator. All the methods above are designed for the single-person setting, and there are no prior works tackling the multi-person case. Therefore, we scale these methods to the multi-person case, again by concatenating multi-person motions as a whole in each frame.
|
| 151 |
+
|
| 152 |
+
The comparison on single-person motion generation is presented in Tab. 1. Action2Motion is sub-optimal in both its latent prior (frame-level) and network architecture (GRU), causing it to lag behind ours on all datasets, especially NTU-1P. CSGN achieves good performance on NTU-1P. However, this method, oriented towards skeleton data, suffers a performance degradation when moving from NTU-1P to the SMPL-based NTURecon-1P. ACTOR achieves excellent performance on NTU-13 but degrades significantly when faced with more challenging large-scale datasets and stricter evaluation metrics (considering root translation). Compared to them, our method demonstrates strong adaptability to various motion representations and achieves leading performance on all datasets. In particular, our ActFormer is the only workable method on the extremely challenging, long-tailed BABEL dataset.
|
| 153 |
+
|
| 154 |
+
As Tab. 2 shows, when extended to the multi-person case on NTU-2P, our advantages are more significant since no prior method includes specific designs to model human interactions. On GTA Combat, multi-person motions are
|
| 155 |
+
|
| 156 |
+
<table><tr><td>Configuration</td><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td></tr><tr><td>(1) Gaussian latent prior</td><td>37.55</td><td>17.25</td><td>6.01</td></tr><tr><td>(2) GraphConv. in Gen.</td><td>32.97</td><td>18.68</td><td>6.09</td></tr><tr><td>(3) Fixed PE</td><td>37.57</td><td>18.38</td><td>5.55</td></tr><tr><td>(4) Full model</td><td>42.01</td><td>12.53</td><td>3.57</td></tr></table>
|
| 157 |
+
|
| 158 |
+
further extended to at most 5 persons, making the generation task extremely challenging. According to the experimental results on NTU-2P, CSGN is the only baseline method showing some potential (lower FIDs) to be applied to the multi-person case. Therefore, here we compare our method with only CSGN on the GTA Combat dataset. With the increasing number of persons, both methods are prone to performance drops. By contrast, our ActFormer shows a smoother decline and outperforms CSGN on each dataset split. Despite this, the interaction complexity in 4P/5P poses a significant challenge to our method. Specific failure cases will be discussed in Sec. 4.4.
|
| 159 |
+
|
| 160 |
+
# 4.3. Ablation Study
|
| 161 |
+
|
| 162 |
+
In this part, we conduct ablation experiments to study various components in our proposed framework.
|
| 163 |
+
|
| 164 |
+
Latent Prior. Here we further verify the importance of the Gaussian Process latent prior in our framework. As in Tab. 3 (1), replacing the GP with a Gaussian latent prior leads to increases of 4.72 in $\mathrm{FID}_m$ and 2.44 in $\mathrm{FID}_w$ on NTU-1P.
|
| 165 |
+
|
| 166 |
+
Architecture Design. In the ActFormer, frame-wise human motions are encoded into vector-like token embeddings. In Tab. 3 (2), we evaluate another design choice: explicitly modeling skeleton topology with Graph Convolution and Graph Upsampling like in [53], meanwhile modeling temporal correlations with T-Former. It lags behind ActFormer, suggesting no need to explicitly model skeleton topology in this Transformer-based framework.
|
| 167 |
+
|
| 168 |
+
Positional Encoding. In the data-dependent Transformer architecture, positional encoding (PE) is a crucial component which brings additional positional dependencies. We find it better to make PE learnable rather than fixed in our method, given the results in Tab. 3 (3) and Tab. 4 (8). Moreover, for multi-person case, learning a 2D combination of temporal and person-wise PE is superior to Tab. 4 (9) of completely learnable independent PE, showing another tradeoff between inductive bias and representation capacity.
|
| 169 |
+
|
| 170 |
+
Discriminator. In the multi-person case, the GCN discriminator in the GAN training framework receives concatenated multi-person motions as input, equipped with a simple data augmentation strategy for permutation invariance. Here we investigate its effectiveness by comparing with other designs that embed permutation invariance into the discriminator architecture with symmetric functions. Specifically, these designs apply the same motion
|
| 171 |
+
|
| 172 |
+
Table 3. Ablation study: Several design choices on NTU-1P.
|
| 173 |
+
|
| 174 |
+
<table><tr><td>Configuration</td><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td><td>FIDma↓</td><td>FIDwa↓</td></tr><tr><td>(5) AvgPool. in Disc.</td><td>53.85</td><td>20.42</td><td>8.33</td><td>25.35</td><td>10.09</td></tr><tr><td>(6) MaxPool. in Disc.</td><td>60.19</td><td>10.71</td><td>4.01</td><td>17.26</td><td>5.42</td></tr><tr><td>(7) SelfAtt. in Disc.</td><td>51.53</td><td>19.96</td><td>8.27</td><td>24.91</td><td>8.75</td></tr><tr><td>(8) Fixed PE</td><td>61.42</td><td>4.86</td><td>3.64</td><td>7.71</td><td>2.78</td></tr><tr><td>(9) Independent PE</td><td>67.19</td><td>4.08</td><td>3.38</td><td>7.00</td><td>2.62</td></tr><tr><td>(10) Full model</td><td>69.65</td><td>3.77</td><td>3.27</td><td>7.12</td><td>2.46</td></tr></table>
|
| 175 |
+
|
| 176 |
+
Table 4. Ablation study: Several design choices on NTU-2P.
|
| 177 |
+
|
| 178 |
+
discriminator to each person independently and use the pooling/self-attention module to aggregate their features. Tab. 4 (5-7) shows that all of them fall behind our simple combination of concatenation with data augmentation.
|
| 179 |
+
|
| 180 |
+
Interaction Encoding. The leading performance of our method in multi-person motion generation is credited to two key designs. The first is to share the same latent vector sequence among multiple persons for inherent synchronization. The other is to model human interactions with I-Former. To verify their respective contributions, we experiment in Tab. 5 (1) to sample different latent vector sequences for different persons independently, and Tab. 5 (2) to adopt ActFormer for the single-person setting and output concatenated multi-person motions as a whole, without explicitly modeling interactions inside. From experiments on NTU-2P as shown in Tab. 5, we find both designs indispensable to synthesize high-quality interactive actions.
|
| 181 |
+
|
| 182 |
+
We further investigate the interaction encoding through experiments on GTA Combat. Despite the performance drop with the increasing number of persons, the complete model always maintains a leading position. When the number of persons reaches 5, the ablation in Tab. 5 (2) collapses to a $\mathrm{FID}_a$ of 7.62. We attribute this to the fact that human interactions become sparse correlations in this case. Compared to the position-dependent concatenation-based interaction module, encoding human interactions with data-dependent self-attention in the I-Former can better model such sparse correlations.
|
| 183 |
+
|
| 184 |
+
# 4.4. Qualitative results
|
| 185 |
+
|
| 186 |
+
We further evaluate the generation quality of ActFormer by visualization. Firstly, we visualize several generated samples for single-person actions from BABEL in Fig. 4. For each action label, diverse motions are synthesized. In the "Stretch" case, the persons are stretching different body parts in different samples. In the "Dance" case, various dancing styles are presented.
|
| 187 |
+
|
| 188 |
+
In Fig. 5 we visualize multi-person motion generation results for actions from NTU-2P and GTA Combat. In the multi-person case, we focus more on the synchronization of human interactions. In our generated samples, the motions of different participants are well synchronized, making the interactions look natural and vivid. For example, in the "Cheers and drink" case, the two persons' toasting, cheering, and drinking actions are temporally well-matched. In
|
| 189 |
+
|
| 190 |
+
<table><tr><td rowspan="2">Configuration</td><td colspan="5">NTU-2P</td><td colspan="4">GTA Combat (2~5 P)</td></tr><tr><td>Acc.↑</td><td>FIDm↓</td><td>FIDw↓</td><td>FIDma↓</td><td>FIDda↓</td><td colspan="4">FIDa↓</td></tr><tr><td>(1) w/o shared z</td><td>56.35</td><td>14.79</td><td>5.73</td><td>20.69</td><td>7.26</td><td>1.26</td><td>1.36</td><td>1.84</td><td>2.42</td></tr><tr><td>(2) Interaction by concatenation</td><td>56.46</td><td>7.63</td><td>4.33</td><td>11.51</td><td>3.57</td><td>1.15</td><td>1.37</td><td>2.23</td><td>7.62</td></tr><tr><td>(3) Complete model</td><td>69.65</td><td>3.77</td><td>3.27</td><td>7.12</td><td>2.46</td><td>1.04</td><td>1.13</td><td>1.58</td><td>2.23</td></tr></table>
|
| 191 |
+
|
| 192 |
+
Table 5. Ablation study: Interaction encoding on NTU-2P and GTA Combat.
|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
Figure 4. Generated single-person motions. The "Stretch" and "Dance" actions are both from BABEL.
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
|
| 199 |
+

|
| 200 |
+
Figure 5. Generated multi-person motions. Row $1 \sim 3$ : "Cheers and drink", "Take a photo", and "Support somebody" actions from NTU-2P. Row $4 \sim 6$ : "Combat" actions from GTA Combat.
|
| 201 |
+
|
| 202 |
+
the "Take a photo" case, after the photographer on the left has prepared the camera, the person on the right poses up immediately (gesturing with one hand while slightly leaning back) and maintains the posture until the end of photographing. Synchronization is reflected in not only local
|
| 203 |
+
|
| 204 |
+
poses but also global trajectories. As seen in the "Support somebody" case, the two persons are stumbling together. Their trajectories stay close since the sick one relies on the other to walk. In the first "Combat" case, we see the blue fighter suddenly attack the opponent and make him stagger backward. The second "Combat" case is more complex, in which four persons fight in pairs. The right two clash, while the left two, although not in contact, maintain fighting postures and constantly move to find chances. More qualitative video results can be found in the supplementary.
|
| 205 |
+
|
| 206 |
+
The last sample in Fig. 5 shows a failure case. The green and purple persons seem to interact with nobody and then fall onto the ground without being attacked. This case reflects a limitation of our method: when the number of persons increases, human interactions become sparse and may form two separate groups. ActFormer has no mechanism to divide persons into different interaction groups and is thus not sufficiently effective at learning from some GTA Combat data with 4 or 5 persons.
|
| 207 |
+
|
| 208 |
+
# 5. Conclusion and Discussion
|
| 209 |
+
|
| 210 |
+
This work explores a solution towards general action-conditioned human motion generation and proposes ActFormer, a GAN-based Transformer framework. The ActFormer is evaluated on several challenging benchmarks and achieves leading performance over prior methods on various human motion representations and on both single-person and multi-person motion generation tasks. Detailed ablation studies are also conducted to investigate the various components of our approach. The ActFormer adapts to a more complete domain of human actions compared to prior works, although a fully general human motion generator has not yet been reached. Human-object interaction synthesis remains unexplored,
|
| 211 |
+
|
| 212 |
+
and we leave this direction for future exploration.
|
| 213 |
+
|
| 214 |
+
Limitations. As discussed in the qualitative results section of the main manuscript, our method has no mechanism to divide persons into different interaction groups. Therefore, given MoCap samples in which multiple persons form separate interaction groups, the ActFormer cannot learn from them effectively. Besides, in the GAN training, multi-person motions are concatenated before being input to the GCN discriminator. Thus we cannot learn a single shared model for motions with a variable number of persons, even though the ActFormer generator itself is Transformer-based.
|
| 215 |
+
|
| 216 |
+
Broader impacts. The proposed generative method can synthesize non-existing content, and the community should be wary of malicious uses of this capability. The collected GTA Combat dataset contains fighting scenes, but we do not promote violence; the dataset should only be used for research on modeling multi-person interactive actions.
|
| 217 |
+
|
| 218 |
+
Acknowledgement. This work is supported by ZJNSFC under Grant LQ23F010008, NSFC (62201342, 62101325, U19B2035), and the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102). The authors would like to thank the High Performance Computing Center at Eastern Institute of Technology, Ningbo, for the GPU support.
|
| 219 |
+
|
| 220 |
+
# References
|
| 221 |
+
|
| 222 |
+
[1] Grand Theft Auto V. https://www.rockstargames.com/V/.
|
| 223 |
+
[2] Ragdoll Physics. https://gta.fandom.com/wiki/Ragdoll_Physics.
|
| 224 |
+
[3] Vida Adeli, Ehsan Adeli, Ian Reid, Juan Carlos Niebles, and Hamid Rezatofighi. Socially and contextually aware human motion and pose forecasting. IEEE Robotics Autom. Lett., 5(4):6033-6040, 2020.
|
| 225 |
+
[4] Vida Adeli, Mahsa Ehsanpour, Ian D. Reid, Juan Carlos Niebles, Silvio Savarese, Ehsan Adeli, and Hamid Rezatofighi. Tripod: Human trajectory and pose dynamics forecasting in the wild. CoRR, abs/2104.04029, 2021.
|
| 226 |
+
[5] Hyemin Ahn, Timothy Ha, Yunho Choi, Hwiyeon Yoo, and Songhwai Oh. Text2action: Generative adversarial synthesis from language to action. In ICRA, pages 1-5. IEEE, 2018.
|
| 227 |
+
[6] Chaitanya Ahuja and Louis-Philippe Morency. Language2pose: Natural language grounded pose forecasting. In 3DV, pages 719-728. IEEE, 2019.
|
| 228 |
+
[7] Ijaz Akhter and Michael J. Black. Pose-conditioned joint angle limits for 3d human pose reconstruction. In CVPR, pages 1446-1455. IEEE Computer Society, 2015.
|
| 229 |
+
[8] Emre Aksan, Manuel Kaufmann, and Otmar Hilliges. Structured prediction helps 3d human motion modelling. In ICCV, pages 7143-7152. IEEE, 2019.
|
| 230 |
+
[9] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
|
| 231 |
+
[10] Nikos Athanasiou, Mathis Petrovich, Michael J Black, and Gül Varol. Teach: Temporal action composition for 3d humans. arXiv preprint arXiv:2209.04066, 2022.
|
| 232 |
+
|
| 233 |
+
[11] Emad Barsoum, John Kender, and Zicheng Liu. HP-GAN: probabilistic 3d human motion prediction via GAN. In CVPR Workshops, pages 1418-1427. Computer Vision Foundation / IEEE Computer Society, 2018.
|
| 234 |
+
[12] Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C. Courville. Modulating early visual processing by language. In NIPS, pages 6594-6604, 2017.
|
| 235 |
+
[13] Ginger Delmas, Philippe Weinzaepfel, Thomas Lucas, Francesc Moreno-Noguer, and Grégory Rogez. Posescript: 3d human poses from natural language. In ECCV, pages 346-362. Springer, 2022.
|
| 236 |
+
[14] Ricard Durall, Stanislav Frolov, Andreas Dengel, and Janis Keuper. Combining transformer generators with convolutional discriminators. CoRR, abs/2105.10189, 2021.
|
| 237 |
+
[15] Matteo Fabbri, Fabio Lanzi, Simone Calderara, Andrea Palazzi, Roberto Vezzani, and Rita Cucchiara. Learning to detect and track visible and occluded body joints in a virtual world. In ECCV, 2018.
|
| 238 |
+
[16] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In ICCV, pages 4346-4354. IEEE Computer Society, 2015.
|
| 239 |
+
[17] Anindita Ghosh, Noshaba Cheema, Cennet Oguz, Christian Theobalt, and Philipp Slusallek. Synthesis of compositional animations from textual descriptions. In ICCV, pages 1396-1406, 2021.
|
| 240 |
+
[18] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pages 2672-2680, 2014.
|
| 241 |
+
[19] Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In CVPR, pages 5152-5161, 2022.
|
| 242 |
+
[20] Chuan Guo, Xinxin Zuo, Sen Wang, Shihao Zou, Qingyao Sun, Annan Deng, Minglun Gong, and Li Cheng. Action2motion: Conditioned generation of 3d human motions. In ACM Multimedia, pages 2021-2029. ACM, 2020.
|
| 243 |
+
[21] Agrim Gupta, Justin Johnson, Li Fei-Fei, Silvio Savarese, and Alexandre Alahi. Social GAN: socially acceptable trajectories with generative adversarial networks. In CVPR, pages 2255–2264. Computer Vision Foundation / IEEE Computer Society, 2018.
|
| 244 |
+
[22] Ikhsanul Habibie, Daniel Holden, Jonathan Schwarz, Joe Yearsley, and Taku Komura. A recurrent variational autoencoder for human motion synthesis. In BMVC. BMVA Press, 2017.
|
| 245 |
+
[23] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, pages 6626-6637, 2017.
|
| 246 |
+
[24] Ruozhi Huang, Huang Hu, Wei Wu, Kei Sawada, Mi Zhang, and Daxin Jiang. Dance revolution: Long-term dance generation with music via curriculum learning. In ICLR. OpenReview.net, 2021.
|
| 247 |
+
|
| 248 |
+
[25] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell., 36(7):1325-1339, 2014.
|
| 249 |
+
[26] Ashesh Jain, Amir Roshan Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-rnn: Deep learning on spatiotemporal graphs. In CVPR, pages 5308-5317. IEEE Computer Society, 2016.
|
| 250 |
+
[27] Yifan Jiang, Shiyu Chang, and Zhangyang Wang. TransGAN: Two transformers can make one strong GAN. CoRR, abs/2102.07074, 2021.
|
| 251 |
+
[28] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decoupling representation and classifier for long-tailed recognition. In ICLR. OpenReview.net, 2020.
|
| 252 |
+
[29] Jihoon Kim, Jiseob Kim, and Sungjoon Choi. FLAME: Free-form language-based motion synthesis & editing. arXiv preprint arXiv:2209.00349, 2022.
|
| 253 |
+
[30] Muhammed Kocabas, Nikos Athanasiou, and Michael J. Black. VIBE: video inference for human body pose and shape estimation. In CVPR, 2020.
|
| 254 |
+
[31] Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, and Jan Kautz. Dancing to music. In NeurIPS, pages 3581-3591, 2019.
|
| 255 |
+
[32] Kwonjoon Lee, Huiwen Chang, Lu Jiang, Han Zhang, Zhuowen Tu, and Ce Liu. Vitgan: Training gans with vision transformers. CoRR, abs/2107.04589, 2021.
|
| 256 |
+
[33] Jiaman Li, Yihang Yin, Hang Chu, Yi Zhou, Tingwu Wang, Sanja Fidler, and Hao Li. Learning to generate diverse dance motions with transformer. CoRR, abs/2008.08171, 2020.
|
| 257 |
+
[34] Ruilong Li, Shan Yang, David A. Ross, and Angjoo Kanazawa. Learn to dance with AIST++: music conditioned 3d dance generation. CoRR, abs/2101.08779, 2021.
|
| 258 |
+
[35] Angela S. Lin, Lemeng Wu, Rodolfo Corona, Kevin Tai, Qixing Huang, and Raymond J. Mooney. Generating animated videos of human activities from natural language descriptions. In Proceedings of the Visually Grounded Interaction and Language Workshop at NeurIPS 2018, December 2018.
|
| 259 |
+
[36] Hung Yu Ling, Fabio Zinno, George Cheng, and Michiel van de Panne. Character controllers using motion vaes. ACM Trans. Graph., 39(4):40, 2020.
|
| 260 |
+
[37] Jun Liu, Amir Shahroudy, Mauricio Perez, Gang Wang, Ling-Yu Duan, and Alex C. Kot. NTU RGB+D 120: A large-scale benchmark for 3d human activity understanding. IEEE Trans. Pattern Anal. Mach. Intell., 42(10):2684-2701, 2020.
|
| 261 |
+
[38] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: a skinned multiperson linear model. ACM Trans. Graph., 34(6):248:1-248:16, 2015.
|
| 262 |
+
[39] Naureen Mahmood, Nima Ghorbani, Nikolaus F. Troje, Gerard Pons-Moll, and Michael J. Black. AMASS: archive of motion capture as surface shapes. In ICCV, pages 5441-5450. IEEE, 2019.
|
| 263 |
+
[40] Christian Mandery, Ömer Terlemez, Martin Do, Nikolaus Vahrenkamp, and Tamim Asfour. The KIT whole-body human motion database. In ICAR, pages 329-336. IEEE, 2015.
|
| 264 |
+
|
| 265 |
+
[41] Julieta Martinez, Michael J. Black, and Javier Romero. On human motion prediction using recurrent neural networks. In CVPR, pages 4674-4683. IEEE Computer Society, 2017.
|
| 266 |
+
[42] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.
|
| 267 |
+
[43] Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In ICLR (Poster). OpenReview.net, 2018.
|
| 268 |
+
[44] Meinard Müller, Andreas Baak, and Hans-Peter Seidel. Efficient and robust annotation of motion capture data. In Symposium on Computer Animation, pages 17-26. ACM, 2009.
|
| 269 |
+
[45] Mathis Petrovich, Michael J. Black, and Gül Varol. Action-conditioned 3D human motion synthesis with transformer VAE. In ICCV, 2021.
|
| 270 |
+
[46] Mathis Petrovich, Michael J Black, and Gül Varol. Temos: Generating diverse human motions from textual descriptions. In ECCV, pages 480-497. Springer, 2022.
|
| 271 |
+
[47] Abhinanda R. Punnakkal, Arjun Chandrasekaran, Nikos Athanasiou, Alejandra Quiros-Ramirez, and Michael J. Black. BABEL: bodies, action and behavior with english labels. In CVPR, pages 722-731. Computer Vision Foundation / IEEE, 2021.
|
| 272 |
+
[48] Sebastian Starke, He Zhang, Taku Komura, and Jun Saito. Neural state machine for character-scene interactions. ACM Trans. Graph., 38(6):209:1-209:14, 2019.
|
| 273 |
+
[49] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H Bermano. Human motion diffusion model. arXiv preprint arXiv:2209.14916, 2022.
|
| 274 |
+
[50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, pages 5998-6008, 2017.
|
| 275 |
+
[51] Tingwu Wang, Yunrong Guo, Maria Shugrina, and Sanja Fidler. Unicon: Universal neural controller for physics-based character motion. CoRR, abs/2011.15119, 2020.
|
| 276 |
+
[52] Jungdam Won, Deepak Gopinath, and Jessica K. Hodgins. A scalable approach to control diverse behaviors for physically simulated characters. ACM Trans. Graph., 39(4):33, 2020.
|
| 277 |
+
[53] Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan, and Dahua Lin. Convolutional sequence generation for skeleton-based action synthesis. In ICCV, pages 4393-4401. IEEE, 2019.
|
| 278 |
+
[54] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, pages 7444-7452. AAAI Press, 2018.
|
| 279 |
+
[55] Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, and Honglak Lee. MT-VAE: learning motion transformations to generate multimodal human dynamics. In ECCV(5), volume 11209 of Lecture Notes in Computer Science, pages 276–293. Springer, 2018.
|
| 280 |
+
[56] Mohammad Samin Yasar and Tariq Iqbal. A scalable approach to predict multi-agent motion for human-robot collaboration. IEEE Robotics Autom. Lett., 6(2):1686-1693, 2021.
|
| 281 |
+
[57] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. Motiondiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022.
|
| 282 |
+
|
| 283 |
+
[58] Yan Zhang, Michael J. Black, and Siyu Tang. Perpetual motion: Generating unbounded human motion. CoRR, abs/2007.13886, 2020.
|
| 284 |
+
[59] Zhengyou Zhang. Microsoft Kinect sensor and its effect. IEEE Multim., 19(2):4-10, 2012.
|
actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:aad548c5c8c392f1e9105b3ac5a5bbed62862959e3f709bc1a655f680818888f
|
| 3 |
+
size 402758
|
actformeraganbasedtransformertowardsgeneralactionconditioned3dhumanmotiongeneration/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7621ca0515c66f04505d6ae916d729bfa14cf13426c0ee19b7af2f1e845bf339
|
| 3 |
+
size 368294
|
actionsensitivitylearningfortemporalactionlocalization/07053da8-e3f8-425e-9d4f-338c187ce557_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6fe04558ed37c96ef593295bc41d0c7e062621612c3a7117cefac938fd94306e
|
| 3 |
+
size 103612
|
actionsensitivitylearningfortemporalactionlocalization/07053da8-e3f8-425e-9d4f-338c187ce557_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:18b42d2e5c7c3d0d92b72925090f42bb4e18ff195e3ee3d25ea1add24caa08d8
|
| 3 |
+
size 131164
|
actionsensitivitylearningfortemporalactionlocalization/07053da8-e3f8-425e-9d4f-338c187ce557_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:73d9ad0c8fbfd90354c3567a7d68c564ee22e7ff4cba6ee8ff27254c65a6dbcf
|
| 3 |
+
size 2313184
|
actionsensitivitylearningfortemporalactionlocalization/full.md
ADDED
|
@@ -0,0 +1,408 @@
|
| 1 |
+
# Action Sensitivity Learning for Temporal Action Localization
|
| 2 |
+
|
| 3 |
+
Jiayi Shao $^{1,*}$, Xiaohan Wang $^{1}$, Ruijie Quan $^{1}$, Junjun Zheng $^{2}$, Jiang Yang $^{2}$, Yi Yang $^{1,\dagger}$. $^{1}$ ReLER Lab, CCAI, Zhejiang University, $^{2}$ Alibaba Group
|
| 4 |
+
|
| 5 |
+
shaojiayil@zju.edu.cn, wxh199611@gmail.com, quanruij@hotmail.com, {fangcheng.zjj, yangjiang.yj}@alibaba-inc.com, yangyics@zju.edu.cn
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Temporal action localization (TAL), which involves recognizing and locating action instances, is a challenging task in video understanding. Most existing approaches directly predict action classes and regress offsets to boundaries, while overlooking the discrepant importance of each frame. In this paper, we propose an Action Sensitivity Learning framework (ASL) to tackle this task, which aims to assess the value of each frame and then leverage the generated action sensitivity to recalibrate the training procedure. We first introduce a lightweight Action Sensitivity Evaluator to learn the action sensitivity at the class level and instance level, respectively. The outputs of the two branches are combined to reweight the gradient of the two sub-tasks. Moreover, based on the action sensitivity of each frame, we design an Action Sensitive Contrastive Loss to enhance features, where the action-aware frames are sampled as positive pairs to push away the action-irrelevant frames. The extensive studies on various action localization benchmarks (i.e., MultiThumos, Charades, Ego4D-Moment Queries v1.0, Epic-Kitchens 100, Thumos14 and ActivityNet1.3) show that ASL surpasses the state-of-the-art in terms of average-mAP under multiple types of scenarios, e.g., single-labeled, densely-labeled and egocentric.
|
| 10 |
+
|
| 11 |
+
# 1. Introduction
|
| 12 |
+
|
| 13 |
+
With an increasing number of videos appearing online, video understanding has become a prominent research topic in computer vision. Temporal action localization (TAL), which aims to temporally locate and recognize human actions with a set of categories in a video clip, is a challenging yet fundamental task in this area, owing to its various applications such as sports highlighting, human action analysis and security monitoring [25, 63, 46, 17, 14].
|
| 14 |
+
|
| 15 |
+
We have recently witnessed significant progress in TAL,
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Figure 1. The motivation of our method. We show the action instance of clothes drying and depict the possible importance of each frame to recognizing the action category and locating action boundaries. Each frame's importance is different.
|
| 19 |
+
|
| 20 |
+
where most methods can be divided into two groups: 1) two-stage approaches [75, 85] tackle this task by first generating class-agnostic action proposals and then performing classification and proposal boundary refinement at the proposal level; 2) one-stage approaches [79, 72, 32] simultaneously recognize and localize action instances in a single-shot manner. Typical methods [76, 29] of this type predict categories and locate the corresponding temporal boundaries at the frame level, and currently achieve stronger TAL results. In training, they classify every frame as one action category or background and regress the boundaries for frames inside ground-truth action segments. However, these works treat each frame within action segments equally in training, leading to sub-optimal performance.
|
| 21 |
+
|
| 22 |
+
When humans intend to locate action instances, the discrepant information of each frame is referred to. For the action instance clothes drying, as depicted in Fig. 1, frames in the purple box contribute most to recognizing clothes drying, as they describe the intrinsic sub-action hang clothes on the hanger. Analogously, frames in the red and gray boxes depict take out clothes from laundry basket and lift laundry basket, which are more informative for locating the precise start and end times, respectively. In a word, each frame's contribution is quite different, due to the intrinsic patterns of actions as well as transitional or blurred frames.
|
| 23 |
+
|
| 24 |
+
Can we discover informative frames for classifying and
|
| 25 |
+
|
| 26 |
+
localizing respectively? To this end, we first introduce a concept — Action Sensitivity, to measure the frame's importance. It is disentangled into two parts: action sensitivity to classification sub-task and action sensitivity to localization sub-task. For one sub-task, the higher action sensitivity each frame has, the more important it will be for this sub-task. With this concept, intuitively, more attention should be paid to action sensitive frames in training.
|
| 27 |
+
|
| 28 |
+
Therefore in this paper, we propose a lightweight Action Sensitivity Evaluator (ASE) for each sub-task to better exploit frame-level information. Essentially, for a specific sub-task, ASE learns the action sensitivity of each frame from two perspectives: class-level and instance-level. The class-level perspective is to model the coarse action sensitivity distribution of each action category and is achieved by incorporating gaussian weights. The instance-level perspective is complementary to class-level modeling and is supervised in a prediction-aware manner. Then the training weights of each frame are dynamically adjusted depending on their action sensitivity, making it more reasonable and effective for model training.
|
| 29 |
+
|
| 30 |
+
With the proposed ASE, we build our novel Action Sensitivity Learning framework, dubbed ASL, to tackle the temporal action localization (TAL) task effectively. Moreover, to further enhance the features and improve the discrimination between actions and backgrounds, we design a novel Action Sensitive Contrastive Loss (ASCL) based on ASE. It is implemented by elaborately generating various types of action-related and action-irrelevant features and contrasting them, which brings multiple merits for TAL.
|
| 31 |
+
|
| 32 |
+
By conducting extensive experiments on 6 datasets and detailed ablation studies, we demonstrate ASL is able to classify and localize action instances better. In a nutshell, our main contributions can be summarized as follows:
|
| 33 |
+
|
| 34 |
+
- We propose a novel framework with an Action Sensitivity Evaluator component to boost training, by discovering action sensitive frames to specific sub-tasks, which is modeled from class level and instance level.
|
| 35 |
+
- We design an Action Sensitive Contrastive Loss to do feature enhancement and to increase the discrimination between actions and backgrounds.
|
| 36 |
+
- We verify ASL on various action localization datasets of multiple types: i) densely-labeled (i.e., MultiThumos [74] and Charades [53]). ii) egocentric (Ego4d-Moment Queries v1.0 [19] and Epic-Kitchens 100 [11]). iii) nearly single-labeled (Thumos14 [57] and ActivityNet1.3 [2]), and achieve superior results.
|
| 37 |
+
|
| 38 |
+
# 2. Related Works
|
| 39 |
+
|
| 40 |
+
Temporal Action Localization. Temporal action localization is a long-standing research topic. Contemporary
|
| 41 |
+
|
| 42 |
+
approaches mostly fall into two categories, i.e., two-stage and one-stage paradigms. Previous two-stage methods usually focused on action proposal generation [31, 33, 56, 58, 65]. Others have integrated action proposal, backbone calibration, classification and boundary regression or refinement modules into one single model [51, 69, 49, 81]. Recent efforts have investigated proposal relations [75, 85, 66], utilized graph modeling [72, 75], or designed fine-grained temporal representations [44, 55]. One-stage approaches usually perform frame-level or segment-level classification and directly localize or merge segments [49, 80, 32]. [79, 39] process the video with the assistance of pre-defined anchors or learned proposals, while others utilize existing information and are totally anchor-free [29, 76, 78]. Recently, some works introduce a pretrain-finetune paradigm to the TAL task [70, 71] or attempt to train the model in an efficient end-to-end manner [38, 7, 37]. Others focus on the densely-labeled setting [61, 10, 9, 24, 59, 8]. With the success of DETR [3] in object detection, query-based methods have also been proposed [48, 58, 59, 38]. Our method falls into the one-stage TAL paradigm and performs frame-level classification and localization. Notably, [43, 39] incorporate Gaussian kernels to improve receptive fields and optimize the temporal scale of action proposals, and [24] uses fixed gaussian-like weights to fuse the coarse and fine stages. We also utilize gaussian weights as one part of ASE, but our approach differs in that: i) our gaussian-like weights in ASE serve to model class-level action sensitivity and boost effective training, while [24, 43, 39] use them only to better encode the videos; ii) our learned gaussian weights describe frames' contributions to each sub-task and can be easily visualized, whereas the semantic meaning of the gaussian weights in [24, 43, 39] is unclear; iii) our gaussian-like weights are totally learnable, category-aware and disentangled across different sub-tasks.
|
| 43 |
+
|
| 44 |
+
One-stage Object Detection. Object detection shares a few similarities with the TAL task. As a counterpart in object detection, the one-stage paradigm has surged recently. Some works remain anchor-based [35], while others are anchor-free, utilizing a feature pyramid network [34, 60] and improved label-assignment strategies [77, 83, 84, 52]. Moreover, some works define key points in different ways (e.g., corner [26], center [13, 60] or learned points [73]). These methods bring some inspiration for designing a better TAL framework. Some methods [16, 28, 27] aim to tackle the misalignment between classification and localization, but i) we mainly focus on the discrepant information of frames, and ii) the misalignment of the two sub-tasks (i.e., classification and localization) is only a secondary issue for us, which we alleviate with a novel contrastive loss that differs from these works.
|
| 45 |
+
|
| 46 |
+
Contrastive Learning. Contrastive learning [6, 20, 22] is an unsupervised learning objective that aims to bring
|
| 47 |
+
|
| 48 |
+

|
| 49 |
+
Figure 2. The overview of ASL. Given a video clip, we first leverage a pre-trained 3D-CNN to extract the video feature and then utilize a Transformer encoder to encode it. We then use ground-truth location sampling to sample all ground-truth segments and feed them into the Action Sensitivity Evaluator. In this module, we model the sub-task-specific action sensitivity of each frame at the class level and the instance level. The former is learned by incorporating learnable gaussian-like weights and the latter with an instance-level evaluator. Each frame's weight in training is then adjusted based on its action sensitivity. Moreover, we propose an Action Sensitive Contrastive Loss to further enhance the features and alleviate the misalignment problem.
|
| 50 |
+
|
| 51 |
+
similar examples closer together in feature space while pushing dissimilar examples apart. NCE [21] and Info-NCE [41] are two typical methods that mine data features by distinguishing between data and noise or negative samples. InfoNCE-based contrastive learning has been used for different tasks, such as cross-modality retrieval [67, 36] and unsupervised learning [23, 42]. In TAL, [29] leverages a ranking loss to boost discrimination between foreground and background, while [48] contrasts different actions with a global representation of action segments. In contrast, we design a new contrastive loss that operates both across different types of actions and between actions and backgrounds. Moreover, compared to [50], which also contrasts actions and backgrounds, our contrastive loss additionally contrasts i) same and different action classes and ii) sensitive frames of localization and classification, to mitigate the misalignment of the sub-tasks. Details are discussed in Section 3.3.
|
| 52 |
+
|
| 53 |
+
# 3. Method
|
| 54 |
+
|
| 55 |
+
Problem Formulation. The task of temporal action localization (TAL) is to predict a set of action instances $\{(t_m^s,t_m^e,c_m)\}_{m = 1}^M$ , given a video clip, where $M$ is the number of predicted action instances, $t_m^s,t_m^e,c_m$ are the start, end timestamp and action category of the $m$ -th predicted action instance. ASL is built on an anchor-free representation that classifies each frame as one action category or background, as well as regresses the distances from this frame to the start time and end time.
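To make the anchor-free representation concrete, the short sketch below decodes per-frame boundary offsets into segments. The function name, tensor shapes, and the assumption that the regressed distances are non-negative are illustrative, not taken from the authors' released code.

```python
import torch

def decode_segments(frame_times: torch.Tensor,
                    start_offsets: torch.Tensor,
                    end_offsets: torch.Tensor) -> torch.Tensor:
    """Decode per-frame anchor-free predictions into (t^s, t^e) segments (a sketch).

    frame_times   : (T,) timestamp of each frame
    start_offsets : (T,) predicted distances from each frame to the action start
    end_offsets   : (T,) predicted distances from each frame to the action end
    """
    t_s = frame_times - start_offsets   # regressed start time per frame
    t_e = frame_times + end_offsets     # regressed end time per frame
    return torch.stack([t_s, t_e], dim=-1)   # (T, 2) candidate segments
```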
|
| 56 |
+
|
| 57 |
+
Overview. The overall architecture of ASL is shown in Fig 2. ASL is composed of four parts: video feature extractor, feature encoder, action sensitivity evaluator, and two
|
| 58 |
+
|
| 59 |
+
sub-task heads. Concretely, given a video clip, we first extract the video feature using a pre-trained 3D-CNN model. Then we apply a feature encoder involving a pyramid network to better represent the temporal features at multiple levels. We propose an action sensitivity evaluator module to assess the action sensitivity of frames with respect to a specific sub-task. The pyramid features, combined with the frames' action sensitivity, are further processed by sub-task heads to generate predictions. We now describe the details of ASL.
|
| 60 |
+
|
| 61 |
+
# 3.1. Feature Encoder
|
| 62 |
+
|
| 63 |
+
Following the success of [76, 29], ASL utilizes a Transformer encoder and a feature pyramid network to encode feature sequences into a multiscale representation. To enhance the features, in the Transformer encoder we design a new attention mechanism that performs temporal attention and channel attention in parallel and then fuses the two outputs.
|
| 64 |
+
|
| 65 |
+
For the temporal attention performed along the temporal dimension, the input features generate query, key and value tensors $(Q_{t},K_{t},V_{t})\in \mathbb{R}^{T\times D}$, where $T$ is the number of frames and $D$ is the embedding dimension; the output is calculated as:
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
f_{\mathrm{ta}}^{\prime} = \operatorname{softmax}\left(\frac{Q_{t} K_{t}^{T}}{\sqrt{D}}\right) V_{t} \tag{1}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
For the channel attention conducted along the channel dimension, the input features generate query, key and value tensors $(Q_{d},K_{d},V_{d})\in \mathbb{R}^{D\times T}$, where $D$ is the number of channels. The output is calculated as:
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
f_{\mathrm{ca}}^{\prime} = \operatorname{softmax}\left(\frac{Q_{d} K_{d}^{T}}{\sqrt{T}}\right) V_{d} \tag{2}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
The two outputs above are then combined with a coefficient $\theta$: $f^{\prime} = (1 - \theta)f_{\mathrm{ta}}^{\prime} + \theta f_{\mathrm{ca}}^{\prime T}$. The result is then processed by layer
|
| 78 |
+
|
| 79 |
+
normalization and a feed-forward network to obtain the encoded video representation $f \in \mathbb{R}^{T \times D}$.
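For concreteness, a minimal PyTorch-style sketch of the parallel temporal/channel attention and its $\theta$-weighted fusion (Eqs. 1-2) is given below. The module name, single-head formulation, separate QKV projections per branch, and the residual/feed-forward ordering are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Single-head sketch of the parallel temporal / channel attention (Eqs. 1-2)."""
    def __init__(self, dim: int, theta: float = 0.2):
        super().__init__()
        self.theta = theta
        # separate projections for the temporal and channel branches (an assumption)
        self.qkv_t = nn.Linear(dim, 3 * dim)
        self.qkv_c = nn.Linear(dim, 3 * dim)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (T, D) frame features
        T, D = f.shape
        q_t, k_t, v_t = self.qkv_t(f).chunk(3, dim=-1)                            # each (T, D)
        f_ta = torch.softmax(q_t @ k_t.transpose(0, 1) / D ** 0.5, dim=-1) @ v_t  # Eq. (1), (T, D)

        q_d, k_d, v_d = [x.transpose(0, 1) for x in self.qkv_c(f).chunk(3, dim=-1)]  # each (D, T)
        f_ca = torch.softmax(q_d @ k_d.transpose(0, 1) / T ** 0.5, dim=-1) @ v_d     # Eq. (2), (D, T)

        fused = (1 - self.theta) * f_ta + self.theta * f_ca.transpose(0, 1)       # fuse back to (T, D)
        out = self.norm(f + fused)        # residual + layer norm (assumed ordering)
        return out + self.ffn(out)        # feed-forward with residual
```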
|
| 80 |
+
|
| 81 |
+
# 3.2. Action Sensitivity Evaluator
|
| 82 |
+
|
| 83 |
+
As discussed in Section 1, not all frames inside ground-truth segments contribute equally to a sub-task (i.e., localization or classification). Thus we design an Action Sensitivity Evaluator (ASE) module, the core idea of which is to determine the sub-task-specific action sensitivity of each frame and help the model pay more attention to those valuable frames. Besides, this module is lightweight, leading to efficient and effective training.
|
| 84 |
+
|
| 85 |
+
Decoupling into two levels. Digging into action instances, a key observation is that actions of a particular category often share a similar pattern, but they appear slightly different in diverse scenarios or under different behavior agents. For example, action instances of the category wash vegetables inherently contain the sub-actions turn the tap on, take vegetables, wash and turn the tap off, where frames depicting washing are more sensitive to classification, while frames depicting turning the tap on and turning the tap off are more sensitive to localization. However, the duration or proportion of these sub-actions depends on the scene and context of each action instance, making the sensitive frames slightly different. This motivates us to decouple the action sensitivity of every frame into class-level and instance-level modeling and then recombine the two parts.
|
| 86 |
+
|
| 87 |
+
Disentangling into two sub-tasks. Here the sub-tasks are classification and localization. Intuitively, action sensitivity for classification needs to be modeled, since sensitive frames for classification are not easily determined. Action sensitivity modeling for localization is also necessary: although the boundaries of action segments are already defined, sensitive frames are not necessarily at the start or the end of an action since i) action boundaries are often unclear, and ii) each frame of the sub-actions around boundaries also has different semantics. Therefore, action sensitivity modeling should be disentangled for the two sub-tasks (i.e., classification and localization).
|
| 88 |
+
|
| 89 |
+
Formally, for a given ground-truth $\mathcal{G} = \{\bar{t}^s,\bar{t}^e,\bar{c}\}$, denoting the start time, end time and category of one action, we denote $N_{f}$ as the number of frames within this action and $N_{c}$ as the number of pre-defined action categories. Our goal is to model the class-level action sensitivity $p$ (disentangled into $p^{cls}$ and $p^{loc}$ for classification and localization, respectively) and the instance-level action sensitivity $q$ (disentangled into $q^{cls}$ and $q^{loc}$). We now delve into the details of action sensitivity learning.
|
| 90 |
+
|
| 91 |
+
Class-level Modeling. Class-level sensitivity poses a fundamental prior for action sensitivity learning. Two key observations are that: i) video frames are often consecutive, and ii) there often exist keyframes that have a peak value of sensitivity among all frames. In this case, we
|
| 92 |
+
|
| 93 |
+
incorporate gaussian-like weights with learnable parameters $\mu, \sigma \in \mathbb{R}^{N_c}$ to model the class-level action sensitivity $p$.
|
| 94 |
+
|
| 95 |
+
For the classification sub-task, we model the corresponding action sensitivity $p_i^{cls}$ of the $i$-th frame as:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
p_{i}^{cls} = \exp \left\{ -\frac{\left(d(i) - \mu_{c}\right)^{2}}{2 \sigma_{c}^{2}} \right\} \tag{3}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
where $d(i)$ is the distance from the current $i$-th frame to the central frame of the ground-truth segment, normalized by $N_{f}$ so that $d(i) \in [-0.5, 0.5]$: when $i = 1$ (the start frame), $d(i) = -0.5$; when $i = N_{f}$ (the end frame), $d(i) = 0.5$. The learnable parameters $\mu_{c}, \sigma_{c}$ denote the mean and variance of category $c$'s action sensitivity distribution.
|
| 102 |
+
|
| 103 |
+
For the localization sub-task, different frames are sensitive to locating the start time and the end time. Therefore the action sensitivity $p^{loc}$ is the combination of two parts: we explicitly allocate one set of gaussian-like weights $p^{sot}$ to model the start-time locating sensitivity and another set $p^{eot}$ to model the end-time locating sensitivity. $p^{loc}$ is calculated as:
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
p_{i}^{loc} = \underbrace{\exp \left\{ -\frac{\left(d(i) - \mu_{c,1}\right)^{2}}{2 \sigma_{c,1}^{2}} \right\}}_{p_{i}^{sot}} + \underbrace{\exp \left\{ -\frac{\left(d(i) - \mu_{c,2}\right)^{2}}{2 \sigma_{c,2}^{2}} \right\}}_{p_{i}^{eot}} \tag{4}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
In this way, the class-level action sensitivities $p^{cls}, p^{loc} \in \mathbb{R}^{N_f \times N_c}$ of all categories are learned along with model training. In addition, the initialization of $\mu_c$ and $\sigma_c$ matters, since there exists prior knowledge [76, 60] for the different sub-tasks. For the classification sub-task, near-center frames are more sensitive, so we initialize $\mu_c$ as 0. For the localization sub-task, near-start and near-end frames are more sensitive, so we initialize $\mu_{c,1}$ as -0.5 and $\mu_{c,2}$ as 0.5. All $\sigma$ are initialized as 1.
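A small sketch of how the class-level Gaussian-like sensitivities of Eqs. 3-4 could be computed from the learnable per-class parameters is shown below. Tensor shapes, parameter names, and the frame normalization via `torch.linspace` are assumptions; the initializations (0 for $\mu_c$, $\pm 0.5$ for the localization means, 1 for all $\sigma$) and the clamp of $\sigma$ at 5.0 mirror what the paper states elsewhere.

```python
import torch
import torch.nn as nn

class ClassLevelSensitivity(nn.Module):
    """Learnable Gaussian-like class-level sensitivity p^cls / p^loc (Eqs. 3-4), a sketch."""
    def __init__(self, num_classes: int):
        super().__init__()
        # classification: peak initialized at the segment centre
        self.mu_cls = nn.Parameter(torch.zeros(num_classes))
        self.sigma_cls = nn.Parameter(torch.ones(num_classes))
        # localization: one Gaussian initialized near the start, one near the end
        self.mu_sot = nn.Parameter(torch.full((num_classes,), -0.5))
        self.mu_eot = nn.Parameter(torch.full((num_classes,), 0.5))
        self.sigma_sot = nn.Parameter(torch.ones(num_classes))
        self.sigma_eot = nn.Parameter(torch.ones(num_classes))

    def forward(self, n_frames: int):
        # d(i): normalized distance of each frame to the segment centre, in [-0.5, 0.5]
        d = torch.linspace(-0.5, 0.5, n_frames).unsqueeze(1)            # (N_f, 1)
        gauss = lambda mu, sigma: torch.exp(-(d - mu) ** 2 / (2 * sigma.clamp(max=5.0) ** 2))
        p_cls = gauss(self.mu_cls, self.sigma_cls)                       # (N_f, N_c), Eq. (3)
        p_loc = gauss(self.mu_sot, self.sigma_sot) + gauss(self.mu_eot, self.sigma_eot)  # Eq. (4)
        return p_cls, p_loc
```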
|
| 110 |
+
|
| 111 |
+
Instance-level Modeling. Intuitively, a Gaussian can only give a single peak, and thus class-level action sensitivity learning may not discover all sensitive frames. To this end, we introduce instance-level modeling, which is complementary and aims to capture additional important frames that have not been discovered by class-level modeling.
|
| 112 |
+
|
| 113 |
+
In instance-level modeling, as more information about the frame contexts of each instance is taken into account, we obtain the instance-level action sensitivity $q \in \mathbb{R}^{N_f}$ using an instance-level evaluator operating directly on each frame, composed of a 1D temporal convolutional network that encodes temporal contexts, a fully connected layer and a Sigmoid activation function. Denoting $\Phi^{cls}$ and $\Phi^{loc}$ as the two sub-task-specific instance-level evaluators, $q^{cls}$ and $q^{loc}$ are computed as:
|
| 114 |
+
|
| 115 |
+
$$
|
| 116 |
+
\left\{ \begin{array}{l} q_{i}^{cls} = \Phi^{cls}\left(f_{i}\right) \\ q_{i}^{loc} = \Phi^{loc}\left(f_{i}\right) \end{array} \right. \tag{5}
|
| 117 |
+
$$
|
| 118 |
+
|
| 119 |
+
Unlike class-level modeling, which contains some prior knowledge, the instance-level sensitivity $q$ is hard to learn in an
|
| 120 |
+
|
| 121 |
+
unsupervised manner. Intuitively, at the instance level a sensitive frame is one that leads to accurate predictions. Hence we utilize the quality $\{\bar{Q}_i\}_{i=1}^{N_f}$ of each frame's prediction to supervise the learning of $q$. For localization, a higher tIoU indicates a higher degree of overlap between two segments, so the tIoU between the predicted segment and the ground-truth segment can measure the quality of the prediction. For classification, the probability of the ground-truth category can serve as the quality of the prediction. Therefore, the quality targets $\bar{Q}^{cls}$ and $\bar{Q}^{loc}$ are defined as:
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
\left\{ \begin{array}{l} \bar{Q}_{i}^{cls} = \varphi\left(s_{i}[\bar{c}]\right) \\ \bar{Q}_{i}^{loc} = \mathrm{tIoU}\left(\Delta_{i}, \bar{\Delta}\right) \end{array} \right. \tag{6}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
where $s$ denotes the classification logits, $\Delta_{i}$ is the predicted segment $(t^{s}, t^{e})$ of the $i$-th frame, $\bar{\Delta}$ is the corresponding ground-truth segment, and $\varphi(\cdot)$ is the Sigmoid function. We use an MSE loss to supervise the learning of $q$. For $q^{cls}$, the optimization objective is formulated as Eq. 7; $q^{loc}$ is optimized in a similar way.
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
\mathcal{L}_{s} = \operatorname{MSE}\left(q^{cls}, \bar{Q}^{cls}\right) \tag{7}
|
| 131 |
+
$$
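The sketch below illustrates one possible form of an instance-level evaluator $\Phi$ (1D convolution, fully connected layer, Sigmoid; Eq. 5) and the prediction-quality supervision of Eqs. 6-7. The kernel size, hidden width, use of ReLU, and detaching of the quality targets are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceLevelEvaluator(nn.Module):
    """Sketch of one sub-task evaluator Phi (Eq. 5): 1D conv -> FC -> Sigmoid."""
    def __init__(self, dim: int, hidden: int = 256, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(dim, hidden, kernel_size, padding=kernel_size // 2)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (T, D) encoded frame features -> per-frame sensitivity q in (0, 1)
        x = F.relu(self.conv(f.transpose(0, 1).unsqueeze(0)))                     # (1, hidden, T)
        return torch.sigmoid(self.fc(x.squeeze(0).transpose(0, 1))).squeeze(-1)   # (T,)

def sensitivity_loss(q_cls, cls_logits, gt_class, q_loc, pred_segs, gt_seg):
    """MSE supervision of Eqs. 6-7, written for a single ground-truth instance."""
    q_bar_cls = torch.sigmoid(cls_logits[:, gt_class])           # quality of classification, Eq. (6)
    inter = (torch.minimum(pred_segs[:, 1], gt_seg[1]) -
             torch.maximum(pred_segs[:, 0], gt_seg[0])).clamp(min=0)
    union = (pred_segs[:, 1] - pred_segs[:, 0]) + (gt_seg[1] - gt_seg[0]) - inter
    q_bar_loc = inter / union.clamp(min=1e-6)                    # tIoU with the ground truth
    # targets are detached here (an assumption) so only q is driven by this loss
    return F.mse_loss(q_cls, q_bar_cls.detach()) + F.mse_loss(q_loc, q_bar_loc.detach())
```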
|
| 132 |
+
|
| 133 |
+
Optimization with Action Sensitivity. In this way, combining class-level and instance-level, we obtain the final action sensitivity $h(\bar{c}) \in \mathbb{R}^{N_f}$ (disentangled to classification and localization sub-task: $h(\bar{c}) \to \{h^{cls}(\bar{c}), h^{loc}(\bar{c})\}$ ) for the ground-truth $\mathcal{G} = \{\bar{t}^s, \bar{t}^e, \bar{c}\}$ :
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
\left\{ \begin{array}{l} h^{cls}(\bar{c}) = p^{cls}\, \mathbb{1}[\bar{c}] + q^{cls} \\ h^{loc}(\bar{c}) = p^{loc}\, \mathbb{1}[\bar{c}] + q^{loc} \end{array} \right. \tag{8}
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
where $\mathbb{1}[\bar{c}] \in \mathbb{R}^{N_c}$ denotes the one-hot vector of $\bar{c}$. The action sensitivity $h$ is further used in training. For the classification sub-task, we use a focal loss [35] to classify each frame, combined with the classification action sensitivity $h^{cls}$:
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\mathcal{L}_{cls} = \frac{1}{N_{pos}} \sum_{i} \left( \mathbb{1}_{in_i}\, h_{i}^{cls}(\bar{c}_{i})\, \mathcal{L}_{\mathrm{focal}_i} + \mathbb{1}_{bg_i}\, \mathcal{L}_{\mathrm{focal}_i} \right) \tag{9}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
where $\mathbb{1}_{in_i},\mathbb{1}_{bg_i}$ are indicators denoting whether the $i$-th frame is within a ground-truth action or is background, $N_{pos}$ is the number of frames within action segments, and $\bar{c}_i$ denotes the action category of the $i$-th frame.
|
| 146 |
+
|
| 147 |
+
For the localization sub-task, we use a DIoU loss [82] on frames within any ground-truth action instance to regress offsets from the current frame to the boundaries, combined with the localization action sensitivity $h^{loc}$:
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
\mathcal{L}_{loc} = \frac{1}{N_{pos}} \sum_{i} \left( \mathbb{1}_{in_i}\, h_{i}^{loc}(\bar{c}_{i})\, \mathcal{L}_{\mathrm{DIoU}_i} \right) \tag{10}
|
| 151 |
+
$$
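The recalibration of Eqs. 8-10 can be summarized by the sketch below, which assumes the per-frame focal and DIoU losses have already been computed and that `inside_mask`/`bg_mask` are float indicator tensors; the helper name and argument layout are illustrative.

```python
import torch

def weighted_task_losses(p_cls, p_loc, q_cls, q_loc, gt_class,
                         focal_per_frame, diou_per_frame, inside_mask, bg_mask):
    """Sketch of Eqs. 8-10: recalibrate per-frame losses with action sensitivity.

    p_cls / p_loc : (T, N_c) class-level sensitivities, q_cls / q_loc : (T,) instance-level terms.
    focal_per_frame / diou_per_frame : (T,) precomputed per-frame losses (assumed given).
    """
    # Eq. (8): combine class-level (indexed by the ground-truth class) and instance-level terms
    h_cls = p_cls[:, gt_class] + q_cls          # (T,)
    h_loc = p_loc[:, gt_class] + q_loc          # (T,)

    n_pos = inside_mask.sum().clamp(min=1)
    # Eq. (9): sensitivity-weighted focal loss on foreground, plain focal loss on background
    loss_cls = (inside_mask * h_cls * focal_per_frame
                + bg_mask * focal_per_frame).sum() / n_pos
    # Eq. (10): sensitivity-weighted DIoU regression loss on foreground frames only
    loss_loc = (inside_mask * h_loc * diou_per_frame).sum() / n_pos
    return loss_cls, loss_loc
```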
|
| 152 |
+
|
| 153 |
+
# 3.3. Action Sensitive Contrastive Loss
|
| 154 |
+
|
| 155 |
+
Now with ASE, each frame is equipped with an action sensitivity, and frames valuable to specific sub-tasks will be discovered. We further boost the training from the perspective
|
| 156 |
+
|
| 157 |
+
of feature enhancement. Delving into the feature representation, three shortcomings may hinder performance: i) classification-sensitive and localization-sensitive frames are quite different, resulting in a misalignment between these two sub-tasks; ii) features of actions of different categories are not very discriminable; iii) features within actions and outside boundaries are not yet well distinguished.
|
| 158 |
+
|
| 159 |
+
Therefore, on the basis of ASE, we propose an Action Sensitive Contrastive Loss (ASCL) to tackle the above issues. Specifically, for a given video feature $\{f_t\}_{t=1}^T$ and a ground-truth action instance $\mathcal{G} = \{\bar{t}^s, \bar{t}^e, \bar{c}\}$, we generate two action-related features and one action-irrelevant feature. First, to generate more valuable action-related features, we aim to find the frames sensitive to each sub-task. Since ASCL contrasts action instances of different classes, where class-level discrimination matters more, we utilize the class-level sensitivity $p$ to parse the sensitive frame ranges $T_{cls}$ for classification and $T_{loc}$ for localization. Given the ground-truth category $\bar{c}$, we obtain the most sensitive frames $a_{cls}, a_{sot}, a_{eot}$ for classification, start-time localization and end-time localization, respectively. Take $a_{eot}$ as an example:
|
| 160 |
+
|
| 161 |
+
$$
|
| 162 |
+
a_{eot} = \underset{i}{\arg\max}\left( p_{i}^{eot}\, \mathbb{1}[\bar{c}] \right) \tag{11}
|
| 163 |
+
$$
|
| 164 |
+
|
| 165 |
+
$a_{cls}$ and $a_{sot}$ are obtained in a similar way. Then, centered on $a$ and extending forward and backward with a range of $\delta N_f$ , where $\delta$ is the sampling length ratio, we get sensitive frame ranges $T_{cls}$ for classification and $T_{loc}$ for localization ( $T_{cls}$ and $T_{loc}$ are limited inside the action instance). Furthermore, we utilize class-level sensitivity to compute sensitive features $f_{cls}$ for classification, $f_{loc}$ for localization:
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
\left\{ \begin{array}{l} f_{cls} = \frac{1}{T} \sum_{t \in T_{cls}} p_{t}^{cls}\, \mathbb{1}[\bar{c}]\, f_{t} \\ f_{loc} = \frac{1}{T} \sum_{t \in T_{loc}} p_{t}^{loc}\, \mathbb{1}[\bar{c}]\, f_{t} \end{array} \right. \tag{12}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
Secondly, we aim to simultaneously discriminate actions and backgrounds better. Consequently we generate boundary-related background features $f_{bg}$ :
|
| 172 |
+
|
| 173 |
+
$$
|
| 174 |
+
f_{bg} = \frac{1}{T} \sum_{t \in [\bar{t}^{s} - \delta N_{f},\, \bar{t}^{s}] \cup [\bar{t}^{e},\, \bar{t}^{e} + \delta N_{f}]} f_{t} \tag{13}
|
| 175 |
+
$$
|
| 176 |
+
|
| 177 |
+
The learning objective of ASCL is based on a contrastive loss. As Figure 2 shows, the positive samples $\mathcal{P}$ are constructed from $f_{cls}$ and $f_{loc}$ of action instances of the same category, while the negative samples $\mathcal{N}$ come from: i) $f_{cls}$ and $f_{loc}$ of action instances of different categories, and ii) all background features $f_{bg}$. ASCL is computed for each batch $B$ with $N$ samples:
|
| 178 |
+
|
| 179 |
+
$$
|
| 180 |
+
\mathcal{L}_{\mathrm{ASCL}} = \frac{1}{N} \sum_{B} -\log \frac{\sum_{f_{x} \in \mathcal{P}_{f_{*}}} \operatorname{sim}\left(f_{*}, f_{x}\right)}{\sum_{f_{x} \in \mathcal{P}_{f_{*}}} \operatorname{sim}\left(f_{*}, f_{x}\right) + \sum_{f_{x} \in \mathcal{N}_{f_{*}}} \operatorname{sim}\left(f_{*}, f_{x}\right)} \tag{14}
|
| 181 |
+
$$
|
| 182 |
+
|
| 183 |
+
Optimizing ASCL helps tackle the corresponding issues above: i) it alleviates the misalignment of the two sub-tasks by pulling features of their respective sensitive frames closer; ii) it discriminates actions and backgrounds better by pushing action features of the same category closer and those of different categories apart, meanwhile pushing actions and backgrounds apart. Thus ASCL can enhance the feature representation and further boost training.
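A sketch of Eq. 14 over a batch of pooled $f_{cls}$/$f_{loc}$/$f_{bg}$ vectors is shown below. The similarity $\operatorname{sim}(\cdot,\cdot)$ is instantiated as an exponentiated, temperature-scaled cosine similarity, which is an assumption since the excerpt does not specify its exact form; the label convention (-1 for background) is also illustrative.

```python
import torch
import torch.nn.functional as F

def ascl_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Sketch of the Action Sensitive Contrastive Loss (Eq. 14).

    features: (M, D) pooled f_cls / f_loc / f_bg vectors collected over a batch.
    labels  : (M,) action class per feature, with -1 marking background f_bg.
    sim(.,.) = exp(cosine / tau) is an assumed instantiation.
    """
    feats = F.normalize(features, dim=-1)
    sim = torch.exp(feats @ feats.t() / tau)                 # (M, M) pairwise similarities

    losses = []
    for i in range(features.size(0)):
        if labels[i] < 0:                                    # backgrounds only act as negatives
            continue
        pos = (labels == labels[i])
        pos[i] = False                                       # exclude the anchor itself
        neg = ~pos
        neg[i] = False
        pos_sum, neg_sum = sim[i][pos].sum(), sim[i][neg].sum()
        if pos_sum > 0:
            losses.append(-torch.log(pos_sum / (pos_sum + neg_sum)))
    return torch.stack(losses).mean() if losses else features.new_zeros(())
```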
|
| 184 |
+
|
| 185 |
+
# 3.4. Training and Inference
|
| 186 |
+
|
| 187 |
+
Training. In the training process, our final loss function is designed as:
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
\mathcal{L} = \mathcal{L}_{cls} + \mathcal{L}_{loc} + \mathcal{L}_{s} + \lambda \mathcal{L}_{\mathrm{ASCL}} \tag{15}
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
where $\mathcal{L}_{cls}$ , $\mathcal{L}_{loc}$ and $\mathcal{L}_s$ are discussed in equation 9, equation 10 and equation 7. $\lambda$ denotes the weight of Action Sensitive Contrastive loss.
|
| 194 |
+
|
| 195 |
+
Inference. At inference time, our model outputs predictions $(t^s,t^e,c)$ for every frame across all pyramid levels, where $t^s,t^e$ denote the start and end time of the action and $c$ denotes the predicted action category, which also serves as the action confidence score. SoftNMS [1] is then applied to these results to suppress redundant predictions.
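For reference, a sketch of Gaussian Soft-NMS [1] applied to 1D temporal segments is given below. The score-decay form follows the original Soft-NMS formulation; `sigma` and `score_thresh` are illustrative defaults rather than values reported by this paper, while `top_k` loosely reflects the ~200 kept predictions mentioned in the implementation details.

```python
import torch

def soft_nms_1d(segments: torch.Tensor, scores: torch.Tensor,
                sigma: float = 0.5, score_thresh: float = 1e-3, top_k: int = 200):
    """Gaussian Soft-NMS [1] on temporal segments (a sketch, not the paper's exact settings).

    segments: (N, 2) start/end times, scores: (N,) confidences.
    Returns a list of (segment, decayed_score) pairs.
    """
    scores = scores.clone()
    keep = []
    while scores.numel() > 0 and len(keep) < top_k:
        i = int(scores.argmax())
        if scores[i] < score_thresh:
            break
        keep.append((segments[i].clone(), scores[i].clone()))
        # temporal IoU between the selected segment and all segments
        inter = (torch.minimum(segments[:, 1], segments[i, 1]) -
                 torch.maximum(segments[:, 0], segments[i, 0])).clamp(min=0)
        union = (segments[:, 1] - segments[:, 0]) + (segments[i, 1] - segments[i, 0]) - inter
        tiou = inter / union.clamp(min=1e-6)
        scores = scores * torch.exp(-tiou ** 2 / sigma)   # Gaussian decay of overlapping scores
        scores[i] = -1.0                                   # mark the selected segment as consumed
    return keep
```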
|
| 196 |
+
|
| 197 |
+
# 4. Experiments
|
| 198 |
+
|
| 199 |
+
# 4.1. Datasets and Evaluation Metric
|
| 200 |
+
|
| 201 |
+
Datasets. To validate the efficacy of the proposed ASL, extensive experiments on 6 datasets of 3 types are conducted: i) densely-labeled: MultiThumos[74] and Charades[53]; ii) densely-labeled and egocentric: Ego4D-Moment Queries v1.0[19] and Epic-Kitchens 100[11]; iii) single-labeled: Thumos14[57] and ActivityNet1.3[2].
|
| 202 |
+
|
| 203 |
+
MultiThumos is a densely labeled dataset including 413 sports videos of 65 classes. Charades is a large multi-label dataset containing 9848 videos of 157 action classes. These two datasets are both densely labeled and hence have multiple action instances in each video clip, where different actions may occur concurrently.
|
| 204 |
+
|
| 205 |
+
Ego4D-Moment Queries v1.0 (Ego4D-MQ1.0 for short) is a large-scale egocentric benchmark with 2,488 video clips and 22.2K action instances from 110 pre-defined action categories, which is densely labeled and composed of long clips. EPIC-Kitchens 100 is a large egocentric action dataset containing 100 hours of videos from 700 sessions capturing cooking activities in different kitchens. These two datasets are both large, egocentric and densely labeled.
|
| 206 |
+
|
| 207 |
+
Thumos14 is composed of 200 validation videos and 212 testing videos from 20 action classes, while ActivityNet has 19,994 videos with 200 action classes. These two datasets are singly labeled, and thus most video clips in them contain a single action instance.
|
| 208 |
+
|
| 209 |
+
Evaluation Metric. Since ASL focuses on action detection, we take mean Average Precision (mAP) at certain tIoU thresholds as the evaluation metric. For all six datasets, we also report the average mAP over several tIoU thresholds as the main metric. The tIoU thresholds are set consistently with the official setup or previous methods, as detailed in the captions of Tables 1, 2, 3 and 4.
|
| 210 |
+
|
| 211 |
+
# 4.2. Implementation Details.
|
| 212 |
+
|
| 213 |
+
We follow the practice of using off-the-shelf pre-extracted features as input: specifically, I3D [4] RGB features for MultiThumos, Charades, Thumos14 and ActivityNet; EgoVLP [30], Slowfast [15] and Omnivore [18] features for Ego4D-MQ1.0; and Slowfast features [15, 12] for Epic-Kitchens 100.
|
| 214 |
+
|
| 215 |
+
We train our model with a batch size of 2, 16, 2, 2 for 60, 30, 15, 25 epochs on MultiThumos, Charades, Ego4D-MQ1.0 and Epic-Kitchens 100 respectively, where the learning rate is set to $2e^{-4}$ . On ActivityNet and Thumos, we train our model with the batch size of 16, 2, the learning rate of $1e^{-3}$ , $1e^{-4}$ for 15, 30 epochs. We set $\lambda$ as 0.3 and $\theta$ as 0.2.
|
| 216 |
+
|
| 217 |
+
In post-processing, we apply softNMS [1] to suppress redundant predictions. For fair comparison, we keep 200, 100, 2000 and 2000 predictions on Thumos14, ActivityNet, Ego4D-MQ1.0 and Epic-Kitchens 100, respectively. On MultiThumos and Charades, considering that PointTAD [59] splits a video clip into more than 4 parts and generates 48 predictions for each part, we keep 200 predictions on these two datasets.
|
| 218 |
+
|
| 219 |
+
In the training process, we clamp $\sigma$ with a threshold (set as 5.0) to ensure $\sigma$ does not grow very large, which would otherwise yield very small $p^{cls}, p^{loc}$ and a trivial solution that minimizes the loss. Moreover, we tackle the issue of overlapping actions following [76, 60]: i) we use a multi-scale mechanism [34] to assign actions of different durations to different feature levels; ii) if a frame, even with the multi-scale mechanism, is still assigned to more than one ground-truth action, we choose the action with the shortest duration as its ground-truth target and model its action sensitivity based on this ground truth.
|
| 220 |
+
|
| 221 |
+
# 4.3. Main Results
|
| 222 |
+
|
| 223 |
+
MultiThumos and Charades: We compare ASL with state-of-the-art methods under detection-mAP on these two densely-labeled TAL benchmarks. PDAN [10], Coarse-Fine [24], MLAD [61] and MS-TCT [9] are based on frame-level representations, while PointTAD [59] is query-based. As shown in Table 1, ASL reaches the highest mAP over all tIoU thresholds, outperforming the previous best method (i.e., PointTAD) by a $2.0\%$ absolute increase in average mAP on MultiThumos and $3.3\%$ on Charades. Notably, PointTAD is further trained in an end-to-end manner with
|
| 224 |
+
|
| 225 |
+
Table 1. Results on MultiThumos and Charades. We report detection-mAP at different tIoU thresholds. Average mAP in [0.1:0.1:0.9] is reported on MultiThumos and Charades. Best results are in bold. $\ddagger$ indicates results trained with stronger image augmentation [59, 38]. I3D denotes using I3D [4] features and E2E indicates results trained in an end-to-end manner.
|
| 226 |
+
|
| 227 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Modality</td><td rowspan="2">Feature</td><td colspan="4">MultiThumos</td><td colspan="4">Charades</td></tr><tr><td>0.2</td><td>0.5</td><td>0.7</td><td>Avg.</td><td>0.2</td><td>0.5</td><td>0.7</td><td>Avg.</td></tr><tr><td>PDAN [10]</td><td>RGB</td><td>I3D</td><td>-</td><td>-</td><td>-</td><td>17.3</td><td>-</td><td>-</td><td>-</td><td>8.5</td></tr><tr><td>Coarse-Fine [24]</td><td>RGB</td><td>I3D</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>6.1</td></tr><tr><td>MLAD [61]</td><td>RGB</td><td>I3D</td><td>-</td><td>-</td><td>-</td><td>14.2</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>MS-TCT [9]</td><td>RGB</td><td>I3D</td><td>-</td><td>-</td><td>-</td><td>16.3</td><td>-</td><td>-</td><td>-</td><td>7.9</td></tr><tr><td>PointTAD [59]</td><td>RGB</td><td>I3D-E2E</td><td>36.8</td><td>23.3</td><td>11.0</td><td>21.7</td><td>15.9</td><td>12.6</td><td>8.5</td><td>11.3</td></tr><tr><td>PointTAD‡ [59]</td><td>RGB</td><td>I3D-E2E</td><td>39.7</td><td>24.9</td><td>12.0</td><td>23.5</td><td>17.5</td><td>13.5</td><td>9.1</td><td>12.1</td></tr><tr><td>ASL</td><td>RGB</td><td>I3D</td><td>42.4</td><td>27.8</td><td>13.7</td><td>25.5</td><td>24.5</td><td>16.5</td><td>9.4</td><td>15.4</td></tr></table>
|
| 228 |
+
|
| 229 |
+
Table 2. Results on Ego4D-Moment Queries v1.0. We report mAP at different tIoU thresholds. Average mAP in [0.1:0.1:0.5] is reported on Ego4D-Moment Queries. Best results are in bold. EgoVLP, SF and OV denote EgoVLP [30], Slowfast [15] and Omnivore [18] features. InternVideo [5] denotes features extracted from VideoMAE-L [62] and fine-tuned on Ego4D-Moment Queries.
|
| 230 |
+
|
| 231 |
+
<table><tr><td rowspan="2">Method/Entry</td><td rowspan="2">Feature</td><td colspan="4">mAP at IoUs, Val set</td><td rowspan="2">mAP at IoUs, Test set Avg.</td></tr><tr><td>0.1</td><td>0.3</td><td>0.5</td><td>Avg.</td></tr><tr><td>VSGN [79]</td><td>SF</td><td>9.10</td><td>5.76</td><td>3.41</td><td>6.03</td><td>5.68</td></tr><tr><td>VSGN [30]</td><td>EgoVLP</td><td>16.63</td><td>11.45</td><td>6.57</td><td>11.39</td><td>10.33</td></tr><tr><td>RELER [47]</td><td>SF+OV</td><td>22.75</td><td>17.61</td><td>13.43</td><td>17.94</td><td>17.67</td></tr><tr><td>Actionformer [40]</td><td>EgoVLP</td><td>26.84</td><td>20.57</td><td>14.54</td><td>20.60</td><td>-</td></tr><tr><td>Actionformer [40]</td><td>EgoVLP+SF+OV</td><td>28.26</td><td>21.88</td><td>16.28</td><td>22.09</td><td>21.76</td></tr><tr><td>Actionformer [5]</td><td>InternVideo</td><td>-</td><td>-</td><td>-</td><td>23.29</td><td>23.59</td></tr><tr><td>ASL</td><td>EgoVLP</td><td>29.45</td><td>23.03</td><td>16.08</td><td>22.83</td><td>22.25</td></tr><tr><td>ASL</td><td>EgoVLP+SF+OV</td><td>30.50</td><td>24.39</td><td>17.45</td><td>24.15</td><td>23.97</td></tr></table>
|
| 232 |
+
|
| 233 |
+
Table 3. Results on EPIC-Kitchens 100 val set. We report mAP at different tIoU thresholds and average mAP in [0.1:0.1:0.5]. All methods use the same SlowFast [15, 12] features.
|
| 234 |
+
|
| 235 |
+
<table><tr><td>Sub-Task</td><td>Method</td><td>0.1</td><td>0.3</td><td>0.5</td><td>Avg</td></tr><tr><td rowspan="4">Verb</td><td>BMN [31]</td><td>10.8</td><td>8.4</td><td>5.6</td><td>8.4</td></tr><tr><td>G-TAD [72]</td><td>12.1</td><td>9.4</td><td>6.5</td><td>9.4</td></tr><tr><td>Actionformer [76]</td><td>26.6</td><td>24.2</td><td>19.1</td><td>23.5</td></tr><tr><td>ASL</td><td>27.9</td><td>25.5</td><td>19.8</td><td>24.6</td></tr><tr><td rowspan="4">Noun</td><td>BMN [31]</td><td>10.3</td><td>6.2</td><td>3.4</td><td>6.5</td></tr><tr><td>G-TAD [72]</td><td>11.0</td><td>8.6</td><td>5.4</td><td>8.4</td></tr><tr><td>Actionformer [76]</td><td>25.2</td><td>22.7</td><td>17.0</td><td>21.9</td></tr><tr><td>ASL</td><td>26.0</td><td>23.4</td><td>17.7</td><td>22.6</td></tr></table>
|
| 236 |
+
|
| 237 |
+
strong image augmentation while ASL is feature-based, indicating that ASL performs more accurate TAL with more efficiency on densely-labeled datasets.
|
| 238 |
+
|
| 239 |
+
Ego4D-MQ1.0 and Epic-Kitchens 100: These two datasets are both challenging as they are large-scale, egocentric, densely labeled and composed of longer clips. Table 2 reports the results on Ego4D-MQ1.0. The state-of-the-art methods are all based on Actionformer [76] and perform frame-level recognition and localization with strong features. Using the same EgoVLP [30] feature, ASL surpasses the current best entry [40]. Using the combined EgoVLP, Slowfast [15] and Omnivore [18] features, ASL
|
| 240 |
+
|
| 241 |
+
gains a $2.06\%$ improvement in average mAP on the Val set and $2.21\%$ on the Test set. Moreover, ASL performs better than [5], which uses a stronger but not open-sourced InternVideo [5] feature. Meanwhile, on Epic-Kitchens 100, as Table 3 shows, ASL outperforms the strong Actionformer [76], BMN [31] and G-TAD [72] with the same SlowFast features [15, 12]. The above results demonstrate the advantage of ASL on challenging, egocentric and densely labeled benchmarks.
|
| 242 |
+
|
| 243 |
+
Thumos14 and ActivityNet1.3: These two datasets are popular and nearly single-labeled, with approximately one action instance in each clip. Table 4 compares ASL with various state-of-the-art methods (e.g., two-stage methods: BSN [33], G-TAD [72], P-GCN [75], RTD-Net [58]; one-stage methods: AFSD [29], SSN [81], Actionformer [76]). On Thumos14, ASL achieves the best results across all tIoU thresholds and gains a $1.1\%$ improvement in average mAP ($67.9\%$ vs. $66.8\%$). On ActivityNet, ASL also outperforms previous methods in mAP@0.75 and average mAP, though the margin is small. One possible reason is that, owing to the success of action recognition on ActivityNet, we follow the common practice [76, 79, 85] of fusing external video-level classification scores [68]. In this case, class-level sensitivity does not play an important role in training. Another reason may be that, since each video in ActivityNet is nearly single-labeled, our proposed ASCL will
|
| 244 |
+
|
| 245 |
+
Table 4. Results on Thumos14 and ActivityNet1.3. We report mAP at different IoU thresholds. Average mAP in [0.3:0.1:0.7] is reported on THUMOS14 and [0.5:0.05:0.95] on ActivityNet1.3. The best results are in bold.
|
| 246 |
+
|
| 247 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Feature</td><td colspan="6">Thumos14</td><td colspan="4">ActivityNet1.3</td></tr><tr><td>0.3</td><td>0.4</td><td>0.5</td><td>0.6</td><td>0.7</td><td>Avg.</td><td>0.5</td><td>0.75</td><td>0.95</td><td>Avg.</td></tr><tr><td>BSN [33]</td><td>TSN [64]</td><td>53.5</td><td>45.0</td><td>36.9</td><td>28.4</td><td>20.0</td><td>36.8</td><td>46.5</td><td>30.0</td><td>8.0</td><td>30.0</td></tr><tr><td>BMN [31]</td><td>TSN [64]</td><td>56.0</td><td>47.4</td><td>38.8</td><td>29.7</td><td>20.5</td><td>38.5</td><td>50.1</td><td>34.8</td><td>8.3</td><td>33.9</td></tr><tr><td>G-TAD [72]</td><td>TSN [64]</td><td>54.5</td><td>47.6</td><td>40.3</td><td>30.8</td><td>23.4</td><td>39.3</td><td>50.4</td><td>34.6</td><td>9.0</td><td>34.1</td></tr><tr><td>P-GCN [75]</td><td>I3D [4]</td><td>63.6</td><td>57.8</td><td>49.1</td><td>-</td><td>-</td><td>-</td><td>48.3</td><td>33.2</td><td>3.3</td><td>31.1</td></tr><tr><td>TCANet [44]</td><td>TSN [64]</td><td>60.6</td><td>53.2</td><td>44.6</td><td>36.8</td><td>26.7</td><td>44.3</td><td>52.3</td><td>36.7</td><td>6.9</td><td>35.5</td></tr><tr><td>ContextLoc [85]</td><td>I3D [4]</td><td>68.3</td><td>63.8</td><td>54.3</td><td>41.8</td><td>26.2</td><td>50.9</td><td>56.0</td><td>35.2</td><td>3.6</td><td>34.2</td></tr><tr><td>VSGN [79]</td><td>TSN [64]</td><td>66.7</td><td>60.4</td><td>52.4</td><td>41.0</td><td>30.4</td><td>50.2</td><td>52.4</td><td>36.0</td><td>8.4</td><td>35.1</td></tr><tr><td>RTD-Net [58]</td><td>I3D [4]</td><td>68.3</td><td>62.3</td><td>51.9</td><td>38.8</td><td>23.7</td><td>49.0</td><td>47.2</td><td>30.7</td><td>8.6</td><td>30.8</td></tr><tr><td>SSN [81]</td><td>TS [54]</td><td>51.0</td><td>41.0</td><td>29.8</td><td>-</td><td>-</td><td>-</td><td>43.2</td><td>28.7</td><td>5.6</td><td>28.3</td></tr><tr><td>GTAN [39]</td><td>P3D [45]</td><td>57.8</td><td>47.2</td><td>38.8</td><td>-</td><td>-</td><td>-</td><td>52.6</td><td>34.1</td><td>8.9</td><td>34.3</td></tr><tr><td>AFSD [29]</td><td>I3D [4]</td><td>67.3</td><td>62.4</td><td>55.5</td><td>43.7</td><td>31.1</td><td>52.0</td><td>52.4</td><td>35.3</td><td>6.5</td><td>34.4</td></tr><tr><td>React [48]</td><td>I3D [4]</td><td>69.2</td><td>65.0</td><td>57.1</td><td>47.8</td><td>35.6</td><td>55.0</td><td>49.6</td><td>33.0</td><td>8.6</td><td>32.6</td></tr><tr><td>TadTR [38]</td><td>I3D [4]</td><td>62.4</td><td>57.4</td><td>49.2</td><td>37.8</td><td>26.3</td><td>46.6</td><td>49.1</td><td>32.6</td><td>8.5</td><td>32.3</td></tr><tr><td>Actionformer [76]</td><td>I3D [4]</td><td>82.1</td><td>77.8</td><td>71.0</td><td>59.4</td><td>43.9</td><td>66.8</td><td>54.2</td><td>36.9</td><td>7.6</td><td>36.0</td></tr><tr><td>ASL</td><td>I3D [4]</td><td>83.1</td><td>79.0</td><td>71.7</td><td>59.7</td><td>45.8</td><td>67.9</td><td>54.1</td><td>37.4</td><td>8.0</td><td>36.2</td></tr></table>
|
| 248 |
+
|
| 249 |
+
Table 5. Ablation studies of components. ASE: Action Sensitivity Evaluator. class.: class-level modeling. inst.: instance-level modeling. ASCL: Action Sensitive Contrastive Loss.
|
| 250 |
+
|
| 251 |
+
<table><tr><td></td><td colspan="3">Components</td><td colspan="4">mAP at different tIoUs</td></tr><tr><td rowspan="2">#</td><td colspan="2">ASE</td><td rowspan="2">ASCL</td><td rowspan="2">0.2</td><td rowspan="2">0.5</td><td rowspan="2">0.7</td><td rowspan="2">Avg.</td></tr><tr><td>class.</td><td>inst.</td></tr><tr><td>1</td><td></td><td></td><td></td><td>39.6</td><td>25.9</td><td>11.6</td><td>23.4</td></tr><tr><td>2</td><td>✓</td><td></td><td></td><td>41.0</td><td>26.5</td><td>12.9</td><td>24.5</td></tr><tr><td>3</td><td></td><td>✓</td><td></td><td>40.5</td><td>26.2</td><td>12.0</td><td>23.9</td></tr><tr><td>4</td><td></td><td></td><td>✓</td><td>40.2</td><td>26.1</td><td>11.8</td><td>23.7</td></tr><tr><td>5</td><td>✓</td><td></td><td>✓</td><td>41.9</td><td>27.0</td><td>13.6</td><td>25.1</td></tr><tr><td>6</td><td>✓</td><td>✓</td><td></td><td>41.8</td><td>27.2</td><td>13.3</td><td>25.0</td></tr><tr><td>7</td><td>✓</td><td>✓</td><td>✓</td><td>42.4</td><td>27.8</td><td>13.7</td><td>25.5</td></tr></table>
|
| 252 |
+
|
| 253 |
+
be short of positive and negative samples, leading to a less significant increase compared to the improvements on the densely labeled datasets in Tables 1 and 2.
|
| 254 |
+
|
| 255 |
+
# 4.4. Ablation Study
|
| 256 |
+
|
| 257 |
+
To further verify the efficacy of our contributions, we analyze the main components of ASL on MultiThumos.
|
| 258 |
+
|
| 259 |
+
Action Sensitive Evaluator. Our proposed ASE can be divided into class-level and instance-level modeling. We first investigate the effect of these parts. In Table 5, baseline 1 denotes using our proposed framework without ASE and ASCL. After being equipped with class-level modeling, it boosts the performance by $1.1\%$ of average mAP (baseline 2 v.s. baseline 1). When further adding instance-level bias, it gains $0.5\%$ absolute increase (baseline 6 v.s. baseline 2). And our ASE contributes a total improvement of $1.6\%$ on average mAP (baseline 7 v.s. baseline 1). It is obvious that action sensitivity modeling from both class-level and instance-level is beneficial to TAL task.
|
| 260 |
+
|
| 261 |
+
Table 6. Ablation studies of Gaussian weights. cls and loc denote the classification and localization sub-tasks. For Gaussian weights in class-level action sensitivity learning, learnable/fixed denotes whether the parameters are learnable. None denotes not using Gaussian weights.
|
| 262 |
+
|
| 263 |
+
<table><tr><td>#</td><td>cls.</td><td>loc.</td><td>0.1</td><td>0.3</td><td>0.5</td><td>Avg.</td></tr><tr><td>1</td><td>None</td><td>None</td><td>40.9</td><td>26.3</td><td>12.3</td><td>24.2</td></tr><tr><td>2</td><td rowspan="3">fixed</td><td>None</td><td>40.9</td><td>26.5</td><td>12.4</td><td>24.4</td></tr><tr><td>3</td><td>fixed</td><td>41.0</td><td>26.6</td><td>12.7</td><td>24.6</td></tr><tr><td>4</td><td>learnable</td><td>41.7</td><td>26.8</td><td>13.0</td><td>24.9</td></tr><tr><td>5</td><td rowspan="3">learnable</td><td>None</td><td>41.9</td><td>27.1</td><td>13.0</td><td>24.9</td></tr><tr><td>6</td><td>fixed</td><td>42.0</td><td>26.9</td><td>13.4</td><td>25.1</td></tr><tr><td>7</td><td>learnable</td><td>42.4</td><td>27.8</td><td>13.7</td><td>25.5</td></tr></table>
|
| 264 |
+
|
| 265 |
+
Gaussian Weights. Next we analyze the effect of learnable Gaussian weights in class-level action sensitivity learning. Table 6 shows that, compared to baseline 1, which does not use any Gaussian weights to learn action sensitivity, fixed Gaussian weights with prior knowledge do bring benefits (baselines 2, 3 vs. baseline 1). Meanwhile, learnable Gaussian weights are preferable (baseline 4 vs. baseline 3, baseline 7 vs. baseline 6). Moreover, using learnable Gaussian weights for both sub-tasks achieves the best results.
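To make the class-level Gaussian weighting more concrete, here is a minimal PyTorch-style sketch of learnable Gaussian weights over a frame's relative position within an action instance; the module name, the per-class mean/std parameterization, and the normalization are illustrative assumptions rather than the exact ASE implementation.

```python
import math

import torch
import torch.nn as nn


class ClassLevelGaussianWeight(nn.Module):
    """Hypothetical per-class Gaussian weight over the relative position of a
    frame inside its action instance (0 = start, 1 = end). The mean and std of
    each class's Gaussian are learnable, so training can discover which
    temporal region is most sensitive for that class."""

    def __init__(self, num_classes: int, init_mu: float = 0.5, init_sigma: float = 0.25):
        super().__init__()
        self.mu = nn.Parameter(torch.full((num_classes,), init_mu))
        self.log_sigma = nn.Parameter(torch.full((num_classes,), math.log(init_sigma)))

    def forward(self, rel_pos: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # rel_pos: (N,) relative frame positions in [0, 1]; labels: (N,) class indices.
        mu = self.mu[labels]
        sigma = self.log_sigma[labels].exp().clamp(min=1e-3)
        w = torch.exp(-0.5 * ((rel_pos - mu) / sigma) ** 2)
        # Normalize so the weights only redistribute, not rescale, the loss.
        return w / (w.mean() + 1e-6)


# Usage: reweight an unreduced per-frame classification loss.
weighter = ClassLevelGaussianWeight(num_classes=65)
rel_pos = torch.rand(8)
labels = torch.randint(0, 65, (8,))
per_frame_loss = torch.rand(8)            # stand-in for an unreduced CE loss
weighted_loss = (weighter(rel_pos, labels) * per_frame_loss).mean()
```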
|
| 266 |
+
|
| 267 |
+
We further study the number of Gaussians used in the classification and localization sub-tasks. As shown in Table 7, using two Gaussians for localization and one for classification achieves the best results. This is probably because, on the one hand, using two Gaussians for localization explicitly allocates one for modeling the start time and one for the end time; on the other hand, more Gaussian weights may burden training, leading to inferior performance.
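Following the observation above that two Gaussians work best for localization, a hypothetical sketch of mixing a start-centered and an end-centered learnable Gaussian is shown below; the initialization values and parameterization are assumptions.

```python
import torch
import torch.nn as nn


class TwoGaussianLocWeight(nn.Module):
    """Hypothetical localization weight: a mixture of two learnable Gaussians per
    class, one initialized near the start (mu = 0) and one near the end (mu = 1)."""

    def __init__(self, num_classes: int):
        super().__init__()
        init_mu = torch.stack([torch.zeros(num_classes), torch.ones(num_classes)], dim=1)
        self.mu = nn.Parameter(init_mu)                                      # (C, 2)
        self.log_sigma = nn.Parameter(torch.full((num_classes, 2), -1.5))    # sigma ~ 0.22

    def forward(self, rel_pos: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        mu = self.mu[labels]                                   # (N, 2)
        sigma = self.log_sigma[labels].exp().clamp(min=1e-3)   # (N, 2)
        w = torch.exp(-0.5 * ((rel_pos[:, None] - mu) / sigma) ** 2).sum(dim=1)
        return w / (w.mean() + 1e-6)


loc_weighter = TwoGaussianLocWeight(num_classes=65)
w = loc_weighter(torch.rand(8), torch.randint(0, 65, (8,)))
```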
|
| 268 |
+
|
| 269 |
+
Table 7. Ablation studies of the number of Gaussian weights. #cls and #loc denote the number of Gaussian weights used in the classification and localization sub-tasks. Shared indicates that the two sub-tasks share one set of Gaussian weights.
|
| 270 |
+
|
| 271 |
+
<table><tr><td>#cls</td><td>#loc</td><td>0.1</td><td>0.3</td><td>0.5</td><td>Avg</td></tr><tr><td>1(shared)</td><td>1(shared)</td><td>42.2</td><td>27.2</td><td>13.7</td><td>25.3</td></tr><tr><td rowspan="3">0</td><td>0</td><td>40.9</td><td>26.3</td><td>12.3</td><td>24.2</td></tr><tr><td>1</td><td>41.5</td><td>26.9</td><td>13.0</td><td>24.8</td></tr><tr><td>2</td><td>41.6</td><td>27.1</td><td>13.4</td><td>25.0</td></tr><tr><td rowspan="3">1</td><td>0</td><td>42.2</td><td>27.1</td><td>13.2</td><td>25.1</td></tr><tr><td>1</td><td>42.0</td><td>26.7</td><td>13.1</td><td>24.9</td></tr><tr><td>2</td><td>42.4</td><td>27.8</td><td>13.7</td><td>25.5</td></tr><tr><td rowspan="3">2</td><td>0</td><td>42.3</td><td>26.9</td><td>13.3</td><td>25.1</td></tr><tr><td>1</td><td>41.8</td><td>26.9</td><td>13.0</td><td>25.0</td></tr><tr><td>2</td><td>42.0</td><td>27.2</td><td>13.6</td><td>25.3</td></tr></table>
|
| 272 |
+
|
| 273 |
+

Figure 3. Ablation of hyperparameters in ASCL: (a) ablation of $\lambda$; (b) ablation of $\delta$.
|
| 279 |
+
|
| 280 |
+
Action Sensitive Contrastive Loss. Moreover, we delve into our proposed ASCL. As shown in Table 5, ASCL improves average mAP by around $0.6\%$ on top of the class-level prior (baseline 5 vs. baseline 2) and by $0.5\%$ on top of ASE (baseline 7 vs. baseline 6). Baseline 4, which uses ASCL alone by sampling near the center frame to form $f_{cls}$ and $f_{loc}$ directly, also gains an improvement of $0.3\%$ over the vanilla framework (baseline 4 vs. baseline 1). This indicates the effectiveness of contrasting actions and backgrounds. When ASCL is performed on top of ASE, it facilitates the final performance even more, because it can alleviate the misalignment discussed in Sec. 3.3.
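As a rough, self-contained illustration of contrasting action features against background features, the sketch below computes an InfoNCE-style loss; the pooling, temperature, and pairing scheme are our simplifications and not the exact ASCL formulation.

```python
import torch
import torch.nn.functional as F


def contrastive_action_background(action_feats, background_feats, temperature=0.07):
    """action_feats: (Na, D) features pooled from sampled action frames (Na > 1);
    background_feats: (Nb, D) features pooled from background frames.
    Returns an InfoNCE-style loss that pulls action features together and pushes
    them away from background features (a simplified stand-in for ASCL)."""
    a = F.normalize(action_feats, dim=-1)
    b = F.normalize(background_feats, dim=-1)
    logits_pos = a @ a.t() / temperature          # similarities among action features
    logits_neg = a @ b.t() / temperature          # similarities to background features
    mask = ~torch.eye(a.size(0), dtype=torch.bool)
    pos = logits_pos[mask].view(a.size(0), -1)    # drop self-similarity
    logits = torch.cat([pos, logits_neg], dim=1)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # The first (Na - 1) columns correspond to positives.
    return -log_prob[:, : a.size(0) - 1].mean()


loss = contrastive_action_background(torch.randn(4, 128), torch.randn(16, 128))
```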
|
| 281 |
+
|
| 282 |
+
Finally, we discuss the hyperparameters in ASCL. Fig. 3(a) shows the average mAP as a function of the ASCL weight $\lambda$. Average mAP on MultiThumos generally improves as $\lambda$ increases and drops slightly once $\lambda$ reaches 0.4. Fig. 3(b) reports the average mAP for different sampling length ratios $\delta$; our method performs best when $\delta$ equals 0.2. We therefore set $\lambda$ to 0.3 and $\delta$ to 0.2.
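These hyperparameters could enter training roughly as follows; the variable names and the loss composition are assumptions, with the ASCL weight and sampling ratio set to the values reported above.

```python
# Assumed composition of the training objective and the sampling budget;
# lambda_ascl and delta follow the values reported above.
lambda_ascl = 0.3   # weight of the ASCL term in the total loss
delta = 0.2         # fraction of an instance's length sampled around sensitive frames


def total_loss(loss_cls, loss_reg, loss_ascl, lam=lambda_ascl):
    # Detection losses plus the weighted contrastive term.
    return loss_cls + loss_reg + lam * loss_ascl


def num_sampled_frames(instance_len, ratio=delta):
    # Sample at least one frame, proportionally to the instance length.
    return max(1, round(ratio * instance_len))
```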
|
| 283 |
+
|
| 284 |
+
# 4.5. Qualitative Experiment
|
| 285 |
+
|
| 286 |
+
To better illustrate the effectiveness of ASL, we visualize some qualitative results on the Ego4D-MQ1.0 benchmark in Fig. 4. We show that i) frames depicting the action's main sub-action (e.g., hanging clothes on the hanger, water running through hands) have higher action sensitivity for classification, and ii) frames depicting near-start and near-end sub-actions (e.g., turning the tap on, lifting the laundry basket) have higher action sensitivity for localization.
|
| 287 |
+
|
| 288 |
+

Figure 4. Visualization of (top) the frame sensitivity to sub-tasks (classification and localization) of Action: hang clothes to dry and (bottom) Action: wash hands. Please zoom in for the best view.
|
| 304 |
+
|
| 305 |
+
Moreover, the action sensitivity of frames is not continuous: our proposed instance-level action sensitivity is discrete, partly because blurred or transitional frames exist in video clips.
|
| 306 |
+
|
| 307 |
+
# 5. Conclusion
|
| 308 |
+
|
| 309 |
+
In this paper, we introduce an Action Sensitivity Learning framework (ASL) for temporal action localization (TAL). ASL models the action sensitivity of each frame and dynamically adjusts frame weights during training. Together with the proposed Action Sensitive Contrastive Loss (ASCL), which further enhances features and alleviates misalignment, ASL is able to recognize and localize action instances effectively. Accurate TAL requires fine-grained (e.g., frame-level) information to be taken into account, and we believe ASL is a step further in this direction. In the future, efforts could be devoted to more sophisticated sensitivity modeling. Besides, ASL could also be redesigned as a plug-and-play component that would benefit various TAL methods.
|
| 310 |
+
|
| 311 |
+
Acknowledgements: This work is supported by the Fundamental Research Funds for the Central Universities (No. 226-2023-00048) and the Major Program of the National Natural Science Foundation of China (T2293720/T2293723).
|
| 312 |
+
|
| 313 |
+
# References
|
| 314 |
+
|
| 315 |
+
[1] Navaneeth Bodla, Bharat Singh, Rama Chellappa, and Larry S Davis. Soft-nms-improving object detection with one line of code. In Proceedings of the IEEE international conference on computer vision, pages 5561-5569, 2017. 6
|
| 316 |
+
[2] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 961-970, 2015. 2, 6
|
| 317 |
+
[3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I 16, pages 213-229. Springer, 2020. 2
|
| 318 |
+
[4] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017. 6, 7, 8
|
| 319 |
+
[5] Guo Chen, Sen Xing, Zhe Chen, Yi Wang, Kunchang Li, Yizhuo Li, Yi Liu, Jiahao Wang, Yin-Dong Zheng, Bingkun Huang, Zhiyu Zhao, Junting Pan, Yifei Huang, Zun Wang, Jiashuo Yu, Yinan He, Hongjie Zhang, Tong Lu, Yali Wang, Limin Wang, and Yu Qiao. Internvideo-ego4d: A pack of champion solutions to ego4d challenges, 2022. 7
|
| 320 |
+
[6] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597-1607. PMLR, 2020. 2
|
| 321 |
+
[7] Feng Cheng and Gedas Bertasius. Tallformer: Temporal action localization with a long-memory transformer. In Computer Vision-ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XXXIV, pages 503-521. Springer, 2022. 2
|
| 322 |
+
[8] Rui Dai, Srijan Das, and Francois Bremond. Ctrn: Class-temporal relational network for action detection. arXiv preprint arXiv:2110.13473, 2021. 2
|
| 323 |
+
[9] Rui Dai, Srijan Das, Kumara Kahatapitiya, Michael S Ryoo, and François Brémond. Ms-tct: multi-scale temporal convtransformer for action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20041-20051, 2022. 2, 6, 7
|
| 324 |
+
[10] Rui Dai, Srijan Das, Luca Minciullo, Lorenzo Garattoni, Gianpiero Francesca, and François Bremond. Pdan: Pyramid dilated attention network for action detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2970-2979, 2021. 2, 6, 7
|
| 325 |
+
[11] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Scaling egocentric vision: The epic-kitchens dataset. In European Conference on Computer Vision (ECCV), 2018. 2, 6
|
| 326 |
+
[12] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Jian Ma, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and
|
| 327 |
+
|
| 328 |
+
Michael Wray. Rescaling egocentric vision: Collection, pipeline and challenges for epic-kitchens-100. International Journal of Computer Vision (IJCV), 130:33-55, 2022. 6, 7
|
| 329 |
+
[13] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. Centernet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 6569-6578, 2019. 2
|
| 330 |
+
[14] Lijie Fan, Wenbing Huang, Chuang Gan, Stefano Ermon, Boqing Gong, and Junzhou Huang. End-to-end learning of motion representation for video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6016-6025, 2018. 1
|
| 331 |
+
[15] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. 6, 7
|
| 332 |
+
[16] Chengjian Feng, Yujie Zhong, Yu Gao, Matthew R Scott, and Weilin Huang. Tood: Task-aligned one-stage object detection. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 3490-3499. IEEE Computer Society, 2021. 2
|
| 333 |
+
[17] Chuang Gan, Naiyan Wang, Yi Yang, Dit-Yan Yeung, and Alex G Hauptmann. Devnet: A deep event network for multimedia event detection and evidence recounting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2568-2577, 2015. 1
|
| 334 |
+
[18] Rohit Girdhar, Mannat Singh, Nikhila Ravi, Laurens van der Maaten, Armand Joulin, and Ishan Misra. Omnivore: A Single Model for Many Visual Modalities. In CVPR, 2022. 6, 7
|
| 335 |
+
[19] Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, et al. Ego4d: Around the world in 3,000 hours of egocentric video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18995-19012, 2022. 2, 6
|
| 336 |
+
[20] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271-21284, 2020. 2
|
| 337 |
+
[21] Michael Gutmann and Aapo Hyvarinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 297–304. JMLR Workshop and Conference Proceedings, 2010. 3
|
| 338 |
+
[22] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729-9738, 2020. 2
|
| 339 |
+
[23] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF con
|
| 340 |
+
|
| 341 |
+
ference on computer vision and pattern recognition, pages 9729-9738, 2020. 3
|
| 342 |
+
[24] Kumara Kahatapitiya and Michael S Ryoo. Coarse-fine networks for temporal activity detection in videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8385-8394, 2021. 2, 6, 7
|
| 343 |
+
[25] Dahun Kim, Donghyeon Cho, and In So Kweon. Self-supervised video representation learning with space-time cubic puzzles. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 8545-8552, 2019. 1
|
| 344 |
+
[26] Hei Law and Jia Deng. Cornernet: Detecting objects as paired keypoints. In Proceedings of the European conference on computer vision (ECCV), pages 734-750, 2018. 2
|
| 345 |
+
[27] Xiang Li, Wenhai Wang, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss v2: Learning reliable localization quality estimation for dense object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11632-11641, 2021. 2
|
| 346 |
+
[28] Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. Advances in Neural Information Processing Systems, 33:21002-21012, 2020. 2
|
| 347 |
+
[29] Chuming Lin, Chengming Xu, Donghao Luo, Yabiao Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, and Yanwei Fu. Learning salient boundary feature for anchor-free temporal action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3320-3329, June 2021. 1, 2, 3, 7, 8
|
| 348 |
+
[30] Kevin Qinghong Lin, Alex Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Zhongcong Xu, Difei Gao, Rongcheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. arXiv preprint arXiv:2206.01670, 2022. 6, 7
|
| 349 |
+
[31] Tianwei Lin, Xiao Liu, Xin Li, Errui Ding, and Shilei Wen. Bmn: Boundary-matching network for temporal action proposal generation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3889-3898, 2019. 2, 7, 8
|
| 350 |
+
[32] Tianwei Lin, Xu Zhao, and Zheng Shou. Single shot temporal action detection. In Proceedings of the 25th ACM international conference on Multimedia, pages 988-996, 2017. 1, 2
|
| 351 |
+
[33] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. Bsn: Boundary sensitive network for temporal action proposal generation. In European Conference on Computer Vision, 2018. 2, 7, 8
|
| 352 |
+
[34] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017. 2, 6
|
| 353 |
+
[35] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980-2988, 2017. 2, 5
|
| 354 |
+
|
| 355 |
+
[36] Naiyuan Liu, Xiaohan Wang, Xiaobo Li, Yi Yang, and Yueting Zhuang. Refer@zju-alibaba submission to the ego4d natural language queries challenge 2022, 2022. 3
|
| 356 |
+
[37] Shuming Liu, Mengmeng Xu, Chen Zhao, Xu Zhao, and Bernard Ghanem. Etad: Training action detection end to end on a laptop, 2022. 2
|
| 357 |
+
[38] Xiaolong Liu, Qimeng Wang, Yao Hu, Xu Tang, Shiwei Zhang, Song Bai, and Xiang Bai. End-to-end temporal action detection with transformer. IEEE Transactions on Image Processing, 31:5427-5441, 2022. 2, 7, 8
|
| 358 |
+
[39] Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, and Tao Mei. Gaussian temporal awareness networks for action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 344-353, 2019. 2, 8
|
| 359 |
+
[40] Fangzhou Mu, Sicheng Mo, Gillian Wang, and Yin Li. Where a strong backbone meets strong features - actionformer for ego4d moment queries challenge, 2022. 7
|
| 360 |
+
[41] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. 3
|
| 361 |
+
[42] Tian Pan, Yibing Song, Tianyu Yang, Wenhao Jiang, and Wei Liu. Videomoco: Contrastive video representation learning with temporally adversarial examples. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11205-11214, 2021. 3
|
| 362 |
+
[43] AJ Piergiovanni and Michael Ryoo. Temporal gaussian mixture layer for videos. In International Conference on Machine learning, pages 5152-5161. PMLR, 2019. 2
|
| 363 |
+
[44] Zhiwu Qing, Haisheng Su, Weihao Gan, Dongliang Wang, Wei Wu, Xiang Wang, Yu Qiao, Junjie Yan, Changxin Gao, and Nong Sang. Temporal context aggregation network for temporal action proposal refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 485-494, 2021. 2, 8
|
| 364 |
+
[45] Zhaofan Qiu, Ting Yao, and Tao Mei. Learning spatiotemporal representation with pseudo-3d residual networks, 2017. 8
|
| 365 |
+
[46] Zhaofan Qiu, Ting Yao, Chong-Wah Ngo, Xinmei Tian, and Tao Mei. Learning spatio-temporal representation with local and global diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12056-12065, 2019. 1
|
| 366 |
+
[47] Jiayi Shao, Xiaohan Wang, and Yi Yang. Refer@zju submission to the ego4d moment queries challenge 2022, 2022. 7
|
| 367 |
+
[48] Dingfeng Shi, Yujie Zhong, Qiong Cao, Jing Zhang, Lin Ma, Jia Li, and Dacheng Tao. React: Temporal action detection with relational queries. In European conference on computer vision, 2022. 2, 3, 8
|
| 368 |
+
[49] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih-Fu Chang. Cdc: Convolutional-deconvolutional networks for precise temporal action localization in untrimmed videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5734-5743, 2017. 2
|
| 369 |
+
|
| 370 |
+
[50] Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, and Shih-Fu Chang. Autoloc: Weakly-supervised temporal action localization in untrimmed videos. In Proceedings of the European Conference on Computer Vision (ECCV), pages 154-171, 2018. 3
|
| 371 |
+
[51] Zheng Shou, Dongang Wang, and Shih-Fu Chang. Temporal action localization in untrimmed videos via multi-stage cnns. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1049-1058, 2016. 2
|
| 372 |
+
[52] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 761-769, 2016. 2
|
| 373 |
+
[53] Gunnar A Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I 14, pages 510-526. Springer, 2016. 2, 6
|
| 374 |
+
[54] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos, 2014. 8
|
| 375 |
+
[55] Deepak Sridhar, Niamul Quader, Srikanth Muralidharan, Yaoxin Li, Peng Dai, and Juwei Lu. Class semantics-based attention for action detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13739-13748, 2021. 2
|
| 376 |
+
[56] Haisheng Su, Weihao Gan, Wei Wu, Yu Qiao, and Junjie Yan. Bsn++: Complementary boundary regressor with scale-balanced relation modeling for temporal action proposal generation. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 2602-2610, 2021. 2
|
| 377 |
+
[57] Yu-Gang Jiang, Jingen Liu, A. Roshan Zamir, George Toderici, Ivan Laptev, Mubarak Shah, and Rahul Sukthankar. THUMOS challenge: Action recognition with a large number of classes. 2014. 2, 6
|
| 378 |
+
[58] Jing Tan, Jiaqi Tang, Limin Wang, and Gangshan Wu. Relaxed transformer decoders for direct action proposal generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 13526-13535, October 2021. 2, 7, 8
|
| 379 |
+
[59] Jing Tan, Xiaotong Zhao, Xintian Shi, Bin Kang, and Limin Wang. Pointtad: Multi-label temporal action detection with learnable query points. In Advances in Neural Information Processing Systems, 2022. 2, 6, 7
|
| 380 |
+
[60] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9627-9636, 2019. 2, 4, 6
|
| 381 |
+
[61] Praveen Tirupattur, Kevin Duarte, Yogesh S Rawat, and Mubarak Shah. Modeling multi-label action dependencies for temporal action localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1460-1470, 2021. 2, 6, 7
|
| 382 |
+
[62] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners
|
| 383 |
+
|
| 384 |
+
for self-supervised video pre-training. In Advances in Neural Information Processing Systems, 2022. 7
|
| 385 |
+
[63] Heng Wang, Dan Oneata, Jakob Verbeek, and Cordelia Schmid. A robust and efficient video representation for action recognition. International journal of computer vision, 119:219-238, 2016. 1
|
| 386 |
+
[64] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European conference on computer vision, pages 20-36. Springer, 2016. 8
|
| 387 |
+
[65] Qiang Wang, Yanhao Zhang, Yun Zheng, and Pan Pan. Rcl: Recurrent continuous localization for temporal action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13566-13575, 2022. 2
|
| 388 |
+
[66] Xiang Wang, Zhiwu Qing, Ziyuan Huang, Yutong Feng, Shiwei Zhang, Jianwen Jiang, Mingqian Tang, Changxin Gao, and Nong Sang. Proposal relation network for temporal action detection. 2021. 2
|
| 389 |
+
[67] Xiaohan Wang, Linchao Zhu, Zhedong Zheng, Mingliang Xu, and Yi Yang. Align and tell: Boosting text-video retrieval with local alignment and fine-grained supervision. IEEE Transactions on Multimedia, 2022. 3
|
| 390 |
+
[68] Yuanjun Xiong, Limin Wang, Zhe Wang, Bowen Zhang, Hang Song, Wei Li, Dahua Lin, Yu Qiao, Luc Van Gool, and Xiaoou Tang. Cuhk & ethz & siat submission to activitynet challenge 2016, 2016. 7
|
| 391 |
+
[69] Huijuan Xu, Abir Das, and Kate Saenko. R-c3d: Region convolutional 3d network for temporal activity detection. In Proceedings of the International Conference on Computer Vision (ICCV), 2017. 2
|
| 392 |
+
[70] Mengmeng Xu, Juan-Manuel Pérez-Rúa, Victor Escorcia, Brais Martinez, Xiatian Zhu, Li Zhang, Bernard Ghanem, and Tao Xiang. Boundary-sensitive pre-training for temporal localization in videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7220-7230, 2021. 2
|
| 393 |
+
[71] Mengmeng Xu, Juan Manuel Perez Rua, Xiatian Zhu, Bernard Ghanem, and Brais Martinez. Low-fidelity video encoder optimization for temporal action localization. Advances in Neural Information Processing Systems, 34:9923-9935, 2021. 2
|
| 394 |
+
[72] Mengmeng Xu, Chen Zhao, David S. Rojas, Ali Thabet, and Bernard Ghanem. G-tad: Sub-graph localization for temporal action detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020. 1, 2, 7, 8
|
| 395 |
+
[73] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. Reppoints: Point set representation for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9657-9666, 2019. 2
|
| 396 |
+
[74] Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision, 126:375–389, 2018. 2, 6
|
| 397 |
+
|
| 398 |
+
[75] Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, and Chuang Gan. Graph convolutional networks for temporal action localization. In ICCV, 2019. 1, 2, 7, 8
|
| 399 |
+
[76] Chen-Lin Zhang, Jianxin Wu, and Yin Li. Actionformer: Localizing moments of actions with transformers. In European Conference on Computer Vision, volume 13664 of LNCS, pages 492-510, 2022. 1, 2, 3, 4, 6, 7, 8
|
| 400 |
+
[77] Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, and Stan Z Li. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9759-9768, 2020. 2
|
| 401 |
+
[78] Chen Zhao, Merey Ramazanova, Mengmeng Xu, and Bernard Ghanem. Segstad: Precise temporal action detection via semantic segmentation, 2022. 2
|
| 402 |
+
[79] Chen Zhao, Ali Thabet, and Bernard Ghanem. Video self-stitching graph network for temporal action localization. In 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 13638-13647, 2021. 1, 2, 7, 8
|
| 403 |
+
[80] Peisen Zhao, Lingxi Xie, Chen Ju, Ya Zhang, Yanfeng Wang, and Qi Tian. Bottom-up temporal action localization with mutual regularization. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part VIII 16, pages 539-555. Springer, 2020. 2
|
| 404 |
+
[81] Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. Temporal action detection with structured segment networks. In ICCV, 2017. 2, 7, 8
|
| 405 |
+
[82] Zhaohui Zheng, Ping Wang, Wei Liu, Jinze Li, Rongguang Ye, and Dongwei Ren. Distance-iou loss: Faster and better learning for bounding box regression. In The AAAI Conference on Artificial Intelligence (AAAI), 2020. 5
|
| 406 |
+
[83] Benjin Zhu, Jianfeng Wang, Zhengkai Jiang, Fuhang Zong, Songtao Liu, Zeming Li, and Jian Sun. Autoassign: Differentiable label assignment for dense object detection. arXiv preprint arXiv:2007.03496, 2020. 2
|
| 407 |
+
[84] Chenchen Zhu, Yihui He, and Marios Savvides. Feature selective anchor-free module for single-shot object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 840-849, 2019. 2
|
| 408 |
+
[85] Zixin Zhu, Wei Tang, Le Wang, Nanning Zheng, and G. Hua. Enriching local and global contexts for temporal action localization. In ICCV, 2021. 1, 2, 7, 8
|
actionsensitivitylearningfortemporalactionlocalization/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:357ad8026eb68c430287c7405f2d30473e9a18573efe438becb4307f2b993aaf
|
| 3 |
+
size 636085
|
actionsensitivitylearningfortemporalactionlocalization/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ed889f7b396e38e42cf3ba15dee729625e9e1219840adba8159bf8274beb17b1
|
| 3 |
+
size 541225
|
activateandrejecttowardssafedomaingeneralizationundercategoryshift/1ff30265-9cf5-4ec1-a982-4f0c39c9392c_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:d0759368bb7b4722b0d983dc77705582dbb3c3e90763dec5288d95a281061ac8
|
| 3 |
+
size 103326
|
activateandrejecttowardssafedomaingeneralizationundercategoryshift/1ff30265-9cf5-4ec1-a982-4f0c39c9392c_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9692729ac4305a9334057f6219d1fd0ce176e920eeed5d455bc873f721dbef33
|
| 3 |
+
size 135184
|
activateandrejecttowardssafedomaingeneralizationundercategoryshift/1ff30265-9cf5-4ec1-a982-4f0c39c9392c_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:98bb271feca776adb8a992f22a98928648245ddfd6c47286d249fd799dbe0d32
|
| 3 |
+
size 3625742
|
activateandrejecttowardssafedomaingeneralizationundercategoryshift/full.md
ADDED
|
@@ -0,0 +1,447 @@
| 1 |
+
# Activate and Reject: Towards Safe Domain Generalization under Category Shift
|
| 2 |
+
|
| 3 |
+
Chaoqi Chen $^{1*}$ , Luyao Tang $^{2*}$ , Leitian Tao $^{3}$ , Hong-Yu Zhou $^{1}$ , Yue Huang $^{2}$ , Xiaoguang Han $^{4\dagger}$ , Yizhou Yu $^{1\dagger}$ $^{1}$ The University of Hong Kong
|
| 4 |
+
$^{2}$ Xiamen University
|
| 5 |
+
$^{3}$ University of Wisconsin - Madison
|
| 6 |
+
$^{4}$ The Chinese University of Hong Kong (Shenzhen)
|
| 7 |
+
|
| 8 |
+
cqchen1994@gmail.com, lytang@stu.xmu.edu.cn, taoleitian@gmail.com, whuzhouhongyu@gmail.com yhuang2010@xmu.edu.cn, hanxiaoguang@cuhk.edu.cn, yizhouy@acm.org
|
| 9 |
+
|
| 10 |
+
# Abstract
|
| 11 |
+
|
| 12 |
+
Albeit the notable performance on in-domain test points, it is non-trivial for deep neural networks to attain satisfactory accuracy when deploying in the open world, where novel domains and object classes often occur. In this paper, we study a practical problem of Domain Generalization under Category Shift (DGCS), which aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains. Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments for safe model deployment. To this end, we propose a novel Activate and Reject (ART) framework to reshape the model's decision boundary to accommodate unknown classes and conduct post hoc modification to further discriminate known and unknown classes using unlabeled test data. Specifically, during training, we promote the response to the unknown by optimizing the unknown probability and then smoothing the overall output to mitigate the overconfidence issue. At test time, we introduce a step-wise online adaptation method that predicts the label by virtue of the cross-domain nearest neighbor and class prototype information without updating the network's parameters or using threshold-based mechanisms. Experiments reveal that ART consistently improves the generalization capability of deep networks on different vision tasks. For image classification, ART improves the H-score by $6.1\%$ on average compared to the previous best method. For object detection and semantic segmentation, we establish new benchmarks and achieve competitive performance.
|
| 13 |
+
|
| 14 |
+
# 1. Introduction
|
| 15 |
+
|
| 16 |
+
Deep neural networks have achieved unprecedented success in a myriad of vision tasks over the past decade. Despite the promise, a well-trained model deployed in the open
|
| 17 |
+
|
| 18 |
+

|
| 19 |
+
Figure 1. DGCS in image classification and object detection tasks.
|
| 20 |
+
|
| 21 |
+
and ever-changing world often struggles to deal with the domain shifts—the training and testing data do not follow the independent and identically distributed (i.i.d) assumption, and therefore deteriorates its safety and reliability in many safety-critical applications, such as autonomous driving and computer-aided disease diagnosis. This gives rise to the importance of Domain Generalization (DG) [101, 83], a.k.a. out-of-distribution (OOD) generalization, which aims at generalizing predictive models trained on multiple (or a single) source domains to unseen target distributions.
|
| 22 |
+
|
| 23 |
+
In order to unearth domain-agnostic knowledge and alleviate domain-specific components, a plethora of DG algorithms have been proposed, spanning invariant risk minimization [2, 1], augmentation [81, 87, 105, 9], feature disentanglement [63, 49, 93], and meta-learning [43, 44, 21], to name a few. Among them, a common assumption is that the label spaces of source and target domains are identical, which may not always hold in practice. Suppose that we wish to deploy modern vision systems to recognize objects in an autonomous vehicle. When only the environment (e.g., weather and illumination) and appearance (e.g., size and viewpoint) of previously seen objects can change, principled approaches are capable of correcting for the potential shifts on the fly.
|
| 24 |
+
|
| 25 |
+
But what about the sudden arrival of new objects in an ever-changing world? Most existing DG methods will break and may even result in catastrophe, raising strong concerns about model reliability. Although several prior works [71, 106] have explored open DG scenarios, the "adaptivity gap" [22] between training and test distributions still hinders the safe deployment of source-learned models [30].
|
| 26 |
+
|
| 27 |
+
To this premise, we challenge the status quo by raising an open question: can deep models learn what they don't know during training and subsequently adapt to novel environments at test-time for safe model deployment? Thus, we consider a more realistic scenario namely Domain Generalization under Category Shift (DGCS) (see Fig. 1), wherein the source-trained model is expected to simultaneously detect unknown-class samples and categorize known-class samples under the presence of domain shifts. The core challenges are: (i) no unknown-class data is available in training and (ii) the mixture of domain and label shifts during test time. In this paper, we present a simple yet effective framework—Activate and RejecT (dubbed ART), which reshapes the model's decision boundary to accommodate unknown classes and adjusts the final prediction to reconcile the intrinsic tension between domain and label shifts. ART encapsulates two key components: (i) Unknown-aware Gradient Diffusion (UGD) to make the classifier give response to unknown dimension and smooth the decision boundary to mitigate overconfidence; (ii) Test-time Unknown Rejection (TUR) to conduct post hoc modification to the learned classifier's final predictions, making the decision boundaries of different classes closer to the well-behaved case.
|
| 28 |
+
|
| 29 |
+
Specifically, the logit of the unknown class is activated by minimizing the negative log-likelihood regarding the unknown probability. However, we find that the learned probability will be suppressed due to the overconfidence w.r.t. known classes. Thus, we introduce a smoothed cross-entropy loss to promote the response to the unknown by adding a penalty on the $L_{2}$ norm of the logits and using a temperature scaling parameter, where the former mitigates the excessive increase of the logit norm while the latter magnifies the effect of the logit penalty. Due to the unavailability of real target data in training, the source-trained decision boundaries between known and unknown classes may still be ambiguous. Therefore, TUR refines the source-trained classifier using unlabeled test data in an online adaptation manner. To be specific, TUR first determines whether the input belongs to known classes via a cross-domain nearest neighbor search, based on prototype information and a cyclic consistency constraint; otherwise, the prediction is made by a parallel module that measures the input's similarity with a set of dynamically-updated target prototypes. TUR is training-free (no backward passes) and does not rely on threshold-based criteria, nor does it impose any distributional assumptions.
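A minimal sketch of a training-free, prototype-based decision of the kind described above is given below; the concrete cyclic-consistency check (the matched source prototype must point back to the current sample among recent target features) and the fallback to target prototypes are simplified assumptions, not the exact TUR procedure.

```python
import torch
import torch.nn.functional as F


def tur_predict(feat, source_prototypes, target_prototypes, target_bank):
    """feat: (D,) feature of one test sample.
    source_prototypes: (C, D) known-class prototypes from the source-trained model.
    target_prototypes: (C + 1, D) dynamically updated target prototypes (last row = unknown).
    target_bank: (M, D) recently seen target features used for the cyclic check.
    Returns a label in {0..C-1} for known classes or C for unknown."""
    f = F.normalize(feat, dim=-1)
    protos = F.normalize(source_prototypes, dim=-1)
    sims = protos @ f                              # cosine similarity to known-class prototypes
    nearest_cls = int(sims.argmax())

    # Simplified cyclic consistency: among current target features, the matched
    # prototype's nearest neighbor should be this very sample; otherwise fall back.
    bank = F.normalize(torch.cat([target_bank, f[None]], dim=0), dim=-1)
    back_sims = bank @ protos[nearest_cls]
    if bool(back_sims.argmax() == bank.size(0) - 1):
        return nearest_cls

    # Parallel module: similarity to target prototypes, where the extra row
    # represents the unknown class.
    t_protos = F.normalize(target_prototypes, dim=-1)
    return int((t_protos @ f).argmax())


pred = tur_predict(torch.randn(256), torch.randn(7, 256), torch.randn(8, 256), torch.randn(32, 256))
```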
|
| 30 |
+
|
| 31 |
+
Our key contributions are summarized as follows:
|
| 32 |
+
|
| 33 |
+
- We study a challenging DG problem (DGCS) and propose a principled framework (ART) to jointly consider domain shift, label shift, and adaptivity gap.
|
| 34 |
+
- We propose an unknown-aware training objective to activate the unknown's logit and alleviate the overconfidence issue, and an online adaptation strategy to perform post hoc modification to the learned classifier's prediction at test-time without additional tuning.
|
| 35 |
+
- Extensive experiments show that ART achieves superior performance on a wide range of tasks including image classification, object detection, and semantic segmentation. In particular, on four image classification benchmarks (PACS, Office-Home, Office-31, and Digits), ART improves the H-score by $6.1\%$ on average compared to the previous best method.
|
| 36 |
+
|
| 37 |
+
# 2. Related Works
|
| 38 |
+
|
| 39 |
+
Domain Generalization (DG). The objective of DG is to learn representations that are independent of domain-specific factors and thus can extrapolate well to unseen test distributions. This is typically achieved by invariant learning and robust learning. Current approaches can be broadly categorized into feature matching [45, 54, 107, 12], decomposition [66, 63, 17, 53, 72, 49, 93, 100], augmentation [80, 102, 103, 87, 56, 105, 86, 96, 13, 6], and meta-learning-based [43, 46, 44, 21, 15] approaches. To adapt to complex real-world applications, several recent works [71, 106, 91] consider the existence of both known and unknown classes in new DG settings, such as open DG [71] and open-set DG (OSDG) [106]. Shu et al. [71] assume that the source and target domains have different label spaces and introduce novel augmentation strategies to augment domains on both the feature and label levels. Zhu et al. [106] generate auxiliary samples via an adversarial data augmentation strategy [81] and enhance unknown-class identification with multi-binary classifiers. Yang et al. [91] introduce an additional CE loss based on the assumption that any non-ground-truth category can be viewed as an unknown category. However, these works rely on additional training modules and a heuristic thresholding mechanism [106], or impose a strong distributional assumption on the feature space regarding known and unknown data [91]. In addition, Dubey et al. [22] reveal that there will always be an "adaptivity gap" when applying the source-learned model to target domains without further adaptation. How to endow the source model with the capability of identifying unseen open classes and safely adapting the learned classifier to unlabeled test samples is yet to be thoroughly studied.
|
| 40 |
+
|
| 41 |
+
Domain Adaptation (DA). DA [59, 52, 25, 10, 8] aims to improve the performance of the learned model on the target domain using labeled source data and unlabeled target
|
| 42 |
+
|
| 43 |
+

Figure 2. Toy example illustrating the decision boundaries learned by different methods. We generate isotropic Gaussian blobs with 4 classes. Red, green, and blue points indicate the known-class samples. Black points denote unknown-class samples, which are unavailable during training. (a) Train with standard CE loss, i.e., vanilla $(|\mathcal{C}_s| + 1)$-way classifier in DGCS. (b) Train with our unknown activation loss $\mathcal{L}_{\mathrm{UA}}$. (c) Train with full UGD loss $\mathcal{L}_{\mathrm{UGD}}$. (d) The result of ART $(\mathrm{UGD} + \mathrm{TUR})$. This figure is best seen in color.
|
| 55 |
+
|
| 56 |
+
data. In addition to the closed-set setting, many new and practical DA paradigms have been proposed, such as partial [94, 5], open-set [60, 69, 38, 4, 7, 50], universal [92, 68], and source-free [85, 89, 20, 97, 90]. In particular, open-set DA (OSDA) and source-free DA (SFDA) are closely related to the problem explored in this paper.
|
| 57 |
+
|
| 58 |
+
Test-Time Adaptation (TTA). For DG, due to the inaccessibility of target data during training, it is natural to solve the adaptivity gap [22] with TTA strategies. Adaptive methods [47, 76, 82, 35, 61, 11, 95, 14] have been proposed to refine the matching process between target test data and source-trained models in an online manner, i.e., all test data can be accessed only once. Tent [82] proposes to reduce the entropy of model's predictions on test data via entropy minimization. T3A [35] introduces a training-free approach by classifying each test sample based on its distance to a dynamically-updated support set. Despite the promising results on closed-set classes, these approaches fail to deal with open-set samples and thus lead to semantic mismatching.
|
| 59 |
+
|
| 60 |
+
Out-of-Distribution Detection (OD). A separate line of work studies the problem of OD [88], which aims to identify novel examples that the network has not been exposed to during the training phase. Mainstream OD methods are devoted to designing OOD scoring functions, e.g., confidence-based approaches [3, 31, 32], distance-based scores [41, 70, 75], and energy-based scores [51, 73]. The main difference between OD and our problem is that the former is a binary classification problem and does not account for the domain and label shifts between training and test data at the same time.
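For reference, the energy-based score mentioned above is commonly computed as a negative log-sum-exp over the logits; a minimal sketch follows (the temperature value is an assumption).

```python
import torch


def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Energy-based OOD score E(x) = -T * logsumexp(f(x) / T); larger values
    typically indicate out-of-distribution inputs."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)


# Usage: flag samples whose energy exceeds a validation-chosen threshold as OOD.
scores = energy_score(torch.randn(4, 10))
```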
|
| 61 |
+
|
| 62 |
+
Discussion. We provide a comparison of the problem settings among different methods in Tab. 1. OSDA and SFDA optimize offline with target data and specific learning objectives, while ART only adjusts the classifier in an online
|
| 63 |
+
|
| 64 |
+
Table 1. Comparison of different problem settings. $(X_{s},Y_{s})$ and $X_{t}$ are the labeled source and unlabeled target data respectively. Fine-tune means to update the model's parameters. Adjustment means making post-hoc modifications to the model's predictions.
|
| 65 |
+
|
| 66 |
+
<table><tr><td rowspan="2">Problem Setting</td><td>Training</td><td colspan="4">Test-time</td></tr><tr><td>Data</td><td>Domain Shift</td><td>Open Class</td><td>Fine-tune</td><td>Adjustment</td></tr><tr><td>OD [31, 74]</td><td>\( X_s, Y_s \)</td><td>✘</td><td>✓</td><td>✘</td><td>✘</td></tr><tr><td>OSDA [69, 4]</td><td>\( X_s, Y_s, X_t \)</td><td>✓</td><td>✓</td><td>✘</td><td>✘</td></tr><tr><td>SFDA [92, 97]</td><td>\( X_s, Y_s, X_t \)</td><td>✓</td><td>✘</td><td>✓</td><td>✘</td></tr><tr><td>TTA [76, 82, 95]</td><td>\( X_s, Y_s \)</td><td>✓</td><td>✘</td><td>✓</td><td>✓</td></tr><tr><td>OSDG [106, 91]</td><td>\( X_s, Y_s \)</td><td>✓</td><td>✓</td><td>✘</td><td>✘</td></tr><tr><td>Ours</td><td>\( X_s, Y_s \)</td><td>✓</td><td>✓</td><td>✘</td><td>✓</td></tr></table>
|
| 67 |
+
|
| 68 |
+
manner. TTA usually needs to update the trained model's parameters (e.g. entropy minimization [82, 95]) and a batch of data, while our TUR is fully training-free and can be performed on single test samples. These promising properties make the proposed approach more suitable for DG. Compared to OSDG, our setting allows training-free test-time adjustment for adapting source-trained models to novel environments, largely mitigating the potential adaptivity gap.
|
| 69 |
+
|
| 70 |
+
# 3. Methodology
|
| 71 |
+
|
| 72 |
+
# 3.1. Preliminary and Motivation
|
| 73 |
+
|
| 74 |
+
Notation. In DGCS, we have a single source domain $\mathcal{D}_s = \{(x_s^i,y_s^i)\}_{i = 1}^{n_s}$ of $n_{s}$ labeled samples and multiple (or a single) unseen target domains $\mathcal{D}_t = \{\mathcal{D}_t^1,\dots,\mathcal{D}_t^M\}$ , where $M\geq 1$ and $\mathcal{D}_t^m = \{(x_t^j,y_t^j)\}_{j = 1}^{n_t^m}$ . $\mathcal{D}_s$ and $\mathcal{D}_t$ are sampled from probability distributions $p_s(x,y)$ and $p_t(x,y)$ respectively. DGCS jointly considers two distribution shifts: (i) class-conditional shift where $p_{s}(y|x)\neq p_{t}(y|x)$ , and (ii) label shift where $p_{s}(y)\neq p_{t}(y)$ . Specifically, assume that $\mathcal{C}_s$ and $\mathcal{C}_t$ are the source and target class sets, respectively. DGCS dictates $\mathcal{C}_s\subset \mathcal{C}_t$ and $\mathcal{C}_t^u = \mathcal{C}_t\setminus \mathcal{C}_s$ is called unknown classes. Note we take all unknown classes as a whole even though there can be multiple classes. The objective of
|
| 75 |
+
|
| 76 |
+

Figure 3. The softmax outputs of different training methods regarding two input images on the PACS [42] benchmark: (a) vanilla classifier, (b) unknown activation, (c) full UGD loss. Source domain: Cartoon, target domain: Art. Known: dog from domain Cartoon; unknown: horse from domain Art.
|
| 88 |
+
|
| 89 |
+
DGCS is to train a model on $\mathcal{D}_s$ to classify all target instances from $\mathcal{D}_t$ into $|\mathcal{C}_s| + 1$ classes.
|
| 90 |
+
|
| 91 |
+
Motivation. Before formally introducing technical details, we discuss the motivation of our method using toy data. Since the decision boundaries are learned from known classes only, unknown target samples tend to lie outside the support of the source training data (i.e., in low-density regions [27]) and are ambiguous for the decision boundaries. On the other hand, as shown in Fig. 3(a), deep neural networks trained with the standard softmax Cross-Entropy (CE) loss tend to give overconfident predictions even when the test input differs from the training distribution [58]. Motivated by this, our goal is to explicitly create a support region for unknown target samples. A natural choice is the low-density regions with respect to the source-trained classifier.
|
| 92 |
+
|
| 93 |
+
To empirically verify our intuitions, we use scikit-learn [62] to generate samples (3 known classes and 1 unknown class) and show the comparison in Fig. 2. From the figure, we have the following observations. (1) Simply training a $(|\mathcal{C}_s| + 1)$ -way classifier cannot improve the discrimination of unknown class. (2) Forcefully increasing the softmax probability in the unknown dimension creates an additional support region. However, due to the overconfidence issue regarding known classes, the response to the unknown (reflected by the size of the region) is still limited. (3) To increase the response to unknown class, we penalize the prediction confidence w.r.t. known classes, i.e., making the known-class data closer to their decision boundaries. (4) Although the reshaped decision boundaries are able to accommodate unknown-class data, the boundaries between known and unknown classes are less discriminative as we do not have access to real unknown data, i.e., the unknown samples do not necessarily lie in the support of the created region since the above operations only encourage it far away from the support of known classes. Thus, we dynamically adjust the learned boundaries using unlabeled test data.
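A small sketch of how such a toy setup can be generated with scikit-learn; the sample counts, number of centers, and random seed are illustrative assumptions rather than the exact configuration behind Fig. 2.

```python
from sklearn.datasets import make_blobs

# Four isotropic Gaussian blobs: classes 0-2 are treated as known, class 3 as unknown.
X, y = make_blobs(n_samples=800, centers=4, cluster_std=1.0, random_state=0)

known_mask = y < 3
X_train, y_train = X[known_mask], y[known_mask]   # only known classes are seen in training
X_unknown = X[~known_mask]                        # held out, used only to probe the boundaries

print(X_train.shape, X_unknown.shape)
```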
|
| 94 |
+
|
| 95 |
+
Grounded on these insights, we propose a novel Activate and Reject (ART) approach. Specifically, ART encompasses two innovative components: 1) Unknown-aware Gradient Diffusion (UGD) to diffuse the gradient to the unknown's
|
| 96 |
+
|
| 97 |
+
logit with smoothing regularization; 2) Test-time Unknown Rejection (TUR) to conduct post hoc modification to the learned classifier's final prediction.
|
| 98 |
+
|
| 99 |
+
# 3.2. Unknown-aware Gradient Diffusion
|
| 100 |
+
|
| 101 |
+
As discussed in Sec. 3.1, deep classifiers trained with the standard softmax CE loss are susceptible to the notorious overconfidence issue. This problem becomes even more pronounced in the context of DGCS, where the learned decision boundary is highly biased towards source known-class samples. On the other hand, with access only to known-class data during the training phase, it is unclear how to optimize the $(|\mathcal{C}_s| + 1)$-way classifier (cf. Fig. 2(a)).
|
| 102 |
+
|
| 103 |
+
With this premise, we propose UGD to address the above issues at the training phase from two perspectives, i.e., unknown activation and output smoothness. The former activates the unknown probability, while the latter mitigates the overconfidence issue. First of all, we need to train a $(|\mathcal{C}_s| + 1)$-way classifier, where an additional dimension is introduced to discriminate unknown classes from known ones. Given $\mathbf{x}_s \in \mathcal{D}_s$ and a neural network $f(\mathbf{x};\theta)$ parameterized by $\theta$, we define the standard CE loss as:
|
| 104 |
+
|
| 105 |
+
$$
\mathcal{L}_{\mathrm{CE}}\left(f(\mathbf{x}_s), y_s\right) = -\log \frac{\exp\left(f_{y_s}(\mathbf{x}_s)\right)}{\sum_{k=1}^{|\mathcal{C}_s|+1} \exp\left(f_{k}(\mathbf{x}_s)\right)}, \tag{1}
$$
|
| 108 |
+
|
| 109 |
+
where $f(\mathbf{x_s}) \in \mathbb{R}^{|\mathcal{C}_s| + 1}$ denotes the network's logit and $f_{y_s}(\mathbf{x_s})$ is the $y_s$ -th element of $f(\mathbf{x_s})$ corresponding to the ground-truth label $y_s$ .
|
| 110 |
+
|
| 111 |
+
Based on the $(|\mathcal{C}_s| + 1)$-way classifier, we aim to activate the unknown's logit in the absence of real unknown-class samples. The key idea is to increase the unknown probability without affecting the ground-truth classification. For notational shorthand, we use $f_{k}$ to denote the logit of the $k$-th class and $f_{u}$ for the unknown's logit. Since we have no supervision over the unknown, the value of $f_{u}$ would otherwise remain negligible (cf. Fig. 3(a)). For a source sample $(\mathbf{x}_s, y_s) \in \mathcal{D}_s$, we forcefully increase the unknown probability by minimizing the negative log-likelihood,
|
| 112 |
+
|
| 113 |
+
$$
\mathcal{L}_{\mathrm{UA}} = -\log \frac{\exp\left(f_{u}\right)}{\sum_{k=1,\, k \neq y_s}^{|\mathcal{C}_s|+1} \exp\left(f_{k}\right)}. \tag{2}
$$
|
| 116 |
+
|
| 117 |
+
This objective ensures that the unknown probability can give a response to any input sample regardless of its class label (cf. gray region in Fig. 2(b)). Since the learning process is always dominated by CE loss regarding the ground-truth category, Eq. (2) is tractable and will not hurt the known-class performance. However, the activated probability is relatively small (compared to the ground-truth category), which leads to an unsatisfactory accuracy for real unknown samples, especially for some hard samples (cf. Fig. 3(b)).
|
| 118 |
+
|
| 119 |
+
Next, we aim to enhance the response to unknown classes by increasing the smoothness of the network's output (cf. Fig. 2(c)). Formally, we impose two constraints on the standard CE loss: a temperature scaling parameter $\tau$ ($\tau > 1$) and a penalty on the $L_{2}$ norm of the logits. The proposed smoothed CE (SCE) loss $\mathcal{L}_{\mathrm{SCE}}$ is thus defined as:
|
| 120 |
+
|
| 121 |
+
$$
\mathcal{L}_{\mathrm{SCE}} = -\log \frac{\exp\left(f_{y_s}(\mathbf{x}_s)/\tau\right)}{\sum_{i=1}^{|\mathcal{C}_s|+1} \exp\left(f_{i}(\mathbf{x}_s)/\tau\right)} + \lambda \left\| f(\mathbf{x}_s) \right\|_{2}, \tag{3}
$$
|
| 124 |
+
|
| 125 |
+
where $\lambda$ is set to 0.05 in all experiments.
|
| 126 |
+
|
| 127 |
+
Finally, the UGD loss is formulated as:
|
| 128 |
+
|
| 129 |
+
$$
\mathcal{L}_{\mathrm{UGD}} = \mathcal{L}_{\mathrm{UA}} + \mathcal{L}_{\mathrm{SCE}}. \tag{4}
$$
|
| 132 |
+
|
| 133 |
+
As shown in Fig. 3(c), the proposed $\mathcal{L}_{\mathrm{UGD}}$ not only reduces the overconfidence issue (smaller max-probability for known sample) but also significantly increases the unknown probability.
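To make the objective concrete, the following is a minimal PyTorch-style sketch of the UGD loss under our reading of Eqs. (2)-(4); the function name `ugd_loss`, the tensor shapes, and the temperature value are illustrative assumptions rather than the authors' released code (only $\lambda = 0.05$ is stated in the paper).

```python
import torch
import torch.nn.functional as F

def ugd_loss(logits, labels, tau=2.0, lam=0.05):
    """Sketch of the UGD objective (Eqs. 2-4): smoothed CE + unknown activation.

    logits: (B, C+1) raw outputs; the last column is the unknown class.
    labels: (B,) ground-truth indices in [0, C-1] (known classes only).
    tau:    temperature (> 1) used by the smoothed CE loss; the value here is assumed.
    lam:    weight of the L2 penalty on the logits (0.05 in the paper).
    """
    # Smoothed CE (Eq. 3): temperature-scaled cross-entropy plus an L2 penalty on the logits.
    sce = F.cross_entropy(logits / tau, labels) + lam * logits.norm(p=2, dim=1).mean()

    # Unknown activation (Eq. 2): negative log-likelihood of the unknown column,
    # with each sample's ground-truth logit removed from the softmax denominator.
    gt_mask = F.one_hot(labels, num_classes=logits.size(1)).bool()
    masked_logits = logits.masked_fill(gt_mask, float("-inf"))
    ua = -masked_logits.log_softmax(dim=1)[:, -1].mean()

    return sce + ua
```

Masking the ground-truth logit before the log-softmax reproduces the restricted denominator of Eq. (2), so the unknown column is pushed up without competing against the ground-truth class.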
|
| 134 |
+
|
| 135 |
+
# 3.3. Test-Time Unknown Rejection
|
| 136 |
+
|
| 137 |
+
Although we have activated the network's logit for the unknown class, two critical challenges still impede the safe and reliable deployment of our source-trained models on open-world data. First, a conservative (smaller max-probability) and smooth (larger entropy) output on source data does not guarantee category correspondence across domains and may therefore lead to semantic misalignment. Second, there is no principled criterion for rejecting a sample as "unknown", considering that unknown-class samples may be distributed arbitrarily in the embedding space. In this regard, previous open-set-oriented methods [68, 106], which typically rely on thresholding mechanisms (e.g., an entropy threshold [106]), are heuristic and sensitive to variations in domain disparity.
|
| 138 |
+
|
| 139 |
+
To solve the above issues, we introduce a simple and effective technique—TUR—to match unlabeled test data to the source-trained model in an online adaptation manner. Our key idea is to conduct post hoc modification to the learned classifier's final predictions, so as to bring the decision boundaries of different classes closer to the well-behaved case. TUR is training-free (i.e., no backward passes) and does not impose any distributional assumptions.
|
| 140 |
+
|
| 141 |
+
Technically, we impose a cross-domain cycle-consistency constraint on top of the embedding space to identify whether a test sample corresponds to any known class. The cross-domain relationships are established with $K$-nearest neighbor (KNN) search [36] to perform non-parametric density estimation, which is model-agnostic and easy to implement. Specifically, we decompose the source-trained model into a feature extractor $g$ and a linear classifier $f$. Assume that the embeddings of the training data are $\mathbb{Z}_s = \{z_s^1, z_s^2, \dots, z_s^{n_s}\}$, where $z_s^i = g(\mathbf{x}_s^i) / \| g(\mathbf{x}_s^i)\|_2$ is the $L_2$-normalized penultimate feature. Here, we do not require access to the original training samples, since the embeddings are extracted in advance and never updated. Then, we define two sets of known-class prototypes on top of the penultimate layer, i.e., $\{\mu_s^k\}_{k=1}^{|\mathcal{C}_s|}$ and $\{\mu_t^k\}_{k=1}^{|\mathcal{C}_s|}$, where $\mu_s^k$ is computed from $\mathbb{Z}_s$ (the mean feature per class) and is fixed at test time, while $\mu_t^k$ is empty at the beginning.
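As an illustration of this offline step, a minimal sketch for building the embedding bank $\mathbb{Z}_s$ and the source prototypes $\{\mu_s^k\}$ is given below; the function name and the data loader are hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_source_bank(g, source_loader, num_known, device="cuda"):
    """Precompute the L2-normalized source embeddings Z_s and per-class prototypes mu_s."""
    feats, labels = [], []
    for x, y in source_loader:
        z = F.normalize(g(x.to(device)), dim=1)      # z = g(x) / ||g(x)||_2
        feats.append(z.cpu())
        labels.append(y)
    z_s = torch.cat(feats)                            # (n_s, d), extracted once, never updated
    y_s = torch.cat(labels)                           # (n_s,)
    mu_s = torch.stack([z_s[y_s == k].mean(dim=0) for k in range(num_known)])
    return z_s, y_s, mu_s                             # mu_s stays frozen at test time
```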
|
| 144 |
+
|
| 145 |
+
For a test input $\mathbf{x}_t^j$ with its normalized feature vector $z_{t}^{j}$, we compute its KNN in $\mathbb{Z}_s$, denoted by $\mathcal{N}_s(z_t^j)$. The feature centroid of $\mathcal{N}_s(z_t^j)$ is denoted by $\bar{z}_s^j$. Next, we find the corresponding source class as
|
| 146 |
+
|
| 147 |
+
$$
k^{\prime} = \underset{k^{\prime} \in \{0, 1, \dots, |\mathcal{C}_{s}|\}}{\arg\max}\; \operatorname{sim}\left(\bar{z}_{s}^{j}, \mu_{s}^{k^{\prime}}\right) \tag{5}
$$
|
| 150 |
+
|
| 151 |
+
Here, we measure the cosine similarity between features as $\operatorname{sim}(\bar{z}_s^j,\mu_s^{k'}) = \frac{(\bar{z}_s^j)^{\top}\mu_s^{k'}}{\|\bar{z}_s^j\|_2\,\|\mu_s^{k'}\|_2}$. In the same way, we search for the target class $k''$ based on the similarity between $\bar{z}_s^j$ and $\mu_t^{k''}$. If $k'$ and $k''$ belong to the same category, the sample $\mathbf{x}_{t}^{j}$ is predicted as class $k''$ and we further update $\mu_t^{k''}$ in the following manner,
|
| 152 |
+
|
| 153 |
+
$$
\mu_{t(I)}^{k^{\prime\prime}} = \phi\, z_{t}^{j} + (1 - \phi)\, \mu_{t(I-1)}^{k^{\prime\prime}}, \tag{6}
$$
|
| 156 |
+
|
| 157 |
+
where $\mu_{t(I)}^{k''}$ denotes the $k''$-th target prototype at time step $I$ and $\phi \in (0,1)$ is a preset scalar, fixed to 0.3 in practice.
|
| 158 |
+
|
| 159 |
+
If $k'$ and $k''$ belong to different categories, the prediction is given by a follow-up strategy. Specifically, a memory bank $\mathbb{M}_I = \{\mathbb{M}_I^1, \dots, \mathbb{M}_I^{|\mathcal{C}_s| + 1}\}$ stores target sample embeddings up to time step $I$ and is initialized with the weights of the linear classifier $f$. At time step $I$, $\mathbb{M}_I$ is updated as:
|
| 160 |
+
|
| 161 |
+
$$
\mathbb{M}_{I}^{k} = \begin{cases} \mathbb{M}_{I-1}^{k} \cup \{z_{t}^{j}\} & \text{if } k^{\prime} \neq k^{\prime\prime} \text{ and } f(z_{t}^{j}) = k, \\ \mathbb{M}_{I-1}^{k} & \text{otherwise}, \end{cases} \tag{7}
$$
|
| 164 |
+
|
| 165 |
+
Similarly, we can build a new set of target class prototypes $\{\psi_t^k\}_{k = 1}^{|\mathcal{C}_s| + 1}$ based on the samples in $\mathbb{M}_I$. Note that $\psi_t^k$ is constantly updated during test time. Then, we predict the $(|\mathcal{C}_s| + 1)$-way class label of $\mathbf{x}_t^j$ as follows,
|
| 166 |
+
|
| 167 |
+
$$
\hat{k} = \underset{\hat{k} \in \{0, 1, \dots, |\mathcal{C}_{s}| + 1\}}{\arg\max}\; \operatorname{sim}\left(\bar{z}_{s}^{j}, \psi_{t}^{\hat{k}}\right). \tag{8}
$$
|
| 170 |
+
|
| 171 |
+
The decision boundaries between known and unknown classes are refined without backpropagation (cf. Fig. 2(d)).
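Putting Eqs. (5)-(8) together, one online TUR step can be sketched as follows. This is a simplified illustration under our assumptions: `z_s` and `mu_s` come from the offline step above, `mu_t` is assumed to be initialized from `mu_s` before any target samples arrive, the memory bank starts from the rows of the classifier's weight matrix (as stated above), and `K` and all variable names are ours, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def tur_step(z_t, z_s, mu_s, mu_t, memory, classifier, K=10, phi=0.3):
    """One online TUR prediction for a single L2-normalized test feature z_t of shape (d,)."""
    # KNN of z_t in the source embedding bank and its (re-normalized) centroid.
    nn_idx = (z_s @ z_t).topk(K).indices
    z_bar = F.normalize(z_s[nn_idx].mean(dim=0), dim=0)

    # Eq. (5): closest source prototype; the same search against the target prototypes.
    k_src = int(torch.argmax(z_bar @ F.normalize(mu_s, dim=1).T))
    k_tgt = int(torch.argmax(z_bar @ F.normalize(mu_t, dim=1).T))

    if k_src == k_tgt:
        # Cycle-consistent: keep the known-class prediction and update its target prototype (Eq. 6).
        mu_t[k_tgt] = phi * z_t + (1 - phi) * mu_t[k_tgt]
        return k_tgt

    # Otherwise, fall back to the classifier and grow the memory bank (Eq. 7).
    pred = int(torch.argmax(classifier(z_t)))
    memory[pred].append(z_t)

    # Eq. (8): re-predict against prototypes psi built from the memory bank (known + unknown).
    psi = torch.stack([torch.stack(m).mean(dim=0) for m in memory])
    return int(torch.argmax(z_bar @ F.normalize(psi, dim=1).T))
```

The final argmax over $|\mathcal{C}_s| + 1$ memory-bank prototypes is what allows a test sample to be rejected as unknown without any backward pass.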
|
| 172 |
+
|
| 173 |
+
Table 2. Accuracy (%) on four classification benchmarks (ResNet-18).
|
| 174 |
+
|
| 175 |
+
<table><tr><td rowspan="2">Regime</td><td rowspan="2">Method</td><td colspan="3">PACS</td><td colspan="3">Office-Home</td><td colspan="3">Office-31</td><td colspan="3">Digits</td><td colspan="3">Average</td></tr><tr><td>acck</td><td>accu</td><td>hs</td><td>acck</td><td>accu</td><td>hs</td><td>acck</td><td>accu</td><td>hs</td><td>acck</td><td>accu</td><td>hs</td><td>acck</td><td>accu</td><td>hs</td></tr><tr><td rowspan="2">OSDA (upper bound)</td><td>OSBP [69]</td><td>40.6</td><td>49.5</td><td>44.6</td><td>47.1</td><td>66.9</td><td>55.3</td><td>75.8</td><td>84.3</td><td>77.7</td><td>35.6</td><td>70.6</td><td>40.5</td><td>49.8</td><td>67.8</td><td>54.5</td></tr><tr><td>ROS [4]</td><td>35.6</td><td>66.4</td><td>46.4</td><td>50.8</td><td>77.5</td><td>60.8</td><td>71.7</td><td>80.0</td><td>75.6</td><td>20.1</td><td>48.6</td><td>34.9</td><td>47.7</td><td>68.1</td><td>54.4</td></tr><tr><td rowspan="3">OD</td><td>MSP [31]</td><td>38.9</td><td>62.5</td><td>46.4</td><td>52.7</td><td>75.6</td><td>62.0</td><td>49.7</td><td>89.2</td><td>63.8</td><td>17.2</td><td>87.1</td><td>28.8</td><td>39.6</td><td>78.6</td><td>50.3</td></tr><tr><td>LogitNorm [84]</td><td>35.1</td><td>47.6</td><td>38.3</td><td>56.3</td><td>56.5</td><td>56.1</td><td>41.0</td><td>71.2</td><td>52.1</td><td>26.8</td><td>51.2</td><td>35.2</td><td>39.8</td><td>56.6</td><td>45.4</td></tr><tr><td>DICE [74]</td><td>44.0</td><td>53.4</td><td>49.2</td><td>61.5</td><td>58.8</td><td>59.9</td><td>72.8</td><td>61.1</td><td>66.4</td><td>35.0</td><td>47.6</td><td>40.3</td><td>53.3</td><td>55.2</td><td>54.0</td></tr><tr><td rowspan="2">SFDA</td><td>SHOT [47]</td><td>51.2</td><td>34.9</td><td>40.8</td><td>52.5</td><td>32.4</td><td>44.3</td><td>84.8</td><td>60.2</td><td>70.4</td><td>27.4</td><td>20.3</td><td>23.3</td><td>54.0</td><td>37.0</td><td>44.7</td></tr><tr><td>AaD [90]</td><td>45.1</td><td>40.0</td><td>42.0</td><td>59.4</td><td>58.7</td><td>58.9</td><td>70.1</td><td>85.3</td><td>76.9</td><td>25.6</td><td>26.9</td><td>26.2</td><td>50.1</td><td>52.7</td><td>51.0</td></tr><tr><td rowspan="3">TTA</td><td>TTT [76]</td><td>36.9</td><td>44.6</td><td>38.9</td><td>52.0</td><td>45.9</td><td>47.2</td><td>35.4</td><td>79.6</td><td>49.0</td><td>44.1</td><td>45.1</td><td>44.6</td><td>42.1</td><td>53.8</td><td>44.9</td></tr><tr><td>Tent [82]</td><td>25.2</td><td>43.1</td><td>31.7</td><td>33.6</td><td>45.9</td><td>38.7</td><td>56.0</td><td>85.1</td><td>67.5</td><td>27.2</td><td>41.1</td><td>32.7</td><td>35.5</td><td>53.8</td><td>42.7</td></tr><tr><td>MEMO [95]</td><td>37.9</td><td>52.3</td><td>44.5</td><td>49.0</td><td>55.6</td><td>52.1</td><td>59.8</td><td>72.7</td><td>65.6</td><td>21.7</td><td>56.1</td><td>31.3</td><td>42.1</td><td>59.2</td><td>48.4</td></tr><tr><td rowspan="7">OSDG</td><td>ERM [78]</td><td>52.3</td><td>27.0</td><td>36.1</td><td>66.9</td><td>23.7</td><td>34.3</td><td>85.1</td><td>27.0</td><td>40.7</td><td>56.4</td><td>13.0</td><td>18.0</td><td>65.2</td><td>22.7</td><td>32.3</td></tr><tr><td>ADA [81]</td><td>54.2</td><td>30.9</td><td>36.4</td><td>67.9</td><td>25.4</td><td>36.2</td><td>85.6</td><td>25.2</td><td>38.7</td><td>57.2</td><td>15.1</td><td>20.1</td><td>66.2</td><td>24.2</td><td>32.9</td></tr><tr><td>ADA+CM [106]</td><td>56.4</td><td>45.6</td><td>43.0</td><td>65.0</td><td>40.4</td><td>48.5</td><td>83.0</td><td>34.5</td><td>48.5</td><td>49.2</td><td>52.1</td><td>39.9</td><td>63.4</td><td>43.2</td><td>45.0</td></tr><tr><td>MEADA 
[98]</td><td>54.1</td><td>31.4</td><td>36.2</td><td>67.6</td><td>25.7</td><td>36.4</td><td>85.8</td><td>25.1</td><td>38.6</td><td>57.6</td><td>29.8</td><td>30.4</td><td>66.3</td><td>28.0</td><td>35.4</td></tr><tr><td>MEADA+CM [106]</td><td>54.3</td><td>46.6</td><td>42.7</td><td>64.9</td><td>40.5</td><td>49.6</td><td>82.8</td><td>41.1</td><td>54.7</td><td>52.3</td><td>46.1</td><td>38.7</td><td>63.6</td><td>43.6</td><td>46.4</td></tr><tr><td>One Ring-S [91]</td><td>43.7</td><td>49.4</td><td>41.5</td><td>56.9</td><td>69.0</td><td>62.3</td><td>67.3</td><td>77.0</td><td>71.3</td><td>33.2</td><td>51.3</td><td>40.3</td><td>50.3</td><td>61.7</td><td>53.9</td></tr><tr><td>ART w/o TUR</td><td>47.0</td><td>51.3</td><td>48.1</td><td>58.8</td><td>69.8</td><td>63.7</td><td>70.7</td><td>65.9</td><td>68.2</td><td>29.7</td><td>65.7</td><td>40.9</td><td>51.6</td><td>63.2</td><td>55.2</td></tr><tr><td>DGCS</td><td>ART (full)</td><td>43.7</td><td>65.9</td><td>52.3</td><td>64.3</td><td>65.3</td><td>64.8</td><td>82.1</td><td>75.2</td><td>78.5</td><td>34.3</td><td>63.8</td><td>44.6</td><td>56.1</td><td>67.6</td><td>60.1</td></tr></table>
|
| 176 |
+
|
| 177 |
+
Table 3. Performance of ART on object detection benchmarks.
|
| 178 |
+
|
| 179 |
+
<table><tr><td rowspan="2">Method</td><td colspan="5">Pascal VOC→Clipart</td><td colspan="5">Pascal VOC→Watercolor</td><td colspan="5">Pascal VOC→Comic</td></tr><tr><td>WI↓</td><td>AOSE↓</td><td>mAPK↑</td><td>APU↑</td><td>hs↑</td><td>WI↓</td><td>AOSE↓</td><td>mAPK↑</td><td>APU↑</td><td>hs↑</td><td>WI↓</td><td>AOSE↓</td><td>mAPK↑</td><td>APU↑</td><td>hs↑</td></tr><tr><td>ORE [37]</td><td>17.3</td><td>876</td><td>37.7</td><td>3.0</td><td>5.6</td><td>28.4</td><td>3216</td><td>19.8</td><td>13.5</td><td>16.1</td><td>23.1</td><td>2242</td><td>7.3</td><td>3.0</td><td>4.3</td></tr><tr><td>OpenDet [28]</td><td>14.2</td><td>300</td><td>32.7</td><td>6.7</td><td>11.1</td><td>14.9</td><td>1944</td><td>19.2</td><td>19.3</td><td>19.2</td><td>15.2</td><td>744</td><td>7.5</td><td>3.1</td><td>4.4</td></tr><tr><td>ART (full)</td><td>11.7</td><td>317</td><td>35.8</td><td>10.2</td><td>15.9</td><td>19.7</td><td>944</td><td>20.8</td><td>15.2</td><td>17.6</td><td>13.2</td><td>596</td><td>7.2</td><td>9.1</td><td>8.0</td></tr><tr><td>w/o LUA</td><td>16.3</td><td>1363</td><td>35.4</td><td>6.0</td><td>10.3</td><td>29.4</td><td>3924</td><td>18.6</td><td>14.1</td><td>16.0</td><td>25.0</td><td>2826</td><td>6.4</td><td>2.2</td><td>3.3</td></tr><tr><td>w/o LSCE</td><td>14.6</td><td>426</td><td>34.7</td><td>3.2</td><td>5.9</td><td>24.8</td><td>1104</td><td>21.4</td><td>19.1</td><td>20.2</td><td>24.7</td><td>1372</td><td>6.3</td><td>3.5</td><td>4.5</td></tr><tr><td>w/o TUR</td><td>14.9</td><td>444</td><td>34.5</td><td>4.9</td><td>8.6</td><td>23.3</td><td>1398</td><td>21.1</td><td>17.6</td><td>19.2</td><td>15.0</td><td>784</td><td>7.3</td><td>4.6</td><td>5.6</td></tr></table>
|
| 180 |
+
|
| 181 |
+
Table 4. Performance of ART on semantic segmentation benchmark, i.e., from GTA5 (synthetic) to Cityscapes (real).
|
| 182 |
+
|
| 183 |
+
<table><tr><td>Method</td><td>mAcc</td><td>mIOU</td><td>accu</td><td>hs</td></tr><tr><td>ERM [78]</td><td>64.9</td><td>48.2</td><td>27.6</td><td>39.4</td></tr><tr><td>One Ring-S [91]</td><td>55.7</td><td>41.0</td><td>72.5</td><td>61.9</td></tr><tr><td>ART (full)</td><td>57.1</td><td>43.3</td><td>73.2</td><td>63.1</td></tr><tr><td>w/o UGD</td><td>64.2</td><td>46.6</td><td>41.6</td><td>50.2</td></tr><tr><td>w/o TUR</td><td>54.7</td><td>42.6</td><td>78.5</td><td>62.6</td></tr></table>
|
| 184 |
+
|
| 185 |
+
# 4. Experiments
|
| 186 |
+
|
| 187 |
+
# 4.1. Generalization in Image Classification
|
| 188 |
+
|
| 189 |
+
Dataset. We evaluate our ART on four standard DG benchmarks. PACS [42], which exhibits dramatic differences in image style, contains 9,991 images of seven object classes from four domains, i.e., Photo, Art Painting, Cartoon, and Sketch. Four classes (dog, elephant, giraffe, and guitar) are adopted as $\mathcal{C}_s$ and the remaining three classes are used as $\mathcal{C}_t^u$. Office-Home [79], which is collected from office and home environments, has 15,500 images of 65 classes from four domains, i.e., Artistic, Clipart, Product, and Real World. The domain shifts stem from variations in viewpoint and image style. In alphabetical order, the first 15 classes are selected as $\mathcal{C}_s$ and the remaining 50 classes are used as $\mathcal{C}_t^u$. Office-31 [67] has 31 classes collected from three domains: Amazon, DSLR, and Webcam. The 10 classes shared by Office-31 and Caltech-256 [26] are adopted as $\mathcal{C}_s$. In alphabetical order, the last 11 classes, together with $\mathcal{C}_s$, form the target class set, so these 11 classes constitute $\mathcal{C}_t^u$. Digits, whose domains differ in background, style, and color, contains five handwritten digit datasets: MNIST [40], MNIST-M [25], SVHN [57], USPS [33], and SYN [25]. MNIST is used as the source domain and the other datasets are viewed as target domains. $\mathcal{C}_s$ includes the digits from 0 to 4.
|
| 192 |
+
|
| 193 |
+
Evaluation Protocols. Following [4, 106, 91], we adopt H-score $(hs)$ [24] as the main evaluation metric. $hs$ harmonizes the importance of known and unknown classes by requiring that known and unknown class accuracy should be both high and balanced. The known class accuracy $(acc_k)$ and unknown class accuracy $(acc_u)$ are also provided.
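As a reference, the metric can be computed in a few lines; the sketch below assumes the common definition of the H-score as the harmonic mean of the known- and unknown-class accuracies, and the helper name is ours.

```python
def h_score(acc_known: float, acc_unknown: float) -> float:
    """Harmonic mean of known- and unknown-class accuracy (use the same scale for both)."""
    if acc_known + acc_unknown == 0:
        return 0.0
    return 2 * acc_known * acc_unknown / (acc_known + acc_unknown)

# Example: h_score(0.8, 0.6) ~= 0.686; the score is high only when both accuracies are high and balanced.
```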
|
| 194 |
+
|
| 195 |
+
Implementation Details. We conduct experiments based on Dassl [104], including data preparation, model training, and model selection. For PACS, Office-Home, and Office-31, we use ResNet-18 [29] pre-trained on ImageNet as the backbone network. For Digits, we use the ConvNet [39] with architecture conv-pool-conv-pool-fc-fc-softmax. The networks are trained using SGD with a momentum of 0.9 for 100 epochs, and the batch size is set to 16.
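In code, this recipe corresponds roughly to the loop below; `model`, `source_dataset`, the learning rate, and the reuse of the `ugd_loss` sketch from Sec. 3.2 are illustrative placeholders, not values taken from the paper.

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # lr is a placeholder
loader = torch.utils.data.DataLoader(source_dataset, batch_size=16, shuffle=True)

for epoch in range(100):
    for x, y in loader:
        optimizer.zero_grad()
        loss = ugd_loss(model(x), y)  # UGD objective sketched in Sec. 3.2
        loss.backward()
        optimizer.step()
```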
|
| 198 |
+
|
| 199 |
+
Baselines. Given the contact points with other problem settings, we compare ART with five types of state-of-the-art methods. (1) OSDG [106, 91] is the most closely related baseline; when TUR is removed, the proposed ART reduces to a standard OSDG method. (2) OSDA [69, 4] jointly utilizes source and target data for training and can thus be viewed as an upper bound for our problem. (3) OD [84, 74] typically identifies unknown-class samples via scoring functions. (4) SFDA [47, 90] and TTA [76, 82, 95] cannot deal with unknown-class samples directly; therefore, we follow [106] and use the entropy of the softmax output as the normality score.
|
| 200 |
+
|
| 201 |
+
Results. The classification results on the PACS, Office-Home, Office-31, and Digits benchmarks are reported in Tab. 2. ART substantially and consistently outperforms baseline methods on the different benchmark datasets. For example, ART improves $hs$ by $9.3\%$ (PACS), $2.5\%$ (Office-Home), $7.2\%$ (Office-31), and $4.3\%$ (Digits) compared to the previous best OSDG baselines. In particular, using UGD alone already exceeds state-of-the-art methods such as DICE [74] and One Ring-S [91]. The results also reveal several interesting observations. (1) The performance of [91] is unstable across benchmarks. For example, it outperforms CM [106] by $+12.7\%$ and $+16.6\%$ on Office-Home and Office-31 but shows inferior performance $(-1.5\%)$ on PACS. By contrast, our method achieves more consistent improvements, indicating the efficacy and scalability of ART. (2) ART achieves even better performance than OSDA methods (the upper bound) under a much more challenging setting. (3) LogitNorm [84] and Tent [82] achieve inferior performance due to the imbalance between $acc_k$ and $acc_u$, showing the non-triviality of performing both unknown-aware training and test-time modification.
|
| 202 |
+
|
| 203 |
+
# 4.2. Generalization in Other Vision Tasks
|
| 204 |
+
|
| 205 |
+
Setup. (1) Object Detection. We introduce four datasets to form three tasks: Pascal VOC [23], Clipart, Watercolor, and Comic [34]. They share 6 classes, where person is selected as $\mathcal{C}_t^u$ and the remaining 5 classes are viewed as $\mathcal{C}_s$. The Pascal VOC2007-trainval and VOC2012-trainval sets are combined to form the source domain, and Clipart1k, Watercolor, and Comic are used as the target domains, respectively. For evaluation, we introduce four metrics: Wilderness Impact (WI) [19], Absolute Open-Set Error (AOSE) [55], mean average precision of known classes $(\mathrm{mAP}_{\mathcal{K}})$, and average precision of the unknown class $(\mathrm{AP}_{\mathcal{U}})$. (2) Semantic Segmentation. GTA5 [65] and Cityscapes [18] are used as the source and target domains, respectively. GTA5 is a synthetic dataset generated from the Grand Theft Auto V game engine, while Cityscapes is collected from street scenes of different cities. They share 19 classes in all. According to the number of pixels per class, we use 10 classes as $\mathcal{C}_s$ and the remaining 9 classes as $\mathcal{C}_t^u$. We report the mean accuracy of all classes (mAcc), mean Intersection over Union (mIOU), $acc_u$, and $hs$.

Table 5. Ablation of ART on four benchmarks. $hs$ (%) is reported. - and + denote the removal or addition of a module, respectively.

<table><tr><td>Method</td><td>PACS</td><td>Office-Home</td><td>Office-31</td><td>Digits</td><td>Avg.</td></tr><tr><td>ART</td><td>52.3</td><td>64.8</td><td>78.5</td><td>44.6</td><td>60.1</td></tr><tr><td>- LUA</td><td>39.9</td><td>62.5</td><td>69.0</td><td>20.0</td><td>47.9</td></tr><tr><td>- LSCE</td><td>43.4</td><td>60.0</td><td>71.5</td><td>41.3</td><td>54.1</td></tr><tr><td>- UGD</td><td>45.5</td><td>57.0</td><td>66.6</td><td>32.0</td><td>50.3</td></tr><tr><td>- TUR & LUA</td><td>44.4</td><td>61.4</td><td>65.8</td><td>7.9</td><td>44.9</td></tr><tr><td>- TUR & LSCE</td><td>41.0</td><td>58.9</td><td>63.0</td><td>40.3</td><td>50.8</td></tr><tr><td>UGD</td><td>48.1</td><td>63.7</td><td>68.2</td><td>40.9</td><td>55.2</td></tr><tr><td>+ TTT [76]</td><td>48.5</td><td>60.8</td><td>72.8</td><td>41.3</td><td>55.9</td></tr><tr><td>+ Tent [82]</td><td>37.8</td><td>45.3</td><td>64.9</td><td>33.2</td><td>45.3</td></tr><tr><td>+ T3A [35]</td><td>49.2</td><td>62.7</td><td>72.0</td><td>41.7</td><td>56.4</td></tr><tr><td>+ MEMO [95]</td><td>49.9</td><td>61.4</td><td>75.4</td><td>41.0</td><td>56.9</td></tr><tr><td>+ SHOT [47]</td><td>46.6</td><td>50.3</td><td>71.5</td><td>33.5</td><td>50.5</td></tr><tr><td>+ AaD [90]</td><td>50.2</td><td>62.5</td><td>74.7</td><td>41.8</td><td>57.3</td></tr></table>
|
| 212 |
+
|
| 213 |
+
Implementation Details. (1) Object Detection. We utilize Faster R-CNN [64] as the detection model and ResNet-50 with FPN [48] as the backbone network. To avoid the mutual influence between classification and regression heads, the original shared FC layer is replaced by two parallel FC layers. The networks are trained for 40 epochs. (2) Semantic Segmentation. We adopt DeepLab-v2 [16] segmentation network with ResNet-101 backbone. We use SGD optimizer with an initial learning rate of $5 \times 10^{-4}$ , momentum of 0.9, and weight decay of $10^{-4}$ .
|
| 214 |
+
|
| 215 |
+
Results. Tab. 3 shows the detection results compared to ORE [37], OpenDet [28], and several variants of ART. With respect to $\mathrm{mAP}_{\mathcal{K}}$ and $\mathrm{AP}_{\mathcal{U}}$, ART outperforms the previous best method by $1.5\%$ and $1.8\%$ on average, revealing that ART strikes a better balance between the identification of known- and unknown-class objects. Fig. 4 provides qualitative comparisons, where ART precisely identifies unknown samples and exhibits better bounding box regression. For semantic segmentation, Tab. 4 reveals that even in this dense prediction task, ART significantly improves the generalization ability of deep models. The qualitative results are shown in Fig. 5, where the predictions given by ART are smoother and contain far fewer spurious areas than One Ring-S [91] and ART w/o TUR, especially on the unknown classes (rider and bike).
|
| 216 |
+
|
| 217 |
+
# 4.3. Discussion
|
| 218 |
+
|
| 219 |
+
Ablation study. (1) In Tab. 5, we evaluate the contribution of the different components of ART. It is evident that each of these components is reasonably designed, as the removal of any one of them leads to a commensurate reduction in accuracy. Note that when $\mathcal{L}_{\mathrm{UA}}$ is removed, we make the prediction by following the thresholding mechanism in [106]. To isolate the contribution of TUR, we additionally combine UGD with different TTA and SFDA methods. Notably, Tent and SHOT achieve inferior performance, while TTT and MEMO bring only marginal improvements compared to the proposed TUR. (2) In Fig. 7, we use Grad-CAM [99] to visualize the results of models trained w/ and w/o $\mathcal{L}_{\mathrm{SCE}}$ on both target known- and unknown-class samples. We observe that $\mathcal{L}_{\mathrm{SCE}}$ makes the network focus on the entire object rather than a small or inaccurate local region, revealing the importance of mitigating the overconfidence issue in DGCS tasks.

Table 6. The influence of the order of test data. $hs$ is reported.

<table><tr><td>ID</td><td>PACS</td><td>Office-Home</td><td>Office-31</td><td>Digits</td><td>Avg.</td></tr><tr><td>1</td><td>52.5</td><td>64.8</td><td>78.2</td><td>44.4</td><td>60.0</td></tr><tr><td>2</td><td>52.3</td><td>64.6</td><td>78.9</td><td>45.0</td><td>60.2</td></tr><tr><td>3</td><td>52.7</td><td>64.9</td><td>78.8</td><td>44.8</td><td>60.3</td></tr><tr><td>4</td><td>52.2</td><td>64.7</td><td>78.4</td><td>44.7</td><td>60.0</td></tr></table>

Figure 4. Qualitative comparisons between OpenDet (top) and ART (bottom).

Figure 5. Visualization of segmentation results for the task GTA5 $\rightarrow$ Cityscapes. Gray regions indicate the unknown-class pixels. Panels: (a) Target Image, (b) Ground Truth, (c) One Ring-S, (d) ART w/o TUR, (e) ART.

Figure 6. (a)-(d) t-SNE visualization [77] of the penultimate layer's features on Office-31 for (a) ERM, (b) One Ring-S, (c) ART w/o $\mathcal{L}_{\mathrm{SCE}}$, and (d) ART. (e) Varying the size of known classes on PACS (One Ring-S vs. ART).

Figure 7. Grad-CAM visualizations [99] on target known-class and unknown-class samples. Top: w/o $\mathcal{L}_{\mathrm{SCE}}$ vs. Bottom: w/ $\mathcal{L}_{\mathrm{SCE}}$.
|
| 316 |
+
|
| 317 |
+
The influence of known classes. With fixed $|\mathcal{C}_s \cup \mathcal{C}_t|$ , we investigate the influence of the number of known classes. As shown in Fig. 6, ART consistently outperforms the previous best method in terms of $hs$ especially when the size is small, indicating that ART can improve the generalization ability even with very limited known knowledge.
|
| 318 |
+
|
| 319 |
+
The influence of test order. As TUR is performed online, we study the influence of the order of test data. The results in Tab. 6 reveal that TUR is insensitive to the variations of data order, showing its robustness to the open world.
|
| 320 |
+
|
| 321 |
+
Feature visualization. We use t-SNE [77] to visualize the feature learned by ERM, One Ring-S, ART w/o $\mathcal{L}_{\mathrm{SCE}}$ , and ART, respectively. The results are displayed in Fig. 6, where different colors except for gray indicate different known classes. Points in gray represent all unknown classes. The features learned by ERM and One Ring-S cannot be reasonably separated, where the boundaries between known and unknown classes are ambiguous to some extent. By contrast, ART provides more meaningful embedding features to distinguish known and unknown samples.
|
| 322 |
+
|
| 323 |
+
# 5. Conclusion
|
| 324 |
+
|
| 325 |
+
We investigate the problem of DGCS, which is realistic but has been largely overlooked in the literature. Specifically, we present a simple yet surprisingly effective approach (ART) that regularizes the model's decision boundary during training and adjusts the source-trained classifier's predictions at test time, endowing the deep model with unknown-aware ability even without any access to real unknown-class data during training. Experiments show that ART consistently improves the generalization capability of deep networks across different tasks. We hope our work will motivate future research on open-world generalization in safety-critical applications.
|
| 326 |
+
|
| 327 |
+
# Acknowledgement
|
| 328 |
+
|
| 329 |
+
This work was partially supported by Hong Kong Research Grants Council under Collaborative Research Fund (Project No. HKU C7004-22G). It was also partially supported by NSFC62172348, Outstanding Young Fund of Guangdong Province with No. 2023B1515020055, and Shenzhen General Project with No. JCYJ20220530143604010.
|
| 330 |
+
|
| 331 |
+
# References
|
| 332 |
+
|
| 333 |
+
[1] Kartik Ahuja, Karthikeyan Shanmugam, Kush Varshney, and Amit Dhurandhar. Invariant risk minimization games. In ICML, pages 145-155, 2020. 1
|
| 334 |
+
[2] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019. 1
|
| 335 |
+
|
| 336 |
+
[3] Abhijit Bendale and Terrance E Boult. Towards open set deep networks. In CVPR, pages 1563-1572, 2016. 3
|
| 337 |
+
[4] Silvia Bucci, Mohammad Reza Loghmani, and Tatiana Tommasi. On the effectiveness of image rotation for open set domain adaptation. In ECCV, 2020. 3, 6, 7
|
| 338 |
+
[5] Zhangjie Cao, Kaichao You, Mingsheng Long, Jianmin Wang, and Qiang Yang. Learning to transfer examples for partial domain adaptation. In CVPR, 2019. 3
|
| 339 |
+
[6] Chaoqi Chen, Jiongcheng Li, Xiaoguang Han, Xiaqing Liu, and Yizhou Yu. Compound domain generalization via meta-knowledge encoding. In CVPR, pages 7119-7129, 2022. 2
|
| 340 |
+
[7] Chaoqi Chen, Jiongcheng Li, Zebiao Zheng, Yue Huang, Xinghao Ding, and Yizhou Yu. Dual bipartite graph learning: A general approach for domain adaptive object detection. In ICCV, pages 2703-2712, 2021. 3
|
| 341 |
+
[8] Chaoqi Chen, Jiongcheng Li, Hong-Yu Zhou, Xiaoguang Han, Yue Huang, Xinghao Ding, and Yizhou Yu. Relation matters: foreground-aware graph-based relational reasoning for domain adaptive object detection. IEEE TPAMI, 45(3):3677-3694, 2022. 2
|
| 342 |
+
[9] Chaoqi Chen, Luyao Tang, Feng Liu, Gangming Zhao, Yue Huang, and Yizhou Yu. Mix and reason: Reasoning over semantic topology with data mixing for domain generalization. NeurIPS, 35:33302-33315, 2022. 1
|
| 343 |
+
[10] Chaoqi Chen, Weiping Xie, Wenbing Huang, Yu Rong, Xinghao Ding, Yue Huang, Tingyang Xu, and Junzhou Huang. Progressive feature alignment for unsupervised domain adaptation. In CVPR, pages 627-636, 2019. 2
|
| 344 |
+
[11] Dian Chen, Dequan Wang, Trevor Darrell, and Sayna Ebrahimi. Contrastive test-time adaptation. In CVPR, pages 295-305, 2022. 3
|
| 345 |
+
[12] Liang Chen, Yong Zhang, Yibing Song, Anton van den Hengel, and Lingqiao Liu. Domain generalization via rationale invariance. In ICCV, 2023. 2
|
| 346 |
+
[13] Liang Chen, Yong Zhang, Yibing Song, Lingqiao Liu, and Jue Wang. Self-supervised learning of adversarial example: Towards good generalizations for deepfake detection. In CVPR, pages 18710-18719, 2022. 2
|
| 347 |
+
[14] Liang Chen, Yong Zhang, Yibing Song, Ying Shan, and Lingqiao Liu. Improved test-time adaptation for domain generalization. In CVPR, pages 24172-24182, 2023. 3
|
| 348 |
+
[15] Liang Chen, Yong Zhang, Yibing Song, Jue Wang, and Lingqiao Liu. OST: Improving generalization of deepfake detection via one-shot test-time training. In NeurIPS, 2022. 2
|
| 349 |
+
[16] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 40(4):834-848, 2017. 7
|
| 350 |
+
[17] Rune Christiansen, Niklas Pfister, Martin Emil Jakobsen, Nicola Gnecco, and Jonas Peters. A causal framework for distribution generalization. IEEE TPAMI, 2021. 2
|
| 351 |
+
[18] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes
|
| 352 |
+
|
| 353 |
+
dataset for semantic urban scene understanding. In CVPR, pages 3213-3223, 2016. 7
|
| 354 |
+
[19] Akshay Dhamija, Manuel Gunther, Jonathan Ventura, and Terrance Boult. The overlooked elephant of object detection: Open set. In WACV, pages 1021-1030, 2020. 7
|
| 355 |
+
[20] Ning Ding, Yixing Xu, Yehui Tang, Chao Xu, Yunhe Wang, and Dacheng Tao. Source-free domain adaptation via distribution estimation. In CVPR, 2022. 3
|
| 356 |
+
[21] Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. Domain generalization via model-agnostic learning of semantic features. In NeurIPS, pages 6447–6458, 2019. 1, 2
|
| 357 |
+
[22] Abhimanyu Dubey, Vignesh Ramanathan, Alex Pentland, and Dhruv Mahajan. Adaptive methods for real-world domain generalization. In CVPR, 2021. 2, 3
|
| 358 |
+
[23] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. IJCV, 88(2):303-338, 2010. 7
|
| 359 |
+
[24] Bo Fu, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Learning to detect open classes for universal domain adaptation. In ECCV, pages 567-583. Springer, 2020. 6
|
| 360 |
+
[25] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 1180-1189, 2015. 2, 6
|
| 361 |
+
[26] Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pages 2066-2073. IEEE, 2012. 6
|
| 362 |
+
[27] Yves Grandvalet and Yoshua Bengio. Semi-supervised learning by entropy minimization. In NeurIPS, pages 529-536, 2005. 4
|
| 363 |
+
[28] Jiaming Han, Yuqiang Ren, Jian Ding, Xingjia Pan, Ke Yan, and Gui-Song Xia. Expanding low-density latent regions for open-set object detection. In CVPR, pages 9591-9600, 2022. 6, 7
|
| 364 |
+
[29] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 6
|
| 365 |
+
[30] Dan Hendrycks, Nicholas Carlini, John Schulman, and Jacob Steinhardt. Unsolved problems in ml safety. arXiv preprint arXiv:2109.13916, 2021. 2
|
| 366 |
+
[31] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR, 2017. 3, 6
|
| 367 |
+
[32] Rui Huang and Yixuan Li. Towards scaling out-of-distribution detection for large semantic space. In CVPR, 2021. 3
|
| 368 |
+
[33] Jonathan J. Hull. A database for handwritten text recognition research. TPAMI, 16(5):550-554, 1994. 6
|
| 369 |
+
[34] Naoto Inoue, Ryosuke Furuta, Toshihiko Yamasaki, and Kiyoharu Aizawa. Cross-domain weakly-supervised object detection through progressive domain adaptation. In CVPR, pages 5001-5009, 2018. 7
|
| 370 |
+
[35] Yusuke Iwasawa and Yutaka Matsuo. Test-time classifier adjustment module for model-agnostic domain generalization. NeurIPS, 34:2427-2440, 2021. 3, 7
|
| 371 |
+
|
| 372 |
+
[36] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billionscale similarity search with gpus. IEEE Transactions on Big Data, 7(3):535-547, 2019. 5
|
| 373 |
+
[37] KJ Joseph, Salman Khan, Fahad Shahbaz Khan, and Vineeth N Balasubramanian. Towards open world object detection. In CVPR, pages 5830-5840, 2021. 6, 7
|
| 374 |
+
[38] Jogendra Nath Kundu, Naveen Venkat, Ambareesh Revanur, R Venkatesh Babu, et al. Towards inheritable models for open-set domain adaptation. In CVPR, pages 12376-12385, 2020. 3
|
| 375 |
+
[39] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541-551, 1989. 6
|
| 376 |
+
[40] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. 6
|
| 377 |
+
[41] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS, pages 7167-7177, 2018. 3
|
| 378 |
+
[42] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In ICCV, pages 5542-5550, 2017. 4, 6
|
| 379 |
+
[43] Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Learning to generalize: Meta-learning for domain generalization. In AAAI, 2018. 1, 2
|
| 380 |
+
[44] Da Li, Jianshu Zhang, Yongxin Yang, Cong Liu, Yi-Zhe Song, and Timothy M Hospedales. Episodic training for domain generalization. In ICCV, pages 1446-1455, 2019. 1, 2
|
| 381 |
+
[45] Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In ECCV, pages 624-639, 2018. 2
|
| 382 |
+
[46] Yiying Li, Yongxin Yang, Wei Zhou, and Timothy Hospedales. Feature-critic networks for heterogeneous domain generalization. In ICML, pages 3915-3924, 2019. 2
|
| 383 |
+
[47] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. In ICML, pages 6028-6039. PMLR, 2020. 3, 6, 7
|
| 384 |
+
[48] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, pages 2117-2125, 2017. 7
|
| 385 |
+
[49] Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, and Tie-Yan Liu. Learning causal semantic representation for out-of-distribution prediction. NeurIPS, 34, 2021. 1, 2
|
| 386 |
+
[50] Jie Liu, Xiaoqing Guo, and Yixuan Yuan. Unknown-oriented learning for open set domain adaptation. In ECCV, 2022. 3
|
| 387 |
+
[51] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In NeurIPS, 2020. 3
|
| 388 |
+
|
| 389 |
+
[52] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In ICML, pages 97-105, 2015. 2
|
| 390 |
+
[53] Divyat Mahajan, Shruti Tople, and Amit Sharma. Domain generalization using causal matching. In ICML, pages 7313-7324, 2021. 2
|
| 391 |
+
[54] Toshihiko Matsuura and Tatsuya Harada. Domain generalization using a mixture of multiple latent domains. In AAAI, 2020. 2
|
| 392 |
+
[55] Dimity Miller, Lachlan Nicholson, Feras Dayoub, and Niko Sünderhauf. Dropout sampling for robust object detection in open-set conditions. In ICRA, pages 3243-3249. IEEE, 2018. 7
|
| 393 |
+
[56] Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In CVPR, pages 8690-8699, 2021. 2
|
| 394 |
+
[57] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. 6
|
| 395 |
+
[58] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In CVPR, pages 427-436, 2015. 4
|
| 396 |
+
[59] Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. Domain adaptation via transfer component analysis. IEEE Transactions on Neural Networks, 22(2):199-210, 2011. 2
|
| 397 |
+
[60] Pau Panareda Busto and Juergen Gall. Open set domain adaptation. In ICCV, pages 754-763, 2017. 3
|
| 398 |
+
[61] Prashant Pandey, Mrigank Raman, Sumanth Varambally, and Prathosh Ap. Generalization on unseen domains via inference-time label-preserving target projections. In CVPR, pages 12924-12933, 2021. 3
|
| 399 |
+
[62] Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. JMLR, 12:2825–2830, 2011. 4
|
| 400 |
+
[63] Vihari Piratla, Praneeth Netrapalli, and Sunita Sarawagi. Efficient domain generalization via common-specific low-rank decomposition. In ICML, pages 7728-7738, 2020. 1, 2
|
| 401 |
+
[64] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. NeurIPS, 28, 2015. 7
|
| 402 |
+
[65] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In ECCV, pages 102-118. Springer, 2016. 7
|
| 403 |
+
[66] Mateo Rojas-Carulla, Bernhard Scholkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. JMLR, 19(1):1309-1342, 2018. 2
|
| 404 |
+
[67] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In ECCV, pages 213-226. Springer, 2010. 6
|
| 405 |
+
[68] Kuniaki Saito, Donghyun Kim, Stan Sclaroff, and Kate Saenko. Universal domain adaptation through self supervision. NeurIPS, 33:16282-16292, 2020. 3, 5
|
| 406 |
+
|
| 407 |
+
[69] Kuniaki Saito, Shohei Yamamoto, Yoshitaka Ushiku, and Tatsuya Harada. Open set domain adaptation by backpropagation. In ECCV, pages 153-168, 2018. 3, 6, 7
|
| 408 |
+
[70] Vikash Sehwag, Mung Chiang, and Prateek Mittal. Ssd: A unified framework for self-supervised outlier detection. In ICLR, 2021. 3
|
| 409 |
+
[71] Yang Shu, Zhangjie Cao, Chenyu Wang, Jianmin Wang, and Mingsheng Long. Open domain generalization with domain-augmented meta-learning. In CVPR, pages 9624-9633, 2021. 2
|
| 410 |
+
[72] Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, and Tie-Yan Liu. Recovering latent causal factor for generalization to distributional shifts. NeurIPS, 34, 2021. 2
|
| 411 |
+
[73] Yiyou Sun, Chuan Guo, and Yixuan Li. React: Out-of-distribution detection with rectified activations. In NeurIPS, 2021. 3
|
| 412 |
+
[74] Yiyou Sun and Yixuan Li. Dice: Leveraging sparsification for out-of-distribution detection. In ECCV, 2022. 3, 6, 7
|
| 413 |
+
[75] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In ICML, 2022. 3
|
| 414 |
+
[76] Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. Test-time training with self-supervision for generalization under distribution shifts. In ICML, pages 9229-9248. PMLR, 2020. 3, 6, 7
|
| 415 |
+
[77] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. JMLR, 9(11), 2008. 8, 9
|
| 416 |
+
[78] Vladimir N Vapnik. An overview of statistical learning theory. IEEE transactions on neural networks, 10(5):988-999, 1999. 6
|
| 417 |
+
[79] Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In CVPR, pages 5018-5027, 2017. 6
|
| 418 |
+
[80] Riccardo Volpi and Vittorio Murino. Addressing model vulnerability to distributional shifts over image transformation sets. In ICCV, pages 7980-7989, 2019. 2
|
| 419 |
+
[81] Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John C Duchi, Vittorio Murino, and Silvio Savarese. Generalizing to unseen domains via adversarial data augmentation. In NeurIPS, pages 5334-5344, 2018. 1, 2, 6
|
| 420 |
+
[82] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In ICLR, 2021. 3, 6, 7
|
| 421 |
+
[83] Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip Yu. Generalizing to unseen domains: A survey on domain generalization. TKDE, 2022. 1
|
| 422 |
+
[84] Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, and Yixuan Li. Mitigating neural network overconfidence with logit normalization. In ICML, 2022. 6, 7
|
| 423 |
+
[85] Haifeng Xia, Handong Zhao, and Zhengming Ding. Adaptive adversarial network for source-free domain adaptation. In ICCV, 2021. 3
|
| 424 |
+
[86] Qinwei Xu, Ruipeng Zhang, Ya Zhang, Yanfeng Wang, and Qi Tian. A fourier-based framework for domain generalization. In CVPR, pages 14383-14392, 2021. 2
|
| 425 |
+
|
| 426 |
+
[87] Zhenlin Xu, Deyi Liu, Junlin Yang, Colin Raffel, and Marc Niethammer. Robust and generalizable visual representation learning via random convolutions. In ICLR, 2021. 1, 2
|
| 427 |
+
[88] Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. Generalized out-of-distribution detection: A survey. arXiv preprint arXiv:2110.11334, 2021. 3
|
| 428 |
+
[89] Shiqi Yang, Yaxing Wang, Joost van de Weijer, Luis Herranz, and Shangling Jui. Exploiting the intrinsic neighborhood structure for source-free domain adaptation. In NeurIPS, 2021. 3
|
| 429 |
+
[90] Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, et al. Attracting and dispersing: A simple approach for source-free domain adaptation. In NeurIPS, 2022. 3, 6, 7
|
| 430 |
+
[91] Shiqi Yang, Yaxing Wang, Kai Wang, Shangling Jui, and Joost van de Weijer. One ring to bring them all: Towards open-set recognition under domain shift. arXiv preprint arXiv:2206.03600, 2022. 2, 3, 6, 7
|
| 431 |
+
[92] Kaichao You, Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Universal domain adaptation. In CVPR, pages 2720-2729, 2019. 3
|
| 432 |
+
[93] Hanlin Zhang, Yi-Fan Zhang, Weiyang Liu, Adrian Weller, Bernhard Schölkopf, and Eric P Xing. Towards principled disentanglement for domain generalization. In CVPR, pages 8024-8034, 2022. 1, 2
|
| 433 |
+
[94] Jing Zhang, Zewei Ding, Wanqing Li, and Philip Ogunbona. Importance weighted adversarial nets for partial domain adaptation. In CVPR, 2018. 3
|
| 434 |
+
[95] Marvin Mengxin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation. In NeurIPS, 2022. 3, 6, 7
|
| 435 |
+
[96] Yabin Zhang, Minghan Li, Ruihuang Li, Kui Jia, and Lei Zhang. Exact feature distribution matching for arbitrary style transfer and domain generalization. In CVPR, 2022. 2
|
| 436 |
+
[97] Ziyi Zhang, Weikai Chen, Hui Cheng, Zhen Li, Siyuan Li, Liang Lin, and Guanbin Li. Divide and contrast: Source-free domain adaptation via adaptive contrastive learning. In NeurIPS, 2022. 3
|
| 437 |
+
[98] Long Zhao, Ting Liu, Xi Peng, and Dimitris Metaxas. Maximum-entropy adversarial data augmentation for improved generalization and robustness. NeurIPS, 33:14435-14447, 2020. 6
|
| 438 |
+
[99] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016. 8
|
| 439 |
+
[100] Hong-Yu Zhou, Yizhou Yu, Chengdi Wang, Shu Zhang, Yuanxu Gao, Jia Pan, Jun Shao, Guangming Lu, Kang Zhang, and Weimin Li. A transformer-based representation-learning model with unified processing of multimodal input for clinical diagnostics. Nature Biomedical Engineering, pages 1-13, 2023. 2
|
| 440 |
+
[101] Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. Domain generalization: A survey. TPAMI, 2022. 1
|
| 441 |
+
[102] Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Deep domain-adversarial image generation for domain generalisation. In AAAI, pages 13025-13032, 2020. 2
|
| 442 |
+
|
| 443 |
+
[103] Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Learning to generate novel domains for domain generalization. In ECCV, pages 561-578, 2020. 2
|
| 444 |
+
[104] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain adaptive ensemble learning. TIP, 30:8008-8018, 2021. 6
|
| 445 |
+
[105] Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. Domain generalization with mixstyle. In ICLR, 2021. 1, 2
|
| 446 |
+
[106] Ronghang Zhu and Sheng Li. Crossmatch: Cross-classifier consistency regularization for open-set single domain generalization. In ICLR, 2022. 2, 3, 5, 6, 7, 8
|
| 447 |
+
[107] Wei Zhu, Le Lu, Jing Xiao, Mei Han, Jiebo Luo, and Adam P Harrison. Localized adversarial domain generalization. In CVPR, pages 7108-7118, 2022. 2
|
activateandrejecttowardssafedomaingeneralizationundercategoryshift/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e17a8a69cfaa60f2fe23a29cf3b20ab1f2c3f5cfb5297b21baf40b2512dcdddb
|
| 3 |
+
size 806568
|
activateandrejecttowardssafedomaingeneralizationundercategoryshift/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c570f795c3e11dbd565b5de9e7adaca7cbdc8bbbd6eba02f3fb581882b57e6a6
|
| 3 |
+
size 611198
|
activeneuralmapping/45e08854-ae2d-43e4-bc22-971696065335_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3262e554da4f05a0a55d9a6ff52eb248e4ca8299331581ace5dd79fd7f36abcb
|
| 3 |
+
size 84253
|
activeneuralmapping/45e08854-ae2d-43e4-bc22-971696065335_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2c59a02395b733a10a9520cf75d080d6251fce68ace5ac3d50db8248f9406242
|
| 3 |
+
size 110496
|