diff --git a/.gitattributes b/.gitattributes index 8614ad2531925fe2527f3bdd95f20b9a50c57e7b..07f887daeea6febf091d1ec23f38d9a03b30e9c3 100644 --- a/.gitattributes +++ b/.gitattributes @@ -7991,3 +7991,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text 2023/pCON_[[:space:]]Polarimetric[[:space:]]Coordinate[[:space:]]Networks[[:space:]]for[[:space:]]Neural[[:space:]]Scene[[:space:]]Representations/762a8e4d-373c-4bab-83f9-f1ad8a1ae928_origin.pdf filter=lfs diff=lfs merge=lfs -text 2023/sRGB[[:space:]]Real[[:space:]]Noise[[:space:]]Synthesizing[[:space:]]With[[:space:]]Neighboring[[:space:]]Correlation-Aware[[:space:]]Noise[[:space:]]Model/4d3b606f-ee7b-460f-b2bf-d38a08fa5304_origin.pdf filter=lfs diff=lfs merge=lfs -text 2023/vMAP_[[:space:]]Vectorised[[:space:]]Object[[:space:]]Mapping[[:space:]]for[[:space:]]Neural[[:space:]]Field[[:space:]]SLAM/ff44bc7e-edea-45f5-9989-5020c0b824b0_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/On[[:space:]]the[[:space:]]Benefits[[:space:]]of[[:space:]]3D[[:space:]]Pose[[:space:]]and[[:space:]]Tracking[[:space:]]for[[:space:]]Human[[:space:]]Action[[:space:]]Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_origin.pdf filter=lfs diff=lfs merge=lfs -text +2023/On[[:space:]]the[[:space:]]Effectiveness[[:space:]]of[[:space:]]Partial[[:space:]]Variance[[:space:]]Reduction[[:space:]]in[[:space:]]Federated[[:space:]]Learning[[:space:]]With[[:space:]]Heterogeneous[[:space:]]Data/fbd20275-52b6-475a-b61f-5e4c3c892317_origin.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_content_list.json b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..bbce252b6532a3e4a91acd87d3f8a688f201e459 --- /dev/null +++ b/2023/On the Benefits of 3D Pose and Tracking for Human Action 
Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_content_list.json @@ -0,0 +1,1410 @@ +[ + { + "type": "text", + "text": "On the Benefits of 3D Pose and Tracking for Human Action Recognition", + "text_level": 1, + "bbox": [ + 122, + 130, + 846, + 152 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Jathushan Rajasegaran $^{1,2}$ , Georgios Pavlakos $^{1}$ , Angjoo Kanazawa $^{1}$ , Christoph Feichtenhofer $^{2}$ , Jitendra Malik $^{1,2}$ , $^{1}$ UC Berkeley, $^{2}$ Meta AI, FAIR", + "bbox": [ + 53, + 179, + 929, + 215 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 233, + 251, + 313, + 266 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In this work we study the benefits of using tracking and 3D poses for action recognition. To achieve this, we take the Lagrangian view on analysing actions over a trajectory of human motion rather than at a fixed point in space. Taking this stand allows us to use the tracklets of people to predict their actions. In this spirit, first we show the benefits of using 3D pose to infer actions, and study person-person interactions. Subsequently, we propose a Lagrangian Action Recognition model by fusing 3D pose and contextualized appearance over tracklets. To this end, our method achieves state-of-the-art performance on the AVA v2.2 dataset on both pose only settings and on standard benchmark settings. When reasoning about the action using only pose cues, our pose model achieves $+10.0$ mAP gain over the corresponding state-of-the-art while our fused model has a gain of $+2.8$ mAP over the best state-of-the-art model. Code and results are available at: https://brjathu.github.io/LART", + "bbox": [ + 75, + 282, + 472, + 541 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 76, + 571, + 207, + 585 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "In fluid mechanics, it is traditional to distinguish between the Lagrangian and Eulerian specifications of the flow field. Quoting the Wikipedia entry, \"Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.\"", + "bbox": [ + 75, + 597, + 468, + 794 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "These concepts are very relevant to how we analyze videos of human activity. In the Eulerian viewpoint, we would focus on feature vectors at particular locations, either $(x,y)$ or $(x,y,z)$ , and consider evolution over time while staying fixed in space at the location. In the Lagrangian viewpoint, we would track, say a person over space-time and track the associated feature vector across space-time.", + "bbox": [ + 75, + 795, + 468, + 902 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "While the older literature for activity recognition e.g., [11, 18, 53] typically adopted the Lagrangian viewpoint, ever since the advent of neural networks based on 3D spacetime convolution, e.g., [50], the Eulerian viewpoint became standard in state-of-the-art approaches such as SlowFast Networks [16]. Even after the switch to transformer architectures [12, 52] the Eulerian viewpoint has persisted. 
This is noteworthy because the tokenization step for transformers gives us an opportunity to freshly examine the question, \"What should be the counterparts of words in video analysis?\" Dosovitskiy et al. [10] suggested that image patches were a good choice, and the continuation of that idea to video suggests that spatiotemporal cuboids would work for video as well.", + "bbox": [ + 496, + 252, + 893, + 464 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "On the contrary, in this work we take the Lagrangian viewpoint for analysing human actions. This means that we reason about the trajectory of an entity over time. Here, the entity can be low-level, e.g., a pixel or a patch, or high-level, e.g., a person. Since we are interested in understanding human actions, we choose to operate on the level of \"humans-as-entities\". To this end, we develop a method that processes trajectories of people in video and uses them to recognize their actions. We recover these trajectories by capitalizing on the recently introduced 3D tracking method PHALP [43] and on HMR 2.0 [19]. As shown in Figure 1, PHALP recovers person tracklets from video by lifting people to 3D, which means that we can both link people over a series of frames and get access to their 3D representation. Given these 3D representations of people (i.e., 3D pose and 3D location), we use them as the basic content of each token. This allows us to build a flexible system where the model, here a transformer, takes as input tokens corresponding to the different people, with access to their identity, 3D pose and 3D location. Having the 3D locations of the people in the scene allows us to learn interactions among people. 
Our model relying on this tokenization can benefit from 3D tracking and pose, and outperforms previous baselines that only have access to pose information [8, 45].", + "bbox": [ + 496, + 470, + 893, + 834 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "While the change in human pose over time is a strong signal, some actions require more contextual information about the appearance and the scene. Therefore, it is important to also fuse pose with appearance information from", + "bbox": [ + 496, + 839, + 893, + 902 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 106, + 2, + 181, + 42 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 236, + 0, + 807, + 46 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "640", + "bbox": [ + 485, + 945, + 511, + 955 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/b709f11f233361974d43ae852845e4e9eeb5342539b37b15be82d1a7009d093e.jpg", + "image_caption": [ + "Figure 1. Overview of our method: Given a video, first, we track every person using a tracking algorithm (e.g. PHALP [43]). Then every detection in the track is tokenized to represent a human-centric vector (e.g. pose, appearance). To represent 3D pose we use SMPL [35] parameters and the estimated 3D location of the person; for contextualized appearance we use MViT [12] (pre-trained on MaskFeat [59]) features. Then we train a transformer network to predict actions using the tracks. Note that in the second frame we do not have a detection for the blue person; at these places we pass a mask token to in-fill the missing detections."
+ ], + "image_footnote": [], + "bbox": [ + 138, + 89, + 831, + 397 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "humans and the scene, coming directly from pixels. To achieve this, we also use the state-of-the-art models for action recognition [12, 34] to provide complementary information from the contextualized appearance of the humans and the scene in a Lagrangian framework. Specifically, we densely run such models over the trajectory of each tracklet and record the contextualized appearance features localized around the tracklet. As a result, our tokens include explicit information about the 3D pose of the people and densely sampled appearance information from the pixels, processed by action recognition backbones [12]. Our complete system outperforms the previous state of the art by a large margin of $2.8\\mathrm{mAP}$ , on the challenging AVA v2.2 dataset.", + "bbox": [ + 75, + 506, + 470, + 700 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Overall, our main contribution is introducing an approach that highlights the effects of tracking and 3D poses for human action understanding. To this end, in this work, we propose a Lagrangian Action Recognition with Tracking (LART) approach, which utilizes the tracklets of people to predict their action. Our baseline version leverages tracklet trajectories and 3D pose representations of the people in the video to outperform previous baselines utilizing pose information. Moreover, we demonstrate that the proposed Lagrangian viewpoint of action recognition can be easily combined with traditional baselines that rely only on appearance and context from the video, achieving significant gains compared to the dominant paradigm.", + "bbox": [ + 75, + 703, + 470, + 898 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. 
Related Work", + "text_level": 1, + "bbox": [ + 500, + 503, + 640, + 522 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Recovering humans in 3D: A lot of the related work has been using the SMPL human body model [35] for recovering 3D humans from images. Initially, the related methods were relying on optimization-based approaches, like SMPLify [5], but since the introduction of the HMR [23], there has been a lot of interest in approaches that can directly regress SMPL parameters [35] given the corresponding image of the person as input. Many follow-up works have improved upon the original model, estimating more accurate pose [31] or shape [7], increasing the robustness of the model [41], incorporating side information [30,32], investigating different architecture choices [29,64], etc.", + "bbox": [ + 496, + 535, + 890, + 715 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "While these works have been improving the basic single-frame reconstruction performance, there have been parallel efforts toward the temporal reconstruction of humans from video input. The HMMR model [24] uses a convolutional temporal encoder on HMR image features [23] to reconstruct humans over time. Other approaches have investigated recurrent [28] or transformer [41] encoders. Instead of performing the temporal pooling on image features, recent work has been using the SMPL parameters directly for the temporal encoding [2, 44].", + "bbox": [ + 496, + 718, + 890, + 869 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "One assumption of the temporal methods in the above category is that they have access to tracklets of people in", + "bbox": [ + 498, + 869, + 890, + 900 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "641", + "bbox": [ + 486, + 945, + 509, + 955 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "the video. 
This means that they rely on tracking methods, most of which operate on the 2D domain [3, 13, 37, 62] and are responsible for introducing many errors. To overcome this limitation, recent work [42, 43] has capitalized on the advances of 3D human recovery to perform more robust identity tracking from video. More specifically, the PHALP method of Rajasegaran et al. [43] allows for robust tracking in a variety of settings, including in the wild videos and movies. Here, we make use of the PHALP system to discover long tracklets from large-scale video datasets. This allows us to train our method for recognizing actions from 3D pose input.", + "bbox": [ + 75, + 90, + 472, + 272 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Action Recognition: Earlier works on action recognition relied on hand-crafted features such as HOG3D [27], Cuboids [9] and Dense Trajectories [53, 54]. After the introduction of deep learning, 3D convolutional networks became the main backbone for action recognition [6, 48, 50]. However, the 3D convolutional models treat both space and time in a similar fashion, so to overcome this issue, two-stream architectures were proposed [46]. In two-steam networks, one pathway is dedicated to motion features, usually taking optical flow as input. This requirement of computing optical flow makes it hard to learn these models in an end-to-end manner. On the other hand, SlowFast networks [16] only use video streams but at different frame rates, allowing it to learn motion features from the fast pathway and lateral connections to fuse spatial and temporal information. 
Recently, with the advancements in transformer architectures, there has been a lot of work on action recognition using transformer backbones [1, 4, 12, 38].", + "bbox": [ + 75, + 292, + 472, + 565 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "While the above-mentioned works mainly focus on the model architectures for action recognition, another line of work investigates more fine-grained relationships between actors and objects [47, 55, 56, 65]. Non-local networks [55] use self-attention to reason about entities in the video and learn long-range relationships. ACAR [39] models actor-context-actor relationships by first extracting actor-context features through pooling in the bounding box region and then learning higher-level relationships between actors. Compared to ACAR, our method does not explicitly design any priors about actor relationships, except their track identity.", + "bbox": [ + 75, + 580, + 470, + 750 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Along these lines, some works use the human pose to understand the action [8, 45, 51, 60, 63]. PoTion [8] uses a keypoint-based pose representation by colorizing the temporal dependencies. Recently, JMRN [45] proposed a joint-motion re-weighting network to learn joint trajectories separately and then fuse this information to reason about inter-joint motion. While these works rely on 2D keypoints and design specific architectures to encode the representation, we use more explicit 3D SMPL parameters.", + "bbox": [ + 75, + 763, + 470, + 902 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 500, + 89, + 591, + 106 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Understanding human action requires interpreting multiple sources of information [26]. These include head and gaze direction, human body pose and dynamics, interactions with objects or other humans or animals, the scene as a whole, the activity context (e.g. 
immediately preceding actions by self or others), and more. Some actions can be recognized by pose and pose dynamics alone, as demonstrated by Johansson et al. [22], who showed that people are remarkably good at recognizing walking, running, and crawling just by looking at moving point-lights. However, interpreting complex actions requires reasoning with multiple sources of information, e.g., to recognize that someone is slicing a tomato with a knife, it helps to see the knife and the tomato.", + "bbox": [ + 496, + 114, + 890, + 311 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "There are many design choices that can be made here. Should one use \"disentangled\" representations, with elements such as pose, interacted objects, etc., represented explicitly in a modular way? Or should one just input video pixels into a large capacity neural network model and rely on it to figure out what is discriminatively useful? In this paper, we study two options: a) human pose reconstructed from an HMR model [19, 23] and b) human pose with contextual appearance as computed by an MViT model [12].", + "bbox": [ + 496, + 311, + 890, + 446 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given a video with $T$ frames, we first track every person using PHALP [43], which gives us a unique identity for each person over time. Let a person $i \in [1,2,3,\dots n]$ at time $t \in [1,2,3,\dots T]$ be represented by a person-vector $\mathbf{H}_t^i$ . Here $n$ is the number of people in a frame. This person-vector is constructed such that it contains a human-centric representation $\mathbf{P}_t^i$ and some contextualized appearance information $\mathbf{Q}_t^i$ .", + "bbox": [ + 496, + 446, + 890, + 568 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {H} _ {t} ^ {i} = \\left\\{\\mathbf {P} _ {t} ^ {i}, \\mathbf {Q} _ {t} ^ {i} \\right\\}. 
\\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 637, + 571, + 890, + 590 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Since we know the identity of each person from the tracking, we can create an action-tube [18] representation for each person. Let $\Phi_{i}$ be the action-tube of person $i$ ; then this action-tube contains all the person-vectors over time.", + "bbox": [ + 496, + 590, + 890, + 651 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {\\Phi} _ {i} = \\left\\{\\mathbf {H} _ {1} ^ {i}, \\mathbf {H} _ {2} ^ {i}, \\mathbf {H} _ {3} ^ {i}, \\dots , \\mathbf {H} _ {T} ^ {i} \\right\\}. \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 594, + 655, + 890, + 672 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given this representation, we train our model LART to predict actions from action-tubes (tracks). In this work we use a vanilla transformer [52] to model the network $\mathcal{F}$ , and this allows us to mask attention if the track is not continuous due to occlusions, failed detections, etc. Please see the Appendix for more details on the network architecture.", + "bbox": [ + 496, + 672, + 890, + 763 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {F} \\left(\\Phi_ {1}, \\Phi_ {2}, \\dots , \\Phi_ {i}, \\dots , \\Phi_ {n}; \\Theta\\right) = \\widehat {Y _ {i}}. \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 573, + 768, + 890, + 787 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Here, $\Theta$ denotes the model parameters, $\widehat{Y}_i = \{y_1^i,y_2^i,y_3^i,\dots,y_T^i\}$ are the predictions for a track, and $y_{t}^{i}$ is the predicted action of track $i$ at time $t$ . The model can use the actions of others for reasoning when predicting the action for the person-of-interest $i$ . 
Finally, we use binary cross-entropy loss to train our model and measure mean Average Precision (mAP) for evaluation.", + "bbox": [ + 496, + 792, + 890, + 898 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "642", + "bbox": [ + 485, + 944, + 511, + 955 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Action Recognition with 3D Pose", + "text_level": 1, + "bbox": [ + 76, + 90, + 366, + 107 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In this section, we study the effect of human-centric pose representation on action recognition. To do that, we consider a person-vector that only contains the pose representation, $\mathbf{H}_t^i = \{\mathbf{P}_t^i\}$ . While $\mathbf{P}_t^i$ can in general contain any information about the person, in this work we train a pose-only model, LART-pose, which uses the 3D body pose of the person based on the SMPL [35] model. This includes the joint angles of the different body parts, $\theta_t^i \in \mathcal{R}^{23 \times 3 \times 3}$ , and is considered an amodal representation, which means we make a prediction about all body parts, even those that are potentially occluded/truncated in the image. Since the global body orientation $\psi_t^i \in \mathcal{R}^{3 \times 3}$ is represented separately from the body pose, our body representation is invariant to the specific viewpoint of the video. In addition to the 3D pose, we also use the 3D location $L_t^i$ of the person in the camera view (which is also predicted by the PHALP model [43]). This makes it possible to consider the relative locations of the different people in 3D. More specifically, each person is represented as,", + "bbox": [ + 75, + 113, + 472, + 400 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {H} _ {t} ^ {i} = \\mathbf {P} _ {t} ^ {i} = \\left\\{\\theta_ {t} ^ {i}, \\psi_ {t} ^ {i}, L _ {t} ^ {i} \\right\\}. 
\\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 187, + 407, + 468, + 426 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Let us assume that there are $n$ tracklets $\{\Phi_1, \Phi_2, \Phi_3, \dots, \Phi_n\}$ in a given video. To study the action of tracklet $i$ , we consider person $i$ as the person-of-interest; having access to the other tracklets can be helpful for interpreting the person-person interactions of person $i$ . Therefore, to predict the action for all $n$ tracklets we need to make $n$ forward passes. If person $i$ is the person-of-interest, then we randomly sample $N - 1$ other tracklets and pass them to the model $\mathcal{F}(\cdot; \Theta)$ along with $\Phi_i$ .", + "bbox": [ + 75, + 435, + 472, + 585 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {F} \\left(\\Phi_ {i}, \\left\\{\\Phi_ {j} \\mid j \\in [ N ] \\right\\}; \\Theta\\right) = \\widehat {Y} _ {i} \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 168, + 593, + 468, + 612 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Therefore, the model sees $N$ tracklets and predicts the action for the main (person-of-interest) track. To do this, we first tokenize all the person-vectors by passing them through a linear layer that projects each one into a $d$-dimensional space, $f_{proj}(\mathbf{H}_t^i)\in \mathcal{R}^d$ . Afterward, we add positional embeddings for a) time and b) tracklet-id. 
For time and tracklet-id we use 2D sine and cosine functions as positional encoding [57], assigning person $i$ as the zero$^{\mathrm{th}}$ track, while the rest of the tracklets use tracklet-ids $\{1,2,3,\dots,N - 1\}$ .", + "bbox": [ + 75, + 619, + 468, + 758 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nP E (t, i, 2 r) = \\sin (t / 1 0 0 0 0 ^ {4 r / d})\n$$\n", + "text_format": "latex", + "bbox": [ + 207, + 765, + 419, + 782 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nP E (t, i, 2 r + 1) = \\cos (t / 1 0 0 0 0 ^ {4 r / d})\n$$\n", + "text_format": "latex", + "bbox": [ + 176, + 786, + 419, + 803 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nP E (t, i, 2 s + d / 2) = \\sin (i / 1 0 0 0 0 ^ {4 s / d})\n$$\n", + "text_format": "latex", + "bbox": [ + 156, + 808, + 419, + 825 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nP E (t, i, 2 s + d / 2 + 1) = \\cos (i / 1 0 0 0 0 ^ {4 s / d})\n$$\n", + "text_format": "latex", + "bbox": [ + 127, + 829, + 419, + 845 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Here, $t$ is the time index, $i$ is the track-id, $r, s \in [0, d/2)$ index the dimensions, and $d$ is the dimension of the token.", + "bbox": [ + 75, + 854, + 468, + 898 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After adding the position encodings for time and identity, each person token is passed to the transformer network. 
The $(t + i\\times N)^{th}$ token is given by,", + "bbox": [ + 498, + 90, + 890, + 137 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\operatorname {t o k e n} _ {(t + i \\times N)} = f _ {\\text {p r o j}} \\left(\\mathcal {H} _ {t} ^ {i}\\right) + P E (t, i,:) \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 560, + 148, + 890, + 165 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Our person of interest formulation would allow us to use other actors in the scene to make better predictions for the main actor. When there are multiple actors involved in the scene, knowing one person's action could help in predicting another's action. Some actions are correlated among the actors in a scene (e.g. dancing, fighting), while in some cases, people will be performing reciprocal actions (e.g. speaking and listening). In these cases knowing one person's action would help in predicting the other person's action with more confidence.", + "bbox": [ + 496, + 176, + 890, + 325 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.2. Actions from Appearance and 3D Pose", + "text_level": 1, + "bbox": [ + 498, + 335, + 831, + 353 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "While human pose plays a key role in understanding actions, more complex actions require reasoning about the scene and context. Therefore, in this section, we investigate the benefits of combining pose and contextual appearance features for action recognition and train model LART to benefit from 3D poses and appearance over a trajectory. For every track, we run a 2D action recognition model (i.e. MaskFeat [59] pretrained MViT [12]) at a frequency $f_{s}$ and store the feature vectors before the classification layer. For example, consider a track $\\Phi_{i}$ , which has detections $\\{D_1^i,D_2^i,D_3^i,\\dots ,D_T^i\\}$ . 
We get the predictions from the 2D action recognition models for the detections at $\{t,t + f_{FPS} / f_s,t + 2f_{FPS} / f_s,\ldots \}$ . Here, $f_{FPS}$ is the frame rate of the video. Since these action recognition models capture temporal information to some extent, $\mathbf{Q}_{t - f_{FPS} / 2f_s}^i$ to $\mathbf{Q}_{t + f_{FPS} / 2f_s}^i$ share the same appearance features. Let us assume we have a pre-trained action recognition model $\mathcal{A}$ that takes a sequence of frames and a detection bounding box at the mid-frame; then the feature vector for $\mathbf{Q}_t^i$ is given by:", + "bbox": [ + 496, + 359, + 890, + 662 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\mathcal {A} \\big (D _ {t} ^ {i}, \\{I \\} _ {t - M} ^ {t + M} \\big) = \\mathbf {U} _ {t} ^ {i}\n$$\n", + "text_format": "latex", + "bbox": [ + 617, + 672, + 774, + 691 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Here, $\{I\}_{t - M}^{t + M}$ is the sequence of image frames, $2M$ is the number of frames seen by the action recognition model, and $\mathbf{U}_t^i$ is the contextual appearance vector. Note that, since the action recognition models look at the whole image frame, this representation implicitly contains information about the scene, objects, and movements. However, we argue that the human-centric pose representation has orthogonal information compared to feature vectors taken from convolutional or transformer networks. For example, the 3D pose is a geometric representation while $\mathbf{U}_t^i$ is more photometric; the SMPL parameters carry stronger priors about human actions/pose and are amodal, while the appearance representation is learned from raw pixels. 
Now that we have both", + "bbox": [ + 496, + 702, + 890, + 900 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "643", + "bbox": [ + 485, + 945, + 511, + 955 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "pose-centric representation and appearance-centric representation in the person vector $\mathbf{H}_t^i$ :", + "bbox": [ + 76, + 90, + 468, + 122 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\mathbf {H} _ {t} ^ {i} = \\left\\{\\underbrace {\\theta_ {t} ^ {i}, \\psi_ {t} ^ {i}, L _ {t} ^ {i}} _ {\\mathbf {P} _ {t} ^ {i}}, \\underbrace {\\mathbf {U} _ {t} ^ {i}} _ {\\mathbf {Q} _ {t} ^ {i}} \\right\\}. \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 191, + 132, + 468, + 170 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "So, each human is represented by their 3D pose, 3D location, and their appearance and scene content. We follow the same procedure as discussed in the previous section to add positional encoding and train a transformer network $\mathcal{F}(\cdot; \Theta)$ with pose+appearance tokens.", + "bbox": [ + 76, + 181, + 468, + 257 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experiments", + "text_level": 1, + "bbox": [ + 76, + 270, + 209, + 287 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We evaluate our method on AVA [20] in various settings. AVA [20] poses an action detection problem, where people are localized in a spatio-temporal volume with action labels. It provides annotations at 1Hz, and each actor has 1 pose action, up to 3 person-object interaction labels (optional), and up to 3 person-person interaction labels (optional). For the evaluations, we use AVA v2.2 annotations and follow the standard protocol as in [20]. We measure mean average precision (mAP) on 60 classes with a frame-level IoU of 0.5. 
In addition to that, we also evaluate our method on the AVA-Kinetics [33] dataset, which provides spatio-temporally localized annotations for Kinetics videos.", + "bbox": [ + 75, + 296, + 468, + 474 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We use PHALP [43] to track people in the AVA dataset. PHALP falls into the tracking-by-detection paradigm and uses Mask R-CNN [21] for detecting people in the scene. At the training stage, where the bounding box annotations are available only at $1\mathrm{Hz}$ , we use Mask R-CNN detections for the in-between frames and use the ground-truth bounding box for every 30 frames. For validation, we use the bounding boxes used by [39] and follow the same strategy to complete the tracking. We ran PHALP on Kinetics-400 [25] and AVA [20]. Together, the two datasets contain over 1 million tracks with an average length of 3.4s and over 100 million detections. In total, we use about 900 hours of tracks, which is about 40x more than previous works [24]. See Table 1 for more details.", + "bbox": [ + 75, + 478, + 468, + 686 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Tracking allows us to supervise actions densely. Since we have tokens for each actor at every frame, we can supervise every token by assuming the human action remains the same in a 1 sec window [20]. First, we pre-train our model on the Kinetics-400 [25] and AVA [20] datasets. We run MViT [12] (pretrained on MaskFeat [59]) at $1\mathrm{Hz}$ on every track in Kinetics-400 to generate pseudo-ground-truth annotations. Every 30 frames share the same annotations, and we train our model end-to-end with binary cross-entropy loss. Then we fine-tune the pretrained model, with tracks generated by us, on AVA ground-truth action labels. At inference, we take a track, randomly sample $N - 1$ other tracks from the same video, and pass them through the model. 
We apply average pooling to the",
Dataset | # clips | # tracks | # bbox
AVA [20] | 184k | 320k | 32.9m
Kinetics [25] | 217k | 686k | 71.4m
Total | 400k | 1m | 104.3m
", + "bbox": [ + 553, + 93, + 841, + 174 + ], + "page_idx": 4 + }, + { + "type": "table", + "img_path": "images/f2cc98216001acb29f5f77831f36f2c12b8479acf3cccf50cb84d53d039195a3.jpg", + "table_caption": [ + "Table 1. Tracking statistics on AVA [20] and Kinetics-400 [25]: We report the number tracks returned by PHALP [43] for each datasets (m: million). This results in over 900 hours of tracks, with a mean length of 3.4 seconds (with overlaps)." + ], + "table_footnote": [], + "table_body": "
Model | Pose | OM | PI | PM | mAP
PoTion [8] | 2D | - | - | - | 13.1
JMRN [45] | 2D | 7.1 | 17.2 | 27.6 | 14.1
LART-pose | SMPL | 11.9 | 24.6 | 45.8 | 22.3
LART-pose | SMPL+Joints | 13.3 | 25.9 | 48.7 | 24.1
", + "bbox": [ + 506, + 253, + 885, + 348 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Table 2. AVA Action Recognition with 3D pose: We evaluate human-centric representation on AVA dataset [20]. Here $OM$ : Object Manipulation, $PI$ : Person Interactions, and $PM$ : Person Movement. LART-poscan achieve about $80\\%$ performance of MViT models on person movement tasks without looking at scene information.", + "bbox": [ + 498, + 357, + 890, + 439 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "prediction head over a sequence of 12 frames, and evaluate at the center-frame. For more details on model architecture, hyper-parameters, and training procedure/training-time please see Appendix A1.", + "bbox": [ + 498, + 460, + 890, + 521 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4.1. Action Recognition with 3D Pose", + "text_level": 1, + "bbox": [ + 500, + 530, + 785, + 545 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In this section, we discuss the performance of our method on AVA action recognition, when using 3D pose cues, corresponding to Section 3.1. We train our 3D pose model LART-pose, on Kinetics-400 and AVA datasets. For Kinetics-400 tracks, we use MaskFeat [59] pseudo-ground truth labels and for AVA tracks, we train with ground-truth labels. We train a single person model and a multi-person model to study the interactions of a person over time, and person-person interactions. Our method achieves $24.1\\mathrm{mAP}$ on multi-person $(N = 5)$ setting (See Table 2). While this is well below the state-of-the-art performance, this is a first time a 3D model achieves more than $15.6\\mathrm{mAP}$ on AVA dataset. 
Note that the first reported performance on AVA was $15.6\\mathrm{mAP}$ [20], and our 3D pose model is already above this baseline.", + "bbox": [ + 496, + 553, + 890, + 777 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We evaluate the performance of our method on three AVA sub-categories (Object Manipulation $(OM)$ , Person Interactions $(PI)$ , and Person Movement $(PM)$ ). For the person-movement task, which includes actions such as running, standing, and sitting etc., the 3D pose model achieves $48.7\\mathrm{mAP}$ . In contrast, MaskFeat performance in this sub-category is $58.6\\mathrm{mAP}$ . This shows that the 3D pose model can perform about $80\\%$ good as a strong state-of-the-art", + "bbox": [ + 496, + 779, + 890, + 900 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "644", + "bbox": [ + 485, + 945, + 509, + 955 + ], + "page_idx": 4 + }, + { + "type": "image", + "img_path": "images/85ec193b8f26c4303b14618908eddcd148fcd39b87a2daa70ee76cabeea1df84.jpg", + "image_caption": [ + "AVA2.2 Performance with 3D pose", + "Figure 2. Class-wise performance on AVA: We show the performance of JMRN [45] and LART-pose on 60 AVA classes (average precision and relative gain). For pose based classes such as standing, sitting, and walking our 3D pose model can achieve above $60\\mathrm{mAP}$ average precision performance by only looking at the 3D poses over time. By modeling multiple trajectories as input our model can understand the interactions among people. For example, activities such as dancing $(+30.1\\%)$ , martial art $(+19.8\\%)$ and hugging $(+62.1\\%)$ have large relative gains over state-of-the-art pose only model. We only plot the gains if it is above or below $1\\mathrm{mAP}$ ." + ], + "image_footnote": [], + "bbox": [ + 89, + 102, + 880, + 311 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "model. 
In the person-person interaction category, our multi-person model achieves a gain of $+2.4\mathrm{mAP}$ over the single-person model, showing that the multi-person model is able to capture person-person interactions. As shown in Fig. 2, for person-person interaction classes such as dancing, fighting, lifting a person, and handshaking, the multi-person model performs much better than the current state-of-the-art pose-only models. For example, in dancing the multi-person model gains $+39.8\mathrm{mAP}$, and in hugging the relative gain is over $+200\%$.",
Then, we fine-tune this model on AVA tracks using the ground-truth action annotations.",
For example, when a person runs across the scene from left to right, a feature volume cropped at the mid-frame bounding box is unlikely to contain all the information about the person. However, if we track this person, we know their exact position over time, and",
Model | Pretrain | mAP
SlowFast R101, 8×8 [16] | K400 | 23.8
MViTv1-B, 64×3 [12] | | 27.3
SlowFast 16×8 +NL [16] | | 27.5
X3D-XL [14] | | 27.4
MViTv1-B-24, 32×3 [12] | K600 | 28.7
Object Transformer [61] | | 31.0
ACAR R101, 8×8 +NL [39] | | 31.4
ACAR R101, 8×8 +NL [39] | K700 | 33.3
MViT-L↑312, 40×3 [34] | IN-21K+K400 | 31.6
MaskFeat [59] | K400 | 37.5
MaskFeat [59] | K600 | 38.8
Video MAE [15, 49] | K600 | 39.3
Video MAE [15, 49] | K400 | 39.5
LART | K400 | 42.3 (+2.8)
", + "bbox": [ + 78, + 405, + 467, + 656 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "that would give more localized information to the model to predict the action.", + "bbox": [ + 78, + 744, + 468, + 773 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "To this end, first, we evaluate MaskFeat [58] with the same detection bounding boxes [39] used in our evaluations, and it results in $40.2\\mathrm{mAP}$ . With this being the baseline for our system, we train a model which only uses MaskFeat features as input, but over time. This way we can measure the effect of tracking in action recognition. Unsurprisingly, as shown in Table 5 when training MaskFeat with tracking, the model performs $+1.2\\mathrm{mAP}$ better than the baseline. This", + "bbox": [ + 78, + 779, + 470, + 900 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/5e94199a64dc86781df44b8789451e5fdcc8db51706be0f3cbbf1c065551b43f.jpg", + "table_caption": [ + "Table 3. Comparison with state-of-the-art methods on AVA 2.2:. Our model uses features from MaskFeat [59] with full crop inference. Compared to Video MAE [15,49] our method achieves a gain of $+2.8$ mAP." + ], + "table_footnote": [], + "table_body": "
Model | mAP
SlowFast [16] | 32.98
ACAR [39] | 36.36
RM [17] | 37.34
LART | 38.91
", + "bbox": [ + 616, + 405, + 776, + 512 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Table 4. Performance on AVA-Kinetics Dataset. We evaluate the performance of our model on AVA-Kinetics [33] using a single model (no ensembles) and compare the performance with previous state-of-the-art single models.", + "bbox": [ + 500, + 527, + 893, + 583 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "clearly shows that the use of tracking is helpful in action recognition. Specifically, having access to the tracks help to localize a person over time, which in return provides a second order signal of how joint angles changes over time. In addition, knowing the identity of each person also gives a discriminative signal between people, which is helpful for learning interactions between people.", + "bbox": [ + 500, + 613, + 890, + 719 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Effect of Pose: The second contribution from our work is to use 3D pose information for action recognition. As discussed in Section 4.1 by only using 3D pose, we can achieve $24.1\\mathrm{mAP}$ on AVA dataset. While it is hard to measure the exact contribution of 3D pose and 2D features, we compare our method with a model trained with only MaskFeat and tracking, where the only difference is the use of 3D pose. As shown in Table 5, the addition of 3D pose gives a gain of $+0.8\\mathrm{mAP}$ . 
While this is a relatively small gain compared to that from tracking, we believe it can be improved with more robust and accurate 3D pose estimation.",
The first two columns demonstrate the benefits of having access to the action-tubes of other people for action prediction. In the first column, the orange person is very close to the other person, in a hugging posture, which makes it easy to predict hugging with higher probability. Similarly, in the second column, the explicit interaction between the multiple people, and knowing that others are also fighting, increases the confidence of the fighting action for the green person over the 2D recognition model. The third and fourth columns show the benefit of explicitly modeling the 3D pose over time (using tracks) for action recognition: the yellow person is in a riding pose, and the purple person is looking upwards with their legs on a vertical plane. The last column indicates the benefit of representing people with an amodal representation. Here the hand of the blue person is occluded, so the 2D recognition model does not see the action as a whole. However, SMPL meshes are amodal, so the hand is still present, which boosts the probability of predicting the action label for closing the door."
+ ], + "image_footnote": [], + "bbox": [ + 80, + 191, + 236, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/91d7f4af78483263ba8204c1e8f2f3a56a4a491ccc3abb72a2622fcbc0b16d60.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 243, + 191, + 400, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/a0be3df41c898b8c712cb382bde8ee02c1270ed71dbce63b7e372334f7f5f5b0.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 406, + 191, + 563, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/3a1badd50d434728f13b224b79fbff5a84952774802f20019b53760fec7e8037.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 570, + 191, + 725, + 292 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/be1e9142532f97d8df0f7ce60c36363d4f6315613d416e49ccb32d766793c0ec.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 733, + 191, + 888, + 292 + ], + "page_idx": 7 + }, + { + "type": "table", + "img_path": "images/7a21fe4f7c332ed2ecf6365b8692f6db5f9da4bbcff3ef54ff07421d3c251d4f.jpg", + "table_caption": [], + "table_footnote": [], + "table_body": "
Model | OM | PI | PM | mAP
MViT | 32.2 | 41.1 | 58.6 | 40.2
MViT + Tracking | 33.4 | 43.0 | 59.3 | 41.4 (+1.2)
MViT + Tracking + Pose | 34.4 | 43.9 | 59.9 | 42.3 (+0.9)
", + "bbox": [ + 78, + 497, + 475, + 570 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Table 5. Ablation on the main components: We ablate the contribution of tracking and 3D poses using the same detections. First, we only use MViT features over the tracks to evaluate the contribution from tracking. Then we add 3D pose features to study the contribution from 3D pose for action recognition.", + "bbox": [ + 75, + 580, + 470, + 650 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "4.4. Implementation details", + "text_level": 1, + "bbox": [ + 76, + 664, + 290, + 680 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In both the pose model and pose+appearance model, we use the same vanilla transformer architecture [52] with 16 layers and 16 heads. For both models the embedding dimension is 512. We train with 0.4 mask ratio and at test time use the same mask token to in-fill the missing detections. The output token from the transformer is passed to a linear layer to predict the AVA action labels. We pre-train our model on kinetics for 30 epochs with MViT [12] predictions as pseudo-supervision and then fine-tune on AVA with AVA ground truth labels for few epochs. We train our models with AdamW [36] with base learning rate of 0.001 and betas = (0.9, 0.95). We use cosine annealing scheduling with a linear warm-up. For additional details please see the Appendix.", + "bbox": [ + 75, + 689, + 468, + 900 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "5. Conclusion", + "text_level": 1, + "bbox": [ + 500, + 491, + 617, + 506 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this paper, we investigated the benefits of 3D tracking and pose for the task of human action recognition. By leveraging a state-of-the-art method for person tracking, PHALP [43], we trained a transformer model that takes as input tokens the state of the person at every time instance. We investigated two design choices for the content of the token. 
First, when using information about the 3D pose of the person, we outperform previous baselines that rely on pose information for action recognition by $8.2\\mathrm{mAP}$ on the AVA v2.2 dataset. Then, we also proposed fusing the pose information with contextualized appearance information coming from a typical action recognition backbone [12] applied over the tracklet trajectory. With this model, we improved upon the previous state-of-the-art on AVA v2.2 by $2.8\\mathrm{mAP}$ . There are many avenues for future work and further improvements for action recognition. For example, one could achieve better performance for more fine-grained tasks by more expressive 3D reconstruction of the human body (e.g., using the SMPL-X model [40] to capture also the hands), and by explicit modeling of the objects in the scene (potentially by extending the \"tubes\" idea to objects).", + "bbox": [ + 496, + 527, + 890, + 845 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgements: This work was supported by the FAIR-BAIR program as well as ONR MURI (N00014-21-1-2801). We thank Shubham Goel, for helpful discussions.", + "bbox": [ + 498, + 854, + 890, + 900 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "647", + "bbox": [ + 485, + 944, + 509, + 955 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 80, + 90, + 171, + 104 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. ViViT: A video vision transformer. In ICCV, 2021.", + "[2] Fabien Baradel, Thibault Groueix, Philippe Weinzaepfel, Romain Brégier, Yannis Kalantidis, and Grégory Rogez. Leveraging MoCap data for human mesh recovery. In 3DV, 2021.", + "[3] Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. Tracking without bells and whistles. 
In ICCV, 2019.", + "[4] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021.", + "[5] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In ECCV, 2016.", + "[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017.", + "[7] Vasileios Choutas, Lea Müller, Chun-Hao P Huang, Siyu Tang, Dimitrios Tzionas, and Michael J Black. Accurate 3D body shape regression using metric and semantic attributes. In CVPR, 2022.", + "[8] Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, and Cordelia Schmid. PoTion: Pose motion representation for action recognition. In CVPR, 2018.", + "[9] Piotr Dollár, Vincent Rabaud, Garrison Cottrell, and Serge Belongie. Behavior recognition via sparse spatio-temporal features. In 2005 IEEE international workshop on visual surveillance and performance evaluation of tracking and surveillance, 2005.", + "[10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 2021.", + "[11] Alexei A Efros, Alexander C Berg, Greg Mori, and Jitendra Malik. Recognizing action at a distance. In ICCV, 2003.", + "[12] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021.", + "[13] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017.", + "[14] Christoph Feichtenhofer. X3D: Expanding architectures for efficient video recognition. 
In CVPR, 2020.", + "[15] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. In NeurIPS, 2022.", + "[16] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, 2019.", + "[17] Yutong Feng, Jianwen Jiang, Ziyuan Huang, Zhiwu Qing, Xiang Wang, Shiwei Zhang, Mingqian Tang, and Yue Gao. Relation modeling in spatio-temporal action localization. arXiv preprint arXiv:2106.08061, 2021." + ], + "bbox": [ + 78, + 114, + 467, + 909 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[18] Georgia Gkioxari and Jitendra Malik. Finding action tubes. In CVPR, 2015.", + "[19] Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, and Jitendra Malik. Humans in 4D: Reconstructing and tracking humans with transformers. arXiv preprint (forthcoming), 2023.", + "[20] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2018.", + "[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.", + "[22] Gunnar Johansson. Visual perception of biological motion and a model for its analysis. Perception & psychophysics, 14(2):201-211, 1973.", + "[23] Angjoo Kanazawa, Michael J Black, David W Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In CVPR, 2018.", + "[24] Angjoo Kanazawa, Jason Y Zhang, Panna Felsen, and Jitendra Malik. Learning 3D human dynamics from video. In CVPR, 2019.", + "[25] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. 
arXiv preprint arXiv:1705.06950, 2017.", + "[26] Machiel Keestra. Understanding human action. Integrating meanings, mechanisms, causes, and contexts. 2015.", + "[27] Alexander Klaser, Marcin Marszalek, and Cordelia Schmid. A spatio-temporal descriptor based on 3D-gradients. In BMVC, 2008.", + "[28] Muhammed Kocabas, Nikos Athanasiou, and Michael J Black. VIBE: Video inference for human body pose and shape estimation. In CVPR, 2020.", + "[29] Muhammed Kocabas, Chun-Hao P Huang, Otmar Hilliges, and Michael J Black. PARE: Part attention regressor for 3D human body estimation. In ICCV, 2021.", + "[30] Muhammed Kocabas, Chun-Hao P Huang, Joachim Tesch, Lea Muller, Otmar Hilliges, and Michael J Black. SPEC: Seeing people in the wild with an estimated camera. In ICCV, 2021.", + "[31] Nikos Kolotouros, Georgios Pavlakos, Michael J Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In ICCV, 2019.", + "[32] Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In ICCV, 2021.", + "[33] Ang Li, Meghan Thotakuri, David A Ross, João Carreira, Alexander Vostrikov, and Andrew Zisserman. Theava-kinetics localized human actions video dataset. arXiv preprint arXiv:2005.00214, 2020.", + "[34] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. MViTv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022." + ], + "bbox": [ + 501, + 92, + 890, + 900 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "648", + "bbox": [ + 486, + 945, + 509, + 955 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[35] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multiperson linear model. 
ACM Transactions on Graphics (TOG), 34(6):1-16, 2015.", + "[36] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.", + "[37] Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixe, and Christoph Feichtenhofer. TrackFormer: Multi-object tracking with transformers. In CVPR, 2022.", + "[38] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In ICCV, 2021.", + "[39] Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, and Hongsheng Li. Actor-context-actor relation network for spatio-temporal action localization. In CVPR, 2021.", + "[40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3D hands, face, and body from a single image. In CVPR, 2019.", + "[41] Georgios Pavlakos, Jitendra Malik, and Angjoo Kanazawa. Human mesh recovery from multiple shots. In CVPR, 2022.", + "[42] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people with 3D representations. In NeurIPS, 2021.", + "[43] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people by predicting 3D appearance, location and pose. In CVPR, 2022.", + "[44] Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J Guibas. HuMoR: 3D human motion model for robust pose estimation. In ICCV, 2021.", + "[45] Anshul Shah, Shlok Mishra, Ankan Bansal, Jun-Cheng Chen, Rama Chellappa, and Abhinav Shrivastava. Pose and joint-aware action recognition. In WACV, 2022.", + "[46] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. NIPS, 2014.", + "[47] Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, and Cordelia Schmid. Relational action forecasting. In CVPR, 2019.", + "[48] Graham W Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. 
Convolutional learning of spatio-temporal features. In ECCV, 2010.", + "[49] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In NeurIPS, 2022." + ], + "bbox": [ + 78, + 90, + 468, + 732 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[50] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015.", + "[51] Gül Varol, Ivan Laptev, Cordelia Schmid, and Andrew Zisserman. Synthetic humans for action recognition from unseen viewpoints. International Journal of Computer Vision, 129(7):2264-2287, 2021.", + "[52] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.", + "[53] Heng Wang, A. Klaser, C. Schmid, and Cheng-Lin Liu. Action recognition by dense trajectories. In CVPR, 2011.", + "[54] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, 2013.", + "[55] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.", + "[56] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In ECCV, 2018.", + "[57] Zelun Wang and Jyh-Charn Liu. Translating math formula images to latex sequences using deep neural networks with sequence-level training. International Journal on Document Analysis and Recognition (IJDAR), 24(1):63-75, 2021.", + "[58] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. arXiv preprint arXiv:2112.09133, 2021.", + "[59] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. 
In CVPR, 2022.", + "[60] Philippe Weinzaepfel and Grégory Rogez. Mimetics: Towards understanding human actions out of context. *IJCV*, 2021.", + "[61] Chao-Yuan Wu and Philipp Krahenbuhl. Towards long-form video understanding. In CVPR, 2021.", + "[62] Yuliang Xiu, Jiefeng Li, Haoyu Wang, Yinghong Fang, and Cewu Lu. Pose Flow: Efficient online pose tracking. In BMVC, 2018.", + "[63] An Yan, Yali Wang, Zhifeng Li, and Yu Qiao. PA3D: Pose-action 3D machine for video recognition. In CVPR, 2019.", + "[64] Hongwen Zhang, Yating Tian, Xinchi Zhou, Wanli Ouyang, Yebin Liu, Limin Wang, and Zhenan Sun. PyMAF: 3D human pose and shape regression with pyramidal mesh alignment feedback loop. In ICCV, 2021.", + "[65] Yubo Zhang, Pavel Tokmakov, Martial Hebert, and Cordelia Schmid. A structured model for action detection. In CVPR, 2019." + ], + "bbox": [ + 501, + 92, + 890, + 743 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "649", + "bbox": [ + 486, + 945, + 509, + 955 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_model.json b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..22ba81dd752022b907bf4f1faafe95acd11b3a17 --- /dev/null +++ b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_model.json @@ -0,0 +1,2079 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.107, + 0.003, + 0.182, + 0.043 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.237, + 0.001, + 0.808, + 0.047 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. 
Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.123, + 0.131, + 0.847, + 0.153 + ], + "angle": 0, + "content": "On the Benefits of 3D Pose and Tracking for Human Action Recognition" + }, + { + "type": "text", + "bbox": [ + 0.055, + 0.18, + 0.93, + 0.217 + ], + "angle": 0, + "content": "Jathushan Rajasegaran\\(^{1,2}\\), Georgios Pavlakos\\(^{1}\\), Angjoo Kanazawa\\(^{1}\\), Christoph Feichtenhofer\\(^{2}\\), Jitendra Malik\\(^{1,2}\\), \\(^{1}\\)UC Berkeley, \\(^{2}\\)Meta AI, FAIR" + }, + { + "type": "title", + "bbox": [ + 0.235, + 0.252, + 0.314, + 0.267 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.284, + 0.473, + 0.542 + ], + "angle": 0, + "content": "In this work we study the benefits of using tracking and 3D poses for action recognition. To achieve this, we take the Lagrangian view on analysing actions over a trajectory of human motion rather than at a fixed point in space. Taking this stand allows us to use the tracklets of people to predict their actions. In this spirit, first we show the benefits of using 3D pose to infer actions, and study person-person interactions. Subsequently, we propose a Lagrangian Action Recognition model by fusing 3D pose and contextualized appearance over tracklets. To this end, our method achieves state-of-the-art performance on the AVA v2.2 dataset on both pose only settings and on standard benchmark settings. When reasoning about the action using only pose cues, our pose model achieves \\(+10.0\\) mAP gain over the corresponding state-of-the-art while our fused model has a gain of \\(+2.8\\) mAP over the best state-of-the-art model. Code and results are available at: https://brjathu.github.io/LART" + }, + { + "type": "title", + "bbox": [ + 0.078, + 0.572, + 0.208, + 0.587 + ], + "angle": 0, + "content": "1. 
Introduction" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.598, + 0.47, + 0.795 + ], + "angle": 0, + "content": "In fluid mechanics, it is traditional to distinguish between the Lagrangian and Eulerian specifications of the flow field. Quoting the Wikipedia entry, \"Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.\"" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.796, + 0.47, + 0.903 + ], + "angle": 0, + "content": "These concepts are very relevant to how we analyze videos of human activity. In the Eulerian viewpoint, we would focus on feature vectors at particular locations, either \\((x,y)\\) or \\((x,y,z)\\), and consider evolution over time while staying fixed in space at the location. In the Lagrangian viewpoint, we would track, say a person over space-time and track the associated feature vector across space-time." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.253, + 0.895, + 0.465 + ], + "angle": 0, + "content": "While the older literature for activity recognition e.g., [11, 18, 53] typically adopted the Lagrangian viewpoint, ever since the advent of neural networks based on 3D spacetime convolution, e.g., [50], the Eulerian viewpoint became standard in state-of-the-art approaches such as SlowFast Networks [16]. Even after the switch to transformer architectures [12, 52] the Eulerian viewpoint has persisted. 
This is noteworthy because the tokenization step for transformers gives us an opportunity to freshly examine the question, "What should be the counterparts of words in video analysis?" Dosovitskiy et al. [10] suggested that image patches were a good choice, and the continuation of that idea to video suggests that spatiotemporal cuboids would work for video as well."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.471,
+ 0.895,
+ 0.835
+ ],
+ "angle": 0,
+ "content": "On the contrary, in this work we take the Lagrangian viewpoint for analysing human actions. This means that we reason about the trajectory of an entity over time. Here, the entity can be low-level, e.g., a pixel or a patch, or high-level, e.g., a person. Since we are interested in understanding human actions, we choose to operate on the level of "humans-as-entities". To this end, we develop a method that processes trajectories of people in video and uses them to recognize their action. We recover these trajectories by capitalizing on the recently introduced 3D tracking method PHALP [43] and HMR 2.0 [19]. As shown in Figure 1, PHALP recovers person tracklets from video by lifting people to 3D, which means that we can both link people over a series of frames and get access to their 3D representation. Given these 3D representations of people (i.e., 3D pose and 3D location), we use them as the basic content of each token. This allows us to build a flexible system where the model, here a transformer, takes as input tokens corresponding to the different people, with access to their identity, 3D pose and 3D location. Having the 3D location of the people in the scene allows us to learn interactions among people. Our model relying on this tokenization can benefit from 3D tracking and pose, and outperforms previous baselines that only have access to pose information [8, 45]."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.84,
+ 0.895,
+ 0.903
+ ],
+ "angle": 0,
+ "content": "While the change in human pose over time is a strong signal, some actions require more contextual information about the appearance and the scene. Therefore, it is important to also fuse pose with appearance information from"
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.486,
+ 0.946,
+ 0.512,
+ 0.957
+ ],
+ "angle": 0,
+ "content": "640"
+ }
+ ],
+ [
+ {
+ "type": "image",
+ "bbox": [
+ 0.139,
+ 0.09,
+ 0.833,
+ 0.398
+ ],
+ "angle": 0,
+ "content": null
+ },
+ {
+ "type": "image_caption",
+ "bbox": [
+ 0.076,
+ 0.409,
+ 0.896,
+ 0.482
+ ],
+ "angle": 0,
+ "content": "Figure 1. Overview of our method: Given a video, we first track every person using a tracking algorithm (e.g. PHALP [43]). Then every detection in the track is tokenized to represent a human-centric vector (e.g. pose, appearance). To represent 3D pose we use SMPL [35] parameters and the estimated 3D location of the person; for contextualized appearance we use MViT [12] (pre-trained with MaskFeat [59]) features. Then we train a transformer network to predict actions using the tracks. Note that at the second frame we do not have a detection for the blue person; at such places we pass a mask token to in-fill the missing detections."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.507,
+ 0.471,
+ 0.702
+ ],
+ "angle": 0,
+ "content": "humans and the scene, coming directly from pixels. To achieve this, we also use state-of-the-art models for action recognition [12, 34] to provide complementary information from the contextualized appearance of the humans and the scene in a Lagrangian framework. Specifically, we densely run such models over the trajectory of each tracklet and record the contextualized appearance features localized around the tracklet. 
As a result, our tokens include explicit information about the 3D pose of the people and densely sampled appearance information from the pixels, processed by action recognition backbones [12]. Our complete system outperforms the previous state of the art by a large margin of \\(2.8\\mathrm{mAP}\\), on the challenging AVA v2.2 dataset." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.704, + 0.471, + 0.9 + ], + "angle": 0, + "content": "Overall, our main contribution is introducing an approach that highlights the effects of tracking and 3D poses for human action understanding. To this end, in this work, we propose a Lagrangian Action Recognition with Tracking (LART) approach, which utilizes the tracklets of people to predict their action. Our baseline version leverages tracklet trajectories and 3D pose representations of the people in the video to outperform previous baselines utilizing pose information. Moreover, we demonstrate that the proposed Lagrangian viewpoint of action recognition can be easily combined with traditional baselines that rely only on appearance and context from the video, achieving significant gains compared to the dominant paradigm." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.505, + 0.642, + 0.523 + ], + "angle": 0, + "content": "2. Related Work" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.536, + 0.892, + 0.717 + ], + "angle": 0, + "content": "Recovering humans in 3D: A lot of the related work has been using the SMPL human body model [35] for recovering 3D humans from images. Initially, the related methods were relying on optimization-based approaches, like SMPLify [5], but since the introduction of the HMR [23], there has been a lot of interest in approaches that can directly regress SMPL parameters [35] given the corresponding image of the person as input. 
Many follow-up works have improved upon the original model, estimating more accurate pose [31] or shape [7], increasing the robustness of the model [41], incorporating side information [30,32], investigating different architecture choices [29,64], etc." + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.719, + 0.892, + 0.87 + ], + "angle": 0, + "content": "While these works have been improving the basic single-frame reconstruction performance, there have been parallel efforts toward the temporal reconstruction of humans from video input. The HMMR model [24] uses a convolutional temporal encoder on HMR image features [23] to reconstruct humans over time. Other approaches have investigated recurrent [28] or transformer [41] encoders. Instead of performing the temporal pooling on image features, recent work has been using the SMPL parameters directly for the temporal encoding [2, 44]." + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.871, + 0.892, + 0.901 + ], + "angle": 0, + "content": "One assumption of the temporal methods in the above category is that they have access to tracklets of people in" + }, + { + "type": "page_number", + "bbox": [ + 0.487, + 0.946, + 0.51, + 0.957 + ], + "angle": 0, + "content": "641" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.076, + 0.092, + 0.473, + 0.273 + ], + "angle": 0, + "content": "the video. This means that they rely on tracking methods, most of which operate on the 2D domain [3, 13, 37, 62] and are responsible for introducing many errors. To overcome this limitation, recent work [42, 43] has capitalized on the advances of 3D human recovery to perform more robust identity tracking from video. More specifically, the PHALP method of Rajasegaran et al. [43] allows for robust tracking in a variety of settings, including in the wild videos and movies. Here, we make use of the PHALP system to discover long tracklets from large-scale video datasets. 
This allows us to train our method for recognizing actions from 3D pose input."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.294,
+ 0.473,
+ 0.566
+ ],
+ "angle": 0,
+ "content": "Action Recognition: Earlier works on action recognition relied on hand-crafted features such as HOG3D [27], Cuboids [9] and Dense Trajectories [53, 54]. After the introduction of deep learning, 3D convolutional networks became the main backbone for action recognition [6, 48, 50]. However, 3D convolutional models treat space and time in a similar fashion; to overcome this issue, two-stream architectures were proposed [46]. In two-stream networks, one pathway is dedicated to motion features, usually taking optical flow as input. This requirement of computing optical flow makes it hard to learn these models in an end-to-end manner. On the other hand, SlowFast networks [16] only use video streams but at different frame rates, allowing them to learn motion features from the fast pathway and lateral connections to fuse spatial and temporal information. Recently, with the advancements in transformer architectures, there has been a lot of work on action recognition using transformer backbones [1, 4, 12, 38]."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.582,
+ 0.472,
+ 0.75
+ ],
+ "angle": 0,
+ "content": "While the above-mentioned works mainly focus on the model architectures for action recognition, another line of work investigates more fine-grained relationships between actors and objects [47, 55, 56, 65]. Non-local networks [55] use self-attention to reason about entities in the video and learn long-range relationships. ACAR [39] models actor-context-actor relationships by first extracting actor-context features through pooling over the bounding box region and then learning higher-level relationships between actors. Compared to ACAR, our method does not explicitly encode any priors about actor relationships, except their track identity."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.765,
+ 0.471,
+ 0.903
+ ],
+ "angle": 0,
+ "content": "Along these lines, some works use the human pose to understand the action [8, 45, 51, 60, 63]. PoTion [8] uses a keypoint-based pose representation by colorizing the temporal dependencies. Recently, JMRN [45] proposed a joint-motion re-weighting network to learn joint trajectories separately and then fuse this information to reason about inter-joint motion. While these works rely on 2D keypoints and design specific architectures to encode the representation, we use the more explicit 3D SMPL parameters."
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.5,
+ 0.09,
+ 0.593,
+ 0.107
+ ],
+ "angle": 0,
+ "content": "3. Method"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.115,
+ 0.892,
+ 0.312
+ ],
+ "angle": 0,
+ "content": "Understanding human action requires interpreting multiple sources of information [26]. These include head and gaze direction, human body pose and dynamics, interactions with objects or other humans or animals, the scene as a whole, the activity context (e.g. immediately preceding actions by self or others), and more. Some actions can be recognized by pose and pose dynamics alone, as demonstrated by Johansson et al. [22], who showed that people are remarkably good at recognizing walking, running, and crawling just by looking at moving point-lights. However, interpreting complex actions requires reasoning with multiple sources of information, e.g. to recognize that someone is slicing a tomato with a knife, it helps to see the knife and the tomato."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.312,
+ 0.892,
+ 0.448
+ ],
+ "angle": 0,
+ "content": "There are many design choices that can be made here. Should one use "disentangled" representations, with elements such as pose, interacted objects, etc., represented explicitly in a modular way? 
Or should one just input video pixels into a large-capacity neural network model and rely on it to figure out what is discriminatively useful? In this paper, we study two options: a) human pose reconstructed from an HMR model [19, 23] and b) human pose with contextual appearance as computed by an MViT model [12]."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.448,
+ 0.892,
+ 0.569
+ ],
+ "angle": 0,
+ "content": "Given a video with \( T \) frames, we first track every person using PHALP [43], which gives us a unique identity for each person over time. Let a person \( i \in [1,2,3,\dots,n] \) at time \( t \in [1,2,3,\dots,T] \) be represented by a person-vector \( \mathbf{H}_t^i \). Here \( n \) is the number of people in a frame. This person-vector is constructed such that it contains a human-centric representation \( \mathbf{P}_t^i \) and some contextualized appearance information \( \mathbf{Q}_t^i \)."
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.638,
+ 0.573,
+ 0.892,
+ 0.591
+ ],
+ "angle": 0,
+ "content": "\[\n\mathbf {H} _ {t} ^ {i} = \left\{\mathbf {P} _ {t} ^ {i}, \mathbf {Q} _ {t} ^ {i} \right\}. \tag {1}\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.591,
+ 0.892,
+ 0.652
+ ],
+ "angle": 0,
+ "content": "Since we know the identity of each person from the tracking, we can create an action-tube [18] representation for each person. Let \(\Phi_{i}\) be the action-tube of person \(i\); this action-tube then contains all the person-vectors over time."
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.596,
+ 0.656,
+ 0.892,
+ 0.674
+ ],
+ "angle": 0,
+ "content": "\[\n\boldsymbol {\Phi} _ {i} = \left\{\mathbf {H} _ {1} ^ {i}, \mathbf {H} _ {2} ^ {i}, \mathbf {H} _ {3} ^ {i}, \dots , \mathbf {H} _ {T} ^ {i} \right\}. 
\tag {2}\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.674,
+ 0.892,
+ 0.765
+ ],
+ "angle": 0,
+ "content": "Given this representation, we train our model LART to predict actions from action-tubes (tracks). In this work we use a vanilla transformer [52] to model the network \(\mathcal{F}\); this allows us to mask attention if the track is not continuous due to occlusions, failed detections, etc. Please see the Appendix for more details on the network architecture."
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.575,
+ 0.769,
+ 0.892,
+ 0.788
+ ],
+ "angle": 0,
+ "content": "\[\n\mathcal {F} \left(\Phi_ {1}, \Phi_ {2}, \dots , \Phi_ {i}, \dots , \Phi_ {n}; \Theta\right) = \widehat {Y _ {i}}. \tag {3}\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.794,
+ 0.892,
+ 0.9
+ ],
+ "angle": 0,
+ "content": "Here, \(\Theta\) denotes the model parameters, \(\widehat{Y}_i = \{y_1^i,y_2^i,y_3^i,\dots,y_T^i\}\) are the predictions for a track, and \(y_{t}^{i}\) is the predicted action of track \(i\) at time \(t\). The model can use the actions of others for reasoning when predicting the action for the person-of-interest \(i\). Finally, we use a binary cross-entropy loss to train our model and measure mean Average Precision (mAP) for evaluation."
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.486,
+ 0.945,
+ 0.512,
+ 0.957
+ ],
+ "angle": 0,
+ "content": "642"
+ }
+ ],
+ [
+ {
+ "type": "title",
+ "bbox": [
+ 0.078,
+ 0.091,
+ 0.367,
+ 0.108
+ ],
+ "angle": 0,
+ "content": "3.1. Action Recognition with 3D Pose"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.114,
+ 0.473,
+ 0.401
+ ],
+ "angle": 0,
+ "content": "In this section, we study the effect of human-centric pose representation on action recognition. To do that, we consider a person-vector that only contains the pose representation, \(\mathbf{H}_t^i = \{\mathbf{P}_t^i\}\). 
While, \\(\\mathbf{P}_t^i\\) can in general contain any information about the person, in this work train a pose only model LART-pose which uses 3D body pose of the person based on the SMPL [35] model. This includes the joint angles of the different body parts, \\(\\theta_t^i \\in \\mathcal{R}^{23 \\times 3 \\times 3}\\) and is considered as an amodal representation, which means we make a prediction about all body parts, even those that are potentially occluded/truncated in the image. Since the global body orientation \\(\\psi_t^i \\in \\mathcal{R}^{3 \\times 3}\\) is represented separately from the body pose, our body representation is invariant to the specific viewpoint of the video. In addition to the 3D pose, we also use the 3D location \\(L_t^i\\) of the person in the camera view (which is also predicted by the PHALP model [43]). This makes it possible to consider the relative location of the different people in 3D. More specifically, each person is represented as," + }, + { + "type": "equation", + "bbox": [ + 0.188, + 0.409, + 0.47, + 0.428 + ], + "angle": 0, + "content": "\\[\n\\mathbf {H} _ {t} ^ {i} = \\mathbf {P} _ {t} ^ {i} = \\left\\{\\theta_ {t} ^ {i}, \\psi_ {t} ^ {i}, L _ {t} ^ {i} \\right\\}. \\tag {4}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.436, + 0.473, + 0.587 + ], + "angle": 0, + "content": "Let us assume that there are \\(n\\) tracklets \\(\\{\\Phi_1, \\Phi_2, \\Phi_3, \\dots, \\Phi_n\\}\\) in a given video. To study the action of the tracklet \\(i\\), we consider that person \\(i\\) as the person-of-interest and having access to other tracklets can be helpful to interpret the person-person interactions for person \\(i\\). Therefore, to predict the action for all \\(n\\) tracklets we need to make \\(n\\) number of forward passes. If person \\(i\\) is the person-of-interest, then we randomly sample \\(N - 1\\) number of other tracklets and pass it to the model \\(\\mathcal{F}(\\cdot; \\Theta)\\) along with the \\(\\Phi_i\\)." 
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.169,
+ 0.594,
+ 0.47,
+ 0.613
+ ],
+ "angle": 0,
+ "content": "\[\n\mathcal {F} \left(\Phi_ {i}, \left\{\Phi_ {j} \mid j \in [ N ] \right\}; \Theta\right) = \widehat {Y} _ {i} \tag {5}\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.621,
+ 0.47,
+ 0.759
+ ],
+ "angle": 0,
+ "content": "Therefore, the model sees \(N\) tracklets and predicts the actions for the main (person-of-interest) track. To do this, we first tokenize all the person-vectors by passing them through a linear layer that projects them into a \(d\)-dimensional space, \(f_{proj}(\mathbf{H}_t^i)\in \mathcal{R}^d\). Afterward, we add positional embeddings for a) time and b) tracklet-id. For time and tracklet-id we use 2D sine and cosine functions as positional encoding [57], assigning person \(i\) as the zero\(^{\mathrm{th}}\) track, while the rest of the tracklets use tracklet-ids \(\{1,2,3,\dots,N - 1\}\)."
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.208,
+ 0.766,
+ 0.42,
+ 0.784
+ ],
+ "angle": 0,
+ "content": "\[\nP E (t, i, 2 r) = \sin (t / 1 0 0 0 0 ^ {4 r / d})\n\]"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.178,
+ 0.787,
+ 0.42,
+ 0.804
+ ],
+ "angle": 0,
+ "content": "\[\nP E (t, i, 2 r + 1) = \cos (t / 1 0 0 0 0 ^ {4 r / d})\n\]"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.158,
+ 0.809,
+ 0.42,
+ 0.826
+ ],
+ "angle": 0,
+ "content": "\[\nP E (t, i, 2 s + D / 2) = \sin (i / 1 0 0 0 0 ^ {4 s / d})\n\]"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.128,
+ 0.83,
+ 0.42,
+ 0.847
+ ],
+ "angle": 0,
+ "content": "\[\nP E (t, i, 2 s + D / 2 + 1) = \cos (i / 1 0 0 0 0 ^ {4 s / d})\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.855,
+ 0.47,
+ 0.9
+ ],
+ "angle": 0,
+ "content": "Here, \( t \) is the time index, \( i \) is the track-id, \( r, s \in [0, d/4) \) index the embedding dimensions, and \( d \) is the dimension of the token."
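The four positional-encoding equations can be written compactly as below. This is a NumPy sketch under the assumption that the token width d is divisible by 4, so that the sin/cos pairs exactly tile the time half and the track-id half of the vector:

```python
import numpy as np

def positional_encoding(t: int, i: int, d: int) -> np.ndarray:
    """2D sine/cosine encoding: the first d/2 channels encode the
    time index t, the remaining d/2 channels encode the track-id i."""
    assert d % 4 == 0
    half = d // 2
    freq = 10000.0 ** (4.0 * np.arange(half // 2) / d)
    pe = np.empty(d)
    pe[0:half:2] = np.sin(t / freq)        # PE(t, i, 2r)
    pe[1:half:2] = np.cos(t / freq)        # PE(t, i, 2r + 1)
    pe[half::2] = np.sin(i / freq)         # PE(t, i, 2s + D/2)
    pe[half + 1::2] = np.cos(i / freq)     # PE(t, i, 2s + D/2 + 1)
    return pe
```

Splitting the channels this way lets the transformer distinguish "same person, different time" from "same time, different person" directly from the token.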
+ }, + { + "type": "text", + "bbox": [ + 0.499, + 0.092, + 0.892, + 0.138 + ], + "angle": 0, + "content": "After adding the position encodings for time and identity, each person token is passed to the transformer network. The \\((t + i\\times N)^{th}\\) token is given by," + }, + { + "type": "equation", + "bbox": [ + 0.561, + 0.149, + 0.892, + 0.166 + ], + "angle": 0, + "content": "\\[\n\\operatorname {t o k e n} _ {(t + i \\times N)} = f _ {\\text {p r o j}} \\left(\\mathcal {H} _ {t} ^ {i}\\right) + P E (t, i,:) \\tag {6}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.177, + 0.892, + 0.327 + ], + "angle": 0, + "content": "Our person of interest formulation would allow us to use other actors in the scene to make better predictions for the main actor. When there are multiple actors involved in the scene, knowing one person's action could help in predicting another's action. Some actions are correlated among the actors in a scene (e.g. dancing, fighting), while in some cases, people will be performing reciprocal actions (e.g. speaking and listening). In these cases knowing one person's action would help in predicting the other person's action with more confidence." + }, + { + "type": "title", + "bbox": [ + 0.499, + 0.337, + 0.833, + 0.354 + ], + "angle": 0, + "content": "3.2. Actions from Appearance and 3D Pose" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.36, + 0.892, + 0.663 + ], + "angle": 0, + "content": "While human pose plays a key role in understanding actions, more complex actions require reasoning about the scene and context. Therefore, in this section, we investigate the benefits of combining pose and contextual appearance features for action recognition and train model LART to benefit from 3D poses and appearance over a trajectory. For every track, we run a 2D action recognition model (i.e. MaskFeat [59] pretrained MViT [12]) at a frequency \\( f_{s} \\) and store the feature vectors before the classification layer. 
For example, consider a track \( \Phi_{i} \), which has detections \( \{D_1^i,D_2^i,D_3^i,\dots ,D_T^i\} \). We get the predictions from the 2D action recognition model for the detections at \( \{t,t + f_{FPS} / f_s,t + 2f_{FPS} / f_s,\ldots \} \). Here, \( f_{FPS} \) is the frame rate of the video. Since these action recognition models capture temporal information to some extent, \( \mathbf{Q}_{t - f_{FPS} / 2f_s}^i \) to \( \mathbf{Q}_{t + f_{FPS} / 2f_s}^i \) share the same appearance features. Let us assume we have a pre-trained action recognition model \( \mathcal{A} \) that takes a sequence of frames and a detection bounding box at the mid-frame; then the appearance feature \( \mathbf{U}_t^i \) used for \( \mathbf{Q}_t^i \) is given by:"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.618,
+ 0.673,
+ 0.776,
+ 0.693
+ ],
+ "angle": 0,
+ "content": "\[\n\mathcal {A} \big (D _ {t} ^ {i}, \{I \} _ {t - M} ^ {t + M} \big) = \mathbf {U} _ {t} ^ {i}\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.703,
+ 0.892,
+ 0.901
+ ],
+ "angle": 0,
+ "content": "Here, \(\{I\}_{t - M}^{t + M}\) is the sequence of image frames, \(2M\) is the number of frames seen by the action recognition model, and \(\mathbf{U}_t^i\) is the contextual appearance vector. Note that, since the action recognition models look at the whole image frame, this representation implicitly contains information about the scene, objects, and movements. However, we argue that the human-centric pose representation carries information orthogonal to feature vectors taken from convolutional or transformer networks. For example, the 3D pose is a geometric representation while \(\mathbf{U}_t^i\) is more photometric; the SMPL parameters carry stronger priors about human actions/pose and are amodal, while the appearance representation is learned from raw pixels. 
Now that we have both"
+ },
+ {
+ "type": "page_number",
+ "bbox": [
+ 0.486,
+ 0.946,
+ 0.512,
+ 0.957
+ ],
+ "angle": 0,
+ "content": "643"
+ }
+ ],
+ [
+ {
+ "type": "text",
+ "bbox": [
+ 0.077,
+ 0.092,
+ 0.47,
+ 0.123
+ ],
+ "angle": 0,
+ "content": "the pose-centric representation and the appearance-centric representation in the person vector \(\mathbf{H}_t^i\):"
+ },
+ {
+ "type": "equation",
+ "bbox": [
+ 0.192,
+ 0.133,
+ 0.469,
+ 0.171
+ ],
+ "angle": 0,
+ "content": "\[\n\mathbf {H} _ {t} ^ {i} = \left\{\underbrace {\theta_ {t} ^ {i}, \psi_ {t} ^ {i}, L _ {t} ^ {i}} _ {\mathbf {P} _ {t} ^ {i}}, \underbrace {\mathbf {U} _ {t} ^ {i}} _ {\mathbf {Q} _ {t} ^ {i}} \right\}. \tag {7}\n\]"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.077,
+ 0.182,
+ 0.469,
+ 0.258
+ ],
+ "angle": 0,
+ "content": "So, each human is represented by their 3D pose, 3D location, and their appearance and scene context. We follow the same procedure as discussed in the previous section to add positional encodings and train a transformer network \(\mathcal{F}(\Theta)\) with pose+appearance tokens."
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.077,
+ 0.271,
+ 0.21,
+ 0.289
+ ],
+ "angle": 0,
+ "content": "4. Experiments"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.297,
+ 0.469,
+ 0.476
+ ],
+ "angle": 0,
+ "content": "We evaluate our method on AVA [20] in various settings. AVA [20] poses an action detection problem, where people are localized in a spatio-temporal volume with action labels. It provides annotations at 1Hz, and each actor has 1 pose action, up to 3 person-object interaction (optional), and up to 3 person-person interaction (optional) labels. For the evaluations, we use AVA v2.2 annotations and follow the standard protocol as in [20]. We measure mean average precision (mAP) on 60 classes with a frame-level IoU of 0.5. 
In addition to that, we also evaluate our method on the AVA-Kinetics [33] dataset, which provides spatio-temporally localized annotations for Kinetics videos."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.479,
+ 0.469,
+ 0.688
+ ],
+ "angle": 0,
+ "content": "We use PHALP [43] to track people in the AVA dataset. PHALP falls into the tracking-by-detection paradigm and uses Mask R-CNN [21] for detecting people in the scene. At the training stage, where bounding box annotations are available only at \(1\mathrm{Hz}\), we use Mask R-CNN detections for the in-between frames and use the ground-truth bounding box every 30 frames. For validation, we use the bounding boxes used by [39] and follow the same strategy to complete the tracking. We ran PHALP on Kinetics-400 [25] and AVA [20]. Together, the two datasets contain over 1 million tracks with an average length of 3.4s and over 100 million detections. In total, we use about 900 hours of tracks, which is about 40x more than previous works [24]. See Table 1 for more details."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.076,
+ 0.689,
+ 0.469,
+ 0.901
+ ],
+ "angle": 0,
+ "content": "Tracking allows us to supervise actions densely. Since we have tokens for each actor at every frame, we can supervise every token by assuming the human action remains the same in a 1 sec window [20]. First, we pre-train our model on the Kinetics-400 [25] and AVA [20] datasets. We run MViT [12] (pretrained with MaskFeat [58]) at \(1\mathrm{Hz}\) on every track in Kinetics-400 to generate pseudo ground-truth annotations. Every 30 frames share the same annotations, and we train our model end-to-end with a binary cross-entropy loss. Then we fine-tune the pretrained model, with tracks generated by us, on AVA ground-truth action labels. At inference, we take a track, randomly sample \(N - 1\) other tracks from the same video, and pass them through the model. 
We take an average pooling on the" + }, + { + "type": "table", + "bbox": [ + 0.554, + 0.094, + 0.842, + 0.175 + ], + "angle": 0, + "content": "
Dataset# clips# tracks#bbox
AVA [20]184k320k32.9m
Kinetics [25]217k686k71.4m
Total400k1m104.3m
"
+ },
+ {
+ "type": "table_caption",
+ "bbox": [
+ 0.499,
+ 0.185,
+ 0.892,
+ 0.242
+ ],
+ "angle": 0,
+ "content": "Table 1. Tracking statistics on AVA [20] and Kinetics-400 [25]: We report the number of tracks returned by PHALP [43] for each dataset (m: million). This results in over 900 hours of tracks, with a mean length of 3.4 seconds (with overlaps)."
+ },
+ {
+ "type": "table",
+ "bbox": [
+ 0.508,
+ 0.254,
+ 0.887,
+ 0.349
+ ],
+ "angle": 0,
+ "content": "
ModelPoseOMPIPMmAP
PoTion [8]2D---13.1
JMRN [45]2D7.117.227.614.1
LART-poseSMPL11.924.645.822.3
LART-poseSMPL+Joints13.325.948.724.1
"
+ },
+ {
+ "type": "table_caption",
+ "bbox": [
+ 0.499,
+ 0.358,
+ 0.892,
+ 0.44
+ ],
+ "angle": 0,
+ "content": "Table 2. AVA Action Recognition with 3D pose: We evaluate the human-centric representation on the AVA dataset [20]. Here \(OM\): Object Manipulation, \(PI\): Person Interactions, and \(PM\): Person Movement. LART-pose can achieve about \(80\%\) of the performance of MViT models on person-movement tasks without looking at scene information."
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.499,
+ 0.461,
+ 0.891,
+ 0.522
+ ],
+ "angle": 0,
+ "content": "prediction head over a sequence of 12 frames, and evaluate at the center frame. For more details on model architecture, hyper-parameters, and training procedure/training time, please see Appendix A1."
+ },
+ {
+ "type": "title",
+ "bbox": [
+ 0.5,
+ 0.531,
+ 0.787,
+ 0.546
+ ],
+ "angle": 0,
+ "content": "4.1. Action Recognition with 3D Pose"
+ },
+ {
+ "type": "text",
+ "bbox": [
+ 0.498,
+ 0.554,
+ 0.892,
+ 0.779
+ ],
+ "angle": 0,
+ "content": "In this section, we discuss the performance of our method on AVA action recognition when using 3D pose cues, corresponding to Section 3.1. We train our 3D pose model, LART-pose, on the Kinetics-400 and AVA datasets. For Kinetics-400 tracks, we use MaskFeat [59] pseudo-ground-truth labels, and for AVA tracks, we train with ground-truth labels. We train a single-person model and a multi-person model to study the interactions of a person over time, and person-person interactions. Our method achieves \(24.1\mathrm{mAP}\) in the multi-person \((N = 5)\) setting (see Table 2). While this is well below the state-of-the-art performance, this is the first time a 3D model achieves more than \(15.6\mathrm{mAP}\) on the AVA dataset. Note that the first reported performance on AVA was \(15.6\mathrm{mAP}\) [20], and our 3D pose model is already above this baseline."
+ }, + { + "type": "text", + "bbox": [ + 0.498, + 0.78, + 0.892, + 0.901 + ], + "angle": 0, + "content": "We evaluate the performance of our method on three AVA sub-categories (Object Manipulation \\((OM)\\), Person Interactions \\((PI)\\), and Person Movement \\((PM)\\)). For the person-movement task, which includes actions such as running, standing, and sitting etc., the 3D pose model achieves \\(48.7\\mathrm{mAP}\\). In contrast, MaskFeat performance in this sub-category is \\(58.6\\mathrm{mAP}\\). This shows that the 3D pose model can perform about \\(80\\%\\) good as a strong state-of-the-art" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.956 + ], + "angle": 0, + "content": "644" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.385, + 0.091, + 0.619, + 0.105 + ], + "angle": 0, + "content": "AVA2.2 Performance with 3D pose" + }, + { + "type": "image", + "bbox": [ + 0.09, + 0.103, + 0.882, + 0.312 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.327, + 0.894, + 0.397 + ], + "angle": 0, + "content": "Figure 2. Class-wise performance on AVA: We show the performance of JMRN [45] and LART-pose on 60 AVA classes (average precision and relative gain). For pose based classes such as standing, sitting, and walking our 3D pose model can achieve above \\(60\\mathrm{mAP}\\) average precision performance by only looking at the 3D poses over time. By modeling multiple trajectories as input our model can understand the interactions among people. For example, activities such as dancing \\((+30.1\\%)\\), martial art \\((+19.8\\%)\\) and hugging \\((+62.1\\%)\\) have large relative gains over state-of-the-art pose only model. We only plot the gains if it is above or below \\(1\\mathrm{mAP}\\)." + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.424, + 0.47, + 0.573 + ], + "angle": 0, + "content": "model. 
On the person-person interaction category, our multi-person model achieves a gain of \\(+2.4\\mathrm{mAP}\\) compared to the single-person model, showing that the multi-person model was able to capture the person-person interactions. As shown in Fig. 2, for person-person interaction classes such as dancing, fighting, lifting a person, and handshaking, the multi-person model performs much better than the current state-of-the-art pose-only models. For example, in dancing the multi-person model gains \\(+39.8\\mathrm{mAP}\\), and in hugging the relative gain is over \\(+200\\%\\)." }, { "type": "text", "bbox": [ 0.076, 0.576, 0.47, 0.696 ], "angle": 0, "content": "On the other hand, object manipulation has the lowest score among these three tasks. Since we do not model objects explicitly, the model has no information about which object is being manipulated and how it is being associated with the person. However, since some tasks have a unique pose when interacting with objects, such as answering a phone or carrying an object, knowing the pose would help in identifying the action, which results in 13.3 mAP." }, { "type": "title", "bbox": [ 0.077, 0.71, 0.411, 0.727 ], "angle": 0, "content": "4.2. Actions from Appearance and 3D Pose" }, { "type": "text", "bbox": [ 0.076, 0.735, 0.471, 0.903 ], "angle": 0, "content": "While the 3D pose model can reach about \\(50\\%\\) of the performance of the state-of-the-art methods, it does not reason about the scene context. To model this, we concatenate the human-centric 3D representation with feature vectors from MaskFeat [59] as discussed in Section 3.2. MaskFeat uses MViTv2 [34] as the backbone and learns a strong representation of the scene and contextualized appearance. First, we pretrain this model on the Kinetics-400 [25] and AVA [20] datasets, using pseudo-ground-truth labels. 
Then, we fine-tune this model on AVA tracks using the ground-truth action annotation." + }, { "type": "text", "bbox": [ 0.498, 0.424, 0.892, 0.603 ], "angle": 0, "content": "In Table 3, we compare our method with other state-of-the-art methods. Overall, our method has a gain of \\(+2.8\\) mAP compared to Video MAE [15, 49]. In addition, if we train with extra annotations from AVA-Kinetics, our method achieves \\(42.3\\mathrm{mAP}\\). Figure 3 shows the class-wise performance of our method compared to MaskFeat [58]. Our method improves performance on 56 of the 60 classes. For some classes (e.g. fighting, hugging, climbing) our method improves the performance by more than \\(+5\\mathrm{mAP}\\). In Table 4, we evaluate our method on the AVA-Kinetics [33] dataset. Compared to the previous state-of-the-art methods, our method has a gain of \\(+1.5\\mathrm{mAP}\\)." }, { "type": "text", "bbox": [ 0.498, 0.605, 0.892, 0.741 ], "angle": 0, "content": "In Figure 4, we show qualitative results from MViT [12] and our method. As shown in the figure, having explicit access to the tracks of everyone in the scene allows us to make more confident predictions for actions like hugging and fighting, where it is easy to interpret close interactions. In addition to that, some actions like riding a horse and climbing can benefit from having access to explicit 3D poses over time. Finally, the amodal nature of 3D meshes also allows us to make better predictions during occlusions." }, { "type": "title", "bbox": [ 0.5, 0.752, 0.707, 0.769 ], "angle": 0, "content": "4.3. Ablation Experiments" }, { "type": "text", "bbox": [ 0.498, 0.78, 0.892, 0.901 ], "angle": 0, "content": "Effect of tracking: Current works on action recognition do not explicitly associate people over time. They only use the mid-frame bounding box to predict the action. 
For example, when a person is running across the scene from left to right, a feature volume cropped at the mid-frame bounding box is unlikely to contain all the information about the person. However, if we can track this person we could simply know their exact position over time and" + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.511, + 0.957 + ], + "angle": 0, + "content": "645" + } + ], + [ + { + "type": "image_caption", + "bbox": [ + 0.388, + 0.09, + 0.623, + 0.102 + ], + "angle": 0, + "content": "AVA2.2 Performance with 3D pose" + }, + { + "type": "image", + "bbox": [ + 0.086, + 0.106, + 0.877, + 0.314 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.079, + 0.326, + 0.896, + 0.384 + ], + "angle": 0, + "content": "Figure 3. Comparison with State-of-the-art methods: We show class-level performance (average precision and relative gain) of MViT [12] (pretrained on MaskFeat [59]) and ours. Our methods achieve better performance compared to MViT on over 50 classes out of 60 classes. Especially, for actions like running, fighting, hugging, and sleeping etc., our method achieves over \\(+5\\) mAP. This shows the benefit of having access to explicit tracks and 3D poses for action recognition. We only plot the gains if it is above or below 1 mAP." + }, + { + "type": "table", + "bbox": [ + 0.079, + 0.406, + 0.468, + 0.657 + ], + "angle": 0, + "content": "
Model | Pretrain | mAP
SlowFast R101, 8×8 [16] | K400 | 23.8
MViTv1-B, 64×3 [12] | K400 | 27.3
SlowFast 16×8 +NL [16] | K400 | 27.5
X3D-XL [14] | K400 | 27.4
MViTv1-B-24, 32×3 [12] | K600 | 28.7
Object Transformer [61] | K600 | 31.0
ACAR R101, 8×8 +NL [39] | K600 | 31.4
ACAR R101, 8×8 +NL [39] | K700 | 33.3
MViT-L↑312, 40×3 [34] | IN-21K+K400 | 31.6
MaskFeat [59] | K400 | 37.5
MaskFeat [59] | K600 | 38.8
Video MAE [15, 49] | K600 | 39.3
Video MAE [15, 49] | K400 | 39.5
LART | K400 | 42.3 (+2.8)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.079, + 0.672, + 0.472, + 0.729 + ], + "angle": 0, + "content": "Table 3. Comparison with state-of-the-art methods on AVA 2.2:. Our model uses features from MaskFeat [59] with full crop inference. Compared to Video MAE [15,49] our method achieves a gain of \\(+2.8\\) mAP." + }, + { + "type": "text", + "bbox": [ + 0.079, + 0.745, + 0.47, + 0.775 + ], + "angle": 0, + "content": "that would give more localized information to the model to predict the action." + }, + { + "type": "text", + "bbox": [ + 0.079, + 0.78, + 0.472, + 0.901 + ], + "angle": 0, + "content": "To this end, first, we evaluate MaskFeat [58] with the same detection bounding boxes [39] used in our evaluations, and it results in \\(40.2\\mathrm{mAP}\\). With this being the baseline for our system, we train a model which only uses MaskFeat features as input, but over time. This way we can measure the effect of tracking in action recognition. Unsurprisingly, as shown in Table 5 when training MaskFeat with tracking, the model performs \\(+1.2\\mathrm{mAP}\\) better than the baseline. This" + }, + { + "type": "table", + "bbox": [ + 0.617, + 0.406, + 0.777, + 0.513 + ], + "angle": 0, + "content": "
Model | mAP
SlowFast [16] | 32.98
ACAR [39] | 36.36
RM [17] | 37.34
LART | 38.91
" + }, + { + "type": "table_caption", + "bbox": [ + 0.5, + 0.528, + 0.894, + 0.584 + ], + "angle": 0, + "content": "Table 4. Performance on AVA-Kinetics Dataset. We evaluate the performance of our model on AVA-Kinetics [33] using a single model (no ensembles) and compare the performance with previous state-of-the-art single models." + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.614, + 0.892, + 0.72 + ], + "angle": 0, + "content": "clearly shows that the use of tracking is helpful in action recognition. Specifically, having access to the tracks help to localize a person over time, which in return provides a second order signal of how joint angles changes over time. In addition, knowing the identity of each person also gives a discriminative signal between people, which is helpful for learning interactions between people." + }, + { + "type": "text", + "bbox": [ + 0.5, + 0.735, + 0.893, + 0.901 + ], + "angle": 0, + "content": "Effect of Pose: The second contribution from our work is to use 3D pose information for action recognition. As discussed in Section 4.1 by only using 3D pose, we can achieve \\(24.1\\mathrm{mAP}\\) on AVA dataset. While it is hard to measure the exact contribution of 3D pose and 2D features, we compare our method with a model trained with only MaskFeat and tracking, where the only difference is the use of 3D pose. As shown in Table 5, the addition of 3D pose gives a gain of \\(+0.8\\mathrm{mAP}\\). While this is a relatively small gain compared to the use of tracking, we believe with more robust and accurate 3D pose systems, this can be improved." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.946, + 0.512, + 0.957 + ], + "angle": 0, + "content": "646" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.081, + 0.09, + 0.237, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.244, + 0.091, + 0.401, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.407, + 0.091, + 0.564, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.57, + 0.091, + 0.727, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.734, + 0.091, + 0.89, + 0.19 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.081, + 0.192, + 0.237, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.244, + 0.192, + 0.401, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.408, + 0.192, + 0.564, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.571, + 0.192, + 0.727, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.734, + 0.192, + 0.89, + 0.293 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.076, + 0.304, + 0.895, + 0.466 + ], + "angle": 0, + "content": "Figure 4. Qualitative Results: We show the predictions from MViT [12] and our model on validation samples from AVA v2.2. The person with the colored mesh indicates the person-of-interest for which we recognise the action and the one with the gray mesh indicates the supporting actors. The first two columns demonstrate the benefits of having access to the action-tubes of other people for action prediction. In the first column, the orange person is very close to the other person with hugging posture, which makes it easy to predict hugging with higher probability. 
Similarly, in the second column, the explicit interaction between the multiple people, and knowing that others are also fighting, increases the confidence for the fighting action for the green person over the 2D recognition model. The third and the fourth columns show the benefit of explicitly modeling the 3D pose over time (using tracks) for action recognition: the yellow person is in a riding pose, and the purple person is looking upwards with their legs on a vertical plane. The last column indicates the benefit of representing people with an amodal representation. Here the hand of the blue person is occluded, so the 2D recognition model does not see the action as a whole. However, SMPL meshes are amodal, therefore the hand is still present, which boosts the probability of predicting the action label for closing the door." }, { "type": "table", "bbox": [ 0.079, 0.498, 0.477, 0.571 ], "angle": 0, "content": "
Model | OM | PI | PM | mAP
MViT | 32.2 | 41.1 | 58.6 | 40.2
MViT + Tracking | 33.4 | 43.0 | 59.3 | 41.4 (+1.2)
MViT + Tracking + Pose | 34.4 | 43.9 | 59.9 | 42.3 (+0.9)
" + }, + { + "type": "table_caption", + "bbox": [ + 0.076, + 0.581, + 0.471, + 0.651 + ], + "angle": 0, + "content": "Table 5. Ablation on the main components: We ablate the contribution of tracking and 3D poses using the same detections. First, we only use MViT features over the tracks to evaluate the contribution from tracking. Then we add 3D pose features to study the contribution from 3D pose for action recognition." + }, + { + "type": "title", + "bbox": [ + 0.077, + 0.665, + 0.291, + 0.681 + ], + "angle": 0, + "content": "4.4. Implementation details" + }, + { + "type": "text", + "bbox": [ + 0.076, + 0.69, + 0.47, + 0.901 + ], + "angle": 0, + "content": "In both the pose model and pose+appearance model, we use the same vanilla transformer architecture [52] with 16 layers and 16 heads. For both models the embedding dimension is 512. We train with 0.4 mask ratio and at test time use the same mask token to in-fill the missing detections. The output token from the transformer is passed to a linear layer to predict the AVA action labels. We pre-train our model on kinetics for 30 epochs with MViT [12] predictions as pseudo-supervision and then fine-tune on AVA with AVA ground truth labels for few epochs. We train our models with AdamW [36] with base learning rate of 0.001 and betas = (0.9, 0.95). We use cosine annealing scheduling with a linear warm-up. For additional details please see the Appendix." + }, + { + "type": "title", + "bbox": [ + 0.5, + 0.492, + 0.619, + 0.507 + ], + "angle": 0, + "content": "5. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.498, + 0.528, + 0.892, + 0.846 + ], + "angle": 0, + "content": "In this paper, we investigated the benefits of 3D tracking and pose for the task of human action recognition. By leveraging a state-of-the-art method for person tracking, PHALP [43], we trained a transformer model that takes as input tokens the state of the person at every time instance. 
We investigated two design choices for the content of the token. First, when using information about the 3D pose of the person, we outperform previous baselines that rely on pose information for action recognition by \\(8.2\\mathrm{mAP}\\) on the AVA v2.2 dataset. Then, we also proposed fusing the pose information with contextualized appearance information coming from a typical action recognition backbone [12] applied over the tracklet trajectory. With this model, we improved upon the previous state-of-the-art on AVA v2.2 by \\(2.8\\mathrm{mAP}\\). There are many avenues for future work and further improvements for action recognition. For example, one could achieve better performance for more fine-grained tasks by more expressive 3D reconstruction of the human body (e.g., using the SMPL-X model [40] to capture also the hands), and by explicit modeling of the objects in the scene (potentially by extending the \"tubes\" idea to objects)." + }, + { + "type": "text", + "bbox": [ + 0.499, + 0.856, + 0.892, + 0.901 + ], + "angle": 0, + "content": "Acknowledgements: This work was supported by the FAIR-BAIR program as well as ONR MURI (N00014-21-1-2801). We thank Shubham Goel, for helpful discussions." + }, + { + "type": "page_number", + "bbox": [ + 0.486, + 0.945, + 0.511, + 0.956 + ], + "angle": 0, + "content": "647" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.081, + 0.091, + 0.173, + 0.105 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.115, + 0.468, + 0.156 + ], + "angle": 0, + "content": "[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. ViViT: A video vision transformer. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.157, + 0.468, + 0.212 + ], + "angle": 0, + "content": "[2] Fabien Baradel, Thibault Groueix, Philippe Weinzaepfel, Romain Brégier, Yannis Kalantidis, and Grégory Rogez. Leveraging MoCap data for human mesh recovery. 
In 3DV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.213, + 0.468, + 0.242 + ], + "angle": 0, + "content": "[3] Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. Tracking without bells and whistles. In ICCV, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.242, + 0.468, + 0.282 + ], + "angle": 0, + "content": "[4] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.283, + 0.468, + 0.338 + ], + "angle": 0, + "content": "[5] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In ECCV, 2016." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.339, + 0.468, + 0.38 + ], + "angle": 0, + "content": "[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.381, + 0.468, + 0.436 + ], + "angle": 0, + "content": "[7] Vasileios Choutas, Lea Müller, Chun-Hao P Huang, Siyu Tang, Dimitrios Tzionas, and Michael J Black. Accurate 3D body shape regression using metric and semantic attributes. In CVPR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.437, + 0.468, + 0.478 + ], + "angle": 0, + "content": "[8] Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, and Cordelia Schmid. PoTion: Pose motion representation for action recognition. In CVPR, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.087, + 0.479, + 0.468, + 0.548 + ], + "angle": 0, + "content": "[9] Piotr Dollár, Vincent Rabaud, Garrison Cottrell, and Serge Belongie. Behavior recognition via sparse spatio-temporal features. In 2005 IEEE international workshop on visual surveillance and performance evaluation of tracking and surveillance, 2005." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.549, + 0.468, + 0.631 + ], + "angle": 0, + "content": "[10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.632, + 0.468, + 0.66 + ], + "angle": 0, + "content": "[11] Alexei A Efros, Alexander C Berg, Greg Mori, and Jitendra Malik. Recognizing action at a distance. In ICCV, 2003." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.66, + 0.468, + 0.702 + ], + "angle": 0, + "content": "[12] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.702, + 0.468, + 0.743 + ], + "angle": 0, + "content": "[13] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.744, + 0.468, + 0.772 + ], + "angle": 0, + "content": "[14] Christoph Feichtenhofer. X3D: Expanding architectures for efficient video recognition. In CVPR, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.772, + 0.468, + 0.813 + ], + "angle": 0, + "content": "[15] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. In NeurIPS, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.814, + 0.468, + 0.855 + ], + "angle": 0, + "content": "[16] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, 2019." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.857, + 0.468, + 0.91 + ], + "angle": 0, + "content": "[17] Yutong Feng, Jianwen Jiang, Ziyuan Huang, Zhiwu Qing, Xiang Wang, Shiwei Zhang, Mingqian Tang, and Yue Gao. Relation modeling in spatio-temporal action localization. arXiv preprint arXiv:2106.08061, 2021." + }, + { + "type": "list", + "bbox": [ + 0.08, + 0.115, + 0.468, + 0.91 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.093, + 0.89, + 0.119 + ], + "angle": 0, + "content": "[18] Georgia Gkioxari and Jitendra Malik. Finding action tubes. In CVPR, 2015." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.121, + 0.892, + 0.178 + ], + "angle": 0, + "content": "[19] Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, and Jitendra Malik. Humans in 4D: Reconstructing and tracking humans with transformers. arXiv preprint (forthcoming), 2023." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.179, + 0.892, + 0.26 + ], + "angle": 0, + "content": "[20] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.261, + 0.892, + 0.289 + ], + "angle": 0, + "content": "[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.29, + 0.892, + 0.332 + ], + "angle": 0, + "content": "[22] Gunnar Johansson. Visual perception of biological motion and a model for its analysis. Perception & psychophysics, 14(2):201-211, 1973." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.334, + 0.892, + 0.375 + ], + "angle": 0, + "content": "[23] Angjoo Kanazawa, Michael J Black, David W Jacobs, and Jitendra Malik. 
End-to-end recovery of human shape and pose. In CVPR, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.376, + 0.892, + 0.417 + ], + "angle": 0, + "content": "[24] Angjoo Kanazawa, Jason Y Zhang, Panna Felsen, and Jitendra Malik. Learning 3D human dynamics from video. In CVPR, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.419, + 0.892, + 0.488 + ], + "angle": 0, + "content": "[25] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.49, + 0.892, + 0.518 + ], + "angle": 0, + "content": "[26] Machiel Keestra. Understanding human action. Integrating meanings, mechanisms, causes, and contexts. 2015." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.519, + 0.892, + 0.56 + ], + "angle": 0, + "content": "[27] Alexander Klaser, Marcin Marszalek, and Cordelia Schmid. A spatio-temporal descriptor based on 3D-gradients. In BMVC, 2008." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.561, + 0.892, + 0.603 + ], + "angle": 0, + "content": "[28] Muhammed Kocabas, Nikos Athanasiou, and Michael J Black. VIBE: Video inference for human body pose and shape estimation. In CVPR, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.604, + 0.892, + 0.646 + ], + "angle": 0, + "content": "[29] Muhammed Kocabas, Chun-Hao P Huang, Otmar Hilliges, and Michael J Black. PARE: Part attention regressor for 3D human body estimation. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.647, + 0.892, + 0.702 + ], + "angle": 0, + "content": "[30] Muhammed Kocabas, Chun-Hao P Huang, Joachim Tesch, Lea Muller, Otmar Hilliges, and Michael J Black. SPEC: Seeing people in the wild with an estimated camera. In ICCV, 2021." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.703, + 0.892, + 0.746 + ], + "angle": 0, + "content": "[31] Nikos Kolotouros, Georgios Pavlakos, Michael J Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In ICCV, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.747, + 0.892, + 0.788 + ], + "angle": 0, + "content": "[32] Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.789, + 0.892, + 0.845 + ], + "angle": 0, + "content": "[33] Ang Li, Meghan Thotakuri, David A Ross, João Carreira, Alexander Vostrikov, and Andrew Zisserman. Theava-kinetics localized human actions video dataset. arXiv preprint arXiv:2005.00214, 2020." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.846, + 0.892, + 0.901 + ], + "angle": 0, + "content": "[34] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. MViTv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022." + }, + { + "type": "list", + "bbox": [ + 0.503, + 0.093, + 0.892, + 0.901 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.487, + 0.946, + 0.511, + 0.956 + ], + "angle": 0, + "content": "648" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.08, + 0.092, + 0.47, + 0.147 + ], + "angle": 0, + "content": "[35] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multiperson linear model. ACM Transactions on Graphics (TOG), 34(6):1-16, 2015." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.149, + 0.468, + 0.177 + ], + "angle": 0, + "content": "[36] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.179, + 0.469, + 0.219 + ], + "angle": 0, + "content": "[37] Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixe, and Christoph Feichtenhofer. TrackFormer: Multi-object tracking with transformers. In CVPR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.221, + 0.469, + 0.248 + ], + "angle": 0, + "content": "[38] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.25, + 0.469, + 0.291 + ], + "angle": 0, + "content": "[39] Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, and Hongsheng Li. Actor-context-actor relation network for spatio-temporal action localization. In CVPR, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.293, + 0.469, + 0.348 + ], + "angle": 0, + "content": "[40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3D hands, face, and body from a single image. In CVPR, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.35, + 0.468, + 0.377 + ], + "angle": 0, + "content": "[41] Georgios Pavlakos, Jitendra Malik, and Angjoo Kanazawa. Human mesh recovery from multiple shots. In CVPR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.379, + 0.468, + 0.419 + ], + "angle": 0, + "content": "[42] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people with 3D representations. In NeurIPS, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.421, + 0.469, + 0.475 + ], + "angle": 0, + "content": "[43] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people by predicting 3D appearance, location and pose. In CVPR, 2022." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.478, + 0.469, + 0.518 + ], + "angle": 0, + "content": "[44] Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J Guibas. HuMoR: 3D human motion model for robust pose estimation. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.521, + 0.469, + 0.561 + ], + "angle": 0, + "content": "[45] Anshul Shah, Shlok Mishra, Ankan Bansal, Jun-Cheng Chen, Rama Chellappa, and Abhinav Shrivastava. Pose and joint-aware action recognition. In WACV, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.564, + 0.469, + 0.604 + ], + "angle": 0, + "content": "[46] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. NIPS, 2014." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.606, + 0.469, + 0.647 + ], + "angle": 0, + "content": "[47] Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, and Cordelia Schmid. Relational action forecasting. In CVPR, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.649, + 0.468, + 0.689 + ], + "angle": 0, + "content": "[48] Graham W Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. Convolutional learning of spatio-temporal features. In ECCV, 2010." + }, + { + "type": "ref_text", + "bbox": [ + 0.081, + 0.691, + 0.468, + 0.733 + ], + "angle": 0, + "content": "[49] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In NeurIPS, 2022." + }, + { + "type": "list", + "bbox": [ + 0.08, + 0.092, + 0.47, + 0.733 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.093, + 0.892, + 0.133 + ], + "angle": 0, + "content": "[50] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015." 
+ }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.136, + 0.892, + 0.189 + ], + "angle": 0, + "content": "[51] Gül Varol, Ivan Laptev, Cordelia Schmid, and Andrew Zisserman. Synthetic humans for action recognition from unseen viewpoints. International Journal of Computer Vision, 129(7):2264-2287, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.192, + 0.892, + 0.233 + ], + "angle": 0, + "content": "[52] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.235, + 0.892, + 0.261 + ], + "angle": 0, + "content": "[53] Heng Wang, A. Klaser, C. Schmid, and Cheng-Lin Liu. Action recognition by dense trajectories. In CVPR, 2011." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.262, + 0.892, + 0.288 + ], + "angle": 0, + "content": "[54] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, 2013." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.29, + 0.892, + 0.317 + ], + "angle": 0, + "content": "[55] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.32, + 0.892, + 0.346 + ], + "angle": 0, + "content": "[56] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In ECCV, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.349, + 0.892, + 0.403 + ], + "angle": 0, + "content": "[57] Zelun Wang and Jyh-Charn Liu. Translating math formula images to latex sequences using deep neural networks with sequence-level training. International Journal on Document Analysis and Recognition (IJDAR), 24(1):63-75, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.405, + 0.892, + 0.459 + ], + "angle": 0, + "content": "[58] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. 
Masked feature prediction for self-supervised visual pre-training. arXiv preprint arXiv:2112.09133, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.462, + 0.892, + 0.503 + ], + "angle": 0, + "content": "[59] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In CVPR, 2022." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.505, + 0.892, + 0.544 + ], + "angle": 0, + "content": "[60] Philippe Weinzaepfel and Grégory Rogez. Mimetics: Towards understanding human actions out of context. *IJCV*, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.547, + 0.892, + 0.573 + ], + "angle": 0, + "content": "[61] Chao-Yuan Wu and Philipp Krahenbuhl. Towards long-form video understanding. In CVPR, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.576, + 0.892, + 0.616 + ], + "angle": 0, + "content": "[62] Yuliang Xiu, Jiefeng Li, Haoyu Wang, Yinghong Fang, and Cewu Lu. Pose Flow: Efficient online pose tracking. In BMVC, 2018." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.619, + 0.892, + 0.646 + ], + "angle": 0, + "content": "[63] An Yan, Yali Wang, Zhifeng Li, and Yu Qiao. PA3D: Pose-action 3D machine for video recognition. In CVPR, 2019." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.648, + 0.892, + 0.702 + ], + "angle": 0, + "content": "[64] Hongwen Zhang, Yating Tian, Xinchi Zhou, Wanli Ouyang, Yebin Liu, Limin Wang, and Zhenan Sun. PyMAF: 3D human pose and shape regression with pyramidal mesh alignment feedback loop. In ICCV, 2021." + }, + { + "type": "ref_text", + "bbox": [ + 0.503, + 0.704, + 0.892, + 0.744 + ], + "angle": 0, + "content": "[65] Yubo Zhang, Pavel Tokmakov, Martial Hebert, and Cordelia Schmid. A structured model for action detection. In CVPR, 2019." 
+ }, + { + "type": "list", + "bbox": [ + 0.503, + 0.093, + 0.892, + 0.744 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.487, + 0.946, + 0.511, + 0.956 + ], + "angle": 0, + "content": "649" + } + ] +] \ No newline at end of file diff --git a/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_origin.pdf b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..079eabb135aa2027d34eec08daee92e4b73815c4 --- /dev/null +++ b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/0c904778-ffe0-4595-8277-48b7638fc1b2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7601495cd756154206e444c35e393f8523ac40209d357baea3735100a0d74951 +size 4491045 diff --git a/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/full.md b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f7354ba0257abdcef1639d446e043f908e1d2e63 --- /dev/null +++ b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/full.md @@ -0,0 +1,298 @@ +# On the Benefits of 3D Pose and Tracking for Human Action Recognition + +Jathushan Rajasegaran $^{1,2}$ , Georgios Pavlakos $^{1}$ , Angjoo Kanazawa $^{1}$ , Christoph Feichtenhofer $^{2}$ , Jitendra Malik $^{1,2}$ , $^{1}$ UC Berkeley, $^{2}$ Meta AI, FAIR + +# Abstract + +In this work we study the benefits of using tracking and 3D poses for action recognition. To achieve this, we take the Lagrangian view on analysing actions over a trajectory of human motion rather than at a fixed point in space. Taking this stand allows us to use the tracklets of people to predict their actions. 
In this spirit, first we show the benefits of using 3D pose to infer actions, and study person-person interactions. Subsequently, we propose a Lagrangian Action Recognition model by fusing 3D pose and contextualized appearance over tracklets. To this end, our method achieves state-of-the-art performance on the AVA v2.2 dataset on both pose-only settings and on standard benchmark settings. When reasoning about the action using only pose cues, our pose model achieves a $+10.0$ mAP gain over the corresponding state-of-the-art, while our fused model has a gain of $+2.8$ mAP over the best state-of-the-art model. Code and results are available at: https://brjathu.github.io/LART

# 1. Introduction

In fluid mechanics, it is traditional to distinguish between the Lagrangian and Eulerian specifications of the flow field. Quoting the Wikipedia entry, "Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location."

These concepts are very relevant to how we analyze videos of human activity. In the Eulerian viewpoint, we would focus on feature vectors at particular locations, either $(x,y)$ or $(x,y,z)$, and consider their evolution over time while staying fixed in space. In the Lagrangian viewpoint, we would track, say, a person over space-time and follow the associated feature vector along that trajectory.
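To make the contrast concrete, here is a minimal toy sketch (our own illustration with made-up shapes, not the paper's implementation): given per-frame feature maps and a person's track, the Eulerian view reads features at one fixed grid location, while the Lagrangian view gathers features along the person's trajectory.

```python
import numpy as np

# Toy per-frame feature maps: T frames, an H x W grid of d-dim features.
T, H, W, d = 8, 16, 16, 32
rng = np.random.default_rng(0)
feats = rng.standard_normal((T, H, W, d))

# A track: the (y, x) grid cell occupied by one person at each frame.
track = [(2, t) for t in range(T)]  # the person moves left to right

# Eulerian: feature evolution at one fixed location (2, 3).
eulerian = feats[:, 2, 3, :]  # shape (T, d)

# Lagrangian: features gathered along the person's trajectory.
lagrangian = np.stack([feats[t, y, x] for t, (y, x) in enumerate(track)])  # (T, d)

assert eulerian.shape == lagrangian.shape == (T, d)
```

Both views yield a (T, d) sequence, but only the Lagrangian one keeps the features attached to the same person as they move.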
While the older literature on activity recognition, e.g., [11, 18, 53], typically adopted the Lagrangian viewpoint, ever since the advent of neural networks based on 3D spacetime convolution, e.g., [50], the Eulerian viewpoint became standard in state-of-the-art approaches such as SlowFast Networks [16]. Even after the switch to transformer architectures [12, 52], the Eulerian viewpoint has persisted. This is noteworthy because the tokenization step for transformers gives us an opportunity to freshly examine the question, "What should be the counterparts of words in video analysis?" Dosovitskiy et al. [10] suggested that image patches were a good choice, and the continuation of that idea to video suggests that spatiotemporal cuboids would work for video as well.

On the contrary, in this work we take the Lagrangian viewpoint for analysing human actions. This means that we reason about the trajectory of an entity over time. Here, the entity can be low-level, e.g., a pixel or a patch, or high-level, e.g., a person. Since we are interested in understanding human actions, we choose to operate on the level of "humans-as-entities". To this end, we develop a method that processes trajectories of people in video and uses them to recognize their action. We recover these trajectories by capitalizing on the recently introduced 3D tracking method PHALP [43] and on HMR 2.0 [19]. As shown in Figure 1, PHALP recovers person tracklets from video by lifting people to 3D, which means that we can both link people over a series of frames and get access to their 3D representation. Given these 3D representations of people (i.e., 3D pose and 3D location), we use them as the basic content of each token. This allows us to build a flexible system where the model, here a transformer, takes as input tokens corresponding to the different people, with access to their identity, 3D pose and 3D location. Having the 3D locations of the people in the scene allows us to model interactions among people.
Our model, relying on this tokenization, can benefit from 3D tracking and pose, and outperforms previous baselines that only have access to pose information [8, 45].

While the change in human pose over time is a strong signal, some actions require more contextual information about the appearance and the scene. Therefore, it is important to also fuse pose with appearance information from

![](images/b709f11f233361974d43ae852845e4e9eeb5342539b37b15be82d1a7009d093e.jpg)
Figure 1. Overview of our method: Given a video, first, we track every person using a tracking algorithm (e.g. PHALP [43]). Then every detection in the track is tokenized to represent a human-centric vector (e.g. pose, appearance). To represent 3D pose we use SMPL [35] parameters and the estimated 3D location of the person; for contextualized appearance we use MViT [12] (pre-trained on MaskFeat [59]) features. Then we train a transformer network to predict actions using the tracks. Note that at the second frame we do not have a detection for the blue person; at such places we pass a mask token to in-fill the missing detections.

humans and the scene, coming directly from pixels. To achieve this, we also use state-of-the-art models for action recognition [12, 34] to provide complementary information from the contextualized appearance of the humans and the scene in a Lagrangian framework. Specifically, we densely run such models over the trajectory of each tracklet and record the contextualized appearance features localized around the tracklet. As a result, our tokens include explicit information about the 3D pose of the people and densely sampled appearance information from the pixels, processed by action recognition backbones [12]. Our complete system outperforms the previous state of the art by a large margin of $2.8$ mAP on the challenging AVA v2.2 dataset.
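This Lagrangian feature gathering can be sketched as follows (hypothetical helper names and a stand-in backbone; the paper's actual backbone is MViT [12]): for each tracklet, the backbone is run at the tracklet's box in every frame where it has a detection.

```python
from typing import Callable, Dict, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2)

def gather_track_features(
    frames: np.ndarray,                 # (T, H, W, 3) video frames
    boxes: Dict[int, Box],              # frame index -> detection box for one tracklet
    backbone: Callable[[np.ndarray, Box], np.ndarray],
) -> List[np.ndarray]:
    """Run a feature backbone at the tracklet's box in every frame it appears."""
    return [backbone(frames[t], box) for t, box in sorted(boxes.items())]

# A stand-in backbone: mean-pool the RGB crop (a real system would use MViT features).
def toy_backbone(frame: np.ndarray, box: Box) -> np.ndarray:
    x1, y1, x2, y2 = box
    return frame[y1:y2, x1:x2].mean(axis=(0, 1))  # (3,)

frames = np.zeros((4, 8, 8, 3))
boxes = {0: (0, 0, 4, 4), 2: (2, 2, 6, 6)}  # detections at frames 0 and 2 only
track_feats = gather_track_features(frames, boxes, toy_backbone)
assert len(track_feats) == 2 and track_feats[0].shape == (3,)
```

The key design choice is that features are indexed by the tracklet, not by a fixed spatial location, so missing detections simply leave gaps that can later be in-filled with mask tokens.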
Overall, our main contribution is introducing an approach that highlights the benefits of tracking and 3D poses for human action understanding. To this end, in this work, we propose a Lagrangian Action Recognition with Tracking (LART) approach, which utilizes the tracklets of people to predict their actions. Our baseline version leverages tracklet trajectories and 3D pose representations of the people in the video to outperform previous baselines utilizing pose information. Moreover, we demonstrate that the proposed Lagrangian viewpoint of action recognition can be easily combined with traditional baselines that rely only on appearance and context from the video, achieving significant gains compared to the dominant paradigm.

# 2. Related Work

Recovering humans in 3D: A lot of the related work has been using the SMPL human body model [35] for recovering 3D humans from images. Initially, the related methods relied on optimization-based approaches, like SMPLify [5], but since the introduction of HMR [23], there has been a lot of interest in approaches that can directly regress SMPL parameters [35] given the corresponding image of the person as input. Many follow-up works have improved upon the original model, estimating more accurate pose [31] or shape [7], increasing the robustness of the model [41], incorporating side information [30, 32], investigating different architecture choices [29, 64], etc.

While these works have been improving the basic single-frame reconstruction performance, there have been parallel efforts toward the temporal reconstruction of humans from video input. The HMMR model [24] uses a convolutional temporal encoder on HMR image features [23] to reconstruct humans over time. Other approaches have investigated recurrent [28] or transformer [41] encoders. Instead of performing the temporal pooling on image features, recent work has been using the SMPL parameters directly for the temporal encoding [2, 44].
One assumption of the temporal methods in the above category is that they have access to tracklets of people in the video. This means that they rely on tracking methods, most of which operate in the 2D domain [3, 13, 37, 62] and are responsible for introducing many errors. To overcome this limitation, recent work [42, 43] has capitalized on the advances of 3D human recovery to perform more robust identity tracking from video. More specifically, the PHALP method of Rajasegaran et al. [43] allows for robust tracking in a variety of settings, including in-the-wild videos and movies. Here, we make use of the PHALP system to discover long tracklets from large-scale video datasets. This allows us to train our method for recognizing actions from 3D pose input.

Action Recognition: Earlier works on action recognition relied on hand-crafted features such as HOG3D [27], Cuboids [9] and Dense Trajectories [53, 54]. After the introduction of deep learning, 3D convolutional networks became the main backbone for action recognition [6, 48, 50]. However, the 3D convolutional models treat both space and time in a similar fashion, so to overcome this issue, two-stream architectures were proposed [46]. In two-stream networks, one pathway is dedicated to motion features, usually taking optical flow as input. This requirement of computing optical flow makes it hard to learn these models in an end-to-end manner. On the other hand, SlowFast networks [16] only use video streams, but at different frame rates, allowing the model to learn motion features from the fast pathway and to fuse spatial and temporal information through lateral connections. Recently, with the advancements in transformer architectures, there has been a lot of work on action recognition using transformer backbones [1, 4, 12, 38].
While the above-mentioned works mainly focus on the model architectures for action recognition, another line of work investigates more fine-grained relationships between actors and objects [47, 55, 56, 65]. Non-local networks [55] use self-attention to reason about entities in the video and learn long-range relationships. ACAR [39] models actor-context-actor relationships by first extracting actor-context features through pooling in the bounding box region and then learning higher-level relationships between actors. Compared to ACAR, our method does not explicitly design any priors about actor relationships, except their track identity.

Along these lines, some works use the human pose to understand the action [8, 45, 51, 60, 63]. PoTion [8] uses a keypoint-based pose representation by colorizing the temporal dependencies. Recently, JMRN [45] proposed a joint-motion re-weighting network to learn joint trajectories separately and then fuse this information to reason about inter-joint motion. While these works rely on 2D keypoints and design specific architectures to encode the representation, we use the more explicit 3D SMPL parameters.

# 3. Method

Understanding human action requires interpreting multiple sources of information [26]. These include head and gaze direction, human body pose and dynamics, interactions with objects or other humans or animals, the scene as a whole, the activity context (e.g. immediately preceding actions by self or others), and more. Some actions can be recognized by pose and pose dynamics alone, as demonstrated by Johansson et al. [22], who showed that people are remarkably good at recognizing walking, running, and crawling just by looking at moving point-lights. However, interpreting complex actions requires reasoning with multiple sources of information, e.g., to recognize that someone is slicing a tomato with a knife, it helps to see the knife and the tomato.

There are many design choices that can be made here.
Should one use "disentangled" representations, with elements such as pose, interacted objects, etc., represented explicitly in a modular way? Or should one just input video pixels into a large-capacity neural network model and rely on it to figure out what is discriminatively useful? In this paper, we study two options: a) human pose reconstructed from an HMR model [19, 23] and b) human pose with contextual appearance as computed by an MViT model [12].

Given a video with $T$ frames, we first track every person using PHALP [43], which gives us a unique identity for each person over time. Let a person $i \in [1,2,3,\dots n]$ at time $t \in [1,2,3,\dots T]$ be represented by a person-vector $\mathbf{H}_t^i$. Here $n$ is the number of people in a frame. This person-vector is constructed such that it contains a human-centric representation $\mathbf{P}_t^i$ and contextualized appearance information $\mathbf{Q}_t^i$.

$$
\mathbf{H}_t^i = \left\{\mathbf{P}_t^i, \mathbf{Q}_t^i \right\}. \tag{1}
$$

Since we know the identity of each person from the tracking, we can create an action-tube [18] representation for each person. Let $\Phi_{i}$ be the action-tube of person $i$; this action-tube contains all the person-vectors over time.

$$
\boldsymbol{\Phi}_i = \left\{\mathbf{H}_1^i, \mathbf{H}_2^i, \mathbf{H}_3^i, \dots, \mathbf{H}_T^i \right\}. \tag{2}
$$

Given this representation, we train our model LART to predict actions from action-tubes (tracks). In this work we use a vanilla transformer [52] to model the network $\mathcal{F}$, and this allows us to mask attention if the track is not continuous due to occlusions, failed detections, etc. Please see the Appendix for more details on the network architecture.

$$
\mathcal{F}\left(\Phi_1, \Phi_2, \dots, \Phi_i, \dots, \Phi_n; \Theta\right) = \widehat{Y}_i.
\tag{3}
$$

Here, $\Theta$ denotes the model parameters, $\widehat{Y}_i = \{y_1^i,y_2^i,y_3^i,\dots,y_T^i\}$ are the predictions for a track, and $y_{t}^{i}$ is the predicted action of track $i$ at time $t$. The model can use the actions of others for reasoning when predicting the action for the person-of-interest $i$. Finally, we use a binary cross-entropy loss to train our model and measure mean Average Precision (mAP) for evaluation.

# 3.1. Action Recognition with 3D Pose

In this section, we study the effect of a human-centric pose representation on action recognition. To do that, we consider a person-vector that only contains the pose representation, $\mathbf{H}_t^i = \{\mathbf{P}_t^i\}$. While $\mathbf{P}_t^i$ can in general contain any information about the person, in this work we train a pose-only model, LART-pose, which uses the 3D body pose of the person based on the SMPL [35] model. This includes the joint angles of the different body parts, $\theta_t^i \in \mathcal{R}^{23 \times 3 \times 3}$, and is an amodal representation, which means we make a prediction about all body parts, even those that are potentially occluded/truncated in the image. Since the global body orientation $\psi_t^i \in \mathcal{R}^{3 \times 3}$ is represented separately from the body pose, our body representation is invariant to the specific viewpoint of the video. In addition to the 3D pose, we also use the 3D location $L_t^i$ of the person in the camera view (which is also predicted by the PHALP model [43]). This makes it possible to consider the relative location of the different people in 3D. More specifically, each person is represented as,

$$
\mathbf{H}_t^i = \mathbf{P}_t^i = \left\{\theta_t^i, \psi_t^i, L_t^i \right\}. \tag{4}
$$

Let us assume that there are $n$ tracklets $\{\Phi_1, \Phi_2, \Phi_3, \dots, \Phi_n\}$ in a given video.
To study the action of tracklet $i$, we consider person $i$ as the person-of-interest; having access to the other tracklets can be helpful for interpreting the person-person interactions of person $i$. Therefore, to predict the actions for all $n$ tracklets we need to make $n$ forward passes. If person $i$ is the person-of-interest, then we randomly sample $N - 1$ other tracklets and pass them to the model $\mathcal{F}(\cdot; \Theta)$ along with $\Phi_i$.

$$
\mathcal{F}\left(\Phi_i, \left\{\Phi_j \mid j \in [N] \right\}; \Theta\right) = \widehat{Y}_i \tag{5}
$$

Therefore, the model sees $N$ tracklets and predicts the action for the main (person-of-interest) track. To do this, we first tokenize all the person-vectors by passing them through a linear layer $f_{proj}$, projecting each into a $d$-dimensional space, $f_{proj}(\mathbf{H}_t^i) \in \mathcal{R}^d$. Afterward, we add positional embeddings for a) time and b) tracklet-id. For time and tracklet-id we use 2D sine and cosine functions as positional encoding [57], assigning person $i$ as the zero$^{\mathrm{th}}$ track, while the rest of the tracklets use tracklet-ids $\{1,2,3,\dots,N - 1\}$.

$$
PE(t, i, 2r) = \sin(t / 10000^{4r/d})
$$

$$
PE(t, i, 2r + 1) = \cos(t / 10000^{4r/d})
$$

$$
PE(t, i, 2s + D/2) = \sin(i / 10000^{4s/d})
$$

$$
PE(t, i, 2s + D/2 + 1) = \cos(i / 10000^{4s/d})
$$

Here, $t$ is the time index, $i$ is the track-id, $r, s \in [0, d/2)$ specify the dimension indices, and $D$ is the dimension of the token.

After adding the position encodings for time and identity, each person token is passed to the transformer network.
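The positional encoding can be computed as in the following NumPy sketch (our own minimal implementation of the formulas above, assuming the time encoding fills the first half of the token dimension and the tracklet-id encoding the second half):

```python
import numpy as np

def positional_encoding(t: int, i: int, d: int) -> np.ndarray:
    """Sine/cosine encoding of time t (first d/2 dims) and track-id i (second d/2 dims)."""
    pe = np.zeros(d)
    half = d // 2
    idx = np.arange(0, half, 2)       # even positions 2r (and 2s) within each half
    freq = 10000.0 ** (2 * idx / d)   # with idx = 2r this matches 10000^{4r/d}
    pe[idx] = np.sin(t / freq)
    pe[idx + 1] = np.cos(t / freq)
    pe[half + idx] = np.sin(i / freq)
    pe[half + idx + 1] = np.cos(i / freq)
    return pe

pe = positional_encoding(t=5, i=2, d=128)
assert pe.shape == (128,)
```

This encoding is added to each projected person-vector, so a token carries both when it occurred and which tracklet it belongs to.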
The $(t + i\times N)^{th}$ token is given by,

$$
\operatorname{token}_{(t + i \times N)} = f_{\text{proj}}\left(\mathbf{H}_t^i\right) + PE(t, i, :) \tag{6}
$$

Our person-of-interest formulation allows us to use the other actors in the scene to make better predictions for the main actor. When there are multiple actors involved in the scene, knowing one person's action could help in predicting another's action. Some actions are correlated among the actors in a scene (e.g. dancing, fighting), while in some cases, people will be performing reciprocal actions (e.g. speaking and listening). In these cases, knowing one person's action helps in predicting the other person's action with more confidence.

# 3.2. Actions from Appearance and 3D Pose

While human pose plays a key role in understanding actions, more complex actions require reasoning about the scene and context. Therefore, in this section, we investigate the benefits of combining pose and contextual appearance features for action recognition and train a model, LART, to benefit from 3D poses and appearance over a trajectory. For every track, we run a 2D action recognition model (i.e. an MViT [12] pretrained with MaskFeat [59]) at a frequency $f_{s}$ and store the feature vectors before the classification layer. For example, consider a track $\Phi_{i}$, which has detections $\{D_1^i,D_2^i,D_3^i,\dots,D_T^i\}$. We get the predictions from the 2D action recognition model for the detections at $\{t, t + f_{FPS}/f_s, t + 2f_{FPS}/f_s, \ldots\}$. Here, $f_{FPS}$ is the frame rate of the video. Since these action recognition models capture temporal information to some extent, $\mathbf{Q}_{t - f_{FPS}/2f_s}^i$ to $\mathbf{Q}_{t + f_{FPS}/2f_s}^i$ share the same appearance features.
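This sharing schedule can be sketched as follows (an illustrative helper of our own, assuming $f_{FPS} = 30$ and $f_s = 1$ Hz): every frame inherits the appearance features of the nearest sampled frame.

```python
def nearest_sample_index(t: int, f_fps: int = 30, f_s: int = 1) -> int:
    """Index of the sampled frame whose appearance features frame t shares.

    Features are computed at frames {0, f_fps/f_s, 2*f_fps/f_s, ...}; all frames
    within half a sampling period of a sample frame reuse its features.
    """
    step = f_fps // f_s
    return round(t / step) * step

# With 30 FPS video sampled at 1 Hz, frame 10 reuses the features from frame 0,
# while frame 29 reuses the features from frame 30.
assert nearest_sample_index(10) == 0
assert nearest_sample_index(29) == 30
```

This avoids running the heavy appearance backbone at every frame while still giving every token an appearance feature.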
Let's assume we have a pre-trained action recognition model $\mathcal{A}$, which takes a sequence of frames and a detection bounding box at the mid-frame; then the feature vector $\mathbf{Q}_t^i$ is given by:

$$
\mathcal{A}\big(D_t^i, \{I\}_{t - M}^{t + M}\big) = \mathbf{U}_t^i
$$

Here, $\{I\}_{t - M}^{t + M}$ is the sequence of image frames, $2M$ is the number of frames seen by the action recognition model, and $\mathbf{U}_t^i$ is the contextual appearance vector. Note that, since the action recognition models look at the whole image frame, this representation implicitly contains information about the scene, objects, and movements. However, we argue that the human-centric pose representation carries information orthogonal to feature vectors taken from convolutional or transformer networks. For example, the 3D pose is a geometric representation while $\mathbf{U}_t^i$ is more photometric; the SMPL parameters carry stronger priors about human actions/pose and are amodal, while the appearance representation is learned from raw pixels. Now we have both the pose-centric representation and the appearance-centric representation in the person vector $\mathbf{H}_t^i$:

$$
\mathbf{H}_t^i = \Big\{\underbrace{\theta_t^i, \psi_t^i, L_t^i}_{\mathbf{P}_t^i}, \underbrace{\mathbf{U}_t^i}_{\mathbf{Q}_t^i}\Big\} \tag{7}
$$

So, each human is represented by their 3D pose, 3D location, and their appearance and scene content. We follow the same procedure as discussed in the previous section to add positional encodings and train a transformer network $\mathcal{F}(\Theta)$ with pose+appearance tokens.

# 4. Experiments

We evaluate our method on AVA [20] in various settings. AVA [20] poses an action detection problem, where people are localized in a spatio-temporal volume with action labels.
It provides annotations at 1Hz, and each actor has one pose action, up to 3 person-object interaction (optional), and up to 3 person-person interaction (optional) labels. For the evaluations, we use AVA v2.2 annotations and follow the standard protocol as in [20]. We measure mean average precision (mAP) on 60 classes with a frame-level IoU of 0.5. In addition to that, we also evaluate our method on the AVA-Kinetics [33] dataset, which provides spatio-temporally localized annotations for Kinetics videos.

We use PHALP [43] to track people in the AVA dataset. PHALP falls into the tracking-by-detection paradigm and uses Mask R-CNN [21] for detecting people in the scene. At the training stage, where the bounding box annotations are available only at $1\mathrm{Hz}$, we use Mask R-CNN detections for the in-between frames and use the ground-truth bounding box every 30 frames. For validation, we use the bounding boxes used by [39] and follow the same strategy to complete the tracking. We ran PHALP on Kinetics-400 [25] and AVA [20]. Together, the two datasets contain over 1 million tracks with an average length of 3.4s and over 100 million detections. In total, we use about 900 hours of tracks, which is about 40x more than previous works [24]. See Table 1 for more details.

Tracking allows us to supervise actions densely. Since we have tokens for each actor at every frame, we can supervise every token by assuming the human action remains the same within a 1-second window [20]. First, we pre-train our model on the Kinetics-400 [25] and AVA [20] datasets. We run MViT [12] (pretrained on MaskFeat [58]) at $1\mathrm{Hz}$ on every track in Kinetics-400 to generate pseudo ground-truth annotations. Every 30 frames share the same annotations, and we train our model end-to-end with a binary cross-entropy loss. Then we fine-tune the pretrained model, with tracks generated by us, on AVA ground-truth action labels.
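This dense supervision amounts to broadcasting each 1 Hz annotation to all frame-level tokens in its window; a minimal sketch (hypothetical shapes, AVA-style multi-label targets):

```python
import numpy as np

def broadcast_labels(labels_1hz: np.ndarray, f_fps: int = 30) -> np.ndarray:
    """Repeat per-second multi-label annotations to every frame-level token.

    labels_1hz: (S, C) binary labels, one row per annotated second.
    returns:    (S * f_fps, C) per-frame targets, assuming the action is
                constant within each 1-second window.
    """
    return np.repeat(labels_1hz, f_fps, axis=0)

labels = np.array([[1, 0, 1], [0, 1, 0]])  # 2 seconds, 3 action classes
targets = broadcast_labels(labels)
assert targets.shape == (60, 3)
```

Each of the resulting rows supervises one per-frame token with the binary cross-entropy loss.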
At inference, we take a track, randomly sample $N - 1$ other tracks from the same video, and pass them through the model. We take an average pooling on the
| Dataset | # clips | # tracks | # bbox |
|---|---|---|---|
| AVA [20] | 184k | 320k | 32.9m |
| Kinetics [25] | 217k | 686k | 71.4m |
| Total | 400k | 1m | 104.3m |
Table 1. Tracking statistics on AVA [20] and Kinetics-400 [25]: We report the number of tracks returned by PHALP [43] for each dataset (m: million). This results in over 900 hours of tracks, with a mean length of 3.4 seconds (with overlaps).
| Model | Pose | OM | PI | PM | mAP |
|---|---|---|---|---|---|
| PoTion [8] | 2D | - | - | - | 13.1 |
| JMRN [45] | 2D | 7.1 | 17.2 | 27.6 | 14.1 |
| LART-pose | SMPL | 11.9 | 24.6 | 45.8 | 22.3 |
| LART-pose | SMPL+Joints | 13.3 | 25.9 | 48.7 | 24.1 |
Table 2. AVA Action Recognition with 3D pose: We evaluate the human-centric representation on the AVA dataset [20]. Here $OM$: Object Manipulation, $PI$: Person Interactions, and $PM$: Person Movement. LART-pose can achieve about $80\%$ of the performance of MViT models on person-movement tasks without looking at scene information.

prediction head over a sequence of 12 frames, and evaluate at the center-frame. For more details on the model architecture, hyper-parameters, and training procedure/training time please see Appendix A1.

# 4.1. Action Recognition with 3D Pose

In this section, we discuss the performance of our method on AVA action recognition when using 3D pose cues, corresponding to Section 3.1. We train our 3D pose model, LART-pose, on the Kinetics-400 and AVA datasets. For Kinetics-400 tracks, we use MaskFeat [59] pseudo-ground-truth labels, and for AVA tracks, we train with ground-truth labels. We train a single-person model and a multi-person model to study the interactions of a person over time, and person-person interactions. Our method achieves $24.1\mathrm{mAP}$ in the multi-person $(N = 5)$ setting (see Table 2). While this is well below the state-of-the-art performance, this is the first time a 3D model achieves more than $15.6\mathrm{mAP}$ on the AVA dataset. Note that the first reported performance on AVA was $15.6\mathrm{mAP}$ [20], and our 3D pose model is already above this baseline.

We evaluate the performance of our method on three AVA sub-categories (Object Manipulation $(OM)$, Person Interactions $(PI)$, and Person Movement $(PM)$). For the person-movement task, which includes actions such as running, standing, and sitting, the 3D pose model achieves $48.7\mathrm{mAP}$. In contrast, the MaskFeat performance in this sub-category is $58.6\mathrm{mAP}$.
This shows that the 3D pose model can perform about $80\%$ as well as a strong state-of-the-art

![](images/85ec193b8f26c4303b14618908eddcd148fcd39b87a2daa70ee76cabeea1df84.jpg)
AVA2.2 Performance with 3D pose
Figure 2. Class-wise performance on AVA: We show the performance of JMRN [45] and LART-pose on 60 AVA classes (average precision and relative gain). For pose-based classes such as standing, sitting, and walking, our 3D pose model can achieve above $60\mathrm{mAP}$ average precision by only looking at the 3D poses over time. By modeling multiple trajectories as input, our model can understand the interactions among people. For example, activities such as dancing $(+30.1\%)$, martial art $(+19.8\%)$ and hugging $(+62.1\%)$ have large relative gains over the state-of-the-art pose-only model. We only plot gains whose magnitude is above $1\mathrm{mAP}$.

model. On the person-person interaction category, our multi-person model achieves a gain of $+2.4\mathrm{mAP}$ compared to the single-person model, showing that the multi-person model was able to capture the person-person interactions. As shown in Fig. 2, for person-person interaction classes such as dancing, fighting, lifting a person, and handshaking, the multi-person model performs much better than the current state-of-the-art pose-only models. For example, in dancing the multi-person model gains $+39.8\mathrm{mAP}$, and in hugging the relative gain is over $+200\%$.

On the other hand, object manipulation has the lowest score among these three tasks. Since we do not model objects explicitly, the model has no information about which object is being manipulated and how it is associated with the person. However, since some tasks have a unique pose when interacting with objects, such as answering a phone or carrying an object, knowing the pose helps in identifying the action, which results in 13.3 mAP.

# 4.2.
Actions from Appearance and 3D Pose

While the 3D pose model captures about $50\%$ of the performance of the state-of-the-art methods, it does not reason about the scene context. To model this, we concatenate the human-centric 3D representation with feature vectors from MaskFeat [59], as discussed in Section 3.2. MaskFeat has an MViTv2 [34] backbone and learns a strong representation of the scene and contextualized appearance. First, we pretrain this model on the Kinetics-400 [25] and AVA [20] datasets, using the pseudo-ground-truth labels. Then, we fine-tune this model on AVA tracks using the ground-truth action annotations.

In Table 3 we compare our method with other state-of-the-art methods. Overall, our method has a gain of $+2.8$ mAP compared to Video MAE [15, 49]. In addition, if we train with extra annotations from AVA-Kinetics, our method achieves $42.3\mathrm{mAP}$. Figure 3 shows the class-wise performance of our method compared to MaskFeat [58]. Our method improves the performance on 56 of the 60 classes. For some classes (e.g. fighting, hugging, climbing) our method improves the performance by more than $+5\mathrm{mAP}$. In Table 4 we evaluate our method on the AVA-Kinetics [33] dataset. Compared to the previous state-of-the-art methods, our method has a gain of $+1.5\mathrm{mAP}$.

In Figure 4, we show qualitative results from MViT [12] and our method. As shown in the figure, having explicit access to the tracks of everyone in the scene allows us to make more confident predictions for actions like hugging and fighting, where it is easy to interpret close interactions. In addition to that, some actions like riding a horse and climbing can benefit from having access to explicit 3D poses over time. Finally, the amodal nature of 3D meshes also allows us to make better predictions during occlusions.

# 4.3.
Ablation Experiments

Effect of tracking: Current work on action recognition does not explicitly associate people over time; it only uses the mid-frame bounding box to predict the action. For example, when a person is running across the scene from left to right, a feature volume cropped at the mid-frame bounding box is unlikely to contain all the information about the person. However, if we track this person, we know their exact position over time and

![](images/05ca790dc3c8448142e73a409b5861ab1d5d07b2cdb5c11ced8e83aea992726e.jpg)
Figure 3. Comparison with State-of-the-art methods: We show class-level performance (average precision and relative gain) of MViT [12] (pre-trained with MaskFeat [59]) and our method. Our method achieves better performance than MViT on over 50 of the 60 classes. In particular, for actions like running, fighting, hugging, and sleeping, our method gains over $+5$ mAP. This shows the benefit of having access to explicit tracks and 3D poses for action recognition. We only plot gains whose magnitude exceeds 1 mAP.
| Model | Pretrain | mAP |
| --- | --- | --- |
| SlowFast R101, 8×8 [16] | K400 | 23.8 |
| MViTv1-B, 64×3 [12] | K400 | 27.3 |
| SlowFast 16×8 +NL [16] | K600 | 27.5 |
| X3D-XL [14] | K600 | 27.4 |
| MViTv1-B-24, 32×3 [12] | K600 | 28.7 |
| Object Transformer [61] | K600 | 31.0 |
| ACAR R101, 8×8 +NL [39] | K600 | 31.4 |
| ACAR R101, 8×8 +NL [39] | K700 | 33.3 |
| MViT-L↑312, 40×3 [34] | IN-21K+K400 | 31.6 |
| MaskFeat [59] | K400 | 37.5 |
| MaskFeat [59] | K600 | 38.8 |
| Video MAE [15, 49] | K600 | 39.3 |
| Video MAE [15, 49] | K400 | 39.5 |
| LART | K400 | 42.3 (+2.8) |
that would give more localized information to the model for predicting the action.

To this end, we first evaluate MaskFeat [58] with the same detection bounding boxes [39] used in our evaluations, which results in $40.2\mathrm{mAP}$. With this as the baseline for our system, we train a model that uses only MaskFeat features as input, but over time. This way we can measure the effect of tracking on action recognition. Unsurprisingly, as shown in Table 5, when training MaskFeat with tracking, the model performs $+1.2\mathrm{mAP}$ better than the baseline. This

Table 3. Comparison with state-of-the-art methods on AVA 2.2: Our model uses features from MaskFeat [59] with full-crop inference. Compared to VideoMAE [15, 49], our method achieves a gain of $+2.8$ mAP.
| Model | mAP |
| --- | --- |
| SlowFast [16] | 32.98 |
| ACAR [39] | 36.36 |
| RM [17] | 37.34 |
| LART | 38.91 |
Table 4. Performance on the AVA-Kinetics dataset: We evaluate the performance of our model on AVA-Kinetics [33] using a single model (no ensembles) and compare it with previous state-of-the-art single models.

clearly shows that the use of tracking is helpful for action recognition. Specifically, having access to the tracks helps localize a person over time, which in turn provides a second-order signal of how joint angles change over time. In addition, knowing the identity of each person gives a discriminative signal between people, which is helpful for learning interactions between them.

Effect of Pose: The second contribution of our work is the use of 3D pose information for action recognition. As discussed in Section 4.1, using only 3D pose we can achieve $24.1\mathrm{mAP}$ on the AVA dataset. While it is hard to measure the exact contributions of 3D pose and 2D features separately, we compare our method with a model trained with only MaskFeat features and tracking, where the only difference is the use of 3D pose. As shown in Table 5, the addition of 3D pose gives a gain of $+0.9\mathrm{mAP}$. While this is a relatively small gain compared to that from tracking, we believe it can be improved with more robust and accurate 3D pose systems.

![](images/337ca2f9e863e1ecb545648fee6ab447558f3dd1ee6b89409412b49515688ff2.jpg)

![](images/d289c746e420c26ad5a0e5db9ce34964ffcab95a9663d2ac2f4273890c785b5f.jpg)

![](images/eedb596834c89f51bd72bbdf0c7f1cd4b5a77ab0afaecefb84e1ef8fc077aa67.jpg)

![](images/74ddbfe81ea4d414ee9c7c28dfaccac26c06f7513bd025b4e9b486773f6b099f.jpg)

![](images/cf85a997378b5af50f4364452d28b7e8a8bf6df4cf779b4634da5a758d5ac12b.jpg)

![](images/160129cf6f122763c4c6f703e7286c00f6096f8af394d73c19001257d0ada9cc.jpg)
Figure 4. Qualitative Results: We show the predictions from MViT [12] and our model on validation samples from AVA v2.2. 
The person with the colored mesh is the person of interest whose action we recognise, and the people with gray meshes are the supporting actors. The first two columns demonstrate the benefit of having access to the action-tubes of other people for action prediction. In the first column, the orange person is very close to the other person and in a hugging posture, which makes it easy to predict hugging with high probability. Similarly, in the second column, the explicit interaction between the multiple people, and knowing that others are also fighting, increases the confidence in the fighting action for the green person over the 2D recognition model. The third and fourth columns show the benefit of explicitly modeling the 3D pose over time (using tracks): the yellow person is in a riding pose, and the purple person is looking upwards with their legs on a vertical plane. The last column illustrates the benefit of representing people with an amodal representation. Here, the hand of the blue person is occluded, so the 2D recognition model does not see the action as a whole. However, SMPL meshes are amodal, so the hand is still present, which boosts the probability of predicting the closing-the-door action label.

![](images/91d7f4af78483263ba8204c1e8f2f3a56a4a491ccc3abb72a2622fcbc0b16d60.jpg)

![](images/a0be3df41c898b8c712cb382bde8ee02c1270ed71dbce63b7e372334f7f5f5b0.jpg)

![](images/3a1badd50d434728f13b224b79fbff5a84952774802f20019b53760fec7e8037.jpg)

![](images/be1e9142532f97d8df0f7ce60c36363d4f6315613d416e49ccb32d766793c0ec.jpg)
| Model | OM | PI | PM | mAP |
| --- | --- | --- | --- | --- |
| MViT | 32.2 | 41.1 | 58.6 | 40.2 |
| MViT + Tracking | 33.4 | 43.0 | 59.3 | 41.4 (+1.2) |
| MViT + Tracking + Pose | 34.4 | 43.9 | 59.9 | 42.3 (+0.9) |
Table 5. Ablation on the main components: We ablate the contributions of tracking and 3D pose using the same detections, reporting mAP on object manipulation (OM), person-person interaction (PI), and person movement (PM) classes. First, we use only MViT features over the tracks to evaluate the contribution of tracking. Then we add 3D pose features to study the contribution of 3D pose to action recognition.

# 4.4. Implementation details

In both the pose model and the pose+appearance model, we use the same vanilla transformer architecture [52] with 16 layers and 16 heads. For both models the embedding dimension is 512. We train with a 0.4 mask ratio, and at test time we use the same mask token to in-fill missing detections. The output token from the transformer is passed to a linear layer to predict the AVA action labels. We pre-train our model on Kinetics for 30 epochs with MViT [12] predictions as pseudo-supervision, and then fine-tune on AVA with AVA ground-truth labels for a few epochs. We train our models with AdamW [36] with a base learning rate of 0.001 and betas = (0.9, 0.95), and use cosine annealing scheduling with a linear warm-up. For additional details please see the Appendix.

# 5. Conclusion

In this paper, we investigated the benefits of 3D tracking and pose for the task of human action recognition. By leveraging a state-of-the-art method for person tracking, PHALP [43], we trained a transformer model that takes as input tokens the state of each person at every time instance. We investigated two design choices for the content of the token. First, when using information about the 3D pose of the person, we outperform previous baselines that rely on pose information for action recognition by $8.2\mathrm{mAP}$ on the AVA v2.2 dataset. Then, we proposed fusing the pose information with contextualized appearance information coming from a typical action recognition backbone [12] applied over the tracklet trajectory. With this model, we improved upon the previous state-of-the-art on AVA v2.2 by $2.8\mathrm{mAP}$. 
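The optimization recipe in Section 4.4 (linear warm-up followed by cosine annealing at a base learning rate of 0.001) can be sketched as follows. This is an illustrative sketch only: the base rate of 0.001 follows the paper, while the warm-up length and total step count are assumed values.

```python
import math

def lart_lr(step: int, total_steps: int, base_lr: float = 1e-3,
            warmup_steps: int = 1000) -> float:
    """Linear warm-up followed by cosine annealing.

    base_lr = 0.001 follows Section 4.4; warmup_steps and total_steps
    are illustrative assumptions, not values from the paper.
    """
    if step < warmup_steps:
        # Linear warm-up from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Cosine annealing from base_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

At every optimizer step, such a value would be assigned to the AdamW parameter groups (betas = (0.9, 0.95), as in Section 4.4).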
There are many avenues for future work and further improvements in action recognition. For example, one could achieve better performance on more fine-grained tasks through more expressive 3D reconstruction of the human body (e.g., using the SMPL-X model [40] to also capture the hands), and through explicit modeling of the objects in the scene (potentially by extending the "tubes" idea to objects).

Acknowledgements: This work was supported by the FAIR-BAIR program as well as ONR MURI (N00014-21-1-2801). We thank Shubham Goel for helpful discussions.

# References

[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. ViViT: A video vision transformer. In ICCV, 2021.
[2] Fabien Baradel, Thibault Groueix, Philippe Weinzaepfel, Romain Brégier, Yannis Kalantidis, and Grégory Rogez. Leveraging MoCap data for human mesh recovery. In 3DV, 2021.
[3] Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. Tracking without bells and whistles. In ICCV, 2019.
[4] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021.
[5] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In ECCV, 2016.
[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? A new model and the kinetics dataset. In CVPR, 2017.
[7] Vasileios Choutas, Lea Müller, Chun-Hao P Huang, Siyu Tang, Dimitrios Tzionas, and Michael J Black. Accurate 3D body shape regression using metric and semantic attributes. In CVPR, 2022.
[8] Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, and Cordelia Schmid. PoTion: Pose motion representation for action recognition. In CVPR, 2018.
[9] Piotr Dollár, Vincent Rabaud, Garrison Cottrell, and Serge Belongie. Behavior recognition via sparse spatio-temporal features. 
In 2005 IEEE international workshop on visual surveillance and performance evaluation of tracking and surveillance, 2005. +[10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 2021. +[11] Alexei A Efros, Alexander C Berg, Greg Mori, and Jitendra Malik. Recognizing action at a distance. In ICCV, 2003. +[12] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021. +[13] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017. +[14] Christoph Feichtenhofer. X3D: Expanding architectures for efficient video recognition. In CVPR, 2020. +[15] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. In NeurIPS, 2022. +[16] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, 2019. +[17] Yutong Feng, Jianwen Jiang, Ziyuan Huang, Zhiwu Qing, Xiang Wang, Shiwei Zhang, Mingqian Tang, and Yue Gao. Relation modeling in spatio-temporal action localization. arXiv preprint arXiv:2106.08061, 2021. + +[18] Georgia Gkioxari and Jitendra Malik. Finding action tubes. In CVPR, 2015. +[19] Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, and Jitendra Malik. Humans in 4D: Reconstructing and tracking humans with transformers. arXiv preprint (forthcoming), 2023. +[20] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2018. 
[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
[22] Gunnar Johansson. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2):201-211, 1973.
[23] Angjoo Kanazawa, Michael J Black, David W Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In CVPR, 2018.
[24] Angjoo Kanazawa, Jason Y Zhang, Panna Felsen, and Jitendra Malik. Learning 3D human dynamics from video. In CVPR, 2019.
[25] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
[26] Machiel Keestra. Understanding human action. Integrating meanings, mechanisms, causes, and contexts. 2015.
[27] Alexander Klaser, Marcin Marszalek, and Cordelia Schmid. A spatio-temporal descriptor based on 3D-gradients. In BMVC, 2008.
[28] Muhammed Kocabas, Nikos Athanasiou, and Michael J Black. VIBE: Video inference for human body pose and shape estimation. In CVPR, 2020.
[29] Muhammed Kocabas, Chun-Hao P Huang, Otmar Hilliges, and Michael J Black. PARE: Part attention regressor for 3D human body estimation. In ICCV, 2021.
[30] Muhammed Kocabas, Chun-Hao P Huang, Joachim Tesch, Lea Muller, Otmar Hilliges, and Michael J Black. SPEC: Seeing people in the wild with an estimated camera. In ICCV, 2021.
[31] Nikos Kolotouros, Georgios Pavlakos, Michael J Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In ICCV, 2019.
[32] Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In ICCV, 2021.
[33] Ang Li, Meghan Thotakuri, David A Ross, João Carreira, Alexander Vostrikov, and Andrew Zisserman. The AVA-Kinetics localized human actions video dataset. arXiv preprint arXiv:2005.00214, 2020. 
+[34] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. MViTv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022. + +[35] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multiperson linear model. ACM Transactions on Graphics (TOG), 34(6):1-16, 2015. +[36] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. +[37] Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixe, and Christoph Feichtenhofer. TrackFormer: Multi-object tracking with transformers. In CVPR, 2022. +[38] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In ICCV, 2021. +[39] Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, and Hongsheng Li. Actor-context-actor relation network for spatio-temporal action localization. In CVPR, 2021. +[40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3D hands, face, and body from a single image. In CVPR, 2019. +[41] Georgios Pavlakos, Jitendra Malik, and Angjoo Kanazawa. Human mesh recovery from multiple shots. In CVPR, 2022. +[42] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people with 3D representations. In NeurIPS, 2021. +[43] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people by predicting 3D appearance, location and pose. In CVPR, 2022. +[44] Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J Guibas. HuMoR: 3D human motion model for robust pose estimation. In ICCV, 2021. +[45] Anshul Shah, Shlok Mishra, Ankan Bansal, Jun-Cheng Chen, Rama Chellappa, and Abhinav Shrivastava. Pose and joint-aware action recognition. In WACV, 2022. +[46] Karen Simonyan and Andrew Zisserman. 
Two-stream convolutional networks for action recognition in videos. NIPS, 2014. +[47] Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, and Cordelia Schmid. Relational action forecasting. In CVPR, 2019. +[48] Graham W Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. Convolutional learning of spatio-temporal features. In ECCV, 2010. +[49] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In NeurIPS, 2022. + +[50] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015. +[51] Gül Varol, Ivan Laptev, Cordelia Schmid, and Andrew Zisserman. Synthetic humans for action recognition from unseen viewpoints. International Journal of Computer Vision, 129(7):2264-2287, 2021. +[52] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. +[53] Heng Wang, A. Klaser, C. Schmid, and Cheng-Lin Liu. Action recognition by dense trajectories. In CVPR, 2011. +[54] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, 2013. +[55] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018. +[56] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In ECCV, 2018. +[57] Zelun Wang and Jyh-Charn Liu. Translating math formula images to latex sequences using deep neural networks with sequence-level training. International Journal on Document Analysis and Recognition (IJDAR), 24(1):63-75, 2021. +[58] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. arXiv preprint arXiv:2112.09133, 2021. 
+[59] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In CVPR, 2022. +[60] Philippe Weinzaepfel and Grégory Rogez. Mimetics: Towards understanding human actions out of context. *IJCV*, 2021. +[61] Chao-Yuan Wu and Philipp Krahenbuhl. Towards long-form video understanding. In CVPR, 2021. +[62] Yuliang Xiu, Jiefeng Li, Haoyu Wang, Yinghong Fang, and Cewu Lu. Pose Flow: Efficient online pose tracking. In BMVC, 2018. +[63] An Yan, Yali Wang, Zhifeng Li, and Yu Qiao. PA3D: Pose-action 3D machine for video recognition. In CVPR, 2019. +[64] Hongwen Zhang, Yating Tian, Xinchi Zhou, Wanli Ouyang, Yebin Liu, Limin Wang, and Zhenan Sun. PyMAF: 3D human pose and shape regression with pyramidal mesh alignment feedback loop. In ICCV, 2021. +[65] Yubo Zhang, Pavel Tokmakov, Martial Hebert, and Cordelia Schmid. A structured model for action detection. In CVPR, 2019. \ No newline at end of file diff --git a/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/images.zip b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6ee482eb7769eb9af66a91aaa8121cdafb432640 --- /dev/null +++ b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a7b24040fb01d206c8efa9d0fef1b9b4701ff73488cfae070a9b6e365be3284 +size 523393 diff --git a/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/layout.json b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..cd97dc30d6d4789f30bd1e18a71d6e798c2c059b --- /dev/null +++ b/2023/On the Benefits of 3D Pose and Tracking for Human Action Recognition/layout.json @@ -0,0 +1,8604 @@ +{ + "pdf_info": [ + { + 
"para_blocks": [ + { + "bbox": [ + 75, + 103, + 518, + 121 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 75, + 103, + 518, + 121 + ], + "spans": [ + { + "bbox": [ + 75, + 103, + 518, + 121 + ], + "type": "text", + "content": "On the Benefits of 3D Pose and Tracking for Human Action Recognition" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "spans": [ + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": "Jathushan Rajasegaran" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": ", Georgios Pavlakos" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": ", Angjoo Kanazawa" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": ", Christoph Feichtenhofer" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": ", Jitendra Malik" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{1,2}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + "content": "UC Berkeley, " + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 33, + 142, + 569, + 171 + ], + "type": "text", + 
"content": "Meta AI, FAIR" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 143, + 199, + 192, + 211 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 143, + 199, + 192, + 211 + ], + "spans": [ + { + "bbox": [ + 143, + 199, + 192, + 211 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "spans": [ + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "type": "text", + "content": "In this work we study the benefits of using tracking and 3D poses for action recognition. To achieve this, we take the Lagrangian view on analysing actions over a trajectory of human motion rather than at a fixed point in space. Taking this stand allows us to use the tracklets of people to predict their actions. In this spirit, first we show the benefits of using 3D pose to infer actions, and study person-person interactions. Subsequently, we propose a Lagrangian Action Recognition model by fusing 3D pose and contextualized appearance over tracklets. To this end, our method achieves state-of-the-art performance on the AVA v2.2 dataset on both pose only settings and on standard benchmark settings. When reasoning about the action using only pose cues, our pose model achieves " + }, + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "type": "inline_equation", + "content": "+10.0" + }, + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "type": "text", + "content": " mAP gain over the corresponding state-of-the-art while our fused model has a gain of " + }, + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "type": "inline_equation", + "content": "+2.8" + }, + { + "bbox": [ + 46, + 224, + 289, + 429 + ], + "type": "text", + "content": " mAP over the best state-of-the-art model. 
Code and results are available at: https://brjathu.github.io/LART" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 47, + 453, + 127, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 453, + 127, + 464 + ], + "spans": [ + { + "bbox": [ + 47, + 453, + 127, + 464 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 46, + 473, + 287, + 629 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 473, + 287, + 629 + ], + "spans": [ + { + "bbox": [ + 46, + 473, + 287, + 629 + ], + "type": "text", + "content": "In fluid mechanics, it is traditional to distinguish between the Lagrangian and Eulerian specifications of the flow field. Quoting the Wikipedia entry, \"Lagrangian specification of the flow field is a way of looking at fluid motion where the observer follows an individual fluid parcel as it moves through space and time. Plotting the position of an individual parcel through time gives the pathline of the parcel. This can be visualized as sitting in a boat and drifting down a river. The Eulerian specification of the flow field is a way of looking at fluid motion that focuses on specific locations in the space through which the fluid flows as time passes. This can be visualized by sitting on the bank of a river and watching the water pass the fixed location.\"" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "spans": [ + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "type": "text", + "content": "These concepts are very relevant to how we analyze videos of human activity. 
In the Eulerian viewpoint, we would focus on feature vectors at particular locations, either " + }, + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "type": "inline_equation", + "content": "(x,y)" + }, + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "type": "inline_equation", + "content": "(x,y,z)" + }, + { + "bbox": [ + 46, + 630, + 287, + 715 + ], + "type": "text", + "content": ", and consider evolution over time while staying fixed in space at the location. In the Lagrangian viewpoint, we would track, say a person over space-time and track the associated feature vector across space-time." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 200, + 547, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 200, + 547, + 368 + ], + "spans": [ + { + "bbox": [ + 304, + 200, + 547, + 368 + ], + "type": "text", + "content": "While the older literature for activity recognition e.g., [11, 18, 53] typically adopted the Lagrangian viewpoint, ever since the advent of neural networks based on 3D spacetime convolution, e.g., [50], the Eulerian viewpoint became standard in state-of-the-art approaches such as SlowFast Networks [16]. Even after the switch to transformer architectures [12, 52] the Eulerian viewpoint has persisted. This is noteworthy because the tokenization step for transformers gives us an opportunity to freshly examine the question, \"What should be the counterparts of words in video analysis?\" Dosovitskiy et al. [10] suggested that image patches were a good choice, and the continuation of that idea to video suggests that spatiotemporal cuboids would work for video as well." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 373, + 547, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 373, + 547, + 661 + ], + "spans": [ + { + "bbox": [ + 304, + 373, + 547, + 661 + ], + "type": "text", + "content": "On the contrary, in this work we take the Lagrangian viewpoint for analysing human actions. This specifies that we reason about the trajectory of an entity over time. Here, the entity can be low-level, e.g., a pixel or a patch, or high-level, e.g., a person. Since, we are interested in understanding human actions, we choose to operate on the level of \"humans-as-entities\". To this end, we develop a method that processes trajectories of people in video and uses them to recognize their action. We recover these trajectories by capitalizing on a recently introduced 3D tracking method PHALP [43] and HMR 2.0 [19]. As shown in Figure 1 PHALP recovers person tracklets from video by lifting people to 3D, which means that we can both link people over a series of frames and get access to their 3D representation. Given these 3D representations of people (i.e., 3D pose and 3D location), we use them as the basic content of each token. This allows us to build a flexible system where the model, here a transformer, takes as input tokens corresponding to the different people with access to their identity, 3D pose and 3D location. Having 3D location of the people in the scene allows us to learn interaction among people. Our model relying on this tokenization can benefit from 3D tracking and pose, and outperforms previous baseline that only have access to pose information [8, 45]." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 665, + 547, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 665, + 547, + 715 + ], + "spans": [ + { + "bbox": [ + 304, + 665, + 547, + 715 + ], + "type": "text", + "content": "While the change in human pose over time is a strong signal, some actions require more contextual information about the appearance and the scene. Therefore, it is important to also fuse pose with appearance information from" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "spans": [ + { + "bbox": [ + 65, + 2, + 111, + 34 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 145, + 0, + 494, + 37 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 145, + 0, + 494, + 37 + ], + "spans": [ + { + "bbox": [ + 145, + 0, + 494, + 37 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation. Except for this watermark, it is identical to the accepted version; the final published version of the proceedings is available on IEEE Xplore." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "text", + "content": "640" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 85, + 71, + 509, + 315 + ], + "blocks": [ + { + "bbox": [ + 85, + 71, + 509, + 315 + ], + "lines": [ + { + "bbox": [ + 85, + 71, + 509, + 315 + ], + "spans": [ + { + "bbox": [ + 85, + 71, + 509, + 315 + ], + "type": "image", + "image_path": "b709f11f233361974d43ae852845e4e9eeb5342539b37b15be82d1a7009d093e.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 46, + 323, + 548, + 381 + ], + "lines": [ + { + "bbox": [ + 46, + 323, + 548, + 381 + ], + "spans": [ + { + "bbox": [ + 46, + 323, + 548, + 381 + ], + "type": "text", + "content": "Figure 1. Overview of our method: Given a video, first, we track every person using a tracking algorithm (e.g. PHALP [43]). Then every detection in the track is tokenized to represent a human-centric vector (e.g. pose, appearance). To represent 3D pose we use SMPL [35] parameters and estimated 3D location of the person, for contextualized appearance we use MViT [12] (pre-trained on MaskFeat [59]) features. Then we train a transformer network to predict actions using the tracks. Note that, at the second frame we do not have detection for the blue person, at these places we pass a mask token to in-fill the missing detections." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 46, + 401, + 288, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 401, + 288, + 555 + ], + "spans": [ + { + "bbox": [ + 46, + 401, + 288, + 555 + ], + "type": "text", + "content": "humans and the scene, coming directly from pixels. To achieve this, we also use the state-of-the-art models for action recognition [12, 34] to provide complementary information from the contextualized appearance of the humans and the scene in a Lagrangian framework. Specifically, we densely run such models over the trajectory of each tracklet and record the contextualized appearance features localized around the tracklet. As a result, our tokens include explicit information about the 3D pose of the people and densely sampled appearance information from the pixels, processed by action recognition backbones [12]. Our complete system outperforms the previous state of the art by a large margin of " + }, + { + "bbox": [ + 46, + 401, + 288, + 555 + ], + "type": "inline_equation", + "content": "2.8\\mathrm{mAP}" + }, + { + "bbox": [ + 46, + 401, + 288, + 555 + ], + "type": "text", + "content": ", on the challenging AVA v2.2 dataset." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 46, + 557, + 288, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 557, + 288, + 712 + ], + "spans": [ + { + "bbox": [ + 46, + 557, + 288, + 712 + ], + "type": "text", + "content": "Overall, our main contribution is introducing an approach that highlights the effects of tracking and 3D poses for human action understanding. To this end, in this work, we propose a Lagrangian Action Recognition with Tracking (LART) approach, which utilizes the tracklets of people to predict their action. 
Our baseline version leverages tracklet trajectories and 3D pose representations of the people in the video to outperform previous baselines utilizing pose information. Moreover, we demonstrate that the proposed Lagrangian viewpoint of action recognition can be easily combined with traditional baselines that rely only on appearance and context from the video, achieving significant gains compared to the dominant paradigm." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 306, + 399, + 392, + 414 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 399, + 392, + 414 + ], + "spans": [ + { + "bbox": [ + 306, + 399, + 392, + 414 + ], + "type": "text", + "content": "2. Related Work" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 304, + 424, + 545, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 424, + 545, + 567 + ], + "spans": [ + { + "bbox": [ + 304, + 424, + 545, + 567 + ], + "type": "text", + "content": "Recovering humans in 3D: A lot of the related work has been using the SMPL human body model [35] for recovering 3D humans from images. Initially, the related methods were relying on optimization-based approaches, like SMPLify [5], but since the introduction of the HMR [23], there has been a lot of interest in approaches that can directly regress SMPL parameters [35] given the corresponding image of the person as input. Many follow-up works have improved upon the original model, estimating more accurate pose [31] or shape [7], increasing the robustness of the model [41], incorporating side information [30,32], investigating different architecture choices [29,64], etc." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 304, + 569, + 545, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 569, + 545, + 689 + ], + "spans": [ + { + "bbox": [ + 304, + 569, + 545, + 689 + ], + "type": "text", + "content": "While these works have been improving the basic single-frame reconstruction performance, there have been parallel efforts toward the temporal reconstruction of humans from video input. The HMMR model [24] uses a convolutional temporal encoder on HMR image features [23] to reconstruct humans over time. Other approaches have investigated recurrent [28] or transformer [41] encoders. Instead of performing the temporal pooling on image features, recent work has been using the SMPL parameters directly for the temporal encoding [2, 44]." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 689, + 545, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 689, + 545, + 713 + ], + "spans": [ + { + "bbox": [ + 305, + 689, + 545, + 713 + ], + "type": "text", + "content": "One assumption of the temporal methods in the above category is that they have access to tracklets of people in" + } + ] + } + ], + "index": 7 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "text", + "content": "641" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 46, + 72, + 289, + 216 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 72, + 289, + 216 + ], + "spans": [ + { + "bbox": [ + 46, + 72, + 289, + 216 + ], + "type": "text", + "content": "the video. 
This means that they rely on tracking methods, most of which operate on the 2D domain [3, 13, 37, 62] and are responsible for introducing many errors. To overcome this limitation, recent work [42, 43] has capitalized on the advances of 3D human recovery to perform more robust identity tracking from video. More specifically, the PHALP method of Rajasegaran et al. [43] allows for robust tracking in a variety of settings, including in the wild videos and movies. Here, we make use of the PHALP system to discover long tracklets from large-scale video datasets. This allows us to train our method for recognizing actions from 3D pose input." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 46, + 232, + 289, + 448 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 232, + 289, + 448 + ], + "spans": [ + { + "bbox": [ + 46, + 232, + 289, + 448 + ], + "type": "text", + "content": "Action Recognition: Earlier works on action recognition relied on hand-crafted features such as HOG3D [27], Cuboids [9] and Dense Trajectories [53, 54]. After the introduction of deep learning, 3D convolutional networks became the main backbone for action recognition [6, 48, 50]. However, the 3D convolutional models treat both space and time in a similar fashion, so to overcome this issue, two-stream architectures were proposed [46]. In two-steam networks, one pathway is dedicated to motion features, usually taking optical flow as input. This requirement of computing optical flow makes it hard to learn these models in an end-to-end manner. On the other hand, SlowFast networks [16] only use video streams but at different frame rates, allowing it to learn motion features from the fast pathway and lateral connections to fuse spatial and temporal information. Recently, with the advancements in transformer architectures, there has been a lot of work on action recognition using transformer backbones [1, 4, 12, 38]." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 46, + 460, + 288, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 460, + 288, + 594 + ], + "spans": [ + { + "bbox": [ + 46, + 460, + 288, + 594 + ], + "type": "text", + "content": "While the above-mentioned works mainly focus on the model architectures for action recognition, another line of work investigates more fine-grained relationships between actors and objects [47, 55, 56, 65]. Non-local networks [55] use self-attention to reason about entities in the video and learn long-range relationships. ACAR [39] models actor-context-actor relationships by first extracting actor-context features through pooling in the bounding box region and then learning higher-level relationships between actors. Compared to ACAR, our method does not explicitly design any priors about actor relationships, except their track identity." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 46, + 605, + 288, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 605, + 288, + 715 + ], + "spans": [ + { + "bbox": [ + 46, + 605, + 288, + 715 + ], + "type": "text", + "content": "Along these lines, some works use the human pose to understand the action [8, 45, 51, 60, 63]. PoTion [8] uses a keypoint-based pose representation by colorizing the temporal dependencies. Recently, JMRN [45] proposed a joint-motion re-weighting network to learn joint trajectories separately and then fuse this information to reason about inter-joint motion. While these works rely on 2D keypoints and design-specific architectures to encode the representation, we use more explicit 3D SMPL parameters." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 306, + 71, + 362, + 84 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 71, + 362, + 84 + ], + "spans": [ + { + "bbox": [ + 306, + 71, + 362, + 84 + ], + "type": "text", + "content": "3. 
Method" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 304, + 91, + 545, + 247 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 91, + 545, + 247 + ], + "spans": [ + { + "bbox": [ + 304, + 91, + 545, + 247 + ], + "type": "text", + "content": "Understanding human action requires interpreting multiple sources of information [26]. These include head and gaze direction, human body pose and dynamics, interactions with objects or other humans or animals, the scene as a whole, the activity context (e.g. immediately preceding actions by self or others), and more. Some actions can be recognized by pose and pose dynamics alone, as demonstrated by Johansson et al. [22], who showed that people are remarkably good at recognizing walking, running, and crawling just by looking at moving point-lights. However, interpreting complex actions requires reasoning with multiple sources of information; e.g., to recognize that someone is slicing a tomato with a knife, it helps to see the knife and the tomato." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 304, + 247, + 545, + 354 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 247, + 545, + 354 + ], + "spans": [ + { + "bbox": [ + 304, + 247, + 545, + 354 + ], + "type": "text", + "content": "There are many design choices that can be made here. Should one use \"disentangled\" representations, with elements such as pose, interacted objects, etc., represented explicitly in a modular way? Or should one just input video pixels into a large-capacity neural network model and rely on it to figure out what is discriminatively useful? In this paper, we study two options: a) human pose reconstructed from an HMR model [19, 23] and b) human pose with contextual appearance as computed by an MViT model [12]." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "spans": [ + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": "Given a video with " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "T" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": " frames, we first track every person using PHALP [43], which gives us a unique identity for each person over time. Let a person " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "i \\in [1,2,3,\\dots, n]" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": " at time " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "t \\in [1,2,3,\\dots, T]" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": " be represented by a person-vector " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_t^i" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": ". Here " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": " is the number of people in a frame. 
This person-vector is constructed such that it contains a human-centric representation " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "\\mathbf{P}_t^i" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": " and some contextualized appearance information " + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "inline_equation", + "content": "\\mathbf{Q}_t^i" + }, + { + "bbox": [ + 304, + 354, + 545, + 450 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 390, + 453, + 545, + 468 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 390, + 453, + 545, + 468 + ], + "spans": [ + { + "bbox": [ + 390, + 453, + 545, + 468 + ], + "type": "interline_equation", + "content": "\\mathbf {H} _ {t} ^ {i} = \\left\\{\\mathbf {P} _ {t} ^ {i}, \\mathbf {Q} _ {t} ^ {i} \\right\\}. \\tag {1}", + "image_path": "42517090ebcf9426b6cd7fb03c951fb2c8e5e978fa2d4c49ef7e84567cbbd93e.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "spans": [ + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "type": "text", + "content": "Since we know the identity of each person from the tracking, we can create an action-tube [18] representation for each person. Let " + }, + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "type": "inline_equation", + "content": "\\Phi_{i}" + }, + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "type": "text", + "content": " be the action-tube of person " + }, + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 304, + 468, + 545, + 516 + ], + "type": "text", + "content": "; this action-tube contains all the person-vectors over time." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 364, + 519, + 545, + 533 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 364, + 519, + 545, + 533 + ], + "spans": [ + { + "bbox": [ + 364, + 519, + 545, + 533 + ], + "type": "interline_equation", + "content": "\\boldsymbol {\\Phi} _ {i} = \\left\\{\\mathbf {H} _ {1} ^ {i}, \\mathbf {H} _ {2} ^ {i}, \\mathbf {H} _ {3} ^ {i}, \\dots , \\mathbf {H} _ {T} ^ {i} \\right\\}. \\tag {2}", + "image_path": "08da207404072c798b1219a2a8d2317d0b617ae462291d0750656a6891afc18a.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 533, + 545, + 605 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 533, + 545, + 605 + ], + "spans": [ + { + "bbox": [ + 304, + 533, + 545, + 605 + ], + "type": "text", + "content": "Given this representation, we train our model LART to predict actions from action-tubes (tracks). In this work we use a vanilla transformer [52] to model the network " + }, + { + "bbox": [ + 304, + 533, + 545, + 605 + ], + "type": "inline_equation", + "content": "\\mathcal{F}" + }, + { + "bbox": [ + 304, + 533, + 545, + 605 + ], + "type": "text", + "content": ", which allows us to mask attention if the track is not continuous due to occlusions, failed detections, etc. Please see the Appendix for more details on the network architecture." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 351, + 609, + 545, + 624 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 351, + 609, + 545, + 624 + ], + "spans": [ + { + "bbox": [ + 351, + 609, + 545, + 624 + ], + "type": "interline_equation", + "content": "\\mathcal {F} \\left(\\Phi_ {1}, \\Phi_ {2}, \\dots , \\Phi_ {i}, \\dots , \\Phi_ {n}; \\Theta\\right) = \\widehat {Y} _ {i}. 
\\tag {3}", + "image_path": "9cbb1144292b1934b75a2d38dd5daab75b9439c4fccd61400b930e3e9fdf73d2.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "spans": [ + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": " denotes the model parameters, " + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "inline_equation", + "content": "\\widehat{Y}_i = \\{y_1^i,y_2^i,y_3^i,\\dots,y_T^i\\}" + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": " are the predictions for a track, and " + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "inline_equation", + "content": "y_{t}^{i}" + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": " is the predicted action of track " + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": " at time " + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": ". The model can use the actions of others for reasoning when predicting the action for the person-of-interest " + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 304, + 628, + 545, + 712 + ], + "type": "text", + "content": ". Finally, we use binary cross-entropy loss to train our model and measure mean Average Precision (mAP) for evaluation." 
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 748, + 313, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 748, + 313, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 748, + 313, + 757 + ], + "type": "text", + "content": "642" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 47, + 72, + 224, + 85 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 72, + 224, + 85 + ], + "spans": [ + { + "bbox": [ + 47, + 72, + 224, + 85 + ], + "type": "text", + "content": "3.1. Action Recognition with 3D Pose" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "spans": [ + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "content": "In this section, we study the effect of human-centric pose representation on action recognition. To do that, we consider a person-vector that only contains the pose representation, " + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_t^i = \\{\\mathbf{P}_t^i\\}" + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "content": ". While " + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "inline_equation", + "content": "\\mathbf{P}_t^i" + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "content": " can in general contain any information about the person, in this work we train a pose-only model, LART-pose, which uses the 3D body pose of the person based on the SMPL [35] model. 
This includes the joint angles of the different body parts, " + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "inline_equation", + "content": "\\theta_t^i \\in \\mathcal{R}^{23 \\times 3 \\times 3}" + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "content": " and is considered as an amodal representation, which means we make a prediction about all body parts, even those that are potentially occluded/truncated in the image. Since the global body orientation " + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "inline_equation", + "content": "\\psi_t^i \\in \\mathcal{R}^{3 \\times 3}" + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "content": " is represented separately from the body pose, our body representation is invariant to the specific viewpoint of the video. In addition to the 3D pose, we also use the 3D location " + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "inline_equation", + "content": "L_t^i" + }, + { + "bbox": [ + 46, + 90, + 289, + 317 + ], + "type": "text", + "content": " of the person in the camera view (which is also predicted by the PHALP model [43]). This makes it possible to consider the relative location of the different people in 3D. More specifically, each person is represented as," + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 115, + 323, + 287, + 338 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 323, + 287, + 338 + ], + "spans": [ + { + "bbox": [ + 115, + 323, + 287, + 338 + ], + "type": "interline_equation", + "content": "\\mathbf {H} _ {t} ^ {i} = \\mathbf {P} _ {t} ^ {i} = \\left\\{\\theta_ {t} ^ {i}, \\psi_ {t} ^ {i}, L _ {t} ^ {i} \\right\\}. 
\\tag {4}", + "image_path": "4b2aacad99b8364debd7c57204a84e5ea3f342c3e004628848e1f4110301b672.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "spans": [ + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": "Let us assume that there are " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " tracklets " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "\\{\\Phi_1, \\Phi_2, \\Phi_3, \\dots, \\Phi_n\\}" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " in a given video. To study the action of tracklet " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": ", we consider person " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " as the person-of-interest, and having access to other tracklets can be helpful for interpreting the person-person interactions for person " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": ". 
Therefore, to predict the action for all " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " tracklets, we need to make " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " forward passes. If person " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " is the person-of-interest, then we randomly sample " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "N - 1" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " other tracklets and pass them to the model " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\cdot; \\Theta)" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": " along with " + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "inline_equation", + "content": "\\Phi_i" + }, + { + "bbox": [ + 46, + 345, + 289, + 464 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 103, + 470, + 287, + 485 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 103, + 470, + 287, + 485 + ], + "spans": [ + { + "bbox": [ + 103, + 470, + 287, + 485 + ], + "type": "interline_equation", + "content": "\\mathcal {F} \\left(\\Phi_ {i}, \\left\\{\\Phi_ {j} \\mid j \\in [ N ] \\right\\}; \\Theta\\right) = \\widehat {Y} _ {i} \\tag {5}", + "image_path": "d9012ef8731c7ffe6107ee0afe3d9f4fd4c51c39970a775516cafcc67610c654.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "spans": [ + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": "Therefore, the model sees " + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": " tracklets and predicts the action for the main (person-of-interest) track. To do this, we first tokenize all the person-vectors by passing them through a linear layer that projects them into " + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "inline_equation", + "content": "f_{proj}(\\mathcal{H}_t^i)\\in \\mathcal{R}^d" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": ", a " + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "inline_equation", + "content": "d" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": "-dimensional space. Afterward, we add positional embeddings for a) time and b) tracklet-id. 
For time and tracklet-id we use 2D sine and cosine functions as positional encoding [57], assigning person " + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": " as the zero" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "inline_equation", + "content": "^{\\mathrm{th}}" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": " track; the rest of the tracklets use tracklet-ids " + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "inline_equation", + "content": "\\{1,2,3,\\dots,N - 1\\}" + }, + { + "bbox": [ + 46, + 491, + 287, + 601 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 127, + 606, + 257, + 620 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 606, + 257, + 620 + ], + "spans": [ + { + "bbox": [ + 127, + 606, + 257, + 620 + ], + "type": "interline_equation", + "content": "PE(t, i, 2r) = \\sin(t / 10000^{4r/d})", + "image_path": "55614281816eea7e66edaadf01f61d2408a5997e59162890632f0233ed2b4e0d.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 108, + 623, + 257, + 636 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 108, + 623, + 257, + 636 + ], + "spans": [ + { + "bbox": [ + 108, + 623, + 257, + 636 + ], + "type": "interline_equation", + "content": "PE(t, i, 2r + 1) = \\cos(t / 10000^{4r/d})", + "image_path": "4b1ce4acb44dd0326f24261e09088d72920079912fe2bfab12b3f4cbdc68ae2d.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 96, + 640, + 257, + 654 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 96, + 640, + 257, + 654 + ], + "spans": [ + { + "bbox": [ + 96, + 640, + 257, + 654 + ], + "type": "interline_equation", + "content": "PE(t, i, 2s + D/2) = \\sin(i / 10000^{4s/
d})", + "image_path": "f429375d6df421a0555b7904b3b1f3cfac276996b8e00847ce03a365ffd2c42e.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 78, + 657, + 257, + 670 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 78, + 657, + 257, + 670 + ], + "spans": [ + { + "bbox": [ + 78, + 657, + 257, + 670 + ], + "type": "interline_equation", + "content": "PE(t, i, 2s + D/2 + 1) = \\cos(i / 10000^{4s/d})", + "image_path": "8b1a2b56a1b45024e73a822fc386ede9a362298164fa8fc9cc8765b603846a87.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "spans": [ + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "text", + "content": " is the time index, " + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "text", + "content": " is the track-id, " + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "inline_equation", + "content": "r, s \\in [0, d/2)" + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "text", + "content": " specify the dimensions, and " + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "inline_equation", + "content": "D" + }, + { + "bbox": [ + 46, + 677, + 287, + 712 + ], + "type": "text", + "content": " is the dimension of the token." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 72, + 545, + 109 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 72, + 545, + 109 + ], + "spans": [ + { + "bbox": [ + 305, + 72, + 545, + 109 + ], + "type": "text", + "content": "After adding the position encodings for time and identity, each person token is passed to the transformer network. The " + }, + { + "bbox": [ + 305, + 72, + 545, + 109 + ], + "type": "inline_equation", + "content": "(t + i\\times N)^{th}" + }, + { + "bbox": [ + 305, + 72, + 545, + 109 + ], + "type": "text", + "content": " token is given by," + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 343, + 118, + 545, + 131 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 343, + 118, + 545, + 131 + ], + "spans": [ + { + "bbox": [ + 343, + 118, + 545, + 131 + ], + "type": "interline_equation", + "content": "\\operatorname {token} _ {(t + i \\times N)} = f _ {\\text {proj}} \\left(\\mathcal {H} _ {t} ^ {i}\\right) + PE(t, i, :) \\tag {6}", + "image_path": "72a69302a606d7fff48668d7c375b1013de6674ce8b910ccc5f9b074e71df469.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 140, + 545, + 258 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 140, + 545, + 258 + ], + "spans": [ + { + "bbox": [ + 304, + 140, + 545, + 258 + ], + "type": "text", + "content": "Our person-of-interest formulation allows us to use other actors in the scene to make better predictions for the main actor. When there are multiple actors involved in the scene, knowing one person's action could help in predicting another's action. Some actions are correlated among the actors in a scene (e.g. dancing, fighting), while in some cases, people will be performing reciprocal actions (e.g. speaking and listening). In such cases, knowing one person's action helps in predicting the other person's action with more confidence." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 305, + 266, + 509, + 280 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 266, + 509, + 280 + ], + "spans": [ + { + "bbox": [ + 305, + 266, + 509, + 280 + ], + "type": "text", + "content": "3.2. Actions from Appearance and 3D Pose" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "spans": [ + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": "While human pose plays a key role in understanding actions, more complex actions require reasoning about the scene and context. Therefore, in this section, we investigate the benefits of combining pose and contextual appearance features for action recognition and train our model LART to benefit from 3D poses and appearance over a trajectory. For every track, we run a 2D action recognition model (i.e. an MViT [12] pretrained with MaskFeat [59]) at a frequency " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "f_{s}" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": " and store the feature vectors before the classification layer. For example, consider a track " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\Phi_{i}" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": ", which has detections " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\{D_1^i,D_2^i,D_3^i,\\dots ,D_T^i\\}" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": ". 
We get the predictions from the 2D action recognition model for the detections at " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\{t,t + f_{FPS} / f_s,t + 2f_{FPS} / f_s,\\ldots \\}" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": ". Here, " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "f_{FPS}" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": " is the frame rate of the video. Since these action recognition models capture temporal information to some extent, " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\mathbf{Q}_{t - f_{FPS} / 2f_s}^i" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\mathbf{Q}_{t + f_{FPS} / 2f_s}^i" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": " share the same appearance features. 
Let's assume we have a pre-trained action recognition model " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\mathcal{A}" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": " that takes a sequence of frames and a detection bounding box at the mid-frame; then the feature vector for " + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "inline_equation", + "content": "\\mathbf{Q}_t^i" + }, + { + "bbox": [ + 304, + 285, + 545, + 525 + ], + "type": "text", + "content": " is given by:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 378, + 533, + 474, + 548 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 378, + 533, + 474, + 548 + ], + "spans": [ + { + "bbox": [ + 378, + 533, + 474, + 548 + ], + "type": "interline_equation", + "content": "\\mathcal {A} \\big (D _ {t} ^ {i}, \\{I \\} _ {t - M} ^ {t + M} \\big) = \\mathbf {U} _ {t} ^ {i}", + "image_path": "a64cc8da8ef482c2e17fd0bdbdd14f91e483d1c4475eb3a837b9535e68d87116.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "spans": [ + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "text", + "content": "Here, " + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "inline_equation", + "content": "\\{I\\}_{t - M}^{t + M}" + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "text", + "content": " is the sequence of image frames, " + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "inline_equation", + "content": "2M" + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "text", + "content": " is the number of frames seen by the action recognition model, and " + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "inline_equation", + "content": "\\mathbf{U}_t^i" + }, + { + "bbox": [ + 304, + 556, + 545, + 
713 + ], + "type": "text", + "content": " is the contextual appearance vector. Note that, since the action recognition models look at the whole image frame, this representation implicitly contains information about the scene and objects and movements. However, we argue that human-centric pose representation has orthogonal information compared to feature vectors taken from convolutional or transformer networks. For example, the 3D pose is a geometric representation while " + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "inline_equation", + "content": "\\mathbf{U}_t^i" + }, + { + "bbox": [ + 304, + 556, + 545, + 713 + ], + "type": "text", + "content": " is more photometric, the SMPL parameters have more priors about human actions/pose and it is amodal while the appearance representation is learned from raw pixels. Now that we have both" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "text", + "content": "643" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 47, + 72, + 287, + 97 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 72, + 287, + 97 + ], + "spans": [ + { + "bbox": [ + 47, + 72, + 287, + 97 + ], + "type": "text", + "content": "pose-centric representation and appearance-centric representation in the person vector " + }, + { + "bbox": [ + 47, + 72, + 287, + 97 + ], + "type": "inline_equation", + "content": "\\mathbf{H}_t^i" + }, + { + "bbox": [ + 47, + 72, + 287, + 97 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 117, + 105, + 287, + 135 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 105, + 287, + 135 + ], + 
"spans": [ + { + "bbox": [ + 117, + 105, + 287, + 135 + ], + "type": "interline_equation", + "content": "\\mathbf{H}_{t}^{i} = \\Big\\{ \\underbrace{\\theta_{t}^{i}, \\psi_{t}^{i}, L_{t}^{i}}_{\\mathbf{P}_{t}^{i}}, \\underbrace{\\mathbf{U}_{t}^{i}}_{\\mathbf{Q}_{t}^{i}} \\Big\\} \\tag{7}", + "image_path": "871f7f734aa9407c4f068f9fd145094e487906a0f58b175e06c1dad0ba4efe69.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 47, + 144, + 287, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 144, + 287, + 204 + ], + "spans": [ + { + "bbox": [ + 47, + 144, + 287, + 204 + ], + "type": "text", + "content": "So, each human is represented by their 3D pose, 3D location, and their appearance and scene content. We follow the same procedure as discussed in the previous section to add positional encoding and train a transformer network " + }, + { + "bbox": [ + 47, + 144, + 287, + 204 + ], + "type": "inline_equation", + "content": "\\mathcal{F}(\\Theta)" + }, + { + "bbox": [ + 47, + 144, + 287, + 204 + ], + "type": "text", + "content": " with pose+appearance tokens." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 47, + 214, + 128, + 228 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 214, + 128, + 228 + ], + "spans": [ + { + "bbox": [ + 47, + 214, + 128, + 228 + ], + "type": "text", + "content": "4. Experiments" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 46, + 235, + 287, + 376 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 235, + 287, + 376 + ], + "spans": [ + { + "bbox": [ + 46, + 235, + 287, + 376 + ], + "type": "text", + "content": "We evaluate our method on AVA [20] in various settings. AVA [20] poses an action detection problem, where people are localized in a spatio-temporal volume with action labels. 
It provides annotations at 1Hz, and each actor has 1 pose action, up to 3 person-object interactions (optional), and up to 3 person-person interactions (optional) labels. For the evaluations, we use AVA v2.2 annotations and follow the standard protocol as in [20]. We measure mean average precision (mAP) on 60 classes with a frame-level IoU of 0.5. In addition, we also evaluate our method on the AVA-Kinetics [33] dataset, which provides spatio-temporal localized annotations for Kinetics videos." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 46, + 379, + 287, + 544 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 379, + 287, + 544 + ], + "spans": [ + { + "bbox": [ + 46, + 379, + 287, + 544 + ], + "type": "text", + "content": "We use PHALP [43] to track people in the AVA dataset. PHALP falls into the tracking-by-detection paradigm and uses Mask R-CNN [21] for detecting people in the scene. At the training stage, where the bounding box annotations are available only at " + }, + { + "bbox": [ + 46, + 379, + 287, + 544 + ], + "type": "inline_equation", + "content": "1\\mathrm{Hz}" + }, + { + "bbox": [ + 46, + 379, + 287, + 544 + ], + "type": "text", + "content": ", we use Mask R-CNN detections for the in-between frames and use the ground-truth bounding box for every 30 frames. For validation, we use the bounding boxes used by [39] and follow the same strategy to complete the tracking. We ran PHALP on Kinetics-400 [25] and AVA [20]. Together, these datasets contain over 1 million tracks with an average length of 3.4s and over 100 million detections. In total, we use about 900 hours of tracks, which is about 40x more than previous works [24]. See Table 1 for more details." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "spans": [ + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "type": "text", + "content": "Tracking allows us to train actions densely. Since we have tokens for each actor at every frame, we can supervise every token by assuming the human action remains the same in a 1 sec window [20]. First, we pre-train our model on the Kinetics-400 [25] and AVA [20] datasets. We run MViT [12] (pretrained on MaskFeat [58]) at " + }, + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "type": "inline_equation", + "content": "1\\mathrm{Hz}" + }, + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "type": "text", + "content": " on every track in Kinetics-400 to generate pseudo ground-truth annotations. Every 30 frames share the same annotations, and we train our model end-to-end with binary cross-entropy loss. Then we fine-tune the pretrained model, with tracks generated by us, on AVA ground-truth action labels. At inference, we take a track, randomly sample " + }, + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "type": "inline_equation", + "content": "N - 1" + }, + { + "bbox": [ + 46, + 545, + 287, + 713 + ], + "type": "text", + "content": " other tracks from the same video, and pass them through the model. We take an average pooling on the" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 339, + 74, + 515, + 138 + ], + "blocks": [ + { + "bbox": [ + 339, + 74, + 515, + 138 + ], + "lines": [ + { + "bbox": [ + 339, + 74, + 515, + 138 + ], + "spans": [ + { + "bbox": [ + 339, + 74, + 515, + 138 + ], + "type": "table", + "html": "
<table><tr><td>Dataset</td><td># clips</td><td># tracks</td><td># bbox</td></tr>
<tr><td>AVA [20]</td><td>184k</td><td>320k</td><td>32.9m</td></tr>
<tr><td>Kinetics [25]</td><td>217k</td><td>686k</td><td>71.4m</td></tr>
<tr><td>Total</td><td>400k</td><td>1m</td><td>104.3m</td></tr></table>
", + "image_path": "4e4c9912ddfeb82f28df7a444c88d709ccde902705ecdab6a3d55b4dfc11a5aa.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 310, + 201, + 542, + 276 + ], + "blocks": [ + { + "bbox": [ + 305, + 146, + 545, + 191 + ], + "lines": [ + { + "bbox": [ + 305, + 146, + 545, + 191 + ], + "spans": [ + { + "bbox": [ + 305, + 146, + 545, + 191 + ], + "type": "text", + "content": "Table 1. Tracking statistics on AVA [20] and Kinetics-400 [25]: We report the number tracks returned by PHALP [43] for each datasets (m: million). This results in over 900 hours of tracks, with a mean length of 3.4 seconds (with overlaps)." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 310, + 201, + 542, + 276 + ], + "lines": [ + { + "bbox": [ + 310, + 201, + 542, + 276 + ], + "spans": [ + { + "bbox": [ + 310, + 201, + 542, + 276 + ], + "type": "table", + "html": "
<table><tr><td>Model</td><td>Pose</td><td>OM</td><td>PI</td><td>PM</td><td>mAP</td></tr>
<tr><td>PoTion [8]</td><td>2D</td><td>-</td><td>-</td><td>-</td><td>13.1</td></tr>
<tr><td>JMRN [45]</td><td>2D</td><td>7.1</td><td>17.2</td><td>27.6</td><td>14.1</td></tr>
<tr><td>LART-pose</td><td>SMPL</td><td>11.9</td><td>24.6</td><td>45.8</td><td>22.3</td></tr>
<tr><td>LART-pose</td><td>SMPL+Joints</td><td>13.3</td><td>25.9</td><td>48.7</td><td>24.1</td></tr></table>
", + "image_path": "f2cc98216001abc29f5f77831f36f2c12b8479acf3cccf50cb84d53d039195a3.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "lines": [ + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "spans": [ + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "text", + "content": "Table 2. AVA Action Recognition with 3D pose: We evaluate our human-centric representation on the AVA dataset [20]. Here " + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "inline_equation", + "content": "OM" + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "text", + "content": ": Object Manipulation, " + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "inline_equation", + "content": "PI" + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "text", + "content": ": Person Interactions, and " + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "inline_equation", + "content": "PM" + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "text", + "content": ": Person Movement. LART-pose can achieve about " + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 305, + 283, + 545, + 348 + ], + "type": "text", + "content": " of the performance of MViT models on person movement tasks without looking at scene information." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 305, + 365, + 545, + 413 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 365, + 545, + 413 + ], + "spans": [ + { + "bbox": [ + 305, + 365, + 545, + 413 + ], + "type": "text", + "content": "prediction head over a sequence of 12 frames, and evaluate at the center-frame. For more details on model architecture, hyper-parameters, and training procedure/training time, please see Appendix A1." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 306, + 420, + 481, + 432 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 420, + 481, + 432 + ], + "spans": [ + { + "bbox": [ + 306, + 420, + 481, + 432 + ], + "type": "text", + "content": "4.1. Action Recognition with 3D Pose" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "spans": [ + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "text", + "content": "In this section, we discuss the performance of our method on AVA action recognition when using 3D pose cues, corresponding to Section 3.1. We train our 3D pose model, LART-pose, on the Kinetics-400 and AVA datasets. For Kinetics-400 tracks, we use MaskFeat [59] pseudo-ground-truth labels, and for AVA tracks, we train with ground-truth labels. We train a single-person model and a multi-person model to study the interactions of a person over time, and person-person interactions. Our method achieves " + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "inline_equation", + "content": "24.1\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "text", + "content": " in the multi-person " + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "inline_equation", + "content": "(N = 5)" + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "text", + "content": " setting (see Table 2). While this is well below the state-of-the-art performance, this is the first time a 3D model achieves more than " + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "inline_equation", + "content": "15.6\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "text", + "content": " on the AVA dataset. 
Note that the first reported performance on AVA was " + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "inline_equation", + "content": "15.6\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 438, + 545, + 616 + ], + "type": "text", + "content": " [20], and our 3D pose model is already above this baseline." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "spans": [ + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": "We evaluate the performance of our method on three AVA sub-categories (Object Manipulation " + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "inline_equation", + "content": "(OM)" + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": ", Person Interactions " + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "inline_equation", + "content": "(PI)" + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": ", and Person Movement " + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "inline_equation", + "content": "(PM)" + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": "). For the person-movement task, which includes actions such as running, standing, and sitting etc., the 3D pose model achieves " + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "inline_equation", + "content": "48.7\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": ". In contrast, MaskFeat performance in this sub-category is " + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "inline_equation", + "content": "58.6\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": ". 
This shows that the 3D pose model can perform about " + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "inline_equation", + "content": "80\\%" + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": " as well as a strong state-of-the-art" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 312, + 757 + ], + "type": "text", + "content": "644" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 55, + 81, + 539, + 247 + ], + "blocks": [ + { + "bbox": [ + 235, + 72, + 378, + 83 + ], + "lines": [ + { + "bbox": [ + 235, + 72, + 378, + 83 + ], + "spans": [ + { + "bbox": [ + 235, + 72, + 378, + 83 + ], + "type": "text", + "content": "AVA2.2 Performance with 3D pose" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 55, + 81, + 539, + 247 + ], + "lines": [ + { + "bbox": [ + 55, + 81, + 539, + 247 + ], + "spans": [ + { + "bbox": [ + 55, + 81, + 539, + 247 + ], + "type": "image", + "image_path": "85ec193b8f26c4303b14618908eddcd148fcd39b87a2daa70ee76cabeea1df84.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "lines": [ + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "spans": [ + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "text", + "content": "Figure 2. Class-wise performance on AVA: We show the performance of JMRN [45] and LART-pose on 60 AVA classes (average precision and relative gain). 
For pose-based classes such as standing, sitting, and walking, our 3D pose model can achieve above " + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "inline_equation", + "content": "60\\mathrm{mAP}" + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "text", + "content": " by looking only at the 3D poses over time. By modeling multiple trajectories as input, our model can understand the interactions among people. For example, activities such as dancing " + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "inline_equation", + "content": "(+30.1\\%)" + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "text", + "content": ", martial art " + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "inline_equation", + "content": "(+19.8\\%)" + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "text", + "content": " and hugging " + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "inline_equation", + "content": "(+62.1\\%)" + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "text", + "content": " have large relative gains over the state-of-the-art pose-only model. We only plot the gains if they are above or below " + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "inline_equation", + "content": "1\\mathrm{mAP}" + }, + { + "bbox": [ + 46, + 258, + 547, + 314 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "spans": [ + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "text", + "content": "model. 
In the person-person interaction category, our multi-person model achieves a gain of " + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "inline_equation", + "content": "+2.4\\mathrm{mAP}" + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "text", + "content": " compared to the single-person model, showing that the multi-person model was able to capture the person-person interactions. As shown in Fig. 2, for person-person interaction classes such as dancing, fighting, lifting a person, and handshaking, the multi-person model performs much better than the current state-of-the-art pose-only models. For example, in dancing the multi-person model gains " + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "inline_equation", + "content": "+39.8\\mathrm{mAP}" + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "text", + "content": ", and in hugging the relative gain is over " + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "inline_equation", + "content": "+200\\%" + }, + { + "bbox": [ + 46, + 335, + 287, + 453 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 46, + 456, + 287, + 551 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 456, + 287, + 551 + ], + "spans": [ + { + "bbox": [ + 46, + 456, + 287, + 551 + ], + "type": "text", + "content": "On the other hand, object manipulation has the lowest score among these three tasks. Since we do not model objects explicitly, the model has no information about which object is being manipulated and how it is associated with the person. However, since some tasks have a unique pose when interacting with objects, such as answering a phone or carrying an object, knowing the pose would help in identifying the action, which results in 13.3 mAP." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 47, + 562, + 251, + 575 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 562, + 251, + 575 + ], + "spans": [ + { + "bbox": [ + 47, + 562, + 251, + 575 + ], + "type": "text", + "content": "4.2. Actions from Appearance and 3D Pose" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 46, + 582, + 288, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 582, + 288, + 715 + ], + "spans": [ + { + "bbox": [ + 46, + 582, + 288, + 715 + ], + "type": "text", + "content": "While the 3D pose model can capture about " + }, + { + "bbox": [ + 46, + 582, + 288, + 715 + ], + "type": "inline_equation", + "content": "50\\%" + }, + { + "bbox": [ + 46, + 582, + 288, + 715 + ], + "type": "text", + "content": " performance compared to the state-of-the-art methods, it does not reason about the scene context. To model this, we concatenate the human-centric 3D representation with feature vectors from MaskFeat [59] as discussed in Section 3.2. MaskFeat has a MViT2 [34] as the backbone and it learns a strong representation about the scene and contextualized appearance. First, we pretrain this model on Kinetics-400 [25] and AVA [20] datasets, using the pseudo ground truth labels. Then, we fine-tune this model on AVA tracks using the ground-truth action annotation." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "spans": [ + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "text", + "content": "In Table 3 we compare our method with other state-of-the-art methods. Overall our method has a gain of " + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "inline_equation", + "content": "+2.8" + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "text", + "content": " mAP compared to Video MAE [15, 49]. 
In addition, if we train with extra annotations from AVA-Kinetics, our method achieves " + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "inline_equation", + "content": "42.3\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "text", + "content": ". Figure 3 shows the class-wise performance of our method compared to MaskFeat [58]. Overall, our method improves the performance on 56 of the 60 classes. For some classes (e.g. fighting, hugging, climbing) our method improves the performance by more than " + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "inline_equation", + "content": "+5\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "text", + "content": ". In Table 4, we evaluate our method on the AVA-Kinetics [33] dataset. Compared to the previous state-of-the-art methods, our method has a gain of " + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "inline_equation", + "content": "+1.5\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 335, + 545, + 477 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 304, + 479, + 545, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 479, + 545, + 586 + ], + "spans": [ + { + "bbox": [ + 304, + 479, + 545, + 586 + ], + "type": "text", + "content": "In Figure 4, we show qualitative results from MViT [12] and our method. As shown in the figure, having explicit access to the tracks of everyone in the scene allows us to make more confident predictions for actions like hugging and fighting, where it is easy to interpret close interactions. In addition, some actions like riding a horse and climbing can benefit from having access to explicit 3D poses over time. Finally, the amodal nature of 3D meshes also allows us to make better predictions during occlusions." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 306, + 595, + 432, + 609 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 595, + 432, + 609 + ], + "spans": [ + { + "bbox": [ + 306, + 595, + 432, + 609 + ], + "type": "text", + "content": "4.3. Ablation Experiments" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "spans": [ + { + "bbox": [ + 304, + 617, + 545, + 713 + ], + "type": "text", + "content": "Effect of tracking: Current works on action recognition do not explicitly associate people over time; they only use the mid-frame bounding box to predict the action. For example, when a person is running across the scene from left to right, a feature volume cropped at the mid-frame bounding box is unlikely to contain all the information about the person. However, if we can track this person, we would know their exact position over time and"
], + "type": "image", + "image_path": "05ca790dc3c8448142e73a409b5861ab1d5d07b2cdb5c11ced8e83aea992726e.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 48, + 258, + 548, + 304 + ], + "lines": [ + { + "bbox": [ + 48, + 258, + 548, + 304 + ], + "spans": [ + { + "bbox": [ + 48, + 258, + 548, + 304 + ], + "type": "text", + "content": "Figure 3. Comparison with State-of-the-art methods: We show class-level performance (average precision and relative gain) of MViT [12] (pretrained on MaskFeat [59]) and ours. Our methods achieve better performance compared to MViT on over 50 classes out of 60 classes. Especially, for actions like running, fighting, hugging, and sleeping etc., our method achieves over " + }, + { + "bbox": [ + 48, + 258, + 548, + 304 + ], + "type": "inline_equation", + "content": "+5" + }, + { + "bbox": [ + 48, + 258, + 548, + 304 + ], + "type": "text", + "content": " mAP. This shows the benefit of having access to explicit tracks and 3D poses for action recognition. We only plot the gains if it is above or below 1 mAP." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 48, + 321, + 286, + 520 + ], + "blocks": [ + { + "bbox": [ + 48, + 321, + 286, + 520 + ], + "lines": [ + { + "bbox": [ + 48, + 321, + 286, + 520 + ], + "spans": [ + { + "bbox": [ + 48, + 321, + 286, + 520 + ], + "type": "table", + "html": "
<table><tr><td>Model</td><td>Pretrain</td><td>mAP</td></tr>
<tr><td>SlowFast R101, 8×8 [16]</td><td>K400</td><td>23.8</td></tr>
<tr><td>MViTv1-B, 64×3 [12]</td><td></td><td>27.3</td></tr>
<tr><td>SlowFast 16×8 +NL [16]</td><td></td><td>27.5</td></tr>
<tr><td>X3D-XL [14]</td><td></td><td>27.4</td></tr>
<tr><td>MViTv1-B-24, 32×3 [12]</td><td>K600</td><td>28.7</td></tr>
<tr><td>Object Transformer [61]</td><td></td><td>31.0</td></tr>
<tr><td>ACAR R101, 8×8 +NL [39]</td><td></td><td>31.4</td></tr>
<tr><td>ACAR R101, 8×8 +NL [39]</td><td>K700</td><td>33.3</td></tr>
<tr><td>MViT-L↑312, 40×3 [34]</td><td>IN-21K+K400</td><td>31.6</td></tr>
<tr><td>MaskFeat [59]</td><td>K400</td><td>37.5</td></tr>
<tr><td>MaskFeat [59]</td><td>K600</td><td>38.8</td></tr>
<tr><td>Video MAE [15, 49]</td><td>K600</td><td>39.3</td></tr>
<tr><td>Video MAE [15, 49]</td><td>K400</td><td>39.5</td></tr>
<tr><td>LART</td><td>K400</td><td>42.3 (+2.8)</td></tr></table>
", + "image_path": "da56a2f97f950f2631f86aa36fd78258fd234babd1ee425c1737931edbc64da3.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 48, + 590, + 287, + 613 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 590, + 287, + 613 + ], + "spans": [ + { + "bbox": [ + 48, + 590, + 287, + 613 + ], + "type": "text", + "content": "that would give more localized information to the model to predict the action." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "spans": [ + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "type": "text", + "content": "To this end, first, we evaluate MaskFeat [58] with the same detection bounding boxes [39] used in our evaluations, and it results in " + }, + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "type": "inline_equation", + "content": "40.2\\mathrm{mAP}" + }, + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "type": "text", + "content": ". With this being the baseline for our system, we train a model which only uses MaskFeat features as input, but over time. This way we can measure the effect of tracking in action recognition. Unsurprisingly, as shown in Table 5 when training MaskFeat with tracking, the model performs " + }, + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "type": "inline_equation", + "content": "+1.2\\mathrm{mAP}" + }, + { + "bbox": [ + 48, + 617, + 288, + 713 + ], + "type": "text", + "content": " better than the baseline. This" + } + ] + } + ], + "index": 6 + }, + { + "type": "table", + "bbox": [ + 377, + 321, + 475, + 406 + ], + "blocks": [ + { + "bbox": [ + 48, + 532, + 288, + 577 + ], + "lines": [ + { + "bbox": [ + 48, + 532, + 288, + 577 + ], + "spans": [ + { + "bbox": [ + 48, + 532, + 288, + 577 + ], + "type": "text", + "content": "Table 3. Comparison with state-of-the-art methods on AVA 2.2:. 
Our model uses features from MaskFeat [59] with full crop inference. Compared to Video MAE [15,49] our method achieves a gain of " + }, + { + "bbox": [ + 48, + 532, + 288, + 577 + ], + "type": "inline_equation", + "content": "+2.8" + }, + { + "bbox": [ + 48, + 532, + 288, + 577 + ], + "type": "text", + "content": " mAP." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 377, + 321, + 475, + 406 + ], + "lines": [ + { + "bbox": [ + 377, + 321, + 475, + 406 + ], + "spans": [ + { + "bbox": [ + 377, + 321, + 475, + 406 + ], + "type": "table", + "html": "
<table><tr><td>Model</td><td>mAP</td></tr>
<tr><td>SlowFast [16]</td><td>32.98</td></tr>
<tr><td>ACAR [39]</td><td>36.36</td></tr>
<tr><td>RM [17]</td><td>37.34</td></tr>
<tr><td>LART</td><td>38.91</td></tr></table>
", + "image_path": "5e94199a64dc86781df44b8789451e5fdcc8db51706be0f3cbbf1c065551b43f.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 306, + 418, + 547, + 462 + ], + "lines": [ + { + "bbox": [ + 306, + 418, + 547, + 462 + ], + "spans": [ + { + "bbox": [ + 306, + 418, + 547, + 462 + ], + "type": "text", + "content": "Table 4. Performance on AVA-Kinetics Dataset. We evaluate the performance of our model on AVA-Kinetics [33] using a single model (no ensembles) and compare the performance with previous state-of-the-art single models." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 306, + 486, + 545, + 570 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 486, + 545, + 570 + ], + "spans": [ + { + "bbox": [ + 306, + 486, + 545, + 570 + ], + "type": "text", + "content": "clearly shows that the use of tracking is helpful in action recognition. Specifically, having access to the tracks help to localize a person over time, which in return provides a second order signal of how joint angles changes over time. In addition, knowing the identity of each person also gives a discriminative signal between people, which is helpful for learning interactions between people." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "spans": [ + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "type": "text", + "content": "Effect of Pose: The second contribution from our work is to use 3D pose information for action recognition. As discussed in Section 4.1 by only using 3D pose, we can achieve " + }, + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "type": "inline_equation", + "content": "24.1\\mathrm{mAP}" + }, + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "type": "text", + "content": " on AVA dataset. 
While it is hard to measure the exact contribution of 3D pose and 2D features, we compare our method with a model trained with only MaskFeat and tracking, where the only difference is the use of 3D pose. As shown in Table 5, the addition of 3D pose gives a gain of " + }, + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "type": "inline_equation", + "content": "+0.8\\mathrm{mAP}" + }, + { + "bbox": [ + 306, + 582, + 546, + 713 + ], + "type": "text", + "content": ". While this is a relatively small gain compared to the use of tracking, we believe with more robust and accurate 3D pose systems, this can be improved." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 749, + 313, + 757 + ], + "type": "text", + "content": "646" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 49, + 71, + 145, + 150 + ], + "blocks": [ + { + "bbox": [ + 49, + 71, + 145, + 150 + ], + "lines": [ + { + "bbox": [ + 49, + 71, + 145, + 150 + ], + "spans": [ + { + "bbox": [ + 49, + 71, + 145, + 150 + ], + "type": "image", + "image_path": "337ca2f9e863e1ecb545648fee6ab447558f3dd1ee6b89409412b49515688ff2.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + } + ], + "index": 0 + }, + { + "type": "image", + "bbox": [ + 149, + 72, + 245, + 150 + ], + "blocks": [ + { + "bbox": [ + 149, + 72, + 245, + 150 + ], + "lines": [ + { + "bbox": [ + 149, + 72, + 245, + 150 + ], + "spans": [ + { + "bbox": [ + 149, + 72, + 245, + 150 + ], + "type": "image", + "image_path": "d289c746e420c26ad5a0e5db9ce34964ffcab95a9663d2ac2f4273890c785b5f.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 249, + 72, + 345, + 
150 + ], + "blocks": [ + { + "bbox": [ + 249, + 72, + 345, + 150 + ], + "lines": [ + { + "bbox": [ + 249, + 72, + 345, + 150 + ], + "spans": [ + { + "bbox": [ + 249, + 72, + 345, + 150 + ], + "type": "image", + "image_path": "eedb596834c89f51bd72bbdf0c7f1cd4b5a77ab0afaecefb84e1ef8fc077aa67.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 348, + 72, + 444, + 150 + ], + "blocks": [ + { + "bbox": [ + 348, + 72, + 444, + 150 + ], + "lines": [ + { + "bbox": [ + 348, + 72, + 444, + 150 + ], + "spans": [ + { + "bbox": [ + 348, + 72, + 444, + 150 + ], + "type": "image", + "image_path": "74ddbfe81ea4d414ee9c7c28dfaccac26c06f7513bd025b4e9b486773f6b099f.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 449, + 72, + 544, + 150 + ], + "blocks": [ + { + "bbox": [ + 449, + 72, + 544, + 150 + ], + "lines": [ + { + "bbox": [ + 449, + 72, + 544, + 150 + ], + "spans": [ + { + "bbox": [ + 449, + 72, + 544, + 150 + ], + "type": "image", + "image_path": "cf85a997378b5af50f4364452d28b7e8a8bf6df4cf779b4634da5a758d5ac12b.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 49, + 152, + 145, + 232 + ], + "blocks": [ + { + "bbox": [ + 49, + 152, + 145, + 232 + ], + "lines": [ + { + "bbox": [ + 49, + 152, + 145, + 232 + ], + "spans": [ + { + "bbox": [ + 49, + 152, + 145, + 232 + ], + "type": "image", + "image_path": "160129cf6f122763c4c6f703e7286c00f6096f8af394d73c19001257d0ada9cc.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 46, + 240, + 547, + 369 + ], + "lines": [ + { + "bbox": [ + 46, + 240, + 547, + 369 + ], + "spans": [ + { + "bbox": [ + 46, + 240, + 547, + 369 + ], + "type": "text", + "content": "Figure 4. 
Qualitative Results: We show the predictions from MViT [12] and our model on validation samples from AVA v2.2. The person with the colored mesh indicates the person-of-interest for whom we recognise the action, and those with gray meshes indicate the supporting actors. The first two columns demonstrate the benefits of having access to the action-tubes of other people for action prediction. In the first column, the orange person is very close to the other person, in a hugging posture, which makes it easy to predict hugging with higher probability. Similarly, in the second column, the explicit interaction between the multiple people, and knowing that others are also fighting, increases the confidence for the fighting action for the green person over the 2D recognition model. The third and fourth columns show the benefit of explicitly modeling the 3D pose over time (using tracks) for action recognition: the yellow person is in a riding pose, and the purple person is looking upwards with legs on a vertical plane. The last column indicates the benefit of representing people with an amodal representation. Here the hand of the blue person is occluded, so the 2D recognition model does not see the action as a whole. However, SMPL meshes are amodal, so the hand is still present, which boosts the probability of predicting the action label for closing the door." 
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 149, + 152, + 245, + 232 + ], + "blocks": [ + { + "bbox": [ + 149, + 152, + 245, + 232 + ], + "lines": [ + { + "bbox": [ + 149, + 152, + 245, + 232 + ], + "spans": [ + { + "bbox": [ + 149, + 152, + 245, + 232 + ], + "type": "image", + "image_path": "91d7f4af78483263ba8204c1e8f2f3a56a4a491ccc3abb72a2622fcbc0b16d60.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 249, + 152, + 345, + 232 + ], + "blocks": [ + { + "bbox": [ + 249, + 152, + 345, + 232 + ], + "lines": [ + { + "bbox": [ + 249, + 152, + 345, + 232 + ], + "spans": [ + { + "bbox": [ + 249, + 152, + 345, + 232 + ], + "type": "image", + "image_path": "a0be3df41c898b8c712cb382bde8ee02c1270ed71dbce63b7e372334f7f5f5b0.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 349, + 152, + 444, + 232 + ], + "blocks": [ + { + "bbox": [ + 349, + 152, + 444, + 232 + ], + "lines": [ + { + "bbox": [ + 349, + 152, + 444, + 232 + ], + "spans": [ + { + "bbox": [ + 349, + 152, + 444, + 232 + ], + "type": "image", + "image_path": "3a1badd50d434728f13b224b79fbff5a84952774802f20019b53760fec7e8037.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 449, + 152, + 544, + 232 + ], + "blocks": [ + { + "bbox": [ + 449, + 152, + 544, + 232 + ], + "lines": [ + { + "bbox": [ + 449, + 152, + 544, + 232 + ], + "spans": [ + { + "bbox": [ + 449, + 152, + 544, + 232 + ], + "type": "image", + "image_path": "be1e9142532f97d8df0f7ce60c36363d4f6315613d416e49ccb32d766793c0ec.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "table", + "bbox": [ + 48, + 394, + 291, + 452 + ], + "blocks": [ + { + 
"bbox": [ + 48, + 394, + 291, + 452 + ], + "lines": [ + { + "bbox": [ + 48, + 394, + 291, + 452 + ], + "spans": [ + { + "bbox": [ + 48, + 394, + 291, + 452 + ], + "type": "table", + "html": "
<table><tr><td>Model</td><td>OM</td><td>PI</td><td>PM</td><td>mAP</td></tr>
<tr><td>MViT</td><td>32.2</td><td>41.1</td><td>58.6</td><td>40.2</td></tr>
<tr><td>MViT + Tracking</td><td>33.4</td><td>43.0</td><td>59.3</td><td>41.4 (+1.2)</td></tr>
<tr><td>MViT + Tracking + Pose</td><td>34.4</td><td>43.9</td><td>59.9</td><td>42.3 (+0.9)</td></tr></table>
", + "image_path": "7a21fe4f7c332ed2ecf6365b8692f6db5f9da4bbcff3ef54ff07421d3c251d4f.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 46, + 460, + 288, + 515 + ], + "lines": [ + { + "bbox": [ + 46, + 460, + 288, + 515 + ], + "spans": [ + { + "bbox": [ + 46, + 460, + 288, + 515 + ], + "type": "text", + "content": "Table 5. Ablation on the main components: We ablate the contribution of tracking and 3D poses using the same detections. First, we only use MViT features over the tracks to evaluate the contribution from tracking. Then we add 3D pose features to study the contribution from 3D pose for action recognition." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "text" + }, + { + "bbox": [ + 47, + 526, + 178, + 539 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 47, + 526, + 178, + 539 + ], + "spans": [ + { + "bbox": [ + 47, + 526, + 178, + 539 + ], + "type": "text", + "content": "4.4. Implementation details" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 46, + 546, + 287, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 46, + 546, + 287, + 713 + ], + "spans": [ + { + "bbox": [ + 46, + 546, + 287, + 713 + ], + "type": "text", + "content": "In both the pose model and pose+appearance model, we use the same vanilla transformer architecture [52] with 16 layers and 16 heads. For both models the embedding dimension is 512. We train with 0.4 mask ratio and at test time use the same mask token to in-fill the missing detections. The output token from the transformer is passed to a linear layer to predict the AVA action labels. We pre-train our model on kinetics for 30 epochs with MViT [12] predictions as pseudo-supervision and then fine-tune on AVA with AVA ground truth labels for few epochs. We train our models with AdamW [36] with base learning rate of 0.001 and betas = (0.9, 0.95). We use cosine annealing scheduling with a linear warm-up. 
For additional details please see the Appendix." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 306, + 389, + 378, + 401 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 389, + 378, + 401 + ], + "spans": [ + { + "bbox": [ + 306, + 389, + 378, + 401 + ], + "type": "text", + "content": "5. Conclusion" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "spans": [ + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "type": "text", + "content": "In this paper, we investigated the benefits of 3D tracking and pose for the task of human action recognition. By leveraging a state-of-the-art method for person tracking, PHALP [43], we trained a transformer model that takes as input tokens the state of the person at every time instance. We investigated two design choices for the content of the token. First, when using information about the 3D pose of the person, we outperform previous baselines that rely on pose information for action recognition by " + }, + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "type": "inline_equation", + "content": "8.2\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "type": "text", + "content": " on the AVA v2.2 dataset. Then, we also proposed fusing the pose information with contextualized appearance information coming from a typical action recognition backbone [12] applied over the tracklet trajectory. With this model, we improved upon the previous state-of-the-art on AVA v2.2 by " + }, + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "type": "inline_equation", + "content": "2.8\\mathrm{mAP}" + }, + { + "bbox": [ + 304, + 418, + 545, + 670 + ], + "type": "text", + "content": ". There are many avenues for future work and further improvements for action recognition. 
For example, one could achieve better performance for more fine-grained tasks by more expressive 3D reconstruction of the human body (e.g., using the SMPL-X model [40] to capture also the hands), and by explicit modeling of the objects in the scene (potentially by extending the \"tubes\" idea to objects)." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 305, + 677, + 545, + 713 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 677, + 545, + 713 + ], + "spans": [ + { + "bbox": [ + 305, + 677, + 545, + 713 + ], + "type": "text", + "content": "Acknowledgements: This work was supported by the FAIR-BAIR program as well as ONR MURI (N00014-21-1-2801). We thank Shubham Goel, for helpful discussions." + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 297, + 748, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 748, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 297, + 748, + 312, + 757 + ], + "type": "text", + "content": "647" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 49, + 72, + 105, + 83 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 72, + 105, + 83 + ], + "spans": [ + { + "bbox": [ + 49, + 72, + 105, + 83 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 48, + 91, + 286, + 720 + ], + "type": "list", + "angle": 0, + "index": 18, + "blocks": [ + { + "bbox": [ + 53, + 91, + 286, + 123 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 91, + 286, + 123 + ], + "spans": [ + { + "bbox": [ + 53, + 91, + 286, + 123 + ], + "type": "text", + "content": "[1] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. ViViT: A video vision transformer. In ICCV, 2021." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 53, + 124, + 286, + 167 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 124, + 286, + 167 + ], + "spans": [ + { + "bbox": [ + 53, + 124, + 286, + 167 + ], + "type": "text", + "content": "[2] Fabien Baradel, Thibault Groueix, Philippe Weinzaepfel, Romain Brégier, Yannis Kalantidis, and Grégory Rogez. Leveraging MoCap data for human mesh recovery. In 3DV, 2021." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 53, + 168, + 286, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 168, + 286, + 191 + ], + "spans": [ + { + "bbox": [ + 53, + 168, + 286, + 191 + ], + "type": "text", + "content": "[3] Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. Tracking without bells and whistles. In ICCV, 2019." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 53, + 191, + 286, + 223 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 191, + 286, + 223 + ], + "spans": [ + { + "bbox": [ + 53, + 191, + 286, + 223 + ], + "type": "text", + "content": "[4] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is space-time attention all you need for video understanding? In ICML, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 53, + 224, + 286, + 267 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 224, + 286, + 267 + ], + "spans": [ + { + "bbox": [ + 53, + 224, + 286, + 267 + ], + "type": "text", + "content": "[5] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it SMPL: Automatic estimation of 3D human pose and shape from a single image. In ECCV, 2016." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 53, + 268, + 286, + 300 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 268, + 286, + 300 + ], + "spans": [ + { + "bbox": [ + 53, + 268, + 286, + 300 + ], + "type": "text", + "content": "[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 53, + 301, + 286, + 345 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 301, + 286, + 345 + ], + "spans": [ + { + "bbox": [ + 53, + 301, + 286, + 345 + ], + "type": "text", + "content": "[7] Vasileios Choutas, Lea Müller, Chun-Hao P Huang, Siyu Tang, Dimitrios Tzionas, and Michael J Black. Accurate 3D body shape regression using metric and semantic attributes. In CVPR, 2022." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 53, + 346, + 286, + 378 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 346, + 286, + 378 + ], + "spans": [ + { + "bbox": [ + 53, + 346, + 286, + 378 + ], + "type": "text", + "content": "[8] Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, and Cordelia Schmid. PoTion: Pose motion representation for action recognition. In CVPR, 2018." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 53, + 379, + 286, + 434 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 53, + 379, + 286, + 434 + ], + "spans": [ + { + "bbox": [ + 53, + 379, + 286, + 434 + ], + "type": "text", + "content": "[9] Piotr Dollár, Vincent Rabaud, Garrison Cottrell, and Serge Belongie. Behavior recognition via sparse spatio-temporal features. In 2005 IEEE international workshop on visual surveillance and performance evaluation of tracking and surveillance, 2005." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 48, + 434, + 286, + 499 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 434, + 286, + 499 + ], + "spans": [ + { + "bbox": [ + 48, + 434, + 286, + 499 + ], + "type": "text", + "content": "[10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 2021." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 48, + 500, + 286, + 522 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 500, + 286, + 522 + ], + "spans": [ + { + "bbox": [ + 48, + 500, + 286, + 522 + ], + "type": "text", + "content": "[11] Alexei A Efros, Alexander C Berg, Greg Mori, and Jitendra Malik. Recognizing action at a distance. In ICCV, 2003." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 48, + 522, + 286, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 522, + 286, + 555 + ], + "spans": [ + { + "bbox": [ + 48, + 522, + 286, + 555 + ], + "type": "text", + "content": "[12] Haoqi Fan, Bo Xiong, Karttikeya Mangalam, Yanghao Li, Zhicheng Yan, Jitendra Malik, and Christoph Feichtenhofer. Multiscale vision transformers. In ICCV, 2021." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 48, + 555, + 286, + 588 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 555, + 286, + 588 + ], + "spans": [ + { + "bbox": [ + 48, + 555, + 286, + 588 + ], + "type": "text", + "content": "[13] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 48, + 589, + 286, + 611 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 589, + 286, + 611 + ], + "spans": [ + { + "bbox": [ + 48, + 589, + 286, + 611 + ], + "type": "text", + "content": "[14] Christoph Feichtenhofer. X3D: Expanding architectures for efficient video recognition. In CVPR, 2020." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 48, + 611, + 286, + 643 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 611, + 286, + 643 + ], + "spans": [ + { + "bbox": [ + 48, + 611, + 286, + 643 + ], + "type": "text", + "content": "[15] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. In NeurIPS, 2022." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 48, + 644, + 286, + 677 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 644, + 286, + 677 + ], + "spans": [ + { + "bbox": [ + 48, + 644, + 286, + 677 + ], + "type": "text", + "content": "[16] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In ICCV, 2019." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 48, + 678, + 286, + 720 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 678, + 286, + 720 + ], + "spans": [ + { + "bbox": [ + 48, + 678, + 286, + 720 + ], + "type": "text", + "content": "[17] Yutong Feng, Jianwen Jiang, Ziyuan Huang, Zhiwu Qing, Xiang Wang, Shiwei Zhang, Mingqian Tang, and Yue Gao. Relation modeling in spatio-temporal action localization. arXiv preprint arXiv:2106.08061, 2021." 
+ } + ] + } + ], + "index": 17 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 307, + 73, + 545, + 713 + ], + "type": "list", + "angle": 0, + "index": 36, + "blocks": [ + { + "bbox": [ + 307, + 73, + 544, + 94 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 73, + 544, + 94 + ], + "spans": [ + { + "bbox": [ + 307, + 73, + 544, + 94 + ], + "type": "text", + "content": "[18] Georgia Gkioxari and Jitendra Malik. Finding action tubes. In CVPR, 2015." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 307, + 95, + 545, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 95, + 545, + 140 + ], + "spans": [ + { + "bbox": [ + 307, + 95, + 545, + 140 + ], + "type": "text", + "content": "[19] Shubham Goel, Georgios Pavlakos, Jathushan Rajasegaran, Angjoo Kanazawa, and Jitendra Malik. Humans in 4D: Reconstructing and tracking humans with transformers. arXiv preprint (forthcoming), 2023." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 307, + 141, + 545, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 141, + 545, + 205 + ], + "spans": [ + { + "bbox": [ + 307, + 141, + 545, + 205 + ], + "type": "text", + "content": "[20] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, Cordelia Schmid, and Jitendra Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In CVPR, 2018." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 307, + 206, + 545, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 206, + 545, + 228 + ], + "spans": [ + { + "bbox": [ + 307, + 206, + 545, + 228 + ], + "type": "text", + "content": "[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 307, + 229, + 545, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 229, + 545, + 262 + ], + "spans": [ + { + "bbox": [ + 307, + 229, + 545, + 262 + ], + "type": "text", + "content": "[22] Gunnar Johansson. Visual perception of biological motion and a model for its analysis. Perception & psychophysics, 14(2):201-211, 1973." + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 307, + 264, + 545, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 264, + 545, + 297 + ], + "spans": [ + { + "bbox": [ + 307, + 264, + 545, + 297 + ], + "type": "text", + "content": "[23] Angjoo Kanazawa, Michael J Black, David W Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In CVPR, 2018." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 307, + 297, + 545, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 297, + 545, + 330 + ], + "spans": [ + { + "bbox": [ + 307, + 297, + 545, + 330 + ], + "type": "text", + "content": "[24] Angjoo Kanazawa, Jason Y Zhang, Panna Felsen, and Jitendra Malik. Learning 3D human dynamics from video. In CVPR, 2019." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 307, + 331, + 545, + 386 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 331, + 545, + 386 + ], + "spans": [ + { + "bbox": [ + 307, + 331, + 545, + 386 + ], + "type": "text", + "content": "[25] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017." 
+ } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 307, + 388, + 545, + 410 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 388, + 545, + 410 + ], + "spans": [ + { + "bbox": [ + 307, + 388, + 545, + 410 + ], + "type": "text", + "content": "[26] Machiel Keestra. Understanding human action. Integrating meanings, mechanisms, causes, and contexts. 2015." + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 307, + 411, + 545, + 443 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 411, + 545, + 443 + ], + "spans": [ + { + "bbox": [ + 307, + 411, + 545, + 443 + ], + "type": "text", + "content": "[27] Alexander Klaser, Marcin Marszalek, and Cordelia Schmid. A spatio-temporal descriptor based on 3D-gradients. In BMVC, 2008." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 307, + 444, + 545, + 477 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 444, + 545, + 477 + ], + "spans": [ + { + "bbox": [ + 307, + 444, + 545, + 477 + ], + "type": "text", + "content": "[28] Muhammed Kocabas, Nikos Athanasiou, and Michael J Black. VIBE: Video inference for human body pose and shape estimation. In CVPR, 2020." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 307, + 478, + 545, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 478, + 545, + 511 + ], + "spans": [ + { + "bbox": [ + 307, + 478, + 545, + 511 + ], + "type": "text", + "content": "[29] Muhammed Kocabas, Chun-Hao P Huang, Otmar Hilliges, and Michael J Black. PARE: Part attention regressor for 3D human body estimation. In ICCV, 2021." 
+ } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 307, + 512, + 545, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 512, + 545, + 555 + ], + "spans": [ + { + "bbox": [ + 307, + 512, + 545, + 555 + ], + "type": "text", + "content": "[30] Muhammed Kocabas, Chun-Hao P Huang, Joachim Tesch, Lea Muller, Otmar Hilliges, and Michael J Black. SPEC: Seeing people in the wild with an estimated camera. In ICCV, 2021." + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 307, + 556, + 545, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 556, + 545, + 590 + ], + "spans": [ + { + "bbox": [ + 307, + 556, + 545, + 590 + ], + "type": "text", + "content": "[31] Nikos Kolotouros, Georgios Pavlakos, Michael J Black, and Kostas Daniilidis. Learning to reconstruct 3D human pose and shape via model-fitting in the loop. In ICCV, 2019." + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 307, + 591, + 545, + 624 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 591, + 545, + 624 + ], + "spans": [ + { + "bbox": [ + 307, + 591, + 545, + 624 + ], + "type": "text", + "content": "[32] Nikos Kolotouros, Georgios Pavlakos, Dinesh Jayaraman, and Kostas Daniilidis. Probabilistic modeling for human mesh recovery. In ICCV, 2021." + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 307, + 624, + 545, + 669 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 624, + 545, + 669 + ], + "spans": [ + { + "bbox": [ + 307, + 624, + 545, + 669 + ], + "type": "text", + "content": "[33] Ang Li, Meghan Thotakuri, David A Ross, João Carreira, Alexander Vostrikov, and Andrew Zisserman. Theava-kinetics localized human actions video dataset. arXiv preprint arXiv:2005.00214, 2020." 
+ } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 307, + 670, + 545, + 713 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 670, + 545, + 713 + ], + "spans": [ + { + "bbox": [ + 307, + 670, + 545, + 713 + ], + "type": "text", + "content": "[34] Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. MViTv2: Improved multiscale vision transformers for classification and detection. In CVPR, 2022." + } + ] + } + ], + "index": 35 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "text", + "content": "648" + } + ] + } + ], + "index": 37 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 48, + 72, + 287, + 580 + ], + "type": "list", + "angle": 0, + "index": 15, + "blocks": [ + { + "bbox": [ + 48, + 72, + 287, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 48, + 72, + 287, + 116 + ], + "spans": [ + { + "bbox": [ + 48, + 72, + 287, + 116 + ], + "type": "text", + "content": "[35] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. SMPL: A skinned multiperson linear model. ACM Transactions on Graphics (TOG), 34(6):1-16, 2015." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 49, + 118, + 286, + 140 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 118, + 286, + 140 + ], + "spans": [ + { + "bbox": [ + 49, + 118, + 286, + 140 + ], + "type": "text", + "content": "[36] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 49, + 141, + 287, + 173 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 141, + 287, + 173 + ], + "spans": [ + { + "bbox": [ + 49, + 141, + 287, + 173 + ], + "type": "text", + "content": "[37] Tim Meinhardt, Alexander Kirillov, Laura Leal-Taixe, and Christoph Feichtenhofer. TrackFormer: Multi-object tracking with transformers. In CVPR, 2022." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 49, + 175, + 287, + 196 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 175, + 287, + 196 + ], + "spans": [ + { + "bbox": [ + 49, + 175, + 287, + 196 + ], + "type": "text", + "content": "[38] Daniel Neimark, Omri Bar, Maya Zohar, and Dotan Asselmann. Video transformer network. In ICCV, 2021." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 49, + 198, + 287, + 230 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 198, + 287, + 230 + ], + "spans": [ + { + "bbox": [ + 49, + 198, + 287, + 230 + ], + "type": "text", + "content": "[39] Junting Pan, Siyu Chen, Mike Zheng Shou, Yu Liu, Jing Shao, and Hongsheng Li. Actor-context-actor relation network for spatio-temporal action localization. In CVPR, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 49, + 232, + 287, + 275 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 232, + 287, + 275 + ], + "spans": [ + { + "bbox": [ + 49, + 232, + 287, + 275 + ], + "type": "text", + "content": "[40] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3D hands, face, and body from a single image. In CVPR, 2019." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 49, + 277, + 286, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 277, + 286, + 298 + ], + "spans": [ + { + "bbox": [ + 49, + 277, + 286, + 298 + ], + "type": "text", + "content": "[41] Georgios Pavlakos, Jitendra Malik, and Angjoo Kanazawa. Human mesh recovery from multiple shots. In CVPR, 2022." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 49, + 300, + 286, + 331 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 300, + 286, + 331 + ], + "spans": [ + { + "bbox": [ + 49, + 300, + 286, + 331 + ], + "type": "text", + "content": "[42] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people with 3D representations. In NeurIPS, 2021." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 49, + 333, + 287, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 333, + 287, + 376 + ], + "spans": [ + { + "bbox": [ + 49, + 333, + 287, + 376 + ], + "type": "text", + "content": "[43] Jathushan Rajasegaran, Georgios Pavlakos, Angjoo Kanazawa, and Jitendra Malik. Tracking people by predicting 3D appearance, location and pose. In CVPR, 2022." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 49, + 378, + 287, + 410 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 378, + 287, + 410 + ], + "spans": [ + { + "bbox": [ + 49, + 378, + 287, + 410 + ], + "type": "text", + "content": "[44] Davis Rempe, Tolga Birdal, Aaron Hertzmann, Jimei Yang, Srinath Sridhar, and Leonidas J Guibas. HuMoR: 3D human motion model for robust pose estimation. In ICCV, 2021." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 49, + 412, + 287, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 412, + 287, + 444 + ], + "spans": [ + { + "bbox": [ + 49, + 412, + 287, + 444 + ], + "type": "text", + "content": "[45] Anshul Shah, Shlok Mishra, Ankan Bansal, Jun-Cheng Chen, Rama Chellappa, and Abhinav Shrivastava. Pose and joint-aware action recognition. In WACV, 2022." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 49, + 446, + 287, + 478 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 446, + 287, + 478 + ], + "spans": [ + { + "bbox": [ + 49, + 446, + 287, + 478 + ], + "type": "text", + "content": "[46] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. NIPS, 2014." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 49, + 479, + 287, + 512 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 479, + 287, + 512 + ], + "spans": [ + { + "bbox": [ + 49, + 479, + 287, + 512 + ], + "type": "text", + "content": "[47] Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, and Cordelia Schmid. Relational action forecasting. In CVPR, 2019." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 49, + 514, + 286, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 514, + 286, + 545 + ], + "spans": [ + { + "bbox": [ + 49, + 514, + 286, + 545 + ], + "type": "text", + "content": "[48] Graham W Taylor, Rob Fergus, Yann LeCun, and Christoph Bregler. Convolutional learning of spatio-temporal features. In ECCV, 2010." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 49, + 547, + 286, + 580 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 49, + 547, + 286, + 580 + ], + "spans": [ + { + "bbox": [ + 49, + 547, + 286, + 580 + ], + "type": "text", + "content": "[49] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. 
VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training. In NeurIPS, 2022." + } + ] + } + ], + "index": 14 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 307, + 73, + 545, + 589 + ], + "type": "list", + "angle": 0, + "index": 32, + "blocks": [ + { + "bbox": [ + 307, + 73, + 545, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 73, + 545, + 105 + ], + "spans": [ + { + "bbox": [ + 307, + 73, + 545, + 105 + ], + "type": "text", + "content": "[50] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 307, + 107, + 545, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 107, + 545, + 149 + ], + "spans": [ + { + "bbox": [ + 307, + 107, + 545, + 149 + ], + "type": "text", + "content": "[51] Gül Varol, Ivan Laptev, Cordelia Schmid, and Andrew Zisserman. Synthetic humans for action recognition from unseen viewpoints. International Journal of Computer Vision, 129(7):2264-2287, 2021." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 307, + 152, + 545, + 184 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 152, + 545, + 184 + ], + "spans": [ + { + "bbox": [ + 307, + 152, + 545, + 184 + ], + "type": "text", + "content": "[52] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 307, + 186, + 545, + 206 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 186, + 545, + 206 + ], + "spans": [ + { + "bbox": [ + 307, + 186, + 545, + 206 + ], + "type": "text", + "content": "[53] Heng Wang, A. Klaser, C. Schmid, and Cheng-Lin Liu. Action recognition by dense trajectories. 
In CVPR, 2011." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 307, + 207, + 545, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 207, + 545, + 228 + ], + "spans": [ + { + "bbox": [ + 307, + 207, + 545, + 228 + ], + "type": "text", + "content": "[54] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, 2013." + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 307, + 229, + 545, + 251 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 229, + 545, + 251 + ], + "spans": [ + { + "bbox": [ + 307, + 229, + 545, + 251 + ], + "type": "text", + "content": "[55] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 307, + 253, + 545, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 253, + 545, + 274 + ], + "spans": [ + { + "bbox": [ + 307, + 253, + 545, + 274 + ], + "type": "text", + "content": "[56] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In ECCV, 2018." + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 307, + 276, + 545, + 319 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 276, + 545, + 319 + ], + "spans": [ + { + "bbox": [ + 307, + 276, + 545, + 319 + ], + "type": "text", + "content": "[57] Zelun Wang and Jyh-Charn Liu. Translating math formula images to latex sequences using deep neural networks with sequence-level training. International Journal on Document Analysis and Recognition (IJDAR), 24(1):63-75, 2021." 
+ } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 307, + 320, + 545, + 363 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 320, + 545, + 363 + ], + "spans": [ + { + "bbox": [ + 307, + 320, + 545, + 363 + ], + "type": "text", + "content": "[58] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. arXiv preprint arXiv:2112.09133, 2021." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 307, + 365, + 545, + 398 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 365, + 545, + 398 + ], + "spans": [ + { + "bbox": [ + 307, + 365, + 545, + 398 + ], + "type": "text", + "content": "[59] Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In CVPR, 2022." + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 307, + 399, + 545, + 430 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 399, + 545, + 430 + ], + "spans": [ + { + "bbox": [ + 307, + 399, + 545, + 430 + ], + "type": "text", + "content": "[60] Philippe Weinzaepfel and Grégory Rogez. Mimetics: Towards understanding human actions out of context. *IJCV*, 2021." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 307, + 433, + 545, + 453 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 433, + 545, + 453 + ], + "spans": [ + { + "bbox": [ + 307, + 433, + 545, + 453 + ], + "type": "text", + "content": "[61] Chao-Yuan Wu and Philipp Krahenbuhl. Towards long-form video understanding. In CVPR, 2021." 
+ } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 307, + 456, + 545, + 487 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 456, + 545, + 487 + ], + "spans": [ + { + "bbox": [ + 307, + 456, + 545, + 487 + ], + "type": "text", + "content": "[62] Yuliang Xiu, Jiefeng Li, Haoyu Wang, Yinghong Fang, and Cewu Lu. Pose Flow: Efficient online pose tracking. In BMVC, 2018." + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 307, + 490, + 545, + 511 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 490, + 545, + 511 + ], + "spans": [ + { + "bbox": [ + 307, + 490, + 545, + 511 + ], + "type": "text", + "content": "[63] An Yan, Yali Wang, Zhifeng Li, and Yu Qiao. PA3D: Pose-action 3D machine for video recognition. In CVPR, 2019." + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 307, + 513, + 545, + 555 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 513, + 545, + 555 + ], + "spans": [ + { + "bbox": [ + 307, + 513, + 545, + 555 + ], + "type": "text", + "content": "[64] Hongwen Zhang, Yating Tian, Xinchi Zhou, Wanli Ouyang, Yebin Liu, Limin Wang, and Zhenan Sun. PyMAF: 3D human pose and shape regression with pyramidal mesh alignment feedback loop. In ICCV, 2021." + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 307, + 557, + 545, + 589 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 557, + 545, + 589 + ], + "spans": [ + { + "bbox": [ + 307, + 557, + 545, + 589 + ], + "type": "text", + "content": "[65] Yubo Zhang, Pavel Tokmakov, Martial Hebert, and Cordelia Schmid. A structured model for action detection. In CVPR, 2019." 
+ } + ] + } + ], + "index": 31 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "spans": [ + { + "bbox": [ + 298, + 749, + 312, + 757 + ], + "type": "text", + "content": "649" + } + ] + } + ], + "index": 33 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file diff --git a/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_content_list.json b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b20aa1801e666c2babf00ef2692c086d3ce8c68f --- /dev/null +++ b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_content_list.json @@ -0,0 +1,2039 @@ +[ + { + "type": "text", + "text": "On the effectiveness of partial variance reduction in federated learning with heterogeneous data", + "text_level": 1, + "bbox": [ + 122, + 137, + 873, + 178 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Bo Li*", + "bbox": [ + 183, + 203, + 240, + 218 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Mikkel N. Schmidt", + "bbox": [ + 275, + 203, + 431, + 218 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Tommy S. 
Alström", + "bbox": [ + 467, + 203, + 618, + 219 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Technical University of Denmark", + "bbox": [ + 270, + 220, + 532, + 235 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "{blia, mnsc, tsal}@dtu.dk", + "bbox": [ + 287, + 237, + 505, + 251 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Sebastian U. Stich", + "bbox": [ + 665, + 203, + 811, + 219 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "CISPA", + "bbox": [ + 707, + 220, + 769, + 233 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "stich@cispa.de", + "bbox": [ + 677, + 239, + 801, + 251 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Abstract", + "text_level": 1, + "bbox": [ + 250, + 298, + 327, + 313 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide proof for the convergence rate of our algorithm.", + "bbox": [ + 94, + 327, + 482, + 587 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "1. 
Introduction", + "text_level": 1, + "bbox": [ + 95, + 612, + 225, + 627 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Federated learning (FL) is emerging as an essential distributed learning paradigm in large-scale machine learning. Unlike in traditional machine learning, where a model is trained on the collected centralized data, in federated learning, each client (e.g. phones and institutions) learns a model with its local data. A centralized model is then obtained by aggregating the updates from all participating clients without ever requesting the client data, thereby ensuring a certain level of user privacy [13, 17]. Such an algorithm is especially beneficial for tasks where the data is sensitive, e.g. chemical hazards detection and diseases diagnosis [33].", + "bbox": [ + 94, + 636, + 482, + 786 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "Two primary challenges in federated learning are i) han", + "bbox": [ + 115, + 786, + 482, + 800 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/f01bb3a5e13038a42a26eed4391142bd7b625074297f83b4a6ca324120a2ed32.jpg", + "image_caption": [ + "Feature extractor", + "□Classifier" + ], + "image_footnote": [], + "bbox": [ + 515, + 298, + 636, + 410 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/3936e133bfbce6b47cfb2ce625a65c688b0f4b45e0d1530d5665474e90c74615.jpg", + "image_caption": [], + "image_footnote": [], + "bbox": [ + 636, + 299, + 895, + 365 + ], + "page_idx": 0 + }, + { + "type": "image", + "img_path": "images/5364e44dbf026e26321a15d410f7ab6c500a8dd9f4cb0d75d3b5ecd31e06cb78.jpg", + "image_caption": [ + "Variance reduction", + "Communication rounds", + "Figure 1. Our proposed FedPVR framework with the performance (communicated parameters per round client $\\longleftrightarrow$ server). Smaller $\\alpha$ corresponds to higher data heterogeneity. 
Our method achieves a better speedup than existing approaches by transmitting a slightly larger number of parameters than FedAvg." + ], + "image_footnote": [], + "bbox": [ + 636, + 365, + 895, + 437 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "dling data heterogeneity across clients [13] and ii) limiting the cost of communication between the server and clients [10]. In this setting, FedAvg [17] is one of the most widely used schemes: A server broadcasts its model to clients, which then update the model using their local data in a series of steps before sending their individual model to the server, where the models are aggregated by averaging the parameters. The process is repeated for multiple communication rounds. While it has shown great success in many applications, it tends to achieve subpar accuracy and convergence when the data are heterogeneous [14, 24, 31].", + "bbox": [ + 510, + 545, + 900, + 695 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "The slow and sometimes unstable convergence of FedAvg can be caused by client drift [14] brought on by data heterogeneity. Numerous efforts have been made to improve FedAvg's performance in this setting. Prior works attempt to mitigate client drift by penalizing the distance between a client model and the server model [20, 31] or by performing variance reduction techniques while updating client models [1, 14, 32]. 
These works demonstrate fast convergence on convex problems or for simple neural networks; however, their performance on deep neural", + "bbox": [ + 510, + 695, + 900, + 833 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "CVF", + "bbox": [ + 95, + 9, + 171, + 46 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.", + "bbox": [ + 228, + 5, + 818, + 21 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "Except for this watermark, it is identical to the accepted version;", + "bbox": [ + 317, + 21, + 729, + 34 + ], + "page_idx": 0 + }, + { + "type": "header", + "text": "the final published version of the proceedings is available on IEEE Xplore.", + "bbox": [ + 287, + 34, + 761, + 48 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "* Work done while at CISPA", + "bbox": [ + 119, + 809, + 272, + 820 + ], + "page_idx": 0 + }, + { + "type": "page_footnote", + "text": "‡ CISPA Helmholtz Center for Information Security", + "bbox": [ + 119, + 820, + 391, + 832 + ], + "page_idx": 0 + }, + { + "type": "page_number", + "text": "3964", + "bbox": [ + 468, + 870, + 500, + 881 + ], + "page_idx": 0 + }, + { + "type": "text", + "text": "networks, which are state-of-the-art for many centralized learning tasks [11, 34], has yet to be well explored. Adapting techniques that perform well on convex problems to neural networks is non-trivial [7] due to their \"intriguing properties\" [38] such as over-parametrization and permutation symmetries.", + "bbox": [ + 95, + 102, + 482, + 184 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "To overcome the above issues, we revisit the FedAvg algorithm with a deep neural network (VGG-11 [34]) under the assumption of data heterogeneity and full client participation. Specifically, we investigate which layers in a neural network are mostly influenced by data heterogeneity. 
We define drift diversity, which measures the diversity of the directions and scales of the averaged gradients across clients per communication round. We observe that in the non-IID scenario, the deeper layers, especially the final classification layer, have the highest diversity across clients compared to an IID setting. This indicates that FedAvg learns good feature representations even in the non-IID scenario [5] and that the significant variation of the deeper layers across clients is a primary cause of FedAvg's subpar performance.", + "bbox": [ + 95, + 184, + 482, + 375 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Based on the above observations, we propose to align the classification layers across clients using variance reduction. Specifically, we estimate the average updating direction of the classifiers (the last several fully connected layers) at the client $c_{i}$ and server level $c$ and use their difference as a control variate [14] to reduce the variance of the classifiers across clients. We analyze our proposed algorithm and derive a convergence rate bound.", + "bbox": [ + 95, + 375, + 482, + 484 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We perform experiments on the popular federated learning benchmark datasets CIFAR10 [19] and CIFAR100 [19] using two types of neural networks, VGG-11 [34] and ResNet-8 [11], and different levels of data heterogeneity across clients. We experimentally show that we require fewer communication rounds compared to the existing methods [14, 17, 31] to achieve the same accuracy while transmitting a similar or slightly larger number of parameters between server and clients than FedAvg (see Fig. 1). With a (large) fixed number of communication rounds, our method achieves on-par or better top-1 accuracy, and in some settings it even outperforms centralized learning. 
Using conformal prediction [3], we show how performance can be improved further using adaptive prediction sets.", + "bbox": [ + 95, + 484, + 482, + 674 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "We show that applying variance reduction on the last layers increases the diversity of the feature extraction layers. This diversity in the feature extraction layers may give each client more freedom to learn richer feature representations, and the uniformity in the classifier then ensures a less biased decision. We summarize our contributions here:", + "bbox": [ + 95, + 675, + 482, + 755 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "- We present our algorithm for partial variance-reduced federated learning (FedPVR). We experimentally demonstrate that the key to the success of our algorithm is the diversity between the feature extraction layers and the alignment between the classifiers.", + "bbox": [ + 114, + 764, + 482, + 832 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "- We prove the convergence rate in the convex settings and non-convex settings, precisely characterize its weak dependence on data-heterogeneity measures and show that FedPVR provably converges as fast as the centralized SGD baseline in most practical relevant cases.", + "bbox": [ + 531, + 102, + 900, + 184 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "- We experimentally show that our algorithm is more communication efficient than previous works across various levels of data heterogeneity, datasets, and neural network architectures. In some cases where data heterogeneity exists, the proposed algorithm even performs slightly better than centralized learning.", + "bbox": [ + 531, + 194, + 900, + 276 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2. Related work", + "text_level": 1, + "bbox": [ + 512, + 298, + 648, + 312 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "2.1. 
Federated learning", + "text_level": 1, + "bbox": [ + 512, + 320, + 692, + 336 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Federated learning (FL) is a fast-growing field [13, 40]. We mainly describe FL methods in non-IID settings where the data is distributed heterogeneously across clients. Among the existing approaches, FedAvg [25] is the de facto optimization technique. Despite its solid empirical performances in IID settings [13, 25], it tends to achieve a subpar accuracy-communication trade-off in non-IID scenarios.", + "bbox": [ + 512, + 340, + 900, + 435 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Many works attempt to tackle FL when data is heterogeneous across clients [1, 9, 14, 31, 39, 42]. FedProx [31] proposes a temperature parameter and proximal regularization term to control the divergence between client and server models. However, the proximal term does not bring the alignment between the global and local optimal points [1]. Similarly, some works control the update direction by introducing client-dependent control variate [1, 14, 17, 27, 32] that is also communicated between the server and clients. They have achieved a much faster convergence rate, but their performance in a non-convex setup, especially in deep neural networks, such as ResNet [11] and VGG [34], is not well explored. Besides, they suffer from a higher communication cost due to the transmission of the extra control variates, which may be a critical issue for resources-limited IoT mobile devices [10]. Among these methods, SCAFFOLD [14] is the most closely related method to ours, and we give a more detailed comparison in section 3 and 5.", + "bbox": [ + 512, + 437, + 900, + 681 + ], + "page_idx": 1 + }, + { + "type": "text", + "text": "Another line of work develops FL algorithms based on the characteristics, such as expressive feature representations [28] of neural networks. Collins et al. 
[5] show that FedAvg is powerful in learning common data representations from clients' data. FedBabu [29], TCT [41], and CCVR [24] propose to improve FL performance by finetuning the classifiers with a standalone dataset or features that are simulated based on the client models. However, preparing a standalone dataset/features that represents the data distribution across clients is challenging as this usually requires domain knowledge and may raise privacy concerns.", + "bbox": [ + 510, + 682, + 900, + 833 + ], + "page_idx": 1 + }, + { + "type": "page_number", + "text": "3965", + "bbox": [ + 468, + 870, + 500, + 881 + ], + "page_idx": 1 + }, + { + "type": "image", + "img_path": "images/a6ccf984c91b668f39f0637e261b46a42c6d3bd271313729673828ee5984887c.jpg", + "image_caption": [ + "Figure 2. Data distribution (number of images per client per class) with different levels of heterogeneity, client CKA similarity, and the drift diversity of each layer in VGG-11 (20 layers) with FedAvg. Deep layers in an over-parameterised neural network have higher disagreement and variance when the clients are heterogeneous using FedAvg." + ], + "image_footnote": [], + "bbox": [ + 132, + 97, + 865, + 264 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Moon [20] encourages the similarity of the representations across different client models by using contrastive loss [4] but with the cost of three full-size models in memory on each client, which may limit its applicability in resource-limited devices.", + "bbox": [ + 94, + 336, + 480, + 404 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Other works focus on reducing the communication cost by compressing the transmitted gradients [2, 8, 21, 26, 36]. They can reduce the communication bandwidth by adjusting the number of bits sent per iteration. 
These works are complementary to ours and can be easily integrated into our method to save communication costs.", + "bbox": [ + 94, + 404, + 482, + 486 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.2. Variance reduction", + "text_level": 1, + "bbox": [ + 95, + 496, + 275, + 509 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Stochastic variance reduction (SVR), such as SVRG [12], SAGA [6], and their variants, use control variate to reduce the variance of traditional stochastic gradient descent (SGD). These methods can remarkably achieve a linear convergence rate for strongly convex optimization problems compared to the sub-linear rate of SGD. Many federated learning algorithms, such as SCAFFOLD [14] and DANE [32], have adapted the idea of variance reduction for the whole model and achieved good convergence on convex problems. However, as [7] demonstrated, naively applying variance reduction techniques gives no actual variance reduction and tends to result in a slower convergence in deep neural networks. This suggests that adapting SVR techniques in deep neural networks for FL requires a more careful design.", + "bbox": [ + 94, + 517, + 482, + 720 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "2.3. Conformal prediction", + "text_level": 1, + "bbox": [ + 95, + 730, + 294, + 744 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Conformal prediction is a general framework that computes a prediction set guaranteed to include the true class with a high user-determined probability [3, 30]. It requires no retraining of the models and achieves a finite-sum coverage guarantee [3]. As FL algorithms can hardly perform as well as centralized learning [24] when the data heterogene", + "bbox": [ + 94, + 751, + 482, + 832 + ], + "page_idx": 2 + }, + { + "type": "table", + "img_path": "images/db2239ccc151bc31eb9ca304fedeadfea0ba5e1053a7e3248523cf0e2f19a707.jpg", + "table_caption": [ + "Table 1. 
Notations used in this paper" + ], + "table_footnote": [], + "table_body": "
<tr><td>$R$, $r$</td><td>Number of communication rounds and round index</td></tr>
<tr><td>$K$, $k$</td><td>Number of local steps, local step index</td></tr>
<tr><td>$N$, $i$</td><td>Number of clients, client index</td></tr>
<tr><td>$y_{i,k}^{r}$</td><td>Client model $i$ at step $k$ and round $r$</td></tr>
<tr><td>$x^{r}$</td><td>Server model at round $r$</td></tr>
<tr><td>$c_{i}^{r}$, $c^{r}$</td><td>Client and server control variates</td></tr>
", + "bbox": [ + 519, + 339, + 897, + 425 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "ity is high, we can integrate conformal prediction in FL to improve the empirical coverage by slightly increasing the predictive set size. This can be beneficial in sensitive use cases such as detecting chemical hazards, where it is better to give a prediction set that contains the correct class than producing a single but wrong prediction.", + "bbox": [ + 510, + 428, + 900, + 511 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3. Method", + "text_level": 1, + "bbox": [ + 512, + 521, + 601, + 535 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.1. Problem statement", + "text_level": 1, + "bbox": [ + 512, + 543, + 690, + 557 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "Given $N$ clients with full participation, we formalise the problem as minimizing the average of the stochastic functions with access to stochastic samples in Eq. 1 where $\\pmb{x}$ is the model parameters and $f_{i}$ represents the loss function at client $i$ with dataset $\\mathcal{D}_i$ ,", + "bbox": [ + 510, + 564, + 900, + 631 + ], + "page_idx": 2 + }, + { + "type": "equation", + "text": "\n$$\n\\min _ {\\boldsymbol {x} \\in \\mathbb {R} ^ {d}} \\left(f (\\boldsymbol {x}) := \\frac {1}{N} \\sum_ {i = 1} ^ {N} f _ {i} (\\boldsymbol {x})\\right), \\tag {1}\n$$\n", + "text_format": "latex", + "bbox": [ + 598, + 638, + 899, + 676 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "where $f_{i}(\\pmb {x}):= \\mathbb{E}_{\\mathcal{D}_{i}}[f_{i}(\\pmb {x};\\mathcal{D}_{i})]$", + "bbox": [ + 512, + 681, + 722, + 697 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "3.2. 
Motivation", + "text_level": 1, + "bbox": [ + 512, + 701, + 633, + 716 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "When the data $\\{\\mathcal{D}_i\\}$ are heterogeneous across clients, FedAvg suffers from client drift [14], where the average of the local optimal $\\bar{\\pmb{x}}^* = \\frac{1}{N}\\sum_{i\\in N}\\pmb{x}_i^*$ is far from the global optimal $\\pmb{x}^*$ . To understand what causes client drift, specifically which layers in a neural network are influenced most by the data heterogeneity, we perform a simple experiment using FedAvg and CIFAR10 datasets on a VGG-11. The detailed experimental setup can be found in section 4.", + "bbox": [ + 510, + 723, + 900, + 833 + ], + "page_idx": 2 + }, + { + "type": "page_number", + "text": "3966", + "bbox": [ + 468, + 871, + 500, + 881 + ], + "page_idx": 2 + }, + { + "type": "text", + "text": "In an over-parameterized model, it is difficult to directly calculate client drift $||\\bar{x}^{*} - x^{*}||^{2}$ as it is challenging to obtain the global optimum $x^{*}$ . We instead hypothesize that we can represent the influence of data heterogeneity on the model by measuring 1) drift diversity and 2) client model similarity. Drift diversity reflects the diversity in the amount each client model deviates from the server model after an update round.", + "bbox": [ + 94, + 102, + 482, + 210 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Definition 1 (Drift diversity). 
We define the drift diversity across $N$ clients at round $r$ as:", + "bbox": [ + 95, + 211, + 482, + 238 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\xi^ {r} := \\frac {\\sum_ {i = 1} ^ {N} \\left\\| \\boldsymbol {m} _ {i} ^ {r} \\right\\| ^ {2}}{\\left\\| \\sum_ {i = 1} ^ {N} \\boldsymbol {m} _ {i} ^ {r} \\right\\| ^ {2}} \\quad \\boldsymbol {m} _ {i} ^ {r} = \\boldsymbol {y} _ {i, K} ^ {r} - \\boldsymbol {x} ^ {r - 1} \\tag {2}\n$$\n", + "text_format": "latex", + "bbox": [ + 142, + 245, + 482, + 281 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Drift diversity $\\xi$ is high when all the clients update their models in different directions, i.e., when dot products between client updates $\\pmb{m}_i$ are small. When each client performs $K$ steps of vanilla SGD updates, $\\xi$ depends on the directions and amplitude of the gradients over $N$ clients and is equivalent to $\\frac{\\sum_{i=1}^{N} \\|\\sum_{k} g_i(\\pmb{y}_{i,k})\\|^2}{\\|\\sum_{i=1}^{N} \\sum_{k} g_i(\\pmb{y}_{i,k})\\|^2}$ , where $g_i(\\pmb{y}_{i,k})$ is the stochastic mini-batch gradient.", + "bbox": [ + 94, + 290, + 482, + 393 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "After updating client models, we quantify the client model similarity using centred kernel alignment (CKA) [18] computed on a test dataset. CKA is a widely used permutation invariant metric for measuring the similarity between feature representations in neural networks [18, 24, 28].", + "bbox": [ + 94, + 394, + 482, + 461 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Fig. 2 shows the movement of $\\xi$ and CKA across different levels of data heterogeneity using FedAvg. We observe that the similarity and diversity of the early layers (e.g. 
layer index 4 and 12) show higher agreement between the IID $(\alpha = 100.0)$ and non-IID $(\alpha = 0.1)$ experiments, which indicates that FedAvg can still learn and extract good feature representations even when it is trained with non-IID data. The lower similarity on the deeper layers, especially the classifiers, suggests that these layers are strongly biased towards their local data distribution. When we only look at the model that is trained with $\alpha = 0.1$ , we see the highest diversity and variance on the classifiers across clients compared to the rest of the layers. Based on the above observations, we propose to align the classifiers across clients using variance reduction. We deploy client and server control variates to control the updating directions of the classifiers.", + "bbox": [ + 94, + 461, + 482, + 680 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3.3. Classifier variance reduction", + "text_level": 1, + "bbox": [ + 95, + 688, + 349, + 701 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Our proposed algorithm (Alg. I) consists of three parts: i) client updating (Eq. 5-6), ii) client control variate updating (Eq. 7), and iii) server updating (Eq. 8-9).", + "bbox": [ + 94, + 709, + 482, + 751 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "We first define a vector $\pmb{p} \in \mathbb{R}^d$ that contains 0 or 1 with $v$ non-zero elements $(v \ll d)$ in Eq. 3. We recover SCAFFOLD with $\pmb{p} = \mathbf{1}$ and recover FedAvg with $\pmb{p} = \mathbf{0}$ . For the set of indices $j$ where $p_j = 1$ ( $S_{\mathrm{svr}}$ from Eq. 4), we update the corresponding weights $y_{i,S_{\mathrm{svr}}}$ with variance reduction such that we maintain a state for each client $(c_i \in \mathbb{R}^v)$ and", + "bbox": [ + 94, + 751, + 482, + 832 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "for the server $(\pmb{c} \in \mathbb{R}^v)$ in Eq. 5. 
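As a concrete reference for Definition 1, the drift diversity $\xi^r$ of Eq. 2 can be computed directly from the per-client update vectors $\pmb{m}_i^r$; a minimal NumPy sketch (the function name is ours, not from the released code):

```python
import numpy as np

def drift_diversity(client_updates):
    """Drift diversity xi^r (Eq. 2): the sum of squared norms of the
    per-client updates m_i divided by the squared norm of their sum.
    xi is 1/N when all clients move identically and grows towards (and
    beyond) 1 as the update directions disagree."""
    m = np.asarray(client_updates, dtype=float)   # shape (N, d)
    num = np.sum(np.linalg.norm(m, axis=1) ** 2)
    den = np.linalg.norm(m.sum(axis=0)) ** 2
    return num / den
```

For example, two identical updates give $\xi = 1/2$, while two orthogonal updates of equal length give $\xi = 1$, matching the interpretation that $\xi$ is high when the dot products between client updates are small.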
For the rest of the indices $S_{\\mathrm{sgd}}$ from Eq. 4, we update the corresponding weights $\\pmb{y}_{i,S_{\\mathrm{sgd}}}$ with SGD in Eq. 6. As the server variate $\\pmb{c}$ is an average of $\\pmb{c}_i$ across clients, we can safely initialise them as $\\mathbf{0}$ .", + "bbox": [ + 510, + 102, + 900, + 157 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In each communication round, each client receives a copy of the server model $\\mathbf{x}$ and the server control variate $\\mathbf{c}$ . They then perform $K$ model updating steps (see Eq. 5-6 for one step) using cross-entropy as the loss function. Once this is finished, we calculate the updated client control variate $\\mathbf{c}_i$ using Eq. 7. The server then receives the updated $\\mathbf{c}_i$ and $\\mathbf{y}_i$ from all the clients for aggregation (Eq. 8-9). This completes one communication round.", + "bbox": [ + 510, + 157, + 900, + 266 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {p} := \\{0, 1 \\} ^ {d}, \\quad v = \\sum \\boldsymbol {p} \\tag {3}\n$$\n", + "text_format": "latex", + "bbox": [ + 569, + 269, + 899, + 289 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\nS _ {\\mathrm {s v r}} := \\{j: \\boldsymbol {p} _ {j} = 1 \\}, \\quad S _ {\\mathrm {s g d}} := \\{j: \\boldsymbol {p} _ {j} = 0 \\} \\tag {4}\n$$\n", + "text_format": "latex", + "bbox": [ + 557, + 290, + 899, + 308 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}} \\leftarrow \\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}} - \\eta_ {l} \\left(g _ {i} \\left(\\boldsymbol {y} _ {i}\\right) _ {S _ {\\mathrm {s v r}}} - \\boldsymbol {c} _ {i} + \\boldsymbol {c}\\right) \\tag {5}\n$$\n", + "text_format": "latex", + "bbox": [ + 542, + 311, + 899, + 326 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {y} _ {i, S _ {\\mathrm {s g d}}} \\leftarrow \\boldsymbol {y} _ {i, S _ 
{\\mathrm {s g d}}} - \\eta_ {l} g _ {i} \\left(\\boldsymbol {y} _ {i}\\right) _ {S _ {\\mathrm {s g d}}} \\tag {6}\n$$\n", + "text_format": "latex", + "bbox": [ + 541, + 328, + 899, + 345 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {c} _ {i} \\leftarrow \\boldsymbol {c} _ {i} - \\boldsymbol {c} + \\frac {1}{K \\eta_ {l}} \\left(\\boldsymbol {x} _ {S _ {\\mathrm {s v r}}} - \\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}}\\right) \\tag {7}\n$$\n", + "text_format": "latex", + "bbox": [ + 566, + 347, + 899, + 375 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {x} \\leftarrow (1 - \\eta_ {g}) \\boldsymbol {x} + \\frac {1}{N} \\sum_ {i \\in N} \\boldsymbol {y} _ {i} \\tag {8}\n$$\n", + "text_format": "latex", + "bbox": [ + 569, + 377, + 899, + 409 + ], + "page_idx": 3 + }, + { + "type": "equation", + "text": "\n$$\n\\boldsymbol {c} \\leftarrow \\frac {1}{N} \\sum_ {i \\in N} \\boldsymbol {c} _ {i} \\tag {9}\n$$\n", + "text_format": "latex", + "bbox": [ + 571, + 412, + 899, + 444 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Algorithm I Partial variance reduction (FedPVR)", + "text_level": 1, + "bbox": [ + 514, + 450, + 842, + 464 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "server: initialise the server model $\mathbf{x}$ , the control variate $\mathbf{c}$ , and global step size $\eta_g$", + "bbox": [ + 512, + 466, + 900, + 491 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "client: initialise control variate $c_{i}$ and local step size $\eta_{l}$", + "bbox": [ + 537, + 491, + 865, + 503 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "$\mathbf{mask}$: 
$\\pmb {p}:= \\{0,1\\} ^d$ $S_{\\mathrm{sgd}}\\coloneqq \\{j:\\pmb {p}_j = 0\\}$ $S_{\\mathrm{svr}}\\coloneqq \\{j:$ $\\pmb {p}_j = 1\\}$", + "bbox": [ + 512, + 503, + 899, + 530 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "1: procedure MODEL UPDATING", + "bbox": [ + 519, + 530, + 722, + 542 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "2: for $r = 1\\rightarrow R$ do", + "bbox": [ + 521, + 543, + 675, + 554 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "3: communicate $x$ and $c$ to all clients $i \\in [N]$", + "bbox": [ + 522, + 555, + 845, + 567 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "4: for On client $i \\in [N]$ in parallel do", + "bbox": [ + 522, + 568, + 791, + 580 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "5: $y_{i}\\gets x$", + "bbox": [ + 522, + 581, + 655, + 593 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "6: for $k = 1\\to K$ do", + "bbox": [ + 522, + 593, + 719, + 604 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "7: compute minibatch gradient $g_{i}(\\pmb{y}_{i})$", + "bbox": [ + 522, + 605, + 831, + 618 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "8: $\\pmb{y}_{i,S_{\\mathrm{sgd}}} \\gets \\pmb{y}_{i,S_{\\mathrm{sgd}}} - \\eta_l g_i(\\pmb{y}_i)_{S_{\\mathrm{sgd}}}$", + "bbox": [ + 522, + 619, + 815, + 633 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "9: $\\pmb{y}_{i,S_{\\mathrm{svr}}}\\gets \\pmb{y}_{i,S_{\\mathrm{svr}}} - \\eta_{l}(g_{i}(\\pmb{y}_{i})_{S_{\\mathrm{svr}}} - \\pmb{c}_{i} + \\pmb{c})$", + "bbox": [ + 522, + 633, + 878, + 644 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "10: end for", + "bbox": [ + 517, + 644, + 652, + 655 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "11: $\\pmb{c}_i \\gets \\pmb{c}_i - \\pmb{c} + \\frac{1}{K\\eta_l} (\\pmb{x}_{S_{\\mathrm{svr}}} - \\pmb{y}_{i,S_{\\mathrm{svr}}})$", + "bbox": [ + 517, + 656, + 818, + 671 + ], + "page_idx": 3 + }, + { + "type": "text", + 
"text": "12: communicate $\pmb{y}_i, \pmb{c}_i$", + "bbox": [ + 517, + 671, + 726, + 682 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "13: end for", + "bbox": [ + 517, + 682, + 628, + 693 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "14: $\pmb{x} \gets (1 - \eta_g)\pmb{x} + \frac{1}{N}\sum_{i\in N}\pmb{y}_i$", + "bbox": [ + 517, + 694, + 769, + 707 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "15: $\pmb{c} \gets \frac{1}{N} \sum_{i \in N} \pmb{c}_i$", + "bbox": [ + 517, + 707, + 687, + 720 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "16: end for", + "bbox": [ + 517, + 720, + 608, + 731 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "17: end procedure", + "bbox": [ + 517, + 732, + 631, + 743 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "In terms of implementation, we can simply treat the control variate for the block of weights that is updated with SGD as 0 and implement lines 8 and 9 in one step.", + "bbox": [ + 510, + 745, + 900, + 777 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "Ours vs. SCAFFOLD [14]. While our work is similar to SCAFFOLD in the use of variance reduction, there are some fundamental differences. We both communicate control variates between the clients and server, but our control", + "bbox": [ + 510, + 777, + 900, + 832 + ], + "page_idx": 3 + }, + { + "type": "page_number", + "text": "3967", + "bbox": [ + 468, + 870, + 500, + 881 + ], + "page_idx": 3 + }, + { + "type": "text", + "text": "variate $(2v\leq 0.1d)$ is significantly smaller than the one in SCAFFOLD $(2d)$ . This $2x$ decrease in bits can be critical for some low-power IoT devices as the communication may consume more energy [10]. From the application point of view, SCAFFOLD achieved great success in convex or simple two-layer problems. 
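For concreteness, one communication round of Algorithm I (lines 3-15) can be sketched as follows. This is a simplified illustration, not the released implementation: parameters are flattened into one vector, each client exposes a deterministic gradient callable, and all clients participate. The control variates are kept at full length $d$ and zeroed outside $S_{\mathrm{svr}}$, which matches the remark above that the SGD block behaves as if its control variate were 0.

```python
import numpy as np

def fedpvr_round(x, c, c_is, grads, svr_mask, K=10, eta_l=0.1, eta_g=1.0):
    """One FedPVR communication round (Alg. I).

    x: server weights (d,);  c: server control variate (zero off svr_mask);
    c_is: per-client control variates;  grads: list of callables g_i(y);
    svr_mask: boolean (d,), True on S_svr (variance-reduced block).
    """
    ys = []
    for i in range(len(grads)):
        y = x.copy()                                  # line 5
        for _ in range(K):                            # lines 6-10
            g = grads[i](y)
            # lines 8-9 in one step: the correction -c_i + c acts only on S_svr
            y = y - eta_l * (g - (c_is[i] - c) * svr_mask)
        # line 11 / Eq. 7: client control variate update, restricted to S_svr
        c_is[i] = (c_is[i] - c + (x - y) / (K * eta_l)) * svr_mask
        ys.append(y)
    x_new = (1.0 - eta_g) * x + np.mean(ys, axis=0)   # line 14 / Eq. 8
    c_new = np.mean(c_is, axis=0)                     # line 15 / Eq. 9
    return x_new, c_new, c_is
```

On a toy heterogeneous quadratic problem $f_i(\pmb{y}) = \|\pmb{y} - \pmb{a}_i\|^2$, iterating this round drives the server model to the global optimum $\bar{\pmb{a}}$ on both the variance-reduced and the plain-SGD coordinates.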
However, adapting the techniques that work well for convex problems to over-parameterized models is non-trivial [38], and naively adapting variance reduction techniques on deep neural networks gives little or no convergence speedup [7]. Therefore, the significant improvement achieved by our method gives essential and nontrivial insight into what matters when tackling data heterogeneity in FL in over-parameterized models.", + "bbox": [ + 94, + 102, + 482, + 280 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "3.4. Convergence rate", + "text_level": 1, + "bbox": [ + 95, + 287, + 265, + 302 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "We state the convergence rate in this section. We assume functions $\{f_i\}$ are $\beta$ -smooth following [16, 35]. We then assume $g_{i}(\pmb{x})\coloneqq \nabla f_{i}(x;\mathcal{D}_{i})$ is an unbiased stochastic gradient of $f_{i}$ with variance bounded by $\sigma^2$ . We assume strong convexity $(\mu >0)$ and general convexity $(\mu = 0)$ for some of the results following [14]. Furthermore, we also make assumptions about the heterogeneity of the functions.", + "bbox": [ + 94, + 308, + 482, + 403 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For convex functions, we bound the heterogeneity of the functions $\{f_i\}$ at the optimal point $x^{*}$ (such a point always exists for a strongly convex function) following [15, 16].", + "bbox": [ + 95, + 404, + 482, + 458 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Assumption 1 ( $\zeta$ -heterogeneity). We define a measure of variance at the optimum $x^{*}$ given $N$ clients as:", + "bbox": [ + 95, + 460, + 484, + 486 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\zeta^ {2} := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} | | \\nabla f _ {i} \\left(\\boldsymbol {x} ^ {*}\\right) | | ^ {2}. 
\\tag {10}\n$$\n", + "text_format": "latex", + "bbox": [ + 191, + 489, + 480, + 526 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "For non-convex functions, such a unique optimal point $\pmb{x}^*$ does not necessarily exist, so we generalize Assumption 1 to Assumption 2.", + "bbox": [ + 95, + 529, + 482, + 571 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Assumption 2 ( $\hat{\zeta}$ -heterogeneity). We assume there exists a constant $\hat{\zeta}$ such that $\forall x \in \mathbb{R}^d$", + "bbox": [ + 95, + 577, + 482, + 605 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} | | \\nabla f _ {i} (\\boldsymbol {x}) | | ^ {2} \\leq \\hat {\\zeta} ^ {2}. \\tag {11}\n$$\n", + "text_format": "latex", + "bbox": [ + 200, + 606, + 480, + 644 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Given the mask $\pmb{p}$ as defined in Eq. 3, we know $||\pmb{p} \odot \pmb{x}|| \leq ||\pmb{x}||$ . Therefore, we have the following propositions.", + "bbox": [ + 95, + 654, + 482, + 682 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Proposition 1 (Implication of Assumption 1). Given the mask $\mathbf{p}$ , we define the heterogeneity of the block of weights that are not variance reduced at the optimum $\mathbf{x}^*$ as:", + "bbox": [ + 95, + 688, + 482, + 730 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\zeta_ {1 - p} ^ {2} := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\left\\| (\\mathbf {1} - \\boldsymbol {p}) \\odot \\nabla f _ {i} \\left(\\boldsymbol {x} ^ {*}\\right) \\right\\| ^ {2}, \\tag {12}\n$$\n", + "text_format": "latex", + "bbox": [ + 154, + 733, + 480, + 770 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "If Assumption 1 holds, then it also holds that:", + "bbox": [ + 95, + 777, + 393, + 790 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\zeta_ {1 - p} ^ {2} \\leq \\zeta^ {2}. 
\\tag {13}\n$$\n", + "text_format": "latex", + "bbox": [ + 248, + 793, + 480, + 811 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "In Proposition 1, $\zeta_{1 - p}^2 = \zeta^2$ if $\pmb{p} = \mathbf{0}$ and $\zeta_{1 - p}^2 = 0$ if $\pmb{p} = \mathbf{1}$ . If $\pmb{p} \neq \mathbf{0}$ and $\pmb{p} \neq \mathbf{1}$ , as the heterogeneity of the shallow weights is lower than that of the deeper weights [41], we have $\zeta_{1 - p}^2 \leq \zeta^2$ . Similarly, we can validate Proposition 2.", + "bbox": [ + 510, + 96, + 900, + 153 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Proposition 2 (Implication of Assumption 2). Given the mask $\pmb{p}$ , we assume there exists a constant $\hat{\zeta}_{1 - p}$ such that $\forall \pmb{x} \in \mathbb{R}^d$ , the heterogeneity of the block of weights that are not variance reduced satisfies:", + "bbox": [ + 510, + 162, + 900, + 216 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\frac {1}{N} \\sum_ {i = 1} ^ {N} | | (\\mathbf {1} - \\boldsymbol {p}) \\odot \\nabla f _ {i} (\\boldsymbol {x}) | | ^ {2} \\leq \\hat {\\zeta} _ {1 - p} ^ {2}, \\tag {14}\n$$\n", + "text_format": "latex", + "bbox": [ + 579, + 219, + 899, + 257 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "If Assumption 2 holds, then it also holds that:", + "bbox": [ + 512, + 267, + 810, + 281 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\n\\hat {\\zeta} _ {1 - p} ^ {2} \\leq \\hat {\\zeta} ^ {2}. \\tag {15}\n$$\n", + "text_format": "latex", + "bbox": [ + 665, + 285, + 899, + 302 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Theorem 1. 
For any $\\beta$ -smooth function $\\{f_i\\}$ , the output of FedPVR has expected error smaller than $\\epsilon$ for $\\eta_g = \\sqrt{N}$ and some values of $\\eta_l$ , $R$ satisfying:", + "bbox": [ + 512, + 315, + 902, + 357 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Strongly convex: $\\eta_l \\leq \\min \\left(\\frac{1}{80K\\eta_g\\beta}, \\frac{26}{20\\mu K\\eta_g}\\right)$ ,", + "bbox": [ + 531, + 366, + 858, + 389 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nR = \\tilde {\\mathcal {O}} \\left(\\frac {\\sigma^ {2}}{\\mu N K \\epsilon} + \\frac {\\zeta_ {1 - p} ^ {2}}{\\mu \\epsilon} + \\frac {\\beta}{\\mu}\\right), \\tag {16}\n$$\n", + "text_format": "latex", + "bbox": [ + 608, + 400, + 899, + 437 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "- General convex: $\\eta_{l} \\leq \\frac{1}{80K\\eta_{g}\\beta}$ ,", + "bbox": [ + 531, + 444, + 751, + 463 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nR = \\mathcal {O} \\left(\\frac {\\sigma^ {2} D}{K N \\epsilon^ {2}} + \\frac {\\zeta_ {1 - p} ^ {2} D}{\\epsilon^ {2}} + \\frac {\\beta D}{\\epsilon} + F\\right), \\tag {17}\n$$\n", + "text_format": "latex", + "bbox": [ + 566, + 473, + 899, + 510 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Non-convex: $\\eta_l \\leq \\frac{1}{26K\\eta_g\\beta}$ , and $R \\geq 1$ , then:", + "bbox": [ + 531, + 517, + 842, + 535 + ], + "page_idx": 4 + }, + { + "type": "equation", + "text": "\n$$\nR = \\mathcal {O} \\left(\\frac {\\beta \\sigma^ {2} F}{K N \\epsilon^ {2}} + \\frac {\\beta \\hat {\\zeta} _ {1 - p} ^ {2} F}{N \\epsilon^ {2}} + \\frac {\\beta F}{\\epsilon}\\right), \\tag {18}\n$$\n", + "text_format": "latex", + "bbox": [ + 579, + 546, + 899, + 583 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "Where $D\\coloneqq ||\\pmb {x}^0 -\\pmb{x}^* ||^2$ and $F\\coloneqq f(\\pmb {x}^0) - f^*$", + "bbox": [ + 512, + 586, + 833, + 602 + ], + "page_idx": 4 + }, + { + "type": "text", + 
"text": "Given the above assumptions, the convergence rate is given in Theorem 1. When $p = 1$ , we recover the SCAFFOLD convergence guarantee as $\zeta_{1 - p}^2 = 0$ , $\hat{\zeta}_{1 - p}^2 = 0$ . In the strongly convex case, the effect of the heterogeneity of the block of weights that are not variance reduced $\zeta_{1 - p}^2$ becomes negligible if $\tilde{\mathcal{O}}\left(\frac{\zeta_{1 - p}^2}{\epsilon}\right)$ is sufficiently smaller than $\tilde{\mathcal{O}}\left(\frac{\sigma^2}{NK\epsilon}\right)$ . In such a case, our rate is $\frac{\sigma^2}{NK\epsilon} + \frac{1}{\mu}$ , which recovers the SCAFFOLD rate in the strongly convex case without sampling and further matches that of SGD (with mini-batch size $K$ on each worker). We also recover the FedAvg rate* in the simple IID case. See Appendix B for the full proof.", + "bbox": [ + 510, + 612, + 900, + 779 + ], + "page_idx": 4 + }, + { + "type": "page_footnote", + "text": "*FedAvg in the strongly convex case has the rate $R = \tilde{\mathcal{O}}\left(\frac{\sigma^2}{\mu K N \epsilon} + \frac{\sqrt{\beta} G}{\mu \sqrt{\epsilon}} + \frac{\beta}{\mu}\right)$ , where $G$ measures the gradient dissimilarity. In the simple IID case, $G = 0$ [14].", + "bbox": [ + 512, + 789, + 899, + 832 + ], + "page_idx": 4 + }, + { + "type": "page_number", + "text": "3968", + "bbox": [ + 468, + 870, + 500, + 881 + ], + "page_idx": 4 + }, + { + "type": "text", + "text": "4. Experimental setup", + "text_level": 1, + "bbox": [ + 95, + 100, + 282, + 117 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We demonstrate the effectiveness of our approach with CIFAR10 [19] and CIFAR100 [19] on image classification tasks. We simulate the data heterogeneity scenario following [22] by partitioning the data according to the Dirichlet distribution with the concentration parameter $\alpha$ . The smaller $\alpha$ is, the more imbalanced the data distribution across clients. 
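The Dirichlet partitioning described here is a common construction in the FL literature [22]: for each class, client proportions are drawn from $\mathrm{Dir}(\alpha \mathbf{1}_N)$ and the class's samples are allocated accordingly. A minimal sketch (the helper name is ours):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, rng=None):
    """Split sample indices across clients. For each class, draw client
    proportions from Dir(alpha * 1_N); small alpha gives highly skewed
    per-client label distributions, large alpha approaches IID."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for i, part in enumerate(np.split(idx, cuts)):
            client_idx[i].extend(part.tolist())
    return client_idx
```

Every sample is assigned to exactly one client, so the union of the returned index lists recovers the full training set.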
An example of the data distribution over multiple clients using the CIFAR10 dataset can be seen in Fig. 2. In our experiment, we use $\alpha \in \{0.1, 0.5, 1.0\}$ as these are commonly used concentration parameters [22]. Each client has its local data, and this data is kept the same across all communication rounds. We hold out the test dataset at the server for evaluating the classification performance of the server model. Following [22], we perform the same data augmentation for all the experiments.", + "bbox": [ + 94, + 124, + 480, + 327 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We use two models: VGG-11 and ResNet-8 following [22]. We perform variance reduction for the last three layers in VGG-11 and the last layer in ResNet-8. We use 10 clients with full participation following [41] (close to a cross-silo setup) and a batch size of 256. Each client performs 10 local epochs of model updating. We set the server learning rate $\eta_{g} = 1$ for all the models [14]. We tune the clients' learning rate from $\{0.05, 0.1, 0.2, 0.3\}$ for each individual experiment. The learning rate schedule is experimentally chosen from constant, cosine decay [23], and multiple step decay [22]. We compare our method with the representative federated learning algorithms FedAvg [25], FedProx [31], SCAFFOLD [14], and FedDyn [1]. All the results are averaged over three repeated experiments with different random initialization. We leave $1\%$ of the training data from each client out as the validation data to tune the hyperparameters (learning rate and schedule) per client. See Appendix C for additional experimental setups. The code is at github.com/lyn1874/fedpvr.", + "bbox": [ + 94, + 328, + 482, + 587 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5. 
Experimental results", + "text_level": 1, + "bbox": [ + 95, + 591, + 294, + 606 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We demonstrate the performance of our proposed approach in the FL setup with data heterogeneity in this section. We compare our method with the existing state-of-the-art algorithms on various datasets and deep neural networks. For the baseline approaches, we finetune the hyperparameters and report only the best performance obtained. Our main findings are 1) we are more communication efficient than the baseline approaches, 2) conformal prediction is an effective tool to improve FL performance in high data heterogeneity scenarios, and 3) there is a beneficial trade-off between diversity and uniformity when using deep neural networks in FL.", + "bbox": [ + 94, + 614, + 482, + 777 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.1. Communication efficiency and accuracy", + "text_level": 1, + "bbox": [ + 95, + 783, + 435, + 799 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We first report the number of rounds required to achieve a certain level of top-1 accuracy (66% for CIFAR10 and", + "bbox": [ + 95, + 805, + 482, + 833 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "$44\%$ for CIFAR100) in Table 2. An algorithm is more communication efficient if it requires fewer rounds to achieve the same accuracy and/or if it transmits fewer parameters between the clients and server. Compared to the baseline approaches, we require far fewer rounds for almost all types of data heterogeneity and models. We can achieve a speedup of between 1.5x and 6.7x over FedAvg. 
We also observe that ResNet-8 tends to converge more slowly than VGG-11, which may be due to the aggregation of the Batch Normalization layers that are discrepant across the local data distributions [22].", + "bbox": [ + 510, + 102, + 900, + 252 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We next compare the top-1 accuracy between centralized learning and federated learning algorithms. For the centralized learning experiment, we tune the learning rate from $\{0.01, 0.05, 0.1\}$ and report the best test accuracy based on the validation dataset. We train the model for 800 epochs, which is the same as the total number of epochs in the federated learning algorithms (80 communication rounds x 10 local epochs). The results are shown in Table 3. We also show the number of copies of the parameters that need to be transmitted between the server and clients (e.g. 2x means we communicate $x$ and $y_i$ ).", + "bbox": [ + 510, + 252, + 900, + 402 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "Table 3 shows that our approach achieves a much better top-1 accuracy compared to FedAvg while transmitting a similar or slightly larger number of parameters between the server and client per round. Our method also achieves slightly better accuracy than centralized learning when the data is less heterogeneous (e.g. $\alpha = 0.5$ for CIFAR10 and $\alpha = 1.0$ for CIFAR100).", + "bbox": [ + 510, + 404, + 900, + 500 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "5.2. Conformal prediction", + "text_level": 1, + "bbox": [ + 512, + 509, + 714, + 524 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "When the data heterogeneity is high across clients, it is difficult for a federated learning algorithm to match the centralized learning performance [24]. 
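The conformal prediction post-processing used in this section can be sketched as a standard split-conformal procedure on softmax outputs (a generic construction in the spirit of [3]; it is not necessarily the paper's exact setup, whose details are in its Appendix C):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: calibrate a threshold on the
    nonconformity score 1 - p(true class) over a held-out calibration set,
    then return predictive sets that contain the true label with
    probability >= 1 - alpha (marginal coverage)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return [np.flatnonzero(p >= 1.0 - q) for p in test_probs]

# Empirical coverage = fraction of test labels inside their predictive set;
# average predictive set size = mean length of the returned sets.
```

Widening the sets (larger `alpha` margin, i.e. smaller threshold) trades a larger average set size for higher coverage, which is the relationship plotted in Fig. 3.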
Therefore, we demonstrate the benefit of using conformal prediction as a simple post-processing step to improve the model performance.", + "bbox": [ + 510, + 530, + 900, + 599 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "We examine the relationship between the empirical coverage and the average predictive set size for the server model after 80 communication rounds for each federated learning algorithm. The empirical coverage is the percentage of the data samples where the correct prediction is in the predictive set, and the average predictive size is the average of the length of the predictive sets over all the test images [3]. See Appendix C for more information about the conformal prediction setup and results.", + "bbox": [ + 510, + 600, + 900, + 722 + ], + "page_idx": 5 + }, + { + "type": "text", + "text": "The results for $\alpha = 0.1$ for both datasets and architectures are shown in Fig. 3. We show that by slightly increasing the predictive set size, we can achieve accuracy similar to the centralized performance. Moreover, our approach tends to surpass the centralized top-1 performance as quickly as, or faster than, the other approaches. In sensitive use cases such as chemical threat detection, conformal prediction is a valuable tool to achieve certified accuracy at the", + "bbox": [ + 510, + 723, + 900, + 833 + ], + "page_idx": 5 + }, + { + "type": "page_number", + "text": "3969", + "bbox": [ + 468, + 870, + 502, + 882 + ], + "page_idx": 5 + }, + { + "type": "table", + "img_path": "images/364380cd708c2b69a71a67d4be916ae7d81e6ee36837ce35c4c69f16cc0b7135.jpg", + "table_caption": [ + "Table 2. The required number of communication rounds (speedup compared to FedAvg) to achieve a certain level of top-1 accuracy (66% for the CIFAR10 dataset and 44% for the CIFAR100 dataset). Our method requires fewer rounds to achieve the same accuracy." + ], + "table_footnote": [], + "table_body": "
All cells: No. rounds (speedup vs. FedAvg).

| Method | CIFAR10 (66%) α=0.1 VGG-11 | CIFAR10 (66%) α=0.1 ResNet-8 | CIFAR10 (66%) α=0.5 VGG-11 | CIFAR10 (66%) α=0.5 ResNet-8 | CIFAR100 (44%) α=0.1 VGG-11 | CIFAR100 (44%) α=0.1 ResNet-8 | CIFAR100 (44%) α=1.0 VGG-11 | CIFAR100 (44%) α=1.0 ResNet-8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 55 (1.0x) | 90 (1.0x) | 15 (1.0x) | 15 (1.0x) | 100+ (1.0x) | 100+ (1.0x) | 80 (1.0x) | 56 (1.0x) |
| FedProx | 52 (1.1x) | 75 (1.2x) | 16 (0.9x) | 20 (0.8x) | 100+ (1.0x) | 100+ (1.0x) | 80 (1.0x) | 59 (0.9x) |
| SCAFFOLD | 39 (1.4x) | 57 (1.6x) | 14 (1.0x) | 9 (1.7x) | 80 (>1.3x) | 61 (>1.6x) | 36 (2.2x) | 25 (2.2x) |
| FedDyn | 27 (2.0x) | 67 (1.3x) | 15 (1.0x) | 34 (0.4x) | 80+ (-) | 80+ (-) | 24 (3.3x) | 51 (1.1x) |
| Ours | 27 (2.0x) | 50 (1.8x) | 9 (1.6x) | 5 (3.0x) | 37 (>2.7x) | 66 (>1.5x) | 12 (6.7x) | 15 (3.7x) |
", + "bbox": [ + 95, + 127, + 897, + 238 + ], + "page_idx": 6 + }, + { + "type": "table", + "img_path": "images/aa4d0d4b9e620cc4118b43638f17bf4350ca1e4105a3bb11ba5f5e963a42a140.jpg", + "table_caption": [ + "Table 3. The top-1 accuracy $(\\%)$ after running 80 communication rounds using different methods on CIFAR10 and CIFAR100, together with the number of communicated parameters between the client and the server. We train the centralised model for 800 epochs $(= 80$ rounds x 10 local epochs in FL). Higher accuracy is better, and we highlight the best accuracy in red colour." + ], + "table_footnote": [], + "table_body": "
Top-1 accuracy (%); the Centralised accuracy is independent of α, so the same value applies to both α columns.

| Method | VGG-11 CIFAR10 α=0.1 | VGG-11 CIFAR10 α=0.5 | VGG-11 CIFAR100 α=0.1 | VGG-11 CIFAR100 α=1.0 | VGG-11 server↔client | ResNet-8 CIFAR10 α=0.1 | ResNet-8 CIFAR10 α=0.5 | ResNet-8 CIFAR100 α=0.1 | ResNet-8 CIFAR100 α=1.0 | ResNet-8 server↔client |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Centralised | 87.5 | 87.5 | 56.3 | 56.3 | - | 83.4 | 83.4 | 56.8 | 56.8 | - |
| FedAvg | 69.3 | 80.9 | 34.3 | 45.0 | 2x | 64.9 | 79.1 | 38.8 | 47.0 | 2x |
| FedProx | 72.1 | 80.4 | 35.0 | 43.2 | 2x | 66.1 | 77.9 | 42.0 | 47.2 | 2x |
| SCAFFOLD | 74.1 | 83.5 | 43.4 | 50.6 | 4x | 66.6 | 80.3 | 43.8 | 52.3 | 4x |
| FedDyn | 77.4 | 80.1 | 43.8 | 45.2 | 2x | 63.8 | 72.9 | 36.4 | 48.1 | 2x |
| Ours | 78.2 | 84.9 | 43.5 | 58.0 | 2.1x | 69.3 | 83.6 | 43.5 | 52.3 | 2.02x |
", + "bbox": [ + 95, + 286, + 890, + 420 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "cost of a slightly larger predictive set size.", + "bbox": [ + 95, + 442, + 371, + 457 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/951dc30d681cc38c34a57c084476b9c21ec213ab178859442e6120b674b105b4.jpg", + "image_caption": [ + "Figure 3. Relation between average predictive size and empirical coverage when $\alpha = 0.1$ . By slightly increasing the predictive set size, we can achieve performance similar to the centralised model (Top-1 accuracy) even if the data are heterogeneously distributed across clients. Our method surpasses the centralised Top-1 accuracy as quickly as, or faster than, the other approaches." + ], + "image_footnote": [], + "bbox": [ + 112, + 471, + 468, + 657 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "5.3. Diversity and uniformity", + "text_level": 1, + "bbox": [ + 95, + 756, + 319, + 771 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "We have shown that our algorithm achieves a better speedup and performance than the existing approaches with only lightweight modifications to FedAvg. We next investigate what factors lead to better accuracy. Specifi", + "bbox": [ + 94, + 777, + 482, + 833 + ], + "page_idx": 6 + }, + { + "type": "image", + "img_path": "images/31d31a04d19f4d675f9b4101e7054b8bb0d95550232e6f4009e9d8aff64aa0b5.jpg", + "image_caption": [ + "Figure 4. Drift diversity and learning curve for ResNet-8 on CIFAR100 with $\alpha = 1.0$ . Compared to FedAvg, SCAFFOLD and our method can both improve the agreement between the classifiers. Compared to SCAFFOLD, our method results in a higher gradient diversity at the early stage of communication, which tends to boost the learning speed as the curvature of the drift diversity seems to match the learning curve."
+ ], + "image_footnote": [], + "bbox": [ + 532, + 439, + 882, + 615 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "cally, we calculate the drift diversity $\xi$ across clients after each communication round using Eq. 2 and average $\xi$ across three runs. We show the result of using ResNet-8 and CIFAR100 with $\alpha = 1.0$ in Fig. 4.", + "bbox": [ + 510, + 736, + 899, + 790 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "Fig. 4 shows the drift diversity for different layers in ResNet-8 and the testing accuracy along the communication rounds. We observe that classifiers have the highest diver", + "bbox": [ + 510, + 792, + 899, + 833 + ], + "page_idx": 6 + }, + { + "type": "page_number", + "text": "3970", + "bbox": [ + 468, + 870, + 500, + 881 + ], + "page_idx": 6 + }, + { + "type": "text", + "text": "sity in FedAvg compared to other layers and methods. SCAFFOLD, which applies control variates to the entire model, can effectively reduce the disagreement in the directions and scales of the averaged gradient across clients. Our proposed algorithm, which performs variance reduction only on the classifiers, can reduce the diversity of the classifiers even further but increase the diversity of the feature extraction layers. This high diversity tends to boost the learning speed as the curvature of the diversity movement (Fig. 4 left) seems to match the learning curve (Fig. 4 right). Based on this observation, we hypothesize that this diversity along the feature extractor and the uniformity of the classifier is the main reason for our better speedup.", + "bbox": [ + 95, + 102, + 482, + 277 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "To test this hypothesis, we perform an experiment where we use variance reduction starting from different layers of a neural network. 
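Such a sweep amounts to building the binary mask $\pmb{p}$ of Eq. 3 from a starting layer index, so that variance reduction is active only from that layer onwards; a hypothetical sketch (the layer sizes and helper name are ours, for illustration only):

```python
import numpy as np

def mask_from_layer(layer_sizes, start_layer):
    """Build the binary mask p (Eq. 3) over the flattened parameter vector:
    True (variance reduction, S_svr) for every parameter belonging to layers
    with index >= start_layer, False (plain SGD, S_sgd) otherwise."""
    blocks = [np.full(size, i >= start_layer, dtype=bool)
              for i, size in enumerate(layer_sizes)]
    return np.concatenate(blocks)
```

With this helper, SVR: 0→20 (all layers, SCAFFOLD-like) is `mask_from_layer(sizes, 0)` and SVR: 16→20 (ours) is `mask_from_layer(sizes, 16)`; here $v = \sum \pmb{p}$ is simply the number of parameters in the selected trailing layers.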
If the starting position of the use of variance reduction influences the learning speed, it indicates where in a neural network we need more diversity and where we need more uniformity. Here we show the result of using VGG-11 on CIFAR100 with $\alpha = 1.0$ as there are more layers in VGG-11. The result is shown in Fig. 5, where SVR: $16 \rightarrow 20$ corresponds to our approach and SVR: $0 \rightarrow 20$ corresponds to SCAFFOLD, which applies variance reduction to the entire model. Results for ResNet-8 are shown in Appendix C.", + "bbox": [ + 95, + 279, + 482, + 444 + ], + "page_idx": 7 + }, + { + "type": "image", + "img_path": "images/f270fd19410f370dc1632f06a91683cf88e3cc1148ad6fcfe9916088c224d50e.jpg", + "image_caption": [ + "Figure 5. Influence of using stochastic variance reduction (SVR) on layers that start from different positions in a neural network on the learning speed. $\mathrm{SVR}:0\rightarrow 20$ applies variance reduction on the entire model (SCAFFOLD). $\mathrm{SVR}:16\rightarrow 20$ applies variance reduction from layer index 16 to 20 (ours). The later we apply variance reduction, the better the speedup we obtain. However, no variance reduction (FedAvg) performs the worst here." + ], + "image_footnote": [], + "bbox": [ + 110, + 453, + 472, + 612 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We see from Fig. 5 that the deeper in a neural network we apply variance reduction, the better learning speedup we can obtain. There is no clear performance difference in where we activate variance reduction once the starting layer index is over 10. However, applying no variance reduction (FedAvg) achieves by far the worst performance. 
We believe that these experimental results indicate that in a distributed optimization framework, to boost the learning", + "bbox": [ + 95, + 723, + 482, + 833 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "speed of an over-parameterized model, we need some level of diversity in the middle and early layers for learning richer feature representations and some degree of uniformity in the classifiers for making a less biased decision.", + "bbox": [ + 512, + 102, + 900, + 157 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "6. Conclusion", + "text_level": 1, + "bbox": [ + 512, + 169, + 630, + 182 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "In this work, we studied stochastic gradient descent learning for deep neural network classifiers in a federated learning setting, where each client updates its local model using stochastic gradient descent on local data. A central model is periodically updated (by averaging local model parameters) and broadcast to the clients under a communication bandwidth constraint. When data is homogeneous across clients, this procedure is comparable to centralized learning in terms of efficiency; however, when data is heterogeneous, learning is impeded. We hypothesize that the primary reason is that when the local models are out of alignment, updating the central model by averaging is ineffective and sometimes even destructive.", + "bbox": [ + 512, + 192, + 900, + 368 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Examining the diversity across clients of their local model updates and their learned feature representations, we found that the misalignment between models is much stronger in the last few neural network layers than in the rest of the network. This finding inspired us to experiment with aligning the local models using a partial variance reduction technique applied only on the last layers, which we named FedPVR. 
We found that this led to a substantial improvement in convergence speed compared to the competing federated learning methods. In some cases, our method even outperformed centralized learning. We derived a bound on the convergence rate of our proposed method, which matches the rates for SGD when the gradient diversity across clients is sufficiently low. Compared with FedAvg, the communication cost of our method is only marginally worse, as it requires transmitting control variates for the last layers.", + "bbox": [ + 512, + 369, + 900, + 600 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "We believe our FedPVR algorithm strikes a good balance between simplicity and efficiency, requiring only a minor modification to the established FedAvg method; however, in our further research, we plan to pursue more effective methods for aligning and guiding the local learning algorithms, e.g. using adaptive procedures. Furthermore, the degree of over-parameterization in the neural network layers (e.g. feature extraction vs bottlenecks) may also play an important role, which we would like to understand better.", + "bbox": [ + 512, + 600, + 900, + 723 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "Acknowledgements", + "text_level": 1, + "bbox": [ + 512, + 735, + 678, + 751 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "The first three authors gratefully acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 883390 (H2020-SU-SECU-2019 SERSing Project). 
BL gratefully acknowledges financial support from the Otto Mønsted Foundation.", + "bbox": [ + 512, + 758, + 900, + 827 + ], + "page_idx": 7 + }, + { + "type": "page_number", + "text": "3971", + "bbox": [ + 468, + 870, + 499, + 881 + ], + "page_idx": 7 + }, + { + "type": "text", + "text": "References", + "text_level": 1, + "bbox": [ + 97, + 100, + 191, + 115 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. CoRR, abs/2111.04263, 2021. 1, 2, 6, 26", + "[2] Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: randomized quantization for communication-optimal stochastic gradient descent. CoRR, abs/1610.02132, 2016. 3", + "[3] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations, 2021. 2, 3, 6", + "[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709, 2020. 3", + "[5] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. FedAvg with fine tuning: Local updates lead to representation learning, 2022. 2", + "[6] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. CoRR, abs/1407.0202, 2014. 3", + "[7] Aaron Defazio and Léon Bottou. On the ineffectiveness of variance reduced optimization for deep learning. CoRR, abs/1812.04529, 2018. 2, 3, 5", + "[8] Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. 
On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. CoRR, abs/1911.08250, 2019.3", + "[9] Liang Gao, Huazhu Fu, Li Li, Yingwen Chen, Ming Xu, and Cheng-Zhong Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10102-10111. IEEE, 2022. 2", + "[10] Malka N. Halgamuge, Moshe Zukerman, Kotagiri Ramamoohanarao, and Hai Le Vu. An estimation of sensor energy consumption. Progress in Electromagnetics Research B, 12:259-295, 2009. 1, 2, 5", + "[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. 2", + "[12] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013. 3", + "[13] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascon, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zäïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri" + ], + "bbox": [ + 99, + 123, + 482, + 832 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrede Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramér, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. 
Advances and open problems in federated learning. CoRR, abs/1912.04977, 2019. 1, 2", + "[14] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5132-5143. PMLR, 13-18 Jul 2020. 1, 2, 3, 4, 5, 6, 11, 12, 13, 21", + "[15] Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Better communication complexity for local SGD. CoRR, abs/1909.04746, 2019. 5", + "[16] Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized SGD with changing topology and local updates. In Hal Daume III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5381-5393. PMLR, 13-18 Jul 2020. 5, 11, 13", + "[17] Jakub Konečny, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. CoRR, abs/1610.02527, 2016. 1, 2", + "[18] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. CoRR, abs/1905.00414, 2019. 4", + "[19] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. 2, 6", + "[20] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. CoRR, abs/2103.16257, 2021. 1, 3", + "[21] Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. Acceleration for compressed gradient descent in distributed and federated optimization, 2020. 3", + "[22] Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. 
In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. 6, 25, 26", + "[23] Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. CoRR, abs/1608.03983, 2016. 6", + "[24] Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. CoRR, abs/2106.05001, 2021. 1, 2, 3, 4, 6" + ], + "bbox": [ + 515, + 103, + 900, + 832 + ], + "page_idx": 8 + }, + { + "type": "page_number", + "text": "3972", + "bbox": [ + 468, + 870, + 500, + 882 + ], + "page_idx": 8 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[25] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. Federated learning of deep networks using model averaging. CoRR, abs/1602.05629, 2016. 2, 6", + "[26] Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, and Peter Richtárik. Distributed learning with compressed gradient differences. CoRR, abs/1901.09269, 2019. 3", + "[27] Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, and Peter Richtárik. ProxSkip: Yes! local gradient steps provably lead to communication acceleration! finally! International Conference on Machine Learning (ICML), 2022. 2", + "[28] Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. CoRR, abs/2010.15327, 2020. 2, 4", + "[29] Jaehoon Oh, Sangmook Kim, and Se-Young Yun. Fedbabu: Towards enhanced representation for federated image classification. CoRR, abs/2106.06042, 2021. 2", + "[30] Yaniv Romano, Matteo Sesia, and Emmanuel J. Candes. Classification with valid and adaptive coverage. 
In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. 3", + "[31] Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. On the convergence of federated optimization in heterogeneous networks. CoRR, abs/1812.06127, 2018. 1, 2, 6, 26", + "[32] Ohad Shamir, Nathan Srebro, and Tong Zhang. Communication efficient distributed optimization using an approximate newton-type method. CoRR, abs/1312.7853, 2013. 1, 2, 3", + "[33] Micah J. Sheller, Brandon Edwards, G. Anthony Reina, Jason Martin, Sarthak Pati, Aikaterini Kotrotsou, Mikhail Milchenko, Weilin Xu, Daniel Marcus, Rivka R. Colen, and Spyridon Bakas. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports, 10(1):12598, Jul 2020. 1", + "[34] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. 2", + "[35] Sebastian U. Stich. Unified optimal analysis of the (stochastic) gradient method. CoRR, abs/1907.04232, 2019. 5, 11, 14", + "[36] Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems 31 (NeurIPS), pages 4452-4463. Curran Associates, Inc., 2018. 3", + "[37] Sebastian U. Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication. CoRR, abs/1909.05350, 2019. 15", + "[38] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. 
2, 5" + ], + "bbox": [ + 97, + 103, + 482, + 830 + ], + "page_idx": 9 + }, + { + "type": "list", + "sub_type": "ref_text", + "list_items": [ + "[39] F Varno, M Saghayi, L Rafiee, S Gupta, S Matwin, and M Havaei. Minimizing client drift in federated learning via adaptive bias estimation. ArXiv, abs/2204.13170, 2022. 2", + "[40] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konečný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, and Wennan Zhu. A field guide to federated optimization. CoRR, abs/2107.06917, 2021. 2", + "[41] Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, and Michael I. Jordan. TCT: Convexifying federated learning using bootstrapped neural tangent kernels, 2022. 2, 5, 6", + "[42] Haoyu Zhao, Zhize Li, and Peter Richtárik. Fedpage: A fast local stochastic gradient method for communication-efficient federated learning. CoRR, abs/2108.04755, 2021. 
2" + ], + "bbox": [ + 514, + 103, + 900, + 442 + ], + "page_idx": 9 + }, + { + "type": "page_number", + "text": "3973", + "bbox": [ + 468, + 870, + 500, + 881 + ], + "page_idx": 9 + } +] \ No newline at end of file diff --git a/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_model.json b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9ae99c9d5d6b4249a1b04538e204bd73af9fd438 --- /dev/null +++ b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_model.json @@ -0,0 +1,2541 @@ +[ + [ + { + "type": "header", + "bbox": [ + 0.096, + 0.01, + 0.172, + 0.047 + ], + "angle": 0, + "content": "CVF" + }, + { + "type": "header", + "bbox": [ + 0.23, + 0.006, + 0.819, + 0.022 + ], + "angle": 0, + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation." + }, + { + "type": "header", + "bbox": [ + 0.319, + 0.022, + 0.731, + 0.035 + ], + "angle": 0, + "content": "Except for this watermark, it is identical to the accepted version;" + }, + { + "type": "header", + "bbox": [ + 0.288, + 0.035, + 0.762, + 0.049 + ], + "angle": 0, + "content": "the final published version of the proceedings is available on IEEE Xplore." + }, + { + "type": "title", + "bbox": [ + 0.123, + 0.138, + 0.875, + 0.179 + ], + "angle": 0, + "content": "On the effectiveness of partial variance reduction in federated learning with heterogeneous data" + }, + { + "type": "text", + "bbox": [ + 0.184, + 0.204, + 0.241, + 0.219 + ], + "angle": 0, + "content": "Bo Li*" + }, + { + "type": "text", + "bbox": [ + 0.277, + 0.204, + 0.433, + 0.219 + ], + "angle": 0, + "content": "Mikkel N. 
Schmidt" + }, + { + "type": "text", + "bbox": [ + 0.468, + 0.204, + 0.619, + 0.22 + ], + "angle": 0, + "content": "Tommy S. Alström" + }, + { + "type": "text", + "bbox": [ + 0.272, + 0.221, + 0.533, + 0.236 + ], + "angle": 0, + "content": "Technical University of Denmark" + }, + { + "type": "text", + "bbox": [ + 0.289, + 0.238, + 0.507, + 0.252 + ], + "angle": 0, + "content": "{blia, mnsc, tsal}@dtu.dk" + }, + { + "type": "text", + "bbox": [ + 0.667, + 0.204, + 0.813, + 0.22 + ], + "angle": 0, + "content": "Sebastian U. Stich" + }, + { + "type": "text", + "bbox": [ + 0.708, + 0.221, + 0.771, + 0.235 + ], + "angle": 0, + "content": "CISPA" + }, + { + "type": "text", + "bbox": [ + 0.678, + 0.24, + 0.802, + 0.252 + ], + "angle": 0, + "content": "stich@cispa.de" + }, + { + "type": "title", + "bbox": [ + 0.252, + 0.299, + 0.329, + 0.314 + ], + "angle": 0, + "content": "Abstract" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.328, + 0.484, + 0.588 + ], + "angle": 0, + "content": "Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. 
We furthermore prove a convergence rate for our algorithm." + }, + { + "type": "title", + "bbox": [ + 0.097, + 0.614, + 0.226, + 0.628 + ], + "angle": 0, + "content": "1. Introduction" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.637, + 0.483, + 0.787 + ], + "angle": 0, + "content": "Federated learning (FL) is emerging as an essential distributed learning paradigm in large-scale machine learning. Unlike in traditional machine learning, where a model is trained on the collected centralized data, in federated learning, each client (e.g. phones and institutions) learns a model with its local data. A centralized model is then obtained by aggregating the updates from all participating clients without ever requesting the client data, thereby ensuring a certain level of user privacy [13, 17]. Such an algorithm is especially beneficial for tasks where the data is sensitive, e.g. chemical hazard detection and disease diagnosis [33]." + }, + { + "type": "text", + "bbox": [ + 0.116, + 0.787, + 0.483, + 0.801 + ], + "angle": 0, + "content": "Two primary challenges in federated learning are i) han" + }, + { + "type": "image", + "bbox": [ + 0.516, + 0.299, + 0.638, + 0.411 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.412, + 0.644, + 0.421 + ], + "angle": 0, + "content": "Feature extractor" + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.422, + 0.595, + 0.431 + ], + "angle": 0, + "content": "□Classifier" + }, + { + "type": "image_caption", + "bbox": [ + 0.517, + 0.432, + 0.652, + 0.443 + ], + "angle": 0, + "content": "Variance reduction" + }, + { + "type": "image", + "bbox": [ + 0.638, + 0.3, + 0.897, + 0.366 + ], + "angle": 0, + "content": null + }, + { + "type": "image", + "bbox": [ + 0.638, + 0.366, + 0.897, + 0.438 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.722, + 0.436, + 0.855, + 0.444 + ], + "angle": 0, + "content": "Communication rounds" +
}, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.457, + 0.9, + 0.52 + ], + "angle": 0, + "content": "Figure 1. Our proposed FedPVR framework with the performance (communicated parameters per round client \\(\\longleftrightarrow\\) server). Smaller \\(\\alpha\\) corresponds to higher data heterogeneity. Our method achieves a better speedup than existing approaches by transmitting a slightly larger number of parameters than FedAvg." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.546, + 0.901, + 0.697 + ], + "angle": 0, + "content": "dling data heterogeneity across clients [13] and ii) limiting the cost of communication between the server and clients [10]. In this setting, FedAvg [17] is one of the most widely used schemes: A server broadcasts its model to clients, which then update the model using their local data in a series of steps before sending their individual model to the server, where the models are aggregated by averaging the parameters. The process is repeated for multiple communication rounds. While it has shown great success in many applications, it tends to achieve subpar accuracy and convergence when the data are heterogeneous [14, 24, 31]." + }, + { + "type": "text", + "bbox": [ + 0.511, + 0.697, + 0.901, + 0.834 + ], + "angle": 0, + "content": "The slow and sometimes unstable convergence of FedAvg can be caused by client drift [14] brought on by data heterogeneity. Numerous efforts have been made to improve FedAvg's performance in this setting. Prior works attempt to mitigate client drift by penalizing the distance between a client model and the server model [20, 31] or by performing variance reduction techniques while updating client models [1, 14, 32]. 
These works demonstrate fast convergence on convex problems or for simple neural networks; however, their performance on deep neural" + }, + { + "type": "page_footnote", + "bbox": [ + 0.12, + 0.81, + 0.273, + 0.821 + ], + "angle": 0, + "content": "* Work done while at CISPA" + }, + { + "type": "page_footnote", + "bbox": [ + 0.12, + 0.821, + 0.392, + 0.833 + ], + "angle": 0, + "content": "‡ CISPA Helmholtz Center for Information Security" + }, + { + "type": "list", + "bbox": [ + 0.12, + 0.81, + 0.392, + 0.833 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.502, + 0.882 + ], + "angle": 0, + "content": "3964" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.096, + 0.103, + 0.484, + 0.185 + ], + "angle": 0, + "content": "networks, which are state-of-the-art for many centralized learning tasks [11, 34], has yet to be well explored. Adapting techniques that perform well on convex problems to neural networks is non-trivial [7] due to their \"intriguing properties\" [38] such as over-parametrization and permutation symmetries." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.185, + 0.484, + 0.376 + ], + "angle": 0, + "content": "To overcome the above issues, we revisit the FedAvg algorithm with a deep neural network (VGG-11 [34]) under the assumption of data heterogeneity and full client participation. Specifically, we investigate which layers in a neural network are mostly influenced by data heterogeneity. We define drift diversity, which measures the diversity of the directions and scales of the averaged gradients across clients per communication round. We observe that in the non-IID scenario, the deeper layers, especially the final classification layer, have the highest diversity across clients compared to an IID setting. 
This indicates that FedAvg learns good feature representations even in the non-IID scenario [5] and that the significant variation of the deeper layers across clients is a primary cause of FedAvg's subpar performance." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.376, + 0.484, + 0.485 + ], + "angle": 0, + "content": "Based on the above observations, we propose to align the classification layers across clients using variance reduction. Specifically, we estimate the average updating direction of the classifiers (the last several fully connected layers) at the client \\( c_{i} \\) and server level \\( c \\) and use their difference as a control variate [14] to reduce the variance of the classifiers across clients. We analyze our proposed algorithm and derive a convergence rate bound." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.485, + 0.484, + 0.675 + ], + "angle": 0, + "content": "We perform experiments on the popular federated learning benchmark datasets CIFAR10 [19] and CIFAR100 [19] using two types of neural networks, VGG-11 [34] and ResNet-8 [11], and different levels of data heterogeneity across clients. We experimentally show that we require fewer communication rounds compared to the existing methods [14, 17, 31] to achieve the same accuracy while transmitting a similar or slightly larger number of parameters between server and clients than FedAvg (see Fig. 1). With a (large) fixed number of communication rounds, our method achieves on-par or better top-1 accuracy, and in some settings it even outperforms centralized learning. Using conformal prediction [3], we show how performance can be improved further using adaptive prediction sets." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.676, + 0.484, + 0.756 + ], + "angle": 0, + "content": "We show that applying variance reduction on the last layers increases the diversity of the feature extraction layers. 
This diversity in the feature extraction layers may give each client more freedom to learn richer feature representations, and the uniformity in the classifier then ensures a less biased decision. We summarize our contributions here:" + }, + { + "type": "text", + "bbox": [ + 0.115, + 0.765, + 0.483, + 0.833 + ], + "angle": 0, + "content": "- We present our algorithm for partial variance-reduced federated learning (FedPVR). We experimentally demonstrate that the key to the success of our algorithm is the diversity between the feature extraction layers and the alignment between the classifiers." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.103, + 0.901, + 0.185 + ], + "angle": 0, + "content": "- We prove the convergence rate in the convex settings and non-convex settings, precisely characterize its weak dependence on data-heterogeneity measures and show that FedPVR provably converges as fast as the centralized SGD baseline in most practical relevant cases." + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.195, + 0.901, + 0.277 + ], + "angle": 0, + "content": "- We experimentally show that our algorithm is more communication efficient than previous works across various levels of data heterogeneity, datasets, and neural network architectures. In some cases where data heterogeneity exists, the proposed algorithm even performs slightly better than centralized learning." + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.299, + 0.65, + 0.313 + ], + "angle": 0, + "content": "2. Related work" + }, + { + "type": "title", + "bbox": [ + 0.514, + 0.321, + 0.694, + 0.337 + ], + "angle": 0, + "content": "2.1. Federated learning" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.342, + 0.901, + 0.437 + ], + "angle": 0, + "content": "Federated learning (FL) is a fast-growing field [13, 40]. We mainly describe FL methods in non-IID settings where the data is distributed heterogeneously across clients. 
Among the existing approaches, FedAvg [25] is the de facto optimization technique. Despite its solid empirical performance in IID settings [13, 25], it tends to achieve a subpar accuracy-communication trade-off in non-IID scenarios." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.438, + 0.901, + 0.682 + ], + "angle": 0, + "content": "Many works attempt to tackle FL when data is heterogeneous across clients [1, 9, 14, 31, 39, 42]. FedProx [31] proposes a temperature parameter and a proximal regularization term to control the divergence between client and server models. However, the proximal term does not ensure alignment between the global and local optimal points [1]. Similarly, some works control the update direction by introducing a client-dependent control variate [1, 14, 17, 27, 32] that is also communicated between the server and clients. They have achieved a much faster convergence rate, but their performance in a non-convex setup, especially in deep neural networks such as ResNet [11] and VGG [34], is not well explored. Besides, they suffer from a higher communication cost due to the transmission of the extra control variates, which may be a critical issue for resource-limited IoT mobile devices [10]. Among these methods, SCAFFOLD [14] is the method most closely related to ours, and we give a more detailed comparison in sections 3 and 5." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.684, + 0.901, + 0.834 + ], + "angle": 0, + "content": "Another line of work develops FL algorithms based on characteristics of neural networks, such as their expressive feature representations [28]. Collins et al. [5] show that FedAvg is powerful in learning common data representations from clients' data. FedBabu [29], TCT [41], and CCVR [24] propose to improve FL performance by finetuning the classifiers with a standalone dataset or with features simulated based on the client models. 
However, preparing a standalone dataset/features that represents the data distribution across clients is challenging as this usually requires domain knowledge and may raise privacy concerns." + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.502, + 0.882 + ], + "angle": 0, + "content": "3965" + } + ], + [ + { + "type": "image", + "bbox": [ + 0.134, + 0.098, + 0.867, + 0.266 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.095, + 0.276, + 0.9, + 0.315 + ], + "angle": 0, + "content": "Figure 2. Data distribution (number of images per client per class) with different levels of heterogeneity, client CKA similarity, and the drift diversity of each layer in VGG-11 (20 layers) with FedAvg. Deep layers in an over-parameterised neural network have higher disagreement and variance when the clients are heterogeneous using FedAvg." + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.337, + 0.482, + 0.405 + ], + "angle": 0, + "content": "Moon [20] encourages the similarity of the representations across different client models by using contrastive loss [4] but with the cost of three full-size models in memory on each client, which may limit its applicability in resource-limited devices." + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.406, + 0.483, + 0.487 + ], + "angle": 0, + "content": "Other works focus on reducing the communication cost by compressing the transmitted gradients [2, 8, 21, 26, 36]. They can reduce the communication bandwidth by adjusting the number of bits sent per iteration. These works are complementary to ours and can be easily integrated into our method to save communication costs." + }, + { + "type": "title", + "bbox": [ + 0.097, + 0.497, + 0.276, + 0.51 + ], + "angle": 0, + "content": "2.2. 
Variance reduction" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.518, + 0.484, + 0.722 + ], + "angle": 0, + "content": "Stochastic variance reduction (SVR), such as SVRG [12], SAGA [6], and their variants, use control variate to reduce the variance of traditional stochastic gradient descent (SGD). These methods can remarkably achieve a linear convergence rate for strongly convex optimization problems compared to the sub-linear rate of SGD. Many federated learning algorithms, such as SCAFFOLD [14] and DANE [32], have adapted the idea of variance reduction for the whole model and achieved good convergence on convex problems. However, as [7] demonstrated, naively applying variance reduction techniques gives no actual variance reduction and tends to result in a slower convergence in deep neural networks. This suggests that adapting SVR techniques in deep neural networks for FL requires a more careful design." + }, + { + "type": "title", + "bbox": [ + 0.097, + 0.731, + 0.295, + 0.745 + ], + "angle": 0, + "content": "2.3. Conformal prediction" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.752, + 0.483, + 0.833 + ], + "angle": 0, + "content": "Conformal prediction is a general framework that computes a prediction set guaranteed to include the true class with a high user-determined probability [3, 30]. It requires no retraining of the models and achieves a finite-sum coverage guarantee [3]. As FL algorithms can hardly perform as well as centralized learning [24] when the data heterogene" + }, + { + "type": "table_caption", + "bbox": [ + 0.598, + 0.327, + 0.816, + 0.339 + ], + "angle": 0, + "content": "Table 1. Notations used in this paper" + }, + { + "type": "table", + "bbox": [ + 0.52, + 0.34, + 0.899, + 0.426 + ], + "angle": 0, + "content": "
R, r | number of communication rounds; round index
K, k | number of local steps; local step index
N, i | number of clients; client index
y_{i,k}^r | client model i at local step k, round r
x^r | server model at round r
c_i^r, c^r | client and server control variates
" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.429, + 0.901, + 0.512 + ], + "angle": 0, + "content": "ity is high, we can integrate conformal prediction in FL to improve the empirical coverage by slightly increasing the predictive set size. This can be beneficial in sensitive use cases such as detecting chemical hazards, where it is better to give a prediction set that contains the correct class than producing a single but wrong prediction." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.522, + 0.603, + 0.536 + ], + "angle": 0, + "content": "3. Method" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.545, + 0.692, + 0.558 + ], + "angle": 0, + "content": "3.1. Problem statement" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.565, + 0.901, + 0.633 + ], + "angle": 0, + "content": "Given \\(N\\) clients with full participation, we formalise the problem as minimizing the average of the stochastic functions with access to stochastic samples in Eq. 1 where \\(\\pmb{x}\\) is the model parameters and \\(f_{i}\\) represents the loss function at client \\(i\\) with dataset \\(\\mathcal{D}_i\\)," + }, + { + "type": "equation", + "bbox": [ + 0.599, + 0.639, + 0.9, + 0.677 + ], + "angle": 0, + "content": "\\[\n\\min _ {\\boldsymbol {x} \\in \\mathbb {R} ^ {d}} \\left(f (\\boldsymbol {x}) := \\frac {1}{N} \\sum_ {i = 1} ^ {N} f _ {i} (\\boldsymbol {x})\\right), \\tag {1}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.682, + 0.724, + 0.698 + ], + "angle": 0, + "content": "where \\(f_{i}(\\pmb {x}):= \\mathbb{E}_{\\mathcal{D}_{i}}[f_{i}(\\pmb {x};\\mathcal{D}_{i})]\\)" + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.703, + 0.634, + 0.717 + ], + "angle": 0, + "content": "3.2. 
Motivation" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.724, + 0.901, + 0.834 + ], + "angle": 0, + "content": "When the data \\(\\{\\mathcal{D}_i\\}\\) are heterogeneous across clients, FedAvg suffers from client drift [14], where the average of the local optima \\(\\bar{\\pmb{x}}^* = \\frac{1}{N}\\sum_{i\\in N}\\pmb{x}_i^*\\) is far from the global optimum \\(\\pmb{x}^*\\). To understand what causes client drift, specifically which layers in a neural network are influenced most by the data heterogeneity, we perform a simple experiment using FedAvg and the CIFAR10 dataset on a VGG-11. The detailed experimental setup can be found in Section 4." + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.872, + 0.502, + 0.882 + ], + "angle": 0, + "content": "3966" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.095, + 0.103, + 0.484, + 0.211 + ], + "angle": 0, + "content": "In an over-parameterized model, it is difficult to directly calculate the client drift \\(||\\bar{x}^{*} - x^{*}||^{2}\\), as it is challenging to obtain the global optimum \\(x^{*}\\). We instead hypothesize that we can represent the influence of data heterogeneity on the model by measuring 1) drift diversity and 2) client model similarity. Drift diversity reflects the diversity in the amount each client model deviates from the server model after an update round." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.212, + 0.484, + 0.239 + ], + "angle": 0, + "content": "Definition 1 (Drift diversity). 
We define the drift diversity across \\(N\\) clients at round \\(r\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.144, + 0.246, + 0.484, + 0.282 + ], + "angle": 0, + "content": "\\[\n\\xi^ {r} := \\frac {\\sum_ {i = 1} ^ {N} \\left\\| \\boldsymbol {m} _ {i} ^ {r} \\right\\| ^ {2}}{\\left\\| \\sum_ {i = 1} ^ {N} \\boldsymbol {m} _ {i} ^ {r} \\right\\| ^ {2}} \\quad \\boldsymbol {m} _ {i} ^ {r} = \\boldsymbol {y} _ {i, K} ^ {r} - \\boldsymbol {x} ^ {r - 1} \\tag {2}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.292, + 0.484, + 0.394 + ], + "angle": 0, + "content": "Drift diversity \\(\\xi\\) is high when all the clients update their models in different directions, i.e., when dot products between client updates \\(\\pmb{m}_i\\) are small. When each client performs \\(K\\) steps of vanilla SGD updates, \\(\\xi\\) depends on the directions and amplitude of the gradients over \\(N\\) clients and is equivalent to \\(\\frac{\\sum_{i=1}^{N} \\|\\sum_{k} g_i(\\pmb{y}_{i,k})\\|^2}{\\|\\sum_{i=1}^{N} \\sum_{k} g_i(\\pmb{y}_{i,k})\\|^2}\\), where \\(g_i(\\pmb{y}_{i,k})\\) is the stochastic mini-batch gradient." + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.395, + 0.484, + 0.463 + ], + "angle": 0, + "content": "After updating client models, we quantify the client model similarity using centred kernel alignment (CKA) [18] computed on a test dataset. CKA is a widely used permutation invariant metric for measuring the similarity between feature representations in neural networks [18, 24, 28]." + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.463, + 0.484, + 0.681 + ], + "angle": 0, + "content": "Fig. 2 shows the movement of \\(\\xi\\) and CKA across different levels of data heterogeneity using FedAvg. We observe that the similarity and diversity of the early layers (e.g. 
layer index 4 and 12) show higher agreement between the IID \\((\\alpha = 100.0)\\) and non-IID \\((\\alpha = 0.1)\\) experiments, which indicates that FedAvg can still learn and extract good feature representations even when it is trained with non-IID data. The lower similarity on the deeper layers, especially the classifiers, suggests that these layers are strongly biased towards their local data distribution. When we only look at the model that is trained with \\(\\alpha = 0.1\\), we see the highest diversity and variance on the classifiers across clients compared to the rest of the layers. Based on the above observations, we propose to align the classifiers across clients using variance reduction. We deploy client and server control variates to control the updating directions of the classifiers." + }, + { + "type": "title", + "bbox": [ + 0.096, + 0.689, + 0.35, + 0.703 + ], + "angle": 0, + "content": "3.3. Classifier variance reduction" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.71, + 0.483, + 0.752 + ], + "angle": 0, + "content": "Our proposed algorithm (Alg. I) consists of three parts: i) client updating (Eq. 5-6), ii) client control variate updating (Eq. 7), and iii) server updating (Eq. 8-9)." + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.752, + 0.484, + 0.833 + ], + "angle": 0, + "content": "We first define a vector \\( \\pmb{p} \\in \\mathbb{R}^d \\) whose entries are 0 or 1, with \\( v \\) non-zero elements \\( (v \\ll d) \\), in Eq. 3. We recover SCAFFOLD with \\( \\pmb{p} = \\mathbf{1} \\) and recover FedAvg with \\( \\pmb{p} = \\mathbf{0} \\). For the set of indices \\( j \\) where \\( p_j = 1 \\) (\\( S_{\\mathrm{svr}} \\) from Eq. 
4), we update the corresponding weights \\( y_{i,S_{\\mathrm{svr}}} \\) with variance reduction such that we maintain a state for each client \\( (c_i \\in \\mathbb{R}^v) \\) and" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.103, + 0.901, + 0.158 + ], + "angle": 0, + "content": "for the server \\((\\pmb{c} \\in \\mathbb{R}^v)\\) in Eq. 5. For the rest of the indices \\(S_{\\mathrm{sgd}}\\) from Eq. 4, we update the corresponding weights \\(\\pmb{y}_{i,S_{\\mathrm{sgd}}}\\) with SGD in Eq. 6. As the server variate \\(\\pmb{c}\\) is an average of \\(\\pmb{c}_i\\) across clients, we can safely initialise them as \\(\\mathbf{0}\\)." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.158, + 0.901, + 0.267 + ], + "angle": 0, + "content": "In each communication round, each client receives a copy of the server model \\( \\mathbf{x} \\) and the server control variate \\( \\mathbf{c} \\). They then perform \\( K \\) model updating steps (see Eq. 5-6 for one step) using cross-entropy as the loss function. Once this is finished, we calculate the updated client control variate \\( \\mathbf{c}_i \\) using Eq. 7. The server then receives the updated \\( \\mathbf{c}_i \\) and \\( \\mathbf{y}_i \\) from all the clients for aggregation (Eq. 8-9). This completes one communication round." 
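The communication round just described (client steps per Eqs. 5-6, control-variate refresh per Eq. 7, server aggregation per Eqs. 8-9) can be sketched in a few lines. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation; the function names and the toy quadratic losses in the demo are invented for the example.

```python
import numpy as np

def client_update(x, c, c_i, grad_fn, p, K, eta_l):
    """One communication round on one client (Eqs. 5-7): coordinates with
    mask p = 1 (S_svr) receive the correction -c_i + c, the rest plain SGD."""
    y = x.copy()
    for _ in range(K):
        # Eqs. 5-6 in a single step: the correction is masked out on S_sgd.
        y = y - eta_l * (grad_fn(y) + p * (c - c_i))
    # Eq. 7: refresh the client control variate (only the masked block matters).
    c_i_new = c_i - c + p * (x - y) / (K * eta_l)
    return y, c_i_new

def server_update(x, ys, cs, eta_g=1.0):
    """Eqs. 8-9: aggregate client models and control variates; eta_g = 1
    reduces Eq. 8 to plain model averaging."""
    return (1.0 - eta_g) * x + np.mean(ys, axis=0), np.mean(cs, axis=0)

# Toy run: two clients with quadratic losses f_i(y) = 0.5 * ||y - b_i||^2,
# variance reduction applied to the last two ("classifier") coordinates only.
b = np.array([[1.0, 2.0, 3.0, 4.0], [-1.0, 0.0, 1.0, 2.0]])
p = np.array([0.0, 0.0, 1.0, 1.0])
x, c = np.zeros(4), np.zeros(4)
cs = [np.zeros(4), np.zeros(4)]
for _ in range(30):
    out = [client_update(x, c, cs[i], lambda y, i=i: y - b[i], p, K=10, eta_l=0.1)
           for i in range(2)]
    ys, cs = [o[0] for o in out], [o[1] for o in out]
    x, c = server_update(x, np.array(ys), np.array(cs))
# x approaches the global optimum, the mean of the b_i.
```

On this toy problem the masked coordinates behave like SCAFFOLD and the unmasked ones like FedAvg, so the server model contracts toward the average of the client optima each round.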
+ }, + { + "type": "equation", + "bbox": [ + 0.57, + 0.27, + 0.9, + 0.29 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {p} := \\{0, 1 \\} ^ {d}, \\quad v = \\sum \\boldsymbol {p} \\tag {3}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.558, + 0.292, + 0.9, + 0.309 + ], + "angle": 0, + "content": "\\[\nS _ {\\mathrm {s v r}} := \\{j: \\boldsymbol {p} _ {j} = 1 \\}, \\quad S _ {\\mathrm {s g d}} := \\{j: \\boldsymbol {p} _ {j} = 0 \\} \\tag {4}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.543, + 0.312, + 0.9, + 0.327 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}} \\leftarrow \\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}} - \\eta_ {l} \\left(g _ {i} \\left(\\boldsymbol {y} _ {i}\\right) _ {S _ {\\mathrm {s v r}}} - \\boldsymbol {c} _ {i} + \\boldsymbol {c}\\right) \\tag {5}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.542, + 0.33, + 0.9, + 0.346 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {y} _ {i, S _ {\\mathrm {s g d}}} \\leftarrow \\boldsymbol {y} _ {i, S _ {\\mathrm {s g d}}} - \\eta_ {l} g _ {i} \\left(\\boldsymbol {y} _ {i}\\right) _ {S _ {\\mathrm {s g d}}} \\tag {6}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.567, + 0.348, + 0.9, + 0.376 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {c} _ {i} \\leftarrow \\boldsymbol {c} _ {i} - \\boldsymbol {c} + \\frac {1}{K \\eta_ {l}} \\left(\\boldsymbol {x} _ {S _ {\\mathrm {s v r}}} - \\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}}\\right) \\tag {7}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.571, + 0.378, + 0.9, + 0.41 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {x} \\leftarrow (1 - \\eta_ {g}) \\boldsymbol {x} + \\frac {1}{N} \\sum_ {i \\in N} \\boldsymbol {y} _ {i} \\tag {8}\n\\]" + }, + { + "type": "equation", + "bbox": [ + 0.573, + 0.413, + 0.9, + 0.445 + ], + "angle": 0, + "content": "\\[\n\\boldsymbol {c} \\leftarrow \\frac {1}{N} \\sum_ {i \\in N} \\boldsymbol {c} _ {i} \\tag {9}\n\\]" + }, + { + "type": "title", 
+ "bbox": [ + 0.515, + 0.451, + 0.843, + 0.465 + ], + "angle": 0, + "content": "Algorithm I Partial variance reduction (FedPVR)" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.467, + 0.901, + 0.492 + ], + "angle": 0, + "content": "server: initialise the server model \\( \\mathbf{x} \\), the control variate \\( \\mathbf{c} \\), and global step size \\( \\eta_g \\)" + }, + { + "type": "text", + "bbox": [ + 0.539, + 0.492, + 0.866, + 0.504 + ], + "angle": 0, + "content": "client: initialise control variate \\( c_{i} \\) and local step size \\( \\eta_{l} \\)" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.504, + 0.9, + 0.531 + ], + "angle": 0, + "content": "\\(\\mathbf{mask}\\) .. \\(\\pmb {p}:= \\{0,1\\} ^d\\) \\(S_{\\mathrm{sgd}}\\coloneqq \\{j:\\pmb {p}_j = 0\\}\\) \\(S_{\\mathrm{svr}}\\coloneqq \\{j:\\) \\(\\pmb {p}_j = 1\\}\\)" + }, + { + "type": "text", + "bbox": [ + 0.521, + 0.532, + 0.724, + 0.543 + ], + "angle": 0, + "content": "1: procedure MODEL UPDATING" + }, + { + "type": "text", + "bbox": [ + 0.522, + 0.544, + 0.676, + 0.555 + ], + "angle": 0, + "content": "2: for \\(r = 1\\rightarrow R\\) do" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.556, + 0.846, + 0.568 + ], + "angle": 0, + "content": "3: communicate \\( x \\) and \\( c \\) to all clients \\( i \\in [N] \\)" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.569, + 0.792, + 0.581 + ], + "angle": 0, + "content": "4: for On client \\( i \\in [N] \\) in parallel do" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.583, + 0.656, + 0.594 + ], + "angle": 0, + "content": "5: \\(y_{i}\\gets x\\)" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.595, + 0.721, + 0.605 + ], + "angle": 0, + "content": "6: for \\(k = 1\\to K\\) do" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.606, + 0.833, + 0.619 + ], + "angle": 0, + "content": "7: compute minibatch gradient \\(g_{i}(\\pmb{y}_{i})\\)" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.62, + 0.816, + 0.634 + ], + "angle": 0, + 
"content": "8: \\(\\pmb{y}_{i,S_{\\mathrm{sgd}}} \\gets \\pmb{y}_{i,S_{\\mathrm{sgd}}} - \\eta_l g_i(\\pmb{y}_i)_{S_{\\mathrm{sgd}}}\\)" + }, + { + "type": "text", + "bbox": [ + 0.523, + 0.634, + 0.879, + 0.646 + ], + "angle": 0, + "content": "9: \\(\\pmb{y}_{i,S_{\\mathrm{svr}}}\\gets \\pmb{y}_{i,S_{\\mathrm{svr}}} - \\eta_{l}(g_{i}(\\pmb{y}_{i})_{S_{\\mathrm{svr}}} - \\pmb{c}_{i} + \\pmb{c})\\)" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.646, + 0.653, + 0.656 + ], + "angle": 0, + "content": "10: end for" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.657, + 0.819, + 0.672 + ], + "angle": 0, + "content": "11: \\( \\pmb{c}_i \\gets \\pmb{c}_i - \\pmb{c} + \\frac{1}{K\\eta_l} (\\pmb{x}_{S_{\\mathrm{svr}}} - \\pmb{y}_{i,S_{\\mathrm{svr}}}) \\)" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.672, + 0.727, + 0.683 + ], + "angle": 0, + "content": "12: communicate \\(\\pmb{y}_i, \\pmb{c}_i\\)" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.684, + 0.63, + 0.694 + ], + "angle": 0, + "content": "13: end for" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.695, + 0.771, + 0.708 + ], + "angle": 0, + "content": "14: \\(\\pmb{x} \\gets (1 - \\eta_g)\\pmb{x} + \\frac{1}{N}\\sum_{i\\in N}\\pmb{y}_i\\)" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.708, + 0.688, + 0.721 + ], + "angle": 0, + "content": "15: \\( \\pmb{c} \\gets \\frac{1}{N} \\sum_{i \\in N} \\pmb{c}_i \\)" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.721, + 0.61, + 0.732 + ], + "angle": 0, + "content": "16: end for" + }, + { + "type": "text", + "bbox": [ + 0.518, + 0.733, + 0.633, + 0.744 + ], + "angle": 0, + "content": "17: end procedure" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.747, + 0.901, + 0.779 + ], + "angle": 0, + "content": "In terms of implementation, we can simply assume the control variate for the block of weights that is updated with SGD as 0 and implement line 8 and 9 in one step" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.779, + 0.901, + 
0.833 + ], + "angle": 0, + "content": "Ours vs SCAFFOLD [14] While our work is similar to SCAFFOLD in the use of variance reduction, there are some fundamental differences. We both communicate control variates between the clients and server, but our control" + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.502, + 0.882 + ], + "angle": 0, + "content": "3967" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.095, + 0.103, + 0.484, + 0.281 + ], + "angle": 0, + "content": "variate \\((2v\\leq 0.1d)\\) is significantly smaller than the one in SCAFFOLD \\((2d)\\). This decrease of at least \\(20x\\) in transmitted control-variate bits can be critical for some low-power IoT devices, as communication may consume more energy [10]. From the application point of view, SCAFFOLD achieved great success on convex or simple two-layer problems. However, adapting techniques that work well on convex problems to over-parameterized models is non-trivial [38], and naively applying variance reduction techniques to deep neural networks gives little or no convergence speedup [7]. Therefore, the significant improvement achieved by our method gives essential and nontrivial insight into what matters when tackling data heterogeneity in FL with over-parameterized models." + }, + { + "type": "title", + "bbox": [ + 0.097, + 0.288, + 0.266, + 0.303 + ], + "angle": 0, + "content": "3.4. Convergence rate" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.309, + 0.484, + 0.404 + ], + "angle": 0, + "content": "We state the convergence rate in this section. We assume the functions \\(\\{f_i\\}\\) are \\(\\beta\\)-smooth following [16, 35]. We then assume \\(g_{i}(\\pmb{x})\\coloneqq \\nabla f_{i}(x;\\mathcal{D}_{i})\\) is an unbiased stochastic gradient of \\(f_{i}\\) with variance bounded by \\(\\sigma^2\\). We assume strong convexity \\((\\mu >0)\\) and general convexity \\((\\mu = 0)\\) for some of the results following [14]. Furthermore, we also make assumptions about the heterogeneity of the functions."
+ }, + { + "type": "text", + "bbox": [ + 0.096, + 0.405, + 0.484, + 0.459 + ], + "angle": 0, + "content": "For convex functions, we bound the heterogeneity of the functions \\(\\{f_i\\}\\) at the optimal point \\(x^{*}\\) (such a point always exists for a strongly convex function) following [15, 16]." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.461, + 0.485, + 0.488 + ], + "angle": 0, + "content": "Assumption 1 (\\(\\zeta\\)-heterogeneity). We define a measure of variance at the optimum \\(x^{*}\\) given \\(N\\) clients as:" + }, + { + "type": "equation", + "bbox": [ + 0.192, + 0.49, + 0.482, + 0.527 + ], + "angle": 0, + "content": "\\[\n\\zeta^ {2} := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} | | \\nabla f _ {i} \\left(\\boldsymbol {x} ^ {*}\\right) | | ^ {2}. \\tag {10}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.53, + 0.484, + 0.572 + ], + "angle": 0, + "content": "For non-convex functions, such a unique optimal point \\( \\pmb{x}^* \\) does not necessarily exist, so we generalize Assumption 1 to Assumption 2." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.578, + 0.484, + 0.606 + ], + "angle": 0, + "content": "Assumption 2 (\\(\\hat{\\zeta}\\)-heterogeneity). We assume there exists a constant \\(\\hat{\\zeta}\\) such that \\(\\forall x \\in \\mathbb{R}^d\\):" + }, + { + "type": "equation", + "bbox": [ + 0.2, + 0.608, + 0.482, + 0.645 + ], + "angle": 0, + "content": "\\[\n\\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} | | \\nabla f _ {i} (\\boldsymbol {x}) | | ^ {2} \\leq \\hat {\\zeta} ^ {2}. \\tag {11}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.655, + 0.484, + 0.683 + ], + "angle": 0, + "content": "Given the mask \\( \\pmb{p} \\) as defined in Eq. 3, we know \\( ||\\pmb{p} \\odot \\pmb{x}|| \\leq ||\\pmb{x}|| \\). Therefore, we have the following propositions." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.69, + 0.484, + 0.731 + ], + "angle": 0, + "content": "Proposition 1 (Implication of Assumption 1). 
Given the mask \\(\\mathbf{p}\\), we define the heterogeneity of the block of weights that are not variance reduced at the optimum \\(\\mathbf{x}^*\\) as:" + }, + { + "type": "equation", + "bbox": [ + 0.155, + 0.734, + 0.482, + 0.771 + ], + "angle": 0, + "content": "\\[\n\\zeta_ {1 - p} ^ {2} := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\left\\| (\\mathbf {1} - \\boldsymbol {p}) \\odot \\nabla f _ {i} \\left(\\boldsymbol {x} ^ {*}\\right) \\right\\| ^ {2}, \\tag {12}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.779, + 0.394, + 0.792 + ], + "angle": 0, + "content": "If Assumption 1 holds, then it also holds that:" + }, + { + "type": "equation", + "bbox": [ + 0.25, + 0.794, + 0.482, + 0.812 + ], + "angle": 0, + "content": "\\[\n\\zeta_ {1 - p} ^ {2} \\leq \\zeta^ {2}. \\tag {13}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.097, + 0.901, + 0.154 + ], + "angle": 0, + "content": "In Proposition 1, \\(\\zeta_{1 - p}^2 = \\zeta^2\\) if \\(\\pmb{p} = \\mathbf{0}\\) and \\(\\zeta_{1 - p}^2 = 0\\) if \\(\\pmb{p} = \\mathbf{1}\\). If \\(\\pmb{p} \\neq \\mathbf{0}\\) and \\(\\pmb{p} \\neq \\mathbf{1}\\), as the heterogeneity of the shallow weights is lower than the deeper weights [41], we have \\(\\zeta_{1 - p}^2 \\leq \\zeta^2\\). Similarly, we can validate Proposition 2." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.163, + 0.901, + 0.217 + ], + "angle": 0, + "content": "Proposition 2 (Implication of Assumption 2). 
Given the mask \\(\\pmb{p}\\), we assume there exists a constant \\(\\hat{\\zeta}_{1 - p}\\) such that \\(\\forall \\pmb{x} \\in \\mathbb{R}^d\\), the heterogeneity of the block of weights that is not variance reduced satisfies:" + }, + { + "type": "equation", + "bbox": [ + 0.58, + 0.22, + 0.9, + 0.258 + ], + "angle": 0, + "content": "\\[\n\\frac {1}{N} \\sum_ {i = 1} ^ {N} | | (\\mathbf {1} - \\boldsymbol {p}) \\odot \\nabla f _ {i} (\\boldsymbol {x}) | | ^ {2} \\leq \\hat {\\zeta} _ {1 - p} ^ {2}, \\tag {14}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.268, + 0.811, + 0.282 + ], + "angle": 0, + "content": "If Assumption 2 holds, then it also holds that:" + }, + { + "type": "equation", + "bbox": [ + 0.667, + 0.286, + 0.9, + 0.304 + ], + "angle": 0, + "content": "\\[\n\\hat {\\zeta} _ {1 - p} ^ {2} \\leq \\hat {\\zeta} ^ {2}. \\tag {15}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.316, + 0.903, + 0.358 + ], + "angle": 0, + "content": "Theorem 1. For any \\(\\beta\\)-smooth functions \\(\\{f_i\\}\\), the output of FedPVR has expected error smaller than \\(\\epsilon\\) for \\(\\eta_g = \\sqrt{N}\\) and some values of \\(\\eta_l\\), \\(R\\) satisfying:" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.367, + 0.86, + 0.39 + ], + "angle": 0, + "content": "Strongly convex: \\(\\eta_l \\leq \\min \\left(\\frac{1}{80K\\eta_g\\beta}, \\frac{26}{20\\mu K\\eta_g}\\right)\\)," + }, + { + "type": "equation", + "bbox": [ + 0.61, + 0.401, + 0.9, + 0.438 + ], + "angle": 0, + "content": "\\[\nR = \\tilde {\\mathcal {O}} \\left(\\frac {\\sigma^ {2}}{\\mu N K \\epsilon} + \\frac {\\zeta_ {1 - p} ^ {2}}{\\mu \\epsilon} + \\frac {\\beta}{\\mu}\\right), \\tag {16}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.445, + 0.752, + 0.464 + ], + "angle": 0, + "content": "General convex: \\(\\eta_{l} \\leq \\frac{1}{80K\\eta_{g}\\beta}\\)," + }, + { + "type": "equation", + "bbox": [ + 0.568, + 0.474, + 0.9, + 0.511 + ], + "angle": 0, + "content": "\\[\nR = \\mathcal {O} 
\\left(\\frac {\\sigma^ {2} D}{K N \\epsilon^ {2}} + \\frac {\\zeta_ {1 - p} ^ {2} D}{\\epsilon^ {2}} + \\frac {\\beta D}{\\epsilon} + F\\right), \\tag {17}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.532, + 0.518, + 0.843, + 0.536 + ], + "angle": 0, + "content": "Non-convex: \\(\\eta_l \\leq \\frac{1}{26K\\eta_g\\beta}\\), and \\(R \\geq 1\\), then:" + }, + { + "type": "equation", + "bbox": [ + 0.581, + 0.547, + 0.9, + 0.584 + ], + "angle": 0, + "content": "\\[\nR = \\mathcal {O} \\left(\\frac {\\beta \\sigma^ {2} F}{K N \\epsilon^ {2}} + \\frac {\\beta \\hat {\\zeta} _ {1 - p} ^ {2} F}{N \\epsilon^ {2}} + \\frac {\\beta F}{\\epsilon}\\right), \\tag {18}\n\\]" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.587, + 0.834, + 0.603 + ], + "angle": 0, + "content": "Where \\(D\\coloneqq ||\\pmb {x}^0 -\\pmb{x}^* ||^2\\) and \\(F\\coloneqq f(\\pmb {x}^0) - f^*\\)" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.613, + 0.901, + 0.78 + ], + "angle": 0, + "content": "Given the above assumptions, the convergence rate is given in Theorem I. When \\( p = 1 \\), we recover SCAFFOLD convergence guarantee as \\( \\zeta_{1 - p}^2 = 0 \\), \\( \\hat{\\zeta}_{1 - p}^2 = 0 \\). In the strongly convex case, the effect of the heterogeneity of the block of weights that are not variance reduced \\( \\zeta_{1 - p}^2 \\) becomes negligible if \\( \\tilde{\\mathcal{O}}\\left(\\frac{\\zeta_{1 - p}^2}{\\epsilon}\\right) \\) is sufficiently smaller than \\( \\tilde{\\mathcal{O}}\\left(\\frac{\\sigma^2}{NK\\epsilon}\\right) \\). In such case, our rate is \\( \\frac{\\sigma^2}{NK\\epsilon} + \\frac{1}{\\mu} \\), which recovers the SCAFFOLD in the strongly convex without sampling and further matches that of SGD (with mini-batch size \\( K \\) on each worker). We also recover the FedAvg rate* at simple IID case. See Appendix. B for the full proof." 
+ }, + { + "type": "page_footnote", + "bbox": [ + 0.513, + 0.79, + 0.9, + 0.833 + ], + "angle": 0, + "content": "*FedAvg in the strongly convex case has the rate \\( R = \\tilde{\\mathcal{O}}\\left(\\frac{\\sigma^2}{\\mu K N \\epsilon} + \\frac{\\sqrt{\\beta} G}{\\mu \\sqrt{\\epsilon}} + \\frac{\\beta}{\\mu}\\right) \\), where \\( G \\) measures the gradient dissimilarity. In the simple IID case, \\( G = 0 \\) [14]." + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.502, + 0.882 + ], + "angle": 0, + "content": "3968" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.097, + 0.102, + 0.283, + 0.118 + ], + "angle": 0, + "content": "4. Experimental setup" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.125, + 0.482, + 0.328 + ], + "angle": 0, + "content": "We demonstrate the effectiveness of our approach with CIFAR10 [19] and CIFAR100 [19] on image classification tasks. We simulate the data heterogeneity scenario following [22] by partitioning the data according to the Dirichlet distribution with concentration parameter \\(\\alpha\\). The smaller \\(\\alpha\\) is, the more imbalanced the data distribution across clients. An example of the data distribution over multiple clients using the CIFAR10 dataset can be seen in Fig. 2. In our experiments, we use \\(\\alpha \\in \\{0.1, 0.5, 1.0\\}\\), as these are commonly used concentration parameters [22]. Each client has its own local data, which is kept the same across all communication rounds. We hold out the test dataset at the server for evaluating the classification performance of the server model. Following [22], we perform the same data augmentation for all the experiments." + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.33, + 0.483, + 0.589 + ], + "angle": 0, + "content": "We use two models: VGG-11 and ResNet-8 following [22]. We perform variance reduction for the last three layers in VGG-11 and the last layer in ResNet-8. We use 10 clients with full participation following [41] (close to a cross-silo setup) and a batch size of 256. Each client performs 10 local epochs of model updating. We set the server learning rate \\(\\eta_{g} = 1\\) for all the models [14]. We tune the client learning rate from \\(\\{0.05, 0.1, 0.2, 0.3\\}\\) for each individual experiment. The learning rate schedule is experimentally chosen from constant, cosine decay [23], and multi-step decay [22]. We compare our method with the representative federated learning algorithms FedAvg [25], FedProx [31], SCAFFOLD [14], and FedDyn [1]. All the results are averaged over three repeated experiments with different random initialization. We leave \\(1\\%\\) of the training data from each client out as validation data to tune the hyperparameters (learning rate and schedule) per client. See Appendix C for additional experimental setup details. The code is at github.com/lyn1874/fedpvr." + }, + { + "type": "title", + "bbox": [ + 0.097, + 0.592, + 0.295, + 0.607 + ], + "angle": 0, + "content": "5. Experimental results" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.615, + 0.483, + 0.779 + ], + "angle": 0, + "content": "We demonstrate the performance of our proposed approach in the FL setup with data heterogeneity in this section. We compare our method with existing state-of-the-art algorithms on various datasets and deep neural networks. For the baseline approaches, we finetune the hyperparameters and report only the best performance obtained. Our main findings are: 1) our method is more communication efficient than the baseline approaches, 2) conformal prediction is an effective tool for improving FL performance in high data heterogeneity scenarios, and 3) there is a beneficial trade-off between diversity and uniformity when using deep neural networks in FL." + }, + { + "type": "title", + "bbox": [ + 0.096, + 0.785, + 0.436, + 0.8 + ], + "angle": 0, + "content": "5.1. 
Communication efficiency and accuracy" + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.806, + 0.484, + 0.834 + ], + "angle": 0, + "content": "We first report the number of rounds required to achieve a certain level of top-1 accuracy (66% for CIFAR10 and" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.103, + 0.901, + 0.253 + ], + "angle": 0, + "content": "\\(44\\%\\) for CIFAR100) in Table 2. An algorithm is more communication efficient if it requires fewer rounds to achieve the same accuracy and/or if it transmits fewer parameters between the clients and server. Compared to the baseline approaches, our method requires far fewer rounds for almost all types of data heterogeneity and models. We achieve a speedup of 1.5x to 6.7x over FedAvg. We also observe that ResNet-8 tends to converge more slowly than VGG-11, which may be due to the aggregation of the Batch Normalization layers that are discrepant across the local data distributions [22]." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.254, + 0.901, + 0.403 + ], + "angle": 0, + "content": "We next compare the top-1 accuracy between centralized learning and federated learning algorithms. For the centralized learning experiment, we tune the learning rate from \\(\\{0.01, 0.05, 0.1\\}\\) and report the best test accuracy based on the validation dataset. We train the model for 800 epochs, which matches the total number of epochs in the federated learning algorithms (80 communication rounds x 10 local epochs). The results are shown in Table 3. We also show the number of copies of the parameters that need to be transmitted between the server and clients (e.g. 2x means we communicate \\(x\\) and \\(y_i\\))." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.405, + 0.901, + 0.5 + ], + "angle": 0, + "content": "Table 3 shows that our approach achieves much better top-1 accuracy than FedAvg while transmitting a similar or only slightly larger number of parameters between the server and client per round. Our method also achieves slightly better accuracy than centralized learning when the data is less heterogeneous (e.g. \\(\\alpha = 0.5\\) for CIFAR10 and \\(\\alpha = 1.0\\) for CIFAR100)." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.51, + 0.715, + 0.525 + ], + "angle": 0, + "content": "5.2. Conformal prediction" + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.532, + 0.901, + 0.6 + ], + "angle": 0, + "content": "When the data heterogeneity is high across clients, it is difficult for a federated learning algorithm to match the centralized learning performance [24]. Therefore, we demonstrate the benefit of using simple post-processing conformal prediction to improve the model performance." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.601, + 0.901, + 0.723 + ], + "angle": 0, + "content": "We examine the relationship between the empirical coverage and the average predictive set size for the server model after 80 communication rounds for each federated learning algorithm. The empirical coverage is the percentage of the data samples where the correct prediction is in the predictive set, and the average predictive size is the average length of the predictive sets over all the test images [3]. See Appendix C for more information about the conformal prediction setup and results." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.724, + 0.901, + 0.834 + ], + "angle": 0, + "content": "The results for \\(\\alpha = 0.1\\) on both datasets and architectures are shown in Fig. 3. We show that by slightly increasing the predictive set size, we can achieve accuracy similar to the centralized performance. Moreover, our approach surpasses the centralized top-1 performance as fast as, or faster than, the other approaches. 
In sensitive use cases such as chemical threat detection, conformal prediction is a valuable tool to achieve certified accuracy at the" + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.503, + 0.883 + ], + "angle": 0, + "content": "3969" + } + ], + [ + { + "type": "table_caption", + "bbox": [ + 0.097, + 0.101, + 0.9, + 0.126 + ], + "angle": 0, + "content": "Table 2. The required number of communication rounds (speedup compared to FedAvg) to achieve a certain level of top-1 accuracy (66% for the CIFAR10 dataset and 44% for the CIFAR100 dataset). Our method requires fewer rounds to achieve the same accuracy." + }, + { + "type": "table", + "bbox": [ + 0.097, + 0.128, + 0.898, + 0.239 + ], + "angle": 0, + "content": "
<table>
<tr><td></td><td colspan="4">CIFAR10 (66%)</td><td colspan="4">CIFAR100 (44%)</td></tr>
<tr><td></td><td colspan="2">α=0.1</td><td colspan="2">α=0.5</td><td colspan="2">α=0.1</td><td colspan="2">α=1.0</td></tr>
<tr><td></td><td>VGG-11</td><td>ResNet-8</td><td>VGG-11</td><td>ResNet-8</td><td>VGG-11</td><td>ResNet-8</td><td>VGG-11</td><td>ResNet-8</td></tr>
<tr><td></td><td colspan="8">No. rounds</td></tr>
<tr><td>FedAvg</td><td>55 (1.0x)</td><td>90 (1.0x)</td><td>15 (1.0x)</td><td>15 (1.0x)</td><td>100+ (1.0x)</td><td>100+ (1.0x)</td><td>80 (1.0x)</td><td>56 (1.0x)</td></tr>
<tr><td>FedProx</td><td>52 (1.1x)</td><td>75 (1.2x)</td><td>16 (0.9x)</td><td>20 (0.8x)</td><td>100+ (1.0x)</td><td>100+ (1.0x)</td><td>80 (1.0x)</td><td>59 (0.9x)</td></tr>
<tr><td>SCAFFOLD</td><td>39 (1.4x)</td><td>57 (1.6x)</td><td>14 (1.0x)</td><td>9 (1.7x)</td><td>80 (>1.3x)</td><td>61 (>1.6x)</td><td>36 (2.2x)</td><td>25 (2.2x)</td></tr>
<tr><td>FedDyn</td><td>27 (2.0x)</td><td>67 (1.3x)</td><td>15 (1.0x)</td><td>34 (0.4x)</td><td>80+ (-)</td><td>80+ (-)</td><td>24 (3.3x)</td><td>51 (1.1x)</td></tr>
<tr><td>Ours</td><td>27 (2.0x)</td><td>50 (1.8x)</td><td>9 (1.6x)</td><td>5 (3.0x)</td><td>37 (>2.7x)</td><td>66 (>1.5x)</td><td>12 (6.7x)</td><td>15 (3.7x)</td></tr>
</table>
" + }, + { + "type": "table_caption", + "bbox": [ + 0.096, + 0.249, + 0.9, + 0.286 + ], + "angle": 0, + "content": "Table 3. The top-1 accuracy \\((\\%)\\) after running 80 communication rounds using different methods on CIFAR10 and CIFAR100, together with the number of communicated parameters between the client and the server. We train the centralised model for 800 epochs \\((= 80\\) rounds x 10 local epochs in FL). Higher accuracy is better, and we highlight the best accuracy in red colour." + }, + { + "type": "table", + "bbox": [ + 0.097, + 0.287, + 0.892, + 0.421 + ], + "angle": 0, + "content": "
<table>
<tr><td></td><td colspan="5">VGG-11</td><td colspan="5">ResNet-8</td></tr>
<tr><td></td><td colspan="2">CIFAR10</td><td colspan="2">CIFAR100</td><td rowspan="2">server↔client</td><td colspan="2">CIFAR10</td><td colspan="2">CIFAR100</td><td rowspan="2">server↔client</td></tr>
<tr><td></td><td>α = 0.1</td><td>α = 0.5</td><td>α = 0.1</td><td>α = 1.0</td><td>α = 0.1</td><td>α = 0.5</td><td>α = 0.1</td><td>α = 1.0</td></tr>
<tr><td>Centralised</td><td colspan="2">87.5</td><td colspan="2">56.3</td><td>-</td><td colspan="2">83.4</td><td colspan="2">56.8</td><td>-</td></tr>
<tr><td>FedAvg</td><td>69.3</td><td>80.9</td><td>34.3</td><td>45.0</td><td>2x</td><td>64.9</td><td>79.1</td><td>38.8</td><td>47.0</td><td>2x</td></tr>
<tr><td>FedProx</td><td>72.1</td><td>80.4</td><td>35.0</td><td>43.2</td><td>2x</td><td>66.1</td><td>77.9</td><td>42.0</td><td>47.2</td><td>2x</td></tr>
<tr><td>SCAFFOLD</td><td>74.1</td><td>83.5</td><td>43.4</td><td>50.6</td><td>4x</td><td>66.6</td><td>80.3</td><td>43.8</td><td>52.3</td><td>4x</td></tr>
<tr><td>FedDyn</td><td>77.4</td><td>80.1</td><td>43.8</td><td>45.2</td><td>2x</td><td>63.8</td><td>72.9</td><td>36.4</td><td>48.1</td><td>2x</td></tr>
<tr><td>Ours</td><td>78.2</td><td>84.9</td><td>43.5</td><td>58.0</td><td>2.1x</td><td>69.3</td><td>83.6</td><td>43.5</td><td>52.3</td><td>2.02x</td></tr>
</table>
" + }, + { + "type": "text", + "bbox": [ + 0.097, + 0.444, + 0.373, + 0.458 + ], + "angle": 0, + "content": "cost of a slightly larger predictive set size." + }, + { + "type": "image", + "bbox": [ + 0.113, + 0.472, + 0.47, + 0.658 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.095, + 0.672, + 0.484, + 0.749 + ], + "angle": 0, + "content": "Figure 3. Relation between average predictive size and empirical coverage when \\(\\alpha = 0.1\\). By slightly increasing the predictive set size, we can achieve a similar performance as the centralised model (Top-1 accuracy) even if the data are heterogeneously distributed across clients. Our method is similar to or faster than other approaches to surpass the centralised Top-1 accuracy." + }, + { + "type": "title", + "bbox": [ + 0.096, + 0.757, + 0.32, + 0.773 + ], + "angle": 0, + "content": "5.3. Diversity and uniformity" + }, + { + "type": "text", + "bbox": [ + 0.095, + 0.779, + 0.484, + 0.834 + ], + "angle": 0, + "content": "We have shown that our algorithm achieves a better speedup and performance against the existing approaches with only lightweight modifications to FedAvg. We next investigate what factors lead to better accuracy. Specifi" + }, + { + "type": "image", + "bbox": [ + 0.533, + 0.44, + 0.883, + 0.616 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.512, + 0.625, + 0.9, + 0.713 + ], + "angle": 0, + "content": "Figure 4. Drift diversity and learning curve for ResNet-8 on CIFAR100 with \\(\\alpha = 1.0\\). Compared to FedAvg, SCAFFOLD and our method can both improve the agreement between the classifiers. Compared to SCAFFOLD, our method results in a higher gradient diversity at the early stage of the communication, which tends to boost the learning speed as the curvature of the drift diversity seem to match the learning curve." 
+ }, + { + "type": "text", + "bbox": [ + 0.512, + 0.737, + 0.9, + 0.792 + ], + "angle": 0, + "content": "cally, we calculate the drift diversity \\(\\xi\\) across clients after each communication round using Eq. 2 and average \\(\\xi\\) across three runs. We show the result of using ResNet-8 and CIFAR100 with \\(\\alpha = 1.0\\) in Fig. 4." + }, + { + "type": "text", + "bbox": [ + 0.512, + 0.793, + 0.9, + 0.834 + ], + "angle": 0, + "content": "Fig. 4 shows the drift diversity for different layers in ResNet-8 and the testing accuracy along the communication rounds. We observe that classifiers have the highest diver" + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.502, + 0.882 + ], + "angle": 0, + "content": "3970" + } + ], + [ + { + "type": "text", + "bbox": [ + 0.096, + 0.103, + 0.483, + 0.279 + ], + "angle": 0, + "content": "sity in FedAvg against other layers and methods. SCAFFOLD, which applies control variate on the entire model, can effectively reduce the disagreement of the directions and scales of the averaged gradient across clients. Our proposed algorithm that performs variance reduction only on the classifiers can reduce the diversity of the classifiers even further but increase the diversity of the feature extraction layers. This high diversity tends to boost the learning speed as the curvature of the diversity movement (Fig. 4 left) seems to match the learning curve (Fig. 4 right). Based on this observation, we hypothesize that this diversity along the feature extractor and the uniformity of the classifier is the main reason for our better speedup." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.28, + 0.484, + 0.445 + ], + "angle": 0, + "content": "To test this hypothesis, we perform an experiment where we use variance reduction starting from different layers of a neural network. 
If the starting position of the use of variance reduction influences the learning speed, it indicates where in a neural network we need more diversity and where we need more uniformity. We here show the result of using VGG-11 on CIFAR100 with \\(\\alpha = 1.0\\) as there are more layers in VGG-11. The result is shown in Fig. 5 where SVR: \\(16 \\rightarrow 20\\) is corresponding to our approach and SVR: \\(0 \\rightarrow 20\\) is corresponding to SCAFFOLD that applies variance reduction for the entire model. Results for using ResNet-8 is shown in Appendix. C." + }, + { + "type": "image", + "bbox": [ + 0.111, + 0.454, + 0.473, + 0.613 + ], + "angle": 0, + "content": null + }, + { + "type": "image_caption", + "bbox": [ + 0.096, + 0.623, + 0.483, + 0.71 + ], + "angle": 0, + "content": "Figure 5. Influence of using stochastic variance reduction(SVR) on layers that start from different positions in a neural network on the learning speed. \\(\\mathrm{SVR}:0\\rightarrow 20\\) applies variance reduction on the entire model (SCAFFOLD). \\(\\mathrm{SVR}:16\\rightarrow 20\\) applies variance reduction from the layer index 16 to 20 (ours). The later we apply variance reduction, the better performance speedup we obtain. However, no variance reduction (FedAvg) performs the worst here." + }, + { + "type": "text", + "bbox": [ + 0.096, + 0.724, + 0.483, + 0.834 + ], + "angle": 0, + "content": "We see from Fig. 5 that the deeper in a neural network we apply variance reduction, the better learning speedup we can obtain. There is no clear performance difference between where to activate the variance reduction when the layer index is over 10. However, applying no variance reduction (FedAvg) achieves by far the worst performance. 
We believe that these experimental results indicate that in a distributed optimization framework, to boost the learning" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.103, + 0.901, + 0.158 + ], + "angle": 0, + "content": "speed of an over-parameterized model, we need some levels of diversity in the middle and early layers for learning richer feature representation and some degrees of uniformity in the classifiers for making a less biased decision." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.17, + 0.631, + 0.184 + ], + "angle": 0, + "content": "6. Conclusion" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.193, + 0.901, + 0.369 + ], + "angle": 0, + "content": "In this work, we studied stochastic gradient descent learning for deep neural network classifiers in a federated learning setting, where each client updates its local model using stochastic gradient descent on local data. A central model is periodically updated (by averaging local model parameters) and broadcast to the clients under a communication bandwidth constraint. When data is homogeneous across clients, this procedure is comparable to centralized learning in terms of efficiency; however, when data is heterogeneous, learning is impeded. Our hypothesis for the primary reason for this is that when the local models are out of alignment, updating the central model by averaging is ineffective and sometimes even destructive." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.37, + 0.901, + 0.601 + ], + "angle": 0, + "content": "Examining the diversity across clients of their local model updates and their learned feature representations, we found that the misalignment between models is much stronger in the last few neural network layers than in the rest of the network. This finding inspired us to experiment with aligning the local models using a partial variance reduction technique applied only on the last layers, which we named FedPVR. 
We found that this led to a substantial improvement in convergence speed compared to the competing federated learning methods. In some cases, our method even outperformed centralized learning. We derived a bound on the convergence rate of our proposed method, which matches the rates for SGD when the gradient diversity across clients is sufficiently low. Compared with FedAvg, the communication cost of our method is only marginally worse, as it requires transmitting control variates for the last layers." + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.601, + 0.901, + 0.724 + ], + "angle": 0, + "content": "We believe our FedPVR algorithm strikes a good balance between simplicity and efficiency, requiring only a minor modification to the established FedAvg method; however, in our further research, we plan to pursue more optimal methods for aligning and guiding the local learning algorithms, e.g. using adaptive procedures. Furthermore, the degree of over-parameterization in the neural network layers (e.g. feature extraction vs bottlenecks) may also play an important role, which we would like to understand better." + }, + { + "type": "title", + "bbox": [ + 0.513, + 0.736, + 0.68, + 0.752 + ], + "angle": 0, + "content": "Acknowledgements" + }, + { + "type": "text", + "bbox": [ + 0.513, + 0.759, + 0.901, + 0.828 + ], + "angle": 0, + "content": "The first three authors thank for financial support from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 883390 (H2020-SU-SECU-2019 SERSing Project). BL thanks for the financial support from the Otto Mønsted Foundation." 
+ }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.5, + 0.882 + ], + "angle": 0, + "content": "3971" + } + ], + [ + { + "type": "title", + "bbox": [ + 0.098, + 0.102, + 0.192, + 0.116 + ], + "angle": 0, + "content": "References" + }, + { + "type": "ref_text", + "bbox": [ + 0.106, + 0.124, + 0.484, + 0.174 + ], + "angle": 0, + "content": "[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. CoRR, abs/2111.04263, 2021. 1, 2, 6, 26" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.176, + 0.484, + 0.223 + ], + "angle": 0, + "content": "[2] Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vo-jnovic. QSGD: randomized quantization for communication-optimal stochastic gradient descent. CoRR, abs/1610.02132, 2016. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.226, + 0.484, + 0.275 + ], + "angle": 0, + "content": "[3] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations, 2021. 2, 3, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.277, + 0.484, + 0.325 + ], + "angle": 0, + "content": "[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.328, + 0.484, + 0.365 + ], + "angle": 0, + "content": "[5] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Fedavg with fine tuning: Local updates lead to representation learning, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.367, + 0.484, + 0.415 + ], + "angle": 0, + "content": "[6] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. 
SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. CoRR, abs/1407.0202, 2014. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.417, + 0.484, + 0.453 + ], + "angle": 0, + "content": "[7] Aaron Defazio and Léon Bottou. On the ineffectiveness of variance reduced optimization for deep learning. CoRR, abs/1812.04529, 2018. 2, 3, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.455, + 0.484, + 0.528 + ], + "angle": 0, + "content": "[8] Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. CoRR, abs/1911.08250, 2019.3" + }, + { + "type": "ref_text", + "bbox": [ + 0.107, + 0.531, + 0.484, + 0.605 + ], + "angle": 0, + "content": "[9] Liang Gao, Huazhu Fu, Li Li, Yingwen Chen, Ming Xu, and Cheng-Zhong Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10102-10111. IEEE, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.607, + 0.483, + 0.655 + ], + "angle": 0, + "content": "[10] Malka N. Halgamuge, Moshe Zukerman, Kotagiri Ramamoohanarao, and Hai Le Vu. An estimation of sensor energy consumption. Progress in Electromagnetics Research B, 12:259-295, 2009. 1, 2, 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.657, + 0.483, + 0.693 + ], + "angle": 0, + "content": "[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.696, + 0.483, + 0.731 + ], + "angle": 0, + "content": "[12] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. 
In NIPS, 2013. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.1, + 0.734, + 0.483, + 0.833 + ], + "angle": 0, + "content": "[13] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascon, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zäïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri" + }, + { + "type": "list", + "bbox": [ + 0.1, + 0.124, + 0.484, + 0.833 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.547, + 0.104, + 0.901, + 0.228 + ], + "angle": 0, + "content": "Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrede Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramér, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. CoRR, abs/1912.04977, 2019. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.231, + 0.901, + 0.329 + ], + "angle": 0, + "content": "[14] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5132-5143. PMLR, 13-18 Jul 2020. 
1, 2, 3, 4, 5, 6, 11, 12, 13, 21" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.332, + 0.901, + 0.369 + ], + "angle": 0, + "content": "[15] Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Better communication complexity for local SGD. CoRR, abs/1909.04746, 2019. 5" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.371, + 0.901, + 0.458 + ], + "angle": 0, + "content": "[16] Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized SGD with changing topology and local updates. In Hal Daume III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5381-5393. PMLR, 13-18 Jul 2020. 5, 11, 13" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.461, + 0.901, + 0.509 + ], + "angle": 0, + "content": "[17] Jakub Konečny, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. CoRR, abs/1610.02527, 2016. 1, 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.512, + 0.901, + 0.548 + ], + "angle": 0, + "content": "[18] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. CoRR, abs/1905.00414, 2019. 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.551, + 0.901, + 0.575 + ], + "angle": 0, + "content": "[19] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.578, + 0.901, + 0.613 + ], + "angle": 0, + "content": "[20] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. CoRR, abs/2103.16257, 2021. 1, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.616, + 0.901, + 0.653 + ], + "angle": 0, + "content": "[21] Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. 
Acceleration for compressed gradient descent in distributed and federated optimization, 2020. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.518, + 0.655, + 0.901, + 0.754 + ], + "angle": 0, + "content": "[22] Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. 6, 25, 26" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.757, + 0.901, + 0.781 + ], + "angle": 0, + "content": "[23] Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. CoRR, abs/1608.03983, 2016. 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.517, + 0.784, + 0.901, + 0.833 + ], + "angle": 0, + "content": "[24] Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. CoRR, abs/2106.05001, 2021. 1, 2, 3, 4, 6" + }, + { + "type": "list", + "bbox": [ + 0.517, + 0.104, + 0.901, + 0.833 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.502, + 0.883 + ], + "angle": 0, + "content": "3972" + } + ], + [ + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.104, + 0.484, + 0.141 + ], + "angle": 0, + "content": "[25] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. Federated learning of deep networks using model averaging. CoRR, abs/1602.05629, 2016. 2, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.142, + 0.484, + 0.178 + ], + "angle": 0, + "content": "[26] Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, and Peter Richtárik. Distributed learning with compressed gradient differences. CoRR, abs/1901.09269, 2019. 
3" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.179, + 0.483, + 0.242 + ], + "angle": 0, + "content": "[27] Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, and Peter Richtárik. ProxSkip: Yes! local gradient steps provably lead to communication acceleration! finally! International Conference on Machine Learning (ICML), 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.243, + 0.483, + 0.292 + ], + "angle": 0, + "content": "[28] Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. CoRR, abs/2010.15327, 2020. 2, 4" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.293, + 0.483, + 0.33 + ], + "angle": 0, + "content": "[29] Jaehoon Oh, Sangmook Kim, and Se-Young Yun. Fedbabu: Towards enhanced representation for federated image classification. CoRR, abs/2106.06042, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.33, + 0.483, + 0.392 + ], + "angle": 0, + "content": "[30] Yaniv Romano, Matteo Sesia, and Emmanuel J. Candes. Classification with valid and adaptive coverage. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.393, + 0.483, + 0.443 + ], + "angle": 0, + "content": "[31] Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. On the convergence of federated optimization in heterogeneous networks. CoRR, abs/1812.06127, 2018. 1, 2, 6, 26" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.444, + 0.483, + 0.481 + ], + "angle": 0, + "content": "[32] Ohad Shamir, Nathan Srebro, and Tong Zhang. Communication efficient distributed optimization using an approximate newton-type method. CoRR, abs/1312.7853, 2013. 
1, 2, 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.482, + 0.483, + 0.556 + ], + "angle": 0, + "content": "[33] Micah J. Sheller, Brandon Edwards, G. Anthony Reina, Jason Martin, Sarthak Pati, Aikaterini Kotrotsou, Mikhail Milchenko, Weilin Xu, Daniel Marcus, Rivka R. Colen, and Spyridon Bakas. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports, 10(1):12598, Jul 2020. 1" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.557, + 0.483, + 0.605 + ], + "angle": 0, + "content": "[34] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.607, + 0.483, + 0.643 + ], + "angle": 0, + "content": "[35] Sebastian U. Stich. Unified optimal analysis of the (stochastic) gradient method. CoRR, abs/1907.04232, 2019. 5, 11, 14" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.645, + 0.483, + 0.694 + ], + "angle": 0, + "content": "[36] Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems 31 (NeurIPS), pages 4452-4463. Curran Associates, Inc., 2018. 3" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.695, + 0.483, + 0.744 + ], + "angle": 0, + "content": "[37] Sebastian U. Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication. CoRR, abs/1909.05350, 2019. 15" + }, + { + "type": "ref_text", + "bbox": [ + 0.099, + 0.745, + 0.483, + 0.831 + ], + "angle": 0, + "content": "[38] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. 
In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. 2, 5" + }, + { + "type": "list", + "bbox": [ + 0.099, + 0.104, + 0.484, + 0.831 + ], + "angle": 0, + "content": null + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.104, + 0.9, + 0.141 + ], + "angle": 0, + "content": "[39] F Varno, M Saghayi, L Rafiee, S Gupta, S Matwin, and M Havaei. Minimizing client drift in federated learning via adaptive bias estimation. ArXiv, abs/2204.13170, 2022. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.142, + 0.901, + 0.353 + ], + "angle": 0, + "content": "[40] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konečný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, and Wennan Zhu. A field guide to federated optimization. CoRR, abs/2107.06917, 2021. 2" + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.355, + 0.901, + 0.405 + ], + "angle": 0, + "content": "[41] Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, and Michael I. Jordan. TCT: Convexifying federated learning using bootstrapped neural tangent kernels, 2022. 
2, 5, 6" + }, + { + "type": "ref_text", + "bbox": [ + 0.515, + 0.407, + 0.901, + 0.444 + ], + "angle": 0, + "content": "[42] Haoyu Zhao, Zhize Li, and Peter Richtárik. Fedpage: A fast local stochastic gradient method for communication-efficient federated learning. CoRR, abs/2108.04755, 2021. 2" + }, + { + "type": "list", + "bbox": [ + 0.515, + 0.104, + 0.901, + 0.444 + ], + "angle": 0, + "content": null + }, + { + "type": "page_number", + "bbox": [ + 0.469, + 0.871, + 0.501, + 0.882 + ], + "angle": 0, + "content": "3973" + } + ] +] \ No newline at end of file diff --git a/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_origin.pdf b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..02f77252212ed481c5092dc465e4aaef11c64fda --- /dev/null +++ b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/fbd20275-52b6-475a-b61f-5e4c3c892317_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dfa293ba84e3bb5bc21478a502aee831d36a519e3f4aa28e1be5125acb1fd8ef +size 1121288 diff --git a/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/full.md b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/full.md new file mode 100644 index 0000000000000000000000000000000000000000..dd98463f0aaa558f4a613bba6b4383a08b18889c --- /dev/null +++ b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/full.md @@ -0,0 +1,401 @@ +# On the effectiveness of partial variance reduction in federated learning with heterogeneous data + +Bo Li* + +Mikkel N. Schmidt + +Tommy S. 
Alström + +Technical University of Denmark + +{blia, mnsc, tsal}@dtu.dk + +Sebastian U. Stich + +CISPA + +stich@cispa.de + +# Abstract + +Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide proof for the convergence rate of our algorithm. + +# 1. Introduction + +Federated learning (FL) is emerging as an essential distributed learning paradigm in large-scale machine learning. Unlike in traditional machine learning, where a model is trained on the collected centralized data, in federated learning, each client (e.g. phones and institutions) learns a model with its local data. A centralized model is then obtained by aggregating the updates from all participating clients without ever requesting the client data, thereby ensuring a certain level of user privacy [13, 17]. Such an algorithm is especially beneficial for tasks where the data is sensitive, e.g. chemical hazards detection and diseases diagnosis [33]. 
+ +Two primary challenges in federated learning are i) han + +![](images/f01bb3a5e13038a42a26eed4391142bd7b625074297f83b4a6ca324120a2ed32.jpg) +Feature extractor +□Classifier + +![](images/3936e133bfbce6b47cfb2ce625a65c688b0f4b45e0d1530d5665474e90c74615.jpg) + +![](images/5364e44dbf026e26321a15d410f7ab6c500a8dd9f4cb0d75d3b5ecd31e06cb78.jpg) +Variance reduction +Communication rounds +Figure 1. Our proposed FedPVR framework with the performance (communicated parameters per round client $\longleftrightarrow$ server). Smaller $\alpha$ corresponds to higher data heterogeneity. Our method achieves a better speedup than existing approaches by transmitting a slightly larger number of parameters than FedAvg. + +dling data heterogeneity across clients [13] and ii) limiting the cost of communication between the server and clients [10]. In this setting, FedAvg [17] is one of the most widely used schemes: A server broadcasts its model to clients, which then update the model using their local data in a series of steps before sending their individual model to the server, where the models are aggregated by averaging the parameters. The process is repeated for multiple communication rounds. While it has shown great success in many applications, it tends to achieve subpar accuracy and convergence when the data are heterogeneous [14, 24, 31]. + +The slow and sometimes unstable convergence of FedAvg can be caused by client drift [14] brought on by data heterogeneity. Numerous efforts have been made to improve FedAvg's performance in this setting. Prior works attempt to mitigate client drift by penalizing the distance between a client model and the server model [20, 31] or by performing variance reduction techniques while updating client models [1, 14, 32]. 
These works demonstrate fast convergence on convex problems or for simple neural networks; however, their performance on deep neural + +networks, which are state-of-the-art for many centralized learning tasks [11, 34], has yet to be well explored. Adapting techniques that perform well on convex problems to neural networks is non-trivial [7] due to their "intriguing properties" [38] such as over-parametrization and permutation symmetries. + +To overcome the above issues, we revisit the FedAvg algorithm with a deep neural network (VGG-11 [34]) under the assumption of data heterogeneity and full client participation. Specifically, we investigate which layers in a neural network are mostly influenced by data heterogeneity. We define drift diversity, which measures the diversity of the directions and scales of the averaged gradients across clients per communication round. We observe that in the non-IID scenario, the deeper layers, especially the final classification layer, have the highest diversity across clients compared to an IID setting. This indicates that FedAvg learns good feature representations even in the non-IID scenario [5] and that the significant variation of the deeper layers across clients is a primary cause of FedAvg's subpar performance. + +Based on the above observations, we propose to align the classification layers across clients using variance reduction. Specifically, we estimate the average updating direction of the classifiers (the last several fully connected layers) at the client $c_{i}$ and server level $c$ and use their difference as a control variate [14] to reduce the variance of the classifiers across clients. We analyze our proposed algorithm and derive a convergence rate bound. + +We perform experiments on the popular federated learning benchmark datasets CIFAR10 [19] and CIFAR100 [19] using two types of neural networks, VGG-11 [34] and ResNet-8 [11], and different levels of data heterogeneity across clients. 
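As a concrete illustration of the classifier-only correction described above, here is a minimal, framework-free sketch of one client round: plain SGD on the feature-extractor parameters, and a SCAFFOLD-style variance-reduced step (gradient corrected by the difference between the server control variate and the client control variate) restricted to the classifier parameters. All names (`client_update`, `grad_fn`, etc.) are ours for illustration, not from the paper's released code; gradients are supplied by the caller so the sketch stays framework-free.

```python
def client_update(w_feat, w_clf, c_global, c_local, grad_fn, lr, steps):
    """Run `steps` local SGD steps on one client.

    Feature-extractor weights `w_feat` follow plain SGD; classifier weights
    `w_clf` receive the drift correction (c_global - c_local), i.e. the
    SCAFFOLD update applied only to the last layers.
    """
    w_clf0 = list(w_clf)  # remember initial classifier weights
    for _ in range(steps):
        g_feat, g_clf = grad_fn(w_feat, w_clf)  # stochastic gradients
        w_feat = [w - lr * g for w, g in zip(w_feat, g_feat)]
        w_clf = [w - lr * (g - ci + c)  # variance-reduced classifier step
                 for w, g, ci, c in zip(w_clf, g_clf, c_local, c_global)]
    # Update the local control variate from the average classifier drift
    # (the "option II" style estimate used by SCAFFOLD):
    new_c_local = [ci - c + (w0 - w) / (steps * lr)
                   for ci, c, w0, w in zip(c_local, c_global, w_clf0, w_clf)]
    return w_feat, w_clf, new_c_local
```

With `c_global = c_local = 0` both parts reduce to plain local SGD, recovering FedAvg as a special case; the server would average the returned weights and control variates across clients.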
We experimentally show that we require fewer communication rounds compared to the existing methods [14, 17, 31] to achieve the same accuracy while transmitting a similar or slightly larger number of parameters between server and clients than FedAvg (see Fig. 1). With a (large) fixed number of communication rounds, our method achieves on-par or better top-1 accuracy, and in some settings it even outperforms centralized learning. Using conformal prediction [3], we show how performance can be improved further using adaptive prediction sets.

We show that applying variance reduction on the last layers increases the diversity of the feature extraction layers. This diversity in the feature extraction layers may give each client more freedom to learn richer feature representations, and the uniformity in the classifier then ensures a less biased decision. We summarize our contributions here:

- We present our algorithm for partial variance-reduced federated learning (FedPVR). We experimentally demonstrate that the key to the success of our algorithm is the diversity between the feature extraction layers and the alignment between the classifiers.

- We prove the convergence rate in the convex and non-convex settings, precisely characterize its weak dependence on data-heterogeneity measures, and show that FedPVR provably converges as fast as the centralized SGD baseline in most practically relevant cases.

- We experimentally show that our algorithm is more communication efficient than previous works across various levels of data heterogeneity, datasets, and neural network architectures. In some cases where data heterogeneity exists, the proposed algorithm even performs slightly better than centralized learning.

# 2. Related work

# 2.1. Federated learning

Federated learning (FL) is a fast-growing field [13, 40]. We mainly describe FL methods in non-IID settings where the data is distributed heterogeneously across clients.
Among the existing approaches, FedAvg [25] is the de facto optimization technique. Despite its solid empirical performance in IID settings [13, 25], it tends to achieve a subpar accuracy-communication trade-off in non-IID scenarios.

Many works attempt to tackle FL when data is heterogeneous across clients [1, 9, 14, 31, 39, 42]. FedProx [31] proposes a temperature parameter and proximal regularization term to control the divergence between client and server models. However, the proximal term does not bring alignment between the global and local optimal points [1]. Similarly, some works control the update direction by introducing client-dependent control variates [1, 14, 17, 27, 32] that are also communicated between the server and clients. They have achieved a much faster convergence rate, but their performance in a non-convex setup, especially in deep neural networks such as ResNet [11] and VGG [34], is not well explored. Besides, they suffer from a higher communication cost due to the transmission of the extra control variates, which may be a critical issue for resource-limited IoT mobile devices [10]. Among these methods, SCAFFOLD [14] is the most closely related method to ours, and we give a more detailed comparison in Sections 3 and 5.

Another line of work develops FL algorithms based on the characteristics of neural networks, such as their expressive feature representations [28]. Collins et al. [5] show that FedAvg is powerful in learning common data representations from clients' data. FedBabu [29], TCT [41], and CCVR [24] propose to improve FL performance by finetuning the classifiers with a standalone dataset or features that are simulated based on the client models. However, preparing a standalone dataset/features that represent the data distribution across clients is challenging as this usually requires domain knowledge and may raise privacy concerns.

![](images/a6ccf984c91b668f39f0637e261b46a42c6d3bd271313729673828ee5984887c.jpg)
Figure 2.
Data distribution (number of images per client per class) with different levels of heterogeneity, client CKA similarity, and the drift diversity of each layer in VGG-11 (20 layers) with FedAvg. Deep layers in an over-parameterised neural network have higher disagreement and variance when the clients are heterogeneous using FedAvg.

Moon [20] encourages the similarity of the representations across different client models by using a contrastive loss [4], but at the cost of keeping three full-size models in memory on each client, which may limit its applicability in resource-limited devices.

Other works focus on reducing the communication cost by compressing the transmitted gradients [2, 8, 21, 26, 36]. They can reduce the communication bandwidth by adjusting the number of bits sent per iteration. These works are complementary to ours and can be easily integrated into our method to save communication costs.

# 2.2. Variance reduction

Stochastic variance reduction (SVR) methods, such as SVRG [12], SAGA [6], and their variants, use control variates to reduce the variance of traditional stochastic gradient descent (SGD). These methods can remarkably achieve a linear convergence rate for strongly convex optimization problems compared to the sub-linear rate of SGD. Many federated learning algorithms, such as SCAFFOLD [14] and DANE [32], have adapted the idea of variance reduction for the whole model and achieved good convergence on convex problems. However, as [7] demonstrated, naively applying variance reduction techniques gives no actual variance reduction and tends to result in slower convergence in deep neural networks. This suggests that adapting SVR techniques in deep neural networks for FL requires a more careful design.

# 2.3. Conformal prediction

Conformal prediction is a general framework that computes a prediction set guaranteed to include the true class with a high user-determined probability [3, 30].
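As a concrete illustration, split conformal prediction — a simple member of this framework, and not the adaptive-set method of [3] — builds prediction sets from a held-out calibration set; the function names below are our own:

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split conformal: score s = 1 - p(true class); return the
    ceil((n+1)(1-alpha))-th smallest calibration score as threshold."""
    n = len(cal_labels)
    scores = np.sort(1.0 - cal_probs[np.arange(n), cal_labels])
    k = int(np.ceil((n + 1) * (1 - alpha))) - 1  # conservative rank (0-based)
    return scores[min(k, n - 1)]

def prediction_set(probs, qhat):
    """All classes whose score 1 - p(class) falls below the threshold."""
    return np.where(1.0 - probs <= qhat)[0]

# Toy calibration data: 9 samples, the true class always has probability 0.8.
cal_probs = np.tile([0.8, 0.1, 0.1], (9, 1))
cal_labels = np.zeros(9, dtype=int)
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)   # 0.2
pred = prediction_set(np.array([0.85, 0.10, 0.05]), qhat)      # only class 0
```

With the user-chosen miscoverage `alpha`, sets grow automatically on ambiguous inputs, which is exactly the property exploited later in Section 5.2.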
It requires no retraining of the models and achieves a finite-sample coverage guarantee [3]. As FL algorithms can hardly perform as well as centralized learning [24] when the data heterogeneity is high, we can integrate conformal prediction in FL to improve the empirical coverage by slightly increasing the predictive set size. This can be beneficial in sensitive use cases such as detecting chemical hazards, where it is better to give a prediction set that contains the correct class than to produce a single but wrong prediction.

Table 1. Notations used in this paper

| Notation | Meaning |
| --- | --- |
| $R, r$ | Number of communication rounds; round index |
| $K, k$ | Number of local steps; local step index |
| $N, i$ | Number of clients; client index |
| $\boldsymbol{y}_{i,k}^{r}$ | Client model $i$ at step $k$ and round $r$ |
| $\boldsymbol{x}^{r}$ | Server model at round $r$ |
| $\boldsymbol{c}_{i}, \boldsymbol{c}$ | Client and server control variates |

# 3. Method

# 3.1. Problem statement

Given $N$ clients with full participation, we formalise the problem as minimizing the average of the stochastic functions with access to stochastic samples in Eq. 1, where $\pmb{x}$ is the model parameters and $f_{i}$ represents the loss function at client $i$ with dataset $\mathcal{D}_i$:

$$
\min _ {\boldsymbol {x} \in \mathbb {R} ^ {d}} \left(f (\boldsymbol {x}) := \frac {1}{N} \sum_ {i = 1} ^ {N} f _ {i} (\boldsymbol {x})\right), \tag {1}
$$

where $f_{i}(\pmb {x}):= \mathbb{E}_{\mathcal{D}_{i}}[f_{i}(\pmb {x};\mathcal{D}_{i})]$.

# 3.2. Motivation

When the data $\{\mathcal{D}_i\}$ are heterogeneous across clients, FedAvg suffers from client drift [14], where the average of the local optima $\bar{\pmb{x}}^* = \frac{1}{N}\sum_{i\in N}\pmb{x}_i^*$ is far from the global optimum $\pmb{x}^*$. To understand what causes client drift, specifically which layers in a neural network are influenced most by the data heterogeneity, we perform a simple experiment using FedAvg on VGG-11 with the CIFAR10 dataset. The detailed experimental setup can be found in Section 4.

In an over-parameterized model, it is difficult to directly calculate the client drift $||\bar{x}^{*} - x^{*}||^{2}$ as it is challenging to obtain the global optimum $x^{*}$. We instead hypothesize that we can represent the influence of data heterogeneity on the model by measuring 1) drift diversity and 2) client model similarity. Drift diversity reflects the diversity in the amount each client model deviates from the server model after an update round.

Definition 1 (Drift diversity).
We define the drift diversity across $N$ clients at round $r$ as:

$$
\xi^ {r} := \frac {\sum_ {i = 1} ^ {N} \left\| \boldsymbol {m} _ {i} ^ {r} \right\| ^ {2}}{\left\| \sum_ {i = 1} ^ {N} \boldsymbol {m} _ {i} ^ {r} \right\| ^ {2}}, \quad \boldsymbol {m} _ {i} ^ {r} = \boldsymbol {y} _ {i, K} ^ {r} - \boldsymbol {x} ^ {r - 1} \tag {2}
$$

Drift diversity $\xi$ is high when all the clients update their models in different directions, i.e., when the dot products between client updates $\pmb{m}_i$ are small. When each client performs $K$ steps of vanilla SGD updates, $\xi$ depends on the directions and amplitudes of the gradients over the $N$ clients and is equivalent to $\frac{\sum_{i=1}^{N} \|\sum_{k} g_i(\pmb{y}_{i,k})\|^2}{\|\sum_{i=1}^{N} \sum_{k} g_i(\pmb{y}_{i,k})\|^2}$, where $g_i(\pmb{y}_{i,k})$ is the stochastic mini-batch gradient.

After updating the client models, we quantify the client model similarity using centred kernel alignment (CKA) [18] computed on a test dataset. CKA is a widely used permutation-invariant metric for measuring the similarity between feature representations in neural networks [18, 24, 28].

Fig. 2 shows the movement of $\xi$ and CKA across different levels of data heterogeneity using FedAvg. We observe that the similarity and diversity of the early layers (e.g., layer indices 4 and 12) show higher agreement between the IID $(\alpha = 100.0)$ and non-IID $(\alpha = 0.1)$ experiments, which indicates that FedAvg can still learn and extract good feature representations even when it is trained with non-IID data. The lower similarity on the deeper layers, especially the classifiers, suggests that these layers are strongly biased towards their local data distribution. Looking only at the model trained with $\alpha = 0.1$, we see the highest diversity and variance on the classifiers across clients compared to the rest of the layers.
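Eq. 2 is straightforward to compute from the model deltas; a minimal NumPy sketch (the function name is ours) shows that $\xi$ equals $1/N$ when all clients drift identically and blows up when client updates nearly cancel out:

```python
import numpy as np

def drift_diversity(client_models, server_model):
    """Eq. 2: xi = sum_i ||m_i||^2 / ||sum_i m_i||^2 with m_i = y_i - x."""
    drifts = [y - server_model for y in client_models]
    numerator = sum(np.sum(m ** 2) for m in drifts)
    denominator = np.sum(np.sum(drifts, axis=0) ** 2)
    return numerator / denominator

x = np.zeros(2)
aligned = [np.array([1.0, 0.0]), np.array([1.0, 0.0])]       # identical drifts
opposed = [np.array([1.0, 0.0]), np.array([-0.999, 0.001])]  # nearly cancelling
xi_low = drift_diversity(aligned, x)    # 1/N = 0.5
xi_high = drift_diversity(opposed, x)   # orders of magnitude larger
```

In practice each $\boldsymbol{m}_i$ would be the flattened parameter delta of one layer, giving the per-layer curves of Fig. 2.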
Based on the above observations, we propose to align the classifiers across clients using variance reduction. We deploy client and server control variates to control the updating directions of the classifiers.

# 3.3. Classifier variance reduction

Our proposed algorithm (Alg. I) consists of three parts: i) client updating (Eq. 5-6), ii) client control variate updating (Eq. 7), and iii) server updating (Eq. 8-9).

We first define a vector $\pmb{p} \in \mathbb{R}^d$ that contains 0 or 1 with $v$ non-zero elements $(v \ll d)$ in Eq. 3. We recover SCAFFOLD with $\pmb{p} = \mathbf{1}$ and recover FedAvg with $\pmb{p} = \mathbf{0}$. For the set of indices $j$ where $p_j = 1$ ($S_{\mathrm{svr}}$ from Eq. 4), we update the corresponding weights $\pmb{y}_{i,S_{\mathrm{svr}}}$ with variance reduction such that we maintain a state for each client $(\pmb{c}_i \in \mathbb{R}^v)$ and for the server $(\pmb{c} \in \mathbb{R}^v)$ in Eq. 5. For the rest of the indices, $S_{\mathrm{sgd}}$ from Eq. 4, we update the corresponding weights $\pmb{y}_{i,S_{\mathrm{sgd}}}$ with SGD in Eq. 6. As the server variate $\pmb{c}$ is an average of the $\pmb{c}_i$ across clients, we can safely initialise them as $\mathbf{0}$.

In each communication round, each client receives a copy of the server model $\mathbf{x}$ and the server control variate $\mathbf{c}$. They then perform $K$ model updating steps (see Eq. 5-6 for one step) using cross-entropy as the loss function. Once this is finished, we calculate the updated client control variate $\mathbf{c}_i$ using Eq. 7. The server then receives the updated $\mathbf{c}_i$ and $\mathbf{y}_i$ from all the clients for aggregation (Eq. 8-9). This completes one communication round.
$$
\boldsymbol {p} := \{0, 1 \} ^ {d}, \quad v = \sum \boldsymbol {p} \tag {3}
$$

$$
S _ {\mathrm {s v r}} := \{j: \boldsymbol {p} _ {j} = 1 \}, \quad S _ {\mathrm {s g d}} := \{j: \boldsymbol {p} _ {j} = 0 \} \tag {4}
$$

$$
\boldsymbol {y} _ {i, S _ {\mathrm {s v r}}} \leftarrow \boldsymbol {y} _ {i, S _ {\mathrm {s v r}}} - \eta_ {l} \left(g _ {i} \left(\boldsymbol {y} _ {i}\right) _ {S _ {\mathrm {s v r}}} - \boldsymbol {c} _ {i} + \boldsymbol {c}\right) \tag {5}
$$

$$
\boldsymbol {y} _ {i, S _ {\mathrm {s g d}}} \leftarrow \boldsymbol {y} _ {i, S _ {\mathrm {s g d}}} - \eta_ {l} g _ {i} \left(\boldsymbol {y} _ {i}\right) _ {S _ {\mathrm {s g d}}} \tag {6}
$$

$$
\boldsymbol {c} _ {i} \leftarrow \boldsymbol {c} _ {i} - \boldsymbol {c} + \frac {1}{K \eta_ {l}} \left(\boldsymbol {x} _ {S _ {\mathrm {s v r}}} - \boldsymbol {y} _ {i, S _ {\mathrm {s v r}}}\right) \tag {7}
$$

$$
\boldsymbol {x} \leftarrow (1 - \eta_ {g}) \boldsymbol {x} + \frac {1}{N} \sum_ {i \in N} \boldsymbol {y} _ {i} \tag {8}
$$

$$
\boldsymbol {c} \leftarrow \frac {1}{N} \sum_ {i \in N} \boldsymbol {c} _ {i} \tag {9}
$$

# Algorithm I Partial variance reduction (FedPVR)

server: initialise the server model $\mathbf{x}$, the control variate $\mathbf{c}$, and the global step size $\eta_g$

client: initialise the control variate $c_{i}$ and the local step size $\eta_{l}$

mask:
$\pmb {p}:= \{0,1\} ^d$, $S_{\mathrm{svr}}\coloneqq \{j:\pmb {p}_j = 1\}$, $S_{\mathrm{sgd}}\coloneqq \{j:\pmb {p}_j = 0\}$

1: procedure MODEL UPDATING

2: for $r = 1\rightarrow R$ do

3: communicate $x$ and $c$ to all clients $i \in [N]$

4: for client $i \in [N]$ in parallel do

5: $y_{i}\gets x$

6: for $k = 1\to K$ do

7: compute the minibatch gradient $g_{i}(\pmb{y}_{i})$

8: $\pmb{y}_{i,S_{\mathrm{sgd}}} \gets \pmb{y}_{i,S_{\mathrm{sgd}}} - \eta_l g_i(\pmb{y}_i)_{S_{\mathrm{sgd}}}$

9: $\pmb{y}_{i,S_{\mathrm{svr}}}\gets \pmb{y}_{i,S_{\mathrm{svr}}} - \eta_{l}(g_{i}(\pmb{y}_{i})_{S_{\mathrm{svr}}} - \pmb{c}_{i} + \pmb{c})$

10: end for

11: $\pmb{c}_i \gets \pmb{c}_i - \pmb{c} + \frac{1}{K\eta_l} (\pmb{x}_{S_{\mathrm{svr}}} - \pmb{y}_{i,S_{\mathrm{svr}}})$

12: communicate $\pmb{y}_i, \pmb{c}_i$

13: end for

14: $\pmb{x} \gets (1 - \eta_g)\pmb{x} + \frac{1}{N}\sum_{i\in N}\pmb{y}_i$

15: $\pmb{c} \gets \frac{1}{N} \sum_{i \in N} \pmb{c}_i$

16: end for

17: end procedure

In terms of implementation, we can simply set the control variate for the block of weights that is updated with SGD to 0 and implement lines 8 and 9 in one step.

Ours vs SCAFFOLD [14] While our work is similar to SCAFFOLD in the use of variance reduction, there are some fundamental differences. We both communicate control variates between the clients and server, but our control
Therefore, the significant improvement achieved by our method gives essential and nontrivial insight into what matters when tackling data heterogeneity in FL in over-parameterized models. + +# 3.4. Convergence rate + +We state the convergence rate in this section. We assume functions $\{f_i\}$ are $\beta$ -smooth following [16, 35]. We then assume $g_{i}(\pmb{x})\coloneqq \nabla f_{i}(x;\mathcal{D}_{i})$ is an unbiased stochastic gradient of $f_{i}$ with variance bounded by $\sigma^2$ . We assume strongly convexity $(\mu >0)$ and general convexity $(\mu = 0)$ for some of the results following [14]. Furthermore, we also make assumptions about the heterogeneity of the functions. + +For convex functions, we assume the heterogeneity of the function $\{f_i\}$ at the optimal point $x^{*}$ (such a point always exists for a strongly convex function) following [15, 16]. + +Assumption 1 ( $\zeta$ -heterogeneity). We define a measure of variance at the optimum $x^{*}$ given $N$ clients as: + +$$ +\zeta^ {2} := \frac {1}{N} \sum_ {i = 1} ^ {N} \mathbb {E} | | \nabla f _ {i} \left(\boldsymbol {x} ^ {*}\right) | | ^ {2}. \tag {10} +$$ + +For the non-convex functions, such an unique optimal point $\pmb{x}^*$ does not necessarily exist, so we generalize Assumption 1 to Assumption 2. + +Assumption 2 ( $\hat{\zeta}$ -heterogeneity). We assume there exists constant $\hat{\zeta}$ such that $\forall x \in \mathbb{R}^d$ + +$$ +\frac {1}{N} \sum_ {i = 1} ^ {N} \mathbb {E} | | \nabla f _ {i} (\boldsymbol {x}) | | ^ {2} \leq \hat {\zeta} ^ {2}. \tag {11} +$$ + +Given the mask $\pmb{p}$ as defined in Eq. 3, we know $||\pmb{p} \odot \pmb{x}|| \leq ||\pmb{x}||$ . Therefore, we have the following propositions. + +Proposition 1 (Implication of Assumption 1). 
Given the mask $\mathbf{p}$, we define the heterogeneity of the block of weights that are not variance reduced at the optimum $\mathbf{x}^*$ as:

$$
\zeta_ {1 - p} ^ {2} := \frac {1}{N} \sum_ {i = 1} ^ {N} \left\| (\mathbf {1} - \boldsymbol {p}) \odot \nabla f _ {i} \left(\boldsymbol {x} ^ {*}\right) \right\| ^ {2}. \tag {12}
$$

If Assumption 1 holds, then it also holds that:

$$
\zeta_ {1 - p} ^ {2} \leq \zeta^ {2}. \tag {13}
$$

In Proposition 1, $\zeta_{1 - p}^2 = \zeta^2$ if $\pmb{p} = \mathbf{0}$ and $\zeta_{1 - p}^2 = 0$ if $\pmb{p} = \mathbf{1}$. If $\pmb{p} \neq \mathbf{0}$ and $\pmb{p} \neq \mathbf{1}$, as the heterogeneity of the shallow weights is lower than that of the deeper weights [41], we have $\zeta_{1 - p}^2 \leq \zeta^2$. Similarly, we can validate Proposition 2.

Proposition 2 (Implication of Assumption 2). Given the mask $\pmb{p}$, we assume there exists a constant $\hat{\zeta}_{1 - p}$ such that $\forall \pmb{x} \in \mathbb{R}^d$, the heterogeneity of the block of weights that are not variance reduced satisfies:

$$
\frac {1}{N} \sum_ {i = 1} ^ {N} | | (\mathbf {1} - \boldsymbol {p}) \odot \nabla f _ {i} (\boldsymbol {x}) | | ^ {2} \leq \hat {\zeta} _ {1 - p} ^ {2}. \tag {14}
$$

If Assumption 2 holds, then it also holds that:

$$
\hat {\zeta} _ {1 - p} ^ {2} \leq \hat {\zeta} ^ {2}. \tag {15}
$$

Theorem 1.
For any $\beta$-smooth functions $\{f_i\}$, the output of FedPVR has expected error smaller than $\epsilon$ for $\eta_g = \sqrt{N}$ and some values of $\eta_l$, $R$ satisfying:

- Strongly convex: $\eta_l \leq \min \left(\frac{1}{80K\eta_g\beta}, \frac{26}{20\mu K\eta_g}\right)$,

$$
R = \tilde {\mathcal {O}} \left(\frac {\sigma^ {2}}{\mu N K \epsilon} + \frac {\zeta_ {1 - p} ^ {2}}{\mu \epsilon} + \frac {\beta}{\mu}\right), \tag {16}
$$

- General convex: $\eta_{l} \leq \frac{1}{80K\eta_{g}\beta}$,

$$
R = \mathcal {O} \left(\frac {\sigma^ {2} D}{K N \epsilon^ {2}} + \frac {\zeta_ {1 - p} ^ {2} D}{\epsilon^ {2}} + \frac {\beta D}{\epsilon} + F\right), \tag {17}
$$

- Non-convex: $\eta_l \leq \frac{1}{26K\eta_g\beta}$ and $R \geq 1$:

$$
R = \mathcal {O} \left(\frac {\beta \sigma^ {2} F}{K N \epsilon^ {2}} + \frac {\beta \hat {\zeta} _ {1 - p} ^ {2} F}{N \epsilon^ {2}} + \frac {\beta F}{\epsilon}\right), \tag {18}
$$

where $D\coloneqq ||\pmb {x}^0 -\pmb{x}^* ||^2$ and $F\coloneqq f(\pmb {x}^0) - f^*$.

Given the above assumptions, the convergence rate is given in Theorem 1. When $\pmb{p} = \mathbf{1}$, we recover the SCAFFOLD convergence guarantee as $\zeta_{1 - p}^2 = 0$, $\hat{\zeta}_{1 - p}^2 = 0$. In the strongly convex case, the effect of the heterogeneity of the block of weights that are not variance reduced, $\zeta_{1 - p}^2$, becomes negligible if $\tilde{\mathcal{O}}\left(\frac{\zeta_{1 - p}^2}{\epsilon}\right)$ is sufficiently smaller than $\tilde{\mathcal{O}}\left(\frac{\sigma^2}{NK\epsilon}\right)$. In such a case, our rate is $\frac{\sigma^2}{NK\epsilon} + \frac{1}{\mu}$, which recovers that of SCAFFOLD in the strongly convex case without sampling and further matches that of SGD (with mini-batch size $K$ on each worker). We also recover the FedAvg rate in the simple IID case. See Appendix B for the full proof.

# 4. Experimental setup

We demonstrate the effectiveness of our approach with CIFAR10 [19] and CIFAR100 [19] on image classification tasks. We simulate the data heterogeneity scenario following [22] by partitioning the data according to the Dirichlet distribution with the concentration parameter $\alpha$. The smaller $\alpha$ is, the more imbalanced the data distribution across clients. An example of the data distribution over multiple clients using the CIFAR10 dataset can be seen in Fig. 2. In our experiments, we use $\alpha \in \{0.1, 0.5, 1.0\}$ as these are commonly used concentration parameters [22]. Each client has its own local data, which is kept fixed across all communication rounds. We hold out the test dataset at the server for evaluating the classification performance of the server model. Following [22], we perform the same data augmentation for all the experiments.

We use two models, VGG-11 and ResNet-8, following [22]. We perform variance reduction for the last three layers in VGG-11 and the last layer in ResNet-8. We use 10 clients with full participation following [41] (close to a cross-silo setup) and a batch size of 256. Each client performs 10 local epochs of model updating. We set the server learning rate $\eta_{g} = 1$ for all the models [14]. We tune the clients' learning rate from $\{0.05, 0.1, 0.2, 0.3\}$ for each individual experiment. The learning rate schedule is experimentally chosen from constant, cosine decay [23], and multi-step decay [22]. We compare our method with the representative federated learning algorithms FedAvg [25], FedProx [31], SCAFFOLD [14], and FedDyn [1]. All the results are averaged over three repeated experiments with different random initializations. We leave $1\%$ of the training data from each client out as validation data to tune the hyperparameters (learning rate and schedule) per client. See Appendix C for additional experimental setup. The code is at github.com/lyn1874/fedpvr.
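For intuition, one FedPVR communication round (Eqs. 5-9) on a toy quadratic objective can be sketched as follows. This is an illustrative NumPy sketch under our own naming, not the released implementation referenced above; with `mask` all ones it reduces to a SCAFFOLD-style update, and with `mask` all zeros to FedAvg.

```python
import numpy as np

def client_round(x, c_i, c, grad_fn, mask, steps, lr):
    """K local steps: plain SGD on unmasked weights (Eq. 6) and
    variance-reduced steps on masked weights (Eq. 5), then Eq. 7."""
    y = x.copy()
    for _ in range(steps):
        g = grad_fn(y)
        y = y - lr * np.where(mask, g - c_i + c, g)      # Eqs. 5-6 in one step
    c_i_new = c_i - c + mask * (x - y) / (steps * lr)    # Eq. 7
    return y, c_i_new

def server_round(x, ys, cs, eta_g=1.0):
    """Server aggregation (Eqs. 8-9)."""
    return (1 - eta_g) * x + np.mean(ys, axis=0), np.mean(cs, axis=0)

# Two clients with quadratic losses f_i(y) = 0.5 * ||y - t_i||^2.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda y, t=t: y - t for t in targets]
mask = np.ones(2)                        # variance-reduce every coordinate
x, c = np.zeros(2), np.zeros(2)
c_is = [np.zeros(2), np.zeros(2)]
for _ in range(200):                     # communication rounds
    out = [client_round(x, c_i, c, g, mask, steps=5, lr=0.1)
           for c_i, g in zip(c_is, grads)]
    ys, c_is = [o[0] for o in out], [o[1] for o in out]
    x, c = server_round(x, ys, c_is)
# x converges to the global optimum, the mean of the targets: [0.5, 0.5]
```

In our actual method `mask` is nonzero only on the classifier weights, so the clients' feature extractors follow plain local SGD while the classifiers are pulled toward a common update direction.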
# 5. Experimental results

We demonstrate the performance of our proposed approach in the FL setup with data heterogeneity in this section. We compare our method with the existing state-of-the-art algorithms on various datasets and deep neural networks. For the baseline approaches, we finetune the hyperparameters and only report the best performance we obtain. Our main findings are 1) we are more communication efficient than the baseline approaches, 2) conformal prediction is an effective tool to improve FL performance in high data heterogeneity scenarios, and 3) there is a beneficial trade-off between diversity and uniformity when using deep neural networks in FL.

# 5.1. Communication efficiency and accuracy

We first report the number of rounds required to achieve a certain level of top-1 accuracy (66% for CIFAR10 and 44% for CIFAR100) in Table 2. An algorithm is more communication efficient if it requires fewer rounds to achieve the same accuracy and/or if it transmits fewer parameters between the clients and server. Compared to the baseline approaches, we require far fewer rounds for almost all types of data heterogeneity and models. We achieve a speedup between 1.5× and 6.7× over FedAvg. We also observe that ResNet-8 tends to converge slower than VGG-11, which may be due to the aggregation of the Batch Normalization layers that are discrepant across the local data distributions [22].

We next compare the top-1 accuracy between centralized learning and federated learning algorithms. For the centralized learning experiment, we tune the learning rate from $\{0.01, 0.05, 0.1\}$ and report the best test accuracy based on the validation dataset. We train the model for 800 epochs, which is the same as the total number of epochs in the federated learning algorithms (80 communication rounds × 10 local epochs). The results are shown in Table 3.
We also show the number of copies of the parameters that need to be transmitted between the server and clients (e.g. 2x means we communicate $x$ and $y_i$).

Table 3 shows that our approach achieves a much better top-1 accuracy compared to FedAvg while transmitting a similar or slightly larger number of parameters between the server and client per round. Our method also achieves slightly better accuracy than centralized learning when the data is less heterogeneous (e.g. $\alpha = 0.5$ for CIFAR10 and $\alpha = 1.0$ for CIFAR100).

# 5.2. Conformal prediction

When the data heterogeneity is high across clients, it is difficult for a federated learning algorithm to match the centralized learning performance [24]. Therefore, we demonstrate the benefit of using simple post-processing conformal prediction to improve the model performance.

We examine the relationship between the empirical coverage and the average predictive set size for the server model after 80 communication rounds for each federated learning algorithm. The empirical coverage is the percentage of the data samples where the correct prediction is in the predictive set, and the average predictive set size is the average of the lengths of the predictive sets over all the test images [3]. See Appendix C for more information about the conformal prediction setup and results.

The results for $\alpha = 0.1$ for both datasets and architectures are shown in Fig. 3. We show that by slightly increasing the predictive set size, we can achieve a similar accuracy as the centralized performance. Besides, our approach tends to surpass the centralized top-1 performance similarly to or faster than other approaches. In sensitive use cases such as chemical threat detection, conformal prediction is a valuable tool to achieve certified accuracy at the

Table 2.
The required number of communication rounds (speedup compared to FedAvg) to achieve a certain level of top-1 accuracy (66% for the CIFAR10 dataset and 44% for the CIFAR100 dataset). Our method requires fewer rounds to achieve the same accuracy. + +
| Method | CIFAR10, α=0.1, VGG-11 | CIFAR10, α=0.1, ResNet-8 | CIFAR10, α=0.5, VGG-11 | CIFAR10, α=0.5, ResNet-8 | CIFAR100, α=0.1, VGG-11 | CIFAR100, α=0.1, ResNet-8 | CIFAR100, α=1.0, VGG-11 | CIFAR100, α=1.0, ResNet-8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FedAvg | 55 (1.0x) | 90 (1.0x) | 15 (1.0x) | 15 (1.0x) | 100+ (1.0x) | 100+ (1.0x) | 80 (1.0x) | 56 (1.0x) |
| FedProx | 52 (1.1x) | 75 (1.2x) | 16 (0.9x) | 20 (0.8x) | 100+ (1.0x) | 100+ (1.0x) | 80 (1.0x) | 59 (0.9x) |
| SCAFFOLD | 39 (1.4x) | 57 (1.6x) | 14 (1.0x) | 9 (1.7x) | 80 (>1.3x) | 61 (>1.6x) | 36 (2.2x) | 25 (2.2x) |
| FedDyn | 27 (2.0x) | 67 (1.3x) | 15 (1.0x) | 34 (0.4x) | 80+ (-) | 80+ (-) | 24 (3.3x) | 51 (1.1x) |
| Ours | 27 (2.0x) | 50 (1.8x) | 9 (1.6x) | 5 (3.0x) | 37 (>2.7x) | 66 (>1.5x) | 12 (6.7x) | 15 (3.7x) |

Each cell reports the number of rounds (speedup vs. FedAvg).
Table 3. The top-1 accuracy $(\%)$ after running 80 communication rounds using different methods on CIFAR10 and CIFAR100, together with the number of communicated parameters between the client and the server. We train the centralised model for 800 epochs ($= 80$ rounds × 10 local epochs in FL). Higher accuracy is better.
VGG-11:

| Method | CIFAR10, α=0.1 | CIFAR10, α=0.5 | CIFAR100, α=0.1 | CIFAR100, α=1.0 | server↔client |
| --- | --- | --- | --- | --- | --- |
| Centralised | 87.5 | 87.5 | 56.3 | 56.3 | – |
| FedAvg | 69.3 | 80.9 | 34.3 | 45.0 | 2x |
| FedProx | 72.1 | 80.4 | 35.0 | 43.2 | 2x |
| SCAFFOLD | 74.1 | 83.5 | 43.4 | 50.6 | 4x |
| FedDyn | 77.4 | 80.1 | 43.8 | 45.2 | 2x |
| Ours | 78.2 | 84.9 | 43.5 | 58.0 | 2.1x |

ResNet-8:

| Method | CIFAR10, α=0.1 | CIFAR10, α=0.5 | CIFAR100, α=0.1 | CIFAR100, α=1.0 | server↔client |
| --- | --- | --- | --- | --- | --- |
| Centralised | 83.4 | 83.4 | 56.8 | 56.8 | – |
| FedAvg | 64.9 | 79.1 | 38.8 | 47.0 | 2x |
| FedProx | 66.1 | 77.9 | 42.0 | 47.2 | 2x |
| SCAFFOLD | 66.6 | 80.3 | 43.8 | 52.3 | 4x |
| FedDyn | 63.8 | 72.9 | 36.4 | 48.1 | 2x |
| Ours | 69.3 | 83.6 | 43.5 | 52.3 | 2.02x |

(Centralised training pools all the data, so its accuracy does not depend on $\alpha$ and is repeated across the α columns.)
cost of a slightly larger predictive set size.

![](images/951dc30d681cc38c34a57c084476b9c21ec213ab178859442e6120b674b105b4.jpg)
Figure 3. Relation between the average predictive set size and the empirical coverage when $\alpha = 0.1$. By slightly increasing the predictive set size, we can achieve a similar performance as the centralised model (top-1 accuracy) even if the data are heterogeneously distributed across clients. Our method is similar to or faster than other approaches in surpassing the centralised top-1 accuracy.

# 5.3. Diversity and uniformity

We have shown that our algorithm achieves a better speedup and performance against the existing approaches with only lightweight modifications to FedAvg. We next investigate what factors lead to better accuracy. Specifically, we calculate the drift diversity $\xi$ across clients after each communication round using Eq. 2 and average $\xi$ across three runs. We show the result of using ResNet-8 and CIFAR100 with $\alpha = 1.0$ in Fig. 4.

![](images/31d31a04d19f4d675f9b4101e7054b8bb0d95550232e6f4009e9d8aff64aa0b5.jpg)
Figure 4. Drift diversity and learning curve for ResNet-8 on CIFAR100 with $\alpha = 1.0$. Compared to FedAvg, SCAFFOLD and our method can both improve the agreement between the classifiers. Compared to SCAFFOLD, our method results in a higher gradient diversity at the early stage of the communication, which tends to boost the learning speed, as the curvature of the drift diversity seems to match the learning curve.

Fig. 4 shows the drift diversity for different layers in ResNet-8 and the testing accuracy along the communication rounds. We observe that the classifiers have the highest diversity in FedAvg compared to other layers and methods. SCAFFOLD, which applies the control variate to the entire model, can effectively reduce the disagreement of the directions and scales of the averaged gradient across clients.
Our proposed algorithm, which performs variance reduction only on the classifiers, can reduce the diversity of the classifiers even further but increases the diversity of the feature extraction layers. This high diversity tends to boost the learning speed, as the curvature of the diversity movement (Fig. 4, left) seems to match the learning curve (Fig. 4, right). Based on this observation, we hypothesize that this diversity along the feature extractor and the uniformity of the classifier is the main reason for our better speedup.

To test this hypothesis, we perform an experiment where we use variance reduction starting from different layers of a neural network. If the starting position of the variance reduction influences the learning speed, it indicates where in a neural network we need more diversity and where we need more uniformity. We here show the result of using VGG-11 on CIFAR100 with $\alpha = 1.0$ as there are more layers in VGG-11. The result is shown in Fig. 5, where SVR: $16 \rightarrow 20$ corresponds to our approach and SVR: $0 \rightarrow 20$ corresponds to SCAFFOLD, which applies variance reduction to the entire model. Results for ResNet-8 are shown in Appendix C.

![](images/f270fd19410f370dc1632f06a91683cf88e3cc1148ad6fcfe9916088c224d50e.jpg)
Figure 5. Influence on the learning speed of the layer position from which stochastic variance reduction (SVR) is applied. $\mathrm{SVR}:0\rightarrow 20$ applies variance reduction on the entire model (SCAFFOLD). $\mathrm{SVR}:16\rightarrow 20$ applies variance reduction from layer index 16 to 20 (ours). The later we apply variance reduction, the better speedup we obtain. However, no variance reduction (FedAvg) performs the worst here.

We see from Fig. 5 that the deeper in a neural network we apply variance reduction, the better learning speedup we can obtain.
There is no clear performance difference in where the variance reduction is activated once the layer index is over 10. However, applying no variance reduction (FedAvg) achieves by far the worst performance. We believe that these experimental results indicate that, in a distributed optimization framework, to boost the learning speed of an over-parameterized model we need some level of diversity in the middle and early layers for learning richer feature representations and some degree of uniformity in the classifiers for making a less biased decision.

# 6. Conclusion

In this work, we studied stochastic gradient descent learning for deep neural network classifiers in a federated learning setting, where each client updates its local model using stochastic gradient descent on local data. A central model is periodically updated (by averaging local model parameters) and broadcast to the clients under a communication bandwidth constraint. When data is homogeneous across clients, this procedure is comparable to centralized learning in terms of efficiency; however, when data is heterogeneous, learning is impeded. Our hypothesis for the primary reason for this is that when the local models are out of alignment, updating the central model by averaging is ineffective and sometimes even destructive.

Examining the diversity across clients of their local model updates and their learned feature representations, we found that the misalignment between models is much stronger in the last few neural network layers than in the rest of the network. This finding inspired us to experiment with aligning the local models using a partial variance reduction technique applied only on the last layers, which we named FedPVR. We found that this led to a substantial improvement in convergence speed compared to the competing federated learning methods. In some cases, our method even outperformed centralized learning.
We derived a bound on the convergence rate of our proposed method, which matches the rates for SGD when the gradient diversity across clients is sufficiently low. Compared with FedAvg, the communication cost of our method is only marginally higher, as it requires transmitting control variates for the last layers. + +We believe our FedPVR algorithm strikes a good balance between simplicity and efficiency, requiring only a minor modification to the established FedAvg method; however, in further research, we plan to pursue more effective methods for aligning and guiding the local learning algorithms, e.g. using adaptive procedures. Furthermore, the degree of over-parameterization in the neural network layers (e.g. feature extraction vs. bottlenecks) may also play an important role, which we would like to understand better. + +# Acknowledgements + +The first three authors are grateful for financial support from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 883390 (H2020-SU-SECU-2019 SERSing Project). BL is grateful for financial support from the Otto Mønsted Foundation. + +# References + +[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, and Venkatesh Saligrama. Federated learning based on dynamic regularization. CoRR, abs/2111.04263, 2021. 1, 2, 6, 26
+[2] Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: randomized quantization for communication-optimal stochastic gradient descent. CoRR, abs/1610.02132, 2016. 3
+[3] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations, 2021. 2, 3, 6
+[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709, 2020.
3
+[5] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Fedavg with fine tuning: Local updates lead to representation learning, 2022. 2
+[6] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. CoRR, abs/1407.0202, 2014. 3
+[7] Aaron Defazio and Léon Bottou. On the ineffectiveness of variance reduced optimization for deep learning. CoRR, abs/1812.04529, 2018. 2, 3, 5
+[8] Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. CoRR, abs/1911.08250, 2019. 3
+[9] Liang Gao, Huazhu Fu, Li Li, Yingwen Chen, Ming Xu, and Cheng-Zhong Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10102-10111. IEEE, 2022. 2
+[10] Malka N. Halgamuge, Moshe Zukerman, Kotagiri Ramamohanarao, and Hai Le Vu. An estimation of sensor energy consumption. Progress in Electromagnetics Research B, 12:259-295, 2009. 1, 2, 5
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. 2
+[12] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013. 3
+[13] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B.
Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. CoRR, abs/1912.04977, 2019. 1, 2
+[14] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5132-5143. PMLR, 13-18 Jul 2020. 1, 2, 3, 4, 5, 6, 11, 12, 13, 21
+[15] Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Better communication complexity for local SGD. CoRR, abs/1909.04746, 2019. 5
+[16] Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized SGD with changing topology and local updates. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5381-5393. PMLR, 13-18 Jul 2020. 5, 11, 13
+[17] Jakub Konečný, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. CoRR, abs/1610.02527, 2016. 1, 2
+[18] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. CoRR, abs/1905.00414, 2019.
4 +[19] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. 2, 6 +[20] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. CoRR, abs/2103.16257, 2021. 1, 3 +[21] Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. Acceleration for compressed gradient descent in distributed and federated optimization, 2020. 3 +[22] Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. 6, 25, 26 +[23] Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. CoRR, abs/1608.03983, 2016. 6 +[24] Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. CoRR, abs/2106.05001, 2021. 1, 2, 3, 4, 6 + +[25] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. Federated learning of deep networks using model averaging. CoRR, abs/1602.05629, 2016. 2, 6 +[26] Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, and Peter Richtárik. Distributed learning with compressed gradient differences. CoRR, abs/1901.09269, 2019. 3 +[27] Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, and Peter Richtárik. ProxSkip: Yes! local gradient steps provably lead to communication acceleration! finally! International Conference on Machine Learning (ICML), 2022. 2 +[28] Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. CoRR, abs/2010.15327, 2020. 2, 4 +[29] Jaehoon Oh, Sangmook Kim, and Se-Young Yun. 
Fedbabu: Towards enhanced representation for federated image classification. CoRR, abs/2106.06042, 2021. 2 +[30] Yaniv Romano, Matteo Sesia, and Emmanuel J. Candes. Classification with valid and adaptive coverage. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. 3 +[31] Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. On the convergence of federated optimization in heterogeneous networks. CoRR, abs/1812.06127, 2018. 1, 2, 6, 26 +[32] Ohad Shamir, Nathan Srebro, and Tong Zhang. Communication efficient distributed optimization using an approximate newton-type method. CoRR, abs/1312.7853, 2013. 1, 2, 3 +[33] Micah J. Sheller, Brandon Edwards, G. Anthony Reina, Jason Martin, Sarthak Pati, Aikaterini Kotrotsou, Mikhail Milchenko, Weilin Xu, Daniel Marcus, Rivka R. Colen, and Spyridon Bakas. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports, 10(1):12598, Jul 2020. 1 +[34] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. 2 +[35] Sebastian U. Stich. Unified optimal analysis of the (stochastic) gradient method. CoRR, abs/1907.04232, 2019. 5, 11, 14 +[36] Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems 31 (NeurIPS), pages 4452-4463. Curran Associates, Inc., 2018. 3 +[37] Sebastian U. Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication. CoRR, abs/1909.05350, 2019. 15 +[38] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. 
In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. 2, 5 + +[39] F Varno, M Saghayi, L Rafiee, S Gupta, S Matwin, and M Havaei. Minimizing client drift in federated learning via adaptive bias estimation. ArXiv, abs/2204.13170, 2022. 2 +[40] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konečný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, and Wennan Zhu. A field guide to federated optimization. CoRR, abs/2107.06917, 2021. 2 +[41] Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, and Michael I. Jordan. TCT: Convexifying federated learning using bootstrapped neural tangent kernels, 2022. 2, 5, 6 +[42] Haoyu Zhao, Zhize Li, and Peter Richtárik. Fedpage: A fast local stochastic gradient method for communication-efficient federated learning. CoRR, abs/2108.04755, 2021. 
2 \ No newline at end of file diff --git a/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/images.zip b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7117531e1b26a3e477d02af03a5770a09e6533a3 --- /dev/null +++ b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c3988b22794b070e090c83cca844023f56b54bdf2c518732cc304062fcb315a8 +size 465722 diff --git a/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/layout.json b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b36922d9d0a435c78bb5d122f4c640b4f2740d80 --- /dev/null +++ b/2023/On the Effectiveness of Partial Variance Reduction in Federated Learning With Heterogeneous Data/layout.json @@ -0,0 +1,10625 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 73, + 116, + 520, + 150 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 73, + 116, + 520, + 150 + ], + "spans": [ + { + "bbox": [ + 73, + 116, + 520, + 150 + ], + "type": "text", + "content": "On the effectiveness of partial variance reduction in federated learning with heterogeneous data" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 109, + 171, + 143, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 171, + 143, + 184 + ], + "spans": [ + { + "bbox": [ + 109, + 171, + 143, + 184 + ], + "type": "text", + "content": "Bo Li*" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 164, + 171, + 257, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 171, + 257, + 184 + ], + 
"spans": [ + { + "bbox": [ + 164, + 171, + 257, + 184 + ], + "type": "text", + "content": "Mikkel N. Schmidt" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 278, + 171, + 368, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 278, + 171, + 368, + 185 + ], + "spans": [ + { + "bbox": [ + 278, + 171, + 368, + 185 + ], + "type": "text", + "content": "Tommy S. Alström" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 161, + 186, + 317, + 198 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 161, + 186, + 317, + 198 + ], + "spans": [ + { + "bbox": [ + 161, + 186, + 317, + 198 + ], + "type": "text", + "content": "Technical University of Denmark" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 171, + 200, + 301, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 200, + 301, + 212 + ], + "spans": [ + { + "bbox": [ + 171, + 200, + 301, + 212 + ], + "type": "text", + "content": "{blia, mnsc, tsal}@dtu.dk" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 396, + 171, + 483, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 396, + 171, + 483, + 185 + ], + "spans": [ + { + "bbox": [ + 396, + 171, + 483, + 185 + ], + "type": "text", + "content": "Sebastian U. 
Stich" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 421, + 186, + 458, + 197 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 421, + 186, + 458, + 197 + ], + "spans": [ + { + "bbox": [ + 421, + 186, + 458, + 197 + ], + "type": "text", + "content": "CISPA" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 403, + 202, + 477, + 212 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 403, + 202, + 477, + 212 + ], + "spans": [ + { + "bbox": [ + 403, + 202, + 477, + 212 + ], + "type": "text", + "content": "stich@cispa.de" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 149, + 251, + 195, + 264 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 251, + 195, + 264 + ], + "spans": [ + { + "bbox": [ + 149, + 251, + 195, + 264 + ], + "type": "text", + "content": "Abstract" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 56, + 276, + 287, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 276, + 287, + 495 + ], + "spans": [ + { + "bbox": [ + 56, + 276, + 287, + 495 + ], + "type": "text", + "content": "Data heterogeneity across clients is a key challenge in federated learning. Prior works address this by either aligning client and server models or using control variates to correct client model drift. Although these methods achieve fast convergence in convex or simple non-convex problems, the performance in over-parameterized models such as deep neural networks is lacking. In this paper, we first revisit the widely used FedAvg algorithm in a deep neural network to understand how data heterogeneity influences the gradient updates across the neural network layers. We observe that while the feature extraction layers are learned efficiently by FedAvg, the substantial diversity of the final classification layers across clients impedes the performance. Motivated by this, we propose to correct model drift by variance reduction only on the final layers. 
We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost. We furthermore provide proof for the convergence rate of our algorithm." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 57, + 516, + 134, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 516, + 134, + 528 + ], + "spans": [ + { + "bbox": [ + 57, + 516, + 134, + 528 + ], + "type": "text", + "content": "1. Introduction" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 56, + 536, + 287, + 662 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 536, + 287, + 662 + ], + "spans": [ + { + "bbox": [ + 56, + 536, + 287, + 662 + ], + "type": "text", + "content": "Federated learning (FL) is emerging as an essential distributed learning paradigm in large-scale machine learning. Unlike in traditional machine learning, where a model is trained on the collected centralized data, in federated learning, each client (e.g. phones and institutions) learns a model with its local data. A centralized model is then obtained by aggregating the updates from all participating clients without ever requesting the client data, thereby ensuring a certain level of user privacy [13, 17]. Such an algorithm is especially beneficial for tasks where the data is sensitive, e.g. chemical hazards detection and diseases diagnosis [33]." 
+ } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 69, + 662, + 287, + 674 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 69, + 662, + 287, + 674 + ], + "spans": [ + { + "bbox": [ + 69, + 662, + 287, + 674 + ], + "type": "text", + "content": "Two primary challenges in federated learning are i) han" + } + ] + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 307, + 251, + 379, + 346 + ], + "blocks": [ + { + "bbox": [ + 307, + 251, + 379, + 346 + ], + "lines": [ + { + "bbox": [ + 307, + 251, + 379, + 346 + ], + "spans": [ + { + "bbox": [ + 307, + 251, + 379, + 346 + ], + "type": "image", + "image_path": "f01bb3a5e13038a42a26eed4391142bd7b625074297f83b4a6ca324120a2ed32.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 307, + 346, + 383, + 354 + ], + "lines": [ + { + "bbox": [ + 307, + 346, + 383, + 354 + ], + "spans": [ + { + "bbox": [ + 307, + 346, + 383, + 354 + ], + "type": "text", + "content": "Feature extractor" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 307, + 355, + 354, + 362 + ], + "lines": [ + { + "bbox": [ + 307, + 355, + 354, + 362 + ], + "spans": [ + { + "bbox": [ + 307, + 355, + 354, + 362 + ], + "type": "text", + "content": "□Classifier" + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 379, + 252, + 533, + 308 + ], + "blocks": [ + { + "bbox": [ + 379, + 252, + 533, + 308 + ], + "lines": [ + { + "bbox": [ + 379, + 252, + 533, + 308 + ], + "spans": [ + { + "bbox": [ + 379, + 252, + 533, + 308 + ], + "type": "image", + "image_path": "3936e133bfbce6b47cfb2ce625a65c688b0f4b45e0d1530d5665474e90c74615.jpg" + } + ] + } + ], + "index": 22, + "angle": 0, + "type": "image_body" + } + ], + "index": 22 + }, + { + "type": "image", + "bbox": [ + 379, + 308, + 533, + 368 + ], + "blocks": [ + { + "bbox": [ + 307, + 363, + 387, + 373 + ], + "lines": [ 
+ { + "bbox": [ + 307, + 363, + 387, + 373 + ], + "spans": [ + { + "bbox": [ + 307, + 363, + 387, + 373 + ], + "type": "text", + "content": "Variance reduction" + } + ] + } + ], + "index": 21, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 379, + 308, + 533, + 368 + ], + "lines": [ + { + "bbox": [ + 379, + 308, + 533, + 368 + ], + "spans": [ + { + "bbox": [ + 379, + 308, + 533, + 368 + ], + "type": "image", + "image_path": "5364e44dbf026e26321a15d410f7ab6c500a8dd9f4cb0d75d3b5ecd31e06cb78.jpg" + } + ] + } + ], + "index": 23, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 429, + 367, + 508, + 373 + ], + "lines": [ + { + "bbox": [ + 429, + 367, + 508, + 373 + ], + "spans": [ + { + "bbox": [ + 429, + 367, + 508, + 373 + ], + "type": "text", + "content": "Communication rounds" + } + ] + } + ], + "index": 24, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "lines": [ + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "spans": [ + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "type": "text", + "content": "Figure 1. Our proposed FedPVR framework with the performance (communicated parameters per round client " + }, + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "type": "inline_equation", + "content": "\\longleftrightarrow" + }, + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "type": "text", + "content": " server). Smaller " + }, + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 304, + 384, + 535, + 437 + ], + "type": "text", + "content": " corresponds to higher data heterogeneity. Our method achieves a better speedup than existing approaches by transmitting a slightly larger number of parameters than FedAvg." 
+ } + ] + } + ], + "index": 25, + "angle": 0, + "type": "image_caption" + } + ], + "index": 23 + }, + { + "bbox": [ + 304, + 459, + 536, + 586 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 459, + 536, + 586 + ], + "spans": [ + { + "bbox": [ + 304, + 459, + 536, + 586 + ], + "type": "text", + "content": "dling data heterogeneity across clients [13] and ii) limiting the cost of communication between the server and clients [10]. In this setting, FedAvg [17] is one of the most widely used schemes: A server broadcasts its model to clients, which then update the model using their local data in a series of steps before sending their individual model to the server, where the models are aggregated by averaging the parameters. The process is repeated for multiple communication rounds. While it has shown great success in many applications, it tends to achieve subpar accuracy and convergence when the data are heterogeneous [14, 24, 31]." + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 304, + 586, + 536, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 586, + 536, + 702 + ], + "spans": [ + { + "bbox": [ + 304, + 586, + 536, + 702 + ], + "type": "text", + "content": "The slow and sometimes unstable convergence of FedAvg can be caused by client drift [14] brought on by data heterogeneity. Numerous efforts have been made to improve FedAvg's performance in this setting. Prior works attempt to mitigate client drift by penalizing the distance between a client model and the server model [20, 31] or by performing variance reduction techniques while updating client models [1, 14, 32]. 
These works demonstrate fast convergence on convex problems or for simple neural networks; however, their performance on deep neural" + } + ] + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 57, + 8, + 102, + 39 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 8, + 102, + 39 + ], + "spans": [ + { + "bbox": [ + 57, + 8, + 102, + 39 + ], + "type": "text", + "content": "CVF" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 136, + 5, + 487, + 18 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 5, + 487, + 18 + ], + "spans": [ + { + "bbox": [ + 136, + 5, + 487, + 18 + ], + "type": "text", + "content": "This CVPR paper is the Open Access version, provided by the Computer Vision Foundation." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 189, + 18, + 434, + 29 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 18, + 434, + 29 + ], + "spans": [ + { + "bbox": [ + 189, + 18, + 434, + 29 + ], + "type": "text", + "content": "Except for this watermark, it is identical to the accepted version;" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 171, + 29, + 453, + 41 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 29, + 453, + 41 + ], + "spans": [ + { + "bbox": [ + 171, + 29, + 453, + 41 + ], + "type": "text", + "content": "the final published version of the proceedings is available on IEEE Xplore." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 71, + 682, + 162, + 691 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 682, + 162, + 691 + ], + "spans": [ + { + "bbox": [ + 71, + 682, + 162, + 691 + ], + "type": "text", + "content": "* Work done while at CISPA" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 71, + 691, + 233, + 701 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 71, + 691, + 233, + 701 + ], + "spans": [ + { + "bbox": [ + 71, + 691, + 233, + 701 + ], + "type": "text", + "content": "‡ CISPA Helmholtz Center for Information Security" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "text", + "content": "3964" + } + ] + } + ], + "index": 31 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 86, + 287, + 155 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 86, + 287, + 155 + ], + "spans": [ + { + "bbox": [ + 57, + 86, + 287, + 155 + ], + "type": "text", + "content": "networks, which are state-of-the-art for many centralized learning tasks [11, 34], has yet to be well explored. Adapting techniques that perform well on convex problems to neural networks is non-trivial [7] due to their \"intriguing properties\" [38] such as over-parametrization and permutation symmetries." 
+ } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 155, + 287, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 155, + 287, + 316 + ], + "spans": [ + { + "bbox": [ + 57, + 155, + 287, + 316 + ], + "type": "text", + "content": "To overcome the above issues, we revisit the FedAvg algorithm with a deep neural network (VGG-11 [34]) under the assumption of data heterogeneity and full client participation. Specifically, we investigate which layers in a neural network are mostly influenced by data heterogeneity. We define drift diversity, which measures the diversity of the directions and scales of the averaged gradients across clients per communication round. We observe that in the non-IID scenario, the deeper layers, especially the final classification layer, have the highest diversity across clients compared to an IID setting. This indicates that FedAvg learns good feature representations even in the non-IID scenario [5] and that the significant variation of the deeper layers across clients is a primary cause of FedAvg's subpar performance." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "spans": [ + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "type": "text", + "content": "Based on the above observations, we propose to align the classification layers across clients using variance reduction. 
Specifically, we estimate the average updating direction of the classifiers (the last several fully connected layers) at the client " + }, + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "type": "inline_equation", + "content": "c_{i}" + }, + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "type": "text", + "content": " and server level " + }, + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 57, + 316, + 287, + 408 + ], + "type": "text", + "content": " and use their difference as a control variate [14] to reduce the variance of the classifiers across clients. We analyze our proposed algorithm and derive a convergence rate bound." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 408, + 287, + 568 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 408, + 287, + 568 + ], + "spans": [ + { + "bbox": [ + 57, + 408, + 287, + 568 + ], + "type": "text", + "content": "We perform experiments on the popular federated learning benchmark datasets CIFAR10 [19] and CIFAR100 [19] using two types of neural networks, VGG-11 [34] and ResNet-8 [11], and different levels of data heterogeneity across clients. We experimentally show that we require fewer communication rounds compared to the existing methods [14, 17, 31] to achieve the same accuracy while transmitting a similar or slightly larger number of parameters between server and clients than FedAvg (see Fig. 1). With a (large) fixed number of communication rounds, our method achieves on-par or better top-1 accuracy, and in some settings it even outperforms centralized learning. Using conformal prediction [3], we show how performance can be improved further using adaptive prediction sets." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 569, + 287, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 569, + 287, + 636 + ], + "spans": [ + { + "bbox": [ + 57, + 569, + 287, + 636 + ], + "type": "text", + "content": "We show that applying variance reduction on the last layers increases the diversity of the feature extraction layers. This diversity in the feature extraction layers may give each client more freedom to learn richer feature representations, and the uniformity in the classifier then ensures a less biased decision. We summarize our contributions here:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 68, + 644, + 287, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 68, + 644, + 287, + 701 + ], + "spans": [ + { + "bbox": [ + 68, + 644, + 287, + 701 + ], + "type": "text", + "content": "- We present our algorithm for partial variance-reduced federated learning (FedPVR). We experimentally demonstrate that the key to the success of our algorithm is the diversity between the feature extraction layers and the alignment between the classifiers." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 316, + 86, + 536, + 155 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 86, + 536, + 155 + ], + "spans": [ + { + "bbox": [ + 316, + 86, + 536, + 155 + ], + "type": "text", + "content": "- We prove the convergence rate in the convex settings and non-convex settings, precisely characterize its weak dependence on data-heterogeneity measures and show that FedPVR provably converges as fast as the centralized SGD baseline in most practical relevant cases." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 316, + 164, + 536, + 233 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 164, + 536, + 233 + ], + "spans": [ + { + "bbox": [ + 316, + 164, + 536, + 233 + ], + "type": "text", + "content": "- We experimentally show that our algorithm is more communication efficient than previous works across various levels of data heterogeneity, datasets, and neural network architectures. In some cases where data heterogeneity exists, the proposed algorithm even performs slightly better than centralized learning." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 251, + 386, + 263 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 251, + 386, + 263 + ], + "spans": [ + { + "bbox": [ + 305, + 251, + 386, + 263 + ], + "type": "text", + "content": "2. Related work" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 270, + 412, + 283 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 270, + 412, + 283 + ], + "spans": [ + { + "bbox": [ + 305, + 270, + 412, + 283 + ], + "type": "text", + "content": "2.1. Federated learning" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 287, + 536, + 367 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 287, + 536, + 367 + ], + "spans": [ + { + "bbox": [ + 305, + 287, + 536, + 367 + ], + "type": "text", + "content": "Federated learning (FL) is a fast-growing field [13, 40]. We mainly describe FL methods in non-IID settings where the data is distributed heterogeneously across clients. Among the existing approaches, FedAvg [25] is the de facto optimization technique. Despite its solid empirical performances in IID settings [13, 25], it tends to achieve a subpar accuracy-communication trade-off in non-IID scenarios." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 368, + 536, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 368, + 536, + 574 + ], + "spans": [ + { + "bbox": [ + 305, + 368, + 536, + 574 + ], + "type": "text", + "content": "Many works attempt to tackle FL when data is heterogeneous across clients [1, 9, 14, 31, 39, 42]. FedProx [31] proposes a temperature parameter and proximal regularization term to control the divergence between client and server models. However, the proximal term does not bring alignment between the global and local optimal points [1]. Similarly, some works control the update direction by introducing a client-dependent control variate [1, 14, 17, 27, 32] that is also communicated between the server and clients. They have achieved a much faster convergence rate, but their performance in a non-convex setup, especially in deep neural networks, such as ResNet [11] and VGG [34], is not well explored. Besides, they suffer from a higher communication cost due to the transmission of the extra control variates, which may be a critical issue for resource-limited IoT mobile devices [10]. Among these methods, SCAFFOLD [14] is the most closely related method to ours, and we give a more detailed comparison in sections 3 and 5." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 575, + 536, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 575, + 536, + 702 + ], + "spans": [ + { + "bbox": [ + 304, + 575, + 536, + 702 + ], + "type": "text", + "content": "Another line of work develops FL algorithms based on characteristics of neural networks, such as expressive feature representations [28]. Collins et al. [5] show that FedAvg is powerful in learning common data representations from clients' data.
FedBabu [29], TCT [41], and CCVR [24] propose to improve FL performance by finetuning the classifiers with a standalone dataset or features that are simulated based on the client models. However, preparing a standalone dataset/features that represents the data distribution across clients is challenging as this usually requires domain knowledge and may raise privacy concerns." + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "text", + "content": "3965" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 79, + 82, + 515, + 223 + ], + "blocks": [ + { + "bbox": [ + 79, + 82, + 515, + 223 + ], + "lines": [ + { + "bbox": [ + 79, + 82, + 515, + 223 + ], + "spans": [ + { + "bbox": [ + 79, + 82, + 515, + 223 + ], + "type": "image", + "image_path": "a6ccf984c91b668f39f0637e261b46a42c6d3bd271313729673828ee5984887c.jpg" + } + ] + } + ], + "index": 0, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 56, + 232, + 535, + 265 + ], + "lines": [ + { + "bbox": [ + 56, + 232, + 535, + 265 + ], + "spans": [ + { + "bbox": [ + 56, + 232, + 535, + 265 + ], + "type": "text", + "content": "Figure 2. Data distribution (number of images per client per class) with different levels of heterogeneity, client CKA similarity, and the drift diversity of each layer in VGG-11 (20 layers) with FedAvg. Deep layers in an over-parameterised neural network have higher disagreement and variance when the clients are heterogeneous using FedAvg." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_caption" + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 283, + 286, + 341 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 283, + 286, + 341 + ], + "spans": [ + { + "bbox": [ + 56, + 283, + 286, + 341 + ], + "type": "text", + "content": "Moon [20] encourages the similarity of the representations across different client models by using contrastive loss [4] but with the cost of three full-size models in memory on each client, which may limit its applicability in resource-limited devices." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 341, + 287, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 341, + 287, + 410 + ], + "spans": [ + { + "bbox": [ + 56, + 341, + 287, + 410 + ], + "type": "text", + "content": "Other works focus on reducing the communication cost by compressing the transmitted gradients [2, 8, 21, 26, 36]. They can reduce the communication bandwidth by adjusting the number of bits sent per iteration. These works are complementary to ours and can be easily integrated into our method to save communication costs." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 418, + 164, + 429 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 418, + 164, + 429 + ], + "spans": [ + { + "bbox": [ + 57, + 418, + 164, + 429 + ], + "type": "text", + "content": "2.2. Variance reduction" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 436, + 287, + 607 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 436, + 287, + 607 + ], + "spans": [ + { + "bbox": [ + 56, + 436, + 287, + 607 + ], + "type": "text", + "content": "Stochastic variance reduction (SVR), such as SVRG [12], SAGA [6], and their variants, use control variate to reduce the variance of traditional stochastic gradient descent (SGD). 
These methods can remarkably achieve a linear convergence rate for strongly convex optimization problems, compared to the sub-linear rate of SGD. Many federated learning algorithms, such as SCAFFOLD [14] and DANE [32], have adapted the idea of variance reduction for the whole model and achieved good convergence on convex problems. However, as [7] demonstrated, naively applying variance reduction techniques gives no actual variance reduction and tends to result in slower convergence in deep neural networks. This suggests that adapting SVR techniques in deep neural networks for FL requires a more careful design." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 57, + 615, + 175, + 627 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 615, + 175, + 627 + ], + "spans": [ + { + "bbox": [ + 57, + 615, + 175, + 627 + ], + "type": "text", + "content": "2.3. Conformal prediction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "spans": [ + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": "Conformal prediction is a general framework that computes a prediction set guaranteed to include the true class with a high user-determined probability [3, 30]. It requires no retraining of the models and achieves a finite-sample coverage guarantee [3]. As FL algorithms can hardly perform as well as centralized learning [24] when the data heterogene" + } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 309, + 286, + 534, + 358 + ], + "blocks": [ + { + "bbox": [ + 355, + 275, + 485, + 285 + ], + "lines": [ + { + "bbox": [ + 355, + 275, + 485, + 285 + ], + "spans": [ + { + "bbox": [ + 355, + 275, + 485, + 285 + ], + "type": "text", + "content": "Table 1.
Notations used in this paper" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 309, + 286, + 534, + 358 + ], + "lines": [ + { + "bbox": [ + 309, + 286, + 534, + 358 + ], + "spans": [ + { + "bbox": [ + 309, + 286, + 534, + 358 + ], + "type": "table", + "html": "
R, r: Number of communication rounds and round index
K, k: Number of local steps and local step index
N, i: Number of clients and client index
y^r_{i,k}: client model i at step k and round r
x^r: server model at round r
c^r_i, c^r: client and server control variates
", + "image_path": "db2239ccc151bc31eb9ca304fedeadfea0ba5e1053a7e3248523cf0e2f19a707.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 361, + 536, + 431 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 361, + 536, + 431 + ], + "spans": [ + { + "bbox": [ + 304, + 361, + 536, + 431 + ], + "type": "text", + "content": "ity is high, we can integrate conformal prediction in FL to improve the empirical coverage by slightly increasing the predictive set size. This can be beneficial in sensitive use cases such as detecting chemical hazards, where it is better to give a prediction set that contains the correct class than producing a single but wrong prediction." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 439, + 358, + 451 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 439, + 358, + 451 + ], + "spans": [ + { + "bbox": [ + 305, + 439, + 358, + 451 + ], + "type": "text", + "content": "3. Method" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 305, + 458, + 411, + 469 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 458, + 411, + 469 + ], + "spans": [ + { + "bbox": [ + 305, + 458, + 411, + 469 + ], + "type": "text", + "content": "3.1. Problem statement" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "spans": [ + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "content": "Given " + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "content": " clients with full participation, we formalise the problem as minimizing the average of the stochastic functions with access to stochastic samples in Eq. 
1 where " + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "inline_equation", + "content": "\\pmb{x}" + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "content": " is the model parameters and " + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "inline_equation", + "content": "f_{i}" + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "content": " represents the loss function at client " + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "content": " with dataset " + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_i" + }, + { + "bbox": [ + 304, + 475, + 536, + 532 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 356, + 538, + 535, + 570 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 356, + 538, + 535, + 570 + ], + "spans": [ + { + "bbox": [ + 356, + 538, + 535, + 570 + ], + "type": "interline_equation", + "content": "\\min _ {\\boldsymbol {x} \\in \\mathbb {R} ^ {d}} \\left(f (\\boldsymbol {x}) := \\frac {1}{N} \\sum_ {i = 1} ^ {N} f _ {i} (\\boldsymbol {x})\\right), \\tag {1}", + "image_path": "ccb5e9a5429dc952cb24e8642d5e51f424f7a4b7918570f58c09464f3f3c8d15.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 305, + 574, + 430, + 587 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 574, + 430, + 587 + ], + "spans": [ + { + "bbox": [ + 305, + 574, + 430, + 587 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 305, + 574, + 430, + 587 + ], + "type": "inline_equation", + "content": "f_{i}(\\pmb {x}):= \\mathbb{E}_{\\mathcal{D}_{i}}[f_{i}(\\pmb {x};\\mathcal{D}_{i})]" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 305, + 591, + 377, + 603 + ], + "type": "title", + "angle": 0, + "lines": 
[ + { + "bbox": [ + 305, + 591, + 377, + 603 + ], + "spans": [ + { + "bbox": [ + 305, + 591, + 377, + 603 + ], + "type": "text", + "content": "3.2. Motivation" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "spans": [ + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "content": "When the data " + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "inline_equation", + "content": "\\{\\mathcal{D}_i\\}" + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "content": " are heterogeneous across clients, FedAvg suffers from client drift [14], where the average of the local optima " + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "inline_equation", + "content": "\\bar{\\pmb{x}}^* = \\frac{1}{N}\\sum_{i\\in N}\\pmb{x}_i^*" + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "content": " is far from the global optimum " + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "inline_equation", + "content": "\\pmb{x}^*" + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "content": ". To understand what causes client drift, specifically which layers in a neural network are influenced most by the data heterogeneity, we perform a simple experiment using FedAvg with the CIFAR10 dataset on a VGG-11. The detailed experimental setup can be found in section 4."
+ } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 734, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 734, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 734, + 298, + 742 + ], + "type": "text", + "content": "3966" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "spans": [ + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "type": "text", + "content": "In an over-parameterized model, it is difficult to directly calculate client drift " + }, + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "type": "inline_equation", + "content": "||\\bar{x}^{*} - x^{*}||^{2}" + }, + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "type": "text", + "content": " as it is challenging to obtain the global optimum " + }, + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "type": "inline_equation", + "content": "x^{*}" + }, + { + "bbox": [ + 56, + 86, + 287, + 177 + ], + "type": "text", + "content": ". We instead hypothesize that we can represent the influence of data heterogeneity on the model by measuring 1) drift diversity and 2) client model similarity. Drift diversity reflects the diversity in the amount each client model deviates from the server model after an update round." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "spans": [ + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "type": "text", + "content": "Definition 1 (Drift diversity). 
We define the drift diversity across " + }, + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "type": "text", + "content": " clients at round " + }, + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "type": "inline_equation", + "content": "r" + }, + { + "bbox": [ + 57, + 178, + 287, + 201 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 85, + 207, + 287, + 237 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 85, + 207, + 287, + 237 + ], + "spans": [ + { + "bbox": [ + 85, + 207, + 287, + 237 + ], + "type": "interline_equation", + "content": "\\xi^ {r} := \\frac {\\sum_ {i = 1} ^ {N} \\left\\| \\boldsymbol {m} _ {i} ^ {r} \\right\\| ^ {2}}{\\left\\| \\sum_ {i = 1} ^ {N} \\boldsymbol {m} _ {i} ^ {r} \\right\\| ^ {2}} \\quad \\boldsymbol {m} _ {i} ^ {r} = \\boldsymbol {y} _ {i, K} ^ {r} - \\boldsymbol {x} ^ {r - 1} \\tag {2}", + "image_path": "8f4521967db60068155fc471b208ea5cdb982ec90a58ac5d50bfc1db6de15d88.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "spans": [ + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": "Drift diversity " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": " is high when all the clients update their models in different directions, i.e., when dot products between client updates " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "\\pmb{m}_i" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": " are small. 
When each client performs " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": " steps of vanilla SGD updates, " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": " depends on the directions and amplitude of the gradients over " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": " clients and is equivalent to " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "\\frac{\\sum_{i=1}^{N} \\|\\sum_{k} g_i(\\pmb{y}_{i,k})\\|^2}{\\|\\sum_{i=1}^{N} \\sum_{k} g_i(\\pmb{y}_{i,k})\\|^2}" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "inline_equation", + "content": "g_i(\\pmb{y}_{i,k})" + }, + { + "bbox": [ + 56, + 245, + 287, + 331 + ], + "type": "text", + "content": " is the stochastic mini-batch gradient." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 332, + 287, + 389 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 332, + 287, + 389 + ], + "spans": [ + { + "bbox": [ + 56, + 332, + 287, + 389 + ], + "type": "text", + "content": "After updating client models, we quantify the client model similarity using centred kernel alignment (CKA) [18] computed on a test dataset. CKA is a widely used permutation invariant metric for measuring the similarity between feature representations in neural networks [18, 24, 28]." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "spans": [ + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "text", + "content": "Fig. 2 shows the evolution of " + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "text", + "content": " and CKA across different levels of data heterogeneity using FedAvg. We observe that the similarity and diversity of the early layers (e.g., layer indices 4 and 12) show higher agreement between the IID " + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "inline_equation", + "content": "(\\alpha = 100.0)" + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "text", + "content": " and non-IID " + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "inline_equation", + "content": "(\\alpha = 0.1)" + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "text", + "content": " experiments, which indicates that FedAvg can still learn and extract good feature representations even when it is trained with non-IID data. The lower similarity on the deeper layers, especially the classifiers, suggests that these layers are strongly biased towards their local data distribution. When we only look at the model that is trained with " + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "inline_equation", + "content": "\\alpha = 0.1" + }, + { + "bbox": [ + 56, + 389, + 287, + 573 + ], + "type": "text", + "content": ", we see the highest diversity and variance on the classifiers across clients compared to the rest of the layers. Based on the above observations, we propose to align the classifiers across clients using variance reduction. We deploy client and server control variates to control the updating directions of the classifiers."
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 57, + 580, + 208, + 591 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 580, + 208, + 591 + ], + "spans": [ + { + "bbox": [ + 57, + 580, + 208, + 591 + ], + "type": "text", + "content": "3.3. Classifier variance reduction" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 56, + 597, + 287, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 597, + 287, + 633 + ], + "spans": [ + { + "bbox": [ + 56, + 597, + 287, + 633 + ], + "type": "text", + "content": "Our proposed algorithm (Alg. I) consists of three parts: i) client updating (Eq. 5-6), ii) client control variate updating (Eq. 7), and iii) server updating (Eq. 8-9)." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "spans": [ + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": "We first define a vector " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "\\pmb{p} \\in \\mathbb{R}^d" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " that contains 0 or 1 with " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "v" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " non-zero elements " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "(v \\ll d)" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " in Eq. 3.
We recover SCAFFOLD with " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "\\pmb{p} = \\mathbf{1}" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " and recover FedAvg with " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "\\pmb{p} = \\mathbf{0}" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": ". For the set of indices " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "p_j = 1" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{svr}}" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " from Eq.
4), we update the corresponding weights " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "y_{i,S_{\\mathrm{svr}}}" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " with variance reduction such that we maintain a state for each client " + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "inline_equation", + "content": "(c_i \\in \\mathbb{R}^v)" + }, + { + "bbox": [ + 56, + 633, + 287, + 701 + ], + "type": "text", + "content": " and" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "spans": [ + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": "for the server " + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "inline_equation", + "content": "(\\pmb{c} \\in \\mathbb{R}^v)" + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": " in Eq. 5. For the rest of the indices " + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{sgd}}" + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": " from Eq. 4, we update the corresponding weights " + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "inline_equation", + "content": "\\pmb{y}_{i,S_{\\mathrm{sgd}}}" + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": " with SGD in Eq. 6. 
As the server variate " + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "inline_equation", + "content": "\\pmb{c}" + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": " is an average of " + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "inline_equation", + "content": "\\pmb{c}_i" + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": " across clients, we can safely initialise them as " + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "inline_equation", + "content": "\\mathbf{0}" + }, + { + "bbox": [ + 304, + 86, + 536, + 133 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "spans": [ + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": "In each communication round, each client receives a copy of the server model " + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "inline_equation", + "content": "\\mathbf{x}" + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": " and the server control variate " + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "inline_equation", + "content": "\\mathbf{c}" + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": ". They then perform " + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": " model updating steps (see Eq. 5-6 for one step) using cross-entropy as the loss function. 
Once this is finished, we calculate the updated client control variate " + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "inline_equation", + "content": "\\mathbf{c}_i" + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": " using Eq. 7. The server then receives the updated " + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "inline_equation", + "content": "\\mathbf{c}_i" + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "inline_equation", + "content": "\\mathbf{y}_i" + }, + { + "bbox": [ + 304, + 133, + 536, + 224 + ], + "type": "text", + "content": " from all the clients for aggregation (Eq. 8-9). This completes one communication round." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 339, + 227, + 535, + 244 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 227, + 535, + 244 + ], + "spans": [ + { + "bbox": [ + 339, + 227, + 535, + 244 + ], + "type": "interline_equation", + "content": "\\boldsymbol {p} := \\{0, 1 \\} ^ {d}, \\quad v = \\sum \\boldsymbol {p} \\tag {3}", + "image_path": "2f32f21896c71edcdc7f935b4b109a6426a352c18f1bea919122a85547bcd5a5.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 332, + 245, + 535, + 260 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 332, + 245, + 535, + 260 + ], + "spans": [ + { + "bbox": [ + 332, + 245, + 535, + 260 + ], + "type": "interline_equation", + "content": "S _ {\\mathrm {s v r}} := \\{j: \\boldsymbol {p} _ {j} = 1 \\}, \\quad S _ {\\mathrm {s g d}} := \\{j: \\boldsymbol {p} _ {j} = 0 \\} \\tag {4}", + "image_path": "928ffb9a5217fc5d5a208d588691d5b95c6ee9cc8e1790e825eeac157f0b194f.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 323, + 262, + 535, + 275 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 323, + 262, + 535, + 275 + 
], + "spans": [ + { + "bbox": [ + 323, + 262, + 535, + 275 + ], + "type": "interline_equation", + "content": "\\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}} \\leftarrow \\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}} - \\eta_ {l} \\left(g _ {i} \\left(\\boldsymbol {y} _ {i}\\right) _ {S _ {\\mathrm {s v r}}} - \\boldsymbol {c} _ {i} + \\boldsymbol {c}\\right) \\tag {5}", + "image_path": "718e8d5ff21ed018b02b3835adf381396e4086e3ab3ca267444ab049ce963402.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 322, + 277, + 535, + 291 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 322, + 277, + 535, + 291 + ], + "spans": [ + { + "bbox": [ + 322, + 277, + 535, + 291 + ], + "type": "interline_equation", + "content": "\\boldsymbol {y} _ {i, S _ {\\mathrm {s g d}}} \\leftarrow \\boldsymbol {y} _ {i, S _ {\\mathrm {s g d}}} - \\eta_ {l} g _ {i} \\left(\\boldsymbol {y} _ {i}\\right) _ {S _ {\\mathrm {s g d}}} \\tag {6}", + "image_path": "6549ad12805a1e8623f804b47b1d4e9202ebfff94a08aeef5b75bbd24181be4a.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 337, + 293, + 535, + 316 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 337, + 293, + 535, + 316 + ], + "spans": [ + { + "bbox": [ + 337, + 293, + 535, + 316 + ], + "type": "interline_equation", + "content": "\\boldsymbol {c} _ {i} \\leftarrow \\boldsymbol {c} _ {i} - \\boldsymbol {c} + \\frac {1}{K \\eta_ {l}} \\left(\\boldsymbol {x} _ {S _ {\\mathrm {s v r}}} - \\boldsymbol {y} _ {i, S _ {\\mathrm {s v r}}}\\right) \\tag {7}", + "image_path": "94e6f3b8509f8f7157c79bbed72b0a301e624396066b1ef8a43a9141da24d3dd.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 339, + 318, + 535, + 345 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 339, + 318, + 535, + 345 + ], + "spans": [ + { + "bbox": [ + 339, + 318, + 535, + 345 + ], + "type": "interline_equation", + "content": "\\boldsymbol {x} \\leftarrow (1 - \\eta_ 
{g}) \\boldsymbol {x} + \\frac {1}{N} \\sum_ {i \\in N} \\boldsymbol {y} _ {i} \\tag {8}", + "image_path": "3ed9fd9e03b3f627ba036262229e67e5196d01a308bf1ca552cba5b8b4a20638.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 340, + 347, + 535, + 374 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 340, + 347, + 535, + 374 + ], + "spans": [ + { + "bbox": [ + 340, + 347, + 535, + 374 + ], + "type": "interline_equation", + "content": "\\boldsymbol {c} \\leftarrow \\frac {1}{N} \\sum_ {i \\in N} \\boldsymbol {c} _ {i} \\tag {9}", + "image_path": "bc999e7a2ae8072c1e3f58f81a5eee749f2b046a638dc5648ce1dc0d48dc96b0.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 306, + 379, + 501, + 391 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 379, + 501, + 391 + ], + "spans": [ + { + "bbox": [ + 306, + 379, + 501, + 391 + ], + "type": "text", + "content": "Algorithm I Partial variance reduction (FedPVR)" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "spans": [ + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "text", + "content": "server: initialise the server model " + }, + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "inline_equation", + "content": "\\mathbf{x}" + }, + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "text", + "content": ", the control variate " + }, + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "inline_equation", + "content": "\\mathbf{c}" + }, + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "text", + "content": ", and global step size " + }, + { + "bbox": [ + 305, + 393, + 536, + 414 + ], + "type": "inline_equation", + "content": "\\eta_g" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 320, + 414, + 515, + 424 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 320, + 414, + 515, + 
424 + ], + "spans": [ + { + "bbox": [ + 320, + 414, + 515, + 424 + ], + "type": "text", + "content": "client: initialise control variate " + }, + { + "bbox": [ + 320, + 414, + 515, + 424 + ], + "type": "inline_equation", + "content": "c_{i}" + }, + { + "bbox": [ + 320, + 414, + 515, + 424 + ], + "type": "text", + "content": " and local step size " + }, + { + "bbox": [ + 320, + 414, + 515, + 424 + ], + "type": "inline_equation", + "content": "\\eta_{l}" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "spans": [ + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "inline_equation", + "content": "\\mathbf{mask}" + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "text", + "content": ": " + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "inline_equation", + "content": "\\pmb {p} \\in \\{0,1\\} ^d" + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{sgd}}\\coloneqq \\{j:\\pmb {p}_j = 0\\}" + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 305, + 424, + 535, + 447 + ], + "type": "inline_equation", + "content": "S_{\\mathrm{svr}}\\coloneqq \\{j:\\pmb {p}_j = 1\\}" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 309, + 447, + 430, + 457 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 309, + 447, + 430, + 457 + ], + "spans": [ + { + "bbox": [ + 309, + 447, + 430, + 457 + ], + "type": "text", + "content": "1: procedure MODEL UPDATING" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 310, + 458, + 402, + 467 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 310, + 458, + 402, + 467 + ], + "spans": [ + { + "bbox": [ + 310, + 458, + 402, + 467 + ], + "type": "text", + "content": "2: for " + }, + { + "bbox": [ + 310, + 458, + 402, + 467 + ], +
"type": "inline_equation", + "content": "r = 1\\rightarrow R" + }, + { + "bbox": [ + 310, + 458, + 402, + 467 + ], + "type": "text", + "content": " do" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "spans": [ + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "text", + "content": "3: communicate " + }, + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "text", + "content": " to all clients " + }, + { + "bbox": [ + 311, + 468, + 503, + 478 + ], + "type": "inline_equation", + "content": "i \\in [N]" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 311, + 479, + 471, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 479, + 471, + 489 + ], + "spans": [ + { + "bbox": [ + 311, + 479, + 471, + 489 + ], + "type": "text", + "content": "4: for On client " + }, + { + "bbox": [ + 311, + 479, + 471, + 489 + ], + "type": "inline_equation", + "content": "i \\in [N]" + }, + { + "bbox": [ + 311, + 479, + 471, + 489 + ], + "type": "text", + "content": " in parallel do" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 311, + 490, + 390, + 500 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 490, + 390, + 500 + ], + "spans": [ + { + "bbox": [ + 311, + 490, + 390, + 500 + ], + "type": "text", + "content": "5: " + }, + { + "bbox": [ + 311, + 490, + 390, + 500 + ], + "type": "inline_equation", + "content": "y_{i}\\gets x" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 311, + 500, + 428, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 500, + 428, + 509 + 
], + "spans": [ + { + "bbox": [ + 311, + 500, + 428, + 509 + ], + "type": "text", + "content": "6: for " + }, + { + "bbox": [ + 311, + 500, + 428, + 509 + ], + "type": "inline_equation", + "content": "k = 1\\to K" + }, + { + "bbox": [ + 311, + 500, + 428, + 509 + ], + "type": "text", + "content": " do" + } + ] + } + ], + "index": 27 + }, + { + "bbox": [ + 311, + 510, + 495, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 510, + 495, + 521 + ], + "spans": [ + { + "bbox": [ + 311, + 510, + 495, + 521 + ], + "type": "text", + "content": "7: compute minibatch gradient " + }, + { + "bbox": [ + 311, + 510, + 495, + 521 + ], + "type": "inline_equation", + "content": "g_{i}(\\pmb{y}_{i})" + } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 311, + 522, + 485, + 533 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 522, + 485, + 533 + ], + "spans": [ + { + "bbox": [ + 311, + 522, + 485, + 533 + ], + "type": "text", + "content": "8: " + }, + { + "bbox": [ + 311, + 522, + 485, + 533 + ], + "type": "inline_equation", + "content": "\\pmb{y}_{i,S_{\\mathrm{sgd}}} \\gets \\pmb{y}_{i,S_{\\mathrm{sgd}}} - \\eta_l g_i(\\pmb{y}_i)_{S_{\\mathrm{sgd}}}" + } + ] + } + ], + "index": 29 + }, + { + "bbox": [ + 311, + 533, + 523, + 543 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 311, + 533, + 523, + 543 + ], + "spans": [ + { + "bbox": [ + 311, + 533, + 523, + 543 + ], + "type": "text", + "content": "9: " + }, + { + "bbox": [ + 311, + 533, + 523, + 543 + ], + "type": "inline_equation", + "content": "\\pmb{y}_{i,S_{\\mathrm{svr}}}\\gets \\pmb{y}_{i,S_{\\mathrm{svr}}} - \\eta_{l}(g_{i}(\\pmb{y}_{i})_{S_{\\mathrm{svr}}} - \\pmb{c}_{i} + \\pmb{c})" + } + ] + } + ], + "index": 30 + }, + { + "bbox": [ + 308, + 543, + 388, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 543, + 388, + 552 + ], + "spans": [ + { + "bbox": [ + 308, + 543, + 388, + 552 + ], + "type": "text", + "content": 
"10: end for" + } + ] + } + ], + "index": 31 + }, + { + "bbox": [ + 308, + 553, + 487, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 553, + 487, + 565 + ], + "spans": [ + { + "bbox": [ + 308, + 553, + 487, + 565 + ], + "type": "text", + "content": "11: " + }, + { + "bbox": [ + 308, + 553, + 487, + 565 + ], + "type": "inline_equation", + "content": "\\pmb{c}_i \\gets \\pmb{c}_i - \\pmb{c} + \\frac{1}{K\\eta_l} (\\pmb{x}_{S_{\\mathrm{svr}}} - \\pmb{y}_{i,S_{\\mathrm{svr}}})" + } + ] + } + ], + "index": 32 + }, + { + "bbox": [ + 308, + 565, + 432, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 565, + 432, + 575 + ], + "spans": [ + { + "bbox": [ + 308, + 565, + 432, + 575 + ], + "type": "text", + "content": "12: communicate " + }, + { + "bbox": [ + 308, + 565, + 432, + 575 + ], + "type": "inline_equation", + "content": "\\pmb{y}_i, \\pmb{c}_i" + } + ] + } + ], + "index": 33 + }, + { + "bbox": [ + 308, + 575, + 374, + 584 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 575, + 374, + 584 + ], + "spans": [ + { + "bbox": [ + 308, + 575, + 374, + 584 + ], + "type": "text", + "content": "13: end for" + } + ] + } + ], + "index": 34 + }, + { + "bbox": [ + 308, + 585, + 458, + 596 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 585, + 458, + 596 + ], + "spans": [ + { + "bbox": [ + 308, + 585, + 458, + 596 + ], + "type": "text", + "content": "14: " + }, + { + "bbox": [ + 308, + 585, + 458, + 596 + ], + "type": "inline_equation", + "content": "\\pmb{x} \\gets (1 - \\eta_g)\\pmb{x} + \\frac{1}{N}\\sum_{i\\in N}\\pmb{y}_i" + } + ] + } + ], + "index": 35 + }, + { + "bbox": [ + 308, + 596, + 409, + 607 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 596, + 409, + 607 + ], + "spans": [ + { + "bbox": [ + 308, + 596, + 409, + 607 + ], + "type": "text", + "content": "15: " + }, + { + "bbox": [ + 308, + 596, + 409, + 607 + ], + "type": 
"inline_equation", + "content": "\\pmb{c} \\gets \\frac{1}{N} \\sum_{i \\in N} \\pmb{c}_i" + } + ] + } + ], + "index": 36 + }, + { + "bbox": [ + 308, + 607, + 362, + 616 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 607, + 362, + 616 + ], + "spans": [ + { + "bbox": [ + 308, + 607, + 362, + 616 + ], + "type": "text", + "content": "16: end for" + } + ] + } + ], + "index": 37 + }, + { + "bbox": [ + 308, + 617, + 376, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 617, + 376, + 626 + ], + "spans": [ + { + "bbox": [ + 308, + 617, + 376, + 626 + ], + "type": "text", + "content": "17: end procedure" + } + ] + } + ], + "index": 38 + }, + { + "bbox": [ + 304, + 628, + 536, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 628, + 536, + 655 + ], + "spans": [ + { + "bbox": [ + 304, + 628, + 536, + 655 + ], + "type": "text", + "content": "In terms of implementation, we can simply assume the control variate for the block of weights that is updated with SGD as 0 and implement line 8 and 9 in one step" + } + ] + } + ], + "index": 39 + }, + { + "bbox": [ + 304, + 655, + 536, + 701 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 655, + 536, + 701 + ], + "spans": [ + { + "bbox": [ + 304, + 655, + 536, + 701 + ], + "type": "text", + "content": "Ours vs SCAFFOLD [14] While our work is similar to SCAFFOLD in the use of variance reduction, there are some fundamental differences. 
We both communicate control variates between the clients and server, but our control" + } + ] + } + ], + "index": 40 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "text", + "content": "3967" + } + ] + } + ], + "index": 41 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "spans": [ + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "text", + "content": "variate " + }, + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "inline_equation", + "content": "(2v\\leq 0.1d)" + }, + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "text", + "content": " is significantly smaller than the one in SCAFFOLD " + }, + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "inline_equation", + "content": "(2d)" + }, + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "text", + "content": ". This " + }, + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "inline_equation", + "content": "2x" + }, + { + "bbox": [ + 56, + 86, + 287, + 236 + ], + "type": "text", + "content": " decrease in bits can be critical for some low-power IoT devices as the communication may consume more energy [10]. From the application point of view, SCAFFOLD achieved great success in convex or simple two layers problems. However, adapting the techniques that work well from convex problems to over-parameterized models is non-trivial [38], and naively adapting variance reduction techniques on deep neural networks gives little or no convergence speedup [7]. 
Therefore, the significant improvement achieved by our method gives essential and nontrivial insight into what matters when tackling data heterogeneity in FL in over-parameterized models." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 242, + 158, + 255 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 242, + 158, + 255 + ], + "spans": [ + { + "bbox": [ + 57, + 242, + 158, + 255 + ], + "type": "text", + "content": "3.4. Convergence rate" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "spans": [ + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": "We state the convergence rate in this section. We assume functions " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "\\{f_i\\}" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": " are " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": "-smooth following [16, 35]. We then assume " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "g_{i}(\\pmb{x})\\coloneqq \\nabla f_{i}(x;\\mathcal{D}_{i})" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": " is an unbiased stochastic gradient of " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "f_{i}" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": " with variance bounded by " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "\\sigma^2" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": ". 
We assume strong convexity " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "(\\mu >0)" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": " and general convexity " + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "inline_equation", + "content": "(\\mu = 0)" + }, + { + "bbox": [ + 56, + 260, + 287, + 340 + ], + "type": "text", + "content": " for some of the results following [14]. Furthermore, we also make assumptions about the heterogeneity of the functions." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "spans": [ + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "type": "text", + "content": "For convex functions, we assume the heterogeneity of the functions " + }, + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "type": "inline_equation", + "content": "\\{f_i\\}" + }, + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "type": "text", + "content": " at the optimal point " + }, + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "type": "inline_equation", + "content": "x^{*}" + }, + { + "bbox": [ + 57, + 341, + 287, + 386 + ], + "type": "text", + "content": " (such a point always exists for a strongly convex function) following [15, 16]." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "spans": [ + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "text", + "content": "Assumption 1 (" + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "inline_equation", + "content": "\\zeta" + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "text", + "content": "-heterogeneity). 
We define a measure of variance at the optimum " + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "inline_equation", + "content": "x^{*}" + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "text", + "content": " given " + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 57, + 388, + 288, + 410 + ], + "type": "text", + "content": " clients as:" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 114, + 412, + 286, + 443 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 412, + 286, + 443 + ], + "spans": [ + { + "bbox": [ + 114, + 412, + 286, + 443 + ], + "type": "interline_equation", + "content": "\\zeta^ {2} := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} | | \\nabla f _ {i} \\left(\\boldsymbol {x} ^ {*}\\right) | | ^ {2}. \\tag {10}", + "image_path": "b553bd329ab7119b96fa289334172028acb771e62ffe6419fdba7f7cf97e14d1.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 57, + 446, + 287, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 446, + 287, + 481 + ], + "spans": [ + { + "bbox": [ + 57, + 446, + 287, + 481 + ], + "type": "text", + "content": "For non-convex functions, such a unique optimal point " + }, + { + "bbox": [ + 57, + 446, + 287, + 481 + ], + "type": "inline_equation", + "content": "\\pmb{x}^*" + }, + { + "bbox": [ + 57, + 446, + 287, + 481 + ], + "type": "text", + "content": " does not necessarily exist, so we generalize Assumption 1 to Assumption 2."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "spans": [ + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "text", + "content": "Assumption 2 (" + }, + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "inline_equation", + "content": "\\hat{\\zeta}" + }, + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "text", + "content": "-heterogeneity). We assume there exists constant " + }, + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "inline_equation", + "content": "\\hat{\\zeta}" + }, + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "text", + "content": " such that " + }, + { + "bbox": [ + 57, + 486, + 287, + 510 + ], + "type": "inline_equation", + "content": "\\forall x \\in \\mathbb{R}^d" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 119, + 511, + 286, + 543 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 511, + 286, + 543 + ], + "spans": [ + { + "bbox": [ + 119, + 511, + 286, + 543 + ], + "type": "interline_equation", + "content": "\\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} | | \\nabla f _ {i} (\\boldsymbol {x}) | | ^ {2} \\leq \\hat {\\zeta} ^ {2}. \\tag {11}", + "image_path": "921800346aec29d6abc86f9c039ef3e35ebdcf6fbe778739f99cbe1f5e31c4a4.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "spans": [ + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "type": "text", + "content": "Given the mask " + }, + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "type": "inline_equation", + "content": "\\pmb{p}" + }, + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "type": "text", + "content": " as defined in Eq. 
3, we know " + }, + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "type": "inline_equation", + "content": "||\\pmb{p} \\odot \\pmb{x}|| \\leq ||\\pmb{x}||" + }, + { + "bbox": [ + 57, + 551, + 287, + 575 + ], + "type": "text", + "content": ". Therefore, we have the following propositions." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "spans": [ + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "type": "text", + "content": "Proposition 1 (Implication of Assumption 1). Given the mask " + }, + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "type": "inline_equation", + "content": "\\mathbf{p}" + }, + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "type": "text", + "content": ", we define the heterogeneity of the block of weights that are not variance reduced at the optimum " + }, + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "type": "inline_equation", + "content": "\\mathbf{x}^*" + }, + { + "bbox": [ + 57, + 580, + 287, + 615 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 92, + 618, + 286, + 649 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 92, + 618, + 286, + 649 + ], + "spans": [ + { + "bbox": [ + 92, + 618, + 286, + 649 + ], + "type": "interline_equation", + "content": "\\zeta_ {1 - p} ^ {2} := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\left\\| (\\mathbf {1} - \\boldsymbol {p}) \\odot \\nabla f _ {i} \\left(\\boldsymbol {x} ^ {*}\\right) \\right\\| ^ {2}, \\tag {12}", + "image_path": "a1f114be7e69772cd4f8eeaccb72db1f2fa87d6e5572e7d831d632f32f411d64.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 57, + 655, + 234, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 655, + 234, + 666 + ], + "spans": [ + { + "bbox": [ + 57, + 655, + 234, + 666 + ], + "type": "text", + "content": "If Assumption 1 holds, then it also holds 
that:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 148, + 668, + 286, + 683 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 148, + 668, + 286, + 683 + ], + "spans": [ + { + "bbox": [ + 148, + 668, + 286, + 683 + ], + "type": "interline_equation", + "content": "\\zeta_ {1 - p} ^ {2} \\leq \\zeta^ {2}. \\tag {13}", + "image_path": "fde98237725fc10dd71b949517772fc62738acc2bb7151de82495096a0523971.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "spans": [ + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": "In Proposition 1, " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\zeta_{1 - p}^2 = \\zeta^2" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": " if " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\pmb{p} = \\mathbf{0}" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\zeta_{1 - p}^2 = 0" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": " if " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\pmb{p} = \\mathbf{1}" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": ". 
If " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\pmb{p} \\neq \\mathbf{0}" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\pmb{p} \\neq \\mathbf{1}" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": ", as the heterogeneity of the shallow weights is lower than the deeper weights [41], we have " + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "inline_equation", + "content": "\\zeta_{1 - p}^2 \\leq \\zeta^2" + }, + { + "bbox": [ + 304, + 81, + 536, + 129 + ], + "type": "text", + "content": ". Similarly, we can validate Proposition 2." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "spans": [ + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "text", + "content": "Proposition 2 (Implication of Assumption 2). 
Given the mask " + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "inline_equation", + "content": "\\pmb{p}" + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "text", + "content": ", we assume there exists constant " + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "inline_equation", + "content": "\\hat{\\zeta}_{1 - p}" + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "text", + "content": " such that " + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "inline_equation", + "content": "\\forall \\pmb{x} \\in \\mathbb{R}^d" + }, + { + "bbox": [ + 304, + 137, + 536, + 182 + ], + "type": "text", + "content": ", the heterogeneity of the block of weights that are not variance reduced:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 345, + 185, + 535, + 217 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 345, + 185, + 535, + 217 + ], + "spans": [ + { + "bbox": [ + 345, + 185, + 535, + 217 + ], + "type": "interline_equation", + "content": "\\frac {1}{N} \\sum_ {i = 1} ^ {N} | | (\\mathbf {1} - \\boldsymbol {p}) \\odot \\nabla f _ {i} (\\boldsymbol {x}) | | ^ {2} \\leq \\hat {\\zeta} _ {1 - p}, \\tag {14}", + "image_path": "c4f01f2e7c48f11e0486242eddabe51fc30221bbbfcde43177ee627c1411d1e5.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 305, + 225, + 482, + 237 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 225, + 482, + 237 + ], + "spans": [ + { + "bbox": [ + 305, + 225, + 482, + 237 + ], + "type": "text", + "content": "If Assumption 2 holds, then it also holds that:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 396, + 240, + 535, + 255 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 396, + 240, + 535, + 255 + ], + "spans": [ + { + "bbox": [ + 396, + 240, + 535, + 255 + ], + "type": "interline_equation", + "content": "\\hat {\\zeta} _ {1 - p} ^ {2} \\leq \\hat {\\zeta} ^ {2}. 
\\tag {15}", + "image_path": "561e0f508ff4b80e5f7aaa8d0fb13fa462afdea8ab7426e966d477d4dfc08a1d.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "spans": [ + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": "Theorem 1. For any " + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": "-smooth function " + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "inline_equation", + "content": "\\{f_i\\}" + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": ", the output of FedPVR has expected error smaller than " + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "inline_equation", + "content": "\\epsilon" + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "inline_equation", + "content": "\\eta_g = \\sqrt{N}" + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": " and some values of " + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "inline_equation", + "content": "\\eta_l" + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "inline_equation", + "content": "R" + }, + { + "bbox": [ + 305, + 266, + 537, + 301 + ], + "type": "text", + "content": " satisfying:" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 316, + 309, + 511, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 309, + 511, + 328 + ], + "spans": [ + { + "bbox": [ + 316, + 309, + 511, + 328 + ], + "type": "text", + "content": "Strongly convex: " + }, + { + "bbox": [ + 316, + 309, + 511, + 328 
+ ], + "type": "inline_equation", + "content": "\\eta_l \\leq \\min \\left(\\frac{1}{80K\\eta_g\\beta}, \\frac{26}{20\\mu K\\eta_g}\\right)" + }, + { + "bbox": [ + 316, + 309, + 511, + 328 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 362, + 337, + 535, + 368 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 362, + 337, + 535, + 368 + ], + "spans": [ + { + "bbox": [ + 362, + 337, + 535, + 368 + ], + "type": "interline_equation", + "content": "R = \\tilde {\\mathcal {O}} \\left(\\frac {\\sigma^ {2}}{\\mu N K \\epsilon} + \\frac {\\zeta_ {1 - p} ^ {2}}{\\mu \\epsilon} + \\frac {\\beta}{\\mu}\\right), \\tag {16}", + "image_path": "d02c53d4d6cc3c2ec50ae486dcc623b46a9aee49531afd0400316618df856134.jpg" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 316, + 374, + 447, + 390 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 316, + 374, + 447, + 390 + ], + "spans": [ + { + "bbox": [ + 316, + 374, + 447, + 390 + ], + "type": "text", + "content": "- General convex: " + }, + { + "bbox": [ + 316, + 374, + 447, + 390 + ], + "type": "inline_equation", + "content": "\\eta_{l} \\leq \\frac{1}{80K\\eta_{g}\\beta}" + }, + { + "bbox": [ + 316, + 374, + 447, + 390 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 337, + 399, + 535, + 430 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 337, + 399, + 535, + 430 + ], + "spans": [ + { + "bbox": [ + 337, + 399, + 535, + 430 + ], + "type": "interline_equation", + "content": "R = \\mathcal {O} \\left(\\frac {\\sigma^ {2} D}{K N \\epsilon^ {2}} + \\frac {\\zeta_ {1 - p} ^ {2} D}{\\epsilon^ {2}} + \\frac {\\beta D}{\\epsilon} + F\\right), \\tag {17}", + "image_path": "d4a258a32d6934c37bae8c545e6085164122f99564cba3c918df1905432450f0.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 316, + 436, + 501, + 451 + ], + "type": "text", + "angle": 0, + "lines": [ + 
{ + "bbox": [ + 316, + 436, + 501, + 451 + ], + "spans": [ + { + "bbox": [ + 316, + 436, + 501, + 451 + ], + "type": "text", + "content": "Non-convex: " + }, + { + "bbox": [ + 316, + 436, + 501, + 451 + ], + "type": "inline_equation", + "content": "\\eta_l \\leq \\frac{1}{26K\\eta_g\\beta}" + }, + { + "bbox": [ + 316, + 436, + 501, + 451 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 316, + 436, + 501, + 451 + ], + "type": "inline_equation", + "content": "R \\geq 1" + }, + { + "bbox": [ + 316, + 436, + 501, + 451 + ], + "type": "text", + "content": ", then:" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 345, + 460, + 535, + 491 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 345, + 460, + 535, + 491 + ], + "spans": [ + { + "bbox": [ + 345, + 460, + 535, + 491 + ], + "type": "interline_equation", + "content": "R = \\mathcal {O} \\left(\\frac {\\beta \\sigma^ {2} F}{K N \\epsilon^ {2}} + \\frac {\\beta \\hat {\\zeta} _ {1 - p} ^ {2} F}{N \\epsilon^ {2}} + \\frac {\\beta F}{\\epsilon}\\right), \\tag {18}", + "image_path": "e7c40f9545684d4d4a2ebe87f41abe8bb8e513fc155b092bc7e618bd8041740e.jpg" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 305, + 494, + 496, + 507 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 494, + 496, + 507 + ], + "spans": [ + { + "bbox": [ + 305, + 494, + 496, + 507 + ], + "type": "text", + "content": "Where " + }, + { + "bbox": [ + 305, + 494, + 496, + 507 + ], + "type": "inline_equation", + "content": "D\\coloneqq ||\\pmb {x}^0 -\\pmb{x}^* ||^2" + }, + { + "bbox": [ + 305, + 494, + 496, + 507 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 305, + 494, + 496, + 507 + ], + "type": "inline_equation", + "content": "F\\coloneqq f(\\pmb {x}^0) - f^*" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "spans": [ 
+ { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": "Given the above assumptions, the convergence rate is given in Theorem I. When " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "p = 1" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": ", we recover SCAFFOLD convergence guarantee as " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "\\zeta_{1 - p}^2 = 0" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "\\hat{\\zeta}_{1 - p}^2 = 0" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": ". In the strongly convex case, the effect of the heterogeneity of the block of weights that are not variance reduced " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "\\zeta_{1 - p}^2" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": " becomes negligible if " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathcal{O}}\\left(\\frac{\\zeta_{1 - p}^2}{\\epsilon}\\right)" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": " is sufficiently smaller than " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "\\tilde{\\mathcal{O}}\\left(\\frac{\\sigma^2}{NK\\epsilon}\\right)" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": ". 
In such a case, our rate is " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "\\frac{\\sigma^2}{NK\\epsilon} + \\frac{1}{\\mu}" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": ", which recovers the SCAFFOLD rate in the strongly convex case without sampling and further matches that of SGD (with mini-batch size " + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "inline_equation", + "content": "K" + }, + { + "bbox": [ + 304, + 516, + 536, + 656 + ], + "type": "text", + "content": " on each worker). We also recover the FedAvg rate* in the simple IID case. See Appendix B for the full proof." + } + ] + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "spans": [ + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "text", + "content": "*FedAvg in the strongly convex case has the rate " + }, + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "inline_equation", + "content": "R = \\tilde{\\mathcal{O}}\\left(\\frac{\\sigma^2}{\\mu K N \\epsilon} + \\frac{\\sqrt{\\beta} G}{\\mu \\sqrt{\\epsilon}} + \\frac{\\beta}{\\mu}\\right)" + }, + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "text", + "content": " measures the gradient dissimilarity. In the simple IID case, " + }, + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "inline_equation", + "content": "G = 0" + }, + { + "bbox": [ + 305, + 665, + 535, + 701 + ], + "type": "text", + "content": " [14]."
+ } + ] + } + ], + "index": 28 + }, + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "text", + "content": "3968" + } + ] + } + ], + "index": 29 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 85, + 168, + 99 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 85, + 168, + 99 + ], + "spans": [ + { + "bbox": [ + 57, + 85, + 168, + 99 + ], + "type": "text", + "content": "4. Experimental setup" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "spans": [ + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "text", + "content": "We demonstrate the effectiveness of our approach with CIFAR10 [19] and CIFAR100 [19] on image classification tasks. We simulate the data heterogeneity scenario following [22] by partitioning the data according to the Dirichlet distribution with the concentration parameter " + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "text", + "content": ". The smaller " + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "inline_equation", + "content": "\\alpha" + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "text", + "content": " is, the more imbalanced the data distribution across clients becomes. An example of the data distribution over multiple clients using the CIFAR10 dataset can be seen in Fig. 2. 
In our experiment, we use " + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "inline_equation", + "content": "\\alpha \\in \\{0.1, 0.5, 1.0\\}" + }, + { + "bbox": [ + 56, + 105, + 286, + 276 + ], + "type": "text", + "content": " as these are commonly used concentration parameters [22]. Each client has its local data, and this data is kept the same across all communication rounds. We hold out the test dataset at the server for evaluating the classification performance of the server model. Following [22], we perform the same data augmentation for all the experiments." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "spans": [ + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "text", + "content": "We use two models: VGG-11 and ResNet-8 following [22]. We perform variance reduction for the last three layers in VGG-11 and the last layer in ResNet-8. We use 10 clients with full participation following [41] (close to a cross-silo setup) and a batch size of 256. Each client performs 10 local epochs of model updating. We set the server learning rate " + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "inline_equation", + "content": "\\eta_{g} = 1" + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "text", + "content": " for all the models [14]. We tune the clients' learning rate from " + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "inline_equation", + "content": "\\{0.05, 0.1, 0.2, 0.3\\}" + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "text", + "content": " for each individual experiment. The learning rate schedule is experimentally chosen from constant, cosine decay [23], and multiple step decay [22]. We compare our method with the representative federated learning algorithms FedAvg [25], FedProx [31], SCAFFOLD [14], and FedDyn [1]. 
All the results are averaged over three repeated experiments with different random initialization. We leave " + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 56, + 277, + 287, + 495 + ], + "type": "text", + "content": " of the training data from each client out as the validation data to tune the hyperparameters (learning rate and schedule) per client. See Appendix C for additional experimental setups. The code is at github.com/lyn1874/fedpvr." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 498, + 175, + 511 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 498, + 175, + 511 + ], + "spans": [ + { + "bbox": [ + 57, + 498, + 175, + 511 + ], + "type": "text", + "content": "5. Experimental results" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 56, + 517, + 287, + 655 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 517, + 287, + 655 + ], + "spans": [ + { + "bbox": [ + 56, + 517, + 287, + 655 + ], + "type": "text", + "content": "In this section, we demonstrate the performance of our proposed approach in the FL setup with data heterogeneity. We compare our method with the existing state-of-the-art algorithms on various datasets and deep neural networks. For the baseline approaches, we finetune the hyperparameters and report only the best performance obtained. Our main findings are: 1) our method is more communication efficient than the baseline approaches; 2) conformal prediction is an effective tool to improve FL performance in high data heterogeneity scenarios; and 3) there is a benefit to the trade-off between diversity and uniformity when using deep neural networks in FL." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 57, + 660, + 259, + 673 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 660, + 259, + 673 + ], + "spans": [ + { + "bbox": [ + 57, + 660, + 259, + 673 + ], + "type": "text", + "content": "5.1. 
Communication efficiency and accuracy" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 57, + 678, + 287, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 678, + 287, + 702 + ], + "spans": [ + { + "bbox": [ + 57, + 678, + 287, + 702 + ], + "type": "text", + "content": "We first report the number of rounds required to achieve a certain level of top-" + }, + { + "bbox": [ + 57, + 678, + 287, + 702 + ], + "type": "inline_equation", + "content": "1" + }, + { + "bbox": [ + 57, + 678, + 287, + 702 + ], + "type": "text", + "content": " accuracy (66% for CIFAR10 and" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 304, + 86, + 536, + 213 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 86, + 536, + 213 + ], + "spans": [ + { + "bbox": [ + 304, + 86, + 536, + 213 + ], + "type": "inline_equation", + "content": "44\\%" + }, + { + "bbox": [ + 304, + 86, + 536, + 213 + ], + "type": "text", + "content": " for CIFAR100) in Table 2. An algorithm is more communication efficient if it requires fewer rounds to achieve the same accuracy and/or if it transmits fewer parameters between the clients and server. Compared to the baseline approaches, we require far fewer rounds for almost all types of data heterogeneity and models. We achieve a speedup of between 1.5x and 6.7x over FedAvg. We also observe that ResNet-8 tends to converge more slowly than VGG-11, which may be due to aggregating Batch Normalization layers that are discrepant across the local data distributions [22]." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "spans": [ + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "text", + "content": "We next compare the top-1 accuracy between centralized learning and federated learning algorithms. 
For the centralized learning experiment, we tune the learning rate from " + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "inline_equation", + "content": "\\{0.01, 0.05, 0.1\\}" + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "text", + "content": " and report the best test accuracy based on the validation dataset. We train the model for 800 epochs, which is the same as the total number of epochs in the federated learning algorithms (80 communication rounds x 10 local epochs). The results are shown in Table 3. We also show the number of copies of the parameters that need to be transmitted between the server and clients (e.g. 2x means we communicate " + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "inline_equation", + "content": "y_i" + }, + { + "bbox": [ + 304, + 213, + 536, + 339 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "spans": [ + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "type": "text", + "content": "Table 3 shows that our approach achieves a much better top-1 accuracy than FedAvg while transmitting a similar or slightly larger number of parameters between the server and client per round. Our method also achieves slightly better accuracy than centralized learning when the data is less heterogeneous (e.g. 
" + }, + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "type": "inline_equation", + "content": "\\alpha = 0.5" + }, + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "type": "text", + "content": " for CIFAR10 and " + }, + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "type": "inline_equation", + "content": "\\alpha = 1.0" + }, + { + "bbox": [ + 304, + 341, + 536, + 421 + ], + "type": "text", + "content": " for CIFAR100)." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 429, + 425, + 442 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 429, + 425, + 442 + ], + "spans": [ + { + "bbox": [ + 305, + 429, + 425, + 442 + ], + "type": "text", + "content": "5.2. Conformal prediction" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 304, + 447, + 536, + 505 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 447, + 536, + 505 + ], + "spans": [ + { + "bbox": [ + 304, + 447, + 536, + 505 + ], + "type": "text", + "content": "When the data heterogeneity is high across clients, it is difficult for a federated learning algorithm to match the centralized learning performance [24]. Therefore, we demonstrate the benefit of using simple post-processing conformal prediction to improve the model performance." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 506, + 536, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 506, + 536, + 608 + ], + "spans": [ + { + "bbox": [ + 304, + 506, + 536, + 608 + ], + "type": "text", + "content": "We examine the relationship between the empirical coverage and the average predictive set size for the server model after 80 communication rounds for each federated learning algorithm. The empirical coverage is the percentage of the data samples where the correct prediction is in the predictive set, and the average predictive size is the average of the length of the predictive sets over all the test images [3]. See Appendix. 
C for more information about conformal prediction setup and results." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "spans": [ + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "content": "The results for when " + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "inline_equation", + "content": "\\alpha = 0.1" + }, + { + "bbox": [ + 304, + 609, + 536, + 702 + ], + "type": "text", + "content": " for both datasets and architectures are shown in Fig. 3. We show that by slightly increasing the predictive set size, we can achieve accuracy similar to the centralized performance. Moreover, our approach tends to surpass the centralized top-1 performance as fast as or faster than other approaches. In sensitive use cases such as chemical threat detection, conformal prediction is a valuable tool to achieve certified accuracy at the" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 299, + 743 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 299, + 743 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 299, + 743 + ], + "type": "text", + "content": "3969" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 57, + 107, + 534, + 201 + ], + "blocks": [ + { + "bbox": [ + 57, + 85, + 535, + 106 + ], + "lines": [ + { + "bbox": [ + 57, + 85, + 535, + 106 + ], + "spans": [ + { + "bbox": [ + 57, + 85, + 535, + 106 + ], + "type": "text", + "content": "Table 2. The required number of communication rounds (speedup compared to FedAvg) to achieve a certain level of top-1 accuracy (66% for the CIFAR10 dataset and 44% for the CIFAR100 dataset). Our method requires fewer rounds to achieve the same accuracy."
+ } + ] + } + ], + "index": 0, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 107, + 534, + 201 + ], + "lines": [ + { + "bbox": [ + 57, + 107, + 534, + 201 + ], + "spans": [ + { + "bbox": [ + 57, + 107, + 534, + 201 + ], + "type": "table", + "html": "
CIFAR10 (66%)CIFAR100 (44%)
α=0.1α=0.5α=0.1α=1.0
VGG-11ResNet-8VGG-11ResNet-8VGG-11ResNet-8VGG-11ResNet-8
No. roundsNo. roundsNo. roundsNo. roundsNo. roundsNo. roundsNo. roundsNo. rounds
FedAvg55(1.0x)90(1.0x)15(1.0x)15(1.0x)100 + (1.0x)100 + (1.0x)80(1.0x)56(1.0x)
FedProx52(1.1x)75(1.2x)16(0.9x)20(0.8x)100 + (1.0x)100 + (1.0x)80(1.0x)59(0.9x)
SCAFFOLD39(1.4x)57(1.6x)14(1.0x)9(1.7x)80 (>1.3x)61 (>1.6x)36(2.2x)25(2.2x)
FedDyn27(2.0x)67(1.3x)15(1.0x)34(0.4x)80 + (-)80 + (-)24(3.3x)51(1.1x)
Ours27(2.0x)50(1.8x)9(1.6x)5(3.0x)37 (>2.7x)66 (>1.5x)12(6.7x)15(3.7x)
", + "image_path": "364380cd708c2b69a71a67d4be916ae7d81e6ee36837ce35c4c69f16cc0b7135.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_body" + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 57, + 241, + 530, + 354 + ], + "blocks": [ + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "lines": [ + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "spans": [ + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "type": "text", + "content": "Table 3. The top-1 accuracy " + }, + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "type": "inline_equation", + "content": "(\\%)" + }, + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "type": "text", + "content": " after running 80 communication rounds using different methods on CIFAR10 and CIFAR100, together with the number of communicated parameters between the client and the server. We train the centralised model for 800 epochs " + }, + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "type": "inline_equation", + "content": "(= 80" + }, + { + "bbox": [ + 57, + 209, + 535, + 240 + ], + "type": "text", + "content": " rounds x 10 local epochs in FL). Higher accuracy is better, and we highlight the best accuracy in red colour." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 57, + 241, + 530, + 354 + ], + "lines": [ + { + "bbox": [ + 57, + 241, + 530, + 354 + ], + "spans": [ + { + "bbox": [ + 57, + 241, + 530, + 354 + ], + "type": "table", + "html": "
VGG-11ResNet-8
CIFAR10CIFAR100server↔clientCIFAR10CIFAR100server↔client
α = 0.1α = 0.5α = 0.1α = 1.0α = 0.1α = 0.5α = 0.1α = 1.0
Centralised87.556.3-83.456.8-
FedAvg69.380.934.345.02x64.979.138.847.02x
Fedprox72.180.435.043.22x66.177.942.047.22x
SCAFFOLD74.183.543.450.64x66.680.343.852.34x
FedDyn77.480.143.845.22x63.872.936.448.12x
Ours78.284.943.558.02.1x69.383.643.552.32.02x
", + "image_path": "aa4d0d4b9e620cc4118b43638f17bf4350ca1e4105a3bb11ba5f5e963a42a140.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_body" + } + ], + "index": 3 + }, + { + "bbox": [ + 57, + 373, + 221, + 385 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 373, + 221, + 385 + ], + "spans": [ + { + "bbox": [ + 57, + 373, + 221, + 385 + ], + "type": "text", + "content": "cost of a slightly larger predictive set size." + } + ] + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 67, + 397, + 279, + 554 + ], + "blocks": [ + { + "bbox": [ + 67, + 397, + 279, + 554 + ], + "lines": [ + { + "bbox": [ + 67, + 397, + 279, + 554 + ], + "spans": [ + { + "bbox": [ + 67, + 397, + 279, + 554 + ], + "type": "image", + "image_path": "951dc30d681cc38c34a57c084476b9c21ec213ab178859442e6120b674b105b4.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 56, + 565, + 287, + 630 + ], + "lines": [ + { + "bbox": [ + 56, + 565, + 287, + 630 + ], + "spans": [ + { + "bbox": [ + 56, + 565, + 287, + 630 + ], + "type": "text", + "content": "Figure 3. Relation between average predictive size and empirical coverage when " + }, + { + "bbox": [ + 56, + 565, + 287, + 630 + ], + "type": "inline_equation", + "content": "\\alpha = 0.1" + }, + { + "bbox": [ + 56, + 565, + 287, + 630 + ], + "type": "text", + "content": ". By slightly increasing the predictive set size, we can achieve performance similar to that of the centralised model (top-1 accuracy) even if the data are heterogeneously distributed across clients. Our method surpasses the centralised top-1 accuracy as fast as or faster than other approaches."
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 57, + 637, + 190, + 650 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 637, + 190, + 650 + ], + "spans": [ + { + "bbox": [ + 57, + 637, + 190, + 650 + ], + "type": "text", + "content": "5.3. Diversity and uniformity" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 56, + 655, + 287, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 56, + 655, + 287, + 702 + ], + "spans": [ + { + "bbox": [ + 56, + 655, + 287, + 702 + ], + "type": "text", + "content": "We have shown that our algorithm achieves a better speedup and performance against the existing approaches with only lightweight modifications to FedAvg. We next investigate what factors lead to better accuracy. Specifi" + } + ] + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 317, + 370, + 525, + 518 + ], + "blocks": [ + { + "bbox": [ + 317, + 370, + 525, + 518 + ], + "lines": [ + { + "bbox": [ + 317, + 370, + 525, + 518 + ], + "spans": [ + { + "bbox": [ + 317, + 370, + 525, + 518 + ], + "type": "image", + "image_path": "31d31a04d19f4d675f9b4101e7054b8bb0d95550232e6f4009e9d8aff64aa0b5.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 304, + 526, + 535, + 600 + ], + "lines": [ + { + "bbox": [ + 304, + 526, + 535, + 600 + ], + "spans": [ + { + "bbox": [ + 304, + 526, + 535, + 600 + ], + "type": "text", + "content": "Figure 4. Drift diversity and learning curve for ResNet-8 on CIFAR100 with " + }, + { + "bbox": [ + 304, + 526, + 535, + 600 + ], + "type": "inline_equation", + "content": "\\alpha = 1.0" + }, + { + "bbox": [ + 304, + 526, + 535, + 600 + ], + "type": "text", + "content": ". Compared to FedAvg, SCAFFOLD and our method can both improve the agreement between the classifiers. 
Compared to SCAFFOLD, our method results in a higher gradient diversity in the early communication rounds, which tends to boost the learning speed as the curvature of the drift diversity seems to match the learning curve." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 9 + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "spans": [ + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "text", + "content": "cally, we calculate the drift diversity " + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "text", + "content": " across clients after each communication round using Eq. 2 and average " + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "inline_equation", + "content": "\\xi" + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "text", + "content": " across three runs. We show the result of using ResNet-8 and CIFAR100 with " + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "inline_equation", + "content": "\\alpha = 1.0" + }, + { + "bbox": [ + 304, + 620, + 535, + 666 + ], + "type": "text", + "content": " in Fig. 4." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 304, + 667, + 535, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 304, + 667, + 535, + 702 + ], + "spans": [ + { + "bbox": [ + 304, + 667, + 535, + 702 + ], + "type": "text", + "content": "Fig. 4 shows the drift diversity for different layers in ResNet-8 and the testing accuracy along the communication rounds. 
We observe that classifiers have the highest diver" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "text", + "content": "3970" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 57, + 86, + 287, + 234 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 86, + 287, + 234 + ], + "spans": [ + { + "bbox": [ + 57, + 86, + 287, + 234 + ], + "type": "text", + "content": "sity in FedAvg against other layers and methods. SCAFFOLD, which applies control variate on the entire model, can effectively reduce the disagreement of the directions and scales of the averaged gradient across clients. Our proposed algorithm that performs variance reduction only on the classifiers can reduce the diversity of the classifiers even further but increase the diversity of the feature extraction layers. This high diversity tends to boost the learning speed as the curvature of the diversity movement (Fig. 4 left) seems to match the learning curve (Fig. 4 right). Based on this observation, we hypothesize that this diversity along the feature extractor and the uniformity of the classifier is the main reason for our better speedup." + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "spans": [ + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "text", + "content": "To test this hypothesis, we perform an experiment where we use variance reduction starting from different layers of a neural network. 
If the layer at which variance reduction starts influences the learning speed, this indicates where in a neural network we need more diversity and where we need more uniformity. Here we show the result of using VGG-11 on CIFAR100 with " + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "inline_equation", + "content": "\\alpha = 1.0" + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "text", + "content": " as there are more layers in VGG-11. The result is shown in Fig. 5 where SVR: " + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "inline_equation", + "content": "16 \\rightarrow 20" + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "text", + "content": " corresponds to our approach and SVR: " + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "inline_equation", + "content": "0 \\rightarrow 20" + }, + { + "bbox": [ + 57, + 235, + 287, + 374 + ], + "type": "text", + "content": " corresponds to SCAFFOLD, which applies variance reduction to the entire model. Results for ResNet-8 are shown in Appendix C." + } + ] + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 66, + 382, + 281, + 516 + ], + "blocks": [ + { + "bbox": [ + 66, + 382, + 281, + 516 + ], + "lines": [ + { + "bbox": [ + 66, + 382, + 281, + 516 + ], + "spans": [ + { + "bbox": [ + 66, + 382, + 281, + 516 + ], + "type": "image", + "image_path": "f270fd19410f370dc1632f06a91683cf88e3cc1148ad6fcfe9916088c224d50e.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "lines": [ + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "spans": [ + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "type": "text", + "content": "Figure 5. Influence on the learning speed of applying stochastic variance reduction (SVR) starting from different layer positions in a neural network. 
" + }, + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "type": "inline_equation", + "content": "\\mathrm{SVR}:0\\rightarrow 20" + }, + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "type": "text", + "content": " applies variance reduction on the entire model (SCAFFOLD). " + }, + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "type": "inline_equation", + "content": "\\mathrm{SVR}:16\\rightarrow 20" + }, + { + "bbox": [ + 57, + 524, + 287, + 597 + ], + "type": "text", + "content": " applies variance reduction from the layer index 16 to 20 (ours). The later we apply variance reduction, the better performance speedup we obtain. However, no variance reduction (FedAvg) performs the worst here." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "bbox": [ + 57, + 609, + 287, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 57, + 609, + 287, + 702 + ], + "spans": [ + { + "bbox": [ + 57, + 609, + 287, + 702 + ], + "type": "text", + "content": "We see from Fig. 5 that the deeper in a neural network we apply variance reduction, the better learning speedup we can obtain. There is no clear performance difference between where to activate the variance reduction when the layer index is over 10. However, applying no variance reduction (FedAvg) achieves by far the worst performance. We believe that these experimental results indicate that in a distributed optimization framework, to boost the learning" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 305, + 86, + 536, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 86, + 536, + 133 + ], + "spans": [ + { + "bbox": [ + 305, + 86, + 536, + 133 + ], + "type": "text", + "content": "speed of an over-parameterized model, we need some levels of diversity in the middle and early layers for learning richer feature representation and some degrees of uniformity in the classifiers for making a less biased decision." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 305, + 143, + 375, + 154 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 143, + 375, + 154 + ], + "spans": [ + { + "bbox": [ + 305, + 143, + 375, + 154 + ], + "type": "text", + "content": "6. Conclusion" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 305, + 162, + 536, + 310 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 162, + 536, + 310 + ], + "spans": [ + { + "bbox": [ + 305, + 162, + 536, + 310 + ], + "type": "text", + "content": "In this work, we studied stochastic gradient descent learning for deep neural network classifiers in a federated learning setting, where each client updates its local model using stochastic gradient descent on local data. A central model is periodically updated (by averaging local model parameters) and broadcast to the clients under a communication bandwidth constraint. When data is homogeneous across clients, this procedure is comparable to centralized learning in terms of efficiency; however, when data is heterogeneous, learning is impeded. Our hypothesis for the primary reason for this is that when the local models are out of alignment, updating the central model by averaging is ineffective and sometimes even destructive." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 305, + 311, + 536, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 311, + 536, + 506 + ], + "spans": [ + { + "bbox": [ + 305, + 311, + 536, + 506 + ], + "type": "text", + "content": "Examining the diversity across clients of their local model updates and their learned feature representations, we found that the misalignment between models is much stronger in the last few neural network layers than in the rest of the network. This finding inspired us to experiment with aligning the local models using a partial variance reduction technique applied only on the last layers, which we named FedPVR. 
We found that this led to a substantial improvement in convergence speed compared to the competing federated learning methods. In some cases, our method even outperformed centralized learning. We derived a bound on the convergence rate of our proposed method, which matches the rates for SGD when the gradient diversity across clients is sufficiently low. Compared with FedAvg, the communication cost of our method is only marginally worse, as it requires transmitting control variates for the last layers." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 305, + 506, + 536, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 506, + 536, + 609 + ], + "spans": [ + { + "bbox": [ + 305, + 506, + 536, + 609 + ], + "type": "text", + "content": "We believe our FedPVR algorithm strikes a good balance between simplicity and efficiency, requiring only a minor modification to the established FedAvg method; however, in our further research, we plan to pursue more optimal methods for aligning and guiding the local learning algorithms, e.g. using adaptive procedures. Furthermore, the degree of over-parameterization in the neural network layers (e.g. feature extraction vs bottlenecks) may also play an important role, which we would like to understand better." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 305, + 619, + 404, + 633 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 619, + 404, + 633 + ], + "spans": [ + { + "bbox": [ + 305, + 619, + 404, + 633 + ], + "type": "text", + "content": "Acknowledgements" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 305, + 639, + 536, + 697 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 305, + 639, + 536, + 697 + ], + "spans": [ + { + "bbox": [ + 305, + 639, + 536, + 697 + ], + "type": "text", + "content": "The first three authors gratefully acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 883390 (H2020-SU-SECU-2019 SERSing Project). BL gratefully acknowledges financial support from the Otto Mønsted Foundation." + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 297, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 297, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 297, + 742 + ], + "type": "text", + "content": "3971" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 58, + 85, + 114, + 97 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 85, + 114, + 97 + ], + "spans": [ + { + "bbox": [ + 58, + 85, + 114, + 97 + ], + "type": "text", + "content": "References" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 59, + 104, + 287, + 701 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 63, + 104, + 287, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 104, + 287, + 146 + ], + "spans": [ + { + "bbox": [ + 63, + 104, + 287, + 146 + ], + "type": "text", + "content": "[1] Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, and Venkatesh Saligrama. 
Federated learning based on dynamic regularization. CoRR, abs/2111.04263, 2021. 1, 2, 6, 26" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 63, + 148, + 287, + 187 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 148, + 287, + 187 + ], + "spans": [ + { + "bbox": [ + 63, + 148, + 287, + 187 + ], + "type": "text", + "content": "[2] Dan Alistarh, Jerry Li, Ryota Tomioka, and Milan Vojnovic. QSGD: randomized quantization for communication-optimal stochastic gradient descent. CoRR, abs/1610.02132, 2016. 3" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 63, + 190, + 287, + 231 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 190, + 287, + 231 + ], + "spans": [ + { + "bbox": [ + 63, + 190, + 287, + 231 + ], + "type": "text", + "content": "[3] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations, 2021. 2, 3, 6" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 63, + 233, + 287, + 273 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 233, + 287, + 273 + ], + "spans": [ + { + "bbox": [ + 63, + 233, + 287, + 273 + ], + "type": "text", + "content": "[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709, 2020. 3" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 63, + 276, + 287, + 307 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 276, + 287, + 307 + ], + "spans": [ + { + "bbox": [ + 63, + 276, + 287, + 307 + ], + "type": "text", + "content": "[5] Liam Collins, Hamed Hassani, Aryan Mokhtari, and Sanjay Shakkottai. Fedavg with fine tuning: Local updates lead to representation learning, 2022. 
2" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 63, + 309, + 287, + 349 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 309, + 287, + 349 + ], + "spans": [ + { + "bbox": [ + 63, + 309, + 287, + 349 + ], + "type": "text", + "content": "[6] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. CoRR, abs/1407.0202, 2014. 3" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 63, + 351, + 287, + 381 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 351, + 287, + 381 + ], + "spans": [ + { + "bbox": [ + 63, + 351, + 287, + 381 + ], + "type": "text", + "content": "[7] Aaron Defazio and Léon Bottou. On the ineffectiveness of variance reduced optimization for deep learning. CoRR, abs/1812.04529, 2018. 2, 3, 5" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 63, + 383, + 287, + 444 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 383, + 287, + 444 + ], + "spans": [ + { + "bbox": [ + 63, + 383, + 287, + 444 + ], + "type": "text", + "content": "[8] Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, and Panos Kalnis. On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning. CoRR, abs/1911.08250, 2019. 3" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 63, + 447, + 287, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 63, + 447, + 287, + 509 + ], + "spans": [ + { + "bbox": [ + 63, + 447, + 287, + 509 + ], + "type": "text", + "content": "[9] Liang Gao, Huazhu Fu, Li Li, Yingwen Chen, Ming Xu, and Cheng-Zhong Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. 
In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10102-10111. IEEE, 2022. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 59, + 511, + 287, + 551 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 59, + 511, + 287, + 551 + ], + "spans": [ + { + "bbox": [ + 59, + 511, + 287, + 551 + ], + "type": "text", + "content": "[10] Malka N. Halgamuge, Moshe Zukerman, Kotagiri Ramamohanarao, and Hai Le Vu. An estimation of sensor energy consumption. Progress in Electromagnetics Research B, 12:259-295, 2009. 1, 2, 5" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 59, + 553, + 287, + 583 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 59, + 553, + 287, + 583 + ], + "spans": [ + { + "bbox": [ + 59, + 553, + 287, + 583 + ], + "type": "text", + "content": "[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. 2" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 59, + 586, + 287, + 615 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 59, + 586, + 287, + 615 + ], + "spans": [ + { + "bbox": [ + 59, + 586, + 287, + 615 + ], + "type": "text", + "content": "[12] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013. 3" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 59, + 618, + 287, + 701 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 59, + 618, + 287, + 701 + ], + "spans": [ + { + "bbox": [ + 59, + 618, + 287, + 701 + ], + "type": "text", + "content": "[13] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista A. Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. 
Gibbons, Marco Gruteser, Zaïd Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 307, + 87, + 536, + 701 + ], + "type": "list", + "angle": 0, + "index": 27, + "blocks": [ + { + "bbox": [ + 325, + 87, + 536, + 191 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 325, + 87, + 536, + 191 + ], + "spans": [ + { + "bbox": [ + 325, + 87, + 536, + 191 + ], + "type": "text", + "content": "Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, and Sen Zhao. Advances and open problems in federated learning. CoRR, abs/1912.04977, 2019. 1, 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 308, + 194, + 536, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 194, + 536, + 277 + ], + "spans": [ + { + "bbox": [ + 308, + 194, + 536, + 277 + ], + "type": "text", + "content": "[14] Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian Stich, and Ananda Theertha Suresh. SCAFFOLD: Stochastic controlled averaging for federated learning. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5132-5143. PMLR, 13-18 Jul 2020. 
1, 2, 3, 4, 5, 6, 11, 12, 13, 21" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 307, + 279, + 536, + 310 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 279, + 536, + 310 + ], + "spans": [ + { + "bbox": [ + 307, + 279, + 536, + 310 + ], + "type": "text", + "content": "[15] Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. Better communication complexity for local SGD. CoRR, abs/1909.04746, 2019. 5" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 307, + 312, + 536, + 385 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 312, + 536, + 385 + ], + "spans": [ + { + "bbox": [ + 307, + 312, + 536, + 385 + ], + "type": "text", + "content": "[16] Anastasia Koloskova, Nicolas Loizou, Sadra Boreiri, Martin Jaggi, and Sebastian Stich. A unified theory of decentralized SGD with changing topology and local updates. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5381-5393. PMLR, 13-18 Jul 2020. 5, 11, 13" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 307, + 388, + 536, + 428 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 388, + 536, + 428 + ], + "spans": [ + { + "bbox": [ + 307, + 388, + 536, + 428 + ], + "type": "text", + "content": "[17] Jakub Konečný, H. Brendan McMahan, Daniel Ramage, and Peter Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. CoRR, abs/1610.02527, 2016. 1, 2" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 307, + 431, + 536, + 461 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 431, + 536, + 461 + ], + "spans": [ + { + "bbox": [ + 307, + 431, + 536, + 461 + ], + "type": "text", + "content": "[18] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey E. Hinton. Similarity of neural network representations revisited. 
CoRR, abs/1905.00414, 2019. 4" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 307, + 463, + 536, + 484 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 463, + 536, + 484 + ], + "spans": [ + { + "bbox": [ + 307, + 463, + 536, + 484 + ], + "type": "text", + "content": "[19] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. 2, 6" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 307, + 486, + 536, + 516 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 486, + 536, + 516 + ], + "spans": [ + { + "bbox": [ + 307, + 486, + 536, + 516 + ], + "type": "text", + "content": "[20] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. CoRR, abs/2103.16257, 2021. 1, 3" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 307, + 518, + 536, + 549 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 518, + 536, + 549 + ], + "spans": [ + { + "bbox": [ + 307, + 518, + 536, + 549 + ], + "type": "text", + "content": "[21] Zhize Li, Dmitry Kovalev, Xun Qian, and Peter Richtárik. Acceleration for compressed gradient descent in distributed and federated optimization, 2020. 3" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 308, + 551, + 536, + 634 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 308, + 551, + 536, + 634 + ], + "spans": [ + { + "bbox": [ + 308, + 551, + 536, + 634 + ], + "type": "text", + "content": "[22] Tao Lin, Lingjing Kong, Sebastian U. Stich, and Martin Jaggi. Ensemble distillation for robust model fusion in federated learning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. 
6, 25, 26" + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 307, + 637, + 536, + 657 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 637, + 536, + 657 + ], + "spans": [ + { + "bbox": [ + 307, + 637, + 536, + 657 + ], + "type": "text", + "content": "[23] Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with restarts. CoRR, abs/1608.03983, 2016. 6" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 307, + 660, + 536, + 701 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 307, + 660, + 536, + 701 + ], + "spans": [ + { + "bbox": [ + 307, + 660, + 536, + 701 + ], + "type": "text", + "content": "[24] Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, and Jiashi Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. CoRR, abs/2106.05001, 2021. 1, 2, 3, 4, 6" + } + ] + } + ], + "index": 26 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 298, + 743 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 743 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 743 + ], + "type": "text", + "content": "3972" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 58, + 87, + 287, + 699 + ], + "type": "list", + "angle": 0, + "index": 14, + "blocks": [ + { + "bbox": [ + 58, + 87, + 287, + 118 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 87, + 287, + 118 + ], + "spans": [ + { + "bbox": [ + 58, + 87, + 287, + 118 + ], + "type": "text", + "content": "[25] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. Federated learning of deep networks using model averaging. CoRR, abs/1602.05629, 2016. 
2, 6" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 58, + 119, + 287, + 149 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 119, + 287, + 149 + ], + "spans": [ + { + "bbox": [ + 58, + 119, + 287, + 149 + ], + "type": "text", + "content": "[26] Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč, and Peter Richtárik. Distributed learning with compressed gradient differences. CoRR, abs/1901.09269, 2019. 3" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 58, + 150, + 287, + 203 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 150, + 287, + 203 + ], + "spans": [ + { + "bbox": [ + 58, + 150, + 287, + 203 + ], + "type": "text", + "content": "[27] Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, and Peter Richtárik. ProxSkip: Yes! local gradient steps provably lead to communication acceleration! finally! International Conference on Machine Learning (ICML), 2022. 2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 58, + 204, + 287, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 204, + 287, + 245 + ], + "spans": [ + { + "bbox": [ + 58, + 204, + 287, + 245 + ], + "type": "text", + "content": "[28] Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. CoRR, abs/2010.15327, 2020. 2, 4" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 58, + 246, + 287, + 277 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 246, + 287, + 277 + ], + "spans": [ + { + "bbox": [ + 58, + 246, + 287, + 277 + ], + "type": "text", + "content": "[29] Jaehoon Oh, Sangmook Kim, and Se-Young Yun. Fedbabu: Towards enhanced representation for federated image classification. CoRR, abs/2106.06042, 2021. 
2" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 58, + 277, + 287, + 330 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 277, + 287, + 330 + ], + "spans": [ + { + "bbox": [ + 58, + 277, + 287, + 330 + ], + "type": "text", + "content": "[30] Yaniv Romano, Matteo Sesia, and Emmanuel J. Candes. Classification with valid and adaptive coverage. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20, Red Hook, NY, USA, 2020. Curran Associates Inc. 3" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 58, + 330, + 287, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 330, + 287, + 373 + ], + "spans": [ + { + "bbox": [ + 58, + 330, + 287, + 373 + ], + "type": "text", + "content": "[31] Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. On the convergence of federated optimization in heterogeneous networks. CoRR, abs/1812.06127, 2018. 1, 2, 6, 26" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 58, + 373, + 287, + 405 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 373, + 287, + 405 + ], + "spans": [ + { + "bbox": [ + 58, + 373, + 287, + 405 + ], + "type": "text", + "content": "[32] Ohad Shamir, Nathan Srebro, and Tong Zhang. Communication efficient distributed optimization using an approximate newton-type method. CoRR, abs/1312.7853, 2013. 1, 2, 3" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 58, + 405, + 287, + 468 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 405, + 287, + 468 + ], + "spans": [ + { + "bbox": [ + 58, + 405, + 287, + 468 + ], + "type": "text", + "content": "[33] Micah J. Sheller, Brandon Edwards, G. Anthony Reina, Jason Martin, Sarthak Pati, Aikaterini Kotrotsou, Mikhail Milchenko, Weilin Xu, Daniel Marcus, Rivka R. Colen, and Spyridon Bakas. 
Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports, 10(1):12598, Jul 2020. 1" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 58, + 468, + 287, + 509 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 468, + 287, + 509 + ], + "spans": [ + { + "bbox": [ + 58, + 468, + 287, + 509 + ], + "type": "text", + "content": "[34] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. 2" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 58, + 511, + 287, + 541 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 511, + 287, + 541 + ], + "spans": [ + { + "bbox": [ + 58, + 511, + 287, + 541 + ], + "type": "text", + "content": "[35] Sebastian U. Stich. Unified optimal analysis of the (stochastic) gradient method. CoRR, abs/1907.04232, 2019. 5, 11, 14" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 58, + 543, + 287, + 584 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 543, + 287, + 584 + ], + "spans": [ + { + "bbox": [ + 58, + 543, + 287, + 584 + ], + "type": "text", + "content": "[36] Sebastian U. Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems 31 (NeurIPS), pages 4452-4463. Curran Associates, Inc., 2018. 3" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 58, + 585, + 287, + 626 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 585, + 287, + 626 + ], + "spans": [ + { + "bbox": [ + 58, + 585, + 287, + 626 + ], + "type": "text", + "content": "[37] Sebastian U. Stich and Sai Praneeth Karimireddy. The error-feedback framework: Better rates for SGD with delayed gradients and compressed communication. CoRR, abs/1909.05350, 2019. 
15" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 58, + 627, + 287, + 699 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 58, + 627, + 287, + 699 + ], + "spans": [ + { + "bbox": [ + 58, + 627, + 287, + 699 + ], + "type": "text", + "content": "[38] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In Yoshua Bengio and Yann LeCun, editors, 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. 2, 5" + } + ] + } + ], + "index": 13 + } + ], + "sub_type": "ref_text" + }, + { + "bbox": [ + 306, + 87, + 536, + 373 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 306, + 87, + 535, + 118 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 87, + 535, + 118 + ], + "spans": [ + { + "bbox": [ + 306, + 87, + 535, + 118 + ], + "type": "text", + "content": "[39] F Varno, M Saghayi, L Rafiee, S Gupta, S Matwin, and M Havaei. Minimizing client drift in federated learning via adaptive bias estimation. ArXiv, abs/2204.13170, 2022. 2" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 306, + 119, + 536, + 297 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 119, + 536, + 297 + ], + "spans": [ + { + "bbox": [ + 306, + 119, + 536, + 297 + ], + "type": "text", + "content": "[40] Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Agüera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas N. Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. 
Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horváth, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konečný, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtárik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake E. Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, and Wennan Zhu. A field guide to federated optimization. CoRR, abs/2107.06917, 2021. 2" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 306, + 298, + 536, + 341 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 298, + 536, + 341 + ], + "spans": [ + { + "bbox": [ + 306, + 298, + 536, + 341 + ], + "type": "text", + "content": "[41] Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, and Michael I. Jordan. TCT: Convexifying federated learning using bootstrapped neural tangent kernels, 2022. 2, 5, 6" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 306, + 342, + 536, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 306, + 342, + 536, + 373 + ], + "spans": [ + { + "bbox": [ + 306, + 342, + 536, + 373 + ], + "type": "text", + "content": "[42] Haoyu Zhao, Zhize Li, and Peter Richtárik. Fedpage: A fast local stochastic gradient method for communication-efficient federated learning. CoRR, abs/2108.04755, 2021. 
2" + } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "spans": [ + { + "bbox": [ + 279, + 733, + 298, + 742 + ], + "type": "text", + "content": "3973" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 595, + 842 + ], + "page_idx": 9 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file