diff --git "a/title_31K_G/test_title_long_2405.04233v1.json" "b/title_31K_G/test_title_long_2405.04233v1.json" new file mode 100644--- /dev/null +++ "b/title_31K_G/test_title_long_2405.04233v1.json" @@ -0,0 +1,305 @@ +{ + "url": "http://arxiv.org/abs/2405.04233v1", + "title": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models", + "abstract": "We introduce Vidu, a high-performance text-to-video generator that is capable\nof producing 1080p videos up to 16 seconds in a single generation. Vidu is a\ndiffusion model with U-ViT as its backbone, which unlocks the scalability and\nthe capability for handling long videos. Vidu exhibits strong coherence and\ndynamism, and is capable of generating both realistic and imaginative videos,\nas well as understanding some professional photography techniques, on par with\nSora -- the most powerful reported text-to-video generator. Finally, we perform\ninitial experiments on other controllable video generation, including\ncanny-to-video generation, video prediction and subject-driven generation,\nwhich demonstrate promising results.", + "authors": "Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models", + "main_content": "Introduction Diffusion models have obtained breakthrough progress on generating high-quality images, videos and other types of data, outperforming alternative approaches like auto-regressive networks. Previously, video generation models primarily relied on diffusion models [13, 9, 14] with the U-Net backbone [11], and focused on a single limited duration like 4 seconds [8, 5, 7, 4]. 
Our model, Vidu, demonstrates that a text-to-video diffusion model with U-ViT [1, 2] as its backbone can break this duration limitation by leveraging the scalability and the long sequence modeling ability of a transformer [15]. Vidu is capable of producing 1080p videos up to 16 seconds in a single generation, as well as images as videos of a single frame. Additionally, Vidu exhibits strong coherence and dynamism, and is capable of generating both realistic and imaginative videos. Vidu also has a preliminary understanding of some professional photography techniques, such as transitions, camera movements, lighting effects and emotional portrayal. We observe that, to some extent, the generation performance of Vidu is comparable with that of Sora [6], currently the most powerful text-to-video generator, and much better than that of other text-to-video generators. Finally, we perform initial experiments on other controllable video generation, including canny-to-video generation [16], video prediction and subject-driven generation [12]. All of them demonstrate promising results. 2 Text-to-Video Generation Vidu first employs a video autoencoder [10] to reduce both the spatial and temporal dimensions of videos for efficient training and inference. After that, Vidu employs a U-ViT [1] as the noise prediction network to model these compressed representations. Specifically, as shown in Figure 1, U-ViT splits the compressed videos into 3D patches, treats all inputs, including the time, text condition \u2217Second authors listed alphabetically. \u2021The corresponding author. 
and noisy 3D patches as tokens, and employs long skip connections between shallow and deep layers in a transformer. Figure 1: The U-ViT architecture for predicting the noise in videos. By leveraging the ability of transformers to process variable-length sequences, Vidu can handle videos with variable durations. Vidu is trained on vast amounts of text-video pairs, and it is infeasible to have all videos labeled by humans. To address this, we first train a high-performance video captioner optimized for understanding dynamic information in videos, and then automatically annotate all the training videos using this captioner. During inference, we apply the re-captioning technique [3] to rephrase user inputs into a form that is more suitable for the model. 2 \f2.1 Generating Videos of Different Lengths Since Vidu is trained on videos of various lengths, it can generate 1080p videos of all lengths up to 16 seconds, including images as videos of a single frame. We present examples in Figure 2. (a) 16 seconds. Prompt: A person clad in a space suit with a helmet and equipped with a chest light and arm device is seen closely examining and interacting with a variety of plants in a lush, indoor botanical setting. (b) 8 seconds. Prompt: A desolate lunar landscape with craters and a large moon in the sky transitions to a warmly lit interior of a spacecraft-like structure where a group of people are engaged in various activities. (c) Image. 
Prompt: An exquisite silverware piece, aesthetically adorned with intricate patterns and scenes, exhibits the detailed artisanship and metallic sheen. (d) Image. Prompt: Under the veil of nightfall, a rose reveals its subtle, exquisite beauty in the gentle moonlight. Figure 2: Vidu can generate videos of all lengths up to 16 seconds, including images. 3 \f2.2 3D Consistency Videos generated by Vidu exhibit strong 3D consistency. As the camera rotates, the video presents projections of the same object from different angles. For instance, as shown in Figure 3, the hair of the generated cat is naturally occluded as the camera rotates. (a) Prompt: This portrait depicts an orange cat with blue eyes, slowly rotating, inspired by Vermeer\u2019s \u2018Girl with a Pearl Earring\u2019. The cat is adorned with pearl earrings and has brown fur styled like a Dutch cap against a black background, illuminated by studio lighting. (b) Prompt: In a studio, there is a painting depicting a ship sailing through the rough sea. (c) Prompt: A red car is stuck in the snow, with the entire vehicle emitting green light and red signal lights flashing on the back. The camera slowly pans around the car. Figure 3: 3D consistency of Vidu. 4 \f2.3 Generating Cuts Vidu is capable of generating videos incorporating cuts. As shown in Figure 4, these videos present different perspectives of the same scene by switching camera angles, while maintaining consistency of subjects in the scene. (a) Prompt: A sculptor is intently working on a clay bust, meticulously refining its facial features with precise hand movements. (b) Prompt: Churning ocean waves at night with a lighthouse on the coast create an intense and somewhat foreboding atmosphere. The scene is set under an overcast sky, with the ocean\u2019s dark waters illuminated by natural light, highlighting the white foam of the waves. Figure 4: Vidu is capable of generating videos with cuts. 
5 \f2.4 Generating Transitions Vidu is capable of producing videos with transitions in a single generation. As shown in Figure 5, these transitions can connect two different scenes in an engaging manner. (a) Prompt: An elderly man with glasses, dressed in formal attire, is deeply engrossed in examining a large, ornate pocket watch. As the video progresses, there is a cinematic transition to a fantastical mechanical cityscape, viewed through the openwork of the watch. This shift evokes a sense of wonder and transports the viewer into a steampunk-inspired world where buildings and structures are made of metal and gears. (b) Prompt: A person holding a dessert with a fluffy layer of whipped cream elegantly drizzled with smooth chocolate sauce. As a dollop of cream falls, a mini polar bear appears, with floating icebergs nearby, set against a serene blue backdrop. Figure 5: Vidu is capable of generating videos with transitions. 6 \f2.5 Camera Movements Camera movements are physical adjustments or motions of the camera during filming that enhance the visual narrative and convey different perspectives and emotions within scenes. Vidu has learned these techniques from the data, enriching the visual experience for viewers. For instance, as shown in Figure 6, Vidu is capable of generating videos with camera movements including zoom, pan and dolly. (a) Zoom. 
7 \f2.6 Lighting Effects Vidu is capable of generating videos with impressive lighting effects, which help enhance the overall atmosphere. For example, as shown in Figure 7, the generated videos can evoke atmospheres of mystery and tranquility. Therefore, beyond the concrete entities within the video content, Vidu has a preliminary ability to convey abstract feelings. (a) Prompt: A man wearing a hat and a dark suit walks from the corridor towards the room. The lighting casts a bluish tint over the scene, creating a suspenseful atmosphere. (b) Prompt: A rustic wooden cabin nestles by the shore of a clear, sunlit lake, surrounded by verdant trees and mountains. The water is calm, reflecting the sky above, with a few clouds scattered across it. Sailboats and kayaks are moored on the lake, inviting leisure and tranquility. Figure 7: Lighting effects generated by Vidu. 8 \f2.7 Emotional Portrayal Vidu is able to depict characters\u2019 emotions effectively. For example, as shown in Figure 8, Vidu can express emotions such as happiness, loneliness, embarrassment, and joy. (a) Prompt: A man and a woman are sharing a close and affectionate interaction in an indoor setting that suggests a romantic ambiance. (b) Prompt: An elderly woman with white hair and a lined face is seated inside an older model car, looking out through the side window with a contemplative or mildly sad expression. (c) Prompt: A couple about to get divorced sat awkwardly in the waiting room. (d) Prompt: Audience members in a theater are captured in a series of medium shots, with a young man and woman in formal attire centrally positioned and illuminated by a spotlight effect. Figure 8: Emotional portrayal of Vidu. 9 \f2.8 Imaginative Ability In addition to generating real-world scenes, Vidu also possesses a rich imagination. As shown in Figure 9, Vidu is able to generate scenes that do not exist in the real world. 
(a) Prompt: A painting of a boat on water comes to life, with waves crashing and the boat becoming submerged. (b) Prompt: An animated rabbit in a playful pink snowboarding outfit is carving its way down a snowy mountain slope under a clear blue sky. (c) Prompt: A model train with a blue engine is seen traveling through a meticulously crafted miniature landscape. The train is pulling several red and cream-colored passenger cars along a track that winds through a rural or suburban setting with small-scale houses, verdant trees, and miniature waterfalls. Figure 9: Imaginative ability of Vidu. 10 \f2.9 Comparison with Sora Sora [6] is currently the most powerful text-to-video generator, capable of producing high-definition videos with high consistency. However, as Sora is not publicly accessible, we compare the two by feeding the example prompts released by Sora directly into Vidu. Figure 10 and Figure 11 illustrate the comparison between Vidu and Sora, indicating that to some extent, the generation performance of Vidu is comparable to that of Sora. (a) Sora (b) Vidu Figure 10: Prompt: The camera rotates around a large stack of vintage televisions all showing different programs \u2014 1950s sci-fi movies, horror movies, news, static, a 1970s sitcom, etc, set inside a large New York museum gallery. 11 \f(a) Sora (b) Vidu Figure 11: Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it\u2019s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. 
The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds. 12 \f3 Other Controllable Video Generation We also perform several initial experiments at 512 resolution on other controllable video generation, including canny-to-video generation [16], video prediction, and subject-driven generation [12]. All of them demonstrate promising results. 3.1 Canny-to-Video Generation Vidu can incorporate additional control signals by using techniques similar to ControlNet [16], as shown in Figure 12. (a) Input canny. (b) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, realistic visual style. (c) Prompt: During the day, a red car drove towards me and splashed water as it passed by a pond, realistic visual style. (d) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, anime style. Figure 12: Canny-to-video generation examples of Vidu. 13 \f3.2 Video Prediction As shown in Figure 13, Vidu can generate subsequent frames, given an input image or several input frames (marked with red boxes). (a) Prompt: A pink chrysanthemum flower with intricate petals is the focal point, resting on a wooden surface in an indoor setting. (b) Prompt: A serene mountainous landscape bathed in the warm glow of sunset or twilight, with snow-capped peaks rising above the green vegetation-covered slopes. A calm body of water rests in the foreground, reflecting the sky above, which is dotted with clouds tinged with pink and orange hues. Figure 13: Video prediction examples of Vidu. 14 \f3.3 Subject-Driven Generation Surprisingly, we find that Vidu can perform subject-driven video generation by finetuning solely on images without videos. For example, we use the DreamBooth [12] technique to designate the learned subject as a special symbol for finetuning. As shown in Figure 14, the generated videos faithfully recreate the learned subject. (a) Input images. 
(b) Prompt: A dog lies on the ground and then goes to eat from the bowl. (c) Prompt: A dog bit his tail happily and shakes his head. Figure 14: Subject-driven generation examples of Vidu. 15 \f4 Conclusion We present Vidu, a high-definition text-to-video generator that demonstrates strong abilities in various aspects, including duration, coherence, and dynamism of the generated videos, on par with Sora. In the future, Vidu still has room for improvement. For instance, there are occasional flaws in details, and interactions between different subjects in the video sometimes deviate from physical laws. We believe that these issues can be effectively addressed by further scaling Vidu. 5 Acknowledgements We appreciate the support of the data team and the product team for the project at Shengshu. This work was partly supported by NSFC Projects (Nos. 62061136001, 62106123, 61972224), Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. J.Z is also supported by the XPlorer Prize.", + "additional_graph_info": { + "graph": [ + [ + "Fan Bao", + "Chongxuan Li" + ], + [ + "Fan Bao", + "Chendong Xiang" + ], + [ + "Chongxuan Li", + "Jiashuo Liu" + ], + [ + "Chendong Xiang", + "Chongxuan Li" + ], + [ + "Chendong Xiang", + "Hang Su" + ] + ], + "node_feat": { + "Fan Bao": [ + { + "url": "http://arxiv.org/abs/2405.04233v1", + "title": "Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models", + "abstract": "We introduce Vidu, a high-performance text-to-video generator that is capable\nof producing 1080p videos up to 16 seconds in a single generation. Vidu is a\ndiffusion model with U-ViT as its backbone, which unlocks the scalability and\nthe capability for handling long videos. 
Vidu exhibits strong coherence and\ndynamism, and is capable of generating both realistic and imaginative videos,\nas well as understanding some professional photography techniques, on par with\nSora -- the most powerful reported text-to-video generator. Finally, we perform\ninitial experiments on other controllable video generation, including\ncanny-to-video generation, video prediction and subject-driven generation,\nwhich demonstrate promising results.", + "authors": "Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Diffusion models have obtained breakthrough progress on generating high-quality images, videos and other types of data, outperforming alternative approaches like auto-regressive networks. Previously, video generation models primarily relied on diffusion models [13, 9, 14] with the U-Net backbone [11], and focused on a single limited duration like 4 seconds [8, 5, 7, 4]. Our model, Vidu, demonstrates that a text-to-video diffusion model with U-ViT [1, 2] as its backbone can break this duration limitation by leveraging the scalability and the long sequence modeling ability of a transformer [15]. Vidu is capable of producing 1080p videos up to 16 seconds in a single generation, as well as images as videos of a single frame. Additionally, Vidu exhibits strong coherence and dynamism, and is capable of generating both realistic and imaginative videos. Vidu also has a preliminary understanding of some professional photography techniques, such as transitions, camera movements, lighting effects and emotional portrayal. We observe that to some extent, the generation performance of Vidu is comparable with that of Sora [6], which is currently the most powerful text-to-video generator, much better than the other text-to-video generators. 
Finally, we perform initial experiments on other controllable video generation, including canny-to-video generation [16], video prediction and subject-driven generation [12]. All of them demonstrate promising results. 2 Text-to-Video Generation Vidu first employs a video autoencoder [10] to reduce both the spatial and temporal dimensions of videos for efficient training and inference. After that, Vidu employs a U-ViT [1] as the noise prediction network to model these compressed representations. Specifically, as shown in Figure 1, U-ViT splits the compressed videos into 3D patches, treats all inputs, including the time, text condition \u2217Second authors listed alphabetically. \u2021The corresponding author. and noisy 3D patches as tokens, and employs long skip connections between shallow and deep layers in a transformer. Figure 1: The U-ViT architecture for predicting the noise in videos. By leveraging the ability of transformers to process variable-length sequences, Vidu can handle videos with variable durations. Vidu is trained on vast amounts of text-video pairs, and it is infeasible to have all videos labeled by humans. To address this, we first train a high-performance video captioner optimized for understanding dynamic information in videos, and then automatically annotate all the training videos using this captioner. During inference, we apply the re-captioning technique [3] to rephrase user inputs into a form that is more suitable for the model. 
2 \f2.1 Generating Videos of Different Lengths Since Vidu is trained on videos of various lengths, it can generate 1080p videos of all lengths up to 16 seconds, including images as videos of a single frame. We present examples in Figure 2. (a) 16 seconds. Prompt: A person clad in a space suit with a helmet and equipped with a chest light and arm device is seen closely examining and interacting with a variety of plants in a lush, indoor botanical setting. (b) 8 seconds. Prompt: A desolate lunar landscape with craters and a large moon in the sky transitions to a warmly lit interior of a spacecraft-like structure where a group of people are engaged in various activities. (c) Image. Prompt: An exquisite silverware piece, aesthetically adorned with intricate patterns and scenes, exhibits the detailed artisanship and metallic sheen. (d) Image. Prompt: Under the veil of nightfall, a rose reveals its subtle, exquisite beauty in the gentle moonlight. Figure 2: Vidu can generate videos of all lengths up to 16 seconds, including images. 3 \f2.2 3D Consistency The video generated by Vidu exhibits strong 3D consistency. As the camera rotates, the video presents projections of the same object from different angles. For instance, as shown in Figure 3, the hair of the generated cat naturally occludes as the camera rotates. (a) Prompt: This portrait depicts an orange cat with blue eyes, slowly rotating, inspired by Vermeer\u2019s \u2019Girl with a Pearl Earring\u2019. The cat is adorned with pearl earrings and has brown fur styled like a Dutch cap against a black background, illuminated by studio lighting. (b) Prompt: In a studio, there is a painting depicting a ship sailing through the rough sea. (c) Prompt: A red car is stuck in the snow, with the entire vehicle emitting green light and red signal lights flashing on the back. The camera slowly pans around the car. Figure 3: 3D consistency of Vidu. 4 \f2.3 Generating Cuts Vidu is capable of generating videos incorporating cuts. 
As shown in Figure 4, these videos present different perspectives of the same scene by switching camera angles, while maintaining consistency of subjects in the scene. (a) Prompt: A sculptor is intently working on a clay bust, meticulously refining its facial features with precise hand movements. (b) Prompt: Churning ocean waves at night with a lighthouse on the coast create an intense and somewhat foreboding atmosphere. The scene is set under an overcast sky, with the ocean\u2019s dark waters illuminated by natural light, highlighting the white foam of the waves. Figure 4: Vidu is capable of generating videos with cuts. 5 \f2.4 Generating Transitions Vidu is capable of producing videos with transitions in a single generation. As shown in Figure 5, these transitions can connect two different scenes in an engaging manner. (a) Prompt: An elderly man with glasses, dressed in formal attire, is deeply engrossed in examining a large, ornate pocket watch. As the video progresses, there is a cinematic transition to a fantastical mechanical cityscape, viewed through the openwork of the watch. This shift evokes a sense of wonder and transports the viewer into a steampunk-inspired world where buildings and structures are made of metal and gears. (b) Prompt: A person holding a dessert with a fluffy layer of whipped cream elegantly drizzled with smooth chocolate sauce. As a dollop of cream falls, a mini polar bear appears, with floating icebergs nearby, set against a serene blue backdrop. Figure 5: Vidu is capable of generating videos with transitions. 6 \f2.5 Camera Movements Camera movements involve the physical adjustments or movements of a camera during filming, enhancing visual narrative and conveying various perspectives and emotions within scenes. Vidu learned these techniques from the data, enhancing the visual experience of viewers. For instance, as shown in Figure 6, Vidu is capable of generating videos with camera movements including zoom, pan and dolly. (a) Zoom. 
Prompt: A large sailing ship sails slowly through the fog. (b) Pan. Prompt: An elderly man with a white beard is seated in a room filled with wooden bookshelves, brimming with old books. He is dressed in a dark suit and tie, and he is engrossed in reading a large book. The room is bathed in the warm glow of sunlight streaming through a window, creating a serene and contemplative atmosphere. (c) Dolly. Prompt: An animated hedgehog with distinctive spiky hair and large eyes is seen exploring a lush, grassy environment. Figure 6: Camera movements generated by Vidu. 7 \f2.6 Lighting Effects Vidu is capable of generating videos with impressive lighting effects, which help enhance the overall atmosphere. For example, as shown in Figure 7, the generated videos can evoke atmospheres of mystery and tranquility. Therefore, besides the entities within the video content, Vidu has the preliminary ability to convey some abstract feelings. (a) Prompt: A man wearing a hat and a dark suit walks from the corridor towards the room. The lighting casts a bluish tint over the scene, creating a suspenseful atmosphere. (b) Prompt: A rustic wooden cabin nestles by the shore of a clear, sunlit lake, surrounded by verdant trees and mountains. The water is calm, reflecting the sky above, with a few clouds scattered across it. Sailboats and kayaks are moored on the lake, inviting leisure and tranquility. Figure 7: Lighting effects generated by Vidu. 8 \f2.7 Emotional Portrayal Vidu is able to depict characters\u2019 emotions effectively. For example, as shown in Figure 8, Vidu can express emotions such as happiness, loneliness, embarrassment, and joy. (a) Prompt: A man and a woman are sharing a close and affectionate interaction in an indoor setting that suggests a romantic ambiance. (b) Prompt: An elderly woman with white hair and a lined face is seated inside an older model car, looking out through the side window with a contemplative or mildly sad expression. 
(c) Prompt: A couple about to get divorced sat awkwardly in the waiting room. (d) Prompt: Audience members in a theater are captured in a series of medium shots, with a young man and woman in formal attire centrally positioned and illuminated by a spotlight effect. Figure 8: Emotional portrayal of Vidu. 9 \f2.8 Imaginative Ability In addition to generating real-world scenes, Vidu also possesses a rich imagination. As shown in Figure 9, Vidu is able to generate scenes that do not exist in the real world. (a) Prompt: A painting of a boat on water comes to life, with waves crashing and the boat becoming submerged. (b) Prompt: An animated rabbit in a playful pink snowboarding outfit is carving its way down a snowy mountain slope under a clear blue sky. (c) Prompt: A model train with a blue engine is seen traveling through a meticulously crafted miniature landscape. The train is pulling several red and cream-colored passenger cars along a track that winds through a rural or suburban setting with small-scale houses, verdant trees, and miniature waterfalls. Figure 9: Imaginative ability of Vidu. 10 \f2.9 Comparison with Sora Sora [6] is currently the most powerful text-to-video generator, capable of producing high-definition videos with high consistency. However, as Sora is not publicly accessible, we compare them by inserting the example prompts released by Sora directly to Vidu. Figure 10 and Figure 11 illustrate the comparison between Vidu and Sora, indicating that to some extent, the generation performance of Vidu is comparable to Sora. (a) Sora (b) Vidu Figure 10: Prompt: The camera rotates around a large stack of vintage televisions all showing different programs \u2014 1950s sci-fi movies, horror movies, news, static, a 1970s sitcom, etc, set inside a large New York museum gallery. 
11 \f(a) Sora (b) Vidu Figure 11: Prompt: The camera follows behind a white vintage SUV with a black roof rack as it speeds up a steep dirt road surrounded by pine trees on a steep mountain slope, dust kicks up from it\u2019s tires, the sunlight shines on the SUV as it speeds along the dirt road, casting a warm glow over the scene. The dirt road curves gently into the distance, with no other cars or vehicles in sight. The trees on either side of the road are redwoods, with patches of greenery scattered throughout. The car is seen from the rear following the curve with ease, making it seem as if it is on a rugged drive through the rugged terrain. The dirt road itself is surrounded by steep hills and mountains, with a clear blue sky above with wispy clouds. 12 \f3 Other Controllable Video Generation We also perform several initial experiments at 512 resolution on other controllable video generation, including canny-to-video generation [16], video prediction, and subject-driven generation [12]. All of them demonstrate promising results. 3.1 Canny-to-Video Generation Vidu can add additional control by using techniques similar to ControlNet [16], as shown in Figure 12. (a) Input canny. (b) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, realistic visual style. (c) Prompt: During the day, a red car drove towards me and splashed water as it passed by a pond, realistic visual style. (d) Prompt: During the day, a white car drove towards me and splashed water as it passed by a pond, anime style. Figure 12: Canny-to-video generation examples of Vidu. 13 \f3.2 Video Prediction As shown in Figure 13, Vidu can generate subsequent frames, given an input image, or several input frames (marked with red boxes). (a) Prompt: A pink chrysanthemum flower with intricate petals is the focal point, resting on a wooden surface in an indoor setting. 
(b) Prompt: A serene mountainous landscape bathed in the warm glow of sunset or twilight, with snow-capped peaks rising above the green vegetation-covered slopes. A calm body of water rests in the foreground, reflecting the sky above, which is dotted with clouds tinged with pink and orange hues. Figure 13: Video prediction examples of Vidu. 14 \f3.3 Subject-Driven Generation Surprisingly, we find that Vidu can perform subject-driven video generation by finetuning solely on images without videos. For example, we use the DreamBooth [12] technique to designate the learned subject as a special symbol for finetuning. As shown in Figure 14, the generated videos faithfully recreate the learned subject. (a) Input images. (b) Prompt: A dog lies on the ground and then goes to eat from the bowl. (c) Prompt: A dog bit his tail happily and shakes his head. Figure 14: Subject-driven generation examples of Vidu. 15 \f4 Conclusion We present Vidu, a high-definition text-to-video generator that demonstrates strong abilities in various aspects, including duration, coherence, and dynamism of the generated videos, on par with Sora. In the future, Vidu still has room for improvement. For instance, there are occasional flaws in details, and interactions between different subjects in the video sometimes deviate from physical laws. We believe that these issues can be effectively addressed by further scaling Vidu. 5 Acknowledgements We appreciate the support of the data team and the product team for the project at Shengshu. This work was partly supported by NSFC Projects (Nos. 62061136001, 62106123, 61972224), Tsinghua Institute for Guo Qiang, and the High Performance Computing Center, Tsinghua University. J.Z is also supported by the XPlorer Prize." 
+ }, + { + "url": "http://arxiv.org/abs/2303.06555v2", + "title": "One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale", + "abstract": "This paper proposes a unified diffusion framework (dubbed UniDiffuser) to fit\nall distributions relevant to a set of multi-modal data in one model. Our key\ninsight is -- learning diffusion models for marginal, conditional, and joint\ndistributions can be unified as predicting the noise in the perturbed data,\nwhere the perturbation levels (i.e. timesteps) can be different for different\nmodalities. Inspired by the unified view, UniDiffuser learns all distributions\nsimultaneously with a minimal modification to the original diffusion model --\nperturbs data in all modalities instead of a single modality, inputs individual\ntimesteps in different modalities, and predicts the noise of all modalities\ninstead of a single modality. UniDiffuser is parameterized by a transformer for\ndiffusion models to handle input types of different modalities. Implemented on\nlarge-scale paired image-text data, UniDiffuser is able to perform image, text,\ntext-to-image, image-to-text, and image-text pair generation by setting proper\ntimesteps without additional overhead. In particular, UniDiffuser is able to\nproduce perceptually realistic samples in all tasks and its quantitative\nresults (e.g., the FID and CLIP score) are not only superior to existing\ngeneral-purpose models but also comparable to the bespoken models (e.g., Stable\nDiffusion and DALL-E 2) in representative tasks (e.g., text-to-image\ngeneration).", + "authors": "Fan Bao, Shen Nie, Kaiwen Xue, Chongxuan Li, Shi Pu, Yaole Wang, Gang Yue, Yue Cao, Hang Su, Jun Zhu", + "published": "2023-03-12", + "updated": "2023-05-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "Introduction Recently, we are witnessing a content-creation revolution driven by the rapid advances of generative modeling on multi-modal data. 
In particular, diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021c) have shown an incredible ability to create high-fidelity and diverse data (Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2022; Ho et al., 2022a; Popov et al., 2021) whose content aligns well with the input text condition. However, these generative models are designed as bespoke systems, which allow only a single task. In contrast, humans can generate various kinds of multi-modal content simultaneously, with arbitrary conditioning types: for example, artists can create paintings conditioned on texts, scenes, or just imagination, and can employ their language ability to generate the caption of a photo. Toward a general generative system on multi-modal data, a unified training framework that can cover all types of multi-modal generative tasks (see Figure 1) is one of the fundamental components. From the view of probabilistic modeling, each task amounts to fitting a corresponding distribution; for instance, text-to-image generation can be formulated as learning the conditional distribution p(Image|Text). A classical way to fit all relevant distributions is implicit: first learn the joint distribution, then infer the marginal and conditional distributions by additional procedures (e.g., Markov Chain Monte Carlo (Srivastava & Salakhutdinov, 2012)), which is unaffordable on large-scale multi-modal data (Schuhmann et al., 2022). In contrast, this paper presents a diffusion-based framework (dubbed UniDiffuser) that explicitly fits all relevant distributions in one model without introducing additional training or inference overhead. Our key insight is that learning diffusion models for all distributions can be unified as predicting the noise in the perturbed data, where the perturbation levels (i.e., timesteps) can be different for different modalities.
For instance, a zero level indicates conditional generation given the corresponding modality, and a maximum level indicates unconditional generation of other modalities by ignoring the corresponding modality.

Figure 1. UniDiffuser handles various tasks by fitting all distributions with one transformer. (a-e) UniDiffuser can directly perform joint generation, conditional generation, and unconditional generation. (f-g) Image variation and text variation are direct applications by leveraging two conditional distributions modeled by UniDiffuser. (h) Furthermore, UniDiffuser can perform blocked Gibbs sampling to see how images and texts are translated to each other. (i) UniDiffuser can also perform interpolation between two images in the wild.

Inspired by the unified view, UniDiffuser learns all distributions simultaneously with a minimal modification to the original diffusion model (Ho
et al., 2020) (see Figure 2): it perturbs data in all modalities instead of a single modality, inputs individual timesteps for the different modalities, and predicts the noise of all modalities instead of a single modality. Naturally, UniDiffuser is able to perform all kinds of generation (see Figure 1) in the same way as bespoke diffusion models. Moreover, UniDiffuser can perform classifier-free guidance (Ho & Salimans, 2021) for free to improve the sample quality in both conditional and joint generation, because UniDiffuser already models the marginal distributions. Besides the probabilistic modeling framework, a unified architecture that can handle input types of different modalities is another fundamental component of a general generative system. Notably, the emergence of the Transformer (Vaswani et al., 2017; Dosovitskiy et al., 2021) and its applications to generative modeling (Bao et al., 2023a) provide a promising solution to capture interactions between modalities. Naturally, UniDiffuser employs a transformer-based backbone. We implement UniDiffuser in the latent space (Rombach et al., 2022) with an additional CLIP encoder (Radford et al., 2021) for images and a GPT-2 (Radford et al., 2019) decoder for texts on large-scale image-text data (Schuhmann et al., 2022). UniDiffuser is able to perform image, text, text-to-image, image-to-text, and image-text pair generation by setting proper timesteps without additional overhead. In particular, UniDiffuser is able to produce perceptually realistic samples in all tasks, and its quantitative results (e.g., the FID and CLIP score) are not only superior to existing general-purpose models but also comparable to the corresponding bespoke models (e.g., Stable Diffusion and DALL·E 2) in representative tasks (e.g., text-to-image generation).
2. Background

Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) perturb the data by gradually injecting noise into data x_0 ∼ q(x_0), which is formalized by a Markov chain:

q(x_{1:T} | x_0) = ∏_{t=1}^T q(x_t | x_{t−1}),   q(x_t | x_{t−1}) = N(x_t | √α_t x_{t−1}, β_t I),

where β_t is the noise schedule and α_t = 1 − β_t. The data can be generated by reversing this process, where the reverse transition q(x_{t−1} | x_t) is approximated by a Gaussian model p(x_{t−1} | x_t) = N(x_{t−1} | μ_t(x_t), σ_t² I). As shown by Bao et al. (2022b), the optimal mean under maximum likelihood estimation is

μ*_t(x_t) = (1/√α_t) ( x_t − (β_t / √(1 − ᾱ_t)) E[ε_x | x_t] ),   (1)

where ᾱ_t = ∏_{i=1}^t α_i and ε_x is the standard Gaussian noise injected into x_t. To estimate the conditional expectation E[ε_x | x_t], a noise prediction network ε_θ(x_t, t) is adopted to minimize the regression loss

min_θ E_{t, x_0, ε_x} ‖ε_x − ε_θ(x_t, t)‖²,   (2)

where t is uniformly sampled from {1, 2, ..., T} and x_t = √ᾱ_t x_0 + √(1 − ᾱ_t) ε_x. By the property of the ℓ2 regression loss, the optimal noise prediction network satisfies ε_θ*(x_t, t) = E[ε_x | x_t]. Since Eq. (2) is also equivalent to the denoising score matching loss (Vincent, 2011), the optimal noise prediction network also satisfies ε_θ*(x_t, t) = −√(1 − ᾱ_t) ∇_{x_t} log q(x_t), where q(x_t) is the distribution of the perturbed data at timestep t.

Conditional generation with diffusion models. In the case of conditional generation, we have paired data (x_0, y_0) ∼ q(x_0, y_0), and we want to model the conditional data distribution q(x_0 | y_0). The Gaussian model of the reverse process conditioned on y_0 is p(x_{t−1} | x_t, y_0) = N(x_{t−1} | μ_t(x_t, y_0), σ_t² I). Similarly to Eq.
(1), the optimal mean under maximum likelihood estimation is

μ*_t(x_t, y_0) = (1/√α_t) ( x_t − (β_t / √(1 − ᾱ_t)) E[ε_x | x_t, y_0] ).   (3)

To estimate E[ε_x | x_t, y_0], a noise prediction network conditioned on y_0 is adopted to minimize the regression loss min_θ E_{t, x_0, y_0, ε_x} ‖ε_x − ε_θ(x_t, y_0, t)‖².

Classifier-free guidance (CFG) (Ho & Salimans, 2021) is proposed to improve the sample quality of a conditional diffusion model. Specifically, it samples by linearly combining a conditional model and an unconditional one:

ε̂_θ(x_t, y_0, t) = (1 + s) ε_θ(x_t, y_0, t) − s ε_θ(x_t, t),   (4)

where s is the guidance scale. The conditional and unconditional models share parameters by introducing a null token ∅, i.e., ε_θ(x_t, t) = ε_θ(x_t, y_0 = ∅, t).

3. Method

Section 3.1 presents UniDiffuser, a single diffusion model that captures the marginal, conditional, and joint distributions determined by multi-modal data simultaneously. Section 3.2 demonstrates how to perform classifier-free guidance (CFG) for free in the conditional and joint sampling of UniDiffuser. For simplicity, we focus on two-modal data in this paper, but UniDiffuser can be easily extended to more modalities.

3.1. UniDiffuser: One Diffusion Fits All Distributions

Formally, suppose we have two modalities of data sampled from a distribution q(x_0, y_0). We aim to design a diffusion-based model that is able to capture all relevant distributions determined by q(x_0, y_0), i.e., the marginal distributions q(x_0) and q(y_0), the conditional distributions q(x_0 | y_0) and q(y_0 | x_0), and the joint distribution q(x_0, y_0). We notice that learning a distribution with diffusion models is equivalent to estimating a conditional expectation over the noise.
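As a reference point for what follows, the standard noise-prediction objective of Eq. (2) can be sketched as below. The linear noise schedule and the zero-predicting `eps_theta` are toy stand-ins for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # toy noise schedule beta_t
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # alpha_bar_t = prod_{i<=t} alpha_i

def perturb(x0, t, eps):
    # x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def ddpm_loss(eps_theta, x0):
    # Monte-Carlo estimate of E || eps - eps_theta(x_t, t) ||^2 (Eq. (2))
    t = int(rng.integers(T))
    eps = rng.standard_normal(x0.shape)
    xt = perturb(x0, t, eps)
    return float(np.mean((eps - eps_theta(xt, t)) ** 2))

x0 = rng.standard_normal(16)
loss = ddpm_loss(lambda xt, t: np.zeros_like(xt), x0)  # toy predictor
```

With the toy predictor the loss is simply the mean squared noise; a trained network drives this toward the conditional expectation E[ε_x | x_t].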
In particular, modeling the marginal distribution q(x_0) is equivalent to estimating the conditional expectation of the noise injected into x_t, i.e., E[ε_x | x_t], according to Eq. (1). Similarly, the key quantities to be estimated in modeling the conditional distribution q(x_0 | y_0) and the joint distribution q(x_0, y_0) are E[ε_x | x_t, y_0] (see Eq. (3)) and E[ε_x, ε_y | x_t, y_t], respectively. A key observation is that all of the above conditional expectations can be unified in the general form E[ε_x, ε_y | x_{t_x}, y_{t_y}], where t_x and t_y are two timesteps that can be different, and x_{t_x} and y_{t_y} are the corresponding perturbed data. In particular, a maximum timestep T means marginalizing out the corresponding modality. Namely, by setting t_y = T, we have E[ε_x | x_{t_x}, y_T] ≈ E[ε_x | x_{t_x}] (there is a negligible gap between y_T and the standard Gaussian noise ε_y for a large T, e.g., 1000 by default (Ho et al., 2020)), which corresponds to the marginal distribution q(x_0). Similarly, a zero timestep means conditioning on the corresponding modality, and a tied timestep means sampling the two modalities jointly. Formally, E[ε_x | x_{t_x}, y_0] corresponds to the conditional distribution q(x_0 | y_0) by setting t_y = 0, and E[ε_x, ε_y | x_t, y_t] corresponds to the joint distribution q(x_0, y_0) by setting t_x = t_y = t. Moreover, we can characterize q(x_0 | y_{t_y}) and q(y_0 | x_{t_x}) for all t_y and t_x, and
generate data conditioned on noisy input, by estimating E[ε_x, ε_y | x_{t_x}, y_{t_y}] in general.

Figure 2. Comparison with bespoke diffusers (marginal, conditional, and joint). UniDiffuser fits all distributions simultaneously with a minimal modification of Ho et al. (2020). In particular, it degenerates to the bespoke diffusion models by setting the timesteps (or noise levels) properly.

Inspired by the unified view, we learn E[ε_x, ε_y | x_{t_x}, y_{t_y}] for all 0 ≤ t_x, t_y ≤ T to model all relevant distributions determined by q(x_0, y_0). Specifically, we employ a joint noise prediction network ε_θ(x_{t_x}, y_{t_y}, t_x, t_y) to predict the noise injected into x_{t_x} and y_{t_y} together by minimizing the following regression loss, similarly to Ho et al. (2020):

E_{x_0, y_0, ε_x, ε_y, t_x, t_y} ‖ε_θ(x_{t_x}, y_{t_y}, t_x, t_y) − [ε_x, ε_y]‖²,   (5)

where (x_0, y_0) is a random data point, [·, ·] denotes concatenation, ε_x and ε_y are sampled from standard Gaussian distributions, and t_x and t_y are uniformly sampled from {1, 2, ..., T} independently. We call our method UniDiffuser because it captures multiple distributions in a unified way. We present the training algorithm in Appendix B. The objective in Eq. (5) is as simple as the original DDPM objective in Eq. (2).
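A toy sketch of the objective in Eq. (5), together with the timestep conventions that recover each task at sampling time. The task names, toy schedule, and zero-predicting network are illustrative assumptions, not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
alpha_bars = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

def perturb(z0, t, eps):
    # z_t = sqrt(alpha_bar_t) z_0 + sqrt(1 - alpha_bar_t) eps
    return np.sqrt(alpha_bars[min(t, T - 1)]) * z0 + np.sqrt(1.0 - alpha_bars[min(t, T - 1)]) * eps

def unidiffuser_loss(eps_theta, x0, y0):
    # Eq. (5): independent timesteps per modality, one target [eps_x, eps_y]
    tx, ty = int(rng.integers(T)), int(rng.integers(T))
    eps_x = rng.standard_normal(x0.shape)
    eps_y = rng.standard_normal(y0.shape)
    pred = eps_theta(perturb(x0, tx, eps_x), perturb(y0, ty, eps_y), tx, ty)
    return float(np.mean((pred - np.concatenate([eps_x, eps_y])) ** 2))

def timesteps_for(task, t, T=T):
    # 0 = condition on a modality, T = marginalize it out, tied = joint
    return {'text_to_image': (t, 0), 'image_to_text': (0, t),
            'image_marginal': (t, T), 'text_marginal': (T, t),
            'joint': (t, t)}[task]

x0, y0 = rng.standard_normal(8), rng.standard_normal(4)
loss = unidiffuser_loss(lambda xt, yt, tx, ty: np.zeros(12), x0, y0)
```

The only change relative to the DDPM loss is the pair of independently sampled timesteps; one network evaluation then serves whichever distribution the timestep pair selects.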
Besides, for a single update of parameters, UniDiffuser only requires a single forward-backward calculation for multiple tasks (i.e., distributions), which is as efficient as the original DDPM. Although the gradient estimate of UniDiffuser has a slightly higher variance than the original DDPM due to the two independent timesteps, we do not observe that UniDiffuser suffers from slower convergence. (UniDiffuser can also be easily reparameterized to data prediction or velocity prediction (Salimans & Ho, 2022).) UniDiffuser attempts to fit all distributions by one joint noise prediction network, requiring that the backbone can handle the mutual interaction between modalities and is scalable for large-scale data and multiple tasks. Inspired by the excellent performance of transformers on multi-modal representation learning at scale (Kim et al., 2021; Wang et al., 2022), we employ a transformer-based network in UniDiffuser, as detailed in Section 4.2. Given a single joint noise prediction network, UniDiffuser can perform unconditional, conditional, and joint sampling according to a certain sampler (see Appendix B for the sampling algorithm). Notably, by setting the timesteps properly, the inference procedure of UniDiffuser is the same as that of the bespoke models. In comparison, learning a single joint distribution (Srivastava & Salakhutdinov, 2012; Hu et al., 2022) over multi-modal data requires additional procedures (e.g., Markov Chain Monte Carlo) to sample from the marginal or conditional distributions, which is unaffordable on large-scale multi-modal data (Schuhmann et al., 2022).

3.2. Classifier-Free Guidance for Free

Classifier-free guidance (CFG) (Ho & Salimans, 2021) combines a conditional and an unconditional model linearly during sampling (see Eq. (4)). It is simple yet effective at improving the sample quality and image-text alignment of diffusion models.
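The linear combination in Eq. (4) is a one-liner; a minimal sketch, with toy arrays standing in for the network's conditional and unconditional noise estimates:

```python
import numpy as np

def cfg(eps_cond, eps_uncond, s):
    # Eq. (4): (1 + s) * conditional - s * unconditional noise estimate
    return (1.0 + s) * np.asarray(eps_cond) - s * np.asarray(eps_uncond)

eps_c = np.array([1.0, 2.0])   # toy conditional estimate
eps_u = np.array([0.5, 0.5])   # toy unconditional estimate
guided = cfg(eps_c, eps_u, s=3.0)  # extrapolates away from the unconditional
```

Setting s = 0 recovers the plain conditional model; larger s pushes samples further toward the condition.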
Notably, CFG is directly applicable to the conditional and joint sampling of UniDiffuser without modifying the training process (see Figure 3 for results). Formally, we denote the output of ε_θ as the concatenation of ε_θ^x and ε_θ^y, i.e., ε_θ = [ε_θ^x, ε_θ^y], where we omit the input for simplicity. UniDiffuser can perform CFG for free in conditional sampling because it captures both the conditional and unconditional models. For example, we can generate x_0 conditioned on y_0, similarly to Eq. (4), as follows:

ε̂_θ^x(x_t, y_0, t) = (1 + s) ε_θ^x(x_t, y_0, t, 0) − s ε_θ^x(x_t, ε_y, t, T),

where ε_θ^x(x_t, y_0, t, 0) and ε_θ^x(x_t, ε_y, t, T) represent the conditional and unconditional models respectively, and s is the guidance scale. In contrast to the original CFG, UniDiffuser does not need to specify a null token for parameter sharing.

Figure 3. Effects of CFG. UniDiffuser employs CFG for free in joint and conditional sampling, improving the sample quality and image-text alignment with a large guidance scale of around 6.

CFG is also applicable to joint sampling. By setting t_x = t_y = t, note that the joint score model can be equivalently
expressed in the form of conditional models as follows:

ε_θ(x_t, y_t, t, t) ≈ −√(1 − ᾱ_t) [∇_{x_t} log q(x_t, y_t), ∇_{y_t} log q(x_t, y_t)] = −√(1 − ᾱ_t) [∇_{x_t} log q(x_t | y_t), ∇_{y_t} log q(y_t | x_t)],

where q(x_t, y_t) is the joint distribution of the perturbed data at the same noise level t. Inspired by the above relationship between score functions, ε_θ(x_t, y_t, t, t) can be viewed as approximating the pair of conditional scores ∇_{x_t} log q(x_t | y_t) and ∇_{y_t} log q(y_t | x_t). In the same spirit as CFG, we can replace each conditional score by interpolating the joint model with the corresponding unconditional model as follows:

ε̂_θ(x_t, y_t, t) = (1 + s) ε_θ(x_t, y_t, t, t) − s [ε_θ^x(x_t, ε_y, t, T), ε_θ^y(ε_x, y_t, T, t)] ≈ −√(1 − ᾱ_t) [(1 + s) ∇_{x_t} log q(x_t | y_t) − s ∇_{x_t} log q(x_t), (1 + s) ∇_{y_t} log q(y_t | x_t) − s ∇_{y_t} log q(y_t)],

where ε_θ^x(x_t, ε_y, t, T) and ε_θ^y(ε_x, y_t, T, t) represent the unconditional models. We summarize the formulation of CFG in UniDiffuser for all tasks in Appendix C.

4. UniDiffuser on Images and Texts

Images and texts are two of the most common modalities in daily life, so it is representative to validate the effectiveness of UniDiffuser on these two modalities. Our implementation is two-staged, following Rombach et al. (2022) (see Figure 4). First, we convert images and texts to continuous latent embeddings x_0 and y_0 via image and text encoders and introduce two decoders for reconstruction, as presented in Section 4.1. Second, we train UniDiffuser, parameterized by a transformer, on the latent embeddings x_0 and y_0, as presented in Section 4.2.

4.1. Encoding Images and Texts into Latent Space

The image and text encoder-decoders are illustrated in Figure 4 (a). Below we provide their details.

Image encoder-decoder. The image encoder consists of two parts.
The first part is the image autoencoder employed in Stable Diffusion (Rombach et al., 2022). We use its encoder E_AE to obtain an embedding x_0^AE for image reconstruction. The second part is the image CLIP (Radford et al., 2021) (ViT-B/32). It extracts a semantic embedding x_0^CLIP of dimension 512. The final latent embedding for images is the concatenation of the outputs of the two parts, i.e., x_0 = [x_0^AE, x_0^CLIP]. Empirically, we found that x_0^AE is sufficient for image reconstruction via the image decoder D_AE from Stable Diffusion, and that the additional x_0^CLIP helps in understanding the semantics of images in image-to-text generation. We hypothesize that the different roles of the two embeddings are inherently caused by their original objectives, i.e., reconstruction versus semantic alignment with text.

Text encoder-decoder. As for the text encoder, we employ the same text CLIP as Stable Diffusion (Rombach et al., 2022). The text CLIP outputs 77 vectors, each 768-dimensional. To facilitate training, we add an extra linear layer, which reduces the dimension of each vector to 64, to obtain the final text embedding y_0. We construct the text decoder D_text based on GPT-2 (Radford et al., 2019). Specifically, GPT-2 takes y_0 as a prefix embedding (Mokady et al., 2021) and reconstructs the text autoregressively. Freezing the parameters of CLIP, we train the linear layer and finetune GPT-2 to reconstruct the input texts, which performs well on reconstruction. We present more training details and the reconstruction results in Appendix E.

Remark. We observe that the latent embeddings of both image and text already have similar and reasonable numerical ranges. Specifically, they are concentrated within the range [−2, 2] and exhibit approximately normal distributions with comparable mean and variance values (image modality: mean = 0.0269, standard deviation = 0.7919; text modality: mean = 0.0127, standard deviation = 0.5957).
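A shape-level sketch of the two latents described above. The 512-d CLIP embedding and the 77 x 64 text latent follow the text; the 4 x 32 x 32 AE latent shape is an assumed example, not stated here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Image latent: AE reconstruction part plus CLIP semantic part.
x_ae = rng.standard_normal((4, 32, 32))   # AE latent (shape assumed)
x_clip = rng.standard_normal(512)         # CLIP ViT-B/32 embedding, 512-d
x0 = np.concatenate([x_ae.reshape(-1), x_clip])  # x_0 = [x_0^AE, x_0^CLIP]

# Text latent: 77 CLIP token vectors reduced to 64 dims by a linear layer.
y0 = rng.standard_normal((77, 64))
```

The concatenation keeps both roles available downstream: the AE part for pixel-level reconstruction, the CLIP part for semantics.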
As a result, we did not apply additional normalization to them. For more modalities, we can similarly convert them to continuous latent features through encoders that regularize the latent space. This makes it easy for all modalities to have similar ranges after normalization. Besides, obtaining high-quality encoders and decoders is relatively straightforward and can be achieved with a smaller amount of data. For example, the dataset size for the image encoder and decoder is less than 1% of UniDiffuser's. Therefore, in practice, we can efficiently train high-quality encoders and decoders for each modality at a modest cost if needed.

Figure 4. Implementation of UniDiffuser on image-text data. (a) First, we encode images and texts into latent space. (b) Second, we train UniDiffuser, parameterized by a transformer (Bao et al., 2023a), in the way illustrated in Figure 2 on the latent embeddings.

4.2. Transformer as Joint Noise Prediction Network

We train a joint noise prediction network on the embeddings obtained in Section 4.1, according to Eq. (5). It is natural to employ a transformer-based backbone in UniDiffuser to handle inputs from different modalities.
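A shape-level sketch of treating both modalities and both timesteps as tokens of one sequence; the embedding width, patch count, and sinusoidal-style timestep embedding are illustrative assumptions:

```python
import numpy as np

def timestep_token(t, d):
    # Toy embedding turning a scalar timestep into one token of width d.
    freqs = np.arange(1, d + 1, dtype=np.float64)
    return np.sin(t / freqs)[None, :]

def build_sequence(x_tokens, y_tokens, tx, ty):
    # One sequence: [t_x token, t_y token, image patch tokens, text tokens].
    d = x_tokens.shape[1]
    return np.concatenate(
        [timestep_token(tx, d), timestep_token(ty, d), x_tokens, y_tokens])

x_tokens = np.zeros((64, 16))   # e.g. 64 image patch tokens (sizes illustrative)
y_tokens = np.zeros((77, 16))   # 77 text tokens
seq = build_sequence(x_tokens, y_tokens, tx=500, ty=0)
```

Self-attention over this sequence lets the two modalities interact, with the two timestep tokens telling the network which distribution the current query corresponds to.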
In particular, we adopt U-ViT (Bao et al., 2023a), a recently proposed transformer for conditional diffusion models. The original U-ViT is characterized by treating all inputs, including the data, the condition, and the timestep, as tokens, and by employing long skip connections between shallow and deep layers. In UniDiffuser, we slightly modify U-ViT by treating the two modalities of data and their corresponding timesteps as tokens. Besides, we empirically find that the pre-layer normalization (Xiong et al., 2020) in the original U-ViT easily causes overflow when trained with mixed precision. A simple solution is to use post-layer normalization (Vaswani et al., 2017) and to add a layer normalization after concatenating a long skip connection, which stabilizes the training of UniDiffuser. We illustrate the backbone in Figure 4 (b) and present more details in Appendix D.

5. Related Work

Multi-modal generative modeling. Much prior work on multi-modal generative modeling can be formalized as learning a conditional distribution. Representative applications include text-to-image generation (Ramesh et al., 2021; Ding et al., 2021; Ramesh et al., 2022; Nichol et al., 2022; Saharia et al., 2022; Yu et al., 2022; Gu et al., 2022; Xu et al., 2018; Rombach et al., 2022), text-to-video generation (Ho et al., 2022a), text-to-speech generation (Chen et al., 2021; Popov et al., 2021), and image captioning (i.e., image-to-text generation) (Mokady et al., 2021; Chen et al., 2022). Such models are specially designed for a single task. In addition to learning a conditional distribution, Hu et al. (2022) aims to learn the joint distribution of image and text data via a discrete diffusion model (Gu et al., 2022). However, its scalability is unexplored.
The most related prior work is Versatile Diffusion (VD) (Xu et al., 2022), which employs a multi-flow architecture and is trained for multiple generation tasks in the traditional multi-task framework, which requires multiple feed-forward passes to compute losses for all tasks and carefully tuned gradient multipliers for different layers during training. In contrast, UniDiffuser provides an elegant solution based on the insightful unified view of training diffusion models. As a result, UniDiffuser is simpler (with a single training loss), more efficient to train (with a single forward-backward pass per update), and can handle more tasks (it is able to perform joint sampling) without the need for complex tricks. Besides, UniDiffuser outperforms VD in both image-to-text and text-to-image generation tasks in terms of the FID and CLIP scores in our experiments (see Section 6), suggesting that the time-condition strategy in UniDiffuser is statistically more efficient than the multi-task one in VD.

Multi-modal representation learning aims to learn features for different modalities that can be transferred to downstream tasks. Vision-and-language pretraining (VLP) is at the forefront. VLP can employ different strategies, such as contrastive learning (Radford et al., 2021), masked data modeling (Wang et al., 2022), and a combination of multiple losses (Kim et al., 2021; Li et al., 2022; Bao et al., 2022c). A transformer is often employed to fuse the two modalities. This work implies that a transformer is also effective for multi-modal generative modeling.

Diffusion models are initially proposed by Sohl-Dickstein et al. (2015). Recently, Ho et al. (2020) introduce a noise prediction formulation, and Song et al. (2021c) introduce a stochastic differential equation formulation for learning diffusion models.
Diffusion models are able to generate high-quality images (Dhariwal & Nichol, 2021), audio (Chen et al., 2021; Kong et al., 2021), videos (Ho et al., 2022b), point clouds (Luo & Hu, 2021), and molecular conformations (Hoogeboom et al., 2022; Bao et al., 2023b). Other improvements in diffusion models include fast sampling (Song et al., 2021a; Bao et al., 2022b; Salimans & Ho, 2022; Lu et al., 2022b;c) and improved training and sampling techniques (Nichol & Dhariwal, 2021; Song et al., 2021b; Kingma et al., 2021; Vahdat et al., 2021; Zhao et al., 2022; Bao et al., 2022a; Lu et al., 2022a; Karras et al., 2022).

6. Experiments

We present the experimental setup in Section 6.1. We show the ability of UniDiffuser to perform multiple generation tasks and directly compare it with existing large models in Section 6.2. We further demonstrate that UniDiffuser naturally supports applications like data variation and blocked Gibbs sampling between modalities (see Section 6.3), as well as interpolation between images in the wild (see Section 6.4).

6.1. Setup

Dataset. We use three subsets of LAION-5B (Schuhmann et al., 2022), following Stable Diffusion (Rombach et al., 2022). The first one is laion2B-en, which contains around 2B image-text pairs with English captions. The second one is laion-high-resolution, which contains around 170M image-text pairs with image resolution ≥1024 and multilingual captions. The third one is laion-aesthetics v2 5+, a subset of laion2B-en containing around 600M image-text pairs with high visual quality.

Figure 5. Comparing UniDiffuser and VD in text-to-image generation (FID on 10K samples versus CLIP score with ViT-L/14). We connect the results with the same scale in CFG. UniDiffuser consistently outperforms VD in all settings w.r.t. both the CLIP score ↑ (horizontal axis) and FID ↓ (vertical axis).
Following Stable Diffusion, we additionally filter laion-aesthetics v2 5+ to images with resolution ≥512 and an estimated watermark probability <0.5, leading to around 193M preserved pairs. For image normalization, we follow the standard practice in diffusion models by normalizing image values from the range [0, 255] to [−1, 1]. Since the texts in LAION-5B are quite noisy, we further clean the texts in the laion-aesthetics v2 5+ subset by removing URLs, HTML tags, emails, contents in brackets, quotes except 's, and symbols except , . ? !. Before inputting the text into CLIP, we tokenize the preprocessed text using CLIP's built-in tokenizer, which is based on byte-level Byte-Pair-Encoding (Radford et al., 2021).

Training and Sampling. The training is multi-staged, following Stable Diffusion (Rombach et al., 2022). In the first stage, we train 250K steps at 256×256 resolution on laion2B-en with a batch size of 11264 and 5K warm-up steps. In the second stage, we fine-tune the model for 200K steps at 512×512 resolution on laion-high-resolution with a batch size of 2112 and 5K warm-up steps. In the last stage, we resume from the last checkpoint of the second stage (including both the weights of the model and the states of the optimizer), and train 220K steps at 512×512 resolution on laion-aesthetics v2 5+ with a batch size of 2112. Following Bao et al. (2023a), we use the AdamW optimizer (Loshchilov & Hutter, 2019) with a learning rate of 2e-4, a weight decay of 0.03, and running coefficients of (β1, β2) = (0.9, 0.9) in all stages. We reduce the learning rate by a factor of 10 and continue training whenever the validation loss does not decrease. We train with mixed precision for efficiency. When U-ViT is trained at 256×256 resolution, we interpolate the positional embeddings related to images via bilinear interpolation. The training takes around 28 days on 88 A100 (80GB) GPUs.
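The stated optimizer settings can be collected in a small config sketch. The AdamW hyperparameters come from the text; the linear warm-up shape and the constant plateau are assumptions (the text only says 'warm-up steps' and describes a validation-loss-triggered decay not modeled here):

```python
BASE_LR = 2e-4       # stated learning rate
WARMUP_STEPS = 5000  # stated 5K warm-up steps

# Stated AdamW settings (Loshchilov & Hutter, 2019).
ADAMW_KWARGS = dict(lr=BASE_LR, weight_decay=0.03, betas=(0.9, 0.9))

def lr_at(step, base_lr=BASE_LR, warmup=WARMUP_STEPS):
    # Assumed schedule: linear warm-up to base_lr, then constant.
    return base_lr * min(1.0, step / warmup)
```

Such a helper would typically be wrapped in a framework scheduler; the point here is only the warm-up-then-plateau shape implied by the stated settings.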
We use DPM-Solver (Lu et al., 2022b;c) with 50 steps in all experiments. One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale. Figure 6. Comparing UniDiffuser and VD in image-to-text generation. UniDiffuser consistently outperforms VD with the same CFG scale (horizontal axis) w.r.t. the CLIP score \u2191 (vertical axis). Table 1. Zero-shot FID \u2193 on the MS-COCO validation set. \u2020 marks results produced by us upon the official implementation; other results are taken from the corresponding references. We report the results of UniDiffuser and VD with a scale of 3 in CFG, which is the best choice for both models according to Figure 5. Bespoke models: GLIDE (Nichol et al., 2022) 12.24; Make-A-Scene (Gafni et al., 2022) 11.84; DALL\u00b7E 2 (Ramesh et al., 2022) 10.39; Stable Diffusion\u2020 (Rombach et al., 2022) 8.59; Imagen (Saharia et al., 2022) 7.27; Parti (Yu et al., 2022) 7.23. General-purpose models: Versatile Diffusion\u2020 (Xu et al., 2022) 10.09; UniDiffuser (ours) 9.71. Baseline. To our knowledge, Versatile Diffusion (VD) (Xu et al., 2022) is the most direct competitor for general-purpose multi-modal generation (see details in Section 5). We directly compare to VD in all experiments where possible. The results of VD are reproduced by us upon the official code because there are no quantitative results in the original paper. Evaluation. For text-to-image generation, we report the FID (Heusel et al., 2017) and CLIP score (Radford et al., 2021) on the MS-COCO validation set (Lin et al., 2014) to measure the image fidelity and image-text alignment respectively. Following the literature, we randomly draw 10K and 30K prompts from the MS-COCO validation set to calculate the FID and CLIP score on generated images. For image-to-text generation, we report the CLIP score to measure the image-text alignment.
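The FID metric used for evaluation compares Gaussian fits of the real and generated feature distributions; a minimal numpy/scipy sketch of the distance itself is below (the Inception feature extraction that produces the two arrays is assumed to happen elsewhere):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_gen):
    """Frechet distance between Gaussian fits of two (n, d) feature arrays:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    mu1, mu2 = feats_real.mean(0), feats_gen.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_gen, rowvar=False)
    covmean = sqrtm(s1 @ s2).real  # discard tiny imaginary parts from sqrtm
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2.0 * covmean))
```

Identical feature sets give a distance of zero; shifting one set's mean raises it, which is the behavior the table's FID column summarizes.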
Specifically, we randomly draw 10K images to calculate the score on generated texts. Figure 7. Random samples of UniDiffuser (ours) and Versatile Diffusion (VD) on text-to-image generation (prompts: "A dog wearing a beret."; "A rabbit in a space suit."). UniDiffuser produces semantically correct images given representative prompts while VD does not. 6.2. Main Results We first systematically compare with the most direct baseline Versatile Diffusion (VD), which is a general-purpose generative model, in both text-to-image and image-to-text generation. Quantitatively, UniDiffuser outperforms VD consistently in both tasks under all metrics and CFG guidance scales, as presented in Figure 5 and Figure 6. The empirical results demonstrate the effectiveness (in addition to the simplicity, efficiency, and generality) of UniDiffuser compared to VD (see details in Section 5). Qualitatively, Figure 7 presents samples in text-to-image generation, and UniDiffuser aligns image and text better than VD. See more results including image-to-text generation in Appendix G. We also compare with bespoke systems designed for text-to-image generation w.r.t. zero-shot FID on MS-COCO in Table 1. Although UniDiffuser is designed to handle multiple generation tasks, its performance on the single text-to-image generation task is comparable to bespoke diffusion models such as Stable Diffusion and outperforms famous diffusion models like DALL\u00b7E 2. Finally, we present examples of joint, conditional, and unconditional generation in Figure 1 (a-e) to show the generality of UniDiffuser. See more examples in Appendix A. 6.3. Data Variation and Gibbs Sampling UniDiffuser naturally supports applications such as image variation and text variation.
For example, given a source image, we can firstly perform image-to-text generation to obtain a description of the image, and then perform text-to-image generation with this description as input to obtain a new image with similar semantics but different contents. In Figure 1 (f-g), we present examples on image and text variation. Furthermore, we can perform blocked Gibbs sampling to see how images and texts are translated to each other by chaining conditional distributions modeled by UniDiffuser. We present examples in Figure 1 (h). More samples on data variation and blocked Gibbs can be found in Appendix A. 6.4. Interpolation between Two Images in the Wild UniDiffuser can also perform interpolation between two images in the wild. Specifically, we firstly perform image-to-text generation to obtain the latent text embeddings of the two images via the deterministic DPM-Solver with the same Gaussian noise as the initial state for both images. Then we perform a noise injection process via DPM-Solver to get a noisy version of the latent image embeddings given the two latent text embeddings. We perform spherical linear interpolation between the latent text embeddings and the noisy version of the latent image embeddings to obtain intermediate states. Finally, with the text intermediate states as the condition and the image intermediate states as the initial state, we generate the final images by DPM-Solver. See Appendix F for a formalized algorithm of the interpolation procedure. We present examples in Figure 1 (i) and more examples can be found in Appendix A. 7. Conclusion We propose UniDiffuser, a general-purpose multi-modal probabilistic framework based on insights of unifying training of diffusion models for different distributions. UniDiffuser is able to perform various generation tasks via one model with minimal modification of the original diffusion models. Empirical results on image-text data show the effectiveness of UniDiffuser compared to large existing models.
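Returning to the interpolation procedure of Section 6.4: the spherical linear interpolation step is standard slerp; a sketch over flattened embedding vectors is shown below (the vector shapes and the function name are assumptions, not the paper's Appendix F algorithm):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical linear interpolation between embeddings v0 and v1 at ratio t."""
    u0 = v0 / np.linalg.norm(v0)
    u1 = v1 / np.linalg.norm(v1)
    theta = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))  # angle between them
    if theta < eps:  # nearly parallel: linear interpolation is numerically safer
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

In the procedure above this would be applied both to the latent text embeddings and to the noised latent image embeddings to obtain the intermediate states.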
UniDiffuser also enables semi-supervised learning and learning on more modalities, which are left as future work. Currently, the text generated by our implementation is not that smooth, mainly because the text data is noisy. UniDiffuser has high potential to improve multiple tasks: by fitting multiple tasks with one single transformer network, UniDiffuser can be much easier to further improve all tasks simultaneously (e.g., by increasing parameter scale and data scale) and maintain under the large-scale pre-training regime. Any further improvement/optimization of the underlying single network can seamlessly benefit all tasks. Social Impact: We believe UniDiffuser can advance real-world applications with generated content due to its generality. However, it is worth noting that large-scale multi-modal generative models may have consequences like \u201cdeepfakes\u201d. We watermark all images sampled from the model and will provide a systematic protocol to relieve the problem before releasing the code and model. Acknowledgements This work was supported by NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222); Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098; a grant from Tsinghua Institute for Guo Qiang; the High Performance Computing Center, Tsinghua University; the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (22XNKJ13). C. Li was also sponsored by Beijing Nova Program. J.Z. was also supported by the New Cornerstone Science Foundation through the XPLORER PRIZE." }, { "url": "http://arxiv.org/abs/2212.00362v1", "title": "Why Are Conditional Generative Models Better Than Unconditional Ones?", "abstract": "Extensive empirical evidence demonstrates that conditional generative models\nare easier to train and perform better than unconditional ones by exploiting\nthe labels of data. So do score-based diffusion models.
In this paper, we\nanalyze the phenomenon formally and identify that the key of conditional\nlearning is to partition the data properly. Inspired by the analyses, we\npropose self-conditioned diffusion models (SCDM), which is trained conditioned\non indices clustered by the k-means algorithm on the features extracted by a\nmodel pre-trained in a self-supervised manner. SCDM significantly improves the\nunconditional model across various datasets and achieves a record-breaking FID\nof 3.94 on ImageNet 64x64 without labels. Besides, SCDM achieves a slightly\nbetter FID than the corresponding conditional model on CIFAR10.", "authors": "Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu", "published": "2022-12-01", "updated": "2022-12-01", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "main_content": "Introduction Extensive empirical evidence in prior work [14, 3, 9] demonstrates that conditional generative models are easier to train and perform better than unconditional ones by exploiting the labels of data. So do score-based diffusion models (DM). For instance, the representative work [9] achieves a FID of 10.94 when trained conditionally and a FID of 26.21 when trained unconditionally on ImageNet of size 256x256. Intuitively, the gap exists because (1) the marginal distribution induced by a conditional model is more expressive than the corresponding unconditional model; and (2) the data distribution conditioned on a specific class has fewer modes and is easier to fit than the original data distribution. In this paper, we formalize the above intuition in an ideal setting where we have infinite data. It is easy to show that the marginal distribution induced by a conditional model can be viewed as a mixture of the corresponding unconditional models. Further, we derive a sufficient condition for the superiority of the conditional model, which suggests that the conditional model gains more as the conditional data distribution gets simpler.
The analyses explain previous empirical findings: conditioning on class labels probably partitions the data into simpler groups according to the semantics of data. Notably, our analyses apply to all possible conditions, not limited to class labels. Then, the very natural idea is to find a certain way to obtain meaningful conditions in an unsupervised manner and boost the unconditional generation results. The recent advances in self-supervised learning [10, 5] show that one can learn predictive representations without labels, which serve as an ideal tool for obtaining meaningful conditions. Specifically, we simply run a clustering algorithm (e.g., k-means) on the features extracted by a model pre-trained in a self-supervised manner (on the same dataset) and use the cluster indices as conditions to train a conditional model. (Preprint. Under review. arXiv:2212.00362v1 [cs.LG], 1 Dec 2022.) Although our analyses and the self-conditional approach are applicable to all types of deep generative models, we focus on score-based diffusion models in our experiments to explore the boundary of unsupervised generative modeling. Therefore, we refer to our approach as self-conditioned diffusion models (SCDM). We systematically evaluate SCDM on several widely adopted datasets. In all settings, SCDM significantly improves the unconditional model. Notably, SCDM achieves a record-breaking FID of 3.94 on ImageNet 64x64 without labels. Besides, SCDM achieves a slightly better FID than the corresponding conditional model on CIFAR10. 2 Why Are Conditional Generative Models Better Than Unconditional Ones In this section, we present the problem formulation and our analyses. 2.1 Problem Formulation Let q(x, c) be the joint distribution of the data x and the condition c and q(x) := \u03a3_c q(x, c). Let p_{\u03b8,E}(x) be a model parameterized by \u03b8 \u2208 \u0398 and E \u2208 E, where \u03b8 denotes the parameters in the backbone and E is the embedding for a condition.
We formalize two learning paradigms as follows. In unconditional learning, p_{\u03b8,E}(x) approximates the marginal data distribution q(x) directly and E is a redundant embedding shared by all data. Formally, given a certain statistics divergence D (or more loosely a divergence upper bound [20, 2]), unconditional learning aims to optimize min_{\u03b8 \u2208 \u0398, E \u2208 E} D(q(x) \u2225 p_{\u03b8,E}(x)). (1) In conditional learning, the embedding E is spared to receive the signal from the condition c, through an embedding function \u03c6 \u2208 \u03a6. This induces a conditional model p_{\u03b8,\u03c6}(x|c) := p_{\u03b8,E}(x)|_{E=\u03c6(c)}, which approximates the conditional data distribution q(x|c) by tuning the backbone \u03b8 and the embedding function \u03c6. Formally, conditional learning aims to optimize min_{\u03b8 \u2208 \u0398, \u03c6 \u2208 \u03a6} E_{q(c)} D(q(x|c) \u2225 p_{\u03b8,\u03c6}(x|c)). (2) The conditional model applies ancestral sampling to generate samples, where a condition c is firstly drawn from q(c) (see footnote 1), and then a data point x is drawn from p_{\u03b8,\u03c6}(x|c). Such a process produces samples from p_{\u03b8,\u03c6}(x) := E_{q(c)} p_{\u03b8,\u03c6}(x|c). The generation performance of the conditional model is evaluated according to how close p_{\u03b8,\u03c6}(x) is to the data distribution q(x), i.e., D(q(x) \u2225 p_{\u03b8,\u03c6}(x)). 2.2 Analyses In this section, we attempt to formalize two insights on why conditional learning of generative models generally outperforms the unconditional one. Firstly, we compare the expressive power of the two strategies with the same backbone parameterized by \u03b8. As shown in Section 2.1, the conditional model produces samples from p_{\u03b8,\u03c6}(x) = E_{q(c)}[p_{\u03b8,\u03c6}(x|c)] = E_{q(c)}[p_{\u03b8,E}(x)|_{E=\u03c6(c)}]. Therefore, p_{\u03b8,\u03c6}(x) can be viewed as a mixture of several unconditional models. Namely, the conditional model is more expressive than the unconditional one, despite the fact that both models are based on the same backbone p_{\u03b8,E}(x).
Secondly, we derive a sufficient condition for the superiority of the conditional model. Let \u03b8*_u, E*_u be the optimal solution of the unconditional learning in Eq. (1). Let \u03b8*_c, \u03c6*_c be the optimal solution of the conditional learning in Eq. (2). Proposition 1 characterizes a sufficient condition for D(q(x) \u2225 p_{\u03b8*_c,\u03c6*_c}(x)) < D(q(x) \u2225 p_{\u03b8*_u,E*_u}(x)). Proposition 1. Suppose for any parameter \u03b8 \u2208 \u0398 and any condition c, approximating q(x|c) is simpler than q(x) by only tuning the embedding E of p_{\u03b8,E}(x), i.e., min_E D(q(x|c) \u2225 p_{\u03b8,E}(x)) < min_E D(q(x) \u2225 p_{\u03b8,E}(x)). Then, under additional mild regularity conditions (footnote 2), D(q(x) \u2225 p_{\u03b8*_c,\u03c6*_c}(x)) < D(q(x) \u2225 p_{\u03b8*_u,E*_u}(x)) holds. (Proof in Appendix A) The sufficient condition in Proposition 1 is hard to verify in practice generally (footnote 3). However, it does provide insights on when conditional learning is preferable. In fact, it implies that the conditional model gains more (i.e., min_E D(q(x|c) \u2225 p_{\u03b8,E}(x)) gets smaller for all \u03b8) as the conditional data distribution gets simpler. The condition is probably satisfied in practical conditional learning with class labels. In this sense, Proposition 1 explains previous empirical findings. (Footnote 1: We assume q(c) is known, which is satisfied in conditional learning with labels.) Table 1: FID \u2193 results on different datasets (columns: CIFAR10, CelebA 64x64, LSUN Bedroom 64x64, ImageNet 64x64; K is the number of clusters). Unconditional DM: 2.72 / 2.14 / 2.69 / 6.44. Conditional DM: 2.24 (CIFAR10), 3.08 (ImageNet). SCDM: K = 2: 2.04; K = 10: 2.23, 1.91; K = 20: 2.27, 2.08, 2.39; K = 30: 2.30; K = 50: 2.34; K = 100: 2.25; K = 1000: 3.94 (ImageNet).
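The verifiable case mentioned in footnote 3 can be checked numerically: fit mixture-of-Gaussian data with a single Gaussian (unconditional learning) versus one Gaussian per ground-truth cluster mixed by the cluster prior (conditional learning); the latter attains a higher average log-likelihood, matching Proposition 1's intuition. A toy 1-D sketch with synthetic data (all quantities hypothetical):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Toy data: a 1-D mixture of two well-separated unit Gaussians, labels c known.
c = rng.integers(0, 2, size=5000)
x = rng.normal(loc=np.where(c == 0, -3.0, 3.0), scale=1.0)

# Unconditional learning: a single Gaussian fit to all of x.
ll_uncond = norm.logpdf(x, x.mean(), x.std()).mean()

# Conditional learning: one Gaussian per ground-truth cluster, mixed by q(c).
p = np.zeros_like(x)
for k in (0, 1):
    xk = x[c == k]
    p += (len(xk) / len(x)) * norm.pdf(x, xk.mean(), xk.std())
ll_cond = np.log(p).mean()

assert ll_cond > ll_uncond  # conditioning on the partition improves the fit
```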
3 Self-Conditioned Diffusion Models Note that Proposition 1 applies to all possible conditions, not limited to class labels, which inspires us to obtain meaningful conditions in an unsupervised manner to boost the unconditional generation results. The recent advances in self-supervised learning [10, 5] show that one can learn predictive representations without labels, which serve as an ideal tool for obtaining meaningful conditions. Specifically, we propose a three-stage algorithm. Firstly, we train a feature extractor on the target dataset (without labels) in a self-supervised manner and extract features. Secondly, we run a clustering algorithm (e.g., k-means in our experiments) on these features and obtain the cluster indices for all data. Finally, we train a conditional diffusion model [18, 9] by taking the cluster indices as conditions. We refer to our approach as self-conditioned diffusion models (SCDM). We mention that the high-level idea of using clustering indices from self-supervised learning coincides with prior work in GANs [1, 4, 19]. This paper presents distinct contributions in the following aspects. First, prior work focuses on avoiding mode collapse while this paper is motivated by a different perspective with theoretical insights missing in the literature. Second, this paper is built upon SOTA diffusion models [6, 9] to explore the boundary of unconditional generative modeling. In fact, we obtain a record-breaking FID of 3.94 on ImageNet 64x64 without labels. See a direct comparison with prior work [4, 19] in Table 2. 4 Experiment We evaluate SCDM on CIFAR10 [13], CelebA 64x64 [15], LSUN Bedroom 64x64 [22] and ImageNet 64x64 [8]. By default, we use MoCo-v2 [6] on CIFAR10, CelebA 64x64 and LSUN Bedroom 64x64, and use MoCo-v3 [7] on ImageNet 64x64, in the self-supervised learning stage. We use the FID score [11] to measure the sample quality. We use the same architecture for SCDM and its unconditional and conditional baselines.
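The first two stages of the three-stage algorithm above (self-supervised features, then k-means) could be sketched as below; the `encoder` stands in for a pre-trained self-supervised model such as MoCo-v2, and the minimal deterministic k-means here is an illustration, not the paper's exact implementation:

```python
import numpy as np

def kmeans_labels(feats, k, iters=50):
    """Minimal deterministic k-means: farthest-point init + Lloyd iterations."""
    centers = feats[:1].astype(float).copy()
    while len(centers) < k:  # pick the point farthest from current centers
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, feats[int(d.argmax())]])
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(0)
    return labels

def self_condition_labels(images, encoder, k=10):
    """SCDM stages 1-2: extract self-supervised features, cluster them, and
    return cluster indices to serve as conditions (pseudo-labels)."""
    feats = np.stack([np.ravel(encoder(im)) for im in images])
    return kmeans_labels(feats, k)
```

Stage 3 (not shown) would train a conditional diffusion model with these indices in place of class labels.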
See more experimental details in Appendix B. (Footnote 2: Specifically, we assume that the divergence D is convex and the embedding function space \u03a6 includes all measurable functions, which are verifiable in practice. In fact, the former can be satisfied using the KL divergence and the latter can be satisfied by using nonparametric embeddings. Footnote 3: A simple verifiable case is to fit a mixture of Gaussian (MoG) data by a single Gaussian (unconditional learning) or a MoG with ground-truth cluster indices (conditional learning).) Figure 2: Generated samples of SCDM on (a) CIFAR10, (b) CelebA 64x64, (c) LSUN Bedroom 64x64 and (d) ImageNet 64x64. Each column corresponds to a cluster. We use the model with the best FID. Figure 3: Generated samples on CIFAR10 with different clustering methods: (a) MoCo-v2 (K = 10), (b) SimCLR (K = 10), (c) Pixel (K = 10). 4.1 Sample Quality Firstly, we compare our SCDM with the unconditional and conditional baselines. As shown in Table 1, SCDM uniformly outperforms the unconditional model and slightly outperforms the conditional model on CIFAR10. On ImageNet 64x64, SCDM greatly improves the FID compared to the unconditional model. We provide generated samples in Figure 2. In Table 2, we compare SCDM with other methods on ImageNet 64x64 in the unlabelled setting. SCDM significantly outperforms all prior methods and achieves a record-breaking FID of 3.94. Figure 1: The effect of the self-supervised learning methods, and the backbones used in self-supervised learning. Table 2: ImageNet 64x64 results in the unlabelled setting (Method: FID). \u2020Improved DDPM reports FID with 10K samples, and thereby we use reproduced results on 50K samples [2]. SLCGAN [19]: 19.2; Unconditional BigGAN [4]: 16.9; IC-GAN [4]: 9.2; Improved DDPM\u2020 [18]: 16.38; Unconditional DM: 6.44; SCDM (ours): 3.94. 4.2 Ablation Study In this part, we study the effect of the self-supervised learning methods.
We test MoCo-v2, as well as SimCLR [5] with 3 backbones: ResNet-18, ResNet-34, and ResNet-50. We also perform k-means on image pixels directly to get cluster indices, and we call this method pixel. As shown in Figure 1, SimCLR performs similarly to MoCo-v2, and the choice of backbones does not affect the performance much. However, k-means on image pixels performs much worse than SimCLR and MoCo-v2. Indeed, as shown in Figure 3, we find objects of diverse classes appear in a single cluster for the pixel method, leading to a more complex distribution in a single cluster, which is more difficult to learn." } ], "Chongxuan Li": [ { "url": "http://arxiv.org/abs/1901.08400v3", "title": "To Relieve Your Headache of Training an MRF, Take AdVIL", "abstract": "We propose a black-box algorithm called {\\it Adversarial Variational\nInference and Learning} (AdVIL) to perform inference and learning on a general\nMarkov random field (MRF). AdVIL employs two variational distributions to\napproximately infer the latent variables and estimate the partition function of\nan MRF, respectively. The two variational distributions provide an estimate of\nthe negative log-likelihood of the MRF as a minimax optimization problem, which\nis solved by stochastic gradient descent. AdVIL is proven convergent under\ncertain conditions. On one hand, compared with contrastive divergence, AdVIL\nrequires a minimal assumption about the model structure and can deal with a\nbroader family of MRFs.
On the other hand, compared with existing black-box\nmethods, AdVIL provides a tighter estimate of the log partition function and\nachieves much better empirical results.", "authors": "Chongxuan Li, Chao Du, Kun Xu, Max Welling, Jun Zhu, Bo Zhang", "published": "2019-01-24", "updated": "2020-02-14", "primary_cat": "cs.LG", "cats": [ "cs.LG", "stat.ML" ], "main_content": "INTRODUCTION Markov random fields (MRFs) find applications in a variety of machine learning areas (Kr\u00e4henb\u00fchl & Koltun, 2011; Salakhutdinov & Larochelle, 2010; Lafferty et al., 2001). In particular, one famous example is conditional random fields (Lafferty et al., 2001), a conditional version of MRFs that was developed to address the limitations (e.g., local dependency and label bias) of directed models for sequential data (e.g., hidden Markov models and other discriminative Markov models based on directed graphical models). However, the inference and learning of general MRFs are challenging due to the presence of a global normalizing factor, i.e. partition function, especially when latent variables are present. Extensive efforts have been devoted to developing approximate methods. On one hand, sample-based methods (Neal, 1993) and variational approaches (Jordan et al., 1999; Welling & Sutton, 2005; Salakhutdinov & Larochelle, 2010) are proposed to infer the latent variables. On the other hand, extensive work (Meng & Wong, 1996; Neal, 2001; Hinton, 2002; Tieleman, 2008; Wainwright et al., 2005; Wainwright & Jordan, 2006) has been done to estimate the partition function. Among these methods, contrastive divergence (Hinton, 2002) is proven effective in certain types of models. Most of the existing methods highly depend on the model structure and require model-specific analysis in new applications, which makes it important to develop black-box inference and learning methods.
Previous work (Ranganath et al., 2014; Schulman et al., 2015) shows the ability to automatically infer the latent variables and obtain gradient estimates in directed models. However, there is no black-box learning method for undirected models except the recent work of NVIL (Kuleshov & Ermon, 2017). NVIL introduces a variational distribution and derives an upper bound of the partition function in a general MRF, in the same spirit as amortized inference (Kingma & Welling, 2013; Rezende et al., 2014; Mnih & Gregor, 2014) for directed models. NVIL has several advantages over existing methods, including the ability of black-box learning, tracking the partition function during training and getting approximate samples efficiently during testing. However, NVIL also comes with two disadvantages: (1) it leaves the inference problem of MRFs unsolved (footnote 1: NVIL presents a hybrid model; the inference in its title refers to the directed part, not the MRF) and only trains simple MRFs with tractable posteriors, and (2) the upper bound of the partition function can be underestimated (Kuleshov & Ermon, 2017), resulting in sub-optimal solutions on high-dimensional data. We propose Adversarial Variational Inference and Learning (AdVIL) to relieve some headache of learning an MRF model. AdVIL is a black-box inference and learning method that partly solves the two problems of NVIL and retains the advantages of NVIL at the same time. First, AdVIL introduces a variational encoder to infer the latent variables, which provides an upper bound of the free energy. \u2217Dept. of Comp. Sci. & Tech., BNRist Center, Institute for AI, THBI Lab, Tsinghua University, Beijing, 100084, China. \u2020University of Amsterdam, and the Canadian Institute for Advanced Research (CIFAR). arXiv:1901.08400v3 [cs.LG], 14 Feb 2020. Published as a conference paper at ICLR 2020.
Second, AdVIL introduces a variational decoder for the MRF, which provides a lower bound of the log partition function. The two variational distributions provide an estimate of the negative log-likelihood of the MRF. On one hand, the estimate is in an intuitive form of an approximate contrastive free energy, which is expressed in terms of the expected energy and the (conditional) entropy of the corresponding variational distribution. On the other hand, similar to GAN (Goodfellow et al., 2014), the estimate is a minimax optimization problem, which is solved by stochastic gradient descent (SGD) in an alternating manner. Theoretically, our algorithm is convergent if the variational decoder approximates the model well. This motivates us to introduce an auxiliary variable to enhance the flexibility of the variational decoder, whose entropy is approximated by the third variational trick. We evaluate AdVIL in various undirected generative models, including restricted Boltzmann machines (RBM) (Ackley et al., 1985), deep Boltzmann machines (DBM) (Salakhutdinov & Hinton, 2009), and Gaussian restricted Boltzmann machines (GRBM) (Hinton & Salakhutdinov, 2006), on several real datasets. We empirically demonstrate that (1) compared to the black-box NVIL (Kuleshov & Ermon, 2017) method, AdVIL provides a tighter estimate of the log partition function and achieves much better log-likelihood results; and (2) compared to contrastive divergence based methods (Hinton, 2002; Welling & Sutton, 2005), AdVIL can deal with a broader family of MRFs without model-specific analysis and obtain better results when the model structure gets complex as in DBM. 2 BACKGROUND We consider a general case where the model consists of both visible variables v and latent variables h.
An MRF defines the joint distribution over v and h as P(v, h) = e^{\u2212E(v,h)} / Z, where E denotes the associated energy function that assigns a scalar value for a given configuration of (v, h) and Z is the partition function such that Z = \u222b e^{\u2212E(v,h)} dv dh. Let PD(v) denote the empirical distribution of the training data. Minimizing the negative log-likelihood (NLL) of an MRF is a commonly chosen learning criterion and it is given by: L(\u03b8) := \u2212E_{PD(v)}[log \u222b_h (e^{\u2212E(v,h)} / Z) dh], (1) where \u03b8 denotes the trainable parameters in E. Further, the gradient of \u03b8 is: \u2207_\u03b8 L(\u03b8) = E_{PD(v)}[\u2207_\u03b8 F(v)] \u2212 E_{P(v)}[\u2207_\u03b8 F(v)], (2) where F(v) = \u2212log \u222b_h e^{\u2212E(v,h)} dh denotes the free energy and the gradient in Eqn. (2) is the difference of the free energy in two phases. In the first positive phase, the expectation of the free energy under the data distribution is decreased. In the second negative phase, the expectation of the free energy under the model distribution is increased. Unfortunately, both the NLL in Eqn. (1) and its gradient in Eqn. (2) are intractable in general for two reasons. First, the integral of the latent variables in Eqn. (1) or equivalently the computation of the free energy in Eqn. (2) is intractable. Second, the computation of the partition function in Eqn. (1) or equivalently the negative phase in Eqn. (2) is intractable. Variational inference. Extensive work introduces deterministic approximations for the intractability of inference, including the mean-field approximation (Welling & Hinton, 2002; Salakhutdinov & Hinton, 2009), the Kikuchi and Bethe approximations (Welling & Sutton, 2005) and the recognition model approach (Salakhutdinov & Larochelle, 2010). In this line of work, the intractability of the partition function is addressed using Monte Carlo based methods. Contrastive free energy.
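As an aside to make the free energy F(v) concrete: for a binary RBM (one of the models evaluated later in the paper), F(v) has a well-known closed form; the sketch below verifies it against brute-force summation over h using small, hypothetical parameters (this is a generic illustration, not the paper's implementation):

```python
import numpy as np
from itertools import product

def rbm_free_energy(v, W, b, c):
    """Closed-form free energy of a binary RBM with
    E(v, h) = -b^T v - c^T h - v^T W h:
    F(v) = -b^T v - sum_j log(1 + exp(c_j + (v^T W)_j))."""
    return -v @ b - np.logaddexp(0.0, c + v @ W).sum()

def rbm_free_energy_brute(v, W, b, c):
    """F(v) = -log sum_h exp(-E(v, h)) by explicit summation over all h."""
    vals = [v @ b + c @ np.asarray(h) + v @ W @ np.asarray(h)
            for h in product([0.0, 1.0], repeat=len(c))]
    return -np.logaddexp.reduce(vals)  # -log-sum-exp over hidden states

rng = np.random.default_rng(0)
W, b, c = rng.normal(size=(3, 4)), rng.normal(size=3), rng.normal(size=4)
v = np.array([1.0, 0.0, 1.0])
assert np.isclose(rbm_free_energy(v, W, b, c), rbm_free_energy_brute(v, W, b, c))
```

For general MRFs no such closed form exists, which is exactly the intractability the contrastive methods below address.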
Contrastive divergence (CD) (Hinton, 2002) addresses the intractability of the partition function by approximating the negative phase in Eqn. (2) as follows: \u2207_\u03b8 L(\u03b8) = E_{PD(v)}[\u2207_\u03b8 F(v)] \u2212 E_{PCD(v)}[\u2207_\u03b8 F(v)], (3) Figure 1: Illustration of the models involved in AdVIL. From left to right: variational encoder Q(h|v), MRF P(v, h), variational decoder q(v, h) with a simple prior, and q(v, h) with an expressive prior (with auxiliary variable z and r(z|h)). where PCD(v) denotes the empirical distribution obtained by starting from a data point and running several steps of Gibbs sampling according to the model distribution, and the free energy F(v) is assumed to be tractable. Existing methods (Welling & Hinton, 2002; Welling & Sutton, 2005) approximate F(v) using a certain function G(v) and the gradient of \u03b8 is: \u2207_\u03b8 L(\u03b8) \u2248 E_{PD(v)}[\u2207_\u03b8 G(v)] \u2212 E_{PCD(v)}[\u2207_\u03b8 G(v)]. (4) Although these generalized methods exist, it is nontrivial to extend CD-based methods to general MRFs because the Gibbs sampling procedure is highly dependent on the model structure. Black-box learning. The recent work of NVIL (Kuleshov & Ermon, 2017) addresses the intractability of the partition function in a black-box manner via a variational upper bound of the partition function: E_{q(v)}[\u02dcP(v)^2 / q(v)^2] \u2265 Z^2, (5) where \u02dcP(v) = e^{\u2212F(v)} is the unnormalized marginal distribution on v and q(v) is a neural variational distribution. As a black-box learning method, NVIL potentially allows application to broader model families and improves the capabilities of probabilistic programming systems (Carpenter et al., 2017). Though promising, NVIL leaves the intractability of inference in an MRF unsolved, and the bound in Eqn. (5) is of high variance and is easily underestimated (Kuleshov & Ermon, 2017).
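The NVIL bound in Eqn. (5) can be checked exactly on a toy discrete model where Z is computable by enumeration; all quantities below are hypothetical, and in practice the left-hand side would be a high-variance Monte Carlo estimate rather than an exact sum:

```python
import numpy as np

rng = np.random.default_rng(0)
p_tilde = rng.random(8) + 0.1   # hypothetical unnormalized model P~(v) = e^{-F(v)}
Z = p_tilde.sum()               # exact partition function by enumeration
q = rng.random(8) + 0.1
q = q / q.sum()                 # a normalized variational distribution q(v)

bound = (q * (p_tilde / q) ** 2).sum()   # E_q[P~(v)^2 / q(v)^2], Eqn. (5)
assert bound >= Z ** 2                   # the expectation upper-bounds Z^2
q_opt = p_tilde / Z                      # equality when q is proportional to P~
assert np.isclose((q_opt * (p_tilde / q_opt) ** 2).sum(), Z ** 2)
```

The inequality follows from Cauchy-Schwarz, and the gap (hence the estimator's variance) grows as q drifts away from the normalized model, which is the underestimation issue noted above.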
3 METHOD As stated above, the black-box inference and learning of MRFs are still largely open. In this paper, we make a step towards solving the problems by a new variational approach. For simplicity, we focus on the resulting objective function in this section. See Appendix A for detailed derivation. 3.1 ADVERSARIAL VARIATIONAL INFERENCE AND LEARNING First, we rewrite the NLL of the MRF (see an illustration in Fig. 1) as follows: L(\u03b8) = \u2212E_{PD(v)}[\u2212F(v)] + log Z, (6) where the negative free energy and the log partition function are in the form of a logarithm of an integral. Naturally, we can apply the variational trick (Jordan et al., 1999) twice and approximate the two terms individually. Due to the presence of the minus before the first term in Eqn. (6), the two variational tricks bound the two parts of the NLL in the opposite directions, detailed as below. Formally, on one hand, we introduce an approximate posterior for the latent variables Q(h|v), which is parameterized as a neural variational encoder (see an illustration in Fig. 1), to address the intractability of inference as follows: L(\u03b8) \u2264 E_{PD(v)Q(h|v)}[E(v, h) + log Q(h|v)] + log Z := L1(\u03b8, \u03c6), (7) where \u03c6 denotes the trainable parameters in Q(h|v). The upper bound is derived via applying the Jensen inequality and the equality holds if and only if Q(h|v) = P(h|v) for all v. In the bound, the first term is the expected energy, which encourages Q(h|v) to infer latent variables that have low values of the energy function E(v, h), or equivalently high probabilities under P(v, h). The second term corresponds to the negative conditional entropy of Q(h|v), which increases the uncertainty of Q(h|v). In the paper, we denote the conditional entropy of Q(h|v) as H(Q) := \u2212E_{PD(v)Q(h|v)}[log Q(h|v)].
On the other hand, we introduce an approximate sampler q(v, h), parameterized by a neural variational decoder (see Fig. 1), to address the intractability of the partition function: L1(θ, φ) ≥ E_{P_D(v)Q(h|v)}[E(v, h) + log Q(h|v)] - E_{q(v,h)}[E(v, h) + log q(v, h)] := L2(θ, φ, ψ), (8) where the first expectation is the positive phase, the second is the negative phase, and in each phase the first term is an energy term and the second an entropy term; ψ denotes the trainable parameters of q(v, h). The lower bound also follows from Jensen's inequality, and equality holds if and only if q(v, h) = P(v, h). The lower bound contributed by q(v, h) consists of an entropy term (denoted H(q)) and an energy term, similar to the upper bound in Eqn. (7), and the overall objective takes the form of an approximate contrastive free energy (Hinton, 2002; Welling & Sutton, 2005). Because the double variational trick bounds the NLL in opposite directions, we arrive at a minimax optimization problem: min_θ min_φ max_ψ L2(θ, φ, ψ). (9) The minimax formulation has been investigated in GAN (Goodfellow et al., 2014), where it is interpreted as an adversarial game between two networks. We name our framework adversarial variational inference and learning (AdVIL), following the well-established literature. Note that L2(θ, φ, ψ) is neither an upper bound nor a lower bound of L(θ) due to the double variational trick. However, we argue that solving the optimization problem in Eqn.
(9) is reasonable because (1) it is equivalent to optimizing L(θ) under the nonparametric assumption, similar to GAN (Goodfellow et al., 2014); and (2) it converges to a stationary point of L1(θ, φ), an upper bound of L(θ), under a weaker assumption, as stated in the following theoretical analysis. 3.2 THEORETICAL ANALYSIS OF ADVIL In this section, we present our main theoretical results; the proofs can be found in Appendix C. Firstly, similarly to GAN (Goodfellow et al., 2014), we can prove that L2 is a tight estimate of L under the nonparametric assumption, as summarized in Proposition 1 in Appendix C.1. However, the nonparametric assumption does not tolerate any approximation error between P(v, h) and q(v, h) during training, and no guarantee can be obtained in finitely many steps. To this end, we establish a convergence theorem based on a weaker assumption that allows a non-zero approximation error before convergence. A key insight is that the angle between ∂L2(θ, φ, ψ)/∂θ and ∂L1(θ, φ)/∂θ is positive if q(v, h) approximates P(v, h) well, as stated in the following Lemma 1. Lemma 1. For any (θ, φ), there exists a symmetric positive definite matrix H such that ∂L2(θ, φ, ψ)/∂θ = H ∂L1(θ, φ)/∂θ, under the assumption that ||Σ_{v,h} δ(v, h) ∂E(v, h)/∂θ||₂ < ||∂L1(θ, φ)/∂θ||₂ if ||∂L1(θ, φ)/∂θ||₂ > 0, and ||Σ_{v,h} δ(v, h) ∂E(v, h)/∂θ||₂ = 0 if ||∂L1(θ, φ)/∂θ||₂ = 0, where δ(v, h) = q(v, h) - P(v, h). Based on Lemma 1 and other assumptions commonly used in the analysis of stochastic optimization (Bottou et al., 2018), AdVIL converges to a stationary point of L1(θ, φ), as stated in Theorem 1. Theorem 1. Solving the optimization problem in Eqn.
(9) using stochastic gradient descent, (θ, φ) converges to a stationary point of L1(θ, φ) under the assumptions of general stochastic optimization (Bottou et al., 2018) and under the condition that Lemma 1 holds at each step. Please see Appendix C.2 for a detailed and formal version of Theorem 1. Compared to Proposition 1 and the analysis in GAN (Goodfellow et al., 2014), Theorem 1 makes the weaker statement that AdVIL converges to a stationary point of the negative evidence lower bound (i.e., L1) instead of L. Nevertheless, we argue that converging to L1 is sufficiently good for variational approaches in general. Besides, Theorem 1 states that AdVIL can at least decrease L1 in expectation if the assumption holds for finitely many steps. Indeed, we empirically justify Theorem 1, as detailed in Appendix E.1. Theorem 1 also provides insights for the implementation of AdVIL. Its assumption motivates us to use a sufficiently powerful q(v, h) with neural networks and auxiliary variables, and to update q(v, h) multiple times per update of P(v, h), as detailed in Sec. 3.3 and Sec. 5.1, respectively. 3.3 SPECIFYING THE VARIATIONAL DISTRIBUTIONS To get samples efficiently, both variational distributions are directed models. We use a directed neural network that maps v to h as the variational encoder Q(h|v) (Kingma & Welling, 2013). As for the variational decoder, we first factorize it as the product of a prior over h and a conditional distribution, namely q(v, h) = q(v|h)q(h). It is nontrivial to specify the prior q(h) because the marginal distribution of h in the MRF, i.e. P(h), can be correlated across different dimensions. Consequently, a simple q(h) is not flexible enough to track P(h) and can violate the condition of Lemma 1.
To this end, we introduce an auxiliary variable z, which can be discrete or continuous, on top of h, and define q(v, h) = ∫_z q(z)q(h|z)q(v|h)dz.² (See an illustration in Fig. 1.) However, the entropy term of q(v, h) is intractable because we need to integrate out the auxiliary variable z. Therefore, we introduce a third variational distribution r(z|h) to approximate the entropy of q(v, h). As in Eqn. (7), applying the standard variational trick gives an upper bound: -E_{q(v,h)} log q(v, h) ≤ -E_{q(v,h)} log q(v|h) - E_{q(h)r(z|h)} log[q(h, z)/r(z|h)], (10) which is unsatisfactory because the estimate is minimized w.r.t. r(z|h) while maximized w.r.t. q(v, h). Instead, after some transformations (see details in Appendix A), we obtain a lower bound: -E_{q(v,h)} log q(v, h) ≥ -E_{q(v,h)} log q(v|h) - E_{q(h,z)} log[q(h, z)/r(z|h)]. (11) Equality holds if and only if r(z|h) = q(z|h) for all h. The difference between the two bounds is subtle: the last expectation in Eqn. (10) is over q(h)r(z|h), whereas that in Eqn. (11) is over q(h, z). Here, a lower bound is preferable because the estimate is maximized with respect to both r(z|h) and q(v, h), so we can train them simultaneously. For simplicity, we absorb the trainable parameters of r(z|h) into ψ. Note that after introducing z and r(z|h), we can still obtain a convergence theorem for AdVIL under the conditions that r(z|h) approximates q(z|h) well and that q(v, h) = ∫ q(v, h, z)dz is sufficiently close to P(v, h) at every step, together with the assumptions of general stochastic optimization. Following GAN (Goodfellow et al., 2014), we optimize θ, φ and ψ jointly using stochastic gradient descent (SGD) in an alternating manner.
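The entropy lower bound in Eqn. (11) can likewise be verified by enumeration on a toy discrete decoder: with the exact q(z|h) plugged in for r(z|h) the bound should be tight, and any other r should only lower it. A sketch under illustrative parameters (two-dimensional binary v, h, and a binary auxiliary z, none of which come from the paper):

```python
import itertools
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bern(p, x):
    # probability of binary vector x under a factorized Bernoulli with means p
    return float(np.prod(p ** x * (1.0 - p) ** (1.0 - x)))

rng = np.random.default_rng(3)
Hs = [np.array(h, float) for h in itertools.product([0, 1], repeat=2)]
Vs = [np.array(v, float) for v in itertools.product([0, 1], repeat=2)]

qz = np.array([0.4, 0.6])                   # auxiliary prior q(z), z in {0, 1}
ph_z = np.array([[0.2, 0.7], [0.9, 0.3]])   # q(h = 1 | z), one row per z
M = rng.standard_normal((2, 2))
d = rng.standard_normal(2)
pv = lambda h: sigmoid(h @ M + d)           # q(v = 1 | h)

q_hz = {(tuple(h), z): qz[z] * bern(ph_z[z], h) for h in Hs for z in (0, 1)}
q_h = {tuple(h): q_hz[(tuple(h), 0)] + q_hz[(tuple(h), 1)] for h in Hs}
q_vh = {(tuple(v), tuple(h)): q_h[tuple(h)] * bern(pv(h), v)
        for v in Vs for h in Hs}

# Exact entropy of the decoder joint q(v, h), with z integrated out
H_exact = -sum(p * np.log(p) for p in q_vh.values())

def bound(r):
    # Right-hand side of Eqn. (11) with variational r(z|h)
    t1 = -sum(q_vh[(tuple(v), tuple(h))] * np.log(bern(pv(h), v))
              for v in Vs for h in Hs)
    t2 = -sum(q_hz[(tuple(h), z)] * np.log(q_hz[(tuple(h), z)] / r(h, z))
              for h in Hs for z in (0, 1))
    return t1 + t2

r_exact = lambda h, z: q_hz[(tuple(h), z)] / q_h[tuple(h)]  # the true q(z|h)
r_bad = lambda h, z: 0.5                                    # a deliberately crude r
```

With `r_exact` the bound recovers the exact entropy, which is the equality case stated after Eqn. (11).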
The partial derivatives w.r.t. φ and ψ are estimated via the reparameterization trick (Kingma & Welling, 2013) for continuous variables and the Gumbel-Softmax trick (Jang et al., 2016; Maddison et al., 2016) for discrete variables. See Algorithm 1 in Appendix B for the whole training procedure. Note that ψ is updated K1 > 1 times per update of θ. 4 RELATED WORK Existing traditional methods (Neal, 2001; Hinton, 2002; Winn & Bishop, 2005; Wainwright & Jordan, 2006; Rother et al., 2007) can estimate the log partition function but are nontrivial to extend to learning general MRFs. Some methods (Winn & Bishop, 2005; Neal, 2001) require an expensive inference procedure for each update of the model, and others (Hinton, 2002; Rother et al., 2007) cannot be directly applied to general cases (e.g., DBM). Among these methods, contrastive divergence (CD) (Hinton, 2002) is proven effective in certain types of models and is closely related to AdVIL. Indeed, the partial derivative w.r.t. θ in AdVIL is: ∂L2(θ, φ, ψ)/∂θ = E_{P_D(v)Q(h|v)}[∂E(v, h)/∂θ] - E_{q(v,h)}[∂E(v, h)/∂θ], (12) which also naturally involves a positive phase and a negative phase and is quite similar to Eqn. (3). Notably, however, the two phases average over (v, h) pairs and only require knowledge of the energy function, without any further assumption on the model in AdVIL. Therefore, AdVIL is more suitable for general MRFs than CD (see empirical evidence in Sec. 5.3). In the context of black-box learning in MRFs, AdVIL competes directly with NVIL (Kuleshov & Ermon, 2017). It may seem that the upper bound in Eqn. (5) is suitable for optimization because P and q share the same training direction. However, the bound holds only if the support of P̃ is a subset of the support of q. Further, the Monte Carlo estimate of the upper bound is of high variance.
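The Gumbel-Softmax relaxation used above for the discrete variables can be sketched in a few lines; the logits and temperature below are illustrative, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """One relaxed one-hot draw (Jang et al., 2016; Maddison et al., 2016)."""
    u = rng.random(logits.shape)
    g = -np.log(-np.log(u))        # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()                # numerically stable softmax
    e = np.exp(y)
    return e / e.sum()

sample = gumbel_softmax(np.log(np.array([0.2, 0.3, 0.5])), tau=0.5)
```

As the temperature tau goes to zero the draw approaches a hard one-hot sample, while staying differentiable w.r.t. the logits for any tau > 0, which is what makes gradient estimation for the discrete parts of ψ and φ possible.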
Therefore, the bound of NVIL can easily be underestimated, which results in sub-optimal solutions (Kuleshov & Ermon, 2017). In contrast, though AdVIL arrives at a minimax optimization problem, the estimate in Eqn. (8) is tighter and of lower variance. We empirically verify this argument (see Fig. 3) and systematically compare the two methods (see Tab. 1) in Sec. 5.4. ²An alternative way is to use an autoregressive model as q(h); see results and analysis in Appendix E.3. Figure 2: Curves of AdVIL on Digits. Panels: (a) upper bound of F(v); (b) lower bound of H(q); (c) lower bound of log Z; (d) RBM loss and NLL. (a-c) compare the values of the variational approximations and the corresponding ground truths; all bounds are rather tight after 5,000 iterations. (d) shows that the RBM loss (i.e., the loss of θ as in Eqn. (8)) tends to zero and the model converges gradually. Apart from the work on approximate inference and learning in MRFs mentioned above, AdVIL is also related to some directed models. Kim & Bengio (2016) jointly train a deep energy model (Ngiam et al., 2011) and a directed generative model by minimizing the KL-divergence between them. Similar ideas have been highlighted in (Finn et al., 2016; Zhai et al., 2016; Dai et al., 2017; Liu & Wang, 2017). In comparison, firstly, AdVIL obtains its objective function from a unified perspective on black-box inference and learning in general MRFs. Note that dealing with latent variables in MRFs is nontrivial (Kim & Bengio, 2016), and therefore existing work focuses on fully observable models.
Secondly, AdVIL uses a sophisticated decoder with auxiliary variables to handle the latent variables and derives a principled variational approximation of the entropy term instead of heuristics (Kim & Bengio, 2016; Zhai et al., 2016). Lastly, the convergence of AdVIL is formally characterized by Theorem 1, while the effect of the approximation error in inference is not well understood in existing methods. Adversarially learned inference (ALI) (Donahue et al., 2016; Dumoulin et al., 2016) is also formulated as a minimax optimization problem but focuses on directed models. 5 EXPERIMENTS In this section, we evaluate AdVIL on restricted Boltzmann machines (RBM) (Ackley et al., 1985), deep Boltzmann machines (DBM) (Salakhutdinov & Hinton, 2009) and Gaussian restricted Boltzmann machines (GRBM) (Hinton & Salakhutdinov, 2006), using the Digits dataset, the UCI binary databases (Dheeru & Karra, 2017) and the Frey faces dataset (see detailed settings in Appendix D and the source code³). We compare AdVIL with strong baseline methods systematically and show the promise of AdVIL for learning a broad family of models effectively as a black-box method. 5.1 EMPIRICAL ANALYSIS OF ADVIL We present a detailed analysis of AdVIL on an RBM, whose energy function is defined as E(v, h) = -b⊤v - v⊤Wh - c⊤h. The conditional distributions of an RBM are tractable, but we still treat P(h|v) as unknown and train AdVIL in a fully black-box manner. The analysis is performed on the Digits dataset, and we augment the data five-fold by shifting the digits, following the protocol in (Kuleshov & Ermon, 2017). The dimensions of v, h and z are 64, 15 and 10, respectively. Therefore, the log partition function of the RBM and the entropy of the decoder can be computed by brute force. Firstly, we empirically validate AdVIL in Fig. 2. Specifically, Panel (a) shows that the variational encoder Q(h|v) provides a tight upper bound of the free energy after 2,000 iterations.
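For an RBM the free energy also has the closed form F(v) = -b⊤v - Σ_j softplus(c_j + (W⊤v)_j), so the brute-force computation of log Z mentioned above is straightforward on a small model. A sketch with illustrative dimensions (much smaller than the 64/15/10 used in the paper):

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

# A small RBM whose log partition function is computable by brute force
nv, nh = 5, 3
W = rng.standard_normal((nv, nh))
b = rng.standard_normal(nv)
c = rng.standard_normal(nh)

def energy(v, h):
    return -(b @ v + v @ W @ h + c @ h)

def free_energy_brute(v):
    # F(v) = -log sum_h exp(-E(v, h)), summing over all 2^nh hidden states
    hs = [np.array(h, float) for h in itertools.product([0, 1], repeat=nh)]
    return -np.logaddexp.reduce([-energy(v, h) for h in hs])

def free_energy_closed(v):
    # RBM closed form: F(v) = -b^T v - sum_j softplus(c_j + (W^T v)_j)
    return -(b @ v) - np.sum(np.logaddexp(0.0, c + v @ W))

v = rng.integers(0, 2, nv).astype(float)
# log Z = log sum_v exp(-F(v)), enumerated over all 2^nv visible states
log_z = np.logaddexp.reduce(
    [-free_energy_closed(np.array(x, float))
     for x in itertools.product([0, 1], repeat=nv)]
)
```

The two free-energy implementations should agree to machine precision, which is the kind of ground-truth check that makes the Digits analysis in Fig. 2 possible.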
Panel (b) demonstrates that the variational distribution r(z|h) estimates the entropy of q(v, h) accurately. Panel (c) shows that q(v, h) can successfully track the log partition function after 5,000 iterations. Panel (d) shows that the RBM loss balances the negative and positive phases well, and the model converges gradually. See Appendix E.1 for an empirical test of the condition in Lemma 1. Secondly, we empirically show that both P and q can generate data samples in Appendix E.2. Lastly, we analyze the sensitivity of K1. Theoretically, enlarging K1 brings q(v, h) and P(v, h) closer and thus helps convergence according to Theorem 1. As shown in Fig. 3 (a), a larger K1 at least does not hurt convergence, which agrees with Theorem 1. Though K1 = 15 is sufficient on the Digits dataset, we use K1 = 100 as the default setting for AdVIL on larger datasets. ³See the source code in https://anonymous.4open.science/r/8c779fbc-6394-40c7-8273-e52504814703/. Figure 3: (a) Sensitivity analysis of K1 on the Digits dataset; (b) NVIL; (c) AdVIL; (d) PCD-1. (b-d) Learning curves of NVIL, AdVIL and CD on the Mushrooms dataset. Compared to NVIL, AdVIL provides a tighter and lower-variance estimate of log Z and achieves better performance. Compared to PCD-1 and CD-10, AdVIL can track the log partition function and achieve comparable results though trained in a black-box manner. Table 1: Annealed importance sampling (AIS) results in RBM.
The results are recorded on the test set according to the best validation performance and averaged over three runs. AdVIL outperforms NVIL consistently and significantly. See the standard deviations in Appendix E.5. Method: Digits / Adult / Connect4 / DNA / Mushrooms / NIPS-0-12 / Ocr-letters / RCV1. NVIL-mean: -27.36 / -20.05 / -24.71 / -97.71 / -29.28 / -290.01 / -47.56 / -50.47. AdVIL-mean: -26.34 / -19.29 / -21.95 / -97.59 / -19.59 / -276.42 / -45.64 / -50.22. 5.2 RBM RESULTS To the best of our knowledge, NVIL (Kuleshov & Ermon, 2017) is the only existing black-box learning method for MRFs and hence the most direct competitor of AdVIL. In this section, we provide a systematic comparison and analysis of the two methods in terms of log-likelihood results on the UCI databases (Dheeru & Karra, 2017). For a fair comparison, we use the widely adopted annealed importance sampling (AIS) (Salakhutdinov & Murray, 2008) metric for quantitative evaluation. Besides, we carefully perform a grid search over the default settings of NVIL (Kuleshov & Ermon, 2017) and our settings based on their code, and choose the best configuration, including K1 = 100 (see details in Appendix D). We directly compare with the best version of NVIL in Tab. 1. AdVIL consistently outperforms NVIL on all datasets, which demonstrates the effectiveness of AdVIL. Besides, the time complexity of AdVIL is comparable to that of NVIL with the same hyperparameters. We compare the learning curves of NVIL and AdVIL on the Mushrooms dataset. As shown in Fig. 3 (b), the upper bound of NVIL is underestimated after 4,000 iterations, after which the model can get worse or even diverge. In contrast, as shown in Fig. 3 (c), the lower bound of AdVIL remains consistently valid. Besides, the estimate of NVIL is looser and of higher variance than that of AdVIL. The results agree with our analysis in Sec. 4 and explain why AdVIL significantly outperforms NVIL.
Further, as shown in Fig. 3 (d), AdVIL is comparable to CD-10 and persistent contrastive divergence (PCD) (Tieleman, 2008), which leverage the tractability of the conditional distributions in an RBM. 5.3 DBM RESULTS We would like to demonstrate that AdVIL can deal with highly intractable models such as a DBM conveniently and effectively, compared to standard CD-based methods (Hinton, 2002; Welling & Hinton, 2002; Welling & Sutton, 2005) and NVIL (Kuleshov & Ermon, 2017). DBM (Salakhutdinov & Hinton, 2009) is a powerful family of deep models that stack multiple RBMs together. The energy function of a two-layer DBM is defined as E(v, h1, h2) = -b⊤v - v⊤W1h1 - c1⊤h1 - h1⊤W2h2 - c2⊤h2. Learning a DBM is challenging because P(h1, h2|v) is intractable and CD (Hinton, 2002) is not applicable. Inspired by (Welling & Hinton, 2002; Welling & Sutton, 2005), we construct a variational CD (VCD) baseline by employing the same variational encoder Q(h1, h2|v) as in AdVIL. The free energy is approximated by the same upper bound as in Eqn. (7), which is minimized with respect to the parameters of Q(h1, h2|v). The gradient of the parameters of the DBM is given by Eqn. (4), where the Gibbs sampling procedure is approximated by h1 ~ Q(h1|v) and v ~ P(v|h1). Note that AdVIL can be directly applied to this case. Table 2: AIS results in DBM. The results are recorded according to the best validation performance and averaged over three runs. AdVIL achieves higher averaged AIS results on five out of eight datasets and has a better overall performance than VCD. See the standard deviations in Appendix E.5. Method: Digits / Adult / Connect4 / DNA / Mushrooms / NIPS-0-12 / Ocr-letters / RCV1. VCD-mean: -28.49 / -22.26 / -26.79 / -97.59 / -23.15 / -356.26 / -45.77 / -50.83. AdVIL-mean: -27.89 / -20.29 / -26.34 / -99.40 / -21.21 / -287.15 / -48.38 / -51.02. Figure 4: Filters and samples of a GRBM learned by AdVIL on the Frey faces dataset. (a) presents the training data. (b) presents the first 40 filters of the GRBM. (c) and (d) show random samples from the variational decoder and the GRBM, respectively. We present the mean of v for better visualization. As for the time complexity, the training speed of AdVIL is around ten times slower than that of VCD in our implementation. However, the approximate inference and sampling procedures of AdVIL are very efficient thanks to the directed variational distributions. The log-likelihood results on the UCI databases are shown in Tab. 2. AdVIL has a better overall performance even when trained in a black-box manner, which shows its promise. See Appendix E.4 for learning curves and a detailed analysis of the results. We also extend NVIL by using the same Q(h1, h2|v) and q(v, h1, h2) as AdVIL. However, NVIL diverges after 300 iterations and yields poor AIS results (e.g., less than -40 on Digits) in our implementation. A potential reason is that the upper bound given by q in NVIL can be underestimated when q is high-dimensional, as analyzed in Sec. 4 and Fig. 3. Note that q(v, h1, h2) in a DBM involves latent variables and has a higher dimension (e.g., 164 on the Digits dataset) than q(v) in an RBM (e.g., 64 on the Digits dataset). The results again demonstrate the advantages of AdVIL over NVIL. 5.4 GRBM RESULTS We now show the ability of AdVIL to learn a GRBM on the continuous Frey faces dataset.
The energy function of a GRBM is E(v, h) = (1/(2σ²))||v - b||² - c⊤h - (1/σ)v⊤Wh, where σ is the standard deviation of the Gaussian likelihood and is manually set to 1. We standardize the data by subtracting the mean and dividing by the standard deviation. The dimensions of h and z are 200 and 50, respectively. Though a GRBM is more sensitive to the hyperparameters, and hence harder to train, than an RBM (Cho et al., 2011; 2013), AdVIL successfully captures the underlying data distribution using the default hyperparameters (see Appendix D). As shown in Fig. 4, the samples from both the GRBM (via Gibbs sampling after 100,000 burn-in steps) and the decoder are meaningful faces. Besides, the filters of the GRBM outline diverse prototypes of faces, which accords with our expectation. In summary, the results on the three models together demonstrate that AdVIL can learn a broad family of models conveniently and effectively in a fully black-box manner. 6 CONCLUSION AND DISCUSSION A novel black-box learning and inference method for undirected graphical models, called adversarial variational inference and learning (AdVIL), is proposed. The key to AdVIL is a double variational trick that approximates the negative free energy and the log partition function separately. A formal convergence theorem, which provides insights for implementation, is established for AdVIL. Empirical results show that AdVIL can deal with a broad family of MRFs in a fully black-box manner and outperforms both the standard contrastive divergence method and the black-box NVIL algorithm. Though AdVIL shows promising results, we emphasize that black-box learning and inference in MRFs are far from completely solved, especially on high-dimensional data.
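The GRBM energy above yields logistic hidden conditionals, P(h_j = 1 | v) = sigmoid(c_j + (W⊤v)_j / σ), which can be checked directly against the Boltzmann ratio of the energy. A sketch with illustrative dimensions (not the 200/50 used in the experiment):

```python
import numpy as np

rng = np.random.default_rng(5)

# GRBM energy from the text:
# E(v, h) = ||v - b||^2 / (2 sigma^2) - c^T h - v^T W h / sigma
nv, nh, sigma = 4, 3, 1.0
W = rng.standard_normal((nv, nh))
b = rng.standard_normal(nv)
c = rng.standard_normal(nh)

def energy(v, h):
    return np.sum((v - b) ** 2) / (2.0 * sigma ** 2) - c @ h - (v @ W @ h) / sigma

def p_h_given_v(v):
    # Each hidden unit is a logistic unit given v
    return 1.0 / (1.0 + np.exp(-(c + (v @ W) / sigma)))

# Check the j = 0 conditional against the Boltzmann ratio; the hidden units
# factorize given v, so fixing the others to zero does not change the ratio
v = rng.standard_normal(nv)
h0 = np.zeros(nh)
h1 = np.zeros(nh)
h1[0] = 1.0
ratio = np.exp(-energy(v, h1)) / (np.exp(-energy(v, h0)) + np.exp(-energy(v, h1)))
```

This tractable conditional is what the Gibbs sampler with 100,000 burn-in steps alternates with the Gaussian conditional over v.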
The two intractability 8 \fPublished as a conference paper at ICLR 2020 problems of MRFs are distinct since the posterior of the latent variables is local in terms of v but the partition function is global by integrating out v. The additional integral makes estimating the partition function much more challenging. In AdVIL, simply increasing the number of updates of the decoder to obtain a tighter estimate of the partition function on high-dimensional data can be expensive. A potential future work to avoid the problem is adopting recent advances on non-convex optimization (Dauphin et al., 2014; Reddi et al., 2016; Wang et al., 2017) to accelerate the inner loop optimization. We conjecture that AdVIL is comparable to CD in RBM and superior to VCD in DBM on larger datasets if AdVIL can be trained to nearly converge based on our current results. ACKNOWLEDGEMENTS This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U1811461), Beijing NSF Project (No. L172037), Beijing Academy of Arti\ufb01cial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program and the NVIDIA NVAIL Program with GPU/DGX Acceleration. C. Li was supported by the Chinese postdoctoral innovative talent support program and Shuimu Tsinghua Scholar." + }, + { + "url": "http://arxiv.org/abs/1804.03429v2", + "title": "Graphical Generative Adversarial Networks", + "abstract": "We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model\nstructured data. Graphical-GAN conjoins the power of Bayesian networks on\ncompactly representing the dependency structures among random variables and\nthat of generative adversarial networks on learning expressive dependency\nfunctions. 
We introduce a structured recognition model to infer the posterior\ndistribution of latent variables given observations. We generalize the\nExpectation Propagation (EP) algorithm to learn the generative model and\nrecognition model jointly. Finally, we present two important instances of\nGraphical-GAN, i.e. Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN),\nwhich can successfully learn the discrete and temporal structures on visual\ndatasets, respectively.", + "authors": "Chongxuan Li, Max Welling, Jun Zhu, Bo Zhang", + "published": "2018-04-10", + "updated": "2018-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "main_content": "Introduction Deep implicit models [29] have shown promise on synthesizing realistic images [10, 33, 2] and inferring latent variables [26, 11]. However, these approaches do not explicitly model the underlying structures of the data, which are common in practice (e.g., temporal structures in videos). Probabilistic graphical models [18] provide principle ways to incorporate the prior knowledge about the data structures but these models often lack the capability to deal with the complex data like images. To conjoin the bene\ufb01ts of both worlds, we propose a \ufb02exible generative modelling framework called Graphical Generative Adversarial Networks (Graphical-GAN). On one hand, Graphical-GAN employs Bayesian networks [18] to represent the structures among variables. On the other hand, Graphical-GAN uses deep implicit likelihood functions [10] to model complex data. Graphical-GAN is suf\ufb01ciently \ufb02exible to model structured data but the inference and learning are challenging due to the presence of deep implicit likelihoods and complex structures. We build a structured recognition model [17] to approximate the true posterior distribution. We study two families of the recognition models, i.e. the mean \ufb01eld posteriors [14] and the inverse factorizations [39]. 
We generalize the Expectation Propagation (EP) [27] algorithm to learn the generative model and recognition model jointly. Motivated by EP, we minimize a local divergence between the generative model and recognition model for each individual local factor de\ufb01ned by the generative model. The local divergences are estimated via the adversarial technique [10] to deal with the implicit likelihoods. Given a speci\ufb01c scenario, the generative model is determined a priori by context or domain knowledge and the proposed inference and learning algorithms are applicable to arbitrary Graphical-GAN. As instances, we present Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN) to learn the discrete and temporal structures on visual datasets, respectively. Empirically, these models can \u2217Department of Computer Science & Technology, Institute for Arti\ufb01cial Intelligence, BNRist Center, THBI Lab, State Key Lab for Intell. Tech. & Sys., Tsinghua University. Correspondence to: J. Zhu. \u2020University of Amsterdam, and the Canadian Institute for Advanced Research (CIFAR). 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montr\u00e9al, Canada. arXiv:1804.03429v2 [cs.LG] 12 Dec 2018 \f(a) Generative models (b) Recognition models Figure 1: (a) Generative models of GMGAN (left panel) and SSGAN (right panel). (b) Recognition models of GMGAN and SSGAN. The grey and white units denote the observed and latent variables, respectively. The arrows denote dependencies between variables. \u03b8 and \u03c6 denote the parameters in the generative model and recognition model, respectively. We omit \u03b8 and \u03c6 in SSGAN for simplicity. infer the latent structures and generate structured samples. Further, Graphical-GAN outperforms the baseline models on inference, generation and reconstruction tasks consistently and substantially. 
Overall, our contributions are: (1) we propose Graphical-GAN, a general generative modelling framework for structured data; (2) we present two instances of Graphical-GAN to learn the discrete and temporal structures, respectively; and (3) we empirically evaluate Graphical-GAN on generative modelling of structured data and achieve good qualitative and quantitative results. 2 General Framework In this section, we present the model de\ufb01nition, inference method and learning algorithm. 2.1 Model De\ufb01nition Let X and Z denote the observable variables and latent variables, respectively. We assume that we have N i.i.d. samples from a generative model with the joint distribution pG(X, Z) = pG(Z)pG(X|Z), where G is the associated directed acyclic graph (DAG). According to the local structures of G, the distribution of a single data point can be further factorized as follows: pG(X, Z) = |Z| Y i=1 p(zi|paG(zi)) |X| Y j=1 p(xj|paG(xj)), (1) where paG(x) denotes the parents of x in the associated graph G. Note that only latent variables can be the parent of a latent variable and see Fig 1 (a) for an illustration. Following the factorization in Eqn. (1), we can sample from the generative model ef\ufb01ciently via ancestral sampling. Given the dependency structures, the dependency functions among the variables can be parameterized as deep neural networks to \ufb01t complicated data. As for the likelihood functions, we consider implicit probabilistic models [29] instead of prescribed probabilistic models. Prescribed models [17] de\ufb01ne the likelihood functions for X with an explicit speci\ufb01cation. In contrast, implicit models [10] deterministically transform Z to X and the likelihood can be intractable. We focus on implicit models because they have been proven effective on image generation [33, 2] and the learning algorithms for implicit models can be easily extended to prescribed models. We also directly compare with existing structured prescribed models [7] in Sec. 
5.1. Following the well established literature, we refer to our model as Graphical Generative Adversarial Networks (Graphical-GAN). The inference and learning of Graphical-GAN are nontrivial. On one hand, Graphical-GAN employs deep implicit likelihood functions, which makes the inference of the latent variables intractable and the likelihood-based learning method infeasible. On the other hand, Graphical-GAN involves complex structures, which requires the inference and learning algorithm to exploit the structural information explicitly. To address the problems, we propose structured recognition models and a sample-based massage passing algorithm, as detailed in Sec. 2.2 and Sec. 2.3, respectively. 2 \f2.2 Inference Method We leverage recent advances on amortized inference of deep generative models [17, 9, 8, 40] to infer the latent variables given the data. Basically, these approaches introduce a recognition model, which is a family of distributions of a simple form, to approximate the true posterior. The recognition model is shared by all data points and often parameterized as a deep neural network. The problem is more complicated in our case because we need to further consider the graphical structure during the inference procedure. Naturally, we introduce a structured recognition model with an associated graph H as the approximate posterior, whose distribution is formally given by: qH(Z|X) = |Z| Y i=1 q(zi|paH(zi)). (2) Given data points from the true data distribution q(X), we can obtain samples following the joint distribution qH(X, Z) = q(X)qH(Z|X) ef\ufb01ciently via ancestral sampling. Considering different dependencies among the variables, or equivalently Hs, we study two types of recognition models: the mean-\ufb01eld posteriors [14] and the inverse factorizations [39]. The mean-\ufb01eld assumption has been widely adopted to variational inference methods [14] because of its simplicity. 
In such methods, all of the dependency structures among the latent variables are ignored and the approximate posterior is factorized as follows:

q_H(Z|X) = \prod_{i=1}^{|Z|} q(z_i \mid X),   (3)

where the associated graph H has fully factorized structures. The inverse factorizations [39] approach views the original graphical model as a forward factorization and samples the latent variables given the observations efficiently by inverting G step by step. Formally, the inverse factorization is defined as follows:

q_H(Z|X) = \prod_{i=1}^{|Z|} q(z_i \mid \partial_G(z_i) \cap z_{>i}),   (4)

where \partial_G(z_i) denotes the Markov blanket of z_i on G and z_{>i} denotes all z after z_i in a certain order, which is defined from leaves to root according to the structure of G. See the formal algorithm to build H based on G in Appendix A. Given the structure of the approximate posterior, we also parameterize the dependency functions as neural networks of similar sizes to those in the generative models. Both posterior families are generally applicable for arbitrary Graphical-GANs and we use them in the two instances, respectively. See Fig. 1 (b) for an illustration.

2.3 Learning Algorithm

Let θ and φ denote the parameters in the generative model p and the recognition model q, respectively. Our goal is to learn θ and φ jointly via divergence minimization, which is formulated as:

\min_{\theta, \phi} D(q(X, Z) \| p(X, Z)),   (5)

where we omit the subscripts of the associated graphs in p and q for simplicity. We restrict D to the f-divergence family [5], that is, D(q(X, Z) \| p(X, Z)) = \int p(X, Z) f\!\left(\frac{q(X, Z)}{p(X, Z)}\right) dX\, dZ, where f is a convex function of the likelihood ratio. The Kullback-Leibler (KL) divergence and the Jensen-Shannon (JS) divergence are included. Note that we cannot optimize Eqn. (5) directly because the likelihood ratio is unknown given an implicit p(X, Z).
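A discriminator makes the unknown ratio tractable: the Bayes-optimal binary classifier between samples of q and p satisfies D(x) = q(x)/(q(x)+p(x)), so q/p = D/(1-D) can be read off a trained discriminator. A small 1-D sketch, with a hand-rolled logistic discriminator on two Gaussians chosen so the true log-ratio is known (this toy setup is an illustration, not the paper's estimator):

```python
import math
import random

# Sketch: for implicit models the ratio q(x)/p(x) is unavailable, but the
# Bayes-optimal discriminator gives D(x) = q(x)/(q(x)+p(x)), i.e.
# q/p = D/(1-D). Here q = N(+1, 1) and p = N(-1, 1), whose true log-ratio
# is exactly 2x, so the logistic discriminator D(x) = sigmoid(w*x + b)
# should approach w = 2, b = 0.

def train_discriminator(xs_q, xs_p, steps=600, lr=0.1):
    w, b = 0.0, 0.0
    data = [(x, 1.0) for x in xs_q] + [(x, 0.0) for x in xs_p]
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            d = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (d - y) * x  # gradient of the logistic loss w.r.t. w
            gb += (d - y)      # gradient w.r.t. b
        w -= lr * gw / len(data)
        b -= lr * gb / len(data)
    return w, b

rng = random.Random(0)
xs_q = [rng.gauss(1.0, 1.0) for _ in range(1000)]   # samples from q
xs_p = [rng.gauss(-1.0, 1.0) for _ in range(1000)]  # samples from p
w, b = train_discriminator(xs_q, xs_p)  # expect w near 2, b near 0
```

The same sample-only principle is what the adversarial estimators discussed next exploit, with deep discriminators in place of this linear one.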
To this end, ALI [8, 9] introduces a parametric discriminator to estimate the divergence by discriminating the samples from the models. We can directly apply ALI to Graphical-GAN by treating all variables as a whole, and we refer to this as the global baseline (see Appendix B for the formal algorithm). The global baseline uses a single discriminator that takes all variables as input. It may be sub-optimal in practice because the capability of a single discriminator is insufficient to distinguish complex data, which makes the estimate of the divergence unreliable. Intuitively, the problem will be easier if we exploit the data structures explicitly when discriminating the samples. This intuition motivates us to propose a local algorithm in the spirit of Expectation Propagation (EP) [27], which is known as a deterministic approximation algorithm with analytic and computational advantages over other approximations, including Variational Inference [21]. Following EP, we start from the factorization of p(X, Z) in terms of a set of factors F_G:

p(X, Z) \propto \prod_{A \in F_G} p(A).   (6)

Generally, we can choose any reasonable F_G (see footnote 3), but here we specify that F_G consists of the families (x, \mathrm{pa}_G(x)) and (z, \mathrm{pa}_G(z)) for all x and z in the model. We assume that the recognition model can be factorized in the same way. Namely, we have

q(X, Z) \propto \prod_{A \in F_G} q(A).   (7)

Instead of minimizing Eqn. (5), EP iteratively minimizes a local divergence in terms of each factor individually. Formally, for factor A, we are interested in the following divergence [27, 28]:

D(q(A) q(\bar{A}) \| p(A) p(\bar{A})),   (8)

where p(\bar{A}) denotes the marginal distribution over the complement \bar{A} of A. EP [27] further assumes that q(\bar{A}) \approx p(\bar{A}) to make the expression tractable. Though the approximation cannot be justified theoretically, empirical results [28] suggest that the gap is small if the approximate posterior is a good fit to the true one. Given the approximation, for each factor A, the objective function changes to:

D(q(A) q(\bar{A}) \| p(A) q(\bar{A})).   (9)

Here we make the same assumption because q(\bar{A}) cancels in the likelihood ratio if D belongs to the f-divergence family, and we can ignore the other factors when checking factor A, which reduces the complexity of the problem. For instance, we can approximate the JS divergence for factor A as:

D_{JS}(q(X, Z) \| p(X, Z)) \approx \mathbb{E}_q\!\left[\log \frac{q(A)}{m(A)}\right] + \mathbb{E}_p\!\left[\log \frac{p(A)}{m(A)}\right],   (10)

where m(A) = \frac{1}{2}(p(A) + q(A)). See Appendix C for the derivation. As we are doing amortized inference, we further average the divergences over all local factors:

\frac{1}{|F_G|} \sum_{A \in F_G} \left[ \mathbb{E}_q\!\left[\log \frac{q(A)}{m(A)}\right] + \mathbb{E}_p\!\left[\log \frac{p(A)}{m(A)}\right] \right] = \frac{1}{|F_G|} \left[ \mathbb{E}_q\!\left[\sum_{A \in F_G} \log \frac{q(A)}{m(A)}\right] + \mathbb{E}_p\!\left[\sum_{A \in F_G} \log \frac{p(A)}{m(A)}\right] \right].   (11)

The equality holds due to the linearity of the expectation. The expression in Eqn. (11) provides an efficient solution where we obtain samples over the entire variable space once and repeatedly project the samples into each factor. Finally, we estimate the local divergences using individual discriminators, and the entire objective function is as follows:

\max_{\psi} \frac{1}{|F_G|} \mathbb{E}_q\!\left[\sum_{A \in F_G} \log D_A(A)\right] + \frac{1}{|F_G|} \mathbb{E}_p\!\left[\sum_{A \in F_G} \log(1 - D_A(A))\right],   (12)

where D_A is the discriminator for the factor A and ψ denotes the parameters in all discriminators. Though we assume that q(X, Z) shares the same factorization as p(X, Z) in Eqn. (7) when deriving the objective function, the result in Eqn. (12) does not specify the form of q(X, Z). This is because we do not need to compute q(A) explicitly; instead, we directly estimate the likelihood ratio based on samples. This makes it possible for Graphical-GAN to use an arbitrary q(X, Z), including the two recognition models presented in Sec. 2.2, as long as we can sample from it quickly. Given the divergence estimate, we perform stochastic gradient descent to update the parameters.
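To make the averaged per-factor objective of Eqn. (12) concrete, here is a minimal sketch with one discriminator per factor; the constant stand-in discriminators and the toy factor set are assumptions for illustration only:

```python
import math
import random

# Sketch of Eqn. (12): each factor A has its own discriminator D_A that
# only sees the variables in A; the per-factor log-losses are averaged
# over the factor set F_G. Discriminators here are stand-in callables.

def local_objective(factors, discriminators, q_samples, p_samples):
    """factors: {name: [var names]}; discriminators: {name: f(tuple)->(0,1)}.
    q_samples / p_samples: lists of {var: value} joint samples."""
    total = 0.0
    for name, vars_ in factors.items():
        D = discriminators[name]
        project = lambda s: tuple(s[v] for v in vars_)  # restrict sample to A
        term_q = sum(math.log(D(project(s))) for s in q_samples) / len(q_samples)
        term_p = sum(math.log(1.0 - D(project(s))) for s in p_samples) / len(p_samples)
        total += term_q + term_p
    return total / len(factors)

# Toy model with factors (z,) and (x, z). Constant discriminators D = 0.5
# make every factor contribute log(1/2) + log(1/2) = -log 4, the value at
# an ALI-style equilibrium, and so does the average.
factors = {"prior": ["z"], "likelihood": ["x", "z"]}
discs = {"prior": lambda a: 0.5, "likelihood": lambda a: 0.5}
rng = random.Random(0)
qs = [{"z": rng.gauss(0, 1), "x": rng.gauss(0, 1)} for _ in range(10)]
ps = [{"z": rng.gauss(0, 1), "x": rng.gauss(0, 1)} for _ in range(10)]
val = local_objective(factors, discs, qs, ps)
```

In training, each `D_A` would be a neural network updated to maximize this value while θ and φ minimize it, per Algorithm 1.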
We use the reparameterization trick [17] and the Gumbel-Softmax trick [12] to estimate the gradients with continuous and discrete random variables, respectively. We summarize the procedure in Algorithm 1.

Algorithm 1 Local algorithm for Graphical-GAN
repeat
- Get a minibatch of samples from p(X, Z)
- Get a minibatch of samples from q(X, Z)
- Approximate the divergence D(q(X, Z) \| p(X, Z)) using Eqn. (12) and the current value of ψ
- Update ψ to maximize the divergence
- Get a minibatch of samples from p(X, Z)
- Get a minibatch of samples from q(X, Z)
- Approximate the divergence D(q(X, Z) \| p(X, Z)) using Eqn. (12) and the current value of ψ
- Update θ and φ to minimize the divergence
until convergence or reaching a certain threshold

Footnote 3: For instance, we can specify that F_G has only one factor that involves all variables, which reduces to ALI.

3 Two Instances

We consider two common and typical scenarios involving structured data in practice. In the first one, the dataset consists of images with discrete attributes or classes, but the ground truth for an individual sample is unknown. In the second one, the dataset consists of sequences of images with temporal dependency within each sequence. We present two important instances of Graphical-GAN, i.e. Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN), to deal with these two scenarios, respectively. These instances show the abilities of our general framework to deal with discrete latent variables and complex structures, respectively.

GMGAN. We assume that the data consists of K mixtures and hence use a mixture of Gaussians prior. Formally, the generative process of GMGAN is:

k \sim \mathrm{Cat}(\pi), \quad h \mid k \sim \mathcal{N}(\mu_k, \Sigma_k), \quad x \mid h = G(h),

where Z = (k, h), and π and G are the coefficient vector and the generator, respectively. We assume that π and the Σ_k's are fixed as the uniform prior and identity matrices, respectively.
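GMGAN's ancestral sampling can be sketched as follows; the stand-in linear generator is a hypothetical placeholder for the deep network G used in the paper:

```python
import random

# Sketch of GMGAN's generative process: draw a mixture index k from a
# uniform categorical prior, draw h from N(mu_k, I) (diagonal identity
# covariance, sampled coordinate-wise), then decode x = G(h).

def sample_gmgan(mus, generator, rng):
    K = len(mus)
    k = rng.randrange(K)                      # k ~ Cat(1/K, ..., 1/K)
    h = [rng.gauss(m, 1.0) for m in mus[k]]   # h | k ~ N(mu_k, I)
    x = generator(h)                          # x | h = G(h), deterministic
    return k, h, x

mus = [[-3.0, -3.0], [0.0, 0.0], [3.0, 3.0]]  # trainable means, K = 3
generator = lambda h: [2.0 * v + 1.0 for v in h]  # illustrative stand-in G
k, h, x = sample_gmgan(mus, generator, random.Random(0))
```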
Namely, we only have a few extra trainable parameters, i.e. the means of the mixtures, μ_k. We use the inverse factorization as the recognition model because it preserves the dependency relationships in the model. The resulting approximate posterior is a simple inverse chain:

h \mid x = E(x), \quad q(k \mid h) = \frac{\pi_k \mathcal{N}(h \mid \mu_k, \Sigma_k)}{\sum_{k'} \pi_{k'} \mathcal{N}(h \mid \mu_{k'}, \Sigma_{k'})},

where E is the extractor that maps data points to the latent variables. In the global baseline, a single network is used to discriminate the (x, h, k) tuples. In our local algorithm, two separate networks are introduced to discriminate the (x, h) and (h, k) pairs, respectively.

SSGAN. We assume that there are two types of latent variables. One is invariant across time, denoted as h, and the other varies across time, denoted as v_t for time stamps t = 1, ..., T. Further, SSGAN assumes that the v_t form a Markov chain. Formally, the generative process of SSGAN is:

v_1 \sim \mathcal{N}(0, I), \quad h \sim \mathcal{N}(0, I), \quad \epsilon_t \sim \mathcal{N}(0, I), \ \forall t = 1, 2, ..., T - 1,
v_{t+1} \mid v_t = O(v_t, \epsilon_t), \ \forall t = 1, 2, ..., T - 1, \quad x_t \mid h, v_t = G(h, v_t), \ \forall t = 1, 2, ..., T,

where Z = (h, v_1, ..., v_T), and O and G are the transition operator and the generator, respectively. They are shared across time under the stationary and output-independent assumptions, respectively. For simplicity, we use the mean-field recognition model as the approximate posterior:

h \mid x_1, x_2, ..., x_T = E_1(x_1, x_2, ..., x_T), \quad v_t \mid x_1, x_2, ..., x_T = E_2(x_t), \ \forall t = 1, 2, ..., T,

where E_1 and E_2 are the extractors that map the data points to h and v, respectively. E_2 is also shared across time. In the global baseline, a single network is used to discriminate the (x_1, ..., x_T, v_1, ..., v_T, h) samples. In our local algorithm, two separate networks are introduced to discriminate the (v_t, v_{t+1}) pairs and (x_t, v_t, h) tuples, respectively. Both networks are shared across time as well.
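The SSGAN generative process can be sketched in the same way; the AR(1)-style transition operator and tuple-valued generator below are illustrative stand-ins for the shared networks O and G:

```python
import random

# Sketch of SSGAN's generative process: a time-invariant content code h,
# a Markov chain v_1 -> ... -> v_T of time-varying codes driven by the
# transition operator O, and frames x_t = G(h, v_t). O and G are shared
# across time, matching the stationarity assumptions in the text.

def sample_ssgan(T, O, G, rng):
    h = rng.gauss(0.0, 1.0)                   # h ~ N(0, 1), invariant code
    v = [rng.gauss(0.0, 1.0)]                 # v_1 ~ N(0, 1)
    for _ in range(T - 1):
        eps = rng.gauss(0.0, 1.0)             # eps_t ~ N(0, 1)
        v.append(O(v[-1], eps))               # v_{t+1} = O(v_t, eps_t)
    frames = [G(h, vt) for vt in v]           # x_t = G(h, v_t)
    return h, v, frames

O = lambda v, eps: 0.9 * v + 0.1 * eps        # illustrative stand-in transition
G = lambda h, v: (h, v)                       # illustrative stand-in generator
h, v, frames = sample_ssgan(T=16, O=O, G=G, rng=random.Random(0))
```

Because only O is applied per step, the same trained chain can be unrolled past the training length T, which is what the long-sequence experiments in Sec. 5.2 exploit.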
4 Related Work

General framework. The work of [13, 16, 22] are the closest papers on structured deep generative models. Johnson et al. [13] introduce structured Bayesian priors to Variational Auto-Encoders (VAE) [17] and propose efficient inference algorithms with conjugate exponential family structure. Lin et al. [22] consider a similar model as in [13] and derive an amortized variational message passing algorithm to simplify and generalize [13]. Compared to [13, 22], Graphical-GAN is more flexible in the model definition and learning methods, and hence can deal with natural data. Adversarial Message Passing (AMP) [16] also considers structured implicit models, but there exist several key differences that make our work unique. Theoretically, Graphical-GAN and AMP optimize different local divergences. As presented in Sec. 2.3, we follow the recipe of EP precisely to optimize D(q(A) q(\bar{A}) \| p(A) q(\bar{A})) and naturally derive our algorithm that involves only the factors defined by p(X, Z), e.g. A = (z_i, \mathrm{pa}_G(z_i)). On the other hand, AMP optimizes another local divergence D(q(A') \| p(A)), where A' is a factor defined by q(X, Z), e.g. A' = (z_i, \mathrm{pa}_H(z_i)). In general, A' can be different from A because the DAGs G and H have different structures. Further, this theoretical difference really matters in practice. In AMP, the two factors involved in the local divergence are defined over different domains and hence may have different dimensionalities in general. Therefore, it remains unclear how to implement AMP (see footnote 4), because a discriminator cannot take two types of inputs with different dimensionalities. In fact, no empirical evidence is reported in AMP [16]. In contrast, Graphical-GAN is easy to implement by considering only the factors defined by p(X, Z) and achieves excellent empirical results (see Sec. 5). There is much work on the learning of implicit models. f-GAN [31] and WGAN [2] generalize the original GAN using the f-divergence and the Wasserstein distance, respectively. The work of [40] minimizes a penalized form of the Wasserstein distance from the optimal transport point of view and naturally considers generative modelling and inference together. The Wasserstein distance can also be used in Graphical-GAN to generalize our current algorithms, and we leave this for future work. The recent work of [34] and [41] performs Bayesian learning for GANs. In comparison, Graphical-GAN focuses on learning a probabilistic graphical model with latent variables instead of posterior inference over global parameters.

Instances. Several methods have learned discrete structures in an unsupervised manner. Makhzani et al. [24] extend an autoencoder to a generative model by matching the aggregated posterior to a prior distribution and show the ability to cluster handwritten digits. [4] introduce some interpretable codes independently from the other latent variables and regularize the original GAN loss with the mutual information between the codes and the data. In contrast, GMGAN explicitly builds a hierarchical model with top-level discrete codes and no regularization is required. The most direct competitor [7] extends VAE [17] with a mixture of Gaussians prior and is compared with GMGAN in Sec. 5.1. There exist extensive prior methods on synthesizing videos, but most of them condition on input frames [38, 32, 25, 15, 46, 43, 6, 42]. Three of these methods [44, 35, 42] can generate videos without input frames. In [44, 35], all latent variables are generated jointly and without structure. In contrast, SSGAN explicitly disentangles the invariant latent variables from the variant ones and builds a Markov chain on the variant ones, which makes it possible to do motion analogy and to generalize to longer sequences.
MoCoGAN [42] also exploits the temporal dependency of the latent variables via a recurrent neural network, but it requires heuristic regularization terms and focuses on generation. In comparison, SSGAN is an instance of the Graphical-GAN framework, which provides theoretical insights and a recognition model for inference. Compared with all these instances, Graphical-GAN does not focus on a specific structure, but provides a general way to deal with arbitrary structures that can be encoded as Bayesian networks.

5 Experiments

We implement our model using the TensorFlow [1] library (footnote 5). In all experiments, we optimize the JS divergence. We use the widely adopted DCGAN architecture [33] in all experiments to fairly compare Graphical-GAN with existing methods. We evaluate GMGAN on the MNIST [20], SVHN [30], CIFAR10 [19] and CelebA [23] datasets. We evaluate SSGAN on the Moving MNIST [38] and 3D chairs [3] datasets. See Appendix D for further details of the models and datasets.

Footnote 4: Despite our best efforts to contact the authors, we did not receive an answer on this issue.
Footnote 5: Our source code is available at https://github.com/zhenxuan00/graphical-gan.

Figure 2: Samples on the MNIST dataset. (a) GAN-G, (b) GMGAN-G (K = 10), (c) GMGAN-L (K = 10), (d) GMVAE (K = 10). The results of (a) are comparable to those reported in [8]. The mixture k is fixed in each column of (b) and (c). k is fixed in each row of (d), which is from [7].

Figure 4: Part of the samples of GMGAN-L on the SVHN (a, K = 50), CIFAR10 (b, K = 30) and CelebA (c, K = 100) datasets. The mixture k is fixed in each column. See the complete results in Appendix E.

In our experiments, we are going to show that
- Qualitatively, Graphical-GAN can infer the latent structures and generate structured samples without any regularization, which is required by existing models [4, 43, 6, 42];
- Quantitatively, Graphical-GAN can outperform all baseline methods [7-9] in terms of inference accuracy, sample quality and reconstruction error consistently and substantially.

5.1 GMGAN Learns Discrete Structures

Figure 3: Reconstruction on the MNIST and SVHN datasets. (a) GAN-G, (b) GMGAN-L, (c) GAN-G, (d) GMGAN-L. Each odd column shows the test inputs and the next even column shows the corresponding reconstruction. (a) and (c) are comparable to those reported in [8, 9].

We focus on the unsupervised learning setting in GMGAN. Our assumption is that there exist discrete structures, e.g. classes and attributes, in the data but the ground truth is unknown. We compare Graphical-GAN with three existing methods, i.e. ALI [8, 9], GMVAE [7] and the global baseline. For simplicity, we denote the global baseline and our local algorithm as GMGAN-G and GMGAN-L, respectively. Following this, we also denote ALI as GAN-G. We first compare the samples of all models on the MNIST dataset in Fig. 2. As for sample quality, GMGAN-L has fewer meaningless samples compared with GAN-G (i.e. ALI), and has sharper samples than those of GMVAE. Besides, as for clustering performance, GMGAN-L is superior to GMGAN-G and GMVAE, with less ambiguous clusters. We then demonstrate the ability of GMGAN-L to deal with more challenging datasets. The samples on the SVHN, CIFAR10 and CelebA datasets are shown in Fig. 4. Given a fixed mixture k, GMGAN-L can generate samples with similar semantics and visual factors, including the object classes, backgrounds and attributes like "wearing glasses". We also show the samples of GMGAN-L by varying K and linearly interpolating the latent variables in Appendix E. We further present the reconstruction results in Fig. 3. GMGAN-L outperforms GAN-G significantly in terms of preserving the same semantics and similar visual appearance. Intuitively, this is because the Gaussian mixture prior helps the model learn a more spread-out latent space with fewer ambiguous areas shared by samples from different classes. We empirically verify this intuition by visualizing the latent space via the t-SNE algorithm in Appendix E. Finally, we compare the models on the inference, generation and reconstruction tasks in terms of three widely adopted metrics in Tab. 1.

Table 1: The clustering accuracy (ACC) [37], inception score (IS) [36] and mean square error (MSE) results for the inference, generation and reconstruction tasks, respectively. The results of our implementation are averaged over 10 (ACC) or 5 (IS and MSE) runs with different random seeds.

| Algorithm | ACC on MNIST | IS on CIFAR10 | MSE on MNIST |
| GMVAE | 92.77 (±1.60) [7] | - | - |
| CatGAN | 90.30 [37] | - | - |
| GAN-G | - | 5.34 (±0.05) [45] | - |
| GMM (our implementation) | 68.33 (±0.21) | - | - |
| GAN-G + GMM (our implementation) | 70.27 (±0.50) | 5.26 (±0.05) | 0.071 (±0.001) |
| GMGAN-G (our implementation) | 91.62 (±1.91) | 5.41 (±0.08) | 0.056 (±0.001) |
| GMGAN-L (ours) | 93.03 (±1.65) | 5.94 (±0.06) | 0.044 (±0.001) |

Figure 5: Samples on the Moving MNIST and 3D chairs datasets when T = 4 (SSGAN-L, 3DCNN, ConcatX, ConcatZ). Each row in a subfigure represents a video sample.

Figure 6: Samples (first 12 frames) on the Moving MNIST dataset when T = 16 (SSGAN-L, 3DCNN, ConcatX, ConcatZ).

As for the clustering accuracy, after clustering the test samples, we first find the sample that is nearest to the centroid of each cluster and use the label of that sample as the prediction for the test samples in the same cluster, following [37]. GAN-G cannot cluster the data directly, and hence we train a Gaussian mixture model (GMM) on the latent space of GAN-G; this two-stage baseline is denoted as GAN-G + GMM.
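The nearest-to-centroid labeling protocol just described can be sketched as follows; the 1-D toy data and cluster assignments are stand-ins, not the paper's pipeline:

```python
# Sketch of the clustering-accuracy protocol: each cluster is labeled by
# the ground-truth label of the member nearest its centroid, and accuracy
# is then measured over all members against these cluster-level labels.

def clustering_accuracy(points, labels, assignments):
    """points: 1-D features; labels: ground truth; assignments: cluster ids."""
    pred = {}
    for c in set(assignments):
        members = [i for i, a in enumerate(assignments) if a == c]
        centroid = sum(points[i] for i in members) / len(members)
        nearest = min(members, key=lambda i: abs(points[i] - centroid))
        pred[c] = labels[nearest]  # label of the sample nearest the centroid
    correct = sum(labels[i] == pred[a] for i, a in enumerate(assignments))
    return correct / len(points)

points      = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2]
labels      = [0,   0,   1,   1,   1,   1  ]
assignments = [0,   0,   0,   1,   1,   1  ]
acc = clustering_accuracy(points, labels, assignments)  # 5 of 6 correct
```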
We also train a GMM on the raw data as the simplest baseline. For the GMM implementation, we use the sklearn package with the same settings as our Gaussian mixture prior. AAE [24] achieves higher clustering accuracy but is less comparable to our method. Nevertheless, GMGAN-L outperforms all baselines consistently, which agrees with the qualitative results. We also provide the clustering results on the CIFAR10 dataset in Appendix E.

5.2 SSGAN Learns Temporal Structures

We denote the SSGAN model trained with the local algorithm as SSGAN-L. We construct three types of baseline models, which are trained with the global baseline algorithm but use discriminators with different architectures. The ConcatX baseline concatenates all input frames together and processes the input as a whole image with a 2D CNN. The ConcatZ baseline processes the input frames independently using a 2D CNN and concatenates the features as the input to fully connected layers to obtain the latent variables. The 3DCNN baseline uses a 3D CNN to process the whole input directly. In particular, the 3DCNN baseline is similar to existing generative models [44, 35]. The key difference is that we omit the two-stream architecture proposed in [44] and the singular value clipping proposed in [35] for a fair comparison, as our contribution is orthogonal to these techniques. Also note that our problem is more challenging than those in existing methods [44, 35] because the discriminator in Graphical-GAN needs to discriminate the latent variables besides the video frames.

Figure 7: Motion analogy results. Each odd row shows an input and the next even row shows the sample.

Figure 8: 16 of 200 frames generated by SSGAN-L. The frame indices are 47-50, 97-100, 147-150 and 197-200 from left to right in each row.

All models can generate reasonable samples of length 4 on both the Moving MNIST and 3D chairs datasets, as shown in Fig. 5. However, when the structure of the data gets complicated, i.e. T = 16 on Moving MNIST and T = 31 on 3D chairs, all baseline models fail while SSGAN-L can still successfully generate meaningful videos, as shown in Fig. 6 and Appendix F, respectively. Intuitively, this is because a single discriminator cannot provide a reliable divergence estimate with limited capability in practice. See the reconstruction results of SSGAN-L in Appendix F. Compared with existing GAN models [44, 35, 42] on videos, SSGAN-L can learn interpretable features thanks to the factorial structure in each frame. We present the motion analogy results on the 3D chairs dataset in Fig. 7. We extract the variant features v, i.e. the motion, from an input test video and provide a fixed invariant feature h, i.e. the content, to generate samples. The samples can track the motion of the corresponding input and share the same content at the same time. Existing methods [43, 6] on learning interpretable features rely on regularization terms to ensure the disentanglement, while SSGAN uses a purely adversarial loss. Finally, we show that though trained on videos of length 31, SSGAN can generate much longer sequences of 200 frames in Fig. 8 thanks to the Markov structure, which again demonstrates the advantages of SSGAN over existing generative models [44, 35, 42].

6 Conclusion

This paper introduces a flexible generative modelling framework called Graphical Generative Adversarial Networks (Graphical-GAN). Graphical-GAN provides a general solution to utilize the underlying structural information of the data. Empirical results of two instances show the promise of Graphical-GAN on learning interpretable representations and generating structured samples. Possible extensions to Graphical-GAN include: generalized learning and inference algorithms, instances with more complicated structures (e.g., trees) and semi-supervised learning for structured data.

Acknowledgments

The work was supported by the National Key Research and Development Program of China (No.
2017YFA0700900), the National NSF of China (Nos. 61620106010, 61621136008, 61332007), the MIIT Grant of Int. Man. Comp. Stan (No. 2016ZXFB00001), the Youth Top-notch Talent Support Program, Tsinghua Tiangong Institute for Intelligent Computing, the NVIDIA NVAIL Program and a Project from Siemens. This work was done when C. Li visited the University of Amsterdam. During this period, he was supported by the China Scholarship Council." + }, + { + "url": "http://arxiv.org/abs/1703.02291v4", + "title": "Triple Generative Adversarial Nets", + "abstract": "Generative Adversarial Nets (GANs) have shown promise in image generation and\nsemi-supervised learning (SSL). However, existing GANs in SSL have two\nproblems: (1) the generator and the discriminator (i.e. the classifier) may not\nbe optimal at the same time; and (2) the generator cannot control the semantics\nof the generated samples. The problems essentially arise from the two-player\nformulation, where a single discriminator shares incompatible roles of\nidentifying fake samples and predicting labels and it only estimates the data\nwithout considering the labels. To address the problems, we present triple\ngenerative adversarial net (Triple-GAN), which consists of three players---a\ngenerator, a discriminator and a classifier. The generator and the classifier\ncharacterize the conditional distributions between images and labels, and the\ndiscriminator solely focuses on identifying fake image-label pairs. We design\ncompatible utilities to ensure that the distributions characterized by the\nclassifier and the generator both converge to the data distribution.
Our\nresults on various datasets demonstrate that Triple-GAN as a unified model can\nsimultaneously (1) achieve the state-of-the-art classification results among\ndeep generative models, and (2) disentangle the classes and styles of the input\nand transfer smoothly in the data space via interpolation in the latent space\nclass-conditionally.", + "authors": "Chongxuan Li, Kun Xu, Jun Zhu, Bo Zhang", + "published": "2017-03-07", + "updated": "2017-11-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "Introduction Deep generative models (DGMs) can capture the underlying distributions of the data and synthesize new samples. Recently, significant progress has been made on generating realistic images based on Generative Adversarial Nets (GANs) [7, 3, 22]. GAN is formulated as a two-player game, where the generator G takes a random noise z as input and produces a sample G(z) in the data space, while the discriminator D identifies whether a certain sample comes from the true data distribution p(x) or from the generator. Both G and D are parameterized as deep neural networks and the training procedure is to solve a minimax problem:

\min_G \max_D U(D, G) = \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],

where p_z(z) is a simple distribution (e.g., uniform or normal) and U(·) denotes the utilities. Given a generator and the defined distribution p_g, the optimal discriminator is D(x) = p(x)/(p_g(x) + p(x)) in the nonparametric setting, and the global equilibrium of this game is achieved if and only if p_g(x) = p(x) [7], which is desired in terms of image generation. GANs and DGMs in general have also proven effective in semi-supervised learning (SSL) [11], while retaining the generative capability.
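The two quoted facts, the form of the optimal discriminator and the equilibrium at p_g = p, can be checked numerically; the 1-D Gaussian setup and the quadrature grid below are illustrative assumptions:

```python
import math

# Numerical check for 1-D Gaussians: plugging the nonparametric optimum
# D*(x) = p(x) / (p_g(x) + p(x)) into U(D, G) yields exactly -log 4 when
# p_g = p, and a strictly larger value otherwise. The trapezoid rule on
# [-10, 10] stands in for the two expectations.

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gan_value(p, pg, lo=-10.0, hi=10.0, n=20001):
    """U(D*, G) = E_p[log D*(x)] + E_pg[log(1 - D*(x))]."""
    dx = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * dx
        px, pgx = p(x), pg(x)
        d = px / (px + pgx)                       # optimal discriminator
        weight = 0.5 if i in (0, n - 1) else 1.0  # trapezoid end weights
        total += weight * (px * math.log(d) + pgx * math.log(1.0 - d)) * dx
    return total

p = lambda x: normal_pdf(x, 0.0)
val_match = gan_value(p, p)                       # p_g = p: global equilibrium
val_off = gan_value(p, lambda x: normal_pdf(x, 2.0))  # mismatched generator
```

At the equilibrium D* is 1/2 everywhere, so the value collapses to 2 log(1/2) = -log 4, which is the constant the GAN literature associates with convergence.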
Under the same two-player game framework, Cat-GAN [26] generalizes GANs with a categorical discriminative network and an objective function that minimizes the conditional entropy of the predictions given the real data while maximizing the conditional entropy of the predictions given the generated samples. Odena [20] and Salimans et al. [25] augment the categorical discriminator with one more class, corresponding to the fake data generated by the generator.

Footnote *: J. Zhu is the corresponding author.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. arXiv:1703.02291v4 [cs.LG] 5 Nov 2017

Figure 1: An illustration of Triple-GAN (best viewed in color). The utilities of D, C and G are colored in blue, green and yellow respectively, with "R" denoting rejection, "A" denoting acceptance and "CE" denoting the cross entropy loss for supervised learning. "A"s and "R"s are the adversarial losses and "CE"s are unbiased regularizations that ensure the consistency between p_g, p_c and p, which are the distributions defined by the generator, classifier and true data generating process, respectively.

There are two main problems in existing GANs for SSL: (1) the generator and the discriminator (i.e.
the classi\ufb01er) may not be optimal at the same time [25]; and (2) the generator cannot control the semantics of the generated samples. For the \ufb01rst problem, as an instance, Salimans et al. [25] propose two alternative training objectives that work well for either classi\ufb01cation or image generation in SSL, but not both. The objective of feature matching works well in classi\ufb01cation but fails to generate indistinguishable samples (See Sec.5.2 for examples), while the other objective of minibatch discrimination is good at realistic image generation but cannot predict labels accurately. The phenomena are not analyzed deeply in [25] and here we argue that they essentially arise from the two-player formulation, where a single discriminator has to play two incompatible roles\u2014identifying fake samples and predicting labels. Speci\ufb01cally, assume that G is optimal, i.e p(x) = pg(x), and consider a sample x \u223cpg(x). On one hand, as a discriminator, the optimal D should identify x as a fake sample with non-zero probability (See [7] for the proof). On the other hand, as a classi\ufb01er, the optimal D should always predict the correct class of x con\ufb01dently since x \u223cp(x). It con\ufb02icts as D has two incompatible convergence points, which indicates that G and D may not be optimal at the same time. Moreover, the issue remains even given imperfect G, as long as pg(x) and p(x) overlaps as in most of the real cases. Given a sample form the overlapped area, the two roles of D still compete by treating the sample differently, leading to a poor classi\ufb01er2. Namely, the learning capacity of existing two-player models is restricted, which should be addressed to advance current SSL results. For the second problem, disentangling meaningful physical factors like the object category from the latent representations with limited supervision is of general interest [30, 2]. 
However, to our best knowledge, none of the existing GANs can learn the disentangled representations in SSL, though some work [22, 5, 21] can learn such representations given full labels. Again, we believe that the problem is caused by their two-player formulation. Speci\ufb01cally, the discriminators in [26, 25] take a single data instead of a data-label pair as input and the label information is totally ignored when justifying whether a sample is real or fake. Therefore, the generators will not receive any learning signal regarding the label information from the discriminators and hence such models cannot control the semantics of the generated samples, which is not satisfactory. To address these problems, we present Triple-GAN, a \ufb02exible game-theoretical framework for both classi\ufb01cation and class-conditional image generation in SSL, where we have a partially labeled dataset. We introduce two conditional networks\u2013a classi\ufb01er and a generator to generate pseudo labels given real data and pseudo data given real labels, respectively. To jointly justify the quality of the samples from the conditional networks, we de\ufb01ne a single discriminator network which has the sole role of distinguishing whether a data-label pair is from the real labeled dataset or not. The resulting model is called Triple-GAN because not only are there three networks, but we consider three joint distributions, i.e. the true data-label distribution and the distributions de\ufb01ned by the conditional networks (See Figure 1 for the illustration of Triple-GAN). Directly motivated by the desirable equilibrium that both the classi\ufb01er and the conditional generator are optimal, we carefully design 2The results of minibatch discrimination approach in [25] well support our analysis. 2 \fcompatible utilities including adversarial losses and unbiased regularizations (See Sec. 
3), which lead to an effective solution to the challenging SSL task, justified both in theory and in practice. In particular, theoretically, instead of competing as stated in the first problem, a good classifier will result in a good generator and vice versa in Triple-GAN (see Sec. 3.2 for the proof). Furthermore, the discriminator can access the label information of the unlabeled data through the classifier and then force the generator to generate correct image-label pairs, which addresses the second problem. Empirically, we evaluate our model on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets. The results (see Sec. 5) demonstrate that Triple-GAN can simultaneously learn a good classifier and a good conditional generator, which agrees with our motivation and theoretical results. Overall, our main contributions are two-fold: (1) we analyze the problems in existing SSL GANs [26, 25] and propose a novel game-theoretical Triple-GAN framework to address them with carefully designed compatible objectives; and (2) we show that on the three datasets with incomplete labels, Triple-GAN can substantially advance the state-of-the-art classification results of DGMs and, at the same time, disentangle classes and styles and perform class-conditional interpolation.

2 Related Work

Recently, various approaches have been developed to learn directed DGMs, including Variational Autoencoders (VAEs) [10, 24], Generative Moment Matching Networks (GMMNs) [16, 6] and Generative Adversarial Nets (GANs) [7]. These criteria are systematically compared in [28]. One primary goal of DGMs is to generate realistic samples, for which GANs have proven effective. Specifically, LAP-GAN [3] leverages a series of GANs to upscale the generated samples to high-resolution images through the Laplacian pyramid framework [1]. DCGAN [22] adopts (fractionally) strided convolution layers and batch normalization [8] in GANs and generates realistic natural images.
Recent work has introduced inference networks in GANs. For instance, InfoGAN [2] learns interpretable latent codes from unlabeled data by regularizing the original GANs via variational mutual-information maximization. In ALI [5, 4], the inference network approximates the posterior distribution of latent variables given true data in an unsupervised manner. Triple-GAN also has an inference network (the classifier) as in ALI, but there are two important differences between them in the global equilibria and utilities: (1) Triple-GAN matches both the distribution defined by the generator and that defined by the classifier to the true data distribution, while ALI only ensures that the distributions defined by the generator and the inference network are the same; (2) the discriminator rejects samples from the classifier in Triple-GAN, while the discriminator accepts samples from the inference network in ALI, which leads to different update rules for the discriminator and inference network. These differences naturally arise because Triple-GAN is proposed to solve the existing problems in SSL GANs stated in the introduction. Indeed, ALI [5] uses the same approach as [25] to deal with partially labeled data and hence still suffers from those problems. In addition, Triple-GAN outperforms ALI significantly in the semi-supervised classification task (see the comparison in Table 1). To handle partially labeled data, the conditional VAE [11] treats the missing labels as latent variables and infers them for unlabeled data. ADGM [17] introduces auxiliary variables to build a more expressive variational distribution and improve the predictive performance. The Ladder Network [23] employs lateral connections in a variant of denoising autoencoders and obtains excellent SSL results. CatGAN [26] generalizes GANs with a categorical discriminator and a corresponding objective function. Salimans et al.
[25] propose empirical techniques to stabilize the training of GANs and improve the performance on SSL and image generation under incompatible learning criteria. Triple-GAN differs significantly from these methods, as stated in the introduction.

3 Method

We consider learning DGMs in the semi-supervised setting,³ where we have a partially labeled dataset with x denoting the input data and y denoting the output label. The goal is to predict the labels y for unlabeled data as well as to generate new samples x conditioned on y. This is different from the unsupervised setting for pure generation, where the only goal is to sample data x from a generator to fool a discriminator; thus a two-player game is sufficient to describe that process, as in GANs. (Footnote 3: Supervised learning is an extreme case, where the training set is fully labeled.)

In our setting, as the label information y is incomplete (and thus uncertain), our density model should characterize the uncertainty of both x and y, and therefore a joint distribution p(x, y) of input-label pairs. A straightforward application of the two-player GAN is infeasible because of the missing values of y. Unlike the previous work [26, 25], which is restricted to the two-player framework and can lead to incompatible objectives, we build our game-theoretic objective on the insight that the joint distribution can be factorized in two ways, namely p(x, y) = p(x)p(y|x) and p(x, y) = p(y)p(x|y), and that the conditional distributions p(y|x) and p(x|y) are of interest for classification and class-conditional generation, respectively. To jointly estimate these conditional distributions, which are characterized by a classifier network and a class-conditional generator network, we define a single discriminator network which has the sole role of distinguishing whether a sample is from the true data distribution or from the models.
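The two factorizations of the joint distribution can be checked numerically on a toy discrete example; the following is a minimal sketch (the 3×2 joint table is made up for illustration, not taken from the paper):

```python
import numpy as np

# A made-up 3x2 joint distribution p(x, y) over 3 inputs and 2 labels.
p_xy = np.array([[0.10, 0.20],
                 [0.25, 0.15],
                 [0.05, 0.25]])

p_x = p_xy.sum(axis=1)              # marginal p(x)
p_y = p_xy.sum(axis=0)              # marginal p(y)
p_y_given_x = p_xy / p_x[:, None]   # classifier direction p(y|x)
p_x_given_y = p_xy / p_y[None, :]   # generator direction p(x|y)

# Both factorizations recover the same joint: p(x)p(y|x) = p(y)p(x|y) = p(x, y).
joint_from_c = p_x[:, None] * p_y_given_x
joint_from_g = p_y[None, :] * p_x_given_y
print(np.allclose(joint_from_c, p_xy), np.allclose(joint_from_g, p_xy))
```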
Hence, we naturally extend GANs to Triple-GAN, a three-player game that characterizes the process of classification and class-conditional generation in SSL, as detailed below.

3.1 A Game with Three Players

Triple-GAN consists of three components: (1) a classifier C that (approximately) characterizes the conditional distribution p_c(y|x) ≈ p(y|x); (2) a class-conditional generator G that (approximately) characterizes the conditional distribution in the other direction, p_g(x|y) ≈ p(x|y); and (3) a discriminator D that distinguishes whether a pair of data (x, y) comes from the true distribution p(x, y). All the components are parameterized as neural networks. Our desired equilibrium is that the joint distributions defined by the classifier and the generator both converge to the true data distribution. To this end, we design a game with compatible utilities for the three players, as follows.

We make the mild assumption that samples from both p(x) and p(y) can be easily obtained.⁴ In the game, after a sample x is drawn from p(x), C produces a pseudo label y given x following the conditional distribution p_c(y|x). Hence, the pseudo input-label pair is a sample from the joint distribution p_c(x, y) = p(x)p_c(y|x). Similarly, a pseudo input-label pair can be sampled from G by first drawing y ∼ p(y) and then drawing x|y ∼ p_g(x|y); hence it comes from the joint distribution p_g(x, y) = p(y)p_g(x|y). For p_g(x|y), we assume that x is transformed from the latent style variables z given the label y, namely, x = G(y, z), z ∼ p_z(z), where p_z(z) is a simple distribution (e.g., uniform or standard normal). The pseudo input-label pairs (x, y) generated by both C and G are then sent to the single discriminator D for judgement. D can also access the input-label pairs from the true data distribution as positive samples.
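The two sampling paths above can be sketched with toy stand-ins for C and G; everything here (dimensions, the softmax classifier, the class-embedding generator) is a made-up illustration of the sampling mechanics, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10        # number of classes (toy)
D_X = 4       # input dimension (toy)

def classifier_probs(x):
    # Toy stand-in for p_c(y|x): a softmax over fixed random logits.
    logits = x @ rng.standard_normal((D_X, K))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def generator(y, z):
    # Toy stand-in for x = G(y, z): a class embedding plus noise z.
    return np.eye(K)[y][:, :D_X] + z

# Classifier path: x ~ p(x), then y ~ p_c(y|x), giving (x, y) ~ p_c(x, y).
x_c = rng.standard_normal((5, D_X))
probs = classifier_probs(x_c)
y_c = np.array([rng.choice(K, p=p) for p in probs])

# Generator path: y ~ p(y) (uniform), z ~ p_z(z), x = G(y, z), giving (x, y) ~ p_g(x, y).
y_g = rng.integers(0, K, size=5)
x_g = generator(y_g, rng.standard_normal((5, D_X)))
```

Both kinds of pairs, plus real labeled pairs, would then be fed to the single discriminator D.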
We refer to the utilities in this process as adversarial losses, which can be formulated as a minimax game:

min_{C,G} max_D U(C, G, D) = E_{(x,y)∼p(x,y)}[log D(x, y)] + α E_{(x,y)∼p_c(x,y)}[log(1 − D(x, y))] + (1 − α) E_{(x,y)∼p_g(x,y)}[log(1 − D(G(y, z), y))],   (1)

where α ∈ (0, 1) is a constant that controls the relative importance of generation and classification; we focus on the balanced case by fixing it to 1/2 throughout the paper. The game defined in Eqn. (1) achieves its equilibrium if and only if p(x, y) = (1 − α)p_g(x, y) + α p_c(x, y) (see details in Sec. 3.2). The equilibrium indicates that if one of C and G tends to the data distribution, the other will also go towards the data distribution, which addresses the competing problem. Unfortunately, however, it cannot guarantee that p(x, y) = p_g(x, y) = p_c(x, y) is the unique global optimum, which is not desirable. To address this problem, we introduce the standard supervised loss (i.e., cross-entropy loss) for C, R_L = E_{(x,y)∼p(x,y)}[−log p_c(y|x)], which is equivalent to the KL divergence between p_c(x, y) and p(x, y). Consequently, we define the game as:

min_{C,G} max_D Ũ(C, G, D) = E_{(x,y)∼p(x,y)}[log D(x, y)] + α E_{(x,y)∼p_c(x,y)}[log(1 − D(x, y))] + (1 − α) E_{(x,y)∼p_g(x,y)}[log(1 − D(G(y, z), y))] + R_L.   (2)

It will be proven that the game with utilities Ũ has a unique global optimum for C and G.

3.2 Theoretical Analysis and Pseudo Discriminative Loss

(Footnote 4: In semi-supervised learning, p(x) is the empirical distribution of inputs and p(y) is assumed to be the same as the distribution of labels on the labeled data, which is uniform in our experiments.)

Algorithm 1: Minibatch stochastic gradient descent training of Triple-GAN in SSL.
for number of training iterations do
• Sample a batch of pairs (x_g, y_g) ∼ p_g(x, y) of size m_g, a batch of pairs (x_c, y_c) ∼ p_c(x, y) of size m_c, and a batch of labeled data (x_d, y_d) ∼ p(x, y) of size m_d.
• Update D by ascending along its stochastic gradient:
  ∇_{θ_d} [ (1/m_d) Σ_{(x_d,y_d)} log D(x_d, y_d) + (α/m_c) Σ_{(x_c,y_c)} log(1 − D(x_c, y_c)) + ((1 − α)/m_g) Σ_{(x_g,y_g)} log(1 − D(x_g, y_g)) ].
• Compute the unbiased estimators R̃_L and R̃_P of R_L and R_P, respectively.
• Update C by descending along its stochastic gradient:
  ∇_{θ_c} [ (α/m_c) Σ_{(x_c,y_c)} p_c(y_c|x_c) log(1 − D(x_c, y_c)) + R̃_L + α_P R̃_P ].
• Update G by descending along its stochastic gradient:
  ∇_{θ_g} [ ((1 − α)/m_g) Σ_{(x_g,y_g)} log(1 − D(x_g, y_g)) ].
end for

We now provide a formal theoretical analysis of Triple-GAN under nonparametric assumptions and introduce the pseudo discriminative loss, an unbiased regularization motivated by the global equilibrium. For clarity of the main text, we defer the proof details to Appendix A. First, we can show that the optimal D balances between the true data distribution and the mixture distribution defined by C and G, as summarized in Lemma 3.1.

Lemma 3.1. For any fixed C and G, the optimal D of the game defined by the utility function U(C, G, D) is

D*_{C,G}(x, y) = p(x, y) / (p(x, y) + p_α(x, y)),   (3)

where p_α(x, y) := (1 − α)p_g(x, y) + α p_c(x, y) is a mixture distribution for α ∈ (0, 1).

Given D*_{C,G}, we can omit D and reformulate the minimax game with value function V(C, G) = max_D U(C, G, D), whose optimal point is summarized in Lemma 3.2.

Lemma 3.2. The global minimum of V(C, G) is achieved if and only if p(x, y) = p_α(x, y).
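The optimal discriminator of Lemma 3.1 can be illustrated on toy discrete joints; this sketch (with a made-up 2×2 joint) just checks that when the classifier and generator both match the data distribution, D* outputs 1/2 everywhere, i.e., real and pseudo pairs become indistinguishable:

```python
import numpy as np

def optimal_discriminator(p, p_c, p_g, alpha=0.5):
    # D*(x, y) = p(x, y) / (p(x, y) + p_alpha(x, y)),
    # with p_alpha = (1 - alpha) * p_g + alpha * p_c, as in Eqn. (3).
    p_alpha = (1 - alpha) * p_g + alpha * p_c
    return p / (p + p_alpha)

p = np.array([[0.1, 0.2],
              [0.3, 0.4]])   # made-up joint over 2 inputs x 2 labels

d_star = optimal_discriminator(p, p_c=p, p_g=p)
print(np.allclose(d_star, 0.5))   # True: matched distributions yield D* = 1/2
```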
We can further show that C and G at least capture the marginal distributions of the data, especially p_g(x), even though there may exist multiple global equilibria, as summarized in Corollary 3.2.1.

Corollary 3.2.1. Given p(x, y) = p_α(x, y), the marginal distributions are the same for p, p_c and p_g, i.e., p(x) = p_g(x) = p_c(x) and p(y) = p_g(y) = p_c(y).

Given the above result that p(x, y) = p_α(x, y), C and G do not compete as in the two-player formulation, and it is easy to verify that p(x, y) = p_c(x, y) = p_g(x, y) is a global equilibrium point. However, it may not be unique, and we should minimize an additional objective to ensure uniqueness. In fact, this holds for the utility function Ũ(C, G, D) in problem (2), as stated below.

Theorem 3.3. The equilibrium of Ũ(C, G, D) is achieved if and only if p(x, y) = p_g(x, y) = p_c(x, y).

This conclusion essentially motivates our design of Triple-GAN, as it ensures that both C and G will converge to the true data distribution if the model has been trained to achieve the optimum. We can further show another nice property of Ũ, which allows us to regularize our model for stable and better convergence in practice without introducing bias, as summarized below.

Corollary 3.3.1. Adding any divergence (e.g., the KL divergence) between any two of the joint distributions, the conditional distributions, or the marginal distributions to Ũ as an additional regularization to be minimized will not change the global equilibrium of Ũ.

Because label information is extremely insufficient in SSL, we propose the pseudo discriminative loss R_P = E_{p_g}[−log p_c(y|x)], which optimizes C on the samples generated by G in a supervised manner. Intuitively, a good G can provide meaningful labeled data beyond the training set as extra side information for C, which boosts the predictive performance (see Sec. 5.1 for empirical evidence).
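The pseudo discriminative loss above is simply a cross-entropy on generator-labeled pairs; the following sketch (with made-up classifier outputs, not real model predictions) shows that confident, correct predictions on G's samples score lower than uninformative ones:

```python
import numpy as np

def pseudo_discriminative_loss(probs, labels, eps=1e-12):
    # R_P = E_{p_g}[-log p_c(y|x)]: cross-entropy of the classifier's
    # predictions on pseudo pairs (x, y) produced by the generator.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

# Four generated samples whose generator-assigned labels are [0, 1, 2, 3].
labels = np.array([0, 1, 2, 3])
good = np.full((4, 4), 0.02)
np.fill_diagonal(good, 0.94)       # confident and correct (rows sum to 1)
bad = np.full((4, 4), 0.25)        # uninformative uniform predictions

print(pseudo_discriminative_loss(good, labels)
      < pseudo_discriminative_loss(bad, labels))   # True
```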
Indeed, minimizing the pseudo discriminative loss with respect to C is equivalent to minimizing D_KL(p_g(x, y) || p_c(x, y)) (see Appendix A for the proof), and hence the global equilibrium remains, following Corollary 3.3.1. Also note that directly minimizing D_KL(p_g(x, y) || p_c(x, y)) is infeasible, since its computation involves the unknown likelihood ratio p_g(x, y)/p_c(x, y). The pseudo discriminative loss is weighted by a hyperparameter α_P. See Algorithm 1 for the whole training procedure, where θ_c, θ_d and θ_g are the trainable parameters of C, D and G, respectively.

4 Practical Techniques

In this section we introduce several practical techniques used in the implementation of Triple-GAN, which may lead to a theoretically biased solution but work well for challenging SSL tasks empirically.

One crucial problem in SSL is the small size of the labeled data. In Triple-GAN, D may memorize the empirical distribution of the labeled data and reject other types of samples from the true data distribution. Consequently, G may collapse to these modes. To address this, we generate pseudo labels through C for some unlabeled data and use these pairs as positive samples for D. The cost is that some bias is introduced into the target distribution of D, which becomes a mixture of p_c and p instead of the pure p. However, this is acceptable, as C converges quickly and p_c and p are close (see the results in Sec. 5).

Since properly leveraging the unlabeled data is key to success in SSL, it is necessary to regularize C heuristically, as in many existing methods [23, 26, 13, 15], to make more accurate predictions. We consider two alternative losses on the unlabeled data. The confidence loss [26] minimizes the conditional entropy of p_c(y|x) and the cross entropy between p(y) and p_c(y), weighted by a hyperparameter α_B: R_U = H_{p_c}(y|x) + α_B E_p[−log p_c(y)], which encourages C to make confident predictions and to be balanced on the unlabeled data.
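The confidence loss can be estimated from classifier outputs on an unlabeled batch; this is a minimal sketch with made-up prediction matrices (confident-and-balanced predictions should score lower than maximally uncertain ones):

```python
import numpy as np

def confidence_loss(probs, p_y, alpha_b=1.0, eps=1e-12):
    # probs: (n, K) classifier outputs p_c(y|x) on unlabeled data.
    # R_U = H_{p_c}(y|x) + alpha_b * E_p[-log p_c(y)]
    cond_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    marginal = probs.mean(axis=0)                    # estimate of p_c(y)
    cross_entropy = -np.sum(p_y * np.log(marginal + eps))
    return cond_entropy + alpha_b * cross_entropy

K = 4
p_y = np.full(K, 1.0 / K)                  # uniform label prior
confident = np.tile(np.eye(K), (3, 1))     # confident, balanced over classes
uniform = np.full((12, K), 1.0 / K)        # maximally uncertain predictions

print(confidence_loss(confident, p_y) < confidence_loss(uniform, p_y))   # True
```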
The consistency loss [13] penalizes the network if it predicts the same unlabeled data inconsistently given different noise ε (e.g., dropout masks): R_U = E_{x∼p(x)} ||p_c(y|x, ε) − p_c(y|x, ε′)||², where ||·||² is the square of the ℓ2-norm. We use the confidence loss by default, except on the CIFAR10 dataset (see details in Sec. 5).

Another consideration is how to compute the gradients of E_{x∼p(x), y∼p_c(y|x)}[log(1 − D(x, y))] with respect to the parameters θ_c of C, which involves a summation over the discrete random variable y, i.e., the class label. On one hand, integrating out the class label is time-consuming. On the other hand, directly sampling one label to approximate the expectation via the Monte Carlo method makes the feedback of the discriminator non-differentiable with respect to θ_c. As the REINFORCE algorithm [29] can deal with such cases of discrete variables, we use a variant of it for end-to-end training of our classifier. The gradients in the original REINFORCE algorithm would be E_{x∼p(x)} E_{y∼p_c(y|x)}[∇_{θ_c} log p_c(y|x) log(1 − D(x, y))]. In our experiments, we find the best strategy is to use the most probable y instead of sampling one to approximate the expectation over y. The bias is small, as the prediction of C is typically rather confident.

5 Experiments

We now present results on the widely adopted MNIST [14], SVHN [19] and CIFAR10 [12] datasets. MNIST consists of 50,000 training samples, 10,000 validation samples and 10,000 testing samples of handwritten digits of size 28 × 28. SVHN consists of 73,257 training samples and 26,032 testing samples; each is a colored 32 × 32 image containing a sequence of digits with various backgrounds. CIFAR10 consists of colored images distributed across 10 general classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck.
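The consistency loss can be sketched as the mean squared difference between two softmax predictions of the same inputs under different perturbations; the additive-noise stand-in for a dropout mask below is a toy assumption:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    # R_U = E_x || p_c(y|x, eps) - p_c(y|x, eps') ||^2
    diff = softmax(logits_a) - softmax(logits_b)
    return np.mean(np.sum(diff ** 2, axis=1))

rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 10))
noise = 0.1 * rng.standard_normal((8, 10))   # toy stand-in for a dropout perturbation

print(consistency_loss(logits, logits) == 0.0)          # identical noise: zero loss
print(consistency_loss(logits, logits + noise) > 0.0)   # different noise: positive loss
```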
There are 50,000 training samples and 10,000 testing samples of size 32 × 32 in CIFAR10. We split off 5,000 training samples of SVHN and CIFAR10 for validation when needed. On CIFAR10, we follow [13] and apply ZCA to the input of C, but still generate and estimate the raw images using G and D.

Table 1: Error rates (%) on partially labeled MNIST, SVHN and CIFAR10 datasets, averaged over 10 runs. The results marked † are trained with more than 500,000 extra unlabeled data on SVHN.

Algorithm | MNIST (n = 100) | SVHN (n = 1000) | CIFAR10 (n = 4000)
M1+M2 [11] | 3.33 (±0.14) | 36.02 (±0.10) | -
VAT [18] | 2.33 | 24.63 | -
Ladder [23] | 1.06 (±0.37) | - | 20.40 (±0.47)
Conv-Ladder [23] | 0.89 (±0.50) | - | -
ADGM [17] | 0.96 (±0.02) | 22.86† | -
SDGM [17] | 1.32 (±0.07) | 16.61 (±0.24)† | -
MMCVA [15] | 1.24 (±0.54) | 4.95 (±0.18)† | -
CatGAN [26] | 1.39 (±0.28) | - | 19.58 (±0.58)
Improved-GAN [25] | 0.93 (±0.07) | 8.11 (±1.3) | 18.63 (±2.32)
ALI [5] | - | 7.3 | 18.3
Triple-GAN (ours) | 0.91 (±0.58) | 5.77 (±0.17) | 16.99 (±0.36)

Table 2: Error rates (%) on MNIST with different numbers of labels, averaged over 10 runs.

Algorithm | n = 20 | n = 50 | n = 200
Improved-GAN [25] | 16.77 (±4.52) | 2.21 (±1.36) | 0.90 (±0.04)
Triple-GAN (ours) | 4.81 (±4.95) | 1.56 (±0.72) | 0.67 (±0.16)

We implement our method based on Theano [27]; here we briefly summarize our experimental settings.⁵ Though we have an additional network, the generator and classifier of Triple-GAN have architectures comparable to those of the baselines [26, 25] (see details in Appendix F). The pseudo discriminative loss is not applied until the number of epochs reaches a threshold at which the generator can generate meaningful data. We only search the threshold in {200, 300}, α_P in {0.1, 0.03} and the global learning rate in {0.0003, 0.001}, based on the validation performance on each dataset.
All other hyperparameters, including the relative weights and the parameters of Adam [9], are fixed according to [25, 15] across all experiments. Further, in our experiments, we find that the training techniques for the original two-player GANs [3, 25] are sufficient to stabilize the optimization of Triple-GAN.

5.1 Classification

For a fair comparison, all the results of the baselines are taken from the corresponding papers; we average Triple-GAN over 10 runs with different random initializations and splits of the training data, and report the mean error rates with standard deviations, following [25]. First, we compare our method with a large body of approaches in the widely used settings on the MNIST, SVHN and CIFAR10 datasets given 100, 1,000 and 4,000 labels,⁶ respectively. Table 1 summarizes the quantitative results. On all three datasets, Triple-GAN consistently achieves state-of-the-art results, and it substantially outperforms the strongest competitors (e.g., Improved-GAN) on the more challenging SVHN and CIFAR10 datasets, which demonstrates the benefit of the compatible learning objectives proposed in Triple-GAN. Note that for a fair comparison with previous GANs, we do not leverage the extra unlabeled data on SVHN, while some baselines [17, 15] do.

Second, we evaluate our method with 20, 50 and 200 labeled samples on MNIST for a systematic comparison with our main baseline, Improved-GAN [25], as shown in Table 2. Triple-GAN consistently outperforms Improved-GAN by a substantial margin, which again demonstrates the benefit of Triple-GAN. Besides, we can see that Triple-GAN achieves more significant improvement as the number of labeled data decreases, suggesting the effectiveness of the pseudo discriminative loss. Finally, we investigate the reasons for the outstanding performance of Triple-GAN.
We train a single C without G and D on SVHN as a baseline and obtain an error rate of more than 10%, which shows that G is important for SSL even though C can leverage unlabeled data directly. On CIFAR10, the baseline (a simple version of the Π model [13]) achieves a 17.7% error rate. The smaller improvement is reasonable, as CIFAR10 is more complex and hence G is not as good as on SVHN. In addition, we evaluate Triple-GAN without the pseudo discriminative loss on SVHN; it achieves about a 7.8% error rate, which shows the advantage of compatible objectives (better than the 8.11% error rate of Improved-GAN) and the importance of the pseudo discriminative loss (worse than the complete Triple-GAN by 2%). Furthermore, Triple-GAN has a convergence speed comparable to that of Improved-GAN [25], as shown in Appendix E.

(Footnote 5: Our source code is available at https://github.com/zhenxuan00/triple-gan. Footnote 6: We use these amounts of labels as default settings throughout the paper if not specified.)

Figure 2: (a-b) Comparison between samples from Improved-GAN trained with feature matching and Triple-GAN on SVHN. (c-d) Samples of Triple-GAN for specific classes on CIFAR10: (c) automobile, (d) horse.

Figure 3: (a) and (c) are randomly selected labeled data from SVHN and CIFAR10, respectively. (b) and (d) are samples from Triple-GAN, where each row shares the same label and each column shares the same latent variables.

Figure 4: Class-conditional latent-space interpolation on (a) SVHN and (b) CIFAR10. We first sample two random vectors in the latent space and interpolate linearly from one to the other. Then, we map these vectors to the data level given a fixed label for each class. In total, 20 images are shown for each class. We select two endpoints with clear semantics on CIFAR10 for better illustration.
5.2 Generation

We demonstrate that Triple-GAN can learn a good G and a good C simultaneously by generating samples in various ways with the exact models used in Sec. 5.1. For a fair comparison, the generative model and the number of labels are the same as in the previous method [25]. In Fig. 2 (a-b), we first compare the quality of images generated by Triple-GAN on SVHN with those of Improved-GAN with feature matching [25],⁷ which works well for semi-supervised classification. We can see that Triple-GAN outperforms the baseline by generating fewer meaningless samples and clearer digits. Further, the baseline generates the same strange sample four times, marked with red rectangles in Fig. 2. The comparison on MNIST and CIFAR10 is presented in Appendix B. (Footnote 7: Though Improved-GAN trained with minibatch discrimination [25] can generate good samples, it fails to predict labels accurately.)

We also evaluate the samples on CIFAR10 quantitatively via the inception score, following [25]. The score of Triple-GAN is 5.08 ± 0.09, while that of Improved-GAN trained without minibatch discrimination [25] is 3.87 ± 0.03, which agrees with the visual comparison. We then illustrate images generated for two specific classes of CIFAR10 in Fig. 2 (c-d); see more in Appendix C. In most cases, Triple-GAN is able to generate meaningful images with correct semantics. Further, we show the ability of Triple-GAN to disentangle classes and styles in Fig. 3. It can be seen that Triple-GAN can generate realistic data in a specific class, and that the latent factors encode meaningful physical factors such as scale, intensity, orientation and color. Some GANs [22, 5, 21] can generate data class-conditionally given full labels, while Triple-GAN can do so given much less label information. Finally, we demonstrate the generalization capability of Triple-GAN on class-conditional latent-space interpolation, as in Fig. 4.
Triple-GAN can transition smoothly from one sample to another with totally different visual factors without losing label semantics, which shows that Triple-GAN learns meaningful latent spaces class-conditionally instead of overfitting to the training data, especially the labeled data. See these results on MNIST in Appendix D. Overall, these results confirm that Triple-GAN avoids the competition between C and G and can lead to a situation where both generation and classification are good in semi-supervised learning.

6 Conclusions

We present Triple Generative Adversarial Networks (Triple-GAN), a unified game-theoretical framework with three players, a generator, a discriminator and a classifier, to do semi-supervised learning with compatible utilities. With such utilities, Triple-GAN addresses two main problems of existing methods [26, 25]. Specifically, Triple-GAN ensures that both the classifier and the generator can achieve their own optima from the perspective of game theory, and enables the generator to sample data of a specific class. Our empirical results on the MNIST, SVHN and CIFAR10 datasets demonstrate that, as a unified model, Triple-GAN can simultaneously achieve state-of-the-art classification results among deep generative models, disentangle styles and classes, and transfer smoothly on the data level via interpolation in the latent space.

Acknowledgments

The work is supported by the National NSF of China (Nos. 61620106010, 61621136008, 61332007), the MIIT Grant of Int. Man. Comp. Stan. and New Bus. Pat. “Int. Man. Eval. In. Stan. R. & V.”, the Youth Top-notch Talent Support Program, Tsinghua Tiangong Institute for Intelligent Computing, the NVIDIA NVAIL Program and a Project from Siemens."
+ }, + { + "url": "http://arxiv.org/abs/1602.07416v2", + "title": "Learning to Generate with Memory", + "abstract": "Memory units have been widely used to enrich the capabilities of deep\nnetworks on capturing long-term dependencies in reasoning and prediction tasks,\nbut little investigation exists on deep generative models (DGMs) which are good\nat inferring high-level invariant representations from unlabeled data. This\npaper presents a deep generative model with a possibly large external memory\nand an attention mechanism to capture the local detail information that is\noften lost in the bottom-up abstraction process in representation learning. By\nadopting a smooth attention model, the whole network is trained end-to-end by\noptimizing a variational bound of data likelihood via auto-encoding variational\nBayesian methods, where an asymmetric recognition network is learnt jointly to\ninfer high-level invariant representations. The asymmetric architecture can\nreduce the competition between bottom-up invariant feature extraction and\ntop-down generation of instance details. Our experiments on several datasets\ndemonstrate that memory can significantly boost the performance of DGMs and\neven achieve state-of-the-art results on various tasks, including density\nestimation, image generation, and missing value imputation.", + "authors": "Chongxuan Li, Jun Zhu, Bo Zhang", + "published": "2016-02-24", + "updated": "2016-05-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "(Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).)

Introduction

Deep learning models are able to extract abstract representations from low-level inputs by adopting a deep architecture with explicitly designed nonlinear transformations (Bengio et al., 2013a). Among many types of deep models, deep generative models (DGMs) learn abstract representations from unlabeled data and can perform a wide range of tasks, including density estimation, data generation and missing value imputation. Depending on the building blocks, various types of DGMs exist, including undirected models (Salakhutdinov & Hinton, 2009), directed models (Neal, 1992; Hinton et al., 2006), autoregressive models (Larochelle & Murray, 2011; Gregor et al., 2014), and Markov-chain-based models (Bengio et al., 2014). Recently, DGMs have attracted much attention for developing efficient and (approximately) accurate learning algorithms, such as stochastic variational methods (Kingma & Welling, 2014; Rezende et al., 2014; Bornschein & Bengio, 2015; Burda et al., 2015) and Monte Carlo methods (Adams et al., 2010; Gan et al., 2015; Du et al., 2015).

Although current DGMs are able to extract high-level abstract representations, they may not be sufficient for generating high-quality input samples. This is because more abstract representations are generally invariant or less sensitive to most specific types of local changes of the input. This bottom-up abstraction process is good for identifying predictive patterns, especially when a discriminative objective is optimized (Li et al., 2015); but it also loses the detail information that is necessary in the top-down generating process. It remains a challenge for DGMs to generate realistic data, especially images that have complex structures. Simply increasing the model size is clearly unwise, as it may lead to serious over-fitting without proper regularization, as well as a heavy computation burden. Some recent progress has been made to improve the generation quality.
For example, DRAW (Gregor et al., 2015) iteratively constructs complex images over time through a recurrent encoder and decoder together with an attention mechanism, and LAPGAN (Denton et al., 2015) employs a cascade of generative adversarial networks (GANs) (Goodfellow et al., 2014) to generate high-quality natural images through a Laplacian pyramid framework (Burt & Adelson, 1983). However, no efforts have been made to enrich the capabilities of probabilistic DGMs by designing novel building blocks in the generative model.

In this paper, we address the above challenges by presenting a new architecture for building probabilistic deep generative models with a possibly large external memory and an attention mechanism. Although memory has been explored in various deep models for capturing long-term dependencies in reasoning and prediction tasks (see Section 2 for a review), our work represents a first attempt to leverage external memory to enrich the capabilities of probabilistic DGMs for better density estimation, data generation and missing value imputation. The overall architecture of our model is an interleaving of stochastic layers and deterministic layers, where each deterministic layer is associated with an external memory to capture local variant information. An attention mechanism is used to record information in the memory during learning and to retrieve information from the memory during data generation. This attention mechanism can be trained because the invariant information and the local variant information are correlated, e.g., both contain implicit label information. Both the memory and the attention mechanism are parameterized as differentiable components with smooth nonlinear transformation functions.
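The smooth, differentiable retrieval described above can be sketched as a softmax-weighted read over memory slots; the slot count, dimensions and temperature below are made-up toy values, and the function is an illustration of soft attention in general, not the paper's exact parameterization:

```python
import numpy as np

def attention_read(query, memory, tau=1.0):
    # Differentiable soft read: w = softmax(memory . query / tau), read = w . memory.
    scores = memory @ query / tau
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ memory, w

rng = np.random.default_rng(0)
memory = rng.standard_normal((16, 8))     # 16 slots of dimension 8 (toy sizes)
memory = memory / np.linalg.norm(memory, axis=1, keepdims=True)

# With a low temperature, a query matching one slot retrieves essentially that slot.
read, w = attention_read(memory[3], memory, tau=0.01)
print(w.argmax() == 3)   # True
```

Because the read is a smooth function of the query and memory, gradients flow through it, which is what allows the whole network to be trained end-to-end with variational methods.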
Such a design allows us to learn the whole network end-to-end by developing a stochastic variational method, which introduces a recognition network without memory to characterize the variational distribution. Different from (Kingma & Welling, 2014; Burda et al., 2015), our recognition network is asymmetric to the generative network. This asymmetric recognition network is sufficient for extracting invariant representations in bottom-up inference, and is compact in parameterization. Furthermore, this asymmetry can help reduce the competition between bottom-up invariant feature extraction (using the recognition network) and top-down input generation (using the deep generative model with memory). We quantitatively and qualitatively evaluate our method on several datasets in various tasks, including density estimation, data generation and missing value imputation. Our results demonstrate that an external memory together with a proper attention mechanism can significantly improve DGMs, obtaining state-of-the-art performance.

2. Related Work

Memory has recently been leveraged in deep models to capture long-term dependencies for various tasks, such as algorithm inference (Graves et al., 2014), question answering (Weston et al., 2015; Sukhbaatar et al., 2015) and neural language transduction (Grefenstette et al., 2015). The external memory in these models provides a way to record information stably and interact with the environment, and hence extends the capability of traditional learning models. Typically the interaction, e.g., reading from and writing to the memory, is done through an associated attention mechanism, and the whole system is trained with supervision. The attention mechanism can be differentiable and trained in an end-to-end manner (Graves et al., 2014; Sukhbaatar et al., 2015), or discrete and trained by a reinforcement learning algorithm (Zaremba & Sutskever, 2015).
In addition to the memory-based models mentioned above, attention mechanisms have been used in other deep models for various tasks, such as image classification (Larochelle & Hinton, 2010; Ba et al., 2015), object tracking (Mnih et al., 2014), conditional caption generation (Xu et al., 2015), machine translation (Bahdanau et al., 2015) and image generation (Graves, 2013; Gregor et al., 2015). Recently, DRAW (Gregor et al., 2015) introduced a novel 2-D attention mechanism to decide "where to read and write" on the image, and does well in generating objects with a clear track, such as handwritten digits and sequences of real digits.

Compared with previous memory-based networks (Graves et al., 2014; Weston et al., 2015), we propose to employ an external hierarchical memory to capture variant information at different abstraction levels, trained in an unsupervised manner. Besides, our memory cannot be written to directly as in (Graves et al., 2014; Weston et al., 2015); instead it is updated through optimization. Compared with previous DGMs with visual attention (Tang et al., 2014; Gregor et al., 2015), we make different assumptions about the data, i.e., the main object (such as a face) has massive local features, which cannot be modeled by a limited number of latent factors. We employ an external memory to capture these features, and the associated attention mechanism is used to retrieve the memory, not to learn a "what-where" combination on the images. Besides, the external memory used in our model and the memory units of LSTMs used in DRAW (Gregor et al., 2015) can complement each other (Graves et al., 2014). Further investigation of DRAW with external memory is left as future work.
Considering the bottom-up inference procedure and the top-down generation procedure together, additional memory mechanisms can help reduce the competition between invariant feature extraction and local variant reconstruction, especially when label information is provided (e.g., in a supervised or semi-supervised setting). A similar idea is highlighted in the Ladder Network (Valpola, 2014; Rasmus et al., 2015), which reconstructs the input hierarchically using an extension of denoising autoencoders (dAEs) (Vincent et al., 2010) with the help of lateral connections, and achieves excellent performance on semi-supervised learning (Rasmus et al., 2015). Though it is possible to interpret the Ladder Network probabilistically as in (Bengio et al., 2013b), we model the data likelihood directly with the help of external memory instead of explicit lateral edges. Our method can also be extended to supervised or semi-supervised learning as in (Kingma et al., 2014), which is left as future work.

3. Probabilistic DGMs with Memory

We present a probabilistic deep generative model (DGM) with a possibly large external memory as well as a soft attention mechanism.

3.1. Overall Architecture

Formally, given a set of training data D, we assume each $x \in D$ is independently generated with a set of hierarchically organized latent factors $z_L, \dots, z_1$ as follows:

• Draw the top-layer factors $z_L \sim \mathcal{N}(0, I)$.
• For $l = L-1, \dots, 0$, calculate the mean parameters $\mu_l = g_l(z_{l+1}; M_l)$ and draw the factors $z_l \sim P_l(\mu_l)$,

where each $g_l$ is a nonlinear function, often assumed to be smooth for the ease of learning. To connect with observations, the bottom layer is clamped at $z_0 = x$. Each $z_l$ is randomly sampled from a Gaussian distribution except $z_0$, whose distribution depends on the properties of the data (e.g., Gaussian for continuous data or multinomial for the discrete case).
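The generative process above can be sketched in a few lines of numpy. This is a minimal illustration, assuming Gaussian layers with identity covariance; `g_fns[l]` stands in for the memory-conditioned network $g_l(\cdot; M_l)$, and the function and argument names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_top_down(g_fns, dim_top):
    """Ancestral sampling through the hierarchy: z_L ~ N(0, I), then for
    l = L-1, ..., 0 compute mu_l = g_l(z_{l+1}) and draw z_l ~ N(mu_l, I).
    The bottom draw plays the role of the observation (z_0 = x)."""
    z = rng.standard_normal(dim_top)            # z_L ~ N(0, I)
    for g in reversed(g_fns):                   # g_fns[l] approximates g_l(.; M_l)
        mu = g(z)                               # mean parameters mu_l
        z = mu + rng.standard_normal(mu.shape)  # z_l ~ N(mu_l, I)
    return z
```

In the actual model the bottom distribution would be Bernoulli or Gaussian depending on the data, as stated in the text; the identity covariance here is only for brevity.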
All the distributions $P_l$ are assumed to be of an exponential family form, with mean parameters $\mu_l$. Here, we define $g_l$ as a feed-forward deep neural network with $I_l$ deterministic layers and a set of associated memories $\{M^{(i)}_l\}_{i=0}^{I_l-1}$, one per layer. We parameterize each memory as a trainable matrix of dimension $d_s \times n_s$, where $n_s$ is the number of slots in the memory and $d_s$ is the dimension of each slot. The network is then formally parameterized as follows:

• Initialize the top-layer factors $h^{(I_l)}_l = z_{l+1}$.
• For $i = I_l - 1, \dots, 0$, apply the transformation $h^{(i)}_l = \phi(h^{(i+1)}_l; M^{(i)}_l)$, where $\phi$ is a proper (e.g., smooth) function for linear or nonlinear transformation.

The bottom layer is our output $\mu_l = h^{(0)}_l$, which is called a stochastic layer as it computes the mean parameters of a distribution from which samples are drawn. All the other layers are called deterministic layers. Compared with previous DGMs, one key feature of our model is that it incorporates an external memory at each deterministic layer, as detailed below. The overall architecture is a stack of multiple such layers interleaved with stochastic layers as above. In such a DGM architecture, memory $M^{(i)}_l$ can recover the information that is missing in the higher layers $h^{(>i)}_l$. In other words, the higher layers do not need to represent all details, but can focus on representing abstract invariant features if these seem more relevant to the task at hand than the more detailed information.

3.2. General Memory Mechanism for a Single Layer

We now present a single layer with memory in general terms; it is the building block for the above DGM. For notational simplicity, we omit the subscript $l$ in the following text. Formally, let $h_{in}$ denote the input information, and $h_{out}$ the output after some deterministic transformation with memory.
In our model, $h_{in}$ can be either the samples of latent factors or the output from a higher-level deterministic layer; similarly, $h_{out}$ can be used as the input of either a stochastic layer or a lower-level deterministic layer. A layer of a standard DGM without memory generates the low-level generative information $h_g$ based on $h_{in}$ through a proper transformation, which can be generally put as
$$h_g = \phi(h_{in}; W_g, b_g),$$
where $W_g$ and $b_g$ are the weights and biases of the transformation, and uses it as the final output, i.e., $h_{out} = h_g$.

In our DGM with memory $M$, we first compute the low-level generative information $h_g$ in the same way as a standard layer, then retrieve the memory with a proper attention mechanism to get knowledge $h_m$, and finally combine $h_g$ and $h_m$ to get the output $h_{out}$. Formally, the memory retrieval process is parameterized as
$$h_m = f_m(h_a; M), \quad h_a = f_a(h_g; A, b_A),$$
where $h_a$ is the information used to access the memory, computed by an attention mechanism parameterized by a controlling matrix $A$ and a bias vector $b_A$. The attention mechanism takes the generative information $h_g$, the final output of a vanilla layer described previously, as its input. $f_a$ is the mapping function of the attention mechanism and $f_m$ is the mapping function of the memory mechanism; both are deterministic transformations to be specified. The final output $h_{out}$ is the combination of $h_g$ and $h_m$:
$$h_{out} = f_c(h_g, h_m; C),$$
where $C$ is a set of trainable parameters in the combination function $f_c$, another deterministic transformation to be specified. We visualize the computation flow of these two types of layers in Figure 1, where each component will be specified next.

3.3. Concrete Examples with Hierarchical Memory Mechanisms

With the above building blocks, we can stack multiple layers to build a DGM as in Section 3.1.
For simplicity, here we consider a generative model with only one stochastic layer and $I$ deterministic layers to explain our memory mechanism; it can be straightforwardly extended to cases with multiple stochastic layers.

[Figure 1. Architecture comparison between a standard layer (top-left part) and a layer with memory (the whole figure).]

Let the top-most information be the random samples from the prior, i.e., $h^{(I+1)} = z$. Using a permutation-invariant architecture as an example, we compute the low-level generative information $h^{(i)}_g$ based on the input $h^{(i+1)}$ as
$$h^{(i)}_g = \phi(W^{(i)}_g h^{(i+1)} + b^{(i)}_g).$$
We further retrieve the knowledge $h^{(i)}_m$ from memory. Though various strategies exist, we consider a simple one that adopts a linear combination of the slots in memory,
$$h^{(i)}_m = f_m(h^{(i)}_a) = M^{(i)} h^{(i)}_a,$$
where the coefficients $h^{(i)}_a$ are computed as
$$h^{(i)}_a = f_a(h^{(i)}_g) = \sigma(A^{(i)} h^{(i)}_g + b^{(i)}_A),$$
and $\sigma(x) = 1/(1 + \exp(-x))$ is the sigmoid function. Therefore, each element of $h^{(i)}_a$ is a real value in the interval $(0, 1)$, representing the preference of $x$ for the corresponding memory slot. An alternative soft attention function used in our experiments is the softmax function, which normalizes the preference values to sum to one over all slots (Bahdanau et al., 2015). A hard attention mechanism trained with reinforcement learning (Xu et al., 2015) can be investigated in future work.

The most straightforward choice of the composition function is the element-wise summation
$$h^{(i)} = h^{(i)}_g + h^{(i)}_m,$$
where the memory encodes the residual between the true target $h^{(i)}$ and the generative information $h^{(i)}_g$. However, in practice, we found that a more flexible composition function leads to better results.
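The retrieval equations above amount to one affine transform, a sigmoid attention over slots, and a linear read of the memory matrix. A minimal numpy sketch using the simple additive composition (the ReLU choice for $\phi$ and all variable names are illustrative; the full model uses the gated combination described next):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def memory_layer(h_in, Wg, bg, A, bA, M):
    """One deterministic layer with external memory (sketch).
    h_g: standard top-down signal; h_a: soft attention weights over slots;
    h_m: memory readout as a linear combination of the slots of M."""
    h_g = np.maximum(Wg @ h_in + bg, 0.0)  # h_g = phi(Wg h_in + bg), ReLU as phi
    h_a = sigmoid(A @ h_g + bA)            # each entry in (0, 1): slot preference
    h_m = M @ h_a                          # M is d_s x n_s: dim-of-slot x num-slots
    return h_g + h_m                       # simplest composition: element-wise sum
```

Swapping `h_g + h_m` for the gated element-wise MLP of the next paragraph recovers the model actually used in the experiments.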
Inspired by the Ladder Network (Valpola, 2014; Rasmus et al., 2015), we specify the combination function of $h^{(i)}_m$ and $h^{(i)}_g$ as an element-wise multilayer perceptron with an optional final nonlinearity $\phi$:
$$h^{(i)} = f_c(h^{(i)}_g, h^{(i)}_m) = \phi(a^{(i)} + b^{(i)}_1 \odot c^{(i)}),$$
where the linear part $a^{(i)}$ is the summation of scaled inputs and cross terms as well as biases,
$$a^{(i)} = a^{(i)}_1 + a^{(i)}_2 \odot h^{(i)}_m + a^{(i)}_3 \odot h^{(i)}_g + a^{(i)}_4 \odot h^{(i)}_g \odot h^{(i)}_m,$$
and the nonlinear part $c^{(i)}$ is computed similarly but passes through a sigmoid function,
$$c^{(i)} = \sigma(c^{(i)}_1 + c^{(i)}_2 \odot h^{(i)}_m + c^{(i)}_3 \odot h^{(i)}_g + c^{(i)}_4 \odot h^{(i)}_g \odot h^{(i)}_m),$$
where $\odot$ is the element-wise product. The output in our model initially depends only on the top-down signals $h_g$, instead of the auxiliary information as in the Ladder Network, which will be discussed in the experimental settings. $(W^{(i)}_g, b^{(i)}_g, M^{(i)}, A^{(i)}, b^{(i)}_A, a^{(i)}_{1,2,3,4}, b^{(i)}_1, c^{(i)}_{1,2,3,4})$ are the trainable parameters of a single layer. We illustrate each component in Figure 1.

4. Inference and Learning

Learning a DGM is generally challenging due to the highly nonlinear transformations in multiple layers plus a stochastic formalism. To develop a variational approximation method, it is important to have a rich family of variational distributions that can well characterize the nonlinear transformations. Significant progress has been made recently on stochastic variational inference methods with a sophisticated recognition model to parameterize the variational distributions (Kingma & Welling, 2014; Rezende et al., 2014). In this section, we develop such an algorithm for our DGM with memory. Let $\theta_g$ be the collection of parameters in the DGM.
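The gated combination can be transcribed directly; a sketch, with the key names in the parameter dictionary `p` chosen for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combine(h_g, h_m, p):
    """Ladder-style element-wise combination of the generative signal h_g and
    the memory readout h_m (sketch). p holds vectors a1..a4, b1, c1..c4."""
    a = p['a1'] + p['a2'] * h_m + p['a3'] * h_g + p['a4'] * h_g * h_m
    c = sigmoid(p['c1'] + p['c2'] * h_m + p['c3'] * h_g + p['c4'] * h_g * h_m)
    return np.maximum(a + p['b1'] * c, 0.0)  # optional final nonlinearity, ReLU here
```

Note that with the initialization described later (a3 and c3 set to ones, everything else to zeros), `a` reduces to `h_g` and the gate contributes nothing, so the output is initially just $\phi(h_g)$, i.e., the pure top-down signal.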
Then the joint distribution of each data point $x$ and the corresponding latent factors $z$ can be generally put in the factorized form
$$p(x, z; \theta_g) = p(z; \theta_g)\, p(x|z; \theta_g),$$
where the prior is often of a simple form, such as a spherical Gaussian in our experiments, and the form of the conditional distribution $p(x|z; \theta_g)$ is chosen according to the data; its mean parameters depend on the external memories through a deep architecture as stated above.

[Figure 2. A model with one stochastic layer and two deterministic layers, where z is shared by the P-net and Q-net.]

As in (Kingma & Welling, 2014), we adopt deep neural networks to parameterize a recognition model as the approximate posterior distribution $q(z|x; \theta_r)$, where $\theta_r$ is the collection of parameters in the recognition model (denoted the Q-Net, as it characterizes the distribution $q$). Since the Q-Net implements the bottom-up abstraction process to identify invariant features, it is unnecessary to equip it with an external memory. Furthermore, the Q-Net without memory is compact in parameterization. The overall architecture is asymmetric, as illustrated in Figure 2, where the components on the left side of the dotted line, together with sampling $z$ from $q(z|x)$, form the Q-Net, and the components on the right side of the dotted line, with $z$ sampled from the prior, form the generative model (denoted the P-Net, as it characterizes the model distribution $p$). A solid arrow means the corresponding component is used as the input of the next component, and a dashed arrow means the corresponding component is used as the training target of the next component, as explained below.
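A recognition network of this kind can be sketched as a bottom-up deterministic pass followed by a reparameterized Gaussian sample. This is only an assumed implementation consistent with SGVB-style training; the matrix names, ReLU choice and diagonal Gaussian $q$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def q_sample(x, V_list, b_list, W_mu, W_logvar):
    """Q-Net sketch: deterministic bottom-up features, then a diagonal
    Gaussian q(z|x) with mean from a linear map and variance through exp."""
    h = x
    for V, b in zip(V_list, b_list):
        h = np.maximum(V @ h + b, 0.0)        # hat h^(i+1) = ReLU(V hat h^(i) + b)
    mu = W_mu @ h                             # mean of z: linear in the top features
    log_var = W_logvar @ h                    # exp(log_var) keeps the variance positive
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps      # reparameterized sample z ~ q(z|x)
    return z, mu, log_var
```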
The components representing external memory and the associated attention mechanisms are filled with light gray. We omit the components corresponding to the combination functions for better visualization.

We define the Q-Net as follows. Following the example with one stochastic layer and $I$ deterministic layers in the previous section, we extract the high-level features $\hat{h}^{(i+1)}$ as
$$\hat{h}^{(i+1)} = \phi(V^{(i)} \hat{h}^{(i)} + b^{(i)}_r),$$
where $\phi$ is a proper nonlinear function and $(V^{(i)}, b^{(i)}_r)$ are trainable parameters. The bottom layer is the input data, i.e., $\hat{h}^{(0)} = x$, and the top layer is still a factorized Gaussian distribution. The mean of $z$ is computed by a linear transformation of $\hat{h}^{(I)}$, and the variance of $z$ is computed similarly but with a final exponential nonlinearity.

A variational lower bound of the log-likelihood for each data point $x$ can be formulated as
$$\mathcal{L}(\theta_g, \theta_r; x) \triangleq \mathbb{E}_{q(z|x; \theta_r)}[\log p(x, z; \theta_g) - \log q(z|x; \theta_r)].$$
We add local reconstruction error terms as an optional regularizer, and jointly optimize the parameters of the generative model and the recognition model:
$$\min_{\theta_g, \theta_r} \frac{1}{|D|} \sum_{x \in D} \Big( -\mathcal{L}(\theta_g, \theta_r; x) + \sum_{i=1}^{I} \lambda^{(i)} \| h^{(i)} - \hat{h}^{(i)} \|_2^2 \Big),$$
where the relative weights $\lambda^{(i)}$ are prefixed hyper-parameters. We optimize the objective with a stochastic gradient variational Bayes (SGVB) method (Kingma & Welling, 2014). Note that we cannot send the message of an intermediate layer in the recognition model to a layer in the generative model through a lateral connection as in the Ladder Network (Valpola, 2014; Rasmus et al., 2015), because that would change the distribution $p(x|z)$ according to the data $x$. However, we do not use any information of $x$ in the generative model explicitly, and the correctness of the variational bound can be verified.
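With a diagonal Gaussian $q$ and a standard-normal prior, the KL part of the bound has a closed form, so the per-example objective can be assembled as below. This is a sketch under those assumptions; `recon_logp` is taken to be a one-sample estimate of $\mathbb{E}_q[\log p(x|z)]$:

```python
import numpy as np

def objective_terms(mu_z, log_var_z, recon_logp, h_pairs, lams):
    """Per-example training objective (sketch): negative ELBO plus the
    weighted local reconstruction penalties lambda_i * ||h_i - h_hat_i||^2."""
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q
    kl = 0.5 * np.sum(np.exp(log_var_z) + mu_z**2 - 1.0 - log_var_z)
    penalty = sum(lam * np.sum((h - h_hat)**2)
                  for lam, (h, h_hat) in zip(lams, h_pairs))
    return kl - recon_logp + penalty  # minimizing this maximizes the bound minus penalty
```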
We employ batch normalization layers (Ioffe & Szegedy, 2015) in both the recognition model and the generative model to accelerate the training procedure, and the intermediate features in the local reconstruction error terms are replaced by the corresponding normalized versions. To compare with state-of-the-art results, we also train our method as in importance weighted autoencoders (IWAE) (Burda et al., 2015), which use an importance-weighted estimate of the log-likelihood with multiple samples in the training procedure to achieve a strictly tighter variational lower bound.

5. Experiments

We now present both quantitative and qualitative evaluations of our method on the real-valued MNIST, OCR-letters and Frey faces datasets for various tasks. The MNIST dataset (Lecun et al., 1998) consists of 50,000 training, 10,000 validation and 10,000 testing images of handwritten digits, each of 28 × 28 pixels. The OCR-letters dataset (Bache & Lichman, 2013) consists of 32,152 training, 10,000 validation and 10,000 testing letter images of size 16 × 8 pixels. The Frey faces dataset consists of 1,965 real facial expression images of size 28 × 20 pixels. We model the MNIST and OCR-letters datasets with Bernoulli distributions and the Frey faces dataset with a Gaussian distribution at the data level.

Our basic competitors are VAE (Kingma & Welling, 2014) and IWAE (Burda et al., 2015). We add the memory mechanisms to these methods and denote our models as MEM-VAE and MEM-IWAE, respectively. In all experiments except the visualization in Appendix D, MEM-VAE employs the sigmoid function and the element-wise MLP as the attention and composition functions, respectively. Our implementation is based on Theano (Bastien et al., 2012).¹ We use ADAM (Kingma & Ba, 2015) in all experiments with parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$ (decay rates of moving averages) and $\epsilon = 10^{-4}$ (a constant that prevents overflow).
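The importance-weighted log-likelihood estimate used for IWAE-style training and evaluation, $\log p(x) \approx \log \frac{1}{K}\sum_k w_k$ with $w_k = p(x, z_k)/q(z_k|x)$, should be computed in log space for numerical stability; a sketch:

```python
import numpy as np

def iw_log_likelihood(log_w):
    """Importance-weighted log-likelihood estimate from K log-weights
    log w_k = log p(x, z_k) - log q(z_k | x), z_k ~ q(z|x).
    Uses the log-sum-exp trick so large negative log-weights do not underflow."""
    log_w = np.asarray(log_w, dtype=float)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

With K = 1 this reduces to the single-sample ELBO estimate; larger K gives the strictly tighter bound mentioned above.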
As a default, the global learning rate is fixed at $10^{-3}$ for 1,000 epochs and then annealed by a factor of 0.998 for 2,000 epochs with minibatch size 100. Initially, we set $a^{(i)}_3$ and $c^{(i)}_3$ to vectors filled with ones and $(a^{(i)}_{1,2,4}, b^{(i)}_1, c^{(i)}_{1,2,4})$ to vectors filled with zeros to avoid poor local optima. This means that we initialize the output as the signal from top-down inference, which differs from the Ladder Network (Rasmus et al., 2015). We initialize the memory matrix with Gaussian random variables and the other parameters following (Glorot & Bengio, 2010). We specify $\phi$ as rectified linear units (ReLU) (Nair & Hinton, 2010) in both the generative model and the recognition model. We do not tune the hyper-parameters of our method heavily.

We choose a model with one stochastic layer and two deterministic layers as the default setting. The values of $\lambda^{(1)}$ and $\lambda^{(2)}$ are fixed at 0.1 following the Ladder Network (Rasmus et al., 2015). We do not include a local reconstruction error term at the data level since the variational lower bound already penalizes the reconstruction error of the data. The dimension of the slots in memory, $d_s$, is the same as that of the corresponding generative information $h_g$ because we use the element-wise combination function $f_c$. We employ the memory mechanism in both deterministic layers and make the total number of slots $n^{(1)}_s + n^{(2)}_s$ equal to 100 to keep the number of additional parameters relatively small. We choose a 70-30 architecture according to the validation performance on the MNIST dataset and make it the default for all experiments unless mentioned otherwise.

5.1. Density Estimation

We follow (Burda et al., 2015) to split the MNIST dataset into 60,000 training data and 10,000 testing data after choosing the hyper-parameters. We train both the baselines and our models with 1, 5 and 50 importance samples respectively, and evaluate the test likelihood with 5,000 importance samples as in (Burda et al., 2015).
In each training epoch, we binarize the data stochastically as the input. The results of VAE, IWAE-5 (trained with 5 importance samples) and IWAE-50 (trained with 50 importance samples) with one stochastic layer in (Burda et al., 2015) are -86.76, -85.54 and -84.78 nats, respectively. However, we use 500 hidden units in the deterministic layers and 100 latent variables in the stochastic layer to achieve a stronger baseline with a different architecture and more parameters.

¹ Source code at https://github.com/zhenxuan00/MEM_DGM

Table 1. Log-likelihood estimation on the MNIST and OCR-letters datasets. Results are from [1] (Murray & Salakhutdinov, 2009), [2] (Burda et al., 2015), [3] (Bornschein & Bengio, 2015), [4] (Larochelle & Murray, 2011) and [5] (Gregor et al., 2014). Results with * are evaluated on the binarized MNIST dataset.

MODELS              | MNIST  | OCR-LETTERS
VAE                 | -85.67 | -30.09
MEM-VAE (ours)      | -84.41 | -29.09
IWAE-5              | -84.49 | -28.69
MEM-IWAE-5 (ours)   | -83.26 | -27.65
IWAE-50             | -83.67 | -27.60
MEM-IWAE-50 (ours)  | -82.84 | -26.90
DBN [1]             | -84.55 |
S2-IWAE-50 [2]      | -82.90 |
RWS-SBN/SBN [3]*    | -85.48 | -29.99
RWS-NADE/NADE [3]*  | -85.23 | -26.43
NADE [4]*           | -88.86 | -27.22
DARN [5]*           | -84.13 | -28.17

We present our likelihood results in Table 1. Our methods improve the results of the baselines (both VAE and IWAE) significantly and achieve state-of-the-art results on the real-valued MNIST dataset among permutation-invariant architectures. DRAW (Gregor et al., 2015) achieves -80.97 nats by exploiting spatial information. Our method MEM-IWAE-50 even outperforms S2-IWAE-50, the best model in (Burda et al., 2015), which has two stochastic layers and four deterministic layers. To compare with a broader family of benchmarks, we further quantitatively evaluate our model on the OCR-letters dataset. We use 200 hidden units in the deterministic layers and 50 latent variables in the stochastic layer, as the dimension of the input is much smaller.
The test log-likelihood is evaluated with 100,000 importance samples as in (Bornschein & Bengio, 2015) and shown in Table 1. Again, our methods outperform the baseline approaches significantly and are comparable with the best competitors, which often employ autoregressive connections (Larochelle & Murray, 2011; Gregor et al., 2014) that are effective on small images with simple structures. Note that these sophisticated structures are not exclusive to our memory mechanisms; a systematic investigation of using memory with such structures is left as future work.

5.2. Analysis of Our Model

We now present a careful analysis of our model to investigate the possible reasons for its strong performance.

[Figure 3. (a-b): Averaged activations on each memory slot over different classes of testing data on the MNIST dataset in layer 2 and layer 1, respectively. (c-d): 2-D visualization of the correlation between classes for layer 2 and layer 1, respectively (best viewed in color).]

Classification: We investigate the effect of external memory on the training of the recognition model via classification, and MEM-VAE outperforms VAE (see details in Appendix A).

A larger baseline: We test VAE with a 530-530-100 architecture, which has almost the same number of parameters as MEM-VAE. The log-likelihoods trained with 1, 5 and 50 importance samples on MNIST are -85.69, -84.43 and -83.58, respectively. Using our memory thus leads to much better results than simply increasing the model size. A comparison of the number of parameters used in all of the models is given in Appendix B.

Importance of memory: We test the relative importance of the memory mechanism and the local reconstruction error regularizer. MEM-VAE in the default setting but without the local reconstruction error regularizer achieves a test log-density of -84.44 nats.
VAE with the additional local reconstruction regularizer achieves a test log-density of -85.68 nats. These experiments demonstrate that the memory mechanism plays the central role in recovering detailed information. The local reconstruction error regularizer may help more when supervision is provided.

Preference of memory slots over classes: We investigate the preference of memory slots over different classes in MEM-VAE. We average $h_a$ and normalize the activations for each class, and visualize the matrices in Figure 3(a-b), where each column represents a slot and each row represents a class (0-9 in top-down order). The averaged and normalized activations are used as the intensities of the corresponding positions in the matrices. Furthermore, we compute the correlation coefficients between the activations of different classes and visualize them in a 2-D graph in Figure 3(c-d), where each node represents a class and each edge represents the correlation between its two endpoints. The larger the correlation, the wider and darker the edge. We observe that the trained attention model can access the memory based on the implicit label information in the input, which accords with our assumption. The activations are correlated for digits that share similar structures, such as "7" and "9". Furthermore, different layers of memory focus on different patterns. For example, layer 1 has a strong activation for a vertical-line pattern shared among digits "1", "4", "7" and "9", while layer 2 activates most for a semi-circle pattern shared among digits "3", "5" and "8". Besides, layer 1 has almost the same 2-D visualization result as the raw data.

[Figure 4. (a-b): Random generation from IWAE-50 and MEM-IWAE-50 on the MNIST dataset, respectively.]
Visualization: We visualize the generative information $h_g$ and the memory information $h_m$ by mapping these vectors to images (see details in Appendix C and D, respectively).

5.3. Random Generation

We further evaluate the random generations from the baselines and our models empirically on the MNIST and Frey faces datasets, shown in Figure 4 and Figure 5, respectively. We label unclear or meaningless images with red rectangles, determined by majority voting among several volunteers. We do not select any pictures for either dataset. For the MNIST dataset, the setting is the same as in Section 5.1. We observe that the memory mechanism helps considerably in producing clear and meaningful samples, as shown in Figure 4.

For the Frey faces dataset, we randomly split the data into 1,865 training and 100 testing images. We use a single deterministic layer with 200 hidden units and a stochastic layer with 10 latent factors, and set $n^{(1)}_s$ to 20 as the number of training samples is small. We use one sample of the recognition model in both the training and testing procedures, as in (Kingma & Welling, 2014). We find that the minibatch size affects the results considerably, and the quality of the visualization and the averaged test log-density are inconsistent (Theis et al., 2016). Specifically, with minibatch size 100, VAE achieves a test log-density of 1308 nats, which reproduces the result with the same architecture in (Kingma & Welling, 2014), but the visualization is somewhat unclear; with minibatch size 10, VAE achieves a test log-density of 1055 nats, but the visualization is much better. All of the parameters are set with reference to (Kingma & Welling, 2014) or based on the test log-density performance of VAE.

[Figure 5. (a-b): Random generation from VAE and MEM-VAE on the Frey faces dataset, respectively.]
We also find that MEM-VAE outperforms VAE in both cases in terms of both the quantitative test likelihood and the qualitative visualization: the corresponding log-densities of MEM-VAE are 1330 and 1240 nats, respectively. The random samples for minibatch size 100 are shown in Figure 5, where we can see that all samples of MEM-VAE are clear, while some from VAE fail to present all the details of the facial expression.

5.4. Missing Value Imputation

Finally, we evaluate our method on the task of missing value imputation with three different types of noise: (1) RECT-12, where a centered rectangle of size 12 × 12 is missing; (2) RAND-0.6, where each pixel is missing with a prefixed probability of 0.6; and (3) HALF, where the left half of the image is missing. For both VAE and MEM-VAE, the missing values are randomly initialized and then inferred by a Markov chain that samples latent factors based on the current guess of the missing values and then refines the missing values based on the current latent factors. We compare the mean squared error (MSE) after 100 epochs of inference on the MNIST dataset, as shown in Table 2.

Table 2. MSE results on the MNIST dataset with different types of noise.

NOISE TYPE | VAE    | MEM-VAE
RECT-12    | 0.1403 | 0.1362
RAND-0.6   | 0.0194 | 0.0187
HALF       | 0.0550 | 0.0539

The results demonstrate that a DGM with external memory can capture the underlying structure of the data better than vanilla methods under different types of noise. Besides, MEM-VAE has better qualitative results (see Appendix E).

6. Conclusions and Future Work

In this paper, we introduce a novel building block for deep generative models (DGMs) with an external memory and an associated soft attention mechanism. In the top-down generative procedure, the additional memory helps to recover local detail information, which is often lost in the bottom-up abstraction procedure for learning invariant representations.
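The imputation Markov chain described above alternates between inferring latent factors from the current guess and refilling only the missing pixels from the reconstruction. A minimal sketch, where `encode` and `decode` are placeholders for the recognition and generative networks (not the paper's actual interfaces):

```python
import numpy as np

def impute(x_obs, mask, encode, decode, n_steps=100, rng=None):
    """Missing-value imputation (sketch): random-initialize the missing
    entries (mask == False), then repeatedly sample latent factors given the
    current guess and refine only the missing entries from the decoding."""
    rng = rng or np.random.default_rng(0)
    x = np.where(mask, x_obs, rng.random(x_obs.shape))  # random init of missing values
    for _ in range(n_steps):
        z = encode(x)                      # latent factors given the current guess
        x_hat = decode(z)                  # reconstruction from the latent factors
        x = np.where(mask, x_obs, x_hat)   # observed pixels stay clamped
    return x
```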
Various experiments on handwritten digits and letters as well as real face datasets demonstrate that our method can substantially improve vanilla DGMs on density estimation, random generation and missing value imputation tasks, and we achieve state-of-the-art results among a broad family of benchmarks. There are three possible extensions of our method:

• The use of other types of memory and attention mechanisms in DGMs can be further investigated. In particular, the combination of external memory and visual attention as well as recurrent networks (Gregor et al., 2015) may achieve better results in generative tasks.
• A class-conditional DGM (Kingma et al., 2014) with memory can potentially achieve better performance on both classification and generation, because the external memory helps to reduce the competition between invariant feature extraction and detailed generation, and explicit label information can make the whole system easier to train.
• Our method can be further applied to convolutional neural networks by sharing parameters across different channels, and then employed in non-probabilistic DGMs such as LAPGAN (Denton et al., 2015) to refine generation of high-dimensional data.

Acknowledgments

The work was supported by the National Basic Research Program (973 Program) of China (Nos. 2013CB329403, 2012CB316301), National NSF of China (Nos. 61322308, 61332007), the Youth Top-notch Talent Support Program, the Tsinghua TNList Lab Big Data Initiative, and the Tsinghua Initiative Scientific Research Program (No. 20141080934)." + }, + { + "url": "http://arxiv.org/abs/1504.06787v4", + "title": "Max-margin Deep Generative Models", + "abstract": "Deep generative models (DGMs) are effective on learning multilayered\nrepresentations of complex data and performing inference of input data by\nexploring the generative ability.
However, little work has been done on\nexamining or empowering the discriminative ability of DGMs on making accurate\npredictions. This paper presents max-margin deep generative models (mmDGMs),\nwhich explore the strongly discriminative principle of max-margin learning to\nimprove the discriminative power of DGMs, while retaining the generative\ncapability. We develop an efficient doubly stochastic subgradient algorithm for\nthe piecewise linear objective. Empirical results on MNIST and SVHN datasets\ndemonstrate that (1) max-margin learning can significantly improve the\nprediction performance of DGMs and meanwhile retain the generative ability; and\n(2) mmDGMs are competitive to the state-of-the-art fully discriminative\nnetworks by employing deep convolutional neural networks (CNNs) as both\nrecognition and generative models.", + "authors": "Chongxuan Li, Jun Zhu, Tianlin Shi, Bo Zhang", + "published": "2015-04-26", + "updated": "2015-12-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "Introduction Max-margin learning has been effective on learning discriminative models, with many examples such as univariate-output support vector machines (SVMs) [5] and multivariate-output max-margin Markov networks (or structured SVMs) [30, 1, 31]. However, the ever-increasing size of complex data makes it hard to construct such a fully discriminative model, which has only single layer of adjustable weights, due to the facts that: (1) the manually constructed features may not well capture the underlying high-order statistics; and (2) a fully discriminative approach cannot reconstruct the input data when noise or missing values are present. To address the \ufb01rst challenge, previous work has considered incorporating latent variables into a max-margin model, including partially observed maximum entropy discrimination Markov networks [37], structured latent SVMs [32] and max-margin min-entropy models [20]. 
All this work has primarily focused on a shallow structure of latent variables. To improve the \ufb02exibility, learning SVMs with a deep latent structure has been presented in [29]. However, these methods do not address the second challenge, which requires a generative model to describe the inputs. The recent work on learning max-margin generative models includes max-margin Harmoniums [4], maxmargin topic models [34, 35], and nonparametric Bayesian latent SVMs [36] which can infer the dimension of latent features from data. However, these methods only consider the shallow structure of latent variables, which may not be \ufb02exible enough to describe complex data. Much work has been done on learning generative models with a deep structure of nonlinear hidden variables, including deep belief networks [25, 16, 23], autoregressive models [13, 9], and stochastic variations of neural networks [3]. For such models, inference is a challenging problem, but fortunately there exists much recent progress on stochastic variational inference algorithms [12, 24]. However, the primary focus of deep generative models (DGMs) has been on unsupervised learning, 1 arXiv:1504.06787v4 [cs.LG] 15 Dec 2015 \fwith the goals of learning latent representations and generating input samples. Though the latent representations can be used with a downstream classi\ufb01er to make predictions, it is often bene\ufb01cial to learn a joint model that considers both input and response variables. One recent attempt is the conditional generative models [11], which treat labels as conditions of a DGM to describe input data. This conditional DGM is learned in a semi-supervised setting, which is not exclusive to ours. In this paper, we revisit the max-margin principle and present a max-margin deep generative model (mmDGM), which learns multi-layer representations that are good for both classi\ufb01cation and input inference. 
Our mmDGM conjoins the \ufb02exibility of DGMs on describing input data and the strong discriminative ability of max-margin learning on making accurate predictions. We formulate mmDGM as solving a variational inference problem of a DGM regularized by a set of max-margin posterior constraints, which bias the model to learn representations that are good for prediction. We de\ufb01ne the max-margin posterior constraints as a linear functional of the target variational distribution of the latent presentations. Then, we develop a doubly stochastic subgradient descent algorithm, which generalizes the Pagesos algorithm [28] to consider nontrivial latent variables. For the variational distribution, we build a recognition model to capture the nonlinearity, similar as in [12, 24]. We consider two types of networks used as our recognition and generative models: multiple layer perceptrons (MLPs) as in [12, 24] and convolutional neural networks (CNNs) [14]. Though CNNs have shown promising results in various domains, especially for image classi\ufb01cation, little work has been done to take advantage of CNN to generate images. The recent work [6] presents a type of CNN to map manual features including class labels to RBG chair images by applying unpooling, convolution and recti\ufb01cation sequentially; but it is a deterministic mapping and there is no random generation. Generative Adversarial Nets [7] employs a single such layer together with MLPs in a minimax two-player game framework with primary goal of generating images. We propose to stack this structure to form a highly non-trivial deep generative network to generate images from latent variables learned automatically by a recognition model using standard CNN. We present the detailed network structures in experiments part. 
Empirical results on MNIST [14] and SVHN [22] datasets demonstrate that mmDGM can signi\ufb01cantly improve the prediction performance, which is competitive to the state-of-the-art methods [33, 17, 8, 15], while retaining the capability of generating input samples and completing their missing values. 2 Basics of Deep Generative Models We start from a general setting, where we have N i.i.d. data X = {xn}N n=1. A deep generative model (DGM) assumes that each xn \u2208RD is generated from a vector of latent variables zn \u2208RK, which itself follows some distribution. The joint probability of a DGM is as follows: p(X, Z|\u03b1, \u03b2) = N Y n=1 p(zn|\u03b1)p(xn|zn, \u03b2), (1) where p(zn|\u03b1) is the prior of the latent variables and p(xn|zn, \u03b2) is the likelihood model for generating observations. For notation simplicity, we de\ufb01ne \u03b8 = (\u03b1, \u03b2). Depending on the structure of z, various DGMs have been developed, such as the deep belief networks [25, 16], deep sigmoid networks [21], deep latent Gaussian models [24], and deep autoregressive models [9]. In this paper, we focus on the directed DGMs, which can be easily sampled from via an ancestral sampler. However, in most cases learning DGMs is challenging due to the intractability of posterior inference. The state-of-the-art methods resort to stochastic variational methods under the maximum likelihood estimation (MLE) framework, \u02c6 \u03b8 = argmax\u03b8 log p(X|\u03b8). Speci\ufb01cally, let q(Z) be the variational distribution that approximates the true posterior p(Z|X, \u03b8). A variational upper bound of the per sample negative log-likelihood (NLL) \u2212log p(xn|\u03b1, \u03b2) is: L(\u03b8, q(zn); xn) \u225cKL(q(zn)||p(zn|\u03b1)) \u2212Eq(zn)[log p(xn|zn, \u03b2)], (2) where KL(q||p) is the Kullback-Leibler (KL) divergence between distributions q and p. Then, L(\u03b8, q(Z); X)\u225cP nL(\u03b8, q(zn); xn) upper bounds the full negative log-likelihood \u2212log p(X|\u03b8). 
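The per-sample variational bound L(θ, q(z_n); x_n) = KL(q(z_n)||p(z_n|α)) − E_q[log p(x_n|z_n, β)] can be computed concretely once model forms are fixed. The sketch below assumes (for illustration only) a diagonal-Gaussian q, a standard-normal prior, and a Bernoulli likelihood, with the expectation estimated by Monte Carlo.

```python
import numpy as np

def neg_elbo(x, mu, log_var, decode, n_samples=10, seed=0):
    """Per-sample bound L = KL(q(z)||p(z)) - E_q[log p(x|z)].

    q(z|x) = N(mu, diag(exp(log_var))), p(z) = N(0, I); the reconstruction
    term is estimated with reparameterized samples z = mu + sigma * eps.
    """
    rng = np.random.default_rng(seed)
    # Closed-form KL between a diagonal Gaussian and the standard normal prior
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    rec = 0.0
    for _ in range(n_samples):
        z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
        p = decode(z)                          # Bernoulli means for each pixel
        rec += np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    return kl - rec / n_samples

# Toy check with a decoder that ignores z (an illustrative stand-in only):
# with mu = 0, log_var = 0 the KL term vanishes and the bound is -3*log(0.5).
x = np.array([1.0, 0.0, 1.0])
bound = neg_elbo(x, mu=np.zeros(2), log_var=np.zeros(2),
                 decode=lambda z: np.full(3, 0.5))
```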
It is important to notice that if we do not make restricting assumption on the variational distribution q, the lower bound is tight by simply setting q(Z) = p(Z|X, \u03b8). That is, the MLE is equivalent to solving the variational problem: min\u03b8,q(Z) L(\u03b8, q(Z); X). However, since the true posterior is intractable except a handful of special cases, we must resort to approximation methods. One common 2 \fassumption is that the variational distribution is of some parametric form, q\u03c6(Z), and then we optimize the variational bound w.r.t the variational parameters \u03c6. For DGMs, another challenge arises that the variational bound is often intractable to compute analytically. To address this challenge, the early work further bounds the intractable parts with tractable ones by introducing more variational parameters [26]. However, this technique increases the gap between the bound being optimized and the log-likelihood, potentially resulting in poorer estimates. Much recent progress [12, 24, 21] has been made on hybrid Monte Carlo and variational methods, which approximates the intractable expectations and their gradients over the parameters (\u03b8, \u03c6) via some unbiased Monte Carlo estimates. Furthermore, to handle large-scale datasets, stochastic optimization of the variational objective can be used with a suitable learning rate annealing scheme. It is important to notice that variance reduction is a key part of these methods in order to have fast and stable convergence. Most work on directed DGMs has been focusing on the generative capability on inferring the observations, such as \ufb01lling in missing values [12, 24, 21], while little work has been done on investigating the predictive power, except the semi-supervised DGMs [11] which builds a DGM conditioned on the class labels and learns the parameters via MLE. 
Below, we present max-margin deep generative models, which explore the discriminative max-margin principle to improve the predictive ability of the latent representations, while retaining the generative capability. 3 Max-margin Deep Generative Models We consider supervised learning, where the training data is a pair (x, y) with input features x \u2208RD and the ground truth label y. Without loss of generality, we consider the multi-class classi\ufb01cation, where y \u2208C = {1, . . . , M}. A max-margin deep generative model (mmDGM) consists of two components: (1) a deep generative model to describe input features; and (2) a max-margin classi\ufb01er to consider supervision. For the generative model, we can in theory adopt any DGM that de\ufb01nes a joint distribution over (X, Z) as in Eq. (1). For the max-margin classi\ufb01er, instead of \ufb01tting the input features into a conventional SVM, we de\ufb01ne the linear classi\ufb01er on the latent representations, whose learning will be regularized by the supervision signal as we shall see. Speci\ufb01cally, if the latent representation z is given, we de\ufb01ne the latent discriminant function F(y, z, \u03b7; x) = \u03b7\u22a4f(y, z), where f(y, z) is an MK-dimensional vector that concatenates M subvectors, with the yth being z and all others being zero, and \u03b7 is the corresponding weight vector. We consider the case that \u03b7 is a random vector, following some prior distribution p0(\u03b7). Then our goal is to infer the posterior distribution p(\u03b7, Z|X, Y), which is typically approximated by a variational distribution q(\u03b7, Z) for computational tractability. Notice that this posterior is different from the one in the vanilla DGM. We expect that the supervision information will bias the learned representations to be more powerful on predicting the labels at testing. 
To account for the uncertainty of (\u03b7, Z), we take the expectation and de\ufb01ne the discriminant function F(y; x) = Eq \u0002 \u03b7\u22a4f(y, z) \u0003 , and the \ufb01nal prediction rule that maps inputs to outputs is: \u02c6 y = argmax y\u2208C F(y; x). (3) Note that different from the conditional DGM [11], which puts the class labels upstream, the above classi\ufb01er is a downstream model, in the sense that the supervision signal is determined by conditioning on the latent representations. 3.1 The Learning Problem We want to jointly learn the parameters \u03b8 and infer the posterior distribution q(\u03b7, Z). Based on the equivalent variational formulation of MLE, we de\ufb01ne the joint learning problem as solving: min \u03b8,q(\u03b7,Z),\u03be L(\u03b8, q(\u03b7, Z); X) + C N X n=1 \u03ben (4) \u2200n, y \u2208C, s.t. : \u001a Eq[\u03b7\u22a4\u2206fn(y)] \u2265\u2206ln(y) \u2212\u03ben \u03ben \u22650, where \u2206fn(y) = f(yn, zn) \u2212f(y, zn) is the difference of the feature vectors; \u2206ln(y) is the loss function that measures the cost to predict y if the true label is yn; and C is a nonnegative regularization parameter balancing the two components. In the objective, the variational bound is de\ufb01ned 3 \fas L(\u03b8, q(\u03b7, Z); X) = KL(q(\u03b7, Z)||p0(\u03b7, Z|\u03b1)) \u2212Eq [log p(X|Z, \u03b2)], and the margin constraints are from the classi\ufb01er (3). If we ignore the constraints (e.g., setting C at 0), the solution of q(\u03b7, Z) will be exactly the Bayesian posterior, and the problem is equivalent to do MLE for \u03b8. By absorbing the slack variables, we can rewrite the problem in an unconstrained form: min \u03b8,q(\u03b7,Z) L(\u03b8, q(\u03b7, Z); X) + CR(q(\u03b7, Z; X)), (5) where the hinge loss is: R(q(\u03b7, Z); X) = PN n=1 maxy\u2208C(\u2206ln(y) \u2212Eq[\u03b7\u22a4\u2206fn(y)]). 
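Because f(y, z) places z in the y-th of M blocks and zeros elsewhere, the expected score E_q[η⊤f(y, z)] reduces to a per-class inner product, so the prediction rule (3) is a simple argmax. A hypothetical sketch, with `lam` denoting the posterior mean of η reshaped to an (M, K) matrix:

```python
import numpy as np

def predict(lam, z_mean):
    """Prediction rule y_hat = argmax_y E_q[eta^T f(y, z)].

    With the block feature map, the expected score of class y is just
    lam[y] . E_q[z], where lam is the (M, K) posterior mean of eta.
    """
    scores = lam @ z_mean          # (M,) vector of class scores
    return int(np.argmax(scores))

lam = np.array([[1.0, 0.0],        # M = 3 classes, K = 2 latent dimensions
                [0.0, 1.0],
                [-1.0, -1.0]])
y_hat = predict(lam, z_mean=np.array([0.2, 0.9]))   # class 1 has highest score
```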
Due to the convexity of max function, it is easy to verify that the hinge loss is an upper bound of the training error of classi\ufb01er (3), that is, R(q(\u03b7, Z); X) \u2265P n \u2206ln(\u02c6 yn). Furthermore, the hinge loss is a convex functional over the variational distribution because of the linearity of the expectation operator. These properties render the hinge loss as a good surrogate to optimize over. Previous work has explored this idea to learn discriminative topic models [34], but with a restriction on the shallow structure of hidden variables. Our work presents a signi\ufb01cant extension to learn deep generative models, which pose new challenges on the learning and inference. 3.2 The Doubly Stochastic Subgradient Algorithm The variational formulation of problem (5) naturally suggests that we can develop a variational algorithm to address the intractability of the true posterior. We now present a new algorithm to solve problem (5). Our method is a doubly stochastic generalization of the Pegasos (i.e., Primal Estimated sub-GrAdient SOlver for SVM) algorithm [28] for the classic SVMs with fully observed input features, with the new extension of dealing with a highly nontrivial structure of latent variables. First, we make the structured mean-\ufb01eld (SMF) assumption that q(\u03b7, Z) = q(\u03b7)q\u03c6(Z). Under the assumption, we have the discriminant function as Eq[\u03b7\u22a4\u2206fn(y)] = Eq(\u03b7)[\u03b7\u22a4]Eq\u03c6(z(n))[\u2206fn(y)]. Moreover, we can solve for the optimal solution of q(\u03b7) in some analytical form. In fact, by the calculus of variations, we can show that given the other parts the solution is q(\u03b7) \u221d p0(\u03b7) exp \u0010 \u03b7\u22a4P n,y \u03c9y nEq\u03c6[\u2206fn(y)] \u0011 , where \u03c9 are the Lagrange multipliers (See [34] for details). 
If the prior is normal, p0(\u03b7) = N(0, \u03c32I), we have the normal posterior: q(\u03b7) = N(\u03bb, \u03c32I), where \u03bb = \u03c32 P n,y \u03c9y nEq\u03c6[\u2206fn(y)]. Therefore, even though we did not make a parametric form assumption of q(\u03b7), the above results show that the optimal posterior distribution of \u03b7 is Gaussian. Since we only use the expectation in the optimization problem and in prediction, we can directly solve for the mean parameter \u03bb instead of q(\u03b7). Further, in this case we can verify that KL(q(\u03b7)||p0(\u03b7)) = ||\u03bb||2 2\u03c32 and then the equivalent objective function in terms of \u03bb can be written as: min \u03b8,\u03c6,\u03bb L(\u03b8, \u03c6; X) + ||\u03bb||2 2\u03c32 + CR(\u03bb, \u03c6; X), (6) where R(\u03bb, \u03c6; X) = PN n=1 \u2113(\u03bb, \u03c6; xn) is the total hinge loss, and the per-sample hinge-loss is \u2113(\u03bb, \u03c6; xn) = maxy\u2208C(\u2206ln(y) \u2212\u03bb\u22a4Eq\u03c6[\u2206fn(y)]). Below, we present a doubly stochastic subgradient descent algorithm to solve this problem. The \ufb01rst stochasticity arises from a stochastic estimate of the objective by random mini-batches. Speci\ufb01cally, the batch learning needs to scan the full dataset to compute subgradients, which is often too expensive to deal with large-scale datasets. One effective technique is to do stochastic subgradient descent [28], where at each iteration we randomly draw a mini-batch of the training data and then do the variational updates over the small mini-batch. Formally, given a mini batch of size m, we get an unbiased estimate of the objective: \u02dc Lm := N m m X n=1 L(\u03b8, \u03c6; xn) + ||\u03bb||2 2\u03c32 + NC m m X n=1 \u2113(\u03bb, \u03c6; xn). The second stochasticity arises from a stochastic estimate of the per-sample variational bound and its subgradient, whose intractability calls for another Monte Carlo estimator. 
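The per-sample hinge loss ℓ(λ, φ; x_n) = max_y(Δl_n(y) − λ⊤E_qφ[Δf_n(y)]) can be evaluated directly under the block feature map. A minimal sketch, assuming (as an illustration) a 0/`cost` misclassification loss for Δl:

```python
import numpy as np

def hinge_loss(lam, z_mean, y_true, cost=1.0):
    """Per-sample hinge loss max_y ( Δl(y) - lam^T E[Δf(y)] ).

    With the block feature map, lam^T Δf(y) = (lam[y_true] - lam[y]) . E[z].
    The max over y includes y = y_true, whose term is 0, so the loss is
    always nonnegative; the argmax is the loss-augmented prediction.
    """
    scores = lam @ z_mean
    margins = np.full(lam.shape[0], cost)
    margins[y_true] = 0.0
    aug = margins - (scores[y_true] - scores)    # loss-augmented objective
    return float(aug.max()), int(aug.argmax())

lam = np.array([[1.0, 0.0], [0.0, 1.5]])
loss, y_aug = hinge_loss(lam, z_mean=np.array([1.0, 1.0]), y_true=0)
```

Here class 1 outscores the true class 0 by 0.5, so the hinge loss is 1 + 0.5 = 1.5 and the loss-augmented prediction is class 1.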
Formally, let zl n \u223cq\u03c6(z|xn, yn) be a set of samples from the variational distribution, where we explicitly put the conditions. Then, an estimate of the per-sample variational bound and the per-sample hinge-loss is \u02dc L(\u03b8, \u03c6; xn)= 1 L X l log p(xn, zl n|\u03b2)\u2212log q\u03c6(zl n); \u02dc \u2113(\u03bb, \u03c6; xn)=max y \u0010 \u2206ln(y)\u22121 L X l \u03bb\u22a4\u2206fn(y, zl n) \u0011 , 4 \fwhere \u2206fn(y, zl n) = f(yn, zl n) \u2212f(y, zl n). Note that \u02dc L is an unbiased estimate of L, while \u02dc \u2113is a biased estimate of \u2113. Nevertheless, we can still show that \u02dc \u2113is an upper bound estimate of \u2113under expectation. Furthermore, this biasedness does not affect our estimate of the gradient. In fact, by using the equality \u2207\u03c6q\u03c6(z) = q\u03c6(z)\u2207\u03c6 log q\u03c6(z), we can construct an unbiased Monte Carlo estimate of \u2207\u03c6(L(\u03b8, \u03c6; xn) + \u2113(\u03bb, \u03c6; xn)) as: g\u03c6 = 1 L L X l=1 \u0010 log p(zl n, xn) \u2212log q\u03c6(zl n) + C\u03bb\u22a4\u2206fn(\u02dc yn, zl n) \u0011 \u2207\u03c6 log q\u03c6(zl n), (7) where the last term roots from the hinge loss with the loss-augmented prediction \u02dc yn = argmaxy(\u2206ln(y) + 1 L P l \u03bb\u22a4f(y, zl n)). For \u03b8 and \u03bb, the estimates of the gradient \u2207\u03b8L(\u03b8, \u03c6; xn) and the subgradient \u2207\u03bb\u2113(\u03bb, \u03c6; xn) are easier, which are: g\u03b8 = 1 L X l \u2207\u03b8 log p(xn, zl n|\u03b8), g\u03bb = 1 L X l \u0000f(\u02dc yn, zl n) \u2212f(yn, zl n) \u0001 . Notice that the sampling and the gradient \u2207\u03c6 log q\u03c6(zl n) only depend on the variational distribution, not the underlying model. 
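The unbiasedness of the gradient estimate in Eq. (7) rests on the score-function identity ∇_φ E_q[f(z)] = E_q[f(z) ∇_φ log q_φ(z)], which follows from ∇_φ q_φ(z) = q_φ(z)∇_φ log q_φ(z). Below is a small numerical check of that identity for a Gaussian mean parameter; it is an illustration of the identity only, not the full mmDGM gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

def score_grad(phi, f, n=200_000):
    """Score-function estimate of d/dphi E_{z ~ N(phi, 1)}[f(z)].

    For a unit-variance Gaussian, grad_phi log q(z) = (z - phi), so the
    estimator averages f(z) * (z - phi) over samples.
    """
    z = phi + rng.standard_normal(n)
    return np.mean(f(z) * (z - phi))

# For f(z) = z^2 and z ~ N(phi, 1), E[f(z)] = phi^2 + 1, so the true
# gradient is 2*phi; at phi = 1.5 the estimate should be close to 3.
g = score_grad(phi=1.5, f=lambda z: z**2)
```

In practice this estimator has high variance, which is why the text emphasizes variance reduction (e.g., computing tractable KL terms analytically).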
Algorithm 1 Doubly Stochastic Subgradient Algorithm Initialize \u03b8, \u03bb, and \u03c6 repeat draw a random mini-batch of m data points draw random samples from noise distribution p(\u03f5) compute subgradient g = \u2207\u03b8,\u03bb,\u03c6 \u02dc L(\u03b8, \u03bb, \u03c6; Xm, \u03f5) update parameters (\u03b8, \u03bb, \u03c6) using subgradient g. until Converge return \u03b8, \u03bb, and \u03c6 The above estimates consider the general case where the variational bound is intractable. In some cases, we can compute the KL-divergence term analytically, e.g., when the prior and the variational distribution are both Gaussian. In such cases, we only need to estimate the rest intractable part by sampling, which often reduces the variance [12]. Similarly, we could use the expectation of the features directly, if it can be computed analytically, in the computation of subgradients (e.g., g\u03b8 and g\u03bb) instead of sampling, which again can lead to variance reduction. With the above estimates of subgradients, we can use stochastic optimization methods such as SGD [28] and AdaM [10] to update the parameters, as outlined in Alg. 1. Overall, our algorithm is a doubly stochastic generalization of Pegasos to deal with the highly nontrivial latent variables. Now, the remaining question is how to de\ufb01ne an appropriate variational distribution q\u03c6(z) to obtain a robust estimate of the subgradients as well as the objective. Two types of methods have been developed for unsupervised DGMs, namely, variance reduction [21] and auto-encoding variational Bayes (AVB) [12]. Though both methods can be used for our models, we focus on the AVB approach. For continuous variables Z, under certain mild conditions we can reparameterize the variational distribution q\u03c6(z) using some simple variables \u03f5. 
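Algorithm 1's doubly stochastic pattern, a random mini-batch plus random noise draws at every step, can be sketched generically. The quadratic objective below is an illustrative stand-in chosen so the result is easy to verify, not the mmDGM objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(grad_fn, theta0, data, n_epochs=50, batch=10, lr=0.1, L=4):
    """Skeleton of Alg. 1: at each step draw a random mini-batch AND random
    noise samples eps (the two sources of stochasticity), then take a
    subgradient step. grad_fn(theta, xb, eps) is a stochastic gradient
    estimate of the objective.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_epochs):
        idx = rng.permutation(len(data))
        for s in range(0, len(data), batch):
            xb = data[idx[s:s + batch]]                  # random mini-batch
            eps = rng.standard_normal((L,) + xb.shape)   # random noise draws
            theta -= lr * grad_fn(theta, xb, eps)
    return theta

# Stand-in objective: E_x E_eps[(theta - x - eps)^2], minimized at mean(x).
data = rng.normal(3.0, 1.0, size=100)
g = lambda th, xb, eps: np.mean(2 * (th - xb[None, :] - eps))
theta_hat = train(g, theta0=0.0, data=data)
```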
Speci\ufb01cally, we can draw samples \u03f5 from some simple distribution p(\u03f5) and do the transformation z = g\u03c6(\u03f5, x, y) to get the sample of the distribution q(z|x, y). We refer the readers to [12] for more details. In our experiments, we consider the special Gaussian case, where we assume that the variational distribution is a multivariate Gaussian with a diagonal covariance matrix: q\u03c6(z|x, y) = N(\u00b5(x, y; \u03c6), \u03c32(x, y; \u03c6)), (8) whose mean and variance are functions of the input data. This de\ufb01nes our recognition model. Then, the reparameterization trick is as follows: we \ufb01rst draw standard normal variables \u03f5l \u223cN(0, I) and then do the transformation zl n = \u00b5(xn, yn; \u03c6) + \u03c3(xn, yn; \u03c6) \u2299\u03f5l to get a sample. For simplicity, we assume that both the mean and variance are function of x only. However, it is worth to emphasize that although the recognition model is unsupervised, the parameters \u03c6 are learned in a supervised manner because the subgradient (7) depends on the hinge loss. Further details of the experimental settings are presented in Sec. 4.1. 4 Experiments We now present experimental results on the widely adopted MNIST [14] and SVHN [22] datasets. Though mmDGMs are applicable to any DGMs that de\ufb01ne a joint distribution of X and Z, we 5 \fconcentrate on the Variational Auto-encoder (VA) [12], which is unsupervised. We denote our mmDGM with VA by MMVA. In our experiments, we consider two types of recognition models: multiple layer perceptrons (MLPs) and convolutional neural networks (CNNs). We implement all experiments based on Theano [2]. 1 4.1 Architectures and Settings In the MLP case, we follow the settings in [11] to compare both generative and discriminative capacity of VA and MMVA. In the CNN case, we use standard convolutional nets [14] with convolution and max-pooling operation as the recognition model to obtain more competitive classi\ufb01cation results. 
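The Gaussian reparameterization described above, z = μ + σ ⊙ ε with ε ∼ N(0, I), as a short sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_q(mu, log_sigma, n):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).

    The randomness is pushed into eps, so z is a deterministic, differentiable
    function of the variational parameters (mu, sigma).
    """
    eps = rng.standard_normal((n,) + mu.shape)
    return mu + np.exp(log_sigma) * eps

# Empirical moments of the samples should match (mu, sigma).
z = sample_q(mu=np.array([1.0, -2.0]), log_sigma=np.log([0.5, 2.0]), n=50_000)
```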
For the generative model, we use unconvnets [6] with a \u201csymmetric\u201d structure as the recognition model, to reconstruct the input images approximately. More speci\ufb01cally, the top-down generative model has the same structure as the bottom-up recognition model but replacing max-pooling with unpooling operation [6] and applies unpooling, convolution and recti\ufb01cation in order. The total number of parameters in the convolutional network is comparable with previous work [8, 17, 15]. For simplicity, we do not involve mlpconv layers [17, 15] and contrast normalization layers in our recognition model, but they are not exclusive to our model. We illustrate details of the network architectures in appendix A. In both settings, the mean and variance of the latent z are transformed from the last layer of the recognition model through a linear operation. It should be noticed that we could use not only the expectation of z but also the activation of any layer in the recognition model as features. The only theoretical difference is from where we add a hinge loss regularization to the gradient and backpropagate it to previous layers. In all of the experiments, the mean of z has the same nonlinearity but typically much lower dimension than the activation of the last layer in the recognition model, and hence often leads to a worse performance. In the MLP case, we concatenate the activations of 2 layers as the features used in the supervised tasks. In the CNN case, we use the activations of the last layer as the features. We use AdaM [10] to optimize parameters in all of the models. Although it is an adaptive gradient-based optimization method, we decay the global learning rate by factor three periodically after suf\ufb01cient number of epochs to ensure a stable convergence. We denote our mmDGM with MLPs by MMVA. 
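The top-down order described above (unpooling, then convolution, then rectification) can be illustrated with a toy NumPy block. Note the assumptions: the unpooling of [6] places values using pooling switches, while this sketch uses a simplified fixed-position unpooling (each value copied into a k × k block), and the convolution is a naive single-channel "same" cross-correlation.

```python
import numpy as np

def unpool(x, k=2):
    """Simplified 2x 'unpooling': copy each value into a k x k block."""
    return np.repeat(np.repeat(x, k, axis=0), k, axis=1)

def conv2d_same(x, w):
    """Naive 'same' 2D cross-correlation with a 3x3 kernel (zero padding)."""
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * w)
    return out

def up_block(x, w):
    """One top-down block: unpooling -> convolution -> rectification."""
    return np.maximum(conv2d_same(unpool(x), w), 0.0)

# A 2x2 feature map upsampled to 4x4 through one block with an averaging kernel
y = up_block(np.ones((2, 2)), w=np.full((3, 3), 1.0 / 9.0))
```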
To perform classification using VA, we first learn the feature representations with VA, and then build a linear SVM classifier on these features using the Pegasos stochastic subgradient algorithm [28]. This baseline will be denoted by VA+Pegasos. The corresponding models with CNNs are denoted by CMMVA and CVA+Pegasos, respectively. 4.2 Results on the MNIST dataset We present both the prediction performance and the results on generating samples of MMVA and VA+Pegasos with both kinds of recognition models on the MNIST [14] dataset, which consists of images of 10 different classes (0 to 9) of size 28×28, with 50,000 training samples, 10,000 validation samples and 10,000 testing samples. 4.2.1 Predictive Performance Table 1: Error rates (%) on the MNIST dataset: VA+Pegasos 1.04; VA+Class-conditionVA 0.96; MMVA 0.90; CVA+Pegasos 1.35; CMMVA 0.45; Stochastic Pooling [33] 0.47; Network in Network [17] 0.47; Maxout Network [8] 0.45; DSN [15] 0.39. In the MLP case, we only use 50,000 training data, and the parameters for classification are optimized according to the validation set. We choose C = 15 for MMVA and initialize it with an unsupervised pre-training procedure in classification. The first three rows in Table 1 compare VA+Pegasos, VA+Class-conditionVA and MMVA, where VA+Class-conditionVA refers to the best fully supervised model in [11]. Our model outperforms the baseline significantly. We further use the t-SNE algorithm [19] to embed the features learned by VA and MMVA on a 2D plane, which again demonstrates the stronger discriminative ability of MMVA (see Appendix B for details). In the CNN case, we use 60,000 training data. Table 2 shows the effect of C on the classification error rate and the variational lower bound. Typically, as C gets larger, CMMVA learns more discriminative features but gives a worse estimation of the data likelihood. However, if C is too small, the supervision is not enough to yield predictive features.
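The Pegasos baseline, a linear SVM trained by stochastic subgradient descent on learned features, can be sketched as follows for the binary case. The synthetic features below are an assumption standing in for VA representations, and the projection step of the full Pegasos algorithm is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def pegasos(X, y, lam=0.01, n_iters=2000):
    """Pegasos: SGD on lam/2 ||w||^2 + mean_i max(0, 1 - y_i w.x_i),
    with the characteristic step size eta_t = 1 / (lam * t)."""
    w = np.zeros(X.shape[1])
    for t in range(1, n_iters + 1):
        i = rng.integers(len(X))
        eta = 1.0 / (lam * t)
        if y[i] * (w @ X[i]) < 1:                       # margin violated
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                           # only shrink
            w = (1 - eta * lam) * w
    return w

# Linearly separable toy features standing in for learned VA representations
X = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w = pegasos(X, y)
acc = np.mean(np.sign(X @ w) == y)
```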
Nevertheless, C = 10^3 is quite a good trade-off between the classification performance and the generative performance, and this is the default setting of CMMVA on MNIST throughout this paper. (The source code is available at https://github.com/zhenxuan00/mmdgm.) Figure 1: (a-b): randomly generated images by VA and MMVA, 3000 epochs; (c-d): randomly generated images by CVA and CMMVA, 600 epochs. In this setting, the classification performance of our CMMVA model is comparable to the recent state-of-the-art fully discriminative networks (without data augmentation), shown in the last four rows of Table 1. 4.2.2 Generative Performance Table 2: Effects of C on the MNIST dataset with a CNN recognition model (C / error rate (%) / lower bound): 0 / 1.35 / -93.17; 1 / 1.86 / -95.86; 10 / 0.88 / -95.90; 10^2 / 0.54 / -96.35; 10^3 / 0.45 / -99.62; 10^4 / 0.43 / -112.12. We further investigate the generative capability of MMVA on generating samples. Fig. 1 illustrates the images randomly sampled from the VA and MMVA models, where we output the expectation of the gray value at each pixel to get a smooth visualization. We do not pre-train our model in any setting when generating data, to show that MMVA (CMMVA) retains the generative capability of DGMs. 4.3 Results on the SVHN (Street View House Numbers) dataset SVHN [22] is a large dataset consisting of color images of size 32 × 32. The task is to recognize the center digits in natural scene images, which is significantly harder than the classification of hand-written digits. We follow the work [27, 8] to split the dataset into 598,388 training data, 6,000 validation data and 26,032 testing data, and preprocess the data by Local Contrast Normalization (LCN). We only consider the CNN recognition model here. The network structure is similar to that in MNIST. We set C = 10^4 for our CMMVA model on SVHN by default. Table 3: Error rates (%) on the SVHN dataset.
CVA+Pegasos 25.3; CMMVA 3.09; CNN [27] 4.9; Stochastic Pooling [33] 2.80; Maxout Network [8] 2.47; Network in Network [17] 2.35; DSN [15] 1.92. Table 3 shows the predictive performance. In this more challenging problem, we observe a larger improvement by CMMVA as compared to CVA+Pegasos, suggesting that DGMs benefit a lot from max-margin learning on image classification. We also compare CMMVA with state-of-the-art results. To the best of our knowledge, there are no competitive generative models for classifying digits on the SVHN dataset with full labels. We further compare the generative capability of CMMVA and CVA to examine the benefits from jointly training DGMs and max-margin classifiers. Though CVA gives a tighter lower bound of the data likelihood and reconstructs data more elaborately, it fails to learn the pattern of digits in a complex scenario and could not generate meaningful images. Visualization of random samples from CVA and CMMVA is shown in Fig. 2. In this scenario, the hinge loss regularization on the recognition model is useful for generating the main objects to be classified in images. 4.4 Missing Data Imputation and Classification Finally, we test all models on the task of missing data imputation. For MNIST, we consider two types of missing values [18]: (1) Rand-Drop: each pixel is missing randomly with a pre-fixed probability; and (2) Rect: a rectangle located at the center of the image is missing. Given the perturbed images, we uniformly initialize the missing values between 0 and 1, and then iteratively do the following steps: (1) use the recognition model to sample the hidden variables; (2) predict the missing values to generate images; and (3) use the refined images as the input of the next round.
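The two corruption patterns (Rand-Drop and Rect) can be generated as boolean masks, for example:

```python
import numpy as np

def rand_drop_mask(shape, p, rng):
    """Rand-Drop: each pixel is missing independently with probability p."""
    return rng.random(shape) < p

def rect_mask(shape, k):
    """Rect: a centered k x k rectangle is missing."""
    m = np.zeros(shape, dtype=bool)
    r0 = (shape[0] - k) // 2
    c0 = (shape[1] - k) // 2
    m[r0:r0 + k, c0:c0 + k] = True
    return m

rng = np.random.default_rng(0)
m1 = rand_drop_mask((28, 28), 0.4, rng)    # Rand-Drop (0.4) on a 28x28 image
m2 = rect_mask((28, 28), 12)               # Rect (12 x 12)
```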
For SVHN, we do the same procedure as in MNIST, but initialize the missing values with Gaussian random variables, as the input distribution changes. Figure 2: (a): training data after LCN preprocessing; (b): random samples from CVA; (c-d): random samples from CMMVA when C = 10^3 and C = 10^4, respectively. Visualization results on MNIST and SVHN are presented in Appendix C and Appendix D, respectively. Table 4: MSE on MNIST data with missing values in the testing procedure (VA / MMVA / CVA / CMMVA): RAND-DROP (0.2): 0.0109 / 0.0110 / 0.0111 / 0.0147; RAND-DROP (0.4): 0.0127 / 0.0127 / 0.0127 / 0.0161; RAND-DROP (0.6): 0.0168 / 0.0165 / 0.0175 / 0.0203; RAND-DROP (0.8): 0.0379 / 0.0358 / 0.0453 / 0.0449; RECT (6 × 6): 0.0637 / 0.0645 / 0.0585 / 0.0597; RECT (8 × 8): 0.0850 / 0.0841 / 0.0754 / 0.0724; RECT (10 × 10): 0.1100 / 0.1079 / 0.0978 / 0.0884; RECT (12 × 12): 0.1450 / 0.1342 / 0.1299 / 0.1090. Intuitively, generative models with CNNs could be more powerful at learning patterns and high-level structures, while generative models with MLPs lean more towards reconstructing the pixels in detail. This conforms to the MSE results shown in Table 4: CVA and CMMVA outperform VA and MMVA with a missing rectangle, while VA and MMVA outperform CVA and CMMVA with random missing values. Compared with the baseline, mmDGMs also make more accurate completions when large patches are missing. All of the models infer missing values for 100 iterations. We also compare the classification performance of CVA, CNN and CMMVA with Rect missing values in the testing procedure in Appendix E. CMMVA outperforms both CVA and CNN. Overall, mmDGMs have a comparable capability of inferring missing values and prefer to learn high-level patterns instead of local details. 5 Conclusions We propose max-margin deep generative models (mmDGMs), which conjoin the predictive power of the max-margin principle and the generative ability of deep generative models.
We develop a doubly stochastic subgradient algorithm to learn all parameters jointly and consider two types of recognition models with MLPs and CNNs respectively. In both cases, we present extensive results to demonstrate that mmDGMs can signi\ufb01cantly improve the prediction performance of deep generative models, while retaining the strong generative ability on generating input samples as well as completing missing values. In fact, by employing CNNs in both recognition and generative models, we achieve low error rates on MNIST and SVHN datasets, which are competitive to the state-of-the-art fully discriminative networks. Acknowledgments The work was supported by the National Basic Research Program (973 Program) of China (Nos. 2013CB329403, 2012CB316301), National NSF of China (Nos. 61322308, 61332007), Tsinghua TNList Lab Big Data Initiative, and Tsinghua Initiative Scienti\ufb01c Research Program (Nos. 20121088071, 20141080934)." + } + ], + "Chendong Xiang": [ + { + "url": "http://arxiv.org/abs/2303.18181v2", + "title": "A Closer Look at Parameter-Efficient Tuning in Diffusion Models", + "abstract": "Large-scale diffusion models like Stable Diffusion are powerful and find\nvarious real-world applications while customizing such models by fine-tuning is\nboth memory and time inefficient. Motivated by the recent progress in natural\nlanguage processing, we investigate parameter-efficient tuning in large\ndiffusion models by inserting small learnable modules (termed adapters). In\nparticular, we decompose the design space of adapters into orthogonal factors\n-- the input position, the output position as well as the function form, and\nperform Analysis of Variance (ANOVA), a classical statistical approach for\nanalyzing the correlation between discrete (design options) and continuous\nvariables (evaluation metrics). Our analysis suggests that the input position\nof adapters is the critical factor influencing the performance of downstream\ntasks. 
Then, we carefully study the choice of the input position, and we find\nthat putting the input position after the cross-attention block can lead to the\nbest performance, validated by additional visualization analyses. Finally, we\nprovide a recipe for parameter-efficient tuning in diffusion models, which is\ncomparable if not superior to the fully fine-tuned baseline (e.g., DreamBooth)\nwith only 0.75 \\% extra parameters, across various customized tasks.", "authors": "Chendong Xiang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu", "published": "2023-03-31", "updated": "2023-04-12", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.LG" ], "main_content": "Introduction Diffusion models [14,35,36] have recently become popular due to their excellent ability to generate high-quality and diverse images [9, 30, 31, 33]. By interacting with the condition information in its iterative generation process, diffusion models have an outstanding performance in conditional generation tasks, which motivates applications such as text-to-image generation [30, 31, 33], image-to-image translation [6, 25, 41], image restoration [18, 34], 3D synthesis [28], audio synthesis [5, 20] and inverse molecular design [3]. (*Corresponding author. arXiv:2303.18181v2 [cs.CV] 12 Apr 2023) Figure 1. Comparison of resource usage. (a) Tuned parameters and CLIP similarity comparison between our method with the best setting and DreamBooth: our method reaches comparable performance with much fewer parameters. (b) Memory peak and time cost comparison between our method with the best setting and DreamBooth: our method reduces memory usage and time cost by around 30%. Figure 2. Comparison with DreamBooth. Images generated by the fully fine-tuned method (DreamBooth [32]) and our parameter-efficient tuning method with the best setting (Ours) on personalization tasks; per-panel CLIP similarities: DreamBooth (0.841) vs. Ours (0.908), and DreamBooth (0.899) vs. Ours (0.899). We select the best samples for both methods (see more samples in Appendix). Ours achieves better performance in terms of both visual quality and the CLIP similarity↑ (in brackets). With the knowledge learned from massive data, large-scale diffusion models act as strong priors for downstream tasks [28, 32, 40]. Among them, DreamBooth [32] tunes all parameters in a large-scale diffusion model to generate specific objects that users desire. However, fine-tuning the entire model is inefficient in terms of computation, memory and storage cost. An alternative way is the parameter-efficient transfer learning methods [12, 16] originating from the area of natural language processing (NLP). These methods insert small trainable modules (termed adapters) into the model and freeze the original model. Nevertheless, parameter-efficient transfer learning has not been thoroughly studied in the area of diffusion models. In contrast to the transformer-based language models [4, 7, 8, 29] in NLP, the U-Net architecture widely used in diffusion models includes more components such as the residual block with down/up-sampling operators, self-attention and cross-attention. This leads to a larger design space of parameter-efficient transfer learning than the transformer-based language models. In this paper, we present a first systematic study on the design space of parameter-efficient tuning in large-scale diffusion models. We consider Stable Diffusion [31] as the concrete case, since currently it is the only open-source large-scale diffusion model. In particular, we decompose the design space of adapters into orthogonal factors – the input position, the output position, and the function form.
Through performing a powerful tool for analyzing differences between groups in experimental research named Analysis of Variance (ANOVA) [11] on these factors, we find that the input position is the critical factor influencing the performance of downstream tasks. Then, we carefully study the choice of the input position, and we find that putting the input position after the cross-attention block can maximally encourage the network to perceive the change in the input prompt (see Figure 11), therefore leading to the best performance. Built upon our study, our best setting could reach comparable if not better results than the fully fine-tuned method within 0.75% extra parameters on both the personalization task introduced in DreamBooth [32] and the task of fine-tuning on a small set of text-image pairs. 2. Background 2.1. Diffusion Models Diffusion models learn the data distribution q(x_0) by reversing a noise-injection process q(x_{1:T} | x_0) = ∏_{t=1}^{T} q(x_t | x_{t−1}), where q(x_t | x_{t−1}) = N(x_t | √(α_t) x_{t−1}, β_t I) corresponds to a step of noise injection. The transition of the reverse process is approximated by a Gaussian model p(x_{t−1} | x_t) = N(x_{t−1} | μ(x_t), σ_t² I), where the optimal mean under maximum likelihood estimation [2] is μ*_t(x_t) = (1/√α_t) (x_t − (β_t / √(1 − ᾱ_t)) E[ε | x_t]). Here ᾱ_t = ∏_{i=1}^{t} α_i and ε is the standard Gaussian noise injected into x_t. To obtain the optimal mean, it is sufficient to estimate the conditional expectation E[ε | x_t] via a noise prediction objective min_θ E_{t, x_0, ε} ‖ε_θ(x_t, t) − ε‖²_2, where ε_θ(x_t, t) is the noise prediction network, and the optimal one satisfies ε_{θ*}(x_t, t) = E[ε | x_t] according to the property of the ℓ_2 loss. In practice, we often care about conditional generation.
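The forward noising and the noise-prediction objective above can be sketched numerically. The linear beta schedule is an assumption for illustration (the text does not specify one here):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear beta schedule (an assumption for illustration; the text
# does not specify the schedule).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # bar(alpha)_t = prod_{i<=t} alpha_i

def q_sample(x0, t, eps):
    # draw x_t from the marginal q(x_t | x_0) implied by the per-step
    # transitions: x_t = sqrt(bar(alpha)_t) x_0 + sqrt(1 - bar(alpha)_t) eps
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def noise_prediction_loss(eps_pred, eps):
    # min_theta E || eps_theta(x_t, t) - eps ||_2^2
    return float(np.mean((eps_pred - eps) ** 2))

x0 = rng.normal(size=16)
eps = rng.normal(size=16)
xt = q_sample(x0, 500, eps)
```

A perfect predictor recovers the injected noise exactly and attains zero loss, consistent with ε_{θ*}(x_t, t) = E[ε | x_t] being the minimizer.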
To perform it with diffusion models, we only need to introduce the condition information c to the noise prediction network during training: min_θ E_{t, x_0, c, ε} ‖ε_θ(x_t, t, c) − ε‖²_2. 2.2. The Architecture in Stable Diffusion Currently, the most popular architecture for diffusion models is the U-Net-based architecture [9, 14, 30, 31, 33]. Specifically, the U-Net-based architecture in Stable Diffusion [31] is shown in Figure 3. The U-Net comprises stacked basic blocks, each containing a transformer block and a residual block. In the transformer block, there are three types of sublayers: a self-attention layer, a cross-attention layer, and a fully connected feed-forward network. The attention layer operates on queries Q ∈ R^{n×d_k} and key-value pairs K ∈ R^{m×d_k}, V ∈ R^{m×d_v}: Attn(Q, K, V) = softmax(QKᵀ / √d_k) V ∈ R^{n×d_v}, (1) where n is the number of queries, m is the number of key-value pairs, d_k is the dimension of keys, and d_v is the dimension of values. In the self-attention layer, x ∈ R^{n×d_x} is the only input. In the cross-attention layer of a conditioned diffusion model, there are two inputs x ∈ R^{n×d_x} and c ∈ R^{m×d_c}, where x is the output from the prior block and c represents the condition information. The fully connected feed-forward network consists of two linear transformations with a ReLU activation in between: FFN(x) = ReLU(xW_1 + b_1) W_2 + b_2, (2) where W_1 ∈ R^{d×d_m}, W_2 ∈ R^{d_m×d} are the learnable weights, and b_1 ∈ R^{d_m}, b_2 ∈ R^{d} are the learnable biases. The residual block consists of a sequence of convolutional layers and activations, where the time embedding is injected into the residual block by an addition operation. 2.3. Parameter-Efficient Transfer Learning Transfer learning is a technique that leverages the knowledge learned from one task to improve the performance of a related task.
The method of pre-training and then performing transfer learning on downstream tasks is widely used. However, traditional transfer learning approaches require updating large numbers of parameters, which is computationally expensive and memory-intensive. Parameter-efficient transfer learning was first proposed in the area of natural language processing (NLP). The key idea of parameter-efficient transfer learning is to reduce the number of updated parameters. This could be done by updating a part of the model or adding extra small modules. Some parameter-efficient transfer learning methods (such as adapter [16] and LoRA [17]) choose to add extra small modules named adapters to the model. In contrast, other methods (prefix tuning [22], prompt-tuning [21]) prepend some learnable vectors to activations or inputs. Extensive study has validated that parameter-efficient fine-tuning methods can achieve considerable results with a small number of parameters in the area of NLP. 3. Design Space of Parameter-Efficient Learning in Diffusion Models Despite the success of parameter-efficient transfer learning in NLP, this technique is not fully understood in the area of diffusion models due to the existence of more components such as the residual block and cross-attention. Before presenting our analysis on parameter-efficient tuning in diffusion models, we decompose the design space of adapters into three orthogonal factors – the input position, the output position, and the function form. This work considers Stable Diffusion [31], since currently it is the only open-source large-scale diffusion model (see Figure 3 for its U-Net-based architecture). Figure 3. Background. The top left figure shows the overview architecture of the U-Net-based diffusion model. The top right shows how the diffusion model removes noise from noisy data by T − 1 steps.
The bottom half of the figure shows the architecture of the residual block and transformer block. Adapters (blocks in red in the figure) are modules with a small number of parameters inserted into the model for parameter-efficient transfer learning. Figure 4. Illustration of activation positions: SAin, SAout, CAin, CAout, CAc, FFNin, FFNout, Transout, Resin and Resout, named after the cross attention (CA), self attention (SA), feed-forward (FFN), transformer (Trans) and residual (Res) blocks. Generally, the main name of an activation position is an alias of a specific block in the model, and the subscript explains the relationship between the activation and the block. We elaborate the input position, the output position, and the function form based on the architecture of Stable Diffusion. 3.1. Input Position and Output Position The input position is where the adapter's input comes from, and the output position is where the adapter's output goes. For neat notation, as shown in Figure 4, the positions are named according to their neighboring layers. For example, SAin represents that the position corresponds to the input of the self-attention layer, Transout corresponds to the output of the transformer block, and CAc corresponds to the condition input of the cross-attention layer. In our framework, the input position could be any one of the activation positions described in Figure 4. Thus, there are ten different options for the input position in total. As for the output, some positions are equivalent since addition is commutative. For example, putting the output at SAout is equivalent to putting the output at CAin. As a result, the options for the output position are reduced to seven in total. Another constraint is that the output position must be placed after the input position. 3.2. Function Form Function form describes how an adapter transfers the input into the output.
We present the function form of adapters in the transformer block and residual block respectively (see Figure 5), where both consist of a down-sampling operator, an activation function, an up-sampling operator, and a scaling factor. The down-sampling operator reduces the dimension of the input and the up-sampling operator increases the dimension to ensure the output has the same dimension as the input. Figure 5. The function form of adapters in the transformer block and residual block. The output is further multiplied with a scaling factor s to control its strength in influencing the original network. Specifically, the transformer block adapter uses low-rank matrices Wdown and Wup as the down-sampling and up-sampling operators respectively, and the residual block adapter employs 3×3 convolution layers Convdown and Convup as the down-sampling and up-sampling operators respectively. Note that these convolution layers only change the number of channels without changing the spatial size. Besides, the residual block adapter also processes its input with a group normalization [38] operator. We include different activation functions and scaling factors in our design choices. The activation functions include ReLU, Sigmoid, SiLU, and the identity operator, and the scale factors include 0.5, 1.0, 2.0, 4.0. 4. Discover the Key Factor with Analysis of Variance As mentioned earlier, finding the optimal solution in such a large discrete search space is a challenge.
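The transformer-block adapter's function form (down-sampling, activation, up-sampling, scaling) can be sketched as below. The dimensions are illustrative, and the zero initialization of the up-projection is an assumption (a common adapter convention so the module starts as a no-op), not something stated in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the transformer-block adapter: W_down -> activation -> W_up,
# then a scaling factor, with the result added back at the output position.
# Zero-initializing W_up (so the adapter starts as a no-op) is an assumed
# convention, not specified in the text.
class TransformerBlockAdapter:
    def __init__(self, d, r, scale=1.0):
        self.W_down = rng.normal(0.0, 0.02, (d, r))  # low-rank down-sampling
        self.W_up = np.zeros((r, d))                 # low-rank up-sampling
        self.scale = scale                           # design choice: 0.5 / 1.0 / 2.0 / 4.0

    def __call__(self, x, act=lambda h: np.maximum(h, 0.0)):  # ReLU by default
        return self.scale * (act(x @ self.W_down) @ self.W_up)

d, r = 320, 16                       # illustrative dimensions
adapter = TransformerBlockAdapter(d, r, scale=1.0)
x = rng.normal(size=(4, d))          # activations taken from the input position
h = x + adapter(x)                   # adapter output added at the output position
```

With the zero-initialized up-projection the adapter initially passes activations through unchanged, so tuning starts from the frozen pre-trained behavior.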
To discover which factor in the design space influences the performance the most, we quantify the correlation between model performance and factors by leveraging the one-way analysis of variance (ANOVA) method, which is widely used in many fields, including psychology, education, biology, and economics. The main idea behind ANOVA is to partition the total variation in the data into two components: variation within groups (MSE) and variation between groups (MSB). MSB measures the difference between the group means, while the variation within groups measures the difference between individual observations and their respective group means. The statistical test used in ANOVA is based on the F-distribution, which compares the ratio of the variation between groups to the variation within groups (F-statistic). If the F-statistic is large enough, it suggests that there is a significant difference between the means of the groups, which indicates a strong correlation. Figure 6. The relationship between the performance (i.e., CLIP similarity↑) and the input & output position of adapters in the DreamBooth task. Figure 7. The relationship between the performance (i.e., FID↓) and the input & output position of adapters in the fine-tuning task. 5. Experiments We first present our experimental setup in Section 5.1. Then we analyze which factor in the design space is the most critical in Section 5.2. After discovering the importance of the input position, we present a detailed ablation study on it in Section 5.3. Finally, we present a comprehensive comparison between our best setting and DreamBooth (i.e., fine-tuning all parameters) in Section 5.4. 5.1. Setup Tasks & datasets.
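The MSB/MSE ratio described above can be computed directly. A minimal one-way ANOVA sketch, with made-up metric values standing in for runs grouped by a design option:

```python
import numpy as np

def one_way_anova_F(groups):
    # F = MSB / MSE: variation between group means over variation within
    # groups. Each group collects the metric values of runs sharing one
    # design option (e.g., the same input position).
    all_vals = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ssb = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    sse = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum() for g in groups)
    msb = ssb / (k - 1)
    mse = sse / (n - k)
    return msb / mse

# Design options whose groups have clearly different means yield a large F
# (a strong correlation with the metric); similar means yield F near 1 or below.
distinct = [[1.0, 1.1, 0.9], [5.0, 5.2, 4.8]]
similar = [[1.0, 1.2, 0.8], [1.1, 0.9, 1.0]]
```

In practice one would compare the resulting F against the F-distribution's critical value; here the raw statistic already separates the two cases.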
We consider two transfer learning tasks in diffusion models characterized by different amounts of data. DreamBooth task. The first task is to personalize diffusion models with less than 10 input images, as proposed in DreamBooth [32]. We term it the DreamBooth task for simplicity. The training dataset of DreamBooth consists of two sets of data: personalization data and regularization data. Personalization data is images of a specific object (e.g., a white dog) provided by the user. Regularization data is images of a general object similar to the personalization data (e.g., dogs with different colors). The personalization data size is less than ten, and regularization data could be collected or generated by the model. DreamBooth uses a rare token [V] and a class word Cclass to distinguish regularization data and personalization data. In particular, with regularization data, the prompt will be “a photo of Cclass”; with personalization data, the prompt will be “a photo of [V] Cclass”, where Cclass is a word describing the general class of the data (e.g., dog). We collect personalization data from both the Internet and live-action photography, together with data from DreamBooth (33 in total). We use Stable Diffusion itself to generate the corresponding regularization data conditioned on the prompt “a photo of Cclass”. Fine-tuning task. The other task is to fine-tune on a small set of text-image pairs. We term it the fine-tuning task for simplicity. Following [39], we consider fine-tuning on the flower dataset [27] with 8189 images and use the same setting. We caption each image with the prompt “a photo of Fname”, where Fname is the flower name of the image class. Tuning. We use the AdamW [23] optimizer. For the DreamBooth task, we set the learning rate to 1e-4, which lets both DreamBooth and our method converge in around 1k steps, fix the adapter size to 1.5M (0.17% of the U-Net model), and train for 2.5k steps.
For the task of fine-tuning on a small set of text-image pairs, we set the learning rate to 1e-5, fix the adapter size to 6.4M (0.72% of the U-Net model), and train for 60k steps. Sampling. For better sampling efficiency, we choose DPM-Solver [24] as the sampling algorithm with 25 sampling steps and a classifier-free guidance (cfg) [15] scale of 7.0. In some cases, we use a cfg scale of 5.0 for better image quality. Evaluation. For the DreamBooth task, we evaluate faithfulness using the image distance in the CLIP space as proposed in [10]. Specifically, for each personalization target, we generate 32 images using the prompt “A photo of [V] Cclass”. The metric is the mean pair-wise CLIP-space cosine similarity (CLIP similarity) between the generated images and the images of the personalization training set. For the task of fine-tuning on a small set of text-image pairs, we use the FID score [13] to evaluate the similarity between the training images and generated images. We randomly draw 5k prompts from the training set, use these prompts to generate images, and then compute FID by comparing the generated images with the training images. 5.2. Analysis of Variance (ANOVA) on the Design Space Recall that we decompose the design space into the factors of input position, output position, and function form. We perform the ANOVA method (see Section 4 for details) on these design dimensions. We consider the DreamBooth task for efficiency, since it requires fewer training steps. As shown in Figure 8, when grouped by input position, the F-statistic is large, which indicates that the input position is a critical factor for the model's performance. When grouped by output position, it shows a weak correlation. When grouped by function form (both activation function and scale factor), the F-statistic is around 1, indicating that the variability between groups is similar to the variability within groups, which suggests that there is no significant difference between the group means. Figure 8. F-statistic of ANOVA by grouping input position, output position, activation function, and scale factor. The F-statistic is large when grouping by input position, which indicates a significant relation to the input position. We further visualize the performance with different input positions and output positions. Figure 6 shows the results of the DreamBooth task. Figure 7 shows the FID results of the fine-tuning task. As discussed above, we conclude that the input position of the adapter is the key factor affecting the performance of parameter-efficient transfer learning. 5.3. Ablate the Input Position As shown in Figure 6 and Figure 7, we find that adapters with an input position of CAc or CAout have good performance on both tasks. In Figure 9, we present generated samples from the personalized diffusion models with different input positions of adapters. Adapters with the input position at CAc or CAout are able to generate personalized images comparable to fine-tuning all parameters, while adapters with the input position at other places do not. We further compute the difference between the noise predictions given the prompts “a photo of [V] Cclass” and “a photo of Cclass”. The pipeline is shown in Figure 10, where we first add noise to an image from the regularization data, use the U-Net to predict the noise given the two prompts, and visualize the difference between the two predicted noises. As shown in Figure 11, adapters with an input position of CAc or CAout present a significant difference between the noise predictions. 5.4. Compare with DreamBooth In this section, we compare our best setting (with the input position at CAout and the output position at FFNin) to DreamBooth, which fine-tunes all parameters in diffusion models.
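The mean pair-wise CLIP-space cosine similarity used as the faithfulness metric above can be sketched as follows; the embeddings here are random placeholders, not real CLIP features:

```python
import numpy as np

def clip_similarity(gen_emb, ref_emb):
    # Mean pair-wise cosine similarity between the embeddings of generated
    # images and those of the personalization set (higher is better).
    g = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    return float((g @ r.T).mean())   # average over all (generated, reference) pairs

rng = np.random.default_rng(0)
ref = rng.normal(size=(5, 512))                      # placeholder CLIP embeddings
faithful = ref + 0.01 * rng.normal(size=ref.shape)   # near-duplicates of the set
unrelated = rng.normal(size=(32, 512))               # embeddings of other images
```

Faithful generations score higher than unrelated ones under this metric, which is why it serves as a proxy for how well the personalization target is preserved.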
SA\"# \ud835\udc45\ud835\udc52\ud835\udc60\"# CA$%& \ud835\udc39\ud835\udc39\ud835\udc41$%& Tune-all Train data Reg data Success methods Fail methods Figure 9. The generated samples of personalized diffusion models with different input positions of adapters. All samples are conditioned on \u201ca photo of [V ] Cclass\u201d, it is worth noticing that the success methods generate the right images, but the fail methods are likely to generate pictures similar to regularization data. Prompt \u654f\u611f\u6027\u7b97\u6cd5 dog [V] dog Figure 10. Pipeline of experiment visualize the difference of noise prediction. We show the results of each case on the DreamBooth task in Figure 12, which show that our method is better in most cases. We also compare our best setting to the fully \ufb01ne-tuned method in the \ufb01ne-tuning task on the \ufb02ower dataset. Our recipe reach FID of 24.49, which is better than 28.15 of the fully \ufb01ne-tuned method. 6. Related Work Personalization. Large-scale text-to-image diffusion models trained on web data can generate high-resolution and diverse images whose contents are controlled by the input text, but often lacks the ability for personalized generation on a certain object that the user desires. Recent work such as textual inversion [10] and DreamBooth [32] aims to address this by \ufb01ne-tuning the diffusion model on a small set of images for the object. The textual inversion only tunes a word embedding. To obtain a stronger performance, DreamBooth tunes all parameters with a regularization loss to prevent over\ufb01tting. Parameter-ef\ufb01cient transfer learning. Parameteref\ufb01cient transfer learning is originated from the area of NLP, such as adapter [16], pre\ufb01x tuning [22], prompt tuning [21] and LoRA [17]. 
Specifically, adapter [16] inserts a small low-rank multilayer perceptron (MLP) with a nonlinear activation function f(·) between transformer blocks; prefix tuning [22] prepends tunable prefix vectors to the keys and values at each attention layer; prompt-tuning [21] simplifies prefix-tuning by adding tunable input word embeddings; LoRA [17] injects tunable low-rank matrices into the query and value projection matrices of the transformer block. Figure 11. The noise prediction difference of various settings. The “No tune” method uses the original Stable Diffusion model without any fine-tuning. All adapter methods are noted in the form input−output. We found that adapters with an input position of CAout and CAc react better to the prompt changes. Figure 12. Performance comparison with DreamBooth. Our method performs better in most cases. While these parameter-efficient transfer learning methods have different forms or motivations, recent work [12] proposes a unified view of these methods by designating a set of factors to describe the design space of parameter-efficient transfer learning in pure transformers [37]. These factors include modified representation, insertion form, functional form, and composition function. In contrast, our method focuses on the U-Net with more components than pure transformers, leading to a larger design space. Besides, we use a simpler way to decompose the design space into orthogonal factors, i.e., the input position, the output position and the function form. Transfer learning for diffusion models. There are methods that transfer the diffusion model to recognize a specific object or perform semantic editing [19, 32] by tuning the whole model.
Previous work [39] tries to transfer a large diffusion model into an image-to-image model on small datasets, but the total number of parameters tuned is nearly half of the original model. [26, 40] transfers diffusion model to accept new conditions and introduces much more parameters than ours. Concurrent work [1] also performs parameter-ef\ufb01cient transfer learning on Stable Diffusion, their method could reach comparable results with fully \ufb01ne-tuned method on DreamBooth [32] task, while their method is based on adding adapters on multiple positions at the same time, leading to a more complicated design space. 7. Conclusion In this paper, we perform a systematical study on the design space of parameter-ef\ufb01cient transfer learning by inserting adapters in diffusion models. We decompose the design space of adapters into orthogonal factors \u2013 the input position, the output position and the function form. By performing Analysis of Variance (ANOVA), we discover the input position of adapters is the critical factor in\ufb02uencing the performance of downstream tasks. Then, we carefully study the choice of the input position, and we \ufb01nd that putting the input position after the cross-attention block can lead to the best performance, validated by additional visualization analyses. Finally, we provide a recipe for parameteref\ufb01cient tuning in diffusion models, which is comparable if not superior to the fully \ufb01ne-tuned baseline (e.g., DreamBooth) with only 0.75 % extra parameters, across various customized tasks." + } + ], + "Jiashuo Liu": [ + { + "url": "http://arxiv.org/abs/2311.05054v1", + "title": "Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications", + "abstract": "Machine learning algorithms minimizing average risk are susceptible to\ndistributional shifts. 
Distributionally Robust Optimization (DRO) addresses\nthis issue by optimizing the worst-case risk within an uncertainty set.\nHowever, DRO suffers from over-pessimism, leading to low-confidence\npredictions, poor parameter estimations as well as poor generalization. In this\nwork, we conduct a theoretical analysis of a probable root cause of\nover-pessimism: excessive focus on noisy samples. To alleviate the impact of\nnoise, we incorporate data geometry into calibration terms in DRO, resulting in\nour novel Geometry-Calibrated DRO (GCDRO) for regression. We establish the\nconnection between our risk objective and the Helmholtz free energy in\nstatistical physics, and this free-energy-based risk can extend to standard DRO\nmethods. Leveraging gradient flow in Wasserstein space, we develop an\napproximate minimax optimization algorithm with a bounded error ratio and\nelucidate how our approach mitigates noisy sample effects. Comprehensive\nexperiments confirm GCDRO's superiority over conventional DRO methods.", + "authors": "Jiashuo Liu, Jiayun Wu, Tianyu Wang, Hao Zou, Bo Li, Peng Cui", + "published": "2023-11-08", + "updated": "2023-11-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction Machine learning algorithms with empirical risk minimization (ERM) have been shown to perform poorly under distributional shifts, especially sub-population shifts where substantial data subsets are underrepresented in the average risk due to their small sample sizes. As an alternative, Distributionally Robust Optimization (DRO) [30, 4, 5, 14, 41, 27, 19, 20] aims to optimize against the worst-case risk distribution within a predefined uncertainty set. This uncertainty set is centered around the training distribution, and generalization performance can be guaranteed when the test distribution falls within this set. 
However, DRO methods have been found to experience the over-pessimism problem in practice [22, 41] (i.e., low-confidence predictions, poor parameter estimations, as well as poor generalization), and recent studies have sought to address this issue. From the uncertainty set perspective, Blanchet et al. [6], Liu et al. [27, 28] proposed data-driven methods to learn distance metrics from data. However, these approaches remain vulnerable to noisy samples, as demonstrated in Table 2. Recently, Słowik and Bottou [37], Agarwal and Zhang [1] observed that DRO may overly focus on sub-populations with higher noise levels, leading to suboptimal generalization. Consequently, from the risk objective perspective, they suggest incorporating calibration terms to mitigate this issue. Nevertheless, applicable calibration terms either require expert knowledge or are computationally intensive, and few practical algorithms have been proposed. (Short version appears at the 37th Conference on Neural Information Processing Systems (NeurIPS 2023), Workshop on Distribution Shifts (DistShift); arXiv:2311.05054v1 [cs.LG], 8 Nov 2023.) To devise a practical calibration term for DRO, we first aim to identify the root causes of over-pessimism, which we attribute to the excessive focus on noisy samples that frequently exhibit higher prediction errors. For typical DRO methods [30, 38, 14, 28], based on a simple yet insightful linear example, we theoretically demonstrate that the variance of estimated parameters becomes substantially large when noisy samples have higher densities, in line with the empirical findings reported in [41]. Furthermore, we demonstrate that existing outlier-robust regression methods are not directly applicable for mitigating noisy samples in DRO scenarios where both noisy samples and distribution shifts coexist, highlighting the non-trivial nature of this problem.
In this work, inspired by the ideas in [37, 1], we design calibration terms, i.e., total variation and entropy regularization, to prevent DRO from excessively focusing on random noisy samples. In conjunction with the Geometric Wasserstein uncertainty set [28] utilized in our methods, these calibration terms effectively incorporate information from the data manifold, leading to improved regulation of the worst-case distribution in DRO. Specifically, during the optimization, the total variation term penalizes the variation of weighted prediction errors along the data manifold, preventing random noisy samples from gaining excessive densities. The entropy regularization term, also used in [28], acts as a non-linear graph Laplacian operator that enforces the smoothness of the sample weights along the manifold. These calibration terms work together to render the worst-case distribution more reasonable for DRO, leading to our Geometry-Calibrated DRO (GCDRO) approach. We validate the effectiveness of our GCDRO on both simulation and real-world data. Furthermore, from a statistical physics perspective, we demonstrate that our risk objective corresponds to the Helmholtz free energy, comprising three components: interaction energy, potential energy, and entropy. The free energy formulation generalizes typical DRO methods such as KLDRO, \u03c72-DRO [14], MMD-DRO [38] and GDRO [28]. This physical interpretation provides a novel perspective for understanding different DRO methods by drawing parallels between the worst-case distribution and the steady state in statistical physics, offering valuable insights. From the free energy point of view, our GCDRO specifically addresses the interaction energy between samples to mitigate the effects of noisy samples. 
Motivated by the study of the Fokker-Planck equation (FPE, [9, 15]), through gradient flow in the Geometric Wasserstein space, we derive an approximate minimax algorithm with a bounded error ratio e\u2212CTin after Tin inner-loop iterations. Our optimization method supports any quadratic form of interaction energy, potentially paving the way for designing more effective calibration terms for DRO in the future. 2 Preliminaries: Noisy Samples Bring Over-Pessimism in DRO Notations. X \u2208X denotes the covariates, Y \u2208Y denotes the target, f\u03b8(\u00b7) : X \u2192Y is the predictor parameterized by \u03b8 \u2208\u0398. \u02c6 PN denotes the empirical counterpart of distribution P(X, Y ) with N samples, and p = (p1, . . . , pN)T \u2208RN + is the probability vector. [N] = {1, 2, . . . , N} denotes the set of integers from 1 to N. The random variable of data points is denoted by Z = (X, Y ) \u2208Z. The random vector of n dimension is denoted by \u20d7 hn = (h1, . . . , hn)T . GN = (V, E, W) denotes a finite weighted graph with N nodes, where V = [N] is the vertex set, E is the edge set and W = {wij}(i,j)\u2208E is the weight matrix of the graph. And (x)+ = max(x, 0). Distributionally Robust Optimization (DRO) is formulated as: \u03b8\u2217(P) = arg min \u03b8\u2208\u0398 sup Q\u2208P(P ) EQ[\u2113(f\u03b8(X), Y )] (2.1) where \u2113is the loss function (typically mean square error) and P(P) = {Q : Dist(Q, P) \u2264\u03c1} 2 \fFigure 1. Visualizing the Worst-Case Distribution for Different DRO Methods: We show the data manifold and sample weights for each point, where blue points represent the major group, green ones represent the minor group, and red ones are noisy samples. The bars display the total sample weights of different groups, and the original group ratio is major (93.1%), minor (4.9%), (noisy 2%). denotes the \u03c1-radius uncertainty ball around the distribution P. 
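For intuition about the inner supremum in (2.1): under a KL uncertainty set it has a closed form that exponentially tilts the empirical distribution toward high-loss samples (consistent with Proposition 1 below). In this minimal sketch the dual temperature `lam` stands in for the radius \u03c1, and the empirical weights are assumed uniform:

```python
import numpy as np

def kl_worst_case_weights(losses, lam):
    """Worst-case weights q_i proportional to exp(l_i / lam) under a KL
    uncertainty ball around the uniform empirical distribution; lam is
    the dual temperature (smaller lam corresponds to a larger radius)."""
    logits = np.asarray(losses, dtype=float) / lam
    logits -= logits.max()          # numerical stability before exponentiating
    q = np.exp(logits)
    return q / q.sum()

# Nine "clean" samples with moderate loss and one noisy sample with a
# large loss: the single noisy point attracts most of the density.
losses = np.array([1.0] * 9 + [5.0])
q = kl_worst_case_weights(losses, lam=1.0)
```

Even one high-loss point can absorb most of the worst-case mass, which is exactly the failure mode examined in this section.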
Different distance metrics derive different DRO methods, e.g., f-divergence DRO (f-DRO, Namkoong and Duchi [30], Duchi and Namkoong [14]) with the Cressie-Read family of R\u00e9nyi divergence, Wasserstein DRO (WDRO, Sinha et al. [36], Blanchet and Murthy [4], Blanchet et al. [5, 6]), MMD-DRO [38] with maximum mean discrepancy, and Geometric DRO (GDRO, Liu et al. [28]) with Geometric Wasserstein distance. Although DRO methods are designed to resist sub-population shifts, they have been observed to have poor generalization performances [22, 17, 37] in practice, which is referred to as over-pessimism. In this section, we identify one of the root causes of the over-pessimism of DRO: the excessive focus on noisy samples with typically high prediction errors. \u2022 We showcase DRO methods\u2019 excessive focus on noisy samples in practice and reveal their probability densities are linked to high prediction errors in worst-case distributions. \u2022 Through a simple yet insightful regression example, we prove that such a phenomenon leads to high estimation variances and subsequently poor generalization performance. \u2022 We demonstrate that existing outlier-robust regression methods are not directly applicable for mitigating noisy samples in DRO scenarios, emphasizing the non-trivial nature of this problem. Problem Setting Given the underlying clean distribution Pclean = (1\u2212\u03b1)Pmajor+\u03b1Pminor, 0 < \u03b1 < 1 2, the goal of DRO can be viewed as achieving good performance across all possible sub-populations Pminor. Denote the observed contaminated training distribution by Ptrain. 
Based on Huber\u2019s \u03f5-contamination model [23], we formulate Ptrain as: Ptrain = (1 \u2212\u03f5)Pclean + \u03f5 \u02dc Q = (1 \u2212\u03f5)(1 \u2212\u03b1)Pmajor | {z } major sub-population + (1 \u2212\u03f5)\u03b1Pminor | {z } minor sub-population + \u03f5 \u02dc Q |{z} noisy sub-population , (2.2) where \u02dc Q is an arbitrary noisy distribution (typically with larger noise scale), 0 < \u03f5 < 1 2 is the noise level. Note that the minor sub-population could represent any distribution with a proportion of \u03b1 in P. However, we explicitly specify it here to emphasize the distinction between our setting and the traditional Huber\u2019s \u03f5-contaminated setting, as the latter does not take sub-population shifts into account. Empirical Observations. Following a typical regression setting [14, 28], we demonstrate the worst-case distribution of KL-DRO, \u03c72-DRO, and GDRO in Figure 1, where the size of each point is proportional to its density. In this scenario, the underlying distribution P comprises a known major sub-population (95%, blue points) and a minor sub-population (5%, green points). And the noise level \u03f5 in Ptrain is 2%. DRO methods are expected to upweight samples from minor sub-population to learn a model with uniform performances w.r.t. sub-populations. However, from Figure 1, we could observe that KL-DRO, \u03c72-DRO and GDRO excessively focus on noisy samples, resulting in a noise level 10 to 15 times larger than the original. This observation helps to explain their poor performance on this task (detailed results can be found in Table 2). 3 \fTheoretical Analysis. To support our observations, we first analyze the worst distribution of KL-DRO, \u03c72-DRO and GDRO, shedding light on the underlying reasons for this phenomenon. Proposition 1 (Worst-case Distribution). Let \u02c6 Q\u2217 N = (q\u2217 1, q\u2217 2, . . . , q\u2217 N)T \u2208RN + denotes the worst-case distribution, and \u2113(f\u03b8(xi), yi) ( abbr. 
\u2113i) denotes the prediction error of sample i \u2208[N]. For different choices of Dist(\u00b7, \u00b7) in P(P) = {Q : Dist(Q, P) \u2264\u03c1}, we have: \u2022 KL-DRO: q\u2217 i /q\u2217 j \u221dexp(\u2113i \u2212\u2113j); \u2022 GDRO\u2019s final state (gradient flow step T \u2192\u221e): q\u2217 i /q\u2217 j \u221dexp(\u2113i \u2212\u2113j); \u2022 \u03c72-DRO: q\u2217 i /q\u2217 j = (\u2113i \u2212\u03bb)+/(\u2113j \u2212\u03bb)+, and \u03bb \u22650 is the dual parameter independent of i. Proposition 1 demonstrates that for KL-DRO, \u03c72-DRO, and GDRO (large gradient flow step), the relative density between samples is solely determined by their prediction errors, indicating that a larger prediction error results in a higher density. However, in our problem setting, samples from both minor sub-population Pminor and noisy sub-population \u02dc Q exhibit high prediction errors. The primary goal of DRO is to focus on the minor sub-population Pminor, but the presence of noisy samples in \u02dc Q significantly interferes with this objective and hurts model learning. As shown in Figure 1, for KL-DRO, \u03c72-DRO and GDRO, noisy samples attract much density. Intuitively, it is not surprising that an excessive focus on noisy samples can have a detrimental impact. As KL-DRO, \u03c72-DRO, and GDRO can be viewed as optimization within a weighted empirical distribution, we use the following simple example with the weighted least square model to demonstrate how this excessive focus on noisy samples can lead to high estimation variance, ultimately causing over-pessimism. Example 1 (Weighted Least Square): Consider the data generation process as Y = kX + \u03be, where X, Y \u2208R and random noise \u03be satisfies \u03be \u22a5X, E[\u03be] = 0 and E[\u03be2] (abbr. \u03c32) is finite. Assume that the training dataset XD consists of clean samples {x(i) c , y(i) c }i\u2208[Nc] and noisy samples {x(i) o , y(i) o }i\u2208[No] with \u03c32 c < \u03c32 o. 
Consider the weighted least-square model f(X) = \u03b8X. Denote the sample weight of a clean sample (x(i) c , y(i) c ) as w(i) c \u2208R+, i \u2208[Nc], and the sample weight of a noisy sample (x(i) o , y(i) o ) as w(i) o \u2208R+, i \u2208[No] with P i\u2208[Nc] w(i) c + P i\u2208[No] w(i) o = 1. The variance of the estimator \u02c6 \u03b8 is given by: Var[\u02c6 \u03b8|XD] = PNc i=1(w(i) c )2(x(i) c )2\u03c32 c + PNo i=1(w(i) o )2(x(i) o )2\u03c32 o hPNc i=1 w(i) c (x(i) c )2 + PNo i=1 w(i) o (x(i) o )2 i2 , (2.3) where XD = {x(i) c }Nc 1 \u222a{x(i) o }No 1 are the sampled covariates in the dataset. Besides, the minimum variance is achieved if and only if \u22001 \u2264i \u2264Nc, 1 \u2264j \u2264No, w(j) o /w(i) c = \u03c32 c/\u03c32 o < 1. \u22c4From the results, we make the following remarks: \u2022 If noisy samples have higher weights than clean samples (e.g., wo/wc > 1), the variance of the estimated parameter \u02c6 \u03b8 will be larger, suggesting that the learned \u02c6 \u03b8 could be significantly unstable. \u2022 In conjunction with Proposition 1, DRO methods tend to assign high weights to noisy samples, which can lead to unstable parameter estimation. While this example is relatively simple, this phenomenon aligns with the empirical findings in Zhai et al. [41], which demonstrate that DRO methods can be quite unstable when confronted with label noise. Relationship with Conventional Outlier-robust Regression. We would like to explain why conventional outlier-robust regression methods cannot be directly applied to our problem. The main challenge stems from the coexistence of noisy samples and minor sub-populations, both of which typically exhibit high prediction errors, leading to a misleading worst-case distribution in 4 \fDRO. Conventional outlier-robust regression methods [10, 25, 11] primarily focus on mitigating the effects of outliers without considering sub-population shifts. 
For instance, the L2-estimation-error of outlier-robust linear regression is O(\u03f5 log(1/\u03f5)) [10], where \u03f5 represents the noise level in Equation 2.1. However, as analyzed in Proposition 1 and demonstrated in Figure 1, during the optimization of DRO, the noise level \u03f5 significantly increases, rendering even outlier-robust estimation quite inaccurate. Moreover, [25] propose finding a pseudo distribution with minimal prediction errors to avoid outliers (see Algorithm 5.2 in [25]). Nevertheless, this approach might inadvertently exclude minor sub-populations, which should be the focus under sub-population shifts, due to the main challenge: the coexistence of noisy samples and minor sub-populations. Zhai et al. [41] incorporate this idea into DRO. Still, their method requires an implicit assumption that the prediction errors of noisy samples are higher than those of minor sub-populations, which does not always hold in practice. And Bennouna and Van Parys [3] build the uncertainty set via two measures, KL-divergence and Wasserstein distance, leading to a combined approach of KL-DRO and ridge regression. Despite this, as we discussed earlier, DRO tends to increase the noise level in data, making it difficult to fix using ridge regression. Based on the analysis above, we stress the importance of integrating more data-derived information. In pursuit of this, we propose to leverage the unique geometric properties that distinguish noisy samples from minor sub-populations to address this issue. 3 Proposed Method In this work, with a focus on regression, we introduce our Geometry-Calibrated DRO (GCDRO). The fundamental idea is to utilize data geometry to distinguish between random noisy samples and minor sub-populations. It is motivated by the fact that prediction errors for minor sub-populations typically exhibit local smoothness along the data manifold, a property that is not shared by noisy samples. Discrete Geometric Wasserstein Distance. 
We briefly revisit the definition of the discrete geometric Wasserstein distance. Given a weighted finite graph GN = (V, E, W), the probability set P(GN) supported on the vertex set V is defined as P(GN) = {p \u2208RN| PN i=1 pi = 1, pi \u22650, for i \u2208 V }, and its interior is denoted as Po(GN). A velocity field v = (vij)i,j\u2208V \u2208RN\u00d7N on GN is defined on the edge set E satisfying that vij = \u2212vji if (i, j) \u2208E. \u03beij(p) is a function interpolated with the associated nodes\u2019 densities pi, pj. The flux function pv \u2208RN\u00d7N on GN is defined as pv := (vij\u03beij(p))(i,j)\u2208E and its divergence is defined as divGN(pv) := \u2212(P j\u2208V :(i,j)\u2208E \u221awijvij\u03beij(p))N i=1 \u2208 RN. Then for distributions p0, p1 \u2208Po(GN), the discrete geometric Wasserstein distance [9, 28] is defined as: GW2 GN (p0, p1) := inf v \u001a Z 1 0 1 2 X (i,j)\u2208E \u03beij(p(t))v2 ijdt s.t.dp dt + divGN (pv) = 0, p(0) = p0, p(1) = p1 \u001b . (3.1) Equation 3.1 computes the shortest (geodesic) length among all potential plans, integrating the total kinetic energy of the velocity field throughout the transportation process. A key distinction from the Wasserstein distance is that it only permits density to appear at the graph nodes. Formulation Given training dataset Dtr = {(xi, yi)}N i=1 and a finite weighted graph GN = (V, E, W) representing the inherent structure of sample covariates. Denote the empirical marginal distribution as \u02c6 PX, the formulation of GCDRO is: 5 \fTable 1. Free energy implications of some DRO methods. \u2206N denotes the N-dimensional simplex, \u03b7 in marginal DRO is the dual parameter. 
Method Energy Type Specific Formulation Interaction Potential Entropy K V H[q] P KL-DRO % \" \" \u2212\u20d7 \u2113 H[q] \u2206N \u03c72-DRO \" \" % \u03bbI \u2212\u20d7 \u2113 \u2206N MMD-DRO \" \" % Kernel Gram Matrix K \u2212\u20d7 \u2113\u22122\u03bb N K\u22a41 \u2206N Marginal \u03c72-DRO % \" % \u2212(\u20d7 \u2113\u2212\u03b7)+ \u2206N with H\u00f6lder continuity GDRO % \" \" \u2212\u20d7 \u2113 H[q] Geometric Wasserstein Set GCDRO \" \" \" Interaction Matrix K \u2212\u20d7 \u2113 H[q] Geometric Wasserstein Set min \u03b8\u2208\u0398 sup q:GW2 GN ( \u02c6 PX,q)\u2264\u03c1 | {z } Geometric Wasserstein set \u001a RN(\u03b8, q) := N X i=1 qi\u2113(f\u03b8(xi), yi) \u2212\u03b1 2 \u00b7 X (i,j)\u2208E wijqiqj(\u2113i \u2212\u2113j)2 | {z } Calibration Term I \u2212\u03b2 \u00b7 N X i=1 qi log qi | {z } Calibration Term II \u001b , (3.2) where \u03c1 is the pre-defined radius of the uncertainty set, \u2113i is the loss on the i-th sample and wij \u2208W denotes the edge weight between sample i and j. \u03b1 and \u03b2 are hyper-parameters. Illustrations. In our formulation, for any distribution q within the uncertainty set, Calibration term I (P (i,j)\u2208E wijqiqj(\u2113i \u2212\u2113j)2) calculates the graph total variation of prediction errors along the data manifold that is characterized by GN. Intuitively, when selecting the worst-case distribution, this term imposes a penalty on distributions that allocate high densities to random noisy samples, as this allocation significantly amplifies the overall variation in prediction errors. Conversely, this term does not penalize distributions that allocate high densities to minor sub-populations, as their errors are smooth and have a relatively small impact on the total variation along the manifold. This differing phenomenon arises from the distinct geometric properties of random noisy samples and minor sub-populations, as samples from the latter typically cluster together on the data manifold. 
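A direct implementation of the objective in Equation (3.2) makes this geometric distinction concrete. The toy graph, losses, and candidate distributions below are illustrative values, chosen so that concentrating on an isolated noisy point and concentrating on a smooth minor cluster yield the same uncalibrated risk:

```python
import numpy as np

def gcdro_risk(losses, q, W, alpha, beta):
    """R_N(theta, q) from Equation (3.2): weighted risk minus the graph
    total-variation calibration minus beta * sum_i q_i log q_i.
    W is a symmetric weight matrix with zero diagonal; the 0.5 factor
    counts each undirected edge once."""
    losses, q = np.asarray(losses, float), np.asarray(q, float)
    diff2 = (losses[:, None] - losses[None, :]) ** 2
    tv = 0.5 * (W * np.outer(q, q) * diff2).sum()
    return q @ losses - 0.5 * alpha * tv - beta * (q * np.log(q)).sum()

# Path graph 0-1-2-3-4: node 0 is an isolated high-loss (noisy) point,
# nodes 3 and 4 form a high-loss but locally smooth (minor) cluster.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
losses = np.array([3.0, 1.0, 1.0, 3.0, 3.0])
q_noisy = np.array([0.6, 0.1, 0.1, 0.1, 0.1])    # mass on the noisy point
q_minor = np.array([0.1, 0.1, 0.1, 0.35, 0.35])  # mass on the minor cluster
```

With equal weighted risk, the total-variation term penalizes the noisy concentration more, so the calibrated objective prefers the smooth minor cluster as the worst case, which is the intended effect of Calibration term I.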
Further, during the optimization of model parameter \u03b8, this term acts like a variance term, resulting in a quantile-like risk objective, which helps to mitigate the effects of outliers. Calibration term II (PN i=1 qi log qi) represents the negative entropy of distribution q. As discussed in Section 3.2, during optimization, this term transforms into a non-linear graph Laplacian operator that encourages sample weights to be smooth along the manifold, avoiding extreme sample weights in the worst-case distribution. 3.1 Free Energy Implications on Worst-case Distribution We first demonstrate the free energy implications of our risk objective RN(\u03b8, q). Intuitively, the change of sample weights across N samples (the inner maximization problem of RN(\u03b8, q)) can be analogously related to the dynamics of particles in a system, wherein the concentration of densities coincides with the aggregation of particle masses at N distinct locations (in the case of infinite samples, these locations converge to the data manifold). As a result, a deeper understanding of the steady state in a particle system can offer valuable insights into the worst-case distribution for DRO. Building on this analogy, we can dive deeper into the physics of particle interactions. When particles exist within a potential energy field, they are subject to external forces. Simultaneously, there are interactions among the particles themselves, leading to a constant state of motion within 6 \fthe system. In statistical physics, a key point of interest is identifying when a system reaches a steady state. 
In a standard process like the reversible isothermal process, it is established that spontaneous reactions consistently move in the direction of decreasing Helmholtz free energy [18, 34, 16], which consists of interaction energy, potential energy and the negative entropy: E(q) = q\u22a4Kq | {z } Interaction Energy + q\u22a4V |{z} Potential Energy \u2212\u03b2 N X i=1 (\u2212qi log qi) | {z } Temperature\u00d7Entropy = \u2212RN(\u03b8, q). (3.3) By taking V = \u2212\u20d7 \u2113and Kij = \u03b1 2 wij(\u2113i \u2212\u2113j)2 for (i, j) \u2208E, our risk objective is a special case of Helmholtz free energy, where the potential energy of sample i is \u2212\u2113iqi and the interaction energy between sample i and j is \u03b1 2 wij(\u2113i \u2212\u2113j)2qiqj. Specifically, such mutual interactions can manifest as repulsive forces between adjacent particles, thereby preventing the concentration of mass in locations where local prediction errors are significantly high. And this explains from a physical perspective why our calibration term I could mitigate random noisy samples. Additionally, Proposition 2 offers physical interpretations to comprehend the worst-case distribution of various DRO methods. We make some remarks: (1) current DRO methodologies, except MMD-DRO, do not explicitly formulate the interaction term between samples in their design considerations (\u03c72-DRO does not involve interaction between samples), despite the corresponding interaction energy between particles being a common phenomenon in physics; (2) MMD-DRO simply uses kernel gram matrix for interaction and lacks efficient optimization algorithms; (3) by considering this interaction energy, our proposed GCDRO is capable of mitigating the impacts of random noisy samples. Proposition 2 (Free Energy Implications). 
The dual reformulations of some typical DRO methods are equivalent to the free-energy-based minimax problem min\u03b8\u2208\u0398,\u03bb\u22650 maxq\u2208P \u001a \u03bb\u03c1 \u2212E(q, \u03b8, \u03bb) \u001b with different choices of P, \u03c1 and K, V, H[q] in the free energy E. Details are shown in Table 1. Through free energy, we could understand the type of energy or steady state that DRO methods strive to achieve, and design better interaction energy terms in DRO. Moreover, our optimization, as outlined in Section 3.2, could accommodate multiple quadratic forms of interaction energy. 3.2 Optimization Then we derive an approximate minimax optimization for our GCDRO. For the inner maximization problem, we approximately deal with it via the gradient flow of \u2212RN(\u03b8, Q) w.r.t. Q in the geometric Wasserstein space (Po(GN), GWGN ). We show that the error rate is O(e\u2212CTin) after Tin iterations inner loop, which gives a nice approximation. We denote the Continuous gradient flow as q : [0, T] \u2192Po(GN), the probability density of sample i at time t is abbreviated as qi(t), and the Time-discretized gradient flow with time step \u03c4 as \u02c6 q\u03c4. For inner maximization, we utilize the \u03c4-time-discretized gradient flow [39] for \u2212RN(\u03b8, q) in the geometric Wasserstein space (Po(GN), GW2 GN ) as: \u02c6 q\u03c4(t + \u03c4) = argmax q\u2208Po(GN) RN(\u03b8, q) \u22121 2\u03c4 GW2 GN (\u02c6 q\u03c4(t), q). 
(3.4) 7 \fThe gradient of q in Equation 3.4 is given as (when \u03c4 \u21920): dqi dt = X (i,j)\u2208E wij\u03beij \u0012 q, \u2113i \u2212\u2113j + \u03b2(log qj \u2212log qi) + \u03b1 \u0000X h\u2208N(j) (\u2113h \u2212\u2113j)2wjhqh \u2212 X h\u2208N(i) (\u2113h \u2212\u2113i)2wihqh \u0001\u0013 , (3.5) where E is the edge set of GN, wij is the edge weight between node i and j, N(i) denotes the set of neighbors of node i, \u2113i denotes the loss of sample i, and \u03beij(\u00b7, \u00b7) : P(GN) \u00d7 R \u2192R is: \u03beij(q, v) := v \u00b7 \u0000I(v > 0)qj + I(v \u22640)qi \u0001 , v \u2208R, (3.6) which is the upwind interpolation commonly used in statistical physics and guarantees that the probability vector q keeps positive. From the gradient, we could see that the entropy regularization acts as a non-linear graph Laplacian operator to make the sample weights smooth along the manifold. In our algorithm, we fix the steps of the gradient flow to be Tin and prove that the error ratio is e\u2212CTin compared with the ground-truth worst-case risk RN(\u03b8, q\u2217) constrained in an \u03c1(\u03b8, Tin)-radius ball. Proposition 3 (Approximation Error Ratio). Given the model parameter \u03b8, denote the distribution after time Tin as qTin(\u03b8), and the distance to training distribution \u02c6 PX as \u03c1(\u03b8, Tin) := GW2 GN ( \u02c6 PX, qTin(\u03b8)) ( abbr. \u03c1(\u03b8)). Assume RN(\u03b8, q) is convex w.r.t q. Then define the ground-truth worst-case distribution q\u2217(\u03b8) within the \u03c1(\u03b8)-radius ball as: q\u2217(\u03b8) := arg sup q:GW2 GN ( \u02c6 PX,q)\u2264\u03c1(\u03b8) RN(\u03b8, q). (3.7) The upper bound of the error rate of the objective function RN(\u03b8, qTin) satisfies: RN(\u03b8, q\u2217) \u2212RN(\u03b8, qTin) RN(\u03b8, q\u2217) \u2212RN(\u03b8, \u02c6 PX) < e\u2212CTin, C = 2m\u03bbsec(\u02c6 L)\u03bbmin(\u22072RN) 1 (r + 1)2 > 0, (3.8) where \u02c6 L is the Laplacian matrix of GN. 
\u03bbsec, \u03bbmin are the second smallest and smallest eigenvalue, m, r are constants depending on RN, GN, \u03b2. We make some remarks: \u2022 For the assumption that RN is convex w.r.t. q, the Hessian is given by \u22072RN = \u03b2diag(1/q1, ..., 1/qN)+ 2K. Since K is a sparse matrix whose nonzero elements in each row is far smaller than N, it is easily satisfied in empirical settings that the Hessian matrix \u22072R is diagonally dominant and thus positive definite, making the inner maximization concave w.r.t q. \u2022 During the optimization, our algorithm finds an approximate worst-case distribution that is close to the ground-truth one within a \u03c1(\u03b8)-radius uncertainty set. Our robustness guarantee is similar to Sinha et al. [36] (see Equation 12 in Sinha et al. [36]). \u2022 The error ratio is e\u2212CTin, enabling to find a nice approximation efficiently with finite Tin steps. 3.3 Mitigate the Effects of Random Noisy Samples Finally, we prove that our GCDRO method effectively de-emphasizes \u2019noisy samples\u2019 with locally non-smooth prediction errors. Due to the challenge of assessing intermediate states in gradient flow, we focus on its final state (as Tin \u2192\u221e). For the worst-case distribution q\u2217, we denote the density ratio between samples as \u03b3(i, j) := q\u2217 i /q\u2217 j . In sensitivity analysis, when only sample i is perturbed with label noises, we denote the density ratio 8 \fin the new worst-case distribution \u02dc q\u2217as \u03b3noisy(i, j) := \u02dc q\u2217 i / \u02dc q\u2217 j . The sample weight sensitivity \u03be(i, j) is defined as \u03be(i, j) = log \u03b3noisy(i, j) \u2212log \u03b3(i, j), which measures how much density ratio changes under perturbations on one sample. Larger \u03be(i, j) indicates larger sensitivity to noisy samples. Proposition 4. Assume \u2113noisy i \u2212\u2113i \u22652( P k\u2208N(i) q\u2217 kwik\u2113k P k\u2208N(i) q\u2217 kwik \u2212\u2113i) which is locally non-smooth. 
For any \u03b1 > 0 (in Equation 3.2), we have \u03beGCDRO < \u03beGDRO. Furthermore, there exists M > 0 such that for any \u03b1 > M, we have \u03beGCDRO(i, j) < 0 < min{\u03be\u03c72\u2212DRO(i, j), \u03beGDRO(i, j)(= \u03beKL-DRO(i, j))}, indicating that GCDRO is not sensitive to locally non-smooth noisy samples. In practice, we do a grid search over \u03b1 \u2208[0.1, 10] on an independent held-out validation dataset to select the best \u03b1. The complexity of gradient flow scales linearly with sample size. 4 Experiments In this section, we test the empirical performances of our proposed GCDRO on simulation data and real-world regression datasets with natural distributional shifts. As for the baselines, we compare with empirical risk minimization (ERM), WDRO, two typical f-DRO methods, including KL-DRO, \u03c72-DRO [14], GDRO [28], HRDRO [3] and DORO [41], where HRDRO and DORO are designed to mitigate label noises. Table 2. Results on the simulation data. We report the average root mean square errors (RMSE) over 5 runs, excluding the small standard deviations. Weak Label Noise (noise level 0.5%) Strong Label Noise (noise level 5%) Train (major) Train (minor) Test Mean Test Std Parameter Est Error Train (major) Train (minor) Test Mean Test Std Parameter Est Error ERM 0.337 0.850 0.598 0.264 0.423 0.368 0.855 0.599 0.243 0.431 WDRO 0.337 0.851 0.589 0.292 0.424 0.368 0.857 0.600 0.268 0.432 \u03c72-DRO 0.596 0.765 0.680 0.088 0.447 1.072 0.708 0.875 0.193 0.443 KL-DRO 0.379 1.616 0.974 0.660 0.886 0.468 1.683 1.037 0.621 0.913 HRDRO 0.325 1.298 0.794 0.516 0.693 0.330 1.343 0.801 0.522 0.694 DORO 0.347 0.793 0.565 0.230 0.384 0.334 0.919 0.611 0.295 0.449 GDRO 0.692 0.516 0.605 0.094 0.198 0.618 0.752 0.677 0.063 0.421 GCDRO 0.411 0.554 0.482 0.070 0.190 0.494 0.591 0.540 0.044 0.268 4.1 Simulation Data Data Generation. We design simulation settings with both sub-population shifts and noisy samples. 
The input covariates X = [S, U, V ]T \u2208R10 consist of stable covariates S \u2208R5, irrelevant ones U \u2208R4 and the unstable covariate V \u2208R: [S, U] \u223cN(0, 2I9), Y = \u03b8T S S + 0.1S1S2S3 + N(0, 0.5), (4.1) V \u223cLaplace(sign(r) \u00b7 Y, 1/5 ln |r|), (4.2) where \u03b8S \u2208R5 is the coefficients of the true model, |r| > 1 is the adjustment factor for each sub-population, and Laplace(\u00b7, \u00b7) denotes the Laplace distribution. From the data generation, the relationship between S and Y stays invariant under different r, U \u22a5Y , while the relationship between V and Y is controlled by r, which varies across sub-populations. Intuitively, sign(r) controls whether the spurious correlation V -Y is positive or negative. And |r| controls the strength of the spurious correlation: the larger |r| is, the stronger the spurious correlation is. Furthermore, in order to conform to real data which are naturally assembled with label noises [41], we introduce label noises by an \u03f5 proportion of labels as Y \u2032 \u223cN(0, Std(Y )). \u03f5 controls the noise level. 9 \f(a) Bike Dataset (b) House Dataset (c) Temperature Dataset Figure 2. Results (over 5 runs) of real-world datasets with natural shifts. We do not manually add label noises here, since real-world datasets intrinsically contain noises. Settings. In training, we generate 9,500 points with r = 1.9 (majority, strong positive spurious correlation V -Y ) and 500 points with r = \u22121.3 (minority, weak negative spurious correlation V -Y ). In testing, we vary r \u2208{3.0, 2.3, \u22121.9, \u22122.7} to simulate different spurious correlations V -Y . We use linear model with mean square error (MSE) and report the prediction root-mean-square errors (RMSE) for each sub-population, the mean and standard deviation of prediction errors among all testing sub-populations. 
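The generation process of Equations (4.1)-(4.2) can be sketched as follows. Two details are assumptions of this sketch rather than statements from the paper: N(0, 0.5) is read as a standard deviation, and `theta_s` is an arbitrary coefficient choice:

```python
import numpy as np

def generate(n, r, theta_s, eps=0.0, seed=0):
    """Sketch of Equations (4.1)-(4.2) with Huber-style label noise."""
    rng = np.random.default_rng(seed)
    su = rng.normal(0.0, np.sqrt(2.0), size=(n, 9))      # [S, U] ~ N(0, 2 I_9)
    s = su[:, :5]
    y = s @ theta_s + 0.1 * s[:, 0] * s[:, 1] * s[:, 2] + rng.normal(0.0, 0.5, n)
    v = rng.laplace(np.sign(r) * y, 1.0 / (5.0 * np.log(abs(r))))  # spurious V
    if eps > 0:                                          # label-noise contamination
        flip = rng.random(n) < eps
        y[flip] = rng.normal(0.0, y.std(), flip.sum())
    return np.column_stack([su, v]), y                   # X = [S, U, V] in R^10

theta_s = np.ones(5)
x_tr, y_tr = generate(9500, r=1.9, theta_s=theta_s, eps=0.005)   # majority
x_mi, y_mi = generate(500, r=-1.3, theta_s=theta_s, eps=0.005)   # minority
```

The sign of r flips the direction of the spurious V-Y correlation between the majority and minority groups, as described above.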
Also, we report the parameter estimation errors \u2225\u02c6 \u03b8 \u2212\u03b8\u2217\u22252 of all methods (\u03b8\u2217= (\u03b8T S , 0, . . . , 0)T ). The results over 10 runs are shown in Table 2. Analysis. From Table 2, (1) compared with ERM, all typical DRO methods, especially \u03c72-DRO and KL-DRO, are strongly affected by label noises. (2) Although DORO is designed to mitigate outliers, it does not perform well under strong noises (\u03ba = 5%), because it relies on the assumption that noisy points have the largest prediction errors, which does not always hold. (3) Our proposed GCDRO outperforms all baselines under different strengths of label noises, which demonstrates its effectiveness. (4) Compared with GDRO, we could see that our calibration terms in Equation 3.2 is effective to mitigate label noises. From Figure 1, the worst-case distribution of our GCDRO significantly upweighs on the minority (green points) and does not put much density on the noisy data (red points), while the others put much higher weights on the noisy samples and perform poorly. 4.2 Real-world Data We use three real-world regression datasets with natural distributional shifts, including bike-sharing prediction, house price, and temperature prediction. For all these experiments, we use a two-layer MLP model with mean square error (MSE). We use the Adam optimizer [24] with the default learning rate 1e \u22123. And all methods are trained for 5e3 epochs. Datasets. (1) Bike-sharing dataset [12] contains the daily count of rental bikes in the Capital bike-sharing system with the corresponding 11 weather and seasonal covariates. The task is to predict the count of rental bikes of casual users. Note that the count of casual users is likely to be more random and noisy, which is suitable to verify the effectiveness of our method. We split the dataset according to the season for natural shifts. In the training data, the ratio of four seasons\u2019 data is 9 : 7 : 5 : 3. 
We test on the rest of the data and report the prediction error of each season. (2) House Price dataset1 contains house sales prices from King County, USA. The task is to predict the transaction price of the house via 17 predictive covariates such as the number of bedrooms, 1https://www.kaggle.com/c/house-prices-advanced-regressiontechniques/data 10 \fsquare footage of the house, etc. We divide the data into 5 sub-populations according to the built year of each house with each sub-population covering a span of 25 years. In training, we use data from the first group (built year < 1920) and report the prediction error for each testing group. (3) Temperature dataset [12] is largely composed of the LDAPS model\u2019s next day\u2019s forecast data, in-situ maximum and minimum temperatures of present-day, and geographic auxiliary variables in South Korea from 2013 to 2017. The task is to predict the next-day\u2019s maximum air temperatures based on the 22 covariates. We divide the data into 5 groups corresponding with 5 years. In the training data, the ratio of five years\u2019 data is 9 : 7 : 5 : 3 : 1. We test on the rest of the data and report the prediction error of each year. More details could be found in Appendix. Analysis. (1) From the results in Figure 2a, we could see that the performances of ERM drop a lot under distributional shifts, and DRO methods have better performance as well as robustness. (2) Our proposed GCDRO outperforms all baselines under strong shifts, with the most stable performances under natural distributional shifts. (3) As for the kNN graph\u2019s fitting accuracy of the data manifold, we visualize the learned manifold in Appendix and we could see that the learned kNN graph fits the data manifold well. Besides, we show in Appendix that the performances of our GCDRO are relatively stable across different choices of k. 
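A plain symmetric kNN graph over the covariates is one simple estimate of GN; the value of k and the Euclidean metric below are illustrative choices, not necessarily the paper's exact construction:

```python
import numpy as np

def knn_graph(x, k=5):
    """Symmetric 0/1 kNN adjacency matrix as an estimate of G_N."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # no self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per node
    n = len(x)
    w = np.zeros((n, n))
    w[np.repeat(np.arange(n), k), nbrs.ravel()] = 1.0
    return np.maximum(w, w.T)                    # keep an edge if either end chose it

rng = np.random.default_rng(0)
W = knn_graph(rng.normal(size=(50, 3)), k=5)
```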
Also, our GCDRO only needs the input graph GN to represent the data structure and any manifold learning or graph learning methods could be plugged in to give a better estimation of GN. 5 Future Directions Our work deals with the over-pessimism in DRO via geometric calibration terms and provides free energy implications. The high-level idea could inspire future research on (1) relating free energy with DRO; (2) designing more reasonable calibration terms in DRO; (3) incorporating data geometry in general risk minimization algorithms. We hope this work could help to make DRO methods more effective in practice. And future improvements may be extend this method to classification scenarios with more complicated data like images and languages. 11" + }, + { + "url": "http://arxiv.org/abs/2307.05284v1", + "title": "On the Need for a Language Describing Distribution Shifts: Illustrations on Tabular Datasets", + "abstract": "Different distribution shifts require different algorithmic and operational\ninterventions. Methodological research must be grounded by the specific shifts\nthey address. Although nascent benchmarks provide a promising empirical\nfoundation, they implicitly focus on covariate shifts, and the validity of\nempirical findings depends on the type of shift, e.g., previous observations on\nalgorithmic performance can fail to be valid when the $Y|X$ distribution\nchanges. We conduct a thorough investigation of natural shifts in 5 tabular\ndatasets over 86,000 model configurations, and find that $Y|X$-shifts are most\nprevalent. To encourage researchers to develop a refined language for\ndistribution shifts, we build WhyShift, an empirical testbed of curated\nreal-world shifts where we characterize the type of shift we benchmark\nperformance over. Since $Y|X$-shifts are prevalent in tabular settings, we\nidentify covariate regions that suffer the biggest $Y|X$-shifts and discuss\nimplications for algorithmic and data-based interventions. 
Our testbed\nhighlights the importance of future research that builds an understanding of\nhow distributions differ.", + "authors": "Jiashuo Liu, Tianyu Wang, Peng Cui, Hongseok Namkoong", + "published": "2023-07-11", + "updated": "2023-07-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "main_content": "Introduction The performance of predictive models has been observed to degrade under distribution shifts in a wide range of applications, such as healthcare [8, 68, 56, 67], economics [28, 18], education [5], vision [55, 47, 64, 70], and language [46, 6]. Distribution shifts vary in type, typically defined as either a change in the marginal distribution of the covariates (X-shifts), or changes in the conditional relationship between the outcome and covariate (Y |X-shifts). Real-world scenarios comprise of both types of shifts. In computer vision [46, 37, 60, 30, 72], Y |X-shifts are less likely as Y is constructed from human labels given an input X. Due to the prevalence of X-shifts, the implicit goal of many researchers is to develop a single robust model that can generalize effectively across multiple domains, akin to humans. For tabular data, Y |X-shifts may arise because of missing variables and hidden confounders. For example, the prevalence of diseases among patients may be affected by covariates that are not recorded in medical datasets but vary among individuals, such as lifestyle factors (e.g., diet, exercise, smoking status) and socioeconomic status [31, 74, 67]. Under Y |X-shifts, there may be a fundamental trade-off between learning algorithms: to perform well on a target distribution, a model may have to necessarily perform worse on others. Algorithmically, typical methods for addressing Y |X-shifts include distributionally robust optimization (DRO) [11, 63, 21, 59, 20] and causal learning methods [54, 7, 62, 36]. 
Operationally, the modeler can identify and collect an unobserved confounder $C$ such that $Y|X, C$ remains invariant across domains, or resort to overhauling the entire model development pipeline to collect more samples from the target. (Footnote 1: More information on the data, code, and Python packages of WhyShift is available at https://github.com/namkoong-lab/whyshift. *Equal contribution. arXiv:2307.05284v1 [cs.LG] 11 Jul 2023.)
Figure 1. Relative regret (1.1) in typical benchmarks [19, 23] (left 5 bars: Adult, BRFSS, COMPAS, ACS Pub., and ACS Inc., each over sex & race subgroups) and the seven settings designed in our benchmark (right 7 bars: ACS Inc. (Young→Old), ACS Pub. (2010→2017), ACS Pub. (NE→LA), ACS Mob. (MS→HI), Taxi (NYC→BOG), US Acci. (CA→OR), and ACS Inc. (CA→PR)). We use XGBoost as $\mathcal{F}$ here for illustration.
Although nascent benchmarks provide a promising foundation [55, 37, 18, 60], the distribution shift types in these datasets are poorly understood, underscoring the ambiguity in the external validity of the empirical findings based on them. For example, we find that previous (unqualified) empirical findings only hold over mild X-shifts, but fail to hold over Y|X-shifts (Figure 2 to come). As the validity of empirical findings implicitly depends on the type of shift, it is essential to understand the patterns of real-world distribution shifts. Any methodological development must be grounded by the specific shifts it does and does not address. In this work, we focus on tabular datasets to illustrate how a deeper understanding of the underlying shift is necessary for empirical rigor. As a motivating example, we consider tabular datasets that have been previously used to benchmark model performance over demographic subgroups: Adult, BRFSS, COMPAS, ACS Public Coverage, and ACS Income [3, 73, 23].
To understand the type of distribution shift introduced in these datasets, we take the largest demographic subgroup and the smallest subgroup (e.g., for Adult, white men as $P$ and non-white women as $Q$), and study the optimality gap of the model $f_P$ learned on $P$ as measured on the target data $Q$. Formally, define the relative regret over the target distribution $Q$: $\frac{E_Q[\ell(Y, f_P(X))]}{\min_{f \in \mathcal{F}} E_Q[\ell(Y, f(X))]} - 1$, where $f_P \in \arg\min_{f \in \mathcal{F}} E_P[\ell(Y, f(X))]$, (1.1) and where we use the 0-1 loss for $\ell(\cdot, \cdot)$ in Figure 1. For the widely-used benchmarks (left 5 bars), we observe that the relative regret is small, suggesting that the Y|X distribution is largely transferable across those demographic groups and that X-shifts are mild. Towards building an understanding of how empirical findings depend on the type of distribution shift, we present a testbed representing a diverse range of shifts. Our comprehensive benchmark is based on 5 real-world tabular datasets constructed from the US Census (as proposed by Ding et al. [18]) and traffic measurements [48, 49, 1, 2]. We carefully select 7 specific source-target pairs and characterize the extent of Y|X-shifts in each setting. As we observe in Figure 1, our benchmarks cover a wide range of Y|X-shifts and provide the following initial observations. Y|X-shifts are prevalent in tabular settings Through our benchmark, we highlight that Y|X-shifts constitute a substantial proportion of real-world distribution shifts. Out of 169 source-target pairs with significant performance degradation (> 8 percentage points of accuracy drop), we find 80% of them are primarily attributed to Y|X-shifts. By explicitly modeling Y|X-shifts prevalent in applications, as shown in Figure 1, our benchmark enables a nuanced understanding of how empirical findings depend on the type of shift.
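Relative regret (1.1) can be estimated directly from samples; a hedged sketch under 0-1 loss, using logistic regression as a stand-in for the model class $\mathcal{F}$ and approximating the best-in-class target model by refitting the same class on target data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def relative_regret(model_P, model_class, Xq, yq):
    """Relative regret (1.1) under 0-1 loss: excess target error of the
    source-trained model over an (approximate) best-in-class model,
    obtained here by refitting the same class on the target data."""
    err_P = np.mean(model_P.predict(Xq) != yq)
    err_Q = np.mean(model_class().fit(Xq, yq).predict(Xq) != yq)
    return err_P / max(err_Q, 1e-12) - 1.0

# Toy illustration with a synthetic Y|X-shift: the labeling rule changes
# between source and target, so the source model incurs large regret.
rng = np.random.default_rng(0)
Xp = rng.normal(size=(2000, 2)); yp = (Xp[:, 0] > 0).astype(int)   # source rule
Xq = rng.normal(size=(2000, 2)); yq = (Xq[:, 1] > 0).astype(int)   # shifted rule
f_P = LogisticRegression().fit(Xp, yp)
print(relative_regret(f_P, LogisticRegression, Xq, yq))
```

When source and target share the same labeling rule, the same computation returns a regret near zero, mirroring the small left bars in Figure 1.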
In particular, we observe that Y|X-shifts introduce considerable performance variation on the target distribution, leading to different relationships between in- and out-of-distribution performances across settings and datasets. This is in stark contrast to the recently observed accuracy-on-the-line phenomenon [47], where the in- and out-of-distribution performances have been posited to exhibit a strong linear relationship. In Figure 2, we showcase how the accuracy-on-the-line trend fails to hold when Y|X-shifts are strong. We need tools that build a deeper understanding of distribution shifts There is a salient need for methodological research that builds a deep understanding of distributional differences. This is particularly important for Y|X-shifts since in the worst case, the source data may not be informative for modeling the Y|X relationship in the target, necessitating a full overhaul of the data collection and validation procedure. To inform algorithmic or operational interventions, we must understand why the distribution changed. Our empirical study in Section 3 underscores how identifying the causes behind Y|X-shifts can help shape the solution space beyond the typical algorithmic interventions. We introduce a simple method that identifies covariate regions that have worse performance due to Y|X-shifts. In our benchmark, we illustrate how our methodology suggests effective operational interventions: 1) collecting some target data over a particular covariate region, and 2) collecting specific features $C$ such that the $Y|X, C$ distribution is more stable across source and target. A comprehensive benchmark with specified shift patterns We construct WhyShift, a benchmark for evaluating complex distribution shifts, encompassing six datasets and curated source-target transfer pairs. We call our benchmark WhyShift since addressing Y|X-shifts requires an understanding of why the distribution changed.
We evaluate 22 methods on 7 specified distribution shifts with over 86,000 model configurations, comparing a broad range of algorithms including tree ensembles, DRO, imbalance, and fairness methods. We highlight our key findings.
• In tabular settings, model performance rankings change over different shift patterns.
• Tree ensemble methods are competitive, but still suffer from significant performance degradation.
• DRO methods are sensitive to configurations and exhibit significant performance variations.
• Imbalance and fairness methods show performance similar to the base learner (XGBoost).
• A small validation set from the target distribution goes a long way, and more generally, non-algorithmic interventions warrant greater consideration.
2 Distribution Shifts in Tabular Settings To illustrate how complex distribution shift patterns arise in tabular data, we compare 22 algorithms including tree ensemble methods, robust learning, imbalance, and fairness methods. On 5 real-world tabular datasets (ACS Income, ACS Public Coverage (ACS Pub.Cov), ACS Mobility [18], US Accident [48, 49], and Taxi [1, 2]), we consider the natural spatial shifts between states/cities (e.g., California to Puerto Rico). For the ACS Pub.Cov dataset, we also consider temporal shifts, e.g., from 2010 to 2017. Since virtually all natural distribution shifts we consider are largely induced by Y|X-shifts, we construct a synthetic subgroup shift from younger people to older people in order to simulate X-shifts. Deferring a detailed summary to Section 4.1, we focus on introducing representative phenomena in this section.
Figure 2. Target vs. source accuracies for 22 algorithms and datasets in our benchmark. A linear fit (green line) and its corresponding $R^2$ value is reported on the top left of each panel, and each blue point represents one hyperparameter configuration: (a) ACS Income (CA→PR): $R^2 = 0.191$, accuracy drop 10.3, Y|X: 89%; (b) ACS Income (CA→SD): $R^2 = 0.501$, drop 9.9, Y|X: 65%; (c) ACS Mobility (MS→HI): $R^2 = 0.371$, drop 8.3, Y|X: 82%; (d) Taxi (NYC→BOG): $R^2 = 0.511$, drop 16.1, Y|X: 100%; (e) ACS Pub.Cov (NE→LA): $R^2 = 0.379$, drop 16.5, Y|X: 62%; (f) US Accident (CA→OR): $R^2 = 0.038$, drop 21.6, Y|X: 60%; (g) ACS Pub.Cov (2010→2017): $R^2 = 0.749$, drop 6.4, Y|X: 13%; (h) ACS Income (Young→Old): $R^2 = 0.841$, drop 11.2, Y|X: 0%. Panels (a)-(b) are two examples of the ACS Income dataset with California (CA) as the source state, and Puerto Rico (PR) and South Dakota (SD) as targets; (c)-(g) are five examples of the ACS Mobility, Taxi, ACS Pub.Cov, and US Accident datasets; (h) shows simulated covariate shifts on the sub-sampled ACS Income dataset. In Figure 2, we present the source (in-distribution) and target (out-of-distribution) performances of 22 algorithms, each with 200 hyperparameter configurations. To understand shift patterns, we utilize the recently proposed DIstribution Shift DEcomposition (DISDE) framework [13], which decomposes the performance degradation into components attributed to Y|X- and X-shifts. Using the best XGBoost configuration as the baseline model for each source-target pair, we present the total performance degradation and the proportion attributed to Y|X-shifts.
Distribution shifts are predominantly Y|X-shifts We find performance degradation under natural shifts is overwhelmingly attributed to Y|X-shifts, as illustrated in the curated list in Figure 2. More generally, out of the 169 source-target pairs whose performance degradation is larger than 8 percentage points, 87.2% of them have over 50% of the performance degradation attributed to Y|X-shifts (70.2% of them have over 60% of the gap attributed to Y|X-shifts). We conjecture that Y|X-shifts are prevalent in tabular data due to missing features. For example, in the context of income prediction, individual outcomes may change due to unobserved economic and political factors whose distribution changes over geographical locations [18]. In contrast, in vision and language tasks, the input (e.g., pixels and words) often encapsulates most of the necessary information for predicting the outcome, making strong Y|X-shifts less likely unless the labeling noise is severe. Consequently, compared to domain generalization tasks in vision and language, tabular data exhibits more pronounced real Y|X-shifts. Our findings highlight the importance of understanding the cause of the distribution shift.
Figure 3: Performances of typical algorithms (LR, SVM, MLP, $\chi^2$-DRO, XGBoost, SUBG, and Inprocess-DP) on the 7 settings in our benchmark, grouped by whether the X-shift or the Y|X-shift dominates.
In Figure 2a-b, where we study models trained on California data, we see a significant decrease in the correlation as Y|X-shifts become stronger (SD to PR). Similarly, we see in Figures 2(c)-(f) that the relationship between the two performances exhibits significant fluctuations across different source-target pairs. In Figure 3, we observe that the performance rankings of algorithms vary substantially across different Y|X-shifts. Our finding highlights the inherent complexity associated with real distribution shifts in tabular datasets, which stands in sharp contrast to the "accuracy-on-the-line" phenomenon [47]. The varied shift patterns in tabular data highlight how empirical observations must be qualified over the range of shifts they remain valid over. This is particularly important for Y|X-shifts, which introduce larger variations in the relationship between source and target performance. Source and target performances are correlated when X-shifts dominate Across all natural shifts we study, we find X-shifts are only prominent in temporal shifts (ACS Time dataset; Figure 2g). To better investigate the role of X-shifts, we subsample the data to artificially induce strong covariate shifts over an individual's age. Specifically, we focus on individuals from California and form two groups according to whether their age is ≥ 25. The source data oversamples the low age group, where 80% is drawn from the age ≤ 25 group; the proportions are reversed in the target data. On this synthetic shift, the DISDE [13] method attributes the bulk of the performance degradation to X-shifts in Figure 2h. Our finding confirms the intuition that unobserved economic factors remain relatively consistent for individuals from the same state (CA). In this synthetic example with X-shifts, we observe a relatively strong correlation between source and target performance.
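The linear fits and $R^2$ values reported in Figure 2 can be reproduced from per-configuration (source accuracy, target accuracy) pairs; a small numpy sketch (the accuracy values below are made up for illustration):

```python
import numpy as np

def accuracy_on_the_line_r2(source_acc, target_acc):
    """R^2 of a degree-1 fit of target vs. source accuracy across model
    configurations; a low R^2 signals that the accuracy-on-the-line
    trend fails, as observed under strong Y|X-shifts."""
    src = np.asarray(source_acc, dtype=float)
    tgt = np.asarray(target_acc, dtype=float)
    slope, intercept = np.polyfit(src, tgt, deg=1)
    pred = slope * src + intercept
    ss_res = np.sum((tgt - pred) ** 2)
    ss_tot = np.sum((tgt - tgt.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# A perfectly linear relation gives R^2 = 1; scrambled target accuracies
# (as under a strong Y|X-shift) give R^2 near 0.
print(accuracy_on_the_line_r2([60, 70, 80, 90], [55, 65, 75, 85]))
print(accuracy_on_the_line_r2([60, 70, 80, 90], [70, 55, 80, 60]))
```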
Moreover, the large performance degradation on these datasets suggests that existing robust learning methods are still severely affected by covariate shifts, indicating the need for future research that addresses covariate shifts in tabular data. 3 Case Study: Understanding Distribution Shifts Facilitates Interventions Typical algorithmic approaches to distribution shift optimize performance over a postulated set of distribution shifts. Causal learning assumes the underlying causal structure can be learned to withstand distribution shift [54, 62, 61, 57], while DRO methods explicitly optimize worst-case performance over a set of distributions [11, 39, 21, 20]. Despite progress in algorithm design, there are few efforts that examine the patterns of real-world distribution shifts. It remains unclear whether the data assumptions made by algorithms hold in practice, and this mismatch often leads to poor empirical performance [32, 68, 56, 14, 36]. Complementing the active literature on algorithmic development, we present an empirical study that underscores the practical significance of tools that provide a qualitative understanding of the shift at hand. In light of the prevalence of Y|X-shifts in tabular data, we introduce a simple yet effective approach for identifying covariate regions that suffer strong Y|X-shifts.
Figure 4. Case study illustrations. (a)-(b) Decomposition of performance degradation for the XGBoost classifier from CA to PR: (a) is for the original setting (accuracy 81.7 on CA vs. 71.4 on PR, drop 10.3) and (b) corresponds to the results after integrating the "ENG" feature (81.8 vs. 79.7, drop 2.1). (c)-(d) Demonstration of Algorithm 1: an interpretable version of the region with strong Y|X-shifts for the XGBoost and MLP models, respectively (e.g., for XGBoost, rules such as work hour in [34.5, 49.5], education ≥ college, occupation in {MGR, BUS, FIN, LGL, EDU, ENT}, sex: female, age ≥ 31). (e) Test accuracies of five typical base methods trained on the source, after adding 250 randomly selected target observations, and after adding 250 observations from the identified risk region. (f)-(g) Performances of all algorithms prior to and following the addition of the "ENG" feature; (f) corresponds to CA to PR, and (g) to CA to SD.
Algorithm 1: Identify Regions with Strong Y|X-Shifts. Input: source samples $(X_i, Y_i) \overset{i.i.d.}{\sim} P$ of size $n_P$, target samples $(X_j, Y_j) \overset{i.i.d.}{\sim} Q$ of size $n_Q$, and a model discrepancy threshold $b$. Step 1: estimate $\hat{\pi}(x) = P(\tilde{X} \text{ from } Q_X \mid \tilde{X} = x)$ by training a classifier on source and target samples. Step 2: calculate the density ratios $\hat{s}_X/p_X$ and $\hat{s}_X/q_X$ according to Equation (3.3). Step 3: fit prediction models $f_P$ and $f_Q$ according to Equations (3.4) and (3.5). Step 4: using weighted samples from $S_X$, fit a shallow decision tree $h(x)$ to predict $|f_P(x) - f_Q(x)|$. Output: the region $R = \{x \in \mathcal{X} : h(x) \geq b\}$.
We demonstrate our approach on the income prediction task (ACS Income), and show that it can guide operational interventions for addressing distribution shifts. Our case study is not meant to be a rigorous scientific analysis, but rather a (heuristic) vignette illustrating the need for future research on methodologies that can generate qualitative insights on distributional differences. 3.1 Identifying Regions with Strong Y|X-shifts Here we propose a simple yet effective method for identifying covariate regions with strong Y|X-shifts. Despite its simplicity, we demonstrate in the following subsections that our method can inspire operational and modeling interventions. Consider a model $f : \mathcal{X} \to \mathcal{Y}$ that predicts outcome $Y \in \mathcal{Y}$ from covariates $X \in \mathcal{X}$ with the associated loss function $\ell(f(X), Y)$. Our goal is to identify a region $S \subseteq \mathcal{X}$ where $P(Y|X)$ differs a lot from $Q(Y|X)$. Since $P(Y|X)$ and $Q(Y|X)$ are undefined outside of the support of $P(X)$ and $Q(X)$ respectively, the comparison can only be made on a subset of the common support. To operationalize this notion of common support, we construct a shared distribution $S_X$ over $\mathcal{X}$ whose support is contained in that of $P_X$ and $Q_X$ (following [13]). Let $p_X, q_X$ be the densities of $X$ under $P, Q$. We define the shared density $s_X$ as $s_X(x) \propto p_X(x) q_X(x) / (p_X(x) + q_X(x))$. (3.1) Since we do not have access to samples from the shared distribution $S_X$, we reweight samples from $P_X$ and $Q_X$ using the likelihood ratios $s_X(x)/p_X(x) \propto q_X(x)/(p_X(x) + q_X(x))$ and $s_X(x)/q_X(x) \propto p_X(x)/(p_X(x) + q_X(x))$. Noting that the ratio can be modeled as the probability that an input $x$ is from $P_X$ vs. $Q_X$, we train a binary "domain" classifier to estimate the ratios. The "domain" classifier can be any black-box method, and we use XGBoost throughout.
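With a balanced domain classifier ($n_P = n_Q$), the likelihood ratios above reduce to $\hat{\pi}(x)$ for source samples and $1 - \hat{\pi}(x)$ for target samples; a minimal numpy sketch of that reweighting (Equation (3.3) later adds the $\hat{\alpha}$ correction for unequal sample sizes):

```python
import numpy as np

def shared_weights(pi_hat, is_target):
    """Reweight source/target samples onto the shared distribution S_X of
    Eq. (3.1).  pi_hat(x) = P(x came from Q | x) from a balanced domain
    classifier, so s_X/p_X is proportional to pi_hat and s_X/q_X to
    1 - pi_hat.  Assumes n_P = n_Q (a sketch, not the paper's code)."""
    pi_hat = np.asarray(pi_hat, dtype=float)
    is_target = np.asarray(is_target, dtype=bool)
    w = np.where(is_target, 1.0 - pi_hat, pi_hat)
    return w / w.sum()  # normalize to a probability weighting

# Points deep inside one domain's support (pi_hat near 0 or 1) receive
# little weight; overlap regions (pi_hat near 0.5) count fully.
print(shared_weights([0.5, 0.9, 0.1], [False, False, True]))
```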
Empirically, given that the numbers of samples from $P_X$ and $Q_X$ may differ (i.e., $n_P \neq n_Q$), we have: $\hat{\alpha} = \frac{n_Q}{n_P + n_Q}$ and $\hat{\pi}(x) = P(\tilde{X} \text{ from } Q_X \mid \tilde{X} = x)$, (3.2) and $\frac{\hat{s}_X}{p_X}(x) \propto \frac{\hat{\pi}(x)}{(1 - \hat{\alpha})\hat{\pi}(x) + \hat{\alpha}(1 - \hat{\pi}(x))}$ and $\frac{\hat{s}_X}{q_X}(x) \propto \frac{1 - \hat{\pi}(x)}{(1 - \hat{\alpha})\hat{\pi}(x) + \hat{\alpha}(1 - \hat{\pi}(x))}$. (3.3) We estimate the best prediction model under $P$ and $Q$ over the shared distribution $S_X$ (using XGBoost as the model class $\mathcal{F}$): $f_P := \arg\min_{f \in \mathcal{F}} E_{S_X}\big[E_P[\ell(f(X), Y) \mid X]\big] = \arg\min_{f \in \mathcal{F}} E_P\big[\ell(f(X), Y) \frac{dS_X}{dP_X}(X)\big]$, (3.4) and $f_Q := \arg\min_{f \in \mathcal{F}} E_{S_X}\big[E_Q[\ell(f(X), Y) \mid X]\big] = \arg\min_{f \in \mathcal{F}} E_Q\big[\ell(f(X), Y) \frac{dS_X}{dQ_X}(X)\big]$. (3.5) Then, for any threshold $b \in [0, 1]$, $\{x \in \mathcal{X} : |f_P(x) - f_Q(x)| \geq b\}$ suggests a region that may suffer model performance degradation due to Y|X-shifts. To allow simple interpretation, we use a shallow decision tree $h(x)$ to approximate $|f_P(x) - f_Q(x)|$ on the shared distribution $S_X$, and consider $\{x \in \mathcal{X} : h(x) \geq b\}$ given by the leaf nodes of the tree. The pseudo-code is shown in Algorithm 1; in Appendix B.2, we show that the node splitting criterion in standard decision tree training corresponds with our goal of finding regions with the largest discrepancy. 3.2 Model Interventions Using Algorithm 1, we now demonstrate how a better understanding of distribution shifts can facilitate the design of interventions. We focus on the ACS Income dataset, where the goal is to predict whether an individual's income exceeds 50k ($Y$) based on their tabular census data ($X$). We train an income classifier on 20,000 samples from California (CA, source), and deploy the classifier in Puerto Rico and South Dakota (PR & SD, targets), where we get 4,000 samples from PR and SD after deployment.
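The full pipeline of Algorithm 1 can be sketched with scikit-learn stand-ins (logistic regression instead of the paper's XGBoost models, a CART regressor for $h$; the threshold $b$ and the toy data are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeRegressor

def find_shift_region(Xp, yp, Xq, yq, b=0.25, seed=0):
    """Sketch of Algorithm 1: returns a depth-2 tree h(x) approximating
    |f_P(x) - f_Q(x)| on S_X, plus the flagged mask {h(x) >= b}."""
    nP, nQ = len(Xp), len(Xq)
    # Step 1: domain classifier gives pi_hat(x) = P(x came from Q | x).
    X = np.vstack([Xp, Xq])
    d = np.r_[np.zeros(nP), np.ones(nQ)]
    pi = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
    # Step 2: density ratios s_X/p_X and s_X/q_X (Eq. 3.3).
    alpha = nQ / (nP + nQ)
    denom = (1 - alpha) * pi + alpha * (1 - pi)
    w_p = (pi / denom)[:nP]
    w_q = ((1 - pi) / denom)[nP:]
    # Step 3: best predictors under P and Q, reweighted onto S_X (Eqs. 3.4-3.5).
    fP = LogisticRegression().fit(Xp, yp, sample_weight=w_p)
    fQ = LogisticRegression().fit(Xq, yq, sample_weight=w_q)
    # Step 4: shallow tree approximating the model discrepancy on S_X.
    gap = np.abs(fP.predict_proba(X)[:, 1] - fQ.predict_proba(X)[:, 1])
    h = DecisionTreeRegressor(max_depth=2, random_state=seed).fit(
        X, gap, sample_weight=np.r_[w_p, w_q])
    return h, (h.predict(X) >= b)
```

The shallow tree keeps the flagged region interpretable as a handful of threshold rules, as in Figure 4c-d.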
Given the considerable disparities in the economy, job markets, and cost of living between CA and PR/SD, we observe substantial performance degradation due to distribution shifts. In Figure 4a, we first decompose the performance degradation from CA to PR to understand the shift and find that Y|X-shifts are the predominant factor. We dive deeper into the significant Y|X-shifts and identify regions with strong Y|X-shifts from CA to PR for the XGBoost and MLP classifiers. From the regions shown in Figure 4c and Figure 4d, we find that college-educated individuals in business and educational roles (such as management, business, and educational work) exhibit large Y|X differences. To illustrate how our analysis can inspire subsequent operational interventions to enhance performance on the target distribution, we study two operational interventions. Collect specific data from the target To improve target performance, the most natural operational intervention is to collect additional data from the target distribution. While a rich body of work on domain adaptation [51, 16, 22, 66, 65] studies how to effectively utilize data from the target distribution to improve performance, there is little work that discusses how to efficiently collect supervised data from the target distribution to maximize out-of-distribution generalization. To highlight the need for future research in this space, we use the interpretable region identified by Algorithm 1, as shown in Figure 4c, to simulate a concerted data collection effort.
Table 1: Overview of datasets and 7 selected settings (ID; dataset; type; #samples; #features; outcome; #domains; selected setting; shift pattern):
1. ACS Income; natural; 1,599,229 samples; 9 features; outcome: income ≥ 50k; 51 domains; California → Puerto Rico; Y|X ≫ X.
2. ACS Mobility; natural; 620,937 samples; 21 features; outcome: residential address; 51 domains; Mississippi → Hawaii; Y|X ≫ X.
3. Taxi; natural; 1,506,769 samples; 7 features; outcome: duration time ≥ 30 min; 4 domains; New York City → Bogotá; Y|X ≫ X.
4. ACS Pub.Cov; natural; 1,127,446 samples; 18 features; outcome: public insurance coverage; 51 domains; Nebraska → Louisiana; Y|X > X.
5. US Accident; natural; 297,132 samples; 47 features; outcome: severity of accident; 14 domains; California → Oregon; Y|X > X.
6. ACS Pub.Cov; natural; 859,632 samples; 18 features; outcome: public insurance coverage; 4 domains; 2010 (NY) → 2017 (NY); Y|X < X.
7. ACS Income; synthetic; 195,665 samples; 9 features; outcome: income ≥ 50k; 2 domains; Younger → Older; Y|X ≪ X.
Since indiscriminately collecting data from the target distribution can be resource-intensive, we concentrate sampling efforts on the subpopulation that may suffer from Y|X-shifts and selectively gather data on it. For five base methods (logistic regression, MLP, random forest, LightGBM, and XGBoost), we randomly sample 250 points from the whole target distribution and from the identified region suffering prominent Y|X-shifts, respectively. We report the test accuracies in Figure 4e and observe that incorporating data from this region is more effective in enhancing OOD generalization. While preliminary, our results demonstrate the potential robustness benefits of efficiently allocating resources toward concerted data collection. Future methodological research in this direction may be fruitful; potential connections may exist with active learning algorithms [69, 43, 27]. Add more relevant features We now illustrate the potential benefits of generating qualitative insights on the distribution shift at hand. Our analysis in Figure 4c suggests that educated individuals in financial, educational, and legal professions tend to experience large Y|X-shifts from CA to PR. These roles typically require communication skills, and language barriers could potentially affect their incomes. In California (CA), English is the primary language, while in Puerto Rico (PR), despite both English and Spanish being recognized as official languages, Spanish is predominantly spoken.
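The targeted data-collection experiment above (250 random target points vs. 250 points from the identified risk region, Figure 4e) can be simulated; a hedged sketch with logistic regression standing in for the five base learners and a user-supplied boolean region mask:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment_and_score(Xp, yp, Xq, yq, region_mask, n_extra=250, seed=0):
    """Retrain after adding n_extra labeled target points, drawn either
    uniformly at random or from an identified risk region (boolean mask
    over target rows), and report target accuracy for both choices."""
    rng = np.random.default_rng(seed)

    def retrain(extra_idx):
        X = np.vstack([Xp, Xq[extra_idx]])
        y = np.concatenate([yp, yq[extra_idx]])
        return LogisticRegression().fit(X, y).score(Xq, yq)

    rand_idx = rng.choice(len(Xq), size=n_extra, replace=False)
    region_idx = rng.choice(np.flatnonzero(region_mask), size=n_extra, replace=False)
    return retrain(rand_idx), retrain(region_idx)
```

On data whose Y|X relation changes only inside the masked region, concentrating the labeling budget there mirrors the intervention studied in the paper.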
Consequently, for a model trained on CA data and tested on PR data, incorporating a new feature that denotes English language proficiency (hereafter denoted "ENG") might prove beneficial in improving generalization performance. However, this feature is not included in the ACS Income dataset. To address this, we went back to the Census Bureau's American Community Survey database to include the ENG feature in the set of covariates. In Figure 4b, we observe that the inclusion of this feature substantially reduces the degradation due to Y|X-shifts, verifying that the originally missing ENG feature may be one cause of the Y|X-shifts. Figure 4f contrasts the performances of 22 algorithms (each with 200 hyperparameter configurations) using the original features with those that additionally use the ENG feature. The new feature significantly improves target performance across all algorithms; roughly speaking, we posit that we have identified a variable $C$ such that $Y|X, C$ remains similar across CA and PR. However, when we extend this comparison to the source-target pair CA → SD, we observe no significant improvement (Figure 4g). This highlights that the selection of new features should be undertaken judiciously depending on the target distributions of interest: a feature that proves effective for one target distribution might not yield similar results for another. 4 WhyShift: Benchmarking Distribution Shifts on Tabular Data In this section, we detail our benchmark and summarize the main observations. Our findings highlight the importance of future research that builds an understanding of why the distribution shifted. 4.1 Setup Datasets We explore distribution shifts on 5 real-world tabular datasets from the economic and traffic sectors with natural spatiotemporal distribution shifts.
For economic data, we use the ACS Income, ACS Mobility, and ACS Public Coverage datasets from the US-wide ACS PUMS data [18], where the outcome is whether an individual's income exceeds 50k, whether an individual changed their residential address within the past year, and whether an individual is covered by public health insurance, respectively. We primarily focus on spatial shifts across different states in the US. To complement spatial shifts, we derive an ACS Time task based on the ACS Public Coverage dataset, where there are temporal shifts between different years (2010 to 2021). For traffic data, we use US Accident [48, 49] and Taxi [1, 2], where the outcome is whether an accident is severe and whether the total ride duration exceeds 30 minutes, respectively. We focus on spatial shifts between different states/cities. We summarize the datasets in Table 1 and defer a full description to Appendix C.1. Algorithms We evaluate 22 algorithms that span a wide range of learning strategies on tabular data, and compare their performances under the different patterns of distribution shifts we construct. Concretely, these algorithms include: (1) base learners: Logistic Regression, SVM, and fully-connected neural networks (MLP) with standard ERM optimization; (2) tree ensemble models: Random Forest, XGBoost, and LightGBM; (3) robust learning: CVaR-DRO and $\chi^2$-DRO with fast implementation [39], CVaR-DRO and $\chi^2$-DRO with outlier-robust enhancement [76], and Group DRO [58]; (4) imbalanced learning: JTT [42], SUBY, RWY, SUBG, RWG [33], and DWR [38]; and (5) fairness-enhancing methods: an in-processing method [4] with demographic parity, equal opportunity, and error parity as constraints, and a post-processing method [29] with exponential and threshold controls. For the DRO methods (i.e., (3)), we use MLP as the backbone model. For the other algorithms compatible with tree ensemble models (i.e., (4)-(5)), we use the XGBoost model due to its superior performance on tabular data [26].
For algorithms requiring group labels, we use \u2018hour\u2019 for US Accident and Taxi, and \u2018sex\u2019 for the others. Detailed descriptions for each algorithm can be found in Appendix C.5. Benchmarks We conduct experiments with more than 86,000 model configurations on various source-target distribution shift pairs, and carefully select 7 representative pairs with different distribution shift patterns. In Table 1, we introduce 7 selected settings, and characterize the shift patterns of source-target pairs, which contain different proportions of Y |X-shifts and X-shifts corresponding with plots in Figure 2. The first six settings are natural shifts. In the last setting, we sub-sample the dataset according to age to introduce covariate shift, where we focus on individuals from California and form two groups according to whether their age is \u226525. The source data over-samples the low age group where 80% is drawn from the age \u226525 group, and the proportions are reversed in the target data. In Figure 5 and Figure 6, we plot the performance of algorithms using their best hyperparameter configuration on the validation dataset (i.i.d. with the source distribution). Additional results with various source distributions are in the Appendix. Our benchmark is designed to support empirical research, including new learning algorithms and diagnostics that provide qualitative insights on distribution shifts. Hyper-parameter Tuning For each model, we conduct a grid search over a large set of hyper-parameters. (See Appendix C.3 for the complete search space for each method.) When one method includes another as a \u201cbase\u201d learner (e.g., DRO with MLP, RWY with XGBoost), we explore the full tuning space for the base model (e.g., the cross-product of all MLP hyper-parameters with all DRO hyper-parameters). 
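The cross-product tuning space described above can be sketched as follows (the grids named here are placeholders for illustration, not the benchmark's actual search space from Appendix C.3):

```python
from itertools import product

# Placeholder grids: a "base" MLP grid crossed with a DRO-specific grid.
mlp_grid = {"lr": [1e-3, 1e-2], "hidden": [64, 128]}
dro_grid = {"radius": [0.1, 1.0]}

def cross_product(*grids):
    """Flatten several hyper-parameter grids into a list of all configs."""
    keys, value_lists = [], []
    for grid in grids:
        for key, values in grid.items():
            keys.append(key)
            value_lists.append(values)
    return [dict(zip(keys, combo)) for combo in product(*value_lists)]

configs = cross_product(mlp_grid, dro_grid)  # 2 * 2 * 2 = 8 configurations
```

Each resulting config dict is one candidate model; the benchmark then caps the number of sampled configurations per method at 200.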
To control for computational effort, each method is run with 200 configurations for each source-target pair and we select the best configuration according to the i.i.d. validation performance. In Figure 7b, we further compare different choices of validation protocols. Figure 5: Overall performances of all algorithms on the target data in our selected 7 settings. Figure 6: Performance drop between source and target data of all algorithms in our selected 7 settings. Evaluation Metrics In our benchmark, we include different metrics for a thorough evaluation. Specifically, we use Average Accuracy, Worst-group Accuracy, and Macro-F1 score in our main results where we only have one target distribution. For the results with multiple target distributions (i.e., 3 in Taxi, 13 in US Accident and 50 in the others), we present all target accuracies and Macro-F1 scores, as well as the worst-distribution accuracy and Macro-F1 score among all target distributions in Appendix C.6, C.7, C.8, C.9. 4.2 Analysis Different algorithms do not exhibit consistent rankings over different shift patterns. In Figure 5, we observe the rankings across different shifts are quite different, especially for ACS Income (CA→PR) and ACS Mobility (MS→HI) where Y|X-shifts dominate.
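The three metrics used in the main results (Average Accuracy, Worst-group Accuracy, Macro-F1) admit a short NumPy sketch (a minimal illustration, not the benchmark's code):

```python
import numpy as np

def average_accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def worst_group_accuracy(y_true, y_pred, groups):
    """Accuracy of the worst-off group (e.g. 'sex' or 'hour' labels)."""
    return min(
        average_accuracy(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    )

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the classes present in y_true."""
    scores = []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return float(np.mean(scores))
```

Macro-F1 weights both classes equally regardless of prevalence, which is why it complements average accuracy on the imbalanced outcomes in these datasets.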
This observation reaffirms the phenomenon in Figure 2 that as Y|X-shifts become stronger, the relationship between source and target performances becomes less consistent. In Appendix C.3, we also show that even for a fixed source distribution in one fixed prediction task, algorithmic rankings of performances on different target distributions vary a lot. Tree ensemble methods show competitive performance, but do not significantly improve the generalization drop between source and target data. From Figure 5, tree-based ensembles (yellow bars) show robust and competitive performance on the target distribution in 6 out of 7 settings. However, in Figure 6, which plots the performance degradation between source and target, tree ensembles do not show improved robustness. This suggests that they do not actually achieve better robustness against real-world distribution shifts, and their better performances on target data may simply be due to better fitting the source distribution. Figure 7: (a) Sensitivity of DRO methods and Imbalance methods w.r.t. configurations. (b) Target performances of 22 algorithms under different validation protocols on the ACS Income (CA→PR) setting. DRO methods are sensitive to configurations, with rankings varying significantly across the 7 different settings. From Figure 5, DRO methods exhibit competitive performances on ACS Mobility (MS→HI), Taxi (NYC→BOG), and ACS Income (Young→Old), yet underperform in others.
This sensitivity to configurations, as shown in Figure 7a (red points), could be attributed to the worst-case optimization that perturbs the training distribution within a pre-defined uncertainty set, without any information regarding the target distribution. However, when target information is incorporated for hyper-parameter tuning (as shown in Figure 7b), there is a marked improvement in the performance of DRO methods. Our observations suggest potential avenues for building more refined uncertainty sets in DRO methods. Imbalance methods and fairness methods show similar performance with the base learner (XGBoost). In our experiments, we choose the XGBoost model as the base learner for imbalance and fairness methods due to its superior performance on tabular data [26]. However, from Figure 5 and Figure 6, imbalance methods and fairness methods do not show a clear improvement upon their base learner (XGBoost, last yellow bar). Further, as shown in Figure 7a, imbalance methods (green) are also quite sensitive to configurations, and their performances do not improve much when their hyperparameters are tuned over the target data (Oracle). Target information matters in validation. Based on the ACS Income (CA\u2192PR) dataset, we compare different validation protocols, including the best average accuracy, minimum subgroup discrepancy, and best worst-subgroup accuracy on validation data generated from the source distribution. We also use the Oracle validation that chooses the configuration with the best average accuracy on validation data generated from the target distribution. In Figure 7b, we find the first three protocols do not show a significant difference. However, oracle validation with target information substantially improves the effectiveness of both DRO and tree ensemble methods. We conclude using target information for model selection can provide robustness gains even with a small target dataset. Non-algorithmic interventions warrant greater consideration. 
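The gap between validation protocols can be sketched with a toy example (the accuracies below are invented purely for illustration): under strong Y|X-shift, the configuration with the best i.i.d. source-validation score need not be the best on the target, which is exactly the gap that oracle selection closes.

```python
# Toy numbers: each candidate configuration's accuracy on an i.i.d.
# source validation split and on a held-out target sample.
configs = [
    {"name": "A", "source_val": 0.82, "target": 0.61},
    {"name": "B", "source_val": 0.80, "target": 0.70},
    {"name": "C", "source_val": 0.78, "target": 0.74},
]

iid_choice = max(configs, key=lambda c: c["source_val"])  # i.i.d. protocol
oracle_choice = max(configs, key=lambda c: c["target"])   # oracle protocol
# Under Y|X-shift the protocols can disagree: i.i.d. validation picks A,
# while the oracle picks C, a 0.13 target-accuracy gap in this toy case.
```

Even a small labeled target sample used only for model selection (not training) can recover part of this gap, which motivates the conclusion above.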
Reflecting on Section 3, it is clear that operational interventions yield significant enhancements for various methods, as demonstrated in Figure 4e and Figure 4f. In comparison to algorithmic interventions, such as designing different algorithms (e.g., DRO, Imbalance methods), a data-centric approach can be more effective in addressing distribution shifts. For instance, research on feature collection and feature engineering methods may prove impactful. Another avenue for future work is developing methods that can optimally incorporate expensive samples from the target distribution. 5 Conclusion We explore in depth the complexity of distribution shifts in real-world tabular datasets. Through a comprehensive case study, we demonstrate how a better understanding of distribution shifts facilitates algorithmic and data-based interventions. Using natural shifts from 5 real-world tabular datasets across different domains, we specify the shift pattern and evaluate 22 methods via experiments with over 86k trained models. Our benchmark WhyShift encompasses various distribution shift patterns to evaluate the robustness of methods. Our findings highlight the importance of future research to understand how distributions differ in real-world applications. Acknowledgement We thank Tiffany Cai for her help with implementing the DISDE method on our benchmarks. Hongseok Namkoong was partially supported by the Amazon Research Award.
For machine learning\nalgorithms, the ignorance of data heterogeneity will greatly hurt the\ngeneralization performance and the algorithmic fairness, since the prediction\nmechanisms among different sub-populations are likely to differ from each\nother. In this work, we focus on the data heterogeneity that affects the\nprediction of machine learning models, and firstly propose the \\emph{usable\npredictive heterogeneity}, which takes into account the model capacity and\ncomputational constraints. We prove that it can be reliably estimated from\nfinite data with probably approximately correct (PAC) bounds. Additionally, we\ndesign a bi-level optimization algorithm to explore the usable predictive\nheterogeneity from data. Empirically, the explored heterogeneity provides\ninsights for sub-population divisions in income prediction, crop yield\nprediction and image classification tasks, and leveraging such heterogeneity\nbenefits the out-of-distribution generalization performance.", + "authors": "Jiashuo Liu, Jiayun Wu, Bo Li, Peng Cui", + "published": "2023-04-01", + "updated": "2023-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.IT", + "math.IT" + ], + "main_content": "Introduction Big Data provides great opportunities for the growth and advancement of Arti\ufb01cial Intelligence (AI) systems. Nowadays, AI has emerged as a ubiquitous tool that permeates almost every aspect of the contemporary technological landscape, making it an indispensable asset in various \ufb01elds and industries, such as scienti\ufb01c discoveries, policy-making, healthcare, drug discovery, and so on. However, along with the widespread deployment of AI systems, the reliability, fairness, and stability of AI algorithms have been increasingly doubted. For example, in sociological research (Tipton et al., 2020), studies have shown that even for carefully designed randomized trials, there are huge selection biases, making \u2217. Equal Contributions. \u2020. 
Corresponding Author. ©2022 Jiashuo Liu, Jiayun Wu, Bo Li and Peng Cui. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. arXiv:2304.00305v1 [cs.LG] 1 Apr 2023. scientific discoveries unreliable; in disease diagnosis, studies (Wynants et al., 2020; Roberts et al., 2021) have found that hundreds of existing AI algorithms fail to detect and prognosticate for COVID-19 using chest radiographs and CT scans; in social welfare, decision-support AI systems for credit loan applications are found to exhibit biases against certain demographic groups (Hardt et al., 2016; Verma, 2019); and in various machine learning tasks, algorithms face severely poor generalization performance under distributional shifts (Shen et al., 2021). Another well-known example is Simpson's paradox, which brings false discoveries to social research (Wagner, 1982; Hernán et al., 2011). In order to mitigate the barriers that inhibit the deployment of AI systems in crucial, high-stakes applications, numerous researchers have taken recourse to the established research paradigm of model-centric AI, whereby they endeavor to develop innovative algorithms aimed at addressing these challenges. However, in contemporary discourse about machine learning, it is increasingly evident that the challenges faced by algorithms extend beyond their intrinsic properties to the nature of the data utilized in training these models. Specifically, the heterogeneity of the data employed has emerged as a pivotal factor underlying these issues. The concept of data heterogeneity encompasses the diversity that exists within data, including variations in data sources, generation mechanisms, sub-populations, and data structures.
Failure to account for such diversity in AI systems can lead to overemphasis on patterns found only in dominant sub-populations or groups, thereby resulting in false scienti\ufb01c discoveries, unreliable and inequitable decision-making, and poor generalization performance when confronted with new data. Given the high-stakes scenarios in which trustworthy AI is required, addressing the problem of data heterogeneity an inherent property of big data should receive increased attention. Moreover, in the current era of big models, where model development is approaching its limits, researchers have huge opportunities to explore the intricacies of big data, thereby facilitating the development of AI in parallel with the advancement of AI models and algorithms. Despite its widespread existence, due to its complexity, data heterogeneity has not converged to a uniform formulation so far, and has di\ufb00erent meanings among di\ufb00erent \ufb01elds. Li and Reynolds (1995) de\ufb01ne the heterogeneity in ecology based on the system property and complexity or variability. Rosenbaum (2005) views the uncertainty of the potential outcome as unit heterogeneity in observational studies in economics. More recently, in machine learning, several works of causal learning (Peters et al., 2016; Arjovsky et al., 2019; Koyama and Yamaguchi, 2020; Liu et al., 2021a; Creager et al., 2021) and robust learning (Sagawa et al., 2019; Liu et al., 2022) leverage heterogeneous data from multiple environments to improve the out-of-distribution generalization ability. However, previous works have not provided a precise de\ufb01nition or sound quanti\ufb01cation. In this work, targeting at the prediction task in machine learning, from the perspective of prediction power, we propose the predictive heterogeneity, a new type of data heterogeneity. From a machine learning perspective, a major concern is the potential adverse e\ufb00ects of data heterogeneity on prediction accuracy. 
In this study, we propose predictive heterogeneity, which refers to the heterogeneity of data that impacts the performance of machine learning models. Our goal is to facilitate the development of machine learning systems by addressing this issue. To this end, we introduce a precise definition of predictive heterogeneity that quantifies the maximal additional predictive information that can be obtained by dividing the entire data distribution into sub-populations. This measure takes into account the model capacity and computational constraints and can be accurately estimated from finite samples with probably approximately correct (PAC) bounds. We conduct a theoretical analysis of the properties of this measure and examine it under typical scenarios of data heterogeneity. In addition, we propose the information maximization (IM) algorithm to empirically explore the predictive heterogeneity within data. Through our empirical investigations, we find that the explored heterogeneity is interpretable and provides valuable insights for sub-population divisions in various fields, such as agriculture, sociology, object recognition, and healthcare. Moreover, the identified sub-populations can be utilized to identify features related to Covid-19 mortality and enhance the out-of-distribution generalization performance of machine learning models. This has been confirmed through experiments with both simulated and real-world data. In conclusion, our study contributes to the development of machine learning systems by providing a precise definition of predictive heterogeneity and a reliable measure for its estimation. Our findings demonstrate the potential of the IM algorithm for exploring predictive heterogeneity, assisting scientific discoveries and improving the generalization performance of machine learning models in real-world applications.
2 Preliminaries on Mutual Information and Predictive V-Information In this section, we briefly introduce mutual information and the predictive V-information (Xu et al., 2020), which are the preliminaries of our proposed predictive heterogeneity. Notations. For a probability triple $(S, \mathcal{F}, P)$, define random variables $X : S \to \mathcal{X}$ and $Y : S \to \mathcal{Y}$, where $\mathcal{X}$ is the covariate space and $\mathcal{Y}$ is the target space. Accordingly, $x \in \mathcal{X}$ denotes the covariates, and $y \in \mathcal{Y}$ denotes the target. Denote the set of random categorical variables as $\mathcal{C} = \{C : S \to \mathbb{N} \mid \mathrm{supp}(C) \text{ is finite}\}$. Additionally, $\mathcal{P}(\mathcal{X})$, $\mathcal{P}(\mathcal{Y})$ denote the sets of all probability measures over the Borel algebra on the spaces $\mathcal{X}$, $\mathcal{Y}$ respectively. $H(\cdot)$ denotes the Shannon entropy of a discrete random variable and the differential entropy of a continuous variable, and $H(\cdot|\cdot)$ denotes the conditional entropy of two random variables. In information theory, the mutual information of two random variables $X, Y$ measures the dependence between the two variables, quantifying the reduction of entropy for one variable when observing the other: $I(X; Y) = H(Y) - H(Y|X)$. (1) It is known that mutual information is associated with the predictability of $Y$ (Cover and Thomas, 1991). However, the standard definition of mutual information unrealistically assumes unbounded computational capacity of the predictor, rendering it hard to estimate, especially in high dimensions. To mitigate this problem, Xu et al. (2020) propose the predictive V-information under realistic computational constraints, where the predictor is only allowed to use models in the predictive family $\mathcal{V}$ to predict the target variable $Y$. Definition 1 (Predictive Family (Xu et al., 2020)) Let $\Omega = \{f : \mathcal{X} \cup \{\emptyset\} \to \mathcal{P}(\mathcal{Y})\}$. We say that $\mathcal{V} \subseteq \Omega$ is a predictive family if it satisfies: $\forall f \in \mathcal{V}, \forall P \in \mathrm{range}(f), \exists f' \in \mathcal{V}, \text{ s.t. } \forall x \in \mathcal{X},\ f'[x] = P,\ f'[\emptyset] = P$. (2) A predictive family contains all predictive models that are allowed to be used, which forms computational or statistical constraints. The additional condition in Equation 2 means that the predictor can always ignore the input covariates ($x$) if it chooses to (by using only $\emptyset$). Definition 2 (Predictive V-information (Xu et al., 2020)) Let $X, Y$ be two random variables taking values in $\mathcal{X} \times \mathcal{Y}$ and $\mathcal{V}$ be a predictive family. The predictive V-information from $X$ to $Y$ is defined as: $I_{\mathcal{V}}(X \to Y) = H_{\mathcal{V}}(Y|\emptyset) - H_{\mathcal{V}}(Y|X)$, (3) where $H_{\mathcal{V}}(Y|\emptyset)$, $H_{\mathcal{V}}(Y|X)$ are the predictive conditional V-entropies defined as: $H_{\mathcal{V}}(Y|X) = \inf_{f \in \mathcal{V}} \mathbb{E}_{x,y \sim X,Y}[-\log f[x](y)]$. (4) $H_{\mathcal{V}}(Y|\emptyset) = \inf_{f \in \mathcal{V}} \mathbb{E}_{y \sim Y}[-\log f[\emptyset](y)]$. (5) Note that $f \in \mathcal{V}$ is a mapping $\mathcal{X} \cup \{\emptyset\} \to \mathcal{P}(\mathcal{Y})$, so $f[x] \in \mathcal{P}(\mathcal{Y})$ is a probability measure on $\mathcal{Y}$, and $f[x](y) \in \mathbb{R}$ is the density evaluated on $y \in \mathcal{Y}$. $H_{\mathcal{V}}(Y|\emptyset)$ is also denoted as $H_{\mathcal{V}}(Y)$. Compared with mutual information, the predictive V-information restricts the computational power and is much easier to estimate in high-dimensional cases. When the predictive family $\mathcal{V}$ contains all possible models, i.e., $\mathcal{V} = \Omega$, it is proved that $I_{\mathcal{V}}(X \to Y) = I(X; Y)$ (Xu et al., 2020). 3 Predictive Heterogeneity In this paper, from the machine learning perspective, we quantify the data heterogeneity that affects decision making, named Predictive Heterogeneity, which is easy to integrate with machine learning algorithms and could help analyze big data and build more rational algorithms. 3.1 Interaction Heterogeneity To formally define the predictive heterogeneity, we begin with the formulation of the interaction heterogeneity, defined as follows: Definition 3 (Interaction Heterogeneity) Let $X, Y$ be random variables taking values in $\mathcal{X} \times \mathcal{Y}$.
Denote the set of random categorical variables as $\mathcal{C}$, and take its subset $\mathcal{E} \subseteq \mathcal{C}$. Then $\mathcal{E}$ is an environment set iff there exists $E \in \mathcal{E}$ such that $X, Y \perp\!\!\!\perp E$. $E \in \mathcal{E}$ is called an environment variable. The interaction heterogeneity between $X$ and $Y$ w.r.t. the environment set $\mathcal{E}$ is defined as: $\mathcal{H}_{\mathcal{E}}(X, Y) = \sup_{E \in \mathcal{E}} I(Y; X|E) - I(Y; X)$. (6) Each environment variable $E$ represents a stochastic 'partition' of $\mathcal{X} \times \mathcal{Y}$, and the condition for the environment set implies that there exists such a stochastic partition that the joint distribution of $X, Y$ is preserved in each environment. In information theory, $I(Y; X|E) - I(Y; X)$ is called the interaction information, which measures the influence of the environment variable $E$ on the amount of information shared between the target $Y$ and the covariate $X$. The interaction heterogeneity defined in Equation 6 quantifies the maximal additional information that can be gained from involving or uncovering the environment variable $E$. Intuitively, large $\mathcal{H}_{\mathcal{E}}(X, Y)$ indicates that the predictive power from $X$ to $Y$ is enhanced by $E$, which means that uncovering the latent sub-population associated with the environment partition $E$ will benefit the $X \to Y$ prediction. 3.2 Predictive Heterogeneity Based on mutual information, the computation of the interaction heterogeneity is quite hard, since standard mutual information is notoriously difficult to estimate, especially in big data scenarios. Also, even if the mutual information could be accurately estimated, the prediction model may not be able to make good use of it. Inspired by Xu et al. (2020), we propose the Predictive Heterogeneity, which measures the interaction heterogeneity that can be captured under computational constraints and affects the prediction of models within the specified predictive family.
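For discrete samples, the interaction information $I(Y;X|E) - I(Y;X)$ inside Definition 3 admits a simple plug-in estimate; the sketch below (our illustration, not the paper's estimator) uses a Simpson's-paradox-style example where X and Y look independent when pooled but are perfectly dependent within each environment.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum(
        (c / n) * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
        for (a, b), c in pxy.items()
    )

def interaction_information(x, y, e):
    """Plug-in estimate of I(Y;X|E) - I(Y;X) for one candidate
    environment labelling e (the quantity maximized in Equation 6)."""
    n = len(x)
    conditional = 0.0
    for env in set(e):
        idx = [i for i in range(n) if e[i] == env]
        conditional += len(idx) / n * mutual_information(
            [x[i] for i in idx], [y[i] for i in idx]
        )
    return conditional - mutual_information(x, y)

# Simpson-style mixture: pooled, X tells nothing about Y; within each
# environment, X determines Y exactly, giving log 2 nats of interaction.
x = [0, 1, 0, 1]
y = [0, 1, 1, 0]
e = [0, 0, 1, 1]
```

The interaction heterogeneity of Equation 6 would then be the supremum of this quantity over admissible environment variables.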
To begin with, we propose the Conditional Predictive V-information, which generalizes the predictive V-information. Definition 4 (Conditional Predictive V-information) Let $X, Y$ be two random variables taking values in $\mathcal{X} \times \mathcal{Y}$ and $E$ be an environment variable. For a predictive family $\mathcal{V}$, the conditional predictive V-information is defined as: $I_{\mathcal{V}}(X \to Y|E) = H_{\mathcal{V}}(Y|\emptyset, E) - H_{\mathcal{V}}(Y|X, E)$, (7) where $H_{\mathcal{V}}(Y|\emptyset, E)$ and $H_{\mathcal{V}}(Y|X, E)$ are defined as: $H_{\mathcal{V}}(Y|X, E) = \mathbb{E}_{e \sim E}\left[\inf_{f \in \mathcal{V}} \mathbb{E}_{x,y \sim X,Y|E=e}[-\log f[x](y)]\right]$. (8) $H_{\mathcal{V}}(Y|\emptyset, E) = \mathbb{E}_{e \sim E}\left[\inf_{f \in \mathcal{V}} \mathbb{E}_{y \sim Y|E=e}[-\log f[\emptyset](y)]\right]$. (9) Intuitively, the conditional predictive V-information measures the weighted average of predictive V-information among environments. We are now ready to formalize the predictive heterogeneity measure. Definition 5 (Predictive Heterogeneity) Let $X, Y$ be random variables taking values in $\mathcal{X} \times \mathcal{Y}$ and $\mathcal{E}$ be an environment set. For a predictive family $\mathcal{V}$, the predictive heterogeneity for the prediction $X \to Y$ with respect to $\mathcal{E}$ is defined as: $\mathcal{H}^{\mathcal{E}}_{\mathcal{V}}(X \to Y) = \sup_{E \in \mathcal{E}} I_{\mathcal{V}}(X \to Y|E) - I_{\mathcal{V}}(X \to Y)$, (10) where $I_{\mathcal{V}}(X \to Y)$ is the predictive V-information following Definition 2. Leveraging the predictive V-information, the predictive heterogeneity defined in Equation 10 characterizes the maximal additional information that can be used by the prediction model when involving the environment variable $E$. It restricts the prediction models to $\mathcal{V}$, and the explored additional information could benefit the prediction performance of a model $f \in \mathcal{V}$, for which reason it is named predictive heterogeneity. Next, we present some basic properties of the interaction heterogeneity and the predictive heterogeneity.
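When X is discrete and the predictive family V is taken to be lookup-table predictors (one categorical distribution per value of x; a simplifying assumption for illustration, not the paper's general setting), the infima in Equations 4-5 are attained by empirical frequencies, and the plug-in predictive V-information reduces to an entropy difference:

```python
import numpy as np
from collections import Counter

def h_entropy(labels):
    """Best constant-predictor log loss: the empirical H_V(Y | ∅)."""
    n = len(labels)
    return -sum((c / n) * np.log(c / n) for c in Counter(labels).values())

def v_information(x, y):
    """Plug-in H_V(Y|∅) - H_V(Y|X) for a lookup-table family:
    one fitted categorical distribution per observed value of x."""
    n = len(x)
    h_cond = 0.0
    for xv in set(x):
        ys = [y[i] for i in range(n) if x[i] == xv]
        h_cond += len(ys) / n * h_entropy(ys)
    return h_entropy(y) - h_cond
```

With a family this rich, the quantity coincides with the plug-in mutual information estimate, matching the corner case $\mathcal{V} = \Omega$ of Definition 2; for a constrained V (e.g. linear models), the V-information can only be smaller.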
Proposition 6 (Basic Properties of Predictive Heterogeneity) Let $X, Y$ be random variables taking values in $\mathcal{X} \times \mathcal{Y}$, $\mathcal{V}$ be a function family, and $\mathcal{E}, \mathcal{E}_1, \mathcal{E}_2$ be environment sets. 1. Monotonicity: If $\mathcal{E}_1 \subseteq \mathcal{E}_2$, then $\mathcal{H}^{\mathcal{E}_1}_{\mathcal{V}}(X \to Y) \le \mathcal{H}^{\mathcal{E}_2}_{\mathcal{V}}(X \to Y)$. 2. Nonnegativity: $\mathcal{H}^{\mathcal{E}}_{\mathcal{V}}(X \to Y) \ge 0$. 3. Boundedness: For discrete $Y$, $\mathcal{H}^{\mathcal{E}}_{\mathcal{V}}(X \to Y) \le H_{\mathcal{V}}(Y|X)$. 4. Corner Case: If the predictive family $\mathcal{V}$ is the largest possible predictive family that includes all possible models, i.e., $\mathcal{V} = \Omega$, we have $\mathcal{H}_{\mathcal{E}}(X, Y) = \mathcal{H}^{\mathcal{E}}_{\Omega}(X \to Y)$. Proofs can be found in Appendix A. For further theoretical properties of predictive heterogeneity, in Section 3.3 we derive its explicit forms under endogeneity, a common reflection of data heterogeneity. And we demonstrate in Section 3.4 that our proposed predictive heterogeneity can be empirically estimated with guarantees if the complexity of $\mathcal{V}$ is bounded (e.g., its Rademacher complexity). 3.3 Theoretical Properties in Linear Cases In this section, we conduct a theoretical analysis of the predictive heterogeneity in multiple linear settings. Specifically, we consider two scenarios: (1) a homogeneous case with independent noises and (2) heterogeneous cases with endogeneity arising from selection bias and hidden variables. By examining these typical settings, we approximate the analytical forms of the proposed measure and draw insightful conclusions that can be generalized to more complex scenarios. Firstly, under a homogeneous case with no data heterogeneity, Theorem 7 proves that our measure is bounded by the scale of the label noise (which is usually small) and reduces to 0 in the linear case under mild assumptions. It indicates that the predictive heterogeneity is insensitive to independent noises.
Notably that in the linear case we only deal with the environment variable satisfying X \u22a5\u03f5|E, since in common prediction tasks, the independent noises are unknown and unrealistic to be exploited for the prediction. Theorem 7 (Homogeneous Case with Independent Noises) For a prediction task X \u2192 Y where X, Y are random variables taking values in Rn \u00d7 R, consider the data generation process as Y = g(x) + \u03f5, \u03f5 \u223cN(0, \u03c32) where g : Rn \u2192R is a measurable function. 1) For a function class G such that g \u2208G, de\ufb01ne the function family as VG = {f|f[x] = N(\u03c6(x), \u03c32 V ), \u03c6 \u2208G, \u03c3V \u2208R+}. With an environment set E , we have HE VG(X \u2192Y ) \u2264\u03c0\u03c32. 2) Take n = 1 and g(x) = \u03b2x,\u03b2 \u2208R. Without loss of generality, assume E[X] = 0 and E[X2] exists. Given the function family V\u03c3 = {f|f[x] = N(\u03b8x, \u03c32), \u03b8 \u2208R, \u03c3 \ufb01xed } and the environment set E = {E|E \u2208C, |supp(E)| = 2, X \u22a5\u03f5|E}. We have HE V\u03c3(X \u2192Y ) = 0. Proofs can be found at Appendix B. Secondly, we examine the proposed measure under two typical cases of data heterogeneity (Fan et al., 2014), named endogeneity by selection bias (Heckman, 1979; Winship and Mare, 1992; Cui and Athey, 2022) and endogeneity with hidden variables (Fan et al., 2014; Arjovsky et al., 2019). To begin with, in Theorem 8, we consider the prediction task X \u2192Y with X, Y taking values in R2 \u00d7 R. Let X = [S, V ]T . The predictive family is speci\ufb01ed as: V = {f|f[x] = N(\u03b8SS + \u03b8V V, \u03c32), \u03b8S, \u03b8V \u2208R, \u03c3 = 1}. (11) 6 \fPredictive Heterogeneity: Measures and Applications And the data distribution P(X, Y ) is a mixture of latent sub-populations, which could be formulated by an environment variable E\u2217\u2208C such that P(X, Y ) = P e\u2208supp(E\u2217) P(E\u2217= e)P(X, Y |E\u2217= e). 
For each e \u2208supp(E\u2217), P(X, Y |E\u2217= e) is the distribution of a homogeneous sub-population. Note that the prediction task is to predict Y with covariates X, and the sub-population structure is latent. That is, P(E\u2217|X, Y ) is unknown for models. In the following, we derive the analytical forms of our measure under the one typical case. Theorem 8 (Endogeneity with Selection Bias) For the prediction task X = [S, V ]T \u2192 Y with a latent environment variable E\u2217, the data generation process with selection bias is de\ufb01ned as: Y = \u03b2S + f(S) + \u03f5Y , \u03f5Y \u223cN(0, \u03c32 Y ); V = r(E\u2217)f(S) + \u03c3(E\u2217) \u00b7 \u03f5V , \u03f5V \u223cN(0, 1), (12) where f : R \u2192R and r, \u03c3 : supp(E\u2217) \u2192R are measurable functions. \u03b2 \u2208R. Assume that E[S2] is \ufb01nite, E[f(S)S] = 0 and there exists L > 1 such that L\u03c32(E\u2217) < r2(E\u2217)E[f2]. For the predictive family de\ufb01ned in equation 11 and the environment set E = C, the predictive heterogeneity of the prediction task [S, V ]T \u2192Y approximates to: HC V(X \u2192Y ) \u2248Var(re)E[f 2] + E[\u03c32(E\u2217)] E[r2 e]E[f 2] + E[\u03c32(E\u2217)] E[f 2(S)], error bounded by 1 2 max(\u03c32 Y , R(r, \u03c3, f)). (13) And further we have R(r(E\u2217), \u03c3(E\u2217), f) = E[( 1 r2E[f2] \u03c32 + 1 )2]E[f2] + EE\u2217[( 1 r \u03c3 + \u03c3 rE[f2] )2] < E[f2]( 1 (L + 1)2 + 1 L + 2 + 1 L ). (14) Proofs can be found at Appendix C. Intuitively, the data generation process in Theorem 8 introduces the spurious correlation between the spurious feature V and the target Y , which varies across di\ufb00erent sub-populations (i.e. r(E\u2217) and \u03c3(E\u2217) varies) and brings about data heterogeneity. Here E[f(S)S] = 0 indicates a model misspeci\ufb01cation since there is a nonlinear term f(S) that could not be inferred by the linear predictive family with the stable feature S. 
The constant L characterizes the strength of the spurious correlation between V and Y . Larger L means V could provide more information for prediction. From the approximation in Equation 13, we can see that our proposed predictive heterogeneity is dominated by two terms: (1) Var[r(E\u2217)]/E[r2(E\u2217)] characterizes the variance of r(E\u2217) among sub-populations; (2) E[f2(S)] re\ufb02ects the strength of model misspeci\ufb01cations. These two components account for two sources of the data heterogeneity under selection bias, which validates the rationality of our proposed measure. Based on the theorem, it can be inferred that the degree of predictive heterogeneity increases with greater variability of r(E\u2217) among sub-populations and stronger model misspeci\ufb01cations. In other words, when the sub-populations di\ufb00er signi\ufb01cantly from each other and the model is not accurately speci\ufb01ed, the predictive heterogeneity is likely to be larger. Additionally, in Theorem 9, we analyze our measure under endogeneity with hidden variables. In Theorem 9, an anti-causal covariate V is generated via the causal diagram 7 \fJiashuo Liu, Jiayun Wu, Bo Li and Peng Cui like Y \u2192V \u2190E\u2217with a hidden environment variable E\u2217. However, since E\u2217is omitted from the prediction models, the relationship between V and Y is biased, which inhibits the prediction power. Theorem 9 (Endogeneity with Hidden Variables) For the prediction task [S, V ]T \u2192 Y with a latent environment variable E\u2217, the data generation process with hidden variables is de\ufb01ned as: Y = \u03b2S + f(S) + \u03f5Y , \u03f5Y \u223cN(0, \u03c32 Y ); V = r(E\u2217)(f(S) + \u03f5Y ) + \u03c3(E\u2217)\u03f5V , \u03f5V \u223cN(0, 1), (15) where f : R \u2192R and r, \u03c3 : supp(E\u2217) \u2192R are measurable functions. \u03b2 \u2208R. Assume that E[f(S)S] = 0 and there exists L > 1 such that L\u03c32(E\u2217) < r2(E\u2217)(E[f2] + \u03c32 Y ). 
For the predictive family defined in Equation 11 and the environment set E = C, the predictive heterogeneity of the prediction task [S, V]^T → Y approximates to:

H^{\mathcal{C}}_{\mathcal{V}}(X \to Y) \approx \frac{\mathrm{Var}(r_e)(E[f^2] + \sigma_Y^2) + E[\sigma^2(E^*)]}{E[r_e^2](E[f^2] + \sigma_Y^2) + E[\sigma^2(E^*)]} \, \big(E[f^2(S)] + \sigma_Y^2\big), \quad \text{with error bounded by } \frac{1}{2}\max(\sigma_Y^2, R(r, \sigma, f)). (16)

And further we have:

R(r(E^*), \sigma(E^*), f) = E\Big[\Big(\frac{1}{\frac{r^2 (E[f^2] + \sigma_Y^2)}{\sigma^2} + 1}\Big)^2\Big] (E[f^2] + \sigma_Y^2) + E_{E^*}\Big[\Big(\frac{1}{\frac{r}{\sigma} + \frac{\sigma}{r (E[f^2] + \sigma_Y^2)}}\Big)^2\Big] < (E[f^2] + \sigma_Y^2)\Big(\frac{1}{(L+1)^2} + \frac{1}{L + 2 + \frac{1}{L}}\Big). (17)

Proofs can be found in Appendix C. Intuitively, the data generation process in Theorem 9 introduces a biased anti-causal relationship between the spurious feature V and the target Y, which varies across sub-populations (i.e., r(E*) and σ(E*) vary) and brings about data heterogeneity. Here, similar to Theorem 8, E[f(S)S] = 0 indicates model misspecification, and the constant L characterizes the strength of the biased anti-causal relationship between V and Y, where larger L means more information that V could provide for predicting Y when E* is missing. From the approximation in Equation 16, we can see that our proposed predictive heterogeneity is dominated by two terms: (1) Var[r(E*)]/E[r²(E*)], which characterizes the variance of r(E*) among sub-populations; and (2) E[f²(S)] + σ_Y², which reflects the maximal additional information that could be provided by V. In the broader context, Theorems 1, 2, and 3 suggest that our proposed predictive heterogeneity measure is equipped with remarkable properties, namely its insensitivity to homogeneous cases and its ability to account for the latent heterogeneity arising from typical sources of data heterogeneity. These findings highlight the effectiveness of our measure in accurately characterizing predictive heterogeneity in various machine learning tasks.
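To make the quantity concrete, the following sketch estimates the predictive heterogeneity of a fixed, known partition under a Gaussian predictive family: conditional predictive gain minus marginal predictive gain, with the supremum over environments replaced by a single candidate partition (a deliberate simplification). The data-generating constants are assumptions in the spirit of Theorem 9, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

SIGMA = 1.0  # fixed variance of the Gaussian predictive family (an assumption)

def nll(resid):
    # average negative log-likelihood of residuals under N(0, SIGMA^2)
    return 0.5 * np.log(2 * np.pi * SIGMA**2) + np.mean(resid**2) / (2 * SIGMA**2)

def h_y_given_x(X, y):
    # H_V(Y|X): best linear predictor within the family
    A = np.column_stack([X, np.ones(len(y))])
    theta = np.linalg.lstsq(A, y, rcond=None)[0]
    return nll(y - A @ theta)

def h_y(y):
    # H_V(Y|empty set): best constant predictor (the mean)
    return nll(y - y.mean())

def predictive_heterogeneity(parts):
    # [H(Y|0,E) - H(Y|X,E)] - [H(Y|0) - H(Y|X)] for one candidate partition
    Xall = np.vstack([X for X, _ in parts])
    yall = np.concatenate([y for _, y in parts])
    w = np.array([len(y) for _, y in parts], dtype=float)
    w /= w.sum()
    cond = sum(wi * (h_y(y) - h_y_given_x(X, y)) for wi, (X, y) in zip(w, parts))
    return cond - (h_y(yall) - h_y_given_x(Xall, yall))

def subpop(n, r):
    # Theorem 9-style toy data: V's relation to Y flips across sub-populations
    s = rng.normal(size=n)
    y = s + (s**2 - 1) + 0.3 * rng.normal(size=n)
    v = r * (s**2 - 1) + 0.3 * rng.normal(size=n)
    return np.column_stack([s, v]), y

het = predictive_heterogeneity([subpop(4000, 2.0), subpop(4000, -2.0)])  # heterogeneous
hom = predictive_heterogeneity([subpop(4000, 2.0), subpop(4000, 2.0)])   # homogeneous
print(het, hom)
```

A partition separating the two mechanisms yields a clearly positive value, while an uninformative partition of a homogeneous mixture stays near zero, matching the nonnegativity and corner-case properties above.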
3.4 PAC Guarantees for Predictive Heterogeneity Estimation De\ufb01ned under explicit computation constraints, our Predictive Heterogeneity could be empirically estimated with guarantees if the complexity of the model family V is bounded. In 8 \fPredictive Heterogeneity: Measures and Applications this work, we provide \ufb01nite sample generalization bounds with the Rademacher complexity. First, we describe the de\ufb01nition of the empirical predictive heterogeneity, the explicit formula for which could be found in De\ufb01nition 10. The dataset D = {(xi, yi)}|D| i=1 is independently and identically drawn from the population X, Y . Given a function family V and an environment set EK such that for E \u2208EK, supp(E) = {(ek)K k=1}. , let Q be the set of all probability distributions of X,Y ,E where E \u2208EK. The empirical predictive heterogeneity \u02c6 HEK V (X \u2192Y ; D) is given by: \u02c6 HEK V (X \u2192Y ; D) = sup E\u2208EK \u02c6 IV(X \u2192Y |E; D) \u2212\u02c6 IV(X \u2192Y ; D) (18) = sup \u02c6 Q\u2208Q K X k=1 h \u02c6 Q(E = ek) \u02c6 HV(Y |E = ek; D) \u2212\u02c6 Q(E = ek) \u02c6 HV(Y |X, E = ek; D) i (19) \u2212[ \u02c6 HV(Y ; D) \u2212\u02c6 HV(Y |X; D)]. (20) Speci\ufb01cally, \u02c6 Q(E = ek) \u02c6 HV(Y |X, E = ek; D) (21) = inf f\u2208V \u02c6 Q(E = ek) X xi,yi\u2208D \u2212log f[xi](yi) \u02c6 Q(xi, yi|E = ek) P xj,yj\u2208D \u02c6 Q(xj, yj|E = ek) (22) = inf f\u2208V \u02c6 Q(E = ek) X xi,yi\u2208D \u2212log f[xi](yi) \u02c6 Q(E = ek|xi, yi) \u02c6 Q(xi, yi) P xj,yj\u2208D \u02c6 Q(E = ek|xj, yj) \u02c6 Q(xj, yj) (23) = inf f\u2208V \u02c6 Q(E = ek) X xi,yi\u2208D \u2212log f[xi](yi) \u02c6 Q(E = ek|xi, yi) \u02c6 Q(xi, yi) \u02c6 Q(E = ek) (24) = inf f\u2208V X xi,yi\u2208D \u2212log f[xi](yi) \u02c6 Q(E = ek|xi, yi) \u02c6 Q(xi, yi) (25) = inf f\u2208V 1 |D| X xi,yi\u2208D \u2212log f[xi](yi) \u02c6 Q(E = ek|xi, yi). 
(26) The explicit formula for \u02c6 Q(E = ek) \u02c6 HV(Y |E = ek; D), \u02c6 HV(Y |X; D) and \u02c6 HV(Y ; D) could be similarly derived. Here we are ready to formally de\ufb01ne the empirical predictive heterogeneity. De\ufb01nition 10 (Empirical Predictive Heterogeneity) For the prediction task X \u2192Y with X, Y taking values in X \u00d7 Y, a dataset D is independently and identically drawn from the population such that D = {(xi, yi)N i=1 \u223cX, Y }. Given the predictive family V and the environment set EK = {E|E \u2208C, |supp(E)| = K} where K \u2208N. Without loss of generality, we specify that supp(E) = {(ek)K k=1} where ek denotes a single environment. Let Q be the set of all probability distributions of X,Y ,E where E \u2208EK. The empirical predictive heterogeneity \u02c6 HEK V (X \u2192Y ; D) with respect to D is de\ufb01ned as: \u02c6 HEK V (X \u2192Y ; D) = sup \u02c6 Q\u2208Q K X k=1 h \u02c6 Q(E = ek) \u02c6 HV(Y |E = ek; D) \u2212\u02c6 Q(E = ek) \u02c6 HV(Y |X, E = ek; D) i \u2212[ \u02c6 HV(Y ; D) \u2212\u02c6 HV(Y |X; D)], (27) 9 \fJiashuo Liu, Jiayun Wu, Bo Li and Peng Cui where \u02c6 Q(E = ek) \u02c6 HV(Y |X, E = ek; D) = inf f\u2208V 1 |D| X xi,yi\u2208D \u2212log f[xi](yi) \u02c6 Q(E = ek|xi, yi). (28) \u02c6 Q(E = ek) \u02c6 HV(Y |E = ek; D) = inf f\u2208V 1 |D| X xi,yi\u2208D \u2212log f[\u2205](yi) \u02c6 Q(E = ek|xi, yi). (29) \u02c6 HV(Y |X; D) = inf f\u2208V 1 |D| X xi,yi\u2208D \u2212log f[xi](yi). (30) \u02c6 HV(Y ; D) = inf f\u2208V 1 |D| X xi,yi\u2208D \u2212log f[\u2205](yi). (31) Then we give the PAC bound over the empirical usable predictive heterogeneity in Theorem 11. Theorem 11 (PAC Bound) Consider the prediction task X \u2192Y where X, Y are random variables taking values in X \u00d7Y. Assume that the predictive family V satis\ufb01es \u2200x \u2208X, \u2200y \u2208Y,\u2200f \u2208V, log f[x](y) \u2208[\u2212B, B] where B > 0. 
For given K \u2208N, the environment set is de\ufb01ned as EK = {E|E \u2208C, |supp(E)| = K} where K \u2208N. Without loss of generality, we specify that supp(E) = {(ek)K k=1} where ek denotes a single environment. Let Q be the set of all probability distributions of X,Y ,E where E \u2208EK. Take an e \u2208supp(E) and de\ufb01ne a function class GV = {g|g(x, y) = log f[x](y)Q(E = e|x, y), f \u2208V, Q \u2208Q}. Denote the Rademacher complexity of G with N samples by RN(G). Then for any \u03b4 \u2208(0, 1/(2K + 2)), with a probability over 1 \u22122(K + 1)\u03b4, for dataset D independently and identically drawn from X, Y , we have: |HEK V (X \u2192Y ) \u2212\u02c6 HEK V (X \u2192Y ; D)| \u22644(K + 1)R|D|(GV) + 2(K + 1)B r 2 log 1 \u03b4 /|D|, (32) where R|D|(GV) = O(|D|\u22121 2 ) (Bartlett and Mendelson, 2002). Proofs can be found at Appendix D. 4 Algorithm To empirically estimate the predictive heterogeneity in De\ufb01nition 10, we derive the Information Maximization (IM) algorithm from the formal de\ufb01nition in Equation 27 to infer the distribution of E that maximizes the empirical predictive heterogeneity \u02c6 HEK V (X \u2192Y ; D). Objective Function. Given dataset D = {XN, YN} = {(xi, yi)}N i=1, denote supp(E) = {e1, . . . , eK}, we parameterize the distribution of E|(XN, YN) with weight matrix W \u2208WK, where K is the pre-de\ufb01ned number of environments and WK = {W : W \u2208RN\u00d7K + and W1K = 1N} is the allowed weight space. Each element wij in W represents P(E = ej|xi, yi) (the probability of the i-th data point belonging to the j-th sub-population). For a predictive family V, the solution to the supremum problem in the De\ufb01nition 10 is equivalent to the following objective function: min W \u2208WKRV(W, \u03b8\u2217 1(W), . . . , \u03b8\u2217 K(W)) = \uf8f1 \uf8f2 \uf8f3 1 N N X i=1 K X j=1 wij\u2113V(f\u03b8\u2217 j (xi), yi) + UV(W, YN) \uf8fc \uf8fd \uf8fe, s.t. 
\theta^*_j(W) \in \arg\min_\theta \Big\{ L^j_{\mathcal{V}}(W, \theta) = \sum_{i=1}^{N} w_{ij}\, \ell_{\mathcal{V}}(f_\theta(x_i), y_i) \Big\}, \quad \text{for } j = 1, \dots, K, (33)

where f_θ : X → Y denotes a predicting function parameterized by θ, ℓ_V(·, ·) : Y × Y → R represents a loss function, and U_V(W, Y_N) is a regularizer. Specifically, f_θ, ℓ_V and U_V are determined by the predictive family V. Here we provide implementations for two typical and general machine learning tasks, regression and classification.

4.1 Regression
For the regression task, the predictive family is typically modeled as:

\mathcal{V}_1 = \{g : g[x] = N(f_\theta(x), \sigma^2), \; f \text{ is the predicting function and } \theta \text{ is learnable}, \; \sigma \text{ is a constant}\}. (34)

The corresponding loss function is ℓ_{V_1}(f_θ(X), Y) = (f_θ(X) − Y)², and U_{V_1}(W, Y_N) becomes

U_{\mathcal{V}_1}(W, Y_N) = \mathrm{Var}_{j\in[K]}(\bar{Y}^j_N) = \sum_{j=1}^{K} \frac{\big(\sum_{i=1}^{N} w_{ij} y_i\big)^2}{N \sum_{i=1}^{N} w_{ij}} - \Big(\frac{1}{N}\sum_{i=1}^{N} y_i\Big)^2, (35)

where \bar{Y}^j_N denotes the mean value of the label Y given E = e_j, and U(W, Y_N) calculates the variance of \bar{Y}^j_N among the sub-populations e_1 ∼ e_K.

4.2 Classification
For the classification task, the predictive family is typically modeled as:

\mathcal{V}_2 = \{g : g[x] = f_\theta(x) \in \Delta^c, \; f \text{ is the classification model and } \theta \text{ is learnable}\}, (36)

where c is the class number and Δ^c denotes the c-dimensional simplex. Each model in the predictive family V_2 outputs a discrete distribution in the form of a c-dimensional simplex. In this case, the corresponding loss function ℓ_{V_2}(·, ·) is the cross-entropy loss, and the regularizer becomes U_{\mathcal{V}_2}(W, Y_N) = -\sum_{j=1}^{K} \frac{1}{N}\big(\sum_{i=1}^{N} w_{ij}\big) H(\bar{Y}^j_N), where H(\bar{Y}^j_N) is the entropy of Y given E = e_j.

4.3 Optimization
The bi-level optimization in Equation 33 can be solved by performing projected gradient descent w.r.t. W.
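As a minimal illustration of the bi-level objective in Equation 33 for the regression family, the sketch below alternates weighted least squares per environment with a soft reassignment of W driven by per-environment losses. The softmax reassignment stands in for the paper's projected-gradient update on W, and the regularizer U is omitted, so this is a simplification rather than the full IM algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def im_regression(X, y, K=2, iters=30, temp=0.1):
    """Illustrative sketch of the IM objective (Eq. 33) for linear regression:
    alternate (i) weighted least squares per environment and (ii) a soft
    reassignment of W from per-environment squared losses."""
    N = len(y)
    A = np.column_stack([X, np.ones(N)])      # design matrix with intercept
    W = rng.dirichlet(np.ones(K), size=N)     # each row of W lies on the simplex
    for _ in range(iters):
        losses = np.empty((N, K))
        for j in range(K):
            # inner problem: theta_j minimizes the w_{ij}-weighted squared loss
            sw = np.sqrt(W[:, j] + 1e-12)
            theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
            losses[:, j] = (A @ theta - y) ** 2
        # outer step (simplified): lower loss under model j -> larger w_{ij}
        logits = -losses / temp
        logits -= logits.max(axis=1, keepdims=True)
        W = np.exp(logits)
        W /= W.sum(axis=1, keepdims=True)
    return W

# toy data with two prediction mechanisms: y = 2x for the first half, y = -2x after
x = rng.normal(size=400)
y = np.where(np.arange(400) < 200, 2 * x, -2 * x) + 0.1 * rng.normal(size=400)
W = im_regression(x[:, None], y, K=2)
labels = W.argmax(axis=1)
```

On this toy problem the learned W separates the two mechanisms almost perfectly, up to a label permutation; points near x = 0, where the two lines intersect, remain genuinely ambiguous.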
The gradient of W can be calculated by: (we omit the subscript V here) \u2207W R = \u2207W U + \u0002 \u2113(f\u03b8j(xi), yi) \u0003N\u00d7K i,j + K X j=1 \u2207\u03b8jR|\u03b8\u2217 j \u2207W \u03b8\u2217 j , (37) where \u2207\u03b8jR \f \f \u03b8\u2217 j \u2207W \u03b8\u2217 j \u2248\u2207\u03b8jR \f \f \u03b8t j X h\u2264t \uf8ee \uf8f0Y k 3, the learned subpopulations will shrink to 3 sub-populations. In Figure 5 and 6, to conduct a more thorough examination of the learned subgroups, we analyze the age distribution of each group, as well as the average value of their corresponding risk factors. Our analysis reveals several noteworthy \ufb01ndings: 1. We observe a distinct di\ufb00erence in the age distribution of the learned subgroups. Speci\ufb01cally, Group 0 is primarily composed of individuals over the age of 70, while Group 1 consists of individuals around 60 years old. Group 2, on the other hand, is comprised of middle-aged individuals spanning multiple age groups. 2. The average values of the risk factors reveal notable di\ufb00erences among the various subgroups, indicative of distinct causes of mortality. More speci\ufb01cally, Group 0 exhibits a considerably higher prevalence of underlying diseases, such as renal, neurologic, liver, and immunosuppression, when compared to the other groups. In contrast, Group 1 shows a substantially lower level of underlying diseases in comparison. Interestingly, Group 2 does not exhibit any underlying diseases, yet has a markedly higher level of diarrhea and vomiting. These \ufb01ndings suggest that the learned subgroups may be used to identify speci\ufb01c risk factors associated with mortality, which can inform targeted interventions for individuals with distinct risk pro\ufb01les. Having identi\ufb01ed distinct patterns among the subgroups, we seek to identify the speci\ufb01c risk factors associated with mortality. To further validate our \ufb01ndings, we incorporate the expertise of domain experts. 
By leveraging their insights, we are able to confirm the reliability of the identified risk factors and the importance of our subgroup analysis.

5.2.2 Scientific Findings
Based on the learned groups, we fit a logistic regression model on each group and pick the top-6 features with the largest coefficients, which are shown in Table 1. Firstly, our analysis reveals that in Groups 0 and 1, the top features associated with mortality are primarily SPO2 and underlying diseases, which align with the common risk factors of older individuals. In contrast, Group 2 exhibits a distinct set of top features, including symptoms of COVID-19 such as fever, cough, and vomiting. Notably, Group 2 is composed of middle-aged individuals spanning multiple age groups. Our findings suggest that severe COVID-19 symptoms can lead to mortality regardless of age. Secondly, to further our analysis, we fit a model on the entire dataset and observe that the top features remain SPO2 and underlying diseases, consistent with the top features found for older individuals. However, this may not be beneficial, or could even lead to harm, for interventions targeted towards younger or middle-aged individuals who generally do not have severe underlying diseases. For instance, doctors may tend to treat younger patients with severe COVID-19 symptoms optimistically and underestimate their mortality risk because they typically do not have underlying diseases. Thus, exploring and leveraging the predictive heterogeneity within the data can lead to more reliable scientific discoveries while avoiding potential harm caused by latent heterogeneity. Thirdly, our analysis reveals two important features in Group 2, namely vomiting and diarrhea, which are rarely considered in traditional analysis.
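The per-group analysis described above (fit a logistic regression on each learned subgroup and rank features by coefficient magnitude) can be sketched as follows. The feature names and the synthetic subgroup are purely illustrative stand-ins, not the COVID-19 data:

```python
import numpy as np

rng = np.random.default_rng(2)
FEATURES = ["SPO2", "Diabetes", "Renal", "Fever", "Cough", "Vomiting"]  # illustrative subset

def fit_logistic(X, y, lr=0.5, steps=2000, l2=1e-3):
    # plain gradient-descent logistic regression (inputs assumed standardized)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

def top_features(X, y, k=3):
    # rank features of one subgroup by absolute coefficient, as in Table 1
    w = fit_logistic(X, y)
    order = np.argsort(-np.abs(w))
    return [FEATURES[i] for i in order[:k]]

# synthetic stand-in for one learned subgroup: outcome driven by SPO2 and Renal
X = rng.normal(size=(2000, len(FEATURES)))
logit = -1.5 * X[:, 0] + 1.0 * X[:, 2]
y = (rng.random(2000) < 1 / (1 + np.exp(-logit))).astype(float)
print(top_features(X, y, k=2))
```

Running the same ranking once per learned subgroup and once on the pooled data reproduces the kind of contrast discussed above, where the pooled ranking can hide subgroup-specific risk factors.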
We have reviewed relevant literature on COVID-19 and discovered that various studies have recognized these two symptoms as important indicators of higher risk of mortality caused by COVID-19. Zhong et al. (2020) highlighted the potential mechanisms of gastrointestinal and hepatic injuries in COVID-19 to raise awareness of digestive system injury in COVID-19. Liu et al. (2021b) analyzed 29,393 laboratory-con\ufb01rmed COVID-19 patients diagnosed before March 21, 2020, in cities outside of Wuhan in mainland China and found that patients with both GI symptoms and fever and patients with fever alone had a signi\ufb01cantly higher risk of death, where GI symptoms refer to one of the following symptoms: nausea, vomiting, diarrhea, or abdominal pain. Zeng et al. (2021) also found that gastrointestinal symptoms are associated with the severity of COVID-19, and the severe rate was more than 40% in COVID-19 patients with gastrointestinal symptoms. Ghimire et al. (2021) demonstrated that the presence of diarrhea as a presenting symptom is associated with increased disease severity and likely worse prognosis. Chan et al. (2022) have called for the consideration of COVID-19 in the di\ufb00erential diagnosis for patients who present with abdominal pain and gastrointestinal symptoms typical of gastroenteritis or surgical abdomen, even if they lack respiratory symptoms of COVID-19. These studies validate the reliability of our \ufb01ndings and demonstrate that studies utilizing the proposed predictive heterogeneity can uncover unusual risk factors that do not appear in analysis of the overall dataset. This example serves as an illustration of the potential bene\ufb01ts that our predictive heterogeneity can o\ufb00er to a wide range of scienti\ufb01c \ufb01elds. 
By exploiting the heterogeneity within a dataset, our approach can reveal novel patterns and relationships that may be overlooked in traditional analyses, leading to more reliable and comprehensive scientific discoveries.

Table 1: Top features of each learned subgroup and of the overall data on the COVID-19 dataset.

Group ID | Top Features
0   | SPO2, Diabetes, Renal, Neurologic, Pulmonary, Cardiovascular
1   | Diabetes, SPO2, Neurologic, Cardiovascular, Pulmonary, Renal
2   | Fever, Cough, Renal, Vomiting, Shortness of breath, Diarrhea
All | SPO2, Renal, Neurologic, Diabetes, Pulmonary, Cardiovascular

5.3 Benefit Generalization
In this section, we aim to evaluate the efficacy of our IM algorithm in enhancing the out-of-distribution (OOD) generalization performance of machine learning models. To this end, we conduct experiments on both simulated data and real-world colored MNIST data. Our results suggest that the sub-population structures learned by our IM algorithm can significantly benefit the OOD generalization of machine learning models.

Baselines. First, we compare with empirical risk minimization (ERM) and environment inference for invariant learning (EIIL, (Creager et al., 2021)), which infers the environments for learning invariance. Then we compare with the well-known KMeans algorithm, the most popular clustering algorithm. For our IM algorithm and KMeans, we involve three algorithms as backbones to leverage the learned sub-populations, including sub-population balancing and invariant learning methods.

Figure 7: Sub-population division on the simulated data of three methods ((a) KMeans, (b) EIIL, (c) our IM), where two colors denote two sub-populations.

The sub-population balancing simply equally weighs the learned sub-populations.
Invariant risk minimization (IRM, (Arjovsky et al., 2019)) and inter-environment gradient alignment (IGA, (Koyama and Yamaguchi, 2020)) are typical methods in OOD generalization, which take the sub-populations as input environments to learn invariant models.

5.3.1 Simulation Data of Sample Selection Bias
The input features X = [S, T, V]^T ∈ R^10 consist of stable features S ∈ R^5, noisy features T ∈ R^4 and the spurious feature V ∈ R:

S \sim N(0, 2I_5), \quad T \sim N(0, 2I_4), \quad Y = \theta_S^T S + h(S) + N(0, 0.5), \quad V \sim \mathrm{Laplace}\big(\mathrm{sign}(r)\cdot Y,\; 1/(5 \ln |r|)\big), (41)

where θ_S ∈ R^5 is the coefficient and h(S) = S_1 S_2 S_3 is the nonlinear term. |r| > 1 is a factor for each sub-population, and here the data heterogeneity is brought by the endogeneity with hidden variable (Fan et al., 2014). V is the spurious feature whose relationship with Y is unstable across sub-populations and is controlled by the factor r. Intuitively, sign(r) controls whether the spurious correlation between V and Y is positive or negative, and |r| controls its strength, i.e., larger |r| means stronger spurious correlation. In training, we generate 10000 points, where the major group contains 80% of the data with r = 1.9 (strong positive spurious correlation) and the minor group contains 20% of the data with r = −1.9 (strong negative spurious correlation). In testing, we test the performances of the two groups respectively, and we also set r = −2.3 and r = −2.7 to simulate stronger distributional shifts. We use linear regression and set K = 2 for all methods, and we report the mean-square errors (MSE) of all methods. The results over 10 runs are shown in Table 2.

Table 2: Results of the experiments on out-of-distribution generalization, including the simulated data and colored MNIST data. For the simulated data we report training sub-population errors (Major, r = 1.9; Minor, r = −1.9) and test errors (r = −2.3; r = −2.7); for colored MNIST we report train and test accuracy.

Method | Major (r = 1.9) | Minor (r = −1.9) | Test (r = −2.3) | Test (r = −2.7) | CMNIST Train Acc | CMNIST Test Acc
ERM             | 0.255(±0.024) | 0.740(±0.022) | 0.738(±0.035) | 0.737(±0.023) | 0.998(±0.001) | 0.406(±0.019)
EIIL            | 0.164(±0.014) | 1.428(±0.035) | 1.431(±0.061) | 1.431(±0.046) | 0.812(±0.006) | 0.610(±0.016)
KMeans + Balance | 0.231(±0.022) | 0.847(±0.024) | 0.846(±0.039) | 0.845(±0.026) | 0.999(±0.001) | 0.328(±0.021)
KMeans + IRM     | 0.231(±0.022) | 0.845(±0.024) | 0.844(±0.039) | 0.843(±0.026) | 0.947(±0.004) | 0.259(±0.021)
KMeans + IGA     | 0.235(±0.022) | 0.840(±0.023) | 0.839(±0.038) | 0.838(±0.027) | 0.997(±0.001) | 0.302(±0.021)
Ours + Balance   | 0.403(±0.041) | 0.423(±0.016) | 0.416(±0.022) | 0.416(±0.014) | 0.749(±0.012) | 0.692(±0.039)
Ours + IRM       | 0.391(±0.039) | 0.432(±0.016) | 0.430(±0.022) | 0.430(±0.014) | 0.759(±0.014) | 0.727(±0.047)
Ours + IGA       | 0.449(±0.037) | 0.426(±0.017) | 0.417(±0.022) | 0.417(±0.014) | 0.759(±0.012) | 0.713(±0.034)

From the results in Table 2, for both the simulated and colored MNIST data, the backbones with our IM algorithm achieve the best OOD generalization performances. Also, for the simulated data, the learned predictive heterogeneity enables the backbone algorithms to treat the majority and minority inside the data equally (i.e., a low performance gap between 'Major' and 'Minor'), and significantly benefits the OOD generalization. Further, we plot the learned sub-populations of our IM algorithm in Figure 7.
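The simulated selection-bias data of Equation 41 can be generated directly. In the sketch below the coefficient θ_S is an assumed value, since the text does not specify it, and N(0, 0.5) is read as the paper's N(mean, variance) convention, i.e. a standard deviation of √0.5:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n, r, theta_s):
    # Eq. (41): S ~ N(0, 2 I_5), T ~ N(0, 2 I_4),
    # Y = theta_S^T S + S1*S2*S3 + N(0, 0.5),
    # V ~ Laplace(sign(r) * Y, 1 / (5 ln|r|)), with |r| > 1.
    S = rng.normal(0.0, np.sqrt(2.0), size=(n, 5))
    T = rng.normal(0.0, np.sqrt(2.0), size=(n, 4))
    Y = S @ theta_s + S[:, 0] * S[:, 1] * S[:, 2] + rng.normal(0.0, np.sqrt(0.5), size=n)
    V = rng.laplace(np.sign(r) * Y, 1.0 / (5.0 * np.log(abs(r))))
    return np.column_stack([S, T, V]), Y

theta_s = np.ones(5)  # assumed coefficient, not specified in the text
Xmaj, ymaj = simulate(8000, r=1.9, theta_s=theta_s)   # major group, positive spurious corr.
Xmin, ymin = simulate(2000, r=-1.9, theta_s=theta_s)  # minor group, negative spurious corr.
```

The last column of X is the spurious feature V; its correlation with Y is strongly positive in the major group and strongly negative in the minor group, which is the heterogeneity the experiment probes.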
From Figure 7, compared with KMeans and EIIL, our predictive heterogeneity exploits the spurious correlation between V and Y , and enables the backbone algorithms to eliminate it. 5.3.2 Simulation Data of Hidden Variables Also, we add one more experiment to show that (1) when the chosen K is smaller than the ground-truth, the performances of our methods will drop but are still better than ERM (2) when the chosen K is larger, the performances are not a\ufb00ected much. The input features X = [S, T, V ] \u2208R10 consist of stable features S \u2208R5, noisy features T \u2208R4 and the spurious feature V \u2208R: S \u223cN(2, 2I5), T \u223cN(0, 2I4), Y = \u03b8T S S + S1S2S3 + N(0, 0.5), and we generate the spurious feature via: V = \u03b8e V Y + N(0, 0.3), where \u03b8e V varies across sub-populations and is dependent on which sub-population the data point belongs to. In training, we sample 8000 data points from e1 with \u03b81 V = 3.0, 1000 points from e2 with \u03b82 V = \u22121.0, 1000 points from e3 with \u03b83 V = \u22122.0 and 1000 points from e4 with \u03b84 V = \u22123.0. Therefore, the ground-truth number of sub-populations is 4. In testing, we test the performances on e4 with \u03b84 V = \u22123.0, which has strong distributional shifts from training data. The average MSE over 10 runs are shown in Figure 8. From the results, we can see that when K is smaller than the ground-truth, increasing K bene\ufb01ts the OOD generalization performance, and when K is larger, the performances are not a\ufb00ected much. For our IM algorithm, we think there are mainly two ways to choose K: \u2022 According to the predictive heterogeneity index: When the chosen K is smaller than the ground-truth, our measure tends to increase quickly when increasing K; and when K is larger than the ground-truth, the increasing speed will slow down, which could direct people to choose an appropriate K. 
• According to the prediction model: since our IM algorithm aims to learn sub-populations with different prediction mechanisms, one could compare the learned model parameters θ_1, ..., θ_K to judge whether K is much larger than the ground truth; i.e., if two resultant models are quite similar, K may be too large (one sub-population has been divided into two). For linear models, one can directly compare the coefficients. For deep models, one can calculate the transfer losses across sub-populations.

Figure 8: The OOD generalization error of our methods with Sub-population Balancing, IRM and IGA as backbones for the added experiments. The ground-truth sub-population number is 4.

Figure 9: Sub-population division on the MNIST data of our IM algorithm.

5.3.3 Colored MNIST
Following Arjovsky et al. (2019), we design a binary classification task constructed on the MNIST dataset. Firstly, digits 0∼4 are labeled Y = 0 and digits 5∼9 are labeled Y = 1. Secondly, noisy labels Ỹ are induced by randomly flipping the label Y with a probability of 0.2. Then we sample the color id V spuriously correlated with Ỹ as

V = \begin{cases} +\tilde{Y}, & \text{with probability } r, \\ -\tilde{Y}, & \text{with probability } 1 - r. \end{cases} (42)

In fact, r controls the spurious correlation between Ỹ and V. In training, we randomly sample 10000 data points and set r = 0.85, meaning that for 85% of the data V is positively correlated with Ỹ, while for the remaining 15% the spurious correlation becomes negative, which causes data heterogeneity w.r.t. V and Ỹ. In testing, we set r = 0 (strong negative spurious correlation), bringing strong shifts between training and testing. We plot the learned sub-populations of our IM algorithm in Figure 9.
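The label and color sampling of Equation 42 (labels flipped with probability 0.2, color id V aligned with Ỹ with probability r) can be sketched without the image pipeline; the digit sampling here is a uniform stand-in for the MNIST label distribution:

```python
import numpy as np

rng = np.random.default_rng(4)

def colored_labels(digits, r):
    # binary label: digits 0-4 -> Y = 0, digits 5-9 -> Y = 1
    y = (digits >= 5).astype(int)
    # noisy label: flip Y with probability 0.2
    y_tilde = np.where(rng.random(len(y)) < 0.2, 1 - y, y)
    # color id of Eq. (42): V = +y_tilde w.p. r, V = -y_tilde w.p. 1 - r
    sign = np.where(rng.random(len(y)) < r, 1, -1)
    return y_tilde, sign * y_tilde

digits = rng.integers(0, 10, size=10000)
yt_train, v_train = colored_labels(digits, r=0.85)  # training: mostly positive correlation
yt_test, v_test = colored_labels(digits, r=0.0)     # testing: correlation fully reversed
```

A model that leans on the color id V performs well in training and collapses at test time, which is exactly the failure mode Table 2 measures.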
From Figure 9, the learned sub-populations of our method also reflect the different directions of the spurious correlation between the digit labels Y and colors (red or green), which helps backbone methods avoid using colors to predict digits.

6 Related Work
To the best of our knowledge, data heterogeneity has not converged to a uniform formulation so far, and has different meanings in different fields. Li and Reynolds (1995) define heterogeneity in ecology based on system property and complexity or variability. Rosenbaum (2005) views the uncertainty of the potential outcome as unit heterogeneity in observational studies in economics. For graph data, heterogeneity refers to the various types of nodes and edges (Wang et al. (2019)). More recently, in machine learning, several works in causal learning (Peters et al., 2016; Arjovsky et al., 2019; Koyama and Yamaguchi, 2020; Creager et al., 2021) and robust learning (Sagawa et al., 2019) leverage heterogeneous data from multiple environments to improve the out-of-distribution generalization ability. Specifically, invariant learning methods (Arjovsky et al., 2019; Koyama and Yamaguchi, 2020; Creager et al., 2021; Zhou et al., 2022) leverage heterogeneous environments to learn invariant predictors that have uniform performances across environments. And in the distributionally robust optimization field, Sagawa et al. (2019) and Duchi et al. (2022) propose to optimize the worst-group prediction error to guarantee the OOD generalization performance. However, in machine learning, previous works have not provided a precise definition or sound quantification of data heterogeneity, which makes it confusing and hard to leverage for developing more rational machine learning algorithms.
As for clustering algorithms, most focus only on the covariates X, typified by KMeans and the Gaussian Mixture Model (GMM, (Reynolds, 2009)). However, the clusters learned by KMeans can only reflect heterogeneous structures in P(X), as shown by our experiments, whereas our predictive heterogeneity reflects the heterogeneity in P(Y|X). Expectation maximization (EM, (Moon, 1996)) can also be used for clustering. However, our IM algorithm has essential differences from EM: IM infers latent variables that maximize the predictive heterogeneity, while EM maximizes the likelihood. Also, there are methods (Creager et al., 2021) from the invariant learning field to infer environments. Though they could benefit OOD generalization, they lack a theoretical foundation and only work in some settings.

7 Discussion on differences with sub-group discovery
Subgroup discovery (SD, (Helal, 2016)) is aimed at extracting "interesting" relations among different variables (X) with respect to a target variable Y. The coverage and precision of each discovered group is the focus of such methods. To be specific, it learns a partition on P(X) such that some target label y dominates within each group. The most significant gap between subgroup discovery and our predictive heterogeneity lies in the pattern of distributional shift among clusters: for subgroup discovery, P(X) and P(Y) vary across subgroups but there is a universal P(Y|X), while for predictive heterogeneity P(Y|X) differs across sub-populations, which indicates diversified prediction mechanisms. It is this disparity of prediction mechanisms that inhibits the performance of a universal predictive model on a heterogeneous dataset, which is the emphasis of the OOD problem and group fairness.
We think sub-group discovery is more applicable for settings where the distributional shift is minor while high explainability is required, since it generates simplified rules that people can understand. Also, sub-group discovery methods are suitable for settings that only involve tabular data (typically from a relational database), where the input features have clear semantics. Our proposed method can deal with general machine learning settings, including complicated data (e.g., image data) that involves representation learning. Also, when one has to handle settings where data heterogeneity w.r.t. the prediction mechanism exists inside the data, our method is more applicable. However, both kinds of methods can be used to help people understand data and make more reasonable decisions.

8 Discussion on the Potential for Fairness
We find that combining our measure with algorithmic fairness is an interesting and promising direction, and we think our measure has the potential to deal with algorithmic bias. Our method can generate sub-populations with possibly different prediction mechanisms, which could help in the following aspects. Risk feature selection: we could select features according to our predictive heterogeneity measure to see which features bring the largest heterogeneity. If they are sensitive features, people should avoid their effects; if they are not, they could direct people to build better machine learning models. Examining algorithmic fairness: we could use the learned sub-populations to examine whether a given algorithm is fair by calculating the performance gap across the sub-populations.

9 Conclusion
We define predictive heterogeneity as the first quantitative formulation of the data heterogeneity that affects the prediction of machine learning models.
We demonstrate its theoretical properties and show that it bene\ufb01ts the out-of-distribution generalization performances. 22 \fPredictive Heterogeneity: Measures and Applications Appendix A. Proof of Proposition 6 Proof [Proof of Proposition 6] 1. Monotonicity: Because of E1 \u2286E2, HE1 V (X \u2192Y ) = sup E\u2208E1 IV(X \u2192Y |E) \u2212IV(X \u2192Y ) (43) \u2264sup E\u2208E2 IV(X \u2192Y |E) \u2212IV(X \u2192Y ) (44) = HE2 V (X \u2192Y ). (45) 2. Nonnegativity: According to the de\ufb01nition of the environment set, there exists E0 \u2208E such that for any e \u2208supp(E), X, Y |E = e is identically distributed as X, Y . Thus, we have HE V(X \u2192Y ) = sup E\u2208E [HV(Y |\u2205, E) \u2212HV(Y |X, E)] \u2212[HV(Y |\u2205) \u2212HV(Y |X)] (46) \u2265[HV(Y |\u2205, E0) \u2212HV(Y |X, E0)] \u2212[HV(Y |\u2205) \u2212HV(Y |X)] . (47) Speci\ufb01cally, HV(Y |X, E0) = Ee\u223cE0 \u0014 inf f\u2208V Ex,y\u223cX,Y |E=e[\u2212log f[x](y)] \u0015 (48) = Ee\u223cE0 \u0014 inf f\u2208V Ex,y\u223cX,Y [\u2212log f[x](y)] \u0015 (49) = HV(Y |X). (50) Similarly, HV(Y |\u2205, E0) = HV(Y |\u2205). Thus, HE V(X \u2192Y ) \u22650. 3. Boundedness: First, we have HV(Y |X, E) = Ee\u223cE \u0014 inf f\u2208V Ex,y\u223cX,Y |E=e[\u2212log f[x](y)] \u0015 (51) = Ee\u223cE \u0014 inf f\u2208V Ex\u223cX|E=e \u0002 Ey\u223cY |x,e[\u2212log f[x](y)] \u0003\u0015 (52) \u22650, (53) by noticing that Ey\u223cY |x[\u2212log f[x](y)] is the cross entropy between Y |x, e and f[x]. Next, HV(Y |\u2205, E) = Ee\u223cE \u0014 inf f\u2208V Ey\u223cY |E=e[\u2212log f[\u2205](y)] \u0015 (54) \u2264inf f\u2208V Ee\u223cE \u0002 Ey\u223cY |E=e[\u2212log f[\u2205](y)] \u0003 (55) = inf f\u2208V Ey\u223cY [\u2212log f[\u2205](y)] (56) = HV(Y |\u2205), (57) where Equation 55 is due to Jensen\u2019s inequality. 
23 \fJiashuo Liu, Jiayun Wu, Bo Li and Peng Cui Combing the above inequalities, HE V(X \u2192Y ) = sup E\u2208E [HV(Y |\u2205, E) \u2212HV(Y |X, E)] \u2212[HV(Y |\u2205) \u2212HV(Y |X)] (58) \u2264sup E\u2208E HV(Y |\u2205, E) \u2212[HV(Y |\u2205) \u2212HV(Y |X)] (59) \u2264HV(Y |\u2205) \u2212[HV(Y |\u2205) \u2212HV(Y |X)] (60) = HV(Y |X). (61) 4. Corner Case: According to Proposition 2 in Xu et al. (2020), H\u2126(Y |\u2205) = H(Y ). (62) H\u2126(Y |X) = H(Y |X). (63) By taking random variables R, S identically distributed as X, Y |E = e for e \u2208supp(E), we have H\u2126(Y |X, E = e) = H\u2126(S|R) = H(S|R) = H(Y |X, E = e). (64) Thus, H\u2126(Y |X, E) = Ee\u223cE[H\u2126(Y |X, E = e)] = Ee\u223cE[H(Y |X, E = e)] = H(Y |X, E). (65) Similarly, we have H\u2126(Y |\u2205, E) = H(Y |E). Thus, HE \u2126(X \u2192Y ) = sup E\u2208E [H\u2126(Y |\u2205, E) \u2212H\u2126(Y |X, E)] \u2212[H\u2126(Y |\u2205) \u2212H\u2126(Y |X)] (66) = sup E\u2208E [H(Y |E) \u2212H(Y |X, E)] \u2212[H(Y ) \u2212H(Y |X)] (67) = sup E\u2208E I(Y ; X|E) \u2212I(Y ; X) (68) = HE (X, Y ). (69) Appendix B. Proof of Theorem 7 Proof [Proof of Theorem 7] 1) HVG(Y |X) = inf f\u2208VG Ex\u223cX \u0002 Ey\u223cY |x[\u2212log f[x](y)] \u0003 (70) \u2264Ex\u223cX \" Ey\u223cY |x[\u2212log 1 \u221a 2\u03c0 \u00b7 1 \u221a 2\u03c0 exp \" \u2212(y \u2212g(x))2 2 \u00b7 1 2\u03c0 ## (71) = Ex\u223cX \u0002 Ey\u223cY |x[\u03c0(y \u2212g(x))2] \u0003 = \u03c0\u03c32. (72) 24 \fPredictive Heterogeneity: Measures and Applications Equation 71 holds by taking f[x] = N(g(x), 1 2\u03c0). 
2) Given the function family $\mathcal{V}_\sigma = \{f \mid f[x] = \mathcal{N}(\theta x, \sigma^2),\ \theta \in \mathbb{R},\ \sigma \text{ fixed}\}$, by expanding the Gaussian probability density function in the definition of predictive $\mathcal{V}$-information, it can be shown that
$$I_{\mathcal{V}_\sigma}(X \to Y) \propto \min_{k\in\mathbb{R}} -\mathbb{E}[(Y-kX)^2] + \mathrm{Var}(Y), \quad (73)$$
i.e., the predictive $\mathcal{V}$-information is proportional to the negative mean squared error plus the variance of the target, with a coefficient depending only on $\sigma$. The minimization problem is solved by $k = \mathbb{E}[XY]/\mathbb{E}[X^2] = 1$. Substituting $k$ into (73),
$$I_{\mathcal{V}_\sigma}(X\to Y) \propto -\mathbb{E}[\epsilon^2] + \mathrm{Var}(X+\epsilon) = \mathrm{Var}(X) = \mathbb{E}[X^2].$$
Denote $\mathrm{supp}(E) = \{E_1, E_2\}$. Let $Q$ be the joint distribution of $(X, \epsilon, E)$, with marginals $Q(E_1) = \alpha$ and $Q(E_2) = 1-\alpha$. Abbreviate $Q(X,\epsilon \mid E=E_1)$ by $P_1(X,\epsilon)$ and $Q(X,\epsilon \mid E=E_2)$ by $P_2(X,\epsilon)$. Similar to (73),
$$I_{\mathcal{V}_\sigma}(X\to Y \mid E) \propto \min_k -\mathbb{E}[(Y-kX)^2 \mid E] + \mathrm{Var}(Y \mid E).$$
For $E = E_1$, the minimization problem is solved by $k = \mathbb{E}_{P_1}[XY]/\mathbb{E}_{P_1}[X^2]$. Thus,
$$I_{\mathcal{V}_\sigma}(X\to Y \mid E=E_1) \propto -\mathbb{E}_{P_1}\Big[\Big(Y - \frac{\mathbb{E}_{P_1}[XY]}{\mathbb{E}_{P_1}[X^2]} X\Big)^2\Big] + \mathrm{Var}_{P_1}(Y) = -\mathbb{E}_{P_1}[Y^2] + \frac{\mathbb{E}^2_{P_1}[XY]}{\mathbb{E}_{P_1}[X^2]} + \big(\mathbb{E}_{P_1}[Y^2] - \mathbb{E}^2_{P_1}[Y]\big) = -\mathbb{E}^2_{P_1}[Y] + \frac{\mathbb{E}^2_{P_1}[XY]}{\mathbb{E}_{P_1}[X^2]}.$$
Similarly, we have
$$I_{\mathcal{V}_\sigma}(X\to Y \mid E=E_2) \propto -\mathbb{E}^2_{P_2}[Y] + \frac{\mathbb{E}^2_{P_2}[XY]}{\mathbb{E}_{P_2}[X^2]}. \quad (80)$$
Notably, $\mathbb{E}_{P_1}[X^2]$ and $\mathbb{E}_{P_2}[X^2]$ are constrained by $\alpha$ and $\mathbb{E}[X^2]$:
$$\mathbb{E}[X^2] = \mathbb{E}\big[\mathbb{E}[X^2\mid E]\big] = \alpha\mathbb{E}_{P_1}[X^2] + (1-\alpha)\mathbb{E}_{P_2}[X^2].$$
Similarly, $\mathbb{E}[X^2] = \mathbb{E}[XY] = \alpha\mathbb{E}_{P_1}[XY] + (1-\alpha)\mathbb{E}_{P_2}[XY]$ and $0 = \mathbb{E}[Y] = \alpha\mathbb{E}_{P_1}[Y] + (1-\alpha)\mathbb{E}_{P_2}[Y]$. The moments of $P_2$ can thereafter be represented by those of $P_1$:
$$\mathbb{E}_{P_2}[X^2] = \frac{\mathbb{E}[X^2] - \alpha\mathbb{E}_{P_1}[X^2]}{1-\alpha}, \quad \mathbb{E}_{P_2}[XY] = \frac{\mathbb{E}[X^2] - \alpha\mathbb{E}_{P_1}[XY]}{1-\alpha}, \quad \mathbb{E}_{P_2}[Y] = \frac{-\alpha\mathbb{E}_{P_1}[Y]}{1-\alpha}.$$
Substituting into (80),
$$I_{\mathcal{V}_\sigma}(X\to Y\mid E=E_2) \propto -\frac{\alpha^2}{(1-\alpha)^2}\mathbb{E}^2_{P_1}[Y] + \frac{1}{1-\alpha}\frac{\big(\mathbb{E}[X^2]-\alpha\mathbb{E}_{P_1}[XY]\big)^2}{\mathbb{E}[X^2]-\alpha\mathbb{E}_{P_1}[X^2]}.$$
Thus,
$$\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_\sigma}(X\to Y) = \sup_{E\in\mathcal{E}} -I_{\mathcal{V}_\sigma}(X\to Y) + \alpha I_{\mathcal{V}_\sigma}(X\to Y\mid E=E_1) + (1-\alpha)I_{\mathcal{V}_\sigma}(X\to Y\mid E=E_2)$$
$$\propto \sup_{E\in\mathcal{E}} -\mathbb{E}[X^2] - \alpha\mathbb{E}^2_{P_1}[Y] + \alpha\frac{\mathbb{E}^2_{P_1}[XY]}{\mathbb{E}_{P_1}[X^2]} - \frac{\alpha^2}{1-\alpha}\mathbb{E}^2_{P_1}[Y] + \frac{\big(\mathbb{E}[X^2]-\alpha\mathbb{E}_{P_1}[XY]\big)^2}{\mathbb{E}[X^2]-\alpha\mathbb{E}_{P_1}[X^2]}$$
$$= \sup_{E\in\mathcal{E}} -\frac{\alpha}{1-\alpha}\mathbb{E}^2_{P_1}[X+\epsilon] + \frac{\alpha\,\mathbb{E}^2_{P_1}[X\epsilon]\,\mathbb{E}[X^2]}{\mathbb{E}_{P_1}[X^2]\big(\mathbb{E}[X^2]-\alpha\mathbb{E}_{P_1}[X^2]\big)}.$$
Assuming $X \perp \epsilon \mid E$,
$$\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_\sigma}(X\to Y) \propto \sup_{E\in\mathcal{E}} -\frac{\alpha}{1-\alpha}\mathbb{E}^2_{P_1}[X+\epsilon] \le 0.$$
From Proposition 6, we have $\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_\sigma}(X\to Y) \ge 0$. Thus, $\mathcal{H}^{\mathcal{E}}_{\mathcal{V}_\sigma}(X\to Y) = 0$.

Appendix C. Proof of Linear Cases (Theorem 8 and 9)

Proof [Proof of Theorem 8] For ease of notation, we denote $r(E^\ast)$ by $r_e$, $\sigma(E^\ast)$ by $\sigma_e$, and $\sigma(E^\ast)\cdot\epsilon_v$ by $\epsilon_e$, and we omit the superscript $C$ of $\mathcal{H}^C_\mathcal{V}$. First, we calculate $H_\mathcal{V}[Y|\emptyset]$:
$$H_\mathcal{V}[Y|\emptyset] = \frac{1}{2\sigma^2}\mathrm{Var}(Y) + \log\sigma + \frac{1}{2}\log 2\pi, \qquad H_\mathcal{V}[Y|\emptyset, E^\ast] = \frac{1}{2\sigma^2}\mathbb{E}_{E^\ast}\big[\mathrm{Var}(Y|E^\ast)\big] + \log\sigma + \frac{1}{2}\log 2\pi.$$
Therefore, we have
$$H_\mathcal{V}[Y|\emptyset, E^\ast] - H_\mathcal{V}[Y|\emptyset] = -\frac{1}{2\sigma^2}\mathrm{Var}\big(\mathbb{E}[Y|E^\ast]\big) \le 0.$$
As for $H_\mathcal{V}[Y|X]$, we have
$$H_\mathcal{V}[Y|X] = \inf_{h_S, h_V} \frac{1}{2\sigma^2}\mathbb{E}_{X,Y}\big[\|Y - (h_S S + h_V V)\|^2\big] = \inf_{h_S, h_V} \frac{1}{2\sigma^2}\mathbb{E}_{E^\ast}\Big[\mathbb{E}\big[\|f(S) + \epsilon_Y - (h_S S + h_V(r_e f(S) + \epsilon_e))\|^2 \mid E^\ast\big]\Big],$$
where we substitute $h_S - \beta$ for $h_S$ here.
Then we have
$$2\sigma^2 H_\mathcal{V}[Y|X] = \inf_{h_S,h_V} \mathbb{E}_{E^\ast}\Big[\mathbb{E}\big[\|(1-h_V r_e)f(S) + \epsilon_Y - h_S S - h_V\epsilon_e\|^2 \mid E^\ast\big]\Big] = \inf_{h_S,h_V} \mathbb{E}_{E^\ast}\Big[\mathbb{E}\big[\|(1-h_V r_e)f(S) - h_S S\|^2 \mid E^\ast\big]\Big] + \sigma_Y^2 + h_V^2\,\mathbb{E}_{E^\ast}[\sigma_e^2];$$
note that here, for $e_i, e_j \in \mathrm{supp}(E^\ast)$, we assume $P^{e_i}(S, Y) = P^{e_j}(S, Y)$ (we choose such an $E^\ast$ as one possible split). The solution for $h_S, h_V$ is
$$h_S = \frac{\mathrm{Var}(r_e)\mathbb{E}[f^2(S)]\mathbb{E}[f(S)S] + \mathbb{E}[\sigma_e^2]\mathbb{E}[f(S)S]}{\mathbb{E}[r_e^2]\mathbb{E}[f^2(S)]\mathbb{E}[S^2] + \mathbb{E}[\sigma_e^2]\mathbb{E}[S^2] - \mathbb{E}^2[r_e]\mathbb{E}^2[f(S)S]}, \qquad h_V = \frac{\mathbb{E}[r_e]\big(\mathbb{E}[f^2(S)]\mathbb{E}[S^2] - \mathbb{E}^2[f(S)S]\big)}{\mathbb{E}[r_e^2]\mathbb{E}[f^2(S)]\mathbb{E}[S^2] + \mathbb{E}[\sigma_e^2]\mathbb{E}[S^2] - \mathbb{E}^2[r_e]\mathbb{E}^2[f(S)S]}.$$
According to the assumption that $\mathbb{E}[f(S)S] = 0$, we have
$$h_S = 0, \qquad h_V = \frac{\mathbb{E}[r(E^\ast)]\mathbb{E}[f^2]}{\mathbb{E}[r^2(E^\ast)]\mathbb{E}[f^2] + \mathbb{E}[\sigma^2(E^\ast)]}.$$
Therefore, we have
$$2\sigma^2 H_\mathcal{V}[Y|X] = \mathbb{E}_{E^\ast}\big[\mathbb{E}[\|(1-h_V r_e)f(S)\|^2 \mid E^\ast]\big] + \sigma_Y^2 + h_V^2\,\mathbb{E}_{E^\ast}[\sigma_e^2] = \frac{\mathrm{Var}(r_e)\mathbb{E}[f^2] + \mathbb{E}[\sigma^2(E^\ast)]}{\mathbb{E}[r_e^2]\mathbb{E}[f^2] + \mathbb{E}[\sigma^2(E^\ast)]}\,\mathbb{E}[f^2(S)] + \sigma_Y^2,$$
$$2\sigma^2 H_\mathcal{V}[Y|X, E^\ast] = \sigma_Y^2 + \mathbb{E}\Bigg[\Big(\frac{1}{\frac{r_e^2\mathbb{E}[f^2]}{\sigma_e^2}+1}\Big)^2\Bigg]\mathbb{E}[f^2] + \mathbb{E}_{E^\ast}\Bigg[\Big(\frac{1}{\frac{r_e}{\sigma_e} + \frac{\sigma_e}{r_e\mathbb{E}[f^2]}}\Big)^2\Bigg].$$
Note that here we simply set $\sigma = 1$ in the main body. We then have
$$\mathcal{H}_\mathcal{V}(X\to Y) \approx \frac{\mathrm{Var}(r_e)\mathbb{E}[f^2] + \mathbb{E}[\sigma^2(E^\ast)]}{\mathbb{E}[r_e^2]\mathbb{E}[f^2] + \mathbb{E}[\sigma^2(E^\ast)]}\,\mathbb{E}[f^2(S)].$$
The approximation error is bounded by $\frac{1}{2}\max\big(\sigma_Y^2,\ R(r(E^\ast), \sigma(E^\ast), \mathbb{E}[f^2])\big)$, where $R(r(E^\ast), \sigma(E^\ast), \mathbb{E}[f^2])$ is defined as
$$R(r(E^\ast), \sigma(E^\ast), \mathbb{E}[f^2]) = \mathbb{E}\Bigg[\Big(\frac{1}{\frac{r_e^2\mathbb{E}[f^2]}{\sigma_e^2}+1}\Big)^2\Bigg]\mathbb{E}[f^2] + \mathbb{E}_{E^\ast}\Bigg[\Big(\frac{1}{\frac{r_e}{\sigma_e} + \frac{\sigma_e}{r_e\mathbb{E}[f^2]}}\Big)^2\Bigg].$$

Proof [Proof of Theorem 9] Similar to the proof of Theorem 8 above.

Appendix D. Proof of the Error Bound for Finite Sample Estimation (Theorem 11)

In this section, we prove the error bound for estimating the predictive heterogeneity with the empirical predictive heterogeneity.
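Several steps above (Appendices B and C) reduce predictive $\mathcal{V}$-information terms to least-squares problems with closed-form minimizers. The simplest instance, $k = \mathbb{E}[XY]/\mathbb{E}[X^2]$ for $Y = X + \epsilon$, is easy to sanity-check numerically; a toy sketch with synthetic data (not from the paper), where population expectations are replaced by sample averages:

```python
import random

random.seed(0)

# Toy data for Y = X + eps, as in the setting of Theorem 7 (2).
n = 20000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
eps = [random.gauss(0.0, 0.5) for _ in range(n)]
ys = [x + e for x, e in zip(xs, eps)]

def mse(k):
    """Empirical E[(Y - kX)^2]."""
    return sum((y - k * x) ** 2 for x, y in zip(xs, ys)) / n

# Closed-form minimizer k = E[XY] / E[X^2]; with X independent of eps it is ~1.
k_star = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# mse(k) is a quadratic in k, so k_star minimizes it exactly.
assert mse(k_star) <= mse(k_star + 0.1) and mse(k_star) <= mse(k_star - 0.1)
assert abs(k_star - 1.0) < 0.05
```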
Before the proof of Theorem 11, which is inspired by Xu et al. (2020), we introduce three lemmas.

Lemma 14 Assume $\forall x\in\mathcal{X}, \forall y\in\mathcal{Y}, \forall f\in\mathcal{V}$, $\log f[x](y) \in [-B, B]$ where $B>0$. Define a function class $\mathcal{G}^k_\mathcal{V} = \{g \mid g(x,y) = \log f[x](y)\, q(E=e_k \mid x,y),\ f\in\mathcal{V},\ q\in\mathcal{Q}\}$. Denote the Rademacher complexity of $\mathcal{G}$ with $N$ samples by $\mathcal{R}_N(\mathcal{G})$. Define
$$\hat f_k = \arg\inf_f \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i).$$
Then for any $q\in\mathcal{Q}$ and any $\delta\in(0,1)$, with probability at least $1-\delta$, we have
$$\Bigg| q(E=e_k) H_\mathcal{V}(Y\mid X, E=e_k) - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log \hat f_k[x_i](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg| \le 2\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}.$$

Proof Apply McDiarmid's inequality to the function $\Phi(D)$ defined as
$$\Phi(D) = \sup_{f\in\mathcal{V}, q\in\mathcal{Q}} \Bigg| q(E=e_k)\,\mathbb{E}_q[-\log f[x](y) \mid E=e_k] - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg|.$$
Let $D$ and $D'$ be two identical datasets except for one data point $x_j \ne x'_j$. By the triangle inequality,
$$\Phi(D) - \Phi(D') \le \sup_{f\in\mathcal{V},q\in\mathcal{Q}} \Bigg| \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i) - \frac{1}{|D'|}\sum_{x'_i,y'_i\in D'} -\log f[x'_i](y'_i)\, q(E=e_k\mid x'_i,y'_i) \Bigg| = \sup_{f\in\mathcal{V},q\in\mathcal{Q}} \frac{1}{|D|}\Big| \log f[x_j](y_j)\, q(E=e_k\mid x_j,y_j) - \log f[x'_j](y'_j)\, q(E=e_k\mid x'_j,y'_j) \Big| \le \frac{2B}{|D|}.$$
According to McDiarmid's inequality, for any $\delta \in (0,1)$, with probability at least $1-\delta$, we have
$$\Phi(D) \le \mathbb{E}_D[\Phi(D)] + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}.$$
Next we derive a bound for $\mathbb{E}_D[\Phi(D)]$. Consider a ghost dataset $D'$ independently and identically drawn from $q(X,Y) = P(X,Y)$ with the same size as $D$, and notice that
$$q(E=e_k)\,\mathbb{E}_q[-\log f[x](y)\mid E=e_k] = \mathbb{E}_{D'}\Bigg[\frac{1}{|D'|}\sum_{x'_i,y'_i\in D'} -\log f[x'_i](y'_i)\, q(E=e_k\mid x'_i,y'_i)\Bigg].$$
Thus, $\mathbb{E}_D[\Phi(D)]$ can be reformulated and bounded as
$$\mathbb{E}_D[\Phi(D)] \le \mathbb{E}_{D,D'}\Bigg[\sup_{f\in\mathcal{V},q\in\mathcal{Q}} \frac{1}{|D|}\Bigg|\sum_{x_i,y_i\in D} \log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i) - \sum_{x'_i,y'_i\in D'} \log f[x'_i](y'_i)\, q(E=e_k\mid x'_i,y'_i)\Bigg|\Bigg]$$
$$\le \mathbb{E}_{D,\sigma}\Bigg[\sup_{f\in\mathcal{V},q\in\mathcal{Q}} \frac{1}{|D|}\Bigg|\sum_{x_i,y_i\in D} \sigma_i \log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i)\Bigg|\Bigg] + \mathbb{E}_{D',\sigma}\Bigg[\sup_{f\in\mathcal{V},q\in\mathcal{Q}} \frac{1}{|D'|}\Bigg|\sum_{x'_i,y'_i\in D'} \sigma_i
\log f[x'_i](y'_i)\, q(E=e_k\mid x'_i,y'_i)\Bigg|\Bigg] = 2\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}),$$
where the $\sigma_i$ are independent Rademacher variables. The first inequality follows from Jensen's inequality and the convexity of $\sup$; the symmetrization step holds due to the symmetry of $\log f[x_i](y_i)q(E=e_k\mid x_i,y_i) - \log f[x'_i](y'_i)q(E=e_k\mid x'_i,y'_i)$ and the argument that Rademacher variables preserve the expected sum of symmetric random variables under a convex mapping (Ledoux and Talagrand (1991), Lemma 6.3).

Substituting this bound into the McDiarmid inequality above, for any $\delta\in(0,1)$, with probability at least $1-\delta$, the following holds for all $f\in\mathcal{V}$ and $q\in\mathcal{Q}$:
$$\Bigg| q(E=e_k)\,\mathbb{E}_q[-\log f[x](y)\mid E=e_k] - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg| \le 2\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}. \quad (\ast)$$
Let $\tilde f_k = \arg\inf_f \{q(E=e_k)\,\mathbb{E}_q[-\log f[x](y)\mid E=e_k]\}$ and $\hat f_k = \arg\inf_f \{\frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i)\}$. Now we have
$$q(E=e_k)\,\mathbb{E}_q\big[-\log\tilde f_k[x](y)\mid E=e_k\big] - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\tilde f_k[x_i](y_i)\, q(E=e_k\mid x_i,y_i)$$
$$\le q(E=e_k)H_\mathcal{V}(Y\mid X,E=e_k) - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\hat f_k[x_i](y_i)\, q(E=e_k\mid x_i,y_i)$$
$$\le q(E=e_k)\,\mathbb{E}_q\big[-\log\hat f_k[x](y)\mid E=e_k\big] - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\hat f_k[x_i](y_i)\, q(E=e_k\mid x_i,y_i).$$
Combining these two-sided bounds with $(\ast)$, the lemma is proved.

Lemma 15 Assume $\forall x\in\mathcal{X}, \forall y\in\mathcal{Y}, \forall f\in\mathcal{V}$, $\log f[\emptyset](y)\in[-B,B]$ where $B>0$. The definitions of $\mathcal{G}^k_\mathcal{V}$ and $\mathcal{R}_N(\mathcal{G})$ follow from Lemma 14. Define $\hat f_k = \arg\inf_f \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[\emptyset](y_i)\, q(E=e_k\mid x_i,y_i)$.
Then for any $q\in\mathcal{Q}$ and any $\delta\in(0,1)$, with probability at least $1-\delta$, we have
$$\Bigg| q(E=e_k)H_\mathcal{V}(Y\mid E=e_k) - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\hat f_k[\emptyset](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg| \le 2\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}.$$

Proof Similar to Lemma 14, we can prove that
$$\Bigg| q(E=e_k)H_\mathcal{V}(Y\mid E=e_k) - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\hat f_k[\emptyset](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg| \le 2\mathcal{R}_{|D|}(\mathcal{G}^k_{\mathcal{V}_\emptyset}) + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}},$$
where $\mathcal{G}^k_{\mathcal{V}_\emptyset} = \{g\mid g(x,y) = \log f[\emptyset](y)\, q(E=e_k\mid x,y),\ f\in\mathcal{V},\ q\in\mathcal{Q}\}$. According to the definition of the predictive family $\mathcal{V}$ (Xu et al. (2020), Definition 1), for every $f\in\mathcal{V}$ there exists $f'\in\mathcal{V}$ such that $f[\emptyset] = f'[x]$ for all $x\in\mathcal{X}$. Thus $\mathcal{G}^k_{\mathcal{V}_\emptyset}\subset \mathcal{G}^k_\mathcal{V}$, and therefore $\mathcal{R}_{|D|}(\mathcal{G}^k_{\mathcal{V}_\emptyset}) \le \mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V})$. Substituting into the bound above, the lemma is proved.

Lemma 16 ((Xu et al., 2020), Theorem 1) Assume $\forall x\in\mathcal{X},\forall y\in\mathcal{Y},\forall f\in\mathcal{V}$, $\log f[x](y)\in[-B,B]$ where $B>0$. Define a function class $\mathcal{G}^\ast_\mathcal{V} = \{g\mid g(x,y) = \log f[x](y),\ f\in\mathcal{V}\}$; the definition of $\mathcal{R}_N(\mathcal{G})$ follows from Lemma 14. Then for any $\delta\in(0,0.5)$, with probability at least $1-2\delta$, we have
$$\big| I_\mathcal{V}(X\to Y) - \hat I_\mathcal{V}(X\to Y) \big| \le 4\mathcal{R}_{|D|}(\mathcal{G}^\ast_\mathcal{V}) + 2B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}.$$

Finally, we are prepared to prove Theorem 11.

Proof [Proof of Theorem 11] We first bound the empirical estimation error by the sum of the terms appearing in Lemmas 14, 15, and 16.
$$\big| \mathcal{H}^{\mathcal{E}_K}_\mathcal{V}(X\to Y) - \hat{\mathcal{H}}^{\mathcal{E}_K}_\mathcal{V}(X\to Y; D) \big| \le \Big| \sup_{E\in\mathcal{E}_K} I_\mathcal{V}(X\to Y\mid E) - \sup_{E\in\mathcal{E}_K} \hat I_\mathcal{V}(X\to Y\mid E; D) \Big| + \big| I_\mathcal{V}(X\to Y) - \hat I_\mathcal{V}(X\to Y; D) \big|$$
$$\le \sup_{E\in\mathcal{E}_K} \big| I_\mathcal{V}(X\to Y\mid E) - \hat I_\mathcal{V}(X\to Y\mid E; D) \big| + \big| I_\mathcal{V}(X\to Y) - \hat I_\mathcal{V}(X\to Y; D) \big|$$
$$= \sup_{q\in\mathcal{Q}} \Bigg| \sum_{k=1}^K \big[q(E=e_k)H_\mathcal{V}(Y\mid E=e_k) - q(E=e_k)H_\mathcal{V}(Y\mid X, E=e_k)\big] - \sum_{k=1}^K \big[q(E=e_k)\hat H_\mathcal{V}(Y\mid E=e_k; D) - q(E=e_k)\hat H_\mathcal{V}(Y\mid X, E=e_k; D)\big] \Bigg| + \big| I_\mathcal{V}(X\to Y) - \hat I_\mathcal{V}(X\to Y; D) \big|$$
$$\le \sum_{k=1}^K \sup_{q\in\mathcal{Q}} \big| q(E=e_k)H_\mathcal{V}(Y\mid E=e_k) - q(E=e_k)\hat H_\mathcal{V}(Y\mid E=e_k; D) \big| + \sum_{k=1}^K \sup_{q\in\mathcal{Q}} \big| q(E=e_k)H_\mathcal{V}(Y\mid X, E=e_k) - q(E=e_k)\hat H_\mathcal{V}(Y\mid X, E=e_k; D) \big| + \big| I_\mathcal{V}(X\to Y) - \hat I_\mathcal{V}(X\to Y; D) \big|,$$
where the empirical conditional entropies are the weighted empirical losses of
$$\hat f_k = \arg\inf_f \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[x_i](y_i)\, q(E=e_k\mid x_i,y_i) \quad \text{and} \quad \hat f'_k = \arg\inf_f \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log f[\emptyset](y_i)\, q(E=e_k\mid x_i,y_i),$$
for any $q\in\mathcal{Q}$ and $1\le k\le K$. For simplicity, let
$$\mathrm{Err}_k = \sup_{q\in\mathcal{Q}} \Bigg| q(E=e_k)H_\mathcal{V}(Y\mid X, E=e_k) - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\hat f_k[x_i](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg|,$$
$$\mathrm{Err}'_k = \sup_{q\in\mathcal{Q}} \Bigg| q(E=e_k)H_\mathcal{V}(Y\mid E=e_k) - \frac{1}{|D|}\sum_{x_i,y_i\in D} -\log\hat f'_k[\emptyset](y_i)\, q(E=e_k\mid x_i,y_i) \Bigg|, \qquad \mathrm{Err}^\ast = \big| I_\mathcal{V}(X\to Y) - \hat I_\mathcal{V}(X\to Y; D) \big|.$$
Then, by Lemmas 14, 15, and 16,
$$\Pr\Bigg[ \big|\mathcal{H}^{\mathcal{E}_K}_\mathcal{V} - \hat{\mathcal{H}}^{\mathcal{E}_K}_\mathcal{V}(D)\big| > 4(K+1)\mathcal{R}_{|D|}(\mathcal{G}_\mathcal{V}) + 2(K+1)B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}} \Bigg] \le \Pr\Bigg[ \sum_{k=1}^K \mathrm{Err}_k + \sum_{k=1}^K \mathrm{Err}'_k + \mathrm{Err}^\ast > \sum_{k=1}^K 4\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) + 4\mathcal{R}_{|D|}(\mathcal{G}^\ast_\mathcal{V}) + 2(K+1)B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}} \Bigg]$$
$$\le \sum_{k=1}^K \Pr\Bigg[\mathrm{Err}_k > 2\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}\Bigg] + \sum_{k=1}^K \Pr\Bigg[\mathrm{Err}'_k > 2\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) + B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}\Bigg] + \Pr\Bigg[\mathrm{Err}^\ast > 4\mathcal{R}_{|D|}(\mathcal{G}^\ast_\mathcal{V}) + 2B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}}\Bigg] \le 2(K+1)\delta,$$
where the first inequality uses $\mathcal{G}^k_\mathcal{V} = \mathcal{G}_\mathcal{V}$, $\mathcal{G}^\ast_\mathcal{V} \subset \mathcal{G}_\mathcal{V}$, and therefore $\mathcal{R}_{|D|}(\mathcal{G}^k_\mathcal{V}) \le \mathcal{R}_{|D|}(\mathcal{G}_\mathcal{V})$ and $\mathcal{R}_{|D|}(\mathcal{G}^\ast_\mathcal{V}) \le \mathcal{R}_{|D|}(\mathcal{G}_\mathcal{V})$. Hence,
$$\Pr\Bigg[ \big|\mathcal{H}^{\mathcal{E}_K}_\mathcal{V}(X\to Y) - \hat{\mathcal{H}}^{\mathcal{E}_K}_\mathcal{V}(X\to Y; D)\big| \le 4(K+1)\mathcal{R}_{|D|}(\mathcal{G}_\mathcal{V}) + 2(K+1)B\sqrt{\frac{2\log\frac{1}{\delta}}{|D|}} \Bigg] \ge 1 - 2(K+1)\delta.$$

Appendix E. Proof of Theorem 12

Proof [Proof of Theorem 12] The objective function of our IM algorithm is directly derived from the definition of empirical predictive heterogeneity in Definition 10. For the regression task, we assume the predictive family
$$\mathcal{V}_1 = \{g : g[x] = \mathcal{N}(f_\theta(x), \sigma^2),\ f \text{ is the regression model and } \theta \text{ is learnable},\ \sigma = 1.0 \text{ (fixed)}\},$$
where we only care about the output of the model; the noise scale of the Gaussian distribution is often ignored, so we simply set $\sigma = 1.0$ as a fixed term.
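With $\sigma$ fixed at $1.0$, the negative log-likelihood under $\mathcal{V}_1$ is $\frac{1}{2}(y - f_\theta(x))^2$ plus a constant, so minimizing the Gaussian log-loss coincides with minimizing MSE. A small numerical check of this identity (the prediction/target pairs below are arbitrary stand-ins):

```python
import math

def gauss_nll(y, mean, sigma=1.0):
    # Negative log-density of N(mean, sigma^2) at y.
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mean) ** 2 / (2 * sigma ** 2)

const = 0.5 * math.log(2 * math.pi)  # additive constant when sigma = 1

for y, pred in [(0.3, 0.1), (-1.2, 0.4), (2.0, 2.5)]:
    squared_err = 0.5 * (y - pred) ** 2
    # NLL = half squared error + a theta-independent constant.
    assert abs(gauss_nll(y, pred) - (squared_err + const)) < 1e-12
```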
Then for each environment e \u2208supp(E\u2217), the IV(X \u2192Y |E\u2217= e) becomes IV(X \u2192Y |E\u2217= e) \u221dmin \u03b8 E[\u2225Y \u2212f\u03b8(X)\u22252|E\u2217= e] \u2212Var(Y |E\u2217), (161) which corresponds with the MSE loss and the proposed regularizer in Equation 35. For the classi\ufb01cation task, the derivation is similar, and the regularizer becomes the entropy of Y in sub-population e and the loss function becomes the cross-entropy loss." + }, + { + "url": "http://arxiv.org/abs/2206.02990v2", + "title": "Enhancing Distributional Stability among Sub-populations", + "abstract": "Enhancing the stability of machine learning algorithms under distributional\nshifts is at the heart of the Out-of-Distribution (OOD) Generalization problem.\nDerived from causal learning, recent works of invariant learning pursue strict\ninvariance with multiple training environments. Although intuitively\nreasonable, strong assumptions on the availability and quality of environments\nare made to learn the strict invariance property. In this work, we come up with\nthe ``distributional stability\" notion to mitigate such limitations. It\nquantifies the stability of prediction mechanisms among sub-populations down to\na prescribed scale. Based on this, we propose the learnability assumption and\nderive the generalization error bound under distribution shifts. Inspired by\ntheoretical analyses, we propose our novel stable risk minimization (SRM)\nalgorithm to enhance the model's stability w.r.t. shifts in prediction\nmechanisms ($Y|X$-shifts). Experimental results are consistent with our\nintuition and validate the effectiveness of our algorithm. 
The code can be\nfound at https://github.com/LJSthu/SRM.", + "authors": "Jiashuo Liu, Jiayun Wu, Jie Peng, Xiaoyu Wu, Yang Zheng, Bo Li, Peng Cui", + "published": "2022-06-07", + "updated": "2024-02-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION Traditional machine learning algorithms with empirical risk minimization (ERM) are vulnerable when exposed to data drawn out of the training distribution. In order to mitigate the failures in the out-ofdistribution (OOD) generalization, invariant learning Proceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS) 2024, Valencia, Spain. PMLR: Volume 238. Copyright 2024 by the author(s). methods (Arjovsky et al., 2019; Ahuja et al., 2020, 2021; Peters et al., 2016) are proposed to learn prediction mechanisms that are strictly invariant across given multiple environments. Such strict invariance property enables models to generalize under distributional shifts (Peters et al., 2016; Rojas-Carulla et al., 2018; Arjovsky et al., 2019; Koyama et al., 2020). To fulfill the promise of invariant learning, environment labels have to be provided to achieve strict invariance. Moreover, the concept of strict invariance even assumes the access of all possible environments. However, such requirement is unrealistic in real-world applications where modern datasets are often constructed by amalgamating data from various sources, thus significantly limiting the applicability of invariant learning techniques. Recent efforts, such as EIIL (Creager et al., 2021) and HRM (Liu et al., 2021a,b), have focused on generating pseudo environment labels to facilitate invariant learning. Nonetheless, the characteristics of these pseudo environments, the extent of invariance they enable, and even the validity of the problem framework itself, remain unclear and inadequately justified. 
To address these limitations, our research shifts focus towards developing models that generalize out-of-distribution within contexts of latent heterogeneity, where the training data is gathered from multiple sources but lacks explicit source labels. In this setting, the training data exhibits sub-population structures, with possibly distinct prediction mechanisms across sub-populations. To tackle this problem, we introduce an approach that extends strict invariance to the concept of "distributional stability". This metric assesses the consistency of prediction mechanisms across sub-populations. Unlike strict invariance, which is binary (it either holds or it does not), distributional stability provides a continuous measure that quantifies the degree of predictive-mechanism stability across varying contexts. This nuanced approach allows for a more refined assessment of model robustness in handling distribution shifts. In Section 2, we formally define distributional stability and introduce its properties as well as its relationships with strict invariance. We also demonstrate its relationship with distributional robustness from the distributionally robust optimization literature (Duchi et al., 2018, 2019). In Section 3, we characterize the learnability of the problem to rationalize the problem setting itself and clarify what kind of target distributions can be generalized to, and we derive the OOD generalization error bound for this problem based on distributional stability. Inspired by the theoretical results, we find that models with strong distributional stability generalize well with respect to shifts in prediction mechanisms ($Y|X$-shifts).
Thus, we propose an empirical algorithm named Stable Risk Minimization (SRM) in Section 4; experimental results on both simulated and real-world data validate the effectiveness of our method.

Notations. Throughout this paper, we let $X \in \mathcal{X}$ denote the covariates and $Y \in \mathcal{Y}$ denote the target. $f_\theta(\cdot) : \mathcal{X} \to \mathcal{Y}$ is the predictor parameterized by $\theta \in \Theta$. $E$ is the random variable taking values over all possible environments. The random variable of data points is denoted by $Z = (X, Y) \in \mathcal{Z}$. $P^e(Z)$, abbreviated $P^e$, denotes the joint distribution in environment $e$; for environments $e_1, e_2 \in \mathrm{supp}(E)$, the data distributions can be quite different. $P_{\mathrm{train}}(Z)$ and $P_{\mathrm{test}}(Z)$, abbreviated $P_{tr}$ and $P_{te}$ respectively, represent the joint training and test distributions. Denote the feature extractor $\Phi_\theta(X)$ parameterized by $\theta$, and the predicting function $\hat Y = h_\eta(\Phi_\theta(X))$ parameterized by $\eta$ (not restricted to linear $h(\cdot)$), which gives the whole prediction model $f_{\eta,\theta}(X) = h_\eta(\Phi_\theta(X))$. For simplicity, we omit the subscripts $\theta, \eta$ where this causes no misunderstanding. Denote the sample size by $n$ and the vector of sample weights $w = [w_1, \ldots, w_n]^T \in \mathbb{R}^n_+$ with $w \ge 0$ and $w^T\mathbf{1} = 1$.

2 DISTRIBUTIONAL STABILITY

In this section, we first introduce the strict invariance property as well as its limitations. Then we propose the distributional stability property, a relaxed alternative under latent heterogeneity.

2.1 Strict invariance

Inspired by the causal inference literature, strict invariance (Arjovsky et al., 2019; Ahuja et al., 2020; Koyama et al., 2020; Creager et al., 2021; Liu et al., 2021a,b) requires that the prediction mechanism $Y|X$ remain the same among environments, which has two typical formulations.

Definition 1 (Strict Invariance). Denote by $E$ the random variable taking values over all possible environments.
A representation $\Phi$ is strictly invariant if condition (1) or condition (2) holds.

Condition 1 (Arjovsky et al., 2019; Ahuja et al., 2020; Creager et al., 2021): for any $e_1, e_2 \in \mathrm{supp}(E)$,
$$\mathbb{E}[Y \mid \Phi, E = e_1] = \mathbb{E}[Y \mid \Phi, E = e_2]. \quad (1)$$
Condition 2 (Koyama et al., 2020; Liu et al., 2021a,b): for any $e_1, e_2 \in \mathrm{supp}(E)$,
$$P(Y \mid \Phi, E = e_1) = P(Y \mid \Phi, E = e_2). \quad (2)$$

Invariant learning methods use strict invariance as a constraint during the model learning procedure. Arjovsky et al. (2019) prove that a linear model only uses invariant features under condition (1), and Koyama et al. (2020) prove that the resultant model under condition (2) is optimal for OOD generalization. Despite the promising theoretical results, one major concern in defining strict invariance as in Definition 1 is the access to all possible environments $E$. Strict invariance requires all possible environments in order to examine whether the prediction mechanism $Y|\Phi$ stays invariant. However, in most real-world applications it is impossible to acquire all possible environments, which renders the goal of strict invariance unrealistic to reach in practice. As a result, the learned invariance only holds for the finite training environments, but whether, and by how much, it is violated in other agnostic environments remains entirely unknown to machine learning engineers and system users, which brings huge risks in high-stakes applications.

2.2 A relaxed alternative

To mitigate the limitations above, we relax the requirements for multiple environments and instead consider an elaborated setting where the observed data are heterogeneous. More precisely, following Duchi et al. (2019), we assume that $X, Y \sim P_{tr} := \alpha Q_0 + (1-\alpha)Q_1$, where the proportion $\alpha \in (0,1)$ and $Q_0, Q_1$ denote the sub-populations in $P_{tr}$. Since the sub-population distributions are not pre-defined, this is termed latent heterogeneity.
To measure the stability of a machine learning model under potential distributional shifts, inspired by strict invariance among given environments (i.e., explicit heterogeneity), we can examine whether the prediction mechanism holds among all potential sub-populations within $P_{tr}$. First, we define the sub-population set for a distribution in Definition 2.

Definition 2 (Sub-population set). Given a distribution $P(Z)$ and $\alpha_0 \in (0, 1/2)$ as a lower bound on the sub-population proportion $\alpha$, the set of sub-populations of distribution $P$ is
$$\mathcal{P}_{\alpha_0}(P) := \{Q_0 : P = \alpha Q_0 + (1-\alpha) Q_1, \text{ for some } \alpha \in [\alpha_0, 1) \text{ and distribution } Q_1 \text{ on } \mathcal{Z}\}.$$
Remark. Intuitively, $\mathcal{P}_{\alpha_0}(P)$ contains all sub-populations of $P$ with proportion $\alpha \ge \alpha_0$. $\alpha_0$ controls the size of the minimal sub-populations considered, i.e., a smaller $\alpha_0$ corresponds to smaller sub-populations but a larger set ($|\mathcal{P}_{\alpha_0}|$).

Based on this, we introduce a nuanced variant of strict invariance, named $\alpha_0$-distributional stability. This concept quantifies the level of stability in the face of shifts among sub-populations: it captures the degree to which a model's predictions remain consistent across varying sub-populations, offering a refined metric for evaluating the robustness of models in heterogeneous data environments.

Definition 3 ($\alpha_0$-distributional stability). Given a data distribution $P(Z)$ and $\alpha_0 \in (0, 1/2)$, the $\alpha_0$-distributional stability of the prediction mechanism $Y|X$ is defined as
$$DS_{\alpha_0}(Y|X; P) := \sup_{Q \in \mathcal{P}_{\alpha_0}(P)} \rho_{KL}\big(Q(Y|X), P(Y|X)\big), \quad (3)$$
where $\rho_{KL}(\cdot, \cdot)$ denotes the KL-divergence between two distributions. Remark.
Intuitively, $\alpha_0$-distributional stability measures the maximal variation of the prediction mechanism ($Y|X$) among sub-populations within $P$ in terms of KL-divergence. It picks the worst sub-population $Q^\star$ in the set $\mathcal{P}_{\alpha_0}(P)$ and calculates the KL-divergence between $Q^\star(Y|X)$ and $P(Y|X)$. The smaller $DS_{\alpha_0}$ is, the more stable the prediction mechanism $Y|X$ is, since one can hardly find a sub-population that violates $P(Y|X)$.

We then demonstrate some properties of the proposed $\alpha_0$-distributional stability.

Proposition 1 (Properties of $DS_{\alpha_0}(P)$). For an observed data distribution $P(Z)$ and $\alpha_0 \in (0, 1/2)$, we have:
1. Nonnegativity: $DS_{\alpha_0}(Y|X; P) \ge 0$;
2. Monotonicity: if $\alpha_1 \ge \alpha_2$, then $DS_{\alpha_1}(Y|X; P) \le DS_{\alpha_2}(Y|X; P)$.

Remark. The smaller $\alpha_0$ is, the larger the distribution set $\mathcal{P}_{\alpha_0}(P)$ is, and the larger the stability criterion is, since the mechanism $Y|X$ is examined under more fine-grained sub-populations.

Proposition 2 (Relationship with strict invariance). Here we demonstrate the connections and differences between $\alpha_0$-distributional stability and strict invariance:
1. Connection with condition (1): replace $\rho_{KL}(\cdot, \cdot)$ with $\mathbb{E}[\|\mathbb{E}_Q[Y|X] - \mathbb{E}_P[Y|X]\|^2]$ and replace the sub-population set $\mathcal{P}_{\alpha_0}(P)$ with $\mathcal{E}$; then $DS_{\alpha_0}(Y|X; P) = 0$ is equivalent to condition (1).
2. Connection with condition (2): replace the sub-population set $\mathcal{P}_{\alpha_0}(P)$ with $\mathcal{E}$; then $DS_{\alpha_0}(Y|X; P) = 0$ is equivalent to condition (2).

Remark (Connection with distributional robustness). Although both terms involve the sub-population set, distributional stability and distributional robustness are inherently different from each other.
Distributional robustness (Duchi et al., 2018; Sinha et al., 2018; Duchi et al., 2019) refers to the worst-case performance inside a pre-defined uncertainty set $\mathcal{P}$, while distributional stability measures the maximal variation of the prediction mechanism $Y|X$. Therefore, distributional robustness reflects performance at a single point (i.e., the worst-case distribution), whereas distributional stability measures the variation of the prediction mechanism (i.e., a contrast between two distributions). This difference leads to a large discrepancy in the guarantees of OOD generalization performance: DRO methods that pursue distributional robustness can only ensure performance within the distribution set $\mathcal{P}$, while methods that pursue distributional stability can generalize to agnostic testing distributions under the learnability assumption, which will be discussed in detail in Section 3.

3 THEORETICAL ANALYSIS

Based on distributional stability, we formally define the OOD generalization problem under latent heterogeneity. We then provide a theoretical analysis of this problem, including the learnability assumption and the generalization error bound.

Problem 1 (Setup). Given data $Z \sim P_{tr}(Z)$ collected from multiple agnostic sources, the goal is to learn models with good generalization performance on data from an agnostic target distribution $P_{te}(Z)$.

For traditional machine learning problems, the analysis of learnability is based on the i.i.d. assumption. However, in Problem 1 the target distribution is agnostic and can differ significantly from the training one. Therefore, without further assumptions, even learnability itself can hardly hold in general. Given this, we characterize the learnability assumption of Problem 1, which places assumptions on the target distribution. Following Ye et al. (2021), we define the expansion function as follows.

Definition 4 (Expansion Function).
A function $s : \mathbb{R}^+ \cup \{0\} \to \mathbb{R}^+ \cup \{0, +\infty\}$ is an expansion function iff the following properties hold: (1) $s(\cdot)$ is monotonically increasing and $s(x) \ge x$ for all $x$; (2) $\lim_{x \to 0^+} s(x) = s(0) = 0$.

Besides, for a training distribution $P_{tr}(Z)$ and a target distribution $P_{te}(Z)$, we define the out-of-distribution stability of $Y|X$ as
$$ODS(Y|X; P_{tr}, P_{te}) := \rho_{KL}\big(P_{te}(Y|X), P_{tr}(Y|X)\big),$$
which measures the stability of the prediction mechanism between $P_{tr}$ and $P_{te}$. Note that here $P_{te}$ denotes the target distribution, which may not be included in the pre-defined sub-population set. We can now formally state the learnability assumption for Problem 1.

Assumption 1 (Learnability of Problem 1). Problem 1 from $P_{tr}$ to $P_{te}$ is $(\alpha_0, s)$-learnable if there exists an expansion function $s(\cdot)$ such that
$$ODS(Y|X; P_{tr}, P_{te}) \le s\big(DS_{\alpha_0}(Y|X; P_{tr})\big).$$
Note that here $X$ can be replaced by some representation $\Phi(X)$. Here we make some remarks.

Remark. (1) Assumption 1 assumes that the $\alpha_0$-distributional stability measured on the training distribution approximately carries over to testing: its variation on the target distribution is upper bounded by the expansion function. Intuitively, this requires that the conditional distribution $P_{te}(Y|X)$ cannot change arbitrarily; if it could, the problem would be unlearnable, since the prediction mechanism learned in training may not hold in testing. (2) The steepness of the expansion function reflects the difficulty of Problem 1: the steeper the expansion function, the less likely the learned distributional stability is to hold in testing. As shown in Theorem 1, the expansion function influences the generalization error bound.

We then derive the OOD generalization bound for Problem 1.

Theorem 1 (Generalization Bound).
Under Assumption 1, and assuming that ℓ(·, ·) is upper bounded, the conditional generalization error gap can be bounded by the distributional stability as:

E_{Pte}[ ‖ E_{Pte}[ℓ(X, Y)|X] − E_{Ptr}[ℓ(X, Y)|X] ‖ ] ≤ O( sqrt( 1 − e^{−s(DS_{α0}(Y|X; Ptr))} ) ),   (4)

where ℓ(·, ·) denotes the loss function. In Theorem 1, we bound the conditional error gap, which excludes covariate shifts by aligning the covariate distribution with Pte(X). From Equation (4), we can see that controlling the distributional stability DS_{α0}(Y|X; Ptr) decreases the generalization error gap between training and testing. These theoretical results motivate our Stable Risk Minimization (SRM) algorithm in Section 4.

4 METHOD
To enhance distributional stability, inspired by Theorem 1, we propose our Stable Risk Minimization (SRM) algorithm based on the newly-proposed distributional stability measure. We first introduce the overall objective function, and then derive an approximated optimization method for classification and regression.

Objective function. To learn models with good distributional stability, we introduce stability constraints into general risk minimization and propose our stable risk minimization framework:

θ*, η* = arg min_{θ,η} E_{X,Y∼Ptr}[ℓ(h_η(Φ_θ(X)), Y)]   s.t.   DS_{α0}(Y|Φ_{θ*}(X); Ptr) ≤ δ,   (5)

where α0 is the pre-defined lower bound on the sub-population proportion, and δ ≥ 0 is the threshold on the distributional stability of the prediction mechanism Y|Φ_{θ*}(X). The constraint helps to learn a representation Φ_{θ*}(X) that is stable among sub-populations within Ptr.
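To make the bound in Equation (4) concrete, here is a minimal numerical sketch. The expansion function s(x) = x + √x is a hypothetical choice satisfying Definition 4 (monotone, s(x) ≥ x, s(0) = 0), and the constant hidden in O(·) is taken as 1; neither is specified by the paper.

```python
import math

def s(x):
    # Hypothetical expansion function: monotone, s(x) >= x, s(0) = 0.
    return x + math.sqrt(x)

def bound(ds_alpha0):
    # Right-hand side of Equation (4) with the O(.) constant set to 1:
    # sqrt(1 - exp(-s(DS_alpha0(Y|X; Ptr)))).
    return math.sqrt(1.0 - math.exp(-s(ds_alpha0)))

# The bound vanishes as the distributional stability measure goes to 0
# and grows monotonically with it.
for ds in [0.0, 0.01, 0.1, 1.0]:
    print(ds, round(bound(ds), 4))
```

The sketch only illustrates the qualitative behaviour the theorem asserts: a perfectly stable prediction mechanism (DS = 0) gives a zero conditional error gap, and a steeper expansion function inflates the gap faster.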
Following the approximation techniques typically adopted in robust learning (Arjovsky et al., 2019; Sinha et al., 2018), we give up the requirement of a prescribed constraint δ on the distributional stability and instead focus on the Lagrangian penalty problem, which also corresponds with our theoretical results in Theorem 1:

min_{θ,η} E_{Ptr}[ℓ(h_η(Φ_θ(X)), Y)] + λ · DS_{α0}(Y|Φ_θ(X); Ptr)   (6)

The key challenge lies in the calculation of the distributional stability term DS_{α0}(Y|Φ_θ(X); Ptr). Recall that it relies on the worst sub-population Q⋆ in Equation (3). Therefore, the optimization involves a two-player game, where a variation explorer keeps picking the worst sub-population Q⋆ from P_{α0}(Ptr), and a stable learner learns a more stable representation with a smaller discrepancy between Q⋆(Y|Φ_θ(X)) and Ptr(Y|Φ_θ(X)).

Jiashuo Liu, Jiayun Wu, Jie Peng, Xiaoyu Wu, Yang Zheng, Bo Li, Peng Cui

Algorithm 1 Stable risk minimization (SRM)
Input: training data D = {x_i, y_i}_{i=1}^n, hyper-parameter λ, epoch number T, prescribed sub-population ratio α0.
Initialize: Φ^(1) = X.
for t = 1 to T do
  Step 1 (variation explorer): given Φ^(t), find the worst sub-population Q⋆(t), characterized by w⋆, according to Equation (8).
  Step 2 (stable learner): given the learned worst sub-population Q⋆(t), perform stable risk minimization on {P̂tr, Q⋆(t)} according to Equation (10) to obtain the representation Φ^(t+1).
end for

4.1 Player 1: variation explorer
Given the current representation Φ_θ(X) (abbreviated Φ), the α0-distributional stability takes the form:

DS_{α0}(Y|Φ; Ptr) = sup_{Q ∈ P_{α0}(Ptr)} E_Q[ log( Q(Y|Φ) / Ptr(Y|Φ) ) ].

The goal of the variation explorer is to find the sub-population Q⋆ such that:

Q⋆ = arg sup_{Q ∈ P_{α0}(Ptr)} E_Q[ log( Q(Y|Φ) / Ptr(Y|Φ) ) ].   (7)

For different kinds of tasks, we propose different ways to approximate Equation (7) in the following.

(1) For regression tasks. Given the representation Φ ∈ Υ and the label Y ∈ R, we parameterize the conditional distributions Ptr(Y|Φ) and Q(Y|Φ) as:

Ptr(Y|Φ) ≈ N(f_tr(Φ), σ_tr²),   Q(Y|Φ) ≈ N(f_q(Φ), σ_q²),

where f_tr = E_Ptr[Y|Φ] and f_q = E_Q[Y|Φ] denote the prediction functions tailored to fit the data distributions Ptr and Q, respectively, and σ_tr, σ_q are noise scale parameters. Based on this approximation, Equation (7) for regression tasks becomes:

Q⋆ = arg sup_{Q ∈ P_{α0}(Ptr)} E_Q[ (Y − f_tr(Φ))² / σ_tr² − (Y − f_q(Φ))² / σ_q² ].

(2) For classification tasks. Denote the number of classes by K; the conditional distribution is discrete and can be modeled via a K-dimensional simplex. Given the representation Φ ∈ Υ and the target variable Y ∈ [K], Ptr(Y|Φ) and Q(Y|Φ) are modeled as:

Ptr(Y|Φ) ≈ f_tr(Φ) ∈ Δ^K,   Q(Y|Φ) ≈ f_q(Φ) ∈ Δ^K,

where f_tr, f_q denote the prediction models that fit the data from distributions Ptr and Q, respectively. Then Equation (7) for classification tasks becomes:

Q⋆ = arg sup_{Q ∈ P_{α0}(Ptr)} E_Q[ log( f_q(Φ)[Y] / f_tr(Φ)[Y] ) ],

where f_q(Φ)[Y] denotes the value of the Y-th dimension of f_q(Φ) ∈ Δ^K, and the same for f_tr(Φ)[Y].

Now we are ready to derive the empirical objective function from Equation (7) for both regression and classification tasks. Empirically, given a dataset D = {x_i, y_i}_{i=1}^n drawn from Ptr, the empirical distribution P̂tr can be represented by P̂tr = (1/n) Σ_{i=1}^n δ_{(x_i, y_i)}, where δ_{(x,y)} denotes the Dirac distribution supported on (x, y). Similarly, the sub-population set P_{α0}(P̂tr) can be modeled as:

P_{α0}(P̂tr) = { w = [w_1, ..., w_n]^T : w ∈ Δ^n, w_i ≤ 1/(α0·n) for all i },

where a sub-population Q ∈ P_{α0}(P̂tr) is characterized by sample weights and w_i denotes the weight of the i-th sample. Then Equation (7) can be reformulated as:

w⋆ = arg max_w Σ_{i=1}^n w_i · g_i,   s.t.  w ∈ Δ^n and w_i ≤ 1/(α0·n),   (8)

where g_i depends on the task type (regression or classification):

g_i = (y_i − f_tr(φ_i))² / σ_tr² − (y_i − f_q(φ_i))² / σ_q²   for regression,
g_i = log( f_q(φ_i)[y_i] / f_tr(φ_i)[y_i] )   for classification,

where φ_i = Φ(x_i). To estimate f_tr, σ_tr, f_q, σ_q through maximum likelihood estimation, we have:

f_tr = arg min_f Σ_{i=1}^n ℓ(f(φ_i), y_i),   f_q = arg min_f Σ_{i=1}^n w_i · ℓ(f(φ_i), y_i),
σ_tr² = E_Ptr[ℓ²(f_tr(Φ), Y)] − (E_Ptr[ℓ(f_tr(Φ), Y)])²,
σ_q² = E_Q[ℓ²(f_q(Φ), Y)] − (E_Q[ℓ(f_q(Φ), Y)])².

Notably, since f_tr fits the empirical training distribution P̂tr and is not affected by the sample weights, we train it only once and then fix it. For f_q, one could use bi-level optimization to jointly optimize the sample weights w and f_q. In this work, we find that an iterative training process yields strong results in practice, and therefore we do not implement bi-level optimization here; readers interested in the bi-level optimization of Equation (8) are referred to (Shu et al., 2019; Shaban et al., 2019).

Complexity Analysis. Here we analyze the complexity of the variation exploration stage. First, this stage operates on the representation Φ(X) of the input data X. Therefore, the conditional distribution P(Y|Φ(X)) is easy to fit empirically and is typically chosen to be a linear model, which can be viewed as the last layer of a deep neural network.
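The linear program in Equation (8) has a simple greedy solution: since the objective is linear in w, the maximizer saturates the cap 1/(α0·n) on the samples with the largest g_i and puts the leftover mass on the next one. A minimal sketch (an illustration of that closed form, not the authors' released code):

```python
def worst_subpopulation_weights(g, alpha0):
    # Solve Equation (8): maximize sum_i w_i * g_i over the simplex with the
    # per-sample cap w_i <= 1/(alpha0 * n). The optimum saturates the cap on
    # the samples with the largest g_i (fractional-knapsack argument).
    n = len(g)
    cap = 1.0 / (alpha0 * n)
    w = [0.0] * n
    remaining = 1.0
    for i in sorted(range(n), key=lambda i: g[i], reverse=True):
        w[i] = min(cap, remaining)
        remaining -= w[i]
        if remaining <= 1e-12:
            break
    return w

g = [0.1, 2.0, -1.0, 0.5, 1.5]
w = worst_subpopulation_weights(g, alpha0=0.4)  # cap = 1/(0.4*5) = 0.5
# The worst sub-population keeps only the two samples with the largest g_i.
```

With α0 = 0.4 and n = 5 the cap is 0.5, so the resulting weights concentrate all mass on the two highest-g samples, which is exactly the "worst α0-fraction" the variation explorer is after.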
Second, we analyze the additional computation cost and show that it is comparable to that of adversarial training. Denoting the sample size by N, the dimension of Φ by d_φ, and the number of training epochs by T, the additional cost is O(N·d_φ·T). Notably, since f_q is linear, it converges quickly, and we set its number of training epochs to 50 in our experiments. To demonstrate that this computation cost is acceptable, we further analyze the additional cost of adversarial training for comparison. Denoting the overall number of parameters by D and the number of attack steps by T_a, the additional cost of adversarial training is O(N·D·T_a) with D ≫ d_φ. Therefore, the additional computation cost of our method is lower than (or no larger than) that of adversarial training, which is acceptable. Third, to further lower the computation burden, we perform the variation exploration stage only once every K epochs, with K = 20 in our experiments; the additional time complexity thus reduces to O(N·d_φ·T/K).

4.2 Player 2: stable learner
Given the worst sub-population Q⋆ from Equation (7), the distributional stability simplifies to:

DS_{α0}(Y|Φ; Ptr) = ρ_KL(Q⋆(Y|Φ) ∥ Ptr(Y|Φ)).

Therefore, for the stable learner (player 2), the Lagrangian penalty problem in Equation (6) becomes:

L(θ, η) = E_Ptr[ℓ(h_η(Φ_θ(X)), Y)] + λ · ρ_KL(Q⋆(Y|Φ_θ(X)) ∥ Ptr(Y|Φ_θ(X))).   (9)

Following the approximation in (Koyama et al., 2020), we have:

ρ_KL(Q⋆(Y|Φ_θ(X)) ∥ Ptr(Y|Φ_θ(X))) ≈ O(α²) + α · ∇_{θ,η}(R_Ptr(θ, η) − R_Q⋆(θ, η))^T ∇_{θ,η} R_Q⋆(θ, η),

where α is the learning rate of the model parameters θ, η, R_Ptr(θ, η) = E_Ptr[ℓ(X, Y)] denotes the average prediction error under distribution Ptr, and R_Q⋆(θ, η) = E_Q⋆[ℓ(X, Y)] denotes the average prediction error under distribution Q⋆.
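The gradient-inner-product approximation above can be evaluated with any autodiff framework; the sketch below uses a one-parameter linear model with squared loss so the two risk gradients can be written analytically. This is a toy illustration of the penalty term, not the authors' implementation; the data, θ, and w⋆ are made up, and the learning-rate factor α is folded into λ.

```python
def grad_risk(theta, xs, ys, ws=None):
    # d/dtheta of the (weighted) average squared error of y_hat = theta * x.
    n = len(xs)
    if ws is None:
        ws = [1.0 / n] * n  # uniform weights = empirical Ptr
    return sum(w * 2.0 * x * (theta * x - y) for w, x, y in zip(ws, xs, ys))

def srm_penalty(theta, xs, ys, w_star):
    # Approximation of rho_KL(Q*||Ptr) up to O(alpha^2):
    # (grad R_Ptr - grad R_Q*)^T grad R_Q*, alpha folded into lambda.
    g_p = grad_risk(theta, xs, ys)
    g_q = grad_risk(theta, xs, ys, w_star)
    return (g_p - g_q) * g_q

xs, ys = [1.0, 2.0, 3.0], [1.0, 2.0, 9.0]
w_star = [0.0, 0.0, 1.0]  # worst sub-population: only the outlier sample
pen = srm_penalty(0.5, xs, ys, w_star)
# When w_star is uniform, Q* coincides with Ptr, the two gradients match,
# and the penalty is exactly zero; a large mismatch yields a large penalty.
```

The stable learner's update in Equation (10) is then just gradient descent on the average risk plus λ times this penalty.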
Given the worst sub-population Q⋆, the overall objective function of player 2 becomes:

L(θ, η) = E_Ptr[ℓ(h_η(Φ_θ(X)), Y)] + λ · ∇_{θ,η}(R_Ptr(θ, η) − R_Q⋆(θ, η))^T ∇_{θ,η} R_Q⋆(θ, η),   (10)

which can be efficiently optimized via gradient descent.

5 RELATED WORK
In this section, we discuss the related work in detail. There are two main branches of literature related to our work: invariant learning (Arjovsky et al., 2019; Ahuja et al., 2020; Koyama et al., 2020; Liu et al., 2021a,c; Ahuja et al., 2021; Creager et al., 2021) and distributionally robust optimization (Duchi et al., 2018; Sinha et al., 2018). For invariant learning, Arjovsky et al. (2019) first formulate the OOD generalization problem and design a regularizer to learn representations such that the optimal linear classifier remains the same across training environments; this is a canonical invariant learning method. Koyama et al. (2020) theoretically characterize when invariance benefits OOD generalization and propose to learn the maximal invariant predictor to achieve OOD optimality. Ahuja et al. (2021) combine invariant learning with an information bottleneck for better OOD generalization performance. The invariance definition proposed in these works requires an invariant relationship across all possible environments, termed strict invariance. However, whether it exists in real applications remains doubtful, since the noise is likely to change across environments and thereby violate strict invariance. Further, the availability of multiple training environments is itself hard to meet in real scenarios, making many invariant learning methods inapplicable in practice. To mitigate such limitations, some recent works (Creager et al., 2021; Liu et al., 2021a,c) first learn pseudo-environments and then perform invariant learning.
Creager et al. (2021) directly maximize the IRM regularizer with a given biased model to generate environments. Liu et al. (2021a,c) propose to iteratively learn the environment splits and the invariant predictors; although intuitively reasonable, the properties of the learned environments remain vague, which renders the proposed frameworks unstable. Since the properties of the learned environments cannot be analyzed or guaranteed, whether invariance is achieved also remains unclear and cannot be certified. This motivates reformulating the invariant learning problem under latent heterogeneity into a more tractable one.

Distributionally robust optimization (DRO) methods, typified by f-DRO (Duchi et al., 2018), propose to optimize the worst-case error with respect to a pre-defined distribution set that lies around the training distribution. When the testing distribution lies in the pre-defined distribution set, the OOD generalization performance can be controlled by the worst case. However, when the target distribution is not captured by the pre-defined set, the performance of DRO depends on the relationship between the target distribution and the worst-case distribution in the pre-defined set, which cannot be guaranteed. This is also reflected in our Figure 1(a) (the curve of f-DRO fluctuates considerably). Unfortunately, such circumstances are quite likely in real scenarios, since the pre-defined set cannot be made too large because of the over-pessimism problem (Hu et al., 2018; Frogner et al., 2019). In this work, we borrow the idea of a distribution set from DRO to characterize the sub-population set, based on which we propose the notion of distributional stability, a relaxed alternative to strict invariance.

6 EXPERIMENTS
Baselines.
We compare our proposed SRM algorithm with the following methods: Empirical Risk Minimization (ERM), Distributionally Robust Optimization (f-DRO, Duchi et al. (2018)), Environment Inference for Invariant Learning (EIIL, Creager et al. (2021)), Kernelized Heterogeneous Risk Minimization (KerHRM, Liu et al. (2021c)), and Invariant Risk Minimization (IRM, Arjovsky et al. (2019)) with environment labels Etr. Note that IRM requires environment labels, and we provide the ground-truth sub-population labels for IRM.

Evaluation Metrics. For experiments with multiple testing distributions, we use:

Mean Error = (1/|E_test|) Σ_{e ∈ E_test} E_{P_e}[ℓ(X, Y)],
Std Error = sqrt( (1/(|E_test| − 1)) Σ_{e ∈ E_test} (E_{P_e}[ℓ(X, Y)] − Mean Error)² ),
Max Error = max_{e ∈ E_test} E_{P_e}[ℓ(X, Y)],

which are the mean, standard deviation, and worst-case error across the testing environments E_test.

6.1 Simulation Data
Regression with Selection Bias. In this setting, the relationships between the covariates and the target are perturbed through a selection bias mechanism across sub-populations. We generate the data following the mechanism adopted by Liu et al. (2021c, 2022), where we assume X = [S, V]^T ∈ R^10 and Y = f(S) + ε = β^T S + S_1 S_2 S_3 + N(0, 0.1). To generate different sub-populations, we keep P(Y|S) the same across sub-populations and leverage a data selection mechanism to vary P(Y|V). Specifically, we select a data point (x_i, y_i) with probability τ_i according to a certain variable V_b ∈ V, where τ_i = |r|^{−5·|y_i − sign(r)·V_b|} with |r| > 1. Intuitively, r controls the strength and direction of the spurious correlation between V_b and Y: a larger |r| means a stronger spurious correlation, and r > 0 means positive correlation and vice versa (i.e., if r > 0, a data point whose V_b is close to its y is more likely to be selected).
Therefore, we use r to define different sub-populations. For training data, we mix 2000 data points generated with bias ratio r1 (varied across settings) and 200 points with r2 = −1.1. For testing, we sample 1000 data points for each r ∈ {−1.9, −2.1, ..., −2.9}. For our SRM algorithm and f-DRO, we set α0 = 0.1 (the ground truth is 0.09). Linear models are used in this experiment.

Classification with Spurious Correlation. Following Sagawa et al. (2020), we induce spurious correlations of different strengths and directions between the label Y ∈ {+1, −1} and a spurious attribute A ∈ {+1, −1}. We assume X = [S, V]^T ∈ R^{2d}, where S ∈ R^d is the invariant feature generated from the label Y and V ∈ R^d is the variant feature generated from the spurious attribute A:

S|Y ∼ N(Y·1, σ_s² I_d),   V|A ∼ N(A·1, σ_v² I_d).   (11)

In this setting, we characterize different groups with the bias rate r ∈ (0, 1], which means that for 100·r% of the data A = Y, and for the remaining 100·(1−r)% of the data A = −Y. Intuitively, r controls the spurious correlation between the label Y and the spurious attribute A. In training, we generate 2000 data points, where 50% of the points come from group 1 with r1 = 0.9 and the rest from group 2 with varying r2. In testing, we generate 1000 data points with r3 = 0.0 to simulate strong distributional shifts, since the direction of the spurious correlation is reversed relative to training. We design multiple settings with different bias rates r2 as well as feature dimensions d. For our SRM algorithm and f-DRO, we set α0 = 0.15 (the ground truth is 0.17). We use a two-layer MLP for this experiment.

Table 1: Overall results in selection bias simulation experiments with varying bias rates r1.
Bias ratio r1 = 1.5 (Mean Error / Std Error / Max Error):
ERM: 2.651(±0.106) / 0.119(±0.038) / 2.820(±0.140)
f-DRO: 1.835(±0.144) / 0.070(±0.024) / 1.940(±0.169)
EIIL: 1.764(±0.402) / 0.074(±0.022) / 1.864(±0.423)
KerHRM: 1.825(±0.354) / 0.089(±0.040) / 1.978(±0.374)
IRM (with Etr label): 1.683(±0.201) / 0.066(±0.024) / 1.780(±0.227)
SRM: 1.288(±0.344) / 0.059(±0.024) / 1.367(±0.365)

Bias ratio r1 = 1.9 (Mean Error / Std Error / Max Error):
ERM: 3.155(±0.210) / 0.147(±0.039) / 3.348(±0.184)
f-DRO: 1.973(±0.261) / 0.096(±0.025) / 2.107(±0.274)
EIIL: 2.043(±0.600) / 0.101(±0.036) / 2.185(±0.656)
KerHRM: 1.658(±0.472) / 0.068(±0.031) / 1.788(±0.617)
IRM (with Etr label): 1.782(±0.134) / 0.067(±0.018) / 1.886(±0.163)
SRM: 1.323(±0.223) / 0.054(±0.020) / 1.402(±0.233)

Bias ratio r1 = 2.3 (Mean Error / Std Error / Max Error):
ERM: 3.240(±0.174) / 0.136(±0.039) / 3.433(±0.197)
f-DRO: 2.018(±0.422) / 0.100(±0.025) / 2.149(±0.425)
EIIL: 1.840(±0.347) / 0.085(±0.022) / 1.962(±0.349)
KerHRM: 1.572(±0.504) / 0.088(±0.036) / 1.677(±0.537)
IRM (with Etr label): 1.964(±0.276) / 0.067(±0.015) / 2.057(±0.295)
SRM: 1.382(±0.283) / 0.059(±0.018) / 1.457(±0.299)

Table 2: Overall results in classification simulation experiments with varying bias rates r2.
r2 = 0.75, d = 5 (Train Acc / Test Acc):
ERM: 0.917(±0.009) / 0.388(±0.039)
f-DRO: 0.766(±0.012) / 0.452(±0.021)
EIIL: 0.727(±0.145) / 0.544(±0.058)
KerHRM: 0.784(±0.035) / 0.636(±0.182)
IRM (with Etr label): 0.855(±0.010) / 0.467(±0.046)
SRM: 0.781(±0.032) / 0.716(±0.066)

r2 = 0.75, d = 10 (Train Acc / Test Acc):
ERM: 0.972(±0.007) / 0.573(±0.026)
f-DRO: 0.920(±0.006) / 0.611(±0.028)
EIIL: 0.814(±0.160) / 0.451(±0.049)
KerHRM: 0.834(±0.143) / 0.659(±0.205)
IRM (with Etr label): 0.908(±0.007) / 0.529(±0.058)
SRM: 0.869(±0.023) / 0.684(±0.052)

r2 = 0.80, d = 5 (Train Acc / Test Acc):
ERM: 0.931(±0.005) / 0.364(±0.023)
f-DRO: 0.787(±0.011) / 0.427(±0.022)
EIIL: 0.743(±0.155) / 0.571(±0.050)
KerHRM: 0.780(±0.043) / 0.665(±0.178)
IRM (with Etr label): 0.876(±0.005) / 0.386(±0.047)
SRM: 0.787(±0.030) / 0.703(±0.073)

r2 = 0.80, d = 10 (Train Acc / Test Acc):
ERM: 0.975(±0.005) / 0.526(±0.030)
f-DRO: 0.930(±0.005) / 0.616(±0.022)
EIIL: 0.823(±0.165) / 0.406(±0.056)
KerHRM: 0.800(±0.097) / 0.674(±0.139)
IRM (with Etr label): 0.914(±0.006) / 0.448(±0.056)
SRM: 0.871(±0.017) / 0.697(±0.061)

Better OOD Generalization Performance: We report the results of the regression and classification tasks in Tables 1 and 2. From the results, our SRM outperforms all baselines in terms of higher prediction accuracy and better stability under distributional shifts, which validates that SRM achieves better OOD generalization performance and is consistent with our theoretical analysis in Theorem 1.

α0 Controls the Extent of Stability: In the definition of α0-distributional stability, α0 controls the granularity of stability; i.e., a smaller α0 examines more fine-grained stability. To demonstrate the effect of α0 in our SRM algorithm, for the classification task we plot the curve of testing accuracy w.r.t. α0 for SRM and f-DRO in Figure 1(a).
Since the true proportion of the minority sub-population is set to 0.17, we expect SRM to be effective when α0 ≤ 0.17. From the results, we can see that the performance of SRM remains high for α0 ∈ [0.05, 0.17], which validates our intuition. For too small an α0, performance drops due to the insufficient number of samples and stronger noise. Also, the performance of f-DRO oscillates, which corresponds with our analysis in Remark 2.2: since distributional robustness only cares about the worst sub-population performance, when the testing distribution falls outside the pre-defined distribution set, it cannot guarantee OOD generalization performance. For our SRM, however, the guarantees of OOD generalization in Theorem 1 do not impose strong requirements on the testing distributions, since they only require the learnability of the problem.

6.2 Real-World Data: Retiring Adults
To better validate the effectiveness of the proposed SRM algorithm, we consider a much more challenging scenario on a real-world dataset named ACSTravelTime (Ding et al., 2021). The task is to predict whether an individual has a commute to work that is longer than 20 minutes. The dataset has 16 features and 1,428,642 data points in total from all 50 US states. Since there are 50 distinct environments, this dataset contains natural geographic shifts, which makes it suitable for testing OOD generalization. In training, we sample 2000 data points from MA and validate on the remaining data from MA. In testing, we evaluate different methods on all the other 49 states. In Figure 1(b), we plot the accuracy and F1 score of each method on the 50 states, and in Figure 1(c) we show the overall testing accuracy of different methods.
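The Mean/Std/Max Error metrics used throughout Section 6 are straightforward to compute from per-environment losses; a minimal sketch (the Std Error uses the |E_test| − 1 denominator from its definition):

```python
import math

def mean_std_max_error(env_errors):
    # env_errors: one value E_{P_e}[loss] per testing environment.
    k = len(env_errors)
    mean_err = sum(env_errors) / k
    # Sample standard deviation across environments (k - 1 denominator).
    std_err = math.sqrt(sum((e - mean_err) ** 2 for e in env_errors) / (k - 1))
    max_err = max(env_errors)
    return mean_err, std_err, max_err

mean_e, std_e, max_e = mean_std_max_error([1.0, 2.0, 3.0])
# mean 2.0, std 1.0, max 3.0
```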
Note that the original code released for KerHRM is too time-consuming to run on this dataset because of its size (over 1 million data points); we therefore replace KerHRM with HRM (Liu et al., 2021a), which can only deal with raw feature data. Since there is only one training environment in this experiment and the underlying sub-populations are unknown, we cannot compare with IRM in this setting; EIIL can be viewed as an alternative to IRM with environments learned from the training data. From the results in Figure 1(b), the average performance of our SRM is located at the top right of the figure, which shows that our method achieves the best OOD generalization performance w.r.t. testing accuracy and F1 score. Further, in Figure 1(c), the per-environment performances of SRM are concentrated at high accuracy, and the variance across environments is significantly smaller than that of the baselines. This shows that our SRM algorithm learns some distributional stability among different sub-populations, which benefits the generalization performance. The good OOD generalization performance also corresponds with our intuition from Theorem 1 that enforcing distributional stability reduces the OOD generalization error.

Figure 1: Experimental results. (a) Demonstration of certified robustness via the classification task (Section 6.1): we vary α0 and plot the corresponding testing accuracy for f-DRO and our proposed SRM. (b) The F1 score and testing accuracy of different methods on all 50 target states; the average F1 score and testing accuracy are highlighted (Section 6.2). (c) The distribution of testing accuracy of different methods (Section 6.2).
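For concreteness, the spurious-correlation classification data of Section 6.1 (Equation (11)) can be generated as below. The noise scales σ_s = σ_v = 1 and the random seed are hypothetical choices not specified in the text; the bias rates match the r1 = 0.9 / r2 = 0.75 training mix and the reversed r3 = 0.0 test group.

```python
import random

def make_group(n, d, bias_rate, sigma_s=1.0, sigma_v=1.0):
    # Equation (11): S|Y ~ N(Y*1, sigma_s^2 I_d), V|A ~ N(A*1, sigma_v^2 I_d),
    # with A = Y for a bias_rate fraction of the points and A = -Y otherwise.
    data = []
    for _ in range(n):
        y = random.choice([+1, -1])
        a = y if random.random() < bias_rate else -y
        s = [random.gauss(y, sigma_s) for _ in range(d)]
        v = [random.gauss(a, sigma_v) for _ in range(d)]
        data.append((s + v, y))  # X = [S, V]^T in R^{2d}
    return data

random.seed(0)  # hypothetical seed for reproducibility
train = make_group(1000, 5, bias_rate=0.9) + make_group(1000, 5, bias_rate=0.75)
test = make_group(1000, 5, bias_rate=0.0)  # spurious correlation fully reversed
```

Under r3 = 0.0 every test point has A = −Y, so a model that leans on the variant block V is systematically wrong, which is what separates ERM from the stability-seeking methods in Table 2.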
7 CONCLUSION In this paper, we propose the distributional stability, which measures the stability of prediction mechanisms among sub-populations. Based on this criterion, we propose an approximated algorithm, termed stable risk minimization, to enhance the model\u2019s stability with respect to distribution shifts in prediction mechanisms. Despite the theoretical and empirical results, our work has the following limitations (or potential directions to improve): Analysis of the approximation. Based on the overall objective function in Equation (5), we make several approximations to derive a tractable optimization algorithm. A notable challenge associated with this approach is the difficulty in thoroughly analyzing the behavior of the approximated algorithm, particularly with regard to its convergence properties and the bounds on its generalization error. A promising avenue for future research lies in the development of improved approximation techniques that come with stronger theoretical guarantees. Lack of large-scale suitable datasets. In the current version of our study, both simulated and realworld experiments are conducted on a small scale. This limitation is largely due to the nature of datasets commonly employed in large-scale research, which predominantly consist of image data. These datasets usually exhibit shifts in the input space, X, rather than in the conditional distribution (Y |X-shifts) that are more pertinent to our investigation into invariant learning. As the field of invariant learning evolves, a noticeable trend is the application of these methods to complex tasks, particularly image classification datasets. However, a crucial question emerges: Are these image datasets genuinely conducive to invariant learning methods aimed at aligning the Y |X distributions? 
Research by Gulrajani and Lopez-Paz (2020) reveals that Empirical Risk Minimization (ERM) often outperforms most domain generalization and invariant learning methods tailored for these datasets. This suggests that the prevalent distribution shifts in image datasets are primarily X-shifts, with the primary objective being to model EPtr[Y |X]. Additionally, numerous empirical studies, such as those by (Miller et al., 2021), have identified a strong correlation between out-of-distribution (OOD) generalization performance and in-distribution (ID) performance. This correlation further underscores the inadequacy of traditional image classification tasks as a testing ground for invariant learning methods. In light of these findings, we advocate for a shift in research focus towards understanding the patterns of distribution shifts in real-world applications, as highlighted by (Liu et al., 2023). A promising avenue of exploration involves the creation of real-world, largescale datasets featuring Y |X-shifts. These datasets would likely offer a more fitting and challenging environment for assessing the capabilities of invariant learning methods. \fEnhancing Distributional Stability among Sub-populations 8 Acknowledgements Peng Cui was supported by National Natural Science Foundation of China (No. 62141607). Bo Li\u2019s research was supported by the National Natural Science Foundation of China (No.72171131, 72133002); the Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grants 2020AAA0108400 and 2020AAA0108403." + }, + { + "url": "http://arxiv.org/abs/2110.12425v1", + "title": "Kernelized Heterogeneous Risk Minimization", + "abstract": "The ability to generalize under distributional shifts is essential to\nreliable machine learning, while models optimized with empirical risk\nminimization usually fail on non-$i.i.d$ testing data. 
Recently, invariant\nlearning methods for out-of-distribution (OOD) generalization propose to find\ncausally invariant relationships with multi-environments. However, modern\ndatasets are frequently multi-sourced without explicit source labels, rendering\nmany invariant learning methods inapplicable. In this paper, we propose\nKernelized Heterogeneous Risk Minimization (KerHRM) algorithm, which achieves\nboth the latent heterogeneity exploration and invariant learning in kernel\nspace, and then gives feedback to the original neural network by appointing\ninvariant gradient direction. We theoretically justify our algorithm and\nempirically validate the effectiveness of our algorithm with extensive\nexperiments.", + "authors": "Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen", + "published": "2021-10-24", + "updated": "2021-10-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Traditional machine learning algorithms which optimize the empirical risk often suffer from poor generalization performance under distributional shifts caused by latent heterogeneity or selection biases that widely exist in real-world data[12, 25]. How to guarantee a machine learning algorithm with good generalization ability on data drawn out-of-distribution is of paramount signi\ufb01cance, especially in high-stake applications such as \ufb01nancial analysis, criminal justice and medical diagnosis, etc.[16, 21], which is known as the out-of-distribution(OOD) generalization problem[1]. To ensure the OOD generalization ability, invariant learning methods assume the existence of the causally invariant correlations and exploit them through given environments, which makes their performances heavily dependent on the quality of environments. Further, the requirements for the environment labels are too strict to meet with, since real-world datasets are frequently assembled by merging data from multiple sources without explicit source labels. 
Recently, several works[5, 18] to relax such restrictions have been proposed. Creager et al.[5] directly infer the environments according to a given biased model \ufb01rst and then performs invariant learning. But the two stages cannot be jointly optimized and the quality of inferred environments depends heavily on the pre-provided biased model. Further, for complicated data, using invariant representation for environment inference is harmful, since the environment-speci\ufb01c features are gradually discarded, causing the extinction of latent heterogeneity and rendering data from different latent environments undistinguishable. Liu et al.[18] design a mechanism where two interactive modules for environment inference and invariant learning respectively can promote each other. However, it can only deal with scenarios where invariant and variant features are decomposed on raw feature level, and will break down when the decomposition can only be performed in representation space(e.g., image data). \u2217Equal Contributions \u2020Corresponding Author 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Sydney, Australia. arXiv:2110.12425v1 [cs.LG] 24 Oct 2021 \fThis paper focuses on the integration of latent heterogeneity exploration and invariant learning on representation level. In order to incorporate representation learning with theoretical guarantees, we introduce Neural Tangent Kernel(NTK[13]) into our algorithm. According to NTK theory[13], training the neural network is equivalent to linear regression using Neural Tangent Features(NTF), which converts non-linear neural networks into linear regression in NTF space and makes the integration possible. Based on this, our Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm is proposed, which synchronously optimizes the latent heterogeneity exploration module Mc and invariance learning module Mp in NTF space. 
Specifically, we propose our novel Invariant Gradient Descent (IGD) for Mp, which performs invariant learning in NTF space and then feeds back to the neural network an appointed invariant gradient direction. For Mc, we construct an orthogonal heterogeneity-aware kernel to capture the environment-specific features and to further accelerate the heterogeneity exploration. Theoretically, we justify our heterogeneity exploration algorithm for Mc with rate-distortion theory and establish the orthogonality property of the built kernel, which jointly illustrate the mutual promotion between the two modules. Empirically, experiments on both synthetic and real-world data validate the superiority of KerHRM in terms of out-of-distribution generalization performance.

2 Preliminaries
Following [1, 3], we consider data D = {D^e}_{e ∈ supp(Etr)} with D^e = {X^e, Y^e} collected from multiple training environments Etr. Here environment labels are unavailable, as in most real applications. Etr is a random variable on the indices of training environments, and P^e is the distribution of data and labels in environment e. The goal of this work is to find a predictor f(·) : X → Y with good out-of-distribution generalization performance, which is formalized as:

arg min_f max_{e ∈ supp(E)} L(f|e),   (1)

where L(f|e) = E_e[ℓ(X^e, Y^e)] represents the risk of predictor f on environment e, and ℓ(·, ·) : Y × Y → R+ is the loss function. Note that E is the random variable on the indices of all possible environments, such that supp(Etr) ⊂ supp(E). Usually, for e ∈ supp(E) \ supp(Etr), the data and label distribution P^e(X, Y) can be quite different from that of the training environments Etr. Therefore, the problem in equation 1 is referred to as the Out-of-Distribution (OOD) Generalization problem [1].
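The objective in equation 1 is a minimax problem over environments. As a toy illustration of the inner maximization (not the KerHRM algorithm itself; the environments, predictor, and loss below are made up), the worst-environment risk of a candidate predictor can be evaluated as:

```python
def worst_env_risk(predict, envs, loss):
    # The inner objective of equation 1: max_e L(f|e), where L(f|e) is the
    # average loss of predictor f on environment e.
    return max(
        sum(loss(predict(x), y) for x, y in env) / len(env) for env in envs
    )

# Two toy environments; the constant predictor f(x) = 0 does well on the
# second environment but badly on the first, so the worst-case risk is 0.5.
envs = [[(0, 0.0), (1, 1.0)], [(0, 0.0), (1, 0.0)]]
risk = worst_env_risk(lambda x: 0.0, envs, lambda p, y: (p - y) ** 2)
```

An OOD-minimizing learner would pick the predictor that makes this worst-case value small, rather than the one minimizing the pooled average risk.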
Since it is impossible to characterize the latent environments $\mathcal{E}$ without any prior knowledge or structural assumptions, the following invariance assumption is adopted for invariant learning:

Assumption 2.1. There exists a random variable $\Psi^*_S(X)$ such that the following properties hold:
a. Invariance property: for all $e, e' \in \mathrm{supp}(\mathcal{E})$, $P^e(Y \mid \Psi^*_S(X)) = P^{e'}(Y \mid \Psi^*_S(X))$ holds.
b. Sufficiency property: $Y = f(\Psi^*_S) + \epsilon$, $\epsilon \perp X$.

This assumption states that $\Psi^*_S$ is invariant and sufficient for predicting the target $Y$; such a $\Psi^*_S$ is known as an invariant representation with a stable relationship to $Y$ across $\mathcal{E}$. To acquire such a $\Psi^*_S$, a branch of works [4, 14, 18] proposes to find the maximal invariant predictor of an invariance set, defined as follows:

Definition 2.1. The invariance set $\mathcal{I}$ with respect to $\mathcal{E}$ is defined as:

$$\mathcal{I}_{\mathcal{E}} = \{\Psi_S(X) : Y \perp \mathcal{E} \mid \Psi_S(X)\} = \{\Psi_S(X) : H[Y \mid \Psi_S(X)] = H[Y \mid \Psi_S(X), \mathcal{E}]\} \quad (2)$$

where $H[\cdot]$ is the Shannon entropy of a random variable. The corresponding maximal invariant predictor (MIP) of $\mathcal{I}_{\mathcal{E}}$ is defined as $\Psi_S = \arg\max_{\Phi \in \mathcal{I}_{\mathcal{E}}} I(Y; \Phi)$, where $I(\cdot\,; \cdot)$ measures the Shannon mutual information between two random variables.

Firstly, we note that using the maximal invariant predictor of $\mathcal{I}_{\mathcal{E}}$ guarantees OOD optimality, as stated in Theorem 2.1. The formal statement is similar to [18] and can be found in Appendix A.3.

Theorem 2.1. (Optimality Guarantee, informal) If $\Psi^*_S(X)$ satisfies Assumption 2.1 and is the maximal invariant predictor with respect to $\mathcal{E}$, then the solution to the OOD problem in Equation 1 is $\mathbb{E}_Y[Y \mid \Psi^*_S] = \arg\min_f \sup_{e \in \mathrm{supp}(\mathcal{E})} \mathbb{E}[\mathcal{L}(f) \mid e]$.

However, recent works [4, 14] on finding MIP solutions rely on the availability of data from multiple training environments $\mathcal{E}_{tr}$, which is hard to meet in practice.
Further, their validity is largely determined by the given $\mathcal{E}_{tr}$. Since $\mathcal{I}_{\mathcal{E}} \subseteq \mathcal{I}_{\mathcal{E}_{tr}}$, the invariance set regularized by $\mathcal{E}_{tr}$ is often too large, and the learned MIP may contain variant components and fail to generalize well.

Figure 1: The framework of KerHRM. The middle block diagram shows the overall flow of the algorithm, which consists of two modules: the heterogeneity exploration module $\mathcal{M}_c$ and the invariant prediction module $\mathcal{M}_p$. The whole algorithm runs iteratively between $\mathcal{M}_c$ and $\mathcal{M}_p$, where one iteration consists of three steps, illustrated in Sections 3.1, 3.2 and 3.3 respectively.

Based on this, Heterogeneous Risk Minimization (HRM [18]) proposes to generate environments $\mathcal{E}_{tr}$ with minimal $|\mathcal{I}_{\mathcal{E}_{tr}}|$ and to conduct invariant prediction with the learned $\mathcal{E}_{tr}$. However, HRM can only deal with simple scenarios where $X = [\Psi^*_S, \Psi^*_V]^T$ at the raw-feature level ($\Psi^*_S$ being the invariant features and $\Psi^*_V$ the variant ones), and it breaks down when $X = h(\Psi^*_S, \Psi^*_V)$ for an unknown transformation $h(\cdot, \cdot)$, since the decomposition can then only be performed in representation space. In this work, we focus on the integration of latent heterogeneity exploration and invariant learning in the general scenario where the invariant features are latent in $X$, which is easily fulfilled in real applications.

Problem 1. (Problem Setting) Assume that $X = h(\Psi^*_S, \Psi^*_V) \in \mathbb{R}^d$, where $\Psi^*_S$ satisfies Assumption 2.1, $h(\cdot)$ is an unknown transformation function and $\Psi^*_S \perp \Psi^*_V$ (following the functional representation lemma [7]). Given a heterogeneous dataset $D = \{D^e\}_{e \in \mathrm{supp}(\mathcal{E}_{latent})}$ without environment labels, the task is to generate environments $\mathcal{E}_{learn}$ with minimal $|\mathcal{I}_{\mathcal{E}_{learn}}|$ and meanwhile learn invariant models.

3 Method

Remark.
Following the analysis in Section 2, generating environments $\mathcal{E}_{learn}$ with minimal $|\mathcal{I}_{\mathcal{E}_{learn}}|$ is equivalent to generating environments in which $P(Y \mid \Psi^*_V)$ varies as much as possible, so as to exclude the variant parts $\Psi^*_V$ from the invariance set $\mathcal{I}_{\mathcal{E}_{learn}}$. Despite this insight, the latent $\Psi^*_S, \Psi^*_V$ make it impossible to generate $\mathcal{E}_{learn}$ directly. In this work, we propose our Kernelized Heterogeneous Risk Minimization (KerHRM) algorithm with two interactive modules: the frontend $\mathcal{M}_c$ for heterogeneity exploration and the backend $\mathcal{M}_p$ for invariant prediction. Specifically, given pooled data, the algorithm starts with the heterogeneity exploration module $\mathcal{M}_c$, which uses a learned heterogeneity-aware kernel $\kappa_c$ to generate $\mathcal{E}_{learn}$. The learned environments are used by $\mathcal{M}_p$ to produce an invariant direction $\theta_{inv}$ in Neural Tangent Feature (NTF) space that captures the invariant components $\Psi_S$, and $\theta_{inv}$ is then used to guide the gradient descent of the neural network. After that, we update the kernel $\kappa_c$ to be orthogonal to the invariant direction $\theta_{inv}$, so as to better capture the variant components $\Psi_V$ and realize the mutual promotion between $\mathcal{M}_c$ and $\mathcal{M}_p$ iteratively. The whole framework is jointly optimized, so that the mutual promotion between heterogeneity exploration and invariant learning can be fully leveraged. For smoothness we

Algorithm 1 Kernelized Heterogeneous Risk Minimization (KerHRM)
Input: heterogeneous dataset $D = \{D^e = (X^e, Y^e)\}_{e \in \mathcal{E}_{tr}}$
Initialization: MLP model $f_w(\cdot)$ with initialized $w_0$; Neural Tangent Features $\Phi(X) = \nabla_w f_{w_0}(X)$ (fixed in the following); clustering kernel initialized as $\kappa^{(0)}_c(x_1, x_2) = x_1^T x_2$
for $t = 1$ to $T$ do
1. Generate $\mathcal{E}^{(t)}_{learn}$ with the clustering kernel $\kappa^{(t-1)}_c$: $\mathcal{E}^{(t)}_{learn} = \mathcal{M}_c((\Phi(X), Y), \kappa^{(t-1)}_c)$
2. Learn invariant model parameters $\theta^{(t)}_{inv}$ with $\mathcal{E}^{(t)}_{learn}$ in NTF space: $\theta^{(t)}_{inv} = \mathcal{M}_p(\mathcal{E}^{(t)}_{learn})$
3.
Feed back to the neural network $f_w(\cdot)$ with $\theta^{(t)}_{inv}$: $w^{(t)}_{inv} = \arg\min_w \mathcal{L}(w; X, Y) + \mathrm{Reg}(w, \theta^{(t)}_{inv})$
4. Update the clustering kernel $\kappa^{(t)}_c$ with $\theta^{(t)}_{inv}$: $\kappa^{(t)}_c \leftarrow \mathrm{OrthogonalTransform}(\kappa^{(t-1)}_c, \theta^{(t)}_{inv})$
end for

begin with the invariant prediction step to illustrate our algorithm; the flow of the whole algorithm is shown in Figure 1.

3.1 $\mathcal{M}_p$: Invariant Gradient Descent with $\mathcal{E}_{learn}$ (Step 1)

For our invariant learning module $\mathcal{M}_p$, we propose the Invariant Gradient Descent (IGD) algorithm. Taking the learned environments $\mathcal{E}_{learn}$ as input, IGD first performs invariant learning in Neural Tangent Feature (NTF [13]) space to obtain the invariant direction $\theta_{inv}$, and then guides the whole neural network $f_w(\cdot)$ with $\theta_{inv}$ to learn the invariant model's parameters $w_{inv}$.

Neural Tangent Feature Space. NTK theory [13] shows that training a neural network is equivalent to linear regression on non-linear NTFs $\phi(x)$, as in Equation 4. For each data point $x \in \mathbb{R}^d$, where $d$ is the feature dimension, the corresponding feature is $\phi(x) = \nabla_w f_w(x) \in \mathbb{R}^p$, where $p$ is the number of the network's parameters. We would first like to dissect the feature components within $\phi(x)$ by decomposing the invariant and variant components hidden in it. Therefore, we perform a Singular Value Decomposition (SVD) on the NTF matrix:

$$\underbrace{\Phi(X)^T}_{\mathbb{R}^{n \times p}} \approx \underbrace{U}_{\mathbb{R}^{n \times k}} \cdot \underbrace{S}_{\mathbb{R}^{k \times k}} \cdot \underbrace{V^T}_{\mathbb{R}^{k \times p}}, \quad \text{where } p \gg n \geq k \quad (3)$$

Intuitively, in Equation 3, each row $V^T_{j,:}$ of $V^T$ represents the $j$-th feature component of $\mathbb{R}^p$, and we take the $k$ feature components with the largest singular values to represent the data; the rationality of the low-rank decomposition is guaranteed theoretically [26, 19] and empirically [2].
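A minimal numpy sketch of the decomposition in Equation 3, with a random rank-$k$ matrix standing in for the NTF matrix $\Phi(X)^T$ (the sizes are arbitrary assumptions):

```python
import numpy as np

# Sketch of the low-rank decomposition in Eq. (3) on a random rank-k matrix
# standing in for the NTF matrix Phi(X)^T (n x p, with p >> n >= k).
rng = np.random.default_rng(1)
n, p, k = 20, 500, 5
Phi_T = rng.standard_normal((n, k)) @ rng.standard_normal((k, p))  # rank k

U, s, Vt = np.linalg.svd(Phi_T, full_matrices=False)
U, S, Vt = U[:, :k], np.diag(s[:k]), Vt[:k, :]   # keep top-k components

Psi = U @ S                      # reduced features: strengths U_{i,j} * S_{j,j}
assert Psi.shape == (n, k)
assert np.allclose(Psi @ Vt, Phi_T, atol=1e-8)   # top-k part reconstructs Phi
```

Because the stand-in matrix has rank $k$ by construction, the top-$k$ truncation is exact here; for a real NTF matrix the same truncation would be an approximation.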
Since SVD ensures that the feature components are orthogonal, the neural tangent feature of the $i$-th data point can be decomposed as $\phi(x_i)^T \approx \sum_{j=1}^{k} U_{i,j} \cdot S_{j,j} \cdot V^T_{j,:}$, where $U_{i,j} \cdot S_{j,j}$ denotes the strength of the $j$-th feature component in the $i$-th data point. However, since neural networks have millions of parameters, the high dimension prevents us from learning directly on the high-dimensional NTFs $\Phi(X)$. Therefore, we rewrite the initial linear regression as:

$$f_w(X) \approx f_{w_0}(X) + \Phi(X)^T(w - w_0) \approx f_{w_0}(X) + USV^T(w - w_0) \quad (4)$$
$$= f_{w_0}(X) + \Psi(X)\left(V^T(w - w_0)\right) = f_{w_0}(X) + \Psi(X)\theta \quad (5)$$

where we let $\theta = V^T(w - w_0) \in \mathbb{R}^k$, which reflects how the model parameters $w$ utilize the $k$ feature components. Since $V^T$ is orthogonal, fitting $w - w_0$ with the features $\Phi(X)$ is equivalent to fitting $\theta$ with the reduced NTFs $\Psi(X)$. In this way, we convert the original high-dimensional regression problem into the low-dimensional one in Equation 5, since in wide neural networks $p \gg n \geq k$.

Invariant Learning with Reduced NTFs $\Psi(X)$. We can now perform invariant learning on the reduced NTFs $\Psi(X)$ in a linear space. In this work, we adopt the invariant regularizer proposed in [14] to learn $\theta = V^T(w - w_0)$, due to its optimality guarantees, and the objective function is:

$$\theta_{inv} = \arg\min_{\theta} \sum_{e \in \mathcal{E}_{learn}} \mathcal{L}^e(\theta; \Psi, Y) + \alpha \cdot \mathrm{Var}_{\mathcal{E}_{learn}}(\nabla_\theta \mathcal{L}^e) \quad (6)$$

Guide the Neural Network with the invariant direction $\theta_{inv}$. With the learned $\theta_{inv}$, it remains to feed back to the neural network's parameters $w$. For neural networks with millions of parameters ($p \approx 10^8$), it is difficult to obtain $w$ directly as $w = w_0 + V\theta_{inv}$. Therefore, we design a loss function that promotes the alignment $(w - w_0) \parallel V\theta_{inv}$.
Note that $f_w(X) = f_{w_0}(X) + USV^T(w - w_0) = f_{w_0}(X) + US\theta_{inv}$ should hold, so we have

$$S^{-1}U^T(f_w(X) - f_{w_0}(X)) = \theta_{inv} \quad (7)$$

Therefore, we can ensure that the updated parameters $w$ satisfy that $S^{-1}U^T(f_w(X) - f_{w_0}(X)) \in \mathbb{R}^k$ is parallel to $\theta_{inv}$, which leads to the following loss function:

$$w_{inv} = \arg\min_w \mathcal{L}(w; X, Y) + \lambda \left(1 - \frac{\left|\left\langle \theta_{inv},\, S^{-1}U^T(f_w(X) - f_{w_0}(X)) \right\rangle\right|}{\|\theta_{inv}\| \, \|S^{-1}U^T(f_w(X) - f_{w_0}(X))\|}\right) \quad (8)$$

where $\mathcal{L}(w; X, Y)$ is the empirical prediction loss over the training data and the second term enforces the invariance property of the neural network.

3.2 Variant Component Decomposition with $\theta_{inv}$ (Step 2)

The core of KerHRM is the mutual promotion between the heterogeneity exploration module $\mathcal{M}_c$ and the invariant learning module $\mathcal{M}_p$. From our insight, we should leverage the variant components $\Psi_V$ to exploit the latent heterogeneity. Therefore, with a better invariant direction $\theta_{inv}$ learned by $\mathcal{M}_p$ that captures the invariant components in the data, it remains to capture better variant components $\Psi_V$ so as to further accelerate the heterogeneity exploration procedure. To this end, we design a clustering kernel $\kappa_c$ on the reduced NTF space $\mathbb{R}^k$ with the help of the $\theta_{inv}$ learned in Section 3.1. Recalling the NTF decomposition in Equation 3, the initial similarity of two data points $x_i$ and $x_j$ can be written as:

$$\kappa^{(0)}_c(x_i, x_j) = \phi(x_i)^T\phi(x_j) = \langle U_i S, U_j S \rangle \quad (9)$$

With the invariant direction $\theta^{(t)}_{inv}$ learned by $\mathcal{M}_p$ in iteration $t$, we can wipe out the invariant components used by $\theta^{(t)}_{inv}$ via

$$\Psi^{(t+1)}_V(x_i) \leftarrow U_i S - \left\langle U_i S, \theta^{(t)}_{inv} \right\rangle \theta^{(t)}_{inv} / \|\theta^{(t)}_{inv}\|^2 \quad (10)$$

which gives a new heterogeneity-aware kernel that better captures the variant components $\Psi^*_V$: $\kappa^{(t+1)}_c(x_i, x_j) = \Psi^{(t+1)}_V(x_i)^T \Psi^{(t+1)}_V(x_j)$.
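The kernel update in Equation 10 is a plain orthogonal projection. A minimal sketch, with random matrices standing in for the reduced NTFs $U_i S$ and for $\theta_{inv}$:

```python
import numpy as np

# Sketch of the kernel update in Eq. (10): project the invariant direction
# theta_inv out of the reduced NTFs U_i S. Random matrices stand in for
# U S and theta_inv; with a unit-norm theta_inv the 1/||theta||^2 factor is 1.
rng = np.random.default_rng(2)
n, k = 50, 4
Psi = rng.standard_normal((n, k))        # rows play the role of U_i S
theta_inv = rng.standard_normal(k)
theta_inv /= np.linalg.norm(theta_inv)

Psi_V = Psi - np.outer(Psi @ theta_inv, theta_inv)   # Eq. (10)
K_c = Psi_V @ Psi_V.T                                # heterogeneity-aware kernel

# Span(Psi_V) lies in Ker(theta_inv), matching Theorem 4.2's orthogonality.
assert np.allclose(Psi_V @ theta_inv, 0.0, atol=1e-10)
```

After the projection, the kernel $K_c$ is blind to the invariant direction, so the subsequent clustering can only exploit variant components.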
3.3 $\mathcal{M}_c$: Heterogeneity Exploration with $\kappa_c$ (Step 3)

$\mathcal{M}_c$ takes a heterogeneous dataset as input and outputs a learned multi-environment partition $\mathcal{E}_{learn}$ for the invariant prediction module $\mathcal{M}_p$. We implement it as a clustering algorithm with kernel regression, given the heterogeneity-aware kernel $\kappa_c(x_i, x_j) = \Psi_V(x_i)^T\Psi_V(x_j)$ that captures the variant components in the data. Following the analysis above, only the variant components $\Psi^*_V$ should be leveraged to identify the latent heterogeneity; we therefore use the kernel $\kappa_c$ as well as the $\Psi_V(X)$ learned in Section 3.2 to capture the differing relationships between $\Psi^*_V$ and $Y$, for which we use $P(Y \mid \Psi_V)$ as the clustering centre. Specifically, we assume the $j$-th cluster centre $P_{\Theta_j}(Y \mid \Psi_V(X))$ to be a Gaussian around $f(\Theta_j; \Psi_V(X))$:

$$h_j(\Psi_V(X), Y) = P_{\Theta_j}(Y \mid \Psi_V(X)) = (\sqrt{2\pi}\sigma)^{-1} \exp\left(-(Y - f(\Theta_j; \Psi_V(X)))^2 / 2\sigma^2\right) \quad (11)$$

For the $N = \sum_{e \in \mathrm{supp}(\mathcal{E}_{latent})} |D^e|$ data points $D = \{\psi_V(x_i), y_i\}_{i=1}^{N}$, the empirical distribution can be modeled as $\hat{P}_N = \frac{1}{N}\sum_{i=1}^{N} \delta_{\psi_V(x_i), y_i}$. Under this setting, we propose a convex clustering algorithm that finds the mixture distribution in the set

$$\mathcal{Q} = \Big\{Q = \sum_{j \in [K]} q_j h_j(\Psi_V(X), Y),\ q \in \Delta_K\Big\} \quad (12)$$

that best fits the empirical data. The original objective function and its simplified form are:

$$\min_{Q \in \mathcal{Q}} D_{KL}(\hat{P}_N \| Q) \iff \min_{\Theta, q} \Big\{\mathcal{L}_c = -\frac{1}{N}\sum_{i \in [N]} \log\Big[\sum_{j \in [K]} q_j h_j(\psi_V(x_i), y_i)\Big]\Big\} \quad (13)$$

Note that our clustering algorithm differs from others in that the cluster centres are learned models parameterized by $\Theta$. For optimization, we use the EM algorithm to optimize the centre parameters $\Theta$ and the mixture weights $q$ iteratively.
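The EM iteration for Equation 13 can be sketched on a toy instance with one-dimensional variant features and linear cluster centres (the slopes $\pm 2$, sizes, fixed $\sigma$ and initialisation are illustrative assumptions; the actual algorithm uses kernel regression with $\kappa_c$ in the M-step):

```python
import numpy as np

# Toy EM for the clustering objective in Eq. (13): 1-D variant features,
# linear cluster centres f(Theta_j; psi) = Theta_j * psi, fixed sigma.
rng = np.random.default_rng(3)
n, K, sigma = 200, 2, 0.5
psi = rng.standard_normal(n)
slope_true = np.where(np.arange(n) < n // 2, 2.0, -2.0)  # two latent environments
y = slope_true * psi + 0.1 * rng.standard_normal(n)

Theta = np.array([1.0, -1.0])        # distinct initial centres
q = np.full(K, 1.0 / K)              # mixture weights on the simplex
for _ in range(10):
    # E-step: responsibilities P_ij proportional to q_j * h_j(psi_i, y_i), Eq. (11)
    resid = y[:, None] - psi[:, None] * Theta[None, :]
    log_h = -resid ** 2 / (2 * sigma ** 2)
    P = q * np.exp(log_h - log_h.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per centre, then update mixture weights
    Theta = (P * (psi * y)[:, None]).sum(axis=0) / (P * (psi ** 2)[:, None]).sum(axis=0)
    q = P.mean(axis=0)

# The two centres recover the two latent slopes (up to ordering), and the
# responsibilities P give the soft environment assignment E_learn.
assert np.allclose(np.sort(Theta), [-2.0, 2.0], atol=0.3)
```

Hard environment labels follow by sampling or argmax over the rows of `P`, mirroring the assignment probability $P_{i,j}$ described next.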
Specifically, when optimizing the cluster centre model $f(\Theta_j; \cdot)$, we use kernel regression with $\kappa_c(\cdot, \cdot)$ to avoid computing $\Psi_V(X)$ explicitly and to allow large $k$. To generate the learned environments $\mathcal{E}_{learn}$, we assign the $i$-th point to the $j$-th cluster with probability $P_{i,j} = q_j h_j(\psi_V(x_i), y_i) / \sum_{l \in [K]} q_l h_l(\psi_V(x_i), y_i)$.

4 Theoretical Analysis

In this section, we provide theoretical justification of the mutual promotion between $\mathcal{M}_c$ and $\mathcal{M}_p$. Since our algorithm does not violate the analysis in [14] and [13], which shows that a better $\mathcal{E}_{learn}$ from $\mathcal{M}_c$ benefits the MIP learned by $\mathcal{M}_p$, to complete the mutual promotion we only need to justify that a better $\theta_{inv}$ from $\mathcal{M}_p$ benefits the learning of $\mathcal{E}_{learn}$ in $\mathcal{M}_c$.

1. Using $\Psi^*_V$ benefits the clustering. Firstly, we introduce Lemma 4.1 from [18], which shows that using $\Psi^*_V$ benefits the clustering in terms of a larger between-cluster distance.

Lemma 4.1. For $e_i, e_j \in \mathrm{supp}(\mathcal{E}_{latent})$, assume that $X$ satisfies Assumption 2.1; then under reasonable assumptions ([18]), we have $D_{KL}(P^{e_i}(Y \mid X) \| P^{e_j}(Y \mid X)) \leq D_{KL}(P^{e_i}(Y \mid \Psi^*_V) \| P^{e_j}(Y \mid \Psi^*_V))$.

Then, similar to [17], we use rate-distortion theory to demonstrate why a larger $D_{KL}$ between cluster centres benefits our convex clustering as well as the quality of $\mathcal{E}_{learn}$.

Theorem 4.1. (Rate-Distortion) For the proposed convex clustering algorithm, we have:

$$\min_{Q \in \mathcal{Q}} D_{KL}(\hat{P}_N \| Q) = \min_{\Theta} I(I; J) + (1/2\sigma^2)\,\mathbb{E}_{I,J}[d(\psi_V(x_i), y_i, \Theta_j)] + \mathrm{Const} \quad (14)$$

where $r_{ij} = P(j \mid \psi_V(x_i), y_i)$ is a discrete random variable over the space $\{1, \ldots, N\} \times \{1, \ldots, K\}$ denoting the probability of the $i$-th data point belonging to the $j$-th cluster, $I, J$ are the marginal distributions of $r_{ij}$, $d(\psi_V(x_i), y_i, \Theta_j) = (f_{\Theta_j}(\psi_V(x_i)) - y_i)^2$, and $I(\cdot\,; \cdot)$ is the Shannon mutual information.
Note that the optimal $r$ can be obtained from the optimal $\Theta$, and therefore we only minimize the r.h.s. with respect to $\Theta$. The distortion $d$ models the conditional distribution $P(Y \mid \Psi_V)$. If, in the underlying distribution of the empirical data, $P(Y \mid \Psi_V)$ differs substantially between clusters, the optimizer will put more effort into minimizing $\mathbb{E}_{I,J}[d(\psi_V(x_i), y_i, \Theta_j)]$ to avoid a large error, leaving less effort for minimizing $I(I; J)$ and resulting in a relatively larger $I(I; J)$. This means the data points $I$ share larger mutual information with the cluster index $J$, so the clustering is prone to be more accurate.

2. Orthogonality property: better $\theta_{inv}$ for better $\Psi_V$. Firstly, we prove the orthogonality between $\theta_{inv}$ (Equation 6) and the parameters $\Theta$ of the clustering centres $f_{\Theta_j}(\cdot)$.

Theorem 4.2. (Orthogonality Property) Denote the data matrix of the $j$-th environment by $X^j$ and let $\Psi^j_V = \Psi_V(X^j)$. Then for each $\Theta_j = ((\Psi^j_V)^T\Psi^j_V)^{-1}(\Psi^j_V)^T Y^j$ ($j \in [K]$), we have $\mathrm{Span}(\Theta) \subseteq \mathrm{Ker}(\theta_{inv})$ and $\mathrm{Span}(\Psi^j_V) \subseteq \mathrm{Ker}(\theta_{inv})$, where $\mathrm{Span}$ denotes the column space and $\mathrm{Ker}$ the null space.

Theorem 4.2 states that the parameter space of the clustering model $f_\Theta(\cdot)$, as well as the space of the learned variant components $\Psi_V$, is orthogonal to the invariant direction $\theta_{inv}$, which indicates that a better invariant direction $\theta_{inv}$ yields better variant components $\Psi_V$ and therefore better heterogeneity. Taking (1) and (2) together, we conclude that a better result ($\theta_{inv}$) of $\mathcal{M}_p$ promotes the latent heterogeneity exploration in $\mathcal{M}_c$ through a larger between-cluster distance. Finally, we use a linear but general setting for further clarification.

Example.
Assume that data points from environments $e \in \mathcal{E}$ are generated as follows:

$$X = Y(\Psi^*_S + \beta_e \Psi^*_V) + N(0, \Sigma) \in \mathbb{R}^d \quad (15)$$

where $Y = \pm 1$ with equal probability, the coefficient $\beta_e$ varies across environments $e$, $\Psi^*_S \in \mathbb{R}^d$ is the invariant feature and, following the functional representation lemma [7], $\Psi^*_V \in \mathbb{R}^d$ is the variant feature with $\Psi^*_V \perp \Psi^*_S$, whose relationship with the target $Y$ depends on the environment-specific $\beta_e$.

Remark. In this example, when $\mathcal{M}_p$ achieves the optimum, we have $\theta_{inv} = \Psi^*_S$, which is the perpendicular bisector hyperplane of the two Gaussian distributions. Then, following Equation 10, we have $\Psi_V = X - (X^T\theta_{inv})\theta_{inv} = Y\beta_e\Psi^*_V$, which directly shows that in the next iteration $\mathcal{M}_c$ uses solely the variant components $\Psi^*_V$ in $X$ to learn environments $\mathcal{E}_{learn}$ with diverse $P(Y \mid X) = P(Y \mid \Psi^*_V)$, which by Lemma 4.1 and Theorem 4.1 gives the best clustering results.

5 Experiments

In this section, we validate the effectiveness of our method on synthetic and real-world data.
Baselines. We compare our proposed KerHRM with the following methods:

• Empirical Risk Minimization (ERM): $\min_\theta \mathbb{E}_{P_{tr}}[\ell(\theta; X, Y)]$
• Distributionally Robust Optimization (DRO [6]): $\min_\theta \sup_{Q: D_f(Q, P_{tr}) \leq \rho} \mathbb{E}_Q[\ell(\theta; X, Y)]$
• Environment Inference for Invariant Learning (EIIL [5]):

$$\min_\Phi \max_u \sum_{e \in \mathcal{E}} \frac{1}{N_e}\sum_i u_i(e)\,\ell(w \odot \Phi(x_i), y_i) + \lambda \Big\| \nabla_{w|w=1.0} \frac{1}{N_e}\sum_i u_i(e)\,\ell(w \odot \Phi(x_i), y_i) \Big\|^2 \quad (16)$$

• Heterogeneous Risk Minimization (HRM [18])
• Invariant Risk Minimization (IRM [1]) with environment labels $\mathcal{E}_{tr}$:

$$\min_\Phi \sum_{e \in \mathcal{E}_{tr}} \mathcal{L}^e + \lambda \big\| \nabla_{w|w=1.0} \mathcal{L}^e(w \odot \Phi) \big\|^2 \quad (17)$$

We choose one typical DRO method [6], as DRO is another main branch of methods for the OOD generalization problem under the same setting as ours (no environment labels). HRM and EIIL are other methods that infer environments for invariant learning without environment labels. We choose IRM as a further baseline for its prominence in invariant learning; note that IRM relies on multiple training environments, so we provide the $\mathcal{E}_{tr}$ labels for it, while the other methods do not need them. Further, as an ablation study, we run KerHRM for only one iteration without the feedback loop and denote it as static KerHRM (KerHRMs). For all experiments, we use a two-layer MLP with 1024 hidden units.

Evaluation Metrics. For tasks with only one testing environment, we simply use the prediction accuracy on the testing environment. For tasks with multiple testing environments, we report $\mathrm{Mean\_Error} = \frac{1}{|\mathcal{E}_{test}|}\sum_{e \in \mathcal{E}_{test}} \mathcal{L}^e$ and $\mathrm{Std\_Error} = \sqrt{\frac{1}{|\mathcal{E}_{test}|-1}\sum_{e \in \mathcal{E}_{test}}(\mathcal{L}^e - \mathrm{Mean\_Error})^2}$, i.e. the mean and standard deviation of the errors across $\mathcal{E}_{test}$, and we use the average mean square error for $\mathcal{L}^e$.
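The two evaluation metrics above can be computed as follows (a small helper; the error list in the usage example is made up):

```python
import numpy as np

# Mean_Error and Std_Error from Section 5: mean and sample standard
# deviation of the per-environment errors L^e across E_test.
def mean_std_error(env_errors):
    env_errors = np.asarray(env_errors, dtype=float)
    mean_err = env_errors.mean()          # Mean_Error
    std_err = env_errors.std(ddof=1)      # Std_Error, 1/(|E_test|-1) normalisation
    return mean_err, std_err

m, s = mean_std_error([3.0, 4.0, 5.0])    # hypothetical errors for 3 test envs
assert m == 4.0 and abs(s - 1.0) < 1e-12
```

Note the `ddof=1` argument, which matches the $1/(|\mathcal{E}_{test}|-1)$ factor in the Std_Error definition.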
5.1 Synthetic Data

Classification with Spurious Correlation. Following [23], we induce a spurious correlation between the label $Y \in \{+1, -1\}$ and a spurious attribute $A \in \{+1, -1\}$. Specifically, each environment is characterized by its bias rate $r \in (0, 1]$: for $100r\%$ of the data $A = Y$, and for the other $100(1-r)\%$ of the data $A = -Y$. Intuitively, $r$ measures the strength and direction of the spurious correlation between $Y$ and $A$: a larger $|r - 0.5|$ signifies a stronger spurious correlation, and $\mathrm{sign}(r - 0.5)$ gives its direction, since there is no spurious correlation when $r = 0.5$. We assume $X = H[S, V]^T \in \mathbb{R}^{2d}$, where $S \in \mathbb{R}^d$ is the invariant feature generated from the label $Y$ and $V$ is the variant feature generated from the spurious attribute $A$:

$$S \mid Y \sim N(Y\mathbf{1}, \sigma_s^2 I_d), \qquad V \mid A \sim N(A\mathbf{1}, \sigma_v^2 I_d) \quad (18)$$

and $H \in \mathbb{R}^{2d \times 2d}$ is a random orthogonal matrix that scrambles the invariant and variant components, which makes the setting more practical. Typically, we set $\sigma_s^2 \geq \sigma_v^2$ so that the model is more prone to use the spurious $V$, since $V$ is then more informative. In training, we set $d = 5$ and generate 2000 data points, 50% from environment $e_1$ with $r_1 = 0.9$ and the rest from environment $e_2$ with bias rate $r_2$. For our method, we set the cluster number $K = 2$. In testing, we generate 1000 data points from environment $e_3$ with $r_3 = 0.1$ to induce a distributional shift from training. In this experiment, we vary the bias rate $r_2$ of environment $e_2$ and the scrambling matrix $H$, which can be an orthogonal or an identity matrix (as done in [1]); results over 10 runs are reported in Table 1. From the results, we have the following observations and analysis: ERM suffers from the distributional shift between training and testing, yielding the worst testing performance.
DRO provides only slight resistance to distributional shifts, which we attribute to the over-pessimism problem [9]. EIIL achieves the best training performance but also performs poorly in testing. HRM outperforms the above three baselines, but its testing accuracy is only around random guessing (0.50), since this setting violates the simple raw-feature assumption of [18]. IRM performs better when the heterogeneity between training environments is large ($r_2$ is small), which verifies our analysis in Section 2 that the performance of invariant learning methods depends heavily on the quality of the given $\mathcal{E}_{tr}$. Compared to all baselines, our KerHRM performs best, with the highest testing accuracy and the lowest gap $(\mathrm{Train\_Acc} - \mathrm{Test\_Acc})$, showing its superiority over IRM and the original HRM.

Further, we empirically analyze the sensitivity of KerHRM to the choice of the cluster number $K$. We set $r_2 = 0.80$ and test the performance with $K \in \{2, 3, 4, 5\}$. Results compared with IRM are shown in Table 2. The cluster number does not need to equal the ground truth (which is 2), and KerHRM is not sensitive to the choice of $K$. Intuitively, we only need the learned environments to reflect the variation of the relationships $P(Y \mid \Psi^*_V)$; we do not require them to match the ground-truth environments. However, we notice that when $K$ is far from a proper value, the convergence of the clustering algorithm is much slower.

Regression with Selection Bias. In this setting, we induce the spurious correlation between the label $Y$ and the spurious attributes $V$ through a selection-bias mechanism similar to that in [15]. We assume $X = H[S, V]^T \in$

Table 1: Results of the classification simulation experiments with varying bias rate $r_2$ and scrambling matrix $H$; each result is averaged over ten runs.
Methods               | r2 = 0.70            | r2 = 0.75            | r2 = 0.80
                      | Train_Acc  Test_Acc  | Train_Acc  Test_Acc  | Train_Acc  Test_Acc
ERM                   | 0.850      0.400     | 0.862      0.325     | 0.875      0.254
DRO                   | 0.857      0.473     | 0.870      0.432     | 0.883      0.395
EIIL                  | 0.927      0.523     | 0.925      0.470     | 0.946      0.463
HRM                   | 0.836      0.543     | 0.832      0.519     | 0.852      0.488
IRM (with Etr label)  | 0.836      0.606     | 0.853      0.544     | 0.877      0.401
KerHRMs               | 0.764      0.671     | 0.782      0.632     | 0.663      0.619
KerHRM                | 0.759      0.724     | 0.760      0.686     | 0.741      0.693

Table 2: Ablation study on the cluster number $K$. Each result is averaged over ten runs.

           | IRM    | HRM    | KerHRM (K=2) | KerHRM (K=3) | KerHRM (K=4) | KerHRM (K=5)
Train_Acc  | 0.877  | 0.852  | 0.741        | 0.758        | 0.756        | 0.753
Test_Acc   | 0.401  | 0.488  | 0.693        | 0.687        | 0.698        | 0.668

$\mathbb{R}^d$ and $Y = f(S) + \epsilon$, where $f(\cdot)$ is a non-linear function such that $P(Y \mid S)$ remains invariant across environments while $P(Y \mid V)$ changes arbitrarily. For simplicity, we select a data point $(x_i, y_i)$ with probability $\hat{P}(x_i, y_i)$ according to a certain variable $V_b \in V$:

$$\hat{P}(x_i, y_i) = |r|^{-5 \cdot |y_i - \mathrm{sign}(r) \cdot V_b|} \quad (19)$$

where $|r| > 1$. Intuitively, $r$ controls the strength and direction of the spurious correlation between $V_b$ and $Y$ (if $r > 0$, a data point whose $V_b$ is close to its $y$ is more likely to be selected). A larger $|r|$ means a stronger spurious correlation between $V_b$ and $Y$; $r > 0$ means a positive correlation and vice versa. Therefore, we use $r$ to define different environments. In training, we generate 1000 points from environment $e_1$ with a predefined $r$ and 100 points from $e_2$ with $r = -1.1$. In testing, to simulate distributional shifts, we generate data points for 6 environments with $r \in \{-2.9, -2.7, \ldots, -1.9\}$. We compare our KerHRM with ERM, DRO, EIIL and IRM, and conduct experiments with different settings of $r$ and the scrambling matrix $H$.
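A minimal sketch of the selection mechanism in Equation 19. The inputs $y$ and $V_b$ below are standard-normal placeholders (the paper generates $Y = f(S) + \epsilon$); only the acceptance probability $|r|^{-5|y - \mathrm{sign}(r)V_b|}$ is taken from the text.

```python
import numpy as np

# Sketch of the selection-bias mechanism in Eq. (19): a point is kept with
# probability |r|^(-5|y - sign(r) V_b|), so sign(r) sets the direction and
# |r| the strength of the spurious correlation between V_b and Y.
rng = np.random.default_rng(6)

def select(y, v_b, r):
    p_keep = np.abs(r) ** (-5.0 * np.abs(y - np.sign(r) * v_b))
    return rng.random(len(y)) < p_keep

n = 20000
v_b = rng.standard_normal(n)
y = rng.standard_normal(n)
keep = select(y, v_b, r=1.9)   # r > 0: points with V_b close to y survive
corr = np.corrcoef(y[keep], v_b[keep])[0, 1]
assert corr > 0.3              # strong spurious correlation after selection
```

Before selection, $y$ and $V_b$ are independent; after selection, they are strongly and spuriously correlated, which is exactly the shift the testing environments (with negative $r$) break.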
From the results in Table 3, we have the following analysis: ERM, DRO and EIIL perform poorly, with high average and stability errors, similar to the classification experiments (Table 1). The results of HRM differ sharply between the two scenarios: Scenario 1 corresponds to the simple raw-feature setting ($H = I$) of [18], while Scenario 2 violates that setting with a random orthogonal $H$ and greatly harms HRM. Compared to all baselines, our KerHRM achieves the lowest average error in 5 of the 6 settings, and its superiority is especially obvious in the more general setting (Scenario 2).

Colored MNIST. To further validate our method's capacity in general settings, we use the Colored MNIST dataset, where the data $X$ are a high-dimensional non-linear transformation of invariant features (digits, which determine $Y$) and variant features (color $C$). Following [1], we build a synthetic binary classification task in which each image is colored either red or green in a way that strongly and spuriously correlates with the class label $Y$. Firstly, a binary label $Y$ is assigned to each image according to its digit: $Y = 0$ for digits 0-4 and $Y = 1$ for digits 5-9. Secondly, we sample the color id $C$ by flipping $Y$ with probability $e$, which defines the environments: $e = 0.1$ for the first training environment, $e = 0.2$ for the second training environment and $e = 0.9$ for the testing environment. Thirdly, we induce label noise by randomly flipping the label $Y$ with probability 0.2. We randomly sample 2500 images for each environment; the two training environments are mixed without environment labels $\mathcal{E}_{tr}$ for ERM, DRO, EIIL, HRM, KerHRMs and KerHRM, while for IRM the $\mathcal{E}_{tr}$ labels are provided. For IRM, we sample 1000 data points from each of the two training environments and select the hyper-parameters that maximize the minimum accuracy over the two validation environments.
Note that we have no access to the testing environment during training, so we cannot resort to testing data for model selection, which is more reasonable than, and different from, the protocol in [1]. For the other methods, since we have no access to $\mathcal{E}$ labels, we simply pool the 2000 data points for validation. The results are shown in Table 4, where Perfect Inv. Model represents the oracle

Table 3: Results of the selection-bias simulation experiments with varying selection bias $r$ and scrambling matrix $H$; each result is averaged over ten runs.

Scenario 1: Non-Scrambled Setting ($H = I$, varying $r$)

Methods               | r = 1.5               | r = 1.9               | r = 2.3
                      | Mean_Error  Std_Error | Mean_Error  Std_Error | Mean_Error  Std_Error
ERM                   | 5.056       0.223     | 5.442       0.204     | 5.503       0.234
DRO                   | 4.571       0.205     | 4.908       0.180     | 5.081       0.209
EIIL                  | 5.006       0.211     | 5.252       0.172     | 5.428       0.205
HRM                   | 3.625       0.057     | 3.901       0.050     | 4.017       0.082
IRM (with Etr label)  | 3.873       0.176     | 4.536       0.172     | 4.509       0.194
KerHRMs               | 4.384       0.191     | 3.989       0.195     | 3.527       0.178
KerHRM                | 4.112       0.182     | 3.659       0.186     | 3.409       0.174

Scenario 2: Scrambled Setting (random orthogonal $H$, varying $r$)

Methods               | r = 1.5               | r = 1.9               | r = 2.3
                      | Mean_Error  Std_Error | Mean_Error  Std_Error | Mean_Error  Std_Error
ERM                   | 5.059       0.229     | 5.285       0.207     | 5.478       0.211
DRO                   | 4.494       0.212     | 4.717       0.175     | 4.978       0.207
EIIL                  | 4.945       0.215     | 5.207       0.187     | 5.294       0.220
HRM                   | 4.397       0.096     | 4.801       0.142     | 4.721       0.096
IRM (with Etr label)  | 4.269       0.218     | 4.477       0.174     | 4.392       0.178
KerHRMs               | 4.379       0.205     | 3.543       0.169     | 3.571       0.164
KerHRM                | 4.122       0.195     | 3.375       0.163     | 3.473       0.160

results achievable under this setting. We run each method 5 times and report the average accuracy; since the variances of all methods are relatively small, we omit them in the table.

Table 4: Colored MNIST results. The first row indicates whether each method needs environment labels. Perfect Inv. Model represents the oracle results that can be achieved. The Generalization Gap is defined as (Test Accuracy − Train Accuracy).
Method             | ERM    | DRO    | EIIL   | HRM    | IRM    | KerHRMs | KerHRM | Perfect Inv. Model
Need Etr Label?    | No     | No     | No     | No     | Yes    | No      | No     | --
Train Accuracy     | 0.845  | 0.644  | 0.777  | 0.835  | 0.766  | 0.802   | 0.654  | 0.800
Test Accuracy      | 0.106  | 0.419  | 0.542  | 0.282  | 0.468  | 0.296   | 0.648  | 0.800
Generalization Gap | -0.739 | -0.223 | -0.235 | -0.553 | -0.298 | -0.506  | -0.006 | --

From the results, our KerHRM generalizes HRM to much more complicated data and consistently achieves the best performance. KerHRM even significantly outperforms IRM in an unfair comparison where IRM is given perfect environment labels, which shows the limitation of manually labeled environments. Further, to best show the mutual promotion between $\mathcal{M}_c$ and $\mathcal{M}_p$, we plot in Figure 2 the training and testing accuracy, as well as the KL-divergence $D_{KL}$ of $P(Y \mid C)$ between the learned $\mathcal{E}_{learn}$, over the iterations. Figure 2 first validates the mutual promotion between $\mathcal{M}_c$ and $\mathcal{M}_p$, since $D_{KL}$ and testing accuracy escalate synchronously over the iterations. Secondly, it corroborates our analysis in Section 2 that the performance of invariant learning is highly correlated with the heterogeneity of $\mathcal{E}_{tr}$, which sheds light on the importance of leveraging the intrinsic heterogeneity in training data for invariant learning.

5.2 Real-world Data

In this experiment, we test our method on a real-world regression dataset (Kaggle) of house sale prices from King County, USA³, where the target variable is the transaction price of the house and each sample contains 17 predictive variables, such as the built year, the number of bedrooms, and the square footage of the home. Since it is reasonable to assume that the relationships between the predictive variables and the target vary over time (for example, the pricing mode may change over time), there exist distributional shifts in the price-prediction task with respect to the built year of the houses.
Specifically, the houses in this dataset were built between 1900 and 2015, and we divide the whole dataset into 6 periods, each spanning two decades; later periods exhibit larger distributional shifts. We train all methods on the first period, where built_year ∈ [1900, 1920), test on the other 5 periods, and report the average results over 10 runs in Figure 3. For IRM, we further divide period 1 into two decades to provide the $\mathcal{E}_{tr}$ labels.

³https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data

Figure 2: Results for the Colored MNIST task. We plot the training and testing accuracy, as well as the KL-divergence between the learned $\mathcal{E}_{learn}$.

Figure 3: Results for the real-world regression task. We train all methods on $e_1$, test on the others, and report the average results over 10 runs.

Analysis. The testing errors of ERM and DRO increase sharply across environments, indicating the existence of distributional shifts between environments. IRM performs better than ERM and DRO, which shows the usefulness of environment labels for OOD generalization and the possibility of learning an invariant predictor from multiple environments. The proposed KerHRM outperforms EIIL and HRM, which validates its superior heterogeneity exploration. KerHRM even outperforms IRM, which indicates the limitation of manually labeled environments in invariant learning and the necessity of latent heterogeneity exploration.

6 Limitations

Although the proposed KerHRM is a competitive method, it has several limitations. Firstly, since $\mathcal{M}_c$ takes model parameters as cluster centres, a strict convergence guarantee for our clustering algorithm $\mathcal{M}_c$ is quite hard to establish.
And empirically, we find that when the pre-defined cluster number K is far from the ground truth, the convergence of Mc becomes quite slow. Further, such a restriction also affects the analysis of the mutual promotion between Mc and Mp, for which we can only provide some empirical verification. Besides, although we incorporate the Neural Tangent Kernel to deal with data beyond the raw feature level, how to deal with more complicated data remains unsolved. Also, how to incorporate deep learning into the mutual promotion between the two modules needs further investigation, and we leave it for future work. 7 Conclusion In this paper, we propose the KerHRM algorithm for the OOD generalization problem, which achieves both latent heterogeneity exploration and invariant prediction. From our theoretical and empirical analysis, we find that the heterogeneity of environments plays a key role in invariant learning, which is consistent with some recent analysis [20] and opens a new line of research for the OOD generalization problem. Our code is available at https://github.com/LJSthu/Kernelized-HRM. Acknowledgements This work was supported by the National Key R&D Program of China (No. 2018AAA0102004). A Appendix A.1 Experimental Details In this section, we introduce the experimental details as well as additional results. In all experiments, we take k ∈ {10, 15, 20, 25} for our KerHRM and select the best one according to the validation results. Classification with Spurious Correlation For our synthetic data, we set σ_s² = 3.0 and σ_v² = 0.3 to make the model more prone to using the spurious V, since V is more informative. Regression with Selection Bias In this setting, the correlations among covariates are perturbed through a selection bias mechanism. According to Assumption 2.1, we assume X = H[S, V]^T ∈ R^d, where S = [S_1, S_2, ..., S_{n_s}]^T ∈ R^{n_s} is independent of V = [V_1, V_2, ..., V_{n_v}] ∈ R^{n_v}, while the covariates in S are dependent on each other.
We assume Y = f(S) + ε, and that P(Y|S) remains invariant across environments while P(Y|V) can arbitrarily change. Therefore, we generate training data points with the help of auxiliary variables Z ∈ R^{n_s+1} as follows:

Z_1, ..., Z_{n_s+1} ~iid N(0, 1.0)   (20)
V_1, ..., V_{n_v} ~iid N(0, 1.0)   (21)
S_i = 0.8 · Z_i + 0.2 · Z_{i+1},  for i = 1, ..., n_s   (22)

To induce model misspecification, we generate Y as:

Y = f(S) + ε = θ_s^T S + β · S_1 S_2 S_3 + ε   (23)

where θ_s = [1/2, −1, 1, −1/2, 1, −1, ...] ∈ R^{n_s} and ε ~ N(0, 0.3). For our synthetic data, we set β = 5.0, n_s = 5 and d = 10. As we assume that P(Y|S) remains unchanged while P(Y|V) can vary across environments, we design a data selection mechanism to induce this kind of distribution shift. For simplicity, we select data points according to a certain variable V_b ∈ V:

P̂(x, y) = |r|^(−5·|y − sign(r)·V_b|)   (24)
μ ~ Uni(0, 1)   (25)
M(r; (x, y)) = 1 if μ ≤ P̂, and 0 otherwise   (26)

where |r| > 1. Given a certain r, a data point (x, y) is selected if and only if M(r; (x, y)) = 1. Intuitively, r controls the strength and direction of the spurious correlation between V_b and Y (i.e., if r > 0, a data point whose V_b is close to its Y is more likely to be selected). A larger |r| means a stronger spurious correlation between V_b and Y, and r ≥ 0 means positive correlation and vice versa. Therefore, we use r to define different environments. A.2 Proof of Theorems A.2.1 Proof of Theorem 2.1 First, we would like to prove that a random variable satisfying Assumption 2.1 is the MIP. Theorem A.1. A representation Ψ*_S ∈ I satisfying Assumption 2.1 is the maximal invariant predictor. Proof. →: To prove Ψ*_S = arg max_{Z∈I} I(Y; Z).
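The generation process in Eqs. (20)-(26) can be sketched with NumPy as follows. This is a minimal vectorized sketch under our own assumptions: the function name is ours, V_b is taken to be the first variant covariate, and candidates are drawn in one batch and filtered rather than sampled one by one:

```python
import numpy as np

def generate_selection_biased(n_candidates, r, ns=5, nv=5, beta=5.0, seed=0):
    """Sketch of the selection-biased generation in Eqs. (20)-(26), |r| > 1."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(0.0, 1.0, (n_candidates, ns + 1))     # Eq. (20)
    V = rng.normal(0.0, 1.0, (n_candidates, nv))         # Eq. (21)
    S = 0.8 * Z[:, :ns] + 0.2 * Z[:, 1:]                 # Eq. (22)
    theta_s = np.resize([0.5, -1.0, 1.0, -0.5, 1.0, -1.0], ns)
    eps = rng.normal(0.0, 0.3, n_candidates)
    Y = S @ theta_s + beta * S[:, 0] * S[:, 1] * S[:, 2] + eps   # Eq. (23)
    Vb = V[:, 0]                                         # our choice of V_b
    P_hat = np.abs(r) ** (-5.0 * np.abs(Y - np.sign(r) * Vb))    # Eq. (24)
    keep = rng.uniform(size=n_candidates) <= P_hat       # Eqs. (25)-(26)
    return S[keep], V[keep], Y[keep]

# One biased environment: r = 1.9 keeps mostly points where V_b tracks Y.
S_sel, V_sel, Y_sel = generate_selection_biased(20_000, r=1.9)
```

Varying r across calls (e.g., r ∈ {1.9, -1.3, -2.7}) then yields environments with spurious correlations of different strength and sign, as described above.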
If Ψ*_S is not the maximal invariant predictor, assume Φ' = arg max_{Z∈I} I(Y; Z). By the functional representation lemma, considering (Ψ*_S, Φ'), there exists a random variable Φ_extra such that Φ' = σ(Ψ*_S, Φ_extra) and Ψ*_S ⊥ Φ_extra. Then

I(Y; Φ') = I(Y; Ψ*_S, Φ_extra) = I(f(Ψ*_S); Ψ*_S, Φ_extra) = I(f(Ψ*_S); Ψ*_S).

←: To prove that the maximal invariant predictor Ψ*_S satisfies the sufficiency property in Assumption 2.1. The contrapositive proposition is:

Y ≠ f(Ψ*_S) + ε → Ψ*_S ≠ arg max_{Z∈I} I(Y; Z)   (27)

Suppose Y ≠ f(Ψ*_S) + ε and Ψ*_S = arg max_{Z∈I} I(Y; Z), and suppose Y = f(Φ') + ε where Φ' ≠ Ψ*_S. Then we have:

I(f(Φ'); Ψ*_S) ≤ I(f(Φ'); Φ')   (28)

Therefore, Φ' = arg max_{Z∈I} I(Y; Z), a contradiction. Then we provide the proof of Theorem 2.1 with Assumption A.1. Assumption A.1 (Heterogeneity Assumption). For the random variable pair (X, Φ*) with Φ* satisfying Assumption 2.1, by the functional representation lemma [7] there exists a random variable Ψ* such that X = X(Φ*, Ψ*); we then assume P^e(Y|Ψ*) can arbitrarily change across environments e ∈ supp(E). Theorem A.2. Let g be a strictly convex, differentiable function and let D be the corresponding Bregman loss function. Let Ψ*_S be the maximal invariant predictor with respect to I_E, and put h*(X) = E_Y[Y|Ψ*_S]. Under Assumption A.1, we have:

h* = arg min_h sup_{e∈supp(E)} E[D(h(X), Y)|e]   (29)

Proof. Firstly, according to Theorem A.1, Ψ*_S satisfies Assumption 2.1.
Consider any function h; we would like to prove that for each distribution P^e (e ∈ E), there exists an environment e' such that:

E[D(h(X), Y)|e'] ≥ E[D(h*(X), Y)|e]   (30)

For each e ∈ E with density ([Ψ_S, Ψ_V], Y) ↦ P(Ψ_S, Ψ_V, Y), we construct an environment e' with density Q(Ψ_S, Ψ_V, Y) that satisfies (omitting the superscript * of Ψ_S and Ψ_V for simplicity):

Q(Ψ_S, Ψ_V, Y) = P(Ψ_S, Y) Q(Ψ_V)   (31)

Note that such an environment e' exists because of the heterogeneity property assumed in Assumption A.1. Then we have:

∫ D(h(ψ_s, ψ_v), y) q(ψ_s, ψ_v, y) dψ_s dψ_v dy   (32)
= ∫_{ψ_v} ∫_{ψ_s, y} D(h(ψ_s, ψ_v), y) p(ψ_s, y) q(ψ_v) dψ_s dy dψ_v   (33)
= ∫_{ψ_v} [ ∫_{ψ_s, y} D(h(ψ_s, ψ_v), y) p(ψ_s, y) dψ_s dy ] q(ψ_v) dψ_v   (34)
≥ ∫_{ψ_v} [ ∫_{ψ_s, y} D(h*(ψ_s, ψ_v), y) p(ψ_s, y) dψ_s dy ] q(ψ_v) dψ_v   (35)
= ∫_{ψ_v} [ ∫_{ψ_s, y} D(h*(ψ_s), y) p(ψ_s, y) dψ_s dy ] q(ψ_v) dψ_v   (36)
= ∫_{ψ_s, y} D(h*(ψ_s), y) p(ψ_s, y) dψ_s dy   (37)
= ∫_{ψ_s, ψ_v, y} D(h*(ψ_s), y) p(ψ_s, ψ_v, y) dψ_s dψ_v dy   (38)

A.2.2 Proof of Lemma 4.1 Firstly, we add the assumption in [18]. Assumption A.2. Assume the pooled training data is made up of heterogeneous data sources: P_tr = Σ_{e∈supp(E_tr)} w_e P^e. For any e_i, e_j ∈ E_tr with e_i ≠ e_j, we assume

I^c_{i,j}(Y; Φ*|Ψ*) ≥ max(I_i(Y; Φ*|Ψ*), I_j(Y; Φ*|Ψ*))   (40)

where Φ* is the invariant feature and Ψ* the variant one. I_i represents the mutual information in P^{e_i}, and I^c_{i,j} represents the cross mutual information between P^{e_i} and P^{e_j}, which takes the form I^c_{i,j}(Y; Φ|Ψ) = H^c_{i,j}[Y|Ψ] − H^c_{i,j}[Y|Φ, Ψ] with H^c_{i,j}[Y] = −∫ p_{e_i}(y) log p_{e_j}(y) dy. Then the proof of Lemma 4.1 can be found in [18].
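As a toy numerical illustration of the optimality stated in Theorem A.2 (the linear setup and all names below are ours, not from the paper): for the squared loss, which is the Bregman loss induced by g(x) = x², the conditional mean h*(X) = E[Y|Ψ*_S] attains a lower empirical risk than another predictor on data generated as Y = f(Ψ*_S) + ε:

```python
import numpy as np

# Toy setup (ours): Y = f(Psi*) + eps with f(s) = 2s and eps ~ N(0, 0.5^2).
rng = np.random.default_rng(0)
psi = rng.normal(size=10_000)                  # invariant representation Psi*_S
y = 2.0 * psi + rng.normal(0.0, 0.5, 10_000)  # Y = f(Psi*_S) + eps

# Squared loss is a Bregman loss; h*(x) = E[Y|Psi*_S] = 2 * Psi*_S here.
risk_star = np.mean((2.0 * psi - y) ** 2)      # risk of h*, ~ Var(eps) = 0.25
risk_other = np.mean((1.0 * psi - y) ** 2)     # risk of a different predictor
print(risk_star < risk_other)                  # prints True
```

The same comparison holds for any fixed predictor other than h*, which is the content of the minimax statement in Eq. (29) once the adversarial environment e' is constructed as above.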
A.2.3 Proof of Theorem 4.1 Firstly, we transform the clustering objective in Equation 12, making it more suitable for further analysis. The proof can be found in [17]. Theorem A.3. Let Q' be the set of distributions of the complete data random variable (J, Ψ, Y) ∈ {1, 2, ..., K} × R^d × R with elements:

Q'(J = j, Ψ = ψ, Y = y) = q_j h_j(ψ, y),   (41)

i.e., Q'(j, ψ, y) is the probability of a data point (ψ, y) belonging to the j-th cluster. Let P' be the set of distributions on the same random variable (J, Ψ, Y) which have P̂_N as their marginal on (Ψ, Y). Specifically, for any P' ∈ P' we have:

P'(j, ψ, y) = P̂_N(ψ, y) P'(j|ψ, y) = (1/N) r_ij if (ψ, y) = (ψ_i, y_i), and 0 otherwise,   (42)

where r_ij = P'(j|ψ_i, y_i). Then:

min_{Q∈Q} D_KL(P̂_N || Q) = min_{P'∈P', Q'∈Q'} D_KL(P' || Q').   (43)

In the new optimization problem in Equation 43, we optimize P' ∈ P' and Q' ∈ Q'. Specifically, in the former we can optimize r_ij, which is a discrete random variable over the space {1, 2, ..., N} × {1, 2, ..., K}. Meanwhile, in the latter we can optimize {Θ_j}_{j=1}^K and {q_j}_{j=1}^K, which are the cluster centers and cluster weights, respectively. Substituting the definitions of P' and Q' in Equation 42 and Equation 41 into Equation 43, we arrive at the following equation:

D_KL(P' || Q') = (1/N) Σ_{i=1}^N Σ_{j=1}^K r_ij [ log(r_ij / q_j) + β d(ψ_i, y_i, Θ_j) ] + Const,   (44)

where β = 1/(2σ²) is introduced to better illustrate the rate-distortion theorem, and d(ψ_i, y_i, Θ_j) = (f_{Θ_j}(ψ_i) − y_i)².
It is straightforward to show that for any set of values r_ij, setting q_j = (1/N) Σ_{i=1}^N r_ij minimizes the objective; therefore:

D_KL(P' || Q'*(P')) = (1/N) Σ_{i=1}^N Σ_{j=1}^K r_ij [ log( r_ij / ((1/N) Σ_{i'=1}^N r_{i'j}) ) + β d(ψ_i, y_i, Θ_j) ] + Const
= I(I; J) + β E_{I,J}[ d(ψ_i, y_i, Θ_j) ] + Const,   (45)

where I and J are the index random variables of data points and exemplars induced by r_ij. The first term is the mutual information between the random variables I (data points) and J (exemplars) under the empirical distribution, and the second term is the expected value of the pairwise distances under the same distribution on indices. Here, d(ψ_i, y_i, Θ_j) models the conditional distribution P(Y|Ψ). If, in the underlying distribution of the empirical data, P(Y|Ψ) differs a lot between different clusters, then the optimizer will put more effort into minimizing d(ψ_i, y_i, Θ_j), because the clusters are more diverse. This leaves less effort for the minimization of I(I; J), resulting in a relatively larger I(I; J). A larger mutual information between the data points I and the exemplars J means the clustering is more accurate. We can provide another intuition for why a larger I(I; J) means more accurate clustering: for a static dataset to be clustered, setting a larger β makes the effective distances between points larger, resulting in more clusters and thus more accurate clustering. On the other hand, a larger β means the model puts more effort into optimizing d(ψ_i, y_i, Θ_j) and less into minimizing I(I; J), resulting in a larger I(I; J).
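The decomposed objective in Eq. (45) can be evaluated directly from a responsibility matrix r_ij; a minimal sketch (the function name and the toy inputs are ours, and the additive constant is dropped):

```python
import numpy as np

def clustering_objective(R, D, beta):
    """Evaluate Eq. (45): I(I; J) + beta * E_{I,J}[d], constant omitted.

    R: (N, K) responsibilities r_ij, each row summing to 1.
    D: (N, K) distortions d(psi_i, y_i, Theta_j)."""
    N = R.shape[0]
    q = R.mean(axis=0)                       # optimal q_j = (1/N) sum_i r_ij
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(R > 0, np.log(R / q), 0.0)
    mutual_info = np.sum(R * log_ratio) / N  # I(I; J)
    distortion = np.sum(R * D) / N           # E_{I,J}[d]
    return mutual_info + beta * distortion

# Hard, balanced assignment of 2 points to 2 clusters gives I(I;J) = log 2.
R_hard = np.array([[1.0, 0.0], [0.0, 1.0]])
D_zero = np.zeros((2, 2))
print(clustering_objective(R_hard, D_zero, beta=1.0))  # ~0.6931 = log 2
```

The β tradeoff discussed above is visible here: scaling β up weights the distortion term more heavily relative to I(I; J), so the minimizing responsibilities shift toward sharper (higher mutual information) assignments.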
A.2.4 Proof of Theorem 4.2 Firstly, since Ψ_V^{(t+1)}(x_i) ← U_i S − ⟨U_i S, θ_inv^{(t)}⟩ θ_inv^{(t)} / ∥θ_inv^{(t)}∥², we have

⟨Ψ_V^{(t+1)}, θ_inv^{(t)}⟩ = 0   (46)

Therefore, we have

Span(Ψ_V^{(t+1)}) ⊥ θ_inv^{(t)}   (47)

and

Span(Ψ_V^{(t+1)}) ⊆ Ker(θ_inv^{(t)})   (48)

As for the clustering parameters Θ, since the kernel regression is equivalent to linear regression using the mapping function Ψ_V, we can directly derive the analytical solution of Θ_j as:

Θ_j = ((Ψ_V^j)^T Ψ_V^j)^{−1} (Ψ_V^j)^T Y^j,  j ∈ [K]   (49)

where Ψ_V^j denotes the data matrix of environment j and Y^j the corresponding label matrix. Then, since

((Ψ_V^j)^T Ψ_V^j)^{−1} (Ψ_V^j)^T Y^j = (Ψ_V^j)^T ((Ψ_V^j)(Ψ_V^j)^T)^{−1} Y^j,   (50)

we have

Θ_j^T θ_inv = [ (Ψ_V^j)^T ((Ψ_V^j)(Ψ_V^j)^T)^{−1} Y^j ]^T θ_inv   (51)
= 0   (52)

which gives the conclusion. A.3 Limitations and Future Work This work focuses on the integration of latent heterogeneity exploitation and invariant learning at the representation level. To fulfill the mutual promotion between environment inference and invariant learning, we give up deep learning for representation learning, since the representation space in deep learning is hard to analyze theoretically, which makes it quite hard to maintain the properties we need. As an alternative, we leverage the Neural Tangent Kernel (NTK) and convert data into the Neural Tangent Feature (NTF) space, since NTK theory [13] establishes the equivalence between MLPs and kernel regression. However, we have to admit that using the NTF space as the representation space is not as powerful as the representation spaces produced by recent deep learning methods.
But we would like to emphasize the difficulty of incorporating deep learning: we cannot directly use the learned representation for heterogeneity exploitation, because during the invariant representation learning process, deep models gradually extract the latent invariant components Ψ*_S in the data and discard the variant components Ψ*_V. We have to resort to the variant components Ψ*_V rather than the invariant ones Ψ*_S to explore the heterogeneity, but the variant components are discarded during the training of deep models. Therefore, incorporating deep learning while maintaining the mutual promotion is quite hard, and we leave it for future work. A.4 Related Work There are mainly two branches of methods for the OOD generalization problem, namely Distributionally Robust Optimization (DRO) methods [6, 8, 22, 24] and invariant learning methods [1, 3, 5, 14, 18]. To ensure OOD generalization performance, DRO methods [6, 8, 22, 24] aim to optimize the worst-case performance over a distribution set, which is usually characterized by an f-divergence or the Wasserstein distance. However, in real scenarios, it is often necessary for the distribution set to be large in order to contain the potential testing distributions, which results in the over-pessimism problem [10, 11]. Realizing the difficulty of solving the OOD generalization problem without prior knowledge or structural assumptions, invariant learning methods assume the existence of causally invariant relationships and propose to explore them through multiple environments. However, the effectiveness of such methods relies heavily on the quality of the training environments. Further, modern big data are frequently assembled by merging data from multiple sources without explicit source labels, which results in latent heterogeneity in the pooled data and renders these invariant learning methods inapplicable.
Recently, there are methods [5, 18] aiming at relaxing the need for multiple environments in invariant learning. [5] directly infers the environments according to a given biased model first and then performs invariant learning. But the two stages cannot be jointly optimized, and the quality of the inferred environments depends heavily on the pre-provided biased model. Further, for complicated data, using the invariant representation for environment inference is harmful, since the environment-specific features are gradually discarded, causing the extinction of latent heterogeneity and rendering data from different latent environments indistinguishable. [18] designs a mechanism where two interactive modules, for environment inference and invariant learning respectively, can promote each other. However, it can only deal with scenarios where invariant and variant features are decomposed at the raw feature level, and it will break down when the decomposition can only be performed in representation space (e.g., image data)." }, { "url": "http://arxiv.org/abs/2108.13624v2", "title": "Towards Out-Of-Distribution Generalization: A Survey", "abstract": "Traditional machine learning paradigms are based on the assumption that both\ntraining and test data follow the same statistical pattern, which is\nmathematically referred to as Independent and Identically Distributed\n($i.i.d.$). However, in real-world applications, this $i.i.d.$ assumption often\nfails to hold due to unforeseen distributional shifts, leading to considerable\ndegradation in model performance upon deployment. This observed discrepancy\nindicates the significance of investigating the Out-of-Distribution (OOD)\ngeneralization problem. OOD generalization is an emerging topic of machine\nlearning research that focuses on complex scenarios wherein the distributions\nof the test data differ from those of the training data. 
This paper represents\nthe first comprehensive, systematic review of OOD generalization, encompassing\na spectrum of aspects from problem definition, methodological development, and\nevaluation procedures, to the implications and future directions of the field.\nOur discussion begins with a precise, formal characterization of the OOD\ngeneralization problem. Following that, we categorize existing methodologies\ninto three segments: unsupervised representation learning, supervised model\nlearning, and optimization, according to their positions within the overarching\nlearning process. We provide an in-depth discussion on representative\nmethodologies for each category, further elucidating the theoretical links\nbetween them. Subsequently, we outline the prevailing benchmark datasets\nemployed in OOD generalization studies. To conclude, we overview the existing\nbody of work in this domain and suggest potential avenues for future research\non OOD generalization. A summary of the OOD generalization methodologies\nsurveyed in this paper can be accessed at\nhttp://out-of-distribution-generalization.com.", + "authors": "Jiashuo Liu, Zheyan Shen, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui", + "published": "2021-08-31", + "updated": "2023-07-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Contemporary machine learning methodologies have demonstrated their superior proficiency across various domains such as natural language processing, computer vision, recommendation systems, etc. While these techniques have been observed to exceed human-level performance under controlled experimental conditions, a growing body of research has underscored the susceptibility of machine learning models to data distribution shifts. The costs of such errors vary substantially across different applications. 
While minor inconveniences, such as a suboptimal movie recommendation or a misclassified image, are generally tolerable, slight inaccuracies in high-stakes domains such as healthcare or autonomous driving can cause catastrophic consequences. Consequently, the exploration of Out-of-Distribution (OOD) generalization has emerged as a pressing concern in both academic and industrial fields, with a view to enhancing the robustness and reliability of intelligent systems across diverse real-world scenarios. *Equal contribution. †Corresponding Author. arXiv:2108.13624v2 [cs.LG] 27 Jul 2023. Despite the importance of OOD generalization, conventional supervised learning techniques cannot be straightforwardly applied to resolve it. From a theoretical standpoint, the fundamental assumption underpinning classic supervised learning is that of Independent and Identically Distributed (i.i.d.) data, postulating that the training and test datasets originate from the same distribution. However, this assumption is systematically violated in OOD generalization scenarios due to inevitable distributional shifts, rendering classical learning theory inadequate. From an empirical perspective, conventional supervised learning approaches typically focus on minimizing average training errors, greedily incorporating all correlations within the data to improve predictive accuracy. Although this strategy has proven effective in i.i.d. settings, it is detrimental to model performance under distributional shifts, as not all correlations persist in unfamiliar test distributions. Numerous studies [55, 11, 41, 236, 157] demonstrate that, when confronted with severe distributional shifts, models optimized purely based on average training errors perform poorly, often proving inferior even to random guesses. These observations underline the urgent need for tailored methodologies to address OOD generalization problems effectively.
Addressing the Out-of-Distribution (OOD) generalization problem necessitates the resolution of several pivotal issues. First, a formal characterization of the distributional shifts is required, given that training and test data can originate from different distributions. This issue remains largely unresolved in the OOD generalization literature, with various methodological branches adopting distinct approaches to model the potential test distribution. Causal learning techniques [200, 26] formulate training and test distributions using causal structures, with distributional shifts largely attributed to interventions or confounding factors. Invariant learning methodologies [181, 74, 75, 146, 11], on the other hand, primarily focus on real-world scenarios, leveraging data collected across diverse domains. Stable learning methods [131, 132, 236] introduce distributional shifts through selection bias. Second, designing an algorithm with robust OOD generalization performance is a prevalent research focus. This endeavor has given rise to multiple branches of methodologies, each with distinct research objectives, including unsupervised representation learning, supervised model learning, and optimization techniques. Third, evaluating the OOD generalization performance of various methods also poses a significant challenge. This is due to the need for specific datasets as well as evaluation metrics, as traditional benchmarks for the i.i.d. setting are not applicable. This situation further demonstrates the need for curated datasets and evaluation frameworks. In this paper, we aim to deliver a systematic and comprehensive survey of research undertakings in the realm of Out-of-Distribution (OOD) generalization. Our survey adopts an expansive view of the OOD generalization problem, encompassing all stages from its formal definition and methodological approaches, to its evaluation, implications, and prospective directions. 
To our knowledge, this paper represents the first effort to examine OOD generalization in such an extensive, holistic manner. While previous research efforts have addressed related topics (Wang et al. [257] and Zhou et al. [309] primarily focus on domain generalization, and Ye et al. [287] discuss evaluation benchmarks for OOD generalization), these works each contribute a piece of the broader OOD generalization puzzle. In contrast, our work cohesively integrates these disparate elements in a clear and succinct manner. Specifically, we classify existing methods into three categories, based on their position in the overall learning pipeline, and elaborate on the theoretical connections between different methods through the perspective of causality. To further facilitate future research in OOD generalization, we also provide a comprehensive survey of datasets to evaluate learning methods under distribution shifts. The structure of this paper is organized as follows. In Section 2, we formulate the OOD generalization problem, elucidate its relationship with existing research areas, and propose a categorization of methods. Sections 3, 4, and 5 respectively detail the representative methods of each category. Section 6 offers theoretical connections and insights between different methods, while Sections 7 and 8 summarize applicable benchmarks for OOD generalization and its potential implications. Finally, we conclude this paper in Section 9, suggesting promising directions for future research. 2 Problem Definition and Categorization of Methods In this section, we first formalize the overarching Out-of-Distribution (OOD) generalization problem and illustrate its similarities and differences with the classic Independent and Identically Distributed (i.i.d.) learning problem. We then proceed to explore several research domains related to OOD generalization, including Domain Adaptation, Domain Generalization, Federated Learning, and Out-of-Distribution Detection.
Finally, we classify existing methodologies that address OOD generalization into distinct categories based on their respective positions within the entire learning pipeline. 2.1 Problem Definition Let X be the feature space and Y the label space. A parametric model is defined as f_θ : X → Y, which serves as a mapping function from the original features to the label with a learnable parameter θ, and a loss function ℓ : Y × Y → R measures the distance between predictions and ground-truth labels. The classic supervised learning problem is defined as Definition 1. Definition 1 (Supervised Learning). Given a set of n training samples of the form {(x_1, y_1), ..., (x_n, y_n)} which are drawn from the training distribution P_tr(X, Y), a supervised learning problem is to find an optimal model f*_θ which can generalize best on data drawn from the test distribution P_te(X, Y):

f*_θ := arg min_{f_θ} E_{(X,Y)∼P_te}[ ℓ(f_θ(X), Y) ].   (2.1)

Traditional learning algorithms typically assume that both the training and test samples are Independent and Identically Distributed (i.i.d.) realizations from a shared underlying distribution, namely, P_tr(X, Y) = P_te(X, Y). Based on this assumption, the Empirical Risk Minimization (ERM) framework [250], which seeks to minimize the average loss on training samples, is capable of yielding an optimal model that successfully generalizes to test distributions [251]. Specifically, ERM seeks to minimize the following objective:

L_ERM(θ) := (1/n) Σ_{i=1}^{n} ℓ(f_θ(x_i), y_i).   (2.2)

The admirable properties provided by the i.i.d. assumption have served as a strong foundation for the development of numerous learning models over the past few decades. Out-of-Distribution Generalization Problem In real-world scenarios, the test distribution upon which a model is deployed may diverge from the training distribution [202], that is, P_tr(X, Y) ≠ P_te(X, Y).
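The ERM objective in Eq. (2.2) can be minimized by plain gradient descent; a minimal sketch for a linear model under the squared loss (the model class, data, and hyperparameters are our illustrative choices, not prescribed by the survey):

```python
import numpy as np

def erm_fit(X, y, lr=0.1, steps=500):
    """Minimal ERM sketch (Eq. 2.2): gradient descent on the average
    squared loss (1/n) * sum_i (f_theta(x_i) - y_i)^2, f_theta(x) = theta.x."""
    theta = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ theta - y)  # gradient of the average loss
        theta -= lr * grad
    return theta

# Under the i.i.d. assumption, ERM recovers the data-generating parameters.
X = np.random.default_rng(0).normal(size=(200, 2))
theta_true = np.array([1.5, -0.5])
y = X @ theta_true
theta_hat = erm_fit(X, y)
```

The point of the surrounding discussion is precisely that this guarantee breaks once P_tr ≠ P_te: the same `theta_hat` that is optimal on the training distribution can perform arbitrarily badly on a shifted test distribution.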
This distribution shift could be attributed to various factors, such as the temporal or spatial evolution of data, or the sample selection bias inherent in the data collection process, which render the problem more complex than the i.i.d. learning scenario. Moreover, the test distribution that one may encounter is typically unknown due to the nature of applications like stream-based online scenarios, wherein test data are generated in the future. In summary, the general Out-of-Distribution (OOD) generalization problem can be defined as a specific instance of the supervised learning problem wherein the test distribution P_te(X, Y) diverges from the training distribution P_tr(X, Y) and remains unknown during the training phase. There are multiple taxonomies for the OOD generalization problem [154, 223, 246, 27]. This survey primarily focuses on a popular categorization, which classifies distribution shifts as either covariate shifts (changes in the marginal distribution P_X) or concept shifts (changes in the conditional distribution P_{Y|X}), and provides a systematic review of various methodological approaches addressing the OOD generalization problem. Reviews of some specific distribution shifts have been developed relatively independently and can be found in well-established surveys [77, 72]. 2.2 Categorization of Methods To address the challenges posed by unknown distribution shifts, a significant number of efforts have been dedicated to out-of-distribution generalization, resulting in a vast array of relevant methods. The adopted techniques vary extensively, ranging from causality to robustness, and from structure-based to optimization-based strategies. However, to the best of our knowledge, little effort has been made to systematically and comprehensively survey these diverse methodologies within the broader context of OOD generalization, as well as elucidating the differences and interconnections between these works.
In this paper, we aim to bridge this gap by reviewing the related methods of OOD generalization. Broadly speaking, the supervised learning problem, as defined in Equation (2.1), can be divided into three relatively independent components: (1) the representation of features X (e.g., g(X)); (2) the mapping function f_θ(X) from the features X (or g(X)) to the label Y, which is generally also known as the model or inductive bias; (3) the optimization objective. Based on this learning pipeline, we classify existing methods into three categories, according to their respective positions in the pipeline:

• Unsupervised Representation Learning for OOD Generalization includes unsupervised domain generalization and disentangled representation learning, which exploit unsupervised representation learning techniques to initialize a better representation for downstream OOD generalization tasks.

• Supervised Model Learning for OOD Generalization includes invariant representation learning, training strategies, causal learning, invariant risk minimization, stable learning, and heterogeneity-aware invariant learning, which design various model architectures and learning strategies to achieve OOD generalization.

• Optimization for OOD Generalization includes distributionally robust optimization and other variants, which directly formulate the objective of OOD generalization and mainly focus on robust optimization with theoretical guarantees for OOD optimality.

Within each primary category, we have established numerous sub-categories based on differing technical approaches and any additional information prerequisites. 2.3 Discussion on Related Topics We then discuss several research topics related to the OOD generalization problem. Domain Adaptation & Generalization A field related to OOD generalization is domain adaptation, which assumes the accessibility of the testing distribution, whether labeled P_te(X, Y) or unlabeled P_te(X).
Domain adaptation can be viewed as a particular instance of OOD generalization where there is some prior knowledge of the test distribution. Under such conditions, domain adaptation can avail itself of theoretical guarantees [17] which maintain the optimality of the trained model in test scenarios. The detailed exploration of domain adaptation methods is beyond the scope of this paper; interested readers may refer to well-established studies in this area [195, 191, 243, 313, 42, 168]. Over the past decade, domain generalization (DG), a popular branch of methodology, has rapidly garnered research attention [21]. By assuming the heterogeneity of the training data, DG methods utilize additional domain (also known as environment) labels to learn an invariant model that can generalize to unseen and shifted test data. Supported by a number of high-quality benchmarks, domain generalization studies are primarily conducted in the field of computer vision tasks. In this paper, in order to provide balanced content from various fields, we focus on the more general out-of-distribution generalization problem and only introduce a select number of typical DG methods. For a comprehensive introduction to the domain generalization problem itself, one may refer to specific DG surveys [309, 257]. Federated learning Federated learning (FL), proposed by McMahan et al. [177], addresses scenarios where multiple entities (clients) collaborate to solve a machine learning problem under the coordination of a central server or service provider [115]. Over recent years, there has been widespread interest in various aspects of FL, including communication-efficient learning, model ensemble, compression integration, system heterogeneity, data heterogeneity, personalization, and privacy. We direct readers to [115, 256] for a more comprehensive survey. Among these aspects, data heterogeneity is most closely related to OOD generalization problems.
Both OOD and FL problems assume data heterogeneity within their training datasets. In OOD generalization problems, data heterogeneity is leveraged by models to infer invariant models. In FL, data is distributed across clients and is statistically heterogeneous, as the training samples on clients may come from different distributions. Various assumptions [82, 118, 128, 148] are made regarding data heterogeneity to guarantee the performance of FL models, which is assessed as the expected utility across all clients. The key difference between FL and OOD lies in the mode of evaluation. In FL, while distributions on various clients differ, researchers assume that a distribution exists over the clients, and the clients in the training and test datasets are independently drawn from this distribution. Consequently, the training and test datasets in FL are i.i.d. in a certain sense. In contrast, the testing distribution in OOD generalization problems remains unknown.

Out-of-Distribution Detection. Out-of-distribution (OOD) detection [278] aims to identify and reject unfamiliar objects not encountered during training to ensure reliability (e.g., to forward them to experts for safe handling). Unlike OOD generalization tasks, which primarily focus on performance under distribution shifts, the OOD detection community [211, 152, 98, 244] concentrates more on detecting samples from unseen distributions. Some works in this area also pay attention to test samples with non-overlapping labels (i.e., new classes) compared to training data. Another related topic is open set classification, which aims to directly recognize unknown categories in test data [110, 79]. This also differs from OOD generalization, where the label space is shared between the training and test data. In the following sections, we provide a comprehensive and detailed review of OOD generalization methods in the above order and discuss their differences and theoretical connections.
3 Unsupervised Representation Learning

In this section, we review methods that concentrate on unsupervised representation learning, which primarily include unsupervised domain generalization and disentangled representation learning. These methods either independently learn domain-agnostic features, or they employ pre-existing human knowledge to structure and regulate the representation learning process. By doing so, they ensure that the learned representation possesses certain attributes that may facilitate out-of-distribution generalization.

3.1 Unsupervised Domain Generalization

Learning a discriminative and robust representation across diverse distributions, particularly with limited labeled data, can serve as the foundation for out-of-distribution (OOD) generalization ability [173, 301]. The choice of pre-trained weights used for initialization has, however, remained a crucial but often overlooked aspect of OOD generalization. In computer vision, it is conventional to initialize with ImageNet pre-trained weights for OOD generalization. However, this can introduce significant bias. For instance, the "real" domain in DomainNet [304] and the "photo" domain in PACS [143] share a similar distribution with ImageNet, while other domains exhibit distinct shifts. Consequently, such initialization can be viewed as pre-training on one of the source domains. Furthermore, for datasets like NICO++ [94, 299], where image contexts are treated as domains, ImageNet provides additional knowledge of numerous contexts, which could lead to leakage of the test domains [290]. Addressing these challenges, Mahajan et al. [173] and Zhang et al. [301] introduce the concept of unsupervised domain generalization (UDG). This approach aims to learn generalizable models using unlabeled data, while simultaneously analyzing the effects of pre-training on OOD generalization.
Recently, numerous self-supervised learning methods have shown promising results, using large-scale unlabeled data to learn potent representation spaces [34, 92, 32, 84]. However, these methods cannot directly tackle the OOD generalization problem, as the learned representation space contains domain-specific features used to discriminate negative samples. These features can be unhelpful or even detrimental to downstream tasks [298]. To address this, Zhang et al. [301] propose DARLING, a method that outperforms the ImageNet pre-trained approach using significantly less unlabeled data. This indicates a promising direction for model initialization for OOD generalization. In follow-up work, Harary et al. [88] suggest learning an auxiliary bridge domain along with a set of mappings from training domains to semantically align all domains. Another work [276] utilizes Masked Auto-Encoders [93] to further enhance unsupervised representation learning. Other studies [153, 310] also discuss semi-supervised learning approaches for the OOD generalization problem.

3.2 Disentangled Representation Learning

Disentangled representation learning aims to learn representations where distinct and informative factors of data variation are separated [18, 166]. This is considered a characteristic of high-quality representation and can potentially benefit out-of-distribution generalization. The most prevalent methods for achieving disentanglement are based on Variational Autoencoders (VAEs) [101, 121]. These are implemented in an entirely unsupervised manner within a single environment, without additional information. These methods prioritize both interpretability and sparsity. Here, sparsity refers to the idea that small changes in distribution typically manifest in a sparse or localized manner within the disentangled factorization [224].
No Additional Information. β-VAE [101] introduces an extra hyperparameter β into the vanilla VAE objective function, trading off latent bottleneck capacity against independence constraints and thus encouraging the model to learn more efficient representations. The objective function of β-VAE is:

L = E_{q(z|x)}[log p(x|z)] − β · KL(q(z|x) ∥ p(z))    (3.1)

where z represents the latent representation, x denotes the observed data, p(z) is the prior distribution of latent factors, p(x|z) the decoding distribution, and q(z|x) the encoding posterior distribution. When β is set to 1.0, this formulation reduces to the vanilla VAE. By appropriately tuning β, the β-VAE can learn disentangled representations from data in an unsupervised manner. FactorVAE [121] adds a Total Correlation term to the objective function, formulated as the KL-divergence between the marginal posterior q(z) and its factorized counterpart q̄(z):

L = E_{q(z|x)}[log p(x|z)] − KL(q(z|x) ∥ p(z)) − γ · KL(q(z) ∥ q̄(z))    (3.2)

where q̄(z) := ∏_{j=1}^{d} q(z_j). This formulation encourages independence of the posterior latent representation. Since the Total Correlation term cannot be computed directly, an extra discriminator is added for density ratio estimation. More recently, despite the popularity of VAE-based methods without contextual information, Locatello et al. [166] challenge some common assumptions of unsupervised disentangled representation learning (e.g., independence of latent factors). This brings the need for additional information back to the attention of researchers. It also questions whether disentanglement can improve downstream task performance, inspiring later works to take downstream tasks, OOD generalization performance included, into consideration. Among these works, a new category of disentangled representation learning arises, i.e., causal representation learning.
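Returning briefly to the VAE objectives: Eq. (3.1) is easy to sketch numerically. Below is a minimal numpy version, assuming a diagonal-Gaussian encoder q(z|x) = N(μ, diag(exp(log σ²))), a standard-normal prior, and a Gaussian decoder (so the reconstruction term reduces to a squared error up to constants); the function names are illustrative, not from [101].

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Negative of Eq. (3.1): reconstruction error plus beta-weighted KL."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)  # Gaussian decoder, up to constants
    return np.mean(recon + beta * kl_diag_gaussian(mu, log_var))
```

Setting beta=1.0 recovers the vanilla VAE objective, as noted above.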
Similar to conventional disentangled representation learning, causal representation learning aims to learn variables in the causal graph with the aid of auxiliary annotations. Further, causal representation can be viewed as the ultimate goal of disentanglement, which satisfies the informal definition of disentangled representation in terms of interpretability and sparsity. With the learned causal representation, one can capture the latent data generation process, which can help to resist the distributional shifts induced by interventions. CausalVAE [281] combines a linear Structural Causal Model (SCM) with the VAE model to endow the learned latent representation with causal structure. Specifically, the causal structure is depicted by an adjacency matrix A as:

z = Aᵀz + ε = (I − Aᵀ)⁻¹ε,  ε ∼ N(0, I)    (3.3)

where ε represents the exogenous factors. In practice, a mild nonlinear function g_i is introduced for stability, as z_i = g_i(A_i · z; η_i) + ε_i. Further, extra labels u of the latent causal variables are leveraged in CausalVAE, which gives the objective function:

L = −ELBO + α·DAG(A) + β·l_u + γ·l_m    (3.4)

where ELBO represents the Evidence Lower Bound, DAG(A) the Directed Acyclic Graph (DAG) constraint, l_u = E_{q_X} ∥u − σ(Aᵀu)∥²₂ measures how well A describes the causal relations among labels, and l_m = E_{z∼q_φ} Σ_{i=1}^{n} ∥z_i − g_i(A_i · z; η_i)∥²₂ measures how well A describes the causal relations among latent codes. Here q_X is the empirical data distribution and q_φ the approximate posterior distribution. Going a step further, DEAR [233] incorporates a nonlinear SCM into a bidirectional generative model and assumes a known causal graph structure together with extra supervised information on the latent factors. The objective function is given as:

L(E, G, F) = L_gen(E, G, F) + L_sup(E)    (3.5)

where E and G denote the encoder and generator, respectively.
The first part, L_gen(E, G, F) = KL(q_E(x, z), p_{G,F}(x, z)), resembles the VAE loss. The difference lies in the prior distribution of z: in DEAR, this prior is generated by the nonlinear SCM, while in the vanilla VAE it is simply a factorized Gaussian. The second part is L_sup(E) = E_{x,u} CE(Ē(x), u), where CE is the cross-entropy loss function, Ē represents the deterministic part of E, and u the extra labels.

Require Additional Information. Apart from these fully unsupervised disentangled representation learning methods, there are works utilizing additional information toward disentanglement. Additional information, in the broad sense, includes not only environment or domain labels but also auxiliary annotations such as information related to latent causal variables. Reed et al. [210] propose disentangling Boltzmann machines by incorporating partial labels to construct corresponding data pairs. Zhu et al. [312] and Yang et al. [277] both explicitly take ground-truth transformed images and transformations as auxiliary signals. The former is based on a directed graphical model optimized in an EM style, while the latter employs an RNN to capture longer-term dependencies over multiple transformation steps, inspired by mental experiments on human beings. Meanwhile, Kulkarni et al. [134] implicitly take advantage of contextual information by holding specific latent variables fixed, leaving only the other properties varying within a single training batch. Recently, Zhang et al. [296] propose a primal-dual algorithm for joint representation disentanglement and domain generalization, which shows the potential of disentanglement to enhance generalization ability. In addition to the above disentangled representation learning methods, with or without additional information, there exist discussions and explorations on how disentangled representation can benefit OOD generalization. Leeb et al.
[139] take advantage of causal ordering information and conduct quantitative extrapolation experiments, finding that the learned disentangled representation fails to extrapolate to unseen data, while Träuble et al. [247] and Dittadi et al. [49] empirically verify the ability to generalize under OOD circumstances. Reddy et al. [209] propose to utilize bounding box information to achieve disentanglement and introduce a new dataset, CANDLE, that can be applied in multiple settings, including OOD generalization tasks. Lachapelle et al. [135] prove that disentangled representations combined with sparse task-specific predictors can improve generalization. Overall, the advantage of disentangled representation for OOD generalization still requires further in-depth research and discussion.

4 Supervised Model Learning for OOD Generalization

Aside from the solely unsupervised learning of representations, a multitude of studies incorporate supervised information (labels) to devise different model architectures and learning strategies. These methods place emphasis on end-to-end model learning to enhance performance in OOD generalization. Given the vast range of literature in this field, we further categorize these methods based on their additional information requirements. This categorization facilitates a fairer and more comprehensible comparison of the various approaches.

4.1 Require Environment Labels

Numerous existing methods strive to exploit explicit environment labels to improve OOD generalization. Notably, these approaches encompass causal learning, invariant learning, and a variety of training strategies. In this section, we encapsulate the fundamental concepts underlying these techniques.

Causal Learning. Causal learning methods aim to learn the underlying causal structure of the data and to predict the outcome variable based on the identified causal variables.
By correctly identifying the cause-effect relationships, these methods are expected to perform well even when the data distribution changes, as the underlying causal structure is often assumed to remain invariant across different environments or domains.

Invariant Learning. Invariant learning methods aim to learn features or representations that are invariant across different environments. The idea is that by focusing on the aspects of the data that do not change across environments, models can generalize better to new, unseen environments.

Training Strategies. Certain training strategies also utilize explicit environment labels to improve generalization. These methods often involve training models in a way that explicitly takes into account the potential differences between environments. For example, some methods train separate models for each environment or explicitly model the differences between environments.

4.1.1 Causal Learning

Causal learning, rooted in the causal inference literature, provides a principled approach to the problem of OOD generalization. Its primary goal is to leverage causal variables for predictions, making it increasingly practical in recent times. We start with an introduction to the foundational concepts of causal learning, followed by an exploration of various related studies. The underlying assumption of causal learning is captured in Assumption A, which originates in the causal inference literature. It postulates the existence of a causally invariant relationship between the target variable Y and its direct causes X_pa(Y). This assumption implies that the causal variables X_pa(Y) remain stable across different environments or despite biases in data selection. This stability has driven a range of studies aimed at achieving OOD generalization through the exclusive exploitation of causal variables.

Assumption A (Causality Assumption [26]).
The structural equation model

Y^e ← f_Y(X^e_{pa(Y)}, ε^e_Y),  ε^e_Y ⊥ X^e_{pa(Y)}    (4.1)

remains the same across all environments e ∈ supp(E_all); that is, ε^e_Y has the same distribution as ε_Y for all environments. pa(Y) denotes the direct causes of Y.

Next, we explore methods related to causal inference, which endeavor to extract causal variables from heterogeneous data. It is well known that the gold standard for identifying the causal effect of a variable is to carry out randomized experiments, like A/B testing. However, full-scale randomized experiments can be prohibitively expensive and often impractical in real-world applications. The ambitious nature of causal inference or causal structure learning makes these techniques more of an ideal "ground truth" than a practically achievable goal in typical machine learning settings. Therefore, it is more pragmatic to design techniques that provide a more "causal explanation" than the standard regression or classification framework, while also offering a degree of invariance across environments. Following this intuition, a series of methods [200, 201, 218, 96, 73, 189] have been proposed, leveraging the inherent heterogeneity within data (e.g., across multiple environments).

Assumption B (Invariance Assumption). There exists a subset S* ⊆ {1, . . . , p} of the covariate indices (including the empty set) such that

P(Y^e | X^e_{S*}) is the same for all e ∈ E.    (4.2)

That is, when conditioning on the covariates from S* (denoted by X^e_{S*}), the conditional distribution is invariant across all environments from E. Peters et al. [200] first investigate the fact that "invariance" can, to some extent, reveal the causal structure under necessary conditions, and propose Invariant Causal Prediction (ICP).
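Assumptions A and B can be illustrated with a toy simulation. In the hypothetical SCM below, X1 → Y → X2: the mechanism generating Y from its direct cause X1 is fixed, while the strength of the anti-causal Y → X2 edge varies per environment, so the regression of Y on X1 is stable across environments and the regression on X2 is not. This is a sketch under these stated assumptions, not part of [200]:

```python
import numpy as np

def sample_env(strength, n=100_000, seed=0):
    """Toy SCM X1 -> Y -> X2; only the Y -> X2 mechanism varies by environment."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(size=n)
    y = 2.0 * x1 + rng.normal(size=n)        # invariant mechanism f_Y (Assumption A)
    x2 = strength * y + rng.normal(size=n)   # environment-dependent, anti-causal edge
    return x1, x2, y

def ols_slope(x, y):
    """Univariate least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x)
```

Regressing Y on the causal parent X1 recovers a slope close to 2 in every environment, while the slope on the spurious X2 shifts with the environment strength.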
Specifically, they leverage the fact that when considering all direct causes of a target variable, the conditional distribution of the target given the direct causes does not change when intervening on all other variables in the model except the target itself. They then perform a statistical test of whether a subset of covariates S satisfies the invariance assumption B for the observed environments in E. The null hypothesis for testing is

H_{0,S}(E): the invariance assumption holds for S,

and all subsets of covariates S that lead to invariance are intersected, that is:

Ŝ(E) = ⋂ {S : H_{0,S}(E) not rejected by the test at significance level α}.

Under the assumption of a structural equation model with Gaussian residuals described in [200], ICP with the Chow test [39] can, at least with controllable probability 1 − α, discover subsets of the true causal variables, which reads as:

P[Ŝ(E) ⊆ pa(Y)] ≥ 1 − α,    (4.3)

where pa(Y) denotes the direct causes of the target Y (i.e., the parental variables of Y in the causal graph). Though being the first attempt to connect invariance with causality, ICP has several limitations. The most straightforward one is the strict requirement for heterogeneity, since the power of ICP depends highly on the quality of the available environments E_tr (or perturbations). If the available perturbed subpopulations are insufficient, or reduce to a single environment, the efficacy of ICP is lost. As discussed in [201], naively estimating the environments from data and then applying ICP may yield less powerful results, so instead of using static data, Pfister et al. [201] propose to leverage sequential data from a non-stationary environment to detect instantaneous causal relations in multivariate linear time series, which relaxes the assumption of known environments. Besides environmental specification, other works try to consolidate the coverage of invariance-based methods. For example, Heinze-Deml et al.
[96] extend ICP to non-linear models and continuous environments; Gamella and Heinze-Deml [73] apply ICP in an active learning setting where the interventions (a.k.a. environments) can be proactively chosen during training. ICP serves as a milestone towards inferring causal structure via the invariance property. However, the invariance assumption may be violated in more complicated scenarios, the most common case being the existence of hidden confounders. The instrumental variable (IV) method is one typical method for dealing with hidden confounders, which requires the instrument variable E not to act directly on the hidden confounding variable H and the outcome variable Y, as shown in Figure 1a.

Figure 1: Comparison of the SCMs of (a) the traditional IV model and (b) the anchor regression model.

Rothenhäusler et al. [218] investigate more relaxed conditions than the standard IV model, which allow a direct effect of the instrument variable (which they call anchor variables) on H and Y, as shown in Figure 1b. They observe that, despite the attractive notion of invariance guarantees against arbitrarily large interventions or perturbations, one seldom encounters such extreme cases, and exact invariance can be too conservative for moderately perturbed data. Specifically, they focus on the following structural equation:

Y = Xᵀβ + Hᵀα + Aᵀξ + ε_Y,    (4.4)

with X ∈ R^p, H ∈ R^q and A ∈ R^r, and propose a regularized formulation of the ordinary least squares model via projection of the error onto the space spanned by the anchor variables:

β̂(γ) = argmin_b ( ∥(I − Π_A)(Y − Xb)∥²₂/n + γ ∥Π_A(Y − Xb)∥²₂/n ),    (4.5)

where Π_A denotes the projection in Rⁿ onto the column space of A.
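Eq. (4.5) is a quadratic objective with a closed-form solution: since (I − Π_A) and Π_A project onto orthogonal subspaces, the criterion equals an ordinary least-squares problem on data transformed by W = (I − Π_A) + √γ · Π_A. A minimal numpy sketch (the function name is ours, not from [218]):

```python
import numpy as np

def anchor_regression(X, Y, A, gamma):
    """Eq. (4.5) via transformed least squares: solve OLS on (W X, W Y)
    with W = (I - Pi_A) + sqrt(gamma) * Pi_A."""
    n = len(Y)
    Pi = A @ np.linalg.pinv(A)          # projection onto the column space of A
    W = (np.eye(n) - Pi) + np.sqrt(gamma) * Pi
    beta, *_ = np.linalg.lstsq(W @ X, W @ Y, rcond=None)
    return beta
```

As a sanity check, gamma = 1 makes W the identity, so the estimator reduces to plain OLS, matching the limiting case discussed next.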
For γ = 1, β̂(1) equals the ordinary least squares estimator; for γ → ∞ it recovers the two-stage least squares procedure of IV regression. When the observation of the instrument A is still hard to fulfill, Oberst et al. [189] further relax the assumption by introducing a noisy proxy of A and prove the robustness of the method under bounded shifts. Building on these, Mazaheri et al. [176] focus on the setting where the causal and anti-causal variables of the outcome variable are unobserved and propose feature selection and engineering methods to identify proxies.

4.1.2 Invariant Learning

Relaxing causality to invariance, the key idea of invariant learning is to learn an invariant representation or model across environments by leveraging contextual information such as domain labels. Several works [181, 11, 6] theoretically or empirically show that if the representations remain invariant when the domain varies, the representations are transferable and robust on different domains. Methods that seek invariance among different environments can be mainly divided into two categories, namely invariant risk minimization and domain-irrelevant representation learning.

Invariant Risk Minimization. Deriving from causal inference, invariant risk minimization (IRM) [11] targets latent causal mechanisms and extends ICP to more practical and general settings. Different from causal learning methods that act at the raw variable level, IRM makes the following invariance assumption:

Assumption C (IRM's Invariance Assumption). There exists a data representation Φ(X) such that for all e, e′ ∈ supp(E_tr), E[Y | Φ(X^e)] = E[Y | Φ(X^{e′})], where E_tr denotes the available training environments.

Arjovsky et al.
[11] propose to find a data representation Φ(X) that can both predict well and elicit an invariant linear predictor w across E_tr, which results in the following objective:

min_{Φ(X), w} Σ_{e ∈ supp(E_tr)} L^e(w ∘ Φ(X), Y)    (4.6)
s.t. w ∈ argmin_{w̄} L^e(w̄ ∘ Φ(X)), for all e ∈ supp(E_tr)    (4.7)

In order to achieve invariance across E_all by enforcing a low error of Equation (4.6) on E_tr, IRM requires sufficient diversity across environments and makes the following assumption.

Assumption D (IRM's Condition, Assumption 8 in [11]). A set of training environments E_tr lies in linear general position of degree r if |E_tr| > d − r + d/r for some r ∈ N, and for all non-zero x ∈ R^d:

dim( span( {E_{X^e}[X^e X^{eᵀ}]x − E_{X^e, ε^e}[X^e ε^e]}_{e ∈ E_tr} ) ) > d − r    (4.8)

With Assumption D, IRM characterizes under what conditions an invariant predictor w ∘ Φ(X) across E_tr remains invariant across E_all in linear cases (Theorem 9 in [11]). It makes the same assumptions about linearity, centered noise, and independence between the noise ε^e and the causal variables as ICP [200], but does not assume that the data are Gaussian, that a causal graph exists, or that the training environments arise from specific types of interventions. Moreover, the result of IRM extends to latent causal variables, while ICP [200] is restricted to the raw causal feature level. Based on IRM, follow-up works have proposed variations on this objective with similar regularizations of the invariance assumption C, resulting in similar alternatives. Chang et al. [30] and Koyama and Yamaguchi [129] formulate the desired invariant representation using information theory, and propose to find the maximal invariant predictor (MIP) across training environments. The maximal invariant predictor is defined as follows.

Definition 2.
The invariance set I with respect to E is defined as:

I_E = {Φ(X) : Y ⊥ E | Φ(X)} = {Φ(X) : H[Y | Φ(X)] = H[Y | Φ(X), E]}    (4.9)

where H[·] is the Shannon entropy of a random variable. The corresponding maximal invariant predictor (MIP) of I_E is defined as:

S_E = argmax_{Φ ∈ I_E} I(Y; Φ)    (4.10)

where I(·; ·) measures the Shannon mutual information between two random variables. With the invariant predictor S_{E_all}, Koyama and Yamaguchi [129] prove that the OOD-optimal model is given by E[Y | S_{E_all}]. Further, to obtain the MIP solution from the training environments E_tr, Koyama and Yamaguchi [129] derive the regularizer:

Trace( Var_{e ∈ E_tr}( ∇_θ L^e(θ) ) )    (4.11)

where the variance is taken with respect to the training environments E_tr. Under the controllability condition proposed by Koyama and Yamaguchi [129], which assumes that there exists an environment e such that X ⊥ Y | Φ(X), e, the optimality of E[Y | Φ(X)] can be verified, as shown in Proposition 3.1 in [129] and Theorem 1 in [215]. Moreover, Ahuja et al. [4] introduce game theory to this field and substitute the linear classifier in IRM with an ensemble of classifiers originating from different environments. Jin et al. [113] replace IRM's regularizer with predictive regret, imposing more stringent constraints on Φ(X). Krueger et al. [130] propose penalizing the variance of risks across different environments, while Xie et al. [265] suggest a similar objective but swap the original penalty with the square root of the variance. Mahajan et al. [173] present a contrastive regularizer that matches the representations of identical objects across different environments. Creager et al. [41] address IRM's issue of missing environment labels and put forward Environment Inference for Invariant Learning (EIIL), which maximizes IRM's penalty by learning environments.
This two-stage algorithm first generates environments based on a biased reference model, then carries out invariant learning using the learned environments. Xin et al. [266] elucidate the link between invariant learning and adversarial training for OOD generalization and introduce an adversarial training method to mitigate distribution shifts. Fan et al. [64] examine the invariant learning problem from a statistical perspective and propose an environment-invariant linear least squares objective function. And Li et al. [142] suggest an invariant information bottleneck to extract invariant representations via mutual information. While the results from IRM seem encouraging, Rosenfeld et al. [217] highlight some issues with its application to classification tasks. In the linear case, they offer simple conditions under which the optimal solution either succeeds or, more often, fails to recover the optimal invariant predictor. Notably, Rosenfeld et al. [217] prove that a viable solution can outperform the optimal invariant predictor on all e ∈ E_all using only environmental features (Theorem 5.3 in [217]). In a nonlinear context, they illustrate that IRM can fail dramatically unless the test data closely resemble the training distribution (Theorem 6.1 in [217]). Furthermore, Kamath et al. [116] demonstrate that it is possible for IRM to learn a sub-optimal predictor, due to the loss function not being invariant across environments. Ahuja et al. [5] compare IRM with ERM from a sample complexity perspective across different shift patterns (Table 1 in [5]). They conclude that under covariate shifts, IRM does not show a clear advantage over ERM. However, in the case of other distribution shifts involving confounders or anti-causal variables, IRM is likely to be close to the desired OOD solutions within the finite sample regime. Recently, drawing inspiration from PAC learning, Parulekar et al.
[194] provide finite-sample OOD generalization guarantees for approximate invariance. Meanwhile, Wang et al. [254] manage to reduce the required number of training environments to O(1) by using second-order moment information.

Domain-irrelevant representation learning. Ganin and Lempitsky [74] and Ganin et al. [75] first introduce the Domain-Adversarial Neural Network (DANN) for domain adaptation. The goal of DANN is to cultivate representations that are both discriminative and impervious to domain shifts. This is accomplished by jointly optimizing the base features and a label predictor that predicts class labels during both training and inference phases. Concurrently, a domain classifier is trained to distinguish between source and target domains. By developing representations that confuse this domain classifier, domain-invariant features are cultivated. Building on this, Li et al. [146] adapt this framework to scenarios where the target domain information is unknown, while Gong et al. [81] extend adversarial training into the manifold space. Li et al. [149] suggest the utilization of class-specific adversarial networks through a Conditional Invariant Adversarial Network (CIAN). Providing a theoretical foundation, Garg et al. [78] and Sicilia et al. [238] derive generalization bounds for domain adversarial training. Rahman et al. [206] propose a correlation-aware adversarial framework that can be applied to both Domain Adaptation (DA) and OOD generalization. This framework leverages both a correlation alignment metric and adversarial learning to minimize the domain discrepancy between the source and target data. On the application front, Shao et al. [232], Jia et al. [111], and Wang et al. [262] apply domain adversarial learning to face anti-spoofing and unseen-target stance detection. Moreover, Zhao et al.
[305] introduce an entropy-regularization approach that learns invariant representations by minimizing the KL-divergence between the conditional distributions of different source domains. In addition to domain adversarial learning, numerous studies [248, 146, 169, 168, 167, 269] propose aligning features to cultivate domain-invariant representations. Motiian et al. [180] recommend semantic alignment between various domains, achieved by minimizing the distance between samples of the same class but different domains, and maximizing the distance between samples from different classes and domains. Other research focuses on minimizing feature distribution divergence by reducing the Maximum Mean Discrepancy (MMD) distance [192, 248, 255], the Wasserstein distance [308], or second-order correlations [241, 242, 198], for either Domain Adaptation (DA) or Out-of-Distribution (OOD) generalization. In more recent work, Niu et al. [187] integrate knowledge distillation to learn both domain-invariant and domain-specific representations. Yu et al. [291] employ a diffusion model to align training and testing distributions reversibly. Lv et al. [172] suggest extracting causal factors from input data and then reconstructing the invariant causal mechanisms. Wang et al. [259] implement causal invariant transformations that disturb only non-causal features to achieve invariance. Jiang and Veitch [112] exploit the shared causal structure of domains to learn invariant and transferable representations. Meanwhile, Jia and Zhang [109] learn distributional invariance across source domains to align vocabulary and feature distributions using prompting.

Applications. Nowadays, invariant learning has found wide-ranging applications, often yielding improved generalization performance. Wu et al. [264], Chen et al. [35], and Gui et al. [85] have applied invariant learning to graph data, achieving enhanced out-of-distribution (OOD) generalization performance.
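The invariance penalties underlying many of these methods can be sketched in a few lines. Below is a hedged numpy illustration of (i) an IRMv1-style penalty, the squared gradient of each environment's risk with respect to a scalar dummy classifier fixed at w = 1 (a practical relaxation of Eqs. (4.6)-(4.7), shown here for squared loss), and (ii) the gradient-variance regularizer of Eq. (4.11); the function names and the squared-loss choice are our assumptions, not from [11] or [129].

```python
import numpy as np

def irmv1_penalty(phi, y):
    """Squared gradient of R(w) = mean((y - w*phi)^2) w.r.t. the scalar dummy
    classifier w, evaluated at w = 1 (IRMv1-style relaxation of Eq. 4.7)."""
    grad = np.mean(-2.0 * phi * (y - phi))   # dR/dw at w = 1
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Sum over environments of risk + lam * invariance penalty (cf. Eq. 4.6)."""
    return sum(np.mean((y - phi) ** 2) + lam * irmv1_penalty(phi, y)
               for phi, y in envs)

def gradient_variance_penalty(env_grads):
    """Eq. (4.11): trace of the across-environment variance of the
    per-environment risk gradients (each a flat parameter vector)."""
    G = np.stack(env_grads)            # shape (num_envs, num_params)
    return np.sum(np.var(G, axis=0))   # trace of covariance = sum of variances
```

Both penalties vanish exactly when every environment induces the same (zero-gradient or identical-gradient) risk landscape, which is the invariance the objectives above try to enforce.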
In the field of drug discovery, Yang et al. [282] propose to learn sub-structure invariance for OOD molecular representations. Expanding the concept of invariance into the realm of reinforcement learning, Saengkyongam et al. [220] introduce the notion of policy invariance. To mitigate the effects of unobserved confounders in recommendation systems, Wang et al. [263] apply invariant learning. Li et al. [151] introduce invariant grounding to foster interpretability in video question-answering tasks.

4.1.3 Training Strategy In the field of Out-of-Distribution (OOD) generalization, several vision-related studies have focused on the development of training strategies designed to enhance the generalization capability of deep learning models applied to image data. These strategies can be broadly categorized into five key areas: Meta-Learning, Ensemble Learning, Self-Supervised Learning, Feature Normalization, and Prompt Tuning.

Meta-learning Meta-learning establishes a unique learning paradigm, wherein knowledge is accrued over multiple learning episodes [102]. The concept of "episodes" in the training phase is first introduced by Finn et al. [68] in their model-agnostic meta-learning (MAML) approach for fast adaptation. This concept has since greatly influenced research in the field of meta-learning for OOD generalization. In the seminal work of Li et al. [144], meta-learning is first applied to OOD generalization, paving the way for numerous subsequent improvements [14, 190, 150, 51, 165, 53, 54, 306, 37]. The primary approach involves dividing the source domains into meta-train and meta-test sets; the model is then trained to simultaneously optimize both the meta-train and meta-test losses.

Model-ensemble learning Model-ensemble learning methods typically aim to improve generalization capability by utilizing an ensemble of distinct models, each one tailored for different source domains.
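The meta-train/meta-test split underlying the episodic methods above can be sketched on a toy linear-regression problem. This is a simplified first-order variant in the spirit of MLDG, not the exact algorithm of any cited paper; all domains and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(slope, n=64):
    # Toy source domain: y = slope * x + noise; domains differ in slope.
    x = rng.normal(size=(n, 1))
    return x, slope * x[:, 0] + 0.1 * rng.normal(size=n)

domains = [make_domain(s) for s in (0.9, 1.0, 1.1)]
w = np.zeros(1)
lr_inner, lr_outer = 0.1, 0.05

def grad(w, x, y):
    # Gradient of the mean squared error of a linear model.
    return 2.0 * x.T @ (x @ w - y) / len(y)

for step in range(200):
    # Episode: hold one source domain out as the meta-test set.
    idx = rng.permutation(len(domains))
    meta_test = domains[idx[0]]
    meta_train = [domains[i] for i in idx[1:]]
    g_train = np.mean([grad(w, x, y) for x, y in meta_train], axis=0)
    w_adapted = w - lr_inner * g_train      # inner update on meta-train
    g_test = grad(w_adapted, *meta_test)    # meta-test gradient at adapted params
    # First-order outer update combining both objectives.
    w = w - lr_outer * (g_train + g_test)
```

After training, `w` settles near the slope shared across domains, since each episode penalizes updates that help meta-train but hurt the held-out meta-test domain.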
Several studies adopt domain-specific subnetworks corresponding to individual source domains, while employing a single classifier [308, 60, 48] or multiple domain-specific classifier heads [260]. Alternatively, some methods use domain-specific batch normalization for different domains to achieve better normalization results [226, 174]. Recently, given the promising performance of pre-trained models, a study by Dong et al. [50] evaluates the inter-class discriminability and inter-domain stability of these models, and constructs an ensemble of top-ranked models. This approach achieves state-of-the-art performance on the DomainNet dataset [199].

Self-supervised learning Drawing inspiration from self-supervised learning methods [188], Carlucci et al. [28] combine a self-supervised jigsaw-puzzle task with the classification task to learn robust representations. Ryu et al. [219] devise a strategy to sample positive and negative instances using a random forest. In an alternating training approach, Li et al. [145] separately train the convolutional layers and the classifier. To guard against overfitting to source domains, Huang et al. [105] introduce a self-challenging dropout algorithm.

Feature Normalization Several studies [241, 242, 113] have employed feature normalization to mitigate domain discrepancy. Pan et al. [193] introduce IBN-Net, which ingeniously integrates Instance Normalization (IN) [249] and Batch Normalization (BN) [107] as building blocks to both capture and reduce domain variance. Empirically, they discover that while IN provides visual and appearance invariance, it could potentially diminish the discriminative information of representations. Consequently, they suggest a combination of IN and BN in shallow layers, with only BN employed in deeper layers. In the realm of style transfer, Huang and Belongie [104] propose the utilization of adaptive IN.
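The contrast between IN and BN that IBN-Net exploits comes down to which axes the statistics are pooled over. Below is a minimal numpy sketch of the two normalizations and an IBN-style channel split; the shapes and the half-half split are illustrative assumptions, not the exact IBN-Net configuration:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each channel over batch and spatial axes (N, H, W):
    # keeps instance-specific style, removes dataset-level statistics.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Normalize each sample-channel over its own spatial axes (H, W):
    # removes per-image style/appearance statistics.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def ibn_block(x):
    # IBN-style split: IN on the first half of the channels,
    # BN on the second half, then concatenate along channels.
    half = x.shape[1] // 2
    return np.concatenate(
        [instance_norm(x[:, :half]), batch_norm(x[:, half:])], axis=1)

feats = np.random.default_rng(0).normal(size=(4, 8, 5, 5))  # (N, C, H, W)
out = ibn_block(feats)
```

The IN half discards per-image style (each sample-channel map is zero-mean), while the BN half keeps it, matching the shallow-layer design described above.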
Following the assumption that each feature map of a convolutional encoder can be separated into style-related and shape-related components, Nam and Kim [184] explicitly combine BN and IN to learn style-invariant representations. Lastly, Jin et al. [114] present Style Normalization and Restitution (SNR), a method that distills task-related features from the residual after style normalization, thereby ensuring the discriminability of the features.

Prompt tuning Recently, with the remarkable zero-shot generalization exhibited by pre-trained vision-language models across a variety of downstream tasks, several works have sought to leverage prompt tuning during testing to further enhance the model's ability to generalize to unseen domains. Shu et al. [237] introduce an approach to learn adaptive prompts on the fly using a single test sample, which demonstrates superior generalization performance compared to prior prompt tuning methodologies. In a different approach, Zhu et al. [311] optimize prompts employing equiangular tight frame (ETF) structures. Zheng et al. [307] integrate domain prompts into the Vision Transformer architecture to improve its generalization capabilities.

4.2 No Environment Labels In this section, we present methods that do not necessitate explicit environment labels. These approaches include stable learning, heterogeneity-aware learning, and others such as flatness-aware learning.

4.2.1 Stable Learning Stable learning, as compared to causal and invariant learning, offers an alternative method for integrating causal inference with machine learning, significantly reducing the dependency on environment labels. The problem setting for stable learning is as follows:

Problem 1 (Settings of Stable Learning). Given training data $D^e = (X^e, Y^e)$ from one environment $e \in \mathrm{supp}(\mathcal{E}_{all})$, the goal of stable learning is to learn a predictive model with uniformly good performance under any possible environment in $\mathrm{supp}(\mathcal{E}_{all})$.
To address this problem, drawing inspiration from variable balancing strategies in the literature [12, 314, 87], Shen et al. [234] propose treating all variables as potential treatments and learning a set of global sample weights. These weights serve to remove the confounding bias for all potential treatments from the data distribution. They develop a global balancing loss that can be seamlessly integrated as a regularizer into standard machine learning tasks, as illustrated by Equation (4.12):

$$\sum_{j=1}^{p} \left\| \frac{X_{-j}^{\top} \cdot (W \odot I_j)}{W^{\top} \cdot I_j} - \frac{X_{-j}^{\top} \cdot (W \odot (1 - I_j))}{W^{\top} \cdot (1 - I_j)} \right\|_2^2, \qquad (4.12)$$

where $W$ represents the sample weights and the $j$-th summand is the confounder-balancing loss obtained when feature $j$ is set as the treatment variable. Here, $X_{-j}$ refers to all remaining features (i.e., the confounders) excluding the $j$-th column, $I_j$ stands for the $j$-th column of $I$, and $I_{ij}$ indicates the treatment status of unit $i$ when feature $j$ is treated as the treatment variable. By minimizing the global balancing loss, it is possible to remove the confounding bias on a global scale. Furthermore, Kuang et al. [131] integrate unsupervised feature representation into the global balancing stage using auto-encoders [225] and adapt the original regularizer into a "deep" version. The aforementioned methods primarily focus on binary features, as mainstream discussions on causal inference predominantly involve binary treatments. However, when the treatment variable is categorical or continuous, traditional balancing methods become infeasible due to the potentially infinite treatment levels. To address this limitation, especially in situations involving continuous treatments, Kuang et al. [132] propose a solution to learn a set of sample weights.
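Equation (4.12) is straightforward to evaluate for binary features. Below is a minimal numpy sketch of the global balancing loss under uniform sample weights; the toy data and the correlated-feature construction are invented for illustration:

```python
import numpy as np

def global_balancing_loss(X, w):
    # Eq. (4.12): for each binary feature j treated as the "treatment",
    # compare the weighted means of the remaining features (the
    # confounders X_{-j}) between treated (X_ij = 1) and control groups.
    loss = 0.0
    for j in range(X.shape[1]):
        t = X[:, j]
        X_rest = np.delete(X, j, axis=1)
        treated = X_rest.T @ (w * t) / (w @ t)
        control = X_rest.T @ (w * (1.0 - t)) / (w @ (1.0 - t))
        loss += ((treated - control) ** 2).sum()
    return loss

rng = np.random.default_rng(0)
uniform = np.ones(2000)
X_ind = (rng.random((2000, 5)) < 0.5).astype(float)  # independent features
X_corr = X_ind.copy()
# Make feature 1 mostly copy feature 0, i.e., a confounded dependency.
X_corr[:, 1] = np.where(rng.random(2000) < 0.9, X_corr[:, 0], X_corr[:, 1])
loss_ind = global_balancing_loss(X_ind, uniform)
loss_corr = global_balancing_loss(X_corr, uniform)  # confounded -> larger loss
```

In the actual method the weights $W$ are optimized (jointly with the predictor) to drive this loss down, which decouples each feature from the rest.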
These weights are tailored such that the weighted distribution of the treatment and confounder satisfies independence, reflecting the fact that accurate treatment effect estimates can be obtained if the treatment and confounder are independent. In addition to methods addressing confounding bias, Shen et al. [236] focus on the issue of model misspecification for linear models within the context of stable learning. The primary challenge for stable learning in linear cases stems from the unavoidable model misspecification that typically occurs in real-world scenarios. More specifically, the true generative process often contains not just the linear part, but also an additional misspecification term, which could be a nonlinear element or interactions between input variables:

$$y = x^{\top} \bar{\beta}_{1:p} + \bar{\beta}_0 + b(x) + \epsilon. \qquad (4.13)$$

Shen et al. [236] reveal that the collinearity between variables is a crucial factor in achieving a stable model. If a misspecified model is used at the training phase, the presence of collinearity among variables can escalate a minor misspecification error to an arbitrarily large magnitude, resulting in unstable prediction performance on differently distributed test data. To mitigate the effects of collinearity among variables, Shen et al. [236] propose to learn a set of sample weights that promote near orthogonality in the design matrix. Technically, they construct an uncorrelated design matrix, denoted as $\tilde{X}$, from the original matrix $X$, treating it as the "oracle". They then learn the sample weights $w(x)$ by estimating the density ratio $w(x) = p_{\tilde{D}}(x) / p_{D}(x)$ between the underlying uncorrelated distribution $\tilde{D}$ and the original distribution $D$. To further mitigate the issues of large variance and the shrinkage of the effective sample size introduced by sample reweighting, Shen et al.
[235] suggest leveraging unlabeled data gathered from multiple environments to uncover hidden cluster structures among variables. Under several technical assumptions, they demonstrate that decorrelating variables between clusters, rather than within them, is sufficient for achieving stable estimation without inflating the variance. In contrast, Yu et al. [289] propose an iterative framework that combines sample reweighting and a sparsity constraint to alleviate these issues, even without access to multiple environments. They provide theoretical proof that introducing a sparsity constraint can help lessen the requirement for large sample sizes when selecting stable variables. Recently, Zhang et al. [298] propose StableNet, which extends the former linear frameworks [131, 236, 132] to deep models. Due to the complexity of nonlinear dependencies among features derived from deep models, it is significantly more challenging to measure and eliminate the dependencies among features compared to linear cases. In response, StableNet introduces an innovative approach to nonlinear feature decorrelation, leveraging Random Fourier Features (RFF) [203]. Specifically, StableNet alternately optimizes the sample weights $w$, representation function $f$, and prediction function $g$: the model update

$$f^{(t+1)}, g^{(t+1)} = \arg\min_{f, g} \sum_{i=1}^{n} w_i^{(t)} L\big(g(f(X_i)), y_i\big)$$

is alternated with a weight update $w^{(t+1)} = \arg\min_{w \in \Delta_n} \sum_{1 \le i < j \le m} \big\| \widehat{\operatorname{Cov}}_w\big(u(Z_{:,i}), v(Z_{:,j})\big) \big\|_F^2$, which minimizes, over all pairs of the $m$ learned features $Z = f(X)$, the norms of their weighted cross-covariances after RFF mappings $u, v$, so that the reweighted features become statistically independent.

5 Distributionally Robust Optimization Distributionally robust optimization (DRO) methods optimize the worst-case risk over a distribution set $\mathcal{P}(P_{tr})$ built around the training distribution $P_{tr}$:

$$\min_f \sup_{Q \in \mathcal{P}(P_{tr})} \mathbb{E}_{Q}[\ell(f(X), Y)]. \qquad (5.2)$$

5.1.1 f-divergence Constraints For f-divergence constraints, the distribution set is formulated as $\mathcal{P}(P_{tr}) = \{Q : D_f(Q \,\|\, P_{tr}) \le \rho\}$, where $\rho > 0$ controls the extent of the distributional shift, and $D_f(Q \,\|\, P_{tr}) = \int f\big(\tfrac{dQ}{dP_{tr}}\big)\, dP_{tr}$ is the f-divergence between $Q$ and $P_{tr}$. Intuitively, if the potential testing distribution $P^{e}_{test}(X, Y) \in \mathcal{P}(P_{tr})$, DRO methods can achieve good generalization performance even if $P^{e}_{test}(X, Y) \ne P_{tr}(X, Y)$. As for the optimization, a simplified dual formulation for the Cressie-Read family of f-divergences can be obtained. Lemma 1 (Optimization of f-divergence [55]).
For $f_k(t) = \frac{t^k - kt + k - 1}{k(k-1)}$ with $k \in (1, +\infty)$, $k_* = k/(k-1)$, and any $\rho > 0$, we have for all $\theta \in \Theta$:

$$\mathcal{R}_k(\theta; P_{tr}) = \inf_{\eta \in \mathbb{R}} \Big\{ c_k(\rho)\, \mathbb{E}_{P_{tr}}\big[\big(\ell(f(X), Y) - \eta\big)_+^{k_*}\big]^{1/k_*} + \eta \Big\}, \qquad (5.4)$$

where $c_k(\rho) = (k(k-1)\rho + 1)^{1/k}$.

5.1.2 Wasserstein Distance Constraints Since the calculation of an f-divergence requires the supports of the two distributions to coincide while the Wasserstein distance does not, the distribution set $\mathcal{P}(P_{tr})$ formulated with the Wasserstein distance is more flexible. The Wasserstein distance is defined as:

Definition 3. Let $\mathcal{Z} \subset \mathbb{R}^{m+1}$ and $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$. Given a transportation cost function $c : \mathcal{Z} \times \mathcal{Z} \to [0, \infty)$, which is nonnegative, lower semi-continuous, and satisfies $c(z, z) = 0$, the Wasserstein distance between probability measures $P$ and $Q$ supported on $\mathcal{Z}$ is

$$W_c(P, Q) = \inf_{M \in \Pi(P, Q)} \mathbb{E}_{(z, z') \sim M}\big[c(z, z')\big], \qquad (5.5)$$

where $\Pi(P, Q)$ denotes the set of couplings, i.e., measures $M$ on $\mathcal{Z} \times \mathcal{Z}$ with marginals $M(A, \mathcal{Z}) = P(A)$ and $M(\mathcal{Z}, A) = Q(A)$. The distribution set $\mathcal{P}(P_{tr})$ of Wasserstein DRO is then formulated as

$$\mathcal{P}_c(P_{tr}) = \{Q : W_c(Q, P_{tr}) \le \rho\}, \qquad (5.6)$$

where the subscript $c$ denotes the transportation cost function $c(\cdot, \cdot)$. However, Wasserstein DRO is difficult to optimize, and works targeting different models and transportation cost functions have been proposed. Wasserstein DRO for logistic regression is proposed by Shafieezadeh-Abadeh et al. [228]. Sinha et al. [239] achieve moderate levels of robustness with little computational cost relative to empirical risk minimization via a Lagrangian penalty formulation of WDRO. Recently, Li et al. [147] add martingale constraints to WDRO and derive a tractable optimization for martingale DRO, and Liu et al. [159] incorporate geometric properties into DRO with a geometric Wasserstein distance.
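For $k = 2$ (the $\chi^2$-divergence member of the Cressie-Read family), the dual of Lemma 1 reduces to a one-dimensional minimization over $\eta$, which can be evaluated on a grid. A minimal numpy sketch; the grid range and toy loss distribution are illustrative assumptions:

```python
import numpy as np

def chi2_dro_risk(losses, rho):
    # Dual of Eq. (5.4) specialized to k = 2 (chi^2 divergence):
    #   R = inf_eta { c * E[(loss - eta)_+^2]^(1/2) + eta },
    # with c = c_2(rho) = sqrt(2 * rho + 1).
    c = np.sqrt(2.0 * rho + 1.0)
    etas = np.linspace(losses.min() - 50.0, losses.max(), 5000)
    vals = [c * np.sqrt(np.mean(np.clip(losses - eta, 0.0, None) ** 2)) + eta
            for eta in etas]
    return min(vals)

losses = np.random.default_rng(0).exponential(size=1000)  # toy per-sample losses
plain = losses.mean()
robust = chi2_dro_risk(losses, rho=1.0)
# The robust risk upweights the loss tail, so it lies between the mean and the max.
```

Because each dual value upper-bounds the worst-case risk, a coarse grid already gives a usable training objective; in practice $\eta$ is optimized jointly with the model parameters.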
5.1.3 Robustness Guarantees Here we briefly review some theoretical results in the DRO literature, including the relationship between regularization and robustness guarantees. To demonstrate how the robust formulation (5.2) provides distributional robustness, several works establish the relationship between distributional robustness and regularization. For norm-based DRO methods, El Ghaoui and Lebret [62] establish the equivalence between the worst-case squared residual within a Frobenius norm-based distribution set and Tikhonov regularization. Xu et al. [268] prove the equivalence between robust linear regression with feature perturbations and the Least Absolute Shrinkage and Selection Operator (LASSO). Yang and Xu [283] and Bertsimas and Copenhaver [19] make further progress along this line. For f-divergence-based DRO methods, Duchi et al. [58] prove that the formulation (5.2) with distribution set $\mathcal{P}(P_{tr}) = \mathcal{P}_{\rho, n} = \{p \in \mathbb{R}^n : p^{\top} \mathbf{1} = 1,\ p \ge 0,\ D_f(p \,\|\, \mathbf{1}/n) \le \rho/n\}$ is a convex approximation to regularizing the empirical risk by variance. For Wasserstein-based DRO methods, Shafieezadeh-Abadeh et al. [228] investigate the Wasserstein DRO of logistic regression and show that regularized logistic regression is one special case of it. Chen and Paschalidis [31] likewise build the connection between the Wasserstein DRO of linear regression with $\ell_1$ loss and regularization constraints on the regression coefficients. Shafieezadeh-Abadeh et al. [229] and Gao et al. [76] connect Wasserstein DRO and regularization in a unified framework. Li et al. [147] prove that Wasserstein DRO is equivalent to Tikhonov regularization when exact martingale constraints are imposed. As for the OOD generalization ability, the guarantees for DRO methods derive naturally from their formulation (5.2).
Since DRO methods directly optimize the worst-case risk within the distribution set $\mathcal{P}(P_{tr})$, the OOD generalization ability is guaranteed as long as the potential testing distribution satisfies $P_{te} \in \mathcal{P}(P_{tr})$. Therefore, the remaining work is to provide finite-sample convergence guarantees, which ensure that the population-level objective $\sup_{Q \in \mathcal{P}(P_{tr})} \mathbb{E}_Q[\ell(f(X), Y)]$ can be optimized empirically with finite samples. Duchi and Namkoong [55] analyze the generalization bound of f-divergence-based DRO. Sinha et al. [239], Chen and Paschalidis [31], and Liu et al. [158] also provide similar generalization bounds for Wasserstein DRO. In addition, Levy et al. [141] develop optimization methods for DRO of convex losses with conditional-value-at-risk and $\chi^2$-divergence uncertainty sets, which are suitable for large-scale applications.

5.2 With Additional Information Although DRO methods can theoretically guarantee the out-of-distribution generalization ability when $P^{e}_{test}(X, Y) \in \mathcal{P}(P_{tr})$, there has been work questioning their real effects in practice. Intuitively, to achieve good OOD generalization, the potential testing distribution should be captured by the built distribution set. However, in real scenarios, to contain the possible true testing distribution, the uncertainty set is often overwhelmingly large, making the learned model make decisions with fairly low confidence, which is also referred to as the low-confidence problem. Specifically, Hu et al. [103] prove that in classification tasks, DRO ends up being optimal for the training distribution $P_{tr}$, owing to the over-flexibility of the built distribution set. Frogner et al. [71] also point out the problem of an overwhelmingly large decision set for Wasserstein DRO. To overcome this problem, Blanchet et al. [22] propose a data-driven way to select the transportation cost function. Frogner et al.
[71] propose to further restrict the distribution set with a large number of unlabeled data. Liu et al. [159] notice that in real scenarios, different covariates may be perturbed in a non-uniform way, and form a more reasonable distribution set according to the stability of covariates across environments. Duchi et al. [57] assume that $P(Y \mid X)$ stays invariant and propose to perturb only the marginal distribution $P(X)$ to deal with covariate shift. Despite these meaningful attempts, how to incorporate additional information to form a more reasonable distribution set remains an open problem. We refer readers to [204] for a more comprehensive survey.

6 Theoretical Connections Among the branches of methods for OOD generalization, there are some inherent connections. In this section, we demonstrate the connections among causal learning methods, distributionally robust optimization (DRO) methods, and stable learning methods, which may benefit the understanding of OOD generalization methods.

6.1 DRO and Causality Recall that DRO methods aim to optimize the worst-case error over a pre-defined distribution set, so as to protect the learned model from potential distributional shifts, and often take the form

$$\arg\min_f \sup_{Q \in \mathcal{P}(P_{tr})} \mathbb{E}_{X, Y \sim Q}[\ell(f(X), Y)], \qquad (6.1)$$

where $\mathcal{P}(P_{tr})$ is the distribution set built around the training distribution $P_{tr}$. Although in the DRO literature $\mathcal{P}(P_{tr})$ is often characterized by f-divergence or Wasserstein distance, particular choices of $\mathcal{P}(P_{tr})$ render DRO equivalent to causal inference in the structural equation model (SEM) context [178], which shows the inherent relationship between causality-based methods and DRO methods. Taking linear equation models as an example, suppose we have a directed acyclic graph $G = (V, E)$ with $p$ nodes $V = \{1, \ldots
, p\}$ and correspondingly a $p$-dimensional random variable $Z$. The training distribution is then determined by the structural causal model (SCM)

$$Z = BZ + \epsilon, \qquad (6.2)$$

where $Z = (X, Y) \in \mathbb{R}^p$ is the random variable of interest, $B \in \mathbb{R}^{p \times p}$ is the coefficient matrix, and $\epsilon \sim P_\epsilon$ is the random noise. We will show that finding causal coefficients for predicting $Y$ can be reformulated as performing DRO on an interventional distribution set, including do-interventional and shift-interventional distributions. Do-interventions on variables $S \subseteq V$ can be formulated as

$$Z_k = (BZ)_k + \epsilon_k \ \text{ for } k \notin S; \qquad Z_k = A_k \ \text{ for } k \in S, \qquad (6.3)$$

where $A \in \mathbb{R}^p$ and $A_k$ is the value of the do-intervention on variable $k \in S$. The error distribution $P_\epsilon$, coefficient matrix $B$, intervention set $S \subseteq V$, and intervention value $A \in \mathbb{R}^p$ induce a distribution for a random variable $Z(A, S)$, denoted as $Z(A, S) \sim P^{(do)}_{A, S}$, and the corresponding do-interventional distribution set can be formulated as $\mathcal{P}^{(do)} = \{P^{(do)}_{A, V \setminus \{p\}} : A \in \mathbb{R}^p\}$. Analogously to do-interventions, a shift-intervention is defined as

$$Z = BZ + \epsilon + A, \qquad (6.4)$$

where $A \in \mathbb{R}^p$ is the shift direction; the induced distribution is denoted as $Z(A) \sim P^{(shift)}_A$, and the shift-interventional distribution set can be formulated as $\mathcal{P}^{(shift)} = \{P^{(shift)}_A : A_p = 0\}$. When performing DRO on $\mathcal{P}^{(do)}$ or $\mathcal{P}^{(shift)}$, the causal coefficients can be obtained [178], since

$$\min_\theta \sup_{Q \in \mathcal{P}^{(do)}} \mathbb{E}_Q[\ell(f_\theta(X), Y)] = \begin{cases} \infty, & \text{if } \theta \ne \theta_{causal} \\ \mathrm{Var}(\epsilon_p), & \text{if } \theta = \theta_{causal} \end{cases} \qquad (6.5)$$

and

$$\min_\theta \sup_{Q \in \mathcal{P}^{(shift)}} \mathbb{E}_Q[\ell(f_\theta(X), Y)] = \begin{cases} \infty, & \text{if } \theta \ne \theta_{causal} \\ \mathrm{Var}(\epsilon_p), & \text{if } \theta = \theta_{causal} \end{cases} \qquad (6.6)$$

which reveals that causal inference can also be viewed as a special case of distributional robustness.

6.2 Stable Learning and Causality Xu et al.
[273] theoretically analyze stable learning algorithms through the lens of feature selection and connect them with causality. They first prove that these algorithms can identify a certain set of variables defined as follows.

Definition 4 (Minimal stable variable set [273]). A minimal stable variable set of $Y$ under distribution $P$ is any subset $S$ of $X$ for which

$$\mathbb{E}_P[Y \mid S] = \mathbb{E}_P[Y \mid X], \qquad (6.7)$$

and none of $S$'s proper subsets satisfies Equation (6.7).

They theoretically show that the minimal stable variable set is minimal and optimal for dealing with covariate-shift generalization under common loss functions [273, Theorem 3]. As a result, the effectiveness of stable learning algorithms on covariate-shift generalization can be proved. Furthermore, they show that the minimal stable variable set is a subset of the Markov boundary [197]. Markov boundary discovery is generally challenging because traditional methods [7, 8] are based on the conditional independence test, which is a particularly difficult hypothesis to test for [231]. As a result, stable learning algorithms could help discover the Markov boundary to some extent, which can be of independent interest.

Table 1. Commonly used image datasets for OOD generalization. Shift type denotes the type of distributional shifts; the mixed type in image type means that there are both real and unreal images.
| Image Data | ImageNet-Variant [100, 97, 99] | Colored MNIST [11] | MNIST-R [80] | Waterbirds [221] | Camelyon17 [15] | VLCS [66] | PACS [143] |
|---|---|---|---|---|---|---|---|
| # Domains | - | 3 | 6 | 2 | 5 | 4 | 4 |
| # Categories | - | 2 | 10 | 2 | 2 | 5 | 7 |
| # Examples | - | - | 6k | 4.8k | 450k | 2.8k | 9.99k |
| Shift Type | Adversarial Policy | Color | Angle | Background | Hospital | Data Source | Style |
| Image Type | Mixed Type | Digits | Digits | Birds | Tissue Slides | Real Objects | Mixed Type |

| Image Data | Office-Home [252] | DomainNet [199] | iWildCam [16] | FMoW [40] | PovertyMap [288] | NICO [94] | NICO++ [299] |
|---|---|---|---|---|---|---|---|
| # Domains | 4 | 6 | 323 | 16 × 5 | 23 × 2 | 188 | 810 |
| # Categories | 65 | 345 | 182 | 62 | Real Value | 19 | 80 |
| # Examples | 15.5k | 570k | 200k | 500k | 20k | 25k | 230k |
| Shift Type | Style | Style | Location | Time | Country, Urban/Rural | Background, Attribute, View and Co-occurring Object | Action, Location |
| Image Type | Mixed Type | Mixed Type | Real Animals | Satellite | Satellite | Real Objects | Real Objects |

7 Evaluation for OOD Generalization To promote the research of OOD generalization, it is of vital importance to evaluate the OOD generalization performance of different algorithms. In this section, we summarize the datasets commonly used in the literature. Datasets can be classified according to different criteria (e.g., synthetic vs. real-world data; tabular, image, and language data), and researchers from different fields utilize different kinds of datasets: for example, statistical machine learning often uses synthetic and tabular data, while computer vision research often uses real-world image data. For OOD generalization, it is necessary to involve distribution shifts to evaluate the generalization ability of different approaches. In line with recent works, we present a comprehensive overview of datasets and evaluation metrics for OOD generalization.

7.1 Synthetic Data Synthetic data are important for simulating explainable and controllable distribution shifts. Aubin et al. [13] find that recent OOD generalization methods perform poorly on some simple low-dimensional linear problems.
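A toy example of such a simple but challenging problem: an anti-causal feature fits the training data almost perfectly, yet breaks under a shift intervention, while the stable (causal) predictor does not. All coefficients and the shift magnitude below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Toy linear SCM: S -> Y -> V.  S is the stable cause of Y; V is an
    # effect of Y whose mechanism is shifted at test time by `shift`.
    s = rng.normal(size=n)
    y = s + rng.normal(size=n)
    v = y + 0.3 * rng.normal(size=n) + shift
    return np.column_stack([s, v]), y

X_tr, y_tr = sample(20000)                          # training environment
beta = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]   # least squares leans on V

X_te, y_te = sample(20000, shift=5.0)               # shift-intervened test env
mse_ols = np.mean((X_te @ beta - y_te) ** 2)
mse_stable = np.mean((X_te[:, 0] - y_te) ** 2)      # causal predictor y_hat = S
```

The least-squares solution puts most of its weight on V (the near-noiseless effect of Y), so its test error grows roughly with the square of the shift, while the causal predictor's error stays at the noise level.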
This demonstrates the need for such simple but challenging data, which can precisely reflect to what extent an algorithm resists certain kinds of distribution shifts. In this section, we introduce three typical synthetic data generation mechanisms, with which one can simulate certain kinds of distribution shifts to various degrees and evaluate the generalization ability of different algorithms. Throughout these mechanisms, the covariates $X$ are divided into two groups as $X = [S, V]^{\top}$, corresponding to the stable and unstable/spurious parts, i.e., $P(Y \mid S)$ remains invariant across distributions while $P(Y \mid V)$ is perturbed to bring distributional shifts.

Unobserved Confounders Confounding bias is one of the major sources of distribution shifts [196, 240, 11], where the unstable covariates $V$ are related to the target $Y$ owing to an unobserved confounder $C$. Here we present the data generation process proposed by Subbaswamy and Saria [240]:

$$V = W_V^e C + \epsilon_V, \qquad Y = W_S^{\top} S + W_c C + \epsilon_Y, \qquad (7.1)$$

where $C$ is the unobserved confounder. The coefficient $W_V^e$ controls the relationship between $V$ and $Y$, and one can change $W_V^e$ across environments to simulate distribution shifts.

Selection Bias Kuang et al. [132] propose a selection bias mechanism, and similar settings are also adopted in [158, 157, 289]. In this setting, $P(Y \mid V)$ is perturbed by selection bias. The data generation process is as follows:

$$Y = f(S) + \epsilon = \theta_S^{\top} S + \beta \cdot S_1 \cdot S_2 \cdot S_3 + \epsilon, \qquad (7.2)$$

and the sample selection probability $\hat{P}(X)$ of each data point follows

$$\hat{P}(X) = \prod_{v_i \in V} |r|^{-5 \cdot |f(S) - \mathrm{sign}(r) \cdot v_i|}. \qquad (7.3)$$

Here $|r| > 1$ is the bias factor controlling the strength of distribution shifts: a larger value of $|r|$ brings a stronger spurious correlation between $V$ and $Y$, with $r > 0$ meaning positive correlation and vice versa.

Regression from Causes and Effects Arjovsky et al. [11] and Liu et al. [157] introduce an anti-causal mechanism to change $P(Y \mid V)$.
In this setting, the data generation process is defined as

$$Y = W_S S + \epsilon_Y, \qquad V = W_V^e Y + \epsilon_V^e, \qquad (7.4)$$

where the coefficient $W_V^e$ and the noises $\epsilon_Y, \epsilon_V^e$ control the relationship between $V$ and $Y$. Intuitively, a larger $\epsilon_Y$ and a smaller $\epsilon_V^e$ make it easier for the model to exploit $V$ for prediction, making OOD generalization more challenging. There are various synthetic data generation mechanisms in the literature, and one can refer to [13, 222, 160] for more synthetic settings.

7.2 Real-World Data Although synthetic data can reflect the generalization ability of different approaches, it is difficult to synthesize complicated data (e.g., image/language data), and whether the simulated shift patterns correspond with real-world scenarios remains unclear. To demonstrate the practical value of OOD generalization methods, it is necessary to involve real-world datasets for evaluation. Here, we describe several typical real-world (and pseudo-real) benchmarks used in the OOD generalization literature, covering image, tabular, language, graph, and code data.

Image Data With the rapid development of computer vision, a number of image datasets have been released. According to the flexibility of customizing distribution shifts, we classify them into three categories, namely pseudo-real shifts, static natural shifts, and controllable natural shifts. A summary of these datasets is shown in Table 1.

(a) Pseudo-Real Shifts. For image datasets not designed for OOD generalization, synthetic transformations are added to introduce distribution shifts. The most typical ones, including the ImageNet [45] variants (e.g., ImageNet-A [100], ImageNet-C [97], ImageNet-R [99]), adopt data selection mechanisms or perturbations to generate testing data with distribution shifts. Others, typified by the MNIST [138] variants (e.g., Colored MNIST [11], MNIST-R [80]), simulate different environments by coloring or rotating the original images.
And Waterbirds [221] introduces spurious correlations between bird categories and backgrounds. These datasets enable preliminary study and evaluation of OOD generalization approaches.

(b) Static Natural Shifts. Recently, a few datasets support OOD generalization validation with natural shifts, e.g., spatial and temporal shifts. Widely used in domain generalization, PACS [143] and Office-Home [252] design environments according to image styles, while VLCS [66] and iWildCam [16] directly use data sources as environments. Besides, Camelyon17 [15] contains tissue slides sampled and post-processed in different hospitals, and DomainNet [304] extends PACS to a larger scale, consisting of more domains and categories. Recently, Koh et al. [127] collect several datasets together and produce Wilds as a benchmark for OOD generalization, and Yao et al. [285] curate Wild-Time to reflect temporal distribution shifts in various real-world applications.

(c) Controllable Natural Shifts. Recently, some datasets enable more flexible and controllable ways to simulate distributional shifts, typified by NICO [94] and NICO++ [299]. NICO elaborately selects visual contexts of various types, including background, attribute, view, and so on. With diverse contexts, NICO can simulate different types of natural shifts, and with a balanced sample size in each context, different degrees of distribution shift can easily be produced. As an extended version of NICO, NICO++ splits domains into common domains (shared by all categories) and unique domains (one set per category). For each category, NICO++ contains 10 common domains and 10 unique domains, supporting both typical DA and OOD generalization settings with flexible and controllable shifts. Besides, FMoW [40] collects satellite images of buildings or land with tokens at different times and regions, and PovertyMap [288] contains images of urban or rural areas from disjoint sets of countries.
Tabular Data Tabular data widely exist in real-world high-stakes applications, including economics, health care, and so on. Therefore, it is important to deal with natural distribution shifts in tabular data. The house sales price dataset (https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data) considers temporal shifts in price prediction and is used in [236, 158, 157]. Demographic shifts are considered in the Adult (https://archive.ics.uci.edu/dataset/2/adult), BRFSS (https://www.cdc.gov/brfss/), and COMPAS (https://www.kaggle.com/datasets/danofer/compass) datasets. Spatial shifts are considered in the ACS datasets [47], which contain data from 51 US states. Recently, Liu et al. [162] propose WhyShift, an empirical testbed with curated real-world shifts, where the type of shift is specified for each of the 22 settings.

Others OGB-MolPCBA [212] collects molecular graphs in over 100,000 scaffolds and formulates a molecular property prediction task across different scaffolds. CivilComments [24] and Amazon [186] gather individual comments from different users and distinctive groups (e.g., male and female). GLUE-X [280] provides a unified benchmark for evaluating OOD robustness in NLP models. Towards automated software engineering, Py150 [208] contains code from 8,421 git repositories for code completion generalization.

7.3 Empirical Findings Recently, several works investigate OOD generalization performance in a purely empirical way, providing valuable insights. Gulrajani and Lopez-Paz [86] find that the real effects of domain generalization approaches are relatively weak on real-world image data. Miller et al. [179] empirically show that OOD performance is strongly correlated with in-distribution performance on image data for a wide range of deep models and distribution shifts. Yang et al.
[284] release a comprehensive benchmark of 20 algorithms on 12 real-world datasets across vision, language, and healthcare domains, and empirically study the relationships between different evaluation criteria. And Liu et al. [162] empirically validate the prevalence of Y|X shifts in real-world tabular data, where the accuracy-on-the-line phenomenon does not hold. This underscores the importance of specifying the shift patterns on tabular data, and they release a benchmark with 22 specified distribution shift patterns.

8 Implications for Fairness and Explainability

8.1 Fairness

Nowadays, fairness issues have raised great concerns in decision-making systems such as loan applications [182], hiring processes [213], criminal justice [137], personalized pricing [271], and online markets [272]. Poorly designed algorithms tend to amplify data bias, resulting in discrimination against specific subgroups of individuals based on their inherent characteristics, which are often called sensitive attributes in the fairness literature. Many works define their own fairness notions and propose corresponding fair algorithms; the definitions of fairness can be divided into three types: individual fairness [59, 293], group fairness [89, 119, 270], and causality-based fairness notions [120, 36]. However, different fairness notions can be in conflict [125]. Methods that mitigate unfairness fall into three categories: pre-processing [253, 67, 117], in-processing [294, 295, 2], and post-processing [89] algorithms.

Fairness has recently been linked to OOD issues by Creager et al. [41]. Generally speaking, the subgroups split by sensitive attributes in the fairness literature correspond to the environments in the OOD literature. Following that, both areas need to specify learning objectives with respect to the subgroups/environments.
In the fairness literature, the learning objectives represent context-specific fairness notions, while in the OOD literature, the learning objectives are designed according to invariance assumptions. Similar learning objectives can be adopted in both areas. For example, objectives similar to the fairness criterion equalized odds [89] are adopted in the OOD literature [149, 3] to deal with simplicity bias [230], and the learning objective of IRM [11] is similar to calibration in the fairness literature [38]. Meanwhile, classical approaches from the OOD literature can be applied to address fairness issues. Fair representation learning methods [61, 270, 303] originated from domain adaptation (DA) methods [17, 75]. When sensitive attributes are unknown, DRO and adversarial learning were introduced into the fairness literature [90, 136, 56] to obtain a distributionally robust predictor and ensure worst-subgroup performance. [119, 95, 123] also adopt adversarial learning methods to ensure that all computationally identifiable subgroups are treated equally. As a result, pursuing OOD generalization can be considered as pursuing fairness with respect to the subgroups/environments, provided the invariance assumption adopted for OOD can be viewed as a fairness notion.

In addition to considering subgroups as environments, [175] investigate another scenario in which the environment is a separate variable. They study fair classifiers that are robust to perturbations in the training distribution and devise a DRO-like method to reach this goal. Fair and robust learning is also applied in [214]. These works differ from those listed in the last paragraph in that fairness and robustness are two distinct objectives here, whereas the aforementioned works consider them the same.

8.2 Explainability

Explanation methods can be generally divided into post hoc analyses and model-based methods [183]. There exist several works in both directions.
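Returning to the fairness-OOD correspondence above, an equalized-odds-style criterion can be made concrete with a small sketch. The helper below (names hypothetical, not from any cited work) computes the equalized-odds gap, i.e., the largest across-group difference in true/false positive rates; such a quantity can serve as a penalty term, with groups playing the role of environments.

```python
from collections import defaultdict

def equalized_odds_gap(y_true, y_pred, group):
    """Largest across-group difference in TPR and FPR.

    A zero gap means predictions are independent of the group
    given the true label -- the equalized-odds criterion.
    """
    # hits[(g, y)] counts positive predictions; total[(g, y)] counts samples
    hits = defaultdict(int)
    total = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, group):
        total[(g, yt)] += 1
        hits[(g, yt)] += yp
    gaps = []
    for y in (0, 1):  # y=1 gives the TPR gap, y=0 the FPR gap
        rates = [hits[(g, y)] / total[(g, y)]
                 for g in set(group) if total[(g, y)] > 0]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# toy check: a predictor that flags everyone in group "a" as positive
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
biased = [1, 1, 1, 1, 1, 0, 0, 0]
fair   = [1, 1, 0, 0, 1, 1, 0, 0]
print(equalized_odds_gap(y_true, biased, group))  # 1.0
print(equalized_odds_gap(y_true, fair, group))    # 0.0
```

In the OOD reading, driving this gap to zero with groups taken as environments is exactly a simplicity-bias-style invariance constraint on the predictor's error rates.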
Post hoc analyses usually explain a black-box model by calculating feature importance [1]. Typical methods include gradient-based methods [227, 207, 43], influence functions [126], and Shapley values [171]. Model-based explanation methods often adopt simpler hypothesis classes such as linear regression [69], LASSO [245], generalized additive models [91], decision trees [69], and rule-based methods [70, 140].

Causality [196] has recently been introduced into model explanation, especially for deep learning methods. Traditional deep learning algorithms are rarely used in high-stakes applications due to their lack of explainability, and causality provides a way to shed light on the explainability of deep learning. For example, several works adopt causality to explain deep models via textual and visual explanations [9, 10, 83]. Furthermore, the Causal And-Or Graph was proposed in robotics [267] and object tracking [274] to build explainable algorithms with causal knowledge. Kim and Canny [122] also applied a causal filtering step in self-driving problems.

OOD ⇐ Causality ⇒ Explainability.    (8.1)

Indeed, causality is the crux of both OOD generalization and explainability, as shown in Equation (8.1): models will exhibit good OOD generalization performance and explainability simultaneously if they exploit the causal relationships between the features and the outcomes. Hence, explainability comes as a side product of pursuing OOD generalization through causality.

9 Conclusion and Future Directions

The Out-of-Distribution (OOD) generalization problem has attracted much research attention recently and is critical for the deployment of machine learning algorithms. In this paper, we systematically and comprehensively review the definition, the main branches of methods, the theoretical connections among different methods, and the datasets for the OOD generalization problem.
Finally, we list several open challenges in OOD generalization, which we hope will inspire future research on the problem.

Theoretical characterization. Although the field has grown popular recently, the theoretical characterization of a learnable OOD generalization problem remains vague in the literature. Characterizing the learnability of a problem is a basic question in machine learning. Though previous research has addressed the i.i.d. setting, learnability is difficult to define and analyze under distributional shifts, since models cannot be expected to generalize to arbitrary and unknown distributions. Therefore, in the OOD generalization problem, figuring out what kinds of distributional shifts should be taken into consideration is critical for the analysis of learnability. There is very little exploration of this so far [286], and more research effort needs to be devoted to it.

Demands for environments. Multiple training environments are required by the majority of OOD generalization methods, while in practice modern datasets are often assembled by merging data from multiple sources without keeping the source labels. This greatly restricts the deployment of OOD generalization methods in real scenarios. Therefore, it is more practical and realistic to assume access to only one training environment with latent heterogeneity. While some recent works [41, 157] try to leverage the latent heterogeneity and relax the demands for environments, how to explore and utilize the latent heterogeneity inside data remains critical for the deployment of OOD generalization methods and is a promising future direction.

Reasonable evaluations. Although the evaluation criteria for classic machine learning algorithms under the i.i.d. assumption are well developed, including testing data and model selection mechanisms, they cannot be directly deployed in OOD scenarios.
Since the testing distribution is both different from the training distribution and unknown, designing fair and realistic experimental settings remains a challenging problem. Further, the model selection mechanism also matters, since the choice of validation data is non-trivial in OOD scenarios, and Gulrajani and Lopez-Paz [86] demonstrate that a domain generalization algorithm without a model selection strategy is incomplete. Gulrajani and Lopez-Paz [86] also notice that the real effects of many domain generalization methods are weak, which indicates that existing evaluation criteria are inadequate to validate an OOD generalization algorithm. And Yu et al. [290] reflect on the evaluation protocol of domain generalization; they investigate and demonstrate test-data information leakage from pre-trained weights and from a single test environment in the current evaluation protocol. Therefore, it is critical for the community to develop more reasonable evaluation criteria for OOD generalization.

Incorporation of pre-trained and large language models. Recently, there has been a surge in the development of large language models (and pre-trained models more broadly), such as BERT [46], GPT-3 [25], SimCLR [33], Stable Diffusion [216], ChatGPT⁵, and GPT-4⁶. These models are first pre-trained on large-scale datasets and then fine-tuned on, or directly deployed to, downstream tasks. Since distribution shifts between downstream tasks and pre-training datasets are inevitable, devising efficient pre-training methods with strong OOD generalization ability becomes critical. Alternatively, integrating pre-trained models to enhance OOD generalization performance is also a promising direction for future exploration. Furthermore, it is becoming more important to evaluate the OOD generalization ability of large language models in deployment [279, 258].
⁵ https://openai.com/blog/chatgpt
⁶ https://openai.com/gpt-4" }, { "url": "http://arxiv.org/abs/2105.03818v3", "title": "Heterogeneous Risk Minimization", "abstract": "Machine learning algorithms with empirical risk minimization usually suffer\nfrom poor generalization performance due to the greedy exploitation of\ncorrelations among the training data, which are not stable under distributional\nshifts. Recently, some invariant learning methods for out-of-distribution (OOD)\ngeneralization have been proposed by leveraging multiple training environments\nto find invariant relationships. However, modern datasets are frequently\nassembled by merging data from multiple sources without explicit source labels.\nThe resultant unobserved heterogeneity renders many invariant learning methods\ninapplicable. In this paper, we propose Heterogeneous Risk Minimization (HRM)\nframework to achieve joint learning of latent heterogeneity among the data and\ninvariant relationship, which leads to stable prediction despite distributional\nshifts. We theoretically characterize the roles of the environment labels in\ninvariant learning and justify our newly proposed HRM framework. Extensive\nexperimental results validate the effectiveness of our HRM framework.", "authors": "Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen", "published": "2021-05-09", "updated": "2021-06-17", "primary_cat": "cs.LG", "cats": [ "cs.LG" ], "main_content": "Introduction The effectiveness of machine learning algorithms with empirical risk minimization (ERM) relies on the assumption that the testing and training data are identically drawn from the same distribution, which is known as the IID hypothesis. However, distributional shifts between testing and training data are usually inevitable due to data selection biases or unobserved confounders that widely exist in real data.
Under such circumstances, machine learning algorithms with ERM usually suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distributional shifts. How to equip machine learning algorithms with out-of-distribution (OOD) generalization ability and stable performance under distributional shifts is of paramount significance, especially in high-stakes applications such as medical diagnosis, criminal justice, and financial analysis (Kukar, 2003; Berk et al., 2018; Rudin & Ustun, 2018).

There are mainly two branches of methods proposed to solve the OOD generalization problem, namely distributionally robust optimization (DRO) (Esfahani & Kuhn, 2018; Duchi & Namkoong, 2018; Sinha et al., 2018; Sagawa et al., 2019) and invariant learning (Arjovsky et al., 2019; Koyama & Yamaguchi, 2020; Chang et al., 2020). DRO methods aim to optimize the worst-case performance over a distribution set to ensure OOD generalization. While DRO is a powerful family of methods, it is often criticized for its over-pessimism when the distribution set is large (Hu et al., 2018; Frogner et al., 2019). From another perspective, invariant learning methods propose to exploit the causally invariant correlations (rather than varying spurious correlations) across multiple training environments, resulting in out-of-distribution (OOD) optimal predictors.

¹ Department of Computer Science and Technology, Tsinghua University, Beijing, China; Email: {liujiashuo77, zyhu2001}@gmail.com, cuip@tsinghua.edu.cn, shenzy17@mails.tsinghua.edu.cn. ² School of Economics and Management, Tsinghua University, Beijing, China; Email: libo@sem.tsinghua.edu.cn. Correspondence to: Peng Cui. Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s).
However, the effectiveness of such methods relies heavily on the quality of the training environments, and the intrinsic role of environments in invariant learning remains theoretically vague. More importantly, modern big data are frequently assembled by merging data from multiple sources without explicit source labels. The resultant unobserved heterogeneity renders these invariant learning methods inapplicable.

In this paper, we propose Heterogeneous Risk Minimization (HRM), an optimization framework to achieve joint learning of the latent heterogeneity among the data and the invariant predictor, which leads to better generalization ability despite distributional shifts. More specifically, we theoretically characterize the roles of the environment labels in invariant learning, which motivates us to design two modules in the framework, corresponding to heterogeneity identification and invariant learning respectively. We provide theoretical justification of the mutual promotion between these two modules, which resonates with the joint optimization process in a reciprocal way. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of HRM in terms of average performance, stability, and worst-case performance under different settings of distributional shifts. We summarize our contributions as follows:

1. We propose the novel HRM framework for OOD generalization without environment labels, in which heterogeneity identification and invariant prediction are jointly optimized.

2. We theoretically characterize the role of environments in invariant learning from the perspective of heterogeneity, based on which we propose a novel clustering method for heterogeneity identification from heterogeneous data.

3.
We theoretically justify the mutual promotion relationship between heterogeneity identification and invariant learning, resonating with the joint optimization process in HRM.

2. Problem Formulation

2.1. OOD and Maximal Invariant Predictor

Following (Arjovsky et al., 2019; Chang et al., 2020), we consider a dataset D = {D^e}_{e∈supp(E_tr)}, which is a mixture of data D^e = {(x_i^e, y_i^e)}_{i=1}^{n_e} collected from multiple training environments e ∈ supp(E_tr), where x_i^e ∈ X and y_i^e ∈ Y are the i-th data point and label from environment e respectively, and n_e is the number of samples in environment e. Environment labels are unavailable, as in most real applications. E_tr is a random variable on the indices of training environments, and P^e is the distribution of data and labels in environment e. The goal of this work is to find a predictor f(·): X → Y with good out-of-distribution generalization performance, which can be formalized as:

arg min_f max_{e∈supp(E)} L(f | e),    (1)

where L(f | e) = E[ℓ(f(X), Y) | e] = E^e[ℓ(f(X^e), Y^e)] is the risk of predictor f in environment e, and ℓ(·, ·): Y × Y → R₊ is the loss function. E is the random variable on the indices of all possible environments, such that supp(E) ⊃ supp(E_tr). Usually, for e ∈ supp(E) \ supp(E_tr), the data and label distribution P^e(X, Y) can be quite different from that of the training environments E_tr. Therefore, the problem in Equation 1 is referred to as the Out-of-Distribution (OOD) Generalization problem (Arjovsky et al., 2019). Without any prior knowledge or structural assumptions, it is impossible to solve the OOD generalization problem, since one cannot characterize the unseen latent environments in supp(E). A commonly used assumption in the invariant learning literature (Rojas-Carulla et al., 2015; Gong et al., 2016; Arjovsky et al., 2019; Kuang et al., 2020; Chang et al., 2020) is as follows:

Assumption 2.1.
There exists a random variable Φ*(X) such that the following properties hold:

a. Invariance property: for all e, e′ ∈ supp(E), P^e(Y | Φ*(X)) = P^{e′}(Y | Φ*(X)) holds.

b. Sufficiency property: Y = f(Φ*) + ε, with ε ⊥ X.

This assumption states the invariance and sufficiency of Φ* for predicting the target Y; such Φ* are known as invariant covariates or representations, whose relationship with Y is stable across environments e ∈ E. To acquire the invariant predictor Φ*(X), a branch of work seeking the maximal invariant predictor (Chang et al., 2020; Koyama & Yamaguchi, 2020) has been proposed, where the invariance set and the corresponding maximal invariant predictor are defined as follows:

Definition 2.1. The invariance set I with respect to E is defined as:

I_E = {Φ(X) : Y ⊥ E | Φ(X)} = {Φ(X) : H[Y | Φ(X)] = H[Y | Φ(X), E]},    (2)

where H[·] is the Shannon entropy of a random variable. The corresponding maximal invariant predictor (MIP) of I_E is defined as:

S = arg max_{Φ∈I_E} I(Y; Φ),    (3)

where I(·; ·) measures the Shannon mutual information between two random variables.

Here we prove that the MIP S can guarantee OOD optimality, as indicated in Theorem 2.1. The formal statement of Theorem 2.1 as well as its proof can be found in the appendix.

Theorem 2.1. (Informal) For a predictor Φ*(X) satisfying Assumption 2.1, Φ* is the maximal invariant predictor with respect to E, and the solution to the OOD problem in Equation 1 is E_Y[Y | Φ*] = arg min_f sup_{e∈supp(E)} E[L(f) | e].

Recently, some works suppose the availability of data from multiple environments with environment labels, from which they can find the MIP (Chang et al., 2020; Koyama & Yamaguchi, 2020).
However, they rely on the underlying assumption that the invariance set I_{E_tr} of E_tr is exactly the invariance set I_E of all possible unseen environments E, which cannot be guaranteed, as shown in Theorem 2.2.

Theorem 2.2. I_E ⊆ I_{E_tr}.

Since Theorem 2.2 only guarantees I_E ⊆ I_{E_tr}, the learned predictor is invariant with respect to the limited environments E_tr but is not guaranteed to be invariant with respect to all possible environments E. Here we give a toy example in Table 1 to illustrate this. We consider a binary classification between cats and dogs, where each photo contains 3 features: an animal feature X1 ∈ {cat, dog}, a background feature X2 ∈ {on grass, in water}, and a photographer's signature feature X3 ∈ {Irma, Eric}. Assume all possible testing environments are supp(E) = {e1, e2, e3, e4, e5, e6} and the training environments are supp(E_tr) = {e5, e6}; then I_E = {Φ | Φ = Φ(X1)} while I_{E_tr} = {Φ | Φ = Φ(X1, X2)}.

Table 1. A toy example for the difference between I_E and I_{E_tr}.

Index | Class 0 (Cats): X1, X2, X3 | Class 1 (Dogs): X1, X2, X3
e1 | Cats, Water, Irma | Dogs, Grass, Eric
e2 | Cats, Grass, Eric | Dogs, Water, Irma
e3 | Cats, Water, Eric | Dogs, Grass, Irma
e4 | Cats, Grass, Irma | Dogs, Water, Eric
e5 | Mixture: 90% data from e1 and 10% data from e2
e6 | Mixture: 90% data from e3 and 10% data from e4

The reason is that e5 and e6 only tell us that X3 cannot be included in the invariance set, but they cannot exclude X2. However, if e5 and e6 can be further divided into e1, e2 and e3, e4 respectively, the invariance set I_{E_tr} becomes I_{E_tr} = I_E = {Φ(X1)}. This example shows that manually labeled environments may not be sufficient to achieve the MIP, not to mention the cases where environment labels are not available. This limitation necessitates the study of how to exploit the latent intrinsic heterogeneity in training data (like e5 and e6 in the above example) to form more refined environments for OOD generalization.
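The invariance condition of Definition 2.1, H[Y | Φ(X)] = H[Y | Φ(X), E], can be estimated empirically for discrete data with plug-in entropy estimates. A minimal sketch (function names hypothetical; this is an illustration of the definition, not the paper's learning algorithm):

```python
from collections import Counter
from math import log

def cond_entropy(y, cond):
    """Plug-in estimate of H[Y | cond] in nats from paired samples."""
    n = len(y)
    joint = Counter(zip(cond, y))
    marg = Counter(cond)
    return sum(c / n * log(marg[k] / c) for (k, _), c in joint.items())

def invariance_gap(y, phi, env):
    """H[Y | phi] - H[Y | phi, env]; zero means Y ⊥ E | phi(X)."""
    return cond_entropy(y, phi) - cond_entropy(y, list(zip(phi, env)))

# toy data in the spirit of Table 1: Y follows X1 in every environment,
# while the background X2 flips its correlation with Y across environments
y   = [0, 0, 1, 1, 0, 0, 1, 1]
x1  = [0, 0, 1, 1, 0, 0, 1, 1]   # invariant feature, stays in the invariance set
x2  = [0, 0, 1, 1, 1, 1, 0, 0]   # variant feature, excluded by the second environment
env = [0, 0, 0, 0, 1, 1, 1, 1]
print(invariance_gap(y, x1, env))  # ~0: X1 passes the invariance check
print(invariance_gap(y, x2, env))  # > 0: X2 fails it
```

On this toy sample the gap for x2 equals log 2, since knowing the environment fully resolves the label given x2, while knowing x2 alone leaves one bit of uncertainty.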
The environments need to be subtly uncovered: in the sense of the OOD generalization problem, as indicated by Theorem 2.3, not all environments help to tighten the invariance set.

Theorem 2.3. Given a set of environments supp(Ê), denote the corresponding invariance set I_Ê and the corresponding maximal invariant predictor Φ̂. For a newly added environment e_new with distribution P^new(X, Y), if P^new(Y | Φ̂) = P^e(Y | Φ̂) for e ∈ supp(Ê), then the invariance set constrained by supp(Ê) ∪ {e_new} is equal to I_Ê.

2.2. Problem of Heterogeneous Risk Minimization

Besides Assumption 2.1, we make another assumption on the existence of heterogeneity in the training data:

Assumption 2.2 (Heterogeneity Assumption). For the random variable pair (X, Φ*), with Φ* satisfying Assumption 2.1, by the functional representation lemma (El Gamal & Kim, 2011) there exists a random variable Ψ* such that X = X(Φ*, Ψ*); we then assume that P^e(Y | Ψ*) can change arbitrarily across environments e ∈ supp(E).

The heterogeneity among the provided environments can be evaluated by the compactness of the corresponding invariance set, |I_E|. Specifically, a smaller |I_E| implies higher heterogeneity, since more variant features can be excluded. Based on this assumption, we formulate the problem of heterogeneity exploitation for OOD generalization.

Problem 1 (Heterogeneous Risk Minimization). Given a heterogeneous dataset D = {D^e}_{e∈supp(E_latent)} without environment labels, the task is to generate environments E_tr with minimal |I_{E_tr}| and to learn an invariant model under the learned E_tr with good OOD performance.

Theorem 2.3 together with Assumption 2.2 indicates that, to better constrain I_{E_tr}, the effective way is to generate environments with varying P(Y | Ψ*(X)), which can exclude variant features from I_{E_tr}.
Under this problem setting, we encounter a circular dependency: first, we need the variant Ψ* to generate heterogeneous environments E_tr; then, we need E_tr to learn the invariant Φ* as well as the variant Ψ*. Furthermore, there exists positive feedback between these two steps. When E_tr with a tighter I_{E_tr} is acquired, a more invariant predictor Φ(X) (i.e., a better approximation of the MIP) can be found, which in turn brings a clearer picture of the variant parts and therefore promotes the generation of E_tr. With this notion, we propose our Heterogeneous Risk Minimization (HRM) framework, which leverages the mutual promotion between the two steps and conducts joint optimization.

3. Method

In this work, we temporarily focus on a simple but general setting, where X = [Φ*, Ψ*]^T ∈ R^d at the raw feature level and Φ*, Ψ* satisfy Assumption 2.1. Under this setting, our Heterogeneous Risk Minimization (HRM) framework contains two interactive parts: the frontend M_c for heterogeneity identification and the backend M_p for invariant prediction. The general framework is shown in Figure 1.

Figure 1. The framework of HRM.

Given the pooled heterogeneous data, HRM starts with the heterogeneity identification module M_c, which leverages the learned variant representation Ψ(X) to generate heterogeneous environments E_learn. The learned environments are then used by the OOD prediction module M_p to learn the MIP Φ(X) as well as the invariant prediction model f(Φ(X)). After that, we derive the variant Ψ(X) to further boost the module M_c, which is supported by Theorem 2.3. As for the 'convert' step, under our setting we adopt feature selection in this work, through which more of the variant feature Ψ can be attained as more of the invariant feature Φ is learned. Specifically, the invariant predictor Φ(X) is generated as Φ(X) = M ⊙ X, and correspondingly the variant part is Ψ(X) = (1 − M) ⊙ X, where M ∈ {0, 1}^d is the binary invariant feature selection mask. For instance, for Table 1 with X = [X1, X2, X3], the
Speci\ufb01cally, the invariant predictor \u03a6(X) is generated as \u03a6(X) = M \u2299X, and the variant part \u03a8(X) = (1\u2212M)\u2299X correspondingly, where M \u2208{0, 1}d is the binary invariant feature selection mask. For instance, for Table 1, X = [X1, X2, X3], the \fSubmission and Formatting Instructions for ICML 2021 ground truth binary mask is M = [1, 0, 0]. In this way, the better \u03a6 is learned, the better \u03a8 can be obtained. Note that we use the soft selection which is more \ufb02exible and general in our algorithm with M \u2208[0, 1]d. The whole framework is jointly optimized, so that the mutual promotion between heterogeneity identi\ufb01cation and invariant learning can be fully leveraged. 3.1. Implementation of Mp Here we introduce our invariant prediction module Mp, which takes multiple environments training data D = {De}e\u2208supp(Etr) as input, and outputs the corresponding invariant predictor f and the indices of invariant features M given current environments Etr. We combine feature selection with invariant learning under heterogeneous environments, which can select the features with stable/invariant correlations with the label across Etr. Speci\ufb01cally, the former module can select most informative features with respect to the loss function and latter module ensures the selected features are invariant. Their combination ensures Mp to select the most informative invariant features. For invariant learning, we follow the variance penalty regularizer proposed in (Koyama & Yamaguchi, 2020) and simplify it in feature selection scenarios. The objective function of Mp with M \u2208{0, 1}d is: Le(M \u2299X, Y ; \u03b8) = EP e[\u2113(M \u2299Xe, Y e; \u03b8)] (4) Lp(M \u2299X, Y ; \u03b8) = EEtr[Le] + \u03bbtrace(VarEtr(\u2207\u03b8Le)) (5) However, as the optimization of hard feature selection with binary mask M suffers from high variance, we use the soft feature selection with gates taking continuous value in [0, 1]. 
Specifically, following (Yamada et al., 2020), we approximate each element of M = [m_1, ..., m_d]^T by a clipped Gaussian random variable parameterized by µ = [µ_1, ..., µ_d]^T as:

m_i = max{0, min{1, µ_i + ε}},    (6)

where ε is drawn from N(0, σ²). With this approximation, the objective function with soft feature selection can be written as:

L^e(θ, µ) = E_{P^e} E_M [ℓ(M ⊙ X^e, Y^e; θ) + α‖M‖₀],    (7)

where M is a random vector with d independent components m_i for i ∈ [d]. Under the approximation in Equation 6, ‖M‖₀ is simply Σ_{i∈[d]} P(m_i > 0) and can be calculated as ‖M‖₀ = Σ_{i∈[d]} CDF(µ_i/σ), where CDF is the standard Gaussian CDF. We formulate our objective as a risk minimization problem:

min_{θ,µ} L^p(θ, µ) = E_{E_tr}[L^e(θ, µ)] + λ trace(Var_{E_tr}(∇_θ L^e)),    (8)

where

L^e(θ, µ) = E_{P^e} E_M [ℓ(M ⊙ X^e, Y^e; θ) + α Σ_{i∈[d]} CDF(µ_i/σ)].    (9)

Furthermore, for linear models, we simply approximate the regularizer trace(Var_{E_tr}(∇_θ L^e)) by ‖Var_{E_tr}(∇_θ L^e) ⊙ M‖². We then obtain Φ(X) and Ψ(X) once we obtain µ and hence M. In Section 4, we theoretically prove that the prediction module M_p is able to learn the MIP with respect to the given environments E_tr.

3.2. Implementation of M_c

Notation: Ψ denotes the learned variant part Ψ(X); ∆^K denotes the K-dimensional simplex; f_θ(·) denotes the function f parameterized by θ.

The heterogeneity identification module M_c takes a single dataset as input and outputs a multi-environment dataset partition for invariant prediction. We implement it with a clustering algorithm. As indicated by Theorem 2.3, the more diverse the P(Y | Ψ) across our generated environments, the better the invariance set I.
Therefore, we cluster the data points according to the relationship between Ψ and Y, for which we use P(Y | Ψ) as the cluster centre. Note that Ψ is initialized as Ψ(X) = X in our joint optimization. Specifically, we assume the j-th cluster centre P_{Θ_j}(Y | Ψ), parameterized by Θ_j, to be a Gaussian around f_{Θ_j}(Ψ), i.e., N(f_{Θ_j}(Ψ), σ²):

h_j(Ψ, Y) = P_{Θ_j}(Y | Ψ) = (1 / (√(2π) σ)) exp(−(Y − f_{Θ_j}(Ψ))² / (2σ²)).    (10)

For the given N = Σ_{e∈E_tr} n_e empirical data samples D = {(ψ_i(x_i), y_i)}_{i=1}^N, the empirical distribution is modeled as P̂_N = (1/N) Σ_{i=1}^N δ_i(Ψ, Y), where

δ_i(Ψ, Y) = 1 if Ψ = ψ_i and Y = y_i, and 0 otherwise.    (11)

The target of our heterogeneous clustering is to find the distribution in Q = {Q | Q = Σ_{j∈[K]} q_j h_j(Ψ, Y), q ∈ ∆^K} that best fits the empirical distribution. Therefore, the objective function of our heterogeneous clustering is:

min_{Q∈Q} D_KL(P̂_N ‖ Q),    (12)

which can be further simplified to:

min_{Θ,q} L_c = −(1/N) Σ_{i=1}^N log [Σ_{j=1}^K q_j h_j(ψ_i, y_i)].    (13)

For optimization, we use the EM algorithm to optimize the centre parameters Θ and the mixture weights q. After optimizing Equation 13, to build E_tr we assign each data point to environment e_j ∈ E_tr with probability:

P(e_j | Ψ, Y) = q_j h_j(Ψ, Y) / (Σ_{i=1}^K q_i h_i(Ψ, Y)).    (14)

In this way, E_tr is generated by M_c.

4. Theoretical Analysis

In this section, we theoretically analyze our proposed Heterogeneous Risk Minimization (HRM) method. We first analyze the proposed invariant learning module M_p, and then justify the existence of the positive feedback in HRM.
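Before the analysis, the soft assignment of Equations 10 and 14 can be made concrete: responsibilities of Gaussian centres N(f_{Θ_j}(Ψ), σ²) weighted by the mixture weights q. A minimal sketch, assuming linear centres f_{Θ_j}(ψ) = θ_j · ψ for illustration (all names hypothetical):

```python
from math import exp, pi, sqrt

SIGMA = 1.0  # assumed centre bandwidth σ

def center_density(theta_j, psi, y):
    """h_j(ψ, y) = N(y; f_j(ψ), σ²) with a linear centre f_j, as in Eq. (10)."""
    f = sum(t * p for t, p in zip(theta_j, psi))
    return exp(-(y - f) ** 2 / (2 * SIGMA ** 2)) / (sqrt(2 * pi) * SIGMA)

def assign_probs(thetas, q, psi, y):
    """P(e_j | ψ, y) ∝ q_j h_j(ψ, y), Eq. (14): the E-step responsibilities."""
    scores = [qj * center_density(tj, psi, y) for tj, qj in zip(thetas, q)]
    z = sum(scores)
    return [s / z for s in scores]

# two centres with opposite variant relationships between Ψ and Y
thetas = [[1.0], [-1.0]]   # f_1(ψ) = ψ, f_2(ψ) = -ψ
q = [0.5, 0.5]
print(assign_probs(thetas, q, [2.0], 2.0))   # point fits centre 1
print(assign_probs(thetas, q, [2.0], -2.0))  # point fits centre 2
```

Each data point is then assigned to an environment by sampling from these responsibilities, which is how the learned partition E_tr is produced.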
Justification of M_p. We prove that, given training environments E_tr, our invariant prediction module M_p can learn the maximal invariant predictor Φ(X) with respect to the corresponding invariance set I_{E_tr}.

Theorem 4.1. Given E_tr, the learned Φ(X) = M ⊙ X is the maximal invariant predictor of I_{E_tr}.

Justification of the positive feedback. The core of our HRM framework is the mechanism by which M_c and M_p mutually promote each other. Here we theoretically justify the existence of such positive feedback. In Assumption 2.1, we assume the invariance and sufficiency properties of the stable features Φ* and assume that the relationship between the unstable part Ψ* and Y can change arbitrarily. Here we make a more specific assumption on the heterogeneity across environments with respect to Φ* and Ψ*.

Assumption 4.1. Assume the pooled training data is a mixture of heterogeneous data sources: P_tr = Σ_{e∈supp(E_tr)} w_e P^e. For any e_i, e_j ∈ E_tr with e_i ≠ e_j, we assume

I^c_{i,j}(Y; Φ* | Ψ*) ≥ max(I_i(Y; Φ* | Ψ*), I_j(Y; Φ* | Ψ*)),    (15)

where Φ* is the invariant feature and Ψ* the variant one, I_i denotes mutual information under P^{e_i}, and I^c_{i,j} denotes the cross mutual information between P^{e_i} and P^{e_j}, taking the form I^c_{i,j}(Y; Φ | Ψ) = H^c_{i,j}[Y | Ψ] − H^c_{i,j}[Y | Φ, Ψ], with H^c_{i,j}[Y] = −∫ p^{e_i}(y) log p^{e_j}(y) dy.

We would like to demonstrate this assumption intuitively. First, the mutual information I_i(Y; Φ*) = H_i[Y] − H_i[Y | Φ*] can be viewed as the error reduction when we use Φ* to predict Y rather than predicting with nothing. The cross mutual information I_{i,j}(Y; Φ*) can then be viewed as the error reduction when we use the predictor learned on Φ* in environment e_j to predict in environment e_i, rather than predicting with nothing.
Therefore, the R.H.S. of equation 15 measures how much prediction error can be reduced within environment e_i (or e_j) if we further add Φ* for prediction rather than using only Ψ*, while the L.H.S. measures how much prediction error can be reduced when predictors trained in e_i are used to predict in e_j, if we further add Φ* rather than using only Ψ*. Intuitively, Assumption 4.1 assumes that the invariant feature Φ* provides more information for predicting Y across environments than within one single environment; correspondingly, the information provided by Ψ* shrinks considerably across environments, which indicates that the relationship between the variant feature Ψ* and Y varies across environments. Based on this assumption, we first prove that the cluster centres are pulled further apart when the invariant feature is excluded from clustering.

Theorem 4.2. For e_i, e_j ∈ supp(E_tr), assume that X = [Φ*, Ψ*]^T satisfies Assumption 2.1, where Φ* is invariant and Ψ* variant. Then under Assumption 4.1, we have

D_{KL}\left( P^{e_i}(Y|X) \,\|\, P^{e_j}(Y|X) \right) \leq D_{KL}\left( P^{e_i}(Y|\Psi^*) \,\|\, P^{e_j}(Y|\Psi^*) \right)

Theorem 4.2 indicates that the distance between cluster centres is larger when only the variant features Ψ* are used; therefore, it is more likely to obtain the desired heterogeneous environments, which explains why we use the learned variant part Ψ(X) for clustering. Finally, we provide the theorem giving the optimality guarantee for our HRM.

Theorem 4.3. Under Assumptions 2.1 and 4.1, for the proposed M_c and M_p, we have the following conclusions: 1. Given environments E_tr such that I_E = I_{E_tr}, the learned Φ(X) by M_p is the maximal invariant predictor of I_E. 2.
Given the maximal invariant predictor Φ* of I_E, and assuming the pooled training data is made up of data from all environments in supp(E), there exists one split that achieves the minimum of the objective function while its regularized invariance set equals I_E. Intuitively, Theorem 4.3 proves that given either of M_c and M_p optimal, the other is optimal, which validates the existence of the global optimum of our algorithm.

5. Experiment

In this section, we validate the effectiveness of our method on simulation data and real-world data.

Baselines. We compare our proposed HRM with the following methods:

• Empirical Risk Minimization (ERM): \min_\theta E_{P_0}[\ell(\theta; X, Y)]
• Distributionally Robust Optimization (DRO (Sinha et al., 2018)): \min_\theta \sup_{Q: W(Q, P_0) \leq \rho} E_Q[\ell(\theta; X, Y)]
• Environment Inference for Invariant Learning (EIIL (Creager et al., 2020)):

\min_\Phi \max_u \sum_{e \in E} \frac{1}{N_e} \sum_i u_i(e)\, \ell(w \odot \Phi(x_i), y_i) + \sum_{e \in E} \lambda \left\| \nabla_{w|w=1.0} \frac{1}{N_e} \sum_i u_i(e)\, \ell(w \odot \Phi(x_i), y_i) \right\|^2    (16)

• Invariant Risk Minimization (IRM (Arjovsky et al., 2019)) with environment labels E_tr:

\min_\Phi \sum_{e \in E_{tr}} \left[ L^e + \lambda \left\| \nabla_{w|w=1.0} L^e(w \odot \Phi) \right\|^2 \right]    (17)

Further, for the ablation study, we also compare with HRM_s, which runs HRM for only one iteration, without the feedback loop. Note that IRM relies on multiple training environments and we provide the environment labels E_tr for IRM, while the other methods do not need environment labels.
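For concreteness, the gradient penalty shared by the EIIL and IRM objectives (eqs. 16–17) has a closed form for a linear model with squared loss. The sketch below is an illustration under that assumption, not the authors' implementation.

```python
import numpy as np

def irm_penalty(pred, y):
    """|| grad_w of (1/Ne) sum_i (w * pred_i - y_i)^2, evaluated at w = 1.0 ||^2.
    For squared loss the gradient is (2/Ne) sum_i (pred_i - y_i) * pred_i."""
    g = 2.0 * np.mean((pred - y) * pred)
    return g ** 2

def irm_objective(envs, theta, lam=1.0):
    """Sum over environments of (average loss + lam * penalty), as in eq. (17).
    envs is a list of (X, y) pairs; the model is linear: pred = X @ theta."""
    total = 0.0
    for X, y in envs:
        pred = X @ theta
        total += np.mean((pred - y) ** 2) + lam * irm_penalty(pred, y)
    return total
```

The penalty vanishes exactly when, in each environment, residuals are uncorrelated with predictions, which is what an invariant predictor achieves simultaneously across all environments.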
Evaluation Metrics. To evaluate the prediction performance, we use Mean Error, defined as \text{Mean Error} = \frac{1}{|E_{test}|} \sum_{e \in E_{test}} L^e; Std Error, defined as \text{Std Error} = \sqrt{ \frac{1}{|E_{test}|-1} \sum_{e \in E_{test}} (L^e - \text{Mean Error})^2 }; and Max Error = \max_{e \in E_{test}} L^e, which are the mean, standard deviation and worst-case error across E_test, respectively.

Imbalanced Mixture. It is a natural phenomenon that empirical data follow a power-law distribution, i.e. only a few environments/subgroups are common and the rest are rare (Shen et al., 2018; Sagawa et al., 2019; 2020). Therefore, we perform non-uniform sampling among different environments in the training set.

5.1. Simulation Data

We design two mechanisms to simulate the varying correlations among covariates across environments, named selection bias and anti-causal effect.

Selection Bias. In this setting, the correlations between variant covariates and the target are perturbed through a selection bias mechanism. According to Assumption 2.1, we assume X = [Φ*, Ψ*]^T ∈ R^d and Y = f(Φ*) + ε, and that P(Y|Φ*) remains invariant across environments while P(Y|Ψ*) changes arbitrarily. For simplicity, we select data points according to a certain variable set V_b ⊂ Ψ*:

\hat{P}(x) = \prod_{v_i \in V_b} |r|^{-5 \cdot |f(\phi^*) - \text{sign}(r) \cdot v_i|}    (18)

where |r| > 1, V_b ∈ R^{n_b}, and \hat{P}(x) denotes the probability of point x being selected. Intuitively, r controls the strength and direction of the spurious correlation between V_b and Y (i.e. if r > 0, a data point whose V_b is close to its y is more likely to be selected). A larger |r| means a stronger spurious correlation between V_b and Y, and r ≥ 0 means positive correlation and vice versa. Therefore, here we use r to define different environments.
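The selection-bias mechanism of eq. 18 can be sketched as rejection sampling. The choices `f(phi) = mean(phi)` and the noise scale below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def selection_bias_sample(n, r, d=10, nb=1, seed=0):
    """Keep each candidate point with probability prod_{v_i in V_b} |r|^(-5*|f(phi)-sign(r)*v_i|),
    as in eq. (18). f(phi) = mean(phi) and eps ~ N(0, 0.3^2) are illustrative choices."""
    rng = np.random.default_rng(seed)
    X, Y = [], []
    while len(Y) < n:
        phi = rng.normal(size=d - nb)        # stable covariates Phi*
        v = rng.normal(size=nb)              # biased variables V_b (a subset of Psi*)
        f = phi.mean()
        y = f + 0.3 * rng.normal()
        p = np.prod(np.abs(r) ** (-5.0 * np.abs(f - np.sign(r) * v)))
        if rng.random() < p:                 # p <= 1 whenever |r| > 1
            X.append(np.concatenate([phi, v]))
            Y.append(y)
    return np.array(X), np.array(Y)

X, Y = selection_bias_sample(500, r=1.9)
```

With r > 0 the kept points have V_b close to f(Φ*), so V_b becomes spuriously correlated with Y; flipping the sign of r flips the direction of that correlation.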
In training, we generate sum = 2000 data points, where κ = 95% of the points come from environment e_1 with a predefined r and 1 − κ = 5% come from e_2 with r = −1.1. In testing, we generate data points for 10 environments with r ∈ [−3, −2.7, −2.3, ..., 2.3, 2.7, 3.0]. β is set to 1.0. We compare our HRM with ERM, DRO, EIIL and IRM for linear regression. We conduct extensive experiments with different settings of r, n_b and d. In each setting, we carry out the procedure 10 times and report the average results. The results are shown in Table 2.

Figure 2. Visualization of differences between environments in scenario 1 of the selection bias experiment (r = 1.9). The left figure shows the initial clustering results using X, and the right one shows the learned E_learn using the learned variant part Ψ(X).

From the results, we have the following observations and analysis: ERM suffers from the distributional shifts in testing and yields poor performance in most of the settings. DRO surprisingly has the worst performance, which we attribute to the over-pessimism problem (Frogner et al., 2019). EIIL has similar performance to ERM, which indicates that its inferred environments cannot reveal the spurious correlations between Y and V_b. IRM performs much better than the above two baselines; however, as IRM depends on available environment labels to work, it uses much more information than the other three methods. Compared to the three baselines, our HRM achieves nearly perfect performance with respect to both average performance and stability; in particular, the variance of losses across environments is close to 0, which reflects the effectiveness of our heterogeneous clustering as well as of the invariant learning algorithm. Furthermore, our HRM does not need environment labels, which verifies that our clustering algorithm can mine the latent heterogeneity inside the data and further shows our superiority over IRM.
Besides, we visualize the differences between environments using Task2Vec (Achille et al., 2019) in Figure 2, where a larger value means the two environments are more heterogeneous. The pooled training data are a mixture of environments with r = 1.9 and r = −1.1, the difference between which is shown in the yellow box. The red boxes show the differences between the environments learned by HRM_s and HRM. The large improvement from E_init to E_learn verifies that our HRM can exploit the heterogeneity inside data, and confirms the existence of the positive feedback. Due to space limitations, results with varying sum, κ and n_b, as well as experimental details, are left to the appendix.

Anti-causal Effect. Inspired by (Arjovsky et al., 2019), we induce the spurious correlation through an anti-causal relationship from the target Y to the variant covariates Ψ*. In this experiment, we assume X = [Φ*, Ψ*]^T ∈ R^d, firstly sample Φ* from a mixture of Gaussians characterized as \sum_{i=1}^k z_i N(\mu_i, I), and set the target Y = \theta_\phi^T \Phi^* + \beta \Phi_1 \Phi_2 \Phi_3 + N(0, 0.3). Then the spurious

Table 2. Results in selection bias simulation experiments of different methods with varying selection bias rate r and dimensions n_b and d of the training data; each result is averaged over ten runs.
Scenario 1: varying selection bias rate r (d = 10, nb = 1) r r = 1.5 r = 1.9 r = 2.3 Methods Mean Error Std Error Max Error Mean Error Std Error Max Error Mean Error Std Error Max Error ERM 0.476 0.064 0.524 0.510 0.108 0.608 0.532 0.139 0.690 DRO 0.467 0.046 0.516 0.512 0.111 0.625 0.535 0.143 0.746 EIIL 0.477 0.057 0.543 0.507 0.102 0.613 0.540 0.139 0.683 IRM(with Etr label) 0.460 0.014 0.475 0.456 0.015 0.472 0.461 0.015 0.475 HRMs 0.465 0.045 0.511 0.488 0.078 0.577 0.506 0.096 0.596 HRM 0.447 0.011 0.462 0.449 0.010 0.465 0.447 0.011 0.463 Scenario 2: varying dimension d (r = 1.9, nb = 0.1d) d d = 10 d = 20 d = 40 Methods Mean Error Std Error Max Error Mean Error Std Error Max Error Mean Error Std Error Max Error ERM 0.510 0.108 0.608 0.533 0.141 0.733 0.528 0.175 0.719 DRO 0.512 0.111 0.625 0.564 0.186 0.746 0.555 0.196 0.758 EIIL 0.507 0.102 0.613 0.543 0.147 0.699 0.542 0.178 0.727 IRM(with Etr label) 0.456 0.015 0.472 0.484 0.014 0.489 0.500 0.051 0.540 HRMs 0.488 0.078 0.577 0.486 0.069 0.555 0.477 0.081 0.553 HRM 0.449 0.010 0.465 0.466 0.011 0.478 0.465 0.015 0.482 Table 3. Prediction errors of the anti-causal effect experiment. We design two settings with different dimensions of \u03a6\u2217and \u03a8\u2217as n\u03c6 and n\u03c8 respectively. The results are averaged over 10 runs. 
Scenario 1: n\u03c6 = 9, n\u03c8 = 1 e Training environments Testing environments Methods e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 ERM 0.290 0.308 0.376 0.419 0.478 0.538 0.596 0.626 0.640 0.689 DRO 0.289 0.310 0.388 0.428 0.517 0.610 0.627 0.669 0.679 0.739 EIIL 0.075 0.128 0.349 0.485 0.795 1.162 1.286 1.527 1.558 1.884 IRM(with Etr label) 0.306 0.312 0.325 0.328 0.343 0.358 0.365 0.374 0.377 0.392 HRMs 1.060 1.085 1.112 1.130 1.207 1.280 1.325 1.340 1.371 1.430 HRM 0.317 0.314 0.322 0.318 0.321 0.317 0.315 0.315 0.316 0.320 Scenario 2: n\u03c6 = 5, n\u03c8 = 5 e Training environments Testing environments Methods e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 ERM 0.238 0.286 0.433 0.512 0.629 0.727 0.818 0.860 0.895 0.980 DRO 0.237 0.294 0.452 0.529 0.651 0.778 0.859 0.911 0.950 1.028 EIIL 0.043 0.145 0.521 0.828 1.237 1.971 2.523 2.514 2.506 3.512 IRM(with Etr label) 0.287 0.293 0.329 0.345 0.382 0.420 0.444 0.461 0.478 0.504 HRMs 0.455 0.463 0.479 0.478 0.495 0.508 0.513 0.519 0.525 0.533 HRM 0.316 0.315 0.315 0.330 0.3200 0.317 0.326 0.330 0.333 0.335 correlations between \u03a8\u2217and Y are generated by anti-causal effect as \u03a8\u2217= \u03b8\u03c8Y + N(0, \u03c3(\u00b5i)2) (19) where \u03c3(\u00b5i) means the Gaussian noise added to \u03a8\u2217depends on which component the invariant covariates \u03a6\u2217belong to. Intuitively, in different Gaussian components, the corresponding correlations between \u03a8\u2217and Y are varying due to the different value of \u03c3(\u00b5i). The larger the \u03c3(\u00b5i) is, the weaker correlation between \u03a8\u2217and Y . We use the mixture weight Z = [z1, . . . , zk]T to de\ufb01ne different environments, where different mixture weights represent different overall strength of the effect Y on \u03a8\u2217. In this experiment, we set \u03b2 = 0.1 and build 10 environments with varying \u03c3 and the dimension of \u03a6\u2217, \u03a8\u2217, the \ufb01rst three for training and the last seven for testing. 
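A minimal generator for one anti-causal environment (eq. 19). The component means, noise scales, and the omission of the cubic term β Φ_1 Φ_2 Φ_3 are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def anti_causal_env(n, z, mus, sigmas, theta_phi, theta_psi, seed=0):
    """One environment of the anti-causal design: Phi* drawn from a Gaussian mixture with
    weights z, Y generated from Phi*, and Psi* = theta_psi * Y + N(0, sigma(mu_i)^2)
    (eq. 19), where the noise level depends on the mixture component."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(mus), size=n, p=z)                 # mixture component per point
    phi = mus[comp][:, None] + rng.normal(size=(n, len(theta_phi)))
    y = phi @ theta_phi + 0.3 * rng.normal(size=n)
    psi = np.outer(y, theta_psi) + sigmas[comp][:, None] * rng.normal(size=(n, len(theta_psi)))
    return np.concatenate([phi, psi], axis=1), y

mus = np.array([-1.0, 1.0])        # component means (shared across coordinates here)
sigmas = np.array([0.3, 3.0])      # noise level sigma(mu_i) per component
theta_phi = np.array([1.0, -1.0])
theta_psi = np.array([1.0])
# two environments = two mixture weights: Psi* is strongly predictive in A, weakly in B
XA, yA = anti_causal_env(2000, np.array([0.9, 0.1]), mus, sigmas, theta_phi, theta_psi, seed=0)
XB, yB = anti_causal_env(2000, np.array([0.1, 0.9]), mus, sigmas, theta_phi, theta_psi, seed=1)
```

Shifting the mixture weights changes how informative Ψ* is about Y, which is exactly the varying spurious correlation the experiment exploits.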
We run the experiments 10 times and the averaged results are shown in Table 3. EIIL achieves the best training performance with respect to prediction errors on the training environments e_1, e_2, e_3, while its performance in testing is poor. ERM suffers from distributional shifts in testing. DRO seeks overly conservative robustness and performs much worse. IRM performs much better, as it learns invariant representations with the help of environment labels. HRM achieves nearly uniformly good performance on the training environments as well as the testing ones, which validates the effectiveness of our method and demonstrates its excellent generalization ability.

5.2. Real-world Data

We test our method on three real-world tasks: car insurance prediction, people income prediction and house price prediction.

5.2.1. SETTINGS

Car Insurance Prediction. In this task, we use a real-world dataset for car insurance prediction (Kaggle). It is a classification task to predict whether a person will buy car insurance based on related information, such as vehicle damage, annual premium, vehicle age, etc.1 We impose a selection bias mechanism on the correlation between the outcome (i.e. the label indicating whether insurance is bought) and the sex attribute to simulate multiple environments.

Figure 3. Results on real-world datasets, including training and testing performance for five methods. (a) Training and testing accuracy for the car insurance prediction; the left sub-figure shows the training results for 5 settings and the right shows the corresponding testing results. (b) Mis-classification rate for the income prediction. (c) Prediction error for the house price prediction; RMSE refers to the Root Mean Square Error.
Speci\ufb01cally, we simulate different strengths |r| of the spurious correlation between sex and target in training, and reverse the direction of such correlation in testing(+|r| in training and \u2212|r| in testing). For IRM, in each setting, we divide the training data into three training environments with r1 = 0.95, r2 = 0.9, r3 = \u22120.8, and different overall correlation r corresponds to different numbers of data in e1, e2, e3. We perform 5 experiments with varying r and the results in both training and testing are shown in Figure 3(a). People Income Prediction In this task we use the Adult dataset (Dua & Graff, 2017) to predict personal income levels as above or below $50,000 per year based on personal details. We split the dataset into 10 environments according to demographic attributes sex and race. In training phase, all methods are trained on pooled data including 693 points from environment 1 and 200 from environment 2, and validated on 100 sampled from both. For IRM, the ground-truth environment labels are provided. In testing phase, we test all methods on the 10 environments and report the mis-classi\ufb01cation rate on all environments in Figure 3(b). House Price Prediction In this experiment, we use a realworld regression dataset (Kaggle) of house sales prices from King County, USA2. The target variable is the transaction price of the house and each sample contains 17 predictive variables such as the built year of the house, number of bedrooms, and square footage of home, etc. We simulate 1https://www.kaggle.com/anmolkumar/health-insurancecross-sell-prediction 2https://www.kaggle.com/c/house-prices-advancedregressiontechniques/data different environments according to the built year of the house, since it is fairly reasonable to assume the correlations among covariates and the target may vary along time. Speci\ufb01cally, we split the dataset into 6 periods, where each period approximately covers a time span of two decades. 
All methods are trained on data from the first period ([1900, 1920)) and tested on the other periods. For IRM, we further divide the training data into two environments where the built year lies in [1900, 1910) and [1910, 1920), respectively. Results are shown in Figure 3(c).

5.2.2. ANALYSIS

From the results of the three real-world tasks, we have the following observations and analysis: ERM achieves high accuracy in training while performing much worse in testing, indicating its inability to deal with OOD predictions. DRO's performance is not satisfactory, sometimes even worse than ERM; one plausible reason is its over-pessimistic nature, which leads to too-conservative predictors. Comparatively, invariant learning methods perform better in testing. IRM performs better than ERM and DRO, which shows the usefulness of environment labels for OOD generalization and the possibility of learning an invariant predictor from multiple environments. EIIL performs inconsistently across different tasks, possibly due to the instability of its environment inference method. In all tasks and almost all testing environments (16/18), HRM consistently achieves the best performance. HRM even outperforms IRM significantly in an unfair setting where we provide perfect environment labels for IRM. On one side, this shows the limitation of manually labeled environments; on the other side, it demonstrates that, by relieving the dependence on environment labels, HRM can effectively uncover and fully leverage the intrinsic heterogeneity in training data for invariant learning.

6. Related Works

There are mainly two branches of methods for the OOD generalization problem, namely distributionally robust optimization (DRO) (Esfahani & Kuhn, 2018; Duchi & Namkoong, 2018; Sinha et al., 2018; Sagawa et al., 2019) and invariant learning (Arjovsky et al., 2019; Koyama & Yamaguchi, 2020; Chang et al., 2020; Creager et al., 2020).
DRO methods propose to optimize the worst-case risk within an uncertainty set, which lies around the observed training distribution and characterizes the potential testing distributions. However, in real scenarios, to better capture the testing distribution, the uncertainty set has to be quite large, which results in the over-pessimism problem of DRO methods (Hu et al., 2018; Frogner et al., 2019). Realizing the difficulty of solving the OOD generalization problem without any prior knowledge or structural assumptions, invariant learning methods assume the existence of causally invariant relationships between some predictors Φ(X) and the target Y. (Arjovsky et al., 2019) and (Koyama & Yamaguchi, 2020) propose to learn an invariant representation through multiple training environments. (Chang et al., 2020) also proposes to select features whose predictive relationship with the target stays invariant across environments. However, their effectiveness relies on the quality of the given multiple training environments, and the role of environments remains theoretically vague. Recently, (Creager et al., 2020) improved on (Arjovsky et al., 2019) by relaxing its requirement for multiple environments. Specifically, (Creager et al., 2020) proposes a two-stage method, which firstly infers the environment division with a pre-provided biased model, and then performs invariant learning on the inferred environments. However, the two stages cannot be jointly optimized, and the environment division relies on the given biased model and lacks theoretical guarantees.

7. Discussions

In this work, we theoretically analyze the role of environments in invariant learning, and propose HRM for joint heterogeneity identification and invariant prediction, which relaxes the requirement for environment labels and opens a new direction for invariant learning.
To our knowledge, this is the first work to both theoretically and empirically analyze how the quality of multiple environments affects invariant learning. This paper mainly focuses on the raw-variable level with the assumption X = [Φ*, Ψ*]^T, which covers a broad spectrum of applications, e.g. healthcare, finance, marketing, etc., where the raw variables are informative enough. However, our work has some limitations, which we hope to address in the future. Firstly, in order to achieve the mutual promotion, we should use the variant features Ψ* for heterogeneity identification rather than the invariant ones. However, the process of invariant prediction continuously discards the variant features Ψ* (in favor of invariant features or representations), which makes it quite hard to recover the variant features. To overcome this, we focus on the simple setting where X = [Φ*, Ψ*]^T, since we can directly obtain the variant features Ψ* once we have the invariant features Φ*. To further extend the power of HRM, we will consider incorporating representation learning from X in future work. Secondly, our clustering algorithm in M_c lacks theoretical guarantees for its convergence. To the best of our knowledge, in order to theoretically analyze the convergence of a clustering algorithm, it is necessary to measure the distance between data points. However, our clustering algorithm takes models' parameters as cluster centres and aims to cluster data points (X, Y) according to the relationship between X and Y, whose dissimilarity cannot be easily measured, since the relationship is a statistical quantity and cannot be computed for an individual point. How to theoretically analyze the convergence property of such clustering algorithms remains an open problem.

8. Acknowledgements

This work was supported in part by National Key R&D Program of China (No. 2018AAA0102004, No.
2020AAA0106300), National Natural Science Foundation of China (No. U1936219, 61521002, 61772304), Beijing Academy of Arti\ufb01cial Intelligence (BAAI), and a grant from the Institute for Guo Qiang, Tsinghua University. Bo Li\u2019s research was supported by the Tsinghua University Initiative Scienti\ufb01c Research Grant, No. 2019THZWJC11; Technology and Innovation Major Project of the Ministry of Science and Technology of China under Grant 2020AAA0108400 and 2020AAA01084020108403; Major Program of the National Social Science Foundation of China (21ZDA036). \fSubmission and Formatting Instructions for ICML 2021" + }, + { + "url": "http://arxiv.org/abs/2006.04414v2", + "title": "Stable Adversarial Learning under Distributional Shifts", + "abstract": "Machine learning algorithms with empirical risk minimization are vulnerable\nunder distributional shifts due to the greedy adoption of all the correlations\nfound in training data. Recently, there are robust learning methods aiming at\nthis problem by minimizing the worst-case risk over an uncertainty set.\nHowever, they equally treat all covariates to form the decision sets regardless\nof the stability of their correlations with the target, resulting in the\noverwhelmingly large set and low confidence of the learner.In this paper, we\npropose Stable Adversarial Learning (SAL) algorithm that leverages\nheterogeneous data sources to construct a more practical uncertainty set and\nconduct differentiated robustness optimization, where covariates are\ndifferentiated according to the stability of their correlations with the\ntarget. We theoretically show that our method is tractable for stochastic\ngradient-based optimization and provide the performance guarantees for our\nmethod. 
Empirical studies on both simulation and real datasets validate the\neffectiveness of our method in terms of uniformly good performance across\nunknown distributional shifts.", + "authors": "Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin", + "published": "2020-06-08", + "updated": "2021-05-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Traditional machine learning algorithms which optimize the average loss often suffer from the poor generalization performance under distributional shifts induced by latent heterogeneity, unobserved confounders or selection biases in training data(Daume and Marcu 2006; Torralba and Efros 2011; Kuang et al. 2018; Shen et al. 2019). However, in high-stake applications such as medical diagnosis(Kukar 2003), criminal justice(Berk et al. 2018; Rudin and Ustun 2018) and autonomous driving (Huval et al. 2015), it is critical for the learning algorithms to ensure the robustness against potential unseen data. Therefore, robust learning methods have recently aroused much attention due to its favorable property of robustness guarantee(Ben-Tal and Nemirovski 1998; Goodfellow, Shlens, and Szegedy 2014; Madry et al. 2017). Instead of optimizing the empirical cost on training data, robust learning methods seek to optimize the worst-case cost over an uncertainty set and can be further separated into two main branches named adversarially and distributionally robust learning. In adversarially robust learning, the uncertainty set is constructed point-wisely(Goodfellow, Shlens, Copyright \u00a9 2021, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. and Szegedy 2014; Papernot et al. 2016; Madry et al. 2017; Ye and Zhu 2018). Adversarial attack is performed independently on each data point within a L2 or L\u221enorm ball around itself. 
In distributionally robust learning, on the other hand, the uncertainty set is characterized on a distributional level(Sinha, Namkoong, and Duchi 2018; Esfahani and Kuhn 2018; Duchi and Namkoong 2018). A joint perturbation, typically measured by Wasserstein distance or fdivergence, is applied to the entire distribution entailed by training data. These methods can provide robustness guarantees under distributional shifts when testing distribution is captured in the uncertainty set. However, in real scenarios, to contain the true distribution, the uncertainty set is often overwhelmingly large, which is also referred to as the over pessimism or the low con\ufb01dence problem(Frogner et al. 2019; Sagawa et al. 2019). Speci\ufb01cally, with an overwhelmingly large set, the learner optimizes for implausible worst-case scenarios, resulting in meaningless results (e.g. the classi\ufb01er assigns equal probability to all classes). Such a problem greatly hurts the generalization ability of robust learning methods in practice. The essential problem of the above methods lies in the construction of the uncertainty set. To address the over pessimism of the learning algorithm, one should form a more practical uncertainty set which is likely to contain the potential distributional shifts in the future. More speci\ufb01cally, in real applications we observe that different covariates may be perturbed in a non-uniform way, which should be considered in building a practical uncertainty set. Taking the problem of waterbirds and landbirds classi\ufb01cation as an example(Wah et al. 2011). There exist two types of covariates where the stable covariates (e.g. representing the bird itself) preserve immutable correlations with the target across different environments, while those unstable ones (e.g. representing the background) are likely to change. 
Therefore, for the example above, the construction of the uncertainty set should mainly focus on the perturbation of those unstable covariates (e.g. the background) to generate more practical and meaningful samples. Following this intuition, there are several works (Bhattad et al. 2019; Vaishnavi et al. 2019) based on adversarial attack which focus on perturbing the color or background of images to improve adversarial robustness. However, these methods mainly follow a step-by-step routine where segmentation is conducted first to separate the background from the foreground, and they cannot theoretically provide robustness guarantees under unknown distributional shifts, which limits their application to more general settings. In this paper, we propose the Stable Adversarial Learning (SAL) algorithm to address this problem in a more principled and unified way, leveraging heterogeneous data sources to construct a more practical uncertainty set. Specifically, we adopt the framework of Wasserstein distributionally robust learning (WDRL) and further characterize the uncertainty set to be anisotropic according to the stability of covariates across the multiple environments, which induces stronger adversarial perturbations on unstable covariates than on stable ones. A synergistic algorithm is designed to jointly optimize the covariate-differentiating process as well as the adversarial training process of the model's parameters. Compared with traditional robust learning techniques, the proposed method is able to provide robustness under strong distributional shifts while maintaining enough confidence of the learner. Theoretically, we prove that our method constructs a more compact uncertainty set, which, as far as we know, is the first analysis of the compactness of adversarial sets in the WDRL literature. Empirically, the advantages of our SAL algorithm are demonstrated on both synthetic and real-world datasets in terms of uniformly good performance across distributional shifts.

The SAL Method

We first introduce the Wasserstein Distributionally Robust Learning (WDRL) framework, which attempts to learn a model with minimal risk against the worst-case distribution in an uncertainty set characterized by the Wasserstein distance:

Definition 1. Let Z ⊂ R^{m+1} and Z = X × Y. Given a transportation cost function c : Z × Z → [0, ∞), which is nonnegative, lower semi-continuous and satisfies c(z, z) = 0, the Wasserstein distance between probability measures P and Q supported on Z is:

W_c(P, Q) = \inf_{M \in \Pi(P, Q)} E_{(z, z') \sim M}[c(z, z')]    (1)

where Π(P, Q) denotes the set of couplings with M(A, Z) = P(A) and M(Z, A) = Q(A) for measures M on Z × Z.

As mentioned above, the uncertainty set built in WDRL is often overwhelmingly large in wild high-dimensional scenarios. To demonstrate this over-pessimism problem of WDRL, we design a toy example to show the necessity of constructing a more practical uncertainty set. Indeed, without any prior knowledge or structural assumptions, it is quite difficult to design a practical set for robustness under distributional shifts. Therefore, we consider a more flexible setting with heterogeneous datasets D^e = {X^e, Y^e} from multiple training environments e ∈ E_tr. Specifically, each dataset D^e contains examples identically and independently distributed according to some joint distribution P^e_{XY} on X × Y. Then we come up with one basic assumption for our problem. Given the observation that, in real scenarios, different covariates have different extents of stability, we propose Assumption 1.
Assumption 1. There exists a decomposition of all the covariates X = {S, V}, where S represents the stable covariate set and V represents the unstable one, so that for all environments e ∈ E, E[Y^e | S^e = s, V^e = v] = E[Y^e | S^e = s] = E[Y | S = s].

Intuitively, Assumption 1 indicates that the correlation between the stable covariates S and the target Y stays invariant across environments, which is quite similar to the assumption in (Arjovsky et al. 2019; Kuang et al. 2020; Shen et al. 2020). Moreover, Assumption 1 also implies that the influence of V on the target Y can be wiped out as long as the whole information of S is accessible. Under Assumption 1, the disparity among covariates revealed in the heterogeneous datasets can be leveraged for a better construction of the uncertainty set. Here we propose the Stable Adversarial Learning (SAL) algorithm, which leverages heterogeneous data to build a more practical uncertainty set with covariates differentiated according to their stability. The objective function of our SAL algorithm is:

\min_{\theta \in \Theta} \sup_{Q: W_{c_w}(Q, P_0) \leq \rho} E_{X,Y \sim Q}[\ell(\theta; X, Y)]    (2)

where

c_w(z_1, z_2) = \| w \odot (z_1 - z_2) \|_2^2    (3)

and

w \in \arg\min_{w \in \mathcal{W}} \left\{ \frac{1}{|E_{tr}|} \sum_{e \in E_{tr}} L^e(\theta) + \alpha \max_{e_p, e_q \in E_{tr}} \left( L^{e_p} - L^{e_q} \right) \right\}    (4)

where P_0 denotes the training distribution, W_{c_w} denotes the Wasserstein distance with transportation cost function c_w defined in equation 3, \mathcal{W} = \{ w : w \in [1, +\infty)^{m+1} \text{ and } \min(w^{(1)}, \dots, w^{(m+1)}) = 1 \} denotes the covariate weight space (w^{(i)} denotes the i-th element of w), L^e denotes the average loss in environment e ∈ E_tr, and α is a hyper-parameter adjusting the trade-off between average performance and stability. Intuitively, w controls the perturbation level of each covariate and formulates an anisotropic uncertainty set, in contrast to conventional WDRL methods.

The objective function for w (equation 4) contains two parts, the average loss over training environments and the maximum margin, and it aims at learning a w such that the resulting uncertainty set leads to a learner with uniformly good performance across environments. Equation 2 is the objective function of the model's parameters via distributionally robust learning with the learnable covariate weight w. During training, the covariate weight w and the model's parameters θ are iteratively optimized. Details of the algorithm are delineated below: we first introduce the optimization of the model's parameters, and then the transportation cost function learning procedure.

Tractable Optimization

In the SAL algorithm, the model's parameters θ and the covariate weight w are optimized iteratively. In each iteration, given the current w, the objective function for θ is:

\min_{\theta \in \Theta} \sup_{Q: W_{c_w}(Q, P_0) \leq \rho} E_{X,Y \sim Q}[\ell(\theta; X, Y)]    (5)
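A small numeric illustration of the anisotropic cost c_w in eq. (3): the same perturbation becomes far more expensive for the adversary when it hits a covariate carrying a large weight. The weights below are hypothetical.

```python
import numpy as np

def cost_w(z1, z2, w):
    """Anisotropic transportation cost c_w(z1, z2) = ||w ⊙ (z1 - z2)||_2^2 from eq. (3)."""
    return float(np.sum((w * (z1 - z2)) ** 2))

z = np.array([1.0, 1.0])
delta = np.array([0.5, 0.0])             # a perturbation hitting only the first covariate
w_protect_first = np.array([10.0, 1.0])  # first covariate treated as stable
w_protect_second = np.array([1.0, 10.0])
expensive = cost_w(z, z + delta, w_protect_first)   # adversary is discouraged
cheap = cost_w(z, z + delta, w_protect_second)      # adversary can perturb freely
```

Under a fixed Wasserstein budget ρ, a large weight on a stable covariate shrinks the uncertainty set along that coordinate, which is exactly how SAL makes the set anisotropic.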
The objective function of w (equation 4) contains two parts: the average loss in training environments as well as the maximum margin, which aims at learning such w that the resulting uncertainty set leads to a learner with uniformly good performance across environments. Equation 2 is the objective function of model\u2019s parameters via distributionally robust learning with the learnable covariate weight w. During training, the covariate weight w and model\u2019s parameters \u03b8 are iteratively optimized. Details of the algorithm are delineated below. We \ufb01rst will introduce the optimization of model\u2019s parameter in section , then the transportation cost function learning procedure in section . Tractable Optimization In SAL algorithm, the model\u2019s parameters \u03b8 and covariate weight w is optimized iteratively. In each iteration, given current w, the objective function for \u03b8 is: min \u03b8\u2208\u0398 sup Q:Wcw (Q,P0)\u2264\u03c1 EX,Y \u223cQ[\u2113(\u03b8; X, Y )] (5) \fThe duality results in lemma 1 show that the in\ufb01nitedimensional optimization problem (5) can be reformulated as a \ufb01nite-dimensional convex optimization problem (Esfahani and Kuhn 2018). Besides, inspired by (Sinha, Namkoong, and Duchi 2018), a Lagrangian relaxation is provided for computation ef\ufb01ciency. Lemma 1 Let Z = X \u00d7 Y and \u2113: \u0398 \u00d7 Z \u2192R be continuous. For any distribution Q and any \u03c1 \u22650, let s\u03bb(\u03b8; (x, y)) = sup \u03be\u2208Z (\u2113(\u03b8; \u03be) \u2212\u03bbcw(\u03be, (x, y))), P = {Q : Wc(Q, P0) \u2264\u03c1},we have: sup Q\u2208P EQ[\u2113(\u03b8; x, y)] = inf \u03bb\u22650{\u03bb\u03c1 + EP0[s\u03bb]} (6) and for any \u03bb \u22650, we have: sup Q\u2208P {EQ[\u2113(\u03b8; (x, y))] \u2212\u03bbWcw(Q, P0)} = EP0[s\u03bb] (7) Notice that there exists only the inner supremum in EP0[s\u03bb(\u03b8; (x, y))], which can be seen as a relaxed Lagrangian penalty function of the original objective function (5). 
Here we give up the prescribed amount \u03c1 of robustness in equation (5) and focus instead on the relaxed Lagrangian penalty function for ef\ufb01ciency in equation (7). The loss function on empirical distribution \u02c6 PN becomes 1 N PN i=1 s\u03bb(\u03b8; (xi, yi)). We adopt adversarial training procedure proposed in (Sinha, Namkoong, and Duchi 2018) to approximate the supremum for s\u03bb. Speci\ufb01cally, given predictor x, we adopt gradient ascent to obtain an approximate maximizer \u02c6 x of {\u2113(\u03b8; (\u02c6 x, y)) \u2212\u03bbcw(\u02c6 x, x)} and optimize the model\u2019s parameter \u03b8 using \u02c6 x as: \u02c6 L = 1 N PN i=1 \u2113(\u03b8; \u02c6 x, y). In the following parts, we simply use XA to denote {\u02c6 x}N, which means the set of maximizers for training data {x}N. The convergence guarantee for this optimization can be referred to (Sinha, Namkoong, and Duchi 2018). Learning for Transportation Cost Function We introduce the learning for transportation cost function cw in this section. In supervised scenarios, perturbations are typically only added to predictor X and not target Y . Therefore, we simplify cw : Z \u00d7 Z \u2192[0, +\u221e)(Z = X \u00d7 Y) to be: cw(z1, z2) = cw(x1, x2) + \u221e\u00d7 I(y1 \u0338= y2) (8) = \u2225w \u2299(x1 \u2212x2)\u22252 2 + \u221e\u00d7 I(y1 \u0338= y2) (9) and omit \u2019y-part\u2019 in cw as well as w, that is w \u2208[1, +\u221e)m in the following parts. Intuitively, w controls the strength of adversary put on each covariate. The higher the weight is, the weaker perturbation is put on the corresponding covariate. Ideally, we hope the covariate weights on stable covariates are extremely high to protect them from being perturbed and to maintain the stable correlations, while weights on unstable covariates are nearly 1 to encourage perturbations for breaking the harmful spurious correlations. 
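The inner supremum in s_lambda is approximated by gradient ascent on ℓ(θ; x̂, y) − λ c_w(x̂, x). The sketch below uses a squared-error loss for ℓ; the step size, iteration count, and loss choice are illustrative assumptions, not the paper's settings:

```python
def adversarial_example(theta, x, y, w, lam, steps=50, lr=0.001):
    # Gradient ascent on  l(theta; xh, y) - lam * ||w . (xh - x)||^2
    # with squared loss l = (theta^T xh - y)^2 (illustrative choice).
    xh = list(x)
    for _ in range(steps):
        resid = sum(t * v for t, v in zip(theta, xh)) - y
        for i in range(len(xh)):
            # d/dxh_i of the penalised objective: loss term minus the
            # anisotropic transportation-cost term.
            grad = 2.0 * resid * theta[i] - 2.0 * lam * w[i] ** 2 * (xh[i] - x[i])
            xh[i] += lr * grad
    return xh
```

As intended by the cost function c_w, a large weight w_i pins coordinate i near its observed value (a "stable" covariate), while coordinates with w_i = 1 are perturbed much more freely.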
With the goal towards uniformly good performance across environments, we come up with the objective function R(\u03b8(w)) for learning w as: R(\u03b8(w)) = 1 |Etr| X e\u2208Etr Le(\u03b8(w)) + \u03b1 max ep,eq\u2208Etr (Lep \u2212Leq) (10) where \u03b1 is the hyper-parameter. R(\u03b8(w)) contains two parts: the \ufb01rst is the average loss in multiple training environments; the second re\ufb02ects the max margin among environments, which re\ufb02ects the stability of \u03b8(w), since it is easy to prove that max ep,eq\u2208Etr Lep(\u03b8(w)) \u2212Leq(\u03b8(w)) = 0 if and only if the errors among all training environments are same. Here \u03b1 is used to adjust the tradeoff between average performance and stability. In order to optimize w, \u2202R(\u03b8(w))/\u2202w can be approximated as following. \u2202R(\u03b8(w)) \u2202w = \u2202R \u2202\u03b8 \u2202\u03b8 \u2202XA \u2202XA \u2202w (11) Note that the \ufb01rst term \u2202R/\u2202\u03b8 can be calculated easily. The second term can be approximated during the gradient descent process of \u03b8 as : \u2202\u03b8 \u2202XA \u2248\u2212\u03f5 X t \u2207\u03b8 \u02c6 L(\u03b8t; XA, Y ) \u2202XA (12) where \u2207\u03b8 \u02c6 L(\u03b8t;XA,Y ) \u2202XA can be calculated during the training process. The third term \u2202XA/\u2202w can be approximated during the adversarial learning process of XA as: \u2202XA \u2202w \u2248\u22122\u03f5x\u03bb X t Diag(Xt A \u2212X) (13) which can be accumulated during the adversarial training process. Then given current \u03b8, we can update w as: wt+1 = ProjW \u0012 wt \u2212\u03f5w \u2202R(\u03b8t) \u2202w \u0013 (14) where ProjW means projecting onto the space W. Theoretical Analysis Here we \ufb01rst provide the robustness guarantee for our method, and then we analyze the rationality of our uncertainty set, which also demonstrates the uncertainty set built in our SAL is more practical. 
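The update w_{t+1} = Proj_W(w_t − ε_w ∂R/∂w) requires mapping a candidate weight vector back into W = {w : w_i ≥ 1, min_i w_i = 1}. One simple way to enforce both constraints (an illustrative choice, not necessarily the exact Euclidean projection) is:

```python
def project_W(w):
    # Enforce W = { w : w_i >= 1 and min_i w_i = 1 }:
    # first clip below at 1, then shift all coordinates down so the
    # smallest one equals exactly 1.
    w = [max(wi, 1.0) for wi in w]
    m = min(w)
    return [wi - (m - 1.0) for wi in w]
```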
First, we provide the robustness guarantee in theorem 1 with the help of lemma 1 and Rademacher complexity(Bartlett and Mendelson 2002). Theorem 1 Let \u0398 = Rm, x \u2208X, y \u2208Y. Assume |\u2113(\u03b8; z)| is bounded by T\u2113\u22650 for all \u03b8 \u2208\u0398, z = (x, y) \u2208X \u00d7 Y. Let F : X \u2192Y be a class of prediction functions, then for \u03b8 \u2208\u0398, \u03c1 \u22650, \u03bb \u22650, with probability at least 1 \u2212\u03b4, for P \u2208{P : Wcw(P, P0) \u2264\u03c1}, we have: sup P EP [\u2113(\u03b8; Z)] \u2264\u03bb\u03c1+E \u02c6 Pn [s\u03bb(\u03b8; Z)]+Rn(e \u2113\u25e6F)+kT\u2113 r ln(1/\u03b4) n (15) Specially, let M(\u03b8; z0) = arg min z\u2208Z {s\u03bb(\u03b8; z0)} when \u02c6 \u03c1n(\u03b8) = E \u02c6 Pn [cw(M(\u03b8; Z), Z)], for P \u2208 {P : Wcw(P, P0) \u2264\u02c6 \u03c1n(\u03b8)}, sup P EP [\u2113(\u03b8; Z)] = sup P EP [\u2113(\u03b8; Z)]+Rn(e \u2113\u25e6F)+kT\u2113 r ln(1/\u03b4) n (16) with probability at least 1 \u2212\u03b4, where e \u2113\u25e6F = {(x, y) 7\u2192 \u2113(f(x), y) \u2212\u2113(0, y) : f \u2208 F} and Rn denotes the Rademacher complexity(Bartlett and Mendelson 2002) and k is a numerical constant no less than 0. \fTheorem 1 is the standard result on Rademacher complexity as in previous distributionally robust optimization literature. It proves our empirical loss given by our optimization method can control the original worst-case cost of the uncertainty set in SAL. Then we analyze the rationality of our method in theorem 2, where our major theoretical contribution lies on. As far as we know, it is the \ufb01rst analysis of the compactness of adversary sets in WDRL literature. Assumption 2 Given \u03c1 > 0, \u2203Q0 \u2208P0 that satis\ufb01es: (1) \u2200\u03f5 > 0, \f \f \f \f inf M\u2208\u03a0(P0,Q0) E(z1,z2\u223cM) [c(z1, z2)] \f \f \f \f \u2264\u03f5, we refer to the couple minimizing the expectation as M0. 
(2) EM\u2208\u03a0(P0,Q0)\u2212M0 [c(z1, z2)] \u2265\u03c1, where \u03a0(P0, Q0)\u2212 M0 means excluding M0 from \u03a0(P0, Q0). (3) Q0#S \u0338= P0#S, where S = {i : w(i) > 1} and w(i) denotes the ith element of w and P#S denotes the marginal distribution on dimensions S. Assumption 2 describes the boundary property of the original uncertainty set P0 = {Q : Wc(Q, Po) \u2264\u03c1}, which assumes that there exists at least one distribution on the boundary whose marginal distribution on S is not the same as the center distribution P0\u2019s and is easily satis\ufb01ed. Based on this assumption, we come up with the following theorem. Theorem 2 Under assumption 2, assume the transportation cost function in Wasserstein distance takes form of c(x1, x2) = \u2225x1 \u2212x2\u22251 or c(x1, x2) = \u2225x1 \u2212x2\u22252 2. Then, given observed distribution P0 supported on Z and \u03c1 \u22650, for the adversary set P = {Q : Wcw(Q, P0) \u2264\u03c1} and the original P0 = {Q : Wc(Q, P0) \u2264\u03c1}, given cw where min(w(1), . . . , w(m)) = 1 and max(w(1), . . . , w(m)) > 1, we have P \u2282P0. Furthermore, for the set U = {i|w(i) = 1}, \u2203Q0 \u2208P that satis\ufb01es Wcw(P0#U, Q0#U) = \u03c1. Theorem 2 proves that the constructed uncertainty set of our method is smaller than the original. Intuitively, in adversarial learning paradigm, if stable covariates are perturbed, the target should also change correspondingly to maintain the underlying relationship. However, we have no access to the target value corresponding to the perturbed stable covariates in practice, so optimizing under an isotropic uncertainty set (e.g. P0) which contains perturbations on both stable and unstable covariates would generally lower the con\ufb01dence of the learner and produce meaningless results. 
Therefore, from this point of view, by adding high weights on stable covariates in the cost function, we may construct a more reasonable and practical uncertainty set in which the ineffective perturbations are avoided. Experiments In this section, we validate the effectiveness of our method on simulation data and real-world data. Baselines We compare our proposed SAL with the following methods. • Empirical Risk Minimization (ERM): min_θ E_{P0}[ℓ(θ; X, Y)] • Wasserstein Distributionally Robust Learning (WDRL): min_θ sup_{Q: Wc(Q, P0) ≤ ρ} E_Q[ℓ(θ; X, Y)] • Invariant Risk Minimization (IRM (Arjovsky et al. 2019)): min_θ Σ_{e∈E} L^e + λ ∥∇_w|_{w=1.0} L^e(w · θ)∥² For ERM and WDRL, we simply pool the data from the multiple environments for training. For fairness, we search the hyper-parameter λ in {0.01, 0.1, ..., 1e0, 1e1, ..., 1e4} for IRM and the hyper-parameter ρ in {1, 5, 10, 20, 50, 80, 100} for WDRL, and select the best hyper-parameter according to validation performance. Evaluation Metrics To evaluate the prediction performance, we use Mean Error, defined as Mean Error = (1/|Ete|) Σ_{e∈Ete} L^e, and Std Error, defined as Std Error = sqrt((1/(|Ete| − 1)) Σ_{e∈Ete} (L^e − Mean Error)²), which are the mean and the standard deviation of the error across testing environments e ∈ Ete. Imbalanced Mixture In our experiments, we perform non-uniform sampling among the different environments in the training set, following the natural phenomenon that empirical data follow a power-law distribution: it is widely accepted that only a few environments/subgroups are common and the remaining majority are rare (Shen et al. 2018; Sagawa et al. 2019, 2020). Simulation Data Firstly, we design one toy example to demonstrate the over-pessimism problem of conventional WDRL.
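The two evaluation metrics can be computed directly from the per-environment losses; a minimal sketch, using the sample standard deviation with the |Ete| − 1 denominator as in the definition:

```python
def mean_std_error(env_losses):
    # Mean Error and Std Error across testing environments e in Ete.
    n = len(env_losses)
    mean = sum(env_losses) / n
    var = sum((L - mean) ** 2 for L in env_losses) / (n - 1)
    return mean, var ** 0.5
```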
Then, we design two mechanisms to simulate the varying correlations of unstable covariates across environments, named by selection bias and anti-causal effect. Toy Example In this setting, we have Y = 5 \u2217S + S2 + \u03f5, V = \u03b1Y + \u03f5, where the effect of S on Y stays invariant, but the correlation between V and Y , i.e. the parameter \u03b1, varies across environments. In training, we generate 180 data points with \u03b1 = 1 for environment 1 and 20 data points with \u03b1 = \u22120.1 for environment 2. We compared methods for linear regression across testing environments with \u03b1 \u2208{\u22122.0, \u22121.5, . . . , 1.5, 2.0}. We \ufb01rst set the radius for WDRL and SAL to be 20.0, and the results are shown in Figure 1(a). We \ufb01nd the ERM induces high estimation error as it puts high regression coef\ufb01cient on V . Therefore, it performs poor in terms of prediction error when there are distribution shifts. While WDRL achieves more robust performance than ERM across environments, the prediction error is much higher than the others. Our method SAL achieves not only the smallest prediction error, but also the most robust performance across environments. Furthermore, we train SAL and WDRL for linear regression with a varying radius \u03c1 \u2208{0.0, 0.01, . . . , 20.0}. From the results shown in Figure 1(b), we can see that, with the radius growing larger, the robustness of WDRL becomes better, but meanwhile, its performance maintains poor in terms of high Mean Error and much worse than ERM (\u03c1 = 0). This further veri\ufb01es the limitation of WDRL with respect to \f(a) Testing performance for each environment. (b) Testing performance with respect to radius (c) The learned coef\ufb01cient value of S and V with respect to radius Figure 1: Results of the toy example. The left \ufb01gure shows the testing performance in different environments under \ufb01xed radius, where RMSE is root mean square error for the prediction. 
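The toy example's data-generating process can be sketched as follows. The Gaussian distribution of S and the noise scale are illustrative assumptions; the text specifies only Y = 5S + S² + ε and V = αY + ε, with α varying across environments:

```python
import random

def toy_env(n, alpha, seed=0):
    # Generate n points from one environment:
    #   Y = 5*S + S^2 + eps   (stable relation, invariant across environments)
    #   V = alpha*Y + eps     (unstable relation, alpha varies by environment)
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        s = rng.gauss(0.0, 1.0)
        y = 5.0 * s + s * s + rng.gauss(0.0, 0.1)
        v = alpha * y + rng.gauss(0.0, 0.1)
        data.append(((s, v), y))
    return data
```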
The middle and right denotes the prediction error and the learned coef\ufb01cients of WDRL and SAL with respect to radius respectively. the overwhelmingly-large adversary distribution set. In contrast, SAL achieves not only better prediction performance but also better robustness across environments. The plausible reason for the performance difference between WDRL and SAL can be explained by Figure 1(c). As the radius \u03c1 grows larger, WDRL tends to conservatively estimate small coef\ufb01cients for both S and V so that the model can produce robust prediction performances over the overwhelminglylarge uncertainty set. Comparatively, as our SAL provides a mechanism to differentiate covariates and focus on the robustness optimization over unstable ones, the learned coef\ufb01cient of unstable covariate V is gradually decreased to improve robustness, while the coef\ufb01cient of stable covariate S does not change much to guarantee high prediction accuracy. Selection Bias In this setting, the correlations between unstable covariates and the target are perturbed through selection bias mechanism. According to assumption 1, we assume X = [S, V ]T and Y = f(S) + \u03f5 and P(Y |S) remains invariant across environments while P(Y |V ) can arbitrarily change. For simplicity, we select data points according to a certain unstable covariate v0. \u02c6 P(x) = |r|\u22125\u2217|f(s)\u2212sign(r)\u2217v0| (17) where |r| > 1 and \u02c6 P(x) denotes the probability of point x to be selected. Intuitively, r eventually controls the strengths and direction of the spurious correlation between v0 and Y (i.e. if r > 0, a data point whose v0 is close to its y is more probably to be selected.). The larger value of |r| means the stronger spurious correlation between v0 and Y , and r \u22650 means positive correlation and vice versa. Therefore, here we use r to de\ufb01ne different environments. 
In training, we generate n data points, where \u03ban points from environment e1 with a prede\ufb01ned r and (1\u2212\u03ba)n points from e2 with r = \u22121.1. In testing, we generate data points for 10 environments with r \u2208[\u22123, \u22122, \u22121.7, . . . , 1.7, 2, 3]. \u03b2 is set to 1.0. We compare our SAL with ERM, IRM and WDRL for Linear Regression. We conduct extensive experiments with different settings on r, n, and \u03ba. In each setting, we carry out the procedure 15 times and report the average results. The results are shown in Table 1. From the results, we have the following observations and analysis: ERM suffers from the distributional shifts in testing and yields poor performance in most of the settings. Compared with ERM, the other three robust learning methods achieve better average performance due to the consideration of robustness during the training process. When the distributional shift becomes serious as r grows, WDRL suffers from the overwhelmingly-large distribution set and performs poorly in terms of prediction error, which is consistent with our analysis. IRM has stable performances across testing environments, while its average error is higher than SAL, which reveals that IRM may harm the average performance for stability. Compared with other robust learning baselines, our SAL achieves nearly perfect performance with respect to average performance and stability, especially the variance of losses across environments close to 0, which re\ufb02ects the effectiveness of assigning different weights to covariates for constructing the uncertainty set. Anti-causal Effect Inspired by (Arjovsky et al. 2019), in this setting, we introduce the spurious correlation by using anti-causal relationship from the target Y to the unstable covariates V . 
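The selection-bias mechanism of equation (17) assigns each point a selection probability; a direct transcription, with f(s) supplied as a precomputed value:

```python
def select_prob(r, f_s, v0):
    # P(select x) = |r|^(-5 * |f(s) - sign(r) * v0|), with |r| > 1.
    # r > 0 favours points whose v0 tracks f(s) (positive spurious
    # correlation); r < 0 favours the opposite.
    sign = 1.0 if r > 0 else -1.0
    return abs(r) ** (-5.0 * abs(f_s - sign * v0))
```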
In this experiment, we assume X = [S, V ]T , and \ufb01rstly sample S from mixture Gaussian distribution characterized as Pk i=1 zkN(\u00b5i, I) and the target Y = \u03b8T s S + \u03b2S1S2S3 + N(0, 0.3). Then the unstable covariates V are generated by anti-causal effect from Y as V = \u03b8vY + N(0, \u03c3(\u00b5i)2) (18) where \u03c3(\u00b5i) means the Gaussian noise added to V depends on which component the stable covariates S belong to. Intuitively, in different Gaussian components, the corresponding correlations between V and Y are varying due to the different value of \u03c3(\u00b5i). The larger the \u03c3(\u00b5i) is, the weaker correlation between V and Y . We use the mixture weight Z = [z1, . . . , zk]T to de\ufb01ne different environments, where different mixture weights represent different overall strength of the effect Y on V . \fScenario 1: varying selection bias rate r (n = 2000, p = 10, \u03ba = 0.95) r r = 1.5 r = 1.7 r = 2.0 Methods Mean Error Std Error Mean Error Std Error Mean Error Std Error ERM 0.484 0.058 0.561 0.124 0.572 0.140 WDRL 0.482 0.044 0.550 0.114 0.532 0.112 IRM 0.475 0.014 0.464 0.015 0.477 0.015 SAL 0.450 0.019 0.449 0.015 0.452 0.017 Scenario 2: varying ratio \u03ba and sample size n (p = 10, r = 1.7) \u03ba, n \u03ba = 0.90, n = 500 \u03ba = 0.90, n = 1000 \u03ba = 0.975, n = 4000 Methods Mean Error Std Error Mean Error Std Error Mean Error Std Error ERM 0.580 0.103 0.562 0.113 0.555 0.110 WDRL 0.563 0.101 0.527 0.083 0.536 0.108 IRM 0.460 0.014 0.464 0.015 0.459 0.014 SAL 0.454 0.015 0.451 0.015 0.448 0.014 Table 1: Results in selection bias simulation experiments of different methods with varying selection bias r, ratio \u03ba and sample size n of training data, and each result is averaged over ten times runs. In this experiment, we set \u03b2 = 0.1 and build 10 environments with varying \u03c3 and the dimension of S, V , the \ufb01rst three for training and the last seven for testing. 
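The anti-causal generation of the unstable covariates in equation (18) can be sketched as follows (σ(μ_i), the noise scale tied to the Gaussian component of S, is passed in as a number):

```python
import random

def anticausal_v(y, theta_v, sigma_mu, rng):
    # V = theta_v * Y + N(0, sigma(mu_i)^2): the noise scale depends on
    # which Gaussian component S came from, so corr(V, Y) weakens as
    # sigma(mu_i) grows.
    return [t * y + rng.gauss(0.0, sigma_mu) for t in theta_v]
```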
The average prediction errors are shown in Table 2, where the \ufb01rst three environments are used for training and the last seven are not captured in training with weaker correlation between V and Y . ERM and IRM achieve the best training performance with respect to their prediction errors on training environments e1, e2, e3, while their performances in testing are poor. WDRL performs worst due to its over pessimism problem. SAL achieves nearly uniformly good performance in training environments as well as the testing ones, which validates the effectiveness of our method and proves the excellent generalization ability of SAL. Real Data Regression In this experiment, we use a real-world regression dataset (Kaggle) of house sales prices from King County, USA, which includes the houses sold between May 2014 and May 2015 1. The target variable is the transaction price of the house and each sample contains 17 predictive variables such as the built year of the house, number of bedrooms, and square footage of home, etc. We normalize all the predictive covariates to get rid of the in\ufb02uence by their original scales. To test the stability of different algorithms, we simulate different environments according to the built year of the house. It is fairly reasonable to assume the correlations between parts of the covariates and the target may vary along time, due to the changing popular style of architecture. Speci\ufb01cally, the houses in this dataset were built between 1900 \u223c 2015 and we split the dataset into 6 periods, where each period approximately covers a time span of two decades. In training, we train all methods on the \ufb01rst and second decade where built year \u2208 [1900, 1910) and [1910, 1920) respectively and validate on 100 data points sampled from the second period. 
1https://www.kaggle.com/c/house-prices-advanced-regressiontechniques/data From the results shown in \ufb01gure 2(a), we can \ufb01nd that SAL achieves not only the smallest Mean Error but also the lowest Std Error compared with baselines. From \ufb01gure 2(b), we can \ufb01nd that from period 4 and so on, where large distribution shifts occurs, ERM performs poorly and has larger prediction errors. IRM performs stably across the \ufb01rst 4 environments but it also fails on the last two, whose distributional shifts are stronger. WDRL maintains stable across environments while the mean error is high, which is consistent with our analysis in that WDRL equally perturbs all covariates and sacri\ufb01ces accuracy for robustness. From \ufb01gure 2(b), we can \ufb01nd that from period 3 and so on, SAL performs better than ERM, IRM and WDRL, especially when distributional shifts are large. In periods 1-2 with slight distributional shift, the SAL method incurs a performance drop compared with IRM and WDRL, while SAL performs much better when larger distributional shifts occur, which is consistent with our intuition that our method sacri\ufb01ce a little performance in nearly I.I.D. setting for its superior robustness under unknown distribution shifts. Classi\ufb01cation Finally, we validate the effectiveness of our SAL on classi\ufb01cation tasks, including an income prediction task and colored MNIST classi\ufb01cation task. Income Prediction In this task we use the Adult dataset(Dua and Graff 2017) which involves predicting personal income levels as above or below $50,000 per year based on personal details. We split the dataset into 10 environments according to demographic attributes, among which distributional shifts might exist. In training phase, we train all methods on 693 data points from environment 1 and 200 points from the second respectively and validate on 100 points sampled from both. 
We normalize all the predictive covariates to get rid of the in\ufb02uence by their original scales. In testing phase, we test all methods on the 10 environments and report the mis-classi\ufb01cation rate on all environments in \ufb01gure 3. From the results shown in \ufb01gure 3, we can \ufb01nd that the SAL outperforms baselines on almost all environments except a slight drop on the \ufb01rst. However, our SAL outperforms the others in the rest 8 environments where agnostic distributional shifts occur. \fScenario 1: S \u2208R5, V \u2208R5 e Training environments Testing environments Methods e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 ERM 0.281 0.305 0.341 0.461 0.555 0.636 0.703 0.733 0.765 0.824 IRM 0.287 0.293 0.329 0.345 0.382 0.420 0.444 0.461 0.478 0.504 WDRL 0.282 0.331 0.399 0.599 0.750 0.875 0.983 1.030 1.072 1.165 SAL 0.324 0.329 0.331 0.357 0.380 0.403 0.425 0.435 0.446 0.458 Scenario 2: S \u2208R9, V \u2208R1 e Training environments Testing environments Methods e1 e2 e3 e4 e5 e6 e7 e8 e9 e10 ERM 0.272 0.278 0.298 0.362 0.411 0.460 0.504 0.526 0.534 0.580 IRM 0.306 0.312 0.325 0.328 0.343 0.358 0.365 0.374 0.377 0.394 WDRL 0.300 0.314 0.332 0.396 0.441 0.483 0.529 0.545 0.555 0.596 SAL 0.290 0.284 0.288 0.287 0.288 0.287 0.290 0.284 0.293 0.294 Table 2: Results of the anti-causal effect experiment. The average prediction errors of 15 runs are reported. (a) Mean Error and Std Error. (b) Prediction error with respect to build year. Figure 2: Results of the real regression dataset. RMSE refers to the Root Mean Square Error. Figure 3: Results of the Adult dataset. Colored MNIST In this task we build a synthetic binary classi\ufb01cation task derived from MNIST. The goal is to predict a binary label assigned to each image based on the digit. We color each image either red or green which spuriously correlates with the label similar to (Arjovsky et al. 2019). 
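The colored-MNIST construction can be sketched as follows. This is a hypothetical transcription: the handling of flip probabilities μ outside [0, 1] (μ = 1.0 and μ = −1.0 appear in the text) and the ordering of the label-noise flip and the color flip are assumptions:

```python
import random

def color_label(digit_label, mu, rng):
    # Binary label with 20% label noise, plus a color that spuriously
    # correlates with the noisy label; the color is flipped with
    # probability mu, which differs per environment.
    y = digit_label if rng.random() >= 0.2 else 1 - digit_label
    color = y if rng.random() >= mu else 1 - y
    return color, y
```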
The direction of the correlation is reversed in the testing environment, which ruins the method relying on such spurious correlation to predict. Speci\ufb01cally, we generate the color id by \ufb02ipping the label with probability \u00b5, where \u00b5 = 1.0 in the \ufb01rst environment, \u00b5 = 0.3 in the second and \u00b5 = \u22121.0 in testing. Furthermore, we induce noisy labels by randomly \ufb02ipping the label with probability 0.2. In this experiment, we consider the imbalanced mixture which is a more challenging and practical problem. Speci\ufb01cally, we sample 20000 images from environment 1 and 500 from 2 as training data and 10000 images from environment 3 for testing. For our SAL and WDRL, we conduct a twostage optimization which \ufb01rstly uses a three-layer CNN to extract the representation of 128 dimensions as the input covariates. For ERM and IRM, we use the same architecture and do the end-to-end optimization. We select the hyperparameters according to the performance on the validation set sampled from training environments. From the results in Table 3, ERM performs terribly because of the spurious correlations and IRM and WDRL are closed to random guess. Our SAL outperforms all baselines, which shows that our method can handle more complicated data such as vision and lingual data with a feature extractor(e.g. deep neural network). Algorithm ERM WDRL IRM SAL Random Test Acc 0.085 0.48 0.51 0.57 0.50 Table 3: Results of the colored MNIST experiment. We report the average results of 10 runs. Conclusion In this paper, we address a practical problem of overwhelmingly-large uncertainty set in robust learning, which often results in unsatisfactory performance under distributional shifts in real situations. We propose the Stable Adversarial Learning (SAL) algorithm that anisotropically considers each covariate to achieve more realistic robustness. We theoretically show that our method constructs a better uncertainty set. 
Empirical studies validate the effectiveness of our methods in terms of uniformly good performance across different distributed data. We temporarily focus our method at raw feature level for solid theoretical guarantees, and we leave the extension of combining representation learning into our framework as the future work. \fAcknowledgements This work was supported in part by National Key R&D Program of China (No. 2018AAA0102004), National Natural Science Foundation of China (No. U1936219, 61772304, 61531006, U1611461), Beijing Academy of Arti\ufb01cial Intelligence (BAAI ), and a grant from the Institute for Guo Qiang, Tsinghua University. Kun Kuang\u2019s research was supported in part by National Natural Science Foundation of China (No. 62006207), National Key Research and Development Program of China (No. 2018AAA0101900), the Fundamental Research Funds for the Central Universities. Bo Li\u2019s research was supported by the Tsinghua University Initiative Scienti\ufb01c Research Grant, No. 2019THZWJC11; National Natural Science Foundation of China, No. 71490723 and No. 71432004; Science Foundation of Ministry of Education of China, No. 16JJD630006." + } + ], + "Hang Su": [ + { + "url": "http://arxiv.org/abs/2310.12651v2", + "title": "Accelerating the force-coupling method for hydrodynamic interactions in periodic domains", + "abstract": "The efficient simulation of fluid-structure interactions at zero Reynolds\nnumber requires the use of fast summation techniques in order to rapidly\ncompute the long-ranged hydrodynamic interactions between the structures. One\napproach for periodic domains involves utilising a compact or exponentially\ndecaying kernel function to spread the force on the structure to a regular grid\nwhere the resulting flow and interactions can be computed efficiently using an\nFFT-based solver. 
A limitation to this approach is that the grid spacing must\nbe chosen to resolve the kernel and thus, these methods can become inefficient\nwhen the separation between the structures is large compared to the kernel\nwidth. In this paper, we address this issue for the force-coupling method (FCM)\nby introducing a modified kernel that can be resolved on a much coarser grid,\nand subsequently correcting the resulting interactions in a pairwise fashion.\nThe modified kernel is constructed to ensure rapid convergence to the exact\nhydrodynamic interactions and a positive-splitting of the associated mobility\nmatrix. We provide a detailed computational study of the methodology and\nestablish the optimal choice of the modified kernel width, which we show plays\na similar role to the splitting parameter in Ewald summation. Finally, we\nperform example simulations of rod sedimentation and active filament\ncoordination to demonstrate the performance of fast FCM in application.", + "authors": "Hang Su, Eric E Keaveny", + "published": "2023-10-19", + "updated": "2024-03-04", + "primary_cat": "physics.flu-dyn", + "cats": [ + "physics.flu-dyn", + "math-ph", + "math.MP" + ], + "main_content": "Introduction Micron-scale fluid-structure interactions are present throughout a range of industrial processes involving colloidal particles and suspensions, as well as natural processes including those in cell-level biology. Notable examples from biology include the movement of fluid driven by flagella and cilia [1, 2, 3], the flexible and motile hair-like organelles protruding from cell surfaces, and the dynamical rearrangement of filament-motor protein complexes in the cell interior [4]. 
Many interesting examples arise also in engineering applications, including the rheological changes exhibited by flowing suspensions of particles and fibres [5, 6, 7, 8], the self-assembly of structures in more advanced materials such as magnetorheological fluids [9], and the motion and interactions of exotic colloidal particles such as nanomotors and other types of phoretic particles [10]. A unifying feature of these systems is the presence of long-ranged hydrodynamic interactions that couple the motion of structures that are present. For microscopic objects moving through viscous fluids, viscous forces typically dominate over inertia resulting in the fluid motion being governed by the linear and steady Stokes equations. This, along with the negligible effect of structure inertia, confers a linear relationship between the forces and velocities governing the motion of the structures. The matrix relating the velocities and forces is called the mobility matrix. While the relationship is linear, the mobility matrix is itself configuration dependent and dense due to the slow decay of the fluid velocity fields generated by the forced structures. In direct implementations of methods for particulate suspensions such as Brownian [11, 12] and Stokesian dynamics [13], the application of the mobility matrix is performed by pairwise summation \u2013 an O(N2) calculation where N is the number of degrees of freedom. This scaling for the computational cost is also present in direct implementations of Preprint submitted to Journal of Computational Physics March 5, 2024 arXiv:2310.12651v2 [physics.flu-dyn] 4 Mar 2024 \fother approaches such as the boundary element method [14], the rigid multiblob method [15], or the method of regularised Stokeslets [16, 17], where multiple degrees of freedom are associated with each structure. 
Thus, for simulations involving many degrees of freedom, computational costs quickly grow prohibitive and approaches that circumvent pairwise summation need to be considered. Reducing the computational cost often requires taking advantage of fast summation techniques, such as the fast multipole method [18, 19], that provide the action of the mobility matrix without ever computing the mobility matrix directly. For periodic domains, such methods can be constructed around the fast Fourier transform (FFT). In this context, there are two related approaches. The first stems directly from classical Ewald summation [20] where the application of the inverse Stokes operator is split between sums in real and Fourier space. By introducing an appropriately chosen splitting function, rapid convergence of both the sums can be ensured. The sum in real space can be interpreted as a local, pairwise correction to the changes to the mobility matrix introduced by the splitting function. In terms of computation, the sum in Fourier space can be performed for all degrees of freedom simultaneously, while the pairwise correction only needs to be performed for degrees of freedom in close proximity. This approach has been applied successfully for the evaluation of the interactions based on point-forces (Stokeslets) [21], as well as the Rotne Prager Yamakawa (RPY) tensor in the positively split Ewald (PSE) method [22] and incorporated into accelerated [23] and fast [24] Stokesian Dynamics. The second approach is to instead consider regular, localised force distributions that can be evaluated on a grid where the Stokes equations can be solved using an FFT-based method. This is done, for example with the immersed boundary method (IBM) [25, 26] and the force-coupling method (FCM) [27, 28, 29] with a main difference between the two being the particular choice of function, or kernel, used to transfer the structure force to the grid. 
Additionally, with these methods, structure velocities are obtained by using the same kernels to interpolate, or volume average, the resulting fluid flow. The result is a positive definite mobility matrix and a proper energy balance with the viscous dissipation in the surrounding fluid. As discussed in [22], a limitation of this approach is that the grid must be chosen to provide sufficient resolution of the kernel. As a result, computations can become expensive in cases where the separation of the structures is large compared to the kernel width. To emphasise this point, [22] showed that PSE utilises a maximum wavenumber approximately 3 times smaller than that needed for FCM. In this paper, we develop a fast implementation of FCM which alleviates this limitation by eliminating the need for the grid to resolve the FCM kernel. We accomplish this by substituting the FCM kernel with a modified kernel of larger width, and correcting the resulting particle velocities to ensure errors are below a user-specified tolerance. This extends similar ideas developed in [12, 30] for Stokeslet interactions, where regularised forces in real space, rather than an Ewald splitting function in Fourier space, are used to formulate the Ewald summation. Our modified kernel is carefully chosen to ensure that the resulting pairwise mobility converges exponentially to the standard FCM mobility as the particle separation distance increases. Additionally, the specific choice of modified kernel ensures positive splitting of the force-velocity mobility matrix, resulting in SPD matrices for both the modified mobility matrix and the local pairwise correction matrix. We also extend fast FCM to allow for torques and rotations. We present a GPU-based implementation of fast FCM and perform a number of tests to demonstrate how parameters can be tuned to optimise computational performance.
In doing so, we find that in many cases fast FCM can be an order of magnitude faster than the standard FCM computation. We provide example simulations of rod sedimentation and active filament dynamics that demonstrate the effectiveness of fast FCM in application and as part of larger computations.

2. Force-coupling method: the force-velocity mobility matrix

We begin by reviewing FCM [27] for the motion of $N$ particles of hydrodynamic radius $a$ through a fluid with viscosity $\eta$ whose flow is governed by the Stokes equations. Each particle $n = 1, \dots, N$ is centred at $\mathbf{Y}_n$ and exerts the force $\mathbf{F}_n$ on the fluid. The fluid and particles occupy the domain $\Omega$. The force on each particle $n$ is transferred to the fluid using a Gaussian distribution, or kernel,
$$\Delta_n(\mathbf{x};\sigma) = (2\pi\sigma^2)^{-3/2} \exp\left(-\|\mathbf{x} - \mathbf{Y}_n\|^2 / 2\sigma^2\right). \quad (1)$$
Accordingly, the Stokes system for the resulting fluid flow $\mathbf{u}(\mathbf{x})$ and pressure $p(\mathbf{x})$ is
$$-\eta \nabla^2 \mathbf{u} + \nabla p = \mathcal{J}^{\dagger}[\mathbf{F}], \qquad \nabla \cdot \mathbf{u} = 0, \quad (2)$$
for $\mathbf{x} \in \Omega$, where $\mathbf{F} = [\mathbf{F}_1^{\dagger}, \mathbf{F}_2^{\dagger}, \dots, \mathbf{F}_N^{\dagger}]^{\dagger}$ is the $3N \times 1$ vector containing the components of the forces on all particles and $\mathcal{J}^{\dagger}[\cdot]$ is the spreading operator that transfers the forces on all particles to the fluid such that
$$\mathcal{J}^{\dagger}[\mathbf{F}] = \sum_{n=1}^{N} \mathbf{F}_n \Delta_n(\mathbf{x};\sigma). \quad (3)$$
The motion of the particles is determined using the kernel to locally average $\mathbf{u}(\mathbf{x})$. The velocity of particle $m$ is then given by
$$\mathbf{V}_m = (\mathcal{J}[\mathbf{u}])_m = \int_{\Omega} \mathbf{u}(\mathbf{x}) \Delta_m(\mathbf{x};\sigma) \, d^3\mathbf{x}, \quad (4)$$
recognising also that the interpolation operator $\mathcal{J}$ and the spreading operator are adjoints. The kernel width, $\sigma$, sets the hydrodynamic radius of the particles via $a = \sigma\sqrt{\pi}$. This particular choice of $a$ recovers the Stokes drag law for a single particle in an unbounded domain [27].
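As a quick numerical check of the kernel (1), the sketch below evaluates $\Delta_n$ on a uniform grid and verifies that it integrates to one under the simple quadrature used for spreading and interpolation. The grid dimensions and kernel width here are illustrative choices of our own:

```python
import numpy as np

def gaussian_kernel(X, Yn, sigma):
    """FCM Gaussian kernel, eq. (1), evaluated at grid points X (shape ... x 3)."""
    r2 = np.sum((X - Yn)**2, axis=-1)
    return (2.0 * np.pi * sigma**2)**-1.5 * np.exp(-r2 / (2.0 * sigma**2))

L, M = 1.0, 64                          # cubic domain and grid size (illustrative)
dx = L / M
g = (np.arange(M) + 0.5) * dx
X = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)

sigma = 0.03                             # kernel width; support ~6*sigma fits the box
Yn = np.array([0.5, 0.5, 0.5])
D = gaussian_kernel(X, Yn, sigma)

mass = D.sum() * dx**3                   # discrete integral of the kernel
```

With the grid resolving the kernel ($\Delta x < \sigma$), the discrete integral is 1 to high accuracy; coarsening the grid degrades this rapidly, which is exactly the resolution constraint discussed below.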
The successive actions of spreading the particle force to the fluid, solving for the resulting fluid velocity, and interpolating the fluid velocity to obtain the particle velocities provide a single linear relationship between the particle forces, $\mathbf{F}$, and velocities, $\mathbf{V} = [\mathbf{V}_1^{\dagger}, \mathbf{V}_2^{\dagger}, \dots, \mathbf{V}_N^{\dagger}]^{\dagger}$,
$$\mathbf{V} = \mathcal{M}_{VF}\mathbf{F}, \quad (5)$$
where $\mathcal{M}_{VF}$ is the $3N \times 3N$ force-velocity mobility matrix. This can be written in terms of the operator $\mathcal{J}$ and $\mathcal{L}^{-1}$, the inverse of the Stokes operator, as
$$\mathcal{M}_{VF}[\cdot] = \mathcal{J}[\mathcal{L}^{-1}[\mathcal{J}^{\dagger}[\cdot]]]. \quad (6)$$

2.1. Grid-based algorithm for FCM in periodic domains

For simulations of particles in periodic domains where $\Omega = [0, L_x) \times [0, L_y) \times [0, L_z)$, the FCM mobility matrix can be applied using an FFT-based algorithm described previously in [29, 31]. Here, $\Omega$ is discretised using a uniform grid with spacing $\Delta x$ and size $M_x \times M_y \times M_z$, where $M_x$, $M_y$ and $M_z$ are the number of gridpoints in the $x$-, $y$-, and $z$-directions, respectively. The total number of grid points is then $M = M_x M_y M_z$. The algorithm consists of three main steps, each corresponding to the action of one of the continuous operators described above.

1. Applying $\mathcal{J}^{\dagger}$: The particle force is communicated to the fluid by evaluating the Gaussian force distributions on the grid. Although the Gaussian is not compactly supported, it decays rapidly and its tails can be safely truncated in an error-controlled fashion. Thus, each Gaussian is supported locally on the grid by $M_G \times M_G \times M_G$ gridpoints.

2. Applying the inverse Stokes operator, $\mathcal{L}^{-1}$:
(a) The Fourier transform of the total force distribution on the grid is computed using the FFT.
(b) The fluid velocity in Fourier space is obtained by applying the inverse Stokes operator in Fourier space.
(c) The inverse Fourier transform of the fluid velocity is computed using the FFT.

3.
Applying $\mathcal{J}$: The particle velocities are obtained by numerically evaluating the integral appearing in (4) using the trapezoidal rule. Again, the sums are performed using the truncated Gaussians on a grid of size $M_G \times M_G \times M_G$.

The operation count associated with evaluating the force on the grid and interpolating the resulting flow is $O(NM_G^3)$, while that associated with the application of the inverse Stokes operator is $O(M \log M)$. While this algorithm avoids the $O(N^2)$ scaling associated with pairwise computation, when the grid is large compared to the number of particles, the cost of the FCM algorithm will exceed that of pairwise evaluation. The accuracy of this algorithm hinges on the grid being sufficiently fine to resolve the Gaussian kernel used in $\mathcal{J}$ and $\mathcal{J}^{\dagger}$, i.e. $\Delta x < \sigma$. Thus, the grid spacing, and hence the number of grid points, is determined by $\sigma$ and consequently by the hydrodynamic radius, $a$. As a result, for simulations at low volume fraction where the separation between particles is large compared to the particle size, the FFT and the application of the inverse Stokes operator will require excessive computational time. It is also worth noting that the grid-based approach can incur a high memory cost: flow data on a $1024 \times 1024 \times 1024$ grid in double precision requires 25.8 GB of RAM. The purpose of this paper is to formulate fast FCM by decoupling the grid spacing from the Gaussian width to facilitate more efficient computation across all particle volume fractions. We accomplish this by replacing the Gaussian kernel in the FCM algorithm by a modified kernel of larger width, and then correcting the resulting particle velocities by a pairwise computation. An essential aspect of this procedure is constructing a modified kernel that ensures the number of particle pairs requiring correction remains small while simultaneously having the kernel width as large as possible.

2.2.
FCM pairwise mobility matrix

A key piece of information needed to formulate fast FCM is the FCM pairwise mobility matrix, $M^{VF}_{nm}$, which provides the contribution to the velocity of particle $n$ due to the force on particle $m$. We can obtain an analytical expression for $M^{VF}_{nm}$ using the expressions found in [27] for the fluid velocity generated by a Gaussian force distribution. The flow generated by $\mathbf{f}(\mathbf{x}) = \mathbf{F}\Delta(\mathbf{x};\sigma)$ with $\Delta(\mathbf{x};\sigma) = (2\pi\sigma^2)^{-3/2}\exp(-r^2/2\sigma^2)$ and $r = \|\mathbf{x}\|$ can be expressed as
$$\mathbf{u}(\mathbf{x}) = S(\mathbf{x};\sigma)\mathbf{F}, \quad (7)$$
where we can write $S(\mathbf{x};\sigma) = S^{(1)}(\mathbf{x};\sigma) + S^{(2)}(\mathbf{x};\sigma) + S^{(3)}(\mathbf{x};\sigma)$ with
$$S^{(1)}(\mathbf{x};\sigma) = \frac{1}{8\pi\eta r}\left(I + \frac{\mathbf{x}\mathbf{x}^T}{r^2}\right)\mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right), \quad (8)$$
$$S^{(2)}(\mathbf{x};\sigma) = \frac{1}{8\pi\eta r^3}\left(I - \frac{3\mathbf{x}\mathbf{x}^T}{r^2}\right)\sigma^2\,\mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right), \quad (9)$$
$$S^{(3)}(\mathbf{x};\sigma) = -\frac{\sigma^2}{2\eta}\left(I - \frac{3\mathbf{x}\mathbf{x}^T}{r^2}\right)\frac{\sigma^2}{r^2}\Delta(\mathbf{x};\sigma). \quad (10)$$
In terms of the Oseen tensor,
$$G(\mathbf{x}) = \frac{1}{8\pi\eta r}\left(I + \frac{\mathbf{x}\mathbf{x}^T}{r^2}\right), \quad (11)$$
we have
$$S^{(1)}(\mathbf{x};\sigma) = \mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right)G(\mathbf{x}), \quad (12)$$
$$S^{(2)}(\mathbf{x};\sigma) = \frac{\sigma^2}{2}\,\mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right)\nabla^2 G(\mathbf{x}). \quad (13)$$
The expression for the pairwise mobility matrix for FCM follows directly from that for the flow. Namely, the entries of the mobility matrix that relate the velocity of particle $n$ to the force on particle $m$ are
$$M^{VF}_{nm} = S(\mathbf{Y}_n - \mathbf{Y}_m; \sigma\sqrt{2}). \quad (14)$$
We see that this is identical to the expression for the flow field, but with the envelope size replaced by $\sigma\sqrt{2}$. This slight modification is a result of the volume averaging. The validity of this expression is most easily established using Fourier integrals, as shown in Appendix A.
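As a sanity check on (8)-(13), the sketch below evaluates $S(\mathbf{x};\sigma)$ and confirms that, away from the particle, it reduces to $G + (\sigma^2/2)\nabla^2 G$ with the remaining terms exponentially small. The function names are ours:

```python
import numpy as np
from math import erf, sqrt, pi, exp

def S_tensor(x, sigma, eta=1.0):
    """S(x; sigma) = S1 + S2 + S3 from eqs. (8)-(10)."""
    r = np.linalg.norm(x)
    I = np.eye(3)
    xxT = np.outer(x, x) / r**2
    e = erf(r / (sigma * sqrt(2.0)))
    delta = (2.0 * pi * sigma**2)**-1.5 * exp(-r**2 / (2.0 * sigma**2))
    S1 = e * (I + xxT) / (8.0 * pi * eta * r)
    S2 = sigma**2 * e * (I - 3.0 * xxT) / (8.0 * pi * eta * r**3)
    S3 = -(sigma**2 / (2.0 * eta)) * (I - 3.0 * xxT) * (sigma**2 / r**2) * delta
    return S1 + S2 + S3

def far_field(x, sigma, eta=1.0):
    """G(x) + (sigma^2/2) Laplacian(G)(x): eqs. (11)-(13) with erf -> 1."""
    r = np.linalg.norm(x)
    I = np.eye(3)
    xxT = np.outer(x, x) / r**2
    G = (I + xxT) / (8.0 * pi * eta * r)
    lapG = (I - 3.0 * xxT) / (4.0 * pi * eta * r**3)
    return G + 0.5 * sigma**2 * lapG
```

At separations of roughly ten kernel widths the two expressions agree to machine precision, reflecting the saturation of the error functions and the Gaussian decay of $S^{(3)}$.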
Along with these expressions, it will also be useful to have at hand the flow generated by $\mathbf{f}(\mathbf{x}) = \mathbf{H}\nabla^2\Delta(\mathbf{x};\sigma)$, which is
$$\mathbf{u}(\mathbf{x}) = Q(\mathbf{x};\sigma)\mathbf{H}, \quad (15)$$
where $Q(\mathbf{x};\sigma)$ can be decomposed into two terms, $Q(\mathbf{x};\sigma) = Q^{(1)}(\mathbf{x};\sigma) + Q^{(2)}(\mathbf{x};\sigma)$, where
$$Q^{(1)}(\mathbf{x};\sigma) = \frac{1}{4\pi\eta r^3}\left(I - \frac{3\mathbf{x}\mathbf{x}^T}{r^2}\right)\mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right), \quad (16)$$
$$Q^{(2)}(\mathbf{x};\sigma) = -\frac{1}{\eta}\left(\left(1 + \frac{\sigma^2}{r^2}\right)I - \left(1 + \frac{3\sigma^2}{r^2}\right)\frac{\mathbf{x}\mathbf{x}^T}{r^2}\right)\Delta(\mathbf{x};\sigma). \quad (17)$$
The expression (16) can also be written as
$$Q^{(1)}(\mathbf{x};\sigma) = \mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right)\nabla^2 G(\mathbf{x}). \quad (18)$$

3. Fast FCM

With the results from the previous sections established, we are in a position to formulate and justify the fast FCM framework. As is typically done in Ewald splitting, we decompose the mobility matrix into two parts,
$$\mathcal{M}_{VF} = \tilde{\mathcal{M}}_{VF} + (\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF}), \quad (19)$$
and aim for the action of $\tilde{\mathcal{M}}_{VF}$ to be evaluated efficiently using a grid-based computation and the correction $(\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF})$ to be applied pairwise, but only for a limited number of pairs whose separations are within a cut-off radius, i.e. $(\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF})$ is sparse. In standard Ewald splitting, the decomposition and aims are achieved by introducing the splitting function $H(k;\xi)$, where $k = |\mathbf{k}|$ is the magnitude of the wavenumber $\mathbf{k}$ and $\xi$ is the splitting parameter, into the Fourier transform of the inverse Stokes operator such that $\hat{\mathcal{L}}^{-1} = \hat{\mathcal{L}}^{-1}H(k;\xi) + \hat{\mathcal{L}}^{-1}(1 - H(k;\xi))$, with the two terms associated with $\tilde{\mathcal{M}}_{VF}$ and $(\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF})$, respectively. The splitting function is selected to decay exponentially with increasing $k$, with the decay rate controlled by the splitting parameter, $\xi$. Common choices for $H(k;\xi)$ include the Hasimoto [20]
$$H_H(k;\xi) = \left(1 + \frac{k^2}{4\xi^2}\right)e^{-k^2/4\xi^2} \quad (20)$$
and Beenakker [32]
$$H_B(k;\xi) = \left(1 + \frac{k^2}{4\xi^2} + \frac{k^4}{8\xi^4}\right)e^{-k^2/4\xi^2} \quad (21)$$
splitting functions.
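The Fourier-space application of $\hat{\mathcal{L}}^{-1}$, common to the grid-based algorithm of Section 2.1 and to the splitting described here, can be sketched as a minimal periodic solver on a cubic grid. The validation forcing below is our own choice:

```python
import numpy as np

def stokes_solve_fft(f, L, eta=1.0):
    """Apply the inverse Stokes operator on a periodic cubic grid:
    u_hat = (1/(eta k^2)) (I - k k^T / k^2) f_hat, with the k = 0 mode set to zero."""
    M = f.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(M, d=L / M)
    k = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"), axis=-1)
    k2 = np.sum(k**2, axis=-1)
    k2[0, 0, 0] = 1.0                              # placeholder; mode zeroed below
    fh = np.fft.fftn(f, axes=(0, 1, 2))
    kf = np.sum(k * fh, axis=-1)                   # k . f_hat (incompressibility projection)
    uh = (fh - k * (kf / k2)[..., None]) / (eta * k2[..., None])
    uh[0, 0, 0, :] = 0.0
    return np.real(np.fft.ifftn(uh, axes=(0, 1, 2)))

# Validate against a single-mode body force f = sin(2*pi*x) e_y,
# for which the exact periodic solution is u_y = sin(2*pi*x)/(4*pi^2*eta).
M, L = 32, 1.0
x = np.arange(M) * L / M
f = np.zeros((M, M, M, 3))
f[..., 1] = np.sin(2.0 * np.pi * x)[:, None, None]
u = stokes_solve_fft(f, L)
```

Because the forcing is a single Fourier mode, the FFT solve reproduces the analytic solution to machine precision, exactly the behaviour step 2 of the algorithm relies on.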
With fast FCM, we achieve a similar splitting by replacing the kernel $\Delta(\mathbf{x};\sigma)$ in FCM with a modified kernel $\tilde{\Delta}(\mathbf{x};\Sigma)$, which we use to compute the action of the approximate mobility $\tilde{\mathcal{M}}_{VF}$ using the standard, FFT-based FCM algorithm. Thus, to ensure that the cost of applying $\tilde{\mathcal{M}}_{VF}$ will be less than that of $\mathcal{M}_{VF}$, we must have $\Sigma > \sigma$ to be able to use a smaller grid. At the same time, however, the choice of $\tilde{\Delta}(\mathbf{x};\Sigma)$ should yield an approximate pairwise mobility matrix that satisfies $\|\tilde{M}^{VF}_{nm} - M^{VF}_{nm}\| \to 0$ exponentially as $\|\mathbf{Y}_n - \mathbf{Y}_m\| \to \infty$. This requirement ensures that the correction matrix $\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF}$ is sparse and the number of pairwise corrections needed to achieve a given tolerance is minimised. With these requirements in mind, we utilise the modified kernel
$$\tilde{\Delta}_n(\mathbf{x};\Sigma) = \left(1 + \frac{\sigma^2 - \Sigma^2}{2}\nabla^2\right)\Delta_n(\mathbf{x};\Sigma), \quad (22)$$
with $\Sigma > \sigma$. The inclusion of the operator $1 + \frac{1}{2}(\sigma^2 - \Sigma^2)\nabla^2$, as we will see, enables the exponential convergence of the pairwise mobility.

3.1. The action of $\tilde{\mathcal{M}}_{VF}$

Computing the action of $\tilde{\mathcal{M}}_{VF}$ follows the same steps as the standard FCM computation described in Section 2, with $\Delta_n(\mathbf{x};\sigma)$ replaced by $\tilde{\Delta}_n(\mathbf{x};\Sigma)$. Accordingly, we consider the Stokes system
$$-\eta\nabla^2\tilde{\mathbf{u}} + \nabla\tilde{p} = \tilde{\mathcal{J}}^{\dagger}[\mathbf{F}], \quad (23)$$
$$\nabla\cdot\tilde{\mathbf{u}} = 0, \quad (24)$$
where
$$\tilde{\mathcal{J}}^{\dagger}[\mathbf{F}] = \sum_{n=1}^{N}\mathbf{F}_n\tilde{\Delta}_n(\mathbf{x};\Sigma), \quad (25)$$
and compute the flow field. Then, from the fluid velocity, we determine the approximate particle velocities by evaluating
$$\tilde{\mathbf{V}}_m = (\tilde{\mathcal{J}}[\tilde{\mathbf{u}}])_m = \int_{-\infty}^{\infty}\tilde{\mathbf{u}}(\mathbf{x})\tilde{\Delta}_m(\mathbf{x};\Sigma)\,d^3\mathbf{x}. \quad (26)$$

3.2. The action of $\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF}$

To correct these velocities, we utilise an analytical expression for the pairwise mobility.
We derive this expression from the integral representation [31] of the pairwise mobility,
$$\tilde{M}^{VF}_{nm} = \iint \tilde{\Delta}(\mathbf{y} - \mathbf{Y}_n;\Sigma)\tilde{\Delta}(\mathbf{x} - \mathbf{Y}_m;\Sigma)G(\mathbf{x} - \mathbf{y})\,d^3\mathbf{y}\,d^3\mathbf{x}, \quad (27)$$
which we then expand as
$$\tilde{M}^{VF}_{nm} = \iint \Delta(\mathbf{y} - \mathbf{Y}_n;\Sigma)\Delta(\mathbf{x} - \mathbf{Y}_m;\Sigma)\,G(\mathbf{x} - \mathbf{y})\,d^3\mathbf{y}\,d^3\mathbf{x} + \iint \Delta(\mathbf{y} - \mathbf{Y}_n;\Sigma)\Delta(\mathbf{x} - \mathbf{Y}_m;\Sigma)(\sigma^2 - \Sigma^2)\nabla^2 G(\mathbf{x} - \mathbf{y})\,d^3\mathbf{y}\,d^3\mathbf{x} + \iint \Delta(\mathbf{y} - \mathbf{Y}_n;\Sigma)\Delta(\mathbf{x} - \mathbf{Y}_m;\Sigma)\frac{(\sigma^2 - \Sigma^2)^2}{4}\nabla^2\nabla^2 G(\mathbf{x} - \mathbf{y})\,d^3\mathbf{y}\,d^3\mathbf{x}. \quad (28)$$
Each integral can be related to the expressions for the FCM mobility matrices found in Section 2.2. Specifically, we have
$$\tilde{M}^{VF}_{nm} = S(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{2}\Sigma) + (\sigma^2 - \Sigma^2)Q(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{2}\Sigma) + \frac{(\sigma^2 - \Sigma^2)^2}{4}T(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{2}\Sigma), \quad (29)$$
where $S(\mathbf{x};\sigma)$ and $Q(\mathbf{x};\sigma)$ are provided by (8)-(10) and (16)-(17), respectively. The last term, $T(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{2}\Sigma)$, is new and given by
$$T(\mathbf{x};\sigma) = \frac{1}{\eta\sigma^2}\left(2I + \frac{1}{\sigma^2}(\mathbf{x}\mathbf{x}^T - r^2 I)\right)\Delta(\mathbf{x};\sigma). \quad (30)$$
Taking the difference with the FCM pairwise mobility (14), we arrive at the expression for the correction matrix,
$$M^{VF}_{nm} - \tilde{M}^{VF}_{nm} = \left(G(\mathbf{Y}_n - \mathbf{Y}_m) + \sigma^2\nabla^2 G(\mathbf{Y}_n - \mathbf{Y}_m)\right)\left(\mathrm{erf}\left(\frac{r}{2\sigma}\right) - \mathrm{erf}\left(\frac{r}{2\Sigma}\right)\right) + S^{(3)}(\mathbf{Y}_n - \mathbf{Y}_m;\sigma\sqrt{2}) - S^{(3)}(\mathbf{Y}_n - \mathbf{Y}_m;\Sigma\sqrt{2}) - (\sigma^2 - \Sigma^2)Q^{(2)}(\mathbf{Y}_n - \mathbf{Y}_m;\Sigma\sqrt{2}) - \frac{(\sigma^2 - \Sigma^2)^2}{4}T(\mathbf{Y}_n - \mathbf{Y}_m;\Sigma\sqrt{2}). \quad (31)$$
Along with providing the expression needed to correct the particle velocities, (31) reveals the important property that each term on the right-hand side decays exponentially as $\|\mathbf{Y}_n - \mathbf{Y}_m\| \to \infty$. The expression for (31) evaluated at $\|\mathbf{Y}_n - \mathbf{Y}_m\| = 0$, which is well-defined, is provided in Appendix B and is used to correct the self-mobility matrix for every particle.
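The correction (31) can be verified numerically by comparing it with the direct difference of the pairwise mobilities (14) and (29); the sketch below performs this consistency check at a single separation (helper names are ours):

```python
import numpy as np
from math import erf, sqrt, pi, exp

ETA = 1.0

def kernel(r, s):
    """Gaussian Delta(x; s) of eq. (1), as a function of r = |x|."""
    return (2.0 * pi * s**2)**-1.5 * exp(-r**2 / (2.0 * s**2))

def oseen(x):
    """r, identity, normalised dyad, Oseen tensor G (eq. (11)) and its Laplacian."""
    r = np.linalg.norm(x)
    I = np.eye(3)
    xxT = np.outer(x, x) / r**2
    G = (I + xxT) / (8.0 * pi * ETA * r)
    lapG = (I - 3.0 * xxT) / (4.0 * pi * ETA * r**3)
    return r, I, xxT, G, lapG

def S3(x, s):                                   # eq. (10)
    r, I, xxT, _, _ = oseen(x)
    return -(s**2 / (2.0 * ETA)) * (I - 3.0 * xxT) * (s**2 / r**2) * kernel(r, s)

def Q2(x, s):                                   # eq. (17)
    r, I, xxT, _, _ = oseen(x)
    return -((1.0 + s**2 / r**2) * I - (1.0 + 3.0 * s**2 / r**2) * xxT) * kernel(r, s) / ETA

def S(x, s):                                    # eqs. (8)-(10), via (12)-(13)
    r, _, _, G, lapG = oseen(x)
    return erf(r / (s * sqrt(2.0))) * (G + 0.5 * s**2 * lapG) + S3(x, s)

def Q(x, s):                                    # eqs. (16)-(17)
    r, _, _, _, lapG = oseen(x)
    return erf(r / (s * sqrt(2.0))) * lapG + Q2(x, s)

def T(x, s):                                    # eq. (30)
    r, I, _, _, _ = oseen(x)
    return (2.0 * I + (np.outer(x, x) - r**2 * I) / s**2) * kernel(r, s) / (ETA * s**2)

sigma, Sigma = 0.2, 0.5
x = np.array([0.9, -0.4, 0.6])
c = sigma**2 - Sigma**2

M_nm = S(x, sigma * sqrt(2.0))                                          # eq. (14)
M_nm_tilde = (S(x, sqrt(2.0) * Sigma) + c * Q(x, sqrt(2.0) * Sigma)
              + 0.25 * c**2 * T(x, sqrt(2.0) * Sigma))                  # eq. (29)
direct = M_nm - M_nm_tilde

r, _, _, G, lapG = oseen(x)
closed = ((G + sigma**2 * lapG) * (erf(r / (2.0 * sigma)) - erf(r / (2.0 * Sigma)))
          + S3(x, sigma * sqrt(2.0)) - S3(x, Sigma * sqrt(2.0))
          - c * Q2(x, Sigma * sqrt(2.0))
          - 0.25 * c**2 * T(x, Sigma * sqrt(2.0)))                      # eq. (31)
```

The two evaluations agree to machine precision, since (31) is an exact algebraic rearrangement of (14) minus (29).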
Note also that we assume the domain size $L$ is always sufficiently large that corrections are not needed between a particle and its own periodic images. As with standard Ewald splitting, rapid convergence is crucial to ensuring that the correction matrix is sparse. We also observe that the parameter $\Sigma$ that controls the width of the modified kernel plays the same role as $\xi$ in Ewald splitting. Accordingly, $\Sigma$ will need to be chosen to optimise the overall performance of fast FCM.

3.3. Positive splitting

An additional desirable feature associated with this choice of $\tilde{\Delta}$ is that it provides a positive splitting of the mobility matrix, namely that both $\tilde{\mathcal{M}}_{VF}$ and $\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF}$ are symmetric positive definite (SPD) matrices. While positive splitting is not essential for the efficient evaluation of the hydrodynamic interactions, as discussed in [22], it is essential for the efficient computation of Brownian displacements when thermal fluctuations are to be included in the computation. First, we notice that $\tilde{\mathcal{M}}_{VF}$ is positive definite by construction, due to the positive definiteness of the inverse Stokes operator and the spreading operator being the adjoint of the interpolation operator. It remains to show that $\mathcal{M}_{VF} - \tilde{\mathcal{M}}_{VF}$ is positive definite. We begin with the expression for the pairwise FCM mobility matrix in terms of the Fourier transforms (see also Appendix A) of $\Delta(\mathbf{x};\sigma)$ and the inverse Stokes operator,
$$M^{VF}_{nm} = (2\pi)^3\int e^{i\mathbf{k}\cdot(\mathbf{Y}_n - \mathbf{Y}_m)}\hat{\Delta}(k;\sigma)\hat{\mathcal{L}}^{-1}(\mathbf{k})\hat{\Delta}(k;\sigma)\,d^3\mathbf{k}, \quad (32)$$
where
$$\hat{\Delta}(k;\sigma) = (2\pi)^{-3}e^{-\sigma^2 k^2/2}. \quad (33)$$
If we were to apply Ewald splitting directly, we would have
$$M^{VF}_{nm} = (2\pi)^3\int e^{i\mathbf{k}\cdot(\mathbf{Y}_n - \mathbf{Y}_m)}\hat{\Delta}(k;\sigma)\hat{\mathcal{L}}^{-1}(\mathbf{k})\left(1 - H(k;\xi)\right)\hat{\Delta}(k;\sigma)\,d^3\mathbf{k} + (2\pi)^3\int e^{i\mathbf{k}\cdot(\mathbf{Y}_n - \mathbf{Y}_m)}\hat{\Delta}(k;\sigma)\hat{\mathcal{L}}^{-1}(\mathbf{k})H(k;\xi)\hat{\Delta}(k;\sigma)\,d^3\mathbf{k}. \quad (34)$$
As described in [22], the splitting is positive if $0 < H(k;\xi) < 1$ for $k > 0$. To demonstrate that our modified kernel approach yields positive splitting, we find the corresponding splitting function and show that it satisfies this condition. Since
$$\hat{\tilde{\Delta}}(k;\Sigma) = \left(1 - \frac{\sigma^2 - \Sigma^2}{2}k^2\right)(2\pi)^{-3}e^{-\Sigma^2 k^2/2}, \quad (35)$$
we can write
$$\hat{\tilde{\Delta}}(k;\Sigma) = \left(1 - \frac{\sigma^2 - \Sigma^2}{2}k^2\right)\left(e^{-\sigma^2 k^2/2}\right)^{\Sigma^2/\sigma^2 - 1}\hat{\Delta}(k;\sigma). \quad (36)$$
This allows us to express $\tilde{M}^{VF}_{nm}$ as
$$\tilde{M}^{VF}_{nm} = (2\pi)^3\int e^{i\mathbf{k}\cdot(\mathbf{Y}_n - \mathbf{Y}_m)}\hat{\Delta}(k;\sigma)\hat{\mathcal{L}}^{-1}(\mathbf{k})H_{FCM}(k;\Sigma,\sigma)\hat{\Delta}(k;\sigma)\,d^3\mathbf{k}, \quad (37)$$
where the fast FCM splitting function is
$$H_{FCM}(k;\Sigma,\sigma) = p(k;\Sigma,\sigma)e^{-(\Sigma^2 - \sigma^2)k^2}, \quad (38)$$
$$p(k;\Sigma,\sigma) = 1 - (\sigma^2 - \Sigma^2)k^2 + \frac{(\sigma^2 - \Sigma^2)^2}{4}k^4. \quad (39)$$
We first notice that $H_{FCM}(k;\Sigma,\sigma) > 0$, which follows from $e^{-(\Sigma^2 - \sigma^2)k^2} > 0$ and, as $\Sigma > \sigma$, $p(k;\Sigma,\sigma) = \left(1 - \frac{\sigma^2 - \Sigma^2}{2}k^2\right)^2 > 0$. Differentiating $H_{FCM}(k;\Sigma,\sigma)$ and $p(k;\Sigma,\sigma)$, we see that
$$H'_{FCM}(k;\Sigma,\sigma) = \left(p'(k;\Sigma,\sigma) - 2kp(k;\Sigma,\sigma)(\Sigma^2 - \sigma^2)\right)e^{-(\Sigma^2 - \sigma^2)k^2}, \quad (40)$$
$$p'(k;\Sigma,\sigma) = -2(\sigma^2 - \Sigma^2)k + (\sigma^2 - \Sigma^2)^2 k^3, \quad (41)$$
and, examining the expression for $H'_{FCM}(k;\Sigma,\sigma)$ more closely, we find
$$p'(k;\Sigma,\sigma) - 2kp(k;\Sigma,\sigma)(\Sigma^2 - \sigma^2) = k^3(\sigma^2 - \Sigma^2)^2\left[\frac{\sigma^2 - \Sigma^2}{2}k^2 - 1\right]. \quad (42)$$
Thus, since $k \geq 0$ and $\Sigma > \sigma$, we have $p'(k;\Sigma,\sigma) - 2kp(k;\Sigma,\sigma)(\Sigma^2 - \sigma^2) \leq 0$, revealing that $H'_{FCM}(k;\Sigma,\sigma) \leq 0$ and that $H_{FCM}(k;\Sigma,\sigma)$ is monotonically decreasing from its maximum $H_{FCM}(0;\Sigma,\sigma) = 1$.
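The monotonicity argument can also be confirmed numerically; a short sketch evaluating (38)-(39) over a range of wavenumbers (the sample widths are our own choice):

```python
import numpy as np

def H_fcm(k, Sigma, sigma):
    """Fast FCM splitting function, eqs. (38)-(39)."""
    a = sigma**2 - Sigma**2                       # negative, since Sigma > sigma
    p = (1.0 - 0.5 * a * k**2)**2                 # p(k) written as a perfect square
    return p * np.exp(a * k**2)                   # exp(-(Sigma^2 - sigma^2) k^2)

k = np.linspace(0.0, 10.0, 2001)
H = H_fcm(k, Sigma=2.0, sigma=1.0)
```

The sampled values start at 1 at $k = 0$, remain strictly positive, never exceed 1, and decrease monotonically, matching the analysis above.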
Therefore, we have $0 < H_{FCM}(k;\Sigma,\sigma) < 1$ for $k > 0$, indicating that the splitting is positive. From this analysis, we also see how the approach of using a modified kernel corresponds to the standard Ewald splitting framework, namely that $\hat{\tilde{\Delta}}(k;\Sigma) = \left(H_{FCM}(k;\Sigma,\sigma)\right)^{1/2}\hat{\Delta}(k;\sigma)$. Thus, it would have been possible to use one of the standard splitting functions, such as that of Hasimoto [20], provided that $H_H(k;\xi)^{1/2}$ exists for all $k$ and its inverse Fourier transform can be found so as to evaluate its real-space representation on a grid. It is important to note that while this direct correspondence is available for periodic domains, in other domains such as channels, using a modified kernel in real space to perform the splitting, as done for Stokeslets [12, 30], may be the only route to accelerating the computation.

3.4. Extension of fast FCM to torques and angular velocities

As introduced in [28], FCM can be extended to include torques and particle rotations. Along with the force, $\mathbf{F}_n$, the torque, $\mathbf{T}_n$, on each particle $n$ is also spread to the fluid through
$$-\eta\nabla^2\mathbf{u} + \nabla p = \mathcal{J}^{\dagger}[\mathbf{F}] + \mathcal{N}^{\dagger}[\mathbf{T}], \quad (43)$$
$$\nabla\cdot\mathbf{u} = 0, \quad (44)$$
where $\mathbf{T} = [\mathbf{T}_1^{\dagger}, \mathbf{T}_2^{\dagger}, \dots, \mathbf{T}_N^{\dagger}]^{\dagger}$, $\mathcal{J}^{\dagger}$ is given in (3), and
$$\mathcal{N}^{\dagger}[\mathbf{T}] = \frac{1}{2}\sum_{n=1}^{N}\mathbf{T}_n \times \nabla\Delta_n(\mathbf{x};\sigma_D). \quad (45)$$
After solving for the fluid flow, the angular velocity, $\mathbf{W}_m$, of particle $m$ is found by applying $\mathcal{N}$ to the flow field,
$$\mathbf{W}_m = (\mathcal{N}[\mathbf{u}])_m = \frac{1}{2}\int_{-\infty}^{\infty}\nabla\times\mathbf{u}(\mathbf{x})\Delta_m(\mathbf{x};\sigma_D)\,d^3\mathbf{x}. \quad (46)$$
For particle rotations, the width of the Gaussian is $\sigma_D = a/(6\sqrt{\pi})^{1/3}$, which ensures FCM provides the correct value for the viscous torque on a single spherical particle.
With the addition of torques and angular velocities, $\mathbf{W} = [\mathbf{W}_1^{\dagger}, \mathbf{W}_2^{\dagger}, \dots, \mathbf{W}_N^{\dagger}]^{\dagger}$, the mobility relationship is now
$$\begin{pmatrix}\mathbf{V}\\ \mathbf{W}\end{pmatrix} = \mathcal{M}\begin{pmatrix}\mathbf{F}\\ \mathbf{T}\end{pmatrix} = \begin{pmatrix}\mathcal{M}_{VF} & \mathcal{M}_{VT}\\ \mathcal{M}_{WF} & \mathcal{M}_{WT}\end{pmatrix}\begin{pmatrix}\mathbf{F}\\ \mathbf{T}\end{pmatrix}, \quad (47)$$
where, analogous to (6), the $3N \times 3N$ mobility submatrices are
$$\mathcal{M}_{VF}[\cdot] = \mathcal{J}[\mathcal{L}^{-1}[\mathcal{J}^{\dagger}[\cdot]]], \quad (48)$$
$$\mathcal{M}_{VT}[\cdot] = \mathcal{J}[\mathcal{L}^{-1}[\mathcal{N}^{\dagger}[\cdot]]], \quad (49)$$
$$\mathcal{M}_{WF}[\cdot] = \mathcal{N}[\mathcal{L}^{-1}[\mathcal{J}^{\dagger}[\cdot]]], \quad (50)$$
$$\mathcal{M}_{WT}[\cdot] = \mathcal{N}[\mathcal{L}^{-1}[\mathcal{N}^{\dagger}[\cdot]]]. \quad (51)$$
We may utilise the analytical expressions [27, 28] for the vorticity induced by single FCM force and torque distributions located at the origin to obtain expressions for the pairwise mobility matrices. The vorticity resulting from a force distribution is
$$\boldsymbol{\omega}(\mathbf{x}) = R(\mathbf{x};\sigma)\mathbf{F}, \quad (52)$$
where
$$R(\mathbf{x};\sigma) = \frac{1}{8\pi\eta r^3}\left(\mathrm{erf}\left(\frac{r}{\sigma\sqrt{2}}\right) - 4\pi r\sigma^2\Delta(\mathbf{x};\sigma)\right)(-\mathbf{x}\times), \quad (53)$$
and $(-\mathbf{x}\times)$ is the matrix
$$(-\mathbf{x}\times) = \begin{pmatrix}0 & x_3 & -x_2\\ -x_3 & 0 & x_1\\ x_2 & -x_1 & 0\end{pmatrix}. \quad (54)$$
For a torque, the vorticity is given by
$$\boldsymbol{\omega}(\mathbf{x}) = P(\mathbf{x};\sigma_D)\mathbf{T}, \quad (55)$$
with $P(\mathbf{x};\sigma_D) = P^{(1)}(\mathbf{x};\sigma_D) + P^{(2)}(\mathbf{x};\sigma_D)$, where
$$P^{(1)}(\mathbf{x};\sigma_D) = \frac{1}{8\pi\eta r^3}\left(-I + \frac{3\mathbf{x}\mathbf{x}^T}{r^2}\right)\mathrm{erf}\left(\frac{r}{\sigma_D\sqrt{2}}\right), \quad (56)$$
$$P^{(2)}(\mathbf{x};\sigma_D) = \frac{1}{2\eta r^2}\left((\sigma_D^2 + r^2)I - (3\sigma_D^2 + r^2)\frac{\mathbf{x}\mathbf{x}^T}{r^2}\right)\Delta(\mathbf{x};\sigma_D). \quad (57)$$
Without presenting the details, as the steps are similar to those used for the force-velocity pairwise mobility, the additional pairwise mobilities are then
$$M^{WF}_{nm} = R(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{\sigma^2 + \sigma_D^2}), \quad (58)$$
$$M^{WT}_{nm} = \frac{1}{2}P(\mathbf{Y}_n - \mathbf{Y}_m; \sigma_D\sqrt{2}), \quad (59)$$
along with $M^{VT}_{nm} = (M^{WF}_{mn})^{\dagger}$. As done when applying the force-velocity mobility, we split the mobility into two parts,
$$\mathcal{M} = \tilde{\mathcal{M}} + (\mathcal{M} - \tilde{\mathcal{M}}), \quad (60)$$
such that the application of $\tilde{\mathcal{M}}$ can be evaluated rapidly on a grid while $\mathcal{M} - \tilde{\mathcal{M}}$ is evaluated pairwise for a limited number of particle pairs.
For the grid-based computation, we solve
$$-\eta\nabla^2\tilde{\mathbf{u}} + \nabla\tilde{p} = \tilde{\mathcal{J}}^{\dagger}[\mathbf{F}] + \tilde{\mathcal{N}}^{\dagger}[\mathbf{T}], \quad (61)$$
$$\nabla\cdot\tilde{\mathbf{u}} = 0, \quad (62)$$
with $\tilde{\mathcal{J}}^{\dagger}$ given by (25), and
$$\tilde{\mathcal{N}}^{\dagger}[\mathbf{T}] = \frac{1}{2}\sum_{n=1}^{N}\mathbf{T}_n \times \nabla\Delta_n(\mathbf{x};\Sigma_D). \quad (63)$$
As before, the resulting particle velocities follow from applying $\tilde{\mathcal{J}}$ to the flow field, while the angular velocities are given by
$$\tilde{\mathbf{W}}_m = (\tilde{\mathcal{N}}[\tilde{\mathbf{u}}])_m = \frac{1}{2}\int_{-\infty}^{\infty}\nabla\times\tilde{\mathbf{u}}(\mathbf{x})\Delta_m(\mathbf{x};\Sigma_D)\,d^3\mathbf{x}. \quad (64)$$
Note that for $\tilde{\mathcal{N}}$, we simply replace the Gaussian of width $\sigma_D$ by one of width $\Sigma_D > \sigma_D$. No additional term involving the Laplacian is required to ensure exponential convergence. The real-space corrections are computed from the pairwise mobility relations. They are
$$M^{WF}_{nm} - \tilde{M}^{WF}_{nm} = R(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{\sigma^2 + \sigma_D^2}) - R(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{\Sigma^2 + \Sigma_D^2}) - \frac{\sigma^2 - \Sigma^2}{2}K(\mathbf{Y}_n - \mathbf{Y}_m; \sqrt{\Sigma^2 + \Sigma_D^2}), \quad (65)$$
where $R(\mathbf{x};\sigma)$ is provided in (53) and
$$K(\mathbf{x};\sigma) = -\frac{1}{2\sigma^2\eta}\Delta(\mathbf{x};\sigma)(-\mathbf{x}\times), \quad (66)$$
and
$$M^{WT}_{nm} - \tilde{M}^{WT}_{nm} = \frac{1}{2}\left(P(\mathbf{Y}_n - \mathbf{Y}_m;\sigma_D\sqrt{2}) - P(\mathbf{Y}_n - \mathbf{Y}_m;\Sigma_D\sqrt{2})\right), \quad (67)$$
where $P(\mathbf{x};\sigma)$ is given by (56)-(57). As a final point, we note that $\Sigma_D$, the width of the modified kernel appearing in $\tilde{\mathcal{N}}$, can be set independently of $\Sigma$. For convenience, however, in our computations below involving torques, we take $\Sigma_D = \Sigma$.

4. Fast FCM Algorithm

To evaluate the action of the mobility matrix using fast FCM, we must combine the grid-based FFT computation described in Section 2.1 for FCM with an efficient scheme for evaluating the pairwise correction for particles whose separation distance is within a tolerance-related cutoff radius, $R_c$. To do this, we both sort the particles and create neighbour lists using the approach described below.
Our approach has the dual effect of ensuring efficient computation of the pairwise corrections and a localisation in memory of nearby particles, allowing for efficient spreading and interpolation to the grid.

1. Spatial hashing: The first step involves dividing the domain into cells in order to group particles based on their positions within the domain. This division needs to be performed only once, during the initialisation of the simulation. Here, the periodic domain is divided into $m_x \times m_y \times m_z$ cells, where the number of cells in each direction is determined from $m_i = \max(\mathrm{int}(L_i/R_c), 3)$. This number of cells guarantees that, for every particle, any other particle within a distance $R_c$ is either in the same cell or in one of the adjacent cells. The index for each cell is
$$\text{cell index} = x_c + (y_c + z_c \cdot m_y) \cdot m_x, \quad (68)$$
where $x_c \in \{0, 1, \dots, m_x - 1\}$, and $y_c$ and $z_c$ are defined similarly. At each timestep, each particle is assigned to a specific cell based on its location in the domain, and the cell index is used as the particle's hash value.

2. Particle sorting and cell lists: All particle data arrays (positions, forces, torques) are then sorted based on their hash values, which we perform using 'sort by key'. This sorting facilitates efficient memory retrieval during grid operations, since particles positioned consecutively in memory will now have in common the grid points to which they spread and from which they interpolate. In addition, since particles in a cell are now contiguous in memory, the pairwise corrections for pairs in the same and adjacent cells can readily be evaluated by knowing the index of the first and the last particle in each cell.

3. Applying $\tilde{\mathcal{J}}^{\dagger}$: This step is the same as Step 1 in the FCM computation presented in Section 2.1, with $\mathcal{J}^{\dagger}$ replaced by $\tilde{\mathcal{J}}^{\dagger}$.

4. Applying $\mathcal{L}^{-1}$: This step is identical to Step 2 in Section 2.1.

5.
Applying $\tilde{\mathcal{J}}$: Again, this is the same as Step 3 in the FCM computation, with $\mathcal{J}$ replaced by $\tilde{\mathcal{J}}$.

6. Pairwise correction: For each particle $m$, we determine its cell index and identify the adjacent cells. Then, using the start and stop indices, we compute the distance between particle $m$ and the other particles in the same and adjacent cells. If the distance between particles $m$ and $n$, denoted $R_{mn}$, satisfies $R_{mn} < R_c$, we apply the pairwise correction (31), along with (65) and (67), to adjust the velocities and angular velocities of particles $m$ and $n$ accordingly.

The operation counts for applying $\tilde{\mathcal{J}}$ and $\tilde{\mathcal{J}}^{\dagger}$ will be identical to those for applying $\mathcal{J}$ and $\mathcal{J}^{\dagger}$ in FCM, as will the operation count for applying $\mathcal{L}^{-1}$. It is important to remember, however, that the grid used for fast FCM will be smaller. For a random dispersion of $N$ FCM particles, the operation count for the pairwise correction will be $O(N\phi(R_c/a)^3)$, where $\phi$ is the volume fraction. Thus, the savings in computation time provided by fast FCM will broadly depend on the balance between the reduction in computation time due to the reduced grid size and the increased cost of the pairwise correction.

4.1. Implementation

In the remainder of the paper, we explore the performance of fast FCM using a C++/CUDA-based implementation that utilises the Graphics Processing Unit (GPU) to accelerate the computation. Our CUDA implementation is freely available on GitHub: https://github.com/racksa/cuFCM_demo.

In our implementation, specific attention is paid to many GPU-related considerations. We carefully organise the data access pattern for quantities, such as the flow field, that reside on the grid. This limits waiting times when fetching data from memory. We utilise the CUDA interface, which offers flexibility in manipulating hardware memory through the 'shared memory' hierarchy of the GPU.
In doing so, we can take advantage of important features of GPU computing, including the ability to switch processing cores between tasks while waiting for memory communications, thereby reducing idle time. For a more detailed summary of the CUDA-specific choices used in our implementation, please see Appendix C.

5. Error control and parameter optimisation

We set the values of the computational parameters appearing in fast FCM to ensure that particle velocities are returned efficiently to within a user-specified tolerance. These parameters are:

1. $M_G$, the size of the grid support of $\tilde{\Delta}_n(\mathbf{x};\Sigma)$ in one direction,
2. $\Sigma/\Delta x$, the kernel width relative to the grid spacing,
3. $\Sigma/\sigma$, the kernel width relative to the FCM envelope width,
4. $R_c/\sigma$, the pairwise correction cutoff radius relative to the FCM envelope width.

The parameters $M_G$ and $\Sigma/\Delta x$ are selected to provide sufficient resolution of the grid-based computation associated with applying $\tilde{\mathcal{M}}_{VF}$. The remaining parameters, $\Sigma/\sigma$ and $R_c/\sigma$, are set to ensure that the particle velocity error is below the specified tolerance while balancing the computational effort between the grid-based and pairwise computations. One can see that for $\Sigma/\sigma = 1$, $R_c/\sigma$ can be rather small; very few pairwise corrections will be required, but nothing has been gained with respect to the standard FCM computation. At the other extreme, where $\Sigma/\sigma \gg 1$, the grid size will be greatly reduced, but a large $R_c/\sigma$ will be required. In this case, an exceedingly large number of pairwise corrections will be needed to deliver errors below an acceptable tolerance. We seek, therefore, intermediate values of $\Sigma/\sigma$ and $R_c/\sigma$ so as to yield the optimal balance in computation time between the pairwise and grid-based computations.

5.1.
Resolving the action of $\tilde{\mathcal{M}}_{VF}$

We first determine the values of $M_G$ and $\Sigma/\Delta x$ that are needed to return the application of $\tilde{\mathcal{M}}_{VF}$ to within a desired tolerance. The parameter $\Sigma/\Delta x$ controls the grid resolution of the kernel, and $M_G$ sets the finite extent of the grid support of $\tilde{\Delta}(\mathbf{x};\Sigma)$. Hence, errors associated with $M_G$ are linked to the truncation of the Gaussian kernel's tails. As the computational cost increases with $\Sigma/\Delta x$ and $M_G$, it is important to ensure that these are set to the lowest values possible for a given tolerance. Fig. 1(a) shows the error, $\epsilon$, in applying $\tilde{\mathcal{M}}_{VF}$ for a random suspension of $N = 64457$ particles with volume fraction $\phi = 8\%$ in a domain of size $L \times L \times L$ for a range of $\Sigma/\Delta x$ and $M_G$. To ensure that any error incurred is associated with $\tilde{\mathcal{M}}_{VF}$, the correction is applied to all particle pairs by setting $R_c = L$. Each particle $n$ is subject to a random force, $\mathbf{F}_n$, and the resulting velocity $\mathbf{V}_n$ for each $n$ is determined. The error is given by
$$\epsilon = \frac{1}{N}\sum_{n=1}^{N}\frac{|\mathbf{V}_n - \mathbf{U}_n|}{|\mathbf{U}_n|}, \quad (69)$$
where a computation with error tolerance $10^{-15}$ was used to generate the exact velocities, $\mathbf{U}_n$. We observe that $\epsilon$ decreases exponentially as $\Sigma/\Delta x$ and $M_G$ increase simultaneously. We notice also that the error contours are approximately rectangular, indicating that minimum values of $\Sigma/\Delta x$ and $M_G$ are required to achieve a desired tolerance. These values are provided in Table 1, where we see that both $\Sigma/\Delta x$ and $M_G$ approximately double in value as $\epsilon$ decreases from $10^{-2}$ to $10^{-8}$.
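The error measure (69) is simple to compute; a small sketch, with velocity arrays of shape $N \times 3$:

```python
import numpy as np

def mean_relative_error(V, U):
    """Mean relative velocity error of eq. (69), using per-particle Euclidean norms."""
    return float(np.mean(np.linalg.norm(V - U, axis=1) / np.linalg.norm(U, axis=1)))
```

For example, perturbing one of two particle velocities by 10% gives a mean relative error of 0.05.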
Figure 1: Contour plots showing the fast FCM error as a function of (a) Σ/Δx and MG and (b) Σ/σ and Rc/σ for a random suspension with N = 64457, ϕ = 8% and a/L = 1/150.

ϵ      Σ/Δx   MG
10⁻²   0.71    8
10⁻³   0.87    9
10⁻⁴   0.99   10
10⁻⁶   1.20   12
10⁻⁸   1.39   14

Table 1: The minimum values of Σ/Δx and MG needed to achieve the error tolerance, ϵ, for a random suspension with N = 64457, ϕ = 8% and a/L = 1/150.

ϵ      λϵ (ϕ = 0.04)   λϵ (ϕ = 0.08)   λϵ (ϕ = 0.16)   λϵ (ϕ = 0.32)
10⁻²   2.31            2.47            2.63            2.79
10⁻³   3.55            3.66            3.76            3.86
10⁻⁴   4.87            5.08            5.24            5.43
10⁻⁶   7.10            7.18            7.26            7.35
10⁻⁸   8.53            8.57            8.63            8.72

Table 2: The slope of the line Rc/σ = λϵ Σ/σ that provides the relationship between Rc and Σ needed to achieve tolerance, ϵ, for random suspensions with different ϕ and a/L = 1/150.

5.2. Determining the optimal Rc/σ and Σ/σ

With values of MG and Σ/Δx established for the grid-based application of M̃VF, we move to determining values for the remaining two parameters, Rc/σ and Σ/σ. To do so, we must now consider the complete computation and assess how the inclusion of pairwise corrections affects the overall error. This provides a relationship between the values of Rc/σ and Σ/σ needed to achieve a given ϵ. Then, using parameter values lying on these curves, we can find the combination of Rc/σ and Σ/σ that minimises computational time for a specified ϵ. Fig. 1(b) shows the error for the random suspension with N = 64457 particles and volume fraction ϕ = 8% over a range of Rc/σ and Σ/σ.
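Given a target tolerance, selecting the grid parameters from Table 1 and applying the linear relation of Table 2 amounts to simple lookups; a hedged Python sketch (the table and function names are ours):

```python
# Minimal grid parameters from Table 1 (N = 64457, phi = 8%, a/L = 1/150);
# keys are error tolerances, values are (Sigma/dx, M_G).
TABLE_1 = {1e-2: (0.71, 8), 1e-3: (0.87, 9), 1e-4: (0.99, 10),
           1e-6: (1.20, 12), 1e-8: (1.39, 14)}

def grid_parameters(eps):
    """Return the smallest tabulated (Sigma/dx, M_G) meeting the requested tolerance."""
    feasible = [tol for tol in TABLE_1 if tol <= eps]
    if not feasible:
        raise ValueError("requested tolerance is below the tabulated range")
    return TABLE_1[max(feasible)]

def cutoff_ratio(sigma_ratio, lambda_eps):
    """Rc/sigma from the linear relation Rc/sigma = lambda_eps * (Sigma/sigma), Table 2."""
    return lambda_eps * sigma_ratio
```

For tolerances between tabulated values, the next-stricter entry is returned, which errs on the side of over-resolution.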
For all simulations performed in this paper, there is no overlap between particles. The computations were performed with Σ/Δx = 2.0 and MG = 16 to ensure sufficient resolution of the grid-based computation for all values of ϵ. Based on these parameter values, the number of grid points ranges from Mx = 30 to Mx = 260, with Mx = My = Mz in all cases. We see that the error contours can be approximated by lines that pass through the origin with slopes that increase as the error decreases. Thus, for a given ϵ, we may write Rc/σ = λϵ Σ/σ, where λϵ is the slope of the line. Table 2 shows the values of λϵ for different values of ϵ over a range of ϕ. We see that for a given ϵ, changing ϕ results in only a modest change in λϵ. While all Rc/σ and Σ/σ along these lines return values within the specified error tolerance, the computational time for different parameter values along the lines can vary widely. At low Rc/σ and Σ/σ, a high computational cost is incurred due to a very fine grid, while at high Rc/σ and Σ/σ, the pairwise computation dominates the cost. Thus, the appropriate choice for Rc/σ and Σ/σ will be one along the desired tolerance curve that balances the computational costs of the grid-based and pairwise computations.

Figure 2: The computational cost measured in particle timesteps per second (PTPS) as a function of Σ/σ for random suspensions with different ϕ (N = 7639, ϕ = 0.05%; N = 30557, ϕ = 0.2%; N = 122230, ϕ = 0.8%; N = 488923, ϕ = 3.2%; N = 1955695, ϕ = 12.8%). The computations are performed with ϵ = 10⁻⁴ and a/L = 1/400. Data obtained using our single-precision CUDA implementation of fast FCM run on a single RTX 2080Ti.
ϕ        optimal Σ/σ   optimal Rc/σ
0.05%    5.9           32.6
0.2%     5.1           27.9
0.8%     3.4           18.6
3.2%     1.8            9.8
12.8%    1.4            8.1

Table 3: The optimal values of Σ/σ and Rc/σ for different volume fractions with ϵ = 10⁻⁴ and a/L = 1/400.

Fig. 2 shows the computational cost measured in particle timesteps per second (PTPS) [22] as a function of Σ/σ for random suspensions with different values of ϕ. The PTPS is the number of particles divided by the average time required to apply the mobility matrix; thus, high values of PTPS correspond to lower computation times. We set the error tolerance to ϵ = 10⁻⁴, and using the values from Tables 1 and 2, we have Σ/Δx = 1.00, MG = 10 and λϵ = 5.5. We see that for each ϕ, PTPS attains a maximum value, and the value of Σ/σ where the peak is realised decreases as ϕ increases. The values of Σ/σ and Rc/σ for which the peaks occur are given in Table 3. The peak PTPS value is approximately 7 × 10⁶ for all ϕ and exhibits a slight reduction at the lowest values of ϕ. For the most dilute case, ϕ = 0.0005, the peak occurs at Σ/σ ≈ 6. Thus, for this case, using fast FCM reduces the grid size by a factor of approximately 6³. As the suspension becomes denser, the average particle separation decreases, resulting in the peak occurring at lower values of Σ/σ. Eventually, when the volume fraction becomes sufficiently high, the optimal computation is realised for Σ/σ ≈ 1, and the standard FCM computation (no pairwise corrections) becomes the most efficient choice.

5.3. Cost comparison with standard FCM

With the optimal parameters established, we can assess the performance of fast FCM relative to the standard FCM implementation.
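The PTPS metric used throughout these comparisons is a trivial computation, shown here only to fix the definition (a helper of our own, not the benchmarking code):

```python
def particle_timesteps_per_second(n_particles, apply_times):
    """PTPS: number of particles divided by the average wall time (in seconds)
    required to apply the mobility matrix."""
    mean_time = sum(apply_times) / len(apply_times)
    return n_particles / mean_time
```

Averaging over many mobility applications, after a warm-up period, smooths out timing jitter on the GPU.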
To do so, we compute the PTPS for fast and standard FCM for random suspensions with different ϕ, and determine ϕc, the value of ϕ at which the computational cost of fast FCM exceeds that of the standard implementation, i.e. PTPS_FFCM < PTPS_FCM. To ensure a proper comparison, the two methods utilise the same code base and code optimisation where possible. They both use the same FFT package and gridding subroutines, and both are run on the same device. We perform the comparison for a/L = 0.004, 0.008 and 0.012, and set the tolerance to ϵ = 10⁻⁴. Fig. 3(a) shows the PTPS ratio, PTPS_FFCM/PTPS_FCM, as a function of ϕ for three different values of a/L. For all cases, we have PTPS_FFCM/PTPS_FCM > 1 at low ϕ, followed by an overall decrease to values PTPS_FFCM/PTPS_FCM < 1 as ϕ increases. As discussed above, at low ϕ, when the particles are on average further apart, the number of grid points for fast FCM is lower than that needed in standard FCM, and hence the PTPS for fast FCM is higher. As ϕ increases, the particles become more closely spaced and the numbers of grid points for the two algorithms become more comparable, until reaching near equality at ϕc. The value of ϕc increases from approximately 10% for a/L = 0.012 to approximately 18% for a/L = 0.004. We see, therefore, that for a fixed particle size, the computational cost of fast FCM relative to the standard computation decreases as the simulation domain increases in size. This dependence of ϕc on a/L is shown in Fig. 3(b). We observe that ϕc decreases linearly, ϕc ≈ −9.24(a/L) + 0.22, over the range of a/L that we are able to assess. Similar results are obtained when the tolerance is lowered to ϵ = 10⁻⁶ (see Figs. 3(c) and 3(d)). For a/L < 0.004, the grid for standard FCM becomes too large to be accommodated in the memory of the GPU (NVIDIA RTX 2080 Ti), while for a/L > 0.012, the FCM grid is too small, resulting in idle nodes on the GPU.
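The empirical fit ϕc ≈ −9.24(a/L) + 0.22 gives a quick rule of thumb for choosing between the two methods at ϵ = 10⁻⁴; a sketch (function names are ours, and the fit is valid only over the range of a/L assessed above):

```python
def crossover_fraction(a_over_L, slope=-9.24, intercept=0.22):
    """Empirical crossover volume fraction phi_c(a/L) from the eps = 1e-4 fit (Fig. 3(b))."""
    return slope * a_over_L + intercept

def prefer_fast_fcm(phi, a_over_L):
    """True when the suspension lies in the regime where fast FCM is the faster choice."""
    return phi < crossover_fraction(a_over_L)
```

For ϵ = 10⁻⁶, the corresponding fitted coefficients from Fig. 3(d) (slope −7.40, intercept 0.18) can be passed in instead.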
For small a/L but now with ϕ fixed, we again have that the standard FCM computation requires large M to ensure Δx < a for sufficient resolution of the particles. This results in excessively costly grid computations, including very large memory overheads, which are mitigated in fast FCM by the inclusion of the pairwise correction. Before presenting results from simulations performed using fast FCM, it is important to note that the parameter values provided here are a general guide as to how errors can be controlled and computation times minimised when using fast FCM. Our results were compiled for simulations of random suspensions with different volume fractions performed on one type of device. The performance of fast FCM, and hence the optimal parameter values, will depend on the particle arrangement, as well as on the device on which the computation is to be performed. Thus, users are encouraged to explore how these parameters should be tuned to optimise fast FCM for their specific computation.

Figure 3: (a) The ratio of PTPS for fast FCM to that for standard FCM as a function of ϕ for different a/L and ϵ = 10⁻⁴. Timings are obtained by averaging 50 computations after an initial GPU warm-up period. (b) ϕc as a function of a/L. The solid line shows ϕc = −9.24(a/L) + 0.22, which is determined from a linear fit to the data. (c) The same as (a), but with ϵ = 10⁻⁶. (d) The same as (b), but with ϵ = 10⁻⁶ (linear fit: ϕc = −7.40(a/L) + 0.18).

6.
Simulations using fast FCM

In this section, we employ fast FCM as the hydrodynamic solver for simulations of rigid bodies and flexible filaments. In these simulations, fast FCM is used in a way similar to the immersed boundary method [25] or the rigid multiblob method [15], where the immersed body is discretised into a number of FCM particles, which then experience forces that correspond to the internal stresses experienced by the structure. These forces can arise through constraints, or through a constitutive law that relates structure deformation to the internal stress. As such, this allows us to examine the usefulness and performance of fast FCM as part of a larger, more complex computation.

6.1. Sedimentation of rod suspensions

We first employ fast FCM to simulate the sedimentation of a planar suspension of NR rigid rods. Following [33], the individual rods are constructed from 22 FCM particles stacked as alternating doublets, as shown in Figure 4a. As a result of this construction, the total rod length is l = 14.14a. The position of rod n is Xn, while its orientation is provided by the unit quaternion qn. The simulations are conducted in triply-periodic domains with dimensions L × L × 60a (see again Fig. 4).

Figure 4: (a) Image showing a snapshot from a simulation of a monolayer of 2304 sedimenting rods in a domain of size 960a × 960a × 60a. The rods are constructed from FCM particle doublets as shown in the inset image. (b) Average sedimentation velocity, ⟨Vx⟩ = (Σn Vn · x̂)/NR, over time for simulations with NR = 2304 and L = 960a (area fraction 10.50%, solid line), L = 1920a (2.62%, dashed line), and L = 3840a (0.66%, dotted line).
In the simulations, the rods are subject to gravity and short-ranged, inter-rod repulsive forces. These forces are imposed on the individual FCM particles making up the rods. For gravity, each FCM particle is subject to the constant force F^G = F_0 x̂, yielding the timescale T = ηl/F_0. To prevent the rods from overlapping if they collide, we apply a pairwise barrier force [34] between the FCM particles making up the rods of the form

F^B_{ij} = F_S \left( \frac{4a^2\chi^2 - r_{ij}^2}{4a^2(\chi^2 - 1)} \right)^4 \frac{r_{ij}}{2a}   (70)

if r_{ij} ≤ 2χa, and F^B_{ij} = 0 if r_{ij} > 2χa, where F_S = 44F_0, χ = 1.1, r_{ij} = Y_i − Y_j, and r_{ij} = |r_{ij}|. As a result, we can write the total force and torque on rod n as

f_n = 22 F^G + \sum_{i \in N_n} F^B_i,   (71)

\tau_n = \sum_{i \in N_n} (Y_i - X_n) \times (F^G + F^B_i),   (72)

where F^B_i denotes the total barrier force on particle i and N_n is the set of FCM particle indices that comprise rod n. As the rods are rigid bodies, their motion is governed by

\frac{dX_n}{dt} = U_n,   (73)

\frac{dq_n}{dt} = \frac{1}{2}(0, \Gamma_n) \bullet q_n,   (74)

where U_n and Γ_n are, respectively, the translational and angular velocities of rod n. The symbol ● is the quaternion product, which for quaternions p = (p0, p̃) and q = (q0, q̃) is p ● q = (p0 q0 − p̃ · q̃, p0 q̃ + q0 p̃ + p̃ × q̃). Computing U_n and Γ_n for all n is done by solving

V_i = U_n + \Gamma_n \times (Y_i - X_n),   (75)

V = M^{VF} F,   (76)

f_n = \sum_{i \in N_n} F_i,   (77)

\tau_n = \sum_{i \in N_n} (Y_i - X_n) \times F_i,   (78)

where, as before, V = [V_1^†, V_2^†, ..., V_N^†]^† and F = [F_1^†, F_2^†, ..., F_N^†]^† are the vectors containing the velocities and forces, respectively, on all FCM particles. These equations enforce the condition that the motion of the individual FCM particles is consistent with the rigid body motion of the rods to which they belong.
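The barrier force (70) is simple to implement; a NumPy sketch (argument names are ours) returning the force on particle i due to particle j:

```python
import numpy as np

def barrier_force(Yi, Yj, a, F0, FS_factor=44.0, chi=1.1):
    """Pairwise barrier force of eq. (70) on particle i due to particle j.
    Vanishes identically beyond the cutoff separation 2*chi*a."""
    FS = FS_factor * F0
    rij = np.asarray(Yi, float) - np.asarray(Yj, float)
    r = np.linalg.norm(rij)
    if r > 2.0 * chi * a:
        return np.zeros(3)
    # Quartic envelope: equals 1 at contact (r = 2a) and 0 at the cutoff (r = 2*chi*a)
    scale = ((4 * a**2 * chi**2 - r**2) / (4 * a**2 * (chi**2 - 1.0)))**4
    return FS * scale * rij / (2.0 * a)
```

At contact (r = 2a), the envelope equals one, so the force magnitude is F_S r/(2a) = 44 F_0 along the line of centres.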
In particular, note that f_n and τ_n are given by (71) and (72), respectively, and F must be solved for as part of the problem. In the simulations, we discretise (73) and (74) using the second-order, geometric BDF scheme [35] that preserves the unit length of q_n for all n. The resulting algebraic equations, along with (75)–(78), constitute a nonlinear system, which we must solve to obtain the updated rod positions and orientations, as well as the F that enforces the rigid body constraints. To solve the system, we employ Broyden's method [36] with an initial approximation of the Jacobian based on a diagonal mobility matrix. At each Broyden iteration, fast FCM provides the action of M^{VF} on F to determine V from the current solution. As the tolerance for Broyden's method is set to 10⁻⁴, we also require that fast FCM return the velocities to within ϵ = 10⁻⁴. At the start of the simulations, the rods are positioned on a square lattice and randomly oriented in the plane at z = 30a. As F^G and F^B_i for all i lie in the (x, y)-plane and the rods are symmetric with respect to reflections about this plane, the dynamics of their positions and orientations should remain 2D as the suspension evolves. The small out-of-plane rod motion incurred as a result of numerical error is ignored when the rod positions and orientations are advanced in time. Fig. 5 shows the evolution of a suspension with NR = 7744 and an area fraction of 10.5%. The simulation is run to time t = 4992.8T, and images of the suspension and the rod velocities at times t = 339.6T and t = 4992.8T are provided. Initially, when the particles have random positions and orientations, the average settling speed is below that of a single rod. After a short time, the rods form groups that settle rapidly, while other regions of the suspension where the rod density is lower are seen to move upwards.
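The quaternion product and the orientation equation (74) can be sketched as follows; note that, purely for illustration, we show a simple explicit Euler step with renormalisation, whereas the paper uses the second-order geometric BDF scheme of [35]:

```python
import numpy as np

def quat_mul(p, q):
    """Quaternion product p * q for p = (p0, pv), q = (q0, qv),
    as defined in the text: (p0*q0 - pv.qv, p0*qv + q0*pv + pv x qv)."""
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    w = p0 * q0 - pv @ qv
    v = p0 * qv + q0 * pv + np.cross(pv, qv)
    return np.concatenate(([w], v))

def advance_orientation(q, Gamma, dt):
    """One explicit Euler step of dq/dt = (1/2)(0, Gamma) * q, then renormalise
    so the quaternion stays on the unit sphere (a stand-in for the geometric scheme)."""
    dq = 0.5 * quat_mul(np.concatenate(([0.0], np.asarray(Gamma, float))), q)
    q_new = np.asarray(q, float) + dt * dq
    return q_new / np.linalg.norm(q_new)
```

The renormalisation crudely restores the unit-length property that the geometric BDF scheme preserves by construction.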
Similar dynamics were observed in sedimentation simulations of rod suspensions in 3D periodic domains [37]. As the rapidly falling groups encounter regions moving more slowly, they deform and eventually break up, only to reform again. This periodic process of group formation and breaking continues over the course of the simulation. To understand the dynamics more quantitatively, we perform additional simulations with NR = 2304 and a final time of t = 46676T. We consider L = 960a, 1920a, and 3840a, corresponding to area fractions 10.5%, 2.6%, and 0.66%, respectively. Fig. 4b shows the average sedimentation velocity ⟨Vx⟩ = (Σn Vn · x̂)/NR over time relative to the settling speed W, the sedimentation velocity of a single rod moving along its axis. In all three cases, we see that the velocity initially increases from values below the settling speed of a single particle, corresponding to the initial formation of rod clusters, as seen in Fig. 5. After this initial growth, however, the speed plateaus and fluctuates around an average value, as a result of the clusters continually breaking and reforming. The average plateau speeds for area fractions 0.66% and 2.6% are approximately equal, ⟨Vx⟩/W ≈ 1.15, while the plateau is significantly higher for area fraction 10.5%, where we have ⟨Vx⟩/W ≈ 1.45. Additionally, the time required to reach the plateau value decreases as the area fraction increases.

Figure 5: (a) Snapshot of the simulation with 7744 rods at t = 339.64T.
(b) Image showing the rod speeds at t = 339.64T. (c) Snapshot of the simulation with 7744 rods at t = 4992.81T. (d) Image showing the rod speeds at t = 4992.81T.

6.2. Arrays of filaments

Along with simulating interacting rigid bodies, we perform simulations of arrays of interacting flexible filaments, employing fast FCM as the hydrodynamic solver. Specifically, we investigate the coordinated motion in periodic domains of an array of clamped and tethered follower-force-driven filaments [38, 39, 40, 41]. To do so, we employ the filament model presented in [35] and used previously in [40, 41] to simulate follower-force-driven filaments, which, for the sake of clarity, we describe here in the case of a single filament. The position along the filament centerline is denoted by Y(s, t), which is a function of arclength, s ∈ [0, l), and time, t. The filament has length l and cross-sectional radius a. Additionally, at each point along the filament centerline is the orthonormal basis {t̂(s, t), μ̂(s, t), ν̂(s, t)}, where t̂(s, t) is the unit vector tangent to the centerline,

\frac{\partial Y}{\partial s} = \hat{t}.   (79)

The force and moment balances along the filament are

\frac{\partial \Lambda}{\partial s} + f^H = 0,   (80)

\frac{\partial M}{\partial s} + \hat{t} \times \Lambda + \tau^H = 0,   (81)

where Λ(s, t) and M(s, t) are the internal forces and moments, respectively, on the filament cross-section, and f^H(s, t) and τ^H(s, t) are the hydrodynamic forces and torques per unit length experienced by the filament. While the internal moments are given by the constitutive law

M = K_B \left( \hat{t} \times \frac{\partial \hat{t}}{\partial s} \right) + K_T \left( \hat{\nu} \cdot \frac{\partial \hat{\mu}}{\partial s} \right) \hat{t},   (82)

where K_B and K_T are the bending and twist moduli, respectively, the internal forces exist to ensure that (79) is satisfied. At s = 0, the filament is tethered and clamped, such that Y(0, t) = Y_0 and t̂(0, t) = ẑ.
At s = l, M(l, t) = 0, and

\Lambda(l, t) = -\frac{f K_B}{l^2} \hat{t}(l, t),   (83)

where f is the non-dimensional parameter controlling the magnitude of the follower force. The filament is discretised into NS segments of length Δl, such that segment i has position Y_i and frame {t̂_i, μ̂_i, ν̂_i}, which is determined from the segment quaternion, q_i. Central differencing is applied to (79), (80) and (81), and after multiplying by Δl, we obtain the discrete force and moment balances,

F^C_i + F^H_i = 0,   (84)

T^E_i + T^C_i + T^H_i = 0,   (85)

and the discrete kinematic constraint,

Y_{i+1} - Y_i - \frac{\Delta l}{2}(\hat{t}_i + \hat{t}_{i+1}) = 0.   (86)

In these expressions, T^E_i = M_{i+1/2} − M_{i−1/2} is the elastic torque, F^C_i = Λ_{i+1/2} − Λ_{i−1/2} and T^C_i = (Δl/2) t̂_i × (Λ_{i+1/2} + Λ_{i−1/2}) are the constraint forces and torques, respectively, and F^H_i = f^H_i Δl and T^H_i = τ^H_i Δl are the hydrodynamic force and torque on segment i. Segment velocities and angular velocities are determined by applying the FCM mobility matrix, M, using fast FCM to the vector of hydrodynamic forces and torques,

\begin{pmatrix} V \\ W \end{pmatrix} = M \begin{pmatrix} F \\ T \end{pmatrix},   (87)

where V and W are, respectively, 3NS × 1 vectors containing the components of the translational and angular velocities of all segments, and F = −[(F^H_1)^†, (F^H_2)^†, ..., (F^H_{NS})^†]^† and T = −[(T^H_1)^†, (T^H_2)^†, ..., (T^H_{NS})^†]^†. The hydrodynamic radius of each segment is a. Using the translational and angular velocities, the segment positions and quaternions are obtained by integrating in time the differential-algebraic system

\frac{dY_i}{dt} = V_i,   (88)

\frac{dq_i}{dt} = \frac{1}{2}(0, \Omega_i) \bullet q_i,   (89)

Y_{i+1} - Y_i - \frac{\Delta l}{2}(\hat{t}_i + \hat{t}_{i+1}) = 0,   (90)

for each i. As done in the rigid rod simulations, we discretise these equations in time using a second-order, geometric BDF scheme [35].
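The kinematic constraint (86) is one of the residuals the nonlinear solver drives to zero; a minimal NumPy sketch (our own helper, with the tangent vectors passed in directly rather than computed from the segment quaternions):

```python
import numpy as np

def kinematic_residuals(Y, t_hat, dl):
    """Residuals of eq. (86): Y_{i+1} - Y_i - (dl/2)(t_i + t_{i+1}) for each link,
    returned as an (NS-1, 3) array. Zero residuals mean the segment chain is
    consistent with the midpoint tangent rule."""
    Y = np.asarray(Y, float)
    t_hat = np.asarray(t_hat, float)
    return Y[1:] - Y[:-1] - 0.5 * dl * (t_hat[:-1] + t_hat[1:])
```

A straight, vertically oriented filament with uniform spacing Δl satisfies the constraint exactly.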
The resulting equations, along with the kinematic constraints, form a nonlinear system that we then solve iteratively using Broyden's method [36]. As filaments interact exclusively through the surrounding fluid, the model is easily extended to many filaments by using fast FCM to couple the motion of all segments of all filaments when applying the mobility matrix. In our simulations, we have NS = 20 for each filament and set the segment length Δl = 2.2a, so the total filament length is l = 44a. The strength of the follower force is f = 220. In previous work [40, 41] using the RPY tensor with a no-slip surface [42] for the segment mobility, this value of f results in dynamics where the filament beats in a plane. When many such filaments are in an array, their beating becomes aligned and synchronised. Using fast FCM, we can explore how these dynamics might change under periodic boundary conditions and in the absence of a no-slip surface, where the fluid is able to flow through the array. We examine how the filament spacing and the number of filaments affect the dynamics. To begin, we first consider a single filament in the periodic domain, corresponding to an infinite array of filaments with synchronised motion. The base of the filament is tethered at the origin, and the orientation of the corresponding segment is t̂_1 = ẑ. The dimensions of the periodic domain are L × L × H, with L = 2.27l fixed, while H varies from 2.27l to 45.45l. Initially, the filament is straight and oriented vertically, i.e. t̂_i = ẑ for all i, and a random perturbation is introduced to initiate filament motion. To ensure that we observe the asymptotically stable state, we simulate each system for at least 10000T_0, where T_0 is the period of filament motion. The resulting filament behaviour and its variation with H are shown in Fig. 6.
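The Broyden iteration used for both the rod and filament systems can be sketched generically as follows (a minimal, self-contained implementation of our own with a diagonal initial Jacobian, not the paper's code, which applies the mobility via fast FCM at each iteration):

```python
import numpy as np

def broyden_solve(f, x0, J0_diag, tol=1e-4, max_iter=100):
    """Solve f(x) = 0 with Broyden's (good) method, starting from a diagonal
    approximation of the Jacobian, J0 = diag(J0_diag)."""
    x = np.asarray(x0, float)
    J = np.diag(np.asarray(J0_diag, float))
    fx = f(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(J, -fx)          # quasi-Newton step with current Jacobian
        x_new = x + dx
        fx_new = f(x_new)
        df = fx_new - fx
        # Rank-one Broyden update: J <- J + (df - J dx) dx^T / (dx . dx)
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        x, fx = x_new, fx_new
    return x
```

The rank-one update is the same kind of correction whose cost appears in the performance discussion of Section 6.3.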
We find that for low values of H < 29.54l, we recover the planar beating observed in [40, 41], with beating along one of the lattice directions. For H > 29.54l, however, instead of planar beating, we observe filament whirling, which is typically associated with lower values of f. We probe the coordination further by now having NF = 100 filaments within the periodic domain. As these filaments can now move independently, the final state that emerges can be different from when NF = 1 and synchronisation of the lattice is imposed. To keep the filament spacing the same as in the single-filament case, we have L = 22.7l and space the filaments equally in the x- and y-directions on a square lattice. We vary H between 2.84L and 45.45L and allow the simulations to run for more than 10000T_0 to be confident that the final asymptotic state is reached. The different states that emerge, and the range of H for which they are realised, are shown in Fig. 7. We also provide a visual representation of the filament tip trajectories over one period, as viewed from above, for each state. We find the synchronised whirling state for H > 29.54L and the synchronised beating state for 20.0L < H < 29.54L. These are identical to the solutions that we obtained when there was only one filament in the domain. At low values of H, however, we find that two different states emerge. For 12.00L < H < 20.00L, planar beating persists at the individual level and beating occurs in the same plane for all filaments; however, their motion is no longer synchronised across the array. Instead, synchronised motion is observed for filaments along rows in the beat direction, and phase differences develop between the different rows. Finally, at the lowest values, where 2.84L < H < 12.00L, planar beating is lost and the individual filaments are found to move more erratically, often changing the direction of tip motion.
At the collective level, however, we do find synchronised movement to occur in subsections, or patches, of the array.

Figure 6: Dynamics of a single filament in the periodic domain for different H. Below H = 29.54L, the filament exhibits planar beating (∎), while above H = 29.54L whirling ( ) is observed.

Figure 7: Coordinated motion of 100 filaments. The black curves indicate the projection in the xy-plane of the filament tip trajectories over one period. The red arrows depict the tip velocities in the final snapshot. Along with the synchronised beating (∎) and whirling ( ) exhibited in the single-filament case, we also observe phase-shifted beating in rows (▼), and patches of coordinated, non-planar motion (☀).

6.3. Performance

Before concluding, we examine the computational performance of fast FCM for the rod and filament simulations presented above. Fig. 8 shows the average wall time required per Broyden iteration as the number of rods or filaments increases while keeping the domain size fixed. In both cases, as expected, the time required for fast FCM to apply the mobility matrix scales linearly with the number of rods or filaments. What we do see, again for both cases, is that with fast FCM, the computational cost of the hydrodynamic solver is reduced to approximately one third of the total cost per iteration. This is a marked change with respect to simulations using standard FCM, where the hydrodynamic solver dominates the computational cost. In our rod simulations, for example, portions of the computation involving rank-one updates to the Jacobian and the arithmetic operations of updating the rod positions and orientations are
Figure 8: Wall time per Broyden iteration and iterations per time step for (a) rod simulations with L = 3840a using a grid of size 512 × 512 × 8 and Rc/Σ = 6.0, and (b) filament simulations in a domain of dimension 5760a × 5760a × 360a using a grid of size 256 × 256 × 16 and constant Rc/Σ = 6.0. In both plots, the solid black lines show the wall time per iteration, the dotted black lines show the wall time for fast FCM per iteration, and the dashed blue lines show the average number of Broyden iterations required at each timestep.

comparable in cost to applying the mobility matrix using fast FCM. It is important to note that in our implementation, updating the Jacobian and updating the positions and orientations are both performed on the CPU rather than on the GPU, and memory transfer between the CPU and the device can also affect overall wall times. We see, therefore, that the speed and efficiency of fast FCM force us to re-evaluate the algorithms, and their implementations, for the non-hydrodynamic aspects of the computation. In particular, questions as to whether more aspects of the computation can be performed on the GPU need to be considered. Along with computational time, we also note the reduction in memory cost associated with fast FCM. For instance, the NF = 100 filament simulations with H = 22.72L presented in the previous section would require a grid of approximately 1800 × 1800 × 1800 points if standard FCM were used. Storing the flow field and its Fourier transform on such a grid would require on the order of 280GB of RAM. Even if such a memory requirement could be met, fetching data from memory for gridding and interpolation would compromise computational efficiency. For fast FCM, however, the typical grid used in the H = 22.72L simulation contains 128 × 128 × 128 points, requiring less than 1GB of RAM, which can easily be accommodated on a single modern GPU.

7.
Conclusions and outlook

In this paper, we have introduced fast FCM to accelerate the application of the FCM mobility matrix in periodic domains. The method relies on decoupling the grid spacing from the FCM particle hydrodynamic radius by replacing the FCM Gaussian kernel with one of larger width. The resulting particle velocities from the grid-based computation are then corrected pairwise using analytical expressions to obtain values accurate to within a user-specified tolerance. We show that our specific choice of kernel ensures positive splitting, an important feature for using fast FCM in Brownian simulations, where we expect to compute random particle displacements using a combination of fluctuating FCM [31, 43] for the grid-based computation and the Lanczos algorithm for the pairwise corrections, as in PSE [44]. We also show that fast FCM can be extended to include torques on the FCM particles. We perform a numerical calibration of the method to obtain the optimal values of the modified envelope width and correction cut-off radius for a given tolerance. Using our GPU-based implementation on an NVIDIA RTX 2080Ti, we can achieve approximately 7 × 10⁶ PTPS. By comparing with the standard FCM computation, we find that fast FCM can, in many cases, accelerate the application of the FCM hydrodynamic mobility matrix by an order of magnitude. The largest gains are achieved for large-scale simulations at low volume fractions, where there are many particles that are on average far apart. In these situations, fast FCM also avoids incurring the large memory costs associated with performing a large grid-based computation. Finally, we showed that fast FCM can readily be applied as part of a larger, more complex computation for many interesting problems, including rigid body suspension dynamics and the dynamics of flexible filament arrays. There are several interesting directions in which fast FCM can be further extended.
While our general approach to accelerating FCM is similar in spirit to spectral Ewald summation [21] and Positively Split Ewald [22], a key difference is that for fast FCM, the splitting is performed through a modified kernel in real space, rather than through a splitting function in Fourier space. Thus, we are in a position to move beyond periodic domains and consider other boundary conditions, such as no-slip channel walls. Indeed, this kind of approach has been pursued [12, 30] for Stokeslets (point forces) in channels, using a combination of Fourier and finite-difference methods for the grid-based solver. Enabling fast FCM for channels would require determining mobility corrections that include the hydrodynamic effects of the boundaries. This would likely need to be done numerically, to build up a look-up table that could then be interpolated, or fit to a convenient functional form that could then be evaluated. We have extended fast FCM to accommodate torques applied to the FCM particles, but it remains to also include the stresslets, which are important in the context of suspension rheology [29] and active particle dynamics [45]. Additionally, it will be important to show that the torque extension yields positive splitting, though we note that the full FCM mobility including torques is itself positive definite, due to the positive definiteness of the Stokes operator and the FCM spreading and interpolation operators being adjoints of one another. Finally, a nice feature of FCM and other immersed boundary methods is the inclusion of the fluid degrees of freedom, allowing for the seamless consideration of other physics, such as interaction with chemical fields [46] or polymer stresses [47]. The use of the modified kernel for the grid-based computation means that we no longer compute the correct flow field.
Developing correction schemes for the flow to enable simulation in viscoelastic fluids, for example, as well as the other advancements mentioned here, will form the foundation of future work, alongside the application of these results to interesting problems involving microscale fluid-structure interactions.

Appendix A. FCM mobility

In this section, we derive (14), the expression for the FCM pairwise mobility matrix, $M^{VF}_{nm}$, that relates the force on particle $m$ to the velocity of particle $n$. The flow due to particle $m$ is given by the solution to
$$-\eta\nabla^2 u_m + \nabla p_m = F_m\,\Delta_m(x;\sigma), \qquad \nabla\cdot u_m = 0. \quad (A.1)$$
In Fourier space, this flow is
$$\hat{u}_m(k) = \hat{L}^{-1}(k)\,\hat{\Delta}_m(k;\sigma)\,F_m, \quad (A.2)$$
where $k$ is the wavevector and $\hat{L}^{-1}(k)$ is the inverse Stokes operator in Fourier space. Taking the inverse Fourier transform, we have
$$u_m(x) = \left[\int \hat{L}^{-1}(k)\,\hat{\Delta}_m(k;\sigma)\exp(ik\cdot x)\,d^3k\right]F_m. \quad (A.3)$$
Note that the integral in the square brackets is equivalent to $S(x - Y_m;\sigma)$ in (7). Recognising that $\Delta_m(x;\sigma) = \Delta(x - Y_m;\sigma)$, we have $\hat{\Delta}_m(k;\sigma) = \exp(-ik\cdot Y_m)\,\hat{\Delta}(k;\sigma)$. Thus,
$$u_m(x) = \left[\int \hat{L}^{-1}(k)\,\hat{\Delta}(k;\sigma)\exp\big(ik\cdot(x - Y_m)\big)\,d^3k\right]F_m. \quad (A.4)$$
The contribution of particle $m$ to the velocity of particle $n$ is
$$V_{nm} = \int u_m(x)\,\Delta_n(x;\sigma)\,d^3x, \quad (A.5)$$
which, using (A.4), becomes
$$V_{nm} = \left[\int\left(\int \hat{L}^{-1}(k)\,\hat{\Delta}(k;\sigma)\exp\big(ik\cdot(x - Y_m)\big)\,d^3k\right)\Delta_n(x;\sigma)\,d^3x\right]F_m. \quad (A.6)$$
Performing the integration first over $x$, and recognising that
$$\int \exp(ik\cdot x)\,\Delta_n(x;\sigma)\,d^3x = (2\pi)^3\exp(ik\cdot Y_n)\,\hat{\Delta}(k;\sigma), \quad (A.7)$$
we have
$$V_{nm} = \left[(2\pi)^3\int \exp\big(ik\cdot(Y_n - Y_m)\big)\,\hat{L}^{-1}(k)\,\big[\hat{\Delta}(k;\sigma)\big]^2\,d^3k\right]F_m. \quad (A.8)$$
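The step that follows in the derivation rests on the fact that squaring the transformed envelope corresponds, in real space, to convolving two width-$\sigma$ Gaussians, which yields a single Gaussian of width $\sigma\sqrt{2}$. A quick 1D numerical sanity check of this identity (the grid and tolerance are ours):

```python
import numpy as np

sigma = 0.7
x = np.linspace(-12.0, 12.0, 4001)   # wide enough that the tails are negligible
dx = x[1] - x[0]

def gauss(x, s):
    # normalised 1D Gaussian of width s
    return np.exp(-x**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

# numerical convolution of two width-sigma Gaussians (Riemann sum)
conv = np.convolve(gauss(x, sigma), gauss(x, sigma), mode="same") * dx
# analytic prediction: a single Gaussian of width sigma*sqrt(2)
pred = gauss(x, sigma * np.sqrt(2.0))
err = np.max(np.abs(conv - pred))
```

The same computation in 3D factorises over the coordinate directions, so the 1D check already captures the identity used for the isotropic FCM envelope.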
Finally, since $(2\pi)^3\big[\hat{\Delta}(k;\sigma)\big]^2 = (2\pi)^{-3}\exp(-k^2\sigma^2) = \hat{\Delta}(k;\sigma\sqrt{2})$,
$$V_{nm} = \left[\int \exp\big(ik\cdot(Y_n - Y_m)\big)\,\hat{L}^{-1}(k)\,\hat{\Delta}(k;\sigma\sqrt{2})\,d^3k\right]F_m. \quad (A.9)$$
The integral is the FCM pairwise mobility matrix and, further, we recognise that it is simply the inverse Fourier transform of the Stokes flow due to the force distribution $\Delta(x;\sigma\sqrt{2})$. Thus,
$$M^{VF}_{nm} = \int \exp\big(ik\cdot(Y_n - Y_m)\big)\,\hat{L}^{-1}(k)\,\hat{\Delta}(k;\sigma\sqrt{2})\,d^3k \quad (A.10)$$
$$= S(Y_n - Y_m;\sigma\sqrt{2}). \quad (A.11)$$

Appendix B. Self corrections

We provide below the self-correction matrices for when $r = 0$:
$$\lim_{r\to 0}\big(M^{VF}_{nm} - \tilde{M}^{VF}_{nm}\big) = \left(\frac{1}{6\pi\eta a} - \frac{1}{6\pi\eta(\Sigma\sqrt{\pi})} + \frac{\sigma^2 - \Sigma^2}{12\eta(\Sigma\sqrt{\pi})^3} - \frac{(\sigma^2 - \Sigma^2)^2}{32\eta\Sigma^5\pi^{3/2}}\right)I \quad (B.1)$$
$$\lim_{r\to 0}\big(M^{\Omega T}_{nm} - \tilde{M}^{\Omega T}_{nm}\big) = \left(\frac{1}{8\pi\eta a^3} - \frac{1}{48\eta(\Sigma\sqrt{\pi})^3}\right)I \quad (B.2)$$

Appendix C. CUDA parallelisation

We provide here the CUDA-specific parallelisation strategy for the fast FCM algorithm described in Section 4.

1. For Steps 1 and 2, we assign one thread per particle to compute the hash value and to update the cell list. To sort the particles, we employ the parallelised radix sort function in the CUDA cub library.

2. For applying $\tilde{J}$ and $\tilde{J}^\dagger$, we adopt a block-per-particle (BPP) approach. This is one of the two main approaches to particle-based gridding with CUDA, the other being thread per particle (TPP). For TPP, each thread is assigned one particle and is responsible for spreading the force and interpolating the velocity for that particle over the $M_G^3$ associated grid points. For BPP, each block is assigned one particle and each thread within that block is responsible for the particle's spreading and interpolation at a single grid point.
The advantage of BPP is that there is sufficient shared memory to store the $3M_G$ values needed to reconstruct the separable kernel function at the grid points; these values can be accessed by all threads. We use atomic operations when performing spreading to avoid race conditions (different particles attempting to write to the same grid point). For gathering, we first store the kernel-weighted grid velocities in the register memory of the individual threads and, once all velocities are accounted for, perform a reduction operation to sum the values from all threads and obtain the particle velocity. This eliminates the need for expensive atomic operations and, as a result, the time for gathering is one third of that for spreading. A more detailed discussion of BPP and TPP gridding algorithms can be found in [48]. In our implementation of fast FCM, BPP always outperforms TPP.

3. To apply $L^{-1}$, we utilise the parallelised CUDA cuFFT toolkit to perform the FFT and IFFT, assigning one thread per grid point for the computation.

4. Finally, for the pairwise correction, each thread is assigned one particle; it checks the distances to the other particles in the relevant cells and applies the correction when required.
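The cell-list search underlying Step 4 can be prototyped in a few lines of serial Python before being mapped to one-thread-per-particle CUDA. The sketch below (names are ours, not taken from the paper's GPU code) bins particles into cells of side at least $r_c$ and tests only the 27 neighbouring cells of each occupied cell, reproducing the brute-force pair list at a cost linear in the particle number for bounded density.

```python
import numpy as np
from collections import defaultdict

def cell_list_pairs(Y, L_box, r_c):
    """All pairs with periodic minimum-image distance below r_c, found by
    binning particles into cells of side >= r_c and scanning the 27
    neighbouring cells of each occupied cell."""
    n_cells = max(1, int(L_box // r_c))
    cell_w = L_box / n_cells
    cells = defaultdict(list)
    for i, y in enumerate(Y):
        cells[tuple(int(c) % n_cells for c in y // cell_w)].append(i)
    pairs = set()
    for (cx, cy, cz), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % n_cells,
                          (cy + dy) % n_cells,
                          (cz + dz) % n_cells)
                    for i in members:
                        for j in cells.get(nb, ()):
                            if i < j:
                                r = Y[i] - Y[j]
                                r -= L_box * np.round(r / L_box)  # minimum image
                                if r @ r < r_c * r_c:
                                    pairs.add((i, j))
    return pairs

rng = np.random.default_rng(2)
L_box, r_c = 10.0, 1.5
Y = rng.uniform(0.0, L_box, (200, 3))
found = cell_list_pairs(Y, L_box, r_c)
```

On the GPU, the outer loop over particles becomes the thread index, and the sorted cell list replaces the hash map, but the neighbour-scan and minimum-image logic are the same.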