yilunzhao committed
Commit ae1b453 · verified · 1 Parent(s): ea5e962

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. 20240318/2204.04236v3.json +0 -0
  2. 20240318/2208.00726v2.json +332 -0
  3. 20240318/2208.03886v5.json +18 -0
  4. 20240318/2209.05208v3.json +662 -0
  5. 20240318/2209.12605v2.json +0 -0
  6. 20240318/2210.05279v2.json +597 -0
  7. 20240318/2303.17790v2.json +42 -0
  8. 20240318/2305.11490v5.json +0 -0
  9. 20240318/2305.19115v2.json +547 -0
  10. 20240318/2306.03000v3.json +0 -0
  11. 20240318/2306.09860v2.json +0 -0
  12. 20240318/2306.11035v2.json +569 -0
  13. 20240318/2306.11044v2.json +0 -0
  14. 20240318/2307.06212v3.json +0 -0
  15. 20240318/2307.11714v3.json +415 -0
  16. 20240318/2308.07233v3.json +0 -0
  17. 20240318/2308.07553v2.json +380 -0
  18. 20240318/2308.08305v2.json +528 -0
  19. 20240318/2308.13137v3.json +0 -0
  20. 20240318/2309.00464v2.json +105 -0
  21. 20240318/2309.08249v3.json +0 -0
  22. 20240318/2309.10668v2.json +825 -0
  23. 20240318/2309.14184v2.json +333 -0
  24. 20240318/2310.03173v2.json +0 -0
  25. 20240318/2310.04152v2.json +0 -0
  26. 20240318/2310.05155v2.json +0 -0
  27. 20240318/2310.05773v2.json +0 -0
  28. 20240318/2310.08044v2.json +0 -0
  29. 20240318/2310.12486v2.json +219 -0
  30. 20240318/2310.14402v2.json +220 -0
  31. 20240318/2310.17513v3.json +0 -0
  32. 20240318/2311.08146v2.json +184 -0
  33. 20240318/2311.18605v3.json +0 -0
  34. 20240318/2312.09094v2.json +471 -0
  35. 20240318/2312.15045v3.json +0 -0
  36. 20240318/2312.15736v2.json +0 -0
  37. 20240318/2401.06604v3.json +715 -0
  38. 20240318/2401.10253v2.json +304 -0
  39. 20240318/2401.11969v3.json +0 -0
  40. 20240318/2401.12873v3.json +0 -0
  41. 20240318/2403.01962v2.json +131 -0
  42. 20240318/2403.05822v2.json +0 -0
  43. 20240318/2403.05828v2.json +155 -0
  44. 20240318/2403.06467v2.json +0 -0
  45. 20240318/2403.08282v2.json +745 -0
  46. 20240318/2403.09195v2.json +275 -0
  47. 20240318/2403.09473v2.json +99 -0
  48. 20240318/2403.09701v2.json +444 -0
  49. 20240318/2403.10040v2.json +0 -0
  50. 20240318/2403.11377v1.json +0 -0
20240318/2204.04236v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2208.00726v2.json ADDED
@@ -0,0 +1,332 @@
+ {
+ "title": "Fair Division of Multi-layered Cakes",
+ "abstract": "We consider multi-layered cake cutting in order to fairly allocate multiple divisible resources (layers of cake) among a group of agents under two constraints: contiguity and feasibility. We first introduce a new computational model for a multi-layered cake, named \u201ca pair of knives\u201d. Then, we show the existence of an exact multi-allocation for two agents and two layers using the new computational model. We give a procedure for computing a feasible and contiguous proportional multi-allocation over a three-layered cake for more than three agents. Finally, we develop a technique for computing proportional multi-allocations for any number of agents over 2^a layers, where a is any positive integer.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "There are several instances of time scheduling in our daily lives where we arrange our schedules so that we can finish our daily tasks. Consider a group of university students who want to enjoy several facilities, such as a seminar lecture or an indoor game. The two facilities have the same opening and closing hours. Everyone in the group is willing to enjoy both facilities, but each has a distinct preferred time period for each one.\nIn simple terms, the problem of dividing a cake is a metaphor for dividing a shareable resource fairly among agents with different preferences. The cake-cutting problem is a central topic in the theory of fair division [1, 2, 3, 4], and it has received a significant amount of attention in mathematics, economics, political science, and computer science [5, 6, 7, 8, 9, 10]. It is hard to give each agent a fair share of the cake.\nEnvy-freeness and proportionality are the most important criteria for a fair allocation in the cake-cutting literature. In an envy-free allocation, every agent prefers the pieces they are allocated to any other agent\u2019s allocation. In a proportional allocation, each agent receives at least 1/n of his value for the whole cake, where n is the number of agents. When all of the cake has been divided, envy-freeness entails proportionality. It is well known that envy-free allocations always exist [11], even if we require that each agent receive a connected piece [12, 13]. In addition to existence, the algorithmic design aspect of the problem has also long been studied [14, 15, 16, 17, 18, 19]. For any number of agents, we are able to compute a proportional allocation as well as an envy-free allocation.\nWe cannot treat the problem of enjoying several facilities in the example above as a single cake-cutting problem. We have to divide the two time intervals independently so that each agent can enjoy both facilities. The issue is then how to fairly divide each facility\u2019s time interval in accordance with the agents\u2019 preferences, so that every student can enjoy every facility and the time intervals allotted to a student for different facilities never overlap. Adopting these constraints, Hosseini et al. [20] initiated the multi-layered cake cutting problem, which models exactly this situation. We consider each facility as a divisible heterogeneous layer of a multi-layered cake. Every student has an additive preference, called a valuation, over disjoint (non-overlapping) intervals. Note that the valuation of the same part of the cake can be very different for different students.\nA division of a multi-layered cake is feasible if no student\u2019s allocated time intervals overlap. The division is contiguous if each student gets a contiguous time interval for each facility. Our goal in multi-layered cake cutting is to find multi-allocations that are fair while also meeting the constraints of feasibility and contiguity."
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Our results",
+ "text": "In Section 3, we show the existence of an exact feasible multi-allocation for two agents using the idea of the Austin moving-knife procedure. In Section 4, we show that there exists a proportional multi-allocation that is feasible and contiguous for three layers and any number of agents. We also prove the existence of a proportional multi-allocation that is feasible and contiguous for 2^a layers and any number of agents n with n >= 2^a, where a is a positive integer."
+ },
+ {
+ "section_id": "1.2",
+ "parent_section_id": "1",
+ "section_name": "Related Work",
+ "text": "The cake-cutting problem is a central topic in fair division. In recent years, it has been extensively studied in the economics, mathematics, and computer science literature [21, 22, 23, 24, 25]. In order to fairly divide multiple divisible resources among a group of agents, Hosseini et al. [20] initiated the study of multi-layered cake cutting. They show that there exists an envy-free multi-allocation that is feasible and contiguous for two layers and three agents with two types of preferences. Igarashi and Meunier [26] show, using combinatorial topology, that an envy-free multi-allocation that is feasible and contiguous exists when the number of agents is a prime power and the number of layers is at most the number of agents. A few related papers study settings where agents can simultaneously benefit from all allocated pieces with no constraints [27, 28, 29]."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Preliminaries",
+ "text": "We adopt the multi-layered cake cutting model that Hosseini et al. [20] developed. An instance of the problem specifies the number of agents n and the number of layers m. The setting includes a set of agents N = {1, ..., n} and a set of layers [m] = {1, ..., m}. An m-layered cake is denoted by C = (C_1, ..., C_m), where each C_j is an interval with C_j = [0, 1] for each j in [m]. We refer to j as the j-th layer and to C_j as the j-th layer cake.\nCorresponding to each j-th layer, each agent i is endowed with a non-negative integrable density function f_i^j over [0, 1]. The valuation function of agent i for the j-th layer is a function V_i^j representing the preference of the agent over different parts of C_j. If A is a piece of cake of the j-th layer, then V_i^j(A) represents the value assigned to it by agent i, i.e., V_i^j(A) is the integral of f_i^j over A. We normalize the total valuation of each agent over the entire cake to 1, i.e., the sum of V_i^j(C_j) over all j in [m] equals 1.\nA sequence A = (A^1, ..., A^m) of pieces of each layer of a multi-layered cake is called a layered piece. A layered piece A is said to be contiguous if each A^j is a contiguous piece of the layer C_j. A layered piece A is said to be non-overlapping if any two pieces from different layers never overlap, i.e., for any two different layers j and j\u2032 and for any intervals I in A^j and I\u2032 in A^{j\u2032}, either I and I\u2032 are disjoint or their intersection is a single point that is an endpoint of both I and I\u2032. We assume the valuation functions of each agent are additive over layers and write V_i(A) as the sum of V_i^j(A^j) over all j in [m], where V_i is the valuation function of agent i over the entire cake.\nLet A and B be two layered pieces. If V_i(A) >= V_i(B), then it is said that agent i weakly prefers the layered piece A to the layered piece B. A multi-allocation A = (A_1, ..., A_n) is a partition of the multi-layered cake where each A_i is a layered piece of the cake that is assigned to agent i in N.\nCorresponding to a multi-allocation A and an agent i, the valuation of agent i is V_i(A_i). A multi-allocation A is said to be complete if the whole cake is allocated, i.e., the union of the pieces A_i over all i in N is C. A multi-allocation is said to be\ncontiguous if for each i in N, A_i is contiguous;\nfeasible if for each i in N, A_i is non-overlapping."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Fairness notions",
+ "text": "Definition 1. A multi-allocation A = (A_1, ..., A_n) is said to be exact if for any two agents i, j in N, V_i(A_j) = 1/n.\n\nDefinition 2. A multi-allocation A is said to be proportional if for any agent i in N, V_i(A_i) >= 1/n.\n\nDefinition 3. A multi-allocation A is said to be envy-free if for any two agents i, j in N, V_i(A_i) >= V_i(A_j)."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "The m (even)-layered cut",
+ "text": "Hosseini et al. [20] define a partition of a multi-layered cake into two diagonal pieces that satisfies the feasibility constraint. For an m-layered cake with m even and any x in [0, 1], they define a pair of diagonal pieces (L_x, R_x):\nL_x consists of all subintervals of type [0, x] of each layer j <= m/2 and subintervals of type [x, 1] of each layer j > m/2.\nR_x consists of all subintervals of type [x, 1] of each layer j <= m/2 and subintervals of type [0, x] of each layer j > m/2.\nThe merge of L_x is an (m/2)-layered cake whose j-th layer piece is defined as the union of the subinterval [0, x] of layer j and the subinterval [x, 1] of layer j + m/2, for j = 1, ..., m/2. The merge of R_x is defined similarly. We use L and R in place of L_x and R_x, respectively, when the cut point x is clear from the context."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Computational model",
+ "text": "In the cake-cutting problem, the Robertson\u2013Webb query model plays an important role [3]. Following the Robertson\u2013Webb query model, Hosseini et al. [20] introduce a new computational model for the multi-layered cake cutting problem with two types of queries: a short knife and a long knife.\n1. Short knife:\nShort evaluation query: given an interval [x, y] of the j-th layer of an m-layered cake, the query returns the valuation V_i^j([x, y]) of agent i over that interval of the j-th layer cake.\nShort cut query: given a point x on the j-th layer cake and a value r, the query returns the minimum point y on the j-th layer for which V_i^j([x, y]) = r.\n2. Long knife:\nLong evaluation query: given a point x, the query returns the valuation of agent i for the diagonal piece L_x.\nLong cut query: given a value r, the query returns the minimum point x for which the valuation of agent i for the diagonal piece L_x equals r, if such a point exists."
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Switching point",
+ "text": "If there exists a point x in the interval [0, 1] that divides the entire m-layered cake into a pair of diagonal pieces L_x and R_x such that both have the same valuation for a particular agent i, i.e., V_i(L_x) = V_i(R_x), then the point x is called a switching point over the entire cake for agent i. Hosseini et al. [20] first established the existence of such a point.\n\nA set of agents S, a subset of N, is said to be a majority set if |S| >= n/2. Suppose A and B are two layered pieces, and S is a majority set. If every agent i in S weakly prefers A to B, we say that a majority weakly prefers A to B. A point x in [0, 1] is said to be a majority switching point over an m-layered cake if a majority weakly prefers L_x to R_x and a majority weakly prefers R_x to L_x.\nHosseini et al. [20] show that there exists a majority switching point over an m-layered cake for any number of agents, where m is even."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Exact multi-layered cake cutting",
+ "text": "We analyze the challenge of attaining a complete exact multi-allocation for a pair of agents on a two-layered cake. The Austin moving-knife procedure for two agents provides the fundamental concept needed for proving the existence of an exact multi-allocation. The intermediate value theorem is the primary mathematical tool used by the Austin moving-knife procedure [30]. Simply speaking, we reach our goal by continuously moving a pair of knives (defined below), taking advantage of the intermediate value theorem.\n\nWe provide a particular approach for partitioning the layered cake in order to meet the non-overlapping criterion when cutting it. While showing the existence of an exact multi-allocation, we take advantage of this partition.\n\nThe m (even)-layered cut. We define a partition for an m-layered cake that satisfies the non-overlapping constraint, where m is an even number. For any two points x and y in the interval [0, 1] with x <= y, we define a pair of pieces: one consists of the subintervals [x, y] of each layer j <= m/2 together with the subintervals [0, x] and [y, 1] of each layer j > m/2; the other is its complement.\nFigure 2: Example of the partition induced by a pair (x, y) for a six-layered cake.\nConsider the m-layered cut where m equals 2. With a 2-layered cut, a 2-layered cake is divided into two portions determined by the pair (x, y). When x = 0, the cut coincides with the diagonal pieces L_y and R_y; when y = 1, it coincides with L_x and R_x.\nFigure 3: Examples of the partitions for two pairs (x, y).\nWe propose a query model that is comparable to the long knife that Hosseini et al. [20] described for cutting multi-layered cakes.\nComputational model. We propose the pair of knives query, which takes its cue from the Austin moving-knife procedure, to demonstrate the existence of exact multi-allocations.\n\nPair of knives: Pair evaluation query: for any pair of points (x, y) with x <= y, the query returns the valuation of agent i for the piece enclosed by the pair of knives. Pair cut query: for any given value r, the query returns a pair of points (x, y) with x <= y for which the valuation of agent i for the enclosed piece equals r.\n\nSuppose there are two knives with the designations k_1 and k_2, respectively. If x denotes where knife k_1 is located and y indicates where knife k_2 is located, then the two knives k_1 and k_2 are referred to as a pair of knives.\n\nSimilarity between the pair of knives and the long knife: a pair of knives is comparable to a long knife when one of the knives is positioned at one of the unit interval\u2019s endpoints and the other is positioned in its interior.\n\nNow we show the existence of an exact multi-allocation over any two-layered cake for two agents.\nTheorem. There exists an exact complete multi-allocation over any two-layered cake for two agents that satisfies the feasibility condition.\nProof. The proof follows the idea of the Austin moving-knife procedure. In the Austin moving-knife procedure, we initially move two knives over a single-layered cake from positions 0 and z, where the point z divides the cake into two equal-valued pieces with respect to agent 1, so that the value of the piece between the two knives is always one half with respect to agent 1. The movement of the two knives comes to an end at points z and 1, respectively, since the point z divides the cake into two equal-valued portions with regard to agent 1. When the second agent believes the value of the piece between the two knives is one half, he orders the movement of the knives to stop. The intermediate value theorem guarantees that this situation occurs.\n\nWe begin to move the knives k_1 and k_2 from locations 0 and s in a manner similar to the Austin moving-knife procedure, such that the value of the enclosed piece is one half for agent 1, where s is a switching point for agent 1. We continuously move these two knives such that the enclosed piece remains valued at one half with respect to agent 1. Given that the enclosed piece is still valued at one half with regard to agent 1, the terminal locations of the knives k_1 and k_2 are at points s and 1, respectively.\nWhile we continuously move these two knives, keeping the value of the enclosed piece always one half with regard to agent 1, agent 2 can at some moment say \u201cstop\u201d when he thinks the value of the enclosed piece is one half. The intermediate value theorem implies that this situation occurs.\nFigure 4: When the exactness happens for a pair of points (x, y) with x <= y.\n\u220e"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Proportional multi-layered cake cutting",
+ "text": "Igarashi and Meunier [26] use combinatorial topology to show that a proportional multi-allocation exists that is feasible and contiguous for any number of layers and any number of agents. Our main goal in this study is to demonstrate the existence of fair multi-allocations using the computational model that Hosseini et al. [20] proposed. Using the cut and eval queries proposed by Hosseini et al., we demonstrate the computation of a proportional multi-allocation that is feasible and contiguous for three layers and any number of agents. In their work, Hosseini et al. [20] leave this as an open question.\n\nWe use the following lemma to prove Theorem 3.\nLemma 2. Suppose that the number of layers m is even. Take any agent i and any value r that lies between V_i(L) and V_i(R). Then, there exists a point x such that agent i values L_x exactly at r, i.e., V_i(L_x) = r. In particular, a switching point for agent i always exists [20].\nTheorem 3. A proportional complete multi-allocation that is feasible and contiguous exists for three layers and three agents.\nProof. Assume that C is a three-layered cake, and that V_1, V_2, and V_3 are the valuation functions of agents 1, 2, and 3, in that order. Without loss of generality, we assume that agent 1 has a valuation of at least 1/3 over one of the first two layers and at most 1/3 over the other. So there exists a point y such that agent 1 values the diagonal piece L_y of the first two layers at exactly 1/3, by Lemma 2.\nA new cake C\u2032 is now defined from the complementary diagonal piece of the first two layers together with the third layer. Agent 1 now has valuation 2/3 over the new cake C\u2032. Due to Lemma 2, we find a point x such that agent 1 values the corresponding diagonal piece of C\u2032 at exactly 1/3, where either x <= y or x >= y. We design a multi-allocation A = (P_1, P_2, P_3), where the layered pieces P_1, P_2, and P_3 are made from the three resulting diagonal pieces, respectively. As a result, V_1(P_1) = V_1(P_2) = V_1(P_3) = 1/3.\nThe constructed multi-allocation must differ depending on where x and y are. We obtain the required multi-allocation by using this multi-allocation A.\nCase 1: When x <= y, let P_1, P_2, and P_3 be the corresponding layered pieces. If two distinct layered pieces P_k and P_l are obtained such that V_2(P_k) >= 1/3 and V_3(P_l) >= 1/3, then allocate P_k to agent 2, P_l to agent 3, and the remaining layered piece to agent 1. Thus, we obtain a proportional multi-allocation that is feasible and contiguous.\nOtherwise, there is only one layered piece P_k with a value larger than 1/3 for both agents 2 and 3.\nSubcase I: In the case when P_k = P_1, we allocate the layered piece P_2 to agent 1 and construct a new cake from the remaining two layered pieces. The value of agents 2 and 3 on the new cake is now at least 2/3.\nWe now show that by applying the cut-and-choose procedure between agents 2 and 3 on the new cake, we can achieve the required multi-allocation for three agents. Due to Lemma 2, we obtain a point z that divides the new cake into two diagonal pieces of equal value with respect to agent 2. Agent 3 selects a diagonal piece that he weakly prefers, and agent 2 receives the remaining diagonal piece. Each of agents 2 and 3 thus obtains value at least 1/3. According to the positions of the cut points, the required multi-allocation is formed from the corresponding pieces, and it is feasible and contiguous in each configuration.\nSubcase II: In the scenario when P_k = P_2 or P_k = P_3, we assign the layered piece P_1 to agent 1, and we define a new cake from the remaining two layered pieces. Now agents 2 and 3 have a value of at least 2/3 over the new cake. Similar to Subcase I, we obtain the required multi-allocation by cut-and-choose, depending on the positions of the cut points.\nCase 2: When x >= y, let P_1, P_2, and P_3 be the corresponding layered pieces. In the case in which two distinct layered pieces P_k and P_l are obtained such that V_2(P_k) >= 1/3 and V_3(P_l) >= 1/3, assign P_k to agent 2, P_l to agent 3, and the last layered piece to agent 1. Thus, we obtain a feasible and contiguous proportional multi-allocation. Otherwise, for agents 2 and 3, there is only one layered piece P_k with a value greater than 1/3.\nSubcase I: When P_k = P_1, we assign the layered piece P_2 to agent 1 and define a new cake from the remaining two layered pieces. The other two agents, 2 and 3, have valuations of at least 2/3 over the new cake. As in Case 1, we obtain the required multi-allocation for three agents by using the cut-and-choose procedure between agents 2 and 3 over the new cake: in accordance with Lemma 2, we get a point z that halves the new cake with respect to agent 2, agent 3 chooses the diagonal piece he weakly prefers, and agent 2 obtains the other. The required multi-allocation is formed from the corresponding pieces depending on where the cut points are.\nSubcase II: When P_k = P_2 or P_k = P_3, we allocate the layered piece P_1 to agent 1 and define a new cake from the remaining two layered pieces. The value of agents 2 and 3 on the new cake is now at least 2/3. Similar to Subcase I, we get the required multi-allocation, which depends on the locations of the cut points.\nIn either scenario, we obtain a proportional multi-allocation that satisfies the feasibility and contiguity criteria.\n\u220e\nNow we are ready to give a computational procedure to find a proportional multi-allocation that satisfies the feasibility and contiguity conditions for three layers and any number of agents.\nIn this computational procedure, we recall Theorem 4, in which Hosseini et al. [20] give a computational procedure to find a feasible and contiguous proportional multi-allocation for two layers and any number of agents.\nTheorem 4 (Hosseini et al. [20]). A proportional complete multi-allocation that is feasible and contiguous exists for two layers and any number of agents.\nIn Theorem 5, we show that a feasible and contiguous proportional multi-allocation can be computed when m = 3 and n >= 3.\nTheorem 5. A proportional complete multi-allocation that is feasible and contiguous exists for three layers and any number of agents.\nProof. We show the computation of the required multi-allocation in two scenarios, where the number of agents n is even or odd and n > 3. Theorem 3 covers the case n = 3.\nCase 1: Suppose that n is even. Because there are an odd number of layers, we are unable to directly apply the majority switching point property. After treating the first and second layers of the cake, namely C_1 and C_2, as a single layer of a new cake C\u2032, we apply this property. The non-negative integrable density function of an agent over the merged layer is the sum of the agent\u2019s density functions over the layers C_1 and C_2; the two merged layers are treated as mutually overlapping.\nWe obtain a majority switching point x, due to the fact that the cake C\u2032 has an even number of layers. We set W_1 to be L_x and W_2 to be R_x. Due to the fact that n is even and by the notion of a majority switching point, we may divide the set of agents N into N_1 and N_2 such that |N_1| = |N_2| = n/2, where N_1 is the set of agents who weakly prefer W_1 to W_2 and N_2 is the set of agents who weakly prefer W_2 to W_1.\nWe have V_i(W_1) >= 1/2 for each agent i in N_1 and V_i(W_2) >= 1/2 for each agent i in N_2.\nWe now take into account two other 2-layered cakes, M_1 and M_2, which are obtained by merging W_1 and W_2, respectively. As a result, each agent in N_t, for t in {1, 2}, has a value of at least 1/2 on the cake M_t. Thus, we get a proportional multi-allocation that is feasible and contiguous for the set of agents N_t on the cake M_t for each t in {1, 2}, since |N_t| = n/2 and Theorem 4 imply that each such agent receives value at least (1/2)(2/n) = 1/n. As a result, merging the two multi-allocations yields a contiguous, feasible, and complete multi-allocation that guarantees that each agent receives a proportional share.\nCase 2: In the case when n is odd, it can be expressed as n = 2k + 1, where k >= 2. In this case, our aim is to reduce this case to Case 1. Without loss of generality, we assume that there are some agents such that each has a value of at least 1/n on the top layer cake. Then, we ask each such agent i to place a mark at the point x_i on the top layer so that the value of the piece [0, x_i] is equal to 1/n, and allocate the piece [0, x_j] to the agent j whose mark x_j is the minimum. In order to decide how to share the cake among the remaining agents, we reduce the problem to an instance with n - 1 agents in which the allocated piece is removed. Each remaining agent has a value of at least (n - 1)/n on the remaining cake. Due to Case 1, we obtain a proportional multi-allocation that is contiguous and feasible for this reduced instance. Together with the allocated piece, we get a proportional multi-allocation that is feasible and contiguous. Pictures are shown in Figure 16.\n\u220e\nWe will use the following lemma to compute proportional allocations for any number of agents and 2^a layers, where a is any positive integer.\nLemma 6. Let C be a 2m-layered cake and x in [0, 1]. Suppose that C\u2032 is an m-layered cake obtained by merging L_x or R_x. Then, each non-overlapping contiguous layered piece of C\u2032 is a non-overlapping contiguous layered piece of the original cake [20].\nIn Theorem 7, we show that a feasible and contiguous proportional multi-allocation can be computed when n = m, where the number of layers is of the form 2^a and a is a positive integer.\nTheorem 7. If the number of agents n and the number of layers m are equal and m is of the form 2^a, where a is a positive integer, then we can compute a proportional multi-allocation that is contiguous and feasible.\nProof. Without loss of generality, we assume that every layer of the cake is a replica of the unit interval [0, 1].\nWe design the following recursive algorithm, which accepts an m-layered cake together with a subset of agents N\u2032 with |N\u2032| = m and a valuation profile, and yields a proportional complete multi-allocation of the cake to the agents that is feasible.\nNow consider the case when n = m = 2^a for some integer a >= 1. The algorithm searches for a majority switching point x over the cake C. We let W_1 = L_x and W_2 = R_x. Due to the fact that n is even, and by the definition of a majority switching point, we can split the set of agents N into N_1 and N_2 such that |N_1| = |N_2| = n/2, where N_1 is the set of agents who weakly prefer W_1 to W_2 and N_2 is the set of agents who weakly prefer W_2 to W_1. Now we run the algorithm on the cake M_t with the set of agents N_t for each t in {1, 2}, respectively, where M_t is the merge of W_t. The complete multi-allocation returned by the algorithm attains proportionality in addition to feasibility and contiguity, as we will show via induction on the exponent a.\nIn the case when a = 1, we have n = m = 2. Thus each M_t is a 1-layered cake, due to the fact that C is a 2-layered cake and M_t is obtained from the merge of a diagonal piece of C. Each set N_t consists of a single agent who values W_t at least 1/2, so assigning M_t to that agent yields a feasible and contiguous multi-allocation that satisfies the proportionality condition: each agent receives a value of at least 1/2 = 1/n. Therefore, as stated in Lemma 6, merging both multi-allocations results in a contiguous and feasible complete multi-allocation that ensures each agent a proportional share.\nAssume that the claim is true for a - 1 with a >= 2; we will prove it for n = m = 2^a. Assume that the algorithm partitions the input cake into W_1 = L_x and W_2 = R_x, making use of the majority switching point x. Suppose that the agents are divided into two groups (N_1, N_2), where N_1 is the set of agents who weakly prefer W_1 to W_2 and N_2 is the set of agents who weakly prefer W_2 to W_1. Notice that every agent in N_1 weakly prefers W_1 to W_2, which results in V_i(W_1) >= 1/2 for all i in N_1. Similarly, V_i(W_2) >= 1/2 for all i in N_2. Thus, by the induction hypothesis, each agent i in N_t has value at least (1/2)(1/2^{a-1}) = 1/n for its assigned layered piece. According to the induction hypothesis, the algorithm generates a feasible and contiguous multi-allocation for each merge. Lemma 6 implies that every non-overlapping and contiguous layered piece of the merge of W_t is also a non-overlapping and contiguous layered piece of the original cake. Thus, the algorithm gives a proportional complete multi-allocation that is contiguous and feasible as an output.\n\u220e\nWe will further develop the aforementioned theorem to the situation in which there are strictly more agents than layers. When n > m, it is intuitive that there is at least one layer a sub-piece of which may be \u201csafely\u201d assigned to a particular agent without violating the non-overlapping condition. In the following theorem, we show that a feasible and contiguous proportional multi-allocation can be computed when n >= m, where the number of layers is of the form 2^a and a is a positive integer.\nTheorem 8. There exists a proportional multi-allocation over an m-layered cake for any number of agents n >= m that is feasible and contiguous, where m is of the form 2^a for a positive integer a.\nProof. We design the following recursive algorithm, which accepts an m-layered cake together with a subset of agents N\u2032 with |N\u2032| >= m and a valuation profile, and yields a proportional complete multi-allocation of the cake to the agents that is feasible.\nWhen n = m, we apply the algorithm outlined in the proof of Theorem 7. In the case when n > m = 2^a for some integer a, the algorithm finds a layer C_j on which some agents have values of at least 1/n. Without loss of generality, we assume that this is the top layer. The algorithm then instructs each such agent i to place a mark at the point x_i on this layer so that the value of the piece [0, x_i] is equal to 1/n, and allocates the piece [0, x_j] to the agent j whose mark x_j is the minimum. In order to decide how to share the remaining cake, we apply the algorithm to the reduced instance with n - 1 agents in which the allocated piece is removed.\nWe will show via induction on n that the complete multi-allocation returned by the algorithm satisfies proportionality as well as feasibility and contiguity. Due to Theorem 7, this is true for n = m.\nFor n - 1 >= m, let us assume that the claim is true. We will now show that the claim also holds for n. Assume that agent j gets the contiguous piece [0, x_j]. Clearly, agent j receives a proportional value. Observe that for the remaining cake, all remaining agents have a value of at least (n - 1)/n. Thus, by the induction hypothesis, each remaining agent has value at least ((n - 1)/n)(1/(n - 1)) = 1/n for its assigned layered piece. The induction hypothesis makes it obvious that the multi-allocation is both feasible and contiguous. This concludes the proof.\n\u220e"
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Discussion",
69
+ "text": "We study the problem of multi-layered cake cutting, where we divide a multi-layered cake among a set of agents under two constraints, feasibility and contiguity. In Section 3 ###reference_###, we propose a new computational model. Then, we show the existence of an exact feasible multi-allocation for two agents and two layers using the new computational model. In Section 4 ###reference_###, we show that a proportional multi-allocation can be computed for three layers and any number of agents greater than three using cut-and-eval queries. We also show a technique for computing a proportional allocation for any number of agents and layers, where is any positive integer.\nIgarashi and Meunier [26 ###reference_b26###] show the existence of a feasible and contiguous proportional multi-allocation for any number of layers and any number of agents. In contrast, our results concern the computation of proportional multi-allocations for various numbers of agents and layers.\nSeveral directions are open for future research.\nQuery complexity of envy-free multi-allocation :\nIn the multi-layered cake-cutting problem, the query complexity of finding a feasible multi-allocation that is envy-free is open.\nComputation of proportional multi-allocation :\nIgarashi and Meunier [26 ###reference_b26###] show the existence of a feasible and contiguous proportional multi-allocation for any number of layers and any number of agents. Hosseini et al. [20 ###reference_b20###] give a computational procedure for finding a feasible and contiguous multi-allocation for agents and layers, where is a positive integer. We extend the result for agents and layers, where and is a positive integer. Extending the result for an arbitrary is unsolved.\nExistence of exact multi-allocation : Austin\n[30 ###reference_b30###] shows the existence of an exact allocation for a single-layered cake and two agents. We show the existence of an exact feasible multi-allocation for two layers and two agents. 
Alon [31 ###reference_b31###] shows the existence of an exact allocation for a single-layered cake. The existence of an exact feasible multi-allocation for agents and layers is open.\nEfficiency of multi-allocation : Caragiannis et al. [7 ###reference_b7###] explore how allocation efficiency is affected by fairness. They take into account three distinct concepts of fairness for the allocations of divisible and indivisible goods and chores: proportionality, envy-freeness, and equitability. In comparison to optimal allocations, fair allocations lose efficiency. They quantify this loss and demonstrate the price of fairness under each of the three concepts. Aumann and Dombb [6 ###reference_b6###] examine how fairness criteria may lead to a decrease in social welfare, focusing mostly on a scenario where each agent requires a connected piece. The study of efficiency for multi-layered cakes is another direction for future work."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {
74
+ "1": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S1.T1.10.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S1.T1.11.2\" style=\"font-size:90%;\">Computational procedures of finding fair multi-allocation for different layers and agents are shown in the following table.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1\" style=\"padding-bottom:2.15277pt;\">Agents()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.2\" style=\"padding-bottom:2.15277pt;\">Layers(m)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.3\" style=\"padding-bottom:2.15277pt;\">EF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.4\" style=\"padding-bottom:2.15277pt;\">Prop</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.8.9.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.8.9.1.1\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.8.9.1.2\">2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.8.9.1.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#bib.bib20\" title=\"\">20</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.8.9.1.4\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2208.00726v2#bib.bib20\" title=\"\">20</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.8.10.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.8.10.2.1\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.8.10.2.2\">2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.8.10.2.3\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#bib.bib20\" title=\"\">20</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.8.10.2.4\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#bib.bib20\" title=\"\">20</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.3.3.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.3.3.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.3.3.4\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#bib.bib20\" title=\"\">20</a>]</cite></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.8.11.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.8.11.3.1\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.8.11.3.2\">3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.8.11.3.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.8.11.3.4\">Theorem <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2208.00726v2#Thmtheorem3\" title=\"Theorem 3. \u2023 4 Proportional multi-layered cake cutting \u2023 Fair Division of Multi-layered Cakes\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.4.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.4.4.2\">3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.4.4.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.4.4.4\">Theorem <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#Thmtheorem5\" title=\"Theorem 5. \u2023 4 Proportional multi-layered cake cutting \u2023 Fair Division of Multi-layered Cakes\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.5.5.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S1.T1.6.6.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.6.6.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.6.6.4\">Theorem <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#Thmtheorem7\" title=\"Theorem 7. 
\u2023 4 Proportional multi-layered cake cutting \u2023 Fair Division of Multi-layered Cakes\"><span class=\"ltx_text ltx_ref_tag\">7</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.7.7.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S1.T1.8.8.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S1.T1.8.8.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S1.T1.8.8.4\">Theorem <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2208.00726v2#Thmtheorem8\" title=\"Theorem 8. \u2023 4 Proportional multi-layered cake cutting \u2023 Fair Division of Multi-layered Cakes\"><span class=\"ltx_text ltx_ref_tag\">8</span></a>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
76
+ "capture": "Table 1: Computational procedures of finding fair multi-allocation for different layers and agents are shown in the following table."
77
+ }
78
+ },
79
+ "image_paths": {},
80
+ "validation": true,
81
+ "references": [
82
+ {
83
+ "1": {
84
+ "title": "Fair Division: From cake-cutting to dispute resolution.",
85
+ "author": "Steven J Brams and Alan D Taylor.",
86
+ "venue": "Cambridge University Press, 1996.",
87
+ "url": null
88
+ }
89
+ },
90
+ {
91
+ "2": {
92
+ "title": "Fair division and collective welfare.",
93
+ "author": "Herv\u00e9 Moulin.",
94
+ "venue": "MIT press, 2004.",
95
+ "url": null
96
+ }
97
+ },
98
+ {
99
+ "3": {
100
+ "title": "Cake-cutting algorithms: Be fair if you can.",
101
+ "author": "Jack Robertson and William Webb.",
102
+ "venue": "CRC Press, 1998.",
103
+ "url": null
104
+ }
105
+ },
106
+ {
107
+ "4": {
108
+ "title": "Handbook of computational social choice.",
109
+ "author": "Felix Brandt, Vincent Conitzer, Ulle Endriss, J\u00e9r\u00f4me Lang, and Ariel D\nProcaccia.",
110
+ "venue": "Cambridge University Press, 2016.",
111
+ "url": null
112
+ }
113
+ },
114
+ {
115
+ "5": {
116
+ "title": "Cake cutting really is not a piece of cake.",
117
+ "author": "Jeff Edmonds and Kirk Pruhs.",
118
+ "venue": "ACM Transactions on Algorithms (TALG), 7(4):1\u201312, 2011.",
119
+ "url": null
120
+ }
121
+ },
122
+ {
123
+ "6": {
124
+ "title": "The efficiency of fair division with connected pieces.",
125
+ "author": "Yonatan Aumann and Yair Dombb.",
126
+ "venue": "ACM Transactions on Economics and Computation (TEAC),\n3(4):1\u201316, 2015.",
127
+ "url": null
128
+ }
129
+ },
130
+ {
131
+ "7": {
132
+ "title": "The efficiency of fair division.",
133
+ "author": "Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos, and Maria\nKyropoulou.",
134
+ "venue": "Theory of Computing Systems, 50:589\u2013610, 2012.",
135
+ "url": null
136
+ }
137
+ },
138
+ {
139
+ "8": {
140
+ "title": "Children crying at birthday parties. why?",
141
+ "author": "William Thomson.",
142
+ "venue": "Economic Theory, 31(3):501\u2013521, 2007.",
143
+ "url": null
144
+ }
145
+ },
146
+ {
147
+ "9": {
148
+ "title": "Cake cutting: Not just child\u2019s play.",
149
+ "author": "Ariel D Procaccia.",
150
+ "venue": "Communications of the ACM, 56(7):78\u201387, 2013.",
151
+ "url": null
152
+ }
153
+ },
154
+ {
155
+ "10": {
156
+ "title": "The query complexity of cake cutting.",
157
+ "author": "Simina Br nzei and Noam Nisan.",
158
+ "venue": "Advances in Neural Information Processing Systems,\n35:37905\u201337919, 2022.",
159
+ "url": null
160
+ }
161
+ },
162
+ {
163
+ "11": {
164
+ "title": "An envy-free cake division protocol.",
165
+ "author": "Steven J Brams and Alan D Taylor.",
166
+ "venue": "The American Mathematical Monthly, 102(1):9\u201318, 1995.",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "12": {
172
+ "title": "Rental harmony: Sperner\u2019s lemma in fair division.",
173
+ "author": "Francis Edward Su.",
174
+ "venue": "The American mathematical monthly, 106(10):930\u2013942, 1999.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "13": {
180
+ "title": "How to cut a cake fairly.",
181
+ "author": "Walter Stromquist.",
182
+ "venue": "The American Mathematical Monthly, 87(8):640\u2013644, 1980.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "14": {
188
+ "title": "How to cut a cake fairly.",
189
+ "author": "Lester E Dubins and Edwin H Spanier.",
190
+ "venue": "The American Mathematical Monthly, 68(1P1):1\u201317, 1961.",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "15": {
196
+ "title": "A discrete and bounded envy-free cake cutting protocol for four\nagents.",
197
+ "author": "Haris Aziz and Simon Mackenzie.",
198
+ "venue": "In Proceedings of the forty-eighth annual ACM symposium on\nTheory of Computing, pages 454\u2013464, 2016.",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "16": {
204
+ "title": "A discrete and bounded envy-free cake cutting protocol for any number\nof agents.",
205
+ "author": "Haris Aziz and Simon Mackenzie.",
206
+ "venue": "In 2016 IEEE 57th Annual Symposium on Foundations of Computer\nScience (FOCS), pages 416\u2013427. IEEE, 2016.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "17": {
212
+ "title": "A bounded and envy-free cake cutting algorithm.",
213
+ "author": "Haris Aziz and Simon Mackenzie.",
214
+ "venue": "Commun. ACM, 63(4):119\u2013126, 2020.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "18": {
220
+ "title": "A note on cake cutting.",
221
+ "author": "Shimon Even and Azaria Paz.",
222
+ "venue": "Discrete Applied Mathematics, 7(3):285\u2013296, 1984.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "19": {
228
+ "title": "Envy-free cake divisions cannot be found by finite protocols.",
229
+ "author": "Walter Stromquist.",
230
+ "venue": "the electronic journal of combinatorics, 15(1):R11, 2008.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "20": {
236
+ "title": "Fair division of time: Multi-layered cake cutting.",
237
+ "author": "Hadi Hosseini, Ayumi Igarashi, and Andrew Searns.",
238
+ "venue": "In International Joint Conference on Artificial Intelligence,\n2020.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "21": {
244
+ "title": "The unreasonable fairness of maximum nash welfare.",
245
+ "author": "Ioannis Caragiannis, David Kurokawa, Herv\u00e9 Moulin, Ariel D Procaccia,\nNisarg Shah, and Junxing Wang.",
246
+ "venue": "ACM Transactions on Economics and Computation (TEAC),\n7(3):1\u201332, 2019.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "22": {
252
+ "title": "Thou shalt covet thy neighbor\u2019s cake.",
253
+ "author": "Ariel D Procaccia.",
254
+ "venue": "In Twenty-First International Joint Conference on Artificial\nIntelligence, 2009.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "23": {
260
+ "title": "Algorithmic solutions for envy-free cake cutting.",
261
+ "author": "Xiaotie Deng, Qi Qi, and Amin Saberi.",
262
+ "venue": "Operations Research, 60(6):1461\u20131476, 2012.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "24": {
268
+ "title": "The geometry of efficient fair division.",
269
+ "author": "Julius B Barbanel.",
270
+ "venue": "Cambridge University Press, 2005.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "25": {
276
+ "title": "Computing socially-efficient cake divisions.",
277
+ "author": "Yonatan Aumann, Yair Dombb, and Avinatan Hassidim.",
278
+ "venue": "AAMAS, 2012.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "26": {
284
+ "title": "Envy-free division of multi-layered cakes.",
285
+ "author": "Ayumi Igarashi and Fr\u00e9d\u00e9ric Meunier.",
286
+ "venue": "In International Conference on Web and Internet Economics,\npages 504\u2013521. Springer, 2021.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "27": {
292
+ "title": "Envy-free two-player m-cake and three-player two-cake divisions.",
293
+ "author": "Nicolas Lebert, Fr\u00e9d\u00e9ric Meunier, and Quentin Carbonneaux.",
294
+ "venue": "Operations Research Letters, 41(6):607\u2013610, 2013.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "28": {
300
+ "title": "Fair division with multiple pieces.",
301
+ "author": "Kathryn Nyman, Francis Edward Su, and Shira Zerbib.",
302
+ "venue": "Discrete Applied Mathematics, 283:115\u2013122, 2020.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "29": {
308
+ "title": "Two-player envy-free multi-cake division.",
309
+ "author": "John Cloutier, Kathryn L Nyman, and Francis Edward Su.",
310
+ "venue": "Mathematical Social Sciences, 59(1):26\u201337, 2010.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "30": {
316
+ "title": "Sharing a cake.",
317
+ "author": "A Keith Austin.",
318
+ "venue": "The Mathematical Gazette, 66(437):212\u2013215, 1982.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "31": {
324
+ "title": "Splitting necklaces.",
325
+ "author": "Noga Alon.",
326
+ "venue": "Advances in Mathematics, 63(3):247\u2013253, 1987.",
327
+ "url": null
328
+ }
329
+ }
330
+ ],
331
+ "url": "http://arxiv.org/html/2208.00726v2"
332
+ }
20240318/2208.03886v5.json ADDED
@@ -0,0 +1,18 @@
1
+ {
2
+ "title": "What can we know about that which we cannot even imagine?",
3
+ "abstract": "In this essay I will consider a sequence of questions.\nThe first questions concern the biological function of intelligence in general, and\ncognitive prostheses of human intelligence in particular. These will lead into questions concerning\nhuman language, perhaps the most important cognitive prosthesis humanity has ever developed. While it is traditional to rhapsodize about the cognitive power encapsulated in human language, I will emphasize how horribly limited human language\nis \u2013 and therefore how limited our cognitive abilities are, despite their being augmented with language.\nThis will lead to questions of whether human mathematics, being ultimately formulated in terms of human\nlanguage, is also deeply limited. I will then combine these questions\nto pose a partial, sort-of, sideways answer to the guiding concern of this essay:\nwhat we can ever discern about that which we cannot even conceive?",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Acknowledgments:",
9
+ "text": "I would like to thank David Kinney, Mikhail Prokopenko, as well as Daniel Dennett\nfor interesting conversation on these issues. This work was supported by funding\nfrom the Santa Fe Institute."
10
+ }
11
+ ],
12
+ "appendix": [],
13
+ "tables": {},
14
+ "image_paths": {},
15
+ "validation": true,
16
+ "references": [],
17
+ "url": "http://arxiv.org/html/2208.03886v5"
18
+ }
20240318/2209.05208v3.json ADDED
@@ -0,0 +1,662 @@
1
+ {
2
+ "title": "Graph Neural Modeling of Network Flows",
3
+ "abstract": "Network flow problems, which involve distributing traffic such that the underlying infrastructure is used effectively, are ubiquitous in transportation and logistics. Among them, the general Multi-Commodity Network Flow (MCNF) problem concerns the distribution of multiple flows of different sizes between several sources and sinks, while achieving effective utilization of the links. Due to the appeal of data-driven optimization, these problems have increasingly been approached using graph learning methods. In this paper, we propose a novel graph learning architecture for network flow problems called Per-Edge Weights (PEW). This method builds on a Graph Attention Network and uses distinctly parametrized message functions along each link. We extensively evaluate the proposed solution through an Internet flow routing case study using Service Provider topologies and routing schemes. We show that PEW yields substantial gains over architectures whose global message function constrains the routing unnecessarily. We also find that an MLP is competitive with other standard architectures. Furthermore, we analyze the relationship between graph structure and predictive performance for data-driven routing of flows, an aspect that has not been considered by existing work in the area.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Flow routing represents a fundamental problem that captures a variety of optimization scenarios that arise in real-world networks (Ahuja, 1993 ###reference_b1###, Chapter 17). One classic example is the maximum flow problem, which seeks to find the best (in terms of maximum capacity) path between a source node and a sink node. The more general Multi-Commodity Network Flow problem allows for multiple flows of different sizes between several sources and sinks that share the same distribution network. It is able to formalize the distribution of packets in a computer network, of goods in a logistics network, or cars in a rail network (Hu, 1963 ###reference_b31###). We illustrate MCNF problems in Figure 1 ###reference_###.\nFor maximum flow problems, efficient algorithms have been developed (Cormen et al., 2022 ###reference_b9###, Chapter 26), including a recent near-linear time approach (Chen et al., 2022 ###reference_b8###). For the more complex MCNF problems, Linear Programming solutions can be leveraged in order to compute, in polynomial time, the optimal routes given knowledge of pairwise demands between the nodes in the graph (Fortz & Thorup, 2000 ###reference_b16###; Tardos, 1986 ###reference_b48###). At the other end of the spectrum, oblivious routing methods derive routing strategies with partial or no knowledge of traffic demands, optimizing for \u201cworst-case\u201d performance (R\u00e4cke, 2008 ###reference_b41###).\nAs recognized by existing works, a priori knowledge of the full demand matrix is an unrealistic assumption, as loads in real systems continuously change (Feldmann et al., 2001 ###reference_b14###). Instead, ML techniques may enable a middle ground (Valadarsky et al., 2017 ###reference_b49###): learning a model trained on past loads that can perform well in a variety of traffic scenarios, without requiring a disruptive redeployment of the routing strategy (Fortz & Thorup, 2002 ###reference_b17###). 
Hence, developing an effective learning representation is fundamental to the application of ML in flow routing scenarios.\nFrom a more practical point of view, this shift towards data-driven approaches is illustrated by the concepts of data-driven computer networking (Jiang et al., 2017 ###reference_b33###) and self-driving networks (Feamster & Rexford, 2017 ###reference_b13###). Early works in this area were based on MLP architectures (Valadarsky et al., 2017 ###reference_b49###; Reis et al., 2019 ###reference_b42###). More recently, models purposely designed to operate on graphs, including variants of the expressive Message Passing Neural Networks (Rusek et al., 2019 ###reference_b44###; Almasan et al., 2021 ###reference_b2###) and Graph Nets (Battaglia et al., 2018 ###reference_b3###), have been adopted.\nDespite the promise of graph learning, current works nevertheless adopt schemes that aggregate messages along neighboring edges using the same message functions. In the context of routing flows, this constrains the model unnecessarily. Instead, we argue that nodes should be able to weight flows along each link separately, so that each node may independently update its state given incoming and outgoing traffic, leading to better algorithmic alignment (Xu et al., 2020 ###reference_b52###) between the computational mechanism of the GNN and the task. We illustrate this in Figure 2 ###reference_###.\n###figure_1### Furthermore, the ways in which prior works encode the demands as node features vary between the full demand matrix (Valadarsky et al., 2017 ###reference_b49###; Zhang et al., 2020 ###reference_b54###) and a node-wise summation (Hope & Yoneki, 2021 ###reference_b29###), and it is unclear when either is beneficial. Besides the learning representation aspects, existing approaches in this area are trained using very few graph topologies (typically 1 or 2) of small sizes (typically below 20 nodes). 
This makes it difficult to assess the gain that graph learning solutions bring over vanilla architectures such as the MLP. Additionally, a critical point that has not been considered is the impact of the underlying graph topology on the effectiveness of the learning process. To address these shortcomings, we make contributions along the following axes:\nLearning representations for data-driven flow routing. We propose a novel mechanism for aggregating messages along each link with a different parametrization, which we refer to as Per-Edge Weights (PEW). We propose an instantiation that extends the GAT (Veli\u010dkovi\u0107 et al., 2018 ###reference_b50###) via a construction akin to the RGAT (Busbridge et al., 2019 ###reference_b5###). Despite its simplicity, we show that this mechanism yields substantial predictive gains over architectures that use the same message function for all neighbors. We also find that PEW can exploit the complete demand matrix as node features, while the GAT performs better with the lossy node-wise sum used in prior work.\nRigorous and systematic evaluation. Whereas existing works test on few, small-scale topologies, we evaluate the proposed method and baselines on real-world Internet Service Provider topologies and routing schemes in the context of a case study in computer networks, yielding independent model training runs. Perhaps surprisingly, we find that a well-tuned MLP is competitive with other GNN architectures when given an equal hyperparameter and training budget.\nUnderstanding the impact of topology. The range of experiments we carry out allows us to establish that a strong link exists between topology and the difficulty of the prediction task, which is consistent across routing schemes. Generally, the predictive performance decreases with the size of the graph and increases with heterogeneity in the local node and edge properties. 
Moreover, we find that, when graph structure varies through the presence of different subsets of nodes, the predictive performance of GNNs increases compared to structure-agnostic methods, such as MLP.\n###figure_2###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methods",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Routing formalization and learning task",
27
+ "text": "Flow routing formalization. We assume the splittable-flow routing formalization proposed by Fortz & Thorup (2004 ###reference_b18###). We let be a directed graph, with representing the set of nodes and the set of edges. We use and as shorthands, as well as and to denote specific nodes and edges, respectively. Each edge has an associated capacity . We also define a demand matrix where entry is the traffic that source node sends to destination . With each tuple we associate the quantity , which specifies the amount of traffic flow from to that goes over the edge . The load of edge , , is the total traffic flow traversing it, i.e., . Furthermore, the quantities must obey the following flow conservation constraints:\nwhere the sets are node \u2019s outgoing and incoming edges, respectively. Intuitively, these constraints capture the fact that traffic sent from to originates at the source (first clause), must be absorbed at the target (second clause), and ingress equals egress for all other nodes (final clause).\nRouting schemes. A routing scheme specifies how to distribute the traffic flows. Specifically, we consider two well-known routing schemes. The first is the Standard Shortest Paths (SSP) scheme in which, for a given node, the full flow quantity with destination is sent to the neighbor on the shortest path to . The widely used ECMP scheme (Hopps, 2000 ###reference_b30###) instead splits outgoing traffic among all the neighbors on the shortest path to if multiple such neighbors exist.\nPrediction target. A common way of evaluating a routing strategy is Maximum Link Utilization (MLU), i.e., the maximal ratio between link load and capacity. Formally, given a demand matrix , we denote it as . This target metric has been extensively studied in prior work (Kandula et al., 2005 ###reference_b34###) and is often used by ISPs to gauge when the underlying infrastructure needs to be upgraded (Guichard et al., 2005 ###reference_b23###).\nSupervised learning setup. 
We assume that we are provided with a dataset of traffic matrices . Given that our model produces an approximation of the true Maximum Link Utilization, the goal is to minimize the Mean Squared Error ."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Per-Edge Weights",
33
+ "text": "We propose a simple mechanism to increase the expressivity of models for data-driven flow routing. As previously mentioned, several works in recent years have begun adopting various graph learning methods for flow routing problems such as variants of Message Passing Neural Networks (Geyer & Carle, 2018 ###reference_b21###; Rusek et al., 2019 ###reference_b44###; Almasan et al., 2021 ###reference_b2###) or Graph Networks (Hope & Yoneki, 2021 ###reference_b29###). In particular, MPNNs derive hidden features for node in layer by computing messages and applying updates of the form:\nwhere is the neighborhood of node , are features for edge , and and are the differentiable message (sometimes also called edge) and vertex update functions in layer . Typically, is some form of MLP that is applied in parallel when computing the update for each node in the graph. An advantage of applying the same message function across the entire graph is that the number of parameters remains fixed in the size of the graph, enabling a form of combinatorial generalization (Battaglia et al., 2018 ###reference_b3###). However, while this approach has been very successful in many graph learning tasks such as graph classification, we argue that it is not best suited for flow routing problems.\nInstead, for this family of problems, the edges do not have uniform semantics. Each of them plays a different role when the flows are routed over the graph and, as shown in Figure 1 ###reference_###, each will take on varying levels of load. Equivalently, from a node-centric perspective, each node should be able to decide flexibly how to distribute several flows of traffic over its neighboring edges. This intuition can be captured by using a different message function when aggregating messages received along each edge . We call this mechanism Per-Edge Weights, or PEW. 
We illustrate the difference between PEW and a typical MPNN in Figure 2 ###reference_###.\nWe formulate the PEW architecture via a construction similar to the additive self-attention, across-relation variant of RGAT (Busbridge et al., 2019 ###reference_b5###). Let and denote the closed and open neighborhoods of node .\nTo compute the coefficients for each edge, one first needs to compute intermediate representations by multiplying the node features with the per-edge weight matrix . Subsequently, the \u201cquery\u201d and \u201ckey\u201d representations are defined as below, where and represent per-edge query and key kernels respectively:\nThen, the attention coefficients are computed according to:\nFinally, the embeddings are computed as:"
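As a concrete illustration, the per-edge attention mechanism described above can be sketched in NumPy as follows. This is a simplified single-head sketch, not the paper's implementation; the function and variable names (`pew_layer`, `W`, `Q`, `K`) are illustrative.

```python
import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pew_layer(x, in_edges, W, Q, K):
    """One simplified PEW layer (single head, additive attention, in the
    spirit of across-relation RGAT with one 'relation' per edge).

    x         : (n, d_in) array of node features.
    in_edges  : dict mapping a node v to the list of directed edges (u, v)
                along which it receives messages (self-loop included).
    W[e]      : (d_in, d_out) weight matrix unique to edge e.
    Q[e], K[e]: (d_out,) per-edge query and key kernels.
    """
    n = x.shape[0]
    d_out = next(iter(W.values())).shape[1]
    h = np.zeros((n, d_out))
    for v, edges in in_edges.items():
        # Intermediate representations of sender and receiver, each computed
        # with the weight matrix belonging to the edge being traversed.
        g_send = [x[u] @ W[(u, w)] for (u, w) in edges]
        g_recv = [x[v] @ W[(u, w)] for (u, w) in edges]
        # Additive attention logits, normalized across all incoming edges.
        logits = np.array([gr @ Q[e] + gs @ K[e]
                           for gs, gr, e in zip(g_send, g_recv, edges)])
        alpha = softmax(leaky_relu(logits))
        # Attention-weighted aggregation of the per-edge messages.
        h[v] = sum(a * gs for a, gs in zip(alpha, g_send))
    return h
```

Because each edge `e` indexes its own `W[e]`, `Q[e]`, and `K[e]`, the parameter count grows with the number of edges, in contrast to a standard MPNN where a single message function is shared across the graph.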
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Evaluation protocol",
39
+ "text": "This section describes the experimental setup we use for our evaluation. We focus on a case study on routing flows in computer networks to demonstrate its effectiveness in real-world scenarios, which can be considered representative of a variety of settings in which we wish to predict characteristics of a routing scheme from an underlying network topology and a set of observed demand matrices.\nModel architectures. We compare PEW with three widely used graph learning architectures: the GAT (Veli\u010dkovi\u0107 et al., 2018 ###reference_b50###), GCN (Kipf & Welling, 2017 ###reference_b36###), and GraphSAGE (Hamilton et al., 2017 ###reference_b27###).\nWe also compare against a standard MLP architecture made up of fully-connected layers followed by ReLU activations. The features provided as input to the five methods are the same: for the GNN methods, the node features are the demands in accordance with the demand input representations defined later in this section, while the edge features are the capacities , and the adjacency matrix governs the message passing. For GCN and GraphSAGE, which do not support edge features, we include the mean edge capacity as a node feature. For the MLP, we unroll and concatenate the demand input representation derived from , the adjacency matrix , and all edge capacities in the input layer. We note that other non-ML baselines, such as Linear Programming, are not directly applicable for this task: while they can be used to derive a routing strategy, in this chapter the goal is to predict a property of an existing routing strategy (SSP or ECMP, as defined in Section 3.1 ###reference_###).\nTraffic generation. In order to generate synthetic flows of traffic, we use the \u201cgravity\u201d approach proposed by Roughan (2005 ###reference_b43###). 
Akin to Newton\u2019s law of universal gravitation, the traffic between a pair of nodes is proportional to the amount of traffic that enters the network at the source node and the amount that exits the network at the destination node. These per-node traffic volumes are random variables that are independently and identically distributed according to an exponential distribution. Despite its simplicity in terms of the number of parameters, this approach has been shown to synthesize traffic matrices that correspond closely to those observed in real-world networks (Roughan, 2005 ###reference_b43###; Hartert et al., 2015 ###reference_b28###). We additionally apply a rescaling of the volume by the MLU (defined in Section 3.1 ###reference_###) under the LP solution of the MCNF formulation, as recommended in the networking literature (Haddadi & Bonaventure, 2013 ###reference_b25###; Gvozdiev et al., 2018 ###reference_b24###).\nNetwork topologies. We consider real-world network topologies that are part of the Repetita and Internet Topology Zoo repositories (Gay et al., 2017 ###reference_b19###; Knight et al., 2011 ###reference_b37###). In case there are multiple snapshots of the same network topology, we only use the most recent so as not to bias the results towards these graphs. We limit the size of the considered topologies to between nodes, which we note is still substantially larger than the topologies used for training in prior work on ML for routing flows. Furthermore, we only consider heterogeneous topologies with at least two different link capacities. Given the traffic model above, for some topologies the MLU dependent variable is nearly always identical regardless of the demand matrix, making it trivial to devise a good predictor. Out of the resulting topologies, we filter out those for which the minimum MLU is equal to the 90th percentile MLU over 100 demand matrices, leaving unique topologies. The properties of these topologies are summarized in the Appendix. 
The topology variations used in the corresponding experiments are generated as follows: a number of nodes to be removed from the graph is chosen uniformly at random in the range , subject to the constraint that the graph does not become disconnected. Demand matrices are generated starting from this modified topology.\nDatasets. The datasets , , of demand matrices are disjoint and contain demand matrices each. Both the demands and capacities are standardized by dividing them by the maximum value across the union of the datasets. As shown in the Appendix, the datasets for the smallest topology contain flows, while the datasets for the largest topology consist of flows.\nDemand input representation. We also consider two different demand input representations that appear in prior work, which we term raw and sum. In the former, the feature vector for node is , which corresponds to the concatenated outgoing and incoming demands, respectively. The latter is an aggregated version equal to , i.e., it contains the summed demands.\nTraining and evaluation protocol. Training and evaluation are performed separately for each graph topology and routing scheme. All methods are given an equal grid search budget of hyperparameter configurations whose values are provided in the Appendix. To compute means and confidence intervals, we repeat training and evaluation across different random seeds. Training is done by mini-batch SGD using the Adam optimizer (Kingma & Ba, 2015 ###reference_b35###) and proceeds for epochs with a batch size of . We perform early stopping if the validation performance does not improve after epochs, also referred to as \u201cpatience\u201d in other graph learning works (Veli\u010dkovi\u0107 et al., 2018 ###reference_b50###; Errica et al., 2020 ###reference_b12###). Since the absolute value of the MLUs varies significantly in datapoints generated for different topologies, we apply a normalization when reporting results such that they are comparable. 
Namely, the MSE of the predictors is normalized by the MSE of a simple baseline that outputs the average MLU for all DMs in the provided dataset. We refer to this as Normalized MSE (NMSE).\nScale of experiments. Given the range of considered graph learning architectures, hyperparameter configurations, network topologies and routing models, to the best of our knowledge, our work represents the most extensive suite of benchmarks on graph learning for the MCNF problem to date. The primary experiments consist of independent model training runs, while the entirety of our experiments comprises runs. We believe that this systematic experimental procedure and evaluation represents in itself a significant contribution of our work and, akin to the work of Errica et al. (2020 ###reference_b12###) for graph classification, it can serve as a foundation for members of the graph learning community working on MCNF scenarios to build upon."
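The gravity traffic model and the NMSE metric described above can be sketched as follows. This is a minimal NumPy version; the function names and the mean-volume parameter are illustrative, the egress-sum normalization is one common convention (assumed here), and the MLU-based rescaling step is omitted.

```python
import numpy as np

def gravity_traffic_matrix(n, rng, mean_volume=1.0):
    """One synthetic demand matrix under the gravity model: the demand from
    a source to a destination is proportional to the product of i.i.d.
    exponential per-node ingress and egress volumes."""
    t_in = rng.exponential(mean_volume, size=n)   # traffic entering at each node
    t_out = rng.exponential(mean_volume, size=n)  # traffic exiting at each node
    dm = np.outer(t_in, t_out) / t_out.sum()      # normalization is an assumption
    np.fill_diagonal(dm, 0.0)                     # no self-demands
    return dm

def normalized_mse(y_true, y_pred):
    """NMSE: model MSE divided by the MSE of a baseline that always
    predicts the dataset-average MLU."""
    mse = np.mean((y_true - y_pred) ** 2)
    baseline_mse = np.mean((y_true - y_true.mean()) ** 2)
    return mse / baseline_mse
```

By construction, a perfect predictor attains an NMSE of 0 and the mean-predicting baseline attains an NMSE of 1, which makes results comparable across topologies with very different absolute MLUs.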
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Evaluation results",
45
+ "text": "###figure_3### Benefits of PEW for flow routing. The primary results are shown in Figure 3 ###reference_###, in which we compare the normalized MSE obtained by the architectures on the topologies. The two rows correspond to the SSP and ECMP schemes respectively. Learning curves for the best-performing hyperparameter configurations are presented in the Appendix. We find that PEW improves the predictive performance over a vanilla GAT in nearly all () of the settings tested, and that it performs the best out of all predictors in of cases.\nHence, this highlights the importance of parametrizing links differently, suggesting that it is an effective inductive bias for this family of problems. Interestingly, the MLP performs better than GAT in 80% of the considered cases, and is competitive with GCN and GraphSAGE. This echoes findings in other graph learning works (Errica et al., 2020 ###reference_b12###), i.e., the fact that a well-tuned MLP can be competitive against GNN architectures and even outperform them. Furthermore, both the relative differences between predictors and their absolute normalized MSEs are fairly consistent across the different topologies.\nVarying graph structure. Next, we investigate the impact of variations in topology on predictive performance. In this experiment, the sole difference wrt. the setup described above is that the datasets contain demand matrices that are instead distributed on variations in topology of the original graph (i.e., we have DMs per variation making up each dataset). To evaluate the methods, we use two ranking metrics: the Win Rate (WR) is the percentage of topologies for which the method obtains the lowest NMSE, and the Mean Reciprocal Rank (MRR) is the arithmetic average of the complements of the ranks of the three predictors. For both metrics, higher values are better. Results are shown in Table 1 ###reference_###. PEW remains the best architecture and manifests a decrease in MRR for SSP and a gain for ECMP. 
We also find that the relative performance of the GCN increases while that of the MLP decreases when varying subsets of the nodes in the original graph are present. This suggests that GNN-based approaches are more resilient to changes in graph structure (e.g., nodes joining and leaving the network), a commonly observed phenomenon in practice.\nBest demand input representation. To compare the two demand input representations, we additionally train the model architectures on subsets of and of the datasets. Recall that the raw representation contains the full demand matrix while the sum representation is a lossy aggregation of the same information. The latter may nevertheless help to avoid overfitting. Furthermore, given that the distribution of the demands is exponential, the largest flows will dominate the values of the features. Results are shown in Figure 4 ###reference_###. The -axis indicates the number of demand matrices used for training and evaluation, while the -axis displays the difference in normalized MSE between the raw and sum representations, averaged across all topologies. As marked in the figure, means that the raw representation performs better, while the reverse is true for .\nWith very few datapoints, the two input representations yield similar errors for both PEW and GAT. Beyond this, two interesting trends emerge: as the number of datapoints increases, PEW performs better with the raw demands, while the vanilla GAT performs better with the lossy representation. This suggests that, while the PEW model is able to exploit the granular information contained in the raw demands, they instead cause the standard GAT to overfit and obtain worse generalization performance.\n###figure_4### Impact of topology. Our final set of experiments examines the relationship between the topological characteristics of graphs and the relative performance of our proposed model architecture. 
The six properties that we examine are defined as follows, noting that the first three are global properties while the final three measure the variance in local node and edge properties:\nNumber of nodes: the cardinality of the node set ;\nDiameter: maximum pairwise shortest path length;\nEdge density: the ratio of links to nodes ;\nCapacity variance: the variance in the normalized capacities ;\nDegree variance: the variance in ;\nWeighted betweenness variance: the variance in a weighted version of betweenness centrality (Brandes, 2001 ###reference_b4###) measuring the fraction of all-pairs shortest paths passing through each node.\n###figure_5### The results of this analysis are shown in Figure 5 ###reference_###. As previously, the normalized MSE of the PEW model is shown on the -axis, while the -axis measures properties of the graphs. Each datapoint represents one of the topologies. We find that topological characteristics do not fully determine model performance but, nevertheless, it is possible to make a series of observations related to them. Generally, the performance of the method decreases as the size of the graph grows in number of nodes, diameter, and edge density (metrics that are themselves correlated). This result can be explained by the fact that our experimental protocol relies on a fixed number of demand matrices, which represent a smaller sample of the distribution of demand matrices as the graph increases in size. Hence, this can lead to a model with worse generalization from the training to the test phase, despite the larger parameter count. On the other hand, the performance of the method typically improves with increasing heterogeneity in node and link-level properties (namely, variance in the capacities and degree / weighted betweenness centralities). The relationship between the NMSE and some properties (e.g., weighted betweenness) may be non-linear. 
Additional results that relate topological characteristics to the percentage changes in NMSE from the other architectures to PEW are presented in the Appendix. These analyses further corroborate the findings concerning the relationship between the predictive performance of PEW and graph structure."
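The two ranking metrics used above can be computed from per-topology NMSE values as in the following sketch (pure Python; the function and variable names are illustrative). Rank 1 is assigned to the method with the lowest NMSE on a topology, and MRR averages the reciprocal ranks over topologies.

```python
def ranking_metrics(nmse_by_method):
    """Compute Win Rate (WR) and Mean Reciprocal Rank (MRR).

    nmse_by_method: dict mapping method name -> list of NMSE values,
    one per topology (lower is better).
    """
    methods = list(nmse_by_method)
    n_topologies = len(next(iter(nmse_by_method.values())))
    wins = {m: 0 for m in methods}
    reciprocal_ranks = {m: [] for m in methods}
    for t in range(n_topologies):
        # Rank the methods by NMSE on this topology (rank 1 = lowest NMSE).
        order = sorted(methods, key=lambda m: nmse_by_method[m][t])
        wins[order[0]] += 1
        for rank, m in enumerate(order, start=1):
            reciprocal_ranks[m].append(1.0 / rank)
    wr = {m: 100.0 * wins[m] / n_topologies for m in methods}
    mrr = {m: sum(rr) / n_topologies for m, rr in reciprocal_ranks.items()}
    return wr, mrr
```

For both metrics, higher is better: a method that wins every topology attains WR = 100 and MRR = 1.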
46
+ },
47
+ {
48
+ "section_id": "6",
49
+ "parent_section_id": null,
50
+ "section_name": "Limitations",
51
+ "text": "A possible disadvantage of PEW is that the number of parameters grows linearly with the edge count. However, since the same amount of computations are performed, there is no increase in runtime compared to the GAT. Additionally, given the relatively small scale of ISP backbone networks (several hundreds of nodes), in practice, the impact on memory usage has not been significant in our experiments. The largest PEW model, used for the Uninett2011 graph, has approximately parameters. If required, approaches for reducing the parameter count, such as the basis and block-diagonal decompositions proposed by Schlichtkrull et al. (2018 ###reference_b46###), have already been validated for significantly larger-scale relational graphs. Other routing-specific options that may be investigated in future work could be the \u201cclustering\u201d of the edges depending on the structural roles that they play (such as peripheral or core links) or the use of differently parametrized neighborhoods for the regions of the graph, which may perform well if a significant proportion of the traffic is local. Furthermore, a key assumption behind PEW is that node identities are known, so that when topologies vary, the mapping to a particular weight parametrization is kept consistent. This is a suitable assumption for a variety of real-world networks, such as the considered ISP backbone networks, which are characterized by infrequent upgrades. However, performance may degrade in highly dynamic networks, where the timescale of the structural changes is substantially lower than the time needed in order for systems making use of such a predictive model to adapt."
52
+ },
53
+ {
54
+ "section_id": "7",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion",
57
+ "text": "In this work, we have addressed the problem of data-driven routing of flows across a graph, which has several applications of practical relevance in areas as diverse as logistics and computer networking. We have proposed Per-Edge Weights (PEW), an effective model architecture for predicting link loads in a network based on historical observations, given a demand matrix and a routing strategy. The novelty of our approach resides in the use of weight parametrizations for aggregating messages that are unique for each edge of the graph. In a rigorous and systematic evaluation comprising training runs, we have demonstrated that PEW improves predictive performance over standard graph learning and MLP approaches. Furthermore, we have shown that PEW is able to exploit the full demand matrix, unlike the standard GAT, for which a lossy aggregation of features is preferable. Our findings also highlight the importance of topology for data-driven routing. Given the same number of historical observations, performance typically decreases when the graph grows in size, but increases with higher levels of heterogeneity of local properties. While this paper has focused on learning the properties of existing routing protocols, in future work we aim to investigate learning new routing protocols given the proposed learning representation and broader insights in this problem space that we have obtained."
58
+ }
59
+ ],
60
+ "appendix": [
61
+ {
62
+ "section_id": "Appendix 1",
63
+ "parent_section_id": null,
64
+ "section_name": "Appendix A Implementation and runtime details",
65
+ "text": "Implementation. In a future version, our implementation will be made publicly available as Docker containers together with instructions that enable reproducing (up to hardware differences) all the results reported in the paper, including tables and figures. We implement all approaches and baselines in Python using a variety of numerical and scientific computing packages (Hunter, 2007 ###reference_b32###; Hagberg et al., 2008 ###reference_b26###; McKinney, 2011 ###reference_b39###; Paszke et al., 2019 ###reference_b40###; Waskom, 2021 ###reference_b51###). For implementations of the graph learning methods, we make use of PyTorch Geometric (Fey & Lenssen, 2019 ###reference_b15###). Due to the relationship between the RGAT and PEW architectures, we are able to leverage the existing RGAT implementation in this library.\nData availability. The network topology data used in this paper is part of the Repetita suite (Gay et al., 2017 ###reference_b19###) and it is publicly available at https://github.com/svissicchio/Repetita ###reference_### without any licensing restrictions. We also use the synthetic traffic generator from (Gvozdiev et al., 2018 ###reference_b24###), available at https://github.com/ngvozdiev/tm-gen ###reference_###.\nInfrastructure and runtimes. Experiments were carried out on a cluster of 8 machines, each equipped with 2 Intel Xeon E5-2630 v3 processors and 128GB RAM. On this infrastructure, all the experiments reported in this paper took approximately 35 days to complete. The training and evaluation of models were performed exclusively on CPUs."
66
+ },
67
+ {
68
+ "section_id": "Appendix 2",
69
+ "parent_section_id": null,
70
+ "section_name": "Appendix B Hyperparameter details",
71
+ "text": "Learning rates\nDemand input \nrepresentations\nDimension of \nfeature vector\nFirst hidden \nlayer size\nAll methods are given an equal grid search budget of hyperparameter configurations consisting of the two choices of demand input representations, three choices of learning rate , and two choices of model complexity as detailed in Table 2 ###reference_###. For the MLP, subsequent hidden layers contain half the units of the first hidden layer. For the GNN-based methods, sum pooling is used to compute a graph-level embedding from the node-level features. Despite potential over-smoothing issues of GNNs in graph classification (e.g., as described in (Chen et al., 2020 ###reference_b7###)), for the flow routing problem, we set the number of layers equal to the diameter so that all traffic entering the network can also exit, including traffic between pairs of points that are the furthest away in the graph."
72
+ },
73
+ {
74
+ "section_id": "Appendix 3",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix C Additional results",
77
+ "text": "Topologies used. High-level statistics about the considered topologies are shown in Table 3 ###reference_###.\nImpact of topological characteristics on PEW relative performance. Figures 6 ###reference_### to 9 ###reference_### compare the percentage changes in NMSE between PEW and the other learning architectures. The results are consistent with the standalone analysis presented in the main text: namely, given the same number of observed traffic matrices, the performance of PEW deteriorates as graph size increases, but improves with higher levels of heterogeneity in node and link-level properties.\nLearning curves. Representative learning curves are shown in the remainder of this Appendix. For their generation, we report the MSE on the held-out validation set of the best-performing hyperparameter combination for each architecture and demand input representation. To smoothen the curves, we apply exponential weighting with an . This value is chosen such that a sufficient amount of noise is removed and the overall trends in validation losses can be observed. We also skip the validation losses for the first epochs since their values are on a significantly larger scale and would distort the plots. As large spikes sometimes arise, validation losses are truncated to be at most the value obtained after the first epochs. An interesting trend shown by the learning curves is that the models consistently require more epochs to reach a low validation loss in the ECMP case compared to SSP, reflecting its increased complexity.\n###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26###"
78
+ }
79
+ ],
80
+ "tables": {
81
+ "1": {
82
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Mean Reciprocal Rank and Win Rates for the different predictors. PEW maintains the overall best performance. The relative performance of the MLP decreases when the graph structure varies by means of different subsets of nodes being present and generating demands.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.10\" style=\"width:469.8pt;height:98.1pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-26.2pt,5.4pt) scale(0.899756413525633,0.899756413525633) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.10.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.10.10.11.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T1.10.10.11.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.10.10.11.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.10.10.11.1.3\"><span class=\"ltx_text\" id=\"S5.T1.10.10.11.1.3.1\" style=\"font-size:120%;\">PEW (ours)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.10.10.11.1.4\"><span class=\"ltx_text\" id=\"S5.T1.10.10.11.1.4.1\" style=\"font-size:120%;\">GAT</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.10.10.11.1.5\"><span class=\"ltx_text\" id=\"S5.T1.10.10.11.1.5.1\" style=\"font-size:120%;\">MLP</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.10.10.11.1.6\"><span class=\"ltx_text\" id=\"S5.T1.10.10.11.1.6.1\" style=\"font-size:120%;\">GraphSAGE</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" 
id=\"S5.T1.10.10.11.1.7\"><span class=\"ltx_text\" id=\"S5.T1.10.10.11.1.7.1\" style=\"font-size:120%;\">GCN</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.6.6.6.7\">Metric</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.2.2.2.2\">Original \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.8\">Variations</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.3.3.3.3\">Original \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.9\">Variations</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.4.4.4.4\">Original \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.10\">Variations</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.5.5.5.5\">Original \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.11\">Variations</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.6\">Original \n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.12\">Variations</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.7.7.7.2\">SSP</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.7.7.7.1\">MRR \n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.7.7.3.1\">0.798</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.7.7.4.1\">0.747</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" 
id=\"S5.T1.7.7.7.5\">0.252</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.6\">0.240</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.7\">0.419</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.8\">0.396</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.9\">0.367</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.10\">0.349</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.11\">0.448</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.7.7.7.12\">0.551</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8.8\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T1.8.8.8.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.8.8.8.1\">WR \n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.8.3.1\">70.588</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.8.4.1\">58.824</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.5\">0.000</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.6\">0.000</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.7\">17.647</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.8\">11.765</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.9\">0.000</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.10\">5.882</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.11\">11.765</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.8.12\">23.529</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.9.9.9.2\">ECMP</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.9.9.9.1\">MRR \n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.3\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T1.9.9.9.3.1\">0.734</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.9.9.9.4.1\">0.755</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.5\">0.250</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.6\">0.254</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.7\">0.462</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.8\">0.413</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.9\">0.381</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.10\">0.338</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.11\">0.456</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.9.9.9.12\">0.524</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.10.10.10\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.10.10.10.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.10.10.10.1\">WR \n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.10.10.10.3.1\">58.824</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.10.10.10.4.1\">58.824</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.5\">0.000</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.6\">0.000</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.7\">23.529</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.8\">11.765</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.9\">5.882</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.10\">5.882</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.T1.10.10.10.11\">11.765</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" 
id=\"S5.T1.10.10.10.12\">23.529</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
83
+ "capture": "Table 1: Mean Reciprocal Rank and Win Rates for the different predictors. PEW maintains the overall best performance. The relative performance of the MLP decreases when the graph structure varies by means of different subsets of nodes being present and generating demands."
84
+ },
85
+ "2": {
86
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Hyperparameters used.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A2.T2.18\" style=\"width:469.8pt;height:126.8pt;vertical-align:-126.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-21.6pt,0.0pt) scale(0.915687026501801,0.915687026501801) ;\">\n<br class=\"ltx_break\"/>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A2.T2.18.18\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T2.18.18.19.1\">\n<td class=\"ltx_td\" id=\"A2.T2.18.18.19.1.1\" style=\"width:71.1pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.18.18.19.1.2\">PEW (ours)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.18.18.19.1.3\">GAT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.18.18.19.1.4\">MLP</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.18.18.19.1.5\">GraphSAGE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.18.18.19.1.6\">GCN</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"A2.T2.1.1.1.1\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"A2.T2.1.1.1.1.1.1\">Learning rates </p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T2.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T2.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T2.4.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T2.5.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T2.6.6.6.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.11.11.11\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A2.T2.11.11.11.6\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"A2.T2.11.11.11.6.1\">Demand input \n<br 
class=\"ltx_break\"/>representations</p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.9.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.10.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.11.11.11.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.16.16.16\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A2.T2.12.12.12.1\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"A2.T2.12.12.12.1.1.1\">Dimension of \n<br class=\"ltx_break\"/>feature vector </p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.13.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.14.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.16.16.16.6\">n/a</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.15.15.15.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.16.16.16.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.18.18.18\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A2.T2.18.18.18.3\" style=\"width:71.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"A2.T2.18.18.18.3.1\">First hidden \n<br class=\"ltx_break\"/>layer size</p>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.18.18.18.4\">n/a</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.18.18.18.5\">n/a</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.18.18.18.2\">\n <span class=\"ltx_text ltx_font_italic\" id=\"A2.T2.18.18.18.2.1\">sum</span> / <span class=\"ltx_text ltx_font_italic\" id=\"A2.T2.18.18.18.2.2\">raw</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.18.18.18.6\">n/a</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"A2.T2.18.18.18.7\">n/a</td>\n</tr>\n</tbody>\n</table>\n<br class=\"ltx_break\"/>\n<br class=\"ltx_break\"/>\n</span></div>\n</figure>",
+ "capture": "Table 2: Hyperparameters used."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Properties of the topologies.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A3.T3.4\" style=\"width:211.4pt;height:407.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(21.6pt,-41.6pt) scale(1.25657802242178,1.25657802242178) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A3.T3.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.4.4.4.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.4.5.1\" style=\"font-size:90%;\">Graph</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.4.4.4.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.4.6.1\" style=\"font-size:90%;\">Diam.</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.3.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.4.4.4.4\">\n<span class=\"ltx_text\" id=\"A3.T3.4.4.4.4.1\" style=\"font-size:90%;\">Flows in </span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T3.4.4.5.1.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.5.1.1.1\" style=\"font-size:90%;\">Aconet</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T3.4.4.5.1.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.5.1.2.1\" style=\"font-size:90%;\">23</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T3.4.4.5.1.3\"><span 
class=\"ltx_text\" id=\"A3.T3.4.4.5.1.3.1\" style=\"font-size:90%;\">62</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T3.4.4.5.1.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.5.1.4.1\" style=\"font-size:90%;\">4</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T3.4.4.5.1.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.5.1.5.1\" style=\"font-size:90%;\">2.70</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T3.4.4.5.1.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.5.1.6.1\" style=\"font-size:90%;\">1587000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.6.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.6.2.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.6.2.1.1\" style=\"font-size:90%;\">Agis</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.6.2.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.6.2.2.1\" style=\"font-size:90%;\">25</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.6.2.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.6.2.3.1\" style=\"font-size:90%;\">60</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.6.2.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.6.2.4.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.6.2.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.6.2.5.1\" style=\"font-size:90%;\">2.40</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.6.2.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.6.2.6.1\" style=\"font-size:90%;\">1875000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.7.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.7.3.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.7.3.1.1\" style=\"font-size:90%;\">Arnes</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.7.3.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.7.3.2.1\" style=\"font-size:90%;\">34</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.7.3.3\"><span class=\"ltx_text\" 
id=\"A3.T3.4.4.7.3.3.1\" style=\"font-size:90%;\">92</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.7.3.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.7.3.4.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.7.3.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.7.3.5.1\" style=\"font-size:90%;\">2.71</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.7.3.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.7.3.6.1\" style=\"font-size:90%;\">3468000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.8.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.8.4.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.8.4.1.1\" style=\"font-size:90%;\">Cernet</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.8.4.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.8.4.2.1\" style=\"font-size:90%;\">41</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.8.4.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.8.4.3.1\" style=\"font-size:90%;\">116</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.8.4.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.8.4.4.1\" style=\"font-size:90%;\">5</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.8.4.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.8.4.5.1\" style=\"font-size:90%;\">2.83</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.8.4.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.8.4.6.1\" style=\"font-size:90%;\">5043000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.9.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.9.5.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.9.5.1.1\" style=\"font-size:90%;\">Cesnet201006</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.9.5.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.9.5.2.1\" style=\"font-size:90%;\">52</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.9.5.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.9.5.3.1\" 
style=\"font-size:90%;\">126</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.9.5.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.9.5.4.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.9.5.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.9.5.5.1\" style=\"font-size:90%;\">2.42</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.9.5.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.9.5.6.1\" style=\"font-size:90%;\">8112000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.10.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.10.6.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.10.6.1.1\" style=\"font-size:90%;\">Grnet</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.10.6.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.10.6.2.1\" style=\"font-size:90%;\">37</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.10.6.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.10.6.3.1\" style=\"font-size:90%;\">84</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.10.6.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.10.6.4.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.10.6.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.10.6.5.1\" style=\"font-size:90%;\">2.27</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.10.6.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.10.6.6.1\" style=\"font-size:90%;\">4107000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.11.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.11.7.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.11.7.1.1\" style=\"font-size:90%;\">Iij</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.11.7.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.11.7.2.1\" style=\"font-size:90%;\">37</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.11.7.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.11.7.3.1\" style=\"font-size:90%;\">130</span></td>\n<td 
class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.11.7.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.11.7.4.1\" style=\"font-size:90%;\">5</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.11.7.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.11.7.5.1\" style=\"font-size:90%;\">3.51</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.11.7.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.11.7.6.1\" style=\"font-size:90%;\">4107000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.12.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.12.8.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.12.8.1.1\" style=\"font-size:90%;\">Internode</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.12.8.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.12.8.2.1\" style=\"font-size:90%;\">66</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.12.8.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.12.8.3.1\" style=\"font-size:90%;\">154</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.12.8.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.12.8.4.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.12.8.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.12.8.5.1\" style=\"font-size:90%;\">2.33</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.12.8.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.12.8.6.1\" style=\"font-size:90%;\">13068000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.13.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.13.9.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.13.9.1.1\" style=\"font-size:90%;\">Janetlense</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.13.9.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.13.9.2.1\" style=\"font-size:90%;\">20</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.13.9.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.13.9.3.1\" style=\"font-size:90%;\">68</span></td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"A3.T3.4.4.13.9.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.13.9.4.1\" style=\"font-size:90%;\">4</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.13.9.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.13.9.5.1\" style=\"font-size:90%;\">3.40</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.13.9.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.13.9.6.1\" style=\"font-size:90%;\">1200000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.14.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.14.10.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.14.10.1.1\" style=\"font-size:90%;\">Karen</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.14.10.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.14.10.2.1\" style=\"font-size:90%;\">25</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.14.10.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.14.10.3.1\" style=\"font-size:90%;\">56</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.14.10.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.14.10.4.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.14.10.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.14.10.5.1\" style=\"font-size:90%;\">2.24</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.14.10.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.14.10.6.1\" style=\"font-size:90%;\">1875000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.15.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.15.11.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.15.11.1.1\" style=\"font-size:90%;\">Marnet</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.15.11.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.15.11.2.1\" style=\"font-size:90%;\">20</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.15.11.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.15.11.3.1\" style=\"font-size:90%;\">54</span></td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"A3.T3.4.4.15.11.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.15.11.4.1\" style=\"font-size:90%;\">3</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.15.11.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.15.11.5.1\" style=\"font-size:90%;\">2.70</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.15.11.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.15.11.6.1\" style=\"font-size:90%;\">1200000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.16.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.16.12.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.16.12.1.1\" style=\"font-size:90%;\">Niif</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.16.12.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.16.12.2.1\" style=\"font-size:90%;\">36</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.16.12.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.16.12.3.1\" style=\"font-size:90%;\">82</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.16.12.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.16.12.4.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.16.12.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.16.12.5.1\" style=\"font-size:90%;\">2.28</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.16.12.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.16.12.6.1\" style=\"font-size:90%;\">3888000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.17.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.17.13.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.17.13.1.1\" style=\"font-size:90%;\">PionierL3</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.17.13.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.17.13.2.1\" style=\"font-size:90%;\">38</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.17.13.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.17.13.3.1\" style=\"font-size:90%;\">90</span></td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"A3.T3.4.4.17.13.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.17.13.4.1\" style=\"font-size:90%;\">10</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.17.13.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.17.13.5.1\" style=\"font-size:90%;\">2.37</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.17.13.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.17.13.6.1\" style=\"font-size:90%;\">4332000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.18.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.18.14.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.18.14.1.1\" style=\"font-size:90%;\">Sinet</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.18.14.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.18.14.2.1\" style=\"font-size:90%;\">74</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.18.14.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.18.14.3.1\" style=\"font-size:90%;\">152</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.18.14.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.18.14.4.1\" style=\"font-size:90%;\">7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.18.14.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.18.14.5.1\" style=\"font-size:90%;\">2.05</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.18.14.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.18.14.6.1\" style=\"font-size:90%;\">16428000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.19.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.19.15.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.19.15.1.1\" style=\"font-size:90%;\">SwitchL3</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.19.15.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.19.15.2.1\" style=\"font-size:90%;\">42</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.19.15.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.19.15.3.1\" style=\"font-size:90%;\">126</span></td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"A3.T3.4.4.19.15.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.19.15.4.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.19.15.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.19.15.5.1\" style=\"font-size:90%;\">3.00</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.19.15.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.19.15.6.1\" style=\"font-size:90%;\">5292000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.20.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T3.4.4.20.16.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.20.16.1.1\" style=\"font-size:90%;\">Ulaknet</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.20.16.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.20.16.2.1\" style=\"font-size:90%;\">82</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.20.16.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.20.16.3.1\" style=\"font-size:90%;\">164</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.20.16.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.20.16.4.1\" style=\"font-size:90%;\">4</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.20.16.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.20.16.5.1\" style=\"font-size:90%;\">2.00</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T3.4.4.20.16.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.20.16.6.1\" style=\"font-size:90%;\">20172000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.4.4.21.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A3.T3.4.4.21.17.1\"><span class=\"ltx_text\" id=\"A3.T3.4.4.21.17.1.1\" style=\"font-size:90%;\">Uninett2011</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A3.T3.4.4.21.17.2\"><span class=\"ltx_text\" id=\"A3.T3.4.4.21.17.2.1\" style=\"font-size:90%;\">69</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A3.T3.4.4.21.17.3\"><span class=\"ltx_text\" id=\"A3.T3.4.4.21.17.3.1\" 
style=\"font-size:90%;\">192</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A3.T3.4.4.21.17.4\"><span class=\"ltx_text\" id=\"A3.T3.4.4.21.17.4.1\" style=\"font-size:90%;\">9</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A3.T3.4.4.21.17.5\"><span class=\"ltx_text\" id=\"A3.T3.4.4.21.17.5.1\" style=\"font-size:90%;\">2.78</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A3.T3.4.4.21.17.6\"><span class=\"ltx_text\" id=\"A3.T3.4.4.21.17.6.1\" style=\"font-size:90%;\">14283000</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 3: Properties of the topologies."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2209.05208v3_figure_1.png",
+ "caption": "Figure 1: Top. An illustration of the Multi-Commodity Network Flow family of problems. The requirements of the routing problem are defined using a matrix that specifies the total amount of traffic that has to be routed between each pair of nodes in a graph. We are also given a graph topology in which links are equipped with capacities. All flows have an entry and exit node and share the same underlying transportation infrastructure. Under a particular routing scheme, such as shortest path routing, the links are loaded by the total amount of traffic passing over them. Bottom. A model is trained using a dataset of the link utilizations for certain demand matrices and graph topologies, and is then used to predict the Maximum Link Utilization for an unseen demand matrix.",
+ "url": "http://arxiv.org/html/2209.05208v3/x1.png"
+ },
+ "2": {
+ "figure_path": "2209.05208v3_figure_2.png",
+ "caption": "Figure 2: Left. An illustration of the MPNN used in previous flow routing works, which uses the same message function M^{(l)} for aggregating neighbor messages. Right. An illustration of our proposed Per-Edge Weights (PEW), which uses uniquely parametrized per-edge message functions.",
+ "url": "http://arxiv.org/html/2209.05208v3/x2.png"
+ },
+ "3": {
+ "figure_path": "2209.05208v3_figure_3.png",
+ "caption": "Figure 3: Normalized MSE obtained by the predictors on different topologies for the SSP (top) and ECMP (bottom) routing schemes. Lower values are better. PEW improves over vanilla GAT substantially and performs best out of all architectures. An MLP is competitive with the other GNNs.",
+ "url": "http://arxiv.org/html/2209.05208v3/x3.png"
+ },
+ "4": {
+ "figure_path": "2209.05208v3_figure_4.png",
+ "caption": "Figure 4: Difference in normalized MSE between the raw and sum demand input representations as a function of the number of training datapoints for PEW and GAT for the SSP (left) and ECMP routing schemes (right). As the dataset size increases, PEW is able to exploit the granular demand information, while GAT performs better with a lossy aggregation of the demand information.",
+ "url": "http://arxiv.org/html/2209.05208v3/x4.png"
+ },
+ "5": {
+ "figure_path": "2209.05208v3_figure_5.png",
+ "caption": "Figure 5: Impact of topological characteristics on the predictive performance of PEW. Performance degrades as the graph size increases (first 3 columns), but improves with higher levels of heterogeneity of the graph structure (last 3 columns).",
+ "url": "http://arxiv.org/html/2209.05208v3/x5.png"
+ },
+ "6": {
+ "figure_path": "2209.05208v3_figure_6.png",
+ "caption": "Figure 6: Relationship between the percentage changes in NMSE from GAT to PEW and the topological characteristics of the considered graphs.",
+ "url": "http://arxiv.org/html/2209.05208v3/x6.png"
+ },
+ "7": {
+ "figure_path": "2209.05208v3_figure_7.png",
+ "caption": "Figure 7: Relationship between the percentage changes in NMSE from MLP to PEW and the topological characteristics of the considered graphs.",
+ "url": "http://arxiv.org/html/2209.05208v3/x7.png"
+ },
+ "8": {
+ "figure_path": "2209.05208v3_figure_8.png",
+ "caption": "Figure 8: Relationship between the percentage changes in NMSE from GraphSAGE to PEW and the topological characteristics of the considered graphs.",
+ "url": "http://arxiv.org/html/2209.05208v3/x8.png"
+ },
+ "9": {
+ "figure_path": "2209.05208v3_figure_9.png",
+ "caption": "Figure 9: Relationship between the percentage changes in NMSE from GCN to PEW and the topological characteristics of the considered graphs.",
+ "url": "http://arxiv.org/html/2209.05208v3/x9.png"
+ },
+ "10": {
+ "figure_path": "2209.05208v3_figure_10.png",
+ "caption": "Figure 10: Learning curves for Aconet.",
+ "url": "http://arxiv.org/html/2209.05208v3/x10.png"
+ },
+ "11": {
+ "figure_path": "2209.05208v3_figure_11.png",
+ "caption": "Figure 11: Learning curves for Agis.",
+ "url": "http://arxiv.org/html/2209.05208v3/x11.png"
+ },
+ "12": {
+ "figure_path": "2209.05208v3_figure_12.png",
+ "caption": "Figure 12: Learning curves for Arnes.",
+ "url": "http://arxiv.org/html/2209.05208v3/x12.png"
+ },
+ "13": {
+ "figure_path": "2209.05208v3_figure_13.png",
+ "caption": "Figure 13: Learning curves for Cernet.",
+ "url": "http://arxiv.org/html/2209.05208v3/x13.png"
+ },
+ "14": {
+ "figure_path": "2209.05208v3_figure_14.png",
+ "caption": "Figure 14: Learning curves for Cesnet201006.",
+ "url": "http://arxiv.org/html/2209.05208v3/x14.png"
+ },
+ "15": {
+ "figure_path": "2209.05208v3_figure_15.png",
+ "caption": "Figure 15: Learning curves for Grnet.",
+ "url": "http://arxiv.org/html/2209.05208v3/x15.png"
+ },
+ "16": {
+ "figure_path": "2209.05208v3_figure_16.png",
+ "caption": "Figure 16: Learning curves for Iij.",
+ "url": "http://arxiv.org/html/2209.05208v3/x16.png"
+ },
+ "17": {
+ "figure_path": "2209.05208v3_figure_17.png",
+ "caption": "Figure 17: Learning curves for Internode.",
+ "url": "http://arxiv.org/html/2209.05208v3/x17.png"
+ },
+ "18": {
+ "figure_path": "2209.05208v3_figure_18.png",
+ "caption": "Figure 18: Learning curves for Janetlense.",
+ "url": "http://arxiv.org/html/2209.05208v3/x18.png"
+ },
+ "19": {
+ "figure_path": "2209.05208v3_figure_19.png",
+ "caption": "Figure 19: Learning curves for Karen.",
+ "url": "http://arxiv.org/html/2209.05208v3/x19.png"
+ },
+ "20": {
+ "figure_path": "2209.05208v3_figure_20.png",
+ "caption": "Figure 20: Learning curves for Marnet.",
+ "url": "http://arxiv.org/html/2209.05208v3/x20.png"
+ },
+ "21": {
+ "figure_path": "2209.05208v3_figure_21.png",
+ "caption": "Figure 21: Learning curves for Niif.",
+ "url": "http://arxiv.org/html/2209.05208v3/x21.png"
+ },
+ "22": {
+ "figure_path": "2209.05208v3_figure_22.png",
+ "caption": "Figure 22: Learning curves for PionierL3.",
+ "url": "http://arxiv.org/html/2209.05208v3/x22.png"
+ },
+ "23": {
+ "figure_path": "2209.05208v3_figure_23.png",
+ "caption": "Figure 23: Learning curves for Sinet.",
+ "url": "http://arxiv.org/html/2209.05208v3/x23.png"
+ },
+ "24": {
+ "figure_path": "2209.05208v3_figure_24.png",
+ "caption": "Figure 24: Learning curves for SwitchL3.",
+ "url": "http://arxiv.org/html/2209.05208v3/x24.png"
+ },
+ "25": {
+ "figure_path": "2209.05208v3_figure_25.png",
+ "caption": "Figure 25: Learning curves for Ulaknet.",
+ "url": "http://arxiv.org/html/2209.05208v3/x25.png"
+ },
+ "26": {
+ "figure_path": "2209.05208v3_figure_26.png",
+ "caption": "Figure 26: Learning curves for Uninett2011.",
+ "url": "http://arxiv.org/html/2209.05208v3/x26.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Network Flows: Theory, Algorithms, and Applications.",
+ "author": "Ravindra K. Ahuja.",
+ "venue": "Prentice Hall, Englewood Cliffs, NJ, 1993.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Towards real-time routing optimization with deep reinforcement\nlearning: Open challenges.",
+ "author": "Paul Almasan, Jos\u00e9 Su\u00e1rez-Varela, Bo Wu, Shihan Xiao, Pere Barlet-Ros,\nand Albert Cabello.",
+ "venue": "In HPSR, 2021.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Relational inductive biases, deep learning, and graph networks.",
+ "author": "Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, \u00c1lvaro\nS\u00e1nchez-Gonz\u00e1lez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti,\nDavid Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song,\nAndrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen,\nCharles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra,\nPushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, and Razvan Pascanu.",
+ "venue": "arXiv preprint arXiv:1806.01261, 2018.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "A faster algorithm for betweenness centrality.",
+ "author": "Ulrik Brandes.",
+ "venue": "Journal of Mathematical Sociology, 25(2):163\u2013177, 2001.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Relational graph attention networks.",
+ "author": "Dan Busbridge, Dane Sherburn, Pietro Cavallo, and Nils Y. Hammerla.",
+ "venue": "arXiv preprint arXiv:1904.05811, 2019.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Combinatorial optimization and reasoning with graph neural networks.",
+ "author": "Quentin Cappart, Didier Ch\u00e9telat, Elias Khalil, Andrea Lodi, Christopher\nMorris, and Petar Veli\u010dkovi\u0107.",
+ "venue": "In IJCAI, 2021.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Measuring and relieving the over-smoothing problem for graph neural\nnetworks from the topological view.",
+ "author": "Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun.",
+ "venue": "In AAAI, 2020.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Maximum flow and minimum-cost flow in almost-linear time.",
+ "author": "Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg,\nand Sushant Sachdeva.",
+ "venue": "In FOCS, 2022.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Introduction to Algorithms.",
+ "author": "Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein.",
+ "venue": "MIT Press, Fourth edition, 2022.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Convolutional Neural Networks on Graphs with Fast Localized\nSpectral Filtering.",
+ "author": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst.",
+ "venue": "In NeurIPS, 2016.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "R-GCN: The R could stand for random.",
+ "author": "Vic Degraeve, Gilles Vandewiele, Femke Ongenae, and Sofie Van Hoecke.",
+ "venue": "arXiv preprint arXiv:2203.02424, 2022.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "A fair comparison of graph neural networks for graph classification.",
+ "author": "Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli.",
+ "venue": "In ICLR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Why (and how) networks should run themselves.",
+ "author": "Nick Feamster and Jennifer Rexford.",
+ "venue": "arXiv preprint arXiv:1710.11583, 2017.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Deriving traffic demands for operational IP networks: Methodology\nand experience.",
+ "author": "Anja Feldmann, Albert Greenberg, Carsten Lund, Nick Reingold, Jennifer Rexford,\nand Fred True.",
+ "venue": "IEEE/ACM Transactions On Networking, 9(3):265\u2013279, 2001.",
+ "url": null
+ }
+ },
+ {
341
+ "15": {
342
+ "title": "Fast graph representation learning with PyTorch Geometric.",
343
+ "author": "Matthias Fey and Jan Eric Lenssen.",
344
+ "venue": "In ICLR Workshop on Representation Learning on Graphs and\nManifolds, 2019.",
345
+ "url": null
346
+ }
347
+ },
348
+ {
349
+ "16": {
350
+ "title": "Internet Traffic Engineering by Optimizing OSPF Weights.",
351
+ "author": "Bernard Fortz and Mikkel Thorup.",
352
+ "venue": "In IEEE INFOCOM, 2000.",
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "17": {
358
+ "title": "Optimizing OSPF/IS-IS Weights in a Changing World.",
359
+ "author": "Bernard Fortz and Mikkel Thorup.",
360
+ "venue": "IEEE Journal on Selected Areas in Communications, 20(4):756\u2013767, 2002.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "18": {
366
+ "title": "Increasing internet capacity using local search.",
367
+ "author": "Bernard Fortz and Mikkel Thorup.",
368
+ "venue": "Computational Optimization and Applications, 29(1):13\u201348, 2004.",
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "19": {
374
+ "title": "Repetita: Repeatable experiments for performance evaluation of\ntraffic-engineering algorithms.",
375
+ "author": "Steven Gay, Pierre Schaus, and Stefano Vissicchio.",
376
+ "venue": "arXiv preprint arXiv:1710.08665, 2017.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "20": {
382
+ "title": "Neural bipartite matching.",
383
+ "author": "Dobrik Georgiev and Pietro Li\u00f2.",
384
+ "venue": "In ICML Workshop on Graph Representation Learning and Beyond\n(GRL+), 2020.",
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "21": {
390
+ "title": "Learning and Generating Distributed Routing Protocols Using\nGraph-Based Deep Learning.",
391
+ "author": "Fabien Geyer and Georg Carle.",
392
+ "venue": "In Big-DAMA, 2018.",
393
+ "url": null
394
+ }
395
+ },
396
+ {
397
+ "22": {
398
+ "title": "Neural Message Passing for Quantum Chemistry.",
399
+ "author": "Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and\nGeorge E. Dahl.",
400
+ "venue": "In ICML, 2017.",
401
+ "url": null
402
+ }
403
+ },
404
+ {
405
+ "23": {
406
+ "title": "Definitive MPLS Network Designs.",
407
+ "author": "Jim Guichard, Fran\u00e7ois Le Faucheur, and Jean-Philippe Vasseur.",
408
+ "venue": "Cisco Press, 2005.",
409
+ "url": null
410
+ }
411
+ },
412
+ {
413
+ "24": {
414
+ "title": "On low-latency-capable topologies, and their impact on the design of\nintra-domain routing.",
415
+ "author": "Nikola Gvozdiev, Stefano Vissicchio, Brad Karp, and Mark Handley.",
416
+ "venue": "In SIGCOMM\u201918, 2018.",
417
+ "url": null
418
+ }
419
+ },
420
+ {
421
+ "25": {
422
+ "title": "Recent Advances in Networking.",
423
+ "author": "Hamed Haddadi and Olivier Bonaventure.",
424
+ "venue": "2013.",
425
+ "url": null
426
+ }
427
+ },
428
+ {
429
+ "26": {
430
+ "title": "Exploring network structure, dynamics, and function using networkx.",
431
+ "author": "Aric Hagberg, Pieter Swart, and Daniel S. Chult.",
432
+ "venue": "In SciPy, 2008.",
433
+ "url": null
434
+ }
435
+ },
436
+ {
437
+ "27": {
438
+ "title": "Inductive Representation Learning on Large Graphs.",
439
+ "author": "William L. Hamilton, Zhitao Ying, and Jure Leskovec.",
440
+ "venue": "In NeurIPS, 2017.",
441
+ "url": null
442
+ }
443
+ },
444
+ {
445
+ "28": {
446
+ "title": "A declarative and expressive approach to control forwarding paths in\ncarrier-grade networks.",
447
+ "author": "Renaud Hartert, Stefano Vissicchio, Pierre Schaus, Olivier Bonaventure,\nClarence Filsfils, Thomas Telkamp, and Pierre Francois.",
448
+ "venue": "ACM SIGCOMM Computer Communication Review, 45(4):15\u201328, 2015.",
449
+ "url": null
450
+ }
451
+ },
452
+ {
453
+ "29": {
454
+ "title": "GDDR: GNN-based Data-Driven Routing.",
455
+ "author": "Oliver Hope and Eiko Yoneki.",
456
+ "venue": "In ICDCS, 2021.",
457
+ "url": null
458
+ }
459
+ },
460
+ {
461
+ "30": {
462
+ "title": "Analysis of an equal-cost multi-path algorithm.",
463
+ "author": "C. Hopps.",
464
+ "venue": "RFC 2992, RFC Editor, November 2000.",
465
+ "url": null
466
+ }
467
+ },
468
+ {
469
+ "31": {
470
+ "title": "Multi-commodity network flows.",
471
+ "author": "T. Chiang Hu.",
472
+ "venue": "Operations Research, 11(3):344\u2013360, 1963.",
473
+ "url": null
474
+ }
475
+ },
476
+ {
477
+ "32": {
478
+ "title": "Matplotlib: A 2D graphics environment.",
479
+ "author": "J. D. Hunter.",
480
+ "venue": "Computing in Science & Engineering, 9(3):90\u201395, 2007.",
481
+ "url": null
482
+ }
483
+ },
484
+ {
485
+ "33": {
486
+ "title": "Unleashing the potential of data-driven networking.",
487
+ "author": "Junchen Jiang, Vyas Sekar, Ion Stoica, and Hui Zhang.",
488
+ "venue": "In COMSNETS, 2017.",
489
+ "url": null
490
+ }
491
+ },
492
+ {
493
+ "34": {
494
+ "title": "Walking the tightrope: Responsive yet stable traffic engineering.",
495
+ "author": "Srikanth Kandula, Dina Katabi, Bruce Davie, and Anna Charny.",
496
+ "venue": "In SIGCOMM, 2005.",
497
+ "url": null
498
+ }
499
+ },
500
+ {
501
+ "35": {
502
+ "title": "Adam: A Method for Stochastic Optimization.",
503
+ "author": "Diederik P. Kingma and Jimmy Ba.",
504
+ "venue": "In ICLR, 2015.",
505
+ "url": null
506
+ }
507
+ },
508
+ {
509
+ "36": {
510
+ "title": "Semi-Supervised Classification with Graph Convolutional\nNetworks.",
511
+ "author": "Thomas N. Kipf and Max Welling.",
512
+ "venue": "In ICLR, 2017.",
513
+ "url": null
514
+ }
515
+ },
516
+ {
517
+ "37": {
518
+ "title": "The Internet Topology Zoo.",
519
+ "author": "Simon Knight, Hung X. Nguyen, Nickolas Falkner, Rhys Bowden, and Matthew\nRoughan.",
520
+ "venue": "IEEE Journal on Selected Areas in Communications, 29(9):1765\u20131775, 2011.",
521
+ "url": null
522
+ }
523
+ },
524
+ {
525
+ "38": {
526
+ "title": "Gated Graph Sequence Neural Networks.",
527
+ "author": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel.",
528
+ "venue": "In ICLR, 2017.",
529
+ "url": null
530
+ }
531
+ },
532
+ {
533
+ "39": {
534
+ "title": "pandas: a foundational Python library for data analysis and\nstatistics.",
535
+ "author": "Wes McKinney.",
536
+ "venue": "Python for High Performance and Scientific Computing,\n14(9):1\u20139, 2011.",
537
+ "url": null
538
+ }
539
+ },
540
+ {
541
+ "40": {
542
+ "title": "PyTorch: An imperative style, high-performance deep learning\nlibrary.",
543
+ "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory\nChanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban\nDesmaison, Andreas K\u00f6pf, Edward Yang, Zach DeVito, Martin Raison, Alykhan\nTejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith\nChintala.",
544
+ "venue": "In NeurIPS, 2019.",
545
+ "url": null
546
+ }
547
+ },
548
+ {
549
+ "41": {
550
+ "title": "Optimal hierarchical decompositions for congestion minimization in\nnetworks.",
551
+ "author": "Harald R\u00e4cke.",
552
+ "venue": "In STOC, 2008.",
553
+ "url": null
554
+ }
555
+ },
556
+ {
557
+ "42": {
558
+ "title": "Deep neural networks for network routing.",
559
+ "author": "Joao Reis, Miguel Rocha, Truong Khoa Phan, David Griffin, Franck Le, and Miguel\nRio.",
560
+ "venue": "In IJCNN, 2019.",
561
+ "url": null
562
+ }
563
+ },
564
+ {
565
+ "43": {
566
+ "title": "Simplifying the synthesis of internet traffic matrices.",
567
+ "author": "Matthew Roughan.",
568
+ "venue": "ACM SIGCOMM Computer Communication Review, 35(5):93\u201396, 2005.",
569
+ "url": null
570
+ }
571
+ },
572
+ {
573
+ "44": {
574
+ "title": "Unveiling the potential of graph neural networks for network modeling\nand optimization in SDN.",
575
+ "author": "Krzysztof Rusek, Jos\u00e9 Su\u00e1rez-Varela, Albert Mestres, Pere Barlet-Ros,\nand Albert Cabellos-Aparicio.",
576
+ "venue": "In SOSR, 2019.",
577
+ "url": null
578
+ }
579
+ },
580
+ {
581
+ "45": {
582
+ "title": "The Graph Neural Network Model.",
583
+ "author": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele\nMonfardini.",
584
+ "venue": "IEEE Transactions on Neural Networks, 20(1):61\u201380, 2009.",
585
+ "url": null
586
+ }
587
+ },
588
+ {
589
+ "46": {
590
+ "title": "Modeling relational data with graph convolutional networks.",
591
+ "author": "Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan\nTitov, and Max Welling.",
592
+ "venue": "In ESWC, pp. 593\u2013607. Springer, 2018.",
593
+ "url": null
594
+ }
595
+ },
596
+ {
597
+ "47": {
598
+ "title": "Supervised neural networks for the classification of structures.",
599
+ "author": "Alessandro Sperduti and Antonina Starita.",
600
+ "venue": "IEEE Transactions on Neural Networks, 8(3):714\u2013735, 1997.",
601
+ "url": null
602
+ }
603
+ },
604
+ {
605
+ "48": {
606
+ "title": "A strongly polynomial algorithm to solve combinatorial linear\nprograms.",
607
+ "author": "\u00c9va Tardos.",
608
+ "venue": "Operations Research, 34(2):250\u2013256, 1986.",
609
+ "url": null
610
+ }
611
+ },
612
+ {
613
+ "49": {
614
+ "title": "Learning to route.",
615
+ "author": "Asaf Valadarsky, Michael Schapira, Dafna Shahaf, and Aviv Tamar.",
616
+ "venue": "In ACM HotNets, 2017.",
617
+ "url": null
618
+ }
619
+ },
620
+ {
621
+ "50": {
622
+ "title": "Graph attention networks.",
623
+ "author": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero,\nPietro Li\u00f2, and Yoshua Bengio.",
624
+ "venue": "In ICLR, 2018.",
625
+ "url": null
626
+ }
627
+ },
628
+ {
629
+ "51": {
630
+ "title": "Seaborn: statistical data visualization.",
631
+ "author": "Michael L. Waskom.",
632
+ "venue": "Journal of Open Source Software, 6(60):3021, 2021.",
633
+ "url": null
634
+ }
635
+ },
636
+ {
637
+ "52": {
638
+ "title": "What can neural networks reason about?",
639
+ "author": "Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and\nStefanie Jegelka.",
640
+ "venue": "In ICLR, 2020.",
641
+ "url": null
642
+ }
643
+ },
644
+ {
645
+ "53": {
646
+ "title": "Experience-driven networking: A deep reinforcement learning based\napproach.",
647
+ "author": "Zhiyuan Xu, Jian Tang, Jingsong Meng, Weiyi Zhang, Yanzhi Wang, Chi Harold Liu,\nand Dejun Yang.",
648
+ "venue": "In IEEE INFOCOM, 2018.",
649
+ "url": null
650
+ }
651
+ },
652
+ {
653
+ "54": {
654
+ "title": "CFR-RL: Traffic engineering with reinforcement learning in SDN.",
655
+ "author": "Junjie Zhang, Minghao Ye, Zehua Guo, Chen-Yu Yen, and H. Jonathan Chao.",
656
+ "venue": "IEEE Journal on Selected Areas in Communications, 38(10):2249\u20132259, 2020.",
657
+ "url": null
658
+ }
659
+ }
660
+ ],
661
+ "url": "http://arxiv.org/html/2209.05208v3"
662
+ }
20240318/2209.12605v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2210.05279v2.json ADDED
@@ -0,0 +1,597 @@
1
+ {
2
+ "title": "Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity",
3
+ "abstract": "constrained optimization is prevalent in machine learning, particularly for high-dimensional problems, because it is a fundamental approach to achieve sparse learning. Hard-thresholding gradient descent is a dominant technique to solve this problem. However, first-order gradients of the objective function may be either unavailable or expensive to calculate in many real-world problems, where zeroth-order (ZO) gradients could be a good surrogate. Unfortunately, whether ZO gradients can work with the hard-thresholding operator is still an unsolved problem.\nTo solve this puzzle, in this paper, we focus on the constrained black-box stochastic optimization problems, and propose a new stochastic zeroth-order gradient hard-thresholding (SZOHT) algorithm with a general ZO gradient estimator powered by a novel random support sampling. We provide the convergence analysis of SZOHT under standard assumptions. Importantly, we reveal a conflict between the deviation of ZO estimators and the expansivity of the hard-thresholding operator, and provide a theoretical minimal value of the number of random directions in ZO gradients. In addition, we find that the query complexity of SZOHT is independent or weakly dependent on the dimensionality under different settings. Finally, we illustrate the utility of our method on a portfolio optimization problem as well as black-box adversarial attacks.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "",
9
+ "text": "constrained optimization is prevalent in machine learning, particularly for high-dimensional problems, because it is a fundamental approach to achieve sparse learning. In addition to improving the memory, computational and environmental footprint of the models, these sparse constraints help reduce overfitting and obtain consistent statistical estimation Yuan and Li (2021 ###reference_b46###); B\u00fchlmann and Van De Geer (2011 ###reference_b5###); Raskutti et al. (2011 ###reference_b33###); Negahban et al. (2012 ###reference_b29###).\nWe formulate the problem as follows:\nwhere is a differentiable function and is a noise term, for instance related to an underlying finite sum structure in , of the form: .\nHard-thresholding gradient algorithm Jain et al. (2014 ###reference_b17###); Nguyen et al. (2017 ###reference_b31###); Yuan et al. (2017 ###reference_b45###) is a dominant technique to solve this problem. It generally consists in alternating between a gradient step, and a hard-thresholding operation which only keeps the -largest components (in absolute value) of the current iterate. The advantage of hard-thresholding over its convex relaxations (Tibshirani (1996 ###reference_b39###); Van de Geer (2008 ###reference_b41###)) is that it can often attain similar precision, but is more computationally efficient, since it can directly ensure a desired sparsity level instead of tuning an penalty or constraint. The only expensive computation in hard-thresholding is the hard-thresholding step itself, which requires finding the top elements of the current iterate. Hard-thresholding was originally developed in its full gradient form Jain et al. (2014 ###reference_b17###), but has been later on extended to the stochastic setting by Nguyen et al. (2017 ###reference_b31###), which developed a stochastic gradient descent (SGD) version of hard thresholding (StoIHT), and further more with Zhou et al. 
(2018 ###reference_b47###), Shen and Li (2017 ###reference_b36###) and Li et al. (2016 ###reference_b20###), which used variance reduction technique to improve upon StoIHT.\nThe definition of Restricted Strong Convexity from Cai et al. (2022 ###reference_b7###) is different from ours and that of Nguyen et al. (2017 ###reference_b31###), hence the bis subscript.\nWe refer to the modified version of ZSCG (Algorithm 3 in Balasubramanian and Ghadimi (2018 ###reference_b2###)).\nRSPGF and ZORO minimize : only needs to be smooth.\nHowever, the first-order gradients used in the above methods may be either unavailable or expensive to calculate in a lot of real-world problems. For example, in certain graphical modeling tasks Wainwright et al. (2008 ###reference_b42###), obtaining the gradient of the objective function is computationally hard. Even worse, in some settings, the gradient is inaccessible by nature, for instance in bandit problems Shamir (2017 ###reference_b35###), black-box adversarial attacks Tu et al. (2019 ###reference_b40###); Chen et al. (2017 ###reference_b9###, 2019 ###reference_b10###), or reinforcement learning Salimans et al. (2017 ###reference_b34###); Mania et al. (2018 ###reference_b27###); Choromanski et al. (2020 ###reference_b11###). To tackle those problems, ZO optimization methods have been developed Nesterov and Spokoiny (2017 ###reference_b30###). Those methods usually replace the inaccessible gradient by its finite difference approximation which can be computed only from function evaluations, following the idea that for a differentiable function , we have: . Later on, ZO methods have been adapted to deal with a convex constraints set, and can therefore be used to solve the convex relaxation of problem (1 ###reference_###). To that end, Ghadimi et al. (2016 ###reference_b14###), and Cai et al. (2022 ###reference_b7###) introduce proximal ZO algorithms, Liu et al. 
(2018b ###reference_b24###) introduce a ZO projected gradient algorithm and Balasubramanian and Ghadimi (2018 ###reference_b2###) introduce a ZO conditional gradient Levitin and Polyak (1966 ###reference_b19###) algorithm. We provide a review of those results in Table 1 ###reference_###. As can be seen from the table, their query complexity is high (linear in ), except Cai et al. (2022 ###reference_b7###) that has a complexity of , but assumes that gradients are sparse. In addition, those methods must introduce a hyperparameter (the strength of the penalty) or (the radius of the ball), which need to be tuned to find which value ensures the right sparsity level. Therefore, it would be interesting to use the hard-thresholding techniques described in the previous paragraph, instead of those convex relaxations.\nUnfortunately, ZO hard-thresholding gradient algorithms have not been exploited formally. Even more, whether ZO gradients can work with the hard-thresholding operator is still an unknown problem. Although there was one related algorithm proposed recently by Balasubramanian and Ghadimi (2018 ###reference_b2###), they did not target constrained optimization problem and importantly have strong assumptions in their convergence analysis. Indeed, they assume that the gradients, as well as the solution of the unconstrained problem, are -sparse: and , where . In addition, it was recently shown by Cai et al. (2022 ###reference_b7###) that they must in fact assume that the support of the gradient is fixed for all , for their convergence result to hold, which is a hard limitation, since that amounts to say that the function depends on fixed variables.\nTo fill this gap, in this paper, we focus on the constrained black-box stochastic optimization problems, and propose a novel stochastic zeroth-order gradient hard-thresholding (SZOHT) algorithm. 
Specifically, we propose a dimension friendly ZO gradient estimator powered by a novel random support sampling technique, and then embed it into the standard hard-thresholding operator.\nWe then provide the convergence and complexity analysis of SZOHT under the standard assumptions of sparse learning, which are restricted strong smoothness (RSS), and restricted strong convexity (RSC) Nguyen et al. (2017 ###reference_b31###); Shen and Li (2017 ###reference_b36###), to retain generality, therefore providing a positive answer to the question of whether ZO gradients can work with the hard-thresholding operator. Crucial to our analysis is to provide carefully tuned requirements on the parameters (the number of random directions used to estimate the gradient, further defined in Section 3.1 ###reference_###) and . Finally, we illustrate the utility of our method on a portfolio optimization problem as well as black-box adversarial attacks, by showing that our method can achieve competitive performance in comparison to state of the art methods for sparsity-enforcing zeroth-order algorithm described in Table 1 ###reference_###, such as Ghadimi et al. (2016 ###reference_b14###); Balasubramanian and Ghadimi (2018 ###reference_b2###); Cai et al. (2022 ###reference_b7###).\nImportantly, we also show that in the smooth case, the query complexity of SZOHT is independent of the dimensionality, which is significantly different to the dimensionality dependent results for most existing ZO algorithms. Indeed, it is known from Jamieson et al. (2012 ###reference_b18###) that the worst case query complexity of ZO optimization over the class of -strongly convex and -smooth functions defined over a convex set is linear in . Our work is thus in line with other works achieving dimension-insensitive query complexity in zeroth-order optimization such as Golovin et al. (2019 ###reference_b15###); Sokolov et al. (2018 ###reference_b37###); Wang et al. (2018 ###reference_b44###); Cai et al. 
(2022 ###reference_b7###, 2021 ###reference_b6###); Balasubramanian and Ghadimi (2018 ###reference_b2###); Liu and Yang (2021 ###reference_b22###); Jamieson et al. (2012 ###reference_b18###), but contrary to those, instead of making further assumptions (i.e. restricting the class to a smaller class), we bypass the impossibility result by replacing the convex feasible set by a non-convex set (the ball), which is how we can avoid making stringent assumptions on the class of functions."
10
+ },
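The finite-difference idea invoked in the section above (approximating a directional derivative from two function evaluations) can be sketched as follows. This is a generic minimal estimator of our own, not the paper's exact one: the forward-difference form, the uniform-sphere directions, and the names `mu` (smoothing radius) and `q` (number of directions) are illustrative assumptions.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, q=2000, rng=None):
    """Zeroth-order gradient estimate of f at x: average q forward
    finite differences along random unit directions on the sphere.
    The d/q scaling makes each term unbiased for the gradient of a
    smoothed version of f."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)              # uniform unit direction
        g += (f(x + mu * u) - fx) / mu * u  # directional derivative estimate
    return (d / q) * g

# sanity check on a quadratic, where the true gradient is 2x
f = lambda x: float(x @ x)
x = np.array([1.0, -2.0, 3.0])
g_hat = zo_gradient(f, x, rng=np.random.default_rng(0))
```

Note that only function values of `f` are queried, which is exactly the black-box access model discussed above; the accuracy improves as `q` grows and `mu` shrinks.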
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "",
15
+ "text": "Throughout this paper, we denote by the Euclidean norm for a vector , by the maximum absolute component of that vector, and by the norm (which is not a proper norm). For simplicity, we denote . We call (resp. ) the vector which sets all coordinates of (resp. ) to . We also denote by the solution of problem (1 ###reference_###) defined in the introduction, for some target sparsity which could be smaller than . To derive our result, we will need the following assumptions on .\nis said to be restricted strongly convex with sparsity parameter if it is differentiable, and there exists a generic constant such that for all with :\nFor almost any , is said to be restricted smooth with sparsity level , if it is differentiable, and there exists a generic constant such that for all with :\nis said to have -finite gradient noise if for almost any , is differentiable and the gradient noise defined below is finite:\nEven though the original version of Gower et al. (2019 ###reference_b16###) uses the norm, we use the norm here, in order to give more insightful results in terms of and , as is done classically in optimization, similarly to Zhou et al. (2018 ###reference_b47###). We also note that in Gower et al. (2019 ###reference_b16###), denotes an unconstrained minimum when in our case it denotes the constrained minimum for some sparsity .\nFor Corollary 2 ###reference_ollary2###, we will also need the more usual smoothness assumption:\nFor almost any , is said to be smooth, if it is differentiable, and for all :"
16
+ },
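The math in the extracted definitions above was lost during conversion. The following LaTeX restates restricted strong convexity, restricted strong smoothness, and finite gradient noise in their standard form from the sparse-optimization literature; the symbols ($f$, $\nu_s$, $L_s$, $\sigma^2$, sparsity level $s$, noise $\xi$) are our own notation, and the paper's exact norms and constants may differ.

```latex
% Restricted strong convexity (RSC) at sparsity level s:
% for all x, y with \|x - y\|_0 \le s,
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle
            + \tfrac{\nu_s}{2}\, \| y - x \|_2^2 .

% Restricted strong smoothness (RSS) at sparsity level s,
% for almost any realization \xi of the noise:
\| \nabla f(x; \xi) - \nabla f(y; \xi) \|_2 \;\le\; L_s\, \| x - y \|_2
\quad \text{for all } x, y \text{ with } \|x - y\|_0 \le s .

% \sigma^2-finite gradient noise (FGN) at the sparse minimizer x^*
% (the remark above notes a non-standard norm choice in this definition):
\sigma^2 \;=\; \mathbb{E}_{\xi}\, \| \nabla f(x^*; \xi) \|^2 \;<\; \infty .
```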
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "",
27
+ "text": "In this section, we describe our zeroth-order gradient estimator. It is basically composed of a random support sampling step, followed by a random direction with uniform smoothing on the sphere supported by this support. We also use the technique of averaging our estimator over dimensions, as described in Liu et al. (2020 ###reference_b25###). More formally, our gradient estimator is described below:\nwhere each random direction is a unit vector sampled uniformly from the set . We can obtain such vectors by sampling first a random support (i.e. a set of coordinates) of size from , (denoted as in Algorithm 1 ###reference_thm1###) and then by sampling a random unit vector supported on that support , that is, uniformly sampled from the set , (denoted as ) in algorithm 1 ###reference_thm1###). The original uniform smoothing technique on the sphere is described in more detail in Gao et al. (2018 ###reference_b12###). However, in our case, the sphere along which we sample is restricted to a random support of size .\nOur general estimator, through the setting of the variable , can take several forms, which are similar to pre-existing gradient estimators from the literature described below:\nIf , is the usual vanilla estimator with uniform smoothing on the sphere Gao et al. (2018 ###reference_b12###).\nIf , our estimator is similar to the Random Block-Coordinate gradient estimator from Lian et al. (2016 ###reference_b21###), except that the blocks are not fixed at initialization but chosen randomly, and that we use a uniform smoothing with forward difference on the given block instead of a coordinate-wise estimation with central difference. This random support technique allows us to give a convergence analysis under the classical assumptions of the hard-thresholding literature (see Remark 3 ###reference_ark3###), and to deal with huge scale optimization, when sampling uniformly from a unit -sphere is costly Cai et al. 
(2022 ###reference_b7###, 2021 ###reference_b6###): in the distributed setting for instance, each worker would just need to sample an -sparse random vector, and only the centralized server would materialize the full gradient approximation containing up to non-zero entries.\nError Bounds of the Zeroth-Order Estimator. We now derive error bounds on the gradient estimator, which will be useful in the convergence rate proof, except that we consider only the restriction to some support (that is, we consider a subset of components of the gradient/estimator). Indeed, proofs in the hard-thresholding literature (see for instance Yuan et al. (2017 ###reference_b45###)), are usually written only on that support. That is the key idea which explains how the dimensionality dependence is reduced when doing SZOHT compared to vanilla ZO optimization. We give more insight on the shape of the original distribution of gradient estimators, and the distribution of their projection onto a hyperplane in Figure 5 ###reference_### in Appendix E ###reference_###. We can observe that even if the original gradient estimator is poor, in the projected space, the estimation error is reduced, which we quantify in the proposition below.\n(Proof in Appendix C.3 ###reference_### ) Let us consider any support of size (). For the ZO gradient estimator in (2 ###reference_###), with random directions, and random supports of size , and assuming that each is -RSS, we have, with denoting the hard thresholding of the gradient on (that is, we set all coordinates not in to ):"
28
+ },
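The random support sampling described above can be sketched as follows, under assumed notation: `q` random directions, random supports of size `s2`, and smoothing radius `mu` are our names for quantities the extracted text dropped, and the scaling constant is chosen here for unbiasedness and may differ from the paper's estimator in (2).

```python
import numpy as np

def szoht_gradient(f, x, s2, q, mu=1e-4, rng=None):
    """Sketch of a random-support ZO estimator: average q forward finite
    differences, each along a unit direction supported on a random
    coordinate subset of size s2 (random support sampling)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    fx = f(x)
    g = np.zeros(d)
    for _ in range(q):
        support = rng.choice(d, size=s2, replace=False)  # random support
        u = np.zeros(d)
        u[support] = rng.standard_normal(s2)
        u /= np.linalg.norm(u)   # unit vector on the sphere of that support
        g += (f(x + mu * u) - fx) / mu * u
    return (d / q) * g  # d/q makes each term unbiased for a smoothed gradient

# sanity check on a quadratic (true gradient 2x): each query perturbs only
# s2 coordinates of x, yet the average recovers the full gradient
f = lambda x: float(x @ x)
x = np.array([1.0, -1.0, 2.0, 0.5])
g_hat = szoht_gradient(f, x, s2=2, q=4000, rng=np.random.default_rng(0))
```

Each random direction is `s2`-sparse, which matches the huge-scale motivation above: a worker never has to draw or store a dense random vector.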
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "",
33
+ "text": "We now present our full algorithm to optimize problem 1 ###reference_###, which we name SZOHT (Stochastic Zeroth-Order Hard Thresholding). Each iteration of our algorithm is composed of two steps: (i) the gradient estimation step, and (ii) the hard thresholding step, where the gradient estimation step is the one described in the section above, and the hard-thresholding is described in more detail in the following paragraph. We give the full formal description of our algorithm in Algorithm 1 ###reference_thm1###.\nIn the hard thresholding step, we only keep the largest (in magnitude) components of the current iterate . This ensures that all our iterates (including the last one) are -sparse. This hard-thresholding operator has been studied for instance in Shen and Li (2017 ###reference_b36###), and possesses several interesting properties. Firstly, it can be seen as a projection on the ball. Second, importantly, it is not non-expansive, contrary to other operators like the soft-thresholding operator Shen and Li (2017 ###reference_b36###). That expansivity plays an important role in the analysis of our algorithm, as we will see later.\nCompared to previous works, our algorithm can be seen as a variant of Stochastic Hard Thresholding (StoIHT from Nguyen et al. (2017 ###reference_b31###))\n, where we replaced the true gradient of by the estimator . It is also very close to Algorithm 5 from Balasubramanian and Ghadimi (2018 ###reference_b2###) (Truncated-ZSGD), with just a different zeroth-order gradient estimator: we use a uniform smoothing, random-block estimator, instead of their gaussian smoothing, full support vanilla estimator. This allows us to deal with very large dimensionalities, in the order of millions, similarly to Cai et al. (2021 ###reference_b6###). 
Furthermore, as described in the Introduction, contrary to Balasubramanian and Ghadimi (2018 ###reference_b2###), we provide the analysis of our algorithm without any gradient sparsity assumption.\nThe key challenge arising in our analysis is described in Figure 1 ###reference_###: the hard-thresholding operator being expansive Shen and Li (2017 ###reference_b36###), each approximate gradient step must approach the solution enough to stay close to it even after hard-thresholding. Therefore, it is a priori unclear whether the zeroth-order estimate can be accurate enough to guarantee the convergence of SZOHT. Hopefully, as we will see in the next section, we can indeed ensure convergence, as long as we carefully choose the value of .\n###figure_1###"
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "",
39
+ "text": "In this section, we provide the convergence analysis of SZOHT, using the assumptions from Section 2 ###reference_###, and discuss an interesting property of the combination of the zeroth-order gradient estimate and the hard-thresholding operator, providing a positive answer to the question from the previous section.\n(Proof in Appendix D.1 ###reference_###)\nAssume that each is -RSS, and that is -RSC and -FGN, with , with , with defined as below. Suppose that we run SZOHT with random supports of size , random directions, a learning rate of , and coordinates kept at each iteration. Then, we have a geometric convergence rate, of the following form, with denoting the -iterate of SZOHT:\nThe format of our result is similar to the ones in Yuan et al. (2017 ###reference_b45###) and Nguyen et al. (2017 ###reference_b31###), in that it contains a linear convergence term, and a system error which depends on the expected norm of the gradient at (through the variable ). We note that if has a -sparse unconstrained minimizer, which could happen in sparse reconstruction, or with overparameterized deep networks (see for instance (Peste et al., 2021 ###reference_b32###, Assumption (2))), then we would have , and hence that part of the system error would vanish. In addition to that usual system error, we also have here another system error, which depends on the smoothing radius , due to the error from the ZO estimate.\nIf we take , the first assumption of Theorem 1 ###reference_orem1### becomes the requirement that is -RSS. Therefore, SZOHT as well as the theorem above provides, to the best of our knowledge, the first algorithm that can work in the usual setting of hard-thresholding algorithms (that is, -RSS and -RSC Nguyen et al. 
(2017 ###reference_b31###); Shen and Li (2017 ###reference_b36###)), as well as its convergence rate.\nInterplay between hard-thresholding and zeroth-order error\nImportantly, contrary to previous works in ZO optimization, must be chosen carefully here, due to our specific setting combining ZO and hard-thresholding. Indeed, as described in Shen and Li (2017 ###reference_b36###), the hard-thresholding operator is not non-expansive (contrary to projection onto the ball) so it can drive the iterates away from the solution. Therefore, enough descent must be made by the (approximate) gradient step to get close enough to the solution, and it is therefore crucial to limit errors in gradient estimation. This problem arises with any kind of gradient errors: for instance with SGD errors Nguyen et al. (2017 ###reference_b31###); Zhou et al. (2018 ###reference_b47###), it is generally dealt with either by ensuring some conditions on the function Nguyen et al. (2017 ###reference_b31###), forming bigger batches of samples Zhou et al. (2018 ###reference_b47###), and/or considering a larger number of components kept in hard-thresholding (to make the hard-thresholding less expansive). In our work, similarly to Zhou et al. (2018 ###reference_b47###), we deal with this problem by relaxing and sampling more directions (which is the ZO equivalent to taking bigger batch-size in SGD). However, there is an additional effect that happens in our case, specific to ZO estimation: as described in Proposition 1 ###reference_position1###, the quality of our estimator also depends on . Therefore, it may be hard to make the algorithm converge only by considering larger : higher means less expansivity (which helps convergence), but worse gradient estimate (which harms convergence). 
We further illustrate this conflict between the non-expansiveness of hard-thresholding (quantified by the parameter Shen and Li (2017 ###reference_b36###)), and the error from the zeroth-order estimate, in Figure 1 ###reference_###. Therefore, it is even more crucial to precisely tune the remaining degree of freedom at hand, which is . More precisely, a minimal value of is always necessary to ensure convergence in our setting, contrary to most ZO settings (in which taking even can work, as long as other constants like are well chosen, see for instance (Liu et al., 2018a ###reference_b23###, Corollary 3)). The remark below gives some necessary conditions on to illustrate that fact.\nLet and assume that is such that (which ensures that ), and that . These conditions imply the following necessary (but not sufficient) condition on :\nif :\nif :\nRemark 4 ###reference_ark4### is just a warning that the usual rules from ZO do not apply to SZOHT, but it does not say how to choose to ensure convergence: for that, we would need some sufficient conditions on for Theorem 1 to apply. We give such conditions in the next section."
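To make the interplay above concrete, the following is a minimal NumPy sketch of an SZOHT-style iteration: a zeroth-order gradient estimate averaged over q random directions supported on s2 random coordinates, followed by hard thresholding of the top k components. All names (`hard_threshold`, `zo_gradient`, `szoht`), the parameter values, and the d-scaling of the two-point estimator are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x; zero out the rest.
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def zo_gradient(f, x, s2, q, mu, rng):
    # Average q two-point finite-difference estimates, each along a random
    # unit direction supported on s2 randomly chosen coordinates.
    d = x.size
    g = np.zeros(d)
    for _ in range(q):
        support = rng.choice(d, size=s2, replace=False)
        u = np.zeros(d)
        u[support] = rng.standard_normal(s2)
        u /= np.linalg.norm(u)
        g += d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / q

def szoht(f, x0, k, eta, s2, q, mu=1e-4, iters=300, seed=0):
    # ZO gradient step followed by hard thresholding at each iteration.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        x = hard_threshold(x - eta * zo_gradient(f, x, s2, q, mu, rng), k)
    return x
```

As the section discusses, too few directions q (a noisy gradient estimate) combined with aggressive thresholding (small k) can make such an iteration diverge, which is why q cannot be taken arbitrarily small here.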
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "",
+ "text": "In this section, we provide Corollaries 1 ###reference_ollary1### and 2 ###reference_ollary2###, following from Theorem 1 ###reference_orem1###, which give an example of that is sufficient to converge (that is, to obtain in Theorem 1 ###reference_orem1###), and that achieves weak dimensionality dependence in the case of RSS, and complete dimension independence in the case of smoothness.\nAssume that that almost all are -RSS, and that is -RSC and -FGN, with , with (with ) . Suppose that we run SZOHT with random support of size , a learning rate of , with coordinates kept at each iterations by the hard-thresholding, and with . Then, we have a geometric convergence rate, of the following form, with denoting the -iterate of SZOHT:\nwith , and are defined in (4 ###reference_###), and .\nTherefore, the query complexity (QC) to ensure that is .\nWe now turn to the case where the functions are smooth. The key result in that case is that we can have a query complexity independent of the dimension , which is, up to our knowledge, the first result of such kind for sparse zeroth-order optimization without assuming any gradient sparsity.\nAssume that, in addition to the conditions from Corollary 1 ###reference_ollary1### above, almost all are -smooth, with (with ), and take , and (that is, no random support sampling). Then, we have a geometric convergence rate, of the following form, with denoting the -iterate of SZOHT:\nTherefore, the QC to ensure that is .\nAdditionally, our convergence rate highlights an interesting connection between the geometry of (defined by the condition number ), and the number of random directions that we need to take at each iteration: if the problem is ill-conditioned, that is is high, then we need a bigger . This result is standard in the litterature (see for instance Yuan et al. (2017 ###reference_b45###)). 
But specifically, in our ZO case, it also impacts the query complexity: since the projected gradient is harder to approximate when the dimension of the projection is larger, needs to grow too, resulting in higher query complexity. We believe this is an interesting result for the sparse zeroth-order optimization community: it reveals that the query complexity may in fact depend on some notion of intrinsic dimension of the problem, related to both the sparsity of the iterates , and the geometry of the function for a given (through the restricted condition number ), rather than the dimension of the original space as in previous works like Ghadimi et al. (2016 ###reference_b14###)."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "",
+ "text": "We first conduct a sensitivity parameter analysis on a toy example, to highlight the importance of the choice of , as discussed in Section 4 ###reference_###. We fix a target sparsity , choose , and consider a sparse quadric function , with: ( denotes the elementwise product), with if and otherwise (to ensure is -RSC and smooth, with ), and for all and for all (we make such a choice in order to have small enough). We choose as in Theorem 1 ###reference_orem1###: with defined in Proposition 1 ###reference_position1### in terms of and (we take ), , and present our results in Figure 4 ###reference_###, for six values of . We can observe on Figure 2(b) ###reference_sf2### that the smaller the , the less can descend. Interestingly, we can also see on Figure 2(a) ###reference_sf1### that for and , diverges: we can indeed compute that for those , which explains the divergence, from Theorem 1 ###reference_orem1###."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "",
+ "text": "We compare our SZOHT algorithms with state of the art zeroth-order algorithms that can deal with sparsity constraints, that appear in Table 1 ###reference_###:\nZSCG Balasubramanian and Ghadimi (2018 ###reference_b2###) is a Frank-Wolfe ZO algorithm, for which we consider an ball constraint.\nRSPGF Ghadimi et al. (2016 ###reference_b14###) is a proximal ZO algorithm, for which we consider an penalty.\nZORO Cai et al. (2022 ###reference_b7###) is a proximal ZO algorithm, that makes use of sparsity of gradients assumptions, using a sparse reconstruction algorithm at each iteration to reconstruct the gradient from a few measurements. Similarly, as for ZSCG, we consider an penalty.\nIn all the applications below, we will tune the sparsity of SZOHT, the penalty of RSPGF and ZORO, and the radius of the constraint of ZSCG, such that all algorithms attain a similar converged objective value, for fair comparison."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "",
+ "text": "We compare the algorithms above on two tasks: a sparse asset risk management task from Chang et al. (2000 ###reference_b8###), and an adversarial attack task Chen et al. (2017 ###reference_b9###) with a sparsity constraint.\nWe consider the portfolio management task and dataset from Chang et al. (2000 ###reference_b8###), similarly to Cai et al. (2022 ###reference_b7###). We have a given portfolio of assets, with each asset giving an expected return , and with a global covariance matrix of the return of assets denoted as . The cost function we minimize is the portfolio risk: , where is a vector where each component denotes how much is invested in each asset, and we require to minimize it under a constraint of minimal return : . We enforce that constraint using the Lagrangian form below. Finally, we add a sparsity constraint, to restrict the investments to only assets. Therefore, we obtain the cost function below:\nWe use three datasets: port3, port4 and port5 from the OR-library Beasley (1990 ###reference_b3###), of respective dimensions . We keep and the same for the 4 algorithms: , (for port3 and port4); and , for port5. For SZOHT, we set , , , and for port4, and for port5 ( and are both obtained by grid search over the interval ). For all other algorithms, we got the optimal hyper-parameters through grid search. We present our results in Figure 4 ###reference_###.\nWe consider the problem of adversarial attacks with a sparse constraint. Our goal is to minimize such that , where is the Carlini-Wagner cost function Chen et al. (2017 ###reference_b9###), that is computed from the outputs of a pre-trained model on the corresponding dataset. We consider three different datasets for the attacks: MNIST, CIFAR, and Imagenet, of dimension respectively . All algorithms are initialized with . We set the hyperparameters of SZOHT as follows: MNIST: , , , , ; CIFAR: , , , , ; ImageNet: , , , , . 
We present our results in Figure 4 ###reference_###.\nAll experiments are conducted on a workstation with four NVIDIA RTX A6000 GPUs, and take about one day to run."
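As a concrete reference for the portfolio task described above, here is a minimal sketch of a penalized objective: the portfolio risk w^T Sigma w, plus a hinge-squared penalty when the expected return falls below the target. The penalty weight `rho` and the quadratic hinge form are illustrative assumptions standing in for the stripped Lagrangian form; the sparsity constraint is handled by the optimizer (hard thresholding), not inside the cost.

```python
import numpy as np

def portfolio_cost(w, Sigma, r, r_min, rho):
    # Portfolio risk w^T Sigma w, plus a penalty that is active only
    # when the expected return r^T w falls below the target r_min.
    risk = float(w @ Sigma @ w)
    shortfall = max(0.0, r_min - float(r @ w))
    return risk + rho * shortfall ** 2
```

A zeroth-order method only ever evaluates this function, never its gradient, which is what makes the setting above applicable.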
+ },
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "",
+ "text": "We can observe from Figures 4 ###reference_### and 4 ###reference_### that the performance of SZOHT is comparable or better than the other algorithms. This can be explained by the fact that SZOHT has a linear convergence, but the query complexity of ZSCG and RSPGF is in . We can also notice that RSPGF is faster than ZSCG, which is natural since proximal algorithms are faster than Frank-Wolfe algorithms (indeed, in case of possible strong-convexity, vanilla Frank-Wolfe algorithms maintain a rate Garber and Hazan (2015 ###reference_b13###), when proximal algorithms get a linear rate (Beck, 2017 ###reference_b4###, Theorem 10.29)). Finally, it appears that the convergence of ZORO is sometimes slower, particularly at the early stage of training, which may come from the fact that ZORO assumes sparse gradients, which is not necessarily verified in real-world use cases like the ones we consider; in those cases where the gradient is not sparse, it is possible that the sparse gradient reconstruction step of ZORO does not work well. This motivates even further the need to consider algorithms able to work without those assumptions, such as SZOHT.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9###"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "In this paper, we proposed a new algorithm, SZOHT, for sparse zeroth-order optimization. We gave its convergence analysis and showed that it is dimension independent in the smooth case, and weak dimension-dependent in the RSS case. We further verified experimentally the efficiency of SZOHT in several settings. Moreover, throughout the paper, we showed how the condition number of as well as the gradient error have an important impact on the convergence of SZOHT. As such, it would be interesting to study whether we can improve the query complexity by regularizing , by using an adaptive learning rate or acceleration methods, or by using recent variance reduction techniques. Finally, it would also be interesting to extend this work to a broader family of sparse structures, such as low-rank approximations or graph sparsity. We leave this for future work."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "Throughout this appendix, we will use the following notations:\nwe denote the vectors in bold letters.\ndenotes the gradient of at .\ndenotes the set of all integers between and : .\ndenotes the -th coordinate of vector , and the -th coordinate of .\ndenotes the norm (which is not a proper norm).\ndenotes the norm.\ndenotes the maximum absolute component of a vector.\ndenotes that the random variable (denoted as ), of realization , follows a probability distribution (we abuse notation by denoting similarly a random variable and its realization).\ndenotes that we draw i.i.d. samples of a random variable , each from the distribution .\ndenotes the value of the probability of according to its probability distribution.\n(or simply if there is no possible confusion) to denote the expectation of which follows the distribution .\nWe denote by the support of a vector , that is the set of its non-zero coordinates.\nthe cardinality (number of elements) of a set .\nAll the sets we consider are subsets of . So for a given set , denotes the complement of in\n(or for simplicity if ) denotes the -sphere of radius , that is .\nthe uniform distribution on that unit sphere.\nis the surface area of the unit -sphere defined above.\ndenotes a set that we call the restricted -sphere on , described as: , that is the set of unit vectors supported by .\ndenotes the uniform distribution on that restricted sphere above.\nWe denote by (resp. ) the hard-thresholding of (resp. ) over the support , that is, a vector which keeps (resp. 
) untouched for the set of coordinates in , but sets all other coordinates to .\ndenotes the set of all subsets of that contain elements: .\ndenotes the uniform distribution on the set above.\ndenotes the identity matrix .\ndenotes the identity matrix with 1 on the diagonal only at indices belonging to the support : , and 0 elsewhere.\ndenotes that set contains the element .\ndenotes the -uple of elements .\ndenotes the Gamma function Arfken and Weber (1999 ###reference_b1###).\ndenotes the integral of over the set .\ndenotes the natural logarithm (in base )."
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "Let , and denote , we have:\nThe proof is given in Sykora (2005 ###reference_b38###).\n\u220e\nLet be a subset of , of size , with .\nWe have the following:\nWe start by proving (6 ###reference_###). Decomposing the norm onto every component, we get:\nBy symmetry, each has the same marginal probability distribution, so:\nWe also know, from the definition of the norm, and the fact that is a unit vector, that:\nTherefore, combining (9 ###reference_###) and (10 ###reference_###):\nPlugging this into (8 ###reference_###), we get (6 ###reference_###):\nUsing Jensen\u2019s inequality, (5 ###reference_###) follows from (6 ###reference_###).\nLet us now prove (7 ###reference_###). By definition of the expectation for a uniform distribution on the unit sphere:\nWe further develop the integral as follows:\nUsing Lemma B.1 ###reference_appxlem1### in the expression above, with , and , we obtain:\nWhere in (a) we used the fact that and . So:\nWhere (b) comes from the closed form for the area of a unit sphere: \n\u220e\nThe proof is given in Gao et al. (2018 ###reference_b12###).\n\u220e\nLet be an arbitrary -dimensional vector and be any -sparse vector. Denote , and the vector with all the smallest components set to 0 (that is, is the best -sparse approximation of ). Then, we have the following bound:\nThe proof is given in Shen and Li (2017 ###reference_b36###).\n\u220e\nWith the notations and variables above in Lemma B.4 ###reference_appxlem4###, we also have the following, simpler bound, from Yuan et al. (2017 ###reference_b45###):\nwith\nThere are two possibilities for in Lemma B.4 ###reference_appxlem4###: either (if ) or (if ). In the latter case:\nTherefore, in both cases, , which, plugging into Lemma B.4 ###reference_appxlem4###, gives Corollary B.1 ###reference_appxcor1###.\n\u220e"
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "With an abuse of notation, let us denote by any function for some given value of the noise .\nFirst, we derive in section C.1 ###reference_### the error of the gradient estimate if we sample only one direction (). Then, in section C.2 ###reference_###, we show how sampling directions reduces the error of the gradient estimator, producing the results of Proposition 1 ###reference_position1###.\nThroughout all this section, we assume that for the gradient estimator defined in (2 ###reference_###).\nWe now describe how sampling random directions improves the gradient estimate. Our proof is similar to the proof of Lemma 2 in Liu et al. (2018b ###reference_b24###), however we make sure that it works for our random support gradient estimator, and with our new expression in C.2 ###reference_appxlem2###, which depends on the two terms and . We express our results here in the form of a general lemma, depending only on the general bounding factors , , and defined below, in such a way that the proof of Proposition 1 ###reference_position1### follows immediately from plugging the results of Lemma C.1 ###reference_appxlem1### and C.2 ###reference_appxlem2### into Lemma C.3 ###reference_appxlem3### below.\nFor any -RSS function , we use the gradient estimator defined in (2 ###reference_###) with . Let us suppose that the estimator is such that for , it verifies the following bounds for some , , and in , for any support , with :\n(i) , and\n(ii) \nThen, the estimator also verifies, for arbitrary :\n(a) \n(b)\nLet us denote by the gradient estimate from (2 ###reference_###) along the i.i.d. sampled directions (we simplify it into if there is only one direction ).\nWe can first see that, since the random directions are independent identically distributed (i.i.d.) we have:\nThis proves C.3 ###reference_appxlem3### (a). Let us now turn to C.3 ###reference_appxlem3### (b).\nWe have:\nWhere (a) comes from the fact that the random directions are i.i.d. 
and (b) comes from assumptions (i) and (ii) of the current Lemma (Lemma C.3 ###reference_appxlem3###). Assumption (ii) also allows us to bound the last term above in the following way:\nPlugging (23 ###reference_###) into (22 ###reference_###), we obtain:\n\u220e\nProposition 1 ###reference_position1### (a) and (b) follow by plugging the values of , , and from Lemma C.1 ###reference_appxlem1### and Lemma C.2 ###reference_appxlem2### into Lemma C.3 ###reference_appxlem3###.\nProposition 1 ###reference_position1### (c) follows from the inequality , for and in with .\n\u220e"
+ },
+ {
+ "section_id": "Appendix 4",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "We will combine the proof from Yuan et al. (2017 ###reference_b45###) and Nesterov and Spokoiny (2017 ###reference_b30###), using ideas of the proof of Theorem 8 from Nesterov to deal with zeroth order gradient approximations, and ideas from the proof of Yuan et al. (2017 ###reference_b45###) (Theorem 2 and 5, Lemma 19), to deal with the hard thresholding operation in the convergence rate.\nLet us call an arbitrary learning rate, that will be fixed later in the proof.\nLet us call the following support , with . We have, for a given random direction and function noise , at a given timestep of SZOHT:\nTaking the expectation with respect to and to the possible random directions (that we denote with a simple , abusing notations) at step , we get:\nWhere (a) follows from the inequality for any . From Proposition 1 ###reference_position1### (b), since almost each is -RSS (hence also -RSS), we know that for the , and defined in Proposition 1 ###reference_position1### (b), we have for almost all : . This allows to develop the last term of (24 ###reference_###) into the following:\nJust like the proof in Yuan et al. (2017 ###reference_b45###), we will express our result in terms of the infinity norm of . For that, we will plug above the two following inequalites:\nSame as their proof of Lemma 19, we have (that is because we will have equality if the sets in the definition of , namely , and , are disjoints (because their cardinality is respectively , and ), but they may intersect). And we also have (by definition of the norm and of the norm). Similarly, we also have: , since .\nTherefore, we obtain:\nWhere (a) follows by observing in Proposition 1 ###reference_position1### (b) that , and using the definition of the Euclidean norm. 
Let us plug the above into (24 ###reference_###), and use the fact that, from Proposition 1 ###reference_position1### (a), since each is -RSS, it is also -RSS, so for the from Proposition 1 ###reference_position1### (a), we have, for almost any given : , and let us also use the fact that since each is -RSS , it is also -RSS (since ) which gives that for almost any : : , to finally obtain:\nSince is -RSC, it is also -RSC, since , therefore, we have: (this can be proven by adding together the definition of -RSC written respectively at ,, and at ,). Plugging this into the above:\nThe value of that minimizes the left term in is equal to (because the optimum of the quadratic function is attained in and its value is ). Let us choose it, that is, we fix . Let us now define the following :\nWe therefore have:\nWe can now use the fact that for all , as well as Jensen\u2019s inequality, to obtain:\nWe can now formulate a first decrease-rate type of result, before the hard thresholding operation, as follows, using for the value previously defined, and with:\nWhere (a) follows from the -FGN assumption. We now consider , that is, the best--sparse approximation of from the hard thresholding operation in SZOHT. We can notice that (because ), which gives . Since , the coordinates of the top magnitude components of are in , so they are also those of the top magnitude components of . Therefore, is also the best k-sparse approximation of . Therefore, using Corollary B.1 ###reference_appxcor1###, we obtain:\nwith:\nWhere . Plugging this into (26 ###reference_###) gives:\nThis will allow us to obtain the following final result:\nwith and .\nWe need to have in order to have a contraction at each step. Let us suppose that : we will show that this value for allows to verify that condition on . That implies . 
We then have, from the definition of in (27 ###reference_###):\nTherefore, we indeed have when choosing .\nUnrolling inequality (28 ###reference_###) through time, we then have, at iteration , and denoting by the noise drawn at time step and the random directions chosen at time step , from the law of total expectations:\nWhere the last inequality follows from the fact that .\n\u220e\nWe show below that, due to the complex impact of and on the convergence analysis in our ZO + HT (hard-thresholding) setting (compared to ZO only), cannot be taken as small as we want here (in particular we can never take , which is different from classical ZO algorithms such as (Liu et al., 2018a ###reference_b23###, Corollary 3)), if we want Theorem 1 to apply with . In other words, there is a necessary (but not sufficient) minimal (i.e. ) value for .\nA necessary condition for Theorem 1 to describe convergence of SZOHT is that . From the expressions of and We have , and . We recall those expressions below:\n\n with .\nwith:\n, with (we consider the smallest possible from Theorem 1 ###reference_orem1###)\nSo therefore:\nLet us define and \nWe then have:\nTo ensure convergence, we need to have , therefore (following the same derivation as in (29 ###reference_###)) a necessary condition that we need to verify is .\nWhich means we need:\nIf we want that there exist a such that this is true, we need (since ):\nwith:\nLet us express and in terms of , as:\nSo plugging in (34 ###reference_###), what we need is:\nTo ensure that, we need to compute , defined as:\nWe now have:\nTherefore, there is a minimal value for , and it is:\nWith:\nCase : Assuming gives , and since\n and \nThis gives: \nTherefore:\n\nwith , which reads:\nCase : In the case , we have , so therefore, from (38 ###reference_###), , so the necessary condition on as above so that there exist such that: does not apply here.\nWe may therefore think that it may be possible to take in that case. 
However, there is another condition on that should also be enforced, which is that (since we cannot keep more components than ). And in that case, we have , and (from (35 ###reference_###) and (36 ###reference_###)). Now, enforcing the condition leads to the following chain of implications (i.e. each downstream assertion is a necessary condition for the upstream assertion):\nWhere the last inequality follows from the expression of in (36 ###reference_###) when .\nSo the right hand side in (40 ###reference_###) is also a minimal necessary value for in this case, though for a different reason than in the case .\n\u220e\nWe first restrict the result of Theorem 1 ###reference_orem1### to a particular . By inspection of Proposition 1 ###reference_position1### (b), we choose such that the part of that depends on becomes : we believe this will allow to better understand the dependence between variables in our convergence rate result, although other choices of are possible. Therefore, we choose:\nso that we obtain: (from Proposition 1 ###reference_position1### (b)), which also implies :\nand:\nNow, regarding the value of , we also note that any value of random directions can be taken too, since the bound in Proposition 1 ###reference_position1### (b) would then still be verified for (that is, we would still have ) (with the value of for ).\nTherefore, we will choose a value so that our result is simpler. First, notice that . Therefore, if we take , we will also have .\nLet us now impose a lower bound on that is slightly (twice) bigger than the lower bound from Theorem 1 ###reference_orem1###. As will become clear below, this allows us to have a enough bounded away from 1, which guarantees a reasonable constant in the notation for the query complexity (see the end of the proof).\nLet us therefore take:\nand plug the value of above into the expression:\nWith denoting . 
Therefore, if we take:\nwe will indeed verify the formula above .\nWe now turn to describing the query complexity of the algorithm:\nTo ensure that , we need:\nwith belonging to the interval .\nLet us compute more precisely an upper bound to in this case, to show that it is reasonably enough bounded away from 1:\nTaking as described in (43 ###reference_###), and plugging that value into the expression of from Theorem 1 ###reference_orem1###, we obtain:\nWhere the simplification in (a) above follows similarly to (29 ###reference_###).\nTherefore, in that case, we have:\nWhere (a) follows because \nTherefore:\nGiven that for all , we have:\nTherefore:\nTherefore, plugging this into (44 ###reference_###), we obtain that with iterations, we can get .\nTo obtain the query complexity (QC), we therefore just need to multiply the number of iterations by the number of queries per iteration : to ensure , we need to query the zeroth-order oracle at least the following number of times: , since .\nAlmost all are -smooth, which is equivalent to saying that they are -RSS. So we can directly plug in equation (41 ###reference_###), which gives a necessary value for of:\nSince any value of larger than the one in (46 ###reference_###) is valid, we choose for simplicity. The query complexity is obtained similarly as in the proof of Corollary 1 ###reference_ollary1### above, with that new value for (the number of iterations needed is unchanged from the proof of Corollary 1 ###reference_ollary1###), only the query complexity per iteration changes), which means we need to query the zeroth-order oracle the following number of times: \n\u220e"
+ },
+ {
+ "section_id": "Appendix 5",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "Below we plot the true gradient and its estimator (for ), as well as their respective projections and , with (i.e. is the hyperplane ), for random directions. In Figure 5(b) ###reference_sf2###, due to the large number of random directions, we plot them as points not vectors. For simplicity, the figure is plotted for , and . We can see that even though gradient estimates are poor estimates of , is a better estimate of .\n###figure_10### ###figure_11### An interesting fact that can be observed in Figure 5(b) ###reference_sf2### above is that when and , the ZO gradient estimates belong to a sphere. This comes from the fact that, in that case, the ZO estimate using the random direction is actually a directional derivative (scaled by d): , for which we have :\n(since ). That is, gradient estimates belong to a sphere of center and radius . However, the distribution of is not uniform on that sphere: it is more concentrated around as we can observe in Figure 5(b) ###reference_sf2###."
+ },
+ {
+ "section_id": "Appendix 6",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "In this section, we further illustrate the importance on the value of as discussed in Remark 4 ###reference_ark4###, by showing in Figure 6 ###reference_### that if is too small, then there does not exist any that verifies the condition , no matter how small is (i.e., even if ). However, if is large enough, then there exist some such that this condition is true. To generate the curves below, we simply use the formulas for and with from Theorem 1 ###reference_orem1###, and with and .\n###figure_12### ###figure_13### ###figure_14###"
+ },
+ {
+ "section_id": "Appendix 7",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "In this section, we show the dependence of SZOHT on the dimension. To that end, we consider minimizing the following synthetic problem:\nwith , and chosen as: , with if and if with . In other words, the last components of are regularly spaced from to : in a way, this simulates the recovery of a -sparse vector by observing only the squared deviation of some queries .\nIn that case, we can easily check that verifies the following properties:\nis -smooth with , as well as -RSS for any such that , with , and -RSC with and (so )\n\n\nso is -FGN with\nWe also note that the above setting of and verifies (since ).\nFinally, we initialize such that if and otherwise. We choose this initialization and not , just to ensure that for any : this way the optimization is really done over all variables, not just the last ones. In addition, this initialization ensures that is constant no matter the , which makes the convergence curves comparable.\nWe consider several settings of to showcase the dependence on the dimension below."
+ },
+ {
+ "section_id": "Appendix 8",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "In this section, we provide additional results for the adversarial attacks problem in 5.3 ###reference_.SSS0.Px2###, in Figure 11 ###reference_###. The parameters we used for SZOHT to generate that table are the same as in 5.3 ###reference_.SSS0.Px2###, except for MNIST, for which we choose , , and , and for ImageNet, for which we choose and . As we can see, SZOHT allows to obtain sparse attacks, contrary to the other algorithms, and with a smaller distance and a larger success rate, using less iterations: this shows that SZOHT allows to enforce sparsity, and efficiently exploits that sparsity in order to have a lower query complexity than vanilla sparsity constrained ZO algorithms."
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Complexity of sparsity-enforcing algorithms. We give the query complexity for a precision , up to the system error (see section <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#S4\" title=\"4 Convergence analysis \u2023 Zeroth-Order Hard-Thresholding: Gradient Error vs. Expansivity\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>). For first-order algorithms (FO), we give it in terms of number of first order oracle calls (#IFO), that is, calls to , and for ZO algorithms, in terms of calls of . Here denotes the condition number , with is the smoothness (or RSS) constant and is the strong-convexity (or RSC) constant.\n</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.31\">\n<tr class=\"ltx_tr\" id=\"S1.T1.31.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.31.18.1\">Type</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.31.18.2\">Name</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.31.18.3\">Assumptions</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.31.18.4\">#IZO/#IFO</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S1.T1.31.18.5\">#HT ops.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.17.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.15.1.1\">FO/\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.17.3.4\">StoIHT <cite class=\"ltx_cite ltx_citemacro_cite\">Nguyen et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib31\" title=\"\">2017</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.17.3.5\">RSS, RSC</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.16.2.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.17.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.19.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.18.4.1\">ZO/\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.19.5.3\">RSPGF <cite class=\"ltx_cite ltx_citemacro_cite\">Ghadimi et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib14\" title=\"\">2016</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.19.5.4\">smooth<sup class=\"ltx_sup\" id=\"S1.T1.19.5.4.1\">3</sup>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.19.5.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.19.5.5\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.21.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.20.6.1\">ZO/\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.21.7.3\">ZSCG<sup class=\"ltx_sup\" id=\"S1.T1.21.7.3.1\">2</sup> <cite class=\"ltx_cite ltx_citemacro_cite\">Balasubramanian and Ghadimi (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib2\" title=\"\">2018</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.21.7.4\">\n<span class=\"ltx_text\" id=\"S1.T1.21.7.4.1\"></span><span class=\"ltx_text\" id=\"S1.T1.21.7.4.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.21.7.4.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.21.7.4.2.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S1.T1.21.7.4.2.1.1.1\">convex, smooth</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S1.T1.21.7.4.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" 
id=\"S1.T1.21.7.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.21.7.5\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.25.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.22.8.1\">ZO/\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.25.11.5\">ZORO <cite class=\"ltx_cite ltx_citemacro_cite\">Cai et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib7\" title=\"\">2022</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.24.10.3\">\n<span class=\"ltx_text\" id=\"S1.T1.24.10.3.3\"></span><span class=\"ltx_text\" id=\"S1.T1.24.10.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.24.10.3.2.2\">\n<span class=\"ltx_tr\" id=\"S1.T1.23.9.2.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S1.T1.23.9.2.1.1.1.1\">-sparse gradient,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.24.10.3.2.2.3\">\n<span class=\"ltx_td ltx_align_left\" id=\"S1.T1.24.10.3.2.2.3.1\">weakly sparse hessian,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.24.10.3.2.2.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S1.T1.24.10.3.2.2.2.1\">smooth<sup class=\"ltx_sup\" id=\"S1.T1.24.10.3.2.2.2.1.1\">3</sup>, <sup class=\"ltx_sup\" id=\"S1.T1.24.10.3.2.2.2.1.2\">1</sup></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S1.T1.24.10.3.4\"></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.25.11.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.25.11.6\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.28.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.26.12.1\">ZO/\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.28.14.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.28.14.4.1\">SZOHT</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.28.14.5\">RSS, RSC</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.27.13.2\"></td>\n<td 
class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S1.T1.28.14.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.31.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S1.T1.29.15.1\">ZO/\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S1.T1.31.17.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.31.17.4.1\">SZOHT</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S1.T1.31.17.5\">smooth, RSC</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S1.T1.30.16.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S1.T1.31.17.3\"></td>\n</tr>\n</table>\n<ul class=\"ltx_itemize\" id=\"S1.I1\">\n<li class=\"ltx_item\" id=\"S1.I1.ix1\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">1</span>\n<div class=\"ltx_para\" id=\"S1.I1.ix1.p1\">\n<p class=\"ltx_p\" id=\"S1.I1.ix1.p1.1\"><span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.1\" style=\"font-size:80%;\">The definition of Restricted Strong Convexity from </span><cite class=\"ltx_cite ltx_citemacro_cite\">Cai et\u00a0al. <span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.2.1.1.1\" style=\"font-size:80%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib7\" title=\"\">2022 ###reference_b7###</a><span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.3.2.2.1\" style=\"font-size:80%;\">)</span></cite><span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.4\" style=\"font-size:80%;\"> is different from ours and that of </span><cite class=\"ltx_cite ltx_citemacro_cite\">Nguyen et\u00a0al. 
<span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.5.1.1.1\" style=\"font-size:80%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib31\" title=\"\">2017 ###reference_b31###</a><span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.6.2.2.1\" style=\"font-size:80%;\">)</span></cite><span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.7\" style=\"font-size:80%;\">, hence the </span><span class=\"ltx_text ltx_markedasmath\" id=\"S1.I1.ix1.p1.1.8\" style=\"font-size:80%;\">bis</span><span class=\"ltx_text\" id=\"S1.I1.ix1.p1.1.9\" style=\"font-size:80%;\"> subscript.</span></p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S1.I1.ix2\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">2</span>\n<div class=\"ltx_para\" id=\"S1.I1.ix2.p1\">\n<p class=\"ltx_p\" id=\"S1.I1.ix2.p1.1\"><span class=\"ltx_text\" id=\"S1.I1.ix2.p1.1.1\" style=\"font-size:80%;\">We refer to the modified version of ZSCG (Algorithm 3 in </span><cite class=\"ltx_cite ltx_citemacro_cite\">Balasubramanian and Ghadimi <span class=\"ltx_text\" id=\"S1.I1.ix2.p1.1.2.1.1.1\" style=\"font-size:80%;\">(</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2210.05279v2#bib.bib2\" title=\"\">2018 ###reference_b2###</a><span class=\"ltx_text\" id=\"S1.I1.ix2.p1.1.3.2.2.1\" style=\"font-size:80%;\">)</span></cite><span class=\"ltx_text\" id=\"S1.I1.ix2.p1.1.4\" style=\"font-size:80%;\">).</span></p>\n</div>\n</li>\n<li class=\"ltx_item\" id=\"S1.I1.ix3\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">3</span>\n<div class=\"ltx_para\" id=\"S1.I1.ix3.p1\">\n<p class=\"ltx_p\" id=\"S1.I1.ix3.p1.2\"><span class=\"ltx_text\" id=\"S1.I1.ix3.p1.2.1\" style=\"font-size:80%;\">RSPGF and ZORO minimize </span><span class=\"ltx_text\" id=\"S1.I1.ix3.p1.2.2\" style=\"font-size:80%;\">: only </span><span class=\"ltx_text\" id=\"S1.I1.ix3.p1.2.3\" style=\"font-size:80%;\"> needs to be smooth.</span></p>\n</div>\n</li>\n</ul>\n</figure>",
137
+ "capture": "Table 1: Complexity of sparsity-enforcing algorithms. We give the query complexity for a precision , up to the system error (see section 4). For first-order algorithms (FO), we give it in terms of number of first order oracle calls (#IFO), that is, calls to , and for ZO algorithms, in terms of calls of . Here denotes the condition number , with is the smoothness (or RSS) constant and is the strong-convexity (or RSC) constant.\n"
138
+ }
139
+ },
140
+ "image_paths": {
141
+ "1": {
142
+ "figure_path": "2210.05279v2_figure_1.png",
143
+ "caption": "Figure 1: Conflict between the hard-thresholding operator and the zeroth-order estimate.",
144
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/conflict-crop2.png"
145
+ },
146
+ "2(a)": {
147
+ "figure_path": "2210.05279v2_figure_2(a).png",
148
+ "caption": "(a) MNIST\nFigure 4: f\u2062(\ud835\udc99)\ud835\udc53\ud835\udc99f(\\bm{x})italic_f ( bold_italic_x ) vs. # queries (adversarial attack)",
149
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/final_figs/MNIST.png"
150
+ },
151
+ "2(b)": {
152
+ "figure_path": "2210.05279v2_figure_2(b).png",
153
+ "caption": "(b) CIFAR\nFigure 4: f\u2062(\ud835\udc99)\ud835\udc53\ud835\udc99f(\\bm{x})italic_f ( bold_italic_x ) vs. # queries (adversarial attack)",
154
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/final_figs/CIFAR.png"
155
+ },
156
+ "2(c)": {
157
+ "figure_path": "2210.05279v2_figure_2(c).png",
158
+ "caption": "(c) Imagenet\nFigure 4: f\u2062(\ud835\udc99)\ud835\udc53\ud835\udc99f(\\bm{x})italic_f ( bold_italic_x ) vs. # queries (adversarial attack)",
159
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/final_figs/Imagenet.png"
160
+ },
161
+ "3(a)": {
162
+ "figure_path": "2210.05279v2_figure_3(a).png",
163
+ "caption": "(a) ndir=1subscript\ud835\udc5bdir1n_{\\text{dir}}=1italic_n start_POSTSUBSCRIPT dir end_POSTSUBSCRIPT = 1\nFigure 5: \u2207f\u2062(x)\u2207\ud835\udc53\ud835\udc65\\nabla f(x)\u2207 italic_f ( italic_x ) and \u2207^\u2062f\u2062(x)^\u2207\ud835\udc53\ud835\udc65\\hat{\\nabla}f(x)over^ start_ARG \u2207 end_ARG italic_f ( italic_x ) and their projections \u2207Ff\u2062(x)subscript\u2207\ud835\udc39\ud835\udc53\ud835\udc65\\nabla_{F}f(x)\u2207 start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT italic_f ( italic_x ) and \u2207^F\u2062f\u2062(x)subscript^\u2207\ud835\udc39\ud835\udc53\ud835\udc65\\hat{\\nabla}_{F}f(x)over^ start_ARG \u2207 end_ARG start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT italic_f ( italic_x ) onto F\ud835\udc39Fitalic_F",
164
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/figure.png"
165
+ },
166
+ "3(b)": {
167
+ "figure_path": "2210.05279v2_figure_3(b).png",
168
+ "caption": "(b) ndir=106subscript\ud835\udc5bdirsuperscript106n_{\\text{dir}}=10^{6}italic_n start_POSTSUBSCRIPT dir end_POSTSUBSCRIPT = 10 start_POSTSUPERSCRIPT 6 end_POSTSUPERSCRIPT\nFigure 5: \u2207f\u2062(x)\u2207\ud835\udc53\ud835\udc65\\nabla f(x)\u2207 italic_f ( italic_x ) and \u2207^\u2062f\u2062(x)^\u2207\ud835\udc53\ud835\udc65\\hat{\\nabla}f(x)over^ start_ARG \u2207 end_ARG italic_f ( italic_x ) and their projections \u2207Ff\u2062(x)subscript\u2207\ud835\udc39\ud835\udc53\ud835\udc65\\nabla_{F}f(x)\u2207 start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT italic_f ( italic_x ) and \u2207^F\u2062f\u2062(x)subscript^\u2207\ud835\udc39\ud835\udc53\ud835\udc65\\hat{\\nabla}_{F}f(x)over^ start_ARG \u2207 end_ARG start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT italic_f ( italic_x ) onto F\ud835\udc39Fitalic_F",
169
+ "url": "http://arxiv.org/html/2210.05279v2/x1.png"
170
+ },
171
+ "4(a)": {
172
+ "figure_path": "2210.05279v2_figure_4(a).png",
173
+ "caption": "(a) q=200\ud835\udc5e200q=200italic_q = 200\nFigure 6: \u03c1\u2062\u03b3\ud835\udf0c\ud835\udefe\\rho\\gammaitalic_\u03c1 italic_\u03b3 (y\ud835\udc66yitalic_y axis) as a function of k\ud835\udc58kitalic_k (x\ud835\udc65xitalic_x axis) for several values of q\ud835\udc5eqitalic_q and k*superscript\ud835\udc58k^{*}italic_k start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT",
174
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/curve_k_200_30000.png"
175
+ },
176
+ "4(b)": {
177
+ "figure_path": "2210.05279v2_figure_4(b).png",
178
+ "caption": "(b) q=5000\ud835\udc5e5000q=5000italic_q = 5000\nFigure 6: \u03c1\u2062\u03b3\ud835\udf0c\ud835\udefe\\rho\\gammaitalic_\u03c1 italic_\u03b3 (y\ud835\udc66yitalic_y axis) as a function of k\ud835\udc58kitalic_k (x\ud835\udc65xitalic_x axis) for several values of q\ud835\udc5eqitalic_q and k*superscript\ud835\udc58k^{*}italic_k start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT",
179
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/curve_k_5000_30000.png"
180
+ },
181
+ "4(c)": {
182
+ "figure_path": "2210.05279v2_figure_4(c).png",
183
+ "caption": "(c) q=30000\ud835\udc5e30000q=30000italic_q = 30000\nFigure 6: \u03c1\u2062\u03b3\ud835\udf0c\ud835\udefe\\rho\\gammaitalic_\u03c1 italic_\u03b3 (y\ud835\udc66yitalic_y axis) as a function of k\ud835\udc58kitalic_k (x\ud835\udc65xitalic_x axis) for several values of q\ud835\udc5eqitalic_q and k*superscript\ud835\udc58k^{*}italic_k start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT",
184
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/curve_k_30000_30000.png"
185
+ },
186
+ "5(a)": {
187
+ "figure_path": "2210.05279v2_figure_5(a).png",
188
+ "caption": "Figure 7: \\sus=d\\sus\ud835\udc51\\sus=d= italic_d",
189
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/f_ds.png"
190
+ },
191
+ "5(b)": {
192
+ "figure_path": "2210.05279v2_figure_5(b).png",
193
+ "caption": "Figure 7: \\sus=d\\sus\ud835\udc51\\sus=d= italic_d",
194
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/dis_ds.png"
195
+ },
196
+ "5(c)": {
197
+ "figure_path": "2210.05279v2_figure_5(c).png",
198
+ "caption": "Figure 7: \\sus=d\\sus\ud835\udc51\\sus=d= italic_d",
199
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/s2dcst_f_ds.png"
200
+ },
201
+ "5(d)": {
202
+ "figure_path": "2210.05279v2_figure_5(d).png",
203
+ "caption": "Figure 7: \\sus=d\\sus\ud835\udc51\\sus=d= italic_d",
204
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/s2dcst_dis_ds.png"
205
+ },
206
+ "5(e)": {
207
+ "figure_path": "2210.05279v2_figure_5(e).png",
208
+ "caption": "Figure 7: \\sus=d\\sus\ud835\udc51\\sus=d= italic_d",
209
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/s2cst_f_ds.png"
210
+ },
211
+ "5(f)": {
212
+ "figure_path": "2210.05279v2_figure_5(f).png",
213
+ "caption": "Figure 7: \\sus=d\\sus\ud835\udc51\\sus=d= italic_d",
214
+ "url": "http://arxiv.org/html/2210.05279v2/extracted/5477845/figs/gen_figs_curves/asset_risk/result/s2cst_dis_ds.png"
215
+ }
216
+ },
217
+ "validation": true,
218
+ "references": [
219
+ {
220
+ "1": {
221
+ "title": "Mathematical methods for physicists.",
222
+ "author": "George B Arfken and Hans J Weber.",
223
+ "venue": "American Association of Physics Teachers, 1999.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "2": {
229
+ "title": "Zeroth-order (non)-convex stochastic optimization via conditional\ngradient and gradient updates.",
230
+ "author": "Krishnakumar Balasubramanian and Saeed Ghadimi.",
231
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 31, 2018.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "3": {
237
+ "title": "Or-library: distributing test problems by electronic mail.",
238
+ "author": "John E Beasley.",
239
+ "venue": "Journal of the operational research society, 41(11):1069\u20131072, 1990.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "4": {
245
+ "title": "First-order methods in optimization.",
246
+ "author": "Amir Beck.",
247
+ "venue": "SIAM, 2017.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "5": {
253
+ "title": "Statistics for high-dimensional data: methods, theory and\napplications.",
254
+ "author": "Peter B\u00fchlmann and Sara Van De Geer.",
255
+ "venue": "Springer Science & Business Media, 2011.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "6": {
261
+ "title": "A zeroth-order block coordinate descent algorithm for huge-scale\nblack-box optimization.",
262
+ "author": "HanQin Cai, Yuchen Lou, Daniel McKenzie, and Wotao Yin.",
263
+ "venue": "In International Conference on Machine Learning, pages\n1193\u20131203. PMLR, 2021.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "7": {
269
+ "title": "Zeroth-order regularized optimization (zoro): Approximately sparse\ngradients and adaptive sampling.",
270
+ "author": "HanQin Cai, Daniel McKenzie, Wotao Yin, and Zhenliang Zhang.",
271
+ "venue": "SIAM Journal on Optimization, 32(2):687\u2013714, 2022.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "8": {
277
+ "title": "Heuristics for cardinality constrained portfolio optimisation.",
278
+ "author": "T-J Chang, Nigel Meade, John E Beasley, and Yazid M Sharaiha.",
279
+ "venue": "Computers & Operations Research, 27(13):1271\u20131302, 2000.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "9": {
285
+ "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural\nnetworks without training substitute models.",
286
+ "author": "Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh.",
287
+ "venue": "In Proceedings of the 10th ACM workshop on artificial\nintelligence and security, pages 15\u201326, 2017.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "10": {
293
+ "title": "Zo-adamm: Zeroth-order adaptive momentum method for black-box\noptimization.",
294
+ "author": "Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, and David\nCox.",
295
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 32, 2019.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "11": {
301
+ "title": "Provably robust blackbox optimization for reinforcement learning.",
302
+ "author": "Krzysztof Choromanski, Aldo Pacchiano, Jack Parker-Holder, Yunhao Tang, Deepali\nJain, Yuxiang Yang, Atil Iscen, Jasmine Hsu, and Vikas Sindhwani.",
303
+ "venue": "In Conference on Robot Learning, pages 683\u2013696. PMLR, 2020.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "12": {
309
+ "title": "On the information-adaptive variants of the admm: an iteration\ncomplexity perspective.",
310
+ "author": "Xiang Gao, Bo Jiang, and Shuzhong Zhang.",
311
+ "venue": "Journal of Scientific Computing, 76(1):327\u2013363, 2018.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "13": {
317
+ "title": "Faster rates for the frank-wolfe method over strongly-convex sets.",
318
+ "author": "Dan Garber and Elad Hazan.",
319
+ "venue": "In International Conference on Machine Learning, pages\n541\u2013549. PMLR, 2015.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "14": {
325
+ "title": "Mini-batch stochastic approximation methods for nonconvex stochastic\ncomposite optimization.",
326
+ "author": "Saeed Ghadimi, Guanghui Lan, and Hongchao Zhang.",
327
+ "venue": "Mathematical Programming, 155(1):267\u2013305,\n2016.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "15": {
333
+ "title": "Gradientless descent: High-dimensional zeroth-order optimization.",
334
+ "author": "Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, and\nQiuyi Zhang.",
335
+ "venue": "In International Conference on Learning Representations, 2019.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "16": {
341
+ "title": "SGD: General analysis and improved rates.",
342
+ "author": "Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor\nShulgin, and Peter Richt\u00e1rik.",
343
+ "venue": "In Proceedings of the 36th International Conference on Machine\nLearning, volume 97, pages 5200\u20135209. PMLR, 2019.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "17": {
349
+ "title": "On iterative hard thresholding methods for high-dimensional\nm-estimation.",
350
+ "author": "Prateek Jain, Ambuj Tewari, and Purushottam Kar.",
351
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 27, 2014.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "18": {
357
+ "title": "Query complexity of derivative-free optimization.",
358
+ "author": "Kevin G Jamieson, Robert Nowak, and Ben Recht.",
359
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 25, 2012.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "19": {
365
+ "title": "Constrained minimization methods.",
366
+ "author": "Evgeny S Levitin and Boris T Polyak.",
367
+ "venue": "USSR Computational mathematics and mathematical physics,\n6(5):1\u201350, 1966.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "20": {
373
+ "title": "Nonconvex sparse learning via stochastic optimization with\nprogressive variance reduction.",
374
+ "author": "Xingguo Li, Raman Arora, Han Liu, Jarvis Haupt, and Tuo Zhao.",
375
+ "venue": "arXiv preprint arXiv:1605.02711, 2016.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "21": {
381
+ "title": "A comprehensive linear speedup analysis for asynchronous stochastic\nparallel optimization from zeroth-order to first-order.",
382
+ "author": "Xiangru Lian, Huan Zhang, Cho-Jui Hsieh, Yijun Huang, and Ji Liu.",
383
+ "venue": "Advances in Neural Information Processing Systems, 29, 2016.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "22": {
389
+ "title": "A dimension-insensitive algorithm for stochastic zeroth-order\noptimization.",
390
+ "author": "Hongcheng Liu and Yu Yang.",
391
+ "venue": "arXiv preprint arXiv:2104.11283, 2021.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "23": {
397
+ "title": "Zeroth-order online alternating direction method of multipliers:\nConvergence analysis and applications.",
398
+ "author": "Sijia Liu, Jie Chen, Pin-Yu Chen, and Alfred Hero.",
399
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, pages 288\u2013297. PMLR, 2018a.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "24": {
405
+ "title": "Zeroth-order stochastic variance reduction for nonconvex\noptimization.",
406
+ "author": "Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Paishun Ting, Shiyu Chang, and Lisa\nAmini.",
407
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 31, 2018b.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "25": {
413
+ "title": "A primer on zeroth-order optimization in signal processing and\nmachine learning: Principals, recent advances, and applications.",
414
+ "author": "Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred O Hero III, and\nPramod K Varshney.",
415
+ "venue": "IEEE Signal Processing Magazine, 37(5):43\u201354, 2020.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "26": {
421
+ "title": "Regularized m-estimators with nonconvexity: Statistical and\nalgorithmic theory for local optima.",
422
+ "author": "Po-Ling Loh and Martin J Wainwright.",
423
+ "venue": "Advances in Neural Information Processing Systems, 26, 2013.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "27": {
429
+ "title": "Simple random search of static linear policies is competitive for\nreinforcement learning.",
430
+ "author": "Horia Mania, Aurelia Guy, and Benjamin Recht.",
431
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 31, 2018.",
432
+ "url": null
433
+ }
434
+ },
435
+ {
436
+ "28": {
437
+ "title": "A unified framework for high-dimensional analysis of -estimators\nwith decomposable regularizers.",
438
+ "author": "Sahand Negahban, Bin Yu, Martin J Wainwright, and Pradeep Ravikumar.",
439
+ "venue": "Advances in neural information processing systems, 22, 2009.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "29": {
445
+ "title": "A unified framework for high-dimensional analysis of -estimators\nwith decomposable regularizers.",
446
+ "author": "Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu.",
447
+ "venue": "Statistical science, 27(4):538\u2013557, 2012.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "30": {
453
+ "title": "Random gradient-free minimization of convex functions.",
454
+ "author": "Yurii Nesterov and Vladimir Spokoiny.",
455
+ "venue": "Foundations of Computational Mathematics, 17(2):527\u2013566, 2017.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "31": {
461
+ "title": "Linear convergence of stochastic iterative greedy algorithms with\nsparse constraints.",
462
+ "author": "Nam Nguyen, Deanna Needell, and Tina Woolf.",
463
+ "venue": "IEEE Transactions on Information Theory, 63(11):6869\u20136895, 2017.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "32": {
469
+ "title": "Ac/dc: Alternating compressed/decompressed training of deep neural\nnetworks.",
470
+ "author": "Alexandra Peste, Eugenia Iofinova, Adrian Vladu, and Dan Alistarh.",
471
+ "venue": "Advances in Neural Information Processing Systems, 34, 2021.",
472
+ "url": null
473
+ }
474
+ },
475
+ {
476
+ "33": {
477
+ "title": "Minimax rates of estimation for high-dimensional linear regression\nover -balls.",
478
+ "author": "Garvesh Raskutti, Martin J Wainwright, and Bin Yu.",
479
+ "venue": "IEEE transactions on information theory, 57(10):6976\u20136994, 2011.",
480
+ "url": null
481
+ }
482
+ },
483
+ {
484
+ "34": {
485
+ "title": "Evolution strategies as a scalable alternative to reinforcement\nlearning.",
486
+ "author": "Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever.",
487
+ "venue": "arXiv preprint arXiv:1703.03864, 2017.",
488
+ "url": null
489
+ }
490
+ },
491
+ {
492
+ "35": {
493
+ "title": "An optimal algorithm for bandit and zero-order convex optimization\nwith two-point feedback.",
494
+ "author": "Ohad Shamir.",
495
+ "venue": "The Journal of Machine Learning Research, 18(1):1703\u20131713, 2017.",
496
+ "url": null
497
+ }
498
+ },
499
+ {
500
+ "36": {
501
+ "title": "A tight bound of hard thresholding.",
502
+ "author": "Jie Shen and Ping Li.",
503
+ "venue": "The Journal of Machine Learning Research, 18(1):7650\u20137691, 2017.",
504
+ "url": null
505
+ }
506
+ },
507
+ {
508
+ "37": {
509
+ "title": "Sparse stochastic zeroth-order optimization with an application to\nbandit structured prediction.",
510
+ "author": "Artem Sokolov, Julian Hitschler, Mayumi Ohta, and Stefan Riezler.",
511
+ "venue": "arXiv preprint arXiv:1806.04458, 2018.",
512
+ "url": null
513
+ }
514
+ },
515
+ {
516
+ "38": {
517
+ "title": "Surface integrals over n-dimensional spheres.",
518
+ "author": "Stanislav Sykora.",
519
+ "venue": "Stan\u2019s Library, (Volume I), May 2005.",
520
+ "url": null
521
+ }
522
+ },
523
+ {
524
+ "39": {
525
+ "title": "Regression shrinkage and selection via the lasso.",
526
+ "author": "Robert Tibshirani.",
527
+ "venue": "Journal of the Royal Statistical Society: Series B\n(Methodological), 58(1):267\u2013288, 1996.",
528
+ "url": null
529
+ }
530
+ },
531
+ {
532
+ "40": {
533
+ "title": "Autozoom: Autoencoder-based zeroth order optimization method for\nattacking black-box neural networks.",
534
+ "author": "Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi,\nCho-Jui Hsieh, and Shin-Ming Cheng.",
535
+ "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 33, pages 742\u2013749, 2019.",
536
+ "url": null
537
+ }
538
+ },
539
+ {
540
+ "41": {
541
+ "title": "High-dimensional generalized linear models and the lasso.",
542
+ "author": "Sara A Van de Geer.",
543
+ "venue": "The Annals of Statistics, 36(2):614\u2013645,\n2008.",
544
+ "url": null
545
+ }
546
+ },
547
+ {
548
+ "42": {
549
+ "title": "Graphical models, exponential families, and variational inference.",
550
+ "author": "Martin J Wainwright, Michael I Jordan, et al.",
551
+ "venue": "Foundations and Trends\u00ae in Machine Learning,\n1(1\u20132):1\u2013305, 2008.",
552
+ "url": null
553
+ }
554
+ },
555
+ {
556
+ "43": {
557
+ "title": "Hand-book on statistical distributions for experimentalists.",
558
+ "author": "Christian Walck et al.",
559
+ "venue": "University of Stockholm, 10:96\u201301, 2007.",
560
+ "url": null
561
+ }
562
+ },
563
+ {
564
+ "44": {
565
+ "title": "Stochastic zeroth-order optimization in high dimensions.",
566
+ "author": "Yining Wang, Simon Du, Sivaraman Balakrishnan, and Aarti Singh.",
567
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, pages 1356\u20131365. PMLR, 2018.",
568
+ "url": null
569
+ }
570
+ },
571
+ {
572
+ "45": {
573
+ "title": "Gradient hard thresholding pursuit.",
574
+ "author": "Xiao-Tong Yuan, Ping Li, and Tong Zhang.",
575
+ "venue": "Journal of Machine Learning Research, 18(1):6027\u20136069, 2017.",
576
+ "url": null
577
+ }
578
+ },
579
+ {
580
+ "46": {
581
+ "title": "Stability and risk bounds of iterative hard thresholding.",
582
+ "author": "Xiaotong Yuan and Ping Li.",
583
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, pages 1702\u20131710. PMLR, 2021.",
584
+ "url": null
585
+ }
586
+ },
587
+ {
588
+ "47": {
589
+ "title": "Efficient stochastic gradient hard thresholding.",
590
+ "author": "Pan Zhou, Xiaotong Yuan, and Jiashi Feng.",
591
+ "venue": "In Advances in Neural Information Processing Systems,\nvolume 31, 2018.",
592
+ "url": null
593
+ }
594
+ }
595
+ ],
596
+ "url": "http://arxiv.org/html/2210.05279v2"
597
+ }
20240318/2303.17790v2.json ADDED
@@ -0,0 +1,42 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "A Study of an Atomic Mobility Game With Uncertainty Under Prospect Theory",
3
+ "abstract": "In this paper, we present a study of a mobility game with uncertainty in the decision-making of travelers and incorporate prospect theory to model travel behavior. We formulate a mobility game that models how travelers distribute their traffic flows in a transportation network with splittable traffic, utilizing the Bureau of Public Roads function to establish the relationship between traffic flow and travel time cost. Given the inherent non-linearities and complexity introduced by the uncertainties, we propose a smooth approximation function to estimate the prospect-theoretic cost functions. As part of our analysis, we characterize the best-fit parameters and derive an upper bound for the error. We then show the existence of an equilibrium and its its best-possible approximation.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "Emerging mobility systems (e.g., connected and automated vehicles (CAVs), shared mobility) provide the most intriguing opportunity for enabling users to monitor transportation network conditions better and make efficient decisions for improving safety and transportation efficiency. The data and shared information of emerging mobility systems are associated with a new level of complexity in modeling and control [1 ###reference_b1###]. The impact of selfish or irrational social behavior in routing networks of cars has been studied in recent years [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. Other efforts have addressed how people learn and make routing decisions with behavioral dynamics [5 ###reference_b5###]. The problem of how travelers often have to make decisions under the uncertainty of experiencing delays, especially when uncertainties directly affect travel time in a transportation network, has not been adequately approached yet. Hence, our problem of interest is to study in a game-theoretic setting these interactions and analyze the equilibrium of the travelers\u2019 decisions under uncertainties [6 ###reference_b6###]. We study the interactions of a finite group of players that seek to travel in a transportation network (with a unique origin-destination pair) comprised of roads with splittable traffic. A key characteristic of our approach is that we incorporate prospect theory, a behavioral model that captures the perceptions of utility under uncertainty (how likely and how much).\nSome of the existing game-theoretical literature in control and transportation theory assumes that the players\u2019 behavior follows the rational choice theory, i.e., each player is a risk-neutral and utility maximizer. This makes transportation models unrealistic, as unexpected travel delays can lead to uncertainty in a traveler\u2019s utility. 
There is strong evidence from empirical experiments that show how humans\u2019 choices and preferences may systematically deviate from the choices and preferences of a game-theoretic player under the rational choice theory [7 ###reference_b7###]. For example, humans compare the outcomes of their choices to a known expected amount of utility (called reference) and make their final decision, using that reference to assess their losses or gains asymmetrically. Prospect theory has laid down the theoretical foundations to study such biases and the subjective perception of risk in the utility of humans [7 ###reference_b7###, 8 ###reference_b8###]. This theory has been recognized as a closer-to-reality behavioral model for the decision-making of humans in different engineering problems [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nIn general, one of the standard approaches to alleviate congestion in a transportation system has been managing the travel demand and supply while also taking into consideration the scarce resources. Such approaches focus primarily on traffic routing, which aims to optimize the routing decisions in a transportation network [12 ###reference_b12###]. Another approach is game theory that allows us to investigate the impact of selfish routing on efficiency and congestion [13 ###reference_b13###] and assign travelers routes to minimize travel time under a Nash Equilibrium (NE) [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. 
A fundamental theoretical approach in alleviating congestion is routing/congestion games [19 ###reference_b19###, 20 ###reference_b20###], which are a generalization of the standard resource-sharing game of an arbitrary number of resources in a network.\nIn this paper, we use Prelec\u2019s probability weighting function and an S-shaped value function to model how travelers perceive traffic uncertainties and their travel gains/losses. So, our first contribution is incorporating prospect theory into an atomic routing game with splittable traffic to capture a realistic version of the travelers\u2019 decision-making regarding travel time costs. The S-shaped value function is adopted to represent the curvature of the travel time cost function and account for the travelers\u2019 perception of gains/losses in travel time according to a reference point (defined using the US Bureau of Public Roads function). To address prospect theory\u2019s mathematical intractabilities, our second contribution proposes a smooth approximation function that estimates the non-linear piecewise prospect-theoretic cost functions. Thus, we can estimate how travelers perceive gains/losses and probabilities in travel time costs. This work is focused on establishing the fitness of the approximation function, proving the existence of at least one NE in pure strategies.\nThe remainder of the paper is structured as follows. In Section II ###reference_###, we present the mathematical formulation of the proposed game-theoretic framework. In Section III ###reference_###, we derive the theoretical properties of the proposed framework, and finally, draw conclusions in Section IV ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Modeling Framework",
15
+ "text": "We consider a routing game with a finite non-empty set of players , . Each player may represent a class of travelers that could use connected and automated vehicles (CAVs) and who control a significant amount of traffic, say . Thus, we interpret as the representation of the flow of traffic that player contributes to a transportation network. We define traffic flow in this setting as the number of vehicles passing through each point in the network over time. This decision variable is non-negative, as players (or the travelers) make trips using their vehicles over time in the transportation network. This is in contrast to non-atomic routing games, where players only control an infinitesimal amount of traffic. We also assume that traffic is splittable. Travelers seek to travel in a transportation network represented by a directed multigraph , where each node in may represent a different city area or neighborhood (e.g., Braess\u2019 paradox network). Each edge may represent a road. For our purposes, we think of as a representation of a smart city network with a road infrastructure. Any player seeks to travel from an origin to a destination . So, all players are associated with the same unique origin-destination (OD) pair . Next, each player may use a sequence of edges that connects the OD pair . We define as the set of routes available to any player , where their route consists of a sequence of edges connecting the OD pair . 
We are interested in how such players may compete over the routes in the network for routing their traffic flows (this is a multiple-route traffic flow decision-making problem).\nSince each player seeks to route their traffic represented by flow in the network , we define, for each , the set of actions as\nwhere , is the total flow of player , and denotes the -th route in the network.\nNote here that each player controls their traffic flow , which we represent as a vector since player may choose to use different routes in the transportation network, thus sending traffic for some . The total traffic flow controlled by player is finite, though. And so, we represent this by introducing .\nWe write for the Cartesian product of all the players\u2019 action sets. We also write for the action profile that excludes player . Next, for the aggregate action profile, we write , .\nThe flow on edge is the sum of relevant components of all players\u2019 traffic flows that have chosen a route that includes edge , i.e., .\nIn our routing game where each player chooses their traffic flow vector over a common set of routes , if player chooses to send traffic along route , then this traffic will be distributed along all the edges in this route . This is because a traveler\u2019s traffic on some route is a single quantity among all the route\u2019s edges.\nNext, we introduce a travel time latency function to capture the cost that players may experience. Intuitively, we capture the players\u2019 preferences for different outcomes using a \u201ccost function,\u201d in which players are expected to act as cost minimizers. For each , we consider non-negative cost functions . We assume that the cost functions at each edge are convex, continuous, and differentiable with respect to . One standard way to define in an exact form is by the US Bureau of Public Roads (BPR) function, as it is a commonly used model for the relationship between flow and travel time. 
Mathematically, we have, for any edge ,\nwhere is the free-flow travel time and is the critical capacity of traffic flow on road . Note that the BPR function is non-linear, continuous, differentiable, strictly increasing, and strictly convex for .\nIf the maximum flow on edge is , then for the critical flow, , on edge we have .\nNext, for some route of any player , its cost is the sum of the costs on the edges that constitute route , i.e., . The total cost for player is\nwhere .\nThe game is fully characterized by the tuple . This non-cooperative routing game is a simultaneous-move game in which players make decisions at the same time and commute in of network . Players behave selfishly and aim to minimize their costs (e.g., travel time latencies). Naturally, players compete with each other over the available yet limited routes and how to utilize them in the transportation network. Indirectly, players make route choices that satisfy their travel needs (modeled through traffic flow). Next, we clarify \u201cwho knows what?\u201d in . All players have complete knowledge of the game and the network. Each player knows their own information (action and cost) as well as the information of other players. At equilibrium, we want to ensure that no player has an incentive to unilaterally deviate from their chosen decisions and change how they distribute their traffic flows over the available routes in the network. So, for our purposes, we observe that an NE in terms of the players\u2019 traffic flows in pure strategies is the most appropriate solution concept for our game.\nA feasible flow profile constitutes an NE if for each player , , for all .\nIn other words, a flow profile is an NE if no player can reduce their total cost by unilaterally changing how they distribute their total traffic flow over the available routes in the network. 
In an NE, each player\u2019s specific has the lowest possible cost among all possible distributions over the routes, given the choices made by other players."
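The BPR latency and the flow-weighted player cost described above can be sketched in Python as follows. The BPR coefficients (0.15 and 4), the toy two-route network, and the exact cost-aggregation form are illustrative assumptions standing in for the paper's (notation-stripped) equations.

```python
def bpr_latency(flow, t0, cap, alpha=0.15, beta=4.0):
    # US Bureau of Public Roads latency; alpha = 0.15, beta = 4 are the
    # customary defaults (an assumption here; the paper leaves them general).
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def player_cost(i, flows, routes, edges):
    # flows[p][r]: traffic player p sends on route r; routes[r]: tuple of
    # edge ids; edges[e] = (t0, cap). Aggregate edge flows over all players,
    # then charge player i its flow-weighted route latencies.
    edge_flow = {e: 0.0 for e in edges}
    for fv in flows:
        for r, f in enumerate(fv):
            for e in routes[r]:
                edge_flow[e] += f
    return sum(
        f * sum(bpr_latency(edge_flow[e], *edges[e]) for e in routes[r])
        for r, f in enumerate(flows[i])
    )

# Two parallel single-edge routes; two players, each routing one unit in total.
edges = {0: (1.0, 1.0), 1: (1.0, 1.0)}
routes = [(0,), (1,)]
flows = [[0.5, 0.5], [0.5, 0.5]]
cost_of_player_0 = player_cost(0, flows, routes, edges)  # 0.5*1.15 + 0.5*1.15
```

With the symmetric half-half split above, each edge carries a total flow of 1 (its capacity), so each route latency is 1.15 and each player's cost is 1.15.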
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Prospect Theory Analysis",
21
+ "text": "In this subsection, we briefly introduce prospect theory and its main concepts [21 ###reference_b21###]. Prospect theory attempts to answer a central question: how does a decision-maker evaluate different possible actions/outcomes under uncertain and risky circumstances? Thus, prospect theory is a descriptive behavioral model and focuses on three main behavioral factors: (i) Reference dependence: decision-makers make decisions based on their utility, which is measured from the \u201cgains\u201d or \u201closses.\u201d However, the utility is a gain or loss relative to a reference point that may be unique to each decision-maker. (ii) Diminishing sensitivity: changes in value have a greater impact near the reference point than away from the reference point. (iii) Loss aversion: decision-makers are more conservative in gains and riskier in losses. One way to mathematize the above behavioral factors (i)-(iii) is to consider an action by a decision-maker as a \u201cgamble\u201d with objective utility value . We say that this decision-maker perceives subjectively using a value function [7 ###reference_b7###, 22 ###reference_b22###]\nwhere represents a reference point, and are parameters that represent diminishing sensitivity. Both shape (4 ###reference_###) in a way that the changes in value have a greater impact near the reference point than away from the reference point. We observe that (4 ###reference_###) is concave in the domain of gains and convex in the domain of losses. Moreover, reflects the level of loss aversion of decision-makers. To the best of our knowledge, there is no widely agreed-upon theory that determines and defines the reference point [7 ###reference_b7###]. In engineering [23 ###reference_b23###, 11 ###reference_b11###], it is assumed that captures a decision-maker\u2019s expected status-quo level of the resources.\nProspect theory models the subjective behavior of decision-makers under uncertainty and risk. 
Each objective utility is associated with a probabilistic occurrence, say . Decision-makers are subjective and perceive differently depending on its value. To capture this behavior, we introduce a strictly increasing function with and called the probability weighting function. This function allows us to model how decision-makers may overestimate small probabilities of objective utilities, i.e., if is close to , or underestimate high probabilities, i.e., if is close to . We use Prelec\u2019s probability weighting function first introduced in [24 ###reference_b24###], , where represents a rational index, i.e., the distortion of a decision-maker\u2019s probability perceptions. Mathematically, controls the curvature of the weighting function.\nSuppose that there are possible outcomes available to a decision-maker and is the -th gain/loss of objective utility. Then a prospect is a tuple of the utilities and their respective probabilities, i.e., , where . We denote the -th prospect more compactly as . We have that and is well-ordered, i.e., . Under prospect theory, the decision-maker evaluates their \u201csubjective utility\u201d as , where is the profile of prospects of outcomes.\nIn the remainder of this subsection, we apply the prospect theory to our modeling framework, clearly define the mobility outcomes (objective and subjective utilities), and then show that the prospect-theoretic game admits a NE.\nPlayers may be uncertain about the value of the traffic disturbances as it is affected by unexpected factors, and so we use Prelec\u2019s probability weighting function to capture how different traveler populations \u201cperceive\u201d probabilities. In addition, we are interested in capturing how players may perceive their gains or losses regarding their travel time costs with respect to the costs at critical density. Hence, we define the mobility prospect as whether will reach its critical or jammed point. 
Formally, is the probability that , and is the probability for . We then use the prospect-theoretic S-shaped value function to capture how players may perceive such costs. Hence, we have\nwhere the reference dependence is represented by , , and for each , we have . We justify in above as it has been verified to produce extremely good results, and the outcomes are consistent with the original data [8 ###reference_b8###]. We define\nIt is important to note that our prospect-theoretic value function is \u201creversed,\u201d capturing the way a traveler will perceive the gains in travel time through a cost function. Using as a reference point the critical traffic flow on edge , we can pinpoint the exact point at which any further delay becomes socially unacceptable, i.e., a higher flow causes a higher travel time that a traveler will not tolerate.\nThe new cost function is\nObserve that this particular formulation allows only two main outcomes for any player. One outcome may represent an easy commute (no traffic), and the other may represent traffic. For our purposes, we naturally expect two probabilities for these two outcomes. Future work will allow a larger distribution of probabilities over many different outcomes for any player (in such cases, cumulative prospect theory would be a more appropriate model [8 ###reference_b8###]).\nThe total cost on some route for player under prospect theory is . Now, the total cost of some player is given by\nNote, however, that in this case, the prospect-theoretic cost is capturing the gains and losses of travel. Thus, the aim is to maximize this function to maximize the gains (by minimizing the actual cost of travel latencies).\nWe observe that (8 ###reference_###) is rather cumbersome to analyze analytically, as issues with its smoothness arise quickly. The problem in analyzing such a function is that the exponent takes values in . 
To address this theoretical obstacle, we propose a new function that approximates the prospect-theoretic function and, most importantly, can be shown to have useful properties. Hence, we define the following function\nwhere , and . Then, we can approximately evaluate (8 ###reference_###) with the following:"
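To make the ingredients of the prospect-theoretic cost concrete, the following Python sketch implements Prelec's probability weighting function and a Kahneman-Tversky S-shaped value function, and combines them into a subjective evaluation of a prospect. The parameter values (rationality index 0.65, curvature 0.88, loss aversion 2.25) are classic empirical estimates used here for illustration; they are not the paper's calibrated values, and the paper's cost functions additionally involve the BPR reference point.

```python
import math

def prelec_w(p, alpha=0.65):
    # Prelec weighting w(p) = exp(-(-ln p)^alpha); alpha = 1 recovers w(p) = p.
    # Small probabilities are overweighted, large ones underweighted.
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

def s_value(x, ref=0.0, beta=0.88, lam=2.25):
    # S-shaped value relative to a reference point: concave for gains,
    # convex for losses, and steeper for losses (loss aversion lam > 1).
    d = x - ref
    return d ** beta if d >= 0.0 else -lam * ((-d) ** beta)

def subjective_value(prospect, ref=0.0):
    # prospect: list of (outcome, probability) pairs; weight probabilities
    # with prelec_w, outcomes with s_value, and sum the contributions.
    return sum(prelec_w(p) * s_value(u, ref) for (u, p) in prospect)
```

For example, a symmetric fifty-fifty gamble over a unit gain and a unit loss is evaluated negatively under these parameters, reflecting loss aversion.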
22
+ },
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "III Analysis and Properties of the Game",
27
+ "text": "In this section, we provide a formal analysis of the properties of our proposed modeling framework, characterize the coefficients of the approximation function, and show that our game admits an NE in pure strategies.\nThe strategy space of the game is non-empty, compact, and convex.\nThe proof has been omitted here due to space constraints.\n\u220e\nThe approximation function given by (9 ###reference_###) in the interval , , is strictly concave with respect to when , , and (i) , , or alternatively (ii) , .\nGiven that , we analyze the second-order derivative of the function to determine the conditions for strict concavity. First, let us find the first- and second-order derivatives of with respect to , i.e.,\nNow, we examine the conditions for . First, controls the sign of the second-order derivative as follows: if and , will be negative when , which simplifies to . If in either of these cases, then the signs are reversed. We do require, though, that is well-defined, so . In greater detail, determines the conditions for to be negative. If , we need , which implies that (since ). If , we need , which implies that .\nCombining these insights, we can conclude that the function becomes strictly concave in the entire interval. So, it is strictly concave for if: (i) and ; (ii) , , and . If , then the relation between and is naturally reversed. Note that the parameter does not affect the convexity of the function, as it only shifts the function vertically. Therefore, we have derived the necessary conditions that ensure is negative for all , making strictly concave.\n\u220e\nIt follows that it is strictly decreasing, continuous, and (continuously) differentiable with respect to the traffic flow for any edge .\nNow, we discuss the error characterization of our approximation function. Let us define the error function as the squared difference between and , integrated over the interval :\nThe goal is to minimize with respect to the parameters , and . 
First, we find the critical points of by setting its gradient to zero and solving the resulting system of equations: . This results in a system of equations involving the partial derivatives of with respect to each of the parameters, i.e., , and . To compute these partial derivatives, we need to differentiate the integrand with respect to each parameter and then integrate it again, for example, . This process needs to be repeated for all parameters. However, due to the complexity of the function (being a non-linear piecewise function), it is not possible to obtain an explicit analytical expression for these partial derivatives. For our purposes, we rely on numerical optimization techniques to find the exact best-fit parameters that minimize the error function, as these methods can easily handle complex and non-linear optimization problems.\nThe error is upper bounded by , where is some real number and .\nFor the purposes of this proof, we assume that , and and and . We now substitute the known equations to get . Using a straightforward computation of the second-order derivative, we can get the inflection point of , which will lie in . This means that it is sufficient for us to compute at and focus on for . Since is smooth and strictly concave in that interval, the approximation is worst around the inflection point. So, we have the following . This expression simplifies to\nwhere we have and , and . Since is only a positive constant parameter, it is negligible, so we drop it from our analysis. The first component simplifies to , which is negative when we evaluate near the inflection point. Next, it follows that the second component is positive for small values of and . We use the Taylor series expansion evaluated at , where is a small positive number, to get\nwhich is clearly negative. For the second component, we use the Taylor series expansion at to get\nWe combine the expressions for the first and second components. 
Next, we have\nWe want to find an upper bound for the error, which means we need to show that (17 ###reference_###) is less than or equal to for some . Note that for any with and , it is always true that . So,\nAs is small and positive, we take the limit as . We note that the term dominates as , and so the first component approaches as . For the second component, the term dominates as , and since and , the second component is positive. Hence, we can write\nAs , we have , hence\nNow, let . Since the second component is positive, we have , thus . Therefore, the error is upper bounded by , where and .\n\u220e\nThe game admits at least one NE.\nWe formally prove the existence of an NE in the prospect-theoretic routing game using Brouwer\u2019s fixed point theorem. Recall that for any player , , where is our smooth and monotonic approximation function. We define the best-response correspondence for each player as: . Smoothness in the approximation function implies that it is continuous and has continuous derivatives. This implies that we can estimate the utility function continuously with respect to the traffic vector . To show that the best-response correspondence is continuous, we need the operator to be continuous. The set of maximizers is compact, which follows from the compactness of the strategy space by Lemma 1 ###reference_ma1###. By Lemma 2 ###reference_ma2###, we have that is concave on a specific interval . This implies that we can estimate the utility function within the interval pointwise in a strictly decreasing and strictly concave curve with respect to for any player . However, a strictly concave function has at most one maximizer, which, together with compactness, ensures the single-valuedness of the best-response correspondence . We now define the combined best-response correspondence . Since each is continuous, is also continuous, and thus it maps the strategy space to itself. 
Hence, now we can apply Brouwer\u2019s fixed point theorem, which guarantees that there exists a fixed point ; the result then follows.\n\u220e"
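The fixed-point argument above can be illustrated numerically. The sketch below runs sequential best-response dynamics for two players who each split one unit of flow across two parallel BPR-cost routes; the iteration settling on a (near-)fixed point illustrates the kind of equilibrium whose existence the theorem guarantees. The network, cost parameters, and grid-search best response are illustrative assumptions, not the paper's setup (which additionally applies the prospect-theoretic transformation).

```python
def latency(flow, t0, cap=1.0, alpha=0.15, beta=4.0):
    # Convex BPR-style edge latency (coefficients are conventional defaults).
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def cost(a, b):
    # Cost to a player sending fraction a of its unit flow on route 1 when
    # the other player sends fraction b on route 1; route 1 is faster
    # (t0 = 1) than route 2 (t0 = 2).
    return a * latency(a + b, 1.0) + (1.0 - a) * latency(2.0 - a - b, 2.0)

def best_response(b, steps=1000):
    # Grid search over the split fraction minimizing this player's cost.
    return min((k / steps for k in range(steps + 1)), key=lambda a: cost(a, b))

a1, a2 = 1.0, 0.0              # arbitrary initial splits
moved = 1.0
for _ in range(60):            # sequential (Gauss-Seidel) best responses
    a1_new = best_response(a2)
    a2_new = best_response(a1_new)
    moved = max(abs(a1_new - a1), abs(a2_new - a2))
    a1, a2 = a1_new, a2_new
# 'moved' shrinking to (at most) one grid step indicates a near-fixed point,
# i.e., an approximate pure-strategy NE of the splittable routing game.
```

Because the costs are convex, each best response is unique up to grid resolution, and the iteration contracts toward a symmetric split with more flow on the faster route.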
28
+ },
29
+ {
30
+ "section_id": "4",
31
+ "parent_section_id": null,
32
+ "section_name": "IV CONCLUSIONS AND FUTURE WORK",
33
+ "text": "In this paper, we presented a mobility game that incorporates prospect theory into an atomic splittable routing game to study travel behavior in mobility systems. We modeled the overestimation/underestimation of probabilities using Prelec\u2019s probability weighting function, and we considered the traffic uncertainties and travelers\u2019 perception of gains/losses in travel time using a prospect-theoretic S-shaped value function. We proposed an approximation function to address the non-linear and piecewise nature of the prospect-theoretic cost functions and showed that at least one NE exists. Finally, we derived an upper bound for the error.\nIn future research, we can explore how to analyze a convex-concave piecewise non-linear optimization problem using optimization techniques, such as sequential convex programming or cutting plane methods. Developing such an optimization framework can enhance our ability to predict travel decisions in mobility systems under prospect theory. Another direction is to combine prospect theory with a taxation mechanism and to use artificial intelligence to study how we can incentivize prospect-theoretic travelers, as well as the associated efficiency trade-offs in mobility systems [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###]."
34
+ }
35
+ ],
36
+ "appendix": [],
37
+ "tables": {},
38
+ "image_paths": {},
39
+ "validation": true,
40
+ "references": [],
41
+ "url": "http://arxiv.org/html/2303.17790v2"
42
+ }
20240318/2305.11490v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2305.19115v2.json ADDED
@@ -0,0 +1,547 @@
1
+ {
2
+ "title": "High-Gain Disturbance Observer for Robust Trajectory Tracking of Quadrotors",
3
+ "abstract": "This paper presents a simple method to boost the robustness of quadrotors in trajectory tracking.\nThe presented method features a high-gain disturbance observer (HGDO) that provides disturbance estimates in real-time.\nThe estimates are then used in a trajectory control law to compensate for disturbance effects.\nWe present theoretical convergence results showing that the proposed HGDO can quickly converge to an adjustable neighborhood of actual disturbance values.\nWe will then integrate the disturbance estimates with a typical robust trajectory controller, namely sliding mode control (SMC), and present Lyapunov stability analysis to establish the boundedness of trajectory tracking errors.\nHowever, our stability analysis can be easily extended to other Lyapunov-based controllers to develop different HGDO-based controllers with formal stability guarantees.\nWe evaluate the proposed HGDO-based control method using both simulation and laboratory experiments in various scenarios and in the presence of external disturbances.\nOur results indicate that the addition of HGDO to a quadrotor trajectory controller can significantly improve the accuracy and precision of trajectory tracking in the presence of external disturbances.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Recently, quadrotor uncrewed aerial vehicles (UAVs) have garnered significant attention from researchers due to their potential applications, including tasks like power line monitoring, inspection, logistics distribution, and firefighting. To execute these complex missions accurately and with high quality, it is crucial to ensure the stability and robustness of the position and attitude control systems. However, achieving robustness is a challenging task when dealing with external disturbances. Quadrotors are susceptible to various external disturbances, including wind gusts [1 ###reference_b1###], airflow distortion in the vicinity of surfaces [2 ###reference_b2###], and wake turbulence [3 ###reference_b3###]. These disturbances are often unmodeled or challenging to measure. Consequently, enhancing reliability and safety in trajectory-tracking missions has emerged as a formidable challenge.\nVarious control approaches are studied for robust trajectory tracking of vehicles in the presence of disturbances. 
Examples include model predictive control [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###], sliding mode control (SMC) [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###], adaptive control [11 ###reference_b11###, 12 ###reference_b12###], neural networks [13 ###reference_b13###], and reinforcement learning [14 ###reference_b14###].\nNumerous control methods exist that rely on disturbance observers (DOs).\nFor example, the time delay controller (TDC) involves a DO that uses the time delay between the control input and the system output to estimate the disturbance [15 ###reference_b15###], successfully applied to quadrotor attitude control, altitude control, and position control [16 ###reference_b16###].\nSliding mode DO is another variation of DOs that relies on a sliding mode observer to estimate the disturbance, improving the vehicle robustness to external disturbances and sensor noise [17 ###reference_b17###].\nFurther examples include generalized extended state observer [18 ###reference_b18###], and the uncertainty and disturbance estimator (UDE) [19 ###reference_b19###, 20 ###reference_b20###].\nThe latter offers several advantages, including the absence of system delays, the elimination of control signal oscillations, and the obviation of the need to measure state vector derivatives [21 ###reference_b21###].\nThe challenge of disturbances in constrained systems has been addressed by using iterative learning control, taking into account both input and output constraints, along with model uncertainty and output disturbances [22 ###reference_b22###].\nThe use of DO in a hierarchical control framework combined with adaptive control techniques is investigated in [23 ###reference_b23###], enabling quadrotors to adapt to varying disturbances and compensate for aerodynamic damping effects, resulting in robust and precise control.\nA bank of nonlinear DOs is utilized alongside a set of generalized backstepping 
and SMCs to counteract the impact of unaccounted uncertainties that affect the vehicle during flight [24 ###reference_b24###].\nA finite-time nonlinear DO is studied in [25 ###reference_b25###].\nAn integrated adaptive dynamic programming (ADP) technique is used in [26 ###reference_b26###] to achieve asymptotic tracking. By using real-time input-output data, the control algorithm can compute an approximated optimal fault-tolerant control. This approach allows the system to reject disturbances and maintain stable performance even in the presence of uncertainties and faults.\nWhile DO-based control is proven to be effective for quadrotor trajectory tracking, many existing DOs suffer from complex structures that add to the computational overhead of flight controllers. While significant progress has been made on the computational power of flight controllers, the extensive computational demands for autonomous or semi-autonomous operations, coupled with the power and weight restrictions of quadrotors, impose constraints on the available computing power; therefore, simple and computationally efficient DOs are still desired.\nAlso, complex DOs often involve several tuning parameters, which require an involved tuning process to ensure a fast convergence rate.\nOne category of nonlinear observers that are simple and fast is high-gain observers (HGOs) [27 ###reference_b27###].\nAs its name suggests, an HGO relies on the idea of applying a high gain to quickly recover the state estimates.\nHGOs present several desirable properties.\nFirst, they are relatively simple to design and implement since the observer is a copy of the model of the system with a gain whose expression is explicitly given.\nSecond, the observer tuning is realized simply through the choice of a single scalar design parameter.\nFinally, they can provide global or semi-global stability results for a large class of systems.\nSuch appealing properties, however, come at a cost.\nConventional HGOs 
suffer from measurement noise amplification; however, this problem is alleviated by recent designs [28 ###reference_b28###, 29 ###reference_b29###].\nGiven the advantages of HGOs, their applications for disturbance estimation have also been explored, giving rise to high-gain disturbance observers (HGDOs) [30 ###reference_b30###].\nHowever, this approach has not been thoroughly studied for quadrotors, especially on an actual vehicle with real measurement errors and real disturbances, beyond simulation.\nThe papers that we were able to find on this topic either presented a limited simulation study or focused only on attitude control [31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 18 ###reference_b18###]. Another related work is [34 ###reference_b34###], which designs not an HGDO but an HGO for quadrotor state estimation, as an alternative to the extended Kalman filter.\nIt is worth noting that the application of HGDOs for helicopters has been studied in a few research papers [35 ###reference_b35###, 36 ###reference_b36###]. While helicopters and quadrotors are both rotary-wing aircraft, there exist fundamental differences in their flight mechanisms and control. Moreover, the tail rotor in helicopters adds an additional degree of authority for lateral control, which is missing in quadrotors.\nIn light of the above discussion, our objective here is to design an HGDO for trajectory control of quadrotors.\nThe main contribution of this paper is that it develops an HGDO for the attitude and position control of quadrotors. We integrate the proposed HGDO with a Lyapunov-based robust control law, namely SMC. We conduct extensive simulation and hardware experiments to compare the proposed HGDO+SMC method with existing methods. 
Our results demonstrate fast and accurate disturbance estimation, enabling accurate trajectory tracking for quadrotors, outperforming the benchmark methods tested.\nWe present Lyapunov stability analysis to establish the boundedness of tracking errors and disturbance estimations.\nThe SMC can be easily replaced by another Lyapunov-based controller, and a similar stability analysis can be derived for the alternative controller.\nOverall, our intention is not a major overhaul in the quadrotor flight control, but rather to introduce a simple, easy-to-tune, and computationally efficient module that can be added to a flight controller and boost trajectory tracking robustness against external disturbances."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Quadrotor Modeling",
15
+ "text": "The details of the quadrotor model are explained in various references such as [37 ###reference_b37###].\nHere, we attempt to write the quadrotor translational and rotational dynamics in second-order controllability canonical forms, which will become useful in our developments.\nThe problem statement will follow.\nLet us begin by setting as an Earth-fixed inertial coordinate frame, and as the body-fixed coordinate frame whose origin coincides with the center of mass of the vehicle (Fig. 1 ###reference_###).\nWe assume that the quadrotor body is rigid and symmetric, with arms aligned to and .\nThe length of each arm is , and the mass of the vehicle is .\nAlso, the inertia matrix is , which is diagonal due to the symmetry of the vehicle.\nWe denote the position of the vehicle in by .\nFor the vehicle attitude, we use , where , , and are the Euler angles representing pitch, roll, and yaw in the yaw-pitch-roll sequence.\nWith the above Euler angle configuration, the rotation matrix from to takes the following form\nwhere and stand for cosine and sine functions.\nAlso, if represents the angular velocity vector, then according to the Euler kinematical equation, we have , where the superscript indicates that the vector components are expressed in and\n###figure_1### Each of the vehicle\u2019s rotors produces a thrust in the direction of .\ns are usually approximated by where is the angular velocity of the -th rotor, and is a coefficient.\nThe rotor angular velocities on the and axes have opposite signs (, ) to counterbalance the reaction torque induced by the rotors and to control .\nLet and be the aerodynamic force and torque vectors produced by s.\nThen, we can express them in as follows\nwhere is the distance from the center of mass to the rotor, is the total thrust, is the thrust coefficient and is the torque coefficient. 
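The yaw-pitch-roll rotation matrix referenced above can be written out explicitly. The sketch below uses one common Z-Y-X convention (roll about x, pitch about y, yaw about z); the paper's exact entry signs were lost in extraction, so this convention is an assumption.

```python
import math

def rot_matrix(phi, theta, psi):
    # Rotation from the body frame to the inertial frame for the
    # Z-Y-X (yaw-pitch-roll) Euler sequence: R = Rz(psi) Ry(theta) Rx(phi).
    cph, sph = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    cps, sps = math.cos(psi), math.sin(psi)
    return [
        [cps * cth, cps * sth * sph - sps * cph, cps * sth * cph + sps * sph],
        [sps * cth, sps * sth * sph + cps * cph, sps * sth * cph - cps * sph],
        [-sth,      cth * sph,                   cth * cph],
    ]
```

Being a product of elementary rotations, the matrix is orthonormal with determinant +1, which is a useful sanity check on any hand-typed implementation.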
The efficiency of the propulsion system is greatly affected by the thrust and torque coefficients, which are shaped by the rotor\u2019s design characteristics. These coefficients are also affected by the motion of the vehicle as detailed in [38 ###reference_b38###, 39 ###reference_b39###].\nIn addition, we assume that there exist two unknown disturbance vectors and .\nApplying Newton\u2019s law of motion, the translational dynamics of the vehicle take the following form\nwhere , and is the gravity vector with set to .\nUsing Euler\u2019s rotation theorem, the rotational dynamics of the vehicle take the following form\nwhere the index indicates the cross product.\nWe can now use (4 ###reference_###) and (5 ###reference_###) to write the quadrotor dynamics as a set of second-order nonlinear systems in controllability canonical forms.\nHowever, due to the particular structure of (5 ###reference_###), the equations will become complicated, and this will lead to complex DO and control laws.\nIf and are small, can be greatly simplified such that .\nAs such, the expressions in (4 ###reference_###) and (5 ###reference_###) transfer into\nBy setting\n, , , , , , , , ,\nthe equations in (6 ###reference_###) can be converted into the following state space representation\nOur first objective is to design an HGDO that can estimate and .\nOur second objective is to use the disturbance estimates and design control laws and such that the vehicle can follow a desired trajectory in the presence of unknown disturbances.\nOnce and are determined, can be calculated using the following expression\nWe assume that the disturbance terms and and their derivatives are bounded. Let us denote the -th component of by . 
Then,\nwhere s denote unknown but finite positive constants and is the norm defined as .\nThe disturbance terms can generally include unknown external disturbances, gyroscopic effects of rotors, or aerodynamic effects such as drag.\nThe models for gyroscopic effects or drag exist in the literature [40 ###reference_b40###] and one can include them in (4 ###reference_###) and (5 ###reference_###) to have a more elaborated model.\nHowever, these effects can be combined into disturbance terms in each axes, and HGDO can estimate their amplitudes.\nTherefore, one advantage of using HGDO is reducing the need for complex models."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "High-Gain Disturbance Observer Design",
+ "text": "Our objective in this section is to design an HGDO to estimate and .\nLet and be the estimated values of and .\nThe basic idea behind HGDO is to construct an observer of the following form\nwhere and are the observer gains.\nEach equation in (10 ###reference_###) constitutes a first-order filter of the form .\nBy choosing small positive values for , the settling time of the filter becomes small, and therefore quickly converges to .\nNote that s are unknown; however, from (7 ###reference_###), they can be expressed as follows\nTherefore, one can suggest the following HGDO structure for quadrotor\nWe assume that the initial disturbance estimates are set to zero i.e. .\nThe drawback of (12 ###reference_###) is the inclusion of derivatives of system states which amplifies the effect of measurement noise.\nInspired by [29 ###reference_b29###], we propose an HGDO using auxiliary varibles\nwith dynamics given as follows\nTo establish the convergence results for the observer, let us define the disturbance estimation error as\n.\nTaking the derivative of and using (13 ###reference_###) result in\nSubstituting (14 ###reference_###) in (15 ###reference_###) leads to\nThe above differential equations can be solved by multiplying both sides with and integrating over which yield\nWriting (17 ###reference_###) in a component-wise form and applying the absolute value operator to both sides lead to\nAccording to Holder\u2019s inequality [29 ###reference_b29###], .\nTherefore,\nBy integrating (19 ###reference_###) over , we get\nThe first term on the right-hand side of (20 ###reference_###) is bounded by .\nThe second term is equal to where and is the convolution operator.\nAccording to Young\u2019s convolution theorem [41 ###reference_b41###], .\nAlso, from the definition of norm, we have .\nTherefore,\nConsidering that the left-hand side of (21 ###reference_###) is the definition of norm for and also using (9 ###reference_###) result in\nwhich implies\nAs is a small 
constant, (22 ###reference_###) suggests that the disturbance estimation error is small and bounded for all time and this provides a theoretical verification for the convergence of the proposed HGDO."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Controller Design",
+ "text": "Our objective in this section is to study how the proposed HGDO can be integrated into the Lyapunov-based control design.\nWe will study SMC design here as an example and establish stability results for the HGDO and the control law.\nNote that we employ a cascaded structure to handle the underactuation of the quadrotor. As shown in Fig. 2 ###reference_###, the outer loop controls the transnational dynamics, while the inner loop tackles the rotational dynamics of the vehicle.\n###figure_2### Let us start with the definition of tracking errors.\nGiven the desired trajectories , we define the position tracking error as and the attitude tracking error as .\nDefine sliding surfaces as follows\nwhere are design parameters with positive components.\nLet us now consider the following Lyapunov function candidate\nTaking the derivative from both sides of (25 ###reference_###) and substituting (16 ###reference_###) and (24 ###reference_###) result in\nUsing the tracking error definition and substituting system dynamics (7 ###reference_###) yield\nLet us define the control laws as follows\nwhere denotes the sign function and and are design parameters.\nUsing (6 ###reference_###), the individual components of can de determined as\nNext, we determine the desired pitch angle , desired roll angle , and the input to the position controller, , using (29 ###reference_###)\nIn order to ensure system stability, must be positive-definite, and must satisfy a criterion to be detailed shortly. 
Substituting (28 ###reference_###) in (26 ###reference_###) leads to\nFor the last two terms in (31 ###reference_###), we use (16 ###reference_###) to write\nFor terms, we can use (9 ###reference_###) and (23 ###reference_###) to find upper bounds as follows\nBy substituting (16 ###reference_###), (32 ###reference_###), and (33 ###reference_###) in (31 ###reference_###), we get\nLet us now choose in a way that its components satisfy\nThen, we can write\nwhere denotes the largest eigenvalue.\nWe can now write\nwhere and .\nTaking steps similar to (15 ###reference_###)-(17 ###reference_###), we get\nwhich implies is bounded.\nSubsequently, s are bounded.\nThis means that the tracking errors are bounded.\nTherefore, with the control law (28 ###reference_###) and (35 ###reference_###), the vehicle trajectory will remain in an adjustable neighborhood of the desired trajectory.\nOne benefit of HGDO becomes clear in (35 ###reference_###).\nNote that needs to be small to ensure fast disturbance estimation.\nThis means that have relatively small values, and thus, with small gains the stability condition (35 ###reference_###) can be satisfied.\nSmall gains can potentially lead to smaller control efforts.\nIt is worth noting that the control laws in (28 ###reference_###) can lead to chattering due to the discontinuity of the sign function.\nTherefore, for practical implementations, we will replace the sign function with saturation function where the slope of its linear portion is .\nWhen , the above stability results hold.\nTherefore, the system states will remain bounded even when the saturation function is used.\nOne major concern about HGO- and HGDO-based controllers is the peaking phenomenon that could result from very small values.\nIn [27 ###reference_b27###], it is shown that passing the computed control signal from a saturation function can resolve the issue. 
As such, in our experiments presented in the next section, we will apply a saturation function to the computed signal to address the peaking phenomenon."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": "This section presents the results of our simulation and laboratory experiments to evaluate the effectiveness of the proposed HGDO-based control.\nOur primary focus will be on laboratory experiments.\nHowever, simulations are necessary to assess the accuracy of disturbance estimations; as it is otherwise challenging due to the difficulty of measuring the exact values of disturbances in practice.\nTo highlight the benefits gained by HGDO, we compare our HGDO+SMC approach with the uncertainty and disturbance estimator method (UDE) [19 ###reference_b19###], SMC-only, and also with one of the recent DO-based control methods [10 ###reference_b10###] that have tackled a similar problem.\nThe vehicle under consideration throughout the experiments was a Crazyflie 2.1, flying in a controlled environment equipped with the lighthouse positioning system [42 ###reference_b42###].\nAll the flight control computations were conducted on an external computer using MATLAB and transferred to the vehicle in real-time using Robot Operating System (ROS).\nThe parameter values for the vehicle, SMC, and HGDO used throughout simulations and real experiments are given in Tab. 1 ###reference_###. For tuning the SMC parameters, we used the genetic algorithm where the objective function was the integral of squared tracking errors.\nOnce the SMC parameters were optimized, we tuned the HGDO parameters in a quick trial-and-error process.\nTo show the effect of in HGDO performance, we present results for three different values of 0.01, 0.04, and 0.08.\n###table_1###"
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Simulations",
+ "text": "The desired trajectories of the quadrotor were chosen as and .\nFor external disturbances, we used a sinusoidal disturbance with the maximum frequency of 4 Hz as\n. This disturbance is taken from [10 ###reference_b10###], with the addition of a term to account for relatively higher frequency disturbances.\nWe also used the Dryden wind turbulence model [43 ###reference_b43###], a widely recognized model for its applicability in capturing real-world disturbances.\n###figure_3### ###figure_4### Figures 3 ###reference_### and 4 ###reference_### compare the actual disturbance and the estimated values using HGDO and the DOs given in [10 ###reference_b10###] and [19 ###reference_b19###].\nIn Fig. 3 ###reference_###, among HGO results with different gains, the best disturbance estimation results correspond to , as expected.\nThe differences are clear in both convergence speed and estimation error. It is easy to verify that the value of directly influences the speed and accuracy of estimation, providing an easy way to calibrate the DO.\nComparing the HGDO with the method in [10 ###reference_b10###], it is evident that even with , the HGDO exhibits higher convergence speed.\nIn terms of estimation accuracy, the difference between the method in [10 ###reference_b10###] and HGDO with becomes insignificant over time; however, when is set to 0.01, HGDO has a clear advantage.\nThe UDE method described in [19 ###reference_b19###] exhibits a considerable overshoot at the start of the simulation. While this overshoot signifies the method\u2019s ability to address disturbances promptly, it could present practical difficulties, particularly in situations requiring precise and immediate adherence to a set trajectory without initial deviations.\nOnce the transient behavior is passed, the UDE\u2019s estimation accuracy is comparable to HGDO with ; however, the HGDO has a smaller overshoot at the initial phase.\nConcerning Fig. 
4 ###reference_###, although the disturbance generated by the Dryden model is stochastic; the disturbance estimates have converged to the actual disturbance values only after a short transient time; however, the HGDO estimates are much closer to the actual disturbances compared to the two alternative methods.\nSuch a fast and accurate disturbance estimation presents a significant advantage in disturbance compensation and robust trajectory tracking.\n###figure_5### To investigate the HGDO performance in the presence of noise, we used white Gaussian measurement noise in our simulations, conducting three trials, each with a different noise power (0.001, 0.01, and 0.1 W) as shown in Fig. 5 ###reference_###.\nHGOs are known to be sensitive to measurement noise, and this is evident in simulation results, especially for the highest noise power considered.\nHowever, the estimation results are still reasonable, with an accuracy comparable to our benchmark methods with no measurement noise.\nHowever, if the jitter in HGDO estimations is deemed to be problematic for a certain application, there exists a large body of literature on dealing with measurement noise for HGOs, e.g., by switching between two gains [44 ###reference_b44###], gain adaptation [45 ###reference_b45###], or integration with Kalman filter [28 ###reference_b28###].\nNote that there exists a formal mathematical guarantee for the boundedness of HGO estimation errors in the presence of bounded measurement noise, detailed in Theorem 8.1 of [27 ###reference_b27###].\n###figure_6### ###figure_7### Figure 7 ###reference_### depicts the 3D plot of the vehicle trajectory versus the desired trajectory in the time interval .\nNote that, in each trial, the vehicle completes the lemniscate figure twice.\nThe largest tracking error is associated with the SMC-only controller.\nWhen DO is introduced, the trajectory tracking is much improved; however, the convergence to the desired trajectory is noticeably slower with the 
DOs in [10 ###reference_b10###] and [19 ###reference_b19###].\nThis likely stems from the slower convergence of disturbance estimations with these methods compared to the HGDO, as confirmed in Figs. 3 ###reference_### and 4 ###reference_###.\nFigure 7 ###reference_### presents a closer look at the vehicle position.\nThe magnified windows provide a means to compare the magnitude of tracking errors.\nNotably, HGDO with exhibits a remarkably near-zero tracking error.\nWhile tracking errors degrade by the increase in , the worst HGDO-based results still have faster convergence and similar tracking errors compared to the three alternative methods."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Laboratory Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "5.2.x",
+ "parent_section_id": "5.2",
+ "section_name": "Scenario 1: Tracking a lemniscate trajectory",
+ "text": "###figure_8### In this scenario, the vehicle tracks a lemniscate path similar to the one mentioned in simulations. An external fan generates a wind disturbance with a speed of in the vicinity of the path, and the vehicle undergoes a non-uniform disturbance as it completes the path.\nFigure 8 ###reference_### presents the vehicle trajectory for the different methods.\nNote that the vehicle completes the path twice to have a rough assessment of the repeatability of the results.\nA closer look at the position states and the tracking error in Figs. 11 ###reference_### and 11 ###reference_### highlight the superiority of HGDO+SMC, both in terms of convergence speed and tracking error.\nWe also study the vehicle\u2019s attitude in this scenario, targeting a desired trajectory of .\nComparing HGDO+SMC results with SMC-only and [10 ###reference_b10###] in Fig. 11 ###reference_###, it is clear that HGDO+SMC presents outperforms the alternative methods.\nTo provide a quantitative assessment of the performance of the controllers, we tabulate the root mean square (RMS) of position and attitude tracking errors of the different control strategies in Tab. 2 ###reference_###.\nFirst, the SMC-only results have the highest error values, highlighting the benefits of adding a DO to achieve higher tracking accuracy.\nSecond, comparing DO-based results shows that all HGDO+SMC results, even with , exhibit smaller errors compared to [10 ###reference_b10###].\nAs expected, the HGDO with has a clear advantage over all the other alternatives.\n###figure_9### ###figure_10### ###figure_11###"
+ },
+ {
+ "section_id": "5.2.x",
+ "parent_section_id": "5.2",
+ "section_name": "Scenario 2: Ground effect",
+ "text": "In this scenario, we conducted a comprehensive evaluation of the vehicle\u2019s performance in close proximity to the ground, maintaining a minimal altitude of just 10 cm while executing a lemniscate trajectory, with the objective of ensuring that the vehicle\u2019s attitude remains stable and near zero throughout the flight.\nWe did not use an external fan in this case to focus on another form of external disturbance that exists due to the airflow distortion near the ground known as the ground effect.\nFigures 14 ###reference_### - 14 ###reference_### illustrate the results of trajectory tracking for position, while Fig. 16 ###reference_### depicts the results for attitude tracking. Once again, these figures confirm the superior performance achieved through the use of HGDO+SMC.\nFig. 16 ###reference_### presents the real-time estimation of disturbances encountered during flight including the ground effect.\nInterestingly, the disturbance estimate along the -axis is relatively larger, which can be explained by the presence of the ground effect.\nTable 3 ###reference_### compares the RMS values of position and attitude tracking errors for different control strategies in this scenario, again, showing the superiority of HGDO+SMC to other methods.\n###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16###"
+ },
+ {
+ "section_id": "5.2.x",
+ "parent_section_id": "5.2",
+ "section_name": "Scenario 3: Hovering control",
+ "text": "In this scenario, the vehicle takes off from the origin, flies to point , and hovers at this point, all with the aim of preserving the vehicle\u2019s attitude in a stable and nearly zero state throughout the entire flight duration.\nThere is an external fan in the vicinity of the hover point generating a wind disturbance with a speed of .\nIn the hover point, the vehicle resists the disturbance and holds its position.\nFigure 18 ###reference_### shows the position tracking error for each experimental trial. It is clear that, with all methods, the vehicle has successfully reached to hover point and held its position despite the disturbances. However, when compared to the other two methods, HGDO+SMC exhibits reduced fluctuations in the hover point.\nIn this experiment, all the desired rotational states were set to zero. However, the external fan was generating disturbances in the rotational states. Fig. 18 ###reference_### showcases the attitude trajectory in each trial, confirming a better disturbance rejection for the rotational states using HGDO, especially with . Table 4 ###reference_### compares the RMS of position and attitude tracking errors for different control strategies in this scenario.\n###figure_17### ###figure_18###"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "This study developed an HGDO for robust trajectory tracking of quadrotors.\nOur theoretical results established that (i) HGDO can guarantee the boundedness of disturbance estimation error with a short transient time, and (ii) HGDO combined with Lyapunov-based controllers can guarantee the boundedness of position and attitude tracking errors.\nOur experimental results conformed with theoretical results and demonstrated that adding HGDO to a flight controller significantly improves the quadrotor\u2019s robustness against external disturbances.\nNote that despite HGDO\u2019s several desirable properties, its use in quadrotor control has remained minimal in the past.\nBesides the fact that the advantages of HGDO were not thoroughly studied for quadrotors before this paper, there could be skepticism attributed to the sensitivity of conventional HGDOs to measurement noise.\nHowever, our HGDO-based flight control was capable of handling the typical measurement noise present in inertial measurement units and motion capture systems in a common research vehicle.\nTherefore, we argue that HGDO is a simple, easy-to-tune, and computationally efficient module that can be added to conventional quadrotor flight control approaches to boost system robustness.\nThe HGDO-based control performance can be further improved by explicitly considering measurement noise, high-frequency disturbances, and the limitations of sensors and actuators in the design. Future work in HGDO can delve into these topics to further strengthen HGDO advantages for quadrotor control."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Parameter values used during experiments</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T1.26\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.26.27.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T1.26.27.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T1.26.27.1.2\"><span class=\"ltx_text\" id=\"S5.T1.26.27.1.2.1\" style=\"font-size:90%;\">Parameter</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T1.26.27.1.3\"><span class=\"ltx_text\" id=\"S5.T1.26.27.1.3.1\" style=\"font-size:90%;\">Value</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.26.28.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" colspan=\"3\" id=\"S5.T1.26.28.2.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.26.28.2.1.1\" style=\"font-size:90%;\">a) Vehicle Parameters:</span><span class=\"ltx_text\" id=\"S5.T1.26.28.2.1.2\" style=\"font-size:90%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<td class=\"ltx_td\" id=\"S5.T1.2.2.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<td class=\"ltx_td\" id=\"S5.T1.4.4.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<td class=\"ltx_td\" id=\"S5.T1.6.6.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8\">\n<td class=\"ltx_td\" id=\"S5.T1.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T1.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.10.10\">\n<td class=\"ltx_td\" id=\"S5.T1.10.10.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.26.29.3\">\n<td class=\"ltx_td ltx_align_left\" colspan=\"3\" id=\"S5.T1.26.29.3.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.26.29.3.1.1\" style=\"font-size:90%;\">b) SMC Parameters:</span><span class=\"ltx_text\" id=\"S5.T1.26.29.3.1.2\" style=\"font-size:90%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.12.12\">\n<td class=\"ltx_td\" id=\"S5.T1.12.12.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.12.12.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.14.14\">\n<td class=\"ltx_td\" id=\"S5.T1.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.14.14.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.16.16\">\n<td class=\"ltx_td\" id=\"S5.T1.16.16.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.16.16.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.18.18\">\n<td class=\"ltx_td\" id=\"S5.T1.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.18.18.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.20.20\">\n<td class=\"ltx_td\" id=\"S5.T1.20.20.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.20.20.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.22.22\">\n<td class=\"ltx_td\" id=\"S5.T1.22.22.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.22.22.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.26.30.4\">\n<td 
class=\"ltx_td ltx_align_left\" colspan=\"3\" id=\"S5.T1.26.30.4.1\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.26.30.4.1.1\" style=\"font-size:90%;\">c) HGDO Parameters:</span><span class=\"ltx_text\" id=\"S5.T1.26.30.4.1.2\" style=\"font-size:90%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.24.24\">\n<td class=\"ltx_td\" id=\"S5.T1.24.24.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.23.23.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.24.24.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.26.26\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T1.26.26.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.26.26.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: Parameter values used during experiments"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.11.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.12.2\" style=\"font-size:90%;\">Root mean square of position and attitude tracking errors in tracking a lemniscate trajectory.</span></figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T2.9\" style=\"width:433.6pt;height:118.3pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-15.8pt,4.3pt) scale(0.932061988866312,0.932061988866312) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.9.9\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T2.3.3.3.4\">Parameter</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.1\">SMC+HGDO with = 0.01</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.2.2.2.2\">SMC+HGDO with = 0.04</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.3.3.3\">SMC+HGDO with = 0.08</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.3.3.5\">Ref. 
[10]</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.3.3.6\">SMC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.4.4.4.2\">0.022</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.4.4.4.3\">0.035</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.4.4.4.4\">0.056</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.4.4.4.5\">0.060</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.4.4.4.6\">0.111</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.2\">0.044</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.3\">0.070</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.4\">0.119</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.5\">0.128</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.5.6\">0.242</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.6.6.2\">0.037</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.6.6.3\">0.050</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.6.6.4\">0.064</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.6.6.5\">0.072</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.6.6.6.6\">0.018</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.7.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.2\">0.021</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.3\">0.032</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.4\">0.041</td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.5\">0.044</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.7.7.7.6\">0.054</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.2\">0.008</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.3\">0.010</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.4\">0.013</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.5\">0.019</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.8.8.8.6\">0.041</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.9.9.9.2\">0.006</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.9.9.9.3\">0.008</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.9.9.9.4\">0.009</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.9.9.9.5\">0.012</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.9.9.9.6\">0.018</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 2: Root mean square of position and attitude tracking errors in tracking a lemniscate trajectory."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.11.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.12.2\" style=\"font-size:90%;\">Root mean square of position and attitude tracking errors in the ground effect scenario.</span></figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T3.9\" style=\"width:433.6pt;height:118.3pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-15.8pt,4.3pt) scale(0.932061988866312,0.932061988866312) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.9.9\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T3.3.3.3.4\">Parameter</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.1\">SMC+HGDO with = 0.01</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.2.2.2.2\">SMC+HGDO with = 0.04</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.3.3.3.3\">SMC+HGDO with = 0.08</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.3.3.3.5\">Ref. 
[10]</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.3.3.3.6\">SMC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.4.4.2\">0.028</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.4.4.3\">0.033</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.4.4.4\">0.036</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.4.4.5\">0.037</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.4.4.4.6\">0.042</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.5.5.5.2\">0.037</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.5.5.5.3\">0.041</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.5.5.5.4\">0.043</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.5.5.5.5\">0.042</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.5.5.5.6\">0.044</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.6.6.6.2\">0.012</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.6.6.6.3\">0.013</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.6.6.6.4\">0.014</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.6.6.6.5\">0.013</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.6.6.6.6\">0.014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.7.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.7.7.7.2\">0.026</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.7.7.7.3\">0.030</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.7.7.7.4\">0.031</td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"S5.T3.7.7.7.5\">0.031</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.7.7.7.6\">0.044</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.8.8.8.2\">0.016</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.8.8.8.3\">0.020</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.8.8.8.4\">0.023</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.8.8.8.5\">0.024</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.8.8.8.6\">0.028</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T3.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.9.9.9.2\">0.007</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.9.9.9.3\">0.010</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.9.9.9.4\">0.012</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.9.9.9.5\">0.013</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.9.9.9.6\">0.016</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
84
+ "capture": "Table 3: Root mean square of position and attitude tracking errors in the ground effect scenario."
85
+ },
86
+ "4": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.11.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.12.2\" style=\"font-size:90%;\">Root mean square of position and attitude tracking errors in the hovering scenario.</span></figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T4.9\" style=\"width:433.6pt;height:118.3pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-15.8pt,4.3pt) scale(0.932061988866312,0.932061988866312) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.9.9\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T4.3.3.3.4\">Parameter</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.1.1\">SMC+HGDO with = 0.01</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.2.2.2.2\">SMC+HGDO with = 0.04</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.3.3.3.3\">SMC+HGDO with = 0.08</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.3.3.3.5\">Ref. 
[10]</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.3.3.3.6\">SMC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T4.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.4.4.2\">0.042</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.4.4.3\">0.044</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.4.4.4\">0.055</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.4.4.5\">0.044</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.4.4.6\">0.065</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T4.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.5.2\">0.043</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.5.3\">0.044</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.5.4\">0.055</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.5.5\">0.043</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.5.5.5.6\">0.064</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T4.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.6.6.6.2\">0.045</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.6.6.6.3\">0.045</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.6.6.6.4\">0.054</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.6.6.6.5\">0.060</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.6.6.6.6\">0.065</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T4.7.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.7.7.7.2\">0.023</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.7.7.7.3\">0.033</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.7.7.7.4\">0.036</td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"S5.T4.7.7.7.5\">0.054</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.7.7.7.6\">0.069</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T4.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.8.8.8.2\">0.008</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.8.8.8.3\">0.018</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.8.8.8.4\">0.021</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.8.8.8.5\">0.024</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.8.8.8.6\">0.027</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T4.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.9.9.9.2\">0.010</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.9.9.9.3\">0.016</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.9.9.9.4\">0.019</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.9.9.9.5\">0.022</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T4.9.9.9.6\">0.026</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
88
+ "capture": "Table 4: Root mean square of position and attitude tracking errors in the hovering scenario."
89
+ }
90
+ },
91
+ "image_paths": {
92
+ "1": {
93
+ "figure_path": "2305.19115v2_figure_1.png",
94
+ "caption": "Figure 1: Quadrotor model and coordinate frames",
95
+ "url": "http://arxiv.org/html/2305.19115v2/extracted/5479648/Figures/frame2.png"
96
+ },
97
+ "2": {
98
+ "figure_path": "2305.19115v2_figure_2.png",
99
+ "caption": "Figure 2: Overview of the control system architecture",
100
+ "url": "http://arxiv.org/html/2305.19115v2/x1.png"
101
+ },
102
+ "3": {
103
+ "figure_path": "2305.19115v2_figure_3.png",
104
+ "caption": "Figure 3: Sinusoidal disturbance with the maximum frequency of 4 Hz and its estimation",
105
+ "url": "http://arxiv.org/html/2305.19115v2/x2.png"
106
+ },
107
+ "4": {
108
+ "figure_path": "2305.19115v2_figure_4.png",
109
+ "caption": "Figure 4: Dryden wind disturbance and its estimation",
110
+ "url": "http://arxiv.org/html/2305.19115v2/x3.png"
111
+ },
112
+ "5": {
113
+ "figure_path": "2305.19115v2_figure_5.png",
114
+ "caption": "Figure 5: Sinusoidal disturbance and its estimation in the presence of noise.",
115
+ "url": "http://arxiv.org/html/2305.19115v2/x4.png"
116
+ },
117
+ "6(a)": {
118
+ "figure_path": "2305.19115v2_figure_6(a).png",
119
+ "caption": "Figure 6: 3D plot of the vehicle trajectory compared to the desired trajectory in the simulation study.",
120
+ "url": "http://arxiv.org/html/2305.19115v2/x5.png"
121
+ },
122
+ "6(b)": {
123
+ "figure_path": "2305.19115v2_figure_6(b).png",
124
+ "caption": "Figure 6: 3D plot of the vehicle trajectory compared to the desired trajectory in the simulation study.",
125
+ "url": "http://arxiv.org/html/2305.19115v2/x6.png"
126
+ },
127
+ "7": {
128
+ "figure_path": "2305.19115v2_figure_7.png",
129
+ "caption": "Figure 8: 2D plot of the vehicle trajectory in comparison with the desired trajectory in tracking a lemniscate trajectory.",
130
+ "url": "http://arxiv.org/html/2305.19115v2/x7.png"
131
+ },
132
+ "8(a)": {
133
+ "figure_path": "2305.19115v2_figure_8(a).png",
134
+ "caption": "Figure 9: Desired and actual vehicle position trajectories in tracking a lemniscate trajectory scenario.",
135
+ "url": "http://arxiv.org/html/2305.19115v2/x8.png"
136
+ },
137
+ "8(b)": {
138
+ "figure_path": "2305.19115v2_figure_8(b).png",
139
+ "caption": "Figure 9: Desired and actual vehicle position trajectories in tracking a lemniscate trajectory scenario.",
140
+ "url": "http://arxiv.org/html/2305.19115v2/x9.png"
141
+ },
142
+ "8(c)": {
143
+ "figure_path": "2305.19115v2_figure_8(c).png",
144
+ "caption": "Figure 9: Desired and actual vehicle position trajectories in tracking a lemniscate trajectory scenario.",
145
+ "url": "http://arxiv.org/html/2305.19115v2/x10.png"
146
+ },
147
+ "9(a)": {
148
+ "figure_path": "2305.19115v2_figure_9(a).png",
149
+ "caption": "Figure 12: 2D plot of the vehicle trajectory in comparison with the desired trajectory in the ground effect scenario.",
150
+ "url": "http://arxiv.org/html/2305.19115v2/x11.png"
151
+ },
152
+ "9(b)": {
153
+ "figure_path": "2305.19115v2_figure_9(b).png",
154
+ "caption": "Figure 12: 2D plot of the vehicle trajectory in comparison with the desired trajectory in the ground effect scenario.",
155
+ "url": "http://arxiv.org/html/2305.19115v2/x12.png"
156
+ },
157
+ "9(c)": {
158
+ "figure_path": "2305.19115v2_figure_9(c).png",
159
+ "caption": "Figure 12: 2D plot of the vehicle trajectory in comparison with the desired trajectory in the ground effect scenario.",
160
+ "url": "http://arxiv.org/html/2305.19115v2/x13.png"
161
+ },
162
+ "10(a)": {
163
+ "figure_path": "2305.19115v2_figure_10(a).png",
164
+ "caption": "Figure 15: Disturbance estimates in the ground effect scenario.",
165
+ "url": "http://arxiv.org/html/2305.19115v2/x14.png"
166
+ },
167
+ "10(b)": {
168
+ "figure_path": "2305.19115v2_figure_10(b).png",
169
+ "caption": "Figure 15: Disturbance estimates in the ground effect scenario.",
170
+ "url": "http://arxiv.org/html/2305.19115v2/x15.png"
171
+ },
172
+ "11(a)": {
173
+ "figure_path": "2305.19115v2_figure_11(a).png",
174
+ "caption": "Figure 17: Position tracking error in the hovering scenario.",
175
+ "url": "http://arxiv.org/html/2305.19115v2/x16.png"
176
+ },
177
+ "11(b)": {
178
+ "figure_path": "2305.19115v2_figure_11(b).png",
179
+ "caption": "Figure 17: Position tracking error in the hovering scenario.",
180
+ "url": "http://arxiv.org/html/2305.19115v2/x17.png"
181
+ }
182
+ },
183
+ "validation": true,
184
+ "references": [
185
+ {
186
+ "1": {
187
+ "title": "Modeling of the urban gust environment with application to autonomous flight.",
188
+ "author": "David Galway, Jason Etele, and Giovanni Fusina.",
189
+ "venue": "In AIAA Atmospheric Flight Mechanics Conference and Exhibit, page 6565, 2008.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "2": {
195
+ "title": "Quasi-steady in-ground-effect model for single and multirotor aerial vehicles.",
196
+ "author": "Xiang He and Kam K Leang.",
197
+ "venue": "AIAA Journal, 58(12):5318\u20135331, 2020.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "3": {
203
+ "title": "Numerical studies on modeling the near-and far-field wake vortex of a quadrotor in forward flight.",
204
+ "author": "Joshua C Nathanael, Chung-Hung John Wang, and Kin Huat Low.",
205
+ "venue": "Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 236(6):1166\u20131183, 2022.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "4": {
211
+ "title": "Flatness-based model predictive control for quadrotor trajectory tracking.",
212
+ "author": "Melissa Greeff and Angela P Schoellig.",
213
+ "venue": "In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 6740\u20136745. IEEE, 2018.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "5": {
219
+ "title": "Finite-time sliding mode control for singularly perturbed pde systems.",
220
+ "author": "Qiyuan Zhang, Xiaona Song, Shuai Song, and Vladimir Stojanovic.",
221
+ "venue": "Journal of the Franklin Institute, 360(2):841\u2013861, 2023.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "6": {
227
+ "title": "Nonlinear mpc for quadrotor fault-tolerant control.",
228
+ "author": "Fang Nan, Sihao Sun, Philipp Foehn, and Davide Scaramuzza.",
229
+ "venue": "IEEE Robotics and Automation Letters, 7(2):5047\u20135054, 2022.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "7": {
235
+ "title": "Sliding mode control of a quadrotor helicopter.",
236
+ "author": "Rong Xu and Umit Ozguner.",
237
+ "venue": "In Proceedings of the 45th IEEE Conference on Decision and Control, pages 4957\u20134962. IEEE, 2006.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "8": {
243
+ "title": "Nonsingular terminal sliding mode control for a quadrotor uav with a total rotor failure.",
244
+ "author": "Zhiwei Hou, Peng Lu, and Zhangjie Tu.",
245
+ "venue": "Aerospace Science and Technology, 98:105716, 2020.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "9": {
251
+ "title": "Second order sliding mode control for a quadrotor uav.",
252
+ "author": "En-Hui Zheng, Jing-Jing Xiong, and Ji-Liang Luo.",
253
+ "venue": "ISA transactions, 53(4):1350\u20131356, 2014.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "10": {
259
+ "title": "3d trajectory tracking control for a thrust-propelled vehicle with time-varying disturbances.",
260
+ "author": "Meisam Kabiri, Hajar Atrianfar, and Mohammad Bagher Menhaj.",
261
+ "venue": "International Journal of Control, Automation and Systems, 17:1978\u20131986, 2019.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "11": {
267
+ "title": "Position trajectory tracking of a quadrotor helicopter based on l1 adaptive control.",
268
+ "author": "Paul De Monte and Boris Lohmann.",
269
+ "venue": "In 2013 European Control Conference (ECC), pages 3346\u20133353. IEEE, 2013.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "12": {
275
+ "title": "1 bit encoding\u2013decoding-based event-triggered fixed-time adaptive control for unmanned surface vehicle with guaranteed tracking performance.",
276
+ "author": "Xiaona Song, Chenglin Wu, Vladimir Stojanovic, and Shuai Song.",
277
+ "venue": "Control Engineering Practice, 135:105513, 2023.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "13": {
283
+ "title": "Enforcing robust control guarantees within neural network policies.",
284
+ "author": "Priya L Donti, Melrose Roderick, Mahyar Fazlyab, and J Zico Kolter.",
285
+ "venue": "arXiv preprint arXiv:2011.08105, 2020.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "14": {
291
+ "title": "Cascade flight control of quadrotors based on deep reinforcement learning.",
292
+ "author": "Haoran Han, Jian Cheng, Zhilong Xi, and Bingcai Yao.",
293
+ "venue": "IEEE Robotics and Automation Letters, 7(4):11134\u201311141, 2022.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "15": {
299
+ "title": "A time delay controller for systems with unknown dynamics.",
300
+ "author": "Kamal Youcef-Toumi and Osamu Ito.",
301
+ "venue": "In 1988 American Control Conference, pages 904\u2013913, 1988.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "16": {
307
+ "title": "Altitude control of a quad-rotor system by using a time-delayed control method.",
308
+ "author": "Jeong Geun Lim and Seul Jung.",
309
+ "venue": "Journal of Institute of Control, Robotics and Systems, 20(7):724\u2013729, 2014.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "17": {
315
+ "title": "Sliding mode disturbance observer-based control for a reusable launch vehicle.",
316
+ "author": "Charles E Hall and Yuri B Shtessel.",
317
+ "venue": "Journal of guidance, control, and dynamics, 29(6):1315\u20131328, 2006.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "18": {
323
+ "title": "Generalized extended state observer based high precision attitude control of quadrotor vehicles subject to wind disturbance.",
324
+ "author": "Di Shi, Zhong Wu, and Wusheng Chou.",
325
+ "venue": "IEEE Access, 6:32349\u201332359, 2018.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "19": {
331
+ "title": "Model following sliding mode control based on uncertainty and disturbance estimator.",
332
+ "author": "SE Talole and SB Phadke.",
333
+ "venue": "ASME Journal of Dynamic Systems, Measurement, and Control, 130(3):034501, 2008.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "20": {
339
+ "title": "Improving the performance of ude-based controller using a new filter design.",
340
+ "author": "TS Chandar and SE Talole.",
341
+ "venue": "Nonlinear Dynamics, 77(3):753\u2013768, 2014.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "21": {
347
+ "title": "Control of uncertain lti systems based on an uncertainty and disturbance estimator.",
348
+ "author": "Qing-Chang Zhong and David Rees.",
349
+ "venue": "J. Dyn. Sys., Meas., Control, 126(4):905\u2013910, 2004.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "22": {
355
+ "title": "Robust point-to-point iterative learning control for constrained systems: A minimum energy approach.",
356
+ "author": "Chenhui Zhou, Hongfeng Tao, Yiyang Chen, Vladimir Stojanovic, and Wojciech Paszke.",
357
+ "venue": "International Journal of Robust and Nonlinear Control, 32(18):10139\u201310161, 2022.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "23": {
363
+ "title": "Geometric adaptive robust hierarchical control for quadrotors with aerodynamic damping and complete inertia compensation.",
364
+ "author": "Weisheng Liang, Zheng Chen, and Bin Yao.",
365
+ "venue": "IEEE Transactions on Industrial Electronics, 69(12):13213\u201313224, 2021.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "24": {
371
+ "title": "Robust observer-based dynamic sliding mode controller for a quadrotor uav.",
372
+ "author": "Nuradeen Fethalla, Maarouf Saad, Hannah Michalska, and Jawhar Ghommam.",
373
+ "venue": "IEEE access, 6:45846\u201345859, 2018.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "25": {
379
+ "title": "Finite-time control for a uav system based on finite-time disturbance observer.",
380
+ "author": "Deqing Huang, Tianpeng Huang, Na Qin, Yanan Li, and Yong Yang.",
381
+ "venue": "Aerospace Science and Technology, 129:107825, 2022.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "26": {
387
+ "title": "Fault-tolerant control of a hydraulic servo actuator via adaptive dynamic programming.",
388
+ "author": "Vladimir Stojanovi\u0107.",
389
+ "venue": "Mathematical Modelling and Control, 2023.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "27": {
395
+ "title": "High-gain observers in nonlinear feedback control.",
396
+ "author": "Hassan K Khalil.",
397
+ "venue": "SIAM, 2017a.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "28": {
403
+ "title": "An adaptive high-gain observer for nonlinear systems.",
404
+ "author": "Nicolas Boizot, Eric Busvelle, and Jean-Paul Gauthier.",
405
+ "venue": "Automatica, 46(9):1483\u20131488, 2010.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "29": {
411
+ "title": "High-gain disturbance observer-based backstepping control with output tracking error constraint for electro-hydraulic systems.",
412
+ "author": "Daehee Won, Wonhee Kim, Donghoon Shin, and Chung Choo Chung.",
413
+ "venue": "IEEE Transactions on Control Systems Technology, 23(2):787\u2013795, 2015.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "30": {
419
+ "title": "Extended high-gain observers as disturbance estimators.",
420
+ "author": "Hassan K Khalil.",
421
+ "venue": "SICE Journal of Control, Measurement, and System Integration, 10(3):125\u2013134, 2017b.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "31": {
427
+ "title": "Uncertainty and disturbance estimation for quadrotor control using extended high-gain observers: Experimental implementation.",
428
+ "author": "Connor J Boss, Joonho Lee, and Jongeun Choi.",
429
+ "venue": "In Dynamic Systems and Control Conference, volume 58288, page V002T01A003. American Society of Mechanical Engineers, 2017.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "32": {
435
+ "title": "Composite disturbance rejection attitude control for quadrotor with unknown disturbance.",
436
+ "author": "Kai Zhao, Jinhui Zhang, Dailiang Ma, and Yuanqing Xia.",
437
+ "venue": "IEEE Transactions on Industrial Electronics, 67(8):6894\u20136903, 2019.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "33": {
443
+ "title": "Uncertainty and disturbance estimator-based global trajectory tracking control for a quadrotor.",
444
+ "author": "Qi Lu, Beibei Ren, and Siva Parameswaran.",
445
+ "venue": "IEEE/ASME Transactions on Mechatronics, 25(3):1519\u20131530, 2020.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "34": {
451
+ "title": "A high-gain observer approach to robust trajectory estimation and tracking for a multi-rotor uav.",
452
+ "author": "Connor J Boss and Vaibhav Srivastava.",
453
+ "venue": "arXiv preprint arXiv:2103.13429, 2021.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "35": {
459
+ "title": "Output feedback control design using extended high-gain observers and dynamic inversion with projection for a small scaled helicopter.",
460
+ "author": "Joonho Lee, Joohwan Seo, and Jongeun Choi.",
461
+ "venue": "Automatica, 133:109883, 2021.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "36": {
467
+ "title": "Robust control of small-scale unmanned helicopter with matched and mismatched disturbances.",
468
+ "author": "Xing Fang, Aiguo Wu, Yujia Shang, and Na Dong.",
469
+ "venue": "Journal of the Franklin Institute, 353(18):4803\u20134820, 2016.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "37": {
475
+ "title": "Multirotor aerial vehicles: Modeling, estimation, and control of quadrotor.",
476
+ "author": "Robert Mahony, Vijay Kumar, and Peter Corke.",
477
+ "venue": "IEEE robotics & automation magazine, 19(3):20\u201332, 2012.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "38": {
483
+ "title": "Full control of a quadrotor.",
484
+ "author": "Samir Bouabdallah and Roland Siegwart.",
485
+ "venue": "In 2007 IEEE/RSJ international conference on intelligent robots and systems, pages 153\u2013158. Ieee, 2007.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "39": {
491
+ "title": "Towards intelligent miniature flying robots.",
492
+ "author": "Samir Bouabdallah and Roland Siegwart.",
493
+ "venue": "In Field and Service Robotics: Results of the 5th International Conference, pages 429\u2013440. Springer, 2006.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "40": {
499
+ "title": "Euler-lagrange modeling and control of quadrotor uav with aerodynamic compensation.",
500
+ "author": "Simone Martini, Serhat S\u00f6nmez, Alessandro Rizzo, Margareta Stefanovic, Matt J Rutherford, and Kimon P Valavanis.",
501
+ "venue": "In 2022 International Conference on Unmanned Aircraft Systems (ICUAS), pages 369\u2013377. IEEE, 2022.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "41": {
507
+ "title": "Measure and integral: an introduction to real analysis, volume 308.",
508
+ "author": "Richard L Wheeden.",
509
+ "venue": "CRC press, 2015.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "42": {
515
+ "title": "Crazyflie 2.1, 2023.",
516
+ "author": null,
517
+ "venue": null,
518
+ "url": "https://store.bitcraze.io/products/crazyflie-2-1"
519
+ }
520
+ },
521
+ {
522
+ "43": {
523
+ "title": "Digital simulation of atmospheric turbulence for dryden and von karman models.",
524
+ "author": "TR Beal.",
525
+ "venue": "Journal of Guidance, Control, and Dynamics, 16(1):132\u2013138, 1993.",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "44": {
531
+ "title": "High-gain observers in the presence of measurement noise: A switched-gain approach.",
532
+ "author": "Jeffrey H Ahrens and Hassan K Khalil.",
533
+ "venue": "Automatica, 45(4):936\u2013943, 2009.",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "45": {
539
+ "title": "On the performance of high-gain observers with gain adaptation under measurement noise.",
540
+ "author": "Ricardo G Sanfelice and Laurent Praly.",
541
+ "venue": "Automatica, 47(10):2165\u20132176, 2011.",
542
+ "url": null
543
+ }
544
+ }
545
+ ],
546
+ "url": "http://arxiv.org/html/2305.19115v2"
547
+ }
20240318/2306.03000v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2306.09860v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2306.11035v2.json ADDED
@@ -0,0 +1,569 @@
1
+ {
2
+ "title": "Adversarial Training Should Be Cast as a Non-Zero-Sum Game",
3
+ "abstract": "One prominent approach toward resolving the adversarial vulnerability of deep neural networks is the two-player zero-sum paradigm of adversarial training, in which predictors are trained against adversarially chosen perturbations of data. Despite the promise of this approach, algorithms based on this paradigm have not engendered sufficient levels of robustness and suffer from pathological behavior like robust overfitting. To understand this shortcoming, we first show that the commonly used surrogate-based relaxation used in adversarial training algorithms voids all guarantees on the robustness of trained classifiers. The identification of this pitfall informs a novel non-zero-sum bilevel formulation of adversarial training, wherein each player optimizes a different objective function. Our formulation yields a simple algorithmic framework that matches and in some cases outperforms state-of-the-art attacks, attains comparable levels of robustness to standard adversarial training algorithms, and does not suffer from robust overfitting.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "",
9
+ "text": "A longstanding disappointment in the machine learning (ML) community is that deep neural networks (DNNs) remain vulnerable to seemingly innocuous changes to their input data, including nuisances in visual data (Laidlaw et al., 2020 ###reference_b28###; Hendrycks & Dietterich, 2019 ###reference_b20###), sub-populations (Santurkar et al., 2021 ###reference_b40###; Koh et al., 2021 ###reference_b25###), and distribution shifts (Xiao et al., 2021 ###reference_b50###; Arjovsky et al., 2019 ###reference_b1###; Robey et al., 2021 ###reference_b38###). Prominent amongst these vulnerabilities is the setting of adversarial examples, wherein it has been conclusively shown that imperceptible, adversarially-chosen perturbations can fool state-of-the-art classifiers parameterized by DNNs (Szegedy et al., 2013 ###reference_b44###; Biggio et al., 2013 ###reference_b6###). In response, a plethora of research has proposed so-called adversarial training (AT) algorithms (Madry et al., 2018 ###reference_b31###; Goodfellow et al., 2015 ###reference_b16###), which are designed to improve robustness against adversarial examples.\nAT is ubiquitously formulated as a two-player zero-sum game, where both players\u2014often referred to as the defender and the adversary\u2014respectively seek to minimize and maximize the classification error. However, this zero-sum game is not implementable in practice as the discontinuous nature of the classification error is not compatible with first-order optimization algorithms. To bridge this gap between theory and practice, it is commonplace to replace the classification error with a smooth surrogate loss (e.g., the cross-entropy loss) which is amenable to gradient-based optimization (Madry et al., 2018 ###reference_b31###; Zhang et al., 2019 ###reference_b52###). 
And while this seemingly harmless modification has a decades-long tradition in the ML literature due to the guarantees it imparts on non-adversarial objectives (Bartlett et al., 2006 ###reference_b5###; Shalev-Shwartz & Ben-David, 2014 ###reference_b42###; Roux, 2017 ###reference_b39###), there is a pronounced gap in the literature regarding the implications of this relaxation on the standard formulation of AT.\nAs the field of robust ML has matured, surrogate-based AT algorithms have collectively resulted in steady progress toward stronger attacks and robust defenses (Croce et al., 2020a ###reference_b11###). However, despite these advances, recent years have witnessed a plateau in robustness measures on popular leaderboards, resulting in the widely held beliefs that robustness and accuracy may be irreconcilable (Tsipras et al., 2019 ###reference_b45###; Dobriban et al., 2020 ###reference_b13###) and that robust generalization requires significantly more data (Schmidt et al., 2018 ###reference_b41###; Chen et al., 2020 ###reference_b8###). Moreover, various phenomena such as robust overfitting (Rice et al., 2020 ###reference_b37###) have indicated that progress has been overestimated (Croce & Hein, 2020 ###reference_b10###). To combat these pitfalls, state-of-the-art algorithms increasingly rely on ad-hoc regularization schemes (Kannan et al., 2018 ###reference_b23###; Chan et al., 2020 ###reference_b7###), weight perturbations (Wu et al., 2020 ###reference_b49###; Sun et al., 2021 ###reference_b43###), and heuristics such as multiple restarts, carefully crafted learning rate schedules, and convoluted stopping conditions, all of which contribute to an unclear set of best practices and a growing literature concerned with identifying flaws in various AT schemes (Latorre et al., 2023 ###reference_b29###).\nMotivated by these challenges, we argue that the pervasive surrogate-based zero-sum approach to AT suffers from a fundamental flaw. 
Our analysis of the standard minimax formulation of AT reveals that maximizing a surrogate like the cross-entropy provides no guarantee that the classification error will increase, resulting in weak adversaries and ineffective AT algorithms. In identifying this shortcoming, we prove that to preserve guarantees on the optimality of the classification error objective, the defender and the adversary must optimize different objectives, resulting in a non-zero-sum game. This leads to a novel, yet natural bilevel formulation (Bard, 2013 ###reference_b4###) of AT in which the defender minimizes an upper bound on the classification error, while the attacker maximizes a continuous reformulation of the classification error. We then propose an algorithm based on our formulation which is free from heuristics and ad hoc optimization techniques. Our empirical evaluations reveal that our approach matches the test robustness achieved by the state-of-the-art, yet highly heuristic approaches such as AutoAttack, and that it eliminates robust overfitting.\nContributions. Our contributions are as follows.\nNew formulation for adversarial robustness. Starting from the discontinuous minmax formulation of AT with respect to the 0-1 loss, we derive a novel continuous bilevel optimization formulation, the solution of which guarantees improved robustness against the optimal adversary.\nNew adversarial training algorithm. We derive BETA, a new, heuristic-free algorithm based on our bilevel formulation which offers competitive empirical robustness on CIFAR-10.\nElimination of robust overfitting. Our algorithm does not suffer from robust overfitting. This suggests that robust overfitting is an artifact of the use of improper surrogates in the original AT paradigm, and that the use of a correct optimization formulation is enough to solve it.\nState-of-the-art robustness evaluation. 
We show that our proposed optimization objective for the adversary yields a simple algorithm that matches the performance of the state-of-the-art, yet highly complex AutoAttack method, on state-of-the-art robust classifiers trained on CIFAR-10."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "",
+ "text": "We consider a -way classification setting, wherein data arrives in the form of instance-label pairs drawn i.i.d. from an unknown joint distribution taking support over , where . Given a suitable hypothesis class , one fundamental goal in this setting is to select an element which correctly predicts the label of a corresponding instance . In practice, this hypothesis class often comprises functions which are parameterized by a vector , as is the case when training DNNs. In this scenario, the problem of learning a classifier that correctly predicts from can be written as follows:\nHere denotes the component of the logits vector and we use the notation to denote the indicator function of an event , i.e., . In this sense, denotes the classification error of on the pair .\nAmong the barriers to solving (1 ###reference_###) in practice is the fact that the classification error is a discontinuous function of , which in turn renders continuous first-order methods intractable. Fortunately, this pitfall can be resolved by minimizing a surrogate loss function in place of the classification error (Shalev-Shwartz & Ben-David, 2014 ###reference_b42###, \u00a712.3). For minimization problems, surrogate losses are chosen to be differentiable upper bounds of the classification error of in the sense that\nThis inequality gives rise to a differentiable counterpart of (1 ###reference_###) which is amenable to minimization via first-order methods and can be compactly expressed in the following optimization problem:\nExamples of commonly used surrogates are the hinge loss and the cross-entropy loss. Crucially, the inequality in (2 ###reference_###) guarantees that the problem in (3 ###reference_###) provides a solution that decreases the classification error (Bartlett et al., 2006 ###reference_b5###), which, as discussed above, is the primary goal in supervised classification."
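One concrete way to check the surrogate property numerically: with softmax probabilities, the cross-entropy scaled by 1/log 2 upper-bounds the 0-1 error, since a misclassified point necessarily has p_y <= 1/2. A minimal self-contained sketch (the logit values are illustrative choices, not taken from the paper):

```python
import math

def softmax(z):
    # numerically stable softmax over a list of logits
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def zero_one_loss(z, y):
    # 1 if the true class y does not attain the maximum logit
    return 0 if z[y] == max(z) else 1

def scaled_cross_entropy(z, y):
    # (1 / log 2) * CE upper-bounds the 0-1 loss: a misclassified point
    # has p_y <= 1/2 (some other class's probability is at least p_y),
    # hence -log p_y >= log 2.
    return -math.log(softmax(z)[y]) / math.log(2)

# Illustrative logit vectors, with true class y = 0.
examples = [[3.0, 0.1, -1.0], [0.2, 0.4, 0.1], [-2.0, 5.0, 1.0]]
for z in examples:
    assert scaled_cross_entropy(z, 0) >= zero_one_loss(z, 0)
```

This is the sense in which minimizing the surrogate in (3 ###reference_###) also drives down the classification error.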
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "",
+ "text": "For common hypothesis classes, it is well-known that classifiers obtained by solving (3 ###reference_###) are sensitive to adversarial examples (Szegedy et al., 2013 ###reference_b44###; Biggio et al., 2013 ###reference_b6###), i.e., given an instance-label pair , it is relatively straightforward to find perturbations with small norm for some fixed such that\nThe task of finding such perturbations which cause the classifier to misclassify perturbed data points can be compactly cast as the following maximization problem:\nHere, if both of the expressions in (4 ###reference_###) hold for the perturbation , then the perturbed instance is called an adversarial example for with respect to the instance-label pair .\nDue to the prevalence of adversarial examples, there has been pronounced interest in solving the robust analog of (1 ###reference_###), which is designed to find classifiers that are insensitive to small perturbations. This robust analog is ubiquitously written as the following two-player zero-sum game with respect to the discontinuous classification error:\nAn optimal solution for (6 ###reference_###) yields a model that achieves the lowest possible classification error despite the presence of adversarial perturbations. For this reason, this problem\u2014wherein the interplay between the maximization over and the minimization over comprises a two-player zero-sum game\u2014is the starting point for numerous algorithms which aim to improve robustness."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "",
+ "text": "As discussed in \u00a7 2.1 ###reference_###, the discontinuity of the classification error complicates the task of finding adversarial examples, as in (5 ###reference_###), and of training against these perturbed instances, as in (6 ###reference_###). One appealing approach toward overcoming this pitfall is to simply deploy a surrogate loss in place of the classification error inside (6 ###reference_###), which gives rise to the following pair of optimization problems:\n\n\n\n\n\n\n\n(7)\n\n\n\n\n\n\n\n\n(8)\nIndeed, this surrogate-based approach is pervasive in practice. Madry et al.\u2019s seminal paper on the subject of adversarial training employs this formulation (Madry et al., 2018 ###reference_b31###), which has subsequently been used as the starting point for numerous AT schemes (Huang et al., 2015 ###reference_b21###; Kurakin et al., 2017 ###reference_b27###).\nPitfalls of surrogate-based optimization.\nDespite the intuitive appeal of this paradigm, surrogate-based adversarial attacks are known to overestimate robustness (Mosbach et al., 2018 ###reference_b34###; Croce et al., 2020b ###reference_b12###; Croce & Hein, 2020 ###reference_b10###), and standard adversarial training algorithms are known to fail against strong attacks. Furthermore, this formulation suffers from pitfalls such as robust overfitting (Rice et al., 2020 ###reference_b37###) and trade-offs between robustness and accuracy (Zhang et al., 2019 ###reference_b52###). To combat these shortcomings, empirical adversarial attacks and defenses have increasingly relied on heuristics such as multiple restarts, variable learning rate schedules (Croce & Hein, 2020 ###reference_b10###), and carefully crafted initializations, resulting in a widening gap between the theory and practice of adversarial learning. In the next section, we argue that these pitfalls can be attributed to the fundamental limitations of (8 ###reference_###)."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "From an optimization perspective, the surrogate-based approaches to adversarial evaluation and training outlined in \u00a7 2.3 ###reference_### engender two fundamental limitations.\nLimitation I: Weak attackers. In the adversarial evaluation problem of (7 ###reference_###), the adversary maximizes an upper bound on the classification error. This means that any solution to (7 ###reference_###) is not guaranteed to increase the classification error in (5 ###reference_###), resulting in adversaries which are misaligned with the goal of finding adversarial examples. Indeed,\nwhen the surrogate is an upper bound on the classification error, the only conclusion about the perturbation obtained from (7 ###reference_###) and its true objective (5 ###reference_###) is:\nNotably, the RHS of (9 ###reference_###) can be arbitrarily large while the left-hand side can simultaneously be equal to zero, i.e., the problem in (7 ###reference_###) can fail to produce an adversarial example, even at optimality. Thus, while it is known empirically that attacks based on (7 ###reference_###) tend to overestimate robustness (Croce & Hein, 2020 ###reference_b10###), this argument shows that this shortcoming is evident a priori.\nLimitation II: Ineffective defenders. Because attacks which seek to maximize upper bounds on the classification error are not proper surrogates for the classification error (cf. Limitation I), training a model on such perturbations does not guarantee any improvement in robustness. Therefore, AT algorithms which seek to solve (8 ###reference_###) are ineffective in that they do not optimize the worst-case classification error. For this reason, it should not be surprising that robust overfitting (Rice et al., 2020 ###reference_b37###) occurs for models trained to solve eq. 8 ###reference_###.\nBoth Limitation I and Limitation II arise directly by virtue of rewriting (7 ###reference_###) and (8 ###reference_###) with the surrogate loss . 
To illustrate this more concretely, consider the following example.\nLet be given, let denote the number of classes in a classification problem, and let denote the cross-entropy loss. Consider two possible logit vectors of class probabilities:\nAssume without loss of generality that the correct class is the first class. Then does not lead to an adversarial example, whereas does. However, observe that , which tends to as and . In contrast, which remains bounded as . Hence, an adversary maximizing the cross-entropy will always choose over and will therefore fail to identify the adversarial example.\nTherefore, to summarize, there is a distinct tension between the efficient, yet misaligned paradigm of surrogate-based adversarial training and the principled, yet intractable paradigm of minimax optimization on the classification error. In the remainder of this section, we resolve this tension by decoupling the optimization problems of the attacker and the defender."
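The failure mode in this example can be checked numerically. The sketch below uses our own illustrative numbers (the class count K and the probability values are not the paper's): the cross-entropy ranks the non-adversarial vector above the adversarial one, while the negative margin ranks them correctly.

```python
import math

K = 1000     # number of classes (illustrative)
eps = 1e-6

# p_a: the true class (index 0) barely wins over K-1 near-uniform classes.
# Not an adversarial example, yet its cross-entropy grows like log K.
p_a = [1.0 / K + eps] + [(1.0 - 1.0 / K - eps) / (K - 1)] * (K - 1)

# p_b: class 1 narrowly beats the true class -> a genuine adversarial
# example, but with cross-entropy bounded by -log(0.49).
p_b = [0.49, 0.51] + [0.0] * (K - 2)

ce = lambda p: -math.log(p[0])           # cross-entropy for true class y = 0
misclassified = lambda p: max(p) > p[0]  # did the attack succeed?
# Negative margin computed on probabilities; since softmax is monotone,
# its sign agrees with the logit-based definition in the text.
neg_margin = lambda p: max(p[1:]) - p[0]

assert ce(p_a) > ce(p_b)                        # CE-maximizing adversary picks p_a...
assert not misclassified(p_a)                   # ...which is not adversarial,
assert misclassified(p_b)                       # while p_b is.
assert neg_margin(p_b) > 0 > neg_margin(p_a)    # the margin ranks them correctly
```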
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "",
+ "text": "Our starting point is the two-player zero-sum formulation in (6 ###reference_###). Observe that this minimax optimization problem can be equivalently cast as a bilevel optimization problem (to be precise, the optimal value in (17 ###reference_###) is a function of , i.e., , and the constraint must hold for almost every ; we omit these details for ease of exposition):\nWhile this problem still constitutes a zero-sum game, the role of the attacker (the constraint in (12 ###reference_###)) and the role of the defender (the objective in (11 ###reference_###)) are now decoupled.\nFrom this perspective, the tension engendered by introducing surrogate losses is laid bare: the attacker ought to maximize a lower bound on the classification error (cf. Limitation I), whereas the defender ought to minimize an upper bound on the classification error (cf. Limitation II). This implies that to preserve guarantees on optimality, the attacker and defender must optimize separate objectives. In what follows, we discuss these objectives for the attacker and defender in detail.\nThe attacker\u2019s objective. We first address the role of the attacker. To do so, we define the negative margin of the classifier as follows:\nWe call the negative margin because a positive value of (13 ###reference_###) corresponds to a misclassification. As we show in the following proposition, the negative margin function (which is differentiable) provides an alternative characterization of the classification error.\nGiven a fixed data pair , let denote any maximizer of over the classes and perturbations satisfying , i.e.,\nThen if , induces a misclassification and satisfies the constraint in (12 ###reference_###), meaning that is an adversarial example. Otherwise, if ,\nthen any satisfies (12 ###reference_###), and no adversarial example exists for the pair . In summary, if is as in eq. 14 ###reference_###, then solves the lower level problem in eq. 
12 ###reference_###.\nWe present a proof in Appendix A ###reference_### (this result is similar in spirit to (Gowal et al., 2019 ###reference_b17###, Theorem 3.1); however, that result only holds for linear functions, whereas Proposition 1 ###reference_position1### holds for an arbitrary function ). Proposition 1 ###reference_position1### implies that the non-differentiable constraint in (12 ###reference_###) can be equivalently recast as an ensemble of differentiable optimization problems that can be solved independently. This can collectively be expressed as\nNote that this does not constitute a relaxation; (12 ###reference_###) and (15 ###reference_###) are equivalent optimization problems. This means that the attacker can maximize the classification error directly using first-order optimization methods without resorting to a relaxation. Furthermore, in Appendix D ###reference_###, we give an example of a scenario wherein solving (15 ###reference_###) retrieves the optimal adversarial perturbation whereas maximizing the standard adversarial surrogate fails to do so.\nThe defender\u2019s objective. Next, we consider the role of the defender. To handle the discontinuous upper-level problem in (11 ###reference_###), note that this problem is equivalent to a perturbed version of the supervised learning problem in (1 ###reference_###). As discussed in \u00a7 2.1 ###reference_###, the strongest results for problems of this kind have historically been achieved by means of a surrogate-based relaxation. Subsequently, replacing the 0-1 loss with a differentiable upper bound like the cross-entropy is a principled, guarantee-preserving approach for the defender."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "",
+ "text": "By combining the disparate problems discussed in the preceding section, we arrive at a novel non-zero-sum (almost-everywhere) differentiable formulation of adversarial training:\n\n\n\n\n\n\n\n\n\n(16)\n\n\n\n\n\n\n(17)\n\n\nNotice that the second level of this bilevel problem remains non-smooth due to the maximization over the classes . To impart smoothness to the problem without relaxing the constraint, observe that we can equivalently solve distinct smooth problems in the second level for each sample , resulting in the following equivalent optimization problem:\nHence, in (20 ###reference_###), we first obtain one perturbation per class which maximizes the negative margin for that particular class. Next, in (19 ###reference_###), we select the class index corresponding to the perturbation that maximized the negative margin. And finally, in the upper level, the surrogate minimization over is on the perturbed data pair . The result is a non-zero-sum formulation for AT that is amenable to gradient-based optimization, and preserves the optimality guarantees engendered by surrogate loss minimization without weakening the adversary."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "Given the non-zero-sum formulation of AT, the next question is how one should solve this bilevel problem in practice. Our starting point is the empirical version of this bilevel problem, wherein we assume access to a finite dataset of instance-label pairs sampled i.i.d. from .\nTo solve this empirical problem, we adopt an approach based on stochastic optimization. That is, we first iteratively sample mini-batches from our dataset uniformly at random, and then obtain adversarial perturbations by solving the lower level problems in (22 ###reference_###) and (23 ###reference_###). Note that given the differentiability of the negative margin, the lower level problems can be solved iteratively with generic optimizers, e.g., Adam (Kingma & Ba, 2014 ###reference_b24###) or RMSprop. This procedure is summarized in Algorithm 1 ###reference_###, which we call the BEst Targeted Attack (BETA), given that it directly maximizes the classification error.\nAfter obtaining such perturbations, we calculate the perturbed loss in (21 ###reference_###), and then differentiate through this loss with respect to the model parameters. By updating the model parameters in the negative direction of this gradient, our algorithm seeks classifiers that are robust against perturbations found by BETA. We call the full adversarial training procedure based on this attack BETA Adversarial Training (BETA-AT), as it invokes BETA as a subroutine; see Algorithm 2 ###reference_### for details. Also see the timing and performance figures in the appendix for an empirical study of the computational complexity of BETA."
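For intuition, here is a minimal sketch of BETA's inner loop in the special case of a linear classifier, where each per-class negative-margin subproblem over the l-infinity ball has a closed-form solution. The general algorithm instead runs a first-order optimizer such as Adam or RMSprop on the negative margin; the weights below are made up for illustration.

```python
import math

def beta_attack_linear(W, b, x, y, eps):
    """Per-class margin maximization for a linear classifier f(x) = W x + b.

    For each incorrect class j, the negative margin
        m_j(delta) = (w_j - w_y) . (x + delta) + (b_j - b_y)
    is affine in delta, so its maximum over the l_inf ball of radius eps is
    attained in closed form at delta_j = eps * sign(w_j - w_y).  BETA keeps
    the class whose maximized negative margin is largest.
    """
    sign = lambda t: (t > 0) - (t < 0)
    best_margin, best_delta = -math.inf, None
    for j in range(len(W)):
        if j == y:
            continue
        direction = [wj - wy for wj, wy in zip(W[j], W[y])]
        delta = [eps * sign(d) for d in direction]
        margin = sum(d * (xi + di) for d, xi, di in zip(direction, x, delta)) + (b[j] - b[y])
        if margin > best_margin:
            best_margin, best_delta = margin, delta
    return best_delta, best_margin

# Illustrative 3-class, 2-dimensional problem (weights are made up).
W = [[1.0, 0.0], [0.8, 0.5], [-1.0, 2.0]]
b = [0.0, 0.0, 0.0]
x = [1.0, 0.0]                      # clean logits [1.0, 0.8, -1.0] -> class 0
delta, margin = beta_attack_linear(W, b, x, y=0, eps=0.5)
x_adv = [xi + di for xi, di in zip(x, delta)]
adv_logits = [sum(w * v for w, v in zip(row, x_adv)) + bi for row, bi in zip(W, b)]
assert margin > 0                                   # an adversarial example exists
assert adv_logits.index(max(adv_logits)) != 0       # and it flips the prediction
```

A positive returned margin certifies a misclassification, mirroring Proposition 1.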
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "###figure_1### ###figure_2### In this section, we evaluate the performance of BETA and BETA-AT on CIFAR-10 (Krizhevsky et al., 2009 ###reference_b26###). Throughout, we consider a range of AT algorithms, including PGD (Madry et al., 2018 ###reference_b31###), FGSM (Goodfellow et al., 2015 ###reference_b16###), TRADES (Zhang et al., 2019 ###reference_b52###), MART (Wang et al., 2020 ###reference_b46###), as well as a range of adversarial attacks, including APGD and AutoAttack (Croce & Hein, 2020 ###reference_b10###). We consider the standard perturbation budget of , and all training and test-time attacks use a step size of . For both TRADES and MART, we set the trade-off parameter , which is consistent with the original implementations (Wang et al., 2020 ###reference_b46###; Zhang et al., 2019 ###reference_b52###).\nThe bilevel formulation eliminates robust overfitting. Robust overfitting occurs when the robust test accuracy peaks immediately after the first learning rate decay, and then falls significantly in subsequent epochs as the model continues to train (Rice et al., 2020 ###reference_b37###). This is illustrated in Figure 1(a) ###reference_sf1###, in which we plot the learning curves (i.e., the clean and robust accuracies for the training and test sets) for a ResNet-18 (He et al., 2016 ###reference_b19###) trained using 10-step PGD against a 20-step PGD adversary. Notice that after the first learning rate decay at epoch 100, the robust test accuracy spikes, before dropping off in subsequent epochs. On the other hand, BETA-AT does not suffer from robust overfitting, as shown in Figure 1(b) ###reference_sf2###. We argue that this strength of our method is a direct result of our bilevel formulation, in which we train against a proper surrogate for the adversarial classification error.\nBETA-AT outperforms baselines on the last iterate of training. 
We next compare the performance of ResNet-18 models trained using five different AT algorithms: FGSM, PGD, TRADES, MART, and BETA-AT. PGD, TRADES, and MART used a 10-step adversary at training time. At test time, the models were evaluated against five different adversaries: FGSM, 10-step PGD, 40-step PGD, 10-step BETA, and APGD. We report the performance of two different checkpoints for each algorithm: the best-performing checkpoint chosen by early stopping on a held-out validation set, and the performance of the last checkpoint from training. Note that while BETA performs comparably to the baseline algorithms with respect to early stopping, it outperforms these algorithms significantly when the test-time adversaries attack the last checkpoint of training. This owes to the fact that BETA does not suffer from robust overfitting, meaning that the last and best checkpoints perform similarly.\nBETA matches the performance of AutoAttack. AutoAttack is a state-of-the-art attack which is widely used to estimate the robustness of trained models on leaderboards such as RobustBench (Croce et al., 2020a ###reference_b11###; Croce & Hein, 2020 ###reference_b10###). In brief, AutoAttack comprises a collection of four disparate attacks: APGD-CE, APGD-T, FAB, and Square Attack. AutoAttack also involves several heuristics, including multiple restarts and variable stopping conditions. In Table 2 ###reference_###, we compare the performance of the top-performing models on RobustBench against AutoAttack, APGD-T, and BETA with RMSprop. Both APGD-T and BETA used thirty steps, whereas we used the default implementation of AutoAttack, which runs for 100 iterations. We also recorded the gap between AutoAttack and BETA. 
Notice that the 30-step BETA\u2014a heuristic-free algorithm derived from our bilevel formulation of AT\u2014performs almost identically to AutoAttack, despite the fact that AutoAttack runs for significantly more iterations and uses five restarts, which endows AutoAttack with an unfair computational advantage. That is, except for a negligible number of samples, BETA matches the performance of APGD-T and AutoAttack, despite using an off-the-shelf optimizer."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "Robust overfitting. Several recent papers (see, e.g., (Rebuffi et al., 2021 ###reference_b36###; Chen et al., 2021 ###reference_b9###; Yu et al., 2022 ###reference_b51###; Dong et al., 2022 ###reference_b14###; Wang et al., 2020 ###reference_b46###; Lee et al., 2020 ###reference_b30###)) have attempted to explain and resolve robust overfitting (Rice et al., 2020 ###reference_b37###). However, none of these works point to a fundamental limitation of AT as the cause of robust overfitting. Rather, much of this past work has focused on proposing heuristics for algorithms specifically designed to reduce robust overfitting, rather than to improve AT. In contrast, we posit that the lack of guarantees of the zero-sum surrogate-based AT paradigm (Madry et al., 2018 ###reference_b31###) is at fault, as this paradigm is not designed to maximize robustness with respect to classification error. And indeed, our empirical evaluations in the previous section confirm that our non-zero-sum formulation eliminates robust overfitting.\nEstimating adversarial robustness. There is empirical evidence that attacks based on surrogates (e.g., PGD) overestimate the robustness of trained classifiers (Croce & Hein, 2020 ###reference_b10###; Croce et al., 2020b ###reference_b12###). Indeed, this evidence served as motivation for the formulation of more sophisticated attacks like AutoAttack (Croce & Hein, 2020 ###reference_b10###), which tend to provide more accurate estimates of robustness. In contrast, we provide solid, theoretical evidence that commonly used attacks overestimate robustness due to the misalignment between standard surrogate losses and the adversarial classification error. Moreover, we show that optimizing the BETA objective with a standard optimizer (e.g., RMSprop) achieves the same robustness as AutoAttack without employing ad hoc training procedures such as multiple restarts, 
convoluted stopping conditions, or adaptive learning rates.\nOne notable feature of past work is an observation made in (Gowal et al., 2019 ###reference_b17###), which finds that multitargeted attacks tend to more accurately estimate robustness. However, their theoretical analysis only applies to linear functions, whereas our work extends these ideas to the nonlinear setting of DNNs. Moreover, (Gowal et al., 2019 ###reference_b17###) do not explore training using a multitargeted attack, whereas we show that BETA-AT is an effective AT algorithm that mitigates the impact of robust overfitting.\nBilevel formulations of AT. Prior to our work, (Zhang et al., 2022 ###reference_b53###) proposed a different pseudo-bilevel formulation for AT, wherein the main objective was to justify the Fast-AT algorithm introduced in (Wong et al., 2020 ###reference_b48###). (In a strict sense, the formulation of (Zhang et al., 2022 ###reference_b53###) is not a bilevel problem. In general, the most concise way to write a bilevel optimization problem is subject to . In such problems the value only depends on , as the objective function is then uniquely determined. This is not the case in (Zhang et al., 2022 ###reference_b53###, eq. (7)), where an additional variable appears, corresponding to the random initialization of Fast-AT. Hence, in (Zhang et al., 2022 ###reference_b53###) the function is not uniquely defined by , but is a random function realized at each iteration of the algorithm.) Specifically, the formulation in (Zhang et al., 2022 ###reference_b53###) is designed to produce solutions that coincide with the iterates of Fast-AT by linearizing the attacker\u2019s objective. In contrast, our bilevel formulation appears naturally following principled relaxations of the intractable classification-error AT formulation. 
In this way, the formulation in (Zhang et al., 2022 ###reference_b53###) applies only in the context of Fast-AT, whereas our formulation deals more generally with the task of AT.\nIn the same spirit as our work, (Mianjy & Arora, 2024 ###reference_b33###) solve a problem equivalent to a bilevel problem wherein the adversary maximizes a \u201creflected\u201d cross-entropy loss. While this paper focuses on binary classification, the authors show that this approach leads to improved adversarial robustness and admits convergence guarantees. Our approach, while related, is distinct in its reformulation of the adversarial training problem via the\nnegative margin loss. Moreover, our results show that BETA mitigates robust overfitting and is roughly five times faster than AutoAttack.\nTheoretical underpinnings of surrogate minimization. In this paper, we focused on the empirical performance of AT in the context of the literature concerning adversarial examples in computer vision. However, the efficacy of surrogate losses in minimizing the target 0-1 loss is a well-studied topic among theorists. Specifically, this literature considers two notions under which minimizers of the surrogate loss also minimize the target loss: (1) consistency, which requires uniform convergence, and (2) calibration, which requires the weaker notion of pointwise convergence (although (Bartlett et al., 2006 ###reference_b5###) shows that these notions are equivalent for standard, i.e., non-adversarial, classification).\nIn the particular case of classification in the presence of adversaries, (Bao et al., 2020 ###reference_b3###) and (Meunier et al., 2022 ###reference_b32###) claimed that for the class of linear models, no convex surrogate loss is calibrated with respect to the 0-1 zero-sum formulation of AT, although certain classes of nonconvex losses can maintain calibration for such settings. 
However, in (Awasthi et al., 2021 ###reference_b2###), the authors challenge this claim, and generalize the calibration results considered by (Bao et al., 2020 ###reference_b3###) beyond linear models. One interesting direction for future work would be to provide a theoretical analysis of BETA with respect to the margin-based consistency results proved very recently in (Frank & Niles-Weed, 2023 ###reference_b15###). We also note that in parallel, efforts have been made to design algorithms that are approximately calibrated, leading to\u2014among other things\u2014the TRADES algorithm (Zhang et al., 2019 ###reference_b52###), which we compare to in Section 5 ###reference_###. Our work is in the same vein, although BETA does not require approximating a divergence term, which leads to non-calibration of the TRADES objective."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "In this paper, we argued that the surrogate-based relaxation commonly employed to improve the tractability of adversarial training voids guarantees on the ultimate robustness of trained classifiers, resulting in weak adversaries and ineffective algorithms. This shortcoming motivated the formulation of a novel, yet natural bilevel approach to adversarial training and evaluation in which the adversary and defender optimize separate objectives, which constitutes a non-zero-sum game.\nBased on this formulation, we developed a new adversarial attack algorithm (BETA) and a concomitant AT algorithm, which we call BETA-AT. In our experiments, we showed that BETA-AT eliminates robust overfitting, and that even when early-stopping-based model selection is used, BETA-AT performs comparably to AT. Finally, we showed that BETA provides almost identical estimates of robustness to AutoAttack."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "Suppose that there exists satisfying such that for some , we have . That is, assume that\nand for some and some we have , which implies that\n. Hence, induces a misclassification error, i.e.,\nIn particular, if\nthen it holds that\nOtherwise, if it holds that\nthen for all and all , we have ,\nso that , i.e., there is no adversarial example in the ball. In this case, for any , if it holds that\nthen\nIn conclusion, the solution\nalways yields a maximizer of the misclassification error."
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "First, note that the problem in eqs. 21 ###reference_###, 22 ###reference_### and 23 ###reference_### is equivalent to\nThis is because the maximum over in eq. 32 ###reference_### is always attained at the coordinate vector \nsuch that is maximum.\nAn alternative is to smooth the lower level optimization problem by adding an entropy regularization:\nwhere is some temperature constant. The inequality here is due to the fact that the entropy of a discrete probability is positive. The innermost maximization problem in (33 ###reference_###) has the closed-form solution:\nHence, after relaxing the second level maximization problem following eq. 33 ###reference_###, and plugging in the optimal values for we arrive at:\nIn this formulation, both upper- and lower-level problems are smooth (barring the possible use of nonsmooth components like ReLU). Most importantly (I) the smoothing is obtained through a lower bound of the original objective in eqs. 22 ###reference_### and 23 ###reference_###, retaining guarantees that the adversary will increase the misclassification error and (II) all the adversarial perturbations obtained for each class now appear in the upper level (40 ###reference_###), weighted by their corresponding negative margin. In this way, we make efficient use of all perturbations generated: if two perturbations from different classes achieve the same negative margin, they will affect the upper-level objective in fair proportion. This formulation gives rise to algorithm 3 ###reference_###."
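The closed form claimed for the entropy-regularized inner maximization can be verified numerically: over the probability simplex, the objective ⟨p, m⟩ + T·H(p) is maximized by p* = softmax(m/T), with optimal value T·logsumexp(m/T). A small sketch (the margin values and temperature below are illustrative, not from the paper):

```python
import math
import random

def smoothed_max(margins, T):
    # Closed-form value of  max_{p in simplex} <p, m> + T * H(p),
    # namely T * logsumexp(m / T), computed in a numerically stable way.
    m0 = max(margins)
    return m0 + T * math.log(sum(math.exp((m - m0) / T) for m in margins))

def smoothed_objective(p, margins, T):
    # <p, m> + T * H(p) for a point p on the simplex
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return sum(pi * mi for pi, mi in zip(p, margins)) + T * entropy

margins = [0.3, -1.2, 0.05]   # illustrative per-class negative margins
T = 0.5                       # temperature

opt = smoothed_max(margins, T)
random.seed(0)
for _ in range(1000):
    w = [random.random() + 1e-12 for _ in margins]
    p = [wi / sum(w) for wi in w]
    assert smoothed_objective(p, margins, T) <= opt + 1e-9   # closed form dominates

assert opt >= max(margins)                                      # entropy term is nonnegative
assert abs(smoothed_max(margins, 1e-6) - max(margins)) < 1e-4   # recovers hard max as T -> 0
```

This matches the role of the temperature in the relaxation: the smoothed value upper-bounds the hard maximum of the margins and converges to it as the temperature vanishes.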
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "###figure_3### ###figure_4### In Figure 2 ###reference_###, we analyze the trade-off between the running time and performance of BETA. Specifically, on the horizontal axis, we plot the running time (in seconds) of an epoch of BETA, and on the vertical axis we plot the performance measured via the robust accuracy with respect to a 20-step PGD adversary. We compare BETA to PGD and TRADES, and we show the speed-performance trade-off when each of these algorithms is run for 5, 10, and 20 iterations; the iteration count is labeled next to each data point. The leftmost panel shows early stopping model selection, and the rightmost panel shows last iterate model selection. Notice that while BETA is significantly more resource intensive than PGD and TRADES, BETA tends to outperform the baselines, particularly if one looks at the gap between early stopping and last iterate model selection.\nWe next analyze the running time of BETA when used to adversarially evaluate state-of-the-art robust models. In particular, we return to the setting of Table 2 ###reference_###, wherein we compared the performance of AutoAttack to BETA. In Figure 3 ###reference_###, we show the wall-clock time of performing adversarial evaluation using both of these algorithms. Notice that AutoAttack takes significantly longer to evaluate each of these models, and as we showed in Table 2 ###reference_###, this additional time does not yield a better estimate of the robustness of these models. Indeed, by averaging over the scores in Figure 3 ###reference_###, we find that BETA is 5.11\u00d7 faster than AutoAttack on average."
+ },
+ {
+ "section_id": "Appendix 4",
+ "parent_section_id": null,
+ "section_name": "",
+ "text": "In this appendix, we show that there exist cases in which our margin-based inner maximization retrieves the optimal adversarial perturbation while the standard inner maximization with the surrogate loss fails to do so. In this example, we consider a classification problem in which the classifier is linear across three classes . Specifically, we define in the following way:\nFurthermore, let , let , and assume without loss of generality that the correct class is . The solution for the maximization of the cross-entropy loss is given by:\nwhere denotes the cross-entropy loss. Now observe that by the monotonicity of the logarithm function, this problem on the right-hand side is equivalent to the following problem:\nwhere in the final step we split the problem so that we optimize separately over and . Observe that the inner problem, for which the numerator is constant, satisfies the following:\nAs the objective is linear in the rightmost optimization problem, it is clear that . Now returning to (45 ###reference_###), we substitute and are therefore left to solve the following problem:\nwhere in the final step we used the fact that the objective is symmetric in . By visual inspection, this function achieves its maximum at (see Figure 4 ###reference_###). Hence, the optimal perturbation obtained via cross-entropy maximization is . Therefore,\nThen, by applying the classifier , we find that\nThis shows that the class assigned to this optimally perturbed example is still the correct class , i.e., the attacker fails to find an adversarial example.\n###figure_5### In contrast, the main idea in the derivation of the BETA algorithm is to optimize the margins separately for both possible incorrect classes and . In particular, for the class , BETA solves the following problem:\nThe point is optimal for this linear problem. On the other hand, for the class , BETA solves the following problem:\nThe point is optimal for this problem. 
Observe that both achieve the same value of the margin, so BETA can choose either optimal point; without loss of generality, assume that BETA chooses the second point\n as the optimal solution. The corresponding classifier takes the following form:\nHence, the classifier returns the incorrect class, i.e., the attack is successful. This shows that whereas the cross-entropy maximization problem fails to find an adversarial example, BETA succeeds in finding an adversarial example."
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.11.1\">Adversarial performance on CIFAR-10.</span> We report the test accuracies of various AT algorithms against different adversarial attacks on the CIFAR-10 dataset.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.9\" style=\"width:390.3pt;height:213.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(30.7pt,-16.8pt) scale(1.18642355036303,1.18642355036303) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.9.9\">\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.9.9.10.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T1.9.9.10.1.1\"><span class=\"ltx_text\" id=\"S5.T1.9.9.10.1.1.1\"></span> <span class=\"ltx_text\" id=\"S5.T1.9.9.10.1.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.9.9.10.1.1.2.1\">\n<span class=\"ltx_tr\" id=\"S5.T1.9.9.10.1.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T1.9.9.10.1.1.2.1.1.1\">Training</span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.9.9.10.1.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T1.9.9.10.1.1.2.1.2.1\">algorithm</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S5.T1.9.9.10.1.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"12\" id=\"S5.T1.9.9.10.2\">Test accuracy</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T1.3.3.3.4\">Clean</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T1.3.3.3.5\">FGSM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T1.1.1.1.1\">PGD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
colspan=\"2\" id=\"S5.T1.2.2.2.2\">PGD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T1.3.3.3.3\">BETA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T1.3.3.3.6\">APGD</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.11\">\n<td class=\"ltx_td\" id=\"S5.T1.9.9.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.2\">Best</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.3\">Last</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.4\">Best</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.5\">Last</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.6\">Best</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.7\">Last</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.8\">Best</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.9\">Last</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.10\">Best</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.11\">Last</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.12\">Best</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.11.13\">Last</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.1\">FGSM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.2\">81.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.3\">75.43</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.4\">94.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.5\">94.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.6\">42.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.7\">1.49</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.8\">42.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.9\">1.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.10\">40.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.11\">0.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.12\">41.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.9.9.12.13\">0.00</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.1\">PGD\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.2\">83.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.3\">83.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.4\">51.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.5\">47.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.6\">46.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.7\">39.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.8\">45.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.9\">39.45</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.10\">43.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.11\">40.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.12\">44.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.13\">42.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.1\">TRADES\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.2\">81.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.3\">81.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.4\">52.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.5\">51.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.6\">47.85</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T1.5.5.5.7\">42.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.8\">47.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.9\">42.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.10\">44.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.11\">40.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.12\">43.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.13\">41.33</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.1\">MART\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.2\">78.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.3\">77.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.4\">53.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.5\">53.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.6\">49.08</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.7\">41.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.8\">48.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.9\">41.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.10\">44.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.11\">41.22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.12\">45.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.13\">42.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.7.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.1\">BETA-AT\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.2\">87.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.3\">86.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.4\">51.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.5\">51.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.T1.7.7.7.6\">44.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.7\">43.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.8\">43.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.9\">42.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.10\">42.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.11\">42.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.12\">41.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.7.7.7.13\">41.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.1\">BETA-AT\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.2\">85.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.3\">85.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.4\">51.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.5\">51.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.6\">45.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.7\">45.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.8\">45.22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.9\">45.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.10\">44.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.11\">44.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.12\">44.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.13\">44.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.1\">BETA-AT\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.2\">82.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.3\">81.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S5.T1.9.9.9.4\">54.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.5\">53.99</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.6\">49.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.7\">48.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.8\">49.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.9\">48.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.10\">46.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.11\">45.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.12\">45.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.13\">45.25</td>\n</tr>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 1: Adversarial performance on CIFAR-10. We report the test accuracies of various AT algorithms against different adversarial attacks on the CIFAR-10 dataset."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1\">Estimated robustness (robust test accuracy).</span> BETA+RMSprop (ours) vs APGD-targeted (APGD-T) vs AutoAttack (AA). CIFAR-10. BETA and APGD-T use 30 iterations + single restart. . AA uses 4 different attacks with 100 iterations and 5 restarts.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.5\" style=\"width:303.5pt;height:120pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-77.2pt,30.4pt) scale(0.662713624458047,0.662713624458047) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.5.1\">\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T2.5.1.1.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Model</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.1.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">BETA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.1.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">APGD-T</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.1.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">AA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.1.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">BETA/AA gap</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.1.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">Architecture</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T2.5.1.2.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Wang et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib47\" title=\"\">2023</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.2.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">70.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.2.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">70.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.2.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">70.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.2.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.5.1.2.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-70-16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.3.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Wang et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib47\" title=\"\">2023</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.3.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">67.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.3.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">67.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.3.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">67.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.3.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.3.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-28-10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.4.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Rebuffi et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib36\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.4.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">66.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.4.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">66.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.4.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">66.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.4.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.4.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-70-16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.5.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Gowal et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib18\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.5.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">66.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.5.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">66.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.5.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">66.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.5.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.5.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-70-16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.6.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Huang et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib22\" title=\"\">2022</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.6.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">65.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.6.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">65.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.6.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">65.79</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.6.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.6.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-A4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.7.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Rebuffi et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib36\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.7.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">64.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.7.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">64.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.7.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">64.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.7.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.7.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-106-16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.8.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Rebuffi et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib36\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.8.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">64.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.8.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">64.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.8.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">64.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.8.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.8.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-70-16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.1.9.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Gowal et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib18\" title=\"\">2021</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.9.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">63.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.9.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">63.45</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.9.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">63.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.9.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.1.9.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-28-10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.1.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.5.1.10.1\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\"><cite class=\"ltx_cite ltx_citemacro_citet\">Pang et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.11035v2#bib.bib35\" title=\"\">2022</a>)</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.5.1.10.2\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">63.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.5.1.10.3\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">63.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.5.1.10.4\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">63.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.5.1.10.5\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">0.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.5.1.10.6\" style=\"padding-top:0.5pt;padding-bottom:0.5pt;\">WRN-70-16</td>\n</tr>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 2: Estimated robustness (robust test accuracy). BETA+RMSprop (ours) vs APGD-targeted (APGD-T) vs AutoAttack (AA). CIFAR-10. BETA and APGD-T use 30 iterations + single restart. . AA uses 4 different attacks with 100 iterations and 5 restarts."
+ }
+ },
+ "image_paths": {
+ "1(a)": {
+ "figure_path": "2306.11035v2_figure_1(a).png",
+ "caption": "(a) PGD^10 learning curves.\nFigure 1: BETA does not suffer from robust overfitting. We plot the learning curves against a PGD^20 adversary for PGD^10 and BETA-AT^10. Observe that although PGD displays robust overfitting after the first learning rate decay step, BETA-AT does not suffer from this pitfall.",
+ "url": "http://arxiv.org/html/2306.11035v2/x1.png"
+ },
+ "1(b)": {
+ "figure_path": "2306.11035v2_figure_1(b).png",
+ "caption": "(b) BETA-AT^10 learning curves.\nFigure 1: BETA does not suffer from robust overfitting. We plot the learning curves against a PGD^20 adversary for PGD^10 and BETA-AT^10. Observe that although PGD displays robust overfitting after the first learning rate decay step, BETA-AT does not suffer from this pitfall.",
+ "url": "http://arxiv.org/html/2306.11035v2/x2.png"
+ },
+ "2": {
+ "figure_path": "2306.11035v2_figure_2.png",
+ "caption": "Figure 2: Adversarial training performance-speed trade-off. Each point is annotated with the number of steps with which the corresponding algorithm was run. Observe that robust overfitting is eliminated by BETA, but that this comes at the cost of increased computational overhead. This reveals an expected performance-speed trade-off for our algorithm.",
+ "url": "http://arxiv.org/html/2306.11035v2/extracted/5479317/figures/performance-time.png"
+ },
+ "3": {
+ "figure_path": "2306.11035v2_figure_3.png",
+ "caption": "Figure 3: Adversarial evaluation timing comparison. The running times for evaluating the top models on RobustBench using AutoAttack and BETA with the same settings as Table 2 are reported. On average, BETA is 5.11 times faster than AutoAttack.",
+ "url": "http://arxiv.org/html/2306.11035v2/extracted/5479317/figures/timing.png"
+ },
+ "4": {
+ "figure_path": "2306.11035v2_figure_4.png",
+ "caption": "Figure 4: Plot of the function to be maximized in eq. 48. We subtract y=2.5 for ease of viewing.",
+ "url": "http://arxiv.org/html/2306.11035v2/extracted/5479317/figures/plot_function_max_ce.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Invariant risk minimization.",
+ "author": "Martin Arjovsky, L\u00e9on Bottou, Ishaan Gulrajani, and David Lopez-Paz.",
+ "venue": "arXiv preprint arXiv:1907.02893, 2019.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Calibration and consistency of adversarial surrogate losses.",
+ "author": "Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, and Yutao Zhong.",
+ "venue": "Advances in Neural Information Processing Systems, 34:9804\u20139815, 2021.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Calibrated surrogate losses for adversarially robust classification.",
+ "author": "Han Bao, Clay Scott, and Masashi Sugiyama.",
+ "venue": "In Conference on Learning Theory, pp. 408\u2013451. PMLR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Practical bilevel optimization: algorithms and applications, volume 30.",
+ "author": "Jonathan F Bard.",
+ "venue": "Springer Science & Business Media, 2013.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Convexity, classification, and risk bounds.",
+ "author": "Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe.",
+ "venue": "Journal of the American Statistical Association, 101(473):138\u2013156, 2006.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Evasion attacks against machine learning at test time.",
+ "author": "B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, and F. Roli.",
+ "venue": "In ECML/PKDD, 2013.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Jacobian adversarially regularized networks for robustness.",
+ "author": "Alvin Chan, Yi Tay, Yew Soon Ong, and Jie Fu.",
+ "venue": "In ICLR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "More data can expand the generalization gap between adversarially robust and standard models.",
+ "author": "Lin Chen, Yifei Min, Mingrui Zhang, and Amin Karbasi.",
+ "venue": "In International Conference on Machine Learning, pp. 1670\u20131680. PMLR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Robust overfitting may be mitigated by properly learned smoothening.",
+ "author": "Tianlong Chen, Zhenyu Zhang, Sijia Liu, Shiyu Chang, and Zhangyang Wang.",
+ "venue": "In International Conference on Learning Representations, 2021.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks.",
+ "author": "Francesco Croce and Matthias Hein.",
+ "venue": "In International Conference on Machine Learning, pp. 2206\u20132216. PMLR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "RobustBench: a standardized adversarial robustness benchmark.",
+ "author": "Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein.",
+ "venue": "arXiv preprint arXiv:2010.09670, 2020a.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks.",
+ "author": "Francesco Croce, Jonas Rauber, and Matthias Hein.",
+ "venue": "International Journal of Computer Vision, 128:1028\u20131046, 2020b.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Provable tradeoffs in adversarially robust classification.",
+ "author": "Edgar Dobriban, Hamed Hassani, David Hong, and Alexander Robey.",
+ "venue": "arXiv preprint arXiv:2006.05161, 2020.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Exploring memorization in adversarial training.",
+ "author": "Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, and Jun Zhu.",
+ "venue": "In International Conference on Learning Representations, 2022.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "The adversarial consistency of surrogate risks for binary classification.",
+ "author": "Natalie Frank and Jonathan Niles-Weed.",
+ "venue": "arXiv preprint arXiv:2305.09956, 2023.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Explaining and harnessing adversarial examples.",
+ "author": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy.",
+ "venue": "In ICLR, 2015.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "An alternative surrogate loss for PGD-based adversarial testing.",
+ "author": "Sven Gowal, Jonathan Uesato, Chongli Qin, Po-Sen Huang, Timothy Mann, and Pushmeet Kohli.",
+ "venue": "arXiv preprint arXiv:1910.09338, 2019.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Improving robustness using generated data.",
+ "author": "Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy Mann.",
+ "venue": "In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Deep residual learning for image recognition.",
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.",
+ "venue": "In CVPR, 2016.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Benchmarking neural network robustness to common corruptions and perturbations.",
+ "author": "Dan Hendrycks and Thomas Dietterich.",
+ "venue": "In International Conference on Learning Representations, 2019.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Learning with a strong adversary.",
+ "author": "Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvari.",
+ "venue": "arXiv preprint arXiv:1511.03034, 2015.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Revisiting residual networks for adversarial robustness: An architectural perspective.",
+ "author": "Shihua Huang, Zhichao Lu, Kalyanmoy Deb, and Vishnu Naresh Boddeti.",
+ "venue": "arXiv preprint arXiv:2212.11005, 2022.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Adversarial logit pairing.",
+ "author": "H. Kannan, A. Kurakin, and I. Goodfellow.",
+ "venue": "arXiv preprint arXiv:1803.06373, 2018.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "Adam: A method for stochastic optimization.",
+ "author": "Diederik P Kingma and Jimmy Ba.",
+ "venue": "arXiv preprint arXiv:1412.6980, 2014.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "Wilds: A benchmark of in-the-wild distribution shifts.",
+ "author": "Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Irena Gao, et al.",
+ "venue": "In International Conference on Machine Learning, pp. 5637\u20135664. PMLR, 2021.",
+ "url": null
+ }
+ },
+ {
344
+ "26": {
345
+ "title": "Cifar datasets (canadian institute for advanced research).",
346
+ "author": "Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.",
347
+ "venue": "2009.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "27": {
353
+ "title": "Adversarial examples in the physical world.",
354
+ "author": "Alexey Kurakin, Ian Goodfellow, and Samy Bengio.",
355
+ "venue": "ICLR Workshop, 2017.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "28": {
361
+ "title": "Perceptual adversarial robustness: Defense against unseen threat models.",
362
+ "author": "Cassidy Laidlaw, Sahil Singla, and Soheil Feizi.",
363
+ "venue": "arXiv preprint arXiv:2006.12655, 2020.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "29": {
369
+ "title": "Finding actual descent directions for adversarial training.",
370
+ "author": "Fabian Latorre, Igor Krawczuk, Leello Tadesse Dadi, Thomas Pethick, and Volkan Cevher.",
371
+ "venue": "In The Eleventh International Conference on Learning Representations, 2023.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "30": {
377
+ "title": "Adversarial vertex mixup: Toward better adversarially robust generalization.",
378
+ "author": "Saehyung Lee, Hyungyu Lee, and Sungroh Yoon.",
379
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "31": {
385
+ "title": "Towards deep learning models resistant to adversarial attacks.",
386
+ "author": "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu.",
387
+ "venue": "In ICLR, 2018.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "32": {
393
+ "title": "Towards consistency in adversarial classification.",
394
+ "author": "Laurent Meunier, Rapha\u00ebl Ettedgui, Rafael Pinot, Yann Chevaleyre, and Jamal Atif.",
395
+ "venue": "Advances in Neural Information Processing Systems, 35:8538\u20138549, 2022.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "33": {
401
+ "title": "Robustness guarantees for adversarially trained neural networks.",
402
+ "author": "Poorya Mianjy and Raman Arora.",
403
+ "venue": "Advances in Neural Information Processing Systems, 36, 2024.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "34": {
409
+ "title": "Logit pairing methods can fool gradient-based attacks.",
410
+ "author": "Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, and Dietrich Klakow.",
411
+ "venue": "arXiv preprint arXiv:1810.12042, 2018.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "35": {
417
+ "title": "Robustness and accuracy could be reconcilable by (Proper) definition.",
418
+ "author": "Tianyu Pang, Min Lin, Xiao Yang, Jun Zhu, and Shuicheng Yan.",
419
+ "venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 17258\u201317277. PMLR, 17\u201323 Jul 2022.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "36": {
425
+ "title": "Fixing data augmentation to improve adversarial robustness.",
426
+ "author": "Sylvestre-Alvise Rebuffi, Sven Gowal, Dan A. Calian, Florian Stimberg, Olivia Wiles, and Timothy Mann.",
427
+ "venue": "arXiv preprint arXiv:2103.01946, 2021.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "37": {
433
+ "title": "Overfitting in adversarially robust deep learning.",
434
+ "author": "Leslie Rice, Eric Wong, and J Zico Kolter.",
435
+ "venue": "In ICML, 2020.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "38": {
441
+ "title": "Model-based domain generalization.",
442
+ "author": "Alexander Robey, George J Pappas, and Hamed Hassani.",
443
+ "venue": "Advances in Neural Information Processing Systems, 34:20210\u201320229, 2021.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "39": {
449
+ "title": "Tighter bounds lead to improved classifiers.",
450
+ "author": "Nicolas Le Roux.",
451
+ "venue": "In International Conference on Learning Representations, 2017.",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "40": {
457
+ "title": "Breeds: Benchmarks for subpopulation shift.",
458
+ "author": "Shibani Santurkar, Dimitris Tsipras, and Aleksander Madry.",
459
+ "venue": "International Conference on Learning Representations, 2021.",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "41": {
465
+ "title": "Adversarially robust generalization requires more data.",
466
+ "author": "Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry.",
467
+ "venue": "Advances in neural information processing systems, 31, 2018.",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "42": {
473
+ "title": "Understanding machine learning: From theory to algorithms.",
474
+ "author": "Shai Shalev-Shwartz and Shai Ben-David.",
475
+ "venue": "Cambridge university press, 2014.",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "43": {
481
+ "title": "Exploring the vulnerability of deep neural networks: A study of parameter corruption.",
482
+ "author": "Xu Sun, Zhiyuan Zhang, Xuancheng Ren, Ruixuan Luo, and Liangyou Li.",
483
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 11648\u201311656, 2021.",
484
+ "url": null
485
+ }
486
+ },
487
+ {
488
+ "44": {
489
+ "title": "Intriguing properties of neural networks.",
490
+ "author": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Dumitru Erhan Joan Bruna, Ian Goodfellow, and Rob Fergus.",
491
+ "venue": "In ICLR, 2013.",
492
+ "url": null
493
+ }
494
+ },
495
+ {
496
+ "45": {
497
+ "title": "Robustness may be at odds with accuracy.",
498
+ "author": "Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry.",
499
+ "venue": "In ICLR, 2019.",
500
+ "url": null
501
+ }
502
+ },
503
+ {
504
+ "46": {
505
+ "title": "Improving adversarial robustness requires revisiting misclassified examples.",
506
+ "author": "Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu.",
507
+ "venue": "ICLR, 2020.",
508
+ "url": null
509
+ }
510
+ },
511
+ {
512
+ "47": {
513
+ "title": "Better diffusion models further improve adversarial training.",
514
+ "author": "Zekai Wang, Tianyu Pang, Chao Du, Min Lin, Weiwei Liu, and Shuicheng Yan.",
515
+ "venue": "arXiv preprint arXiv:2302.04638, 2023.",
516
+ "url": null
517
+ }
518
+ },
519
+ {
520
+ "48": {
521
+ "title": "Fast is better than free: Revisiting adversarial training.",
522
+ "author": "Eric Wong, Leslie Rice, and J. Zico Kolter.",
523
+ "venue": "ICLR, 2020.",
524
+ "url": null
525
+ }
526
+ },
527
+ {
528
+ "49": {
529
+ "title": "Adversarial weight perturbation helps robust generalization.",
530
+ "author": "Dongxian Wu, Shu tao Xia, and Yisen Wang.",
531
+ "venue": "NeurIPS, 2020.",
532
+ "url": null
533
+ }
534
+ },
535
+ {
536
+ "50": {
537
+ "title": "Noise or signal: The role of image backgrounds in object recognition.",
538
+ "author": "Kai Xiao, Logan Engstrom, Andrew Ilyas, and Aleksander Madry.",
539
+ "venue": "International Conference on Machine Learning, 2021.",
540
+ "url": null
541
+ }
542
+ },
543
+ {
544
+ "51": {
545
+ "title": "Understanding robust overfitting of adversarial training and beyond.",
546
+ "author": "Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, and Tongliang Liu.",
547
+ "venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 25595\u201325610. PMLR, 17\u201323 Jul 2022.",
548
+ "url": null
549
+ }
550
+ },
551
+ {
552
+ "52": {
553
+ "title": "Theoretically principled trade-off between robustness and accuracy.",
554
+ "author": "Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan.",
555
+ "venue": "In ICML, 2019.",
556
+ "url": null
557
+ }
558
+ },
559
+ {
560
+ "53": {
561
+ "title": "Revisiting and advancing fast adversarial training through the lens of bi-level optimization.",
562
+ "author": "Yihua Zhang, Guanhua Zhang, Prashant Khanduri, Mingyi Hong, Shiyu Chang, and Sijia Liu.",
563
+ "venue": "In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (eds.), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pp. 26693\u201326712. PMLR, 17\u201323 Jul 2022.",
564
+ "url": null
565
+ }
566
+ }
567
+ ],
568
+ "url": "http://arxiv.org/html/2306.11035v2"
569
+ }
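The extracted paper in `20240318/2307.11714v3.json` further below describes computing the 1D Wasserstein distance between projected discrete measures in closed form by sorting, and averaging over random directions to obtain the Sliced Wasserstein (SW) distance. A minimal sketch of that computation — the function name, Monte Carlo averaging, and sample sizes are illustrative choices of mine, not taken from any file in this commit:

```python
import numpy as np

def sliced_w2_sq(x, y, n_proj=200, seed=0):
    """Monte Carlo estimate of the squared 2-Sliced-Wasserstein distance
    between two uniform discrete measures with equal numbers of points.

    For each random direction theta on the unit sphere, both point clouds
    are projected onto the line spanned by theta; the squared 1D W2 distance
    is then the mean squared difference of the *sorted* projections."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    acc = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)        # uniform direction on the sphere
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        acc += np.mean((px - py) ** 2)        # closed-form 1D W2^2 via sorting
    return acc / n_proj

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 3))
y = rng.normal(size=(100, 3)) + 2.0
```

Since sorting matches quantiles, the distance of a discrete measure to itself is exactly zero, while shifting one cloud strictly increases the estimate.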
20240318/2306.11044v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2307.06212v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2307.11714v3.json ADDED
@@ -0,0 +1,415 @@
+ {
+ "title": "Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses",
+ "abstract": "Optimal Transport has sparked vivid interest in recent years, in particular thanks to the Wasserstein distance, which provides a geometrically sensible and intuitive way of comparing probability measures. For computational reasons, the Sliced Wasserstein (SW) distance was introduced as an alternative to the Wasserstein distance, and has seen uses for training generative Neural Networks (NNs). While convergence of Stochastic Gradient Descent (SGD) has been observed practically in such a setting, there is to our knowledge no theoretical guarantee for this observation. Leveraging recent works on convergence of SGD on non-smooth and non-convex functions by Bianchi et al. (2022), we aim to bridge that knowledge gap, and provide a realistic context under which fixed-step SGD trajectories for the SW loss on NN parameters converge. More precisely, we show that the trajectories approach the set of (sub)-gradient flow equations as the step decreases. Under stricter assumptions, we show a much stronger convergence result for noised and projected SGD schemes, namely that the long-run limits of the trajectories approach a set of generalised critical points of the loss function.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": ""
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Optimal Transport in Machine Learning",
+ "text": "Optimal Transport (OT) allows the comparison of measures on a metric space by generalising the use of the ground metric. Typical applications use the so-called 2-Wasserstein distance, defined as\nwhere is the set of probability measures on admitting a second-order moment and where is the set of measures of of first marginal and second marginal . One may find a thorough presentation of its properties in classical monographs such as Peyr\u00e9 & Cuturi (2019); Santambrogio (2015); Villani (2009).\nThe ability to compare probability measures is useful in probability density fitting problems, which are a sub-genre of generation tasks. In this formalism, one considers a probability measure parametrised by a vector which is designed to approach a target data distribution (typically the real-world dataset). In order to determine suitable parameters, one may choose any probability discrepancy (Kullback-Leibler, Csisz\u00e1r divergences, f-divergences or Maximum Mean Discrepancy (Gretton et al., 2006)), or in our case, the Wasserstein distance. In the case of Generative Adversarial Networks, the optimisation problem which trains the \"Wasserstein GAN\" (Arjovsky et al., 2017) stems from the Kantorovich-Rubinstein dual expression of the 1-Wasserstein distance."
+ },
+ {
+ "section_id": "1.2",
+ "parent_section_id": "1",
+ "section_name": "The Sliced Wasserstein Distance as an Alternative",
+ "text": "The Wasserstein distance suffers from the curse of dimensionality, in the sense that the sample complexity for samples in dimension is of the order (Dudley, 1969). Due to this practical limitation and to the computational cost of the Wasserstein distance, the study of cheaper alternatives has become a prominent field of research. A notable example is Entropic OT, introduced by Cuturi (2013), which adds an entropic regularisation term, advantageously making the problem strongly convex. Sample complexity bounds have been derived by Genevay et al. (2019), showing a convergence in with a constant depending on the regularisation factor.\nAnother alternative is the Sliced Wasserstein (SW) Distance introduced by Rabin et al. (2012), which consists in computing the 1D Wasserstein distances between projections of input measures, and averaging over the projections. The aforementioned projection of a measure on is done by the push-forward operation by the map . Formally, is the measure on such that for any Borel set , . Once the measures are projected onto a line , the computation of the Wasserstein distance becomes substantially simpler numerically. We illustrate this fact in the discrete case, which arises in practical optimisation settings. Let two discrete measures on : with supports and . Their push-forwards by are simply computed by the formula , and the 2-Wasserstein distance between their projections can be computed by sorting their supports: let a permutation sorting , and a permutation sorting ; one has the simple expression\nThe SW distance is the expectation of this quantity with respect to , i.e. uniform on the sphere: . The 2-SW distance is also defined more generally between two measures :\nIn addition to its computational accessibility, the SW distance enjoys a dimension-free sample complexity (Nadjahi et al., 2020). Additional statistical, computational and robustness properties of SW have been explored by Nietert et al. (2022). Moreover, central-limit results have been shown by Xu & Huang (2022) for 1-SW and the 1-max-SW distance (a variant of SW introduced by Deshpande et al. (2019)), and related work by Xi & Niles-Weed (2022) shows the convergence of the sliced error process , where the samples and are drawn for each . Another salient field of research for SW is its metric properties: it has been shown to be weaker than the Wasserstein distance in general by Bonnotte (2013), and metric comparisons with Wasserstein and max-SW have been undertaken by Bayraktar & Guo (2021) and Paty & Cuturi (2019)."
+ },
+ {
+ "section_id": "1.3",
+ "parent_section_id": "1",
+ "section_name": "Related Works",
+ "text": "Our subject of interest is the theoretical properties of SW as a loss for implicit generative modelling, which leads to minimising in the parameters , where is the target distribution, and is the image by the NN of , a low-dimensional input distribution (often chosen as Gaussian or uniform noise). (Similarly to the 1D case, is the push-forward measure of by , i.e. the law of when .) In order to train a NN in this manner, at each iteration one draws samples from and (denoted and as discrete measures with points), as well as a projection (or a batch of projections), and performs an SGD step on the sample loss\nTaking the expectation of this loss over the samples yields the minibatch Sliced-Wasserstein discrepancy, a member of the minibatch variants of the OT distances, introduced formally by Fatras et al. (2021). The framework (2) fits several Machine Learning applications: for instance, Deshpande et al. (2018) train GANs and auto-encoders with this method, and Wu et al. (2019) consider related dual formulations. Other examples within this formalism include the synthesis of images by minimising the SW distance between features of the optimised image and a target image, as done by Heitz et al. (2021) for textures with neural features, and by Tartavel et al. (2016) with wavelet features (amongst other methods).\nThe general study of convergence of SGD in the context of non-smooth, non-convex functions (as is the case of from (2)) is an active field of research: Majewski et al. (2018) and Davis et al. (2020) show the convergence of diminishing-step SGD under regularity constraints, while Bolte & Pauwels (2021) leverage conservative field theory to show convergence results for training with back-propagation. Finally, the recent work by Bianchi et al. (2022) shows the convergence of fixed-step SGD schemes on a general function under weaker regularity assumptions.\nMore specifically, the study of convergence for OT-based generative NNs has been tackled by Fatras et al. (2021), who prove strong convergence results for minibatch variants of classical OT distances, namely the Wasserstein distance, Entropic OT and the Gromov-Wasserstein distance (another OT variant introduced by M\u00e9moli (2011)). A related study on GANs by Huang et al. (2023) derives optimisation properties for one-layer, one-dimensional Wasserstein-GANs and generalises to higher dimensions by turning to SW-GANs. Another work by Br\u00e9chet et al. (2023) focuses on the theoretical properties of linear NNs trained with the Bures-Wasserstein loss (introduced by Bures (1969); see also Bhatia et al. (2017) for reference on this metric). Finally, the regularity and optimisation properties of the simpler energy have been studied by Tanguy et al. (2023).\nIn practice, it has been observed that SGD in such settings always converges (in the loose numerical sense; see Deshpande et al. (2018), Section 5, or Heitz et al. (2021), Figure 3), yet this property is not known theoretically. The aim of this work is to bridge the gap between theory and practical observation by proving convergence results for SGD on (minibatch) Sliced Wasserstein generative losses of the form ."
+ },
+ {
+ "section_id": "1.4",
+ "parent_section_id": "1",
+ "section_name": "Contributions",
+ "text": "Under practically realistic assumptions, we prove in Theorem 1 that piecewise affine interpolations (defined in Equation 10) of constant-step SGD schemes on (formalised in Equation 7) converge towards the set of sub-gradient flow solutions (see Equation 9) as the gradient step decreases. This result signifies that with very small learning rates, SGD trajectories will be close to sub-gradient flows, which themselves converge to critical points of (omitting serious technicalities).\nThe assumptions for this result are practically reasonable: the input measure and the true data measure are assumed to be compactly supported. As for the network , we assume that for a fixed datum , is piecewise -smooth and that it is Lipschitz jointly in both variables.\nWe require additional assumptions on which are more costly, but which are verified as long as is a NN composed of typical activations and linear units, with the constraint that the parameters and data both stay within fixed bounded domains. We discuss a class of neural networks that satisfy all of the assumptions of the paper in the Appendix (D). Furthermore, this result can be extended to other orders of SW: we present the tools for this generalisation in the Appendix (E).\nIn order to obtain a stronger convergence result, we consider a variant of SGD where each iteration receives an additive noise (scaled by the learning rate), which allows for better space exploration, and where each iteration is projected on a ball in order to ensure boundedness. This alternative SGD scheme remains within the realm of practical applications, and we show in Theorem 2 that long-run limits of such trajectories converge towards a set of generalised critical points of , as the gradient step approaches 0. This result is substantially stronger, and can serve as an explanation of the convergence of practical SGD trajectories, specifically towards a set of critical points which amounts to the stationary points of the energy (barring theoretical technicalities).\nUnfortunately, we require additional assumptions in order to obtain this stronger convergence result, the most important of which is that the input data measure and the dataset measure are discrete. For the latter, this is always the case in practice; however, the former assumption is more problematic, since it is common to envision generative NNs as taking an argument from a continuous space (the input is often Gaussian or uniform noise), thus a discrete setting is a substantial theoretical drawback. For practical concerns, one may argue that the discrete can have an arbitrary fixed number of points, and leverage strong sample complexity results to ascertain that the discretisation is not costly if the number of samples is large enough."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Stochastic Gradient Descent with as Loss",
+ "text": "Training Sliced-Wasserstein generative models consists in training a neural network\nby minimising the SW minibatch loss through Stochastic Gradient Descent (as described in Definition 1). The probability distribution is the law of the input of the generator . The distribution is the data distribution, which the model aims to simulate. Finally, will denote the uniform measure on the unit sphere of , denoted by . Given a list of points , denote the associated discrete uniform measure . By abuse of notation, we write . The reader may find a summary of this paper\u2019s notations in Table 1.\nIn the following, we will apply results from Bianchi et al. (2022), and we pave the way to the application of these results by presenting their theoretical framework. Consider a sample loss function that is locally Lipschitz in the first variable, and a probability measure on which is the law of the samples drawn at each SGD iteration. Consider an almost-everywhere gradient of , which is to say that for almost every (since each is locally Lipschitz, it is differentiable almost-everywhere by Rademacher\u2019s theorem). The complete loss function is the expectation of the sample loss, . An SGD trajectory of step for is a sequence of the form:\nwhere is the distribution of the initial position . Within this framework, we define an SGD scheme described by Definition 1, with and the minibatch SW sample loss\nWith this definition for , we have\nthus the population loss compares the \"true\" data with the model\u2019s generation using (minibatch) SW. We now wish to define an almost-everywhere gradient of . To this end, notice that one may write , where for and . The differentiability properties of are already known (Tanguy et al., 2023; Bonneel et al., 2015); in particular, one has the following almost-everywhere gradient of\nwhere the permutation is , with being a sorting permutation of the list . The sorting permutations are chosen arbitrarily when there is ambiguity. To define an almost-everywhere gradient, we must differentiate , for which we need regularity assumptions on : this is the goal of Assumption 1. In the following, denotes the topological closure of a set , its boundary, and denotes the Lebesgue measure of .\nFor every there exists a family of disjoint connected open sets such that , and .\nNote that for measure-theoretic reasons, the sets are assumed countable. One may understand this assumption broadly as the neural networks being piecewise smooth with respect to the parameters , where the pieces depend on the input data . In practice, Assumption 1 is an assumption on the activation functions of the neural network. For instance, it is of course satisfied in the case of smooth activations, or in the common case of piecewise polynomial activations. We detail suitable neural networks in the Appendix (D).\nAssumption 1 implies that given fixed, is differentiable almost-everywhere, and that one may define the following almost-everywhere gradient (6):\nwhere for denotes the matrix of the differential of , which is defined for almost-every . Given (a point of potential non-differentiability), take instead . (Any choice at such points would still define an a.e. gradient, and will make no difference.)\nGiven a step and an initial position , we may now define formally the following fixed-step SGD scheme for :\nAn important technicality that we must verify in order to apply the results of Bianchi et al. (2022) is that and are locally Lipschitz. Before proving those claims, we reproduce a useful Property from Tanguy et al. (2023). In the following, denotes given , and for a norm on , and shall denote the open ball of of centre and radius for the norm (if is omitted, then is a Euclidean ball).\nThe are uniformly locally Lipschitz (Tanguy et al. (2023), Prop. 2.1).\nLet , for and . Then is -Lipschitz in the neighbourhood :\nIn order to deduce regularity results on and from Property 1, we will make the assumption that is globally Lipschitz in . In practice, this is the case when both parameters are enforced to stay within a fixed bounded domain, for instance by multiplying a typical NN with the indicator of such a set. We present this in detail in the Appendix (D).\nThere exists such that\nUnder Assumption 2, for and , let . Then is -Lipschitz in :\nLet and . Let . Using Assumption 2, we have , with . Denoting , we apply successively Property 1 (first inequality), then Assumption 2 (second inequality):\n\u220e\nProperty 2 shows that is locally Lipschitz in . We now assume some conditions on the measures and in order to prove that is also locally Lipschitz. Specifically, we require that the data measures and be supported on bounded domains, which imposes little restriction in practice.\nand are Radon probability measures on and respectively, supported by the compacts and respectively. Denote and .\nAssume Assumptions 2 and 3. For let and .\nLet . We have .\nLet and . We have\nNow by Assumption 2, is continuous on the compact , thus upper-bounded by a certain . We can define , which verifies . Since is compact and is a Radon probability measure by Assumption 3, is well-defined and finite, thus is finite. Likewise, let .\nFinally, .\n\u220e\nHaving shown that our losses are locally Lipschitz, we can now turn to convergence results. These conclusions are placed in the context of non-smooth and non-convex optimisation, and thus will be tied to the Clarke sub-differential of , which we denote . The set of Clarke sub-gradients at a point is the convex hull of the limits of gradients of :\nwhere is the set of differentiability of . At points where is differentiable, , and if is convex in a neighbourhood of , then the Clarke differential at is the set of its convex sub-gradients. The interested reader may turn to the Appendix (C) for further context on non-smooth and non-convex optimisation."
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Convergence of Interpolated SGD Trajectories on",
45
+ "text": "In general, the idea behind SGD is a discretisation of the gradient flow equation . In our non-smooth setting, the underlying continuous-time problem is instead the Clarke differential inclusion . Our objective is to show that in a certain sense, the SGD trajectories approach the set of solutions of this inclusion problem, as the step size decreases. We consider solutions that are absolutely continuous (we will write ) and start within , a fixed compact set. We can now define the solution set formally as\nwhere we write for \"almost every\". In order to compare the discrete SGD trajectories to this set of continuous-time trajectories, we interpolate the discrete points in an affine manner: Equation 10 ###reference_### defines the piecewise-affine interpolated SGD trajectory associated to a discrete SGD trajectory of learning rate .\nIn order to compare our interpolated trajectories with the solutions, we consider the metric of uniform convergence on all segments\nIn order to prove a convergence result on the interpolated trajectories, we will leverage the work of Bianchi et al. (2022 ###reference_b4###) which hinges on three conditions on the loss that we reproduce and verify successively. Firstly, 1 ###reference_dition1### assumes mild regularity on the sample loss function .\nThere exists measurable such that each is -integrable, and:\nThere exists such that is -integrable.\nOur regularity result on 2 ###reference_p2### allows us to verify 1 ###reference_dition1###, by letting and . 1 ###reference_dition1### ii) is immediate since for all is continuous in each variable separately, thanks to the regularity of provided by 2 ###reference_umption2###, and to the regularities of . This continuity implies that all are -integrable, since is a compactly supported probability measure under 3 ###reference_umption3###. 
Secondly, 2 ###reference_dition2### concerns the local Lipschitz constant introduced in 1 ###reference_dition1###: it is assumed to increase slowly with respect to the network parameters .\nThe function of 1 ###reference_dition1### verifies:\nThere exists such that .\nFor every compact .\n2 ###reference_dition2###.ii) is verified by given its regularity. However, 2 ###reference_dition2###.i) requires that increase slowly as increases, which is more costly.\nThere exists an -integrable function such that .\n4 ###reference_umption4### is satisfied in particular as soon as is bounded (which is the case for a neural network with bounded activation functions), or if is of the form , i.e. limiting the network parameters to be bounded. This second case does not yield substantial restrictions in practice (see D ###reference_### for a class of NNs that satisfy all of the assumptions), yet vastly simplifies theory. Under 4 ###reference_umption4###, we have for any with from 2 ###reference_p2### and from 3 ###reference_p3###,\nAs a consequence, 2 ###reference_dition2### holds under our assumptions. We now consider the Markov kernel associated to the SGD schemes:\nGiven is a probability measure on which dictates the law of the positions of the next SGD iteration , conditionally to . With denoting the Lebesgue measure on , let . is the set of learning rates for which the kernel maps any absolutely continuous probability measure to another such measure. 
We will verify the following condition, which can be interpreted as the SGD trajectories continuing to explore the entire space for a small enough learning rate :\nThe closure of contains 0.\nIn order to satisfy 3 ###reference_dition3###, we require an additional regularity condition on the neural network which we formulate in 5 ###reference_umption5###.\nThere exists a constant , such that (with the notations of 1 ###reference_umption1### and 3 ###reference_umption3###)\nThe upper bounds in 5 ###reference_umption5### have strong consequences for the behaviour of for , and are only practical for networks of the form , similarly to 4 ###reference_umption4###. We detail the technicalities of verifying this assumption along with the others in the Appendix (D ###reference_###).\nUnder 1 ###reference_umption1###, 3 ###reference_umption3### and 5 ###reference_umption5###, for the SGD trajectories 7 ###reference_###, contains , where .\nWe postpone the proof to B ###reference_###. Now that we have verified 1 ###reference_dition1###, 2 ###reference_dition2### and 3 ###reference_dition3###, we can apply (Bianchi et al., 2022 ###reference_b4###), Theorem 2 to , showing a convergence result on interpolated SGD trajectories.\nConsider a neural network and measures , satisfying 1 ###reference_umption1###, 2 ###reference_umption2###, 3 ###reference_umption3###, 4 ###reference_umption4### and 5 ###reference_umption5###. Let (see 4 ###reference_p4###).\n\nConsider a collection of SGD trajectories associated to 7 ###reference_###, along with their interpolations. For any compact and any , we have:\nThe distance is defined in 11 ###reference_###. As the learning rate decreases, the interpolated trajectories approach the trajectory set , which is essentially a solution of the gradient flow equation (ignoring the set of non-differentiability, which is -null). 
To get a tangible idea of the concepts at play, if was and had a finite number of critical points, then one would have the convergence of a solution to a critical point of , as . These results have implicit consequences on the value of the parameters at the \"end\" of training for low learning rates, which is why we will consider a variant of SGD for which we can state more precise results on the convergence of the parameters."
+ },
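The piecewise-affine interpolation of a discrete SGD trajectory described in the section above (Equation 10), and the grid approximation of the uniform distance on a segment (Equation 11), can be realised numerically. The sketch below is illustrative only; all names are ours, and the paper defines these objects purely mathematically.

```python
import numpy as np

def interpolate_trajectory(iterates, lr):
    """Piecewise-affine interpolation of a discrete SGD trajectory:
    the iterate w_k is placed at time t_k = k * lr, and intermediate
    times are filled in affinely between consecutive iterates."""
    w = np.asarray(iterates, dtype=float)

    def traj(t):
        s = t / lr                            # fractional index k + frac
        k = min(int(np.floor(s)), len(w) - 2)
        frac = s - k
        return (1.0 - frac) * w[k] + frac * w[k + 1]

    return traj

def sup_distance(traj_a, traj_b, T, n_grid=1000):
    """Grid approximation of the uniform distance between two
    interpolated trajectories on the segment [0, T]."""
    ts = np.linspace(0.0, T, n_grid)
    return max(np.linalg.norm(traj_a(t) - traj_b(t)) for t in ts)
```

For vector-valued parameters, `iterates` can be an array of shape `(n_steps, d)`; the same affine formula then applies row-wise.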
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Convergence of Noised Projected SGD Schemes on",
+ "text": "In practice, it is seldom desirable for the parameters of a neural network to reach extremely large values during training. Weight clipping is a common (although contentious) method of enforcing that stay Lipschitz, which is desirable for theoretical reasons. For instance the 1-Wasserstein duality in Wasserstein GANs (Arjovsky et al., 2017 ###reference_b1###) requires Lipschitz networks, and similarly, Sliced-Wasserstein GANs (Deshpande et al., 2018 ###reference_b15###) use weight clipping and enforce their networks to be Lipschitz.\nGiven a radius , we consider SGD schemes that are restricted to , by performing projected SGD. At each step , we also add a noise , where is an additive noise of law , which is often taken as standard Gaussian in practice. These additions yield the following SGD scheme:\nwhere denotes the orthogonal projection on the ball . Thanks to 1 ###reference_dition1###, 2 ###reference_dition2### and the additional noise, we can verify the assumptions for (Bianchi et al., 2022 ###reference_b4###) Theorem 4, yielding the same result as 1 ###reference_orem1### for the noised projected scheme 13 ###reference_###. In fact, under additional assumptions, we shall prove a stronger mode of convergence for the aforementioned trajectories. The natural context in which to perform gradient descent is on functions that admit a chain rule, which is formalised in the case of almost-everywhere differentiability by the notion of path differentiability, as studied thoroughly in (Bolte & Pauwels, 2021 ###reference_b5###). We also provide a brief presentation in the Appendix (C.1 ###reference_###).\nis path differentiable, which is to say that for any , for almost all .\nThere are alternate equivalent formulations for 4 ###reference_dition4###. 
Indeed, as presented in further detail in C.1 ###reference_###, is path differentiable if and only if is a conservative field for if and only if has a chain rule for (the latter is the formulation chosen above in 4 ###reference_dition4###).\nIn order to satisfy 4 ###reference_dition4###, we need to make the assumption that the NN input measure and the data measure are discrete measures, which holds for in the case of generative neural networks, but is less realistic for in practice. We define the -simplex: its elements are the s.t. and .\nOne may write and , with the coefficient vectors , and .\nThere is little practical reason to consider non-uniform measures; however, the generalisation to any discrete measure makes no theoretical difference. Note that 3 ###reference_umption3### is clearly implied by 6 ###reference_umption6###.\nIn order to show that is path differentiable, we require the natural assumption that each be path differentiable. Since is a vector-valued function, we need to extend the notion of path-differentiability. Thankfully, Bolte & Pauwels (2021 ###reference_b5###) define conservative mappings for vector-valued locally Lipschitz functions (Definition 4), which allows us to naturally define the path differentiability of a vector-valued function as the path-differentiability of all of its coordinate functions. See C.2 ###reference_### for a detailed presentation.\nFor any is path differentiable.\n7 ###reference_umption7### holds as soon as each of the neural network maps has the typical structure of compositions of linear units and typical activations, as was proved by Davis et al. (2020 ###reference_b14###), Corollary 5.11 and Bolte & Pauwels (2021 ###reference_b5###), Section 6.2. 
We provide a more specific class of NNs that are path differentiable and satisfy all our other assumptions in D ###reference_###.\nUnder 2 ###reference_umption2###, 6 ###reference_umption6### and 7 ###reference_umption7###, is path differentiable.\nWe shall repeatedly use the property that the composition of path differentiable functions remains path differentiable, which is proved in (Bolte & Pauwels, 2021 ###reference_b5###), Lemma 6.\n\nLet . By (Tanguy et al., 2023 ###reference_b32###), Proposition 2.4.3, each is semi-concave and thus is path differentiable (by (Tanguy et al., 2023 ###reference_b32###), Proposition 4.3.3).\n\nThanks to 6 ###reference_umption6###, and are discrete measures on and respectively, allowing one to write and . Then is path differentiable as a sum ((Bolte & Pauwels, 2021 ###reference_b5###), Corollary 4) of compositions ((Bolte & Pauwels, 2021 ###reference_b5###), Lemma 6) of path differentiable functions.\n\u220e\nWe have now satisfied all the assumptions to apply (Bianchi et al., 2022 ###reference_b4###), Theorem 6, showing that trajectories of 13 ###reference_### converge towards a set of generalised critical points (typically referred to as the set of Karush-Kuhn-Tucker points of the differential inclusion ), defined as\nwhere refers to the normal cone of the ball at . The term in 14 ###reference_### only makes a difference in the pathological case , which never happens in practice since the idea behind projecting is to do so on a very large ball, in order to avoid gradient explosion, to limit the Lipschitz constant and to satisfy theoretical assumptions. Omitting the term, and denoting the points where is differentiable, 14 ###reference_### simplifies to , i.e. the critical points of for the usual differential. As in 1 ###reference_orem1###, we let , where is defined in 4 ###reference_p4###. We have met the conditions to apply Bianchi et al. 
(2022 ###reference_b4###), Theorem 6, showing a long-run convergence result on the SGD trajectories 13 ###reference_###.\nConsider a neural network and measures , satisfying 1 ###reference_umption1###, 2 ###reference_umption2###, 4 ###reference_umption4###, 5 ###reference_umption5###, 6 ###reference_umption6### and 7 ###reference_umption7###. Let be SGD trajectories defined by 13 ###reference_### for and . One has\nThe distance above is the usual Euclidean distance. 2 ###reference_orem2### essentially shows that as the learning rate approaches 0, the long-run limits of the SGD trajectories approach the set of in probability. Omitting the points of non-differentiability and the pathological case , the general idea is that , which is the convergence that would be achieved by the gradient flow of , in the simpler case of smoothness."
+ },
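The projected, noised SGD scheme of this section (Equation 13) can be sketched as follows. This is an illustrative reading, not the paper's exact scheme: the placement and scaling of the added noise (here `lr * noise_scale * xi`, applied before projection) is our assumption; the text only specifies that a noise of some law is added and that the result is projected onto a ball.

```python
import numpy as np

def project_ball(w, radius):
    """Orthogonal projection onto the closed Euclidean ball B(0, radius)."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def noised_projected_sgd_step(w, grad, lr, radius, noise_scale, rng):
    """One step of projected SGD with additive Gaussian noise: take a
    stochastic (sub)gradient step, add a small noise, then project back
    onto the ball so the parameters stay bounded."""
    xi = rng.standard_normal(w.shape)
    return project_ball(w - lr * grad + lr * noise_scale * xi, radius)
```

In practice the radius is taken very large, so the projection is rarely active; it serves to keep the iterates bounded and the network Lipschitz, as discussed above.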
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion and Outlook",
+ "text": "Under reasonable assumptions, we have shown that SGD trajectories of parameters of generative NNs with a minibatch SW loss converge towards the desired sub-gradient flow solutions, implying in a weak sense the convergence of said trajectories. Under stronger assumptions, we have shown that trajectories of a mildly modified SGD scheme converge towards a set of generalised critical points of the loss, which provides a missing convergence result for such optimisation problems.\nThe core limitation of this theoretical work is the assumption that the input data measure is discrete (6 ###reference_umption6###), which we required in order to prove that the loss is path differentiable. In order to generalise to a non-discrete measure, one would need to apply or show a result on the stability of path differentiability through integration: in our case, we want to show that is path differentiable, knowing that is path differentiable by composition (see the proof of 5 ###reference_p5### for the justification). Unfortunately, in general if each is path differentiable, it is not always the case that is path differentiable (at the very least, there is no theorem stating this, even in the simpler case of another sub-class of path differentiable functions, see (Bianchi et al., 2022 ###reference_b4###), Section 6.1). However, there is such a theorem (specifically (Clarke, 1990 ###reference_b11###), Theorem 2.7.2 with Remark 2.3.5) for Clarke regular functions (see C.3 ###reference_### for a presentation of this regularity class), sadly the composition of Clarke regular functions is not always Clarke regular, it is only known to be the case in excessively restrictive cases (see (Clarke, 1990 ###reference_b11###), Theorems 2.3.9 and 2.3.10). 
Similarly to the continuous case, the simpler generalisation in which has a countable support adds substantial difficulty, since all of the typical tools (path differentiability itself, Clarke regularity, or even definability; see (Bolte & Pauwels, 2021 ###reference_b5###), Section 4.1 for a first introduction) do not have readily applicable results for infinite operations, to our knowledge. As a result, we leave the generalisation to a non-discrete input measure for future work.\nOur studies focus on the 2-SW distance, but our results from 3 ###reference_### can be extended to , as presented in the appendix (E ###reference_###). However, as also discussed in the Appendix, the generalisation of 4 ###reference_### is still an open problem, since it has not yet been proven that is path differentiable for .\nThis paper studies the use of the average SW distance as a loss, and an extension to related distances would be worth considering. The average SW distance aggregates the projected distances through an expectation, while the closely-related max-Sliced Wasserstein distance introduced by Deshpande et al. (2019 ###reference_b16###) aggregates the projections via a maximisation on the axis . The training paradigm presented in (Deshpande et al., 2019 ###reference_b16###) differs strongly from our formalism since it applies to GANs; however, one could consider an extension of our formalism in which the optimal projection becomes a learned parameter of the neural network. A related extension is the Subspace-Robust Wasserstein distance (Paty & Cuturi, 2019 ###reference_b28###), which can take the following formulation\nfor which one could consider a similar extension where the positive semi-definite becomes a learned parameter of .\nAnother avenue for future study would be to tie the flow approximation result from 1 ###reference_orem1### to Sliced Wasserstein Flows (Liutkus et al., 2019 ###reference_b23###; Bonet et al., 2022 ###reference_b6###). 
The difficulty in seeing the differential inclusion 9 ###reference_### as a flow of lies in the non-differentiable nature of the functions at play, as well as the presence of the composition between SW and the neural network , which interacts poorly with Clarke sub-differentials."
+ }
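To make the mean-versus-max aggregation discussed in the conclusion concrete, here is a minimal Monte Carlo sketch of the 2-sliced Wasserstein distance between equal-size point clouds. The `max` option is only a crude stand-in for the max-sliced variant, maximising over the sampled directions rather than over all directions; all function names are ours.

```python
import numpy as np

def projected_w2_sq(x, y, theta):
    """Squared 2-Wasserstein distance between the 1-D projections of two
    equal-size point clouds onto direction theta (1-D OT = sorting)."""
    px, py = np.sort(x @ theta), np.sort(y @ theta)
    return float(np.mean((px - py) ** 2))

def sliced_w2(x, y, n_proj=100, rng=None, aggregate="mean"):
    """Monte Carlo estimate of SW_2 with uniform random directions on
    the sphere. aggregate='mean' corresponds to the average SW distance
    used as a loss in the paper; aggregate='max' crudely sketches the
    max-sliced variant over the sampled directions only."""
    if rng is None:
        rng = np.random.default_rng(0)
    thetas = rng.standard_normal((n_proj, x.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    vals = np.array([projected_w2_sq(x, y, t) for t in thetas])
    return float(np.sqrt(vals.max() if aggregate == "max" else vals.mean()))
```

Since the max over a finite set of directions dominates their mean, the `max` aggregation is always at least as large as the `mean` one on the same sampled directions.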
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Table of Notations",
+ "text": ""
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Postponed Proofs",
+ "text": ""
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "Appendix C Background on Non-Smooth and Non-Convex Analysis",
+ "text": "This work is placed within the context of non-smooth optimisation, a field of study in part introduced by Clarke with the so-called Clarke differential, which we introduced in Equation 8 ###reference_### (see (Clarke, 1990 ###reference_b11###) for a general reference on this object). The purpose of this appendix is to present several adjacent objects that can be useful to the application of our results, even though we do not need them in order to prove our theorems.\nThe Clarke differential of a locally Lipschitz function (defined in Equation 8 ###reference_###) is an example of a set-valued map. Such a map is a function from the subsets of to the subsets of , for instance in the case of the Clarke differential, we have the signature . A set-valued map is graph closed if its graph is a closed set of . A set-valued map is said to be a conservative field, when it is graph closed, has non-empty compact values and for any absolutely continuous loop with , we have\nSimilarly to primitive functions in calculus, one may define a function using a conservative field up to an additive constant through following expression:\nIn this case, we say that is a potential function for the field . This notion allows us to define a new regularity class: a function is called path differentiable when there exists a conservative field of which it is a potential. A standard result in non-smooth optimisation is the following equivalence between different notions of regularity:\nBolte & Pauwels (2021 ###reference_b5###), Corollary 2. Let locally Lipschitz. We have the equivalence between the following statements:\nis path differentiable\nis a conservative field\nhas a chain rule for the Clarke differential :\nThis equivalence justifies the terminology used in 4 ###reference_dition4###. 
The reader seeking a complete presentation of conservative field theory may refer to (Bolte & Pauwels, 2021 ###reference_b5###).\nThe notion of conservative fields for real-valued locally Lipschitz functions can be generalised to conservative mappings for vector-valued locally Lipschitz functions , which one may see as a generalised Jacobian matrix (see (Bolte & Pauwels, 2021 ###reference_b5###), Section 3.3 for further details). A set-valued map is a conservative mapping for such a if\nIn this case, we shall say that is path differentiable. Note that if each coordinate function is the potential of a conservative field , then the set-valued map\nis a conservative mapping for (although not all conservative mappings for can be written in this manner). As a consequence, one could interpret (simplistically) vector-valued path differentiability as coordinate-wise path differentiability.\nAnother notion of regularity for locally Lipschitz functions is that of Clarke regularity. Let and ; is said to be Clarke regular at if the two quantities\nexist and are equal for all . Note that this notion implies path differentiability by (Bolte & Pauwels, 2021 ###reference_b5###), Proposition 2. Clarke regularity is the central concept of Clarke\u2019s monograph (Clarke, 1990 ###reference_b11###).\nIn non-smooth analysis, one of the simplest regularity cases is the class of semi-algebraic functions, which are essentially piecewise polynomial functions defined on polynomial pieces. To be precise, a set is semi-algebraic if it can be written in the form\nwhere the and are real multivariate polynomials. A function is semi-algebraic if its graph is semi-algebraic.\nA locally Lipschitz real-valued semi-algebraic function is path differentiable (see for instance (Bolte & Pauwels, 2021 ###reference_b5###), Proposition 2), and in the light of (Bolte & Pauwels, 2021 ###reference_b5###), Lemma 3, this also holds in the vector-valued case. 
Another useful property of semi-algebraic functions is that their class is stable under composition and products. The interested reader may consult (Wakabayashi, 2008 ###reference_b36###) for additional properties of semi-algebraic objects, or (Coste, 1999 ###reference_b12###; Van Den Dries & Miller, 1996 ###reference_b34###) for a presentation of o-minimal structures, a generalisation of this concept."
+ },
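The chain-rule characterisation of path differentiability stated above (item 3 of the equivalence) can be checked numerically on a simple example: integrating a Clarke subgradient selection along an absolutely continuous curve should recover the increment of the potential. The sketch below is ours; taking `f = |.|` with the almost-everywhere selection `sign(.)` along `gamma(t) = sin(t)` illustrates the identity.

```python
import numpy as np

def chain_rule_gap(f, subgrad, gamma, dgamma, T, n=200_000):
    """Midpoint-rule approximation of the chain-rule identity for path
    differentiable functions: the integral of t -> v(t) * gamma'(t),
    with v(t) a Clarke subgradient selection at gamma(t), should equal
    f(gamma(T)) - f(gamma(0)). Returns the absolute discrepancy.
    All callables must be NumPy-vectorised."""
    ts = (np.arange(n) + 0.5) * (T / n)
    integral = float(np.sum(subgrad(gamma(ts)) * dgamma(ts)) * (T / n))
    return abs(integral - (f(gamma(T)) - f(gamma(0))))
```

The residual gap is purely numerical-integration error (the jump of `sign(sin t)` at `t = pi` contributes an error of order `T/n`), consistent with the set of non-differentiability being Lebesgue-null along the curve.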
+ {
+ "section_id": "Appendix 4",
+ "parent_section_id": null,
+ "section_name": "Appendix D Suitable Neural Networks",
+ "text": "In this section, we detail our claim that typical NN structures satisfy our conditions. To this end, we define a class of practical neural networks whose properties are sufficient (not all NNs that satisfy our assumptions are within this framework). Consider the set of NNs of the form\nwith and . The function is a smoothed version of the usual indicator function : it is any function that has value 1 in , 0 outside and is -smooth (see 2 ###reference_ark2### for a possible construction). Given that one may take arbitrarily large radii, these indicators are added for theoretical purposes and impose no realistic constraints in practice. Additionally, , the -th layer of a recursive NN structure defined by\nwhere:\nAll functions are -smooth, or all locally Lipschitz semi-algebraic activation functions (applied entry-wise). The former condition is satisfied by the common sigmoid, hyperbolic tangent or softplus activations. The latter condition applies to the non-differentiable ReLU activation, its \"Leaky ReLU\" extension, and continuous piecewise polynomial activations. Note that other non-linearities such as softmax can also be considered under the same regularity restrictions, but we limit ourselves to entry-wise non-linearities for notational consistency.\nEach dimension is a positive integer, with obviously , the output dimension.\nEach is a linear map: , which maps a parameter vector to a matrix. Since the entire parameter vector is given at each layer, this allows the architecture to only use certain parameters at each layer (as is more typical in practice). One may see this map as a 3-tensor of shape , as specified in the formulation\nThe matrix determines the intercept from the full parameter vector .\nIn this model, each layer depends on all the previous layers, allowing for residual inputs for instance. 
Overall, all typical networks fit this description, once bounded using the indicator functions, with only a technicality on the regularity of the activations, which must be either all -smooth or all semi-algebraic. One could extend this class of NNs to those with definable activations within the same o-minimal structure (similarly to Davis et al. (2020 ###reference_b14###) and Bolte & Pauwels (2021 ###reference_b5###)).\nWe mention that we may construct a -smooth in explicitly as follows:\nBefore proving the properties of NNs from the class , we require a technical result on path differentiable functions.\nLet path differentiable, and of class . Then their product is path differentiable.\nOur objective is to apply (Bolte & Pauwels, 2021 ###reference_b5###) Corollary 2 (stated in 6 ###reference_p6###), which is to say that admits a chain rule for . First, we apply the definition of the Clarke differential and compute\nNote that we used the smoothness of . We now consider an absolutely continuous curve . By Bolte & Pauwels (2021 ###reference_b5###) Lemma 2, since is path differentiable, is differentiable almost everywhere. Let the associated set of differentiability; then let and , writing with . We compute . Now since is path differentiable and , by 6 ###reference_p6### item 3, we have . On the other hand, since is . Finally, by definition of and bilinearity of ,\n\u220e\nWe now have all the tools to prove that the class of NNs satisfies all of the assumptions of our paper.\nAll networks of the class verify 1 ###reference_umption1###, 2 ###reference_umption2###, 4 ###reference_umption4###, 5 ###reference_umption5### and 7 ###reference_umption7###.\nLet , and its associated underlying network. We begin with regularity considerations."
+ },
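One standard explicit construction of the C-infinity smoothed indicator used in this appendix (value 1 on the inner ball, 0 outside the outer ball) goes through the classical bump function `exp(-1/t)`. The sketch below is one such construction, not necessarily the paper's exact formula; names are ours.

```python
import numpy as np

def psi(t):
    """C-infinity function: exp(-1/t) for t > 0, and 0 for t <= 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)

def smooth_step(t):
    """Smooth transition equal to 0 for t <= 0 and 1 for t >= 1.
    The denominator is never zero since psi(t) and psi(1 - t) cannot
    both vanish."""
    return psi(t) / (psi(t) + psi(1.0 - t))

def smooth_indicator(x, r1, r2):
    """Equals 1 on the ball of radius r1, 0 outside the ball of radius
    r2, and is C-infinity everywhere: near the origin the argument of
    smooth_step exceeds 1, where smooth_step is constant, so the
    non-smoothness of the norm at 0 is harmless."""
    r = np.linalg.norm(x)
    return float(smooth_step((r2 - r) / (r2 - r1)))
```

Multiplying a network output by such an indicator bounds it outside a large ball while leaving it untouched on the inner ball, which is how the class of networks above enforces boundedness without practical restrictions.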
+ {
+ "section_id": "Appendix 5",
+ "parent_section_id": null,
+ "section_name": "Appendix E Generalisation to Other Sliced Wasserstein Orders",
+ "text": "In this section, we shall discuss how some of our results can be extended by replacing the 2-SW term with for ."
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T1.54.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"A1.T1.55.2\" style=\"font-size:90%;\">List of Notations</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T1.52\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T1.52.53.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.52.53.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T1.52.53.1.1.1\">Symbol</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A1.T1.52.53.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T1.52.53.1.2.1\">Explanation</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.2.2.2\">Given \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.5.5.3\">\n an input data sample of law \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.8.8.3\">input data probability measure on , supported on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" 
id=\"A1.T1.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.11.11.3\">\n a target data sample of law \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.14.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.14.14.3\">target data probability measure on , supported on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.15.15.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.16.16.2\">direction in \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.18.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.17.17.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.18.18.2\">uniform measure on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.19.19.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.21.21.3\">sample in and \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.24.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.22.22.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.24.24.3\">probability measure for the samples , supported on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.26.26\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.25.25.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.26.26.2\">neural network parameters in \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.27.27\">\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.27.27.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.27.27.2\">neural network function defined in <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.11714v3#S2.E3\" title=\"3 \u2023 2 Stochastic Gradient Descent with SW as Loss \u2023 Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses\"><span class=\"ltx_text ltx_ref_tag\">3</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.28.28\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.28.28.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.28.28.2\">sample loss function defined in <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.11714v3#S2.E4\" title=\"4 \u2023 2 Stochastic Gradient Descent with SW as Loss \u2023 Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.29.29\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.29.29.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.29.29.2\">population loss function defined in <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.11714v3#S2.E5\" title=\"5 \u2023 2 Stochastic Gradient Descent with SW as Loss \u2023 Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.31.31\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.30.30.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.31.31.2\">discrete and projected 2-Wasserstein distance \n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"A1.T1.33.33\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.32.32.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.33.33.2\">almost-everywhere gradient of defined in <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.11714v3#S2.E6\" title=\"6 \u2023 2 Stochastic Gradient Descent with SW as Loss \u2023 Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses\"><span class=\"ltx_text ltx_ref_tag\">6</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.35.35\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.34.34.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.35.35.2\">local Lipschitz constants of respectively (see Propositions 1, 2, 3)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.36.36\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.36.36.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.36.36.2\">SGD learning rate; noise level</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.40.40\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.37.37.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.40.40.4\">Lebesgue measure on ; a measure absolutely continuous w.r.t. 
\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.41.41\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.41.41.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.41.41.2\">Clarke differential, defined in <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.11714v3#S2.E8\" title=\"8 \u2023 2 Stochastic Gradient Descent with SW as Loss \u2023 Convergence of SGD for Training Neural Networks with Sliced Wasserstein Losses\"><span class=\"ltx_text ltx_ref_tag\">8</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.43.43\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.42.42.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.43.43.2\">probability measure of SGD initialisation \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.46.46\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.44.44.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.46.46.3\">additive noise in at SGD step \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.48.48\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.47.47.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.48.48.2\">additive noise probability measure on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.52.52\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T1.49.49.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"A1.T1.52.52.4\">open (resp. closed) ball of centre and radius for the norm \n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: List of Notations"
+ }
+ },
+ "image_paths": {},
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Wasserstein generative adversarial networks.",
+ "author": "Martin Arjovsky, Soumith Chintala, and L\u00e9on Bottou.",
+ "venue": "In Doina Precup and Yee Whye Teh (eds.), Proceedings of the\n34th International Conference on Machine Learning, volume 70 of\nProceedings of Machine Learning Research, pp. 214\u2013223. PMLR, 06\u201311\nAug 2017.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Strong equivalence between metrics of Wasserstein type.",
+ "author": "Erhan Bayraktar and Gaoyue Guo.",
+ "venue": "2021.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "On the Bures-Wasserstein distance between positive definite\nmatrices.",
+ "author": "Rajendra Bhatia, Tanvi Jain, and Yongdo Lim.",
+ "venue": "arXiv, December 2017.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Convergence of constant step stochastic gradient descent for\nnon-smooth non-convex functions.",
128
+ "author": "Pascal Bianchi, Walid Hachem, and Sholom Schechtman.",
129
+ "venue": "Set-Valued and Variational Analysis, 30(3):1117\u20131147, 2022.",
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "5": {
135
+ "title": "Conservative set valued fields, automatic differentiation, stochastic\ngradient methods and deep learning.",
136
+ "author": "J\u00e9r\u00f4me Bolte and Edouard Pauwels.",
137
+ "venue": "Mathematical Programming, 188:19\u201351, 2021.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "6": {
143
+ "title": "Efficient gradient flows in sliced-Wasserstein space.",
144
+ "author": "Cl\u00e9ment Bonet, Nicolas Courty, Fran\u00e7ois Septier, and Lucas Drumetz.",
145
+ "venue": "Transactions on Machine Learning Research, 2022.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "7": {
151
+ "title": "Sliced and Radon Wasserstein barycenters of measures.",
152
+ "author": "Nicolas Bonneel, Julien Rabin, Gabriel Peyr\u00e9, and Hanspeter Pfister.",
153
+ "venue": "Journal of Mathematical Imaging and Vision, 51(1):22\u201345, 2015.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "8": {
159
+ "title": "Unidimensional and evolution methods for optimal transportation.",
160
+ "author": "Nicolas Bonnotte.",
161
+ "venue": "PhD Thesis, Paris 11, 2013.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "9": {
167
+ "title": "Critical points and convergence analysis of generative deep linear\nnetworks trained with Bures-Wasserstein loss.",
168
+ "author": "Pierre Br\u00e9chet, Katerina Papagiannouli, Jing An, and Guido Mont\u00fafar.",
169
+ "venue": "arXiv preprint arXiv:2303.03027, 2023.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "10": {
175
+ "title": "An extension of Kakutani\u2019s theorem on infinite product measures\nto the tensor product of semifinite w*-algebras.",
176
+ "author": "Donald Bures.",
177
+ "venue": "Transactions of the American Mathematical Society,\n135:199\u2013212, 1969.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "11": {
183
+ "title": "Optimization and nonsmooth analysis.",
184
+ "author": "Frank H Clarke.",
185
+ "venue": "SIAM, 1990.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "12": {
191
+ "title": "An introduction to o-minimal geometry.",
192
+ "author": "M Coste.",
193
+ "venue": "RAAG Notes, Institut de Recherche Math\u00e9matique de Rennes,\n1999.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "13": {
199
+ "title": "Sinkhorn distances: Lightspeed computation of optimal transport.",
200
+ "author": "Marco Cuturi.",
201
+ "venue": "Advances in neural information processing systems, 26, 2013.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "14": {
207
+ "title": "Stochastic subgradient method converges on tame functions.",
208
+ "author": "Damek Davis, Dmitriy Drusvyatskiy, Sham Kakade, and Jason D Lee.",
209
+ "venue": "Foundations of computational mathematics, 20(1):119\u2013154, 2020.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "15": {
215
+ "title": "Generative modeling using the sliced Wasserstein distance.",
216
+ "author": "Ishan Deshpande, Ziyu Zhang, and Alexander G. Schwing.",
217
+ "venue": "In 2018 IEEE Conference on Computer Vision and Pattern\nRecognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 3483\u20133491. Computer Vision Foundation / IEEE Computer Society, 2018.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "16": {
223
+ "title": "Max-sliced Wasserstein distance and its use for GANs.",
224
+ "author": "Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi\nKoyejo, Zhizhen Zhao, David Forsyth, and Alexander G Schwing.",
225
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pp. 10648\u201310656, 2019.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "17": {
231
+ "title": "The speed of mean Glivenko-Cantelli convergence.",
232
+ "author": "Richard Mansfield Dudley.",
233
+ "venue": "The Annals of Mathematical Statistics, 40(1):40\u201350, 1969.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "18": {
239
+ "title": "Minibatch optimal transport distances; analysis and applications.",
240
+ "author": "Kilian Fatras, Younes Zine, Szymon Majewski, R\u00e9mi Flamary, R\u00e9mi\nGribonval, and Nicolas Courty.",
241
+ "venue": "arXiv preprint arXiv:2101.01792, 2021.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "19": {
247
+ "title": "Sample complexity of Sinkhorn divergences.",
248
+ "author": "Aude Genevay, L\u00e9naic Chizat, Francis Bach, Marco Cuturi, and Gabriel\nPeyr\u00e9.",
249
+ "venue": "In The 22nd international conference on artificial intelligence\nand statistics, pp. 1574\u20131583. PMLR, 2019.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "20": {
255
+ "title": "A kernel method for the two-sample-problem.",
256
+ "author": "Arthur Gretton, Karsten Borgwardt, Malte Rasch, Bernhard Sch\u00f6lkopf, and\nAlex Smola.",
257
+ "venue": "Advances in neural information processing systems, 19, 2006.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "21": {
263
+ "title": "A sliced Wasserstein loss for neural texture synthesis.",
264
+ "author": "Eric Heitz, Kenneth Vanhoey, Thomas Chambon, and Laurent Belcour.",
265
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pp. 9412\u20139420, 2021.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "22": {
271
+ "title": "On characterizing optimal Wasserstein GAN solutions for\nnon-Gaussian data.",
272
+ "author": "Yu-Jui Huang, Shih-Chun Lin, Yu-Chih Huang, Kuan-Hui Lyu, Hsin-Hua Shen, and\nWan-Yi Lin.",
273
+ "venue": "2023.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "23": {
279
+ "title": "Sliced-Wasserstein flows: Nonparametric generative modeling via\noptimal transport and diffusions.",
280
+ "author": "Antoine Liutkus, Umut Simsekli, Szymon Majewski, Alain Durmus, and\nFabian-Robert St\u00f6ter.",
281
+ "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.),\nProceedings of the 36th International Conference on Machine Learning,\nvolume 97 of Proceedings of Machine Learning Research, pp. 4104\u20134113. PMLR, 09\u201315 Jun 2019.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "24": {
287
+ "title": "Analysis of nonsmooth stochastic approximation: the differential\ninclusion approach.",
288
+ "author": "Szymon Majewski, B\u0142a\u017cej Miasojedow, and Eric Moulines.",
289
+ "venue": "arXiv preprint arXiv:1805.01916, 2018.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "25": {
295
+ "title": "Gromov\u2013Wasserstein distances and the metric approach to object\nmatching.",
296
+ "author": "Facundo M\u00e9moli.",
297
+ "venue": "Foundations of computational mathematics, 11:417\u2013487, 2011.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "26": {
303
+ "title": "Statistical and topological properties of sliced probability\ndivergences.",
304
+ "author": "Kimia Nadjahi, Alain Durmus, L\u00e9na\u00efc Chizat, Soheil Kolouri, Shahin\nShahrampour, and Umut Simsekli.",
305
+ "venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin\n(eds.), Advances in Neural Information Processing Systems, volume 33,\npp. 20802\u201320812. Curran Associates, Inc., 2020.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "27": {
311
+ "title": "Statistical, robustness, and computational guarantees for sliced\nWasserstein distances.",
312
+ "author": "Sloan Nietert, Ziv Goldfeld, Ritwik Sadhu, and Kengo Kato.",
313
+ "venue": "Advances in Neural Information Processing Systems,\n35:28179\u201328193, 2022.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "28": {
319
+ "title": "Subspace robust Wasserstein distances.",
320
+ "author": "Fran\u00e7ois-Pierre Paty and Marco Cuturi.",
321
+ "venue": "In International conference on machine learning, pp. 5072\u20135081. PMLR, 2019.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "29": {
327
+ "title": "Computational optimal transport.",
328
+ "author": "G. Peyr\u00e9 and M. Cuturi.",
329
+ "venue": "Foundations and Trends in Machine Learning, 51(1):1\u201344, 2019.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "30": {
335
+ "title": "Wasserstein barycenter and its application to texture mixing.",
336
+ "author": "Julien Rabin, Gabriel Peyr\u00e9, Julie Delon, and Marc Bernot.",
337
+ "venue": "In Scale Space and Variational Methods in Computer Vision:\nThird International Conference, SSVM 2011, Ein-Gedi, Israel, May 29\u2013June 2,\n2011, Revised Selected Papers 3, pp. 435\u2013446. Springer, 2012.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "31": {
343
+ "title": "Optimal transport for applied mathematicians.",
344
+ "author": "Filippo Santambrogio.",
345
+ "venue": "Birk\u00e4user, NY, 55(58-63):94, 2015.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "32": {
351
+ "title": "Properties of discrete sliced Wasserstein losses.",
352
+ "author": "Eloi Tanguy, R\u00e9mi Flamary, and Julie Delon.",
353
+ "venue": "arXiv preprint arXiv:2307.10352, 2023.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "33": {
359
+ "title": "Wasserstein loss for image synthesis and restoration.",
360
+ "author": "Guillaume Tartavel, Gabriel Peyr\u00e9, and Yann Gousseau.",
361
+ "venue": "SIAM Journal on Imaging Sciences, 9(4):1726\u20131755, 2016.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "34": {
367
+ "title": "Geometric categories and o-minimal structures.",
368
+ "author": "Lou Van Den Dries and Chris Miller.",
369
+ "venue": "Duke Mathematical Journal, 84(2):497\u2013540,\n1996.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "35": {
375
+ "title": "Optimal transport: old and new.",
376
+ "author": "C\u00e9dric Villani.",
377
+ "venue": "Grundlehren der mathematischen Wissenschaften. Springer, Berlin,\n2009.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "36": {
383
+ "title": "Remarks on semi-algebraic functions, January 2008.",
384
+ "author": "Seiichiro Wakabayashi.",
385
+ "venue": "Online Notes.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "37": {
391
+ "title": "Sliced Wasserstein generative models.",
392
+ "author": "J. Wu, Z. Huang, D. Acharya, W. Li, J. Thoma, D. Paudel, and L. Van Gool.",
393
+ "venue": "In 2019 IEEE/CVF Conference on Computer Vision and Pattern\nRecognition (CVPR), pp. 3708\u20133717, Los Alamitos, CA, USA, June 2019. IEEE\nComputer Society.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "38": {
399
+ "title": "Distributional convergence of the sliced Wasserstein process.",
400
+ "author": "Jiaqi Xi and Jonathan Niles-Weed.",
401
+ "venue": "Advances in Neural Information Processing Systems,\n35:13961\u201313973, 2022.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "39": {
407
+ "title": "Central limit theorem for the sliced 1-Wasserstein distance and the\nmax-sliced 1-Wasserstein distance.",
408
+ "author": "Xianliang Xu and Zhongyi Huang.",
409
+ "venue": "arXiv preprint arXiv:2205.14624, 2022.",
410
+ "url": null
411
+ }
412
+ }
413
+ ],
414
+ "url": "http://arxiv.org/html/2307.11714v3"
415
+ }
20240318/2308.07233v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2308.07553v2.json ADDED
@@ -0,0 +1,380 @@
1
+ {
2
+ "title": "Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks",
3
+ "abstract": "Poisoning attacks can disproportionately influence model behaviour by making small changes to the training corpus. While defences against specific poisoning attacks do exist, they generally do not provide any guarantees, leaving them open to being countered by novel attacks. In contrast, by examining worst-case behaviours, Certified Defences make it possible to provide guarantees of the robustness of a sample against adversarial attacks modifying a finite number of training samples, known as pointwise certification. We achieve this by exploiting both Differential Privacy and the Sampled Gaussian Mechanism to ensure the invariance of prediction for each testing instance against finite numbers of poisoned examples. In doing so, our model provides guarantees of adversarial robustness that are more than twice as large as those provided by prior certifications.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Despite the impressive performance, many modern machine learning models have been shown to be vulnerable to adversarial data perturbations (Biggio, Nelson, and Laskov 2013 ###reference_b6###; Chen et al. 2017 ###reference_b10###; Chakraborty et al. 2018 ###reference_b8###). This adversarial sensitivity is a significant concern now that machine learning models are increasingly being deployed in sensitive applications. Of particular concern are data poisoning attacks, where an adversary manipulates the training set to change the decision boundary of learned models. The risk of such attacks is heightened by the prevalence of large, user-generated datasets that are constructed without vetting. The fact that these attacks can render a model useless further underscores the need for robust defence mechanisms. Some examples of models that are vulnerable to data poisoning attacks include email spam filters and malware classifiers. These models have been shown to be susceptible to attacks that either render the model ineffective (Biggio, Nelson, and Laskov 2013 ###reference_b6###), or that produce targeted misclassifications (Chen et al. 2017 ###reference_b10###).\nThe fact that defences intrinsically counter specific poisoning attacks means that even state-of-the-art defences (Carnerero-Cano et al. 2020 ###reference_b7###; Paudice et al. 2018 ###reference_b27###) can be vulnerable to new attacks. To circumvent this inherent dependency of defences on attacks, recent work has begun to consider the construction of guarantees of predictive invariance against bounded numbers of poisoned training examples. This is known as certified robustness, which is commonly achieved through the addition of calibrated noise via randomised smoothing (Lecuyer et al. 2019 ###reference_b20###). While these certifications have been successfully applied to poisoning attacks on labels and/or input features (Rosenfeld et al. 2020 ###reference_b28###; Wang et al. 
2020 ###reference_b30###), their applicability has been limited to attacks that modify training examples, rather than the more general insertion/deletion operations. On the other hand, classifiers trained with differential privacy (DP) can be shown to be certifiably robust against poisoning attacks even against insertion/deletion operations (Ma, Zhu, and Hsu 2019 ###reference_b23###; Hong et al. 2020 ###reference_b17###). However, to date, such certifications do not provide pointwise guarantees, which ensure robustness for individual samples against a finite number of poisoned training examples. This omission still leaves a vulnerability that can be exploited by a motivated adversary to compel the model to misclassify a particular testing sample. Recent works (Jia, Cao, and Gong 2020 ###reference_b18###; Levine and Feizi 2021 ###reference_b21###) leveraging bagging have achieved pointwise guarantees against poisoning attacks that allow insertion/deletion. However, some of these methods are specialized to particular learning approaches.\nIn this work, we establish a general framework for deriving pointwise-certifiably robust guarantees against data poisoning attacks that can influence both the label and feature sets. Such guarantees ensure that the predicted class of an individual sample is invariant to a finite number of changes to the training dataset. Prior works have leveraged DP to improve statistical properties of certification against data poisoning across a dataset. In contrast, we are the first to extend DP to certify individual samples. By producing an improved group privacy bound for the Sampled Gaussian Mechanism, our new approach even yields certifications that hold for more changes to the training dataset than those identified by prior approaches (Ma, Zhu, and Hsu 2019 ###reference_b23###; Jia, Cao, and Gong 2020 ###reference_b18###; Levine and Feizi 2021 ###reference_b21###). 
Our specific achievements can be summarized as follows:\nA general framework providing pointwise-certified robustness guarantees for models that are trained with differentially-private learners.\nThe framework provides a general poisoning attack defence against insertion, deletion, and modification attacks on both the label and feature sets. The defence improves upon existing differential-privacy-based approaches, and its efficiency is enhanced through optimised group privacy in the Sampled Gaussian Mechanism and sub-sampled training.\nOur defence method certifies robustness against more than double the number of poisoned examples certified by existing approaches, as demonstrated by experiments on MNIST, Fashion-MNIST and CIFAR-."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Data Poisoning Attacks and Defences",
15
+ "text": "Training-time or data poisoning attacks (Barreno et al. 2006 ###reference_b4###; Biggio, Fumera, and Roli 2014 ###reference_b5###) enable malicious users to manipulate training data and modify the learned model. The expressiveness of machine learning model families makes modern machine learning particularly susceptible to such attacks (Chakraborty et al. 2018 ###reference_b8###; Goldblum et al. 2021 ###reference_b14###). These attacks can be taxonomically classified as either label attacks, which only modify dataset labels (Xiao, Xiao, and Eckert 2012 ###reference_b32###); feature attacks, in which the training features are modified (Shafahi et al. 2018 ###reference_b29###); or example attacks, such as backdoor attacks, which seek to influence both labels and features of the training corpus (Shafahi et al. 2018 ###reference_b29###). Defending against any of these attacks is inherently complex, as their existence implies that the attacker has access to both the training architecture and dataset. Although previous works have examined attackers who solely modify the training data, our threat model assumes a more comprehensive scenario, whereby attackers have the freedom to introduce or remove samples from the training dataset, as outlined in Table 1 ###reference_###. However, this freedom is subject to certain constraints that aim to reduce the probability of detection."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Outcomes Guarantee",
21
+ "text": "By exploiting both DP and the Sampled Gaussian Mechanism (SGM), our certification framework empirically improves pointwise certifications against data poisoning. Such certificates can be used to quantify the confidence in a sample\u2019s prediction, in the face of potential dataset manipulation.\nTo support our enhancements, we will first define some key properties of DP, then propose the outcomes guarantee that generalises to most DP mechanisms, and finally introduce the SGM with improved group privacy.\nModification\nAddition/ Deletion\nStatistical certification\nPointwise certification\nStatistical DP (Ma, Zhu, and Hsu 2019 ###reference_b23###)\n\u2713\n\u2717\nRandomized smoothing (Rosenfeld et al. 2020 ###reference_b28###; Weber et al. 2021 ###reference_b31###)\n\u2717\n\u2713\nBagging (Jia, Cao, and Gong 2020 ###reference_b18###; Levine and Feizi 2021 ###reference_b21###)\n\u2713\n\u2713\nThis Paper\n\u2713\n\u2713"
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Outcomes-Guaranteed Certifications",
27
+ "text": "While pointwise-certified robustness guarantees can be applied to the output of any model, within this work we highlight both multinomial outputs and scored outputs.\nConsider a randomised learner that preserves a -outcome guarantee,\nand an arbitrary (possibly randomised) inference function mapping learned parameters and the input instance to an inferred score.\nThen for any such that, for any input instance , label , training datasets and ,\nIn the case of multinomial outputs, the first inequality follows from the post-processing property: the composition preserves the same outcome guarantee. The second inequality follows by symmetry in the roles of and by being strictly increasing. To admit scored outputs, the probabilities in the -outcome guarantee need to be converted into expected values . To that end, the integral over the right-tail distribution function of the probabilities in Definition 7 ###reference_orem7### is taken.\n\nThe expected value -outcome guarantee of -ADP and -R\u00e9nyi-DP can be shown to take the same forms as Equation 7 ###reference_### and Equation 8 ###reference_### by Lecuyer et al. (2019 ###reference_b20###) and H\u00f6lder\u2019s inequality (as detailed in Appendix A.3) respectively.\n\u220e\nThe main result of this section establishes conditions under which a DP learner provides pointwise-certified robustness against general poisoning attacks up to size .\nConsider a training dataset , an input instance , and a randomised learner that preserves a -outcomes guarantee.\nLet\ndenote the label predicted on under the multinomial interpretation of Definition 1 ###reference_orem1###.\nIf there exist such that\nthen is pointwise-certified robust to radius about dataset at input (see Definition 3 ###reference_orem3###).\nThe proof can be found in Appendix A.1."
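The certification condition of Theorem 10 amounts to finding the largest radius at which the lower bound on the predicted label's score still dominates the upper bound on every other label's score. Since the concrete bound expressions are elided in the extracted text above, the sketch below treats them as caller-supplied functions of the radius; the helper name is a hypothetical illustration, not the authors' code.

```python
def max_certified_radius(lower_bound, upper_bound, r_max=1000):
    """Largest poisoning radius r (number of inserted/deleted/modified
    training examples) for which lower_bound(r) on the predicted label's
    score still exceeds upper_bound(r) on every runner-up label's score.

    lower_bound / upper_bound are caller-supplied functions of r, e.g.
    derived from an (epsilon, delta)-DP group-privacy guarantee. Both are
    assumed monotone in r, so the search stops at the first failure.
    """
    certified = 0
    for r in range(1, r_max + 1):
        if lower_bound(r) > upper_bound(r):
            certified = r
        else:
            break
    return certified
```

With monotone bounds this one-pass search suffices; a binary search would do equally well if evaluating the bounds were expensive.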
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Experiments",
33
+ "text": "To verify the effectiveness of our proposed pointwise-certified defence, we conducted experiments across MNIST, Fashion-MNIST, and CIFAR- for varying levels of added noise . For MNIST and Fashion-MNIST, training occurred using the LeNet-5 architecture (Lecun et al. 1998 ###reference_b19###), with class probabilities/expectations estimated based upon model instances trained on the entire dataset. In contrast, CIFAR- was trained upon the example model from the Opacus tutorial (Yousefpour et al. 2021 ###reference_b33###) (Opa-tut) with a rather simple architecture, and the more complex ResNet-18 (He et al. 2015 ###reference_b15###) for comprehensive evaluation. Both were estimated based upon instances trained on sub-datasets of size .\nAcross all experiments we adjust the sample ratio to have a batch size of , with training conducted using ADAM with a learning rate of optimising the Cross-Entropy loss. The clip size is fine-tuned for each experiment (around on MNIST, on CIFAR-10). In each case, uncertainties were estimated for a confidence interval suitable for . All experiments were conducted in PyTorch using a single NVIDIA RTX Ti GPU with GB of GPU RAM.\nTo quantify performance, the proportion of samples correctly predicted with a certification of at least was used, henceforth known as the certified accuracy. This quantity takes the form\nwhere and are the input instances and corresponding ground truth labels for a testing sample, and , and are the predicted label and corresponding certified radius returned by the defence model. We also investigate the median and maximum value of certification achieved among all samples.\nWe further divide our experiments into four different frameworks. These are ADP with either multinomial labels (ADP-multinomial) or probability scores (ADP-prob-scores) output, and then R\u00e9nyi-DP with either multinomial labels (RDP-multinomial) or probability scores (RDP-prob-scores) output. 
In each case, Theorem 10 ###reference_orem10### is employed to generate a guaranteed certificate of defence against data poisoning attacks.\nTo validate the efficacy of our technique, these results are considered against prior works, specifically the DP-based defence method of Ma, Zhu, and Hsu (2019 ###reference_b23###) (Baseline-DP), the bagging-based defence of Jia, Cao, and Gong (2020 ###reference_b18###); Chen et al. (2020 ###reference_b9###) (Baseline-Bagging), and the deterministic Deep Partition Aggregation (DPA) method of Levine and Feizi (2021 ###reference_b21###) (Baseline-DPA). Of these, conceptual similarities between our work and DP-baseline allow both techniques to be compared while utilising the same trained models. However, it must be noted that Ma, Zhu, and Hsu (2019 ###reference_b23###) bound the DP-baseline in terms of statistically certified accuracy, which is calculated as the lower bound of expected accuracy with confidence level among obtained model instances. As for Bagging-baseline, it provides the same pointwise-certified defence as we do. Hence, by letting the number of base classifiers equal the number of model instances and adjusting the size of sub-training datasets, we force the Bagging-baseline to have the same certified accuracy at radius . The DPA method has significant differences between its underlying assumptions and ours. DPA only applies to models that are deterministic, which means that for a given training dataset the parameters in the resulting model should always be the same. This approach requires specific model architectures and a deterministic training process, while our method applies to more general situations. Compared with standard training approaches, the extra step involved in incorporating SGM introduces a negligible difference in training time. Note that the change in the relative performance of Baseline-Bagging and Baseline-DPA from the original papers is the product of different model architectures. 
We ensure all methods apply the same model architecture for fair comparisons (Appendix A.4).\nFigure 1 ###reference_### demonstrates that our method consistently provides a more robust certified defence, across the full suite of experiments. In the case of MNIST and Fashion-MNIST, for a given radius, RDP-multinomial is capable of providing the highest certified accuracy in most cases, which means more testing samples are certified to be correctly predicted within this radius. For example, in the experiments on Fashion-MNIST, RDP-multinomial achieves certified accuracy at radius , whereas the other baselines only achieve at most certified accuracy. Additionally, our method can generate the largest certification as shown, which provides a better defence for the confident testing samples. As illustrated in the experiments on CIFAR- for both Opa-tut and ResNet-18 models, RDP-prob-scores outperform the other baselines with regard to the largest certified radius by doubling the size. Based upon these results, when considering Fashion-MNIST our method achieves a and improvement in the median and maximum value respectively when compared to Baseline-Bagging (further details of this can be found in Appendix A.6).\nAs the bound functions are the same in both multinomial and probability scores methods, the difference between them can be directly attributed to the differences in how these techniques construct their upper and lower bounds. As indicated in Theorem 10 ###reference_orem10###, the larger the gap between the lower and upper bounds, the larger radius it can certify. Intuitively, if the defence model is confident with the predicted label of an easy testing sample, then this sample should be more resilient to poisoned examples in the training dataset. In the multinomial method, the uncertainty within each model instance is ignored by selecting a single label, while the uncertainty remains in the probability scores method. 
As a consequence of this, the multinomial method provides a higher radius for moderately confident examples, but the probability scores method is able to certify a larger radius for the very confident ones. Further improvements can be found in the application of R\u00e9nyi-DP, relative to Approximate-DP, due to the former providing a more precise accounting of model privacy. This in turn allows tighter bounds to be constructed, with performance further enhanced by way of Theorem 8 ###reference_orem8###.\nThe influence of the magnitude of injected noise is shown in the left-hand column of Figure 1 ###reference_###. These results broadly align with previous works, in that adding more noise can produce larger robustness guarantees (larger certified radius), at the cost of decreased predictive performance upon un-attacked datasets (). The increased semantic complexity of a dataset also limits its tolerance to noise. It is also important to note that the sample rate () and robustness are negatively correlated, as increasing the sample rate requires that more training examples are utilised in constructing the output, which provides weaker privacy guarantees. Therefore, a grid search is usually required to find the best combination of parameters (, , clip size)."
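The certified accuracy used throughout these experiments is a simple aggregate over per-sample predictions and certified radii: the fraction of test points that are both correctly classified and certified to at least the query radius. A minimal sketch (a hypothetical helper, not the authors' code):

```python
import numpy as np

def certified_accuracy(y_true, y_pred, radii, r):
    """Fraction of test samples whose predicted label matches the
    ground truth AND whose certified radius is at least r."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    radii = np.asarray(radii)
    # A sample counts only if it is correct and certified to radius >= r.
    return float(np.mean((y_pred == y_true) & (radii >= r)))
```

Sweeping r from 0 upwards with this helper reproduces the kind of certified-accuracy-versus-radius curves reported in Figure 1.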
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Conclusion",
39
+ "text": "By carefully exploiting DP, SGM, and bagging, this work presents a mechanism for tightening guarantees of pointwise-certified robustness relative to prior implementations. This is made possible by calculating group privacy directly from the SGM. When compared to the current state-of-the-art, our technique can produce a more than improvement in the median certification."
40
+ }
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Appendix",
47
+ "text": "Our goal is equivalent to proving that the probability of predicting label by the poisoned model is larger than for any other labels, i.e.,\nGiven that preserves a -outcome guarantee, by Lemma 9 ###reference_orem9### it is possible to derive the lower bound of and upper bounds of as:\nand\nIf the probability lower bound of predicting is larger than the probability upper bound of predicting any other labels\nthen we have our goal proven in the case of\n\u220e\nMironov, Talwar, and Zhang (2019 ###reference_b26###) proposed calculating the amount of R\u00e9nyi-DP obtained from SGM. We extend Theorem from this \u201cadjacent datasets\u201d to \u201cdatasets that differ in up to examples\u201d, such that it enables the modified method to calculate group privacy of size .\nLet be the Sampled Gaussian mechanism for some function . Then satisfies -RDP of group size whenever\nwhere under the assumption for any that differ in up to examples.\nLet be a pair of datasets that differ in examples, such that . We wish to bound the R\u00e9nyi divergences and , where is the Sampled Gaussian mechanism for some function with -sensitivity .\nTo achieve this, let denote a set-valued random variable defined by taking a random subset of , where each element of is independently placed in with probability . Conditioned on , the mechanism samples from a Gaussian with mean . Thus\nwhere the sum here denotes mixing of the distributions with weights . Similarly,\nAs R\u00e9nyi divergence is quasi-convex, it is possible to construct the bound\nby way of the translation invariance of R\u00e9nyi divergence. Since these covariances are symmetric, we can, through rotation, assume that for some constant . The two distributions at hand are then both product distributions that are identical in all coordinates except the first. 
By the additivity of R\u00e9nyi divergence for product distributions, it then follows that\nFor any , the noise can be obtained from by adding noise from , and the same operation allows us to obtain from . Thus by the data processing inequality for R\u00e9nyi divergence, we conclude\nAn identical argument implies that\nas claimed.\n\u220e\nNote that the only difference between the original Theorem of and Theorem 11 ###reference_orem11### is the modified in the conclusion. To complete the proof of Theorem 8 ###reference_orem8###, we can utilize the conclusion in Theorem 11 ###reference_orem11### and continue the steps after Theorem in the paper (Mironov, Talwar, and Zhang 2019 ###reference_b26###) by replacing with .\nSuppose a randomized function , with bounded output , satisfies -R\u00e9nyi-DP. Then for any the expected value of its output follows:\nwhere the expectation is taken over the randomness in .\nWe first recall H\u00f6lder\u2019s Inequality, which states that for real-valued functions and , and real , such that ,\nThis in turn allows the expected output to be expressed as\nBy then applying the outcome guarantee of R\u00e9nyi-DP as stated in (8 ###reference_###), we have that\nApplying H\u00f6lder\u2019s Inequality with and , , , allows us to state that\n\u220e\nFollowing the discussion in Section 4 ###reference_###, if we interpret the output of as the returned probability distribution for each label , then by applying Lemma 12 ###reference_orem12### for each label with it follows that we must have the expected value bound\nWe ensure all methods use the same model architecture for fair comparisons. In our experiment setting, LeNet-5 is employed on MNIST/Fashion-MNIST, while Opa-tut and ResNet-18 are employed on CIFAR-10. 
Specifically, Baseline-Bagging\u2019s original paper and code employed a simple CNN model for each base classifier on MNIST, and Baseline-DPA uses the NiN (Lin, Chen, and Yan 2013 ###reference_b22###) architecture for both MNIST and CIFAR-10. Motivated by the need to deploy such models in larger-scale, practical environments, we instead considered the algorithm in the context of the widely accepted architectures LeNet-5 (Lecun et al. 1998 ###reference_b19###) and ResNet-18 (He et al. 2015 ###reference_b15###) for MNIST and CIFAR-10 respectively, and it was these numbers that we reported. It is these changes which have resulted in the change in relative performance.\n###figure_1### ###figure_2### Consider a training dataset , an input instance , and a randomised learner ."
48
+ }
49
+ ],
50
+ "tables": {
51
+ "1": {
52
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.1.1.1.1\" style=\"width:173.4pt;padding-bottom:0.0pt;\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T1.1.1.1.2\" style=\"padding-bottom:0.0pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S3.T1.1.1.1.2.1\">Training-time threat model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T1.1.1.1.3\" style=\"padding-bottom:0.0pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S3.T1.1.1.1.3.1\">Testing-time certification</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.2\">\n<td class=\"ltx_td\" id=\"S3.T1.1.2.2.1\" style=\"width:173.4pt;\"></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.2.2.2\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.2.2.2.1\">Modification</p></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.2.2.3\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.2.2.3.1\">Addition/ Deletion</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.2.2.4\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.2.2.4.1\">Statistical certification</p></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.2.2.5\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.2.2.5.1\">Pointwise certification</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.3.3.1\" style=\"width:173.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.3.3.1.1\">Statistical DP\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Ma, Zhu, and Hsu <a 
class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.07553v2#bib.bib23\" title=\"\">2019 ###reference_b23###</a>)</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.3.3.2\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.3.3.3\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.3.3.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.3.3.4\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.3.3.5\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.3.3.5.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.4.4.1\" style=\"width:173.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.4.4.1.1\">Randomized smoothing\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Rosenfeld et\u00a0al. <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.07553v2#bib.bib28\" title=\"\">2020 ###reference_b28###</a>; Weber et\u00a0al. 
<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.07553v2#bib.bib31\" title=\"\">2021 ###reference_b31###</a>)</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.4.4.2\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.4.4.3\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.4.4.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.4.4.4\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.4.4.5\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.4.4.5.1\">\u2713</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.5.5.1\" style=\"width:173.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.5.5.1.1\">Bagging\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Jia, Cao, and Gong <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.07553v2#bib.bib18\" title=\"\">2020 ###reference_b18###</a>; Levine and Feizi <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.07553v2#bib.bib21\" title=\"\">2021 ###reference_b21###</a>)</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.5.5.2\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.5.5.3\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.5.5.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.5.5.4\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.1.5.5.5\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.5.5.5.1\">\u2713</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S3.T1.1.6.6.1\" style=\"width:173.4pt;\">\n<p 
class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.6.6.1.1\">This Paper</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S3.T1.1.6.6.2\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S3.T1.1.6.6.3\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.6.6.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S3.T1.1.6.6.4\" style=\"width:43.4pt;\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S3.T1.1.6.6.5\" style=\"width:43.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.1.6.6.5.1\">\u2713</p>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>A summary of different approaches of certified defence against data poisoning attacks. We investigate them from two perspectives. The training-time threat model: whether it permits the more general addition/deletion of training samples or only modification. The testing-time certification: whether it provides the more strict pointwise certification for each test sample or only statistical certification over all test samples.</figcaption>\n</figure>",
53
+ "capture": "Table 1: A summary of different approaches of certified defence against data poisoning attacks. We investigate them from two perspectives. The training-time threat model: whether it permits the more general addition/deletion of training samples or only modification. The testing-time certification: whether it provides the more strict pointwise certification for each test sample or only statistical certification over all test samples."
54
+ },
55
+ "2": {
56
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T2.22\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T2.22.23.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T2.22.23.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T2.22.23.1.2\">Architecture</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.22.23.1.3\">RDP-multinomial</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.22.23.1.4\">RDP-prob scores</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.22.23.1.5\">ADP-multinomial</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T2.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T2.3.3.4\">MNIST</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T2.3.3.5\">LeNet-5</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.6.6.4\">Fashion-MNIST</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.6.6.5\">LeNet-5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.9.9.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"A1.T2.9.9.4.1\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_left 
ltx_th ltx_th_row\" id=\"A1.T2.9.9.5\">Opa-tut</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.12.12.4\">ResNet-18</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.22.24.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" colspan=\"2\" id=\"A1.T2.22.24.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T2.22.24.1.2\">ADP-prob scores</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T2.22.24.1.3\">Baseline-Bagging</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"A1.T2.22.24.1.4\">Baseline-DPA</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.15.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.15.15.4\">MNIST</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.15.15.5\">LeNet-5</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.15.15.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.17.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.17.17.3\">Fashion-MNIST</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.17.17.4\">LeNet-5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.17.17.2\"></td>\n<td class=\"ltx_td\" id=\"A1.T2.17.17.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"A1.T2.19.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T2.19.19.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"A1.T2.19.19.3.1\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A1.T2.19.19.4\">Opa-tut</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.18.18.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.19.19.2\"></td>\n<td class=\"ltx_td\" id=\"A1.T2.19.19.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.22.22\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T2.22.22.4\">ResNet-18</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.20.20.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.22.22.3\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The table summarizes the <em class=\"ltx_emph ltx_font_italic\" id=\"A1.T2.25.1\">median</em> and <em class=\"ltx_emph ltx_font_italic\" id=\"A1.T2.26.2\">maximum</em> value of certification over all test samples for each method in each dataset. The first and second numbers in each entry represent the median and maximum value respectively.</figcaption>\n</figure>",
57
+ "capture": "Table 2: The table summarizes the median and maximum value of certification over all test samples for each method in each dataset. The first and second numbers in each entry represent the median and maximum value respectively."
58
+ }
59
+ },
60
+ "image_paths": {
61
+ "1(a)": {
62
+ "figure_path": "2308.07553v2_figure_1(a).png",
63
+ "caption": "(a) MNIST (LeNet-5), performance against different \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
64
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/mnist_lenet_sigma.png"
65
+ },
66
+ "1(b)": {
67
+ "figure_path": "2308.07553v2_figure_1(b).png",
68
+ "caption": "(b) MNIST (LeNet-5), comparative performance at \u03c3=3.0\ud835\udf0e3.0\\sigma=3.0italic_\u03c3 = 3.0.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
69
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/mnist_lenet_compare.png"
70
+ },
71
+ "1(c)": {
72
+ "figure_path": "2308.07553v2_figure_1(c).png",
73
+ "caption": "(c) Fashion-MNIST (LeNet-5), performance against different \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
74
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/fashion_mnist_lenet_sigma.png"
75
+ },
76
+ "1(d)": {
77
+ "figure_path": "2308.07553v2_figure_1(d).png",
78
+ "caption": "(d) Fashion-MNIST (LeNet-5), comparative performance at \u03c3=3.0\ud835\udf0e3.0\\sigma=3.0italic_\u03c3 = 3.0.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
79
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/fashion_mnist_lenet_compare.png"
80
+ },
81
+ "1(e)": {
82
+ "figure_path": "2308.07553v2_figure_1(e).png",
83
+ "caption": "(e) CIFAR-10101010 (Opa-tut), performance against different \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
84
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/cifar10_tftut_sigma.png"
85
+ },
86
+ "1(f)": {
87
+ "figure_path": "2308.07553v2_figure_1(f).png",
88
+ "caption": "(f) CIFAR-10101010 (Opa-tut), comparative performance at \u03c3=3.0\ud835\udf0e3.0\\sigma=3.0italic_\u03c3 = 3.0.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
89
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/cifar10_tftut_compare.png"
90
+ },
91
+ "1(g)": {
92
+ "figure_path": "2308.07553v2_figure_1(g).png",
93
+ "caption": "(g) CIFAR-10101010 (ResNet-18), performance against different \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
94
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/cifar10_resnet18_sigma.png"
95
+ },
96
+ "1(h)": {
97
+ "figure_path": "2308.07553v2_figure_1(h).png",
98
+ "caption": "(h) CIFAR-10101010 (ResNet-18), comparative performance at \u03c3=0.2\ud835\udf0e0.2\\sigma=0.2italic_\u03c3 = 0.2.\nFigure 1: The left column contains certified accuracy plots for the method RDP-multinomial against different noise levels (\u03c3\ud835\udf0e\\sigmaitalic_\u03c3); the right column contains certified accuracy plots for comparisons against variants and baselines. In the plots, the X-axis is radius r\ud835\udc5fritalic_r (symmetric difference) while the Y-axis is the corresponding certified accuracy C\u2062Ar\ud835\udc36subscript\ud835\udc34\ud835\udc5fCA_{r}italic_C italic_A start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT at radius r\ud835\udc5fritalic_r.",
99
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures_font12/cifar10_resnet18_compare.png"
100
+ },
101
+ "2(a)": {
102
+ "figure_path": "2308.07553v2_figure_2(a).png",
103
+ "caption": "(a) MNIST (LeNet-5) against standard group privacy\nFigure 2: The plots contain certified accuracy plot for the method RDP-multinomial with proposed improved group privacy (RDP-multinomial) against RDP-multinomial with standard group privacy (RDP-multinomial-GP) on datasets MNIST and Fashion-MNIST.",
104
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures/mnist_RDP_GP_compare.png"
105
+ },
106
+ "2(b)": {
107
+ "figure_path": "2308.07553v2_figure_2(b).png",
108
+ "caption": "(b) Fashion-MNIST (LeNet-5), against standard group privacy\nFigure 2: The plots contain certified accuracy plot for the method RDP-multinomial with proposed improved group privacy (RDP-multinomial) against RDP-multinomial with standard group privacy (RDP-multinomial-GP) on datasets MNIST and Fashion-MNIST.",
109
+ "url": "http://arxiv.org/html/2308.07553v2/extracted/5472131/figures/figures/fashion_mnist_RDP_GP_compare.png"
110
+ }
111
+ },
112
+ "validation": true,
113
+ "references": [
114
+ {
115
+ "1": {
116
+ "title": "Deep Learning with Differential Privacy.",
117
+ "author": "Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H. B.; Mironov, I.; Talwar, K.;\nand Zhang, L. 2016.",
118
+ "venue": "Proceedings of the 2016 ACM SIGSAC Conference on Computer and\nCommunications Security, 308\u2013318.",
119
+ "url": null
120
+ }
121
+ },
122
+ {
123
+ "2": {
124
+ "title": "Privacy Amplification by Subsampling: Tight Analyses via\nCouplings and Divergences.",
125
+ "author": "Balle, B.; Barthe, G.; and Gaboardi, M. 2018.",
126
+ "venue": "arXiv:1807.01647 [cs, stat].",
127
+ "url": null
128
+ }
129
+ },
130
+ {
131
+ "3": {
132
+ "title": "Hypothesis Testing Interpretations and Renyi Differential\nPrivacy.",
133
+ "author": "Balle, B.; Barthe, G.; Gaboardi, M.; Hsu, J.; and Sato, T. 2019.",
134
+ "venue": "arXiv:1905.09982 [cs, stat].",
135
+ "url": null
136
+ }
137
+ },
138
+ {
139
+ "4": {
140
+ "title": "Can Machine Learning Be Secure?",
141
+ "author": "Barreno, M.; Nelson, B.; Sears, R.; Joseph, A. D.; and Tygar, J. D. 2006.",
142
+ "venue": "In Proceedings of the 2006 ACM Symposium on Information,\nComputer and Communications Security, 16\u201325.",
143
+ "url": null
144
+ }
145
+ },
146
+ {
147
+ "5": {
148
+ "title": "Security Evaluation of Pattern Classifiers under Attack.",
149
+ "author": "Biggio, B.; Fumera, G.; and Roli, F. 2014.",
150
+ "venue": "IEEE Transactions on Knowledge and Data Engineering, 26(4):\n984\u2013996.",
151
+ "url": null
152
+ }
153
+ },
154
+ {
155
+ "6": {
156
+ "title": "Poisoning Attacks against Support Vector Machines.",
157
+ "author": "Biggio, B.; Nelson, B.; and Laskov, P. 2013.",
158
+ "venue": "arXiv:1206.6389 [cs, stat].",
159
+ "url": null
160
+ }
161
+ },
162
+ {
163
+ "7": {
164
+ "title": "Regularisation Can Mitigate Poisoning Attacks: A Novel\nAnalysis Based on Multiobjective Bilevel Optimisation.",
165
+ "author": "Carnerero-Cano, J.; Mu\u00f1oz-Gonz\u00e1lez, L.; Spencer, P.; and Lupu, E. C. 2020.",
166
+ "venue": "arXiv:2003.00040 [cs, stat].",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "8": {
172
+ "title": "Adversarial Attacks and Defences: A Survey.",
173
+ "author": "Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; and Mukhopadhyay, D.\n2018.",
174
+ "venue": "arXiv:1810.00069 [cs, stat].",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "9": {
180
+ "title": "A Framework of Randomized Selection Based Certified\nDefenses Against Data Poisoning Attacks.",
181
+ "author": "Chen, R.; Li, J.; Wu, C.; Sheng, B.; and Li, P. 2020.",
182
+ "venue": "arXiv:2009.08739 [cs, stat].",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "10": {
188
+ "title": "Targeted Backdoor Attacks on Deep Learning Systems Using\nData Poisoning.",
189
+ "author": "Chen, X.; Liu, C.; Li, B.; Lu, K.; and Song, D. 2017.",
190
+ "venue": "arXiv:1712.05526 [cs].",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "11": {
196
+ "title": "Certified Adversarial Robustness via Randomized Smoothing.",
197
+ "author": "Cohen, J. M.; Rosenfeld, E.; and Kolter, J. Z. 2019.",
198
+ "venue": "arXiv:1902.02918 [cs, stat].",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "12": {
204
+ "title": "Calibrating Noise to Sensitivity in Private Data Analysis.",
205
+ "author": "Dwork, C.; McSherry, F.; Nissim, K.; and Smith, A. 2006.",
206
+ "venue": "In Theory of cryptography conference, 265\u2013284. Springer.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "13": {
212
+ "title": "Data Mining with Differential Privacy.",
213
+ "author": "Friedman, A.; and Schuster, A. 2010.",
214
+ "venue": "In Proceedings of the 16th ACM SIGKDD international conference\non Knowledge discovery and data mining, 493\u2013502.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "14": {
220
+ "title": "Dataset Security for Machine Learning: Data Poisoning,\nBackdoor Attacks, and Defenses.",
221
+ "author": "Goldblum, M.; Tsipras, D.; Xie, C.; Chen, X.; Schwarzschild, A.; Song, D.;\nMadry, A.; Li, B.; and Goldstein, T. 2021.",
222
+ "venue": "arXiv:2012.10544 [cs].",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "15": {
228
+ "title": "Deep Residual Learning for Image Recognition.",
229
+ "author": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015.",
230
+ "venue": "arXiv:1512.03385 [cs].",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "16": {
236
+ "title": "Probability Inequalities for Sums of Bounded Random\nVariables.",
237
+ "author": "Hoeffding, W. 1963.",
238
+ "venue": "Journal of the American Statistical Association, 58(301):\n13\u201330.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "17": {
244
+ "title": "On the Effectiveness of Mitigating Data Poisoning Attacks\nwith Gradient Shaping.",
245
+ "author": "Hong, S.; Chandrasekaran, V.; Kaya, Y.; Dumitra\u015f, T.; and Papernot, N. 2020.",
246
+ "venue": "arXiv:2002.11497 [cs].",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "18": {
252
+ "title": "Intrinsic Certified Robustness of Bagging against Data\nPoisoning Attacks.",
253
+ "author": "Jia, J.; Cao, X.; and Gong, N. Z. 2020.",
254
+ "venue": "arXiv:2008.04495 [cs].",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "19": {
260
+ "title": "Gradient-based Learning Applied to Document Recognition.",
261
+ "author": "Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998.",
262
+ "venue": "Proceedings of the IEEE, 86(11): 2278\u20132324.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "20": {
268
+ "title": "Certified Robustness to Adversarial Examples with\nDifferential Privacy.",
269
+ "author": "Lecuyer, M.; Atlidakis, V.; Geambasu, R.; Hsu, D.; and Jana, S. 2019.",
270
+ "venue": "arXiv:1802.03471 [cs, stat].",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "21": {
276
+ "title": "Deep Partition Aggregation: Provable Defense against\nGeneral Poisoning Attacks.",
277
+ "author": "Levine, A.; and Feizi, S. 2021.",
278
+ "venue": "arXiv:2006.14768 [cs, stat].",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "22": {
284
+ "title": "Network in network.",
285
+ "author": "Lin, M.; Chen, Q.; and Yan, S. 2013.",
286
+ "venue": "arXiv preprint arXiv:1312.4400.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "23": {
292
+ "title": "Data Poisoning against Differentially-Private Learners:\nAttacks and Defenses.",
293
+ "author": "Ma, Y.; Zhu, X.; and Hsu, J. 2019.",
294
+ "venue": "In Proceedings of the Twenty-Eighth International Joint\nConference on Artificial Intelligence, 4732\u20134738. Macao, China:\nInternational Joint Conferences on Artificial Intelligence Organization.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "24": {
300
+ "title": "Empirical Bernstein Bounds and Sample Variance\nPenalization.",
301
+ "author": "Maurer, A.; and Pontil, M. 2009.",
302
+ "venue": "arXiv:0907.3740 [stat].",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "25": {
308
+ "title": "Renyi Differential Privacy.",
309
+ "author": "Mironov, I. 2017.",
310
+ "venue": "2017 IEEE 30th Computer Security Foundations Symposium (CSF),\n263\u2013275.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "26": {
316
+ "title": "R\u00e9nyi Differential Privacy of the Sampled Gaussian\nMechanism.",
317
+ "author": "Mironov, I.; Talwar, K.; and Zhang, L. 2019.",
318
+ "venue": "arXiv:1908.10530 [cs, stat].",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "27": {
324
+ "title": "Detection of Adversarial Training Examples in Poisoning\nAttacks through Anomaly Detection.",
325
+ "author": "Paudice, A.; Mu\u00f1oz-Gonz\u00e1lez, L.; Gyorgy, A.; and Lupu, E. C. 2018.",
326
+ "venue": "arXiv:1802.03041 [cs, stat].",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "28": {
332
+ "title": "Certified Robustness to Label-Flipping Attacks via\nRandomized Smoothing.",
333
+ "author": "Rosenfeld, E.; Winston, E.; Ravikumar, P.; and Kolter, Z. 2020.",
334
+ "venue": "In Proceedings of the 37th International Conference on\nMachine Learning, 8230\u20138241. PMLR.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "29": {
340
+ "title": "Poison Frogs! Targeted Clean-Label Poisoning Attacks on\nNeural Networks.",
341
+ "author": "Shafahi, A.; Huang, W. R.; Najibi, M.; Suciu, O.; Studer, C.; Dumitras, T.; and\nGoldstein, T. 2018.",
342
+ "venue": "arXiv:1804.00792 [cs, stat].",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "30": {
348
+ "title": "On Certifying Robustness against Backdoor Attacks via\nRandomized Smoothing.",
349
+ "author": "Wang, B.; Cao, X.; jia, J.; and Gong, N. Z. 2020.",
350
+ "venue": "arXiv:2002.11750 [cs].",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "31": {
356
+ "title": "RAB: Provable Robustness Against Backdoor Attacks.",
357
+ "author": "Weber, M.; Xu, X.; Karla\u0161, B.; Zhang, C.; and Li, B. 2021.",
358
+ "venue": "arXiv:2003.08904 [cs, stat].",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "32": {
364
+ "title": "Adversarial Label Flips Attack on Support Vector\nMachines.",
365
+ "author": "Xiao, H.; Xiao, H.; and Eckert, C. 2012.",
366
+ "venue": "ECAI 2012, 870\u2013875.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "33": {
372
+ "title": "Opacus: User-Friendly Differential Privacy Library in PyTorch.",
373
+ "author": "Yousefpour, A.; Shilov, I.; Sablayrolles, A.; Testuggine, D.; Prasad, K.;\nMalek, M.; Nguyen, J.; Ghosh, S.; Bharadwaj, A.; Zhao, J.; Cormode, G.; and\nMironov, I. 2021.",
374
+ "venue": "arXiv preprint arXiv:2109.12298.",
375
+ "url": null
376
+ }
377
+ }
378
+ ],
379
+ "url": "http://arxiv.org/html/2308.07553v2"
380
+ }
20240318/2308.08305v2.json ADDED
@@ -0,0 +1,528 @@
1
+ {
2
+ "title": "Warped geometric information on the optimisation of Euclidean functions",
3
+ "abstract": "We consider the fundamental task of optimising a real-valued function defined in a potentially high-dimensional Euclidean space, such as the loss function in many machine-learning tasks or the logarithm of the probability distribution in statistical inference. We use Riemannian geometry notions to redefine the optimisation problem of a function on the Euclidean space to a Riemannian manifold with a warped metric, and then find the function\u2019s optimum along this manifold. The warped metric chosen for the search domain induces a computationally friendly metric-tensor for which optimal search directions associated with geodesic curves on the manifold become easier to compute. Performing optimization along geodesics is known to be generally infeasible, yet we show that in this specific manifold we can analytically derive Taylor approximations up to third order. In general these approximations to the geodesic curve will not lie on the manifold; however, we construct suitable retraction maps to pull them back onto the manifold. Therefore, we can efficiently optimize along the approximate geodesic curves. We cover the related theory, describe a practical optimization algorithm and empirically evaluate it on a collection of challenging optimisation benchmarks. Our proposed algorithm, using a third-order approximation of geodesics, tends to outperform standard Euclidean gradient-based counterparts in terms of the number of iterations until convergence.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A central task in computational statistics and machine learning (ML) is defined in terms of optimization. Usually termed learning, the goal is to find a parameter that maximises (or, equivalently, minimises) some objective function . For instance, maximum a posteriori (MAP) estimation falls into this category, with corresponding to the logarithm of a posterior distribution in Bayesian Statistics. Such optimization problems are routinely solved using gradient-based methods (Hestenes et al., 1952 ###reference_b19###; Nocedal and Wright, 2006 ###reference_b30###), with stochastic versions (Kingma and Ba, 2015 ###reference_b21###) dominating the field for large-scale models such as deep neural networks, while approximate second-order methods like BFGS (Nocedal, 1989 ###reference_b29###) are used for faster convergence in problems of smaller scale.\nTypical optimization methods assume the objective function domain to be Euclidean and they vary primarily in terms of how the search directions are specified, from direct use of gradients to various forms of conjugate gradient variants (see for example Nesterov (1983 ###reference_b28###), Bhaya and Kaszkurewicz (2004 ###reference_b5###) or Shanno (1978 ###reference_b39###)), and how updates of those directions in combination with gradients are specified (Shanno, 1978 ###reference_b39###). The scientific literature covers such optimization methods in great detail, with several theoretical results and practical efficiency covered in Shanno (1978 ###reference_b39###) and Polak (1997 ###reference_b33###).\nWe approach the problem from the Riemannian geometry viewpoint. Rather than directly optimizing the target function whose domain (search space) is Euclidean, we define a new function on the target function\u2019s graph and endow the space in which the graph is immersed with a warped geometry. 
The domain of can now be seen as an embedded Riemannian manifold with a warped metric, and this is formally called a warped product space; see for example O\u2019Neill (1983 ###reference_b31###). For the sake of introduction, let\u2019s denote this manifold as and its elements that will be made precise later on. Each point on the manifold encodes both and the function value in a bijective manner with , thus the optima of on preserve the optima of on . Because the set is a Riemannian manifold, we can harness the geometric information contained in the domain of and endow the optimisation routine with Riemannian tools. In the recent literature, Duruisseaux and Leok (2022a ###reference_b12###, b ###reference_b13###) and references therein have shown that optimisation on manifolds can achieve accelerated convergence rates.
However, for particular embeddings and suitably chosen metrics it turns out that we can perform all the necessary computations for individual updates within the algorithm faster, in the sense of linear memory storage and quadratic in the number of arithmetic operations.\nOur proposed algorithm follows closely the work by Zhu (2020 ###reference_b45###), where the search directions in Riemannian conjugate gradient (RCG) methods (see Sato, 2021 ###reference_b36###; Sakai and Iiduka, 2021 ###reference_b35###; Sato, 2022 ###reference_b37###) and parallel transport operations are respectively replaced by a -order geodesic approximation (retraction map) and vector transport, the latter using the idea of inverse backward retraction mapping via orthogonal projection (Luenberger, 1972 ###reference_b26###). As also presented in Zhu (2020 ###reference_b45###), these operations are of easy computation and have provided similar convergence speed performance compared to closed-form parallel transport on specific matrix manifolds (Absil et al., 2008 ###reference_b1###; Sato, 2021 ###reference_b36###). Note that these previous works account for the Riemannian manifold in simpler manner, using -order retractions which are linear approximations of geodesic curves.\nOur proposed approach builds on two key elements. First, we recast the optimisation task of a Euclidean function to the optimisation of a new function on the embedded manifold which is given by the function graph\u2019s whose embedding space is associated with a specific warped Riemannian metric. This will allow us to harness the intrinsic geometric properties of the problem to design a new optimisation algorithm. Related approaches were recently used by Hartmann et al. (2022 ###reference_b18###) and Yu et al. (2023 ###reference_b44###, 2024 ###reference_b43###), for constructing a geometric version of Markov Chain Monte Carlo samplers and Laplace\u2019s method for analytical approximations. 
They show that the methods induce a natural Riemannian metric-tensor that has highly desirable computational properties. For instance, we can compute its inverse metric-tensor and the Christoffel symbols in closed-form to bring down the computational costs considerably.\nThe second key contribution is the use of a -order approximations of geodesic paths as search directions. While we cannot perform efficient computation along the exact geodesics because it would require numerical solution of a system of differential equations and within it calling the metric-tensor itself several times, we show that we can construct a computationally efficient Taylor series of geodesics up to -order at any point on without the need of inverting and storing full matrices at each step. Monera et al. (2014 ###reference_b27###) noted that the tangential component of the geodesic only depends on the -order geometry of , suggesting that both - and -order approximation are possible from theoretical viewpoint, and we are not aware of any practical algorithms that have used these approximations. As we will show, the -order approximation can be rewritten using only the -order geometry of (see also Song et al., 2018 ###reference_b40###) and it is not necessary to form the Hessian explicitly. Instead we directly implement its multiplication by a vector of suitable dimension. This brings down the memory cost to linear in the problem dimensionality (Pearlmutter, 1994 ###reference_b32###). Because the approximate geodesics usually will not map back to a point in , we need to perform a retraction step to push the updated result back onto the manifold. 
For our case we can define a valid retraction map based on the embedding with no significant additional computation.\nWe evaluate the algorithm in a range of optimization tasks, covering both optimization problems with known challenging geometry and a subset of CUTE models implemented in the ADNLPmodels.jl, Julia\u2019s package (Bezanson et al., 2017 ###reference_b4###). The main goal of our experiments is to show that using the approximate geodesics as search directions reduces the number of iterations until convergence, within the scope of conjugate gradient algorithms. We show that compared to conjugate gradients with Euclidean gradient directions we observe a significant reduction, and that the proposed Riemannian conjugate gradient can be comparable to using Newton\u2019s directions that uses the inverse of the Hessian matrix in the Euclidean sense. The proposed method is also efficient in terms of overall computational speed in comparison against other methods using exact line search, but compared to methods using Hager-Zhang type of inexact line search (Hager and Zhang, 2006 ###reference_b16###) it requires too many function evaluations to remain competitive in downstream tasks."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries and notation",
15
+ "text": "A set is called a manifold of dimension if together with bijective smooth mappings (at times called parametrisation) satisfies (a) and (b) for each , . A manifold is called a Riemannian manifold when it is characterized by a pair where for each the function (called metric) associates the usual dot product of vectors in the tangent space at (denoted as ), that is . If is a non-negative function we call it a Riemannian metric.\nLet and be Riemannian manifolds of dimensions and respectively. Also let be a positive and smooth function, namely the warp function. The product endowed with the Riemannian metric\nis called a warped product space and denoted as . Let and where is the -dimensional open set of with the usual Euclidean metric. Denote as an arbitrary function whose graph is defined as . The canonical parametrisation of in is set as where . Let\u2019s denote tangent vectors at as and where stacks the tangent basis vectors associated with the canonical parametrisation and . Then, the induced metric on , using (1 ###reference_###), is given by\nwhere is the warped metric-tensor. From now on we will omit the argument of functions that depend either on or , and recall that since is a bijection we will only make use of the notation or as a variable of a function whenever the current text passage calls it necessary. Observe that the metric-tensor above has the same structural properties as the metric proposed by Hartmann et al. (2022 ###reference_b18###) where the function plays a more general role rather than a fixed scalar value."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Problem formulation and method overview",
21
+ "text": "The notions of manifolds and differentiable manifolds allow us to extend differential calculus to spaces more general than Euclidean in a coordinate-invariant (or parametrisation-invariant) manner. For example, the idea of the gradient of a function as the highest rate of change at a given point becomes invariant under a differentiable manifold viewpoint. The role of the tangent space above is to do exactly this; if we were to choose a different global atlas representing the manifold, the tangent vector would only have a different basis but is still the same. Hence, under this property the above notion of gradient also becomes parametrisation invariant. This seems, at least, a compelling reason to perform practical optimisation procedures using notions of Riemannian geometry so that it would free us of the task of choosing a coordinate-system (parametrisation) on which the optimisation procedure theoretically behaves the best. From a computational perspective this is also important as it allows us, whenever arithmetically solvable, to choose a parametrisation so that a metric would be fully diagonal. This way, computational procedures would clearly incur faster algorithms; see details in Hartmann (2018 ###reference_b17###) and references therein. Moreover, Amari (1998 ###reference_b2###), Hartmann (2018 ###reference_b17###) and Duruisseaux and Leok (2022a ###reference_b12###, b ###reference_b13###) observed that geometric notions can make algorithms less prone to stability issues. They can also improve the condition number (Hird and Livingstone, 2023 ###reference_b20###), which in turn makes them more reliable and overall leads to faster convergence rates. 
From now on, we will introduce the problem and formulate it from the Riemannian viewpoint.\nConsider that is now an objective function for which we aim to solve the maximization task\nWe rephrase the optimisation of the function to a problem of maximizing a function where is an embedded manifold with the metric given in . It will be useful later on to observe now that the embedding space endowed with the metric is the same as with a unitary diagonal metric except for the last component of its main diagonal whose entry is the value of the warp function. The tangent basis vectors at a point in will be denoted as where is the canonical basis. This way any tangent vector can also be represented as .\nLet\u2019s now specify the mapping and since is a bijection between and we have,\nThis means that the first components of are the same as . As is now the search space endowed with a geometry that is Riemannian (see Do Carmo, 1992 ###reference_b10###, 2017 ###reference_b11###, for example) we can harness its intrinsic geometric information and design an optimisation algorithm based on Riemannian concepts.\n###figure_1### Besides, note that due to the Sherman-Morrison-Woodbury identity the inverse of is fast to compute since \nwhere . We also see that , which is fast to compute."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Riemannian conjugate gradient (RCG) with backward retraction",
27
+ "text": "The manifold characterizes all the geometric information of the domain of which can now be used to perform optimization of through using an iterative procedure following the general template of the RCG method presented by Zhu (2020 ###reference_b45###). At each step (a) we identify a point , a search direction and the current Riemannian gradient, (b) obtain a new by optimizing the objective along a given curve passing through with the search direction given in (a), (c) transport the current search direction on to and repeat the above steps. We present the main steps of the algorithm and our main contributions in what follows, leaving the extra technical details of the algorithm itself to the original paper of Zhu (2020 ###reference_b45###) and the mathematical development of this paper to the appendices.\nLet be a smooth function. RCG methods ideally rely on the exponential map and parallel transport (see Do Carmo, 1992 ###reference_b10###, Sections 2 and 3). That is, for a point and a tangent vector at , , the general form of the iterative updates is given by,\nwhere is the parallel transport of along the geodesic from in direction of to , and denotes the Riemannian gradient (the natural gradient, see Appendix B ###reference_###). Note that the choice of the scalar must satisfy the Wolfe conditions in Riemannian settings (see Absil et al., 2008 ###reference_b1###, for example). For the scalar parameter many choices are also possible, each of which will impact the speed of convergence of RCG (see Sato, 2021 ###reference_b36###, for empirical evaluation). In practice RCG methods are difficult: both geodesics and parallel transport require solving a system of differential equations whose solution is usually computed using numerical solvers. 
That is why, in the last decades, these methodologies have been used only for a few matrix manifolds (Absil et al., 2008 ###reference_b1###; Byrne, 2013 ###reference_b7###) where the exponential map and parallel transport have closed-form arithmetics.\nUsually, in practice, the exponential map is replaced by the retraction map and the parallel transport by the vector transport . In this way the iterative updates take the form,\nFrom the numerical viewpoint both of these operations, when suitably defined, do not require solving a system of differential equations. Moreover, they can alleviate the computational overhead considerably while preserving the convergence guarantees of RCG methods (Absil et al., 2008 ###reference_b1###; Boumal, 2023 ###reference_b6###). We refer to these operations as in Absil et al. (2008 ###reference_b1###), Definition 4.1.1 (page 55) and Definition 8.1.1 (page 169).\nRecently, Zhu (2020 ###reference_b45###) has proposed an RCG where the vector transport is defined via a backward retraction map, which is a way of measuring the displacement of two points on a manifold using tangent vectors. For general submanifolds of the Euclidean space, such as the submanifold we are working with, this is computationally feasible and fast to evaluate. They also show that by doing so, their algorithms are able to reduce wall-clock time to reach convergence (see Table 2, Section 6 in their paper).\nThis work presents the general RCG with the inverse backward retraction that subsumes the method proposed by Zhu (2020 ###reference_b45###) (see Sections 3 and 5 and Equation (46) in their paper) and generalized by Sato (2022 ###reference_b37###) (Section 4). On top of those formulations, we propose a retraction map based on the third-order Taylor approximation of the geodesic path. For the vector transport, we propose the inverse backward retraction map. 
As we will show, for the embedded manifold , both the retraction map and the vector transport will incur linear cost in memory requirements and quadratic cost in the number of arithmetic operations . In the next sections, we present the Taylor approximation, the choice of retraction based on it and the particular form of the backward retraction map. After that, we finally present the RCG optimisation algorithm using these particular tools."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Third-order geodesic approximation",
33
+ "text": "A geodesic on is a curve that minimizes the distance between two points on . It generalizes the notion of a straight path on flat spaces (straight lines). Equivalently, the classical second-order derivative of these curves has only the normal vector component at each point . That is, where superscript denotes the orthogonal complement of a set.\nFollowing Monera et al. (2014 ###reference_b27###) we compute a third-order approximation of a geodesic by explicitly considering the parametrisation of . Let where is a curve on the chart which yields a geodesic . Recall that the exponential map is also a retraction map, , which we express as where . Take a vector so that where is the -dimensional unit sphere. Then the third-order approximation of the geodesic at with , in the direction of the tangent vector , is given by,\nwhere . From the fact that is a geodesic the quadratic component of the Taylor series and so the second-order geometry of around only depends on the second-fundamental form (see Do Carmo, 2017 ###reference_b11###, for example). Monera et al. (2014 ###reference_b27###) also observed that the tangential component of only depends on the -order geometry of . We will exploit these properties to compute a third-order Taylor approximation of the geodesic.\nWe start by noting that, in general, the second derivative of a curve on the embedded manifold can be written as,\nwhere denotes the covariant derivative of a tangent in the direction of (see Do Carmo, 1992 ###reference_b10###, Chapter 2), denotes the normal component at and the second-fundamental form on in the direction of (Do Carmo, 1992 ###reference_b10###, Chapter 6). Then, if we express , where , the covariant derivative above can be expressed in matrix form as\nwhere . The Christoffel symbols above, assuming the natural Levi-Civita connection,\nwere arranged in matrices for ease of notation. 
Their general formulation can be found in Do Carmo (1992 ###reference_b10###), Chapter 2, page 56.\nSince geodesics have only a normal component, the coordinates of the tangent component must be the zero vector, thus for the quadratic component it holds , where at . Because is an embedding there is a unique normal vector at which we have denoted as , such that it is of length one (under the warp metric) and it is orthogonal to any vector in . In our case this reads (see Appendix C ###reference_### for proof).\nThe second-fundamental form is a bilinear form defined as where is the natural connection associated with the metric of the ambient space (see Equation (1 ###reference_###)) and the vectors are natural extensions of and . Specifically, after a long computation we obtain (see Appendix F ###reference_###)\nThe computation of the cubic term of the Taylor expansion is slightly more involved as it depends on the time derivative of the second fundamental form, the normal vector and consequently depends on the geodesic equations (Monera et al., 2014 ###reference_b27###); see Appendix E ###reference_###. In the following, we will present the general derivative, leaving the details to Appendix G ###reference_###.\nAs seen in the above equation the acceleration vector appears, and since we are approximating geodesic curves the derivative is given by the geodesic equations. Therefore it follows that (see Appendix E ###reference_### for proof),\nwhere and .\nThus, the third-order Taylor approximation of a geodesic path on for a given and is\nWe can now see that this final expression does not involve the inverse of the Hessian or storage of it in full, but only Hessian-vector products both in and . Therefore the computational implementation has linear memory cost and it is quadratic in the number of computer operations.\nIn the next sections we provide the choice of retraction map based on this approximation and the choice of the vector transport."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Retraction choice",
39
+ "text": "In Section 4 ###reference_###, we referred to the retraction map, that for a given point it takes a vector and maps back onto . Usually the point along the approximate geodesic path (16 ###reference_###) will not lie on and thus it does not satisfy the definition of the retraction map. To define a valid retraction on , we proceed by applying the orthogonal projection of onto and using the canonical parametrisation to push it back onto . That is,\nwhere and\nThe quadratic and cubic coefficients of the Taylor approximation are given by\nSee Appendix E ###reference_### for more details. In order to show that Equation (17 ###reference_###) is indeed a retraction map, let us show that it does satisfy the necessary properties of its definition. Define a curve as,\nEvaluate this curve at , i.e., and the first property holds. For the second property we need to show that we recover in the derivative . The curve derivative is given by\nthus at we have . Therefore we conclude that Equation (17 ###reference_###) is a retraction map. It is also interesting to observe that the term in the Equation (25 ###reference_###), see Appendix E ###reference_###, equals the coefficient and that was obtained by the projection of the normal component of the approximate geodesic curve onto ."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Vector transport as inverse backward retraction",
45
+ "text": "The last tool necessary to complete our proposed algorithm is to define a valid vector transport. We use the inverse backward retraction map proposed by Luenberger (1972 ###reference_b26###) and Zhu (2020 ###reference_b45###) as the inverse orthographic projection. At first, this map is a projection of the difference of two points on the onto a tangent space, which does not seem to characterize a vector transport. However, we can still express it as a vector transport as follows (see Sato, 2022 ###reference_b37###, for details).\nLet , , and . The inverse backward retraction map is defined as . Given , and , define the vector transport operation along as\nObserve that when we have the retraction thus the vector transport becomes , that is, we recover the Euclidean parallelism on . See also Appendix D ###reference_### for orthogonal projection, and how to recover a vector when given ."
46
+ },
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "The novel RCG algorithm",
51
+ "text": "After having obtained Equations (17 ###reference_###) and (7 ###reference_###) as the retraction map and the vector transport, we now propose a new RCG algorithm that optimises the function and therefore the function . This is presented in Algorithm 1 ###reference_###. In Step 1, we set the initial tangent vector as the Riemannian gradient. This can be recalled as the natural gradient (Amari, 1998 ###reference_b2###); see Appendix B ###reference_### for details. Step 3 consists of the optimisation of a one-dimensional composite function . In this function observe that the gradient and the Hessian-vector product do not need to be recomputed as they can be repeatedly retrieved from the cache memory at every iteration inside this inner optimisation. Here we call the Julia package Optim.jl to perform such univariate optimisation of the function .\nAfter the optimum of has been found, Steps 5-8 compute , , (see Equation (23 ###reference_###)) and . Particularly, in Steps 6-7, the computation of and involves dot-products at the tangent spaces and these can still be further simplified; see Appendix A ###reference_###. In Step 8, we do need to compute the Riemannian gradient at the new point and the vector transport of from to along to set the update . However, all those computations do not add more than memory load and arithmetic operations.\nLet initial point\nThe convergence of Algorithm 1 ###reference_### is guaranteed by the fact that we have a valid retraction map and that the value at Step 4, obtained with exact line search, also satisfies the Wolfe conditions. This implies convergence to a stationary point where the Riemannian gradient is null. See for example Sakai and Iiduka (2021 ###reference_b35###), Zhu (2020 ###reference_b45###) (Section 4) and the generalization of the methodological proofs in Sato (2022 ###reference_b37###). 
An important question in the proposed algorithm is whether we can use variants of the scalar value analogous to the Euclidean cases. For example, Sato (2022 ###reference_b37###) (Equations 4.10-4.12 in that paper) requires the vector transport of the gradient to the new point on the manifold. Unfortunately, we cannot apply the vector transport defined within their proposed algorithm. There is no guarantee that, by plugging into Equation (17 ###reference_###) instead of , we will end up at the same point on the manifold as if we had used . Therefore it derails the use of Equation (7 ###reference_###). It would be possible to use an approximation of the parallel transport as noted by Calin and Udri\u015fte (2014 ###reference_b8###) (page 253, problem 8.3), but this is left for future work."
52
+ },
53
+ {
54
+ "section_id": "9",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": "In this section we conduct experiments using three different sets of functions defined entirely on Euclidean spaces with varying dimensionality . For each function and dimension , we perform optimisation using Algorithm 1 ###reference_### with the retraction map based on the third-order approximation of the geodesic. As our choice of involves two scalar values and (see Appendix H ###reference_### for details) we perform some preliminary runs in low dimension to check convergence and performance. After that, the selected values were and for all the following experiments with three different sets of model examples.\nWe compare the RCG method, labelled as \u201cRCG (ours)\u201d, with two other conjugate gradient methods, differing in the choice of the search direction and the line-search method used for determining the step size . The first method, denoted as \u201cCG-exact (ours)\u201d, uses standard Euclidean gradients and exact line search when . This corresponds to the Euclidean version of the Dai-Yuan conjugate gradient (Dai and Yuan, 1999 ###reference_b9###). We use here our own implementation; therefore the approximate geodesics are naturally replaced by straight lines in the Euclidean sense. The retraction simply becomes and the Riemannian gradient becomes the classical Euclidean gradient. The vector transport reduces to the Euclidean parallelism (see Section 7 ###reference_###). This comparison directly quantifies the value of considering Riemannian search directions within our own implementation routine.\nThe other two methods use the classical conjugate gradient method and pure Newton\u2019s directions as the search directions. Both are already implemented in the Julia packages Optim.jl and LineSearches.jl. Both of these methods, however, are set to use Hager-Zhang type of inexact line search as implemented in LineSearches.jl. We denote these as \u201cCG-inexact\u201d and \u201cND-inexact\u201d in the upcoming plots and experiments. 
These methods are representative of the practical optimization algorithms used, and hence help in understanding the possibilities and limitations of the proposed method.\nFor the first and second sets of models we know the optima beforehand, so that the stopping criterion for these models under test will be given by (machine precision), and in all runs we set the maximum number of iterations to be . For the third set of models, we do not know the optima. The stopping criterion is then set to be the absolute function difference between consecutive points as or with a maximum number of iterations equal to 4000. In the next sections we specify the models and the practical experiments.\nNotes on computational implementation:\nThe main cost in the formulation of the presented RCG method lies in the number of arithmetic operations in the Hessian-vector products. Except for the time derivative of the Hessian-vector product, we use automatic differentiation (AD) (Baydin et al., 2018 ###reference_b3###) via the package . As mentioned in Section 5 of Baydin et al. (2018 ###reference_b3###), when AD is carefully implemented it only increases the computational complexity by a small factor. This is also the reason we use the package ADNLPmodels.jl: it allowed us to implement the RCG method for some models from the CUTE library using AD tools. Not all models in the CUTE library are implemented in ADNLPmodels.jl (personal communication with package developer Dr. Tangi Migot from Polytechnique Montr\u00e9al)."
58
+ },
59
+ {
60
+ "section_id": "9.1",
61
+ "parent_section_id": "9",
62
+ "section_name": "The D-dimensional squiggle probability model",
63
+ "text": "The squiggle probability distribution has expression , \u2026, with parameters and positive-definite (PD) matrix, where denotes the multivariate Gaussian density function with parameters and . The squiggle function can have the shape of a thin sine function that concentrates its probability density around a zig-zag region, producing narrow uphill curved region towards its unique global maximiser with . The PD matrix controls the orientation and how much thin the sine-shaped function can be. This effect is more pronounced with large values of . We set these parameters to and . As initial value for this function we set to be far away from the maxima, this mimics real practice where we do not know the maximizer beforehand.\nIn Figure 2 ###reference_###, we display the trace of three different optimisation routines as measure of computation effort (iterations) against accuracy measure (function discrepancy with its known maxima at each iteration until the maxima). This is the case again because the maximizer is known and therefore the function maxima. We also consider varying dimensionality in the experiments and plot the traces for each dimension. This experiment shows that RCG (ours) improves the number of iterations until convergence when compared to CG-inexact and it is a competitor to ND-inexact. Observe that, RCG (ours) and the CG-exact (ours) use exact line search and the other two methods, via Julia\u2019s implementation use inexact line search. For the latter cases, the Wolfe conditions may be too conservative with the step-size due to the \u201dzig-zag\u201d shape of the Squiggle function, leading to a slower convergence of the CG-inexact. The ND-inexact counterbalance this problem since the Newton\u2019s direction might be nearly orthogonal to the gradient direction.\n###figure_2###"
64
+ },
65
+ {
66
+ "section_id": "9.2",
67
+ "parent_section_id": "9",
68
+ "section_name": "The generalized Rosenbrock function",
69
+ "text": "The Rosenbrock function, , has been widely used in benchmark tests for numerical optimisation problems\n(Rosenbrock, 1960 ###reference_b34###; Kok, 2009 ###reference_b22###). For and its surface landscape is quite difficult. There is one global maxima in with and one local maxima at with . The global maximiser lies in a very narrow uphill region which makes the optimisation harder. The starting point for the optimisation routines is set to to make the task harder as this function has been studied in the range (Franca et al., 2020 ###reference_b14###). This function is not log-concave in its entire domain .\nFor this set of models we plot the same measure of computation effort and accuracy as in the previous example. Figure 3 ###reference_### displays the performance of the RCG (ours), CG-exact (ours), CG-inexact and ND-inexact for variety of varying dimensions for the Rosenbrock model. In this case we also observe an improvement of RCG (ours) as the number of iterations until the discrepancy measure reaches the stopping criteria above as it decreases faster than the CG-exact (ours) and CG-inexact. The RCG (ours) is not faster than ND-inexact. Please, observe that the RCG (ours) is a first-order Riemannan optimisation scheme, not second. The analogous Riemannan second-order optimisation routine would use the Riemannian Hessian, see Boumal (2023 ###reference_b6###).\n###figure_3###"
70
+ },
71
+ {
72
+ "section_id": "9.3",
73
+ "parent_section_id": "9",
74
+ "section_name": "A test-set of the CUTE library",
75
+ "text": "The third and last set of models comprise some from the library CUTE (The Constrained and Unconstrained Testing Environment)111www.cuter.rl.ac.uk//mastsif.html implemented in Julia\u2019s package ADNLPmodels.jl 222https://github.com/JuliaSmoothOptimizers/ADNLPModels.jl. This subset of models can be found online333www.cuter.rl.ac.uk/Problems/classification.shtml and has classification SUR2-AN-V-0 (unconstrained search space). The models here chosen under this classification have IDs respectively given by EXTROSNB, CHNROSNB and GENROSE. We also vary the dimensionality of each model and use the same dimensions as in the previous tests. For all these models the initial value where the value is a quantity obtained from the model type and its respective dimension inside the library ADNLPmodels.jl.\nIn Figure 4 ###reference_### we display the performance of the RCG (ours), CG-exact (ours), CG-inexact and ND-inexact on this subset of models from the CUTE library. For the model EXTROSNB, we observe the performance of RCG (ours) is aligned with CG-exact (ours) and both are faster than CG-inexact and ND-inexact. In this case, the choice of parameters may be far from optimal, but the RCG (ours) was still able to be in pair with the CG-exact (ours). For the last two types of models, the RCG (ours) shows smaller number of iterations to achieve the stopping criteria when compared to the CG-exact (ours) and CG-inexact. In relation to ND-exact the RCG (ours) is not faster, achieving the stopping criteria after ND-inexact would do so. In relation to the wall-clock time and memory consumption the RCG (ours) generally dominates when compared to Julia\u2019s implementation. The wall-clock time and memory load of RCG (ours) is in line with CG-exact (ours). 
For these models and the chosen quality measure the CG-inexact is actually the fastest in all cases, but we note that there are likely significant differences in implementation quality when comparing our proof-of-concept implementation with established and highly optimized software packages.\n###figure_4###"
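The stopping rule used for this third set of models (absolute difference of consecutive function values below a tolerance, capped at 4000 iterations) can be sketched as a generic driver; `step` is a placeholder for one iteration of any of the optimisers compared here:

```python
def run_until_converged(step, x0, f, tol=1e-10, max_iter=4000):
    """Iterate x_{k+1} = step(x_k) until |f(x_{k+1}) - f(x_k)| <= tol
    or max_iter iterations are reached.  Returns the final iterate and
    the number of iterations performed."""
    x, fx = x0, f(x0)
    for k in range(max_iter):
        x_new = step(x)
        fx_new = f(x_new)
        if abs(fx_new - fx) <= tol:
            return x_new, k + 1
        x, fx = x_new, fx_new
    return x, max_iter
```

This criterion is the natural fallback when the optimum is unknown, at the cost of possibly stopping early on flat plateaus.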
76
+ }
77
+ ],
78
+ "appendix": [
79
+ {
80
+ "section_id": "Appendix 1",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix A Simplification of dot-products on tangent spaces",
83
+ "text": "Recall that for a point and tangents expressed as , in the parameterization , the inner product at is given by . We use that fact that is the embedded manifold to simplify the computations. The following inner products are given by\nand\nIf then ."
84
+ },
85
+ {
86
+ "section_id": "Appendix 2",
87
+ "parent_section_id": null,
88
+ "section_name": "Appendix B Derivation of the Riemannian gradient as the Natural gradient",
89
+ "text": "Let such that with (the chart). Moreover, let . By definition the Riemannian gradient is the vector in the tangent space , such that for a given it holds . Recall that the base of the tangent space is . Then the differential is . By definition and we get .\nThen we see and . Note that so that which yields .\nFrom where we identify the expression as the Natural gradient, which are the components of the gradient vector of at . We also note that Boumal (2023 ###reference_b6###) provides an easier way to express the Riemmanian gradient on embedded manifolds."
90
+ },
91
+ {
92
+ "section_id": "Appendix 3",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix C Normal vector on",
95
+ "text": "This section computes the normal vector at a point considering the warped metric . The notation or will be used interchangeably. Denote where and is the normal vector to . Considering the canonical parametrisation , the orthogonality under the metric gives for . This implies the system of equations . Assuming that the normal vector has unit length we have . Using the system of equations to solve for the coordinate we get\nand solving for the last coordinate we obtain . This leads to .\nTherefore the normal vector at is\nwhere and this vector has unit norm under the metric ."
96
+ },
97
+ {
98
+ "section_id": "Appendix 4",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix D Orthogonal projection on",
101
+ "text": "Again denote where and is the normal vector to . We known that and we want to find the orthogonal projection of onto . Since we need to find the coordinate components of . To do so we need to solve for . This is given by the weighted least-square solution. That is,\nwhere is the metric-tensor of the ambient space . Therefore the orthogonal projection of a vector onto , denoted as has the form,\nObserve that is the metric-tensor induced on . Expanding the terms above we get,"
102
+ },
103
+ {
104
+ "section_id": "Appendix 5",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix E Christoffel symbols and geodesic equations on",
107
+ "text": "Recall that is a function of , the metric-tensor induced on is whose inverse is where . The Christoffel symbols for are computed using its general formulation (see Do Carmo, 1992 ###reference_b10###, for example). After some algebraic manipulation we can organize the Christofell symbols in matrices. The development leads to,\nBecause the Hessian is symmetric, some terms will cancel out and others combine. Expanding the summation above, we get\nNote that, except for the last term, all the terms in the first passage are computed similarly. That is why the equation is shortened. In the last passage we explicitly show the complete form of all the terms composing . The Christoffel symbols when arranged in full matrices are denoted as , and are generally written as,\nTo further simplify the notation, let\nThus,\nThe computation of the geodesic equations associated to will also follow the general formalism. We will use the results above to make the equations more compact aiming at computational purposes. Using the geodesic equations and\nexpanding one element in its right hand-side, Equation (10 ###reference_###), we get,\nObserve that the quadratic form can be expanded to have a computational-friendly expression, so that we do not need to form squared matrices. It follows,\nand\nWe now clearly see the need of only two gradients, each of which are multiplied respectively by and which are scalar numbers. Therefore the geodesic equations above become,\nindeed all elements of the above equation are dependent on ."
108
+ },
109
+ {
110
+ "section_id": "Appendix 6",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix F Second-fundamental form on",
113
+ "text": "The second-fundamental form acting on the tangent space of in computed as follows. In short notation the normal vector on the ambient space will be denoted , where . Its Euclidean partial derivative in the direction is given by\nfor and . The matrix , it is composed by the canonical basis of . The second-fundamental form in the direction of is defined as,\nwhere ,\n and is the connection associated with the warped metric in the ambient space . In the ambient space, the Christoffel symbols (in matrices forms) associated with the connection are given by\nfor and\nfor since does not depend on . The covariant derivative can be computed using the general definition in Do Carmo (1992 ###reference_b10###). It follows,\nwhere is the coordinate of and for . Observe that , however which makes the last term in the sum above disappear. Plugging the covariant derivative and the tangent vector into the definition of the second-fundamental form yields\nAfter some algebraic manipulation the first term of the above sum becomes\nTherefore, considering the negative sign, the second-fundamental acting on the tangent space of is given by"
114
+ },
115
+ {
116
+ "section_id": "Appendix 7",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix G Third-order Taylor expansion of the geodesic curve",
119
+ "text": "The third-order order degree Taylor approximation of the geodesic curve on a point in the direction of is given by\nwhere is the normal component of the geodesic curve on . This normal component is given by the second-fundamental form multiplied by the normal vector since geodesics have null tangential component. That is, for a geodesic , . Expanding this expression at we get\nwhere for a given the coordinates components can be recovered using the orthogonal projection above, that is, . The third-order component of the approximate geodesic is and obtained by taking the time derivative of at in the direction of .\nHere we are interested in the first components of this approximation since usually. Following the retraction map choice, we apply the orthogonal projection of towards to obtain its first component. Then we can write\nwhere we have used that . The particular derivatives which compose the complete time derivative above are straightforward to compute and accordingly to the choice of the warp function."
120
+ },
121
+ {
122
+ "section_id": "Appendix 8",
123
+ "parent_section_id": null,
124
+ "section_name": "Appendix H The choice of warp function",
125
+ "text": "Suppose that we have a natural embedding of on equipped with the Euclidean metric and the canonical parametrisation aforementioned. Then the normal vector at is given by \nWe first define the warp function to be the norm of the orthogonal projection of over . This implies that .\nFrom here we can see that . We can see that in regions far away from the optima of , the function as we will expect the components of the gradient to have high magnitude. Note that close to the optima as the gradient components tend to zero and the metric (identity), so that at the optima we recover the Euclidean metric. However, we believe that this function may be too restrictive for functions that can induce strong \u201dbending\u201d on the approximate geodesic path which may undesirable for practical purposes. For this reason we propose a more flexible warp function defining\nwhere are scalars controlling respectively the upper bound and the flattening of the function . The larger the values of the smaller the function will be. If the function . If and , . In order to visualize the behaviour of the Taylor series in the approximation of geodesics with varying and see Figure 5 ###reference_###. In this figure the example constructed considers the function where denotes the Gaussian density function in two dimensions with and . The approximations are made at and and both are depicted in blue colour. The panel displays the -order Taylor series approximation with a series of increasing values. See also Equation (20 ###reference_###) for the general approximate geodesic expression. On the right side of the picture we plot the profile of the composite function as a function of a scalar . As varies, the plot on the right corresponds a walk-through along the approximate geodesics path on the left plot for different values.\n###figure_5### In the previous calculations across the paper the gradient of the warp function was required. 
For this particular choice of warp function we obtain its gradient and time derivative as follows. Denote , we have"
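As an illustration only, the following hypothetical warp reproduces the limiting behaviour stated above: it vanishes at stationary points of the log-density and shrinks as either scalar grows (large values flattening the geometry towards Euclidean, where approximate geodesics become straight lines). The functional form below is an assumption for the sketch; the paper's exact expression is not reproduced here:

```python
import numpy as np

def warp(grad_l, alpha=1.0, sigma2=1.0):
    """Hypothetical flexible warp
        psi = ||grad_l|| / sqrt(alpha + sigma2 * ||grad_l||^2),
    chosen only to mimic the qualitative behaviour described in the text:
    psi = 0 where the gradient vanishes (recovering the Euclidean metric),
    and psi decreases monotonically in both alpha and sigma2."""
    n = np.linalg.norm(grad_l)
    return n / np.sqrt(alpha + sigma2 * n * n)
```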
126
+ },
127
+ {
128
+ "section_id": "Appendix 9",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix I Computational costs in the experiments",
131
+ "text": "Here we show extra experiments showing the performance of RCG (ours) method proposed for the first two sets of models. In Figure 6 ###reference_###, we added the wall-clock time and memory consumption. Panel (a) shows the squiggle model. In this case ND-inexact, RCG (ours) and CG-exact (ours) dominate the speed of convergence. Time and memory comsumptions are greater for CG-exact (ours) when compared to other optimisers. For the Rosenbrock model, in panel (b), we see that RCG (ours) is faster in terms of number of iterations when compared to the to CG counterparts and closer to the ND-inexact. The time and memory consumption is dominated by the CG-exact (ours).\n###figure_6### ###figure_7###"
132
+ }
133
+ ],
134
+ "tables": {},
135
+ "image_paths": {
136
+ "1": {
137
+ "figure_path": "2308.08305v2_figure_1.png",
138
+ "caption": "Figure 1: Visual interpretation of the domain of the functions \u2113\u2113\\ellroman_\u2113 and f\ud835\udc53fitalic_f. On the left panel, the plane region (\u03b81,\u03b82)\u2208\u0398subscript\ud835\udf031subscript\ud835\udf032\u0398(\\theta_{1},\\theta_{2})\\in\\Theta( italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b8 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) \u2208 roman_\u0398 is to be understood as Euclidean. The coloured surface depicts where the function f\ud835\udc53fitalic_f is defined on the graph of \u2113\u2113\\ellroman_\u2113, that is on \u0393\u2113subscript\u0393\u2113\\Gamma_{\\ell}roman_\u0393 start_POSTSUBSCRIPT roman_\u2113 end_POSTSUBSCRIPT and the ambient space as \ud835\udca9\u00d7\u2133\u03c8\ud835\udca9subscript\u2133\ud835\udf13\\operatorname{\\mathcal{N}\\times\\mathcal{M}_{\\psi}}caligraphic_N \u00d7 caligraphic_M start_POSTSUBSCRIPT italic_\u03c8 end_POSTSUBSCRIPT. In this example the function \u2113\u2062(\ud835\udf3d)=log\u2061\ud835\udca2\u2062([\u03b81,\u03b82+sin\u2061(\u03b81)]|\ud835\udf41,\u03a3)\u2113\ud835\udf3d\ud835\udca2conditionalsubscript\ud835\udf031subscript\ud835\udf032subscript\ud835\udf031\ud835\udf41\u03a3\\ell(\\operatorname{\\boldsymbol{\\theta}})=\\log\\mathcal{G}\\big{(}[\\theta_{1},%\n\\theta_{2}+\\sin(\\theta_{1})]|\\operatorname{\\boldsymbol{\\mu}},\\Sigma\\big{)}roman_\u2113 ( bold_italic_\u03b8 ) = roman_log caligraphic_G ( [ italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b8 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT + roman_sin ( italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) ] | bold_italic_\u03bc , roman_\u03a3 ) where \ud835\udca2\ud835\udca2\\mathcal{G}caligraphic_G denotes the Gaussian density \ud835\udf41=\ud835\udfce\ud835\udf41\ud835\udfce\\operatorname{\\boldsymbol{\\mu}}=\\operatorname{\\boldsymbol{0}}bold_italic_\u03bc = bold_0 and \u03a3=diag\u2061(1,0.01)\u03a3diag10.01\\Sigma=\\operatorname{\\mathrm{diag}}(1,0.01)roman_\u03a3 = 
roman_diag ( 1 , 0.01 ). The set \u0393\u2113subscript\u0393\u2113\\Gamma_{\\ell}roman_\u0393 start_POSTSUBSCRIPT roman_\u2113 end_POSTSUBSCRIPT has element \ud835\udc31=(\ud835\udf3d,\u2113\u2062(\ud835\udf3d))\ud835\udc31\ud835\udf3d\u2113\ud835\udf3d\\operatorname{\\boldsymbol{x}}=(\\operatorname{\\boldsymbol{\\theta}},\\ell(%\n\\operatorname{\\boldsymbol{\\theta}}))bold_x = ( bold_italic_\u03b8 , roman_\u2113 ( bold_italic_\u03b8 ) ) and is showed on the \u201dheight\u201d axis. This set can be understood as a embedded Riemannian manifold in the higher-dimensional space \ud835\udca9\u00d7\u2133\u03c8=\u211d3\ud835\udca9subscript\u2133\ud835\udf13superscript\u211d3\\operatorname{\\mathcal{N}\\times\\mathcal{M}_{\\psi}}=\\mathbb{R}^{3}start_OPFUNCTION caligraphic_N \u00d7 caligraphic_M start_POSTSUBSCRIPT italic_\u03c8 end_POSTSUBSCRIPT end_OPFUNCTION = blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT (associated with the warped metric). On the right panel we show the behaviour of the domain of f\ud835\udc53fitalic_f as a function of a given warp function \u03c8\ud835\udf13\\psiitalic_\u03c8. As \u03c8\ud835\udf13\\psiitalic_\u03c8 is closer to zero, the closer to Euclidean the set (or geometry) \u0393\u2113subscript\u0393\u2113\\Gamma_{\\ell}roman_\u0393 start_POSTSUBSCRIPT roman_\u2113 end_POSTSUBSCRIPT is.",
139
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/domain-fig2.png"
140
+ },
141
+ "2": {
142
+ "figure_path": "2308.08305v2_figure_2.png",
143
+ "caption": "Figure 2: Number of iterations until convergence for a variety of dimensions using the squiggle model. The RCG (ours) in Algorithm 1, presents, in general, faster convergence than the, CG-exact (ours) and CG-inexact. Both RCG (ours) and CG-exact (ours), are comparable to the ND-inexact that also converges fast (in number of iterations) for all cases when compared to CG-inexact. Here the dimension is taken up to D=250\ud835\udc37250D=250italic_D = 250\n(See Appendix I for extra information).",
144
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/squigglefall-2.png"
145
+ },
146
+ "3": {
147
+ "figure_path": "2308.08305v2_figure_3.png",
148
+ "caption": "Figure 3: This figure show the number of iterations until convergence considering the Rosenbrock function with varying dimensions D\ud835\udc37Ditalic_D. The parameters of the function were set to be a=1\ud835\udc4e1a=1italic_a = 1 and b=100\ud835\udc4f100b=100italic_b = 100 as it is usually done in benchmark settings. The RCG (ours), tends to converge faster than the CG counterparts and slower when compared to ND-inexact. See also Appendix I for other types of computational cost and discussions.",
149
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/rosenbrfall-2.png"
150
+ },
151
+ "4": {
152
+ "figure_path": "2308.08305v2_figure_4.png",
153
+ "caption": "Figure 4: This figure depicts the convergence of the RCG (ours), CG-exact (ours), CG-inexact and ND-exact in the test set of problems from the CUTE library. The RCG (ours) method generally reaches smaller number of iterations to satisfy the stopping criteria when compared to CG counterparts. The ND-inexact achieves smallest number of iterations in comparison to RCG (ours) and CG counterparts for the second and third CUTE models for all dimensions. The computational cost in wall-clock time and memory requirement is generally larger for the RCG (ours) implementation.",
154
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/cutefall-2.png"
155
+ },
156
+ "5": {
157
+ "figure_path": "2308.08305v2_figure_5.png",
158
+ "caption": "Figure 5: Geodesic approximations based on 3rdsuperscript3rd3^{\\textrm{rd}}3 start_POSTSUPERSCRIPT rd end_POSTSUPERSCRIPT-order Taylor series. The level set in gray represents the function \u2113\u2062(\ud835\udf3d)=log\u2061\ud835\udca9\u2062([\u03b81,\u03b82+sin\u2061(1.3\u2062\u03b81)]|\ud835\udf41,\u03a3)\u2113\ud835\udf3d\ud835\udca9conditionalsubscript\ud835\udf031subscript\ud835\udf0321.3subscript\ud835\udf031\ud835\udf41\u03a3\\ell(\\operatorname{\\boldsymbol{\\theta}})=\\log\\mathcal{N}\\big{(}[\\theta_{1},%\n\\theta_{2}+\\sin(1.3\\theta_{1})]|\\operatorname{\\boldsymbol{\\mu}},\\Sigma\\big{)}roman_\u2113 ( bold_italic_\u03b8 ) = roman_log caligraphic_N ( [ italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b8 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT + roman_sin ( 1.3 italic_\u03b8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) ] | bold_italic_\u03bc , roman_\u03a3 ) where \ud835\udca9\ud835\udca9\\mathcal{N}caligraphic_N denotes the Gaussian density \ud835\udf41=\ud835\udfce\ud835\udf41\ud835\udfce\\operatorname{\\boldsymbol{\\mu}}=\\operatorname{\\boldsymbol{0}}bold_italic_\u03bc = bold_0 and \u03a3=diag\u2061(20,0.1)\u03a3diag200.1\\Sigma=\\operatorname{\\mathrm{diag}}(20,0.1)roman_\u03a3 = roman_diag ( 20 , 0.1 ). 
The blue point \u03be\u22121\u2062(\ud835\udc31)=[3.0 1.4]\u22a4superscript\ud835\udf091\ud835\udc31superscriptdelimited-[]3.01.4top\\xi^{-1}(\\operatorname{\\boldsymbol{x}})=[3.0\\ 1.4]^{\\top}italic_\u03be start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT ( bold_x ) = [ 3.0 1.4 ] start_POSTSUPERSCRIPT \u22a4 end_POSTSUPERSCRIPT and blue vector \ud835\udc2f=[\u22121.2\u22121.0]\u22a4\ud835\udc2fsuperscriptdelimited-[]1.21.0top\\operatorname{\\boldsymbol{v}}=[-1.2\\ -1.0]^{\\top}bold_v = [ - 1.2 - 1.0 ] start_POSTSUPERSCRIPT \u22a4 end_POSTSUPERSCRIPT display the point and direction where the approximation of the geodesic curve on is made for a series of increasing \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT values. As \u03c32superscript\ud835\udf0e2\\sigma^{2}italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT values increase the approximations tend to be closer to a straight line and in the limit of \u03c32\u2192\u221e\u2192superscript\ud835\udf0e2\\sigma^{2}\\rightarrow\\inftyitalic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT \u2192 \u221e the geodesic approximation becomes aligned with the search direction \ud835\udc2f\ud835\udc2f\\operatorname{\\boldsymbol{v}}bold_v.",
159
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/geodapprox.png"
160
+ },
161
+ "6(a)": {
162
+ "figure_path": "2308.08305v2_figure_6(a).png",
163
+ "caption": "(a)\nFigure 6: In the panels (a) and (b) display the computation performance for the first two sets of models. The squiggle model and Rosenbrock model. The experiments are in terms of number of iterations, wall-clock time and memory consumption (from left to right). All measured until the stopping criteria of the the algorithms. Panel (a) shows the performance of all algorithms for the squiggle model. Panel (b) shows the same experiment but for the rosenbrock model. The performance of the RCG (ours) clearly improves the number of iterations until convergence, however the time and memory consumption until convergence has been higher than the state-of-art implementation of classical algorithms such as CG-inexact and ND-inexact in Julia language.",
164
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/squiggleNTM-2.png"
165
+ },
166
+ "6(b)": {
167
+ "figure_path": "2308.08305v2_figure_6(b).png",
168
+ "caption": "(b)\nFigure 6: In the panels (a) and (b) display the computation performance for the first two sets of models. The squiggle model and Rosenbrock model. The experiments are in terms of number of iterations, wall-clock time and memory consumption (from left to right). All measured until the stopping criteria of the the algorithms. Panel (a) shows the performance of all algorithms for the squiggle model. Panel (b) shows the same experiment but for the rosenbrock model. The performance of the RCG (ours) clearly improves the number of iterations until convergence, however the time and memory consumption until convergence has been higher than the state-of-art implementation of classical algorithms such as CG-inexact and ND-inexact in Julia language.",
169
+ "url": "http://arxiv.org/html/2308.08305v2/extracted/5479201/rosenbrNTM-2.png"
170
+ }
171
+ },
172
+ "validation": true,
173
+ "references": [
174
+ {
175
+ "1": {
176
+ "title": "Princeton University Press, illustrated edition edn.",
177
+ "author": "Absil, P.-A., Mahony, R. and Sepulchre, R. (2008) Optimization\nalgorithms on matrix manifolds.",
178
+ "venue": null,
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "2": {
184
+ "title": "Neural Computation (communicated by Steven Nowlan and Erkki\nOja), 10, 251\u2013276.",
185
+ "author": "Amari, S. (1998) Natural Gradient Works Efficiently in Learning.",
186
+ "venue": null,
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "3": {
192
+ "title": "Journal of Machine Learning Research, 18, 1\u201343.",
193
+ "author": "Baydin, A. G., Pearlmutter, B. A., Radul, A. A. and Siskind, J. M. (2018)\nAutomatic differentiation in machine learning: a survey.",
194
+ "venue": null,
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "4": {
200
+ "title": "SIAM Review, 59, 65\u201398.",
201
+ "author": "Bezanson, J., Edelman, A., Karpinski, S. and Shah, V. B. (2017) Julia: A fresh\napproach to numerical computing.",
202
+ "venue": null,
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "5": {
208
+ "title": "Neural Networks, 17, 65\u201371.",
209
+ "author": "Bhaya, A. and Kaszkurewicz, E. (2004) Steepest descent with momentum for\nquadratic functions is a version of the conjugate gradient method.",
210
+ "venue": null,
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "6": {
216
+ "title": "Cambridge University Press.",
217
+ "author": "Boumal, N. (2023) An Introduction to Optimization on Smooth Manifolds.",
218
+ "venue": null,
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "7": {
224
+ "title": "Scandinavian Journal of Statistics, 40, 825\u2013845.",
225
+ "author": "Byrne, Simon; Girolami, M. (2013) Geodesic Monte Carlo on embedded\nmanifolds.",
226
+ "venue": null,
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "8": {
232
+ "title": "Springer International Publishing, 1 edn.",
233
+ "author": "Calin, O. and Udri\u015fte, C. (2014) Geometric Modeling in Probability and\nStatistics.",
234
+ "venue": null,
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "9": {
240
+ "title": "SIAM Journal on Optimization, 10, 177\u2013182.",
241
+ "author": "Dai, Y. H. and Yuan, Y. (1999) A nonlinear conjugate gradient method with a\nstrong global convergence property.",
242
+ "venue": null,
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "10": {
248
+ "title": "Mathematics. Theory & applications. Birkh\u00e4user, 1 edn.",
249
+ "author": "Do Carmo, M. P. (1992) Riemannian Geometry.",
250
+ "venue": null,
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "11": {
256
+ "title": "Dover Publications, 2nd edition edn.",
257
+ "author": "\u2014 (2017) Differential Geometry of Curves and Surfaces.",
258
+ "venue": null,
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "12": {
264
+ "title": "ArXiv.",
265
+ "author": "Duruisseaux, V. and Leok, M. (2022a) Accelerated optimization on\nRiemannian manifolds via projected variational integrators.",
266
+ "venue": "URL: https://arxiv.org/abs/2201.02904.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "13": {
272
+ "title": "SIAM Journal on Mathematics of Data Science, 4,\n649\u2013674.",
273
+ "author": "\u2014 (2022b) A variational formulation of accelerated optimization\non Riemannian manifolds.",
274
+ "venue": null,
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "14": {
280
+ "title": "In Advances in Neural Information Processing Systems (eds.\nH. Larochelle, M. Ranzato, R. Hadsell, M. Balcan and H. Lin), vol. 33,\n16916\u201316926. Curran Associates, Inc.",
281
+ "author": "Franca, G., Sulam, J., Robinson, D. and Vidal, R. (2020) Conformal symplectic\nand relativistic optimization.",
282
+ "venue": null,
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "15": {
288
+ "title": "Journal of the Royal Statistical Society: Series B\n(Statistical Methodology), 73, 123\u2013214.",
289
+ "author": "Girolami, M. and Calderhead, B. (2011) Riemann manifold Langevin and\nHamiltonian Monte Carlo methods.",
290
+ "venue": null,
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "16": {
296
+ "title": "ACM Transactions on Mathematical Software, 32,\n113\u2013137.",
297
+ "author": "Hager, W. W. and Zhang, H. (2006) Algorithm 851: Cgdescent, a conjugate\ngradient method with guaranteed descent.",
298
+ "venue": null,
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "17": {
304
+ "title": "Statistics and Computing, 29, 753\u2013773.",
305
+ "author": "Hartmann, Marcelo; Vanhatalo, J. (2018) Laplace approximation and natural\ngradient for Gaussian process regression with heteroscedastic student-t\nmodel.",
306
+ "venue": null,
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "18": {
312
+ "title": "In Proceedings of The 25th International Conference on\nArtificial Intelligence and Statistics (eds. G. Camps-Valls, F. J. R. Ruiz\nand I. Valera), vol. 151 of Proceedings of Machine Learning\nResearch, 4764\u20134781. PMLR.",
313
+ "author": "Hartmann, M., Girolami, M. and Klami, A. (2022) Lagrangian manifold Monte\nCarlo on Monge patches.",
314
+ "venue": null,
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "19": {
320
+ "title": "Journal of research of the National Bureau of Standards,\n49, 409\u2013436.",
321
+ "author": "Hestenes, M. R., Stiefel, E. et al. (1952) Methods of conjugate gradients for\nsolving linear systems.",
322
+ "venue": null,
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "20": {
328
+ "title": "In 3rd International Conference on Learning Representations,\nICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings\n(eds. Y. Bengio and Y. LeCun).",
329
+ "author": "Kingma, D. P. and Ba, J. (2015) Adam: A method for stochastic optimization.",
330
+ "venue": null,
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "21": {
336
+ "title": "Evolutionary Computation, 17, 437\u2013453.",
337
+ "author": "Kok, Schalk; Sandrock, C. (2009) Locating and characterizing the stationary\npoints of the extended Rosenbrock function.",
338
+ "venue": null,
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "22": {
344
+ "title": "Journal of Computational Physics, 308, 81\u2013101.",
345
+ "author": "Lan, S., Bui-Thanh, T., Christie, M. and Girolami, M. (2016) Emulation of\nhigher-order tensors in manifold monte carlo methods for bayesian inverse\nproblems.",
346
+ "venue": null,
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "23": {
352
+ "title": "Journal of Computational and Graphical Statistics,\n24, 357\u2013378.",
353
+ "author": "Lan, S., Stathopoulos, V., Shahbaba, B. and Girolami, M. (2015) Markov chain\nMonte Carlo from Lagrangian dynamics.",
354
+ "venue": null,
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "24": {
360
+ "title": "Springer texts in statistics. Springer, 2nd ed edn.",
361
+ "author": "Lehmann, G. C. (2003) Theory of Point Estimation.",
362
+ "venue": null,
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "25": {
368
+ "title": "Management Science, 18, 620\u2013631.",
369
+ "author": "Luenberger, D. G. (1972) The gradient projection method along geodesics.",
370
+ "venue": null,
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "26": {
376
+ "title": "Revista de la Real Academia de Ciencias Exactas, Fisicas y\nNaturales. Serie A. Matematicas, 108, 881\u2013906.",
377
+ "author": "Monera, M. G., Montesinos-Amilibia, A. and Sanabria-Codesal, E. (2014) The\nTaylor expansion of the exponential map and geometric applications.",
378
+ "venue": null,
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "27": {
384
+ "title": "In Doklady Akademii Nauk, 543\u2013547. Russian Academy of\nSciences.",
385
+ "author": "Nesterov, Y. E. (1983) A method of solving a convex programming problem with\nconvergence rate .",
386
+ "venue": null,
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "28": {
392
+ "title": "Mathematical Programming, 45, 503\u2013528.",
393
+ "author": "Nocedal, D. C. L. J. (1989) On the limited memory BFGS method for large scale\noptimization.",
394
+ "venue": null,
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "29": {
400
+ "title": "Springer series in operations research. Springer, 2nd ed edn.",
401
+ "author": "Nocedal, J. and Wright, S. (2006) Numerical Optimization.",
402
+ "venue": null,
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "30": {
408
+ "title": "ISSN. Elsevier Science.",
409
+ "author": "O\u2019Neill, B. (1983) Semi-Riemannian Geometry with applications to\nrelativity.",
410
+ "venue": null,
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "31": {
416
+ "title": "Neural Computation, 6, 147\u2013160.",
417
+ "author": "Pearlmutter, B. A. (1994) Fast exact multiplication by the Hessian.",
418
+ "venue": null,
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "32": {
424
+ "title": "Applied Mathematical Science 124. Springer, New York, 1st edn.",
425
+ "author": "Polak, E. (1997) Optimization: Algorithms and Consistent\nApproximations.",
426
+ "venue": null,
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "33": {
432
+ "title": "The Computer Journal, 3, 175\u2013184.",
433
+ "author": "Rosenbrock, H. H. (1960) An automatic method for finding the greatest or least\nvalue of a function.",
434
+ "venue": null,
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "34": {
440
+ "title": "Journal of Optimization Theory and Applications,\n190, 130\u2013150.",
441
+ "author": "Sakai, H. and Iiduka, H. (2021) Sufficient descent Riemannian conjugate\ngradient methods.",
442
+ "venue": null,
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "35": {
448
+ "title": "SpringerBriefs in Electrical and Computer Engineering. Springer\nInternational Publishing.",
449
+ "author": "Sato, H. (2021) Riemannian Optimization and Its Applications.",
450
+ "venue": null,
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "36": {
456
+ "title": "SIAM Journal on Optimization, 32, 2690\u20132717.",
457
+ "author": "\u2014 (2022) Riemannian conjugate gradient methods: General framework and\nspecific algorithms with convergence analyses.",
458
+ "venue": null,
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "37": {
464
+ "title": "Springer Series in Statistics. Springer New York.",
465
+ "author": "Schervish, M. (2012) Theory of Statistics.",
466
+ "venue": null,
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "38": {
472
+ "title": "Mathematics of Operations Research, 3, 244\u2013256.",
473
+ "author": "Shanno, D. F. (1978) Conjugate gradient methods with inexact searches.",
474
+ "venue": null,
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "39": {
480
+ "title": "In Proceedings of the 35th International Conference on\nMachine Learning (eds. J. Dy and A. Krause), vol. 80 of Proceedings\nof Machine Learning Research, 4713\u20134722. PMLR.",
481
+ "author": "Song, Y., Song, J. and Ermon, S. (2018) Accelerating natural gradient with\nhigher-order invariance.",
482
+ "venue": null,
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "40": {
488
+ "title": "In Thirty-seventh Conference on Neural Information Processing\nSystems.",
489
+ "author": "Titsias, M. (2023) Optimal preconditioning and Fisher adaptive Langevin\nsampling.",
490
+ "venue": null,
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "41": {
496
+ "title": "Statistics & Probability Letters, 91, 14\u201319.",
497
+ "author": "Xifara, T., Sherlock, C., Livingstone, S., Byrne, S. and Girolami, M. (2014)\nLangevin diffusions and the Metropolis-adjusted Langevin algorithm.",
498
+ "venue": null,
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "42": {
504
+ "title": "In Artificial Intelligence and Statistics : The\nTwenty-Seventh International Conference on Artificial Intelligence and\nStatistics (AISTATS). May 2-4, 2024, Valencia , Spain.",
505
+ "author": "Yu, H., Hartmann, M., Williams, B., Girolami, M. and Klami, A. (2024)\nRiemannian laplace approximation with the fisher metric.",
506
+ "venue": null,
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "43": {
512
+ "title": "Transactions on Machine Learning Research.",
513
+ "author": "Yu, H., Hartmann, M., Williams, B. and Klami, A. (2023) Scalable Stochastic\nGradient Riemannian Langevin Dynamics in Non-Diagonal Metrics.",
514
+ "venue": null,
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "44": {
520
+ "title": "Computational Optimization and Applications, 77,\n779\u2013810.",
521
+ "author": "Zhu, Xiaojing; Sato, H. (2020) Riemannian conjugate gradient methods with\ninverse retraction.",
522
+ "venue": null,
523
+ "url": null
524
+ }
525
+ }
526
+ ],
527
+ "url": "http://arxiv.org/html/2308.08305v2"
528
+ }
20240318/2308.13137v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2309.00464v2.json ADDED
@@ -0,0 +1,105 @@
1
+ {
2
+ "title": "A Theoretical and Practical Framework for Evaluating Uncertainty Calibration in Object Detection",
3
+ "abstract": "The proliferation of Deep Neural Networks has resulted in machine learning systems becoming increasingly more present in various real-world applications. Consequently, there is a growing demand for highly reliable models in many domains, making the problem of uncertainty calibration pivotal when considering the future of deep learning. This is especially true when considering object detection systems, that are commonly present in safety-critical applications such as autonomous driving, robotics and medical diagnosis. For this reason, this work presents a novel theoretical and practical framework to evaluate object detection systems in the context of uncertainty calibration. This encompasses a new comprehensive formulation of this concept through distinct formal definitions, and also three novel evaluation metrics derived from such theoretical foundation. The robustness of the proposed uncertainty calibration metrics is shown through a series of representative experiments.\n\nKeywords: Uncertainty Calibration, Object Detection, Reliability",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Deep Neural Networks (DNNs) have revolutionized the applicability of Machine Learning (ML) systems in real-world scenarios. Deep Learning (DL) models are now extensively used in critical domains such as medicine, transportation, remote sensing, and robotics, where the consequences of erroneous decisions can be severe. Consequently, it is vital for DNNs to provide reliable confidence scores that accurately quantify the true likelihood of their predictions, thus properly estimating their predictive uncertainty. For this reason, the problem of uncertainty calibration (also referred as confidence calibration [11 ###reference_b11###] or simply calibration [6 ###reference_b6###]) is becoming ubiquitous when developing DL models that are reliable and robust enough for real-world applicability.\nThe importance of uncertainty calibration of DNNs is illustrated by the growing body of scientific work developed in recent years regarding this subject [6 ###reference_b6###, 15 ###reference_b15###, 21 ###reference_b21###, 24 ###reference_b24###, 10 ###reference_b10###, 25 ###reference_b25###]. Nonetheless, most of these works are developed around the problem of uncertainty calibration in classification problems. Contrastingly, a significant number of DL safety-critical applications are related to object detection problems (e.g., autonomous driving, human-robot interaction, surveillance). We argue that one of the main reasons for a lack of scientific work regarding uncertainty calibration in object detection scenarios is due to the fact that there is no complete/proper theoretical and practical formulation regarding the understanding and evaluation of this problem. As such, this work aims at filling this gap, by proposing three uncertainty calibration evaluation metrics for object detection, based on a comprehensive theoretical framework (introduced in Section 3 ###reference_###). 
This framework focuses on semantic/label uncertainty (instead of spatial uncertainty, like other approaches to evaluate the probabilistic quality of detections [7 ###reference_b7###, 12 ###reference_b12###]), by leveraging Intersection Over Union (IoU) in a threshold-based evaluation, akin to the conventional Mean Average Precision (mAP). For this reason, our formulation for the problem of uncertainty calibration is consistent with the classical conception of an object detection problem. Additionally, and like in a standard evaluation framework for object detection, the metrics proposed in Section 4 ###reference_### can also have a stronger focus on localization performance, by averaging over different IoUs (e.g., from 0.5 to 0.95 in intervals of 0.05). \nThe key contributions of this work are twofold:\n1) A comprehensive theoretical formulation of the uncertainty calibration problem in object detection (Section 3 ###reference_###) that, to the best of our knowledge, is absent in relevant literature.\n2) Three novel uncertainty calibration metrics 111Code for the proposed uncertainty calibration metrics available in the Supplementary Material. specifically designed for the context of object detection evaluation (Section 4 ###reference_###), consistent with the mentioned theoretical formulation."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "The problem of uncertainty calibration is introduced to the DL community through the work presented in [6 ###reference_b6###], where the calibration quality of various modern DNNs is evaluated on classification problems, with distinct datasets, from computer vision and natural language processing domains. The authors argue that despite their increased accuracy, modern DNNs suffer from significant miscalibration issues, which surpass those observed in \u2018older\u2019 but less accurate architectures.\nThe evaluation of uncertainty calibration in classification problems often relies on the widely used Expected Calibration Error (ECE) [18 ###reference_b18###]. However, limitations of this metric have been acknowledged in relevant literature [19 ###reference_b19###, 25 ###reference_b25###]. On the other hand, the use of proper scoring rules [5 ###reference_b5###] like the Brier score [1 ###reference_b1###] has been an increasingly common practice in recent literature regarding uncertainty calibration for classification problems [21 ###reference_b21###, 24 ###reference_b24###, 15 ###reference_b15###].\nThe lack of evaluation metrics specifically designed to the problem of uncertainty calibration in object detection is addressed in [11 ###reference_b11###], that proposes the Detection Expected Calibration Error (D-ECE). Since then, this metric has been adopted as a \u201cgo-to\u201d evaluation metric in different works regarding uncertainty calibration in object detection [17 ###reference_b17###, 22 ###reference_b22###, 16 ###reference_b16###]. Comparably to the metrics proposed in this work, the D-ECE leverages the IoU for a threshold-based evaluation, thus being also focused on semantic uncertainty. 
Nonetheless, D-ECE is built on an incomplete formulation of the problem of uncertainty calibration in object detection, and therefore does not incorporate the effect of False Negative detections (see Section 4 ###reference_### for further details), which can be critical in safety-related applications.\nOther approaches to assess the reliability and probabilistic quality of object detector\u2019s predictions were proposed in recent years. The authors in [7 ###reference_b7###] propose Probability-based Detection Quality measure (PDQ), focusing on both label and spatial probabilistic quality; because of the focus on spatial quality, \u201cPDQ has been primarily developed to evaluate new types of probabilistic object detectors that are designed to quantify spatial and semantic uncertainties\u201d, which is not the case for most common state-of-the art object detectors like YOLO [23 ###reference_b23###], Fast R-CNN [4 ###reference_b4###] or SSD [14 ###reference_b14###]. The work in [12 ###reference_b12###] focuses specifically on spatial uncertainty and therefore proposes evaluating uncertainty calibration for object detection as a probabilistic regression task, by evaluating only object detectors with probabilistic regression output \u201cwhere a mean and a variance score is inferred for each bounding box quantity\u201d [8 ###reference_b8###]. Also, the authors in [20 ###reference_b20###] propose the Self Aware Object Detection (SAOD) task, a testing framework which includes uncertainty calibration evaluation; for this evaluation, Localization Aware ECE (LaECE) is introduced, through the inclusion of spatial evaluation into the existing D-ECE [11 ###reference_b11###], by multiplying precision with IoU; nonetheless, and similarly to the D-ECE, this metric is not affected by the existence of False Negatives."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Uncertainty Calibration for Object Detection",
21
+ "text": "In this section we formally introduce the concept of uncertainty calibration for the object detection domain. Since the proposed theoretical framework focuses on semantic uncertainty, the presented definitions are inspired in the concept of uncertainty calibration for classification problems [25 ###reference_b25###], though adapted to the particularities of the object detection\u2019s case. \nFor the subsequent definitions we will consider , the sample space of inputs, the sample space of bounding-box locations and associated classes. We can now consider the respective random variables, and , defined under those sample spaces. Let us note that a realization of can define an arbitrary number of locations (bounded by the total number of possible locations) and respective classes i.e., any realization of , for a problem with different classes, is a subset of , where . For this reason, we will denote as the abbreviation of i.e., the probability that some singular bounding-box location (with respective class) belongs to some realization of .\nBefore considering the concept of calibration in the context of object detection, we have to define the mathematical object for which such considerations are pertinent, i.e., a confidence-based detector. Let us first consider the function , that maps an input from the sample space to the set of all possible bounding-box locations in that specific input. We are now in condition to assert the following definition.\nLet us define a confidence-based detector (for a problem with classes) as a function that maps each input to a set of all possible bounding-box locations and correspondent confidence scores, for each respective class. 
Formally, we have that\nWe note that denotes a bounding-box detection of the form (Location, Class, Confidence score).\nAlthough in a practical scenario most object detection systems will not consider all possible bounding-box locations, from a theoretical point of view Definition 1 ###reference_i1### is a reasonable definition, because we can associate all disregarded locations with a confidence score equal to 0. As such, this definition is consistent with common state-of-the-art object detectors. \nLet us observe that, since most object detection systems incorporate suppression strategies (like Non Maximum Suppression), considering the concept of calibration for precise localization can be unreasonable in a practical scenario. Therefore, let us take as\n to designate the well-known IoU function, and define (for some threshold value ) as\nreferring to the maximum confidence value, for a given class, under the IoU threshold conditions. It is worth observing that, following the definition of a confidence-based detector (Definition 1 ###reference_i1###), the value of can be 0 (this is the case where there is no positive prediction that satisfies the condition ). We can now consider the following definition.\nWe say that a confidence-based detector is globally calibrated (for some threshold value ) iff\nBased on Definition 2 ###reference_i2### we will develop new evaluation metrics with the purpose of assessing the calibration of the predictive uncertainty of confidence-based detectors in a practical object detection scenario.\nDefinition 2 ###reference_i2### requires that, for every portion of the input, there is a spatial bounding box neighborhood (defined under IoU conditions) whose confidence value (that can be 0) correctly codifies the likelihood of the existence of a certain object.
On the other hand, a weaker (and therefore incomplete) formulation of the uncertainty calibration problem in object detection (similar to that used in [11 ###reference_b11###]) can also be defined, by considering only the probabilistic quality of the detections that have a positive confidence score (i.e., the detections that are actually returned by a model in a practical scenario). The latter is outlined below.\nWe say that a confidence-based detector is locally calibrated (for some threshold value ) iff\nWe care to note that such formulation does not take into account the existence of False Negatives."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Evaluation Metrics",
27
+ "text": "Note: For the remainder of this work we define bag (also called a multiset) as an extension of the notion of set, that can have repeated elements (i.e., different instances of the same element).\nEvaluating uncertainty calibration in a practical object detection setting encompasses some of the same challenges found when evaluating this problem in a classification scenario, regarding the nonexistence of ground-truth information for the true likelihood values (left side of both Equations (4 ###reference_###) and (5 ###reference_###)). For this reason, arises the need to develop practical evaluation metrics in relation to the previously outlined formal definitions. Therefore, considering the theoretical formulation presented in Definition 2 ###reference_i2###, we have developed uncertainty calibration evaluation metrics in the context of object detection.\nBefore introducing such metrics, some concepts have to be outlined, that are common in the context of object detection problems and will be necessary when evaluating uncertainty calibration. Let us take a confidence-based detector , a finite set of inputs , and the bag of respective ground truth locations and classes . In a setting of this type, and for some threshold value , we can define: , as the bag of confidence scores associated with True Positives i.e., detections that have a corresponding ground-truth (with an IoU equal or greater than ); as the bag of confidence scores associated with False Positives i.e., detections that do not have a corresponding ground-truth; and as the bag of confidence scores associated with False Negatives i.e., ground-truth bounding boxes with no corresponding detection. 
Although is a bag of identical (theoretically zero-valued) confidence scores, the number of such detections will be fundamental in the computation of the proposed uncertainty calibration metrics.\nIn the following subsections we propose three novel uncertainty calibration metrics based on Definition 2 ###reference_i2###. The first two metrics (Subsections 4.1 ###reference_### and 4.2 ###reference_###) are based on proper scoring rules [5 ###reference_b5###], while the third one (Subsection 4.3 ###reference_###) relies on bin-wise computations, similarly to the ECE [18 ###reference_b18###]."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Quadratic Global Calibration Score",
33
+ "text": "Inspired in the Brier score [1 ###reference_b1###] - a proper scoring rule widely used to evaluate uncertainty calibration in classification problems - we introduce in this section the Quadratic Global Calibration score (QGC), that leverages the same fundamental principle of its predecessor by computing the quadratic difference between a confidence score and its true response. \nWe start by considering a confidence-based detector , a finite set of inputs and the bag of respective ground truth locations and classes . In this context we can construct our bags , and . The QGC can now be computed as\nWe remind that denotes a confidence score. A lower value of QGC translates a better performance in terms of uncertainty calibration, reaching its optimal value at 0."
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Spherical Global Calibration Score",
39
+ "text": "The Spherical Global Calibration score (SGC) is inspired in a less common proper scoring rule, the Spherical score [5 ###reference_b5###]. Let us first denote\nA direct adaptation of the Spherical score would be formulated as\nSuch formulation reaches its optimal value at , while a higher value translates a better performance (similar to the original Spherical score). As such, we define the SGC as\nA lower value of SGC translates a better performance in terms of uncertainty calibration, reaching its optimal value at 0."
40
+ },
41
+ {
42
+ "section_id": "4.3",
43
+ "parent_section_id": "4",
44
+ "section_name": "Expected Global Calibration Error",
45
+ "text": "The Expected Global Calibration Error (EGCE) is an adaptation of the popular ECE [18 ###reference_b18###]. The ECE is widely used to evaluate uncertainty calibration in classification problems, and works based on the principle of computing the bin-wise difference between average confidence scores and average accuracy. Since the original ECE leverages the concept of \u201caccuracy\u201d - common in the evaluation of classification systems - the use of the EGCE will require an adaptation of this concept to the context of object detection. In fact, such challenge is also addressed in [11 ###reference_b11###] and is reflected on the development of the D-ECE. Therefore, EGCE will be based on the same principle of the D-ECE, but incorporating the necessary adaptations to address the previously discussed limitations of its counterpart.\nWe start by creating the sets of bins and , where each bin is a bag of confidence scores defined as\nfor . Additionally, we can now consider the set of bins where each bin is a bag defined as\n\nfor . For each bin, we define the average confidence and the precision per bin, respectively, as\nWe can now outline the definition of the D-ECE as\nNote: in fact, the definition given in [11 ###reference_b11###] is the average of (15 ###reference_###) i.e., divided by .\nBecause D-ECE evaluates only the calibration of the detections that have a positive confidence score (i.e., the bags and ), it can be considered a metric for local calibration (Definition 3 ###reference_i3###).\nThe EGCE will leverage the D-ECE\u2019s principle of contrasting precision and confidence. Nonetheless, contrarily to the latter, the EGCE will incorporate False Negative detections by considering them analogous to False Positives with a confidence of 1, and therefore being incorporated in the last bin. 
As such, let us start by considering\nWe can now finally define the EGCE as\nFollowing similar guidelines to those in [11 ###reference_b11###], we will only consider detections with a confidence value above a given threshold (in our case 0.1) when constructing the bags , and for calculating the D-ECE and EGCE (this avoids a bias towards the behaviour of low-confidence detections, which is common in this type of bin-wise metric). Both the D-ECE and the EGCE indicate better performance (in terms of uncertainty calibration) with lower values, reaching their optimal value at 0. The common number of bins for these types of metrics is between 10 and 20, thus, in our experiments, we will use 15 bins."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Experiments and Results",
51
+ "text": "Note: For readability purposes, we will subsequently use the acronyms for True Positive (TP), False Positive (FP) and False Negative (FN), with no confusion with the associated mathematical objects , and .\nThe experiments described hereafter have been done with YOLOv5 [9 ###reference_b9###] object detectors. When using the COCO dataset [13 ###reference_b13###], the pre-trained models provided by the YOLOv5 developers [9 ###reference_b9###] have been evaluated on the COCO validation set with 5000 images (since the official test set has no available ground-truth). When using the PASCAL VOC (2012) [3 ###reference_b3###] dataset, the models are trained for 100 epochs starting with random weights and standard YOLOv5 hyper-parameters; the available PASCAL VOC data is divided randomly into training and test sets, with a 70/30 split. A variety of representative experiments, designed to give a wide understanding on how the proposed metrics behave under various circumstances, have been performed and the results are compared to the existing D-ECE metric. Each subsection encapsulates one ore more pertinent scientific questions, as summarized below. \nCalibration vs. performance (Subsection 5.1 ###reference_###): How do the uncertainty calibration metrics behave when evaluating deep models with increasing mAP performance under distinct IoU threshold conditions. Sensitivity tests (Subsection 5.2 ###reference_###.): How do the metrics react to increasing proportions of specific types of detections (e.g., FNs, high-confidence FPs, low-confidence TPs) and what specific properties can be derived from their behaviour. 
Effects of distribution-shifts and calibration strategies (Subsection 5.3 ###reference_###): How does the introduction of distribution-shifts on the test set impact the evaluation done with the employed metrics; how do state-of-the-art strategies - that improve uncertainty calibration in classification problems - work in the context of these metrics, when adapted to object detection."
52
+ },
53
+ {
54
+ "section_id": "5.1",
55
+ "parent_section_id": "5",
56
+ "section_name": "Calibration vs. performance",
57
+ "text": "###figure_1### Figure 1 ###reference_### shows the performance of the different versions of the YOLOv5 object detector, namely Nano, Small, Medium, Large and Extra Large (respectively with 3.2, 12.6, 35.7, 76.8 and 140.7 million parameters), when evaluated using the proposed metrics (QGC, SGC, EGCE) and the D-ECE (for comparison) against the classical mAP evaluation. The values of the uncertainty calibration metrics are presented with absolute values (instead of the average) because, unlike what happens in classification problems (where is common to present the averaged values), distinct object detection models can output a varying number of detections. As such, averaging could create problems when comparing the performance of different models; e.g., a model that outputs a large number of low-confidence FP detections could appear to perform better in terms of uncertainty calibration than a model with a relatively lower number of FP detections, because of proportionality issues. Additionally, we care to note that it is expectable to achieve an absolute value of the D-ECE smaller than the QGC, SGC, and EGCE, because the D-ECE metric does not incorporate FN detections.\nFrom Fig. 1 ###reference_### we can infer some key observations. When considering the evaluation with an IoU threshold of 0.5 (Figures 1 ###reference_###.a, 1 ###reference_###.c), there is a relationship between the uncertainty calibration metrics and mAP, with the models showing better calibration results (i.e., lower scores) as the mAP performance improves; this relation is stronger with the proposed metrics (QGC, SGC, EGCE) than with the D-ECE. 
Considering the cases where we average the results from different IoU thresholds (Figures 1 ###reference_###.b, 1 ###reference_###.d), similar conclusions can be made for the proposed metrics but not for the D-ECE, where the latter is progressively aggravated with better performing models (Figure 1 ###reference_###.b), or shows an inconsistent relation to mAP evaluation (Figure 1 ###reference_###.d). A deeper look into the reason behind this phenomenon is taken in the Supplementary Material by analysing the evolution of the scores with increasing IoU threshold values, for the Nano and Extra Large versions of YOLOv5.\nThe main takeaway from the results reported in this subsection is that the three proposed uncertainty calibration metrics show a relation to mAP performance evaluation that is robust to different IoU conditions. Nonetheless, it is observable that strong improvements in performance do not translate proportionally to uncertainty calibration evaluation."
58
+ },
59
+ {
60
+ "section_id": "5.2",
61
+ "parent_section_id": "5",
62
+ "section_name": "Sensitivity tests",
63
+ "text": "###figure_2### Figure 2 ###reference_### portraits a series of results that evaluate how different types of detections influence the uncertainty calibration metrics (QGC, SGC, EGCE and D-ECE). As an example, in Figure 2 ###reference_###.b we gradually increase the number of FPs with confidence scores extracted from a Uniform distribution in the interval [0.8,1]; for instance, an increase of 60% means that it has been added a number of such detections equal to 60% of the original number of detections (i.e., ). Specifically, we have carried out these experiments with FNs (Figure 2 ###reference_###.a), low-confidence and high-confidence FPs (Figures 2 ###reference_###.b and 2 ###reference_###.d) and also low-confidence and high-confidence TPs (Figures 2 ###reference_###.c, 2 ###reference_###.e and 2 ###reference_###.f); further details can be found in the caption of Figure 2 ###reference_###. In these experiments the \u201cstarting point\u201d results are based on the COCO pre-trained YOLOv5 (Large). The results are presented from 5% until 100% increase, with a step of 5%. The values of the previously referred metrics are averaged (contrarily to when we were comparing different models) because in this situation we are actually interested in analysing how specific types of detections proportionally influence the uncertainty calibration metrics. The IoU threshold is set at 0.5.\nIt is important to observe that, in the context of uncertainty calibration, it is expectable that evaluation metrics penalise both overconfident and underconfident detections, specifically: overconfident FPs (Figure 2 ###reference_###.b), underconfident TPs (Figure 2 ###reference_###.c) and also FNs (Figure 2 ###reference_###.a). On the other hand, the metrics are expected to reward higher proportions of high-confidence TPs (Figs 2 ###reference_###.e and 2 ###reference_###.f), and even low-confidence FPs (Fig. 
2 ###reference_###.d).\nWe start by comparing the behaviour of the proper scoring rule-based metrics, QGC and SGC. The metrics behave similarly in most cases; specifically, in Figures 2 ###reference_###.e, 2 ###reference_###.d, 2 ###reference_###.f and 2 ###reference_###.a their behaviour is nearly identical. In Figures 2 ###reference_###.b and 2 ###reference_###.c, although still similar, we observe that the SGC is more sensitive (reflected by a stronger increase in value) than the QGC.\nWe can now compare the behaviour of the EGCE with the QGC and SGC. We start by observing that QGC and SGC behave symmetrically, meaning that adding high-confidence FPs produces the same negative effect as adding low-confidence TPs (comparing Figs 2 ###reference_###.b and 2 ###reference_###.c), just as adding high-confidence TPs produces the same positive effect as adding low-confidence FPs (comparing Figures 2 ###reference_###.e and 2 ###reference_###.d). In contrast, the EGCE penalizes high-confidence FPs more than low-confidence TPs, and rewards high-confidence TPs more than low-confidence FPs. This behaviour can be advantageous since it is more consistent with a general evaluation of an object detection model (which naturally favours TP detections over their FP counterparts).\nFinally, we compare the behaviour of the three uncertainty calibration metrics proposed in this work (QGC, SGC, EGCE) against the D-ECE. As previously mentioned, the D-ECE can be interpreted through our theoretical formulation as a local calibration metric (in contrast to the proposed global calibration metrics); therefore, as illustrated in Figure 2 ###reference_###.a, the D-ECE is insensitive to FNs, while the other metrics degrade significantly when exposed to increasing proportions of FNs. 
Furthermore, unlike the other metrics, the D-ECE still shows small increases in value when exposed to larger proportions of supposedly \u201cdesirable\u201d detections (i.e., high-confidence TPs and low-confidence FPs); however, in the edge case presented in Fig. 2 ###reference_###.f we witness a slight decrease in D-ECE.\nIn summary, the main conclusions are: unlike the D-ECE, the metrics QGC, SGC, and EGCE show robust sensitivity in the presence of FNs and behave as expected with increasing proportions of \u201cdesirable\u201d detections; the QGC and the SGC behave similarly, with the SGC being slightly more sensitive to increasing proportions of \u201cundesirable\u201d detections; the EGCE, unlike both the QGC and SGC, does not behave symmetrically towards TPs and FPs, and also shows higher sensitivity to increasing proportions of \u201cdesirable\u201d detections than the latter two metrics."
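The qualitative effect of injecting synthetic detections can be reproduced in a few lines. The sketch below is illustrative only: it uses a generic binned calibration error over detections (per-bin gap between mean confidence and precision), not the paper's exact QGC/SGC/EGCE or D-ECE definitions, and the simulated detections are hypothetical.

```python
import random

def binned_calibration_error(confidences, correct, n_bins=10):
    """Generic binned calibration error: per-bin |mean confidence - precision|,
    weighted by the fraction of detections falling in each bin."""
    bins = [[] for _ in range(n_bins)]
    for c, ok in zip(confidences, correct):
        bins[min(int(c * n_bins), n_bins - 1)].append((c, ok))
    n = len(confidences)
    error = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            precision = sum(1 for _, ok in b if ok) / len(b)
            error += len(b) / n * abs(avg_conf - precision)
    return error

random.seed(0)
# Roughly calibrated starting point: P(TP) equals the confidence score.
conf = [random.random() for _ in range(1000)]
hit = [random.random() < c for c in conf]
base = binned_calibration_error(conf, hit)

# Add 60% overconfident FPs with confidences drawn from U[0.8, 1].
extra = [random.uniform(0.8, 1.0) for _ in range(600)]
degraded = binned_calibration_error(conf + extra, hit + [False] * 600)
assert degraded > base  # overconfident FPs worsen calibration, as in Fig. 2.b
```

FNs, by contrast, would be modelled as missed ground truths with no associated confidence, which is exactly the case a purely local, confidence-binned metric never sees.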
64
+ },
65
+ {
66
+ "section_id": "5.3",
67
+ "parent_section_id": "5",
68
+ "section_name": "Effects of distribution-shifts and calibration strategies",
69
+ "text": "###figure_3### ###figure_4### We start by evaluating the effect that distribution-shifts have in terms of uncertainty calibration, as quantified by the proposed metrics and the D-ECE (results shown in Fig. 3 ###reference_###). These type of shifts induced on the test data have shown to induce negative effects in the uncertainty calibration of DNNs in classification scenarios [21 ###reference_b21###]. For the purpose of these experiments, and similarly to what is done in [21 ###reference_b21###], the distribution-shifts are artificially induced with 5 different degrees of intensity (see details in Supplementary Material). \nRegarding the results, similar conclusions can be derived from both Figures 3 ###reference_###.a and 3 ###reference_###.b. First, all three proposed uncertainty calibration metrics demonstrate a consistent increase as the intensity of the shift rises; this is more evident on the PASCAL VOC dataset (Figure 3 ###reference_###.b) where the scores almost double in value. This type of consistent increase is in line with what is observed with classical uncertainty calibration metrics used in classification problems [21 ###reference_b21###]. On the other hand, when looking at the D-ECE, the behaviour of this metric is inconsistent to what is expected, with small increases in low degrees of shift intensity, followed by a decrease in the score as the intensity of the shift is aggravated.\nThe latter portion of this subsection is focused on a concise examination of how calibration strategies impact the uncertainty calibration metrics employed earlier. Histogram binning [26 ###reference_b26###, 11 ###reference_b11###] (code in Supplementary Material) and test time augmentation (TTA) are the techniques chosen for these experiments (details on these techniques in Supplementary Material). 
The rationale behind this choice lies in a fundamental difference between these two techniques: while histogram binning only acts on the confidence value of existing detections (therefore only addressing local calibration), TTA can also decrease the rate of FNs besides altering existing confidence values (possibly addressing global calibration in the process). Although not a common strategy, some evidence of the positive effects of TTA-based strategies on uncertainty calibration has already been reported [2 ###reference_b2###].\nFor the discussion of the results, we start by analysing Figure 4 ###reference_###.a, where the experiments are made on the in-distribution (i.e., with no distribution-shift) PASCAL VOC test set. As expected, histogram binning is not capable of improving the global calibration metrics (QGC, SGC, EGCE) - improving only the D-ECE - while TTA induces small decreases in those metrics (but not in the D-ECE). Regarding the results in Figure 4 ###reference_###.b, where the experiments are made with a shifted test set, we start by observing that histogram binning is not capable of improving any uncertainty calibration metric; this is somewhat expected given that this technique relies on the distribution of its training data, which, in this case, differs from that of the test set. TTA shows relatively good improvements in terms of QGC, SGC and EGCE, and a small improvement in terms of D-ECE.\nThe main observations are: unlike the D-ECE, the three proposed metrics show behaviour under distribution-shifts that is consistent with what has already been observed in classification problems; there is evidence to suggest that typical post-hoc calibration methods, which only alter the confidence value of existing detections, may not be sufficient in the context of global calibration evaluation in object detection scenarios."
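Histogram binning itself is only a few lines. The following is a minimal sketch (not the Supplementary Material code; function names are illustrative): it learns a per-bin confidence remapping from a calibration set and, by construction, can only rescale the confidences of existing detections - it cannot recover FNs, which is why it addresses only local calibration.

```python
def fit_histogram_binning(confidences, correct, n_bins=10):
    """Learn a recalibration map: each bin's new confidence is the empirical
    precision of the calibration-set detections falling in that bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for c, ok in zip(confidences, correct):
        i = min(int(c * n_bins), n_bins - 1)
        sums[i] += ok
        counts[i] += 1
    # Empty bins fall back to the bin midpoint.
    return [sums[i] / counts[i] if counts[i] else (i + 0.5) / n_bins
            for i in range(n_bins)]

def apply_histogram_binning(bin_values, confidence, n_bins=10):
    """Replace a detection's confidence by its bin's calibrated value."""
    return bin_values[min(int(confidence * n_bins), n_bins - 1)]

# Calibration set: detections around confidence 0.9 that are TPs only 60% of the time.
cal_conf = [0.9] * 10
cal_ok = [True] * 6 + [False] * 4
bins = fit_histogram_binning(cal_conf, cal_ok)
assert abs(apply_histogram_binning(bins, 0.92) - 0.6) < 1e-9
```

Because the learned map is tied to the calibration set's distribution, it transfers poorly to a shifted test set, matching the behaviour observed in Figure 4.b.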
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Final Remarks",
75
+ "text": "This article introduces a comprehensive theoretical and practical framework for assessing uncertainty calibration in object detection. The conceptual distinction between global and local calibration, outlined in this work, proved to be not only useful as theoretical foundation for the development of new evaluation metrics, but also for understanding the underlying fundamental differences between these metrics and the existing D-ECE.\nThe evaluation metrics proposed in the context of our framework are successfully used to evaluate the calibration of uncertainty estimates in various object detection scenarios. From these experiments, some interesting concluding remarks can be derived regarding some of the intrinsic properties of the proposed metrics that, in contrast, were not found in D-ECE: 1) they show consistent relation to mAP evaluation under different IoU threshold conditions; 2) they show robust sensitivity to varying proportions of representative types of detections; 3) their response under distribution-shifts is in line with what has been observed in classification scenarios.\nSince it was outside the main scope of the article, this work has some limitations regarding the experimentation with different calibration strategies. Nonetheless, the presented evidence seems to suggest that, under a complete formulation for the problem of uncertainty calibration, there is a need to re-think the way calibration techniques are developed and applied, specially when considering post-hoc strategies that only act on the confidence values of existing detections.\nOn a final note, since the QGC and the SGC have a fairly similar behaviour as uncertainty calibration metrics, we suggest that the application of one of these metrics - paired with the EGCE - is sufficient for assessing the global calibration of object detection systems."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {
81
+ "1": {
82
+ "figure_path": "2309.00464v2_figure_1.png",
83
+ "caption": "Figure 1: Evaluating QGC, SGC, EGCE and D-ECE - against mAP - using YOLOv5 models (Nano, Small, Medium, Large and Extra Large). mAP increases proportionally to the model capacity shown from left to right (highlighted as grey-text legend in (a)). The results were obtained: on the COCO dataset with a) IoU threshold of 0.5, b) averaging the results with IoU threshold values between 0.5 and 0.95, with a step of 0.05; on the PASCAL VOC dataset with c) IoU threshold of 0.5; d) averaging the results with IoU threshold values between 0.5 and 0.95, with a step of 0.05.",
84
+ "url": "http://arxiv.org/html/2309.00464v2/x1.png"
85
+ },
86
+ "2": {
87
+ "figure_path": "2309.00464v2_figure_2.png",
88
+ "caption": "Figure 2: Evaluating QGC, SGC, EGCE and D-ECE for increasing proportions of: a) FN detections; b) FP detections with confidence scores extracted from the Uniform distribution U\u2062[0.8,1]\ud835\udc480.81U[0.8,1]italic_U [ 0.8 , 1 ]; c) TP detections with confidence scores extracted from U\u2062[0,0.2]\ud835\udc4800.2U[0,0.2]italic_U [ 0 , 0.2 ]; d) FP detections with confidence scores extracted from U\u2062[0,0.2]\ud835\udc4800.2U[0,0.2]italic_U [ 0 , 0.2 ]; e) TP detections with confidence scores extracted from U\u2062[0.8,1]\ud835\udc480.81U[0.8,1]italic_U [ 0.8 , 1 ]; f) TP detections with confidence scores extracted from U\u2062[0.98,1]\ud835\udc480.981U[0.98,1]italic_U [ 0.98 , 1 ].",
89
+ "url": "http://arxiv.org/html/2309.00464v2/x2.png"
90
+ },
91
+ "3": {
92
+ "figure_path": "2309.00464v2_figure_3.png",
93
+ "caption": "Figure 3: Evaluating QGC, SGC, EGCE and D-ECE, with increasing intensity of shifts in the distribution of the test data, using Yolov5 (Small) with a) the COCO dataset and b) the PASCAL VOC dataset.",
94
+ "url": "http://arxiv.org/html/2309.00464v2/x3.png"
95
+ },
96
+ "4": {
97
+ "figure_path": "2309.00464v2_figure_4.png",
98
+ "caption": "Figure 4: Evaluating QGC, SGC, EGCE and D-ECE, after applying histogram binning (H.B.) and TTA, compared to a vanilla (V.) approach - i.e.with no calibration strategy - using Yolov5 (Small) in the PASCAL VOC test set a) with no distribution-shift and b) with level 5 distribution-shift.",
99
+ "url": "http://arxiv.org/html/2309.00464v2/x4.png"
100
+ }
101
+ },
102
+ "validation": true,
103
+ "references": [],
104
+ "url": "http://arxiv.org/html/2309.00464v2"
105
+ }
20240318/2309.08249v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2309.10668v2.json ADDED
@@ -0,0 +1,825 @@
1
+ {
2
+ "title": "Language Modeling Is Compression",
3
+ "abstract": "It has long been established that predictive models can be transformed into lossless compressors and vice versa.\nIncidentally, in recent years, the machine learning community has focused on training increasingly large and powerful self-supervised (language) models.\nSince these large language models exhibit impressive predictive capabilities, they are well-positioned to be strong compressors.\nIn this work, we advocate for viewing the prediction problem through the lens of compression and evaluate the compression capabilities of large (foundation) models.\nWe show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning. For example, Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. Finally, we show that the prediction-compression equivalence allows us to use any compressor (like gzip) to build a conditional generative model.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Information theory and machine learning are inextricably linked and have even been referred to as \u201ctwo sides of the same coin\u201d (MacKay, 2003 ###reference_b41###).\nOne particularly elegant connection is the essential equivalence between probabilistic models of data and lossless compression.\nThe source coding theorem (Shannon, 1948 ###reference_b66###) is the fundamental theorem describing this idea, i.e., the expected message length in bits of an optimal entropy encoder is equal to the negative -likelihood of the statistical model.\nIn other words, maximizing the -likelihood (of the data) is equivalent to minimizing the number of bits required per message.\nIndeed, lossless compression with a probabilistic model can be achieved in a variety of different ways, including Huffman coding (Huffman, 1952 ###reference_b26###), arithmetic coding (Pasco, 1977 ###reference_b51###; Rissanen, 1976 ###reference_b58###), and asymmetric numeral systems (Duda, 2009 ###reference_b16###).\nArithmetic coding, in particular, is known to be optimal in terms of coding length, meaning that the overall compression performance depends on the capabilities of the probabilistic model (see Fig. 
1 ###reference_### for an overview of arithmetic coding).\nIncidentally, in recent years, large pre-trained Transformers (Vaswani et al., 2017 ###reference_b76###), so-called foundation models (Bommasani et al., 2021 ###reference_b5###), have proven to be highly successful across a wide range of predictive tasks (Bubeck et al., 2023 ###reference_b8###; Rae et al., 2021 ###reference_b55###) and are thus promising candidates for use with arithmetic coding.\nIndeed, Transformer-based compression with arithmetic coding has produced state-of-the-art results both in the online (Bellard, 2021 ###reference_b3###; Mao et al., 2022 ###reference_b43###) and offline settings (Valmeekam et al., 2023 ###reference_b74###).\nIn the online setting, a pseudo-randomly initialized model is directly trained on the stream of data that is to be compressed, while the offline setting, which we consider in our work, trains the model on an external dataset before employing it to compress a (potentially different) data stream.\nConsequently, offline compression is performed in-context, with a fixed set of model parameters.\nTransformers have demonstrated impressive in-context learning abilities (Laskin et al., 2023 ###reference_b38###; Brown et al., 2020 ###reference_b7###; Wei et al., 2022 ###reference_b78###; Genewein et al., 2023 ###reference_b19###) and are thus ideally suited for offline compression.\nThe context length is a key limiting factor in offline compression, as it dictates the maximum number of bytes a model can compress at a time.\nTransformers can only compress a few kilobytes (each \u201ctoken\u201d being coded with 2 or 3 bytes), while requiring a lot of compute.\nCorrespondingly, many challenging predictive tasks (e.g., algorithmic reasoning or long-term memory) require long contexts (Del\u00e9tang et al., 2023 ###reference_b14###), and thus extending these models\u2019 context lengths is a key challenge which is gaining increased attention (Zaheer et al., 2020 
###reference_b82###; Guo et al., 2022 ###reference_b22###; Bulatov et al., 2023 ###reference_b9###).\nThe in-context compression view provides insights into the failure modes of current foundation models."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background",
15
+ "text": "In this section, we review the necessary background on information theory and its relation to likelihood maximization.\nTo that end, we consider streams of data of length from a finite set of symbols .\nWe write for and denote the empty string as .\nFinally, we denote the concatenation of two strings and by ."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Experimental Evaluation",
21
+ "text": "Here, we evaluate foundation models\u2019 (in-context) compression capabilities (details in Appendix B ###reference_### and\ncode at https://github.com/google-deepmind/language_modeling_is_compression ###reference_e_modeling_is_compression###)."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Datasets",
27
+ "text": "We consider datasets of three different modalities, text, image, and audio, which have (a priori) very different biases for compression and thus provide a good testbed for evaluating a compressor\u2019s general capabilities.\nTo render the results comparable across modalities, all our datasets are 1GB.\nA key question is how to reconcile the different context lengths of the compressors we consider.\nTransformers are restricted to short contexts (2048 \u201ctokens\u201d, coded over 1 byte for our trained transformers, and 4 bytes for the pretrained models), while gzip uses a maximum context of 32 kilobytes, and LZMA2 has a virtually \u201cinfinite\u201d context length.\nHaving a longer context allows a compressor to exploit more sequential dependencies to achieve a better compression rate.\nFor compressors with finite contexts, there are two approaches to compress sequences that are longer than the context length: (i) slide the compressor byte by byte, thus always processing a history of the previous bytes when compressing a new byte, and (ii) chunk the data stream into sequences of bytes and evaluate the in-context compression (without any history) averaged across batches.\nFor Transformers, we consider the latter approach since sliding would increase their (already very long) running time by a factor of .\nTherefore, we chunk all datasets into sequences of bytes and feed them to the compressors one-by-one.\nHowever, since classical compressors usually include a header in their compressed output, which can be larger than the compressed data in some cases, we only count it once for all batches.\nMoreover, since chunking deteriorates the performance of classical compressors, which have context lengths , we also report their compression rates on the unchunked datasets.\nWe consider the following datasets:\nThe enwik9 dataset (Hutter, 2006 ###reference_b28###) consists of the first (1 billion) bytes of the English Wikipedia XML dump on March 3rd, 2006 and is 
typically used to measure a model\u2019s ability to compress data.\nIt is an extension of the enwik8 dataset that only contains the first 100 million bytes.\nWe train our vanilla Transformer models on enwik8, but evaluate on both enwik8 and enwik9 (to evaluate the out-of-distribution compression performance).\nWhile enwik8 is included in enwik9, it only represents the first 10% and thus still constitutes a significant distribution shift.\nThe ImageNet dataset (Russakovsky et al., 2015 ###reference_b60###) contains annotated images from the WordNet hierarchy.\nSince 2010, the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nWe extract contiguous patches of size from all images, flatten them, convert them to grayscale (so that each byte represents exactly one pixel) to obtain samples of 2048 bytes.\nWe then concatenate of these patches, following the original dataset order, to create a dataset of 1 GB.\nLibriSpeech (Panayotov et al., 2015 ###reference_b50###) contains roughly hours of 16kHz English speech data derived from audiobooks of the LibriVox project that has been segmented and aligned.\nWe chunk the samples into 2048 bytes and gather such chunks into a dataset of size 1 GB."
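The chunked evaluation described above can be sketched directly with a classical compressor. This is a simplified illustration: unlike the paper's accounting, the gzip header is counted in every chunk here, so it slightly overstates the chunking penalty.

```python
import gzip

def chunked_compression_rate(data: bytes, chunk_size: int = 2048) -> float:
    """Compression rate (compressed/raw) when the stream is split into
    independently compressed chunks, as done for fixed-context models."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    compressed = sum(len(gzip.compress(c)) for c in chunks)
    return compressed / len(data)

data = b"the quick brown fox jumps over the lazy dog. " * 2000
# Chunking discards all cross-chunk context, so the rate is worse (higher)
# than compressing the whole stream in one pass.
assert chunked_compression_rate(data) > len(gzip.compress(data)) / len(data)
```

On highly repetitive data like the toy stream above, the gap is large; on less redundant data the chunking penalty shrinks but does not vanish.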
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Comparing Compression Rates",
33
+ "text": "Table 1 ###reference_### shows the compression rates for all compressors and datasets.\nWe show both the raw compression rate, which does not take the model size (in bytes) into account, as well as the adjusted rate, which does.\nThe size of the Python program for classical compressors is very small (a few kilobytes at most) and thus barely affects the compression rate.\nIn contrast, language models suffer a huge loss in compression rate due to their large size, which cannot be offset when compressing only 1GB of data.\nWe encode each neural network parameter with 2 bytes, using a float16 representation since quantizing weights to this level does not significantly affect performance (Tao et al., 2022 ###reference_b69###) and is standard for model inference.\nNote that further compressing the float16 parameters using classical compressors does not significantly reduce their size (we obtained rates of 92.2% and 89.1% on a 38M parameter Transformer with gzip and LZMA2, respectively).\nWe only consider the offline setting, which computes the adjusted compression rate using a two-part code (i.e., it adds the model size to the -loss of the data).\nIn contrast, prequential (online) coding would provide an alternative view on adjusted compression by computing the adjusted compression rate as the -loss plus the size of the training script (not the model parameters).\nPrequential coding leads to better compression with overparametrized neural networks (Blier & Ollivier, 2018 ###reference_b4###), but it requires training the model online both during encoding and decoding (which is very costly for our models).\nA lossless compressor induces an injective function over bit sequences, meaning that we cannot compress all sequences equally well (by the pigeonhole principle).\nConsequently, in practice, compressors are often tailored to a particular setting, e.g., FLAC for audio or PNG for images, and thus fail to compress other data modalities well (see Table 1 
###reference_###).\nIn contrast, general-purpose compressors, such as gzip, offer good performance on a wide range of data sources.\nSurprisingly, large language models, while trained primarily on text, also appear to be general-purpose compressors, as they outperform all other compressors, even on image and audio data (see Table 1 ###reference_###).\nNote that these models have not been trained on this kind of data: for Chinchilla, Appendix A. of Hoffmann et al. (2022 ###reference_b23###) states that the training dataset consists of a mix of internet text data (Wikipedia, websites, github) and books.\nHowever, it is still possible (but unlikely) that some images or audio samples were encoded into text on some websites.\nThus, pretrained models achieve their impressive compression performance by conditioning a (meta-)trained model to a particular task at hand via in-context learning (Genewein et al., 2023 ###reference_b19###).\nIn contrast, smaller Transformers, trained manually on enwik8, only achieve good compression rates on similar Wikipedia data, i.e., enwik9.\nHowever, larger models\u2019 stronger in-context compression (or in-context learning) comes at a price: the number of parameters, which has to be offset with increasingly large data sources when computing the adjusted compression rate (see Section 3.3 ###reference_###).\n###figure_1###"
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Optimal Model-Dataset Size Tradeoff",
39
+ "text": "As shown in Table 1 ###reference_###, foundation models incur a huge cost in compression rates when accounting for their size, which is in the order of hundreds of GBs for billions of parameters.\nIn theory, if the dataset is infinite, we can ignore the model\u2019s size since it is insignificant compared to the size of the dataset.\nHowever, in practice, a foundation model can only achieve non-trivial (adjusted) compression rates when evaluated on datasets in the order of TBs (or more).\nSince this is infeasible under reasonable hardware constraints, we instead investigate the optimal model size with smaller Transformers that we train on enwik8.\nRecall that the model size (in bytes) is twice the number of (float16) parameters.\nFig. 2 ###reference_### visualizes the adjusted compression rate for vanilla Transformers of different sizes for enwik.\nWe observe that larger models achieve better compression rates on larger datasets, justifying recent trends in model scaling (Kaplan et al., 2020 ###reference_b31###).\nHowever, they achieve worse rates on smaller datasets, indicating that scaling laws are, in fact, dependent on the size of the test set.\nThat is, for each dataset, the model sizes reach a critical point, after which the adjusted compression rate starts to increase again as the number of parameters overweighs the size of the dataset.\nNote that we evaluate offline compression, i.e., we do not necessarily compress the data the model was trained on, meaning that the results on enwik7 and enwik8 are in-distribution, while enwik9 is (partially) out-of-distribution."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "Compressors as Generative Models",
45
+ "text": "In Section 3.2 ###reference_### we showed that any predictor can be employed as a compressor.\nHere, following Section 2 ###reference_###, we empirically demonstrate the opposite direction, i.e., that compressors can be used as a sequence prediction model, establishing our main claim that \u201clanguage modeling is compression\u201d.\nWe compute the length of the compressed sequence for all possible to get the probabilities .\nThis can straightforwardly be extended to sampling a whole continuation autoregressively by appending the last output to the sequence and iterating.\nTheoretically, there is no strong guarantee that a good compression rate leads to \u201cgood\u201d autoregressive samples.\nHowever, empirically it has been shown that better sequence prediction (i.e., lower -loss) often leads to better generation (Rae et al., 2021 ###reference_b55###; Brown et al., 2020 ###reference_b7###).\nNevertheless, in autoregressive sampling small errors often accumulate, which can lead to samples that diverge from the ground-truth distribution.\nAlso, this standard sampling technique only looks one step into the future, and can be biased: gzip, for instance, builds an internal dictionary of \u2019tokens\u2019, which will be compressed using their indexes. Extending the sequence with one of these tokens will lead to a good compression rate, but will be omitted as it can be longer than one byte. Our neural models do not suffer such bias as they are trained to predict one step ahead with the cross-entropy loss.\nWe compare the generative capabilities of gzip and Chinchilla 70B on images in Fig. 3 ###reference_###.\nEach image is a sampled from ImageNet with height 290 and width 500.\nFor each row in the image, we condition the model on the first 250 pixels and autoregressively generate the remaining 250 pixels, treating different rows as independent of each other (an oversimplification w.r.t. 
natural image statistics).\nWe use the same byte conversions and tokenization details as explained in Appendix B ###reference_###.\nChinchilla 70B shows signatures of visually appropriate continuations (judged qualitatively), which tend to degrade with increased sample length as more and more error accumulates.\ngzip produces much noisier completions.\nWe compare the generative performance of gzip and Chinchilla (1B, 7B, and 70B) across all three data modalities in Figs. C.1 ###reference_###, C.2 ###reference_### and C.3 ###reference_### for text, image, and audio data, respectively.\n###figure_2### ###figure_3### ###figure_4###"
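The compressor-to-predictor direction can be sketched directly with zlib: score each candidate next byte by the coding length of the extended sequence and renormalize 2^-length into a distribution. This is an illustrative sketch; byte-granular coding lengths make it coarse (many candidates tie), which is one source of gzip's noisy samples.

```python
import zlib

def next_byte_distribution(context: bytes) -> dict:
    """Turn a compressor into a predictor: a shorter compressed length for
    context + b means a higher probability for byte b (weight 2**-length,
    renormalized over all 256 byte values)."""
    lengths = {b: len(zlib.compress(context + bytes([b]), 9)) for b in range(256)}
    weights = {b: 2.0 ** -l for b, l in lengths.items()}
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

context = b"ab" * 10  # ends in 'b'; continuing the pattern suggests 'a'
dist = next_byte_distribution(context)
assert abs(sum(dist.values()) - 1.0) < 1e-9
# 'a' extends the repeated match, so it compresses at least as well as any
# candidate and therefore receives at least the uniform 1/256 probability mass.
assert dist[ord("a")] >= 1.0 / 256 - 1e-12
```

Sampling from this distribution and appending the result gives the autoregressive generation loop described above.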
46
+ },
47
+ {
48
+ "section_id": "3.5",
49
+ "parent_section_id": "3",
50
+ "section_name": "Sequential Evolution of In-Context Compression",
51
+ "text": "Language models take a very different \u201capproach\u201d to compression compared to classical compressors.\nClassical compressors have a small program size and optimize for a large context length to exploit sequential dependencies in the data.\nIn contrast, foundation models consist of billions of parameters, which enable rapid adaptation in their (relatively) short context window (Genewein et al., 2023 ###reference_b19###).\nThus, arithmetic coding-based compressors rely heavily on the predictive models\u2019 in-context learning capabilities to achieve competitive compression performance.\nWe investigate this phenomenon in Fig. 4 ###reference_###, which visualizes the compression rate across sequence lengths for gzip, Chinchilla 1B and a Transformer pretrained on enwik8.\nIntuitively, the longer the sequence, the more data the model can process in its context, and therefore, the better the compression.\nAs expected, most compression rates decrease quickly with increasing sequence length, indicating that the models learn some data statistics in-context, without any gradient-based training.\nAs in Table 1 ###reference_###, the Chinchilla model achieves the best compression rates across all three data modalities and sequence lengths.\n###figure_5### ###figure_6### ###figure_7###"
52
+ },
53
+ {
54
+ "section_id": "3.6",
55
+ "parent_section_id": "3",
56
+ "section_name": "Tokenization Is Compression",
57
+ "text": "Transformers are generally not trained on raw input data but on tokenized versions thereof, both for efficiency and performance reasons.\nAs a consequence, Transformers are trained on compressed data, with tokenizers acting as the compressor.\nSince tokenization is known to have an impact on the generalization performance (Radford et al., 2019 ###reference_b54###), we investigate its impact on the compression rate in Table 2 ###reference_###.\nConcretely, we train Transformers on enwik8 using different tokenizers: ASCII, i.e., an alphabet of size 256 (no tokenization), and byte-pair encoding trained on enwik8, with various vocabulary sizes (1K, 2K, 5K, 10K, and 20K tokens).\nNote that the tokenizations are lossless.\nIncreasing the number of tokens (i.e., the \u201calphabet size\u201d) reduces the length of the sequence and thus increases the amount of information in a models context.\nHowever, decreasing the sequence length comes at a price: the number of tokens is larger, which makes the prediction task more challenging since reducing the entropy of the conditional distribution is increasingly difficult for larger alphabet size.\nIn theory, as the tokenization is a lossless compression, the two effects should compensate.\nIn practice, we observe that if the model is small, increasing the number of possible tokens boosts the compression performance.\nIn contrast, for bigger models, it seems that the converse happens: having a larger token vocabulary harms the final compression rate of the model.\nNevertheless, short sequence lengths also help Transformers since their time complexity scales quadratically with context length, and it has been shown they do not generalize well to long contexts (Del\u00e9tang et al., 2023 ###reference_b14###; Ruoss et al., 2023 ###reference_b59###)."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Related work",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion",
69
+ "text": "In this paper, we investigated how and why sequence modeling is equivalent to compression.\nArithmetic coding transforms a sequence model into a compressor, and, conversely, a compressor can be transformed into a predictor by using its coding lengths to construct probability distributions following Shannon\u2019s entropy principle.\nWe evaluated large language models as compressors against various standard compressors and showed that they are competitive not only on text but also on modalities they have never been trained on (image and audio data).\nWe also showed that the compression viewpoint provides novel insights into scaling laws, since it takes the model size into account, unlike the log-loss objective, which is standard in current language modeling research.\nConsequently, we showed that the optimal model size is inextricably linked to the dataset size and cannot be scaled without limit."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix 1",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix A Arithmetic Coding",
77
+ "text": "Here we provide a step-by-step explanation of the arithmetic encoding example visualized in Fig. 1 ###reference_###.\nRecall from Section 2 ###reference_### that arithmetic encoding iteratively partitions the interval [0, 1) according to a predictive model and the input string, i.e., \u2018AIXI\u2019 for Fig. 1 ###reference_###.\nFirst, we construct the sub-intervals of [0, 1) for the first token according to the model\u2019s probabilities, e.g., [0, 0.45) for P(A).\nSince the first token is \u2018A\u2019, we select its sub-interval, set I = [0, 0.45), and iterate, partitioning I according to the conditional probabilities of the second token, e.g., I = [0.09, 0.36) for P(I|A).\nSince the next token is \u2018I\u2019, we set I = [0.09, 0.36), and so on, until the final token yields the terminal interval I.\nNext, arithmetic coding computes the binary sequence corresponding to iteratively splitting the interval [0, 1) in half until the resulting binary interval is fully contained in I.\nAs the final binary interval is fully contained in I, the compressed output is \u2018b0101010\u2019, which consists of 7 bits as opposed to the 4 bytes used to encode \u2018AIXI\u2019."
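The interval refinement and bit emission described above can be sketched in code. This is a minimal illustrative implementation (the function name and the fair-coin example distribution are my own, not the model of Fig. 1, whose conditional probabilities are only partially given here):

```python
from math import ceil

def arithmetic_encode(seq, cond_probs):
    """Encode seq given per-step conditional distributions.
    cond_probs[t] maps symbol -> probability (sums to 1).
    Returns the shortest bit string whose dyadic interval is
    fully contained in the final model interval."""
    lo, hi = 0.0, 1.0
    for t, sym in enumerate(seq):
        width, cum = hi - lo, 0.0
        for s in sorted(cond_probs[t]):  # fixed symbol order
            if s == sym:
                lo, hi = lo + width * cum, lo + width * (cum + cond_probs[t][s])
                break
            cum += cond_probs[t][s]
    n = 0
    while True:  # halve [0, 1) until a dyadic interval fits in [lo, hi)
        n += 1
        k = ceil(lo * 2 ** n)
        if (k + 1) / 2 ** n <= hi:
            return format(k, f"0{n}b")

# Fair coin over {'A', 'B'}: encoding 'AB' refines [0, 1) to [0.25, 0.5),
# and the shortest contained dyadic interval gives the code '01'.
print(arithmetic_encode("AB", [{"A": 0.5, "B": 0.5}] * 2))
```

With the model of Fig. 1, the same procedure applied to 'AIXI' yields the 7-bit code b0101010.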
78
+ },
79
+ {
80
+ "section_id": "Appendix 2",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix B Experimental Details",
83
+ "text": "As described in the last subsection, the data fed to the large language models we use (Chinchilla and Llama 2) is an ASCII string of exactly 2048 characters. However, the models immediately tokenize the string using SentencePiece (Kudo & Richardson, 2018 ###reference_b37###). The string is transformed into a sequence of integer tokens between 0 and , being the vocabulary size (they both use ). Note that the length of the sequence has now completely changed and depends on the input: tokenization is already a form of lossless compression. This sequence is fed into the large pretrained Transformer model, which gives us the conditionals for all histories and tokens in the alphabet . Denoting the length of the sequence after tokenization as , we obtain log-probabilities. We can pass them to an arithmetic encoder of vocabulary size to encode the sequence into bits. This is our final compressed sequence, whose size in bytes is compared with the initial size, i.e., 2048 bytes.\nIn practice, the large models only had access to the top-k next-token log-probabilities for each context. We chose , which almost fully recovers the conditional distribution. Arithmetic coding can still be applied as the alphabet size is allowed to change while coding: what matters is that the conditional probabilities in each step sum to 1. Accordingly, we renormalize the top-k log-probabilities.\nThe Transformer models we trained specifically on enwik8 do not use any tokenization, except in Section 3.6 ###reference_###. The reasoning above also holds, except that our models returned the full distribution over tokens, and not only the top-k."
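The renormalization of the top-k log-probabilities can be sketched as follows (the `renormalize_topk` name and the toy log-probabilities are illustrative assumptions; the resulting distribution is what gets passed to the arithmetic encoder):

```python
import math

def renormalize_topk(logprobs: dict) -> dict:
    """Turn top-k next-token log-probabilities into a proper
    distribution so that arithmetic coding can be applied."""
    m = max(logprobs.values())  # subtract max for numerical stability
    weights = {tok: math.exp(lp - m) for tok, lp in logprobs.items()}
    total = sum(weights.values())
    return {tok: w / total for tok, w in weights.items()}

# Hypothetical top-3 log-probs returned by a language model.
topk = {"the": -0.5, "a": -1.7, "an": -3.2}
probs = renormalize_topk(topk)
```

After renormalization the conditional probabilities in each step sum to 1, which is the only property arithmetic coding requires.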
84
+ },
85
+ {
86
+ "section_id": "Appendix 3",
87
+ "parent_section_id": null,
88
+ "section_name": "Appendix C Additional Results",
89
+ "text": "Fig. C.1 ###reference_###, Fig. 3 ###reference_###, and Fig. C.3 ###reference_### show data autoregressively generated by compressors, one step at a time. Note that for Chinchilla, we generate tokens (whose size in bytes can vary) until we reach the desired length in bytes. Also, note that gzip samples are biased, as explained in Section 3.4 ###reference_###: looking one step ahead is not sufficient to get good samples, and it\u2019s likely that looking multiple steps ahead would improve the results. However, that\u2019s not the purpose of this paper, and we kept the simplest setup for all our compressors.\nContext Text (1948 Bytes)\nGround Truth (100 Bytes)\ngzip Samples (100 Bytes)\nChinchilla 1B Samples (100 bytes)\nChinchilla 7B Samples (100 bytes)\nChinchilla 70B Samples (100 bytes)\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13###"
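The one-step-ahead sampling idea for a classical compressor can be sketched by inducing a next-byte distribution from compressed code lengths, i.e., p(b | context) proportional to 2 to the power of the negative code-length increase. This is an illustrative sketch (the function name, printable-ASCII alphabet, and zlib settings are assumptions, not the paper's exact implementation):

```python
import zlib

def next_byte_dist(context: bytes, alphabet=range(32, 127)):
    """Induce p(byte | context) from gzip-style code lengths:
    p(b) is proportional to 2**(-[C(context + b) - C(context)])."""
    base = 8 * len(zlib.compress(context, 9))
    # Shorter codes for a continuation => higher probability.
    logits = {b: base - 8 * len(zlib.compress(context + bytes([b]), 9))
              for b in alphabet}
    m = max(logits.values())
    weights = {b: 2.0 ** (l - m) for b, l in logits.items()}
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

dist = next_byte_dist(b"abcabcabcabcab")
```

Because compressed lengths change in whole-byte steps, many bytes tie, which is one source of the bias in gzip samples mentioned above.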
90
+ }
91
+ ],
92
+ "tables": {
93
+ "1": {
94
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>\nCompression rates (compressed size / raw size) on different datatsets (lower is better).\nThe raw compression rate does not take the parameter size into account for the neural models, while the adjusted compression rate considers the parameter size part of the compressed size.\nAll datasets are of size 1GB.\nRandom data is used as a baseline and should not be compressible.\nTransformer, Llama 2, and Chinchilla are predictive models, which we use with arithmetic coding to obtain lossless compressors.\nWe train the Transformer models from scratch on enwik8, while the Chinchilla models are pretrained on large text datasets.\nTransformers trained on enwik overfit to that data modality, while large language models are good compressors for various data types.\n</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.2\" style=\"width:397.5pt;height:282.5pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-17.2pt,12.2pt) scale(0.920398019794144,0.920398019794144) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.2.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.3.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.2.2.3.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.2.2.3.1.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S3.T1.2.2.3.1.3\">Raw Compression Rate (%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S3.T1.2.2.3.1.4\">Adjusted Compression Rate (%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.4.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.1.1\">Chunk</span></th>\n<th 
class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.2.1\">Compressor</span></th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.3.1\">enwik9</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.4.1\">ImageNet</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.5.1\">LibriSpeech</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.6.1\">Random</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.7.1\">enwik9</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.8.1\">ImageNet</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.9.1\">LibriSpeech</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.4.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.4.2.10.1\">Random</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.1.1.1\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.1.1.2\">gzip</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.3\">32.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.4\">70.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.5\">36.4</td>\n<td 
class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.6\">100.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.7\">32.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.8\">70.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.9\">36.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.1.1.1.10\">100.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.5.3.1\">LZMA2</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.2\">23.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.3\">57.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.4\">29.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.5\">100.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.6\">23.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.5.3.7.1\">57.9</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.8\">29.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.5.3.9\">100.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.6.4.1\">PNG</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.2\">42.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.3\">58.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.4\">32.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.5\">100.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.6\">42.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.7\">58.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.8\">32.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.6.4.9\">100.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th 
ltx_th_row\" id=\"S3.T1.2.2.7.5.1\">FLAC</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.2\">89.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.3\">61.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.4\">30.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.5\">107.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.6\">89.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.7\">61.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.8\">30.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.7.5.9\">107.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S3.T1.2.2.2.1\" rowspan=\"11\"><span class=\"ltx_text\" id=\"S3.T1.2.2.2.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.2.2.2\">gzip</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.3\">48.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.4\">68.6</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.5\">38.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.6\">100.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.7\">48.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.8\">68.6</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.9\">38.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.2.10\">100.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.8.6.1\">LZMA2</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.2\">50.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.3\">62.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.4\">38.2</td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"S3.T1.2.2.8.6.5\">100.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.6\">50.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.7\">62.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.8\">38.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.8.6.9\">100.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.9.7.1\">PNG</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.2\">80.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.3\">61.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.4\">37.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.5\">103.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.6\">80.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.7\">61.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.8\">37.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.9.7.9\">103.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.10.8.1\">FLAC</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.2\">88.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.3\">60.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.4\">30.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.5\">107.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.6\">88.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.7\">60.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.10.8.8.1\">30.3</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.10.8.9\">107.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.11.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.2.11.9.1\">Transformer 200K</th>\n<td 
class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.2\">30.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.3\">194.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.4\">146.6</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.5\">195.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.6\">30.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.7\">194.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.8\">146.6</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.11.9.9\">195.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.12.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.12.10.1\">Transformer 800K</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.2\">21.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.3\">185.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.4\">131.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.5\">200.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.6\">21.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.7\">185.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.8\">131.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.12.10.9\">200.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.13.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.13.11.1\">Transformer 3.2M</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.2\">17.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.3\">215.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.4\">228.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.5\">224.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.6\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T1.2.2.13.11.6.1\">17.7</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.7\">216.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.8\">228.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.13.11.9\">224.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.14.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.2.14.12.1\">Llama 2 (7B)</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.2\">8.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.3\">53.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.4\">23.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.5\">103.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.6\">1408.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.7\">1453.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.8\">1423.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.14.12.9\">1503.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.15.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.2.15.13.1\">Chinchilla 1B</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.2\">11.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.3\">62.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.4\">24.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.5\">108.8</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.6\">211.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.7\">262.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.8\">224.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.2.2.15.13.9\">308.8</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S3.T1.2.2.16.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.2.16.14.1\">Chinchilla 7B</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.2\">10.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.3\">54.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.4\">23.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.5\">101.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.6\">1410.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.7\">1454.7</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.8\">1423.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.2.2.16.14.9\">1501.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.17.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T1.2.2.17.15.1\">Chinchilla 70B</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.17.15.2.1\">8.3</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.17.15.3.1\">48.0</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.17.15.4.1\">21.0</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.5\">100.8</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.6\">14008.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.7\">14048.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.8\">14021.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.2.2.17.15.9\">14100.8</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
95
+ "capture": "Table 1: \nCompression rates (compressed size / raw size) on different datasets (lower is better).\nThe raw compression rate does not take the parameter size into account for the neural models, while the adjusted compression rate considers the parameter size part of the compressed size.\nAll datasets are of size 1GB.\nRandom data is used as a baseline and should not be compressible.\nTransformer, Llama 2, and Chinchilla are predictive models, which we use with arithmetic coding to obtain lossless compressors.\nWe train the Transformer models from scratch on enwik8, while the Chinchilla models are pretrained on large text datasets.\nTransformers trained on enwik8 overfit to that data modality, while large language models are good compressors for various data types.\n"
96
+ }
97
+ },
98
+ "image_paths": {
99
+ "1": {
100
+ "figure_path": "2309.10668v2_figure_1.png",
101
+ "caption": "Figure 1: \nArithmetic encoding of \u2018AIXI\u2019 with a probabilistic model P\ud835\udc43Pitalic_P (blue) resulting in the binary code \u2018b0101010\u2019 (green).\nWe iteratively divide the real interval I=[0,1)\ud835\udc3c01I=[0,1)italic_I = [ 0 , 1 ) according to the model\u2019s (conditional) probabilities and select the sub-interval corresponding to the observed symbol (e.g., I=[0,0.45)\ud835\udc3c00.45I=[0,0.45)italic_I = [ 0 , 0.45 ) for P\u2062(A)\ud835\udc43\ud835\udc34P(A)italic_P ( italic_A )).\nWe further refine I\ud835\udc3cIitalic_I for each input symbol (indicated by the arrows), e.g., I=[0.09,0.36)\ud835\udc3c0.090.36I=[0.09,0.36)italic_I = [ 0.09 , 0.36 ) for P\u2062(I|A)\ud835\udc43conditional\ud835\udc3c\ud835\udc34P(I|A)italic_P ( italic_I | italic_A ).\nTo determine the encoded output, we iteratively split [0,1)01[0,1)[ 0 , 1 ) in half and assign a binary code to each sub-interval (shaded red areas).\nAt every step we can output the binary code if I\ud835\udc3cIitalic_I is fully contained in the corresponding binary interval (e.g., \u2018b0\u2019 for \u2018A\u2019, but not for \u2018AI\u2019 as it could be \u2018b00\u2019 or \u2018b01\u2019).\nAt the end of the input, the code is \u2018b0101\u2019, which cannot be uniquely decoded (P\u2062(A|A\u2062I\u2062X)\ud835\udc43conditional\ud835\udc34\ud835\udc34\ud835\udc3c\ud835\udc4bP(A|AIX)italic_P ( italic_A | italic_A italic_I italic_X ), P\u2062(I|A\u2062I\u2062X)\ud835\udc43conditional\ud835\udc3c\ud835\udc34\ud835\udc3c\ud835\udc4bP(I|AIX)italic_P ( italic_I | italic_A italic_I italic_X ), P\u2062(X|A\u2062I\u2062X)\ud835\udc43conditional\ud835\udc4b\ud835\udc34\ud835\udc3c\ud835\udc4bP(X|AIX)italic_P ( italic_X | italic_A italic_I italic_X ) all overlap with \u2018b0101\u2019).\nThus, we further refine the binary code until its binary interval is fully contained in I\ud835\udc3cIitalic_I (all calculations in Appendix A).",
102
+ "url": "http://arxiv.org/html/2309.10668v2/x1.png"
103
+ },
104
+ "2(a)": {
105
+ "figure_path": "2309.10668v2_figure_2(a).png",
106
+ "caption": "(a) Original image\nFigure 3: \nCompression-based generation for image data.\nWe condition gzip and Chinchilla on the first half of every row of the ImageNet image and then sample the remaining half autoregressively.\nBoth models produce incoherent samples, but Chinchilla looks much less noisy than gzip.",
107
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/original_imagenet_generation.png"
108
+ },
109
+ "2(b)": {
110
+ "figure_path": "2309.10668v2_figure_2(b).png",
111
+ "caption": "(b) gzip (row-wise)\nFigure 3: \nCompression-based generation for image data.\nWe condition gzip and Chinchilla on the first half of every row of the ImageNet image and then sample the remaining half autoregressively.\nBoth models produce incoherent samples, but Chinchilla looks much less noisy than gzip.",
112
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/gzip_imagenet_generation_autoreg.png"
113
+ },
114
+ "2(c)": {
115
+ "figure_path": "2309.10668v2_figure_2(c).png",
116
+ "caption": "(c) Chinchilla (row-wise)\nFigure 3: \nCompression-based generation for image data.\nWe condition gzip and Chinchilla on the first half of every row of the ImageNet image and then sample the remaining half autoregressively.\nBoth models produce incoherent samples, but Chinchilla looks much less noisy than gzip.",
117
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/chinchilla70b_image_autoreg.png"
118
+ },
119
+ "3(a)": {
120
+ "figure_path": "2309.10668v2_figure_3(a).png",
121
+ "caption": "(a) enwik9\nFigure 4: \nIn-context compression rate over sequence length.\nFor every dataset, we compute the compression rate for all subsequences of 2048 bytes, averaged over 100 sequences.",
122
+ "url": "http://arxiv.org/html/2309.10668v2/x3.png"
123
+ },
124
+ "3(b)": {
125
+ "figure_path": "2309.10668v2_figure_3(b).png",
126
+ "caption": "(b) ImageNet\nFigure 4: \nIn-context compression rate over sequence length.\nFor every dataset, we compute the compression rate for all subsequences of 2048 bytes, averaged over 100 sequences.",
127
+ "url": "http://arxiv.org/html/2309.10668v2/x4.png"
128
+ },
129
+ "3(c)": {
130
+ "figure_path": "2309.10668v2_figure_3(c).png",
131
+ "caption": "(c) LibriSpeech\nFigure 4: \nIn-context compression rate over sequence length.\nFor every dataset, we compute the compression rate for all subsequences of 2048 bytes, averaged over 100 sequences.",
132
+ "url": "http://arxiv.org/html/2309.10668v2/x5.png"
133
+ },
134
+ "5(a)": {
135
+ "figure_path": "2309.10668v2_figure_5(a).png",
136
+ "caption": "(a) Chinchilla 1b\nFigure C.2: \nCompression-based generation for image data, for 3 Chinchilla models with different number of parameters.\nWe condition the models on the first half of every row of the image (250 bytes) and then sample the remaining half (250 bytes) autoregressively.",
137
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/chinchilla1b_image_autoreg.png"
138
+ },
139
+ "5(b)": {
140
+ "figure_path": "2309.10668v2_figure_5(b).png",
141
+ "caption": "(b) Chinchilla 7b\nFigure C.2: \nCompression-based generation for image data, for 3 Chinchilla models with different number of parameters.\nWe condition the models on the first half of every row of the image (250 bytes) and then sample the remaining half (250 bytes) autoregressively.",
142
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/chinchilla7b_image_autoreg.png"
143
+ },
144
+ "5(c)": {
145
+ "figure_path": "2309.10668v2_figure_5(c).png",
146
+ "caption": "(c) Chinchilla 70b\nFigure C.2: \nCompression-based generation for image data, for 3 Chinchilla models with different number of parameters.\nWe condition the models on the first half of every row of the image (250 bytes) and then sample the remaining half (250 bytes) autoregressively.",
147
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/chinchilla70b_image_autoreg.png"
148
+ },
149
+ "6(a)": {
150
+ "figure_path": "2309.10668v2_figure_6(a).png",
151
+ "caption": "(a) Original spectrogram\nFigure C.3: \nCompression-based generation for audio data.\nWe condition gzip and Chinchilla on the first 1024 bytes of the base sequence (from LibriSpeech) and then sample the remaining 1024 bytes autoregressively.\nChinchilla predictions exhibit a typical \u201cloop\u201d pattern of autoregressive generation.",
152
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/original_speech_generation.png"
153
+ },
154
+ "6(b)": {
155
+ "figure_path": "2309.10668v2_figure_6(b).png",
156
+ "caption": "(b) gzip\nFigure C.3: \nCompression-based generation for audio data.\nWe condition gzip and Chinchilla on the first 1024 bytes of the base sequence (from LibriSpeech) and then sample the remaining 1024 bytes autoregressively.\nChinchilla predictions exhibit a typical \u201cloop\u201d pattern of autoregressive generation.",
157
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/gzip_speech_generation_autoreg.png"
158
+ },
159
+ "6(c)": {
160
+ "figure_path": "2309.10668v2_figure_6(c).png",
161
+ "caption": "(c) Chinchilla\nFigure C.3: \nCompression-based generation for audio data.\nWe condition gzip and Chinchilla on the first 1024 bytes of the base sequence (from LibriSpeech) and then sample the remaining 1024 bytes autoregressively.\nChinchilla predictions exhibit a typical \u201cloop\u201d pattern of autoregressive generation.",
162
+ "url": "http://arxiv.org/html/2309.10668v2/extracted/5473974/figures/llm_speech_generation_autoreg.png"
163
+ }
164
+ },
165
+ "validation": true,
166
+ "references": [
167
+ {
168
+ "1": {
169
+ "title": "Accelerated deep lossless image coding with unified paralleleized\nGPU coding architecture.",
170
+ "author": "Benjamin Lukas Cajus Barzen, Fedor Glazov, Jonas Geistert, and Thomas Sikora.",
171
+ "venue": "In PCS, 2022.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "2": {
177
+ "title": "Lossless data compression with neural networks.",
178
+ "author": "Fabrice Bellard.",
179
+ "venue": "Technical report, Amarisoft, 2019.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "3": {
185
+ "title": "NNCP v2: Lossless data compression with transformer.",
186
+ "author": "Fabrice Bellard.",
187
+ "venue": "Technical report, Amarisoft, 2021.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "4": {
193
+ "title": "The description length of deep learning models.",
194
+ "author": "L\u00e9onard Blier and Yann Ollivier.",
195
+ "venue": "In NeurIPS, 2018.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "5": {
201
+ "title": "On the opportunities and risks of foundation models.",
202
+ "author": "Rishi Bommasani et al.",
203
+ "venue": "arXiv:2108.07258, 2021.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "6": {
209
+ "title": "PNG (portable network graphics) specification version 1.0.",
210
+ "author": "Thomas Boutell.",
211
+ "venue": "RFC, 1997.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "7": {
217
+ "title": "Language models are few-shot learners.",
218
+ "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, et al.",
219
+ "venue": "In NeurIPS, 2020.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "8": {
225
+ "title": "Sparks of artificial general intelligence: Early experiments with\nGPT-4.",
226
+ "author": "S\u00e9bastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke,\nEric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M.\nLundberg, Harsha Nori, Hamid Palangi, Marco T\u00falio Ribeiro, and\nYi Zhang.",
227
+ "venue": "arXiv:2303.12712, 2023.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "9": {
233
+ "title": "Scaling transformer to 1M tokens and beyond with RMT.",
234
+ "author": "Aydar Bulatov, Yuri Kuratov, and Mikhail S. Burtsev.",
235
+ "venue": "arXiv:2304.11062, 2023.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "10": {
241
+ "title": "A survey of model compression and acceleration for deep neural\nnetworks.",
242
+ "author": "Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang.",
243
+ "venue": "arXiv:1710.09282, 2017.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "11": {
249
+ "title": "Data compression using adaptive coding and partial string matching.",
250
+ "author": "John G. Cleary and Ian H. Witten.",
251
+ "venue": "IEEE Trans. Commun., 1984.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "12": {
257
+ "title": "Free lossless audio codec, 2008.",
258
+ "author": "Josh Coalson.",
259
+ "venue": "URL https://xiph.org/flac.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "13": {
265
+ "title": "Syntactically informed text compression with recurrent neural\nnetworks.",
266
+ "author": "David Cox.",
267
+ "venue": "arXiv:1608.02893, 2016.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "14": {
273
+ "title": "Neural networks and the chomsky hierarchy.",
274
+ "author": "Gr\u00e9goire Del\u00e9tang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein,\nLi Kevin Wenliang, Elliot Catt, Chris Cundy, Marcus Hutter, Shane Legg, Joel\nVeness, and Pedro A. Ortega.",
275
+ "venue": "In ICLR, 2023.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "15": {
281
+ "title": "GZIP file format specification version 4.3.",
282
+ "author": "Peter Deutsch.",
283
+ "venue": "RFC, 1996.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "16": {
289
+ "title": "Asymmetric numeral systems.",
290
+ "author": "Jarek Duda.",
291
+ "venue": "arXiv:0902.0271, 2009.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "17": {
297
+ "title": "Text categorization using compression models.",
298
+ "author": "Eibe Frank, Chang Chui, and Ian H. Witten.",
299
+ "venue": "In Data Compression Conference, 2000.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "18": {
305
+ "title": "In-context autoencoder for context compression in a large language\nmodel.",
306
+ "author": "Tao Ge, Jing Hu, Xun Wang, Si-Qing Chen, and Furu Wei.",
307
+ "venue": "arXiv:2307.06945, 2023.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "19": {
313
+ "title": "Memory-based meta-learning on non-stationary distributions.",
314
+ "author": "Tim Genewein, Gr\u00e9goire Del\u00e9tang, Anian Ruoss, Li Kevin Wenliang,\nElliot Catt, Vincent Dutordoir, Jordi Grau-Moya, Laurent Orseau, Marcus\nHutter, and Joel Veness.",
315
+ "venue": "In ICML, 2023.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "20": {
321
+ "title": "Deepzip: Lossless data compression using recurrent neural networks.",
322
+ "author": "Mohit Goyal, Kedar Tatwawadi, Shubham Chandak, and Idoia Ochoa.",
323
+ "venue": "In DCC, 2019.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "21": {
329
+ "title": "Dzip: Improved general-purpose lossless compression based on novel\nneural network modeling.",
330
+ "author": "Mohit Goyal, Kedar Tatwawadi, Shubham Chandak, and Idoia Ochoa.",
331
+ "venue": "In DCC, 2020.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "22": {
337
+ "title": "Longt5: Efficient text-to-text transformer for long sequences.",
338
+ "author": "Mandy Guo, Joshua Ainslie, David C. Uthus, Santiago Onta\u00f1\u00f3n, Jianmo\nNi, Yun-Hsuan Sung, and Yinfei Yang.",
339
+ "venue": "In NAACL-HLT (Findings), 2022.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "23": {
345
+ "title": "Training compute-optimal large language models.",
346
+ "author": "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, et al.",
347
+ "venue": "arXiv:2203.15556, 2022.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "24": {
353
+ "title": "Integer discrete flows and lossless compression.",
354
+ "author": "Emiel Hoogeboom, Jorn W. T. Peters, Rianne van den Berg, and Max Welling.",
355
+ "venue": "In NeurIPS, 2019.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "25": {
361
+ "title": "Analysis of arithmetic coding for data compression.",
362
+ "author": "Paul G. Howard and Jeffrey Scott Vitter.",
363
+ "venue": "In Data Compression Conference, 1991.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "26": {
369
+ "title": "A method for the construction of minimum-redundancy codes.",
370
+ "author": "David A. Huffman.",
371
+ "venue": "Proceedings of the IRE, 1952.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "27": {
377
+ "title": "Universal Artificial Intellegence - Sequential Decisions Based\non Algorithmic Probability.",
378
+ "author": "Marcus Hutter.",
379
+ "venue": "Springer, 2005.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "28": {
385
+ "title": "500\u2019000\u20ac prize for compressing human knowledge, 2006.",
386
+ "author": "Marcus Hutter.",
387
+ "venue": "URL http://prize.hutter1.net.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "29": {
393
+ "title": "Few-shot non-parametric learning with deep latent variable model.",
394
+ "author": "Zhiying Jiang, Yiqin Dai, Ji Xin, Ming Li, and Jimmy Lin.",
395
+ "venue": "In NeurIPS, 2022.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "30": {
401
+ "title": "\"low-resource\" text classification: A parameter-free classification\nmethod with compressors.",
402
+ "author": "Zhiying Jiang, Matthew Y. R. Yang, Mikhail Tsirlin, Raphael Tang, Yiqin Dai,\nand Jimmy Lin.",
403
+ "venue": "In ACL (Findings), 2023.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "31": {
409
+ "title": "Scaling laws for neural language models.",
410
+ "author": "Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon\nChild, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei.",
411
+ "venue": "arXiv:2001.08361, 2020.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "32": {
417
+ "title": "Bit-swap: Recursive bits-back coding for lossless compression with\nhierarchical latent variables.",
418
+ "author": "Friso H. Kingma, Pieter Abbeel, and Jonathan Ho.",
419
+ "venue": "In ICML, 2019.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "33": {
425
+ "title": "CMIX, 2014.",
426
+ "author": "Byron Knoll.",
427
+ "venue": "URL http://www.byronknoll.com/cmix.html.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "34": {
433
+ "title": "A machine learning perspective on predictive coding with PAQ8.",
434
+ "author": "Byron Knoll and Nando de Freitas.",
435
+ "venue": "In DCC, 2012.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "35": {
441
+ "title": "On tables of random numbers.",
442
+ "author": "Andrei N. Kolmogorov.",
443
+ "venue": "Theoretical Computer Science, 1998.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "36": {
449
+ "title": "Subword regularization: Improving neural network translation models\nwith multiple subword candidates.",
450
+ "author": "Taku Kudo.",
451
+ "venue": "In ACL (1), 2018.",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "37": {
457
+ "title": "Sentencepiece: A simple and language independent subword tokenizer\nand detokenizer for neural text processing.",
458
+ "author": "Taku Kudo and John Richardson.",
459
+ "venue": "In EMNLP (Demonstration), 2018.",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "38": {
465
+ "title": "In-context reinforcement learning with algorithm distillation.",
466
+ "author": "Michael Laskin, Luyu Wang, et al.",
467
+ "venue": "In ICLR. OpenReview.net, 2023.",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "39": {
473
+ "title": "An Introduction to Kolmogorov Complexity and Its Applications,\n4th Edition.",
474
+ "author": "Ming Li and Paul M. B. Vit\u00e1nyi.",
475
+ "venue": "Springer, 2019.",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "40": {
481
+ "title": "DecMac: A deep context model for high efficiency arithmetic\ncoding.",
482
+ "author": "Qian Liu, Yiling Xu, and Zhu Li.",
483
+ "venue": "In ICAIIC, 2019.",
484
+ "url": null
485
+ }
486
+ },
487
+ {
488
+ "41": {
489
+ "title": "Information theory, inference, and learning algorithms.",
490
+ "author": "David J. C. MacKay.",
491
+ "venue": "Cambridge University Press, 2003.",
492
+ "url": null
493
+ }
494
+ },
495
+ {
496
+ "42": {
497
+ "title": "Fast text compression with neural networks.",
498
+ "author": "Matthew V. Mahoney.",
499
+ "venue": "In FLAIRS, 2000.",
500
+ "url": null
501
+ }
502
+ },
503
+ {
504
+ "43": {
505
+ "title": "TRACE: A fast transformer-based general-purpose lossless\ncompressor.",
506
+ "author": "Yu Mao, Yufei Cui, Tei-Wei Kuo, and Chun Jason Xue.",
507
+ "venue": "In WWW, 2022.",
508
+ "url": null
509
+ }
510
+ },
511
+ {
512
+ "44": {
513
+ "title": "Practical full resolution learned lossless image compression.",
514
+ "author": "Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, and Luc Van\nGool.",
515
+ "venue": "In CVPR, 2019.",
516
+ "url": null
517
+ }
518
+ },
519
+ {
520
+ "45": {
521
+ "title": "Learning better lossless compression using lossy compression.",
522
+ "author": "Fabian Mentzer, Luc Van Gool, and Michael Tschannen.",
523
+ "venue": "In CVPR, 2020.",
524
+ "url": null
525
+ }
526
+ },
527
+ {
528
+ "46": {
529
+ "title": "Statistical Language Models Based on Neural Networks.",
530
+ "author": "Tomas Mikolov.",
531
+ "venue": "PhD thesis, Brno Universtiy of Technology, 2012.",
532
+ "url": null
533
+ }
534
+ },
535
+ {
536
+ "47": {
537
+ "title": "Large language models as general pattern machines.",
538
+ "author": "Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess,\nMontserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, and Andy Zeng.",
539
+ "venue": "arXiv:2307.04721, 2023.",
540
+ "url": null
541
+ }
542
+ },
543
+ {
544
+ "48": {
545
+ "title": "Exploring generalization in deep learning.",
546
+ "author": "Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro.",
547
+ "venue": "In NIPS, 2017.",
548
+ "url": null
549
+ }
550
+ },
551
+ {
552
+ "49": {
553
+ "title": "Gzip versus bag-of-words for text classification, 2023.",
554
+ "author": "Juri Opitz.",
555
+ "venue": null,
556
+ "url": null
557
+ }
558
+ },
559
+ {
560
+ "50": {
561
+ "title": "Librispeech: An ASR corpus based on public domain audio books.",
562
+ "author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.",
563
+ "venue": "In ICASSP, 2015.",
564
+ "url": null
565
+ }
566
+ },
567
+ {
568
+ "51": {
569
+ "title": "Source coding algorithms for fast data compression (ph.d. thesis\nabstr.).",
570
+ "author": "Richard C. Pasco.",
571
+ "venue": "IEEE Trans. Inf. Theory, 1977.",
572
+ "url": null
573
+ }
574
+ },
575
+ {
576
+ "52": {
577
+ "title": "7z Format, 2019.",
578
+ "author": "Igor Pavlov.",
579
+ "venue": "URL http://www.7-zip.org/7z.html.",
580
+ "url": null
581
+ }
582
+ },
583
+ {
584
+ "53": {
585
+ "title": "Bpe-dropout: Simple and effective subword regularization.",
586
+ "author": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita.",
587
+ "venue": "In ACL, 2020.",
588
+ "url": null
589
+ }
590
+ },
591
+ {
592
+ "54": {
593
+ "title": "Language models are unsupervised multitask learners.",
594
+ "author": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya\nSutskever.",
595
+ "venue": "Technical report, OpenAI, 2019.",
596
+ "url": null
597
+ }
598
+ },
599
+ {
600
+ "55": {
601
+ "title": "Scaling language models: Methods, analysis & insights from\ntraining gopher.",
602
+ "author": "Jack W. Rae et al.",
603
+ "venue": "arXiv:2112.11446, 2021.",
604
+ "url": null
605
+ }
606
+ },
607
+ {
608
+ "56": {
609
+ "title": "A philosophical treatise of universal induction.",
610
+ "author": "Samuel Rathmanner and Marcus Hutter.",
611
+ "venue": "Entropy, 2011.",
612
+ "url": null
613
+ }
614
+ },
615
+ {
616
+ "57": {
617
+ "title": "LC-FDNet: Learned lossless image compression with frequency\ndecomposition network.",
618
+ "author": "Hochang Rhee, Yeong Il Jang, Seyun Kim, and Nam Ik Cho.",
619
+ "venue": "In CVPR, 2022.",
620
+ "url": null
621
+ }
622
+ },
623
+ {
624
+ "58": {
625
+ "title": "Generalized kraft inequality and arithmetic coding.",
626
+ "author": "Jorma Rissanen.",
627
+ "venue": "IBM J. Res. Dev., 1976.",
628
+ "url": null
629
+ }
630
+ },
631
+ {
632
+ "59": {
633
+ "title": "Randomized positional encodings boost length generalization of\ntransformers.",
634
+ "author": "Anian Ruoss, Gr\u00e9goire Del\u00e9tang, Tim Genewein, Jordi Grau-Moya,\nR\u00f3bert Csord\u00e1s, Mehdi Bennani, Shane Legg, and Joel Veness.",
635
+ "venue": "In ACL (2), 2023.",
636
+ "url": null
637
+ }
638
+ },
639
+ {
640
+ "60": {
641
+ "title": "Imagenet large scale visual recognition challenge.",
642
+ "author": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma,\nZhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein,\nAlexander C. Berg, and Li Fei-Fei.",
643
+ "venue": "Int. J. Comput. Vis., 2015.",
644
+ "url": null
645
+ }
646
+ },
647
+ {
648
+ "61": {
649
+ "title": "Deep-learning-based lossless image coding.",
650
+ "author": "Ionut Schiopu and Adrian Munteanu.",
651
+ "venue": "IEEE Trans. Circuits Syst. Video Technol., 2020.",
652
+ "url": null
653
+ }
654
+ },
655
+ {
656
+ "62": {
657
+ "title": "CNN-based prediction for lossless coding of photographic images.",
658
+ "author": "Ionut Schiopu, Yu Liu, and Adrian Munteanu.",
659
+ "venue": "In PCS, 2018.",
660
+ "url": null
661
+ }
662
+ },
663
+ {
664
+ "63": {
665
+ "title": "Predictive coding with neural nets: Application to text compression.",
666
+ "author": "J\u00fcrgen Schmidhuber and Stefan Heil.",
667
+ "venue": "In NIPS, pp. 1047\u20131054. MIT Press, 1994.",
668
+ "url": null
669
+ }
670
+ },
671
+ {
672
+ "64": {
673
+ "title": "Sequential neural text compression.",
674
+ "author": "J\u00fcrgen Schmidhuber and Stefan Heil.",
675
+ "venue": "IEEE Trans. Neural Networks, 1996.",
676
+ "url": null
677
+ }
678
+ },
679
+ {
680
+ "65": {
681
+ "title": "Neural machine translation of rare words with subword units.",
682
+ "author": "Rico Sennrich, Barry Haddow, and Alexandra Birch.",
683
+ "venue": "In ACL (1), 2016.",
684
+ "url": null
685
+ }
686
+ },
687
+ {
688
+ "66": {
689
+ "title": "A mathematical theory of communication.",
690
+ "author": "Claude E. Shannon.",
691
+ "venue": "Bell Syst. Tech. J., 1948.",
692
+ "url": null
693
+ }
694
+ },
695
+ {
696
+ "67": {
697
+ "title": "A formal theory of inductive inference. part I.",
698
+ "author": "Ray J. Solomonoff.",
699
+ "venue": "Inf. Control., 1964a.",
700
+ "url": null
701
+ }
702
+ },
703
+ {
704
+ "68": {
705
+ "title": "A formal theory of inductive inference. part II.",
706
+ "author": "Ray J. Solomonoff.",
707
+ "venue": "Inf. Control., 1964b.",
708
+ "url": null
709
+ }
710
+ },
711
+ {
712
+ "69": {
713
+ "title": "Compression of generative pre-trained language models via\nquantization.",
714
+ "author": "Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, and\nNgai Wong.",
715
+ "venue": "In ACL (1), 2022.",
716
+ "url": null
717
+ }
718
+ },
719
+ {
720
+ "70": {
721
+ "title": "Using Compression-Based Language Models for Text\nCategorization, pp. 141\u2013165.",
722
+ "author": "William J. Teahan and David J. Harper.",
723
+ "venue": "Springer Netherlands, 2003.",
724
+ "url": null
725
+ }
726
+ },
727
+ {
728
+ "71": {
729
+ "title": "Llama: Open and efficient foundation language models.",
730
+ "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, et al.",
731
+ "venue": "arXiv:2302.13971, 2023a.",
732
+ "url": null
733
+ }
734
+ },
735
+ {
736
+ "72": {
737
+ "title": "Llama 2: Open foundation and fine-tuned chat models.",
738
+ "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine\nBabaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale,\nDan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem\nCucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,\nCynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar\nHosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,\nIsabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux,\nThibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier\nMartinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew\nPoulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan\nSilva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang,\nRoss Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan\nZarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang,\nAur\u00e9lien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom.",
739
+ "venue": "arXiv:2307.09288, 2023b.",
740
+ "url": null
741
+ }
742
+ },
743
+ {
744
+ "73": {
745
+ "title": "Practical lossless compression with latent variables using bits back\ncoding.",
746
+ "author": "James Townsend, Thomas Bird, and David Barber.",
747
+ "venue": "In ICLR (Poster), 2019.",
748
+ "url": null
749
+ }
750
+ },
751
+ {
752
+ "74": {
753
+ "title": "Llmzip: Lossless text compression using large language models.",
754
+ "author": "Chandra Shekhara Kaushik Valmeekam, Krishna Narayanan, Dileep Kalathil,\nJean-Fran\u00e7ois Chamberland, and Srinivas Shakkottai.",
755
+ "venue": "arXiv:2306.04050, 2023.",
756
+ "url": null
757
+ }
758
+ },
759
+ {
760
+ "75": {
761
+ "title": "The student-t mixture as a natural image patch prior with application\nto image compression.",
762
+ "author": "A\u00e4ron van den Oord and Benjamin Schrauwen.",
763
+ "venue": "J. Mach. Learn. Res., 2014.",
764
+ "url": null
765
+ }
766
+ },
767
+ {
768
+ "76": {
769
+ "title": "Attention is all you need.",
770
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin.",
771
+ "venue": "In NIPS, 2017.",
772
+ "url": null
773
+ }
774
+ },
775
+ {
776
+ "77": {
777
+ "title": "Compress and control.",
778
+ "author": "Joel Veness, Marc G. Bellemare, Marcus Hutter, Alvin Chua, and Guillaume\nDesjardins.",
779
+ "venue": "In AAAI, 2015.",
780
+ "url": null
781
+ }
782
+ },
783
+ {
784
+ "78": {
785
+ "title": "Chain-of-thought prompting elicits reasoning in large language\nmodels.",
786
+ "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia,\nEd H. Chi, Quoc V. Le, and Denny Zhou.",
787
+ "venue": "In NeurIPS, 2022.",
788
+ "url": null
789
+ }
790
+ },
791
+ {
792
+ "79": {
793
+ "title": "A technique for high-performance data compression.",
794
+ "author": "Terry A. Welch.",
795
+ "venue": "Computer, 1984.",
796
+ "url": null
797
+ }
798
+ },
799
+ {
800
+ "80": {
801
+ "title": "The context-tree weighting method: basic properties.",
802
+ "author": "Frans M. J. Willems, Yuri M. Shtarkov, and Tjalling J. Tjalkens.",
803
+ "venue": "IEEE Trans. Inf. Theory, 1995.",
804
+ "url": null
805
+ }
806
+ },
807
+ {
808
+ "81": {
809
+ "title": "Arithmetic coding for data compression.",
810
+ "author": "Ian H. Witten, Radford M. Neal, and John G. Cleary.",
811
+ "venue": "Commun. ACM, 1987.",
812
+ "url": null
813
+ }
814
+ },
815
+ {
816
+ "82": {
817
+ "title": "Big bird: Transformers for longer sequences.",
818
+ "author": "Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris\nAlberti, Santiago Onta\u00f1\u00f3n, Philip Pham, Anirudh Ravula, Qifan\nWang, Li Yang, and Amr Ahmed.",
819
+ "venue": "In NeurIPS, 2020.",
820
+ "url": null
821
+ }
822
+ }
823
+ ],
824
+ "url": "http://arxiv.org/html/2309.10668v2"
825
+ }
20240318/2309.14184v2.json ADDED
@@ -0,0 +1,333 @@
1
+ {
2
+ "title": "Linearly implicit exponential integrators for damped Hamiltonian PDEs",
3
+ "abstract": "Structure-preserving linearly implicit exponential integrators are constructed for Hamiltonian partial differential equations with linear constant damping. Linearly implicit integrators are derived by polarizing the polynomial terms of the Hamiltonian function and portioning out the nonlinearly of consecutive time steps. They require only a solution of one linear system at each time step. Therefore they are computationally more advantageous than implicit integrators. We also construct an exponential version of the well-known one-step Kahan\u2019s method by polarizing the quadratic vector field. These integrators are applied to one-dimensional damped Burger\u2019s, Korteweg-de-Vries, and nonlinear Schr\u00f6dinger equations. Preservation of the dissipation rate of linear and quadratic conformal invariants and the Hamiltonian is illustrated by numerical experiments.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Many physical systems are commonly affected by external forces or by the dissipative effects of friction. Many well-known partial differential equations (PDEs) can be expressed with damping terms: Burger\u2019s equation\n [2 ###reference_b2###], Korteweg-de-Vries (KdV) equation [1 ###reference_b1###, 14 ###reference_b14###], nonlinear Schr\u00f6dinger (NLS) equation [3 ###reference_b3###, 6 ###reference_b6###, 13 ###reference_b13###, 23 ###reference_b23###], Klein Gordon equation\n[2 ###reference_b2###, 23 ###reference_b23###], semi-linear wave equation [23 ###reference_b23###] and Camassa-Holm equation [3 ###reference_b3###] are some known examples. \nIn this paper, we consider a Hamiltonian PDE with linear constant damping\nwhere is the solution vector for some integer , is a constant skew-adjoint operator, is the Hamiltonian, is the variational derivative and the parameter stands for the constant damping rate [1 ###reference_b1###, 2 ###reference_b2###, 4 ###reference_b4###, 21 ###reference_b21###].\nDamped PDEs (1 ###reference_###) are characterized by the possession of\nqualitative properties that decay exponentially along any solution, which are referred to as conformal invariants.\nA conformal invariant depending on the solution is defined as [2 ###reference_b2###, 21 ###reference_b21###, 23 ###reference_b23###]\nThis decay in the solution or qualitative\nproperties of a PDE is often the result of the presence of resistive forces in the system.\nThe conformal invariant describes a quantity of the system such as energy (or Hamiltonian), mass or momentum that decreases with time. 
This decay in the solution or qualitative properties of a PDE is often due to resistive forces in the system such as friction, damping, dissipation, or viscosity, and hence, are a more realistic model of a physical phenomenon than the conservative systems.\nIt is important to maintain as many properties of the physical system as possible when modeling physical phenomena of some useful discretization techniques.\nNumerical methods, especially the energy-preserving methods, for conservative and dissipative systems, have attracted a significant amount of attention in recent years.\nNumerical methods that preserve the conformal symplectic structure of conformal Hamiltonian systems are known as conformal symplectic methods.\nThey were first constructed for ordinary differential equations (ODEs) using splitting techniques [19 ###reference_b19###] by solving the linear dissipative part exactly and the nonlinear conservative part with a symplectic method, and then composing the flow\nmaps.\nVarious integrators are constructed using splitting techniques preserving the conformal multi-symplectic structure of damped PDEs; the KdV equation [14 ###reference_b14###],\nthe NLS equation [13 ###reference_b13###], semi-linear wave equation [22 ###reference_b22###].\nOther conformal structure-preserving integrators are conformal multi-symplectic Euler-Preissman scheme [20 ###reference_b20###] and discrete gradient method [23 ###reference_b23###], St\u00f6rmer-Verlet and conformal implicit midpoint methods [2 ###reference_b2###], exponential Rung-Kutta methods [3 ###reference_b3###], projected exponential Runge-Kutta methods [1 ###reference_b1###].\nIn this paper, we construct the linearly implicit exponential integrators for damped Hamiltonian PDEs (1 ###reference_###) by combining the linearly implicit methods using polarized energy [10 ###reference_b10###, 18 ###reference_b18###] with the exponential methods using discrete gradient [21 ###reference_b21###]. 
Implicit exponential integrators are constructed for\ndamped PDEs (1 ###reference_###) using discrete gradients in [21 ###reference_b21###] such as the exponential average vector field method and exponential implicit midpoint method.\nThey can be considered as extension of the energy-preserving discrete gradient methods for Hamiltonian PDEs through the development of exponential integrators.\nSome numerical\nmethods preserve the dissipation properties by simply guaranteeing that the energy or conformal invariant is\ndecreasing with every time step, even though it may be numerically overdamped\nor underdamped.\nThe exponential integrators in [1 ###reference_b1###, 2 ###reference_b2###, 21 ###reference_b21###] preserve the correct rate of dissipation, such that the energy or the conformal invariant is not overdamped or\nunderdamped.\nDue to their implicit nature, a system of nonlinear equations have to be solved iteratively at each time step by Newton\u2019s method or by fixed point iteration.\nOn the other hand, linearly implicit integrators require only a single iteration in the solution of a nonlinear system of equations, which makes the linearly implicit integrators computationally advantageous over the implicit exponential integrators such as the average vector field (AVF) and conformal midpoint methods.\nThe linearly implicit methods are constructed using polarized energy to portion out the nonlinear terms in the Hamiltonian function over consecutive time steps. In this way, a quadratic polarized energy is constructed and then the polarized discrete gradient method is performed. Linearly implicit energy-preserving methods have been applied to Hamiltonian PDEs [5 ###reference_b5###, 11 ###reference_b11###, 12 ###reference_b12###] and gradient systems\n[25 ###reference_b25###] with polynomial nonlinear terms. 
\nIn this study, we derive two-step linearly implicit exponential methods for damped Burger\u2019s, KdV, and NLS equations where the Hamiltonian functions contain quadratic, cubic, and quartic terms.\nLinearly implicit methods are symmetric, preserve the polarized energy, and have favorable properties like linear error growth and long-time near-conservation of first integrals. \nSimilarly, linearly implicit exponential integrators also preserve the correct dissipation rate of the Hamiltonians and conformal invariants.\nA well-known one-step linearly implicit integrator for linear-quadratic systems is Kahan\u2019s method [16 ###reference_b16###, 17 ###reference_b17###] which is constructed by polarizing the quadratic vector fields. We construct linearly implicit exponential two-step Kahan\u2019s method for damped Burger\u2019s and KdV equations [8 ###reference_b8###].\nPreservation of the dissipation of linear and quadratic conformal invariants and the energy are illustrated for damped Burger\u2019s, KdV, and NLS equations through numerical experiments\nThe rest of this paper is organized as follows. In Section 2 ###reference_###, linearly implicit exponential integrators are introduced. Numerical results for damped Burger\u2019s, KdV and NLS equations are presented in Section 3 ###reference_###. The paper ends with some conclusions in Section 4 ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Linearly implicit exponential integrators",
15
+ "text": "Semidiscretization of (1 ###reference_###) in space with finite differences, yields the dissipative Hamiltonian ODE\nwhere is the unknown solution vector, is the spatial degrees of freedom, and is constant skew-symmetric matrix.\nIt is desirable that a numerical method to the semidiscrete system (3 ###reference_###), preserve the conformal invariant (2 ###reference_###) numerically\nwhere is a discrete approximation of the conformal invariant (2 ###reference_###) at time , and is the time step size.\nSeveral methods have been developed for preserving dissipation of the conformal invariants [1 ###reference_b1###, 2 ###reference_b2###, 4 ###reference_b4###, 14 ###reference_b14###, 23 ###reference_b23###]. In this paper, we construct linearly implicit exponential discrete gradient methods for (3 ###reference_###) following [21 ###reference_b21###, 12 ###reference_b12###, 10 ###reference_b10###]. In the first part, we briefly describe discrete gradient methods and linearly implicit\nintegrators using quadratic polarization of the Hamiltonians. In the second part, we derive the linearly implicit exponential integrators for dissipatively perturbed Hamiltonian systems."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Discrete gradient methods and linearly implicit integrators",
21
+ "text": "For conservative Hamiltonian systems without damping )\nthe discrete gradient method is given by\nwhere is the solution vector approximating .\nThe discrete gradient is a vector such that for any vectors , it holds\n,\n.\nThe discrete gradient method (6 ###reference_###) preserves the energy of the conservative Hamiltonian system (5 ###reference_###) at any time step.\nA particular choice of such as mean value approximation\nleads to the AVF method for Hamiltonian systems [7 ###reference_b7###, 9 ###reference_b9###, 24 ###reference_b24###].\nThese methods are implicit, which means a nonlinear system of equations has to be solved iteratively at each time step.\nLinearly implicit integrators require the solution of one linear system\nof equations at each time step.\nLinearly implicit methods are constructed by portioning out the nonlinearity over consecutive time steps by devising\na quadratic polarized energy satisfying consistency and invariance properties\nand then performing the polarized discrete gradient (PDG) method.\nA PDG for is a function satisfying [5 ###reference_b5###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 18 ###reference_b18###, 25 ###reference_b25###]\nThe corresponding polarized two-step discrete gradient scheme is given by [10 ###reference_b10###, 18 ###reference_b18###]\nwhich preserves the polarized Hamiltonian in the sense for all [12 ###reference_b12###].\nQuadratic polarization of several for polynomial functions are given below [10 ###reference_b10###, 18 ###reference_b18###]:\n, \u2003,\n,\n.\nFor linear-quadratic systems such as Burger\u2019s equation and KdV equation with cubic Hamiltonian functions, there exist a well known linearly implicit method, namely Kahan\u2019s method [8 ###reference_b8###, 16 ###reference_b16###, 17 ###reference_b17###].\nWhen restricted to quadratic vector fields, Kahan\u2019s method coincides with the Runge-Kutta method [8 ###reference_b8###]\nThe two-step linearly 
implicit PDG scheme [12 ###reference_b12###]\nis equivalent to Kahan\u2019s method (10 ###reference_###) over two consecutive steps, when applied to ODEs with homogeneous cubic\nThe two-step PDG scheme (12 ###reference_###) preserves the polarized invariant [12 ###reference_b12###]\nif the one-step Kahan\u2019s method (10 ###reference_###) is used to calculate from .\nThe one-step (10 ###reference_###) and two-step (12 ###reference_###) Kahan\u2019s methods are time-symmetric (reversible) and therefore second-order accurate in time."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Linearly implicit exponential integrators",
27
+ "text": "For dissipative Hamiltonian systems (1 ###reference_###), in [21 ###reference_b21###], the following exponential discrete gradient method was introduced\nwhich preserves some particular choices of the conformal invariant (2 ###reference_###), where\n\nThe mean value averaged exponential discrete gradient method leads to the exponential AVF method [21 ###reference_b21###]\nwith the Hamiltonian preserved along the transformed solution [21 ###reference_b21###]\nThe conformal implicit midpoint method [2 ###reference_b2###, 21 ###reference_b21###]\nis equivalent to the mean value averaged exponential discrete gradient method (14 ###reference_###) for linearly damped Hamiltonian systems with cubic Hamiltonian functions such as the damped Burger\u2019s equation and damped KdV equation. It is symmetric and hence second-order\n[21 ###reference_b21###].\nThe exponential version of the one-step exponential Kahan\u2019s method (10 ###reference_###) can be given as\nwhose adjoint equation\nis the same as (16 ###reference_###). Hence it is symmetric and second-order.\nSimilarly, the two-step exponential Kahan\u2019s method has the form\nIt is symmetric under and as the right hand side is coming from a symmetric multilinear form. 
Therefore the scheme (17 ###reference_###) is second-order accurate.\nThe two-step linearly implicit exponential discrete gradient methods for damped Hamiltonian systems have the general form\nwhere\nLike the exponential Kahan\u2019s method (16 ###reference_###), the exponential polarized two-step discrete gradient method (18 ###reference_###) is also second-order, since the adjoint equation\nis the same as the (18 ###reference_###), since the right-hand side of (18 ###reference_###) is symmetric.\nBoth (17 ###reference_###) and (18 ###reference_###) preserve the polarized Hamiltonian (13 ###reference_###) along the transformed solution\nLinearly implicit two-step exponential discrete gradient integrators using the polarization of quadratic, cubic, and quartic terms in Hamiltonian function for damped Burger\u2019s, damped KdV, and damped NLS equations are given in Section 3 ###reference_###.\nFor the dissipative Hamiltonian systems, the Hamiltonian and the conformal invariants dissipate like\n\nWhen they are initially positive, exponential integrators guarantee that the Hamiltonian, and conformal invariants are decreasing at each time step, i.e.\nfor small values of the damping term , and satisfy energy and conformal invariant balance equations [2 ###reference_b2###, 21 ###reference_b21###]\nPreservation of the Hamiltonian (or energy) dissipation is measured using with residual [21 ###reference_b21###]\nSimilarly, the dissipation of the conformal invariants is measured with the residual [2 ###reference_b2###, 21 ###reference_b21###]\nBoth the residuals (19 ###reference_###) and (20 ###reference_###) is used to check whether the decrease of Hamiltonian and conformal invariants are over/underdamped or not."
28
+ },
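The conformal/exponential mechanism behind these schemes, transform away the damping, take a symmetric conservative step, then multiply back by the exponential factor, can be checked on a linearly damped harmonic oscillator. This is a simplified sketch for a quadratic Hamiltonian only (the damping rate, step size, and horizon are illustrative, and this is not the paper's EAVF/CIMP formulation for cubic or quartic Hamiltonians): the implicit midpoint step for the transformed system is a Cayley (orthogonal) map, so the exact per-step decay rate exp(-2*gamma*h) of the Hamiltonian is reproduced to machine precision.

```python
import numpy as np

gamma, h, nsteps = 0.1, 0.05, 200   # illustrative damping, step size, horizon
S = np.array([[0.0, 1.0], [-1.0, 0.0]])

# Implicit midpoint step for the transformed (undamped) system w' = S w:
# the Cayley map (I - h/2 S)^{-1}(I + h/2 S) is orthogonal, so |w| is preserved.
M = np.linalg.solve(np.eye(2) - 0.5 * h * S, np.eye(2) + 0.5 * h * S)

y = np.array([1.0, 0.0])
H0 = 0.5 * y @ y
for _ in range(nsteps):
    # conservative midpoint step, then the exact exponential damping factor
    y = np.exp(-gamma * h) * (M @ y)
Hn = 0.5 * y @ y
ratio_err = abs(Hn - np.exp(-2 * gamma * h * nsteps) * H0)
```

The quantity `ratio_err` compares the discrete Hamiltonian after `nsteps` steps with the exact dissipation law and stays at roundoff level, which is the discrete analogue of the energy balance equations quoted above.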
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Numerical experiments",
33
+ "text": "In this Section, we report on numerical experiments for three Hamiltonian PDEs with constant linear damping under periodic boundary conditions; Burger\u2019s equation, KdV equation, and NLS equation. They are discretized in space with finite differences and the resulting dissipative Hamiltonian ODEs are solved with the implicit and linearly implicit exponential integrators: the conformal implicit midpoint method (CIMP) (15 ###reference_###), the exponential AVF method (EAVF) (14 ###reference_###), the two-step exponential Kahan\u2019s method (EK) (17 ###reference_###), and the two-step linearly implicit exponential integrator (LIE) (18 ###reference_###).\nSpace discretization on the interval is performed\nby introducing a uniform spatial grid of the nodes with the grid size such that , and is even. Then, for any , we approximate by , , with periodic boundary conditions , and define the solution vector\n.\nThe matrices and correspond to the centered finite difference discretization of the first and second-order derivative operators and , respectively, under periodic boundary conditions\nFor time discretization, we divide the time interval into uniform elements , , and we denote by the full discrete approximation vector at time , .\nThe computations are carried out via MATLAB 7.0 with Intel(R) Core(TM) i5-7500 CPU 3.40 GHz."
34
+ },
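A minimal sketch of the centered finite-difference matrices for the first- and second-order derivative operators under periodic boundary conditions, as described above, written in Python rather than the MATLAB used in the paper. The grid size N = 64 and the period 2*pi are illustrative choices; the construction is verified on a smooth periodic function, with errors at the expected O(dx^2) level.

```python
import numpy as np

N, L = 64, 2 * np.pi          # illustrative grid: 64 nodes on a 2*pi-periodic interval
dx = L / N
x = dx * np.arange(N)

e = np.ones(N - 1)
# centered first-derivative matrix with periodic wrap-around in the corners
D1 = (np.diag(e, 1) - np.diag(e, -1)) / (2 * dx)
D1[0, -1], D1[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)
# centered second-derivative matrix with periodic wrap-around in the corners
D2 = (np.diag(e, 1) + np.diag(e, -1) - 2 * np.eye(N)) / dx**2
D2[0, -1] = D2[-1, 0] = 1 / dx**2

u = np.sin(x)
err1 = np.max(np.abs(D1 @ u - np.cos(x)))   # expected O(dx^2)
err2 = np.max(np.abs(D2 @ u + np.sin(x)))   # expected O(dx^2)
```

Both matrices are circulant, so for the KdV experiments below a third-derivative approximation can be obtained as the product `D1 @ D2` in the same way.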
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Damped Burger\u2019s equation",
39
+ "text": "The damped or modified Burger\u2019s equation [2 ###reference_b2###]\ncan be written as a damped Hamiltonian PDE (1 ###reference_###)\nThe semidiscrete system reads as\nwhere denotes the elementwise vector or matrix multiplication.\nThe Hamiltonian of the full discrete system is given as\nApplying the exponential integrators presented in Section 2 ###reference_###, we obtain the following schemes:\nCIMP:\nwhere\nEK:\nwhere\nDamped Burger\u2019s equation (21 ###reference_###) possess a linear conformal invariant, mass [2 ###reference_b2###].\nAs the numerical test problem, we consider damped Burger\u2019s equation (21 ###reference_###) [2 ###reference_b2###] on with the damping factor . Initial condition is taken as the Gaussian distribution with density and mean zero, i.e., . We set the spatial and temporal mesh sizes and , respectively, and the target time . We show the numerical solution and decrease of the conformal invariant and of the Hamiltonians only for the EK.\nThe numerical results obtained with CIMP are very close to those obtained with EK, therefore thy are not shown.\nThe solution profile becomes steeper and decreases gradually with time in Figure 1 ###reference_### as in [2 ###reference_b2###]. The linear conformal invariant, mass, Hamiltonian, and modified Hamiltonian decrease as time progress in Figure 2 ###reference_###.\n###figure_1### ###figure_2### The error in the residual (20 ###reference_###) of mass is preserved up to machine precision in Figure 3 ###reference_###. 
The error in the residual of the Hamiltonians (19 ###reference_###) is much larger than of the mass, but does not show any drift as time progresses, which indicates that they are over-or under damped.\nThe mass is also preserved with high accuracy with CIMP as shown in Figure 4 ###reference_###.\n###figure_3### ###figure_4### The solution profiles and the residual errors are shown, computed with linearly implicit two-step Kahan\u2019s method (17 ###reference_###) are shown in Figure 5 ###reference_### and in Figure 6 ###reference_###. The solution deteriorates as time increases, the mass is not preserved as for the EK in Figure 2 ###reference_###. These indicate, that the standard linearly implicit methods are not adequate for the damped Hamiltonian Burger\u2019s equation.\n###figure_5### ###figure_6###"
40
+ },
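Kahan's polarization trick, replace each quadratic term y^2 by the product of consecutive iterates y_n * y_{n+1} and average the linear term, so that the update is linear in y_{n+1}, is easiest to see on a scalar quadratic ODE. The sketch below applies it to the logistic equation y' = y(1 - y), a standalone quadratic-vector-field illustration (not the Burgers semidiscretization of the paper; initial value, step sizes, and final time are arbitrary choices), and checks the expected second-order convergence against the exact solution.

```python
import numpy as np

def kahan_logistic(y0, h, nsteps):
    # Kahan's method for y' = y - y^2: polarize y^2 -> y_n * y_{n+1} and
    # average the linear term, giving (y1 - y0)/h = (y0 + y1)/2 - y0*y1.
    # Solving this *linear* equation for y1 yields an explicit update.
    y = y0
    for _ in range(nsteps):
        y = y * (2 + h) / (2 - h + 2 * h * y)
    return y

def exact(y0, t):
    # exact logistic flow
    return y0 * np.exp(t) / (1 - y0 + y0 * np.exp(t))

y0, T = 0.1, 1.0
err_h  = abs(kahan_logistic(y0, 0.02, 50)  - exact(y0, T))
err_h2 = abs(kahan_logistic(y0, 0.01, 100) - exact(y0, T))
order_ratio = err_h / err_h2   # close to 4 for a second-order method
```

Despite being linearly implicit (here even explicit after solving the scalar linear equation), the scheme is symmetric and second order, which mirrors the role of the EK scheme in the experiments above.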
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Damped Korteweg-de-Vries (KdV) equation",
45
+ "text": "We consider the following damped KdV equation [1 ###reference_b1###]\nIn Hamiltonian form, it reads as\nThe damped KdV equation was solved with the projected exponential Runge-Kutta methods [1 ###reference_b1###] and with the conformal multisymplectic method [14 ###reference_b14###]. It possesses linear and quadratic conformal invariants [1 ###reference_b1###, 14 ###reference_b14###].\nThe semi-discrete form of (23 ###reference_###) given as\nwhere the matrix approximates the third order derivative . The discrete Hamiltonian has the form\nThe exponential integrators in Section 2 ###reference_###, yield the following schemes:\nCIMP:\nEK:\nWe consider the KdV equation on [1 ###reference_b1###] with the parameter values\nThe mesh sizes are and with the final time . The initial condition is given with the Gaussian wave profile\n\nThe numerical\nsolutions develop almost a vertical front, in form a series of wave-trains in Figure 7 ###reference_### as in [1 ###reference_b1###].\n###figure_7### The EK preserves the linear conformal invariant up to machine precision, whereas the error in the residual of the quadratic conformal invariant is diminishing as time progresses in Figure 8 ###reference_###.\n###figure_8###"
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Damped nonlinear Schr\u00f6dinger (NLS) equation",
51
+ "text": "We consider the following damped NLS equation [6 ###reference_b6###, 13 ###reference_b13###, 15 ###reference_b15###, 23 ###reference_b23###]\nwhere is a constant parameter, and is a damping coefficient.\nThe equation (24 ###reference_###) can be written through decomposing \nin real and imaginary components as\nThe system (25 ###reference_###) can be recast into a damped Hamiltonian system\nwith the Hamiltonian\nSemidiscretization with finite differences gives the following ODE system\nwith the discrete Hamiltonian\nThe damped NLS equation (24 ###reference_###) has two quadratic conformal invariants; the mass and the momentum .\nWe solve it with the implicit EAVF and with the linearly implicit LIE integrators.\nApplying the EAVF method (14 ###reference_###) to (26 ###reference_###) gives the following scheme\nwhere\nand\nThe LIE (18 ###reference_###) gives the scheme\nwhere\n\nThe polarized Hamiltonian has the form\nFor the numerical experiment, we consider the damped NLS equation [5 ###reference_b5###, 15 ###reference_b15###] on with the mesh sizes and , and the target time is . We fix the parameter , and take\nthe initial condition as\n.\nFigure 9 ###reference_### shows solutions at initial and final times by using the EAVF and\nLIE schemes with . The damped solitary wave\nis traveling from left to right as\nrequired, by preserving the phase space\nstructure.\n###figure_9### ###figure_10### ###figure_11### In Figure 11 ###reference_###, residuals of the energy balance of the quadratic conformal invariants and the Hamiltonian are plotted. The EAVF and LIE schemes do not preserve the dissipative rate of the invariants exactly, whereas the error in the residuals for LIE is lower than the EAVF. 
In Figure 11 ###reference_###, errors in the residuals are small for small damping factors plotted, whereas the error in residual of the polarized Hamiltonian is exactly preserved.\n###figure_12### ###figure_13### The CPU time needed for the solution of the systems with LIE is seconds, and with the EAVF seconds, which supports that the linearly implicit exponential integrators are computationally more advantageous than the implicit exponential integrators."
52
+ },
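The cost difference reported above, one linear solve per step for the linearly implicit scheme versus an iterative nonlinear solve per step for the fully implicit one, can be illustrated on a small stand-in problem. The sketch below uses the quartic oscillator (not the NLS semidiscretization) and a generic linearized-midpoint variant (a Rosenbrock-type step, not the paper's LIE scheme); step size and horizon are illustrative. Both schemes are second order and stay close to each other, while the iteration counters expose the per-step cost gap.

```python
import numpy as np

def f(y):
    # quartic oscillator q' = p, p' = -q^3: a stand-in nonlinear Hamiltonian
    # system (illustrative; not the damped NLS system of the paper)
    q, p = y
    return np.array([p, -q**3])

def jac(y):
    q, p = y
    return np.array([[0.0, 1.0], [-3.0 * q**2, 0.0]])

h, nsteps = 0.01, 500
y_im = np.array([1.0, 0.0])   # fully implicit midpoint trajectory
y_li = y_im.copy()            # linearly implicit trajectory
fp_iters = 0                  # total fixed-point iterations (implicit scheme)
lin_solves = 0                # total linear solves (linearly implicit scheme)

for _ in range(nsteps):
    # implicit midpoint: several fixed-point iterations per step
    y1 = y_im.copy()
    for _ in range(50):
        fp_iters += 1
        y1n = y_im + h * f(0.5 * (y_im + y1))
        done = np.max(np.abs(y1n - y1)) < 1e-12
        y1 = y1n
        if done:
            break
    y_im = y1
    # linearized midpoint (Rosenbrock-type): exactly one linear solve per step
    A = np.eye(2) - 0.5 * h * jac(y_li)
    y_li = y_li + np.linalg.solve(A, h * f(y_li))
    lin_solves += 1

diff = float(np.max(np.abs(y_im - y_li)))   # both schemes are second order
```

For PDE semidiscretizations the contrast is sharper still, since each fixed-point or Newton iteration then involves a large linear solve of its own.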
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusions",
57
+ "text": "Linearly implicit exponential integrators preserve linear conformal invariants exactly and preserve the quadratic invariants, cubic and quartic Hamiltonians more accurately when the damping coefficient is small.\nSymmetry of the linearly implicit exponential integrators guarantees stable long-time behavior of the solutions.\nCompared with the implicit exponential integrators, linearly implicit exponential integrators show a lower computational cost as illustrated by the damped NLS equation in long-term integration.\nThe computational advantages of the linearly implicit exponential integrators would be more significant for higher dimensional PDEs, which is the subject of future research as well as an extension to time-dependent damping."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {
63
+ "1": {
64
+ "figure_path": "2309.14184v2_figure_1.png",
65
+ "caption": "Figure 1: Solutions of the damped Burger\u2019s equation with EK.",
66
+ "url": "http://arxiv.org/html/2309.14184v2/x3.png"
67
+ },
68
+ "2": {
69
+ "figure_path": "2309.14184v2_figure_2.png",
70
+ "caption": "Figure 2: Damped Burger\u2019s equation (EK): mass (left), exact Hamiltonian (middle) and modified Hamiltonian (right).",
71
+ "url": "http://arxiv.org/html/2309.14184v2/x4.png"
72
+ },
73
+ "3": {
74
+ "figure_path": "2309.14184v2_figure_3.png",
75
+ "caption": "Figure 3: Damped Burger\u2019s equation (EK): residuals of mass (left), Hamiltonian (middle) and modified Hamiltonian (right).",
76
+ "url": "http://arxiv.org/html/2309.14184v2/x5.png"
77
+ },
78
+ "4": {
79
+ "figure_path": "2309.14184v2_figure_4.png",
80
+ "caption": "Figure 4: Damped Burger\u2019s equation (CIMP): residuals of mass (left), Hamiltonian (right).",
81
+ "url": "http://arxiv.org/html/2309.14184v2/x6.png"
82
+ },
83
+ "5": {
84
+ "figure_path": "2309.14184v2_figure_5.png",
85
+ "caption": "Figure 5: Solutions of the damped Burger\u2019s equation with Kahan\u2019s method (17).",
86
+ "url": "http://arxiv.org/html/2309.14184v2/x7.png"
87
+ },
88
+ "6": {
89
+ "figure_path": "2309.14184v2_figure_6.png",
90
+ "caption": "Figure 6: Damped Burger\u2019s equation with Kahan\u2019s method (17): residuals of linear conformal invariant (left), exact Hamiltonian (middle) and modified Hamiltonian (right).",
91
+ "url": "http://arxiv.org/html/2309.14184v2/x8.png"
92
+ },
93
+ "7": {
94
+ "figure_path": "2309.14184v2_figure_7.png",
95
+ "caption": "Figure 7: Solution of the damped KdV equation at final time.",
96
+ "url": "http://arxiv.org/html/2309.14184v2/x9.png"
97
+ },
98
+ "8": {
99
+ "figure_path": "2309.14184v2_figure_8.png",
100
+ "caption": "Figure 8: Damped KdV equation (EK): residuals of linear and quadratic conformal invariants (left), Hamiltonian (middle) and modified Hamiltonian (right).",
101
+ "url": "http://arxiv.org/html/2309.14184v2/x10.png"
102
+ },
103
+ "9": {
104
+ "figure_path": "2309.14184v2_figure_9.png",
105
+ "caption": "Figure 9: Solution of the damped NLS equation.",
106
+ "url": "http://arxiv.org/html/2309.14184v2/x11.png"
107
+ },
108
+ "10(a)": {
109
+ "figure_path": "2309.14184v2_figure_10(a).png",
110
+ "caption": "Figure 10: Damped NLS equation with \u03b3=5\u2062e\u22124\ud835\udefe5\ud835\udc524\\gamma=5e-4italic_\u03b3 = 5 italic_e - 4 (EAVF): residuals of mass (left), momentum (middle) and Hamiltonian (right).",
111
+ "url": "http://arxiv.org/html/2309.14184v2/x12.png"
112
+ },
113
+ "10(b)": {
114
+ "figure_path": "2309.14184v2_figure_10(b).png",
115
+ "caption": "Figure 10: Damped NLS equation with \u03b3=5\u2062e\u22124\ud835\udefe5\ud835\udc524\\gamma=5e-4italic_\u03b3 = 5 italic_e - 4 (EAVF): residuals of mass (left), momentum (middle) and Hamiltonian (right).",
116
+ "url": "http://arxiv.org/html/2309.14184v2/x13.png"
117
+ },
118
+ "11(a)": {
119
+ "figure_path": "2309.14184v2_figure_11(a).png",
120
+ "caption": "Figure 11: Damped NLS equation with \u03b3=5\u2062e\u22124\ud835\udefe5\ud835\udc524\\gamma=5e-4italic_\u03b3 = 5 italic_e - 4 (LIE): residuals of mass (left), momentum (middle) and Hamiltonians (right).",
121
+ "url": "http://arxiv.org/html/2309.14184v2/x14.png"
122
+ },
123
+ "11(b)": {
124
+ "figure_path": "2309.14184v2_figure_11(b).png",
125
+ "caption": "Figure 11: Damped NLS equation with \u03b3=5\u2062e\u22124\ud835\udefe5\ud835\udc524\\gamma=5e-4italic_\u03b3 = 5 italic_e - 4 (LIE): residuals of mass (left), momentum (middle) and Hamiltonians (right).",
126
+ "url": "http://arxiv.org/html/2309.14184v2/x15.png"
127
+ }
128
+ },
129
+ "validation": true,
130
+ "references": [
131
+ {
132
+ "1": {
133
+ "title": "Projected exponential Runge\u2013Kutta methods for preserving\ndissipative properties of perturbed constrained Hamiltonian systems.",
134
+ "author": "A. Bhatt.",
135
+ "venue": "Journal of Computational and Applied Mathematics, 394:113556,\n2021.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "2": {
141
+ "title": "Second order conformal symplectic schemes for damped Hamiltonian\nsystems.",
142
+ "author": "A. Bhatt, D. Floyd, and B. E. Moore.",
143
+ "venue": "Journal of Scientific Computing, 66(3):1234\u20131259, 2016.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "3": {
149
+ "title": "Exponential integrators preserving local conservation laws of PDEs\nwith time-dependent damping/driving forces.",
150
+ "author": "A. Bhatt and B. E. Moore.",
151
+ "venue": "Journal of Computational and Applied Mathematics, 352:341\u2013351,\n2019.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "4": {
157
+ "title": "Structure-preserving exponential Runge-Kutta methods.",
158
+ "author": "Ashish Bhatt and Brian E. Moore.",
159
+ "venue": "SIAM Journal on Scientific Computing, 39(2):A593\u2013A612, 2017.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "5": {
165
+ "title": "Linearly implicit local energy-preserving algorithm for a class of\nmulti-symplectic Hamiltonian PDEs.",
166
+ "author": "Jiaxiang Cai and Bangyu Shen.",
167
+ "venue": "Computational & Applied Mathematics, 41(1):Paper No. 33, 19,\n2022.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "6": {
173
+ "title": "Efficient schemes for the damped nonlinear Schr\u00f6dinger equation\nin high dimensions.",
174
+ "author": "Jiaxiang Cai and Haihui Zhang.",
175
+ "venue": "Applied Mathematics Letters, 102:106158, 7, 2020.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "7": {
181
+ "title": "Preserving energy resp. dissipation in numerical PDEs using the\n\u201daverage vector field\u201d method.",
182
+ "author": "E. Celledoni, V. Grimm, R. I. McLachlan, D. I. McLaren, D. O\u2019Neale, B. Owren,\nand G. R. W. Quispel.",
183
+ "venue": "Journal of Computational Physics, 231(20):6770\u20136789, 2012.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "8": {
189
+ "title": "Geometric properties of Kahan\u2019s method.",
190
+ "author": "E. Celledoni, R. I. McLachlan, B. Owren, and G. R. W. Quispel.",
191
+ "venue": "Journal of Physics. A. Mathematical and Theoretical,\n46(2):025201, 12, 2013.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "9": {
197
+ "title": "Linear energy-preserving integrators for Poisson systems.",
198
+ "author": "D. Cohen and E. Hairer.",
199
+ "venue": "BIT. Numerical Mathematics, 51(1):91\u2013101, 2011.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "10": {
205
+ "title": "A general framework for deriving integral preserving numerical\nmethods for PDEs.",
206
+ "author": "M. Dahlby and B. Owren.",
207
+ "venue": "SIAM Journal on Scientific Computing, 33(5):2318\u20132340, 2011.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "11": {
213
+ "title": "Linearly implicit local and global energy-preserving methods for\nPDEs with a cubic Hamiltonian.",
214
+ "author": "S\u00f8 lve Eidnes and Lu Li.",
215
+ "venue": "SIAM Journal on Scientific Computing, 42(5):A2865\u2013A2888, 2020.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "12": {
221
+ "title": "Linearly implicit structure-preserving schemes for Hamiltonian\nsystems.",
222
+ "author": "S\u00f8 lve Eidnes, Lu Li, and Shun Sato.",
223
+ "venue": "Journal of Computational and Applied Mathematics, 387:Paper No.\n112489, 12, 2021.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "13": {
229
+ "title": "Conformal structure-preserving method for damped nonlinear\nSchr\u00f6dinger equation.",
230
+ "author": "Hao Fu, Wei-En Zhou, Xu Qian, Song-He Song, and Li-Ying Zhang.",
231
+ "venue": "Chinese Physics B, 25(11):110201, 2016.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "14": {
237
+ "title": "Second order conformal multi-symplectic method for the damped\nKorteweg\u2013de Vries equation.",
238
+ "author": "Feng Guo.",
239
+ "venue": "Chinese Physics B, 28(5):050201, 2019.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "15": {
245
+ "title": "Optimal error estimate of a conformal fourier pseudo-spectral method\nfor the damped nonlinear Schr\u00f6dinger equation.",
246
+ "author": "Chaolong Jiang, Wenjun Cai, and Yushun Wang.",
247
+ "venue": "Numerical Methods for Partial Differential Equations,\n34(4):1422\u20131454, 2018.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "16": {
253
+ "title": "Unconventional numerical methods for trajectory calculations.",
254
+ "author": "W. Kahan.",
255
+ "venue": "Technical report, Computer Science Division and Department of\nMathematics, University of California, Berkeley, 1993.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "17": {
261
+ "title": "Unconventional schemes for a class of ordinary differential equations\nwith applications to the Korteweg-de Vries equation.",
262
+ "author": "W. Kahan and Ren-Chang Li.",
263
+ "venue": "Journal of Computational Physics, 134(2):316 \u2013 331, 1997.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "18": {
269
+ "title": "A new symmetric linearly implicit exponential integrator preserving\npolynomial invariants or Lyapunov functions for conservative or dissipative\nsystems.",
270
+ "author": "Lu Li.",
271
+ "venue": "Journal of Computational Physics, 449:Paper No. 110800, 13,\n2022.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "19": {
277
+ "title": "Splitting methods.",
278
+ "author": "R. I. McLachlan and G. R. W. Quispel.",
279
+ "venue": "Acta Numerica, 11:341\u2013434, 2002.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "20": {
285
+ "title": "Multi-conformal-symplectic PDEs and discretizations.",
286
+ "author": "B. E. Moore.",
287
+ "venue": "Journal of Computational and Applied Mathematics, 323:1\u201315,\n2017.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "21": {
293
+ "title": "Exponential integrators based on discrete gradients for linearly\ndamped/driven Poisson systems.",
294
+ "author": "B. E. Moore.",
295
+ "venue": "Journal of Scientific Computing, 87(2):Paper No. 56, 18, 2021.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "22": {
301
+ "title": "Conformal multi-symplectic integration methods for forced-damped\nsemi-linear wave equations.",
302
+ "author": "Brian E. Moore.",
303
+ "venue": "Mathematics and Computers in Simulation, 80(1):20\u201328, 2009.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "23": {
309
+ "title": "Conformal conservation laws and geometric integration for damped\nHamiltonian PDEs.",
310
+ "author": "Brian E. Moore, Laura Nore\u00f1a, and Constance M. Schober.",
311
+ "venue": "Journal of Computational Physics, 232(1):214\u2013233, 2013.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "24": {
317
+ "title": "A new class of energy-preserving numerical integration methods.",
318
+ "author": "G. R. W. Quispel and D. I. McLaren.",
319
+ "venue": "Journal of Physics. A. Mathematical and Theoretical,\n41(4):045206, 7, 2008.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "25": {
325
+ "title": "Linearly implicit methods for Allen-Cahn equation.",
326
+ "author": "M. Uzunca and B. Karas\u00f6zen.",
327
+ "venue": "Applied Mathematics and Computation, 450:Paper No. 127984, 11,\n2023.",
328
+ "url": null
329
+ }
330
+ }
331
+ ],
332
+ "url": "http://arxiv.org/html/2309.14184v2"
333
+ }
20240318/2310.03173v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2310.04152v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2310.05155v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2310.05773v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2310.08044v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2310.12486v2.json ADDED
@@ -0,0 +1,219 @@
1
+ {
2
+ "title": "Trapped acoustic waves and raindrops: high-order accurate integral equation method for localized excitation of a periodic staircase",
3
+ "abstract": "We present a high-order boundary integral equation\n(BIE)\nmethod for the frequency-domain acoustic scattering of a point source by a singly-periodic, infinite, corrugated boundary.\nWe apply it to the accurate numerical study of acoustic radiation in the neighborhood of a sound-hard two-dimensional staircase modeled after the El Castillo pyramid.\nSuch staircases support trapped\nwaves which travel along the surface and decay exponentially away from it.\nWe use the array scanning method (Floquet\u2013Bloch transform) to recover\nthe scattered field as an integral over the family of\nquasiperiodic solutions parameterized by on-surface wavenumber.\nEach such BIE solution requires\nthe quasiperiodic Green\u2019s function, which we evaluate using an efficient integral representation of lattice sum coefficients.\nWe avoid the singularities and branch cuts present in the array scanning integral by\ncomplex contour deformation.\nFor each frequency, this enables a solution accurate to around 10 digits in a couple of seconds.\nWe propose a residue method to extract the limiting powers carried by trapped modes far from the source.\nFinally, by computing the trapped mode dispersion relation,\nwe use a simple ray model to explain an observed acoustic \u201craindrop\u201d effect\n(chirp-like time-domain response).",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Periodic surface geometries\nhave long been exploited to manipulate electromagnetic\nand acoustic waves. Examples on small and large length-scales include photonic\ncrystals busch2007periodic ###reference_b1###; jobook ###reference_b2###, acoustic metamaterials\nchen2017general ###reference_b3###; ji2022recent ###reference_b4###, diffraction gratings\nlin2019integral ###reference_b5###, antennae munk1979plane ###reference_b6###; he2007radiation ###reference_b7###, anechoic\nchambers and materials herrero2020sound ###reference_b8###, and amphitheatres declercq2007acoustic ###reference_b9###.\nHowever, the accurate numerical solution of radiation\nnear such geometries faces several challenges.\nThe domain\u2014a perturbation of the upper half-space\u2014is unbounded vertically,\nthus its truncation must incorporate\nthe correct upward-going radiation conditions.\nThe possibility of coupling to waves trapped along the surface,\nwhich in two dimensions (2D) do not decay at all,\nmeans that reflection errors would result by naive\ntruncation to any finite number of unit cell periods.\nThese trapped waves may also cause resonances with high\nparameter sensitivity shipmanreview ###reference_b10###.\nHaving non-periodic excitation breaks the periodicity of the problem, which\nmakes periodization impossible at first glance. 
However, it is possible to\ndeconstruct the single point-source solution into sets of quasiperiodic\nsolutions (the array scanning or Floquet\u2013Bloch method).\nAs in any wave problem,\nhigh frequencies may demand a large discretization density chandler2012numerical ###reference_b11###.\nFinally, the staircase model that we focus on here introduces corner\nsingularities that must be addressed in any high-order solver.\nSince solvers are\noften part of a design optimization loop to tune\nmaterial or shape parameters, they must be robust and efficient.\nHowever, the above difficulties mean that nonconvergent Rayleigh expansions\nrichards2018acoustic ###reference_b12###, or ray asymptotic methods\ntsingos2007extending ###reference_b13### are often used in acoustic settings.\nThe purpose of this paper is twofold:\n1) We present a high-order numerical\nboundary integral (BIE) method for 2D acoustic scattering\nof a point source from a singly-periodic geometry, in particular\ncombining it with a high-order accurate array scanning method\nvia contour deformation.\n2) We apply this to the accurate numerical study of acoustic radiation in the neighborhood of a sound-hard 2D staircase model,\nin order to understand time- and frequency-domain phenomena\ndue to nearby point-source excitation.\nFor the time-domain we compute the dispersion relation of trapped modes\nand use it to explain an observed chirp effect calleja ###reference_b14###; hellerbook ###reference_b15###;\nfor the frequency domain we propose a method\nto extract the amplitudes of the left- and right-going trapped modes,\nusing residues in the complexified on-surface wavenumber.\nArchitecturally, solid staircases with sound-hard surfaces are\nclearly very common;\nyet, we have not found an accurate numerical study of acoustic trapping\nand guiding in their vicinity.\nBy combining high-order corner quadratures and lattice sums,\nwe show that a BIE can achieve around 10 digits of accuracy with\nsolution times of 
a couple of seconds per excitation frequency.\nWe consider a 2D model for\nlinearized acoustics in a simply-connected region \ncontaining a constant-density gas with constant sound speed \ncoltonkress ###reference_b16###; howe1998acoustics ###reference_b17###,\n(coltonkress_scatt, ###reference_b18###, Sec. 3.1).\nThis region lies above a connected corrugated boundary extending\nwith spatial period in the direction,\nand is unbounded in the positive direction;\nsee fig. 1 ###reference_###.\nThe boundary has a maximum and minimum coordinate.\nOur paradigm example will be the slope- right-angle staircase shown\nin the figure, whose repeating element (unit cell) comprises\ntwo equal-length line segments at an angle to each other.\nWe write .\nThis models a 3D situation in which both geometry and source\nare invariant in the 3rd (out of plane) direction.\nThe tools we present mostly concern frequency domain solutions,\nbut we will also use them to understand a chirp phenomenon for the\nfollowing time domain problem.\nConsider an impulse excitation by a point source at time \nand location ; this is a good model\nfor a clap or footstep in the acoustic application.\nThe acoustic pressure then obeys the wave equation\nwith a source,\nwhere is the spatial Laplacian,\nand by a rescaling of time we set the sound speed to .\nWe assume quiescence before the excitation: for .\n111Note that 1 ###reference_### could thus be rephrased as the homogeneous\nwave equation in with initial conditions ,\n.\nThe sound-hard (Neumann) boundary condition is\nwhere is the normal derivative, \nbeing the unit boundary normal vector pointing into .\nThis arises physically since the normal component of the fluid velocity\nvanishes; it is a good approximation to an air-solid interface\n(kaltenbacher2018computational, ###reference_b19###, Ch. 
1).\nTaking the Fourier transform of 1 ###reference_### and 2 ###reference_### with respect to \ngives the main focus of this paper, the inhomogeneous Helmholtz\nNeumann boundary value problem (BVP),\nand radiation conditions explained below.\nHere is the frequency, and the wavelength is .\nFrom the -dependence of its solution \none may understand features of the above time-domain solution .\n###figure_1### ###figure_2### The remainder of this extended Introduction\nsets up definitions needed to convert 3 ###reference_### and 4 ###reference_###\ninto BVPs posed on a bounded unit cell,\nand overviews the rest of the paper.\nThe solution to 3 ###reference_### and 4 ###reference_###\nmay be decomposed into a linear combination of\nquasiperiodic solutions,\nvia what is known as the array scanning method\nin the engineering literature munk1979plane ###reference_b6###; rana1981current ###reference_b20###; capolino2005mode ###reference_b21###,\nor inverse Floquet\u2013Bloch transform in mathematics lechleiter2017convergent ###reference_b22###; ruming21 ###reference_b23###.\nThis family of quasiperiodic solution is parameterized\nby , the horizontal (along-surface) wavenumber,\nand each member of the family has the phased translational symmetry\n, for all \nand ,\nwhere the Bloch phase is .\nThe set of plane waves of the form \nobeying the homogeneous Helmholtz equation and quasiperiodic symmetry is discrete, namely , where\nis the shifted lattice of horizontal wavenumbers with the same ,\nand\nis the vertical wavenumber.\n is either positive real (upwards-propagating), zero (horizontally propagating), or positive imaginary (upwards-decaying).\nThen, for a given , the quasiperiodic solution solves the\nBVP\nwhere is any height lying above the support of the source and above\n, and are amplitudes of the outgoing plane waves.\nThe source must naturally\nalso be quasiperiodic: it is a quasiperiodized version of the\ndesired right-hand side , namely\nwhere is the lattice 
vector.\nFixing , the above BVP has a unique solution\nfor all apart from possibly a discrete set\n(which will correspond to trapped modes, and is discussed\nin detail in section 3 ###reference_###) bonnetBDS ###reference_b24###.\nThe above also extends to ; see (shipmanreview, ###reference_b10###, Sec. 4.1).\nEach member of the above quasiperiodic solution family\nis the acoustic response of the periodic geometry to\nan infinite array of phased point sources.\nThe key observation (giving \u201carray scanning\u201d its name)\nis that one may cancel out all but the central\nsource by integration over in the first Brillouin zone:\nThis exploits the fact that for , and zero\nfor .\nSince this is the right-hand side in 7 ###reference_###,\nby linearity, performing the same integral over the solution\nfamily ,\nrecovers , the solution to 7 ###reference_### and 18 ###reference_### with\nupward-propagating radiation conditions.\nIn the case where is not unique for some discrete\nreal values, the integral 13 ###reference_### has poles\non the real axis corresponding to\nevanescent waves trapped on the corrugated surface.\nGeneral conditions for the existence of such trapped modes under Neumann boundary conditions are not known (wilcox, ###reference_b25###, p.11\u201312),\nalthough they are well known and proven to exist for certain geometries\nevans93 ###reference_b26###; gotlib00 ###reference_b27###.\nThe mathematical formulation of causal\noutgoing radiation conditions in the presence of trapped modes\nis subtle, relying on the limiting absorption principle\n(for dielectric cases see kirsch17 ###reference_b28###; epsteinopen2 ###reference_b29###).\nWe did not find the sound-hard case in the literature,\nbut, following Zhang ruming21 ###reference_b23###, we\ndefine this by the topology of the integration contour with respect to the poles.\nThis is presented in section 4 ###reference_###.\nThe integration interval \nis known as the first Brillouin zone,\nand covers the family of 
solutions.\nThis is because is the same for all in the set\n5 ###reference_### (since is the same, and so is the\nset of plane waves in the radiation condition, up to relabeling).\nThe integration may also be thought of as over on the unit\ncircle ruming21 ###reference_b23###.\nWe present a high-order accurate\nquadrature method for the above integral 13 ###reference_###. This is\ncrucial for efficiency, because each such quadrature node demands a new\nBVP solution of 7 ###reference_###, 8 ###reference_###, 9 ###reference_### and 10 ###reference_###.\nHere,\ncare is needed due to the possibility of poles mentioned in the above remark,\nbut also because the\nintegrand contains two square-root singularities (with associated branch cuts)\nat so-called (Rayleigh\u2013)Wood anomalies\nwood ###reference_b30###; fano1941theory ###reference_b31###; hessel1965new ###reference_b32###.\nThese anomalies are defined as pairs where\n for some in 6 ###reference_###\n(see fig. 2 ###reference_###),\nresulting in a horizontal plane wave.\nAlthough the quasiperiodic BVP remains well-posed,\nit poses challenges for Green\u2019s function based numerical methods\narens06 ###reference_b33###; brunohaslam09 ###reference_b34###; barnett_repr_QPS_2D ###reference_b35###; delourme14 ###reference_b36###; cho2015robust ###reference_b37###.\nFollowing Zhang ruming21 ###reference_b23###\nin the setting of closed waveguides,\nwe propose contour deformation to complex \nto avoid such anomalies and poles.\nRather than the piecewise-smooth contours of that work, we\nuse a more efficient analytic contour deformation,\nand optimize a deformation parameter to minimize the number of nodes needed;\nsee section 4 ###reference_###.\nBoundary integral equation (BIE) methods\ncoltonkress_scatt ###reference_b18###; kress1989linear ###reference_b38### are especially suited for the high-order accurate solution of each quasiperiodic BVP.\nSince they operate by first converting the PDE to a boundary integral, then 
discretizing it to yield a dense linear\nsystem, they only require the evaluation of a 1D integral instead of the\ndiscretization of a (truncated) infinite 2D domain. This reduction of dimensionality significantly reduces\nthe number of unknowns, and allows for an easy increase in order of accuracy via high-order quadrature rules.\nFurthermore, the staircase geometry has two corners per period,\none of which induces fractional-power-law singularities in ;\nthese are easily handled with BIE.\nIn contrast, finite difference (FD) or finite element (FEM) methods would require\nmeshing of the domain in a manner respecting the corner singularity,\nand explicit handling of the radiation condition 10 ###reference_###.\nBoth FD lechleiter2017convergent ###reference_b22### and FEM zhang2018high ###reference_b39###\nhave been combined with array scanning to solve scattering from periodic surfaces,\nbut using only low-order spatial discretization.\nThere exist other boundary-based approaches.\nThese include meshfree methods such as the method of\nfundamental solutions (MFS) fairweather_mfs ###reference_b40###; cheng_mfs_overview ###reference_b41###; barnett2008stability ###reference_b42###, and the plane waves method alves2005numerical ###reference_b43###.\nThese share common roots with BIE methods in that the solution is constructed\nfrom Helmholtz solutions in the domain,\nbut generally give ill-conditioned systems.\nFor scattering by periodic surfaces, a family of methods exists based on the Rayleigh\nhypothesis (Rayleigh methods) (petit, ###reference_b44###, p. 17), millar1973rayleigh ###reference_b45###.\nThese assume that an expansion of the\nscattered field like 10 ###reference_### is valid close to and on the surface, and\napproximate the solution as a truncation of 10 ###reference_###. 
This assumption, however, does not generally hold.\nFast approximate solutions are most commonly derived using the\nHelmholtz\u2013Kirchhoff approximation meecham1956use ###reference_b46###, which assumes that\neach point on the boundary scatters as if it were a plane at a slope matching\nthat of the boundary. This represents a short-wavelength limit and is closely\nrelated to geometrical acoustics\nkeller1958geometrical ###reference_b47###; keller1962geometrical ###reference_b48###; its validity has been\nstudied thoroughly richards2018acoustic ###reference_b12###.\nTo use boundary integral methods, one\nsplits the physical solution as , where\nthe incident wave\n solves 7 ###reference_###\nwith free-space radiation conditions and no boundary conditions,\nand is thus known analytically,\nwhilst the unknown scattered wave\n solves the homogeneous version of 7 ###reference_###,\nwith inhomogeneous boundary condition on ,\nand 9 ###reference_### and 10 ###reference_###.\nA BIE is then used to solve this BVP for , as presented in section 2 ###reference_###.\nA BIE formulation on the infinite surface would not be numerically\nfeasible, and its truncated solution converges very slowly\nsince the contributions of distant sources decay only\nlike .\nHowever, the computation of the quasiperiodic solution may\nbe reduced to a single unit cell of the boundary. 
This periodization process involves replacing in the integral kernels the\nfree-space Green\u2019s function , defined as the radiative solution to\nwith the quasiperiodic\nGreen\u2019s function defined as the upwards- and downwards-radiating\nsolution to\nThe periodic Green\u2019s function is therefore an infinite phased sum of single point-source Green\u2019s functions.\nThe sum\nis slowly convergent and cannot be used\ndirectly linton98 ###reference_b49###.\nYet a wide range of methods exist\nfor the rapidly-convergent approximation of ,\nincluding reformulation in terms of quickly convergent lattice sums\nlinton2010lattice ###reference_b50###.\nIn section 2.1 ###reference_###, we discuss one such efficient computation of \nbased on yasumoto1999efficient ###reference_b51###.\nBy the use of contour deformation to complex ,\nwe will avoid Wood anomalies where 15 ###reference_### does not exist.\nWith the periodic Green\u2019s function in\nhand, section 2.2 ###reference_### describes the boundary layer representation of the\nsolution and the discretization of the boundary, including the treatment of the singularities present in\nthe boundary integral at the corners of the boundary.\nIn general, these may be handled analytically,\ne.g. 
via Gauss\u2013Jacobi quadrature tsalamengas2016gauss ###reference_b52###,\nor conformal mapping driscoll2002schwarz ###reference_b53###; or\nhandled to high order by generalized Gauss\nquadrature bremer2010universal ###reference_b54###, recursive compressed inverse\npreconditioning helsing2008corner ###reference_b55###, or rational function approximation\ngopal2019new ###reference_b56###.\nIn section 2.3 ###reference_### that follows, we describe how the total field is\nreconstructed outside the unit cell, and verify the high-order accuracy of the\nsolution with convergence tests which exploit flux conservation.\nA distributed spatial source (right-hand side in 7 ###reference_###)\nmay also be handled by a slight\ngeneralization of our framework:\none numerically computes the incident wave by\nconvolution of the source function with . The\nBIE solutions for then proceed as before.\nIn section 3 ###reference_### we will show how numerically to\nlocate eigenparameters for which trapped modes\nexist.\nSuch modes are eigenfunctions, i.e., nontrivial homogeneous solutions to\nthe BVP 7 ###reference_###, 8 ###reference_###, 9 ###reference_### and 10 ###reference_###.\nAs 1 ###reference_1### implied, at each , such values\nare vital to know since they induce poles in the array scanning\n(inverse Floquet\u2013Bloch) integral.\nYet, the mode dispersion relation\u2014the dependence of \nvs for trapped modes\u2014will also provide a model to\nunderstand the \u201craindrop effect\u201d (chirp-like response)\nfor the time-domain problem 1 ###reference_### and 2 ###reference_###.\nFor this reason,\nwe present a strategy to solve for the trapped at a given .\nThis involves rootfinding on\nthe Fredholm determinant of the -dependent integral operator, as done in zhaodet ###reference_b57###.\nA trapped mode may be reconstructed from the eigenfunction\nof the Fredholm 2nd-kind integral equation.\nFrom the dispersion relation, we\nderive their group velocities: the speed at which a 
given trapped mode\npropagates along the surface.\nWe use an approximate ray model (neglecting amplitudes) to predict\nthe arrival times of different frequencies at the bottom of a staircase\nmodeled after the El Castillo pyramid,\nin order to understand the chirp-like sounds observed.\nWe also pose, and answer in Section 5 ###reference_###, the following:\nhow can one efficiently use the array scanning method to report\nthe power carried away by (left- or right-going) trapped modes,\nas opposed to power radiated upwards away from the surface?\nCharacterizing this division of radiated power as a function of frequency\nis crucial in related engineering applications such as\npoint-source radiators in nanophotonics and acoustic metamaterials.\nFinally, we discuss avenues for future work in section 6 ###reference_###."
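The Brillouin-zone cancellation that underlies the array scanning method is easy to verify numerically. The following sketch (not the paper's code; period d = 1 and the grid size are assumed) checks that averaging the Bloch phases over kappa in [-pi/d, pi/d] retains only the n = 0 (central) source:

```python
import numpy as np

d = 1.0                                        # spatial period (assumed value)
kappa = np.linspace(-np.pi/d, np.pi/d, 401)    # first Brillouin zone

def trapz(y, x):
    """Composite trapezoid rule."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

# (d/2pi) * int_{-pi/d}^{pi/d} e^{i kappa n d} d kappa = 1 if n == 0, else 0:
# this is the identity that cancels every source in the phased array except
# the central one when the quasiperiodic solutions are integrated over kappa.
vals = {n: d/(2*np.pi) * trapz(np.exp(1j*kappa*n*d), kappa)
        for n in range(-2, 3)}
print(vals)   # ~1 for n = 0, ~0 otherwise
```

Because the integrand is a full period of a complex exponential, the trapezoid rule here is exact to roundoff.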
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "BIE formulation, periodization, and discretization",
15
+ "text": "Here we present the numerical method for solving the quasiperiodic\nBVP 7 ###reference_###, 8 ###reference_###, 9 ###reference_### and 10 ###reference_###, at a given and .\nThe physical wave (potential) is written .\nFixing the source ,\nwe take the incident wave to be the quasiperiodic function\nwhere we use the notation , recalling\n15 ###reference_###.\nThe PDE and boundary condition for 7 ###reference_### and 8 ###reference_### imply that\n solves the BVP\nwith obeying the quasiperiodicity and radiation conditions 9 ###reference_### and 10 ###reference_###.\nSince the PDE is now homogeneous, a BIE solution becomes possible.\nThe unknown scattered field is\nrepresented by\na quasiperiodic single-layer potential\nwhere the usual fundamental solution has been replaced with\nits quasiperiodic counterpart,\nand is a single unit cell (period) of the boundary.\nAs approaches the boundary from either side, let us define\nHere is the unit normal to the boundary at the target point .\nThe above representation then satisfies the jump relations (see (kress1989linear, ###reference_b38###, Ch 6.3))\nwhere denotes the adjoint double-layer operator on ,\nnamely the operator with kernel\n taken in the principal value sense.\nThe single-layer representation for automatically satisfies 17 ###reference_###, and\nthe boundary condition can be written in terms of the single-layer jump\ncondition as\nThis is a Fredholm integral equation of the second kind.\nThe expert reader may wonder why a combined-field representation\n(\u201cCFIE\u201d) is not\nneeded here to prevent spurious resonances,\nas in the case of a bounded obstacle\ncoltonkress ###reference_b16###.\nIn fact 19 ###reference_### is sufficient when the unbounded\n is a graph of a function, because then the Dirichlet\nBVP in the complementary domain \nis unique for any ; see 4 ###reference_4### below and its proof.\n###figure_3###"
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Evaluation of quasiperiodic Green\u2019s functions",
21
+ "text": "Here we describe an efficient numerical scheme to\nevaluate appearing in the above BIE,\nusing a local expansion about the origin,\nwith lattice sum coefficients evaluated following\nyasumoto1999efficient ###reference_b51###.\nNote that, since it is only valid up to a vertical height comparable to the unit\ncell width, a different plane-wave representation\ngiven in section 2.3 ###reference_### will be used for values beyond this.\nRecall the definitions 5 ###reference_### and 6 ###reference_###, and that\na Wood anomaly is a point in the parameter\nplane where crosses .\nAt such anomalies \ndoes not exist (e.g. see linton98 ###reference_b49###; barnett_repr_QPS_2D ###reference_b35###).\nWe will thus assume that is not at a Wood anomaly\n(and direct the reader to barnett_repr_QPS_2D ###reference_b35###; delourme14 ###reference_b36### for\nmethods for quasiperiodic problems precisely at Wood anomalies).\nThe quasiperiodic Helmholtz Green\u2019s function in 2D is given by linton98 ###reference_b49###\nThe sum\u2019s slow, conditional\nconvergence means it is of little practical use. We follow the approach\ndescribed in yasumoto1999efficient ###reference_b51### to rewrite it in terms of\nrapidly\nconvergent\nlattice sums. Our derivation, however, differs in two key ways; we therefore\nreproduce some of the calculation below for convenience.\nGraf\u2019s\naddition theorem (dlmf, ###reference_b58###, (10.23.7))\nis used first to expand each of the\n centered on source point with an coordinate outside of the\nunit cell, , around the equivalent point inside the unit\ncell. The resulting expression contains the Bessel functions and\n. 
Terms multiplying the same-order are collected; their\ncoefficient, the th order lattice sum , is expressed as a contour integral.\nFor faster convergence, following barnett_repr_QPS_2D ###reference_b35###,\nwe instead exclude from the sum in\n25 ###reference_### and add them together directly; this can be thought of as\nsplitting the periodic Green\u2019s function into a near and far component:\nwith\nwhere . We use Graf\u2019s theorem on the remaining terms to write\nwhere is the angle describing the source-target displacement vector\n in polar coordinates.\nThe lattice sums are independent of the source-target displacement,\ntherefore they only need to be\ncomputed once for each value of and . We follow\nyasumoto1999efficient ###reference_b51### to derive the integral representation\nwith and\nThe integrals appearing in may be evaluated numerically using the trapezoidal rule.\nThe reason behind 29 ###reference_### being an approximate expression instead of an\nequality is that the upper limit of the integrals, , has been truncated from\n, exploiting the fact that the integrand decays quickly for \ndue to the exponential factor in .\nThree convergence parameters are needed here: the number of\nterms at which the sum in (in 28 ###reference_###) is truncated, the\nupper limit of the integrals in 29 ###reference_###,\nand the number of nodes used\nin the trapezoidal rule to calculate 29 ###reference_###.\nFor the typical and accuracies that we present, we found , , and sufficient.
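The slow decay that motivates this lattice-sum machinery is easy to see directly: in the direct phased image sum for the quasiperiodic Green's function, the n-th term decays only like 1/sqrt(n), via the large-argument behavior of the Hankel function. A sketch (all parameter values and the displacement are assumptions of this illustration):

```python
import numpy as np
from scipy.special import hankel1

k, d, kappa = 3.0, 1.0, 0.4      # wavenumber, period, Bloch wavenumber (assumed)
tx, ty = 0.3, 0.2                # target-minus-source displacement (assumed)

def term(n):
    """n-th phased image source in the direct sum for the quasiperiodic G."""
    r = np.hypot(tx - n*d, ty)
    return np.exp(1j*kappa*n*d) * 0.25j * hankel1(0, k*r)

# |H0(kr)| ~ sqrt(2/(pi k r)), so individual terms decay only like 1/sqrt(n):
mags = {n: abs(term(n)) for n in (100, 400)}
print(mags)   # the n = 400 term is still of order 1e-2: the sum converges slowly
```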
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Discretizing the boundary integral",
27
+ "text": "We discretize and solve the integral equation 24 ###reference_### using the\nNystr\u00f6m method (kress1989linear, ###reference_b38###, Sec 12.2),\nsummarized below. First, 24 ###reference_### is written in the standard form\nwhere is the appropriate kernel function, and the boundary is\nparameterized by that runs from to .\nThen, the unknown density function is approximated by another function that obeys\nwhere the integral has been replaced with a quadrature formula with nodes\n and weights , and the kernel function with a rank- operator. Then\nthe values of at the nodes , satisfy the linear system\nor in matrix notation,\nwith .\nThen if any vector\n\nsolves the above system, then at any can be\nreconstructed as\nSubstituting the kernel associated with our exterior Helmholtz problem with\nboundary , the matrix in 35 ###reference_### becomes:\nwhile the components of are given by\nFinally, the gradient of , necessary for filling the -matrix in 37 ###reference_###, is\nwhere , is\na unit normal to the displacement vector , and is a unit vector in the \ndirection.\nThe key task for achieving high-order accuracy is in choosing the appropriate quadrature nodes and weights appearing in 33 ###reference_###.\nFor smooth boundaries, efficient quadrature rules are known,\nbased on either the global periodic trapezoid rule, or high-order\npanel-wise quadrature.\nFor boundaries with corners, however, the density function is singular\nat the corners. The closeness of this singularity limits the radius of the Bernstein ellipse associated with the interpolant used in Gaussian quadrature rules, which in turn degrades the accuracy of\ninterpolation and quadrature (atap, ###reference_b59###, Ch 8, 19). To maintain the order of\naccuracy whilst keeping the number of quadrature nodes the same, the size of\nthe quadrature interval needs to be reduced as the corner is approached, i.e. 
the quadrature grid needs to be refined according to the features of the\nboundary greengard2014fast ###reference_b60###.\nWe divide the two sides of the unit cell boundary into equally-spaced\npanels, and use -th order Gauss\u2013Legendre nodes and weights on each, i.e. and are always the Gauss\u2013Legendre nodes and weights on\nthe standard interval . Then\nfor each panel with a corner as an endpoint, we divide the panel in a\n ratio with , using the same -th order Gauss\u2013Legendre\nscheme on each. With (dyadic refinement), the net error after\n levels of refinement will be\n, where is a given\nnumerical precision greengard2014fast ###reference_b60###.\nMoving from a single set of global quadrature nodes to panel quadrature with panels, each with nodes, modifies the expression 34 ###reference_### to\nwhere is the -th Gauss\u2013Legendre node on the -th panel,\n, , are the corresponding density, boundary\ndata, and quadrature weights, and is the value of the\nkernel associated with the given pair of quadrature nodes.\nTypically, different quadrature rules are needed depending on where \nand (the \u201ctarget\u201d and \u201csource\u201d points) lie relative to each\nother bremer2010universal ###reference_b54###; atkinson1997numerical ###reference_b61###, with special care\nrequired if they lie on adjacent or the same panel.\nThe need for this can be seen by inspecting 39 ###reference_### as : simple Gauss\u2013Legendre rules cannot capture the (weak) singularity\nthat emerges in this limit, therefore the close-to-diagonal entries in will\nnot be accurate. In the special case of the staircase, however, it is possible\nto achieve higher order accuracy without invoking special quadrature rules:\nsince quadrature panels are straight lines, the interaction between source and target\npoints on the same or collinear panels vanishes. 
This is\nclear from 37 ###reference_### and 39 ###reference_###: is\nperpendicular to in this case.\nDecrease in accuracy can arise from\ncatastrophic cancellation if ,\ne.g. nodes lying on small panels on either side of a\ncorner. To mitigate this, we parameterize the quadrature panels such that the\nquadrature nodes\u2019 positions are measured relative to the nearest corner.\nNote that reconstructing the solution close to the boundary\nwould require special quadrature rules, since the position\nvectors of the target and source points are not necessarily collinear. Since such special rules are beyond the scope of this paper, the numerical solution will not be accurate\nwithin roughly the length of the nearest boundary panel.\nFortunately this will not prevent us from extracting the trapped power\nto full accuracy.\n###figure_4###"
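The geometric error decay promised by dyadic refinement can be illustrated on a one-dimensional model problem with a fractional-power integrand mimicking the corner behavior of the density (the sqrt model, panel counts, and order p = 16 are assumptions of this sketch, not the paper's parameters):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Composite Gauss-Legendre quadrature on [0,1] with panels refined dyadically
# toward the corner-type singularity of f at x = 0 (toy model of the density).
f = lambda x: np.sqrt(x)          # fractional-power behavior near a corner
exact = 2.0/3.0
p = 16                            # nodes per panel
xg, wg = leggauss(p)

def dyadic_quad(levels):
    # panel breakpoints 0, 2^-levels, ..., 1/4, 1/2, 1 (sigma = 1/2 refinement)
    bk = np.concatenate(([0.0], 0.5**np.arange(levels, -1, -1.0)))
    total = 0.0
    for a, b in zip(bk[:-1], bk[1:]):
        total += np.sum(wg * f((b-a)/2*xg + (a+b)/2)) * (b-a)/2
    return total

errs = [abs(dyadic_quad(L) - exact) for L in (2, 6, 10)]
print(errs)   # errors fall geometrically with the number of refinement levels
```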
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Reconstructing the solution",
33
+ "text": "Once the density evaluated at the boundary nodes, , is\nobtained, on may compute the scattered field at a point inside the unit cell from its single-layer representation,\n(After which, .) The lattice sum representation of\n, and hence the above expression, quickly loses accuracy outside of the\ncircle due to the application of Graf\u2019s\naddition theorem222This would be a smaller, radius- circle\nwere not excluded from 28 ###reference_###. .\nAbove the unit cell, the upwards propagating radiation condition\nmay be exploited as follows. Let for and . Evaluating the radiation condition at , notice that\nafter rearranging, it takes the form of a discrete Fourier transform:\nwith . Therefore, using the convention from (numerical_rec, ###reference_b62###, Ch. 12.1),\nand\nIn the th neighboring unit cell, i.e. (at any given ), the solution is found using the quasiperiodicity\ncondition,"
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Convergence check via flux conservation",
39
+ "text": "In the absence of sources of sinks inside a closed boundary , the net\nacoustic power (flux) leaving is zero. How close it is to zero numerically can\nbe used to measure the accuracy of the method and investigate\nconvergence in terms of the size of the linear system\n35 ###reference_###. The acoustic power passing through a surface is333See shipmanreview ###reference_b10###, or follow the derivation in kaltenbacher2018computational ###reference_b19###: start from (95), then substitute in (92), the 3D generalization of (82), and use harmonicity.\nTo test the convergence of the method above, we choose the incident wave to be\na plane wave, , where is a\nunit vector describing its direction of travel.\nThis corresponds to the limit of moving a quasiperiodic point source array\nto infinity in the vertical direction.\nSpecifically,\n with . Then .\nLet \nbe the perimeter of the unit cell as shown in fig. 1 ###reference_###, bounded from below by the\nstair boundary, above by the line , and\nfrom the two sides by . The net flux through the two\nsides has to be zero by symmetry, and it is also zero across the lower edge of\n due to Neumann boundary conditions. Therefore we have\nIn fig. 4 ###reference_### we evaluate as we refine the\ncorner-adjacent panels with . Initially there are panels on each\nside with nodes on each, and after levels of refinement, there are\n panels and a total of nodes. The experiment\nused , and .\nThe figure confirms exponential convergence and suggests that around levels of\nrefinement are needed for digits of accuracy, and around for\n digits, at which point the system size is still smaller than . The reduction of error stops at this point, which is roughly consistent\nwith the spacing between the closest quadrature nodes approaching machine\nprecision.\n###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9###"
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Trapped acoustic modes and the raindrop effect",
45
+ "text": "In this section we describe and test a Fredholm determinant method to find\nevanescent modes trapped by the corrugated sound-hard interface.\nSuch modes are eigenfunctions, i.e.,\nnontrivial solutions to the homogeneous\nquasiperiodic BVP 7 ###reference_###, 8 ###reference_###, 9 ###reference_### and 10 ###reference_###,\nfor some and .\nTheir parameters form a continuous families, thus curves in the\n plane, known as the band structure\njobook ###reference_b2###; shipmanreview ###reference_b10###.\nEach curve may be described by\nits trapped mode frequency ,\ncommonly referred to as a dispersion relation.\nFor staircase geometries we find empirically that there is only a single\nsuch trapped frequency at each wavenumber\n in the Brillouin zone,\nas shown in fig. 5 ###reference_###.\nRecall that the quasiperiodic BVP has the\nBIE formulation 24 ###reference_###.\nWith homogeneous boundary data this becomes the BIE\nwith the adjoint double-layer operator from the previous section.\nOne might then hope that the condition for a trapped mode to exist\nis equivalent to the existence of a nontrivial density solving\n48 ###reference_###. We now show that this is indeed so,\nthus that the proposed numerical method is robust\n(free of spurious resonances).\nFix and .\nThere is a trapped mode (i.e., a nontrivial solving the\nhomogeneous quasiperiodic BVP 7 ###reference_###, 8 ###reference_###, 9 ###reference_### and 10 ###reference_###)\nif and only if .\nLet be a nontrivial solution to ,\nand let throughout , recalling that \nis the quasiperiodic single-layer potential on (the\npart of in the central unit cell).\nThen in .\nBy the jump relations, (using the notations\n20 ###reference_### and 21 ###reference_###), and by construction \nalso satisfies 9 ###reference_### and 10 ###reference_###.\nIt remains to show that is nontrivial. 
Suppose otherwise; then , so by the jump relations, .\nHowever, in the half-space below would then be\na homogeneous solution to the (downward-facing)\nquasiperiodic Dirichlet problem,\nwhich is unique (petit, ###reference_b44###, p. 56) (kirsch94, ###reference_b63###, Thm. 2.1).\nThus would vanish below , so . By the\njump relation would vanish, a\ncontradiction with the hypothesis. Thus is nontrivial,\nhence a trapped mode.\nFor the converse, let be a trapped mode,\nthen by the quasiperiodic version of Green\u2019s representation theorem,\nin the upper domain ,\nwhere is the quasiperiodic double-layer potential defined by\nSince , and taking to from above,\n, showing that has a nontrivial\nnull vector. By the Fredholm\u2013Riesz theory, the same holds for ,\nsince it is the adjoint with respect to the bilinear form\n\ncoltonkress_scatt ###reference_b18###.\n\u220e\nThis informs our numerical approach: we fix \nand solve a nonlinear eigenvalue problem with respect to .\nWe adapt the Fredholm determinant method\nof Zhao and the 2nd author zhaodet ###reference_b57###.\n is approximated by an \ndeterminant, where is replaced by its\nNystr\u00f6m matrix with entries given by 37 ###reference_###.\nAt each , is then found as a root of\n, using a simple Newton iteration\nwith a convergence criterion close to machine accuracy.\nFor staircase geometries, we find only one such root,\nand always with ,\nimplying that the mode is trapped (frequency is below the light line),\nrather than embedded in the continuous spectrum\njobook ###reference_b2###; shipmanreview ###reference_b10###.\nBecause the trapped frequencies never intersect the light lines,\nthe issue of Wood anomalies does not cause a problem in the mode-finding\ntask.\nFigure 5 ###reference_### shows this set of found at each\n.\nThe gap between and indicates the\nstrength of trapping (rapidity of evanescent decay as ).\nThe most trapped mode is at , known as an\n\u201coptical mode\u201d, and has for the 
\nstaircase.\nAs we see trapping becoming arbitrarily weak.\nWith a zero of the determinant found,\nthe null vector obeying is found\nvia an SVD, then gives the mode.\nNumerically the lattice-sum expansion may only be used up to a height\naround above the origin, so as before one must switch\nto Fourier series evaluation above this.\nFigure 6 ###reference_### shows the real part of example modes."
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "Ray model for time-domain chirp response at El Castillo",
51
+ "text": "The above trapped mode dispersion curve provides a simple\nexplanation of the acoustic \u201craindrop\u201d effect on the long\nstone staircases at the El Castillo pyramid at Chichen-Itza.\n444We are very grateful to Eric Heller for suggesting this\nexplanation; it is also described by Heller in (hellerbook, ###reference_b15###, p. 162).\nImpulsive sources of sound, such as footsteps, are reported\nto sound like short chirps (frequency rising vs time) when heard from\ndistances far up or down the staircase.\nPending a full numerical investigation of the time-domain\nsolution to 1 ###reference_### and 2 ###reference_###, we use a simple ray model\nfor dispersive propagation (buhler, ###reference_b64###, Sec. 2.6).\nBoth source and receiver are assumed to be\nclose enough to the surface to couple well to\nthe modes at all wavenumbers (this is true for a footstep,\nless obviously so for a standing listener).\nAbsorption and modeling of amplitudes are ignored.\nImpulsive excitation in 1 ###reference_### is assumed to excite all frequencies.\nEach frequency below the maximum of \nis partly trapped (see section 5 ###reference_###),\nand this component propagates along the\nstaircase at its group velocity\nat the appropriate such that .\nWe plot vs on the right of fig. 5 ###reference_###,\nby differentiating the interpolant of on the -grid.\nThe frequency\u2019s arrival time a distance along the staircase\nis thus .\nFinally, inverting this last relationship gives the frequency \nheard at each time after the impulse.\nWe now insert physical parameters for El Castillo.\nThe nondimensionalized speeds used in the rest of the paper must\nbe multiplied the sound speed m/s.\nAccording to declercq2004theoretical ###reference_b65### the stairs at this pyramid\nhave depth equal to height equal at m, i.e. 
a staircase\nwith period m.\nThe maximum trapped mode frequency\nis thus Hz;\nthis is the highest frequency explainable in the model.\n555The frequency of a free-space plane wave traveling along the staircase\nwith wavenumber at the Brillouin zone edge \nis Hz. Neither of these frequencies appears in calleja ###reference_b14###, although a \u201craindrop frequency\u201d of 307.8 Hz is mentioned.\nEach staircase has 91 steps, giving m.\nThe resulting predicted frequency vs arrival time\nis shown in fig. 7 ###reference_###.\nThe first arrivals are the lowest frequencies; for these the dispersion\ncurve is almost that of free air () so they start\narriving immediately after any direct (non-trapped) radiation.\nMost of the frequency \u201cchirp\u201d occurs during the first 0.2 s after first\narrival, an interval containing of order 50 cycles, which is plenty to detect the upwards frequency trend.\nA long \u201cbell-like\u201d tail, asymptoting to the maximum 374 Hz, is expected,\nassociated with the slowly-propagating optical modes\nas shown on the left of fig. 6 ###reference_###.\nWe provide a link to a WAV file simulated to match the above chirp\nprediction at this URL: https://doi.org/10.5281/zenodo.10005461 ###reference_###.\nThe authors do not have access to audio recordings of footsteps\non this staircase, and would welcome the chance to validate the\npredictions."
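The arrival-time construction can be sketched end to end with a toy dispersion curve standing in for the computed trapped-mode relation. The sinusoidal omega(kappa) below and the nominal sound speed, period, and distance are assumptions of this illustration, not the paper's computed staircase dispersion:

```python
import numpy as np

# Toy dispersion relation with decreasing group velocity toward the zone edge:
# omega(kappa) = (2c/d) sin(kappa d / 2) for kappa in (0, pi/d]  (assumed model)
c, d, L = 343.0, 0.5, 45.0         # sound speed (m/s), period (m), distance (m); nominal
kap = np.linspace(1e-3, np.pi/d, 500)
om = (2*c/d)*np.sin(kap*d/2)

vg = np.gradient(om, kap)          # group velocity from the interpolant, as in the text
t = L/vg                           # arrival time of each frequency a distance L away
f_hz = om/(2*np.pi)
print(f_hz[0], t[0], f_hz[-1], t[-1])   # low frequencies arrive first: a rising chirp
```

Inverting the sampled map t(omega) (e.g. by interpolation) then gives the frequency heard at each time.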
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Scattering from a single point source",
+ "text": "As laid out in section 1, we simulate scattering from a single point source by integrating over the solutions from a quasiperiodic array of point sources with different values. Let , where the subscript refers to having fixed and is a given source position within the central unit cell. Then from 11 and 12 it follows that and, with referring to the total field due to the above incident field , the array scanning integral is where is the scattered field associated with . To achieve high-order accuracy one must understand the complex plane singularities of the above integrand. For example, in fig. 8 we plot the integrand in (51) for a target point , frequency , and source point . Each side of the stair boundary was split into panels initially, then refined times with , until the total number of quadrature panels over the boundary was , each with nodes. This gives around accurate digits according to the convergence test in fig. 4. The two key features in the figure are the branch points due to Wood anomalies at , and two poles at . They are ordered such that . The branch cuts associated with the branch points are indicated by dashed lines. The direction of the branch cuts can be chosen by altering the integration path in 29 (by choosing ), but as will become clear later, it is convenient to choose the cuts to lie in the lower and upper half-plane for negative and positive , respectively. If were larger than , where no trapped modes exist for any , there would be no poles. In the limits and , the two poles coalesce at and respectively. The branch points and poles in 51 require careful treatment. If , the branch points can be integrated over without leaving the real axis, since Wood anomalies are known to be square-root singularities fano1941theory; hessel1965new; bolotovskii1968threshold; wojcik2021universal. They may then be dealt with using special Gaussian quadrature, e.g. huybrechs2009generalized. With the emergence of the trapped mode poles at , options include singularity extraction rana1981current, and contour deformation he2007radiation; lovat2011dipole. We opt for the latter, deforming the contour away from the real axis and the branch cuts, as shown by the solid black line in fig. 8. Note that in order for the solution to correspond to outgoing waves, the integration contour has to be deformed into the upper half-plane for negative and the lower half-plane for positive . This idea was introduced in v1905reflexion and termed the limiting absorption principle by sveshnikov1950radiation. The direction of the branch cuts was chosen with this in mind, following ruming21. We choose a sinusoidal deformation contour parameterized by , which is then discretized using the periodic trapezoidal rule with nodes. To get an idea of what accuracy a given and yields, we use 50 as a test problem, and investigate how accurately can be reconstructed from in fig. 9. The top part of the figure shows the real part of the total solution , computed to digits of accuracy at , with a point source at . The bottom part of fig. 9 shows that the exponential convergence rate is sensitive to the exact value of the amplitude, and that values between and are optimal. In this range, quadrature nodes are enough to get an answer accurate to machine precision. So far we have only considered computation of the solution inside the central unit cell (and above it), . Using the quasiperiodicity property, the array scanning integral 51 for a target position in the th unit cell becomes Figure 10 shows the array scanning integrand at a target unit cells away from the center, to be compared to fig. 8. The exponential factor causes oscillations in the real and exponential growth in the imaginary direction in the lower (upper) half-plane for positive (negative) . The exponential growth limits the amount of contour deformation away from the real axis, and both the growth and oscillations demand more quadrature points to maintain constant accuracy as increases. Thus as grows, quadrature along the contour becomes a less practical method to evaluate the solution. However, in the limit , the solution may be computed easily in a different manner. Consider first the case . We deform the sinusoidal contour to the contour shown in orange in fig. 10; if there is a trapped mode (pole) at then this deformation introduces a correction by the residue of that pole. The contribution of the orange contour vanishes in the limit by standard Jordan\u2019s lemma type arguments. Namely, the term vanishes for any , making the contributions of the segments parallel to the branch cut, and the horizontal segments, zero. The other vertical segments cancel by periodicity. Taking the radius of the \u201ckeyhole\u201d around the branch point to zero, its contribution vanishes since the integrand involves powers of at least . By Cauchy\u2019s theorem, the value of the array scanning integral is thus only the residue at , if such a pole exists, and zero otherwise. Considering now only the case with such a residue, since the phase of changes with , one has to take care to incorporate the correct phase in order for a limit to exist: In the case , we deform the sinusoidal contour into the lower half-plane, as shown in blue in fig. 10. By the same arguments, if a trapped mode exists, the solution is and zero otherwise. Finally, we explain how we numerically extract the residue at , which can be viewed as extracting the left- and right-going trapped mode amplitudes. We use Cauchy\u2019s theorem once more, and integrate on a circular contour enclosing the relevant pole, applying a trapezoidal rule with nodes. The radius of this circle, , is chosen so that the amplitude of the integrand along the contour stays as close to as possible, thus avoiding loss of accuracy due to catastrophic cancellation. The contour also needs to lie on one sheet, and therefore cannot cross a branch cut. The radius is therefore determined by the distance to the nearest branch point or the distance to the nearest pole, whichever is smaller. We discuss the case and use the same parameters for negative . By inspection of the dispersion relation in fig. 5, it is clear that for all but , the distance to the nearest branch point, at , is smaller. We use to determine the radius, and nodes, after convergence testing by doubling . The main claim of the previous section is a direct consequence of the above result: at the only contributor to the field infinitely far away from the source is the trapped mode at the given . At , where no trapped modes exist, the field in this limit is zero, since the contour in the complex plane contains no poles. As we have seen in section 3, trapped modes at different frequencies have different vertical decay lengths, and propagate at different speeds. It is therefore of interest to compute how much of the power injected into the system is transported away in trapped modes, as opposed to radiated vertically, as a function of frequency\u2014this is the subject of the next section."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Extracting the asymptotically trapped power",
+ "text": "In this section we find the total power injected into the system by a single point source, and compute the fractional power carried to infinity in trapped modes, as a function of frequency. The source is assumed to be inside the central unit cell, i.e. , . The total power radiated by a single point source can be derived by taking the total flux 46 exiting a circle of radius (denoted ) centered on the source, and then taking the limit : Out of the four terms, only two contribute: since and its derivative are finite everywhere, the integral of vanishes; and only grows like as , therefore its contribution is . Using the asymptotic form of as (dlmf, Ch. 10.7), it can be shown that the remaining terms add up to Thus the radiated power is influenced by the scattered wave at the emission point. For each value of , we use the strategy outlined in section 4 to compute , i.e. integrate along a deformed array scanning contour in using the trapezoidal rule. However, in the limits (where ), or (the cutoff frequency, where ), the contour must pass between two coalescing poles. This necessitates an increase in quadrature node density in the section of the contour closest to the poles. In these limits, it is therefore inefficient to evenly space the nodes in . We instead follow (barnettexpgrading, Sec. 3) and define an exponentially graded reparameterization of the real part of , via a periodic map , namely which bunches quadrature nodes that are evenly spaced in by a factor of order close to either or . The normalization is determined numerically by requiring . To achieve uniform accuracy as or , one needs to update as and , respectively, where \u2013 are constants to be determined. Here, we consider , for which we find (by doubling to gain an upper bound on the error) that setting , is sufficient to obtain a value for accurate to 8 digits. We therefore use these settings if or , and (evenly spaced nodes in ) otherwise. This parameter combination ensures that the nearest pole is at least five quadrature node spacings away from the contour. As shown in section 4, the only contributors to the field infinitely far away from the source along the surface are trapped modes. Let be the semi-infinite vertical line segment with a given coordinate, starting on the surface. We can use 54 to reconstruct or on this line segment. Then the power in the left-going () or right-going () trapped modes is Note that this expression must be independent of the choice of horizontal position , by power conservation. Furthermore, the phase introduced in 54 cancels. Given any , we approximate (60) by a quadrature rule from the surface point to an upper limit , chosen such that the given trapped mode has sufficiently decayed. Based on the vertical mode intensity decay rate (twice the amplitude decay rate in (6)) we thus set to where is a desired error tolerance. While the choice of is immaterial mathematically, one choice is much more convenient numerically: the corner (trough) at is best, for the following reason. Firstly, passing through a corner is to be preferred to any other part of , since the panels discretizing are already geometrically refined towards corners, allowing accurate evaluation arbitrarily close to the corner via plain quadratures. (In contrast, were to intersect any flat part of , a special close-evaluation quadrature would be needed at points closer than one panel-size from .) Secondly, to decide whether the upper ( angle at ) or lower ( angle at ) corner is to be preferred, one expands the total field in terms of Bessel functions around either point and imposes Neumann boundary conditions. This shows that only the lower corner has a regular (hence analytic) expansion involving even powers. The potential at the upper corner is nonanalytic since it involves powers that are multiples of . The above then implies that Gauss\u2013Legendre quadrature with nodes is high-order accurate for the integral 60 along the line , so this is what we use, with . Figure 11 shows the power balance as a function of on the left and on the right, with power tolerance . For reproducibility, we summarize the parameters used in table 1. Since the fluxes calculated here are unitless, we show the flux in trapped modes , the total flux, and the fraction of the flux carried in trapped modes all in one plot. As expected from their large vertical decay length, the least trapped modes at carry the least flux. The highest-frequency trapped modes carry a fraction of the total power that approaches , and the increase of this fraction with frequency or Bloch wavenumber is close to linear until an abrupt drop to zero at . These \u201ccritically trapped\u201d acoustic modes are the most efficient both at sucking power out of the source (given a fixed source amplitude), and at trapping this input power at the surface."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Conclusions and future work",
+ "text": "We described a boundary integral equation method for solving the -dimensional, constant-coefficient Helmholtz equation on an exterior domain outside of an infinite, periodic boundary with corners. We did this in the context of acoustic scattering, with a particular emphasis on trapped modes. Using corner-refined Nystr\u00f6m quadrature on the boundary, we built a dense direct solver for the quasiperiodic problem. By integrating over the quasiperiodicity parameter, a method known as array scanning, we computed the scattering solution from a single point source, and extracted the limit of the acoustic pressure field infinitely far away from the source. Obtaining high-order accuracy required a detailed understanding of poles and branch cuts, due to trapped modes and Wood anomalies respectively. To this end we proposed a complex contour deformation and nonuniform reparametrization for an efficient quadrature. We proposed a residue method to extract the amplitudes of left- and right-going trapped surface waves, and used this to study the fraction of injected acoustic power that ends up in trapped modes, as a function of frequency. We show that the trapped modes (quasiperiodic eigenfunctions) map precisely to roots of a Fredholm determinant\u2014see Theorem 4. By applying Nystr\u00f6m quadrature to this, we compute the trapped mode dispersion relation and group velocity. A simple ray model then allowed us to predict frequency vs arrival time in the \u201cchirp\u201d phenomenon observed at El Castillo at Chichen Itza, and other similar acoustics recorded at step-temples. Several questions remain for staircase acoustics: are there source locations close to the surface which excite trapped modes less, or excite the left- and right-going modes asymmetrically? Similarly, it would be of interest to extend the analysis to a wider range of periodic boundaries that possess asymmetry. Although dense direct linear algebra as presented here was adequate for the studied geometry and accuracies, more complex geometries requiring more than discretization nodes would benefit from the use of an iterative solver with FMM acceleration. The extension to 3D doubly-periodic structures to high-order accuracy poses an interesting challenge, because there the set of Wood-anomaly on-surface wavevectors forms curves."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.9\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.9.10.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.9.10.1.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.9.10.1.1.1\" style=\"font-size:80%;\">Parameter</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.9.10.1.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.9.10.1.2.1\" style=\"font-size:80%;\">Description</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.9.10.1.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.9.10.1.3.1\" style=\"font-size:80%;\">Value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.1.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.1.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\">\n<span class=\"ltx_text\" id=\"S5.T1.1.1.2.1\" style=\"font-size:80%;\">Amplitude of sinusoidal array scanning contour in </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.12486v2#S5.E59\" style=\"font-size:80%;\" title=\"59 \u2023 5 Extracting the asymptotically trapped power \u2023 Trapped acoustic waves and raindrops: high-order accurate integral equation method for localized excitation of a periodic staircase\"><span class=\"ltx_text ltx_ref_tag\">59</span></a>\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.1.1.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.3.1\" style=\"font-size:80%;\">1.0</span></td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.2.2.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.2.2.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.1\" style=\"font-size:80%;\">Number of trapezoidal nodes along sinusoidal array scanning contour</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.2.2.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.2.2.3.1\" style=\"font-size:80%;\">60</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.3.3.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.4.4.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.4.4.3.1\" style=\"font-size:80%;\">Exponential grading parameter along sinusoidal array scanning contour</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.4.4.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\">\n<span class=\"ltx_text\" id=\"S5.T1.4.4.2.1\" style=\"font-size:80%;\">0.0 if </span><span class=\"ltx_text\" id=\"S5.T1.4.4.2.2\" style=\"font-size:80%;\">, 5.0 otherwise</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.5.5.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.5.5.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.5.5.2.1\" style=\"font-size:80%;\">Radius of circular contour for residual calculation</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.5.5.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\">\n<span class=\"ltx_text\" id=\"S5.T1.5.5.3.1\" style=\"font-size:80%;\">given by </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.12486v2#S4.E56\" 
style=\"font-size:80%;\" title=\"56 \u2023 4 Scattering from a single point source \u2023 Trapped acoustic waves and raindrops: high-order accurate integral equation method for localized excitation of a periodic staircase\"><span class=\"ltx_text ltx_ref_tag\">56</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.6.6.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.6.6.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.6.6.2.1\" style=\"font-size:80%;\">Number of trapezoidal nodes along circular residual contour</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.6.6.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.6.6.3.1\" style=\"font-size:80%;\">64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.7.7.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.8.8.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\">\n<span class=\"ltx_text\" id=\"S5.T1.8.8.3.1\" style=\"font-size:80%;\">Tolerance for determining the upper limit </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.12486v2#S5.E61\" style=\"font-size:80%;\" title=\"61 \u2023 5 Extracting the asymptotically trapped power \u2023 Trapped acoustic waves and raindrops: high-order accurate integral equation method for localized excitation of a periodic staircase\"><span class=\"ltx_text ltx_ref_tag\">61</span></a><span class=\"ltx_text\" id=\"S5.T1.8.8.3.2\" style=\"font-size:80%;\"> of trapped power integral </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.12486v2#S5.E60\" style=\"font-size:80%;\" title=\"60 \u2023 5 Extracting the asymptotically trapped power \u2023 Trapped acoustic waves and raindrops: high-order accurate integral equation method for localized 
excitation of a periodic staircase\"><span class=\"ltx_text ltx_ref_tag\">60</span></a>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.8.8.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T1.9.9.1\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T1.9.9.2\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\">\n<span class=\"ltx_text\" id=\"S5.T1.9.9.2.1\" style=\"font-size:80%;\">Number of Gauss\u2013Legendre nodes in trapped power integral </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2310.12486v2#S5.E60\" style=\"font-size:80%;\" title=\"60 \u2023 5 Extracting the asymptotically trapped power \u2023 Trapped acoustic waves and raindrops: high-order accurate integral equation method for localized excitation of a periodic staircase\"><span class=\"ltx_text ltx_ref_tag\">60</span></a>\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T1.9.9.3\" style=\"padding-top:0.8pt;padding-bottom:0.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.9.9.3.1\" style=\"font-size:80%;\">128</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Parameters used in computing the fractional power transported by trapped modes, chosen so that the total power and power in trapped modes is accurate to digits for all considered, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.29.1\">i.e.\u00a0</span>, . </figcaption>\n</figure>",
+ "capture": "Table 1: Parameters used in computing the fractional power transported by trapped modes, chosen so that the total power and power in trapped modes is accurate to digits for all considered, i.e.\u00a0, . "
+ }
+ },
+ "image_paths": {
+ "1(a)": {
+ "figure_path": "2310.12486v2_figure_1(a).png",
+ "caption": "(a)\nFigure 1: Left: Photograph of El Castillo at Chichen Itza, Mexico. Center: 2D infinite staircase geometry with coordinates used. Right: a single period \u0393 of the boundary, now shown with x_1 horizontal (the orientation used throughout).",
+ "url": "http://arxiv.org/html/2310.12486v2/x1.jpg"
+ },
+ "1(b)": {
+ "figure_path": "2310.12486v2_figure_1(b).png",
+ "caption": "(b)\nFigure 1: Left: Photograph of El Castillo at Chichen Itza, Mexico. Center: 2D infinite staircase geometry with coordinates used. Right: a single period \u0393 of the boundary, now shown with x_1 horizontal (the orientation used throughout).",
+ "url": "http://arxiv.org/html/2310.12486v2/x2.png"
+ },
+ "2": {
+ "figure_path": "2310.12486v2_figure_2.png",
+ "caption": "Figure 2: Location of Wood anomalies (black lines) and the first Brillouin zone (red shaded region of the \u03ba axis) in the \u03c9-\u03ba plane, for the case of spatial periodicity d=1.",
+ "url": "http://arxiv.org/html/2310.12486v2/x3.png"
+ },
+ "3": {
+ "figure_path": "2310.12486v2_figure_3.png",
+ "caption": "Figure 3: Discretization nodes on a single unit cell \u0393 of the boundary. The underlying coarse discretization has 8 equal panels on each straight line. Panels touching corners have then been subdivided dyadically 10 times, with shrinkage ratio r=2. Each resulting panel was populated with 16 Gauss\u2013Legendre quadrature nodes. An inset shows the result of the refinement.",
+ "url": "http://arxiv.org/html/2310.12486v2/x4.png"
+ },
+ "4": {
+ "figure_path": "2310.12486v2_figure_4.png",
+ "caption": "Figure 4: Convergence test via flux conservation for the case of an incident plane wave. The figure shows the net flux exiting the central unit cell, which is analytically zero, as a function of both the number of times the corner-adjacent quadrature panel has been subdivided on the boundary, and the size of the resulting linear system. During refinement the panels have been split in a 1:2 ratio (r=3).",
+ "url": "http://arxiv.org/html/2310.12486v2/x5.png"
+ },
+ "5(a)": {
+ "figure_path": "2310.12486v2_figure_5(a).png",
+ "caption": "(a)\nFigure 5: Left: numerically computed band structure (dispersion relation) for evanescent trapped modes of the \u03c0/4-slope staircase with period d=1. Only the right (positive) half of the Brillouin zone is shown. Dots show the band structure \u03c9_tr(\u03ba), and the line shows the values \u03c9=\u03ba. Radiation into the upper half plane is only possible when \u03c9>|\u03ba|. Right: group velocity v_g := d\u03c9_tr(\u03ba)/d\u03ba plotted over the same domain.",
+ "url": "http://arxiv.org/html/2310.12486v2/x6.png"
+ },
+ "5(b)": {
+ "figure_path": "2310.12486v2_figure_5(b).png",
+ "caption": "(b)\nFigure 5: Left: numerically computed band structure (dispersion relation) for evanescent trapped modes of the \u03c0/4-slope staircase with period d=1. Only the right (positive) half of the Brillouin zone is shown. Dots show the band structure \u03c9_tr(\u03ba), and the line shows the values \u03c9=\u03ba. Radiation into the upper half plane is only possible when \u03c9>|\u03ba|. Right: group velocity v_g := d\u03c9_tr(\u03ba)/d\u03ba plotted over the same domain.",
+ "url": "http://arxiv.org/html/2310.12486v2/x7.png"
+ },
+ "6": {
+ "figure_path": "2310.12486v2_figure_6.png",
+ "caption": "Figure 6: The real part of two example trapped modes \u03d5, for the \u03c0/4 staircase with d=1. Left: the highest frequency mode, at \u03ba=\u03c0. Right: an intermediate frequency mode, at \u03ba=1.54.",
+ "url": "http://arxiv.org/html/2310.12486v2/x8.png"
+ },
+ "7": {
+ "figure_path": "2310.12486v2_figure_7.png",
+ "caption": "Figure 7: Graph of frequency (in Hz) vs time (in seconds since emission) observed a distance D along a sound-hard staircase from an impulsive source for the wave equation. Parameters for the El Castillo staircase are used, with D the full 91 steps comprising one side of the pyramid.",
+ "url": "http://arxiv.org/html/2310.12486v2/x9.png"
+ },
+ "8": {
+ "figure_path": "2310.12486v2_figure_8.png",
+ "caption": "Figure 8: Real part of the integrand of the array scanning integral 51 plotted in the complex \u03ba-plane, where the field is evaluated at a single target, x = (0.22, -0.16). Plotted over this are the branch points and cuts (dots with dotted lines), poles (dots), and the contour deformation used (solid black line, with an example set of 80 quadrature nodes plotted on top).",
+ "url": "http://arxiv.org/html/2310.12486v2/x10.png"
+ },
+ "9(a)": {
+ "figure_path": "2310.12486v2_figure_9(a).png",
+ "caption": "(a)\nFigure 9: Top: Real part of total acoustic pressure field from a point source at x_0 = (-0.2, 0.1), radiating with frequency \u03c9=2.4, accurate to 7 digits. Bottom: convergence test of the reconstruction of \u03a6(x_target), at x_target = (0.13, 0.03), with the array scanning method. The sinusoidal contour shown in fig. 8 was taken, with various amplitudes A, and discretized with p_asm trapezoidal quadrature nodes.",
+ "url": "http://arxiv.org/html/2310.12486v2/x11.png"
+ },
+ "9(b)": {
+ "figure_path": "2310.12486v2_figure_9(b).png",
+ "caption": "(b)\nFigure 9: Top: Real part of total acoustic pressure field from a point source at x_0 = (-0.2, 0.1), radiating with frequency \u03c9=2.4, accurate to 7 digits. Bottom: convergence test of the reconstruction of \u03a6(x_target), at x_target = (0.13, 0.03), with the array scanning method. The sinusoidal contour shown in fig. 8 was taken, with various amplitudes A, and discretized with p_asm trapezoidal quadrature nodes.",
138
+ "url": "http://arxiv.org/html/2310.12486v2/x12.png"
139
+ },
140
+ "10": {
141
+ "figure_path": "2310.12486v2_figure_10.png",
142
+ "caption": "Figure 10: Real part of the array scanning integrand 53 with the\ntarget point n=5\ud835\udc5b5n=5italic_n = 5 unit cells away from the source, to be compared with\nfig. 8. Identical branch cuts, poles, and\ncontour deformation are shown in black. In orange and blue we plot the contours\nwe use to derive the field in the limit of n\u2192\u00b1\u221e\u2192\ud835\udc5bplus-or-minusn\\to\\pm\\inftyitalic_n \u2192 \u00b1 \u221e,\nrespectively.",
143
+ "url": "http://arxiv.org/html/2310.12486v2/x13.png"
144
+ },
145
+ "11": {
146
+ "figure_path": "2310.12486v2_figure_11.png",
147
+ "caption": "Figure 11: Left: Total flux (unitless) in the system, flux in trapped modes, and fraction of flux in trapped modes, as a function of Bloch wavenumber \u03ba\ud835\udf05\\kappaitalic_\u03ba. Right: The same quantities plotted against the frequency \u03c9\ud835\udf14\\omegaitalic_\u03c9 of the Helmholtz equation.",
148
+ "url": "http://arxiv.org/html/2310.12486v2/x14.png"
149
+ }
150
+ },
151
+ "validation": true,
152
+ "references": [
153
+ {
154
+ "1": {
155
+ "title": "doi:10.2174/978160805150211001010007.",
156
+ "author": "S. Shipman, Resonant scattering by open periodic waveguides, Vol. 1 of Progress\nin Computational Physics (PiCP), Bentham Science Publishers, 2010, Ch. 2, pp.\n7\u201350.",
157
+ "venue": null,
158
+ "url": "https://doi.org/10.2174/978160805150211001010007"
159
+ }
160
+ },
161
+ {
162
+ "2": {
163
+ "title": "doi:10.1137/1.9781611973167.",
164
+ "author": "D. Colton, R. Kress, Integral Equation Methods in Scattering Theory, Society\nfor Industrial and Applied Mathematics, Philadelphia, PA, 1983.",
165
+ "venue": null,
166
+ "url": "https://doi.org/10.1137/1.9781611973167"
167
+ }
168
+ },
169
+ {
170
+ "3": {
171
+ "title": "doi:10.1007/s00211-021-01229-0.",
172
+ "author": "R. Zhang, Numerical methods for scattering problems in periodic waveguides,\nNumer. Math. 148 (2021) 959\u2013996.",
173
+ "venue": null,
174
+ "url": "https://doi.org/10.1007/s00211-021-01229-0"
175
+ }
176
+ },
177
+ {
178
+ "4": {
179
+ "title": "doi:10.1007/BF02673850.",
180
+ "author": "V. Y. Gotlib, Solutions of the Helmholtz equation, concentrated near a plane\nperiodic boundary, J. Math. Sci. 102 (2000) 4188\u20134194.",
181
+ "venue": null,
182
+ "url": "https://doi.org/10.1007/BF02673850"
183
+ }
184
+ },
185
+ {
186
+ "5": {
187
+ "title": "doi:10.1137/17M1118920.",
188
+ "author": "A. Kirsch, A. Lechleiter, The limiting absorption principle and a radiation\ncondition for the scattering by a periodic layer, SIAM J. Math. Anal. 50 (3)\n(2017) 2536\u201365.",
189
+ "venue": null,
190
+ "url": "https://doi.org/10.1137/17M1118920"
191
+ }
192
+ },
193
+ {
194
+ "6": {
195
+ "title": "arXiv:2310,05816.",
196
+ "author": "C. L. Epstein, Solving the transmission problem for open wave-guides. II\noutgoing estimates (2023).",
197
+ "venue": null,
198
+ "url": "http://arxiv.org/abs/2310,05816"
199
+ }
200
+ },
201
+ {
202
+ "7": {
203
+ "title": "doi:10.1007/978-1-4612-0559-3.",
204
+ "author": "R. Kress, V. Maz\u2019ya, V. Kozlov, Linear integral equations, Vol. 82, Springer,\n1989.",
205
+ "venue": null,
206
+ "url": "https://doi.org/10.1007/978-1-4612-0559-3"
207
+ }
208
+ },
209
+ {
210
+ "8": {
211
+ "title": "doi:10.1088/0266-5611/10/1/011.",
212
+ "author": "A. Kirsch, Uniqueness theorems in inverse scattering theory for periodic\nstructures, Inv. Probs. 10 (1) (1994) 145\u2013152.",
213
+ "venue": null,
214
+ "url": "https://doi.org/10.1088/0266-5611/10/1/011"
215
+ }
216
+ }
217
+ ],
218
+ "url": "http://arxiv.org/html/2310.12486v2"
219
+ }
20240318/2310.14402v2.json ADDED
@@ -0,0 +1,220 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Value of Assistance for Grasping",
3
+ "abstract": "In multiple realistic settings, a robot is tasked with grasping an object without knowing its exact pose and relies on a probabilistic estimation of the pose to decide how to attempt the grasp. We support settings in which it is possible to provide the robot with an observation of the object before a grasp is attempted but this possibility is limited and there is a need to\ndecide which sensing action would be most beneficial. We support this decision by offering a novel Value of Assistance (VOA) measure for assessing the expected effect a specific\nobservation will have on the robot\u2019s ability to complete it\u2019s task.\nWe evaluate our suggested measure in simulated and real-world collaborative grasping settings.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "Task-driven agents often need to decide how to act based on partial and noisy state estimations which may greatly compromise performance.\nWe consider settings in which an agent is tasked with grasping an object based on a probabilistic estimation of its pose. Before attempting the grasp, another agent may assist by performing a sensing action and sending its observation to the grasping agent.\nIn our settings of interest, sensing and communication may be costly or limited and there is a need to support the decision of which observation to perform by offering principled ways to assess the expected benefit.\nTo demonstrate, consider the simplified automated manufacturing setting depicted in Figure 1 ###reference_###. One agent, denoted as the actor, is a robotic arm with a parallel-jaw gripper that is tasked with grasping an object (here, an adversarial object from [1 ###reference_b1###]). After a successful grasp, the object drops unexpectedly. Since the actor does not have a functioning sensor it can attempt to grasp the object based only on its estimation of the current position of the object. Alternatively, it can attempt the grasp after receiving an observation from another agent, the helper, that is equipped with a functioning sensor (here, an OnRobot 2.5D Vision System).\nThe question that we pose is whether the helper can provide valuable assistance and what is the best position for the helper from which to provide the sensor reading among the possible options. Of course, the same question arises in single-agent settings in which it is the agent itself that needs to decide whether to perform a costly sensing action.\nBeyond this illustrative example, grasping is an essential task for a wide range of robotic applications, including industrial automation, household robotics, agriculture, and more [2 ###reference_b2###, 3 ###reference_b3###]. 
Accordingly, research on effective grasping capabilities has resulted in many solution approaches that can be generally divided into two main categories [4 ###reference_b4###, 3 ###reference_b3###]. In analytical approaches, a representation of the physical and dynamical models of the agent and the object is used when choosing a configuration from which to attempt a grasp [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. In contrast, data-driven approaches rank labeled samples to come up with grasping policies. The ranking is usually based on a heuristic or on experiences collected from simulated or real robots [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###].
We assume the actor is associated with a procedure for choosing a grasp given its belief, which represents its knowledge about the position of the object. To support the decision of which observation would be most beneficial, we formulate value of assistance (VOA) for grasping and offer ways to compute it for estimating the benefit a sensing action will have on the probability of a successful grasp. This involves accounting for how the actor\u2019s estimation will change based on the acquired observation and assessing how this change will affect the actor\u2019s decision of how to attempt the grasp.
###figure_1### VOA is used to assess the informative value an observation will have and is therefore closely related to the well-established notions of value of information (VOI) and information gain (IG) [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###], which are widely used across multiple AI frameworks to assess the impact information will have on agents\u2019 decisions and expected utility. We adapt these ideas to robotic settings. 
While in [19 ###reference_b19###] we used VOA for assessing the effect localization information would have on a navigating robot\u2019s expected cost, here we use it in the context of a grasping task.\nPerhaps closest to our work is active perception and sensor planning\n[20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###] which refer to the integration of sensing and decision-making processes within a robotic system. This involves actively acquiring and utilizing information from the environment and selecting viewpoints or trajectories likely to reveal relevant information or reduce uncertainty [29 ###reference_b29###, 30 ###reference_b30###]. While these include work on active perception in manipulation tasks they mostly focus on assessing the effect various perspectives will have on the ability to correctly locate and classify objects [20 ###reference_b20###, 31 ###reference_b31###]. We offer a general formulation of VOA for grasping and novel ways to estimate the effect an observation will have on the probability of accomplishing a grasp.\nSince our focus is on settings in which information acquisition actions are limited and may be performed by another agent, our work is also highly related to decision-theoretic communication in particular, where agents communicate over a limited-bandwidth channel and messages are chosen to maximize the utility or effectiveness of the communication [32 ###reference_b32###, 27 ###reference_b27###, 26 ###reference_b26###, 33 ###reference_b33###, 34 ###reference_b34###].\nOur novelty is in offering measures that account for the manipulation and sensing capabilities of robotic agents when assessing the value of communicating an observation within a collaborative grasping setting. 
Our key contributions are the following:\nWe introduce and formulate Value of Assistance (VOA) for grasping.\nWe instantiate VOA for a collaborative grasping setting with a robotic arm equipped with a gripper and another agent equipped with either a lidar or a depth camera.\nWe empirically demonstrate in both simulated and real-world robotic settings how VOA predicts the effect an observation will have on performance and how it can be used to identify the best assistive action.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Preliminaries",
15
+ "text": "To support a grasping task, where the object is assumed to be in a stable pose, we use a function that assigns a score to a grasping configuration - object stable pose pair.\nGiven a set of object poses and a set of grasp configurations ,\na grasp score function\n\nspecifies the probability that an actor applying grasp configuration will successfully grasp an object at pose ,\ni.e.\n, where is the event of a successful grasp.\nThe grasp score function may be evaluated analytically, by considering diverse factors such as contact area, closure force, object shape, and friction coefficient [35 ###reference_b35###, 3 ###reference_b3###] or empirically, by using data-driven approaches such as [36 ###reference_b36###] where a deep learning model is trained to predict the quality of grasps based on depth images of the objects.\nThe actor\u2019s choice of a grasp configuration relies on a pose belief which describes the perceived likelihood of each object pose within the set of possible stable poses .\nA pose belief is a probability distribution over .\nThe pose belief is affected by different factors, including the model of the object and its dynamics and the collected sensory information. We formulate the initial belief after the object is dropped (Figure 1 ###reference_###) using a joint probability model that captures the prior probability of stable poses, a von Mises PDF for the angle [37 ###reference_b37###], and a multivariate normal distribution for the position on the plane.\nSee Section I of our online appendix for the formulation111https://github.com/CLAIR-LAB-TECHNION/GraspVOA ###reference_pVOA###.\nFor grasp configuration and pose belief , the expected grasp score \nis the weighted aggregated grasp score over the set of possible poses. A maximal grasp of , denoted , maximizes the expected grasp score, i.e.,\n.\nThe actor receives an observation from the helper which corresponds to its readings from a specific sensor configuration and object pose . 
Our formulation of the observation space is general and represents the set of readings that can be made by the sensor that is available in the considered setting. In our evaluations, we used a planar lidar sensor for which a reading is an array of non-negative distances per angle and a depth camera which emits a\n2D array , where are the image dimensions.\nAs is common in the literature, (e.g., [38 ###reference_b38###, 39 ###reference_b39###]), we consider obtaining an observation as a stochastic process.\nGiven object pose , sensor configuration , and observation , sensor function \nprovides the conditional probability of obtaining from for .\nNotably, an agent may not be aware of the actual distribution and may instead only have a\npredicted sensor function\n and a\npredicted observation probability , based on a distribution which may be incorrect or inaccurate.\nWhen receiving an observation , the actor updates its belief using its belief update function which defines the effect an observation has on the pose belief.\nA belief update function \n maps belief , observation and sensor configuration to an updated belief .\nThe literature is rich with approaches for belief update (e.g., [38 ###reference_b38###, 17 ###reference_b17###, 23 ###reference_b23###, 40 ###reference_b40###]). We use a Bayesian filter such that for any observation taken from sensor configuration , the updated pose belief for pose is given as\nwhere\n is the estimated probability that is the object pose prior to considering the new observation .\nWhen and describe stochastic processes they can be directly used for belief update using Equation 1 ###reference_###. Sometimes, it may be useful to consider deterministic sensor functions where the conditional distribution of an observation is replaced by a deterministic mapping . 
For example, a deterministic sensor model would assign the value of a cell in a lidar reading based on the predicted distance between the lidar and the object surface at a specific angle. A stochastic sensor model would sample from a Gaussian distribution with this value as the mean and the specified error margins as the standard deviation.\nIn some cases, such as when using a deterministic sensor model, a similarity score is used to compare the predicted and received observations and to compose a valid distribution function for :\nThe literature is rich with ways to measure , which may vary between applications and sensor types. See Appendix III for a description of several approaches including using MSE for assessing the similarity between lidar sensor readings and an SSIM-based measure [41 ###reference_b41###] for depth images."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Value of Assistance (VOA) for Grasping",
21
+ "text": "We offer ways to assess the effect sensing actions will have on the probability of successfully grasping an object.\nWe formulate this as a two-agent collaborative grasping setting with an actor that is tasked with grasping an object for which the exact pose is not known and a helper that can move to a specific configuration within its workspace to collect an observation from its sensor and share it with the actor.\nWe note that we use a two-agent model since it clearly distinguishes between the grasping and sensing capabilities.\nDepending on the application, this model can be used to support an active perception setting in which a single agent needs to choose whether to perform a manipulation or sensing action if it is capable of both.\nThe actor chooses a grasp configuration based on its pose belief (Definition 2 ###reference_inition2###). While the object\u2019s exact pose is unknown, its shape is given and it is assumed to be in a stable pose (see Figure 2 ###reference_###).\nThe actor needs to choose a feasible grasp configuration from which a grasp will be attempted (see Figure 3 ###reference_###) based on its pose belief and grasp score function. The helper can choose among a set of sensor configurations that offer different points of view of the object (Figure 4 ###reference_###) and a potentially different effect on the\nactor\u2019s\npose belief.\nWe aim to assess Value of Assistance (VOA) for grasping as the expected benefit an observation performed from a sensor configuration will have on the actor\u2019s probability of a successful grasp.\nOur perspective is that of the helper and its decision of which sensing action to perform. Accordingly, we seek a way to estimate beforehand the effect an expected observation will have on the actor\u2019s belief and on its choice of configuration from which to attempt the grasp.\nImportantly, the helper\u2019s belief may be different than that of the actor. 
We denote the helper and actor pose beliefs as and , respectively. We note that when considering a single agent with sensing and grasping capabilities the computation remains the same, with .\nA key element in VOA computation is the expected difference between the utility of the actor with and without the intervention. Here, utility is the grasp score of the configuration chosen by the actor based on its belief.\nGiven actor belief , helper belief , helper perceived sensor function , sensor configuration and actor belief update function ,\nwhere is the predicted observation and is the predicted actor\u2019s belief after receiving from sensor configuration and updating its belief, i.e., .\nIn the definition above, the maximal grasp is the one that maximizes the expected grasp score as in Definition 3 ###reference_inition3###.\nSince VOA estimation is performed by the helper, observation is extracted from its predicted observation probability . In contrast, the actor\u2019s belief update is based on and uses Equation 1 ###reference_### with its own perceived observation probability .\nFor simplicity of presentation, we hereon assume that the actor and helper share an initial pose belief before the grasp attempt.\nIn addition, we assume the predicted observation is generated using a deterministic sensor function such that\n. The updated belief is then a function of and and is denoted as . We note that our evaluation and analysis can be adapted to the more general settings in which these assumptions are relaxed, but this allows us to use a simplified VOA formulation as follows\nAlgorithm 1 in Appendix II describes how VOA can be used for supporting the helper\u2019s decision of which sensing action to perform.\nThe algorithm includes the ComputeVOA function for computing VOA for a sensor configuration. We also provide a complexity analysis for the algorithm."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "IV Empirical Evaluation",
27
+ "text": "The objective of our evaluation is to examine the ability of our proposed VOA measures to predict the effect sensing actions will have on the probability of a successful grasp and on finding one that maximizes this probability.\nWith this objective in mind, our evaluation is comprised of three parts.\nEvaluating grasp score: measuring the success ratio for grasping an object at pose from grasp configuration for different objects.\nEvaluating\n: examining the difference between the predicted sensor function and the readings of the actual sensor .\nAssessing VOA: assessing how well VOA estimates the effect observations will have on grasp score and its ability to identify the best sensing action."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "IV-A Experimental Setting",
33
+ "text": "We performed our evaluation in a two-agent robotic setting, both at the lab and in simulation.\nIn our lab setting, depicted in Figure 1 ###reference_###,\nthe actor is a UR5e robotic arm [42 ###reference_b42###] with an OnRobot 2FG7 parallel jaw gripper [43 ###reference_b43###].\nWe used two implementations for the helper with two different sensors that might be available, depending on the setting: a LDS-01 lidar [44 ###reference_b44###] that could be moved on the x-y plane and a 2.5D Onrobot vision system [43 ###reference_b43###] mounted on an adjacent UR5e arm.\nFor simulation, we used a MuJoCo [45 ###reference_b45###]\nenvironment (depicted in Figure 3 ###reference_###) [46 ###reference_b46###].\nWe simulated a lidar sensor using the MuJoCo depth camera, taking only one row of the camera\u2019s readings.\nThe simulated gripper was a Robotiq 2F-85 parallel jaw [47 ###reference_b47###].\nWe used five objects for the simulation and lab experiments (Figure 5 ###reference_###).\nObjects meshes are based on the Dex-Net dataset [1 ###reference_b1###].\nWe sampled a set of stable poses and considered four possible grasps indexed (see Figure 7 ###reference_###).\nWe used both the lidar and depth camera. For each object pose , we recorded the observation and the predicted observation received from each sensor configuration, indexed for the lidar and for the depth camera.\nWe empirically evaluated grasp score by examining a set of pose-grasp pairs for each object in both simulated and lab settings. Due to space constraints, the full details and results can be found in Section IV of our online appendix.\n###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21###"
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "IV-B Evaluating the Predicted Sensor Function",
39
+ "text": "We evaluated our predicted sensor functions for both the lidar and depth camera by comparing their predicted observations to the readings collected from the sensor (Figure 6 ###reference_###).\nFor each sensor configuration-object pose pair, we recorded the mean error of the difference between the measured and predicted readings.\nFor the lidar, the set included four sensor configurations for each cardinal direction and computed the closest intersection between the simulated lidar ray and the object meshes for the relevant FoV. The actual observations generated by for this setup were collected in simulation and the lab. However, while in the lab the lidar was able to capture all objects, in simulation MOUSE and MARKER, could not be captured.\nFor the depth camera, rendered synthetic images using the pyrender library [48 ###reference_b48###] which involved projecting the 3D model of an object (transformed into a specific pose), onto a 2D plane using the camera\u2019s intrinsic and extrinsic parameters.\nWe evaluated our observation prediction accuracy compared to the actual image using the Intersection over Union (IoU) measure: we preprocessed the RBG images to extract the object masks and computed IoU between these and the corresponding synthetic images.\nThe set of sensor configurations was generated by randomly sampling\nrobot configurations and translating them into camera poses using forward kinematics i.e. . 
Each sensor configuration was ranked using a heuristic\nwhere is the Euclidean distance between the camera\u2019s center and the Point of Interest (PoI), representing the mean landing position after the object is dropped, is a maximum acceptable distance used for normalization, and is a visibility score, which assesses how centered the PoI is within the camera\u2019s FoV and is computed as the distance between the projection of the PoI onto the image plane and the center of the image divided by , a reference radius within the image plane that represents the boundary of acceptability.\nResults:\nTable I ###reference_### presents results per sensor for HOLDER (results for all objects are in our online appendix). For each sensor configuration of the lidar, the table shows the average, minimal and maximal error (Avg. Err., Min. Err. and Max. Err., respectively) in mm over object poses . Similarly, for the depth camera, we computed the average, maximal, and minimal IOU values.\nResults for the lidar show that the prediction errors are negligible given the dimensions of the objects examined. In contrast, for the depth camera, errors are more substantial with a maximal average of . At the same time,\nresults show varying performance across different configurations, depending on the object and its pose. For example, configuration gives high accuracy for some objects, while configuration excels for others.\nInconsistencies between the observations are the result of several factors including the noise of the sensor itself, inconsistent scaling of the meshes with regard to the real objects, mismatches between objects and the meshes used for estimation, and inaccuracies in the placement of the objects in the lab (while the observation prediction is based on perfect object positioning). In addition, as the distance between the sensor and the object increases, the object occupies less of the sensor\u2019s FoV and the readings include fewer and less informative data points. 
Specific to the depth camera is the confusion caused by the reflection of bright light on shiny surfaces which may distort object shape.\nFigure 6 ###reference_### demonstrates inconsistencies between the predicted and actual observation of HOLDER at the lab. Here, this is due to the misplacement of the object. As we show next, despite these inconsistencies, the estimated observations are still useful for VOA computation."
40
+ },
41
+ {
42
+ "section_id": "4.3",
43
+ "parent_section_id": "4",
44
+ "section_name": "IV-C Assessing VOA",
45
+ "text": "We estimate the benefit of using VOA as a decision-making tool by assessing the benefit of choosing which sensor configuration to apply based on VOA values. Our evaluation uses the setting depicted in Figure 1 ###reference_### and assumes the initial pose belief (after the object drops) is shared by the actor and helper. The model of the initial belief is described in Section 2 ###reference_inition2###.\nWe used three belief update functions per sensor, each based on a different similarity metric. For the lidar, uses a deterministic update rule that considers two observations as equivalent if for all angles the values are within a margin of mm, uses the similarity metric to update the belief based on Equation 2 ###reference_###, while uses a multidimensional Gaussian over one observation while the other observation is the mean vector and the covariance matrix is the identity matrix.\nFor the depth camera, is based on the structure element of SSIM [41 ###reference_b41###], is based on IoU between the two observations, and employs the cv2 library [49 ###reference_b49###] for contour matching to quantify the similarity between two masks by detecting their primary contours and comparing their shapes through a shape-matching algorithm.\nResults: Table II ###reference_### presents our evaluation for the different objects at the lab222due to space constraints, complete results as well as our implementation, are in our online appendix.. For each setting, we consider three grasps: is an optimal grasp, is the grasp chosen by the actor based on its initial belief, and is its post-intervention choice, i.e., after receiving an observation from a sensor configuration with the highest VOA value (see Figure 8 ###reference_###). 
For each belief update function and object, the table reports the average values of:\n: the weighted score difference between the chosen grasp after and before the intervention.\n: the ratio between and the weighted score difference between the best configuration and the configuration chosen before the intervention.\n: the advantage of choosing the maximal VOA sensor configuration defined as the ratio between and the average over all configurations. This represents the difference between choosing a sensor configuration using VOA to choosing randomly.\n###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### 0.19\n0.23\n0.23\n\n\n0.15\n0.23\n0.29\n\n\n0.2\n0.26\n0.28\n\n\n0.03\n0.15\n0.13\n\n\n0.0\n0.0\n0.0\n\n\n0.0\n0.0\n0.0\n\n\n\n0.29\n0.31\n0.29\n\n\n0.29\n0.31\n0.29\n\n\n0.29\n0.31\n0.29\n\n\n0.04\n0.37\n0.29\n\n\n0.00\n0.00\n0.00\n\n\n-0.00\n0.29\n0.29\n\n\n\n0.00\n0.06\n0.04\n\n\n0.00\n0.05\n0.04\n\n\n0.00\n0.07\n0.00\n\n\n0.04\n0.44\n0.53\n\n\n0.00\n0.00\n0.00\n\n\n0.00\n0.00\n0.00\n\n\n\n0.30\n0.33\n0.30\n\n\n0.30\n0.33\n0.41\n\n\n0.37\n0.63\n0.37\n\n\n0.24\n0.38\n0.27\n\n\n0.00\n0.00\n0.00\n\n\n0.00\n0.00\n0.00\n\n\n\n0.25\n0.25\n0.25\n\n\n0.25\n0.25\n0.25\n\n\n0.25\n0.25\n0.25\n\n\n0.02\n0.25\n0.25\n\n\n0.00\n0.00\n0.00\n\n\n0.00\n0.00\n0.00\n\nAVG\n\n0.19\n0.23\n0.23\n\n\n0.19\n0.23\n0.29\n\n\n0.20\n0.26\n0.28\n\n\n0.08\n0.22\n0.34\n\n\n0.00\n0.00\n0.00\n\n\n0.00\n0.00\n0.00\nResults show that for all objects and for of our examined belief update functions (-), selecting the sensor configuration with the highest VOA value is beneficial in terms of the three examined measures. The smallest benefit is for MOUSE for which the initial grasp is optimal for all poses except one for which the optimal grasp has only a slight advantage. 
This subtlety is captured only by .
Notably, belief update functions and which rely on IoU and contour matching, respectively, did not perform well on average for any of the objects.
We associate this with the fact that the objects examined are small relative to their distance from the sensor, which is something these update functions are sensitive to. Figure 9 ###reference_### presents the similarity scores between the actual (rows) and predicted (columns) observations for that relies solely on the IoU of the masks without considering depth values. This makes it hard for to differentiate between object poses that occupy the same area across multiple poses, as depicted in Figure 10 ###reference_###.
The matrix shows a clear distinction between the standing positions and and the lying positions , but the distinction within these two groups is challenging.
Another critical issue is demonstrated by the values on the diagonal that show the low similarity scores between the predicted and actual observations. These indicate the sensitivity of to noise in the actual image, where even slight distortions can dramatically impact prediction quality. Similar results were observed for where noise dramatically distorts contours."
46
+ },
47
+ {
48
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "CONCLUSION",
+ "text": "We introduced Value of Assistance (VOA) for grasping and suggested ways to estimate it for sensing actions. Our experiments in both simulation and real-world settings demonstrate how our VOA measures predict the effect an observation will have on performance and how they can be used to support the decision of which observation to perform.\nFuture work will account for the optimization considerations of the helper and will integrate VOA into long-term, complex tasks. Another extension will consider multi-agent settings, in which VOA can be used not only to choose which assistive action to perform but also which agent to assist."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.12\" style=\"width:195.1pt;height:204.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(20.4pt,-21.4pt) scale(1.26364155504384,1.26364155504384) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.12.12\">\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"4\" id=\"S4.T1.12.12.13.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.12.13.1.1\">Lidar [mm]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S4.T1.12.12.13.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.12.13.2.1\">Depth Camera</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.3\">Avg.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.4\">Min.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S4.T1.2.2.2.5\">Max.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.6\">Avg.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.7\">Max</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.8\">Min.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12.14\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S4.T1.12.12.14.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.14.2\">Err.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.14.3\">Err.</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_rr\" id=\"S4.T1.12.12.14.4\">Err</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T1.12.12.14.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.14.6\">IoU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.14.7\">IoU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.12.12.14.8\">IoU</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.3\">1.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.4\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S4.T1.4.4.4.5\">3.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.6\">0.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.7\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.8\">0.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.3\">3.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.4\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S4.T1.6.6.6.5\">4.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.6\">0.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.7\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.6.8\">0.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8.8\">\n<td class=\"ltx_td 
ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.3\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.4\">0.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S4.T1.8.8.8.5\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.6\">0.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.7\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.8.8.8.8\">0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.3\">3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.4\">0.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S4.T1.10.10.10.5\">4.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.6\">0.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.7\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.10.10.10.8\">0.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.11.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.11.11.11.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.11.11.11.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.11.11.11.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S4.T1.11.11.11.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.11.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.11.11.11.6\">0.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T1.11.11.11.7\">0.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.11.11.11.8\">0.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T1.12.12.12.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.12.12.12.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.12.12.12.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr\" id=\"S4.T1.12.12.12.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.12.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.12.12.12.6\">0.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.12.12.12.7\">0.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.12.12.12.8\">0.1</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.14.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.15.2\" style=\"font-size:90%;\">Sensor prediction evaluation for the lab setting of HOLDER. For the lidar\u00a0the lower values are better while for the depth camera higher values are better.</span></figcaption>\n</figure>",
+ "capture": "TABLE I: Sensor prediction evaluation for the lab setting of HOLDER. For the lidar, lower values are better, while for the depth camera, higher values are better."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.44\" style=\"width:207.1pt;height:1178.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S4.T2.44.44\"><span class=\"ltx_text\" id=\"S4.T2.44.44.44\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.44.44.44.44\" style=\"width:203.8pt;height:1178.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(44.3pt,-256.1pt) scale(1.7692185011566,1.7692185011566) ;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.44.44.44.44.44\">\n<span class=\"ltx_tr\" id=\"S4.T2.3.3.3.3.3.3\">\n<span class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.4\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.5\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1.1.1.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2.2.2.2\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.3.3.3.3\" style=\"padding:0.5pt 8.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.5.5.5.5.5.5\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S4.T2.4.4.4.4.4.4.1\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.4.4.4.4.1.1\" style=\"color:#000000;\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.5.5.5.5.2\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.5.5.5.5.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S4.T2.5.5.5.5.5.5.3.1\" style=\"color:#000000;\">0.19</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.5.5.5.5.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.5.5.5.5.4.1\" style=\"color:#000000;\">0.23</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.5.5.5.5.5\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.5.5.5.5.5.5.1\" style=\"color:#000000;\">0.23</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.6.6.6.6.6.6\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.6.6.6.6.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.6.6.6.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.6.6.6.6.6.6.2.1\" style=\"color:#000000;\">0.15</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.6.6.6.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.6.6.6.6.6.6.3.1\" style=\"color:#000000;\">0.23</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.6.6.6.6.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.6.6.6.6.4.1\" style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.7.7.7.7.7.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.7.7.7.7.7.7.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.7.7.7.7.7.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.7.7.7.7.7.7.2.1\" style=\"color:#000000;\">0.2</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.7.7.7.7.7.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.7.7.7.7.7.7.3.1\" style=\"color:#000000;\">0.26</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.7.7.7.7.7.7.4\" 
style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.7.7.7.7.7.7.4.1\" style=\"color:#000000;\">0.28</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.8.8.8.8.8.8\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.8.8.8.8.8.8.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.8.8.8.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.8.8.8.8.8.2.1\" style=\"color:#000000;\">0.03</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.8.8.8.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.8.8.8.8.8.3.1\" style=\"color:#000000;\">0.15</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.8.8.8.8.8.8.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.8.8.8.8.8.4.1\" style=\"color:#000000;\">0.13</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.9.9.9.9.9.9\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.9.9.9.9.9.9.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.9.9.9.9.9.9.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.9.9.9.9.9.9.2.1\" style=\"color:#000000;\">0.0</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.9.9.9.9.9.9.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.9.9.9.9.9.9.3.1\" style=\"color:#000000;\">0.0</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.9.9.9.9.9.9.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.9.9.9.9.9.9.4.1\" style=\"color:#000000;\">0.0</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.10.10.10.10.10.10\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.10.10.10.10.10.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.10.10.10.10.2\" style=\"padding:0.5pt 8.0pt;\"><span 
class=\"ltx_text\" id=\"S4.T2.10.10.10.10.10.10.2.1\" style=\"color:#000000;\">0.0</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.10.10.10.10.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.10.10.10.10.10.10.3.1\" style=\"color:#000000;\">0.0</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.10.10.10.10.10.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.10.10.10.10.10.10.4.1\" style=\"color:#000000;\">0.0</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.12.12.12.12.12.12\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S4.T2.11.11.11.11.11.11.1\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.11.11.11.11.11.11.1.1\" style=\"color:#000000;\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.12.12.12.12.12.12.2\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.12.12.12.12.12.12.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.12.12.12.12.12.3.1\" style=\"color:#000000;\">0.29</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.12.12.12.12.12.12.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.12.12.12.12.12.12.4.1\" style=\"color:#000000;\">0.31</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.12.12.12.12.12.12.5\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.12.12.12.12.12.12.5.1\" style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.13.13.13.13.13.13\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.13.13.13.13.13.13.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.13.13.13.13.13.13.2\" style=\"padding:0.5pt 
8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.13.13.13.13.13.13.2.1\" style=\"color:#000000;\">0.29</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.13.13.13.13.13.13.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.13.13.13.13.13.13.3.1\" style=\"color:#000000;\">0.31</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.13.13.13.13.13.13.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.13.13.13.13.13.13.4.1\" style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.14.14.14.14.14.14\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.14.14.14.14.14.14.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.14.14.14.14.14.14.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.14.14.14.14.14.14.2.1\" style=\"color:#000000;\">0.29</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.14.14.14.14.14.14.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.14.14.14.14.14.14.3.1\" style=\"color:#000000;\">0.31</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.14.14.14.14.14.14.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.14.14.14.14.14.14.4.1\" style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.15.15.15.15.15.15\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.15.15.15.15.15.15.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.15.15.15.15.15.15.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.15.15.15.15.15.15.2.1\" style=\"color:#000000;\">0.04</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.15.15.15.15.15.15.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T2.15.15.15.15.15.15.3.1\" style=\"color:#000000;\">0.37</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.15.15.15.15.15.15.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.15.15.15.15.15.15.4.1\" style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.16.16.16.16.16.16\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.16.16.16.16.16.16.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.16.16.16.16.16.16.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.16.16.16.16.16.16.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.16.16.16.16.16.16.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.16.16.16.16.16.16.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.16.16.16.16.16.16.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.16.16.16.16.16.16.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.17.17.17.17.17.17\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.17.17.17.17.17.17.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.17.17.17.17.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.17.17.17.17.17.17.2.1\" style=\"color:#000000;\">-0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.17.17.17.17.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.17.17.17.17.17.17.3.1\" style=\"color:#000000;\">0.29</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.17.17.17.17.17.17.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.17.17.17.17.17.17.4.1\" 
style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.19.19.19.19.19.19\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S4.T2.18.18.18.18.18.18.1\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.18.18.18.18.18.18.1.1\" style=\"color:#000000;\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.19.19.19.19.19.19.2\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.19.19.19.19.19.19.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.19.19.19.19.19.19.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.19.19.19.19.19.19.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.19.19.19.19.19.19.4.1\" style=\"color:#000000;\">0.06</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.19.19.19.19.19.19.5\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.19.19.19.19.19.19.5.1\" style=\"color:#000000;\">0.04</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.20.20.20.20.20.20\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.20.20.20.20.20.20.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.20.20.20.20.20.20.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.20.20.20.20.20.20.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.20.20.20.20.20.20.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.20.20.20.20.20.20.3.1\" style=\"color:#000000;\">0.05</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.20.20.20.20.20.20.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.20.20.20.20.20.20.4.1\" 
style=\"color:#000000;\">0.04</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.21.21.21.21.21.21\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.21.21.21.21.21.21.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.21.21.21.21.21.21.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.21.21.21.21.21.21.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.21.21.21.21.21.21.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.21.21.21.21.21.21.3.1\" style=\"color:#000000;\">0.07</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.21.21.21.21.21.21.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.21.21.21.21.21.21.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.22.22.22.22.22.22\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.22.22.22.22.22.22.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.22.22.22.22.22.22.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.22.22.22.22.22.22.2.1\" style=\"color:#000000;\">0.04</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.22.22.22.22.22.22.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.22.22.22.22.22.22.3.1\" style=\"color:#000000;\">0.44</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.22.22.22.22.22.22.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.22.22.22.22.22.22.4.1\" style=\"color:#000000;\">0.53</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.23.23.23.23.23.23\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.23.23.23.23.23.23.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.23.23.23.23.23.23.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.23.23.23.23.23.23.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.23.23.23.23.23.23.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.23.23.23.23.23.23.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.23.23.23.23.23.23.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.23.23.23.23.23.23.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.24.24.24.24.24.24\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.24.24.24.24.24.24.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.24.24.24.24.24.24.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.24.24.24.24.24.24.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.24.24.24.24.24.24.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.24.24.24.24.24.24.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.24.24.24.24.24.24.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.24.24.24.24.24.24.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.26.26.26.26.26.26\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S4.T2.25.25.25.25.25.25.1\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.25.25.25.25.25.25.1.1\" style=\"color:#000000;\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.26.26.26.26.26.26.2\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.26.26.26.26.26.26.3\" 
style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.26.26.26.26.26.26.3.1\" style=\"color:#000000;\">0.30</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.26.26.26.26.26.26.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.26.26.26.26.26.26.4.1\" style=\"color:#000000;\">0.33</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.26.26.26.26.26.26.5\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.26.26.26.26.26.26.5.1\" style=\"color:#000000;\">0.30</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.27.27.27.27.27.27\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.27.27.27.27.27.27.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.27.27.27.27.27.27.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.27.27.27.27.27.27.2.1\" style=\"color:#000000;\">0.30</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.27.27.27.27.27.27.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.27.27.27.27.27.27.3.1\" style=\"color:#000000;\">0.33</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.27.27.27.27.27.27.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.27.27.27.27.27.27.4.1\" style=\"color:#000000;\">0.41</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.28.28.28.28.28.28\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.28.28.28.28.28.28.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.28.28.28.28.28.28.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.28.28.28.28.28.28.2.1\" style=\"color:#000000;\">0.37</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.28.28.28.28.28.28.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T2.28.28.28.28.28.28.3.1\" style=\"color:#000000;\">0.63</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.28.28.28.28.28.28.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.28.28.28.28.28.28.4.1\" style=\"color:#000000;\">0.37</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.29.29.29.29.29.29\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.29.29.29.29.29.29.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.29.29.29.29.29.29.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.29.29.29.29.29.29.2.1\" style=\"color:#000000;\">0.24</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.29.29.29.29.29.29.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.29.29.29.29.29.29.3.1\" style=\"color:#000000;\">0.38</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.29.29.29.29.29.29.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.29.29.29.29.29.29.4.1\" style=\"color:#000000;\">0.27</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.30.30.30.30.30.30\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.30.30.30.30.30.30.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.30.30.30.30.30.30.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.30.30.30.30.30.30.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.30.30.30.30.30.30.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.30.30.30.30.30.30.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.30.30.30.30.30.30.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.30.30.30.30.30.30.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" 
id=\"S4.T2.31.31.31.31.31.31\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.31.31.31.31.31.31.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.31.31.31.31.31.31.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.31.31.31.31.31.31.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.31.31.31.31.31.31.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.31.31.31.31.31.31.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.31.31.31.31.31.31.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.31.31.31.31.31.31.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.33.33.33.33.33.33\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S4.T2.32.32.32.32.32.32.1\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.32.32.32.32.32.32.1.1\" style=\"color:#000000;\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.33.33.33.33.33.33.2\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.33.33.33.33.33.33.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.33.33.33.33.33.33.3.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.33.33.33.33.33.33.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.33.33.33.33.33.33.4.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.33.33.33.33.33.33.5\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.33.33.33.33.33.33.5.1\" 
style=\"color:#000000;\">0.25</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.34.34.34.34.34.34\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.34.34.34.34.34.34.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.34.34.34.34.34.34.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.34.34.34.34.34.34.2.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.34.34.34.34.34.34.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.34.34.34.34.34.34.3.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.34.34.34.34.34.34.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.34.34.34.34.34.34.4.1\" style=\"color:#000000;\">0.25</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.35.35.35.35.35.35\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.35.35.35.35.35.35.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.35.35.35.35.35.35.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.35.35.35.35.35.35.2.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.35.35.35.35.35.35.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.35.35.35.35.35.35.3.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.35.35.35.35.35.35.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.35.35.35.35.35.35.4.1\" style=\"color:#000000;\">0.25</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.36.36.36.36.36.36\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.36.36.36.36.36.36.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.36.36.36.36.36.36.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.36.36.36.36.36.36.2.1\" style=\"color:#000000;\">0.02</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.36.36.36.36.36.36.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.36.36.36.36.36.36.3.1\" style=\"color:#000000;\">0.25</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.36.36.36.36.36.36.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.36.36.36.36.36.36.4.1\" style=\"color:#000000;\">0.25</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.37.37.37.37.37.37\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.37.37.37.37.37.37.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.37.37.37.37.37.37.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.37.37.37.37.37.37.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.37.37.37.37.37.37.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.37.37.37.37.37.37.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.37.37.37.37.37.37.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.37.37.37.37.37.37.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.38.38.38.38.38.38\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.38.38.38.38.38.38.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.38.38.38.38.38.38.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.38.38.38.38.38.38.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.38.38.38.38.38.38.3\" style=\"padding:0.5pt 8.0pt;\"><span 
class=\"ltx_text\" id=\"S4.T2.38.38.38.38.38.38.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.38.38.38.38.38.38.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.38.38.38.38.38.38.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.39.39.39.39.39.39\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S4.T2.39.39.39.39.39.39.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.39.39.39.39.39.39.2.1\" style=\"color:#000000;\">AVG</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.39.39.39.39.39.39.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.39.39.39.39.39.39.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.39.39.39.39.39.39.3.1\" style=\"color:#000000;\">0.19</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.39.39.39.39.39.39.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.39.39.39.39.39.39.4.1\" style=\"color:#000000;\">0.23</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.39.39.39.39.39.39.5\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.39.39.39.39.39.39.5.1\" style=\"color:#000000;\">0.23</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.40.40.40.40.40.40\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.40.40.40.40.40.40.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.40.40.40.40.40.40.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.40.40.40.40.40.40.2.1\" style=\"color:#000000;\">0.19</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.40.40.40.40.40.40.3\" style=\"padding:0.5pt 8.0pt;\"><span 
class=\"ltx_text\" id=\"S4.T2.40.40.40.40.40.40.3.1\" style=\"color:#000000;\">0.23</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.40.40.40.40.40.40.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.40.40.40.40.40.40.4.1\" style=\"color:#000000;\">0.29</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.41.41.41.41.41.41\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.41.41.41.41.41.41.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.41.41.41.41.41.41.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.41.41.41.41.41.41.2.1\" style=\"color:#000000;\">0.20</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.41.41.41.41.41.41.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.41.41.41.41.41.41.3.1\" style=\"color:#000000;\">0.26</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.41.41.41.41.41.41.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.41.41.41.41.41.41.4.1\" style=\"color:#000000;\">0.28</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.42.42.42.42.42.42\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.42.42.42.42.42.42.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.42.42.42.42.42.42.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.42.42.42.42.42.42.2.1\" style=\"color:#000000;\">0.08</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.42.42.42.42.42.42.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.42.42.42.42.42.42.3.1\" style=\"color:#000000;\">0.22</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.42.42.42.42.42.42.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.42.42.42.42.42.42.4.1\" 
style=\"color:#000000;\">0.34</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.43.43.43.43.43.43\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.43.43.43.43.43.43.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.43.43.43.43.43.43.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.43.43.43.43.43.43.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.43.43.43.43.43.43.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.43.43.43.43.43.43.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.43.43.43.43.43.43.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.43.43.43.43.43.43.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.44.44.44.44.44.44\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.44.44.44.44.44.44.1\" style=\"padding:0.5pt 8.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.44.44.44.44.44.44.2\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.44.44.44.44.44.44.2.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.44.44.44.44.44.44.3\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.44.44.44.44.44.44.3.1\" style=\"color:#000000;\">0.00</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.44.44.44.44.44.44.4\" style=\"padding:0.5pt 8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.44.44.44.44.44.44.4.1\" style=\"color:#000000;\">0.00</span></span></span>\n</span>\n</span></span><span class=\"ltx_text\" id=\"S4.T2.44.44.44.45\" style=\"color:#000000;\"></span></span></p>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span 
class=\"ltx_text\" id=\"S4.T2.46.1.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S4.T2.47.2\" style=\"font-size:90%;\">Results per belief update function (best results per criteria are highlighted) </span></figcaption>\n</figure>",
62
+ "capture": "TABLE II: Results per belief update function (best results per criteria are highlighted) "
63
+ }
64
+ },
65
+ "image_paths": {
66
+ "1": {
67
+ "figure_path": "2310.14402v2_figure_1.png",
68
+ "caption": "Figure 1: Collaborative Grasping Example.",
69
+ "url": "http://arxiv.org/html/2310.14402v2/x1.png"
70
+ },
71
+ "2(a)": {
72
+ "figure_path": "2310.14402v2_figure_2(a).png",
73
+ "caption": "(a)\nFigure 2: Example stable poses.",
74
+ "url": "http://arxiv.org/html/2310.14402v2/x2.png"
75
+ },
76
+ "2(b)": {
77
+ "figure_path": "2310.14402v2_figure_2(b).png",
78
+ "caption": "(b)\nFigure 2: Example stable poses.",
79
+ "url": "http://arxiv.org/html/2310.14402v2/x3.png"
80
+ },
81
+ "2(c)": {
82
+ "figure_path": "2310.14402v2_figure_2(c).png",
83
+ "caption": "(c)\nFigure 2: Example stable poses.",
84
+ "url": "http://arxiv.org/html/2310.14402v2/x4.png"
85
+ },
86
+ "2(d)": {
87
+ "figure_path": "2310.14402v2_figure_2(d).png",
88
+ "caption": "(d)\nFigure 2: Example stable poses.",
89
+ "url": "http://arxiv.org/html/2310.14402v2/x5.png"
90
+ },
91
+ "3(a)": {
92
+ "figure_path": "2310.14402v2_figure_3(a).png",
93
+ "caption": "(a)\nFigure 3: Example grasp configurations from which the actor can attempt to grasp the object - each configuration is associated with a score, i.e., probability of success.",
94
+ "url": "http://arxiv.org/html/2310.14402v2/x6.png"
95
+ },
96
+ "3(b)": {
97
+ "figure_path": "2310.14402v2_figure_3(b).png",
98
+ "caption": "(b)\nFigure 3: Example grasp configurations from which the actor can attempt to grasp the object - each configuration is associated with a score, i.e., probability of success.",
99
+ "url": "http://arxiv.org/html/2310.14402v2/x7.png"
100
+ },
101
+ "3(c)": {
102
+ "figure_path": "2310.14402v2_figure_3(c).png",
103
+ "caption": "(c)\nFigure 3: Example grasp configurations from which the actor can attempt to grasp the object - each configuration is associated with a score, i.e., probability of success.",
104
+ "url": "http://arxiv.org/html/2310.14402v2/x8.png"
105
+ },
106
+ "3(d)": {
107
+ "figure_path": "2310.14402v2_figure_3(d).png",
108
+ "caption": "(d)\nFigure 3: Example grasp configurations from which the actor can attempt to grasp the object - each configuration is associated with a score, i.e., probability of success.",
109
+ "url": "http://arxiv.org/html/2310.14402v2/x9.png"
110
+ },
111
+ "4(a)": {
112
+ "figure_path": "2310.14402v2_figure_4(a).png",
113
+ "caption": "(a)\nFigure 4: Example sensor configurations. Each column represents the RGB image [top] lidar reading [middle] and depth image [bottom] for a sensor configuration-object pose pair.",
114
+ "url": "http://arxiv.org/html/2310.14402v2/x10.png"
115
+ },
116
+ "4(b)": {
117
+ "figure_path": "2310.14402v2_figure_4(b).png",
118
+ "caption": "(b)\nFigure 4: Example sensor configurations. Each column represents the RGB image [top] lidar reading [middle] and depth image [bottom] for a sensor configuration-object pose pair.",
119
+ "url": "http://arxiv.org/html/2310.14402v2/x11.png"
120
+ },
121
+ "4(c)": {
122
+ "figure_path": "2310.14402v2_figure_4(c).png",
123
+ "caption": "(c)\nFigure 4: Example sensor configurations. Each column represents the RGB image [top] lidar reading [middle] and depth image [bottom] for a sensor configuration-object pose pair.",
124
+ "url": "http://arxiv.org/html/2310.14402v2/x12.png"
125
+ },
126
+ "4(d)": {
127
+ "figure_path": "2310.14402v2_figure_4(d).png",
128
+ "caption": "(d)\nFigure 4: Example sensor configurations. Each column represents the RGB image [top] lidar reading [middle] and depth image [bottom] for a sensor configuration-object pose pair.",
129
+ "url": "http://arxiv.org/html/2310.14402v2/x13.png"
130
+ },
131
+ "5": {
132
+ "figure_path": "2310.14402v2_figure_5.png",
133
+ "caption": "Figure 5: Evaluation objects",
134
+ "url": "http://arxiv.org/html/2310.14402v2/extracted/5477336/images/objects.png"
135
+ },
136
+ "6(a)": {
137
+ "figure_path": "2310.14402v2_figure_6(a).png",
138
+ "caption": "(a)\nFigure 6: Observations of HOLDER (a) Actual scene. (b) Comparing the predicted depth image (blue) and the lab-recorded image (red). (c)\nComparing the predicted (blue) and lab-recorded (red) 2D representation of the lidar reading.",
139
+ "url": "http://arxiv.org/html/2310.14402v2/x14.png"
140
+ },
141
+ "6(b)": {
142
+ "figure_path": "2310.14402v2_figure_6(b).png",
143
+ "caption": "(b)\nFigure 6: Observations of HOLDER (a) Actual scene. (b) Comparing the predicted depth image (blue) and the lab-recorded image (red). (c)\nComparing the predicted (blue) and lab-recorded (red) 2D representation of the lidar reading.",
144
+ "url": "http://arxiv.org/html/2310.14402v2/x15.png"
145
+ },
146
+ "6(c)": {
147
+ "figure_path": "2310.14402v2_figure_6(c).png",
148
+ "caption": "(c)\nFigure 6: Observations of HOLDER (a) Actual scene. (b) Comparing the predicted depth image (blue) and the lab-recorded image (red). (c)\nComparing the predicted (blue) and lab-recorded (red) 2D representation of the lidar reading.",
149
+ "url": "http://arxiv.org/html/2310.14402v2/x16.png"
150
+ },
151
+ "7(a)": {
152
+ "figure_path": "2310.14402v2_figure_7(a).png",
153
+ "caption": "(a)\nFigure 7: Four grasp configurations for FLASK at the lab",
154
+ "url": "http://arxiv.org/html/2310.14402v2/x17.jpeg"
155
+ },
156
+ "7(b)": {
157
+ "figure_path": "2310.14402v2_figure_7(b).png",
158
+ "caption": "(b)\nFigure 7: Four grasp configurations for FLASK at the lab",
159
+ "url": "http://arxiv.org/html/2310.14402v2/x18.jpeg"
160
+ },
161
+ "7(c)": {
162
+ "figure_path": "2310.14402v2_figure_7(c).png",
163
+ "caption": "(c)\nFigure 7: Four grasp configurations for FLASK at the lab",
164
+ "url": "http://arxiv.org/html/2310.14402v2/x19.jpeg"
165
+ },
166
+ "7(d)": {
167
+ "figure_path": "2310.14402v2_figure_7(d).png",
168
+ "caption": "(d)\nFigure 7: Four grasp configurations for FLASK at the lab",
169
+ "url": "http://arxiv.org/html/2310.14402v2/x20.jpeg"
170
+ },
171
+ "8": {
172
+ "figure_path": "2310.14402v2_figure_8.png",
173
+ "caption": "Figure 8: Grasps for HOLDER: best grasp g*superscript\ud835\udc54g^{*}italic_g start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT, initial chosen grasp gisubscript\ud835\udc54\ud835\udc56g_{i}italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT and chosen grasp after the intervention gfsubscript\ud835\udc54\ud835\udc53g_{f}italic_g start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT.",
174
+ "url": "http://arxiv.org/html/2310.14402v2/extracted/5477336/images/three_grasps.png"
175
+ },
176
+ "9": {
177
+ "figure_path": "2310.14402v2_figure_9.png",
178
+ "caption": "Figure 9: Similarity score between actual observation (rows) for each pose and predicted observations (columns).",
179
+ "url": "http://arxiv.org/html/2310.14402v2/extracted/5477336/images/iou_sim_matrix.png"
180
+ },
181
+ "10(a)": {
182
+ "figure_path": "2310.14402v2_figure_10(a).png",
183
+ "caption": "(a) P1\nFigure 10: Observations for different object poses of HOLDER",
184
+ "url": "http://arxiv.org/html/2310.14402v2/"
185
+ },
186
+ "10(b)": {
187
+ "figure_path": "2310.14402v2_figure_10(b).png",
188
+ "caption": "(b) P2\nFigure 10: Observations for different object poses of HOLDER",
189
+ "url": "http://arxiv.org/html/2310.14402v2/"
190
+ },
191
+ "10(c)": {
192
+ "figure_path": "2310.14402v2_figure_10(c).png",
193
+ "caption": "(c) P3\nFigure 10: Observations for different object poses of HOLDER",
194
+ "url": "http://arxiv.org/html/2310.14402v2/"
195
+ },
196
+ "10(d)": {
197
+ "figure_path": "2310.14402v2_figure_10(d).png",
198
+ "caption": "(d) P4\nFigure 10: Observations for different object poses of HOLDER",
199
+ "url": "http://arxiv.org/html/2310.14402v2/"
200
+ },
201
+ "10(e)": {
202
+ "figure_path": "2310.14402v2_figure_10(e).png",
203
+ "caption": "(e) P5\nFigure 10: Observations for different object poses of HOLDER",
204
+ "url": "http://arxiv.org/html/2310.14402v2/"
205
+ },
206
+ "10(f)": {
207
+ "figure_path": "2310.14402v2_figure_10(f).png",
208
+ "caption": "(f) P6\nFigure 10: Observations for different object poses of HOLDER",
209
+ "url": "http://arxiv.org/html/2310.14402v2/"
210
+ },
211
+ "11": {
212
+ "figure_path": "2310.14402v2_figure_11.png",
213
+ "caption": "Figure 11: FLASK grasp score for the simulated [left] and lab [right] settings.",
214
+ "url": "http://arxiv.org/html/2310.14402v2/extracted/5477336/images/flask_graspscore_btext.png"
215
+ }
216
+ },
217
+ "validation": true,
218
+ "references": [],
219
+ "url": "http://arxiv.org/html/2310.14402v2"
220
+ }
20240318/2310.17513v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2311.08146v2.json ADDED
@@ -0,0 +1,184 @@
1
+ {
2
+ "title": "Joint Source-Channel Coding for Channel-Adaptive Digital Semantic Communications",
3
+ "abstract": "In this paper, we propose a novel joint source-channel coding (JSCC) approach for channel-adaptive digital semantic communications. In semantic communication systems with digital modulation and demodulation, robust design of JSCC encoder and decoder becomes challenging not only due to the unpredictable dynamics of channel conditions but also due to diverse modulation orders. To address this challenge, we first develop a new demodulation method which assesses the uncertainty of the demodulation output to improve the robustness of the digital semantic communication system. We then devise a robust training strategy which enhances the robustness and flexibility of the JSCC encoder and decoder against diverse channel conditions and modulation orders. To this end, we model the relationship between the encoder\u2019s output and decoder\u2019s input using binary symmetric erasure channels and then sample the parameters of these channels from diverse distributions. We also develop a channel-adaptive modulation technique for an inference phase, in order to reduce the communication latency while maintaining task performance. In this technique, we adaptively determine modulation orders for the latent variables based on channel conditions. Using simulations, we demonstrate the superior performance of the proposed JSCC approach for image classification, reconstruction, and retrieval tasks compared to existing JSCC approaches.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Semantic communication [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###] has garnered increasing attention, referring to the process of transmitting and receiving messages designed to convey meaning.\nIn traditional communication, the focus of a transmitter is to transform the message into a bit sequence that can be accurately reconstructed at a receiver with minimal bit errors.\nIn contrast, in semantic communication, the goal of the transmitter is to convey the meaning of the message, aiming to maximize the performance of a desired task at the receiver. The primary advantage of semantic communication lies in its ability to enhance the task performance at the receiver, even in scenarios where perfect reconstruction of the bit sequence is not feasible using traditional communication systems.\nFor example, in [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###], it was reported that the semantic communication achieves a better task performance than the traditional communication that only focuses on delivering the bit sequence.\nMotivated by this advantage, the semantic communication has been recognized as a crucial technology for enabling high-volume data-intensive and low-latency tasks such as VR/AR, video signal used in autonomous vehicle [8 ###reference_b8###], and a drone performing a specific mission [9 ###reference_b9###].\nThere is a rich literature on the design of the semantic communication system that can deliver task-reliant information quickly and accurately over the wireless channels.\nOne notable approach involves employing a joint source-channel coding (JSCC) neural network, sometimes referred to as joint semantic channel coding.\nIn this approach, source and channel encoders/decoders are integrated into a unified neural-type encoder/decoder. 
Subsequently, the JSCC encoder and decoder are jointly trained by considering the impact of wireless channels such as additive white Gaussian noise (AWGN) and Rayleigh fading channels.\nThe design of the JSCC encoder/decoder was studied for various applications such as image transmission [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###], text transmission [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###], speech transmission [16 ###reference_b16###], and video transmission [17 ###reference_b17###, 18 ###reference_b18###].\nThrough these studies, the potential of the JSCC approach to enhance task performance was demonstrated when compared to traditional separate source and channel coding approaches. Unfortunately, these studies rely on the end-to-end training of the JSCC encoder and decoder for specific training environments. Consequently, their effectiveness may be compromised when operating in diverse communication environments that significantly differ from the training environments.\nTo address this challenge, several studies have explored the concept of a channel-adaptive JSCC approach, enhancing adaptability to diverse channel environments.\nA channel-adaptive JSCC approach for wireless image transmission was studied in [19 ###reference_b19###], which utilizes attention modules to extract feature importance based on signal-to-noise ratio (SNR) and loss.\nThis idea was extended in [20 ###reference_b20###, 21 ###reference_b21###] by incorporating orthogonal frequency division multiplexing (OFDM) waveforms.\nAlso, in [22 ###reference_b22###], the attention mask was transformed into a binary mask for adaptive rate control.\nThe common idea behind these methods is to utilize channel information, such as SNR, as an additional input to control the JSCC encoder and decoder. This concept is realized by incorporating dedicated modules like attention modules or SNR-adaptive modules. 
However, this approach escalates both the network size and training complexity.\nIn [23 ###reference_b23###], channel-adaptive training was conducted by adjusting the number of active outputs of the encoder according to channel conditions. Unfortunately, the aforementioned techniques primarily focus on analog communication systems, where encoder outputs, either real-valued or complex-valued, are transmitted using analog modulation. Consequently, these methods lack compatibility with modern digital communication systems. Moreover, implementing these techniques introduces various challenges, including issues related to the cost, size, and flexibility of RF hardware components.\nA few studies have attempted to integrate the JSCC approach into a digital semantic communication system. In [24 ###reference_b24###], [25 ###reference_b25###], a digital semantic communication system was employed where a bit sequence is generated through quantization and transmitted without digital modulation. In [26 ###reference_b26###], the real-valued output of the JSCC encoder was mapped to binary phase shift keying symbols, while in [27 ###reference_b27###], [28 ###reference_b28###] a quantizer was used to convert the encoder\u2019s output into conventional quadrature amplitude modulation (QAM) symbols.\nAdditionally, an adaptive masking strategy in [29 ###reference_b29###] was employed for robust operation of digital semantic communications in the presence of semantic noise.\nIn [30 ###reference_b30###], the idea of robust information bottleneck was introduced to enable robust model training across various SNR levels. The common limitation of these techniques is the use of a fixed modulation scheme during the training of the JSCC encoder and decoder. Consequently, the trained encoder and decoder have no compatibility with other modulation schemes, except the one considered during the training process. 
Although an adaptive modulation technique in [31 ###reference_b31###] can be adopted to provide compatibility with multiple modulation schemes, this technique did not consider an end-to-end JSCC training approach to account for the effects of fading channels and noise. To the best of the authors\u2019 knowledge, no previous studies have explored a JSCC approach for channel-adaptive digital semantic communications, despite its practical appeal for enabling the early adaptation of semantic communications.\n###figure_1### To bridge this research gap, this paper proposes a novel JSCC approach for channel-adaptive digital semantic communications, providing robustness and flexibility against diverse channel conditions and modulation schemes.\nThe proposed approach comprises three novel components: (i) a robust demodulation method, (ii) a robust training strategy, and (iii) a channel-adaptive modulation technique. In the robust demodulation method, we assess the uncertainty of the demodulation output and assign an intermediate value instead of conventional binary outputs when uncertainty arises. Utilizing this method, we enhance the robustness of digital semantic communication systems against fading channels and noise. In the robust training strategy, we model the relationship between the encoder\u2019s output and the decoder\u2019s input using binary symmetric erasure channels (BSECs) and then sample the parameters of these models from diverse distributions. By doing so, we not only facilitate end-to-end training of the JSCC encoder and decoder but also enhance their robustness and flexibility against diverse channel conditions and modulation orders. 
In the channel-adaptive modulation technique, we adaptively determine modulation orders of the latent variables according to channel conditions, thus reducing the communication latency for transmission while maintaining task performance.\nUsing simulations, we demonstrate the superiority of the proposed JSCC approach for image classification, reconstruction, and retrieval tasks compared to existing JSCC approaches.\nThe major contributions of our paper are summarized below.\nWe develop a new demodulation method for improving the robustness of the digital semantic communication system. In this method, we introduce a criterion to assess the uncertainty of the demodulation output based on a log-likelihood ratio (LLR). We then assign an intermediate output 0.5, instead of conventional binary outputs 0 or 1, when uncertainty arises. To reduce the computational complexity of the demodulation method, we also devise closed-form decision boundaries to check the uncertainty criterion. Through the design of the new demodulation method for semantic communications, we address the challenges posed by conventional hard-output demodulation considered in the literature (e.g., [32 ###reference_b32###]), which has limited expressive power in the latent space and is vulnerable to bit-flip errors in low-SNR regimes.\nWe present a robust end-to-end training strategy for the JSCC encoder and decoder when employing our demodulation method. In this strategy, we employ BSECs to model the stochastic interaction between the encoder\u2019s output and the decoder\u2019s input. We then develop a sampling strategy which introduces variations in the bit-flip probabilities of the BSECs by sampling them from different stochastic distributions. 
Through this stochastic model with parameter sampling, our strategy effectively enhances the robustness and flexibility of the JSCC encoder and decoder against diverse channel conditions and modulation orders, in comparison to conventional environment-specific training strategies.\nWe devise a channel-adaptive modulation technique for an inference phase, in order to reduce the communication latency while maintaining task performance. To this end, we characterize the bit-error and correct-decision probabilities of the QAM symbols as a function of the SNR and modulation order. Based on this characterization, we determine the best modulation order that can minimize the communication latency while ensuring that the bit-error probability of the QAM signal is below the bit-flip probability set by our training strategy.\nUsing simulations, we demonstrate the superiority of the proposed JSCC approach over the existing JSCC approaches for image classification, reconstruction and retrieval tasks using the MNIST [33 ###reference_b33###], Fashion-MNIST [34 ###reference_b34###], CIFAR-10 and CIFAR-100 [35 ###reference_b35###] datasets. Our results show that the proposed approach outperforms the existing approaches in terms of the classification, reconstruction, and retrieval performances. Using simulations, we also validate the effectiveness of our demodulation method, training strategy, and channel-adaptive modulation technique."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II System Model",
15
+ "text": "In this work, we consider a digital semantic communication system for a dedicated machine learning task at a receiver. An example of the considered system for an image classification task is illustrated in Fig. 1 ###reference_###.\nAt the transmitter, a JSCC encoder configured with a deep neural network (DNN) is employed to transform an input image into a bit sequence.\nLet be an input data (e.g., an image) which is assumed to be independent and identically distributed (IID) over a source distribution . The operation of the JSCC encoder is denoted by a function parameterized by the weights . Suppose that the sigmoid function is employed as an activation function of the output layer. Then each output of the JSCC encoder can be interpreted as the probability\nof the modulated bit being 1, and the sampling from this distribution results in the generation of the bit sequence, where is the length of the bit sequence. The entries of the bit sequence will be treated as binary latent variables.\nAfter this, digital modulation is applied to transform the bit sequence into a symbol sequence .\nThe -th entry of the symbol sequence is denoted as , and this symbol is transmitted at time slot .\nWe assume that each symbol is modulated using -QAM, i.e., , where is a constellation set of -QAM.\nThe wireless channel of the system is modeled as quasi-static fading channels (also known as block fading channels) [37 ###reference_b37###], in which channel coefficients remain constant within a channel coherence time.\nUnder the quasi-static fading channel model, the baseband received signal at time slot is expressed as\nwhere is a complex-valued channel coefficient, and is an AWGN distributed as .\nSuppose that the channel coefficient is perfectly estimated at the receiver via pilot-assisted channel estimation within every channel coherence time.\nBy utilizing the knowledge of , the channel equalization is executed for the received signal in (1 ###reference_###), which yields 
the equalized signal at time slot given by\nwhere and is the instantaneous SNR of the system.\nThen digital demodulation is executed to reconstruct the transmitted bit sequence from the equalized signals.\nDetails of a demodulation method adopted in our work will be introduced in Sec. III ###reference_###.\nBy applying the aforementioned demodulation process, the transmitted bit sequence is reconstructed at the receiver, which is denoted by .\nAfter reconstructing the bit sequence, the JSCC decoder configured with a DNN is applied to reconstruct the input data , denoted by .\nThe operation of the JSCC decoder is denoted by a function parameterized by the weights .\nFinally, the dedicated machine learning task is performed by a task neural network (e.g., classifier), parameterized by the weights , based on the reconstructed data ."
16
+ },
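The transmitter chain and channel model described above (sigmoid encoder outputs, Bernoulli bit sampling, QAM mapping, quasi-static fading, zero-forcing equalization) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the encoder is a stand-in sigmoid map rather than a trained DNN, and the Gray-mapped 4-QAM labeling and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder_probs(logits):
    # Stand-in for the DNN JSCC encoder: a sigmoid maps pre-activations
    # into (0, 1), interpreted as P(bit = 1) for each latent variable.
    return 1.0 / (1.0 + np.exp(-logits))

def to_qam4(bits):
    # Assumed Gray-mapped 4-QAM: one bit on the in-phase axis, one on the
    # quadrature axis, normalized to unit average symbol energy.
    b = bits.reshape(-1, 2)
    return ((2 * b[:, 0] - 1) + 1j * (2 * b[:, 1] - 1)) / np.sqrt(2)

# Encoder output -> stochastic binary latent variables.
p = encoder_probs(rng.normal(size=8))
bits = (rng.random(p.shape) < p).astype(int)

# Quasi-static (block) fading: one coefficient h per coherence block.
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
s = to_qam4(bits)
snr_db = 10.0
noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)   # per-dimension AWGN std
n = noise_std * (rng.normal(size=s.shape) + 1j * rng.normal(size=s.shape))
r = h * s + n

# Zero-forcing equalization with the perfectly estimated coefficient h.
y = r / h
```

With perfect channel knowledge, equalization reduces the fading channel to an AWGN channel at the instantaneous SNR, which is the signal the demodulator of Sec. III operates on.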
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Robust Demodulation Method for Digital Semantic Communications",
21
+ "text": "In this section, we design a special type of demodulation, referred to as robust demodulation, for improving the robustness of the digital semantic communication system described in Sec. II."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Design Principle",
27
+ "text": "In traditional digital communication systems, two types of demodulation are typically considered: (i) soft-output demodulation, which yields LLR values, and (ii) hard-output demodulation, which generates binary outputs. Unfortunately, when applied to JSCC-based digital semantic communication systems, both demodulation methods encounter their own limitations as described below.\nLimitations of soft-output demodulation: The LLR values obtained from the soft-output demodulation have an infinite number of possibilities. Consequently, the statistical behavior of these values may involve an infinite number of parameters, with distributions influenced by both channel conditions and modulation orders. Therefore, integrating soft-output demodulation into the training of the JSCC encoder/decoder poses a considerable challenge in enabling the encoder/decoder to learn the diverse behavior of the LLR values formed under varying channel conditions and modulation orders.\nLimitations of hard-output demodulation:\nThe outputs of the hard-output demodulation are limited to binary values, significantly restricting the expressive power in the latent space. Furthermore, hard-output demodulation is susceptible to bit-flip errors, especially in low-SNR regimes, which may significantly alter the desired meaning of transmitted data.\nTo address the limitations of the conventional demodulation methods, we devise a robust demodulation method that produces ternary outputs. This method introduces an intermediate value, denoted as , in addition to the conventional binary values of and . (Our intuition behind the choice of is that biasing the intermediate value towards a particular binary value (i.e., or ) might create difficulty for the JSCC decoder in distinguishing between the intermediate value and the corresponding binary value. This potentially leads to performance degradation in the overall communication process.) 
Specifically, our method assigns this intermediate value when uncertainty arises regarding the transmitted binary latent variable. This strategy effectively mitigates frequent bit-flip errors in low-SNR regimes while simultaneously enhancing the expressiveness of the demodulation output compared to conventional hard-output demodulation. These intrinsic features of our demodulation method significantly augment the JSCC decoder\u2019s capability to perform dedicated machine learning tasks. The advantage of our robust demodulation method in facilitating robust training of the JSCC encoder/decoder will be discussed in Sec. IV ###reference_###.\nIn our demodulation method, we measure the reliability level of the decision on each latent variable based on the magnitude of the LLR, in order to determine a criterion for assigning the intermediate value .\nLet be the -th binary latent variable associated with the transmitted symbol at time slot .\nAlso, let be a subset of which consists of symbols whose -th bit is given by after a symbol demapping, i.e.,\nfor , where is a symbol demapping function for -QAM.\nThen the LLR of the -th binary latent variable is computed as\nUnfortunately, prior knowledge about the distribution of bit outputs from the JSCC encoder may not be available at the receiver because it depends on the true distribution of both the encoder weights and the source data. To circumvent this challenge, we assume that the prior probability of each binary latent variable is uniform. 
Under this assumption, the LLR in (III-A ###reference_###) is rewritten as\nwhere follows from (2 ###reference_###).\nNote that if , the demodulation is uncertain about its decision on and the corresponding LLR will be close to zero.\nTherefore, we use the magnitude of the LLR as a measure of the reliability level when making a decision about the binary latent variable based on the observation .\nIn particular, our demodulation method assigns the intermediate value to the -th latent variable whenever the corresponding LLR is close to .\nLet be a threshold applied to the LLR for assigning the intermediate value.\nThen the output of our demodulation method can be expressed as , where"
28
+ },
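A minimal sketch of this ternary rule for 4-QAM follows. It assumes an illustrative Gray labeling, an equalized AWGN observation with effective SNR so that the likelihood of a candidate symbol s is proportional to exp(-snr*|y - s|^2), a uniform bit prior, and an arbitrary example threshold; none of these constants come from the paper.

```python
import numpy as np

# Assumed Gray-labeled, unit-energy 4-QAM constellation.
CONST = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
BITS = np.array([[1, 1], [0, 1], [0, 0], [1, 0]])  # [I-bit, Q-bit] per symbol

def llr(y, snr, bit_idx):
    # Exact LLR of bit `bit_idx` for an equalized sample y under a uniform
    # bit prior: log-ratio of likelihood sums over the two bit subsets.
    d2 = np.abs(y - CONST) ** 2
    num = np.sum(np.exp(-snr * d2[BITS[:, bit_idx] == 1]))
    den = np.sum(np.exp(-snr * d2[BITS[:, bit_idx] == 0]))
    return np.log(num / den)

def robust_demod(y, snr, bit_idx, threshold):
    # Ternary output: the intermediate value 0.5 when |LLR| falls below
    # the threshold, otherwise a hard 0/1 decision from the LLR sign.
    l = llr(y, snr, bit_idx)
    if abs(l) < threshold:
        return 0.5
    return 1.0 if l > 0 else 0.0
```

The intermediate value is emitted exactly when the observation is nearly equidistant from the two bit hypotheses, which is where hard-output demodulation would be most prone to bit flips.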
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Low-Complexity Robust Demodulation Method",
33
+ "text": "The robust demodulation method in (6 ###reference_###) requires high computational complexity due to the need to measure distances to all symbols in . To alleviate the computational complexity, we design a low-complexity variation of the robust demodulation method. We start by approximating the LLR as\nwhere follows from a well-known log-sum-exp approximation. Thanks to the independence of the real and imaginary parts of the AWGN along with the symmetric property of the QAM, the demodulation method for the bits associated with the real and imaginary parts can be performed independently. Utilizing this fact, we define two LLR functions:\nwhich correspond to the bits associated with the real and imaginary parts, respectively.\nDepending on the value of , a proper LLR function is chosen between and . In particular, our criterion for assigning the intermediate value in the second line of (6) is rewritten as\nTo facilitate the low-complexity computation for checking the criterion in (9 ###reference_###), we now introduce decision boundaries in an in-phase-quadrature (I-Q) constellation diagram for -QAM.\nLet be the first decision boundary, which is closest to the I-axis or the Q-axis, to check the above criterion for a given , where the bit index satisfies . From (III-B ###reference_###) and (9 ###reference_###), if the -th bit has different values across , then can be expressed as\nwhere is the minimum distance for the normalized -QAM constellation set.\nSince adjacent decision boundaries maintain equidistant intervals, once is determined, we obtain the decision boundary values for as follows:\nwhere .\nLet and be the sets of scaled upper and lower decision boundaries associated with two adjacent symbols having different -th bit values. 
Similarly, let and be the sets of scaled upper and lower decision boundaries associated with two adjacent symbols having the same -th bit value equal to .\nUtilizing these notations, the low-complexity robust demodulation for the real-part bits is represented as\nfor , where\nwhere denotes the set of indices of decision boundaries related to the -th bit being in the -QAM.\nFig. 2 ###reference_### visualizes the decision boundaries associated with the second bit of a 16-QAM symbol (i.e., ). Note that in this case, we have , and .\nIn Fig. 2 ###reference_###, we also visualize the regions and in (III-B ###reference_###), which are shaded in different colors. It should also be noted that the value of the second bit is determined as when or . Fig. 2 ###reference_### clearly illustrates the decision rule of our demodulation method, which readily determines the demodulation output by comparing the value of the received signal with a given set of decision boundaries.\nDue to the symmetric property of QAM, the low-complexity robust demodulation for the imaginary-part bits can be executed similarly. This involves replacing with and appropriately defining the decision boundary sets for the imaginary-part bits.\n###figure_2### Remark 1 (Demodulation with -ary outputs): Our demodulation method can be generalized to produce -ary outputs for any . As increases, the demodulation\u2019s expressive power grows, but controlling the outputs becomes more challenging due to the increased parameters needed to represent their statistical behavior. Moreover, the computational complexity of the demodulation process escalates with . Therefore, we choose as a practical compromise, balancing model complexity and expressive power."
34
+ },
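The log-sum-exp (max-log) simplification underlying the low-complexity method can be checked numerically against the exact LLR. The 4-QAM constellation and labeling below are the same illustrative assumptions as before, not the paper's exact decision-boundary formulation.

```python
import numpy as np

# Illustrative Gray-labeled 4-QAM constellation; [I-bit, Q-bit] per symbol.
CONST = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
BITS = np.array([[1, 1], [0, 1], [0, 0], [1, 0]])

def llr_exact(y, snr, k):
    d2 = np.abs(y - CONST) ** 2
    return np.log(np.sum(np.exp(-snr * d2[BITS[:, k] == 1]))
                  / np.sum(np.exp(-snr * d2[BITS[:, k] == 0])))

def llr_maxlog(y, snr, k):
    # log-sum-exp ~= max: keep only the nearest symbol in each bit subset,
    # so the LLR collapses to a difference of squared distances.
    d2 = np.abs(y - CONST) ** 2
    return snr * (np.min(d2[BITS[:, k] == 0]) - np.min(d2[BITS[:, k] == 1]))
```

Because each bit of a square QAM constellation depends on only one axis, the distance comparison further reduces to per-axis checks against a small set of decision boundaries, which is what removes the need to scan the whole constellation.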
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "IV Robust Training Strategy of the Proposed JSCC Approach",
39
+ "text": "In this section, we devise a robust training strategy to enhance the robustness and flexibility of the JSCC encoder and decoder against diverse channel conditions and modulation orders."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "IV-A BSEC Modeling Approach",
45
+ "text": "The fundamental idea of our robust training strategy is to harness a stochastic model to represent the combined effects of digital modulation, fading channel, equalization, and demodulation, instead of explicitly considering their effects individually.\nIn the digital semantic communication system with our robust demodulation method in Sec. III-B, a binary latent variable is stochastically transformed into a ternary variable .\nThis stochastic transformation exactly follows a well-known BSEC model in which the intermediate value is treated as an erased value assigned when the symbol is erased.\nMotivated by this fact, we harness -parallel BSECs to model the relationship between the JSCC encoder\u2019s output and the JSCC decoder\u2019s input , as illustrated in Fig. 1 ###reference_###.\nIn this model, the conditional distribution of the decoder input for a given encoder output is represented as\nwhere , represents the bit-flip probability, represents the bit-erasure probability, and represents the bit-correct probability for the -th BSEC.\nNote that if , implying that there is no intermediate output , the above BSEC reduces to a BSC which is one of the most widely adopted models in wireless communications. Similarly, if , implying that there is no bit-flip error, the above BSEC reduces to a binary erasure channel (BEC) which is another common model in wireless communications. Therefore, our BSEC model can be considered as a generalization of both the BSC and BEC models. 
By introducing not only the bit-flip probability but also the bit-erasure probability, BSEC offers the advantage of reducing the likelihood of bit-flip errors compared to the BSC, while allowing the consideration of the inevitable bit-flip errors that are ignored in the BEC.\nA key advantage of our BSEC modeling approach is that it facilitates the end-to-end training of the JSCC encoder and decoder without explicitly considering digital modulation, fading channel, channel equalization, and digital demodulation processes; rather, the combined effects of these processes are implicitly captured by the parallel BSECs with parameters during a training phase."
46
+ },
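The BSEC transformation itself is straightforward to simulate. The sketch below (numpy, illustrative probabilities) flips a bit with the flip probability, erases it to the intermediate value 0.5 with the erasure probability, and passes it through otherwise; setting the erasure probability to zero recovers the BSC, and setting the flip probability to zero recovers the BEC.

```python
import numpy as np

rng = np.random.default_rng(1)

def bsec(bits, p_flip, p_erase):
    # Binary symmetric erasure channel: flip with prob. p_flip, erase to
    # the intermediate value 0.5 with prob. p_erase, pass through otherwise.
    u = rng.random(bits.shape)
    out = bits.astype(float)
    flip = u < p_flip
    out[flip] = 1.0 - out[flip]
    erase = (u >= p_flip) & (u < p_flip + p_erase)
    out[erase] = 0.5
    return out

bits = rng.integers(0, 2, size=100_000)
out = bsec(bits, p_flip=0.05, p_erase=0.10)
```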
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "IV-B Parameter Sampling Strategy",
51
+ "text": "When employing our BSEC modeling approach, it is crucial to determine a proper set of parameters that can reflect communication scenarios encountered during an inference phase.\nIf we simply consider fixed parameters during a training phase, they can only represent a certain communication scenario with a particular channel condition and modulation level. For instance, if we only consider small bit-flip probabilities (i.e., ) during the training phase, these probabilities may not align with communication scenarios where the SNR is low and/or the modulation order is high, since these scenarios may lead to high bit-error probabilities.\nThis motivates us to consider flexible determination of the parameters to ensure the robustness of the JSCC encoder and decoder against diverse SNRs and modulation orders that may occur during the inference phase.\nTo promote flexibility in the parameter determination, our strategy is to introduce variations in the bit-flip probabilities by sampling them from different stochastic distributions.\nIn particular, when computing the loss for each training sample, we sample the bit-flip probability of the -th BSEC independently from the following uniform distribution (the uniform distribution is employed due to its simplicity; however, alternative distributions like the beta and exponential distributions, which sample values between 0 and 1, are also applicable):\nwhere is a target robustness level which represents the maximum bit-flip probability allowed for the -th latent variable .\nBy doing so, the latent variable can be trained to cover the bit-error probability up to the target robustness level .\nIn our training strategy, we set different robustness levels across the latent variables, in order to allow flexibility in choosing different modulation orders, as will be discussed in Sec. 
V ###reference_###.\nAfter sampling the bit-flip probabilities , we determine the remaining parameters and that match with the sampled value of .\nNote that three parameters , , and are entangled by the demodulation rule for a given SNR and modulation order.\nHowever, both the SNR and modulation order are diverse during the inference phase, making it difficult to characterize the exact relationship among these parameters.\nTo circumvent this difficulty, during the training phase, we assume that 4-QAM (i.e., ) is chosen with . Under this assumption, the first decision boundary is given by for all by the relationship between and characterized in (III-B ###reference_###).\nTherefore, the bit-erasure probability associated with the sampled value of is determined by plugging and into the parameter characterization in (V-A ###reference_7###) and (30 ###reference_###). As a result, the corresponding bit-erasure probability is given by\nwhere is the inverse -function.\nSimilarly, the corresponding bit-correct probability is determined as . These parameter expressions are utilized during the training phase.\nA key advantage of the above sampling strategy is that the JSCC encoder and decoder can be trained with various realizations of during the training process.\nTherefore, our strategy effectively enhances the robustness of the JSCC encoder and decoder against diverse channel conditions that can be encountered during an inference phase.\nThis advantage will be numerically demonstrated in Sec. VI ###reference_###."
52
+ },
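The sampling step can be sketched as follows. The derivation of the matching erasure probability here assumes unit-energy 4-QAM with per-axis Gaussian noise of standard deviation 1/sqrt(2*SNR) and places the erasure boundary so that the tail mass beyond the far boundary equals the sampled flip probability; this is an illustrative reconstruction in the spirit of the text's inverse-Q-function relation, not necessarily the paper's exact closed form.

```python
import math
import random

random.seed(0)

def qfunc(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def qfunc_inv(p, lo=-10.0, hi=10.0):
    # Simple bisection inverse of the (monotonically decreasing) Q-function.
    for _ in range(100):
        mid = (lo + hi) / 2
        if qfunc(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def sample_bsec_params(eps_max, snr):
    # Sample a flip probability uniformly up to the robustness level, then
    # derive erasure/correct probabilities under the assumed 4-QAM per-axis
    # model (amplitude 1/sqrt(2), noise std 1/sqrt(2*snr)).
    eps = random.uniform(0.0, eps_max)        # bit-flip probability
    sigma = 1.0 / math.sqrt(2 * snr)
    a = 1.0 / math.sqrt(2)                    # per-axis symbol amplitude
    b = qfunc_inv(eps) * sigma - a            # boundary so that P(flip) = eps
    p_erase = qfunc((a - b) / sigma) - eps    # mass landing inside [-b, b]
    return eps, p_erase, 1.0 - eps - p_erase
```

The three probabilities always sum to one, so the bit-correct probability is simply the remainder once the flip and erasure probabilities are fixed.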
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "IV-C Training with BSEC Model",
57
+ "text": "To train the JSCC encoder and decoder using the BSECs with diverse parameters, we modify the training strategy in [32 ###reference_b32###] which was originally developed for the BSCs with the same bit-flip probabilities.\nFollowing the strategy in [32 ###reference_b32###], we aim at maximizing the mutual information between and .\nBy increasing the dependency between and , the decoder gains the ability to effectively utilize for accurate prediction of and therefore reduce the difference between and .\nLet and , where the -th entry of and represents the bit-flip probability and bit-erasure probability determined by our sampling strategy in Sec. IV-B ###reference_### for the -th BSEC, respectively.\nThen the conditional distribution of the parallel BSECs parameterized by and is expressed as\nwhere each factor is characterized in (14 ###reference_###).\nMeanwhile, by assuming that each bit follows an independent Bernoulli distribution with probability , the conditional distribution of the JSCC encoder follows\nGenerating via sampling from the Bernoulli distribution can be regarded as stochastic 1-bit quantization for the -th encoder output which is originally a real-valued constant.\nThe distributions and imply that the channel from to can be described as a noisy memoryless channel with the following conditional distribution [38 ###reference_b38###]:\nwhere is further computed as\nDuring the training process, the decoder\u2019s input is sampled according to (IV-C ###reference_1###) for each input data . Note that by employing the sampling strategy in Sec. IV-B ###reference_###, the parameters and are independently sampled across different input data.\nThe mutual information between and is computed as\nwhere is the true posterior distribution.\nUnfortunately, this distribution is often intractable, so we cannot directly train the JSCC decoder to follow the true posterior distribution. 
To circumvent this limitation, we use a variational approximation [26 ###reference_b26###] by assuming that the JSCC decoder is a stochastic decoder whose output follows a Gaussian distribution with the mean and an isotropic covariance matrix (i.e., ), where is the dimension of .\nBased on the above strategy, our objective function is expressed as\nwhere follows from the Gaussian assumption on the output of the decoder.\nSince both and are positive constants, minimizing the mean squared error (MSE) between the input data and the reconstructed data maximizes the original objective function in (IV-C ###reference_4###). Therefore, we use the MSE loss function for maximizing the mutual information between and , defined as\nThe ultimate objective of the digital semantic communication system in Sec. II is to perform a dedicated machine learning task at the receiver. Motivated by this fact, we also introduce the loss function for maximizing the performance of the dedicated task by considering an image classification task as an example of such a task.\nFor the loss function design, we aim at maximizing the mutual information between the true label corresponding to and the reconstructed data , in order to train the classifier to make accurate inference about the label using . This can be regarded as essential information for task performance.\nAccording to [39 ###reference_b39###], to maximize , it suffices to minimize the cross-entropy (CE) loss function defined as\nTo maximize both reconstruction and classification accuracies, we finally design our loss function as the weighted summation of the MSE loss in (23 ###reference_###) and the CE loss in (24 ###reference_###) as follows:\nwhere is a hyperparameter determined by the relative importance of the reconstruction accuracy of the JSCC encoder/decoder compared to the classification accuracy. 
In practice, when computing the loss, we replace the input distribution using the empirical distribution obtained from a training dataset.\nWhen executing gradient back-propagation, we exclude the quantization process after the JSCC encoder since this process involves non-differentiable sampling. Consequently, the computed gradients reach directly from the JSCC decoder\u2019s input to the JSCC encoder\u2019s output ."
58
+ },
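Under the Gaussian decoder assumption, the variational objective reduces to a weighted sum of a reconstruction MSE and a classification cross-entropy, which a few lines of numpy can make concrete. The weight `lam` and all shapes are illustrative; the straight-through treatment of the non-differentiable sampling step is a property of the backward pass and is only noted in comments here.

```python
import numpy as np

def mse_loss(x, x_hat):
    # Reconstruction term: maximizing the variational bound on the mutual
    # information under the Gaussian decoder assumption reduces to MSE.
    return np.mean((x - x_hat) ** 2)

def ce_loss(logits, label):
    # Task term: cross entropy of the classifier's softmax output
    # (max-subtraction for numerical stability).
    z = logits - np.max(logits)
    logp = z - np.log(np.sum(np.exp(z)))
    return -logp[label]

def total_loss(x, x_hat, logits, label, lam):
    # Weighted sum of the two terms, as in Eq. (25). During backprop the
    # Bernoulli quantization after the encoder is skipped (straight-through),
    # so gradients flow from the decoder input directly to the encoder output.
    return mse_loss(x, x_hat) + lam * ce_loss(logits, label)
```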
59
+ {
60
+ "section_id": "4.4",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-D Performance Enhancement via Warm-up Period",
63
+ "text": "In the initial stage of the training process, the JSCC encoder faces challenges in generating meaningful latent variables due to its randomly initialized weights, which lack informative patterns. If these latent variables are further corrupted under the BSEC models with non-zero bit-flip and bit-erasure probabilities, the encoder may struggle to capture crucial input data features to maximize task performance. To address this challenge, we set the first epochs of the training process as a warm-up period by setting , ensuring error-free transmission of latent variables.\nThen, during this period, the weights of the JSCC encoder and decoder can be properly updated to generate informative latent variables to maximize the task performance without being disrupted by transmission errors.\nOnce the warm-up period ends, we assign non-zero bit-flip and bit-erasure probabilities to the BSEC models according to our sampling strategy in Sec. IV-B ###reference_###.\nIn this subsequent period, the weights of the JSCC encoder and decoder are updated to enhance robustness against transmission errors while continuing to optimize the task performance."
64
+ },
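The warm-up schedule amounts to gating the channel parameters on the epoch index; a minimal sketch (names and values illustrative):

```python
def bsec_params_for_epoch(epoch, warmup_epochs, sample_params):
    # Error-free latent transmission during the warm-up period, then
    # sampled (flip, erase) probabilities for the remaining epochs.
    if epoch < warmup_epochs:
        return (0.0, 0.0)
    return sample_params()

schedule = [bsec_params_for_epoch(e, 3, lambda: (0.1, 0.2)) for e in range(5)]
```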
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Channel-Adaptive Modulation Technique of the Proposed JSCC Approach",
69
+ "text": "In this section, we devise a channel-adaptive modulation technique for an inference phase, which reduces the communication latency of transmission while maintaining task performance at the receiver."
70
+ },
71
+ {
72
+ "section_id": "5.1",
73
+ "parent_section_id": "5",
74
+ "section_name": "BER Analysis for QAM",
75
+ "text": "Our key observation is that training and testing environments are essentially the same if the bit-flip and bit-erasure probabilities of the QAM symbols transmitted during the inference phase are exactly the same as the parameters and of the BSECs set in the training phase. Motivated by this observation, we characterize the bit-flip and bit-erasure probabilities of the QAM symbols as a function of the channel condition and modulation order.\nLet be the BER of the -QAM symbol for a given SNR.\nAccording to our demodulation method in (12 ###reference_###), the bit error occurs when the real or imaginary part of the received signal passes the decision boundary that is close to the adjacent symbol, as illustrated in Fig. 2 ###reference_###.\nThis fact implies that the probability of a typical error event can be expressed as\nBased on the above result and the assumption of equiprobable bit\noutputs from the JSCC encoder, the BER of the -QAM symbol is approximately computed as\nwhere is a factor to reflect the error event at the inner, edge, and corner in the constellation set [40 ###reference_b40###].\nIn a similar manner, the correct-decision probability of the -QAM symbol is also computed as\nBy interpreting the bit-error and correct-decision probabilities as the parameters and , respectively, the bit-flip and bit-correct probabilities of the BSEC are characterized as\nrespectively.\nBy utilizing the fact that , the bit-erasure probability of the BSEC is also determined by\n###figure_3###"
76
+ },
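A standard nearest-neighbour approximation for the BER of square M-QAM at linear SNR captures the structure of the analysis above, with a constellation-geometry prefactor playing the role of the inner/edge/corner factor from [40]. This textbook form is used here as a hedged stand-in for the paper's exact expression.

```python
import math

def qfunc(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_ber(M, snr):
    # Nearest-neighbour BER approximation for square M-QAM at linear SNR;
    # the prefactor counts average nearest neighbours per transmitted bit.
    beta = 4.0 * (1.0 - 1.0 / math.sqrt(M)) / math.log2(M)
    return beta * qfunc(math.sqrt(3.0 * snr / (M - 1)))
```

As expected, the approximation decreases with SNR and increases with the modulation order, matching the intuition behind the adaptive scheme of Sec. V-B.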
77
+ {
78
+ "section_id": "5.2",
79
+ "parent_section_id": "5",
80
+ "section_name": "Channel-Adaptive Modulation",
81
+ "text": "During the training phase, the bit-flip probability has been drawn from , implying that the JSCC encoder and decoder are trained to cover the bit-flip probability up to the robustness level which varies with . Motivated by this fact, our channel-adaptive modulation technique chooses the highest modulation order that maintains the bit-error probability below a certain limit (i.e., ), where is an adjusting factor to compensate for the effect of the BER approximation in (27 ###reference_###).\nFrom the bit-error probability expression in (27 ###reference_###), our criterion for the -QAM symbol is given by\nwhich is equivalent to the following SNR condition:\nThe SNR condition in (32 ###reference_###) implies that if the SNR falls below a certain threshold , the modulation order must be reduced so that the bit-error probability is sufficiently lower than a desired limit .\nThis fact aligns with our intuition because as the SNR decreases, lowering the modulation order becomes necessary to maintain a sufficiently low bit-error probability.\nUtilizing the SNR condition in (32 ###reference_###), the best modulation order to satisfy our criterion among three candidate orders (in this work, we only consider three modulation types, 4-QAM, 16-QAM, and 64-QAM, for simplicity; however, it is straightforward to extend the proposed technique to support higher-order modulation types, such as 256-QAM and 1024-QAM), , is determined as\nwhere\n, , and from (32 ###reference_###). If the transmitter and receiver share the information and as background knowledge, the receiver can compute the thresholds . 
Then, based on (33 ###reference_###), the receiver can determine the modulation orders of the binary latent variables without requiring explicit information exchanges.\nIt should be noted that we properly set the robustness level so that every latent variable is modulated as the -QAM symbol (i.e., ) even in the worst SNR case.\nTo provide more insights about our technique, in Fig. 3 ###reference_###, we illustrate how the modulation orders assigned to the latent variables change as the SNR increases, when employing our technique. In this figure, , , and are set as assumed in the heterogeneous setting described in Sec. VI ###reference_###.\nAs can be seen in Fig. 3 ###reference_###, the modulation types constituting the symbol sequence change in the following order:\nas the SNR increases.\nFig. 3 ###reference_### clearly demonstrates that the better the channel condition, the higher the modulation order chosen by each latent variable.\nIt is also shown that different modulation orders are assigned across the latent variables according to their robustness levels.\nA key feature of our adaptive modulation technique is that it allows digital semantic communication to adapt not only to the instantaneous channel condition during the inference phase but also to the robustness levels of the latent variables chosen during the training phase. Another key feature is that when employing our technique, the modulation orders assigned to the latent variables increase as the SNR increases. This result coincides with our intuition because when the SNR is sufficiently high, the use of high-order modulation improves the overall communication latency of the system while maintaining the bit-error probability to be less than an acceptable level. 
Therefore, the use of our adaptive modulation technique provides flexibility in the average spectral efficiency over a wide range of SNR values, enabling our technique to adaptively minimize the communication latency according to the SNR while maintaining task performance.\nRemark 2 (Comparison with Conventional AMC Technique):\nOur channel-adaptive modulation technique resembles the conventional adaptive modulation and coding (AMC) technique that has been widely adopted in modern wireless standards.\nIn the conventional AMC technique, both modulation order and coding rate are adaptively determined according to the SNR, in order to maximize the spectral efficiency while ensuring a sufficiently low block error rate.\nA key feature that distinguishes our technique from the conventional AMC technique is its ability to assign different modulation orders across data bits (i.e., binary latent variables) even under the same SNR.\nThis flexibility arises from assigning diverse robustness levels to the latent variables during the training phase, enhancing the adaptability of modulation orders, as detailed in Sec. IV-B ###reference_###.\nTherefore, our technique can be viewed as a judicious extension of the conventional AMC technique, strategically designed to minimize the communication latency of the digital semantic communication system while maintaining task performance at the receiver. The performance gain achieved by assigning different modulation orders will be numerically demonstrated in Sec. VI ###reference_###."
82
+ },
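The selection rule (pick the highest order whose approximate BER stays below the scaled robustness level) can be sketched as follows, reusing the textbook M-QAM BER approximation. Here `alpha`, `delta`, and the candidate set mirror the roles of the adjusting factor, the robustness level, and the {4, 16, 64}-QAM candidates in the text; the numbers are illustrative.

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_ber(M, snr):
    # Nearest-neighbour BER approximation for square M-QAM at linear SNR.
    beta = 4.0 * (1.0 - 1.0 / math.sqrt(M)) / math.log2(M)
    return beta * qfunc(math.sqrt(3.0 * snr / (M - 1)))

def select_order(snr, delta, alpha=1.0, candidates=(64, 16, 4)):
    # Highest candidate modulation order whose approximate BER stays
    # below alpha * delta at the given linear SNR.
    for M in candidates:
        if qam_ber(M, snr) <= alpha * delta:
            return M
    return candidates[-1]   # fall back to the most robust order (4-QAM)
```

By sweeping the SNR, one can verify the threshold behavior described in the text: the selected order increases monotonically with the channel quality.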
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "VI Simulation Results",
87
+ "text": "In this section, we demonstrate the superiority of the proposed JSCC approach through simulations using the MNIST [33 ###reference_b33###], Fashion-MNIST [34 ###reference_b34###], CIFAR-10 and CIFAR-100 [35 ###reference_b35###] datasets. The MNIST and Fashion-MNIST datasets consist of 60,000 training images and 10,000 test images. The CIFAR-10 and CIFAR-100 datasets consist of 50,000 training images and 10,000 test images. We normalize the training and test data to have a zero mean and unit variance [41 ###reference_b41###]. The input image size for the JSCC encoder is for MNIST and Fashion-MNIST and for CIFAR-10 and CIFAR-100.\nWhen training the JSCC encoder and decoder, we use the loss defined in (25 ###reference_###) for an image classification task, the MSE loss in (23 ###reference_###) for an image reconstruction task, and the CE loss in (24 ###reference_###) for an image retrieval task. For the loss in (25 ###reference_###), we set . For CIFAR-10, is set to 128 for the image classification and retrieval task and 512 for the image reconstruction task. An Adam optimizer [42 ###reference_b42###] with a learning rate of 0.001 is employed for all the datasets. The batch size is set as . The total number of epochs is for MNIST and Fashion-MNIST, for CIFAR-10, and for CIFAR-100. 
Table I ###reference_### summarizes the neural network architectures for the considered datasets, where DO indicates that a dropout strategy is applied with a probability of , BN stands for batch normalization, LR represents LeakyReLU with a slope of 0.2, and MP denotes max-pooling with kernel size .\nIn our simulations, we consider the following approaches for performance evaluation:\nLayers\nOutput size\n\n\nEncoder\nDense+ReLU\n512\n\n\nDense+ReLU\n256\n\n\nDense+Sigmoid\n\n\n\nDecoder\nDense+ReLU\n256\n\nFashion-MNIST\nDense+ReLU\n512\n\nMNIST/\nDense+Tanh\n784\n\n\nClassifier\nDense+ReLU\n256\n\n\nDense+ReLU\n256\n\n\nDense+ReLU\n256\n\n\nDense+ReLU\n128\n\n\nDense+ReLU+DO(0.5)\n128\n\n\nDense\n10\n\nCIFAR-10\nEncoder\nTable 1 in [43 ###reference_b43###]\n\n\nFlatten\n\n\nDense+Sigmoid\n\n\nDecoder\nDense\n\n\nUnflatten\n\n\nTable 1 in [43 ###reference_b43###]\n\n\nClassifier\n3x3Conv+BN+LR\n\n\n3x3Conv+BN+LR+MP+DO(0.25)\n\n\n3x3Conv+BN+LR\n\n\n3x3Conv+BN+LR+MP+DO(0.25)\n\n\n3x3Conv+BN+LR\n\n\n3x3Conv+BN+LR+MP+DO(0.25)\n\n\nDense+LR+DO(0.25)\n128\n\nDense\n10\n\nCIFAR-100\nEncoder\nTable 2 in [43 ###reference_b43###] + Remove one residual block\n\n\nFlatten\n\n\nDense+Sigmoid\n\n\nRetrieval\nDense\n\n\nUnflatten\n\n\nTable 2 in [43 ###reference_b43###] + Remove one residual block\n\n\nResnet-18 in [44 ###reference_b44###]\n###figure_4### ###figure_5### ###figure_6### Proposed: We consider two variants of the proposed JSCC approach with different demodulation methods: (i) Proposed (BSEC), utilizing a robust demodulation method with , , modeled by the BSECs during training, and (ii) Proposed (BSC), employing conventional hard-output demodulation with , , modeled by the BSCs during training.\nThe number of the epochs for the warm-up period is set as .\nWhen employing fixed modulation, we set , , in Proposed (BSC) and , , in Proposed (BSEC), unless specified otherwise.\nWhen employing our channel-adaptive modulation technique, we consider two scenarios, referred 
to as homogeneous and heterogeneous.\nIn the homogeneous setting, we set , , and set . Note that in this setting, is properly selected to ensure that the higher modulation order is chosen for higher SNR values (i.e., , ).\nIn the heterogeneous setting, we set with and , and also set to satisfy both , , and .\nNECST (Ideal): We consider the neural joint source-channel coding (NECST) approach in [32 ###reference_b32###], which assumes the BSCs with the homogeneous bit-flip probabilities, with an ideal training strategy described below. In this strategy, we consider multiple pairs of JSCC encoders and decoders and then train each pair of the JSCC encoder and decoder for a specific SNR value range, which is of interest during the inference phase.\nDuring the inference phase, every time the system encounters the channel condition with a particular SNR, the system selects the pair of the JSCC encoder and decoder trained with the corresponding SNR. This strategy provides the best performance for the NECST approach in [32 ###reference_b32###], but requires a large number of pairs of the JSCC encoder and decoder trained with various SNR values.\nNECST (Sample Mixing): We also consider the NECST approach in [32 ###reference_b32###] with a sample mixing training strategy described below. In this strategy, we divide the batch into 8 sub-batches of equal size. When training with each sub-batch, we choose one of the various bit-flip probabilities for the BSC modeling. For example: 0.01, 0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35. This approach enables a single pair of the JSCC encoder and decoder to be trained for a wide range of SNR values.\nDeepJSCC: We modify an analog JSCC approach considered in [4 ###reference_b4###] using our JSCC encoder and decoder structure, while training the model through a stochastic approach with . 
In particular, to ensure that the average symbol power matches that of normalized QAM symbols, we scale the output of the JSCC encoder by a factor , ensuring that it takes a value between and , closely resembling an average power of 1.\nDuring the training process in DeepJSCC, the SNR is sampled from a uniform distribution whose minimum and maximum values correspond to the ends of the SNR range of interest during the inference phase. For the image retrieval task, we set the number of epochs to 200 to avoid overfitting.\nDeepJSCC-Q: We modify a digital JSCC approach considered in [27 ###reference_b27###] using our JSCC encoder and decoder structure, while training the model through a stochastic approach with . This approach conducts end-to-end training by incorporating the nonlinearity of digital modulation.\nIn particular, to ensure that the average symbol power matches that of normalized QAM symbols, we remove the sigmoid in the last layer of the encoder and perform power normalization. During the training process in DeepJSCC-Q, the SNR is sampled from a uniform distribution whose minimum and maximum values correspond to the ends of the SNR range of interest during the inference phase. Unlike the other approaches, the learning rate for DeepJSCC-Q is set to 0.00005, as this provides the best performance.\nJPEG+1/2-rate LDPC: We consider a separate source-channel coding approach employing JPEG for source coding and 1/2-rate LDPC for channel coding. For the image classification task, a noise-free classifier is separately trained with a learning rate of 0.00005 and a batch size of .\n###figure_7###"
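The BSC/BSEC training channels used by the approaches above can be sketched in a few lines; this is a minimal illustration assuming NumPy, where an erasure is represented by the neutral value 0.5 (the function name, the 0.5 convention, and the probability arguments are our own, not the paper's exact formulation).

```python
import numpy as np

def bsec_channel(bits, p_flip, p_erase, rng=None):
    """Pass binary latent variables through a binary symmetric erasure
    channel (BSEC): each bit is independently flipped with probability
    p_flip, erased (mapped to the neutral value 0.5) with probability
    p_erase, and delivered unchanged otherwise.  Setting p_erase = 0
    recovers the plain BSC used with hard-output demodulation."""
    rng = rng or np.random.default_rng()
    u = rng.random(bits.shape)
    out = bits.astype(float)          # astype returns a fresh copy
    flip = u < p_flip
    out[flip] = 1.0 - out[flip]
    out[(u >= p_flip) & (u < p_flip + p_erase)] = 0.5
    return out
```

During robust training, such a layer would be applied to the encoder's binary latent vector before it is fed to the JSCC decoder, with (p_flip, p_erase) drawn to match the SNR range of interest.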
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "VI-A Performance Evaluation with Fixed Modulation",
+ "text": "Fig. 4 ###reference_### compares the classification accuracies of various JSCC and non-JSCC approaches with 4-QAM for image classification tasks using the MNIST, Fashion-MNIST, and CIFAR-10 datasets.\nFor JPEG+1/2-rate LDPC, the average bit sequence length is 9282, 10646, and 14966 for the MNIST, Fashion-MNIST, and CIFAR-10 datasets, respectively.\nFig. 4 ###reference_### shows that the proposed approaches achieve a higher classification accuracy than the existing NECST approaches, particularly in the low-SNR regime.\nIn particular, Proposed (BSEC), which uses a single encoder-decoder pair, even outperforms NECST (Ideal), which requires a large number of encoder-decoder pairs. This result validates the effectiveness of our robust training strategy in Sec. IV ###reference_### in improving the robustness of the JSCC encoder and decoder against diverse SNR values. It is also shown that Proposed (BSEC), which adopts the robust demodulation method in Sec. III ###reference_###, outperforms Proposed (BSC), which utilizes the conventional hard-output demodulation. This result demonstrates the superiority of our demodulation method over the conventional method. Although DeepJSCC exhibits the highest classification accuracy in some low-SNR regimes, DeepJSCC is not compatible with practical digital communication systems. In addition, DeepJSCC has a longer communication latency than other JSCC approaches because it requires channel uses to transmit real values.\nProposed (BSEC) also outperforms DeepJSCC-Q in terms of the classification accuracy for all the datasets. This performance gain can be attributed to the utilization of the demodulation process, which maps real-domain equalized signals into ternary values. The statistical modeling and robust training based on demodulation enable the JSCC decoder to effectively manage errors in the transmission of latent variables.
The performance of JPEG+1/2-rate LDPC degrades significantly below a certain SNR threshold, despite requiring a bit sequence length more than 97 times longer than that of the other JSCC approaches, including Proposed (BSEC). This result demonstrates the superiority of the proposed JSCC approach over a traditional separate source-channel coding approach in terms of both communication latency and task performance.\n###figure_8### ###figure_9### Fig. 5 ###reference_### compares the BERs of various JSCC and non-JSCC approaches for the image classification task using the CIFAR-10 dataset. The BER is computed by counting the number of bit-flip errors. The BERs of NECST (Ideal), NECST (Sample Mixing), and Proposed (BSC) are exactly the same because these methods employ the same hard-output demodulation. Proposed (BSEC) outperforms these methods in terms of the BER, which is achieved by employing the proposed robust demodulation. This result demonstrates that the proposed demodulation method effectively mitigates frequent bit-flip errors. Although JPEG+1/2-rate LDPC achieves the lowest BER when dB, it suffers from a significant performance degradation when dB, resulting in decoding failure. Consequently, the recovery of the semantic information at the receiver completely fails, as evidenced by the results in Fig. 4 ###reference_###. These results demonstrate the unsuitability of the conventional non-JSCC approaches for semantic communications in low-SNR regimes.\nIn NECST and Proposed (BSC), conventional hard-output demodulation is performed before JSCC decoding. The computational complexity required for this demodulation is of the order . In contrast, Proposed (BSEC) employs the robust demodulation, whose complexity is of the order . DeepJSCC and DeepJSCC-Q, on the other hand, directly input the equalized received signal to the JSCC decoder, eliminating the need for a separate demodulation step.
These comparisons reveal that Proposed (BSEC) offers performance gains at the expense of additional complexity compared to existing JSCC approaches. However, this additional complexity remains comparable to the typical demodulation process in traditional communication systems and is therefore acceptable for practical implementation.\nFig. 6 ###reference_###(a) compares the peak signal-to-noise ratios (PSNRs) of various JSCC and non-JSCC approaches with 4-QAM for the image reconstruction task using the CIFAR-10 dataset. In this simulation, data normalization is not applied, and the activation function of the final layer of the JSCC decoder is replaced with a sigmoid function. For both Proposed (BSEC) and Proposed (BSC), we set , . For JPEG+1/2-rate LDPC, the average bit sequence length is set as 10906. Fig. 6 ###reference_###(a) shows that Proposed (BSEC) outperforms all JSCC approaches other than DeepJSCC in terms of PSNR. Although DeepJSCC exhibits a better image reconstruction quality than the proposed approaches when dB, DeepJSCC is not compatible with practical digital systems and requires a longer communication latency than other JSCC approaches. Proposed (BSEC) exhibits a significant performance improvement over Proposed (BSC), affirming the efficacy of our robust demodulation method combined with the BSEC modeling approach.\nJPEG+1/2-rate LDPC is inferior to Proposed (BSEC), even though its bit sequence length is approximately 21 times larger than that of the other JSCC approaches. Therefore, our results in Fig. 6 ###reference_###(a) demonstrate the advantage of the JSCC approaches over a traditional separate source-channel coding approach in terms of both communication latency and reconstruction performance, as already observed in Fig. 4 ###reference_###.\nFig. 6 ###reference_###(b) compares the mean average precisions (mAPs) of various JSCC approaches with 4-QAM for the image retrieval task using the CIFAR-100 dataset.
Note that the mAP is a well-known performance metric for measuring image retrieval accuracy [45 ###reference_b45###].\nDuring retrieval, image searching is conducted until three images belonging to the same class as the query image are found. The mAP is calculated for the 20 superclasses in the CIFAR-100 dataset. Fig. 6 ###reference_###(b) shows that Proposed (BSEC) outperforms other digital-based JSCC approaches at . Additionally, at , Proposed (BSEC), which requires only a single encoder-decoder pair, exhibits a marginal difference of only 2.08 compared to NECST (Ideal), which requires a large number of encoder-decoder pairs. Proposed (BSEC) exhibits performance comparable to DeepJSCC at . These results verify the superiority of the proposed approach over the existing JSCC approaches even for the image retrieval task."
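The ternary-output demodulation discussed above can be made concrete with a minimal sketch for a single antipodal bit stream, assuming NumPy; the threshold, the symbol mapping (-1 to bit 0, +1 to bit 1), and the 0.5 erasure label are illustrative assumptions, not the paper's exact decision rule.

```python
import numpy as np

def ternary_demod(y, threshold):
    """Map equalized real observations to ternary bit decisions.
    Observations whose magnitude falls below `threshold` are declared
    erasures (0.5) rather than being forced to a hard 0/1 decision,
    which is the intuition behind BSEC-style robust demodulation."""
    out = np.where(y > 0.0, 1.0, 0.0)   # hard decision first
    out[np.abs(y) < threshold] = 0.5    # unreliable region -> erasure
    return out
```

Setting the threshold to zero recovers the conventional hard-output demodulation used by NECST and Proposed (BSC), which explains why their BERs coincide.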
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "VI-B Performance Evaluation with Channel-Adaptive Modulation",
+ "text": "###figure_10### ###figure_11### ###figure_12### Fig. 7 ###reference_### compares the classification accuracies of the proposed JSCC approach with the channel-adaptive modulation and the NECST approach with a fixed modulation type for an image classification task on the MNIST dataset.\nFig. 7 ###reference_### shows that Proposed (BSEC) with the heterogeneous setting outperforms NECST (Ideal) with a fixed modulation type over the entire SNR regime.\nThese results demonstrate the effectiveness of the channel-adaptive modulation technique, which enables the adaptive selection of modulation orders based on varying SNR and robustness levels.\nFig. 7 ###reference_### also shows that Proposed (BSEC) with the homogeneous setting suffers from performance loss at higher modulation orders.\nThis performance degradation arises because the same modulation order is used across all latent variables, which occurs when the same robustness level is employed, as indicated in (33 ###reference_###).\nFor example, when , Proposed (BSEC) with the homogeneous setting chooses 16-QAM, resulting in inferior performance compared to the NECST approach with 4-QAM. A similar result is observed when , where Proposed (BSEC) with the homogeneous setting chooses 64-QAM.\nOur results clearly highlight that assigning different robustness levels across the latent variables is essential for the proposed JSCC approach not only to ensure flexibility in selecting modulation orders, but also to maximize task performance.\nNevertheless, the proposed approach with the homogeneous setting still outperforms NECST (Ideal) when both approaches utilize the same modulation type.\nFig. 8 ###reference_### illustrates the classification accuracy and spectral efficiency of the proposed JSCC approach with and without the channel-adaptive modulation for an image classification task on the MNIST dataset.\nIn this simulation, we adopt the heterogeneous setting for the proposed approach.\nFig.
8 ###reference_###(a) shows that the classification accuracy of Proposed (BSEC) with a fixed modulation decreases with the modulation order, particularly in the low SNR regime.\nThis is because the bit-error probability decreases as the modulation order decreases, leading to the fundamental trade-off between the classification accuracy and spectral efficiency.\nAlthough this trade-off is inevitable, the classification accuracy of Proposed (BSEC) with the channel-adaptive modulation is consistently close to the best classification accuracy achieved by the 4-QAM case (see Fig. 8 ###reference_###(a)), while improving the spectral efficiency as the SNR increases (see Fig. 8 ###reference_###(b)).\nFor example, when dB, Proposed (BSEC) with the adaptive modulation provides a two-times higher spectral efficiency than the 4-QAM case, while achieving almost the same classification accuracy.\nTherefore, our results validate the effectiveness of our channel-adaptive modulation technique in reducing the communication latency while maintaining task performance at the receiver.\n###table_1### Table II ###reference_### shows the average classification accuracy and spectral efficiency of the proposed JSCC approach for image classification tasks on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. For CIFAR-10, we set to be 396. To account for the variability introduced by channel randomness, the performance metrics are averaged over multiple random realizations of the channel coefficients drawn from a uniform distribution, given by with . 
Each coefficient is assumed to remain constant only during the transmission of 10 images.\nIf , the channel capacity is computed according to the Shannon-Hartley theorem [46 ###reference_b46###, 47 ###reference_b47###]:\nUtilizing this fact, in Table II ###reference_###, we also provide the channel capacity (CC) computed using (34 ###reference_###) as a performance baseline.\nTable II ###reference_### shows that the average classification accuracy achieved with our channel-adaptive modulation technique is almost the same as that achieved with 4-QAM, while providing an average spectral efficiency close to that of 16-QAM for all the datasets. These results clearly demonstrate the advantages of our modulation technique over a fixed modulation in providing a good latency-performance trade-off for digital semantic communications.\nTable II ###reference_### also shows that the latent variables can be transmitted at a rate times faster than the CC for both the MNIST and Fashion-MNIST datasets and times faster than the CC for the CIFAR-10 dataset. These results imply that, from a semantic perspective, the proposed JSCC approach can maintain sufficient task performance even when its transmission rate exceeds the theoretical capacity bound determined by the Shannon theorem."
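Since equation (34) is elided in this excerpt, the comparison against the channel capacity can be illustrated with the standard AWGN form of the Shannon-Hartley result, C = log2(1 + SNR) bits per channel use; the sketch below uses our own function names, and the paper's fading-channel expression may differ.

```python
import math

def awgn_capacity(snr_db):
    """Shannon capacity of an AWGN channel in bits per channel use
    (spectral efficiency): C = log2(1 + SNR), SNR in linear scale."""
    return math.log2(1.0 + 10.0 ** (snr_db / 10.0))

def rate_vs_capacity(bits_per_symbol, snr_db):
    """Ratio of a modulation's raw spectral efficiency (e.g. 2 for
    4-QAM, 4 for 16-QAM, 6 for 64-QAM) to the Shannon capacity at the
    given SNR; a ratio above 1 means the latent variables are pushed
    through faster than the capacity bound would allow for reliable
    bit-level transmission."""
    return bits_per_symbol / awgn_capacity(snr_db)
```

For instance, at 0 dB the capacity is exactly 1 bit per channel use, so even 4-QAM already transmits at twice the capacity, matching the "faster than the CC" observation in Table II.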
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "VII Conclusion",
+ "text": "In this paper, we have proposed a novel JSCC approach for enabling channel-adaptive digital semantic communications. To this end, we have first developed a robust demodulation method to prevent frequent bit-flip errors of the binary latent variables while enhancing the expressiveness of a demodulation output. We have then developed a robust training strategy which not only facilitates end-to-end training of the JSCC encoder and decoder but also enhances their robustness and flexibility against diverse channel conditions and modulation orders. We have also devised a channel-adaptive modulation technique that can reduce the communication latency for transmission while maintaining task performance. Using simulations, we have demonstrated that the proposed approach outperforms the existing JSCC approaches in terms of communication latency and task performance.\nAn important direction of future research is to further optimize the performance of the proposed approach through the development of advanced designs for modulation schemes, quantization methods, and loss functions.\nAnother promising research direction is to extend the proposed JSCC approach to accommodate soft-output demodulation for JSCC-based semantic communications. Investigating an extension of the proposed approach to support multi-task multi-user semantic communications, where multiple devices engage in different tasks concurrently, would also be an important direction for future research."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>The model structure for image reconstruction, image classification and image retrieval tasks on MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100 datasets.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T1.20\" style=\"width:195.1pt;height:342.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-76.6pt,134.6pt) scale(0.56014,0.56014) ;\">\n<p class=\"ltx_p\" id=\"S6.T1.20.20\"><span class=\"ltx_text\" id=\"S6.T1.20.20.20\" style=\"font-size:80%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S6.T1.20.20.20.20\" style=\"width:348.4pt;height:612pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S6.T1.20.20.20.20.20\"><span class=\"ltx_text\" id=\"S6.T1.20.20.20.20.20.20\" style=\"color:#000000;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S6.T1.20.20.20.20.20.20.20\">\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.21\">\n<span class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t ltx_colspan ltx_colspan_2\" id=\"S6.T1.20.20.20.20.20.20.20.21.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.20.20.20.20.20.20.20.21.2\" style=\"padding:0.8pt 3.0pt;\">Layers</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.20.20.20.20.20.20.20.21.3\" style=\"padding:0.8pt 3.0pt;\">Output size</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.22\">\n<span class=\"ltx_td ltx_border_l ltx_border_r ltx_border_tt\" id=\"S6.T1.20.20.20.20.20.20.20.22.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_3\" 
id=\"S6.T1.20.20.20.20.20.20.20.22.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.20.20.20.20.20.20.20.22.2.1\">Encoder</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T1.20.20.20.20.20.20.20.22.3\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T1.20.20.20.20.20.20.20.22.4\" style=\"padding:0.8pt 3.0pt;\">512</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.23\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.23.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.23.2\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.23.3\" style=\"padding:0.8pt 3.0pt;\">256</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.1.1.1.1.1.1.1.1.2\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.1.1.1.1.1.1.1.1.3\" style=\"padding:0.8pt 3.0pt;\">Dense+Sigmoid</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.1.1.1.1.1.1.1.1.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.24\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.24.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_3\" id=\"S6.T1.20.20.20.20.20.20.20.24.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.20.20.20.20.20.20.20.24.2.1\">Decoder</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.20.20.20.20.20.20.20.24.3\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.20.20.20.20.20.20.20.24.4\" style=\"padding:0.8pt 3.0pt;\">256</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.25\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.25.1\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.20.20.20.20.20.20.20.25.1.1\">Fashion-MNIST</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.25.2\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.25.3\" style=\"padding:0.8pt 3.0pt;\">512</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.26\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.26.1\" style=\"padding:0.8pt 3.0pt;\">MNIST/</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.26.2\" style=\"padding:0.8pt 3.0pt;\">Dense+Tanh</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.26.3\" style=\"padding:0.8pt 3.0pt;\">784</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.27\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.27.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_6\" id=\"S6.T1.20.20.20.20.20.20.20.27.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.20.20.20.20.20.20.20.27.2.1\">Classifier</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.20.20.20.20.20.20.20.27.3\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.20.20.20.20.20.20.20.27.4\" style=\"padding:0.8pt 3.0pt;\">256</span></span>\n<span class=\"ltx_tr\" 
id=\"S6.T1.20.20.20.20.20.20.20.28\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.28.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.28.2\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.28.3\" style=\"padding:0.8pt 3.0pt;\">256</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.29\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.29.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.29.2\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.29.3\" style=\"padding:0.8pt 3.0pt;\">256</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.30\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.30.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.30.2\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.30.3\" style=\"padding:0.8pt 3.0pt;\">128</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.31\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.31.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.31.2\" style=\"padding:0.8pt 3.0pt;\">Dense+ReLU+DO(0.5)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.31.3\" style=\"padding:0.8pt 3.0pt;\">128</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.32\">\n<span class=\"ltx_td ltx_border_l ltx_border_r\" 
id=\"S6.T1.20.20.20.20.20.20.20.32.1\" style=\"padding:0.8pt 3.0pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.32.2\" style=\"padding:0.8pt 3.0pt;\">Dense</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.32.3\" style=\"padding:0.8pt 3.0pt;\">10</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.2.2.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_14\" id=\"S6.T1.2.2.2.2.2.2.2.2.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.2.2.2.2.2.2.2.2.2.1\">CIFAR-10</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_3\" id=\"S6.T1.2.2.2.2.2.2.2.2.3\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.2.2.2.2.2.2.2.2.3.1\">Encoder</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T1.2.2.2.2.2.2.2.2.4\" style=\"padding:0.8pt 3.0pt;\">Table 1 in <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.08146v2#bib.bib43\" title=\"\">43 ###reference_b43###</a>]</cite></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T1.2.2.2.2.2.2.2.2.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.3.3.3.3.3.3.3.3\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.3.3.3.3.3.3.3.3.2\" style=\"padding:0.8pt 3.0pt;\">Flatten</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.3.3.3.3.3.3.3.3.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.4.4.4.4.4.4.4.4\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.4.4.4.4.4.4.4.4.2\" style=\"padding:0.8pt 3.0pt;\">Dense+Sigmoid</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.4.4.4.4.4.4.4.4.1\" style=\"padding:0.8pt 
3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.5.5.5.5.5.5.5.5\">\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_3\" id=\"S6.T1.5.5.5.5.5.5.5.5.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.5.5.5.5.5.5.5.5.2.1\">Decoder</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.5.5.5.5.5.5.5.5.3\" style=\"padding:0.8pt 3.0pt;\">Dense</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.5.5.5.5.5.5.5.5.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.6.6.6.6.6.6.6.6\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.6.6.6.6.6.6.6.6.2\" style=\"padding:0.8pt 3.0pt;\">Unflatten</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.6.6.6.6.6.6.6.6.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.7.7.7.7.7.7.7.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.7.7.7.7.7.7.7.7.2\" style=\"padding:0.8pt 3.0pt;\">Table 1 in <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.08146v2#bib.bib43\" title=\"\">43 ###reference_b43###</a>]</cite></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.7.7.7.7.7.7.7.7.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.8.8.8.8.8.8.8.8\">\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_8\" id=\"S6.T1.8.8.8.8.8.8.8.8.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.8.8.8.8.8.8.8.8.2.1\">Classifier</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.8.8.8.8.8.8.8.8.3\" style=\"padding:0.8pt 3.0pt;\">3x3Conv+BN+LR</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.8.8.8.8.8.8.8.8.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span 
class=\"ltx_tr\" id=\"S6.T1.9.9.9.9.9.9.9.9\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.9.9.9.9.9.9.9.9.2\" style=\"padding:0.8pt 3.0pt;\">3x3Conv+BN+LR+MP+DO(0.25)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.9.9.9.9.9.9.9.9.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.10.10.10.10.10.10.10.10\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.10.10.10.10.10.10.10.10.2\" style=\"padding:0.8pt 3.0pt;\">3x3Conv+BN+LR</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.10.10.10.10.10.10.10.10.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.11.11.11.11.11.11.11.11\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.11.11.11.11.11.11.11.11.2\" style=\"padding:0.8pt 3.0pt;\">3x3Conv+BN+LR+MP+DO(0.25)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.11.11.11.11.11.11.11.11.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.12.12.12.12.12.12.12.12\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.12.12.12.12.12.12.12.12.2\" style=\"padding:0.8pt 3.0pt;\">3x3Conv+BN+LR</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.12.12.12.12.12.12.12.12.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.13.13.13.13.13.13.13.13\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.13.13.13.13.13.13.13.13.2\" style=\"padding:0.8pt 3.0pt;\">3x3Conv+BN+LR+MP+DO(0.25)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.13.13.13.13.13.13.13.13.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.33\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.33.1\" style=\"padding:0.8pt 3.0pt;\">Dense+LR+DO(0.25)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S6.T1.20.20.20.20.20.20.20.33.2\" style=\"padding:0.8pt 3.0pt;\">128</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.34\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.34.1\" style=\"padding:0.8pt 3.0pt;\">Dense</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.34.2\" style=\"padding:0.8pt 3.0pt;\">10</span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.14.14.14.14.14.14.14.14\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_7\" id=\"S6.T1.14.14.14.14.14.14.14.14.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.14.14.14.14.14.14.14.14.2.1\">CIFAR-100</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_rowspan ltx_rowspan_3\" id=\"S6.T1.14.14.14.14.14.14.14.14.3\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.14.14.14.14.14.14.14.14.3.1\">Encoder</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T1.14.14.14.14.14.14.14.14.4\" style=\"padding:0.8pt 3.0pt;\">Table 2 in <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.08146v2#bib.bib43\" title=\"\">43 ###reference_b43###</a>]</cite> + Remove one residual block</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T1.14.14.14.14.14.14.14.14.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.15.15.15.15.15.15.15.15\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.15.15.15.15.15.15.15.15.2\" style=\"padding:0.8pt 3.0pt;\">Flatten</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.15.15.15.15.15.15.15.15.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.16.16.16.16.16.16.16.16\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S6.T1.16.16.16.16.16.16.16.16.2\" style=\"padding:0.8pt 3.0pt;\">Dense+Sigmoid</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.16.16.16.16.16.16.16.16.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.17.17.17.17.17.17.17.17\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t ltx_rowspan ltx_rowspan_4\" id=\"S6.T1.17.17.17.17.17.17.17.17.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T1.17.17.17.17.17.17.17.17.2.1\">Retrieval</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.17.17.17.17.17.17.17.17.3\" style=\"padding:0.8pt 3.0pt;\">Dense</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.17.17.17.17.17.17.17.17.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.18.18.18.18.18.18.18.18\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.18.18.18.18.18.18.18.18.2\" style=\"padding:0.8pt 3.0pt;\">Unflatten</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.18.18.18.18.18.18.18.18.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.19.19.19.19.19.19.19.19\">\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.19.19.19.19.19.19.19.19.2\" style=\"padding:0.8pt 3.0pt;\">Table 2 in <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.08146v2#bib.bib43\" title=\"\">43 ###reference_b43###</a>]</cite> + Remove one residual block</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.19.19.19.19.19.19.19.19.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S6.T1.20.20.20.20.20.20.20.20\">\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.20.2\" style=\"padding:0.8pt 3.0pt;\">Resnet-18 in <cite class=\"ltx_cite ltx_citemacro_cite\">[<a 
class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.08146v2#bib.bib44\" title=\"\">44 ###reference_b44###</a>]</cite></span>\n<span class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T1.20.20.20.20.20.20.20.20.1\" style=\"padding:0.8pt 3.0pt;\"></span></span>\n</span> </span></span>\n</span></span></span></p>\n</span></div>\n</figure>",
112
+ "capture": "TABLE I: The model structure for image reconstruction, image classification and image retrieval tasks on MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100 datasets."
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Comparison of the average classification accuracy and spectral efficiency of the proposed JSCC approach for image classification tasks on the MNIST, Fashion-MNIST, and CIFAR-10 datasets.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T2.27\">\n<tr class=\"ltx_tr\" id=\"S6.T2.27.28\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S6.T2.27.28.1\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.27.28.1.1\" style=\"font-size:80%;\">Modulation</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.27.28.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.27.28.2.1\" style=\"font-size:80%;\">Adaptive</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.27.28.3\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.27.28.3.1\" style=\"font-size:80%;\">4-QAM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.27.28.4\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.27.28.4.1\" style=\"font-size:80%;\">16-QAM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.27.28.5\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.27.28.5.1\" style=\"font-size:80%;\">64-QAM</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_tt\" id=\"S6.T2.4.4.5\" rowspan=\"3\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.4.4.5.1\" style=\"font-size:80%;\">MNIST</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.6\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" 
id=\"S6.T2.4.4.6.1\" style=\"font-size:80%;\">Acc</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.1.1.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.1.1.1.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.2.2.2\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.2.2.2.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.3.3.3\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.3.3.3.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.4\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.4.4.4.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.8.8.5\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.8.8.5.1\" style=\"font-size:80%;\">SE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.5.5.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.5.5.1.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.6.6.2\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.6.6.2.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.7.7.3\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.7.7.3.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.8.8.4\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.8.8.4.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S6.T2.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.9.9.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.9.9.2.1\" style=\"font-size:80%;\">CC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S6.T2.9.9.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.9.9.1.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_tt\" id=\"S6.T2.13.13.5\" rowspan=\"3\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.13.13.5.1\" style=\"font-size:80%;\">Fashion-MNIST</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S6.T2.13.13.6\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.13.13.6.1\" style=\"font-size:80%;\">Acc</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.10.10.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.10.10.1.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.11.11.2\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.11.11.2.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.12.12.3\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.12.12.3.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.13.13.4\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.13.13.4.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.17.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.17.17.5\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" 
id=\"S6.T2.17.17.5.1\" style=\"font-size:80%;\">SE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.14.14.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.14.14.1.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.15.15.2\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.15.15.2.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.16.16.3\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.16.16.3.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.17.17.4\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.17.17.4.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.18.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.18.18.2\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.18.18.2.1\" style=\"font-size:80%;\">CC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S6.T2.18.18.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.18.18.1.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.22.22\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_tt\" id=\"S6.T2.22.22.5\" rowspan=\"3\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.22.22.5.1\" style=\"font-size:80%;\">CIFAR-10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S6.T2.22.22.6\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.22.22.6.1\" style=\"font-size:80%;\">Acc</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.19.19.1\" style=\"padding:0.8pt 
3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.19.19.1.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.20.20.2\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.20.20.2.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.21.21.3\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.21.21.3.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S6.T2.22.22.4\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.22.22.4.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.26.26\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.26.26.5\" style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.26.26.5.1\" style=\"font-size:80%;\">SE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.23.23.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.23.23.1.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.24.24.2\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.24.24.2.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.25.25.3\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.25.25.3.1\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T2.26.26.4\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.26.26.4.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.27.27\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T2.27.27.2\" 
style=\"padding:0.8pt 3.0pt;\"><span class=\"ltx_text\" id=\"S6.T2.27.27.2.1\" style=\"font-size:80%;\">CC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S6.T2.27.27.1\" style=\"padding:0.8pt 3.0pt;\">\n<span class=\"ltx_text\" id=\"S6.T2.27.27.1.1\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n</table>\n</figure>",
116
+ "capture": "TABLE II: Comparison of the average classification accuracy and spectral efficiency of the proposed JSCC approach for image classification tasks on the MNIST, Fashion-MNIST, and CIFAR-10 datasets."
117
+ }
118
+ },
119
+ "image_paths": {
120
+ "1": {
121
+ "figure_path": "2311.08146v2_figure_1.png",
122
+ "caption": "Figure 1: Illustration of the digital semantic communication system with the BSEC modeling considered in our work.",
123
+ "url": "http://arxiv.org/html/2311.08146v2/x1.png"
124
+ },
125
+ "2": {
126
+ "figure_path": "2311.08146v2_figure_2.png",
127
+ "caption": "Figure 2: Visualization of the decision boundaries associated with the second bit of a 16-QAM symbol.",
128
+ "url": "http://arxiv.org/html/2311.08146v2/x2.png"
129
+ },
130
+ "3": {
131
+ "figure_path": "2311.08146v2_figure_3.png",
132
+ "caption": "Figure 3: Changes in the modulation orders assigned to the latent variables for various SNRs when employing our channel-adaptive modulation technique.",
133
+ "url": "http://arxiv.org/html/2311.08146v2/x3.png"
134
+ },
135
+ "4(a)": {
136
+ "figure_path": "2311.08146v2_figure_4(a).png",
137
+ "caption": "Figure 4: Comparison of the classification accuracies of various JSCC and non-JSCC approaches with 4-QAM for image classification tasks using the MNIST, Fashion-MNIST, and CIFAR-10 datasets.",
138
+ "url": "http://arxiv.org/html/2311.08146v2/x4.png"
139
+ },
140
+ "4(b)": {
141
+ "figure_path": "2311.08146v2_figure_4(b).png",
142
+ "caption": "Figure 4: Comparison of the classification accuracies of various JSCC and non-JSCC approaches with 4-QAM for image classification tasks using the MNIST, Fashion-MNIST, and CIFAR-10 datasets.",
143
+ "url": "http://arxiv.org/html/2311.08146v2/x5.png"
144
+ },
145
+ "4(c)": {
146
+ "figure_path": "2311.08146v2_figure_4(c).png",
147
+ "caption": "Figure 4: Comparison of the classification accuracies of various JSCC and non-JSCC approaches with 4-QAM for image classification tasks using the MNIST, Fashion-MNIST, and CIFAR-10 datasets.",
148
+ "url": "http://arxiv.org/html/2311.08146v2/x6.png"
149
+ },
150
+ "5": {
151
+ "figure_path": "2311.08146v2_figure_5.png",
152
+ "caption": "Figure 5: Comparison of the BERs of various JSCC and non-JSCC approaches with 4-QAM for the image classification task using the CIFAR-10 dataset.",
153
+ "url": "http://arxiv.org/html/2311.08146v2/x7.png"
154
+ },
155
+ "6(a)": {
156
+ "figure_path": "2311.08146v2_figure_6(a).png",
157
+ "caption": "Figure 6: \nComparison of the PSNRs and mAPs of various approaches with 4-QAM for the image reconstruction and retrieval tasks using the CIFAR-10 and CIFAR-100 datasets.",
158
+ "url": "http://arxiv.org/html/2311.08146v2/x8.png"
159
+ },
160
+ "6(b)": {
161
+ "figure_path": "2311.08146v2_figure_6(b).png",
162
+ "caption": "Figure 6: \nComparison of the PSNRs and mAPs of various approaches with 4-QAM for the image reconstruction and retrieval tasks using the CIFAR-10 and CIFAR-100 datasets.",
163
+ "url": "http://arxiv.org/html/2311.08146v2/x9.png"
164
+ },
165
+ "7": {
166
+ "figure_path": "2311.08146v2_figure_7.png",
167
+ "caption": "Figure 7: Comparison of the classification accuracies of the proposed JSCC approach with the channel-adaptive modulation and the NECST approach with a fixed modulation for an image classification task on the MNIST dataset.",
168
+ "url": "http://arxiv.org/html/2311.08146v2/x10.png"
169
+ },
170
+ "8(a)": {
171
+ "figure_path": "2311.08146v2_figure_8(a).png",
172
+ "caption": "Figure 8: \nComparison of the classification accuracy and spectral efficiency of the proposed JSCC approach with and without the channel-adaptive modulation for an image classification task on the MNIST dataset.",
173
+ "url": "http://arxiv.org/html/2311.08146v2/x11.png"
174
+ },
175
+ "8(b)": {
176
+ "figure_path": "2311.08146v2_figure_8(b).png",
177
+ "caption": "Figure 8: \nComparison of the classification accuracy and spectral efficiency of the proposed JSCC approach with and without the channel-adaptive modulation for an image classification task on the MNIST dataset.",
178
+ "url": "http://arxiv.org/html/2311.08146v2/x12.png"
179
+ }
180
+ },
181
+ "validation": true,
182
+ "references": [],
183
+ "url": "http://arxiv.org/html/2311.08146v2"
184
+ }
20240318/2311.18605v3.json ADDED
The diff for this file is too large to render.
 
20240318/2312.09094v2.json ADDED
@@ -0,0 +1,471 @@
1
+ {
2
+ "title": "Hopf Arborescent Links, Minor Theory, and Decidability of the Genus Defect",
3
+ "abstract": "While the problem of computing the genus of a knot is now fairly well understood, no algorithm is known for its four-dimensional variants, both in the smooth and in the topological locally flat category.\nIn this article, we investigate a class of knots and links called Hopf arborescent links, which are obtained as the boundaries of some iterated plumbings of Hopf bands.\nWe show that for such links, computing the genus defects, which measure how much the four-dimensional genera differ from the classical genus, is decidable.\nOur proof is non-constructive, and is obtained by proving that Seifert surfaces of Hopf arborescent links under a relation of minors defined by containment of their Seifert surfaces form a well-quasi-order.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A (tame) knot is a polygonal embedding of the circle into , or equivalently, , and a link is a disjoint union of knots. Knot theory is both an old and very active mathematical field, yet from an algorithmic perspective, many problems arising naturally in knot theory are still shrouded in mystery. This can be illustrated with arguably the most fundamental algorithmic question in knot theory: in the Knot Equivalence problem, we are given two knots and and are tasked with deciding whether they are equivalent, that is, whether one can deform one into the other continuously without creating self-intersections. The best algorithm for this problem, due to Kuperberg, is elementary recursive [25 ###reference_b25###], yet the problem is not even known to be NP-hard (see for example [27 ###reference_b27###, Conclusion]). We refer to Lackenby [28 ###reference_b28###] for a survey on algorithmic problems in knot theory.\nGiven how seemingly hard testing the equivalence of knots is, a huge body of research has been devoted to designing and studying knot invariants in order to tell them apart. A classical invariant of a knot is its genus: this is the smallest possible genus of an embedded oriented surface, called a Seifert surface, having the knot as its boundary. Computing the genus of a knot turns out to be significantly more tractable: celebrated works of Hass, Lagarias and Pippenger [21 ###reference_b21###] and Agol, Hass and Thurston [2 ###reference_b2###], building on the normal surface theory of Haken [20 ###reference_b20###], have shown that deciding if a knot has genus at most is in NP, while Lackenby has proved that it is also in co-NP [29 ###reference_b29###]. 
These algorithms also run well in practice within the software Regina [10 ###reference_b10###].\nThere are, however, different notions of genus that are much less understood: considering as the boundary of the -dimensional ball , the -genus of a knot is roughly the smallest possible genus of a surface in having the knot as its boundary.\nThis comes in two flavours that are known not to be equivalent: the topological locally flat 4-genus and the smooth 4-genus, depending on the regularity of the surface.\nWe refer to the preliminaries for precise definitions.\nA knot is (topologically or smoothly) slice if it bounds a disc in , i.e., has -genus zero.\nOne of the motivations for the study of such -dimensional invariants comes from algebraic geometry, as such surfaces arise naturally around singularities of algebraic curves in [23 ###reference_b23###, 42 ###reference_b42###].\nAnother motivation is the slice-ribbon conjecture [14 ###reference_b14###], which states that a knot is smoothly slice if and only if it is ribbon, i.e., it bounds an immersed disc with only ribbon-type singularities in .\nUnfortunately, no algorithmic framework at all is known to attack topological problems in 4-dimensional topology, and indeed many of these problems are known to be undecidable, e.g., the homeomorphism of -manifolds [31 ###reference_b31###]. For some other problems, the decidability is a well-known open problem: this is the case for -sphere recognition [48 ###reference_b48###] or embeddability of -dimensional complexes in [32 ###reference_b32###].\nSimilarly, no algorithm is known to decide the -genus of a knot or even to decide whether it is slice. To illustrate how hard these problems are, it is only in a recent breakthrough of Piccirillo [40 ###reference_b40###] that it was proved that the Conway knot is not smoothly slice, although it only has 11 crossings. 
From the perspective of lower bounds, recent work of de Mesmay, Rieck, Sedgwick and Tancer [12 ###reference_b12###] has proved that an analogue of the -genus for links, the -ball Euler characteristic, is NP-hard to compute, but it is also not known to be decidable."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Hopf arborescent links",
21
+ "text": "Arborescent links are a class of knots and links originally defined and studied by Conway (Conway called them algebraic links, but this denomination is now more commonly used for the links that come from algebraic curves in ).\nThis class has received much attention from knot theorists [8 ###reference_b8###, 17 ###reference_b17###, 44 ###reference_b44###].\nIn this paper we study a subclass that we call Hopf arborescent links."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Hopf plumbing",
27
+ "text": "The links that we investigate in this paper are boundaries of surfaces which are defined iteratively from Hopf bands using an operation called plumbing.\nLet be a Hopf band and be an oriented surface with boundary, and let us assume that they are unlinked, that is, that there exists a sphere in separating them. To plumb on , pick an arc on whose endpoints lie on and which is not boundary parallel (i.e., is not isotopic, relative to its endpoints, to an arc in ). Let be a small neighbourhood of in that we see as a rectangle with two sides on and two sides in the interior of . Isotope within so that it intersects exactly on , see Figure 3 ###reference_###, left.\nThen, define similarly a neighbourhood of the unique (up to isotopy) non boundary parallel arc in with endpoints in .\nThe orientations of and induce an orientation of the normal direction to the surface (so that concatenating the orientation of the surface with the positive normal direction gives a positive basis in ).\nFinally, isotope within its component of , so that and are identified on in a way that the sides of that are on are matched with the sides of that are not on and the orientations of both rectangles match.\nThe resulting surface is said to be obtained from by Hopf plumbing on top of along , see Figure 3 ###reference_###.\nHopf plumbing is a special case of a more general operation called a Murasugi sum, see [37 ###reference_b37###, 39 ###reference_b39###].\nA key property of Murasugi sums, proved by Gabai [16 ###reference_b16###], is that they preserve fibredness.\nIn the above setting, since Hopf bands are fibred, if is fibred, then the surface obtained from by Hopf plumbing on along any arc is also fibred."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "From plane trees to Hopf arborescent surfaces and links",
33
+ "text": "Recall that in this article, a plane tree is a rooted tree that is embedded in the plane and such that every vertex has a label or .\nIf is a vertex, we denote by its label.\nLet be a plane tree.\nThe associated surface we construct is an oriented surface with boundary that retracts on the union of a finite set of oriented simple curves parametrized by the vertices of , such that every is the core of a Hopf band embedded on and whose sign is the label of the corresponding vertex in .\nFor a vertex in , the curve intersects another curve if and only if is an edge of , and the two curves intersect exactly once.\nMoreover, following with its given orientation, the cyclic ordering of the intersection points with the curves coincides with the cyclic ordering of the neighbours of in the plane tree .\nWe now describe the construction inductively, see Figure 4 ###reference_### for an illustration.\nStart from a Hopf band where is the root of , and whose sign is the label .\nFor the induction step, assume that the tree is obtained from by adding at a leaf a finite number of leaves appearing in the plane in this order around , and that the surface is already constructed with its set of core curves .\nSince is a leaf in , the curve intersects only one curve : the curve associated to , the parent of in .\nStarting from this intersection point, we place points on in this order. 
Then we draw on a family of arcs from to itself that correspond to those arcs that retracts on .\nEach such arc intersects the collection exactly at the point .\nFor , perform the Hopf plumbing of a Hopf band of sign on top of along the arc .\nThe resulting surface is .\nFinally, for every orient the core of so that when going from to , we follow this rule: if is positive, one turns to the left (with respect to the orientation of ), and if is negative, one turns to the right (once again with respect to the orientation of ), see Figure 5 ###reference_###.\nThe set is the union of with .\nA Hopf arborescent surface is a surface obtained from a plane tree by this construction, see Figure 4 ###reference_### for an example.\nA Hopf arborescent link is the boundary of a Hopf arborescent surface.\nSince Hopf bands are fibred and this property is preserved under plumbing, Hopf arborescent surfaces are fibres for their boundaries, and are thus of minimal (classical) genus.\nThe arbitrary-looking rule that we use to orient the cores of the Hopf bands in Step 2d ###reference_i2.I1.i4### is new and will turn out to be key for our proofs of Theorem 1.2 ###reference_theorem2### and Proposition 3.2 ###reference_theorem2###."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Connections to other classes of knots",
39
+ "text": "Here, we provide some additional background on Hopf arborescent links, their plumbing structure and their relations to other classes (arborescent links and fibred links).\nIn particular, one may think that considering rooted trees and always plumbing the new Hopf bands on top of the surface is a strong restriction.\nWe will show that this is not the case, i.e., that the family of surfaces and links obtained with less restriction on the sides on which one plumbs is the same as the family considered here."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "Minors on surfaces, links, and plane trees",
45
+ "text": "Since we focus our investigation on Hopf arborescent links, we define a stronger notion of minor that is well-tailored to these links. We say that a Hopf arborescent link is a link-minor of if there exist and , two labelled plane trees such that and are canonical Seifert surfaces of and respectively, and . The main result of this section is the following one, establishing that if is a link-minor of then the Seifert surface of is a surface-minor of the Seifert surface of .\nLet and be two plane trees such that admits a homeomorphic embedding into . Then the Hopf arborescent surface is an incompressible subsurface of .\nThe proof relies on Lemmas 3.4 ###reference_theorem4###, 3.5 ###reference_theorem5###, and 3.6 ###reference_theorem6###, which correspond respectively to the operations (i), (ii), and (iv) defining homeomorphic embeddings of trees. We first prove:\nLet be a surface and be an arc that is not boundary-parallel in with both extremities in . Then cutting along yields a surface such that .\nBy definition of , there is a natural map that is injective except on . Let be a tubular neighbourhood of in . Its boundary can be decomposed into , two arcs isotopic to in and two open arcs of . Isotope within so that , , and . 
It follows that is a subsurface of such that is not an open disc since is not boundary-parallel.\n\u220e\nLemma 3.3 ###reference_theorem3### essentially states that our surfaces behave well with respect to the surface-minor relation when cut along any essential arc.\nAn important point is that cutting along an arc that is the diagonal of a plumbing rectangle merges two bands into one new band with two extra crossings that are either negative or positive depending on the diagonal, see Figure 9 ###reference_###.\nSo, cutting the plumbing of two positive Hopf bands along the diagonal that produces two negative crossings yields a positive Hopf band.\nSymmetrically, one can merge two negative Hopf bands into one negative band by cutting along the other diagonal.\nFurthermore, when having a plumbing of two bands with opposite signs, one can merge them into a band with either sign depending on which cut one chooses.\nAssume that is obtained from by deleting a leaf.\nThen is an incompressible subsurface of .\nLet be the plumbing rectangle of the Hopf band associated to the additional leaf of compared to . By definition, has two sides in . Thus is also an arc of with its extremities in . By Lemma 3.3 ###reference_theorem3###, cut along is an incompressible subsurface of . Furthermore, the remainder of is a disc that can be isotoped into a neighbourhood of so that , see Figure 10 ###reference_###.\u220e\nA very similar proof yields the following lemma.\nAssume that is a plane tree whose root has only one child, and is the subtree rooted at that child. 
Then is an incompressible subsurface of .\nThe proof is identical to the previous one: in that situation, is obtained from by plumbing a Hopf band, and cutting along one of the boundaries of the plumbing disc provides the needed incompressible subsurface, as in Figure 10 ###reference_###.\n\u220e\nAssume that is a plane tree in which are three consecutive vertices where has degree , and that\n is obtained from by contracting into a single edge , while preserving the labels of the endpoints.\nThen is an incompressible subsurface of .\nBy the construction of Hopf arborescent surfaces, the edge between and in corresponds to a plumbing rectangle . It is important to recall here our orientation convention: if is labelled positively, the cores and are oriented so that one turns to the left when going from to at the rectangle , while if is labelled negatively, one turns to the right, see Figure 5 ###reference_###. Now, we consider two diagonal arcs and on the plumbing rectangle as pictured in Figure 9 ###reference_###. When cutting along such a diagonal arc, we obtain a new surface in which the cores and merge into a single core. However, their orientations might mismatch, depending on whether we cut along or . We take the convention that is the arc that preserves the orientations, while induces an orientation mismatch, see Figures 9 ###reference_### and 11 ###reference_###.\nNow, let us first consider the case where the labels of and are the same. In this case, we consider the subsurface of obtained by cutting along . This has the effect of merging the core curves and in a way that respects their orientations. However, it might seem that since each curve and corresponds to a Hopf band, merging them like that yields a band that twists too much. 
But a key observation is that cutting along adds a twist between these two bands, as pictured in Figure 9 ###reference_###, and this twist is negative when the bands are positive, while it is positive when the bands are negative (indeed, this is the reason for our orientation convention). Therefore, the resulting surface is exactly the same as the one corresponding to the tree , and therefore is an incompressible subsurface of by Lemma 3.3 ###reference_theorem3###. See the top and bottom pictures of Figure 11 ###reference_### for an illustration.\nNow, let us consider the case where the label of is while the label of is . In that case, we consider the surface of obtained by cutting along . This has the effect of merging the core curves and but with an orientation mismatch. We take the convention that the resulting core curve is oriented by , and therefore disagrees with the orientation of while it follows it. Since and are labelled with opposite signs, the two twists on their Hopf bands cancel out, but cutting along adds a new positive twist; therefore we can consider as being the core curve of a positive Hopf band. Now, let us consider the plumbing rectangle corresponding to the edge between and . Due to the orientation mismatch, arriving at this rectangle from , we are oriented in the direction opposite to the one we would arrive with if we were arriving from . But due to the orientation convention, when going from to in we turn to the right since is negative, while when going from to in we turn to the left since is a positive band. Therefore, this effect cancels out the orientation mismatch, and coincides exactly with the surface corresponding to the tree . Therefore is an incompressible subsurface of by Lemma 3.3 ###reference_theorem3###. 
See the third picture of Figure 11 ###reference_### for an illustration.\nThe same cancellation effect happens when the label of is and the label of is : when cutting along we have an orientation mismatch which is cancelled out by the fact that the new band is negative, and thus the orientation convention makes it turn in the opposite direction in the plumbing rectangle between and . Therefore, in that case is also an incompressible subsurface of thanks to Lemma 3.3 ###reference_theorem3###. This is illustrated in the second picture of Figure 11 ###reference_###.\n\u220e\nAs a corollary, contracting any path of into an edge whose labels match with the labels of the extremities of the path produces a tree such that .\nBy definition, if admits a homeomorphic embedding into , it can be obtained iteratively from by (i) removing a child leaf, (ii) removing a parent leaf, (iii) reducing a label or (iv) contracting a path while preserving the labels of the endpoints. Since no two elements of the alphabet are comparable, case (iii) cannot happen. Then the cases (i), (ii) and (iv) are handled respectively by Lemma 3.4 ###reference_theorem4###, Lemma 3.5 ###reference_theorem5### and Lemma 3.6 ###reference_theorem6###.\u220e\nOn the other hand, the Kruskal Tree Theorem directly yields the following proposition.\nHopf arborescent links are well-quasi-ordered under the link-minor relation.\nTake an infinite sequence of Hopf arborescent links, and let be a sequence of plane trees such that for all , is a Seifert surface of . Then, by Theorem 2.3 ###reference_theorem3###, there exists such that admits a homeomorphic embedding into . Hence is a link-minor of .\n\u220e\nWe can deduce Theorem 1.2 ###reference_theorem2### as a direct corollary of Proposition 3.2 ###reference_theorem2### and Proposition 3.7 ###reference_theorem7###.\nTake an infinite sequence of canonical Seifert surfaces of Hopf arborescent links. Then by Proposition 3.7 ###reference_theorem7###, for some . 
By Proposition 3.2 ###reference_theorem2### we have , i.e. the surface-minor order is a well-quasi-order on Hopf arborescent surfaces.\n\u220e\nThe proof of Proposition 3.2 ###reference_theorem2### highlights that the minor relation on the set of Hopf arborescent surfaces is more subtle and fragile than one might expect.\nIndeed, the cuts involved when taking an incompressible subsurface in the proof of Lemma 3.6 ###reference_theorem6### inevitably merge Hopf bands and thus one needs to be careful in order to control the number of resulting twists.\nIn particular, the proof does not seem to generalise to the more general classes of surfaces obtained by plumbing bands with a bounded number of twists (even though everything works well at the level of trees)."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Decidability of the genus defect for Hopf arborescent links",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Monotonicity of the genus defect",
57
+ "text": "Now that we have proved that the link-minor relation is a well-quasi-order on the set of Hopf arborescent links, we want to highlight a property that is stable for this minor relation.\nRecall that the genus defect of an oriented link is the difference between its classical genus and its 4-dimensional genus.\nThe latter can be either in the topological locally flat or in the smooth category.\nAll statements in this section (and in particular Theorem 1.1 ###reference_theorem1###) hold in both categories.\nWe reprove Lemma 6 of [6 ###reference_b6###] in the form of Proposition 4.1 ###reference_theorem1### using the fact that the link-minor relation implies that the associated Seifert surfaces are surface-minors.\nThe genus defect is monotone on the family of Hopf arborescent links with respect to the link-minor relation, i.e., if is a link-minor of , then .\nWe rely on the following lemma that highlights how the -genus behaves with respect to surface-minors. It uses a cut-and-paste construction and an Euler characteristic argument.\nLet be an oriented surface of and be a surface-minor of . If we write and , then we have .\nSeeing as the boundary of the -ball , consider a surface in such that and .\nGluing the remaining pieces of to along yields a surface in such that , see Figure 12 ###reference_###.\nBy definition of , we have , and thus .\nFurthermore, the genus of is given by . Thus one has .\nNow if we assume to minimise the 4-genus over surfaces bounded by , we conclude: .\n\u220e\nAs Hopf arborescent surfaces are of minimal genus for Hopf arborescent links, Lemma 4.2 ###reference_theorem2### can be used to prove Proposition 4.1 ###reference_theorem1###.\nLet and be two Hopf arborescent links such that is a link-minor of .\nConsider and the corresponding canonical Seifert surfaces.\nSince Hopf arborescent links are fibred, is a Seifert surface of of minimal genus (see Theorem 2.1 ###reference_theorem1###), i.e. 
and similarly, .\nBy Proposition 3.2 ###reference_theorem2### one has .\nHence, by Lemma 4.2 ###reference_theorem2###, one gets .\n\u220e"
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Proof of Theorem 1.1",
63
+ "text": "We first show that the link-minor relation can be decided using the decidability of link equivalence. For knots, equivalence can be tested as a combination of an algorithm that allows one to decide whether two -manifolds with boundary are homeomorphic [25 ###reference_b25###, 33 ###reference_b33###] and the Gordon-Luecke Theorem [19 ###reference_b19###] that states that two knots are equivalent if their complements, which are -manifolds with boundaries, are equivalent. In the case of links, we additionally need to keep track of a longitude of each torus boundary component in the complement of the link. We refer to the survey of Lackenby [26 ###reference_b26###, Section 2] for a summary of the techniques that allow one to prove the following theorem:\nGiven two links and , the problem of testing whether is ambient isotopic to is decidable.\nGiven a Hopf arborescent link , denote by the set of plane trees whose associated Hopf arborescent surface has as oriented boundary.\nAs a corollary, we obtain:\nGiven a Hopf arborescent link , the set is computable.\nFor increasing , we enumerate all plane trees with vertices labelled by and store the trees such that is isotopic to , where we test isotopy using Theorem 4.3 ###reference_theorem3###.\nIf we find a for which such a tree exists, we finish the enumeration for this value and return the stored trees.\nIndeed, plumbing Hopf bands produces a surface with Betti number .\nBy Theorem 2.1 ###reference_theorem1###, all the trees such that produce surfaces with the same genus, hence have the same number of vertices.\nAs the input is a Hopf arborescent link, there exists a tree such that . 
So the algorithm terminates.\n\u220e\nAlternatively, and if one wants some control on the complexity of that algorithm, one can avoid blindly testing for increasing by first computing the genus of the link [21 ###reference_b21###, 33 ###reference_b33###], or just computing an upper bound on it using, e.g., Seifert\u2019s algorithm, and then enumerate only the trees that produce surfaces up to that genus. From Lemma 4.4 ###reference_theorem4###, we obtain:\nGiven two Hopf arborescent links and , testing if is a link-minor of is decidable.\nUsing Lemma 4.4 ###reference_theorem4###, we compute and .\nThe trees in (resp. ) all have the same number (resp. ) of vertices.\nThen we brute force every possible path contraction to an edge and iterated leaf deletion on trees of such that the result is a tree with vertices, and test whether it is equal to a tree of . If such a test succeeds, we output yes, otherwise we return no.\nThere is a finite number of trees in both and and a finite number of trees with vertices that homeomorphically embed into a tree of .\nHence that algorithm eventually terminates. Its correctness follows directly from the definition of link-minor.\n\u220e\nFinally, we prove Theorem 1.1 ###reference_theorem1### by using the stability of the genus defect under the link-minor relation, the previous algorithms, and the well-quasi-order properties.\nBy Proposition 3.7 ###reference_theorem7###, the order defined by link-minors is a well-quasi-order on the set of Hopf arborescent links. Hence, the set of Hopf arborescent links that have defect at most is characterized by a finite family of forbidden minors. It follows, by Proposition 4.1 ###reference_theorem1###, that if and only if for all in , is not a link-minor of . Using Lemma 4.5 ###reference_theorem5### we test for each if is a link-minor of . 
If such a test succeeds, output no, otherwise the input link has .\n\u220e\nAs said in the introduction, our proof is not constructive as it relies at its core on the existence of a set of forbidden minors for having defect at most . This set of forbidden minors is not explicit and hard-coded in the algorithm. Furthermore, the sets of excluded minors will be different for the two different notions of defect (smooth and locally flat). It is likely that computing them is a topological challenge requiring arguments of a different nature.\nTheorem 1.2 ###reference_theorem2### provides the existence of a set of forbidden minors for having defect at most but for a different and stronger definition of minors on links that relies only on the surface-minor relation on the Seifert surface and not on the trees. However, deciding this relation, even by a brute force argument, seems challenging: in addition to the fact that no algorithm seems to be known for testing isotopy of surfaces, one would also need to control the complexity of the cutting arcs. Even with positive Hopf arborescent links only, this seems delicate [34 ###reference_b34###]."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Examples: Hopf arborescent links with non-trivial defect",
69
+ "text": "It is not clear a priori that the topological and/or smooth defects of Hopf arborescent links are nonzero. For instance, our building block, the Hopf band, has both defects equal to since it bounds an annulus in , which has genus . Furthermore, as mentioned in the introduction, when Hopf arborescent links are only made with positive Hopf bands, they belong to a class of links called positive links, which implies that they are strongly quasi-positive [43 ###reference_b43###], and this implies in turn that their smooth 4-genus equals their 3-genus [42 ###reference_b42###]. So for this class of links, the smooth defect is always zero. In contrast, in this section, we provide an example of a Hopf arborescent knot for which both the topological and the smooth defect are nonzero, and an argument explaining how to use this knot to provide examples with arbitrarily large defects."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {},
74
+ "image_paths": {},
75
+ "validation": true,
76
+ "references": [
77
+ {
78
+ "1": {
79
+ "title": "http://katlas.org/wiki/8_10.",
80
+ "author": "The knot atlas: .",
81
+ "venue": "Accessed: 2024-03-13.",
82
+ "url": null
83
+ }
84
+ },
85
+ {
86
+ "2": {
87
+ "title": "The computational complexity of knot genus and spanning area.",
88
+ "author": "Ian Agol, Joel Hass, and William Thurston.",
89
+ "venue": "Transactions of the American Mathematical Society,\n358(9):3821\u20133850, 2006.",
90
+ "url": null
91
+ }
92
+ },
93
+ {
94
+ "3": {
95
+ "title": "Positive braids of maximal signature.",
96
+ "author": "Sebastian Baader.",
97
+ "venue": "L\u2019Enseignement Math\u00e9matique, 59(3):351\u2013358, 2014.",
98
+ "url": null
99
+ }
100
+ },
101
+ {
102
+ "4": {
103
+ "title": "Minor theory for surfaces and divides of maximal signature.",
104
+ "author": "Sebastian Baader and Pierre Dehornoy.",
105
+ "venue": "arXiv preprint arXiv:1211.7348, 2012.",
106
+ "url": null
107
+ }
108
+ },
109
+ {
110
+ "5": {
111
+ "title": "Minor theory for quasipositive surfaces.",
112
+ "author": "Sebastian Baader, Pierre Dehornoy, and Livio Liechti.",
113
+ "venue": "In Athanase Papadopoulos, editor, Essays in geometry, dedicated\nto Norbert A\u2019Campo, pages 351\u2013358. EMS Press, 2023.",
114
+ "url": null
115
+ }
116
+ },
117
+ {
118
+ "6": {
119
+ "title": "On the topological 4-genus of torus knots.",
120
+ "author": "Sebastian Baader, Peter Feller, Lukas Lewark, and Livio Liechti.",
121
+ "venue": "Transactions of the American Mathematical Society,\n370(4):2639\u20132656, 2018.",
122
+ "url": null
123
+ }
124
+ },
125
+ {
126
+ "7": {
127
+ "title": "Classification of genus-two surfaces in .",
128
+ "author": "Filippo Baroni.",
129
+ "venue": "arXiv preprint arXiv:2309.05387, 2023.",
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "8": {
135
+ "title": "New geometric splittings of classical knots and the classification\nand symmetries of arborescent knots.",
136
+ "author": "Francis Bonahon and Laurence C Siebenmann.",
137
+ "venue": "Preprint available on the authors\u2019 webpage, 1979.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "9": {
143
+ "title": "Knots.",
144
+ "author": "Gerhard Burde and Heiner Zieschang.",
145
+ "venue": "Walter de Gruyter, 2002.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "10": {
151
+ "title": "Regina: Software for low-dimensional topology.",
152
+ "author": "Benjamin A. Burton, Ryan Budney, William Pettersson, et al.",
153
+ "venue": "http://regina-normal.github.io/, 1999\u20132023.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "11": {
159
+ "title": "Parameterized algorithms, volume 4.",
160
+ "author": "Marek Cygan, Fedor V Fomin, \u0141ukasz Kowalik, Daniel Lokshtanov, D\u00e1niel\nMarx, Marcin Pilipczuk, Micha\u0142 Pilipczuk, and Saket Saurabh.",
161
+ "venue": "Springer, 2015.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "12": {
167
+ "title": "The unbearable hardness of unknotting.",
168
+ "author": "Arnaud de Mesmay, Yo\u2019av Rieck, Eric Sedgwick, and Martin Tancer.",
169
+ "venue": "Advances in Mathematics, 381:107648, 2021.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "13": {
175
+ "title": "Graph theory.",
176
+ "author": "Reinhard Diestel.",
177
+ "venue": "Number 173 in Graduate texts in mathematics. Springer, New York, 5th\nedition, 2016.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "14": {
183
+ "title": "Some problems in knot theory.",
184
+ "author": "Ralph Fox.",
185
+ "venue": "In Topology of 3-manifolds and related topics (Proc. The Univ.\nof Georgia Institute), pages 168\u2013176, Englewood Cliffs, N.J, 1961.\nPrentice-Hall.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "15": {
191
+ "title": "The topology of four-dimensional manifolds.",
192
+ "author": "Michael Hartley Freedman.",
193
+ "venue": "Journal of Differential Geometry, 17(3):357\u2013453, 1982.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "16": {
199
+ "title": "The Murasugi sum is a natural geometric operation.",
200
+ "author": "David Gabai.",
201
+ "venue": "Contemp. Math, 20:131\u2013143, 1983.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "17": {
207
+ "title": "Genera of the Arborescent Links.",
208
+ "author": "David Gabai.",
209
+ "venue": "Memoirs of the American Mathematical Society. American Mathematical\nSociety, 1986.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "18": {
215
+ "title": "On the stable equivalence of open books in three-manifolds.",
216
+ "author": "Emmanuel Giroux and Noah Goodman.",
217
+ "venue": "Geometry & Topology, 10(1):97\u2013114, 2006.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "19": {
223
+ "title": "Knots are determined by their complements.",
224
+ "author": "Cameron McA. Gordon and John S Luecke.",
225
+ "venue": "Journal of the American Mathematical Society, 2:371\u2013415, 1989.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "20": {
231
+ "title": "Theorie der Normalfl\u00e4chen.",
232
+ "author": "Wolfgang Haken.",
233
+ "venue": "Acta Mathematica, 105(3):245\u2013375, 1961.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "21": {
239
+ "title": "The computational complexity of knot and link problems.",
240
+ "author": "Joel Hass, Jeffrey C. Lagarias, and Nicholas Pippenger.",
241
+ "venue": "Journal of the ACM (JACM), 46(2):185\u2013211, 1999.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "22": {
247
+ "title": "Algebraic topology.",
248
+ "author": "Allen Hatcher.",
249
+ "venue": "Cambridge University Press, Cambridge ; New York, 2002.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "23": {
255
+ "title": "The genus of embedded surfaces in the projective plane.",
256
+ "author": "Peter B Kronheimer and Tomasz S Mrowka.",
257
+ "venue": "Mathematical Research Letters, 1(6):797\u2013808, 1994.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "24": {
263
+ "title": "Well-quasi-ordering, the tree theorem, and Vazsonyi\u2019s conjecture.",
264
+ "author": "Joseph B. Kruskal.",
265
+ "venue": "Transactions of the American Mathematical Society, 95:210\u2013225,\n1960.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "25": {
271
+ "title": "Algorithmic homeomorphism of 3-manifolds as a corollary of\ngeometrization.",
272
+ "author": "Greg Kuperberg.",
273
+ "venue": "Pacific Journal of Mathematics, 301(1):189\u2013241, September\n2019.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "26": {
279
+ "title": "Elementary Knot Theory.",
280
+ "author": "Marc Lackenby.",
281
+ "venue": "In Lectures on Geometry. Oxford University Press, 01 2017.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "27": {
287
+ "title": "Some conditionally hard problems on links and 3-manifolds.",
288
+ "author": "Marc Lackenby.",
289
+ "venue": "Discrete & Computational Geometry, 58:580\u2013595, 2017.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "28": {
295
+ "title": "Algorithms in 3-manifold theory.",
296
+ "author": "Marc Lackenby.",
297
+ "venue": "Surveys in Differential Geometry, 2020.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "29": {
303
+ "title": "The efficient certification of knottedness and Thurston norm.",
304
+ "author": "Marc Lackenby.",
305
+ "venue": "Advances in Mathematics, 387:107796, 2021.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "30": {
311
+ "title": "On the genus defect of positive braid knots.",
312
+ "author": "Livio Liechti.",
313
+ "venue": "Algebraic & Geometric Topology, 20(1):403\u2013428, 2020.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "31": {
319
+ "title": "The insolubility of the problem of homeomorphy.",
320
+ "author": "Andrei Andreevich Markov.",
321
+ "venue": "In Doklady Akademii Nauk, volume 121, pages 218\u2013220. Russian\nAcademy of Sciences, 1958.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "32": {
327
+ "title": "Hardness of embedding simplicial complexes in .",
328
+ "author": "Ji\u0159\u00ed Matou\u0161ek, Martin Tancer, and Uli Wagner.",
329
+ "venue": "Journal of the European Mathematical Society, 13(2):259\u2013295,\n2010.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "33": {
335
+ "title": "Algorithmic topology and classification of 3-manifolds.",
336
+ "author": "Sergei Vladimirovich Matveev.",
337
+ "venue": "Springer, 2007.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "34": {
343
+ "title": "Cutting arcs for torus links and trees.",
344
+ "author": "Filip Misev.",
345
+ "venue": "Bulletin de la Soci\u00e9t\u00e9 Math\u00e9matique de France,\n145:575\u2013602, 2014.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "35": {
351
+ "title": "On the plumbing structure of fibre surfaces.",
352
+ "author": "Filip Misev.",
353
+ "venue": "PhD thesis, Universit\u00e4t Bern, 2016.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "36": {
359
+ "title": "Hopf bands in arborescent Hopf plumbings.",
360
+ "author": "Filip Misev.",
361
+ "venue": "Osaka Journal of Mathematics, 56(2):375 \u2013 389, 2019.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "37": {
367
+ "title": "On a certain subgroup of the group of an alternating link.",
368
+ "author": "Kunio Murasugi.",
369
+ "venue": "American Journal of Mathematics, 85(4):544\u2013550, 1963.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "38": {
375
+ "title": "On well-quasi-ordering finite trees.",
376
+ "author": "C. St. J. A. Nash-Williams.",
377
+ "venue": "Mathematical Proceedings of the Cambridge Philosophical\nSociety, 59(4):833\u2013835, 1963.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "39": {
383
+ "title": "Generalized plumbings and Murasugi sums.",
384
+ "author": "Burak Ozbagci and Patrick Popescu-Pampu.",
385
+ "venue": "Arnold Mathematical Journal, 2(1):69\u2013119, dec 2015.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "40": {
391
+ "title": "The Conway knot is not slice.",
392
+ "author": "Lisa Piccirillo.",
393
+ "venue": "Annals of Mathematics, 191(2):581\u2013591, 2020.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "41": {
399
+ "title": "Knots and links.",
400
+ "author": "Dale Rolfsen.",
401
+ "venue": "AMS Chelsea Pub, Providence, R.I, 2003.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "42": {
407
+ "title": "Quasipositivity as an obstruction to sliceness.",
408
+ "author": "Lee Rudolph.",
409
+ "venue": "Bulletin of the American Mathematical Society, 29(1):51\u201359,\n1993.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "43": {
415
+ "title": "Positive links are strongly quasipositive.",
416
+ "author": "Lee Rudolph.",
417
+ "venue": "Geometry & Topology Monographs, Volume 2: Proceedings of the\nKirbyfest, pages 555\u2013562, 1998.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "44": {
423
+ "title": "Minimal genus Seifert surfaces for special arborescent links.",
424
+ "author": "Makoto Sakuma.",
425
+ "venue": "Osaka Journal of Mathematics, 31:861\u2013905, 1994.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "45": {
431
+ "title": "Constructions of fibred knots and links.",
432
+ "author": "John R. Stallings.",
433
+ "venue": "Proc. Symp. Pure Math., AMS 27:315\u2013319, 1975.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "46": {
439
+ "title": "Slice knots: Knot theory in the 4th dimension, 2011.",
440
+ "author": "Peter Teichner.",
441
+ "venue": "Lecture notes by Julia Collins and Mark Powell. Electronic version\navailable from https://www.maths.ed.ac.uk/ v1ranick/papers/sliceknots2.pdf.",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "47": {
447
+ "title": "On irreducible 3-manifolds which are sufficiently large.",
448
+ "author": "Friedhelm Waldhausen.",
449
+ "venue": "Annals of Mathematics, 87:56\u201388, 1968.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "48": {
455
+ "title": "Homology manifolds.",
456
+ "author": "Shmuel Weinberger.",
457
+ "venue": "Handbook of geometric topology, pages 1085\u20131102, 2002.",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "49": {
463
+ "title": "Isotopy types of knot spanning surfaces.",
464
+ "author": "Wilbur Whitten.",
465
+ "venue": "Topology, 12:373\u2013380, 1973.",
466
+ "url": null
467
+ }
468
+ }
469
+ ],
470
+ "url": "http://arxiv.org/html/2312.09094v2"
471
+ }
20240318/2312.15045v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2312.15736v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2401.06604v3.json ADDED
@@ -0,0 +1,715 @@
1
+ {
2
+ "title": "Identifying Policy Gradient Subspaces",
3
+ "abstract": "Policy gradient methods hold great potential for solving complex continuous control tasks.\nStill, their training efficiency can be improved by exploiting structure within the optimization problem.\nRecent work indicates that supervised learning can be accelerated by leveraging the fact that gradients lie in a low-dimensional and slowly-changing subspace.\nIn this paper, we conduct a thorough evaluation of this phenomenon for two popular deep policy gradient methods on various simulated benchmark tasks.\nOur results demonstrate the existence of such gradient subspaces despite the continuously changing data distribution inherent to reinforcement learning.\nThese findings reveal promising directions for future work on more efficient reinforcement learning, e.g., through improving parameter-space exploration or enabling second-order optimization.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Deep reinforcement learning (RL) has marked significant achievements in numerous challenging problems, ranging from Atari games (Mnih et al., 2013 ###reference_b29###) to various real robotic challenges, such as contact-rich manipulation (Gu et al., 2017 ###reference_b13###; Kalashnikov et al., 2018 ###reference_b19###), complex planning problems (Everett et al., 2018 ###reference_b8###; Ao et al., 2022 ###reference_b3###), and hard-to-control dynamic tasks (Cheng et al., 2023 ###reference_b7###; Kaufmann et al., 2023 ###reference_b20###).\nDespite these notable successes, deep RL methods are often brittle due to the use of function approximators with large numbers of parameters and persistently changing data distributions \u2013 a setting notoriously hard for optimization.\nDeep RL, in its vanilla form, operates under limited prior knowledge and structural information about the problem, consequently requiring large numbers of interactions with the environment to reach good performance.\nFor supervised learning (SL), Gur-Ari et al. 
(2018 ###reference_b14###) demonstrated that the gradients utilized for neural network optimization reside in a low-dimensional, slowly-changing subspace.\nBased on this insight, recent works introduce more structured optimization procedures for SL by identifying and harnessing these gradient subspaces.\nExploiting this structure enables the optimization to be carried out in a reduced-dimensional subspace, yielding enhanced efficiency with minimal, if any, loss in performance (Li et al., 2018 ###reference_b25###; Gressmann et al., 2020 ###reference_b12###; Larsen et al., 2021 ###reference_b22###; Li et al., 2022a ###reference_b26###).\nDespite the benefits of subspace methods in SL, their adoption in deep RL has remained limited.\nA straightforward way to transfer these principles is to find lower-dimensional subspaces in policy gradient approaches (Peters & Schaal, 2008 ###reference_b32###).\nPolicy gradient (PG) methods estimate the gradient of the RL objective to update the policy\u2019s parameters using some form of stochastic gradient descent (SGD).\nSince most SL approaches using subspaces operate at the level of the SGD optimization, PG algorithms would be a natural choice to leverage the knowledge about subspaces from SL in the RL context.\nNevertheless, in RL, such methods have been explored primarily within the realm of evolutionary strategies (Maheswaranathan et al., 2019 ###reference_b28###), representation learning (Le Lan et al., 2023 ###reference_b23###), and transfer learning (Gaya et al., 2022 ###reference_b10###).\nA possible explanation is the constantly changing data distribution of RL due to continual exploration that intuitively seems to hinder the identification of gradient subspaces.\nThe limited body of studies using subspaces in PG algorithms underlines the need for a more profound discussion in this domain.\nThis paper conducts a comprehensive empirical evaluation of gradient subspaces in the context of PG algorithms, assessing their 
properties across various simulated RL benchmarks.\nOur experiments reveal several key findings: (i) there exist parameter-space directions that exhibit significantly larger curvature compared to other parameter-space directions, (ii) the gradients live in the subspace spanned by these directions, and (iii) the subspace remains relatively stable throughout the RL training.\nAdditionally, we analyze the gradients of the critic \u2013 an integral part of the PG estimation in actor-critic methods \u2013 and observe that the critic subspace often exhibits less variability and retains a larger portion of its gradient compared to the actor subspace.\nWe also test the robustness of PG subspaces regarding mini-batch approximations of the gradient that are used in practice during training and evaluate a similar mini-batch approximation of the Hessian.\nLastly, we explore the extent to which the variation in the data distribution influences the aforementioned subspace analysis by conducting experiments with both an on-policy and an off-policy algorithm, the latter of which reuses previously collected data for training.\nBy shedding light on gradient subspaces in deep RL, this paper provides insights that can potentially enhance RL performance by advancing parameter-space exploration or enabling second-order optimization.\nWe begin by reviewing existing literature on subspace approaches in Section 2 ###reference_###, followed by a recapitulation of the RL preliminaries in Section 3 ###reference_### as a foundation for the analysis of gradient subspaces in RL in Section 4 ###reference_###.\nSection 5 ###reference_### concludes with a discussion of the results and implications of our work.\nThe code for our experiments is available on the project website ###reference_es###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related work",
15
+ "text": "Numerous works have studied the use of gradient subspaces in SL.\nThese works can be roughly divided into informed and random subspace approaches.\nIn the following, we give an overview of these papers and highlight works that investigate related concepts in RL."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Preliminaries",
21
+ "text": "This section introduces the mathematical background and notation used throughout the paper.\nFurthermore, we briefly describe the two RL algorithms that we will analyze in Section 4 ###reference_###."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Mathematical background and notation",
27
+ "text": "For a given objective function , we use to denote the gradient of a model with respect to its parameters and to denote the corresponding Hessian matrix.\nWe use to denote the th largest eigenvector of .\nNote that we use \u201cth largest eigenvector\u201d as shorthand for \u201ceigenvector with respect to the th largest eigenvalue\u201d.\nSince is symmetric, all eigenvectors are orthogonal to each other, and we assume .\nIn this work, we investigate projections of gradients into lower-dimensional subspaces, i.e., mappings from to with .\nThese mappings are defined by a projection matrix .\n denotes the projection of into the subspace and is the mapping of back to the original dimensionality that minimizes the projection error .\nHere, denotes the pseudoinverse of .\nIf the projection matrix is semi-orthogonal, i.e., the columns are orthogonal and norm one, the pseudoinverse simplifies to the transpose .\nThe matrix of the largest eigenvectors is one example of such a semi-orthogonal matrix."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Reinforcement learning",
33
+ "text": "We consider tasks formulated as Markov decision processes (MDPs), defined by the tuple $(\\mathcal{S}, \\mathcal{A}, p, r, \\gamma)$.\nHere, $\\mathcal{S}$ and $\\mathcal{A}$ are the state and action spaces, respectively.\nThe transition dynamics $p(s_{t+1} \\mid s_{t}, a_{t})$ define the probability density of evolving from one state to another.\nAt each timestep $t$ the agent receives a scalar reward $r_{t} = r(s_{t}, a_{t})$.\nA stochastic policy, $\\pi(a \\mid s)$, defines a mapping from state $s$ to a probability distribution over actions $a$.\nRL aims to find an optimal policy $\\pi^{*}$, maximizing the expected cumulative return, discounted by $\\gamma \\in [0, 1)$.\nThe value function $V^{\\pi}(s)$ represents the expected (discounted) cumulative reward from state $s$ following policy $\\pi$, and the action value function $Q^{\\pi}(s, a)$ denotes the expected (discounted) cumulative reward for taking action $a$ in state $s$ and then following $\\pi$. The advantage function $A^{\\pi}(s, a) = Q^{\\pi}(s, a) - V^{\\pi}(s)$ quantifies the relative benefit of taking an action $a$ in state $s$ over the average action according to policy $\\pi$.\nRL algorithms generally can be divided into two styles of learning.\nOn-policy methods, like Proximal Policy Optimization (PPO) (Schulman et al., 2017 ###reference_b39###), only use data generated from the current policy for updates. In contrast, off-policy algorithms, such as Soft Actor-Critic (SAC) (Haarnoja et al., 2018 ###reference_b15###), leverage data collected from different policies, such as old iterations of the policy."
34
+ },
35
+ {
36
+ "section_id": "3.2.1",
37
+ "parent_section_id": "3.2",
38
+ "section_name": "3.2.1 Proximal Policy Optimization",
39
+ "text": "On-policy PG methods typically optimize the policy via an objective such as\n$L(\\theta) = \\hat{\\mathbb{E}}_{t}\\left[r_{t}(\\theta)\\hat{A}_{t}\\right]$\nwith $r_{t}(\\theta) = \\frac{\\pi_{\\theta}(a_{t} \\mid s_{t})}{\\pi_{\\theta_{\\mathrm{old}}}(a_{t} \\mid s_{t})}$ and $\\hat{A}_{t}$ being an estimator of the advantage function at timestep $t$ and $\\pi_{\\theta_{\\mathrm{old}}}$ denoting the policy before the update (Kakade & Langford, 2002 ###reference_b18###).\nHowever, optimizing this objective can result in excessively large updates, leading to instabilities and possibly divergence.\nProximal Policy Optimization (PPO) (Schulman et al., 2017 ###reference_b39###) is an on-policy actor-critic algorithm designed to address this issue by clipping the probability ratio $r_{t}(\\theta)$ to the interval $[1 - \\epsilon, 1 + \\epsilon]$, which removes the incentive for moving outside the interval, resulting in the following actor loss:\n$L^{\\mathrm{actor}}(\\theta) = -\\hat{\\mathbb{E}}_{t}\\left[\\min\\left(r_{t}(\\theta)\\hat{A}_{t}, \\operatorname{clip}(r_{t}(\\theta), 1 - \\epsilon, 1 + \\epsilon)\\hat{A}_{t}\\right)\\right].$\nThe advantage estimation\n$\\hat{A}_{t} = \\sum_{l=0}^{T-t-1}(\\gamma\\lambda)^{l}\\delta_{t+l}$ with $\\delta_{t} = r_{t} + \\gamma V_{\\phi}(s_{t+1}) - V_{\\phi}(s_{t})$\nuses a learned value function $V_{\\phi}$, which acts as a critic.\nThe hyperparameter $\\lambda \\in [0, 1]$ determines the trade-off between observed rewards and estimated values.\nThe critic is trained to minimize the mean squared error between the predicted value $V_{\\phi}(s_{t})$ and the discounted sum of future episode rewards $\\sum_{l \\geq 0}\\gamma^{l}r_{t+l}$."
40
+ },
41
+ {
42
+ "section_id": "3.2.2",
43
+ "parent_section_id": "3.2",
44
+ "section_name": "3.2.2 Soft Actor-Critic",
45
+ "text": "Soft Actor-Critic (SAC) (Haarnoja et al., 2018 ###reference_b15###) is a policy gradient algorithm that integrates the maximum entropy reinforcement learning framework with the actor-critic approach.\nAs such, it optimizes a trade-off between the expected return and the policy\u2019s entropy.\nIt is an off-policy algorithm and, as such, stores transitions in a replay buffer $\\mathcal{D}$, which it samples from during optimization.\nTo that end, SAC modifies the targets for the learned Q-function $Q_{\\phi}$ to include a term that incentivizes policies with large entropy $\\mathcal{H}(\\pi_{\\theta}(\\cdot \\mid s))$, resulting in the following critic loss:\n$L_{Q}(\\phi) = \\mathbb{E}_{(s_{t}, a_{t}) \\sim \\mathcal{D}}\\left[\\tfrac{1}{2}\\left(Q_{\\phi}(s_{t}, a_{t}) - \\left(r_{t} + \\gamma\\,\\mathbb{E}_{s_{t+1}, a_{t+1}}\\left[Q_{\\bar{\\phi}}(s_{t+1}, a_{t+1}) - \\alpha\\log\\pi_{\\theta}(a_{t+1} \\mid s_{t+1})\\right]\\right)\\right)^{2}\\right].$\nNote that SAC, in its original formulation, trains an additional value function and a second Q-function, but we omitted these details for brevity.\nThe algorithm then trains the actor to minimize the KL-divergence between the policy and the exponential of the learned Q-function:\n$L_{\\pi}(\\theta) = \\mathbb{E}_{s_{t} \\sim \\mathcal{D}}\\left[D_{\\mathrm{KL}}\\left(\\pi_{\\theta}(\\cdot \\mid s_{t})\\,\\middle\\|\\,\\exp(Q_{\\phi}(s_{t}, \\cdot)) / Z_{\\phi}(s_{t})\\right)\\right],$\nwhere $Z_{\\phi}(s_{t})$ denotes the normalization to make the right side of the KL-divergence a proper distribution.\nOptimizing this loss increases the probability of actions with high value under the Q-function."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Gradient subspaces in policy gradient algorithms",
51
+ "text": "In Section 2 ###reference_###, we have highlighted several works from SL that utilize low-dimensional gradient subspaces for improving the learning performance.\nNaturally, we would like to transfer these benefits to policy gradient algorithms.\nHowever, the training in RL is significantly less stationary than in the supervised setting (Bjorck et al., 2022 ###reference_b4###).\nAs the RL agent changes, the data distribution shifts since the data is generated by the agent\u2019s interactions with its environment.\nFurthermore, the value of a state also depends on the agent\u2019s behavior in future states. Thus, the targets for the actor and critic networks change constantly.\nThese crucial differences between SL and RL underscore the need to analyze to which extent insights about gradient subspaces transfer between these settings.\nThe analysis presented in this section focuses on two policy gradient algorithms: PPO (Schulman et al., 2017 ###reference_b39###) and SAC (Haarnoja et al., 2018 ###reference_b15###), which are popular instantiations of on-policy and off-policy RL.\nWe apply the algorithms to twelve benchmark tasks from OpenAI Gym (Brockman et al., 2016 ###reference_b5###), Gym Robotics (Plappert et al., 2018a ###reference_b33###), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020 ###reference_b45###).\nOur code builds upon the algorithm implementations of Stable Baselines3 (Raffin et al., 2021 ###reference_b36###).\nThe learning curves are displayed in Appendix A ###reference_###.\nWe ran each experiment for 10 random seeds and plot the mean and standard deviation for the results in Sections 4.2 ###reference_### and 4.3 ###reference_###.\nDue to space constraints, we show the analysis results only for selected tasks and present detailed results for all twelve tasks in Appendix B ###reference_###.\nMoreover, we conduct an evaluation of the impact of the RL algorithm\u2019s hyperparameters on the gradient subspace in LABEL:app:impact_of_suboptimal_hyperparameters.\nFor the following analyses, we calculate Hessian eigenvectors of the loss with respect to the network parameters via the Lanczos method (Lehoucq et al., 1998 ###reference_b24###) since it is an efficient method for estimating the top eigenvectors that avoids explicitly constructing the Hessian matrix.\nSince we can only estimate the Hessian from data, we use a large number of state-action pairs to obtain precise estimates for the eigenvectors of the true Hessian, similar to how Ilyas et al. (2020 ###reference_b17###) approximate the true policy gradient.\nFor PPO, we collect $10^{6}$ on-policy samples.\nThis would, however, not be faithful to the diverse distribution of off-policy data that SAC uses for training.\nTo match this data distribution for the analysis, we save the replay buffer during training and use the data of the complete replay buffer for estimating the Hessian.\nNote that the replay buffer also has a capacity of $10^{6}$ samples but is not completely filled at the beginning of training.\nAs mentioned in Section 3 ###reference_###, SAC and PPO each train two different networks, an actor and a critic.\nWe, therefore, conduct our analysis for each network individually.\nTo verify that there exist high-curvature directions spanning a subspace that stays relatively stable throughout the training and that contains the gradient, we check three conditions:\n(i) Some parameter-space directions exhibit significantly larger curvature in the actor/critic loss than other directions.\n(ii) The actor/critic gradient mainly lies in the subspace spanned by these directions.\n(iii) The subspaces for the actor and critic networks change slowly throughout the training."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "The loss curvature is large in a few parameter-space directions",
57
+ "text": "The Hessian matrix describes the curvature of a function, with the eigenvectors being the directions of maximum and minimum curvature.\nThe corresponding eigenvalues describe the magnitude of the curvature along these directions.\nTherefore, we verify condition i) ###reference_i1### by plotting the spectrum of Hessian eigenvalues for the actor and critic loss of PPO with respect to the network parameters in Figure 1 ###reference_###.\nThe plots show that there are a few large eigenvalues for both the actor and critic loss.\nAll remaining eigenvalues are distributed close to zero.\nThese plots confirm that there are a few directions with significantly larger curvature; in other words, the problem is ill-conditioned.\n###figure_1### ###figure_2### ###figure_3### ###figure_4###"
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "The gradient lies in the high-curvature subspace",
63
+ "text": "To verify condition ii) ###reference_i2### that the high-curvature subspace contains the gradients of the respective loss, we measure how well these gradients can be represented in the subspace.\nLet $P \\in \\mathbb{R}^{k \\times D}$ be the semi-orthogonal matrix that projects into the high-curvature subspace.\n$P$ consists row-wise of the $k$ largest Hessian eigenvectors.\nWe compute the relative projection error, i.e., the relative difference between the original gradient $g$ and the projected gradient $P^{\\top}Pg$ that is the result of mapping $g$ into the high-curvature subspace and back into the original space.\nThe fraction of the gradient that can be represented in the subspace is then given by\n$1 - \\frac{\\lVert g - P^{\\top}Pg \\rVert^{2}}{\\lVert g \\rVert^{2}},$\nwhich simplifies to the following gradient subspace fraction criterion of Gur-Ari et al. (2018 ###reference_b14###):\n$S_{\\mathrm{frac}}(g) = \\frac{\\lVert Pg \\rVert^{2}}{\\lVert g \\rVert^{2}}.$\nWe derive this equality in LABEL:app:derivation_gradient_subspace_fraction.\nNote that $0 \\leq S_{\\mathrm{frac}} \\leq 1$ holds, where $S_{\\mathrm{frac}} = 1$ implies that the subspace perfectly contains the gradient, while $S_{\\mathrm{frac}} = 0$ means that the gradient lies entirely outside the subspace.\nDue to the normalization by $\\lVert g \\rVert^{2}$, this criterion is invariant to the scale of the gradient, which enables comparing gradient subspaces of different models and models at different stages of the training.\nTo evaluate how the gradient subspace fraction evolves over time, we evaluate the criterion at regular checkpoints during the RL training.\nTo compactly visualize this data, we split the training into three phases: initial, training, and convergence, and for each phase, we average the results of all timesteps of that phase.\nSince the algorithms require different numbers of timesteps for solving each of the tasks and reach different performance levels, we define the following heuristic criterion for the training phases.\nWe first smooth the learning curves by averaging over a sliding window and compute the maximum episode return $R_{\\max}$ over the smoothed curve.\nNext, we calculate the improvement relative to the episode return of the initial policy at each timestep $t$ of the smoothed learning curve as\n$I(t) = \\frac{R(t) - R(0)}{R_{\\max} - R(0)},$\nwhere $R(t)$ denotes the smoothed episode return at timestep $t$.\nWe then define the end of the initial phase as the first timestep at which the agent reaches 10% of the total improvement, i.e., $I(t) \\geq 0.1$.\nSimilarly, we define the start of the convergence phase as the first timestep at which the agent reaches 90% of the total improvement, i.e., $I(t) \\geq 0.9$.\nWe choose $k = 100$ as subspace dimensionality since this subspace already largely captures the gradients, and the $100$ largest eigenvectors can still be calculated reasonably efficiently with the Lanczos method.\nAppendix B ###reference_### displays results for different subspace sizes.\nWith the tuned hyperparameters from RL Baselines3 Zoo that we use for training, the PPO actor and critic usually contain around $5{,}000$ parameters, and the SAC actor and critic around $70{,}000$ and $140{,}000$ parameters (2 Q-networks \u00e0 $70{,}000$ parameters), respectively.\nHence, the subspace dimensionality is around 2% the size of the parameters for PPO and around 0.14% and 0.07% for SAC.\nWe consider a precise approximation of the true gradient computed with $10^{6}$ state-action pairs for PPO and the full replay buffer for SAC.\nWe denote this approximation as precise gradient and the low-sample gradient used during regular RL training as mini-batch gradient.\nIn a similar manner, we denote the Hessian estimated on the large dataset as precise Hessian and the estimate from $2{,}048$ samples as mini-batch Hessian.\nWe choose $2{,}048$ samples for the mini-batch Hessian since that is the amount of data that PPO with default hyperparameters collects for the policy updates.\nHence, this is a realistic setting for estimating the subspace during training.\n###figure_5### ###figure_6### ###figure_7### Figure 2 ###reference_### shows the value of the gradient subspace fraction for PPO and SAC on four different tasks, divided into the three training phases.\nNote that for an uninformed random projection, the gradient subspace fraction would be $k/D$ in expectation, i.e., the ratio of the subspace and original dimensionalities (around $0.02$ for PPO and around $0.0014$ for SAC\u2019s actor and $0.0007$ for its critic).\nThe results in Figure 2 ###reference_### show a significantly higher gradient subspace fraction, which means that the gradients computed by PPO and SAC lie to a large extent in the high-curvature subspace.\nWe observe that the fraction of the gradient in the subspace is considerably higher for the critic than for the actor.\nFurthermore, the gradient subspace fraction is also often higher for SAC\u2019s actor than for PPO\u2019s.\nThis finding is particularly significant since the subspace size corresponds to a significantly lower percentage of the parameter dimensionality for SAC than for PPO.\nWe hypothesize that the effect is caused by the off-policy nature of SAC.\nIn the off-policy setting, the training distribution for the networks changes slowly since the optimization reuses previous data.\nIn this regard, SAC is closer than PPO to the supervised learning setting, where the data distribution is fixed and for which Gur-Ari et al. (2018 ###reference_b14###) report high gradient subspace fractions.\nStill, the subspace fraction for PPO is significant, considering that the dimensionality of the subspace is merely 2% of the original parameter space.\nFurthermore, for PPO, the subspace fraction often improves after the initial phase.\nSimilarly, Gur-Ari et al. (2018 ###reference_b14###) report for the supervised learning setting that the gradient starts evolving in the subspace only after some initial steps.\nHowever, for the SAC actor, this trend appears to be reversed, with the gradient subspace fraction being highest in the initial steps.\nMoreover, the precise gradient, computed with a large number of samples, tends to lie better in the subspace than the mini-batch gradient.\nThe noise resulting from the low-sample approximation seems to perturb the gradient out of the subspace.\nHowever, since the difference is typically small, the gradient subspace is still valid for the low-sample gradient estimates used during RL training.\nLastly, even the subspace identified with the mini-batch Hessian captures the gradient to a significant extent.\nThis property is crucial since it implies that we do not need access to the precise Hessian, which is costly to compute and might require additional data.\nInstead, we can already obtain a reasonable gradient subspace from the mini-batch Hessian."
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "The high-curvature subspace changes slowly throughout the training",
69
+ "text": "So far, we have verified that the gradients of the actor and critic losses optimized by PPO and SAC lie to a large extent in the subspace spanned by the top eigenvectors of the Hessian with respect to the current parameters.\nHowever, even though there are relatively efficient methods for computing the top Hessian eigenvectors without explicitly constructing the Hessian matrix, calculating these eigenvectors at every step would be computationally expensive.\nIdeally, we would like to identify a subspace once that remains constant throughout the training.\nIn practice, however, the gradient subspace will not stay exactly the same during the training, but if it changes slowly, it is possible to reuse knowledge from earlier timesteps and update the subspace at a lower frequency.\nTo that end, we investigate condition iii) ###reference_i3### by calculating the subspace overlap, defined by Gur-Ari et al. (2018 ###reference_b14###).\nThe subspace overlap between timesteps $t_{1}$ and $t_{2}$ is defined as\n$S_{\\mathrm{overlap}}(t_{1}, t_{2}) = \\frac{1}{k}\\sum_{i=1}^{k}\\lVert P_{t_{1}} v_{i}(t_{2}) \\rVert^{2},$\nwhere $v_{i}(t_{2})$ is the $i$th largest eigenvector at timestep $t_{2}$.\n$P_{t_{1}}$ denotes the projection matrix from the full parameter space to the high-curvature subspace, identified at timestep $t_{1}$.\nSimilar to Equation 5 ###reference_###, the criterion measures how much of the original vector is preserved during the projection into the subspace.\nFor the subspace overlap, however, we use the projection matrix at timestep $t_{1}$ not to project the gradient but rather project the Hessian eigenvectors that span the high-curvature subspace identified at a later timestep $t_{2}$ of the training.\nThis criterion, thus, measures how much the gradient subspace changes between these timesteps.\nNote that we assume the eigenvectors to be normalized to one and therefore do not normalize by their length.\nGur-Ari et al. (2018 ###reference_b14###) showed in the supervised setting that the gradient subspace stabilizes only after some initial update steps.\nTherefore, we choose the timestep at which we initially identify the subspace as $t_{1} = 100{,}000$ since this is still relatively early in the training, but the gradient subspace should already have stabilized reasonably well.\nWe evaluate the subspace overlap criterion at regular intervals until $400{,}000$ timesteps after $t_{1}$.\nThis interval covers a significant portion of the training and showcases the extent to which the subspace changes under significant differences in the network parameters and the data distribution.\nFor the sake of completeness and to further highlight the influence of the data distribution on the subspace, we showcase the subspace overlap over the entire duration of the training in LABEL:app:subspace_overlap_for_the_entire_training.\nAs in Section 4.2 ###reference_###, we use $k = 100$ as subspace dimensionality and refer to Appendix B ###reference_### for the evaluation of different subspace sizes.\nThe analysis results in Figure 3 ###reference_### show that the subspace overlap reduces the further apart the two timesteps $t_{1}$ and $t_{2}$ are, but in all cases, the subspace overlap remains significantly above zero, implying that information of previous subspaces can be reused at later timesteps.\nIf the two timesteps are close to each other, the overlap is considerable.\nSimilar to the gradient subspace fraction in Section 4.2 ###reference_###, the subspace overlap is often more pronounced for the critic than the actor, particularly for SAC.\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13###"
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "In this work, we showed that findings from the SL literature about gradient subspaces transfer to the RL setting.\nDespite the continuously changing data distribution inherent to RL, the gradients of the actor and critic networks of PPO and SAC lie in a low-dimensional, slowly-changing subspace of high curvature.\nWe demonstrated that this property holds for both on-policy and off-policy learning, even though the distribution shift in the training data is particularly severe in the on-policy setting."
76
+ },
77
+ {
78
+ "section_id": "5.1",
79
+ "parent_section_id": "5",
80
+ "section_name": "High-curvature subspaces explain cliffs in reward landscapes",
81
+ "text": "Sullivan et al. (2022 ###reference_b42###) investigate visualizations of the reward landscapes around policies optimized by PPO.\nReward landscapes describe the resulting cumulative rewards over the space of policy parameters.\nThey observe empirically that these landscapes feature \u201ccliffs\u201d in the policy gradient direction.\nWhen changing the parameters in this direction, the cumulative reward increases for small steps but drops sharply beyond this increase.\nIn random directions, these cliffs do not seem to occur.\nThe results from Section 4.2 ###reference_### constitute a likely explanation of this phenomenon.\nThe cliffs that the authors describe can be interpreted as signs of large curvature in the reward landscape.\nOur analysis demonstrates that the policy gradient is prone to lie in a high-curvature direction of the policy loss.\nSullivan et al. (2022 ###reference_b42###) investigate the cumulative reward, which is different from the policy loss that we analyze in this work.\nHowever, one of the fundamental assumptions of policy gradient methods is that there is a strong link between the policy loss and the cumulative reward.\nTherefore, high curvature in the loss likely also manifests in the cumulative reward.\nThere is no such influence for random directions, so the curvature in the gradient direction is larger than in random directions."
82
+ },
83
+ {
84
+ "section_id": "5.2",
85
+ "parent_section_id": "5",
86
+ "section_name": "Potential of gradient subspaces in reinforcement learning",
87
+ "text": "Leveraging properties of gradient subspaces has proven beneficial in numerous works in SL, e.g., (Li et al., 2022a ###reference_b26###; Chen et al., 2022 ###reference_b6###; Gauch et al., 2022 ###reference_b9###; Zhou et al., 2020 ###reference_b46###; Li et al., 2022b ###reference_b27###).\nThe analyses in this paper demonstrate that similar subspaces can be found in popular policy gradient algorithms.\nIn the following, we outline two opportunities for harnessing the properties of gradient subspaces and bringing the discussed benefits to RL.\nWhile the network architectures used in reinforcement learning are often small compared to the models used in other fields of machine learning, the dimensionality of the optimization problem is still considerable.\nPopular optimizers, like Adam (Kingma & Ba, 2014 ###reference_b21###), typically rely only on gradient information, as computing the Hessian at every timestep would be computationally very demanding in high dimensions.\nHowever, in Section 4.1 ###reference_###, we have seen that the optimization problem is ill-conditioned.\nSecond-order methods, like Newton\u2019s method, are known to be well-suited for ill-conditioned problems (Nocedal & Wright, 1999 ###reference_b30###).\nWith the insights of this paper, it seems feasible to reduce the dimensionality of the optimization problems in RL algorithms by optimizing in the low-dimensional subspace instead of the original parameter space.\nThe low dimensionality of the resulting optimization problems would enable computing and inverting the Hessian matrix efficiently and make second-order optimization methods feasible.\nThe quality of the exploration actions significantly impacts the performance of RL algorithms (Amin et al., 2021 ###reference_b2###).\nMost RL algorithms explore by applying uncorrelated noise to the actions produced by the policy.\nHowever, this often leads to inefficient exploration, particularly in over-actuated systems, where correlated 
actuation is crucial (Schumacher et al., 2022 ###reference_b40###).\nA viable alternative is to apply exploration noise to the policy parameters instead (R\u00fcckstiess et al., 2010 ###reference_b37###; Plappert et al., 2018b ###reference_b34###).\nThis approach results in a more directed exploration and can be viewed as exploring strategies similar to the current policy.\nIn Section 4 ###reference_###, we observed that the gradients utilized by policy gradient methods predominantly lie within a small subspace of all parameter-space directions.\nAs typical parameter-space exploration does not consider the properties of the training gradient when inducing parameter noise, only a small fraction of it might actually push the policy parameters along directions that are relevant to the task.\nConsidering that the optimization mostly occurs in a restricted subspace, it might be advantageous to limit exploration to these directions.\nSampling parameter noise only in the high-curvature subspace constitutes one possible way of focusing exploration on informative parameter-space directions."
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Reproducibility",
93
+ "text": "We applied our analyses to proven and publicly available implementations of the RL algorithms from Stable-Baselines3 (Raffin et al., 2021 ###reference_b36###) on well-known, publicly available benchmark tasks (Brockman et al., 2016 ###reference_b5###; Plappert et al., 2018b ###reference_b34###; Tunyasuvunakool et al., 2020 ###reference_b45###).\nFurther experimental details like the learning curves of the algorithms and the fine-grained analysis results for the entire training are displayed in Appendices A ###reference_### and B ###reference_###, respectively.\nTo facilitate reproducing our results, we make our code, as well as the raw analysis data, including hyperparameter settings and model checkpoints, publicly available on the project website ###reference_es/###."
94
+ }
95
+ ],
96
+ "appendix": [
97
+ {
98
+ "section_id": "Appendix 1",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix A Learning curves",
101
+ "text": "###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33###"
102
+ },
103
+ {
104
+ "section_id": "Appendix 2",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix B Detailed analysis results for all tasks",
107
+ "text": "###figure_34### ###figure_35### ###figure_36### ###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### ###figure_52### ###figure_53### ###figure_54### ###figure_55### ###figure_56### ###figure_57### ###figure_58### ###figure_59### ###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65### ###figure_66### ###figure_67### ###figure_68### ###figure_69### ###figure_70### In Section 4.3 ###reference_###, we showed the subspace overlap for a range of timesteps.\nAppendix B ###reference_### visualizes the subspace overlap criterion with $t_{1} = 100{,}000$ for all future timesteps on tasks that require training for 3 million steps.\nWhile in practical applications of gradient subspaces, the subspace would likely be updated multiple times during training, this visualization highlights the influence of the data distribution on the subspace.\nThe plots show a small drop in the subspace overlap for SAC at 1.1 million steps.\nSince the replay buffer has a size of 1 million samples, this marks the point at which the original data from timestep $t_{1}$ is completely replaced by new data collected by updated policies.\nSince the networks\u2019 training data is sampled from the replay buffer, this drop indicates that this change in the data distribution has a negative effect on the subspace overlap.\nThe effect is generally more pronounced for the critic than the actor because the actor\u2019s subspace overlap degrades faster and is already at a relatively low level at the mark of 1.1 million timesteps.\nFor PPO, there is no such drop in the subspace overlap since the algorithm does not use experience replay and instead collects new data for every update."
108
+ }
109
+ ],
110
+ "tables": {},
111
+ "image_paths": {
112
+ "1(a)": {
113
+ "figure_path": "2401.06604v3_figure_1(a).png",
114
+ "caption": "(a) Finger-spin, actor\nFigure 1: \nThe spectrum of the Hessian eigenvalues for PPO on the tasks Finger-spin (1(a), 1(b)) and Walker2D (1(c), 1(d)).\nThe Hessian is estimated from $10^{6}$ state-action pairs.\nFor both the actor (1(a), 1(c)) and critic (1(b), 1(d)) loss, there is a small number of large eigenvalues, while the bulk of the eigenvalues is close to zero.\nThis finding shows that there is a small number of high-curvature directions in the loss landscapes, which is in accordance with results from SL.",
115
+ "url": "http://arxiv.org/html/2401.06604v3/x1.png"
116
+ },
117
+ "1(b)": {
118
+ "figure_path": "2401.06604v3_figure_1(b).png",
119
+ "caption": "(b) Finger-spin, critic\nFigure 1: \nThe spectrum of the Hessian eigenvalues for PPO on the tasks Finger-spin (1(a), 1(b)) and Walker2D (1(c), 1(d)).\nThe Hessian is estimated from $10^{6}$ state-action pairs.\nFor both the actor (1(a), 1(c)) and critic (1(b), 1(d)) loss, there is a small number of large eigenvalues, while the bulk of the eigenvalues is close to zero.\nThis finding shows that there is a small number of high-curvature directions in the loss landscapes, which is in accordance with results from SL.",
120
+ "url": "http://arxiv.org/html/2401.06604v3/x2.png"
121
+ },
122
+ "1(c)": {
123
+ "figure_path": "2401.06604v3_figure_1(c).png",
124
+ "caption": "(c) Walker2D, actor\nFigure 1: \nThe spectrum of the Hessian eigenvalues for PPO on the tasks Finger-spin (1(a), 1(b)) and Walker2D (1(c), 1(d)).\nThe Hessian is estimated from $10^{6}$ state-action pairs.\nFor both the actor (1(a), 1(c)) and critic (1(b), 1(d)) loss, there is a small number of large eigenvalues, while the bulk of the eigenvalues is close to zero.\nThis finding shows that there is a small number of high-curvature directions in the loss landscapes, which is in accordance with results from SL.",
125
+ "url": "http://arxiv.org/html/2401.06604v3/x3.png"
126
+ },
127
+ "1(d)": {
128
+ "figure_path": "2401.06604v3_figure_1(d).png",
129
+ "caption": "(d) Walker2D, critic\nFigure 1: \nThe spectrum of the Hessian eigenvalues for PPO on the tasks Finger-spin (1(a), 1(b)) and Walker2D (1(c), 1(d)).\nThe Hessian is estimated from $10^{6}$ state-action pairs.\nFor both the actor (1(a), 1(c)) and critic (1(b), 1(d)) loss, there is a small number of large eigenvalues, while the bulk of the eigenvalues is close to zero.\nThis finding shows that there is a small number of high-curvature directions in the loss landscapes, which is in accordance with results from SL.",
130
+ "url": "http://arxiv.org/html/2401.06604v3/x4.png"
131
+ },
132
+ "2(a)": {
133
+ "figure_path": "2401.06604v3_figure_2(a).png",
134
+ "caption": "(a) Actor\nFigure 2: \nThe fraction $S_{\\mathrm{frac}}$ of the gradient that lies within the high-curvature subspace spanned by the 100 largest Hessian eigenvectors.\nResults are displayed for the actor (top) and critic (bottom) of PPO and SAC on the Ant, Finger-spin, LunarLanderContinuous, and Walker2D tasks.\nThe results demonstrate that a significant fraction of the gradient lies within the high-curvature subspace, but the extent to which the gradient is contained in the subspace depends on the algorithm, task, and training phase.\nFor both algorithms, the gradient subspace fraction is significantly higher for the critic than for the actor.\nFurthermore, the quantity is also often larger for SAC\u2019s actor than for PPO\u2019s, particularly in the early stages of the training.\nEven with mini-batch estimates for the gradient and Hessian, the gradient subspace fraction is considerable.",
135
+ "url": "http://arxiv.org/html/2401.06604v3/x6.png"
136
+ },
137
+ "2(b)": {
138
+ "figure_path": "2401.06604v3_figure_2(b).png",
139
+ "caption": "(b) Critic\nFigure 2: \nThe fraction $S_{\\mathrm{frac}}$ of the gradient that lies within the high-curvature subspace spanned by the 100 largest Hessian eigenvectors.\nResults are displayed for the actor (top) and critic (bottom) of PPO and SAC on the Ant, Finger-spin, LunarLanderContinuous, and Walker2D tasks.\nThe results demonstrate that a significant fraction of the gradient lies within the high-curvature subspace, but the extent to which the gradient is contained in the subspace depends on the algorithm, task, and training phase.\nFor both algorithms, the gradient subspace fraction is significantly higher for the critic than for the actor.\nFurthermore, the quantity is also often larger for SAC\u2019s actor than for PPO\u2019s, particularly in the early stages of the training.\nEven with mini-batch estimates for the gradient and Hessian, the gradient subspace fraction is considerable.",
140
+ "url": "http://arxiv.org/html/2401.06604v3/x7.png"
141
+ },
142
+ "3(a)": {
143
+ "figure_path": "2401.06604v3_figure_3(a).png",
144
+ "caption": "(a) Ant\nFigure 3: \nEvolution of the overlap between the high-curvature subspace identified at an early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 and later timesteps for the actor and critic of PPO and SAC.\nWhile the overlap between the subspaces degrades as the networks are updated, it remains considerable even after 400,000400000400{,}000400 , 000 timesteps, indicating that the subspace remains similar, even under significant changes in the network parameters and the data distribution.\nThis finding implies that information about the gradient subspace at earlier timesteps can be reused at later timesteps.",
145
+ "url": "http://arxiv.org/html/2401.06604v3/x10.png"
146
+ },
147
+ "3(b)": {
148
+ "figure_path": "2401.06604v3_figure_3(b).png",
149
+ "caption": "(b) Finger-spin\nFigure 3: \nEvolution of the overlap between the high-curvature subspace identified at an early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 and later timesteps for the actor and critic of PPO and SAC.\nWhile the overlap between the subspaces degrades as the networks are updated, it remains considerable even after 400,000400000400{,}000400 , 000 timesteps, indicating that the subspace remains similar, even under significant changes in the network parameters and the data distribution.\nThis finding implies that information about the gradient subspace at earlier timesteps can be reused at later timesteps.",
150
+ "url": "http://arxiv.org/html/2401.06604v3/x11.png"
151
+ },
152
+ "3(c)": {
153
+ "figure_path": "2401.06604v3_figure_3(c).png",
154
+ "caption": "(c) LunarLanderCont.\nFigure 3: \nEvolution of the overlap between the high-curvature subspace identified at an early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 and later timesteps for the actor and critic of PPO and SAC.\nWhile the overlap between the subspaces degrades as the networks are updated, it remains considerable even after 400,000400000400{,}000400 , 000 timesteps, indicating that the subspace remains similar, even under significant changes in the network parameters and the data distribution.\nThis finding implies that information about the gradient subspace at earlier timesteps can be reused at later timesteps.",
155
+ "url": "http://arxiv.org/html/2401.06604v3/x12.png"
156
+ },
157
+ "3(d)": {
158
+ "figure_path": "2401.06604v3_figure_3(d).png",
159
+ "caption": "(d) Walker2D\nFigure 3: \nEvolution of the overlap between the high-curvature subspace identified at an early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 and later timesteps for the actor and critic of PPO and SAC.\nWhile the overlap between the subspaces degrades as the networks are updated, it remains considerable even after 400,000400000400{,}000400 , 000 timesteps, indicating that the subspace remains similar, even under significant changes in the network parameters and the data distribution.\nThis finding implies that information about the gradient subspace at earlier timesteps can be reused at later timesteps.",
160
+ "url": "http://arxiv.org/html/2401.06604v3/x13.png"
161
+ },
162
+ "4(a)": {
163
+ "figure_path": "2401.06604v3_figure_4(a).png",
164
+ "caption": "(a) Ant\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
165
+ "url": "http://arxiv.org/html/2401.06604v3/x16.png"
166
+ },
167
+ "4(b)": {
168
+ "figure_path": "2401.06604v3_figure_4(b).png",
169
+ "caption": "(b) Ball_in_cup\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
170
+ "url": "http://arxiv.org/html/2401.06604v3/x17.png"
171
+ },
172
+ "4(c)": {
173
+ "figure_path": "2401.06604v3_figure_4(c).png",
174
+ "caption": "(c) BipedalWalker\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
175
+ "url": "http://arxiv.org/html/2401.06604v3/x18.png"
176
+ },
177
+ "4(d)": {
178
+ "figure_path": "2401.06604v3_figure_4(d).png",
179
+ "caption": "(d) FetchReach\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
180
+ "url": "http://arxiv.org/html/2401.06604v3/x20.png"
181
+ },
182
+ "4(e)": {
183
+ "figure_path": "2401.06604v3_figure_4(e).png",
184
+ "caption": "(e) Finger-spin\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
185
+ "url": "http://arxiv.org/html/2401.06604v3/x21.png"
186
+ },
187
+ "4(f)": {
188
+ "figure_path": "2401.06604v3_figure_4(f).png",
189
+ "caption": "(f) HalfCheetah\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
190
+ "url": "http://arxiv.org/html/2401.06604v3/x22.png"
191
+ },
192
+ "4(g)": {
193
+ "figure_path": "2401.06604v3_figure_4(g).png",
194
+ "caption": "(g) Hopper\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
195
+ "url": "http://arxiv.org/html/2401.06604v3/x24.png"
196
+ },
197
+ "4(h)": {
198
+ "figure_path": "2401.06604v3_figure_4(h).png",
199
+ "caption": "(h) LunarLanderContinuous\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
200
+ "url": "http://arxiv.org/html/2401.06604v3/x25.png"
201
+ },
202
+ "4(i)": {
203
+ "figure_path": "2401.06604v3_figure_4(i).png",
204
+ "caption": "(i) Pendulum\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
205
+ "url": "http://arxiv.org/html/2401.06604v3/x26.png"
206
+ },
207
+ "4(j)": {
208
+ "figure_path": "2401.06604v3_figure_4(j).png",
209
+ "caption": "(j) Reacher\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
210
+ "url": "http://arxiv.org/html/2401.06604v3/x29.png"
211
+ },
212
+ "4(k)": {
213
+ "figure_path": "2401.06604v3_figure_4(k).png",
214
+ "caption": "(k) Swimmer\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
215
+ "url": "http://arxiv.org/html/2401.06604v3/x29.png"
216
+ },
217
+ "4(l)": {
218
+ "figure_path": "2401.06604v3_figure_4(l).png",
219
+ "caption": "(l) Walker2D\nFigure 4: \nLearning curves for PPO and SAC on tasks from OpenAI Gym (Brockman et al., 2016), Gym Robotics (Plappert et al., 2018a), and the DeepMind Control Suite (Tunyasuvunakool et al., 2020).\nWe use the algorithm implementations of Stable Baselines3 (Raffin et al., 2021) with tuned hyperparameters from RL Baselines3 Zoo (Raffin, 2020) for the Gym tasks and hyperparameters tuned by random search over 50 configurations for the Gym Robotics and DeepMind Control Suite tasks.\nResults are averaged over ten random seeds; shaded areas represent the standard deviation across seeds.",
220
+ "url": "http://arxiv.org/html/2401.06604v3/x29.png"
221
+ },
222
+ "5(a)": {
223
+ "figure_path": "2401.06604v3_figure_5(a).png",
224
+ "caption": "(a) Ant, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
225
+ "url": "http://arxiv.org/html/2401.06604v3/x42.png"
226
+ },
227
+ "5(b)": {
228
+ "figure_path": "2401.06604v3_figure_5(b).png",
229
+ "caption": "(b) Ant, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
230
+ "url": "http://arxiv.org/html/2401.06604v3/x43.png"
231
+ },
232
+ "5(c)": {
233
+ "figure_path": "2401.06604v3_figure_5(c).png",
234
+ "caption": "(c) Ball_in_cup, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
235
+ "url": "http://arxiv.org/html/2401.06604v3/x44.png"
236
+ },
237
+ "5(d)": {
238
+ "figure_path": "2401.06604v3_figure_5(d).png",
239
+ "caption": "(d) Ball_in_cup, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
240
+ "url": "http://arxiv.org/html/2401.06604v3/x45.png"
241
+ },
242
+ "5(e)": {
243
+ "figure_path": "2401.06604v3_figure_5(e).png",
244
+ "caption": "(e) Bip.Walker, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
245
+ "url": "http://arxiv.org/html/2401.06604v3/x46.png"
246
+ },
247
+ "5(f)": {
248
+ "figure_path": "2401.06604v3_figure_5(f).png",
249
+ "caption": "(f) Bip.Walker, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
250
+ "url": "http://arxiv.org/html/2401.06604v3/x47.png"
251
+ },
252
+ "5(g)": {
253
+ "figure_path": "2401.06604v3_figure_5(g).png",
254
+ "caption": "(g) FetchReach, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
255
+ "url": "http://arxiv.org/html/2401.06604v3/x48.png"
256
+ },
257
+ "5(h)": {
258
+ "figure_path": "2401.06604v3_figure_5(h).png",
259
+ "caption": "(h) FetchReach, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
260
+ "url": "http://arxiv.org/html/2401.06604v3/x49.png"
261
+ },
262
+ "5(i)": {
263
+ "figure_path": "2401.06604v3_figure_5(i).png",
264
+ "caption": "(i) Finger-spin, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
265
+ "url": "http://arxiv.org/html/2401.06604v3/x50.png"
266
+ },
267
+ "5(j)": {
268
+ "figure_path": "2401.06604v3_figure_5(j).png",
269
+ "caption": "(j) Finger-spin, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
270
+ "url": "http://arxiv.org/html/2401.06604v3/x51.png"
271
+ },
272
+ "5(k)": {
273
+ "figure_path": "2401.06604v3_figure_5(k).png",
274
+ "caption": "(k) HalfCheetah, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
275
+ "url": "http://arxiv.org/html/2401.06604v3/x52.png"
276
+ },
277
+ "5(l)": {
278
+ "figure_path": "2401.06604v3_figure_5(l).png",
279
+ "caption": "(l) HalfCheetah, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
280
+ "url": "http://arxiv.org/html/2401.06604v3/x53.png"
281
+ },
282
+ "5(m)": {
283
+ "figure_path": "2401.06604v3_figure_5(m).png",
284
+ "caption": "(m) Hopper, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
285
+ "url": "http://arxiv.org/html/2401.06604v3/x54.png"
286
+ },
287
+ "5(n)": {
288
+ "figure_path": "2401.06604v3_figure_5(n).png",
289
+ "caption": "(n) Hopper, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
290
+ "url": "http://arxiv.org/html/2401.06604v3/x55.png"
291
+ },
292
+ "5(o)": {
293
+ "figure_path": "2401.06604v3_figure_5(o).png",
294
+ "caption": "(o) LunarLander, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
295
+ "url": "http://arxiv.org/html/2401.06604v3/x56.png"
296
+ },
297
+ "5(p)": {
298
+ "figure_path": "2401.06604v3_figure_5(p).png",
299
+ "caption": "(p) LunarLander, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
300
+ "url": "http://arxiv.org/html/2401.06604v3/x57.png"
301
+ },
302
+ "5(q)": {
303
+ "figure_path": "2401.06604v3_figure_5(q).png",
304
+ "caption": "(q) Pendulum, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
305
+ "url": "http://arxiv.org/html/2401.06604v3/x58.png"
306
+ },
307
+ "5(r)": {
308
+ "figure_path": "2401.06604v3_figure_5(r).png",
309
+ "caption": "(r) Pendulum, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
310
+ "url": "http://arxiv.org/html/2401.06604v3/x59.png"
311
+ },
312
+ "5(s)": {
313
+ "figure_path": "2401.06604v3_figure_5(s).png",
314
+ "caption": "(s) Reacher, actor\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
315
+ "url": "http://arxiv.org/html/2401.06604v3/x60.png"
316
+ },
317
+ "5(t)": {
318
+ "figure_path": "2401.06604v3_figure_5(t).png",
319
+ "caption": "(t) Reacher, critic\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
320
+ "url": "http://arxiv.org/html/2401.06604v3/x61.png"
321
+ },
322
+ "5(u)": {
323
+ "figure_path": "2401.06604v3_figure_5(u).png",
324
+ "caption": "(a) Ant\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
325
+ "url": "http://arxiv.org/html/2401.06604v3/x65.png"
326
+ },
327
+ "5(v)": {
328
+ "figure_path": "2401.06604v3_figure_5(v).png",
329
+ "caption": "(b) HalfCheetah\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
330
+ "url": "http://arxiv.org/html/2401.06604v3/x66.png"
331
+ },
332
+ "5(w)": {
333
+ "figure_path": "2401.06604v3_figure_5(w).png",
334
+ "caption": "(c) Swimmer\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
335
+ "url": "http://arxiv.org/html/2401.06604v3/x67.png"
336
+ },
337
+ "5(x)": {
338
+ "figure_path": "2401.06604v3_figure_5(x).png",
339
+ "caption": "(d) Walker2D\nFigure 28: \nEvolution of the subspace overlap between the early timestep t1=100,000subscript\ud835\udc611100000t_{1}=100{,}000italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 100 , 000 (marked by the dashed gray line) and all future timesteps of the training.\nResults for the actor and critic of PPO and SAC.\nFor SAC, a small drop in the subspace overlap is visible in all plots at around 1.1 million timesteps.\nThis marks the timestep at which the data in the replay buffer is replaced completely by new data, indicating that the data distribution affects the subspace overlap.",
340
+ "url": "http://arxiv.org/html/2401.06604v3/x68.png"
341
+ }
342
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Intrinsic dimensionality explains the effectiveness of language model fine-tuning.",
+ "author": "Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer.",
+ "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 7319\u20137328, 2021.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "A survey of exploration methods in reinforcement learning.",
+ "author": "Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, and Doina Precup.",
+ "venue": "arXiv preprint arXiv:2109.00157, 2021.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Unified data collection for visual-inertial calibration via deep reinforcement learning.",
+ "author": "Yunke Ao, Le Chen, Florian Tschopp, Michel Breyer, Roland Siegwart, and Andrei Cramariuc.",
+ "venue": "In International Conference on Robotics and Automation, pp. 1646\u20131652. IEEE, 2022.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Is high variance unavoidable in RL? A case study in continuous control.",
+ "author": "Johan Bjorck, Carla P Gomes, and Kilian Q Weinberger.",
+ "venue": "In International Conference on Learning Representations, 2022.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "OpenAI Gym.",
+ "author": "Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba.",
+ "venue": "arXiv preprint arXiv:1606.01540, 2016.",
+ "url": null
+ }
+ },
+ {
386
+ "6": {
387
+ "title": "Scalable learning to optimize: A learned optimizer can train big models.",
388
+ "author": "Xuxi Chen, Tianlong Chen, Yu Cheng, Weizhu Chen, Ahmed Awadallah, and Zhangyang Wang.",
389
+ "venue": "In European Conference on Computer Vision, pp. 389\u2013405. Springer, 2022.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "7": {
395
+ "title": "Extreme parkour with legged robots.",
396
+ "author": "Xuxin Cheng, Kexin Shi, Ananye Agarwal, and Deepak Pathak.",
397
+ "venue": "arXiv preprint arXiv:2309.14341, 2023.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "8": {
403
+ "title": "Motion planning among dynamic, decision-making agents with deep reinforcement learning.",
404
+ "author": "Michael Everett, Yu Fan Chen, and Jonathan P How.",
405
+ "venue": "In International Conference on Intelligent Robots and Systems, pp. 3052\u20133059. IEEE, 2018.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "9": {
411
+ "title": "Few-shot learning by dimensionality reduction in gradient space.",
412
+ "author": "Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, et al.",
413
+ "venue": "In Conference on Lifelong Learning Agents, pp. 1043\u20131064. PMLR, 2022.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "10": {
419
+ "title": "Learning a subspace of policies for online adaptation in reinforcement learning.",
420
+ "author": "Jean-Baptiste Gaya, Laure Soulier, and Ludovic Denoyer.",
421
+ "venue": "In International Conference of Learning Representations, 2022.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "11": {
427
+ "title": "Building a subspace of policies for scalable continual learning.",
428
+ "author": "Jean-Baptiste Gaya, Thang Doan, Lucas Caccia, Laure Soulier, Ludovic Denoyer, and Roberta Raileanu.",
429
+ "venue": "In International Conference of Learning Representations, 2023.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "12": {
435
+ "title": "Improving neural network training in low dimensional random bases.",
436
+ "author": "Frithjof Gressmann, Zach Eaton-Rosen, and Carlo Luschi.",
437
+ "venue": "Advances in Neural Information Processing Systems, 33:12140\u201312150, 2020.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "13": {
443
+ "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates.",
444
+ "author": "Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine.",
445
+ "venue": "In International Conference on Robotics and Automation, pp. 3389\u20133396. IEEE, 2017.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "14": {
451
+ "title": "Gradient descent happens in a tiny subspace.",
452
+ "author": "Guy Gur-Ari, Daniel A Roberts, and Ethan Dyer.",
453
+ "venue": "arXiv preprint arXiv:1812.04754, 2018.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "15": {
459
+ "title": "Soft Actor-Critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor.",
460
+ "author": "Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine.",
461
+ "venue": "In International Conference on Machine Learning, pp. 1861\u20131870. PMLR, 2018.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "16": {
467
+ "title": "LoRA: Low-rank adaptation of large language models.",
468
+ "author": "Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al.",
469
+ "venue": "In International Conference on Learning Representations, 2021.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "17": {
475
+ "title": "A closer look at deep policy gradients.",
476
+ "author": "Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry.",
477
+ "venue": "In International Conference on Learning Representations, 2020.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "18": {
483
+ "title": "Approximately optimal approximate reinforcement learning.",
484
+ "author": "Sham Kakade and John Langford.",
485
+ "venue": "In International Conference on Machine Learning, pp. 267\u2013274, 2002.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "19": {
491
+ "title": "QT-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation.",
492
+ "author": "Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al.",
493
+ "venue": "arXiv preprint arXiv:1806.10293, 2018.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "20": {
499
+ "title": "Champion-level drone racing using deep reinforcement learning.",
500
+ "author": "Elia Kaufmann, Leonard Bauersfeld, Antonio Loquercio, Matthias M\u00fcller, Vladlen Koltun, and Davide Scaramuzza.",
501
+ "venue": "Nature, 620:982\u2013987, 2023.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "21": {
507
+ "title": "Adam: A method for stochastic optimization.",
508
+ "author": "Diederik P Kingma and Jimmy Ba.",
509
+ "venue": "arXiv preprint arXiv:1412.6980, 2014.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "22": {
515
+ "title": "How many degrees of freedom do we need to train deep networks: A loss landscape perspective.",
516
+ "author": "Brett W Larsen, Stanislav Fort, Nic Becker, and Surya Ganguli.",
517
+ "venue": "In International Conference on Learning Representations, 2021.",
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "23": {
523
+ "title": "A novel stochastic gradient descent algorithm for learning principal subspaces.",
524
+ "author": "Charline Le Lan, Joshua Greaves, Jesse Farebrother, Mark Rowland, Fabian Pedregosa, Rishabh Agarwal, and Marc G Bellemare.",
525
+ "venue": "In International Conference on Artificial Intelligence and Statistics, pp. 1703\u20131718. PMLR, 2023.",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "24": {
531
+ "title": "ARPACK users\u2019 guide: Solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods.",
532
+ "author": "Richard B Lehoucq, Danny C Sorensen, and Chao Yang.",
533
+ "venue": "SIAM, 1998.",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "25": {
539
+ "title": "Measuring the intrinsic dimension of objective landscapes.",
540
+ "author": "Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski.",
541
+ "venue": "In International Conference on Learning Representations, 2018.",
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "26": {
547
+ "title": "Low dimensional trajectory hypothesis is true: DNNs can be trained in tiny subspaces.",
548
+ "author": "Tao Li, Lei Tan, Zhehao Huang, Qinghua Tao, Yipeng Liu, and Xiaolin Huang.",
549
+ "venue": "Transactions on Pattern Analysis and Machine Intelligence, 45(3):3411\u20133420, 2022a.",
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "27": {
555
+ "title": "Subspace adversarial training.",
556
+ "author": "Tao Li, Yingwen Wu, Sizhe Chen, Kun Fang, and Xiaolin Huang.",
557
+ "venue": "In IEEE Conference on Computer Vision and Pattern Recognition, pp. 13409\u201313418, 2022b.",
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "28": {
563
+ "title": "Guided evolutionary strategies: Augmenting random search with surrogate gradients.",
564
+ "author": "Niru Maheswaranathan, Luke Metz, George Tucker, Dami Choi, and Jascha Sohl-Dickstein.",
565
+ "venue": "In International Conference on Machine Learning, pp. 4264\u20134273. PMLR, 2019.",
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "29": {
571
+ "title": "Playing Atari with deep reinforcement learning.",
572
+ "author": "Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller.",
573
+ "venue": "arXiv preprint arXiv:1312.5602, 2013.",
574
+ "url": null
575
+ }
576
+ },
577
+ {
578
+ "30": {
579
+ "title": "Numerical optimization.",
580
+ "author": "Jorge Nocedal and Stephen J Wright.",
581
+ "venue": "Springer, 1999.",
582
+ "url": null
583
+ }
584
+ },
585
+ {
586
+ "31": {
587
+ "title": "Fast efficient hyperparameter tuning for policy gradient methods.",
588
+ "author": "Supratik Paul, Vitaly Kurin, and Shimon Whiteson.",
589
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
590
+ "url": null
591
+ }
592
+ },
593
+ {
594
+ "32": {
595
+ "title": "Reinforcement learning of motor skills with policy gradients.",
596
+ "author": "Jan Peters and Stefan Schaal.",
597
+ "venue": "Neural networks, 21(4):682\u2013697, 2008.",
598
+ "url": null
599
+ }
600
+ },
601
+ {
602
+ "33": {
603
+ "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research.",
604
+ "author": "Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, et al.",
605
+ "venue": "arXiv preprint arXiv:1802.09464, 2018a.",
606
+ "url": null
607
+ }
608
+ },
609
+ {
610
+ "34": {
611
+ "title": "Parameter space noise for exploration.",
612
+ "author": "Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz.",
613
+ "venue": "In International Conference on Learning Representations, 2018b.",
614
+ "url": null
615
+ }
616
+ },
617
+ {
618
+ "35": {
619
+ "title": "RL Baselines3 Zoo.",
620
+ "author": "Antonin Raffin.",
621
+ "venue": "https://github.com/DLR-RM/rl-baselines3-zoo, 2020.",
622
+ "url": null
623
+ }
624
+ },
625
+ {
626
+ "36": {
627
+ "title": "Stable-Baselines3: Reliable reinforcement learning implementations.",
628
+ "author": "Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann.",
629
+ "venue": "The Journal of Machine Learning Research, 22(1):12348\u201312355, 2021.",
630
+ "url": null
631
+ }
632
+ },
633
+ {
634
+ "37": {
635
+ "title": "Exploring parameter space in reinforcement learning.",
636
+ "author": "Thomas R\u00fcckstiess, Frank Sehnke, Tom Schaul, Daan Wierstra, Yi Sun, and J\u00fcrgen Schmidhuber.",
637
+ "venue": "Paladyn, Journal of Behavioral Robotics, 1:14\u201324, 2010.",
638
+ "url": null
639
+ }
640
+ },
641
+ {
642
+ "38": {
643
+ "title": "Evolution strategies as a scalable alternative to reinforcement learning.",
644
+ "author": "Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever.",
645
+ "venue": "arXiv preprint arXiv:1703.03864, 2017.",
646
+ "url": null
647
+ }
648
+ },
649
+ {
650
+ "39": {
651
+ "title": "Proximal policy optimization algorithms.",
652
+ "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.",
653
+ "venue": "arXiv preprint arXiv:1707.06347, 2017.",
654
+ "url": null
655
+ }
656
+ },
657
+ {
658
+ "40": {
659
+ "title": "DEP-RL: Embodied exploration for reinforcement learning in overactuated and musculoskeletal systems.",
660
+ "author": "Pierre Schumacher, Daniel Haeufle, Dieter B\u00fcchler, Syn Schmitt, and Georg Martius.",
661
+ "venue": "In International Conference on Learning Representations, 2022.",
662
+ "url": null
663
+ }
664
+ },
665
+ {
666
+ "41": {
667
+ "title": "Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods.",
668
+ "author": "Jascha Sohl-Dickstein, Ben Poole, and Surya Ganguli.",
669
+ "venue": "In International Conference on Machine Learning, pp. 604\u2013612. PMLR, 2014.",
670
+ "url": null
671
+ }
672
+ },
673
+ {
674
+ "42": {
675
+ "title": "Cliff diving: Exploring reward surfaces in reinforcement learning environments.",
676
+ "author": "Ryan Sullivan, Jordan K Terry, Benjamin Black, and John P Dickerson.",
677
+ "venue": "In International Conference on Machine Learning, pp. 20744\u201320776. PMLR, 2022.",
678
+ "url": null
679
+ }
680
+ },
681
+ {
682
+ "43": {
683
+ "title": "The ladder in chaos: A simple and effective improvement to general DRL algorithms by policy path trimming and boosting.",
684
+ "author": "Hongyao Tang, Min Zhang, and Jianye Hao.",
685
+ "venue": "arXiv preprint arXiv:2303.01391, 2023.",
686
+ "url": null
687
+ }
688
+ },
689
+ {
690
+ "44": {
691
+ "title": "Quasi-Newton\u2019s method in the class gradient defined high-curvature subspace.",
692
+ "author": "Mark Tuddenham, Adam Pr\u00fcgel-Bennett, and Jonathan Hare.",
693
+ "venue": "arXiv preprint arXiv:2012.01938, 2020.",
694
+ "url": null
695
+ }
696
+ },
697
+ {
698
+ "45": {
699
+ "title": "dm_control: Software and tasks for continuous control.",
700
+ "author": "Saran Tunyasuvunakool, Alistair Muldal, Yotam Doron, Siqi Liu, Steven Bohez, Josh Merel, Tom Erez, Timothy Lillicrap, Nicolas Heess, and Yuval Tassa.",
701
+ "venue": "Software Impacts, 6:100022, 2020.",
702
+ "url": null
703
+ }
704
+ },
705
+ {
706
+ "46": {
707
+ "title": "Bypassing the ambient dimension: Private SGD with gradient subspace identification.",
708
+ "author": "Yingxue Zhou, Steven Wu, and Arindam Banerjee.",
709
+ "venue": "In International Conference on Learning Representations, 2020.",
710
+ "url": null
711
+ }
712
+ }
713
+ ],
714
+ "url": "http://arxiv.org/html/2401.06604v3"
715
+ }
20240318/2401.10253v2.json ADDED
@@ -0,0 +1,304 @@
1
+ {
2
+ "title": "Hybrid-Task Meta-Learning: A Graph Neural Network Approach for Scalable and Transferable Bandwidth Allocation",
3
+ "abstract": "In this paper, we develop a deep learning-based bandwidth allocation policy that is: 1) scalable with the number of users and 2) transferable to different communication scenarios, such as non-stationary wireless channels, different quality-of-service (QoS) requirements, and dynamically available resources. To support scalability, the bandwidth allocation policy is represented by a graph neural network (GNN), with which the number of training parameters does not change with the number of users. To enable the generalization of the GNN, we develop a hybrid-task meta-learning (HML) algorithm that trains the initial parameters of the GNN with different communication scenarios during meta-training. Next, during meta-testing, a few samples are used to fine-tune the GNN with unseen communication scenarios. Simulation results demonstrate that our HML approach can improve the initial performance by , and sampling efficiency by , compared with existing benchmarks. After fine-tuning, our near-optimal GNN-based policy can achieve close to the same reward with much lower inference complexity compared to the optimal policy obtained using iterative optimization.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Throughout the rapid evolution of wireless communication systems, the spectral efficiency, which is the amount of information that can be transmitted over a given bandwidth while maintaining a certain quality of service (QoS) level, remains one of the most critical performance metrics for future sixth-generation (6G) wireless communications [1 ###reference_b1###, 2 ###reference_b2###]. To maximize spectral efficiency, low-complexity bandwidth allocation solutions are critical for real-time decision-making within each transmission time interval (TTI), which could be shorter than one millisecond in current fifth-generation (5G) wireless communications. Furthermore, the number of users requesting bandwidth in each TTI is stochastic [3 ###reference_b3###, 4 ###reference_b4###], each user may have different QoS requirements [6 ###reference_b6###, 5 ###reference_b5###, 7 ###reference_b7###], and wireless channels are non-stationary [8 ###reference_b8###, 9 ###reference_b9###], making it difficult to develop a low-complexity bandwidth allocation policy that is scalable with the number of users and can satisfy a diverse range of communication scenarios.\nExisting iterative optimization algorithms can obtain optimal bandwidth allocation policies, but their computational complexity is generally too high to be implemented in real time [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###]. To reduce the computational complexity, deep learning is a promising approach for 6G communications [14 ###reference_b14###, 13 ###reference_b13###]. The idea is to train a deep neural network that maps the network status to the optimal decision. After training, the deep neural network can be used in communication systems for real-time decision-making, referred to as inference [15 ###reference_b15###]. 
Although deep learning has much lower inference complexity compared with iterative optimization algorithms, existing deep learning solutions using fully connected neural networks (FNNs) are not scalable to different numbers of users in wireless networks [16 ###reference_b16###]. This is because the number of training parameters of an FNN depends on the dimensions of the input and output, which change with the number of users. Thus, a well-trained FNN is not applicable in wireless networks with stochastic user requests. In contrast to FNNs, graph neural networks (GNNs) have scalable numbers of training parameters that adapt to the number of users [17 ###reference_b17###], making them highly suitable for developing scalable deep learning-based resource allocation solutions for wireless networks [19 ###reference_b19###, 18 ###reference_b18###]. Nevertheless, improving the generalization ability of GNNs in wireless networks with diverse QoS requirements remains an open problem.\nA key 5G application that requires flexible resource allocation solutions is network slicing, where resources from a shared physical infrastructure are partitioned into distinct network slices supporting diverse QoS requirements, such as data rate [21 ###reference_b21###, 20 ###reference_b20###], latency [22 ###reference_b22###, 23 ###reference_b23###], and security [25 ###reference_b25###, 26 ###reference_b26###, 24 ###reference_b24###, 27 ###reference_b27###], in both long and short coding blocklength regimes [30 ###reference_b30###, 28 ###reference_b28###, 29 ###reference_b29###]. To reserve resources for a single slice, the authors of [31 ###reference_b31###] proposed to compute the weights of different slices based on the corresponding QoS requirements and the number of service requests. With this approach, the amount of resources reserved for each slice is stochastic. 
Meanwhile, since the wireless channels are non-stationary, the reserved resources and the wireless channels observed in the training stage could differ from those in the testing stage [32 ###reference_b32###, 33 ###reference_b33###]. As such, the mismatch between training and testing data samples remains a crucial bottleneck for implementing efficient learning-based policies in practical wireless networks.\nRecent works have proposed to reduce the online training time by transfer learning, which involves offline pre-training and online fine-tuning [10 ###reference_b10###]. This method effectively reuses previously well-trained neural network features and significantly improves the sample efficiency. To further improve the online training efficiency for unseen tasks, meta-learning has been proposed [34 ###reference_b34###, 37 ###reference_b37###, 35 ###reference_b35###, 36 ###reference_b36###]. One meta-learning algorithm, model-agnostic meta-learning (MAML), has been applied to solve policy mismatch issues caused by varying user requests and non-stationary wireless channels [39 ###reference_b39###, 38 ###reference_b38###, 8 ###reference_b8###, 9 ###reference_b9###]. While the aforementioned works have highlighted the generalization ability of meta-learning for non-stationary wireless resource allocation, none have addressed the impact of diverse QoS requirements in different communication scenarios.\nIn this paper, we put forth a low-complexity bandwidth allocation framework by designing a GNN that is scalable with the number of users and applying meta-learning to generalize the GNN to different communication scenarios.\nThe main contributions are summarized as follows:\nOur proposed GNN is designed to handle three types of QoS requirements (data rate, latency, and security) in each of the long and short coding blocklength regimes, i.e., six scenarios in total. 
This generalization is achieved by using feature engineering to translate the channel state information (CSI) and the customized QoS requirements of individual users into the minimum required bandwidth.\nBased on the extracted feature of minimum required bandwidth, we design a GNN-based bandwidth allocation policy that is scalable to the number of users. To train the GNN, we apply an unsupervised learning method to maximize the sum reward of the users with different QoS requirements in a network-slicing architecture.\nThe optimal bandwidth allocation policies are derived using an iterative optimization algorithm to establish the performance limit of the GNN-based policy in terms of the sum reward. By analyzing the computational complexity, we show that the GNN has a much lower inference complexity compared with the optimal iterative optimization algorithm.\nFinally, we develop our generalized hybrid-task meta-learning (HML) algorithm that is transferable to different communication scenarios by using meta-training to train the initial parameters of the GNN. We note that only a few samples are required to fine-tune the parameters of the GNN in meta-testing, which validates that our GNN-based policy initialized by HML can be efficiently transferred to previously unseen communication scenarios. Simulation results show that our GNN-based policy achieves near-optimal performance and that HML significantly outperforms the three considered benchmarks of MAML, MTL transfer (multi-task learning based transfer learning), and random initialization.\nIn our simulations, the gap between the sum reward achieved by the GNN-based policy and that of the optimal bandwidth allocation policy obtained from the iterative optimization algorithm is less than . HML also improves the initial performance by up to and sample efficiency by up to compared with the MAML benchmark. We also show that the performance gains of HML are even higher when compared to the other two benchmarks."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Works",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Deep Learning for Resource Allocation in Wireless Communications",
21
+ "text": "Applying deep learning for resource allocation in wireless networks has been widely studied in the existing literature [15 ###reference_b15###, 16 ###reference_b16###]. In [15 ###reference_b15###], the authors showed that learning-based algorithms could obtain near-optimal solutions, and that the computational complexity in inference is low. In [16 ###reference_b16###], the authors proposed an FNN-based unsupervised learning algorithm to optimize the bandwidth allocation policy. More recently, because FNNs are not scalable to the number of users, GNNs have been applied to wireless network optimization [19 ###reference_b19###, 18 ###reference_b18###]. In [18 ###reference_b18###], the authors designed a GNN, which is scalable to the number of users in a wireless network, to minimize the summation of the queuing delay violation probability and the packet loss probability. In [19 ###reference_b19###], the authors developed scalable GNN-based learning methods to solve radio resource management problems.\n###table_1###"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Generalization of Deep Learning Policies in Non-Stationary Wireless Networks",
27
+ "text": "In wireless networks, the user requests, wireless channels, and available resources for each type of service can be non-stationary. Table I ###reference_### summarizes some QoS requirements considered in the related works. For example, data rate, latency, and security have been investigated in [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###]. These papers mainly focus on scenarios with long channel coding blocklengths, where the achievable rate of a wireless link can be approximated by the Shannon capacity. In 5G, the coding blocklength can be short, and the Shannon capacity is no longer applicable. As such, the authors of [27 ###reference_b27###, 28 ###reference_b28###] established how to optimize wireless communication systems using the achievable rate in the short blocklength regime [29 ###reference_b29###]. Meanwhile, different services may co-exist in one network, and the authors of [30 ###reference_b30###, 10 ###reference_b10###] considered different QoS requirements in both long and short blocklength regimes. To support diverse QoS requirements in network slicing, the authors of [31 ###reference_b31###] proposed to reserve bandwidth for different slices based on the number of users and the required QoS.\nFurther considering that the number of requests, the reserved resources, and the wireless channels are dynamic, improving the generalization ability of deep learning policies has attracted significant research interest in recent years. One approach to address this challenge is to carefully initialize the neural network and fine-tune it online. The authors of [10 ###reference_b10###] applied transfer learning to fine-tune the parameters of deep neural networks that are trained offline in dynamic wireless networks. 
To further improve the sample efficiency in an unseen communication scenario, meta-learning has been adopted in [8 ###reference_b8###, 9 ###reference_b9###, 38 ###reference_b38###, 39 ###reference_b39###], where the hyper-parameters of a deep neural network, such as the initial parameters, are updated according to a set of communication scenarios in meta-training. In [38 ###reference_b38###], meta-learning was applied to optimize computing resource allocation policies in mobile edge computing networks to fit both time-varying wireless channels and different requests of computing tasks. In [39 ###reference_b39###], meta-learning was applied in virtual reality to quickly adapt to the user movement patterns changing over time. To improve the training efficiency in non-stationary vehicle networks, the authors in [8 ###reference_b8###] proposed optimizing the beamforming using meta-learning. In [9 ###reference_b9###], the authors combined meta-learning and support vector regression to extract the features for beamforming optimization, further improving training efficiency over non-stationary channels."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III System Model and Problem Formulation",
33
+ "text": "We consider an uplink orthogonal-frequency-division-multiple-access communication system with network slicing where users are requesting different types of services from one base station (BS). The BS first reserves bandwidth for each type of service according to the QoS requirement and the number of users. Then, it allocates bandwidth to different users within each slice. The resource reservation for different slices has been extensively studied in the existing literature, so we will focus on developing bandwidth allocation policies for individual slices with different numbers of users, non-stationary wireless channels, and dynamic available bandwidth."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Different QoS in Infinite and Short Blocklength Regimes",
39
+ "text": "To investigate the generalization ability of our proposed bandwidth allocation policy, we consider both long and short blocklength regimes with three types of QoS requirements, i.e., data rate, queuing delay, and security. Thus, there are six scenarios in total. We denote the reward of the -th user by\nwhere superscripts represent data rate, effective capacity with queuing delay constraint, and secrecy rate, respectively, whilst the superscripts represent the scenarios in the infinite (long) and finite (short) blocklength regimes, respectively."
40
+ },
41
+ {
42
+ "section_id": "3.1.1",
43
+ "parent_section_id": "3.1",
44
+ "section_name": "III-A1 Data Rate Requirement",
45
+ "text": "When the blocklength is long, the data rate reward of the -th user can be expressed as\nwhere is the bandwidth allocated to the -th user, is the transmit power of the -th user, is the single-sided noise spectral density, and is the channel gain, where and represent the large-scale and small-scale channel gains between the -th user and the BS, respectively.\nWhen the blocklength is short, decoding errors cannot be neglected. As such, the data rate reward of the -th user can be approximated by [29 ###reference_b29###]\nwhere is the channel dispersion that measures the stochastic variability of the channel related to a deterministic channel with the same capacity, is the blocklength, and is the transmission duration of each coding block. The function is the inverse of the Gaussian Q-function, and is the decoding error probability."
46
+ },
47
+ {
48
+ "section_id": "3.1.2",
49
+ "parent_section_id": "3.1",
50
+ "section_name": "III-A2 Latency Requirement",
51
+ "text": "When considering latency constraints due to queueing delays, the effective capacity is applied to characterize the statistical QoS requirement in wireless communications, and is expressed as [28 ###reference_b28###]\nwhere is the channel coherence time, is the QoS exponent for queuing delay, denotes the expectation, and is the data rate in (2 ###reference_###) or (3 ###reference_###).\nWe note that is determined by the maximum tolerable delay bound violation probability, , the packet arrival rate, , and the threshold of queuing delay, ."
52
+ },
53
+ {
54
+ "section_id": "3.1.3",
55
+ "parent_section_id": "3.1",
56
+ "section_name": "III-A3 Security Requirement",
57
+ "text": "To formulate the wireless security requirement, we consider that there is an eavesdropper that attempts to wiretap the information transmitted by each user.\nIn the long blocklength regime, the secrecy rate of the -th user can be expressed as [24 ###reference_b24###]\nwhere , and is the data rate of the wiretapped channel from the -th user to the eavesdropper. The channel gain of the wiretapped channel is denoted by , where and represent the large-scale and small-scale channel gains between the -th user and the eavesdropper, respectively.\nIn the short blocklength regime, the achievable secrecy rate of the -th user can be approximated as [27 ###reference_b27###],\nwhere , and represents the information leakage, which describes the statistical independence between the transmitted confidential message and the eavesdropper\u2019s observation, and is measured by the total variation distance [27 ###reference_b27###]."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Bandwidth Reservation for Different Slices",
+ "text": "We assume that there can be multiple bandwidth reservation policies for different slices in network slicing. Given the total bandwidth of the BS, , the bandwidth reserved for the -th slice is given by\nwhere is the number of users in the -th slice, is the QoS class identifier (QCI) of the -th user in the -th slice, and is the network function for bandwidth reservation in network slicing. Since the sum of the bandwidth reserved for all the slices equals the total bandwidth of the BS, thus\nwhere is the number of slices. Inspired by [31 ###reference_b31###], the bandwidth reserved for each slice depends on the number of users in this slice and the QCI of these users, e.g.,"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Problem Formulation",
+ "text": "To maximize the sum reward subject to the QoS requirements in each slice, we formulate the bandwidth allocation problem as follows,\nwhere is the bandwidth allocated to the users, and is the minimum threshold of the QoS required by the users. Thus, constraint (10c ###reference_###) guarantees the QoS of all the users."
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "III-D Analysis of Problem Feasibility",
+ "text": "Given the available bandwidth constraint in (10a ###reference_###) and the QoS constraint in (10c ###reference_###), problem (10 ###reference_###) will be infeasible when some of the users in this slice have weak channels. We denote the minimum bandwidth required to meet constraint (10c ###reference_###) by . If some users experience deep fading, leading to , then problem (10 ###reference_###) is infeasible. In this case, the BS will only schedule the users with sufficiently strong channels. Alternatively, to maximize the number of scheduled users in problem (10 ###reference_###), we consider that the BS schedules the users with the smallest bandwidth requirement. Denote the set of scheduled users by . Then, for any and , we have .\nAfter user scheduling, problem (10 ###reference_###) can be reformulated as follows,\nIn the following, we investigate how to find the optimal solution to problem (11 ###reference_###)."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Hybrid-Task Meta-Learning for GNN-based Scalable Bandwidth Allocation",
+ "text": "In this section, we first illustrate how to obtain the optimal bandwidth allocation by using an iterative optimization algorithm. Next, we utilize feature engineering techniques to reformulate the problem, and represent the bandwidth allocation policy by a GNN. To generalize the GNN, the feature of required minimum bandwidth that can be used to represent different QoS requirements is used as the GNN\u2019s input. Then, we develop a meta-learning approach to train the GNN. The goal is to obtain a policy that is scalable to the number of users and can generalize well in diverse communication scenarios with different channel distributions, QoS requirements, and available bandwidth."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Optimal Bandwidth Allocation by Iterative Algorithm",
+ "text": "Inspired by the optimization algorithm for resource allocation in [10 ###reference_b10###], we propose an iterative optimization algorithm for solving our problems. We denote the bandwidth of each resource block by . At the beginning of the iteration, the bandwidth allocated to each user is . In each iteration, we calculate the incremental reward of each user when an extra resource block is allocated to it, denoted by . Finally, the resource block is allocated to the user with the highest . The details of the algorithm can be found in Algorithm 1 ###reference_###. The optimality of the algorithm depends on the properties of the problems. For problem (11 ###reference_###), if it is a convex problem, then Algorithm 1 ###reference_### can obtain the optimal solution [10 ###reference_b10###]. To validate whether problem (11 ###reference_###) is convex or not, we only need to validate whether is concave or not. In the long blocklength regime, we can prove that the secrecy rate is concave in bandwidth. See proof in Appendix A ###reference_###. Since\nShannon\u2019s capacity is a special case of the secrecy rate when the wiretapped channel is in deep fading, thus Shannon\u2019s capacity is also concave in bandwidth. In addition,\nthe authors of [41 ###reference_b41###] proved that the effective capacity is concave in bandwidth. Therefore, Algorithm 1 ###reference_### can obtain the optimal solution in the long blocklength regime. In the short-blocklength regime, is not concave when . Nevertheless, based on the results in [42 ###reference_b42###], the optimal bandwidth can be obtained in a region , where is concave in bandwidth. By searching for the optimal bandwidth in , Algorithm 1 ###reference_### can obtain the optimal solution in the short blocklength regime."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Feature Engineering and Problem Reformulation",
+ "text": "To obtain a policy that can generalize well in different scenarios, we propose to use feature engineering technology to represent the channels and QoS requirements with more general features. Specifically, we first normalize the bandwidth allocation policy by the bandwidth reserved for this slice. The normalized bandwidth allocated to the -th user, , is given by . Then, the normalized minimum bandwidth required by the scheduled users is denoted by . We define the surplus bandwidth as , and further denote the normalized surplus bandwidth by .\nWe note that bandwidth allocation policy maps channels and constraints to the bandwidth allocated to each user. After scheduling and normalization, the features of the channel state information and constraints (11a ###reference_###) and (11b ###reference_###) can be represented by . Therefore, the bandwidth allocation policy can be reformulated as the mapping from and to . We denote this function by\nwhere and . Given the bandwidth reserved for this slice, the achievable rates of the scheduled users can be expressed as\nwhere ,\n,\nand .\nThen, we can reformulate problem (11 ###reference_###) as a functional optimization problem,\nIn the rest part of this section, we will find the optimal solution to problem (14 ###reference_###)."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C Proposed GNN",
+ "text": "In this subsection, we propose a GNN-based unsupervised learning algorithm to obtain a scalable bandwidth allocation policy.\n###figure_1### Each scheduled user is a vertex in the GNN. We use a fully connected neural network (FNN) to obtain the embedding of each vertex, denoted by . The inputs of each FNN include and . We use to denote the training parameters of the FNN. In the -th epoch, the message passing function is given by . Since the vertices are homogeneous, the training parameters of all the FNNs are the same.\nIn the aggregation step, we first aggregate the embeddings of all the scheduled users by using a concatenation function, , followed by a Softmax function, , which serves as the activation function in the aggregation. The output after aggregation is denoted by .\nThe GNN\u2019s output of each vertex is updated by a readout function given by, . Since is obtained from the function, the summation of its elements is one. From the readout function, all the surplus bandwidth is allocated to the users, and constraints (14a ###reference_###) and (14b ###reference_0###) can be satisfied.\nTo compute the embedding of each vertex, we need to compute the output of the FNN in Fig. 1 ###reference_###. We denote the number of layers of the FNN by and the number of neurons in the -th layer by . Then, the number of multiplications required to compute the output of the -th layer is and the total number of multiplications for computing the embedding is [10 ###reference_b10###]. After obtaining the embeddings of users, the number of multiplications required by and is . Therefore, the inference complexity of the GNN-based bandwidth allocation policy is\nIn each iteration of the optimization algorithm, we assign a small portion of the normalized surplus bandwidth, denoted by , to a user that can maximize the objective function. The algorithm needs to compute the objective function times and find the best user. 
We denote the complexity for computing the objective function by , then the complexity of the iterative algorithm is given by\nwhere represents the number of iterations used in the iterative algorithm.\nTo obtain bandwidth allocation in each transmission time interval, the transmitter either uses the forward propagation algorithm to compute the outcome of the GNN or executes the iterative algorithm.\nFrom eqs. (16 ###reference_###) and (17 ###reference_###), we can see that the computational complexity of our GNN and the iterative algorithm increase linearly with the number of users. Recall that in eq. (16 ###reference_###) is quite limited. In contrast, the complexity of the iterative algorithm also increases with the amount of surplus bandwidth and the resource block and thus depends on the channels of the users. In addition, the computing complexity for evaluating the objective function, denoted by in eq. (17 ###reference_###), in each iteration of the optimization algorithm could also be extremely high. Thus, the inference complexity of the GNN is much lower than the complexity of the iterative optimization algorithm."
+ },
+ {
+ "section_id": "4.3.1",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C1 Structure of GNN",
+ "text": "As shown in Fig. 1 ###reference_###, the proposed GNN-based bandwidth allocation algorithm comprises three key steps: message passing, aggregation, and readout.\nEach scheduled user is a vertex in the GNN. We use a fully connected neural network (FNN) to obtain the embedding of each vertex, denoted by . The inputs of each FNN include and . We use to denote the training parameters of the FNN. In the -th epoch, the message passing function is given by . Since the vertices are homogeneous, the training parameters of all the FNNs are the same.\nIn the aggregation step, we first aggregate the embeddings of all the scheduled users by using a concatenation function, , followed by a Softmax function, , which serves as the activation function in the aggregation. The output after aggregation is denoted by .\nThe GNN\u2019s output of each vertex is updated by a readout function given by, . Since is obtained from the function, the summation of its elements is one. From the readout function, all the surplus bandwidth is allocated to the users, and constraints (14a ###reference_### ###reference_###) and (14b ###reference_0### ###reference_0###) can be satisfied."
+ },
+ {
+ "section_id": "4.3.2",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C2 Unsupervised Learning",
+ "text": "The learning algorithm is detailed in Algorithm 2 ###reference_###. Specifically, in the -th epoch, we use our GNN to obtain the bandwidth allocation and estimate the expectation of the objective function by using the batch samples according to\nwhere is the batch size. Then, we use stochastic gradient descent (SGA) to maximize the estimated expectation of the objective function in (14 ###reference_###). As shown in [16 ###reference_b16###], maximizing the expectation of the objective function, where the expectation is taken over channels, is equivalent to maximizing the objective function with given channels. Thus, from Algorithm 2 ###reference_###, we can find the bandwidth allocation policy that maximizes the objective function in (14 ###reference_###)."
+ },
+ {
+ "section_id": "4.3.3",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C3 Computational Complexity",
+ "text": "We compare the computational complexity of our GNN with the iterative algorithm introduced in Section IV-A ###reference_###. In cellular systems, both algorithms will be implemented in each transmission time interval with a duration of less than 1 ms. Thus, we are interested in the inference complexity of our GNN, i.e., the number of operations to be executed to obtain the bandwidth allocation in each transmission time interval.\nTo compute the embedding of each vertex, we need to compute the output of the FNN in Fig. 1 ###reference_### ###reference_###. We denote the number of layers of the FNN by and the number of neurons in the -th layer by . Then, the number of multiplications required to compute the output of the -th layer is and the total number of multiplications for computing the embedding is [10 ###reference_b10### ###reference_b10###]. After obtaining the embeddings of users, the number of multiplications required by and is . Therefore, the inference complexity of the GNN-based bandwidth allocation policy is\nIn each iteration of the optimization algorithm, we assign a small portion of the normalized surplus bandwidth, denoted by , to a user that can maximize the objective function. The algorithm needs to compute the objective function times and find the best user. We denote the complexity for computing the objective function by , then the complexity of the iterative algorithm is given by\nwhere represents the number of iterations used in the iterative algorithm.\nTo obtain bandwidth allocation in each transmission time interval, the transmitter either uses the forward propagation algorithm to compute the outcome of the GNN or executes the iterative algorithm.\nFrom eqs. (16 ###reference_### ###reference_###) and (17 ###reference_### ###reference_###), we can see that the computational complexity of our GNN and the iterative algorithm increase linearly with the number of users. Recall that in eq. 
(16 ###reference_### ###reference_###) is quite limited. In contrast, the complexity of the iterative algorithm also increases with the amount of surplus bandwidth and the resource block and thus depends on the channels of the users. In addition, the computing complexity for evaluating the objective function, denoted by in eq. (17 ###reference_### ###reference_###), in each iteration of the optimization algorithm could also be extremely high. Thus, the inference complexity of the GNN is much lower than the complexity of the iterative optimization algorithm."
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "IV-D Hybrid-Task Meta-Learning",
+ "text": "To obtain a GNN with strong generalization ability, we propose an HML algorithm that combines multi-task learning and meta-learning."
+ },
+ {
+ "section_id": "4.4.1",
+ "parent_section_id": "4.4",
+ "section_name": "IV-D1 Task, Sample, and Taskset",
+ "text": "To apply the meta-learning framework, we first define tasks, samples, and tasksets in the context of bandwidth allocation problems. A task is a specific bandwidth allocation problem with a unique combination of system parameters, including the number of users, , the channel model (i.e., path loss model, shadowing, and small-scale channel fading), the QoS requirement, , and the reserved bandwidth, . If any of the above system parameters change, it would result in a different task. For each task, the samples correspond to the wireless channels that have been transformed into the minimum bandwidth requirement by feature engineering, as specified in constraint (14b ###reference_0###). There are four tasksets in meta-learning, and a taskset consists of multiple tasks. We will provide their definitions in the sequel.\n###figure_2### ###figure_3###"
+ },
+ {
+ "section_id": "4.4.2",
+ "parent_section_id": "4.4",
+ "section_name": "IV-D2 Support Set and Query Set in Meta-Training",
+ "text": "As shown in Fig. 2(a) ###reference_sf1###, most meta-learning learning algorithms, such as MAML, consist of a meta-training stage and a meta-testing stage. In meta-training, there are two tasksets, support set and query set . The tasks in the two tasksets are the same, but the samples of each task in the two tasksets are different. Specifically, we first set the initialize parameters of the GNN to , which is randomly initialized at the beginning of meta-training, and updated in every iteration of the meta-training. Then, we train the parameters of the GNN by using the tasks and the corresponding samples in the support set, where is initialized with parameters . Then, we update the initial parameters by using the tasks and the corresponding samples in the query set. We denote the initial parameters trained in meta-training of MAML by . The details of the MAML algorithm can be found in [37 ###reference_b37###]."
+ },
+ {
+ "section_id": "4.4.3",
+ "parent_section_id": "4.4",
+ "section_name": "IV-D3 Fine-Tuning Set and Evaluation Set in Meta-Testing",
+ "text": "To evaluate the generalization ability of the GNN, a different set of tasks that are unseen in the meta-training stage are used in meta-testing. As shown in Fig. 2(a) ###reference_sf1###, the tasks in meta-testing are divided into a fine-tuning set and an evaluation set, denoted by and , respectively. The tasks in and are the same, but the samples of each task in these two tasksets are different. For each new task in meta-testing, the samples from the fine-tuning set are used to fine-tune , which is initialized by obtained in meta-training. After fine-tuning, the updated GNN is tested with the samples from the evaluation set. If no sample is used to fine-tune the GNN in the meta-testing stage, we refer to this approach as zero-shot meta-learning. Otherwise, it is known as few-shot meta-learning. The meta-testing algorithm is detailed in Algorithm 3 ###reference_###."
+ },
+ {
+ "section_id": "4.4.4",
+ "parent_section_id": "4.4",
+ "section_name": "IV-D4 Meta-Training of Proposed HML Algorithm",
+ "text": "Fig. 2(b) ###reference_sf2### illustrates the tasks and tasksets used in the meta-training and meta-testing of the proposed HML algorithm. The difference between MAML and HML lies in the selection of tasks from the query set. In MAML, the tasks selected from the query set are identical to those selected from the support set in each meta-training epoch. To improve the generalization ability, in HML, we select different tasks from the query set to train the initial parameters of the GNN. Specifically, tasks are randomly selected from the query set to estimate the average loss of the GNN parameterized by in the -th epoch of meta-training. The step-by-step algorithm for meta-training of the proposed HML algorithm is described in Algorithm 4 ###reference_###, and the meta-testing algorithm of HML is the same as that of MAML in Algorithm 3 ###reference_###.\n###table_2### ###table_3###"
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Performance Evaluation",
+ "text": "In this section, we evaluate the performance of our GNN-based HML algorithm. The GNN is first initialized by the parameters obtained from meta-training, where all the tasks aim to maximize the sum of the secrecy rate with different numbers of users and channel models. Then, we evaluate the performance of our GNN in unseen tasks with different numbers of users, channel models, objective functions, QoS constraints, and reserved bandwidth."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "System Setup",
+ "text": "We consider a BS, located at m, serving multiple users randomly distributed in a rectangular area, where the coordinates of the users are denoted by , where and . When the QoS requirement is secrecy rate, an eavesdropper is randomly located in the above rectangular area. The transmitted signal of each user is a complex Gaussian process with zero-mean and equal variance, . Channel models include large-scale channels and small-scale channels. Specifically, the large-scale channels depend on path loss and shadowing fading, whilst small-scale channels follow Rice, Nakagami, and Rayleigh distributions with various parameters in Table III ###reference_###. The number of neurons in each layer of the GNN is . Unless otherwise mentioned, the simulation parameters are summarized in Table II ###reference_###, and the parameters of tasksets are defined in Table III ###reference_###."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Performance of GNN",
+ "text": "###figure_4### Fig. 3 ###reference_### shows the training losses when the number of users increases from to . The results show that the unsupervised learning algorithm can converge after a few hundred training epochs for different numbers of users, and the convergence time increases slightly with the number of users.\n###figure_5### ###figure_6### After the training stage of the unsupervised learning algorithm, we select samples from the evaluation set of the same task to evaluate the constraint and reward achieved by the GNN in Fig. 4 ###reference_###. The results in Fig. 4(a) ###reference_sf1### show that the secrecy rates of all the scheduled users are equal to or higher than the requirement, Mbps. The results in Fig. 4(b) ###reference_sf2### show that the sum secrecy rate achieved by the GNN is close to that achieved by the iterative optimization algorithm in Section IV-A ###reference_### (with legend \u201cOptimal\u201d). In other words, the unsupervised learning algorithm can obtain a near-optimal solution."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Meta-Testing Performance of HML",
+ "text": "In this subsection, we evaluate the generalization ability of the proposed HML algorithm. The differences between tasks in meta-training and meta-testing are shown in Table. III ###reference_###. In meta-testing, we first select an unseen task that is not included in meta-training. In each training epoch of the meta-testing, samples are randomly selected from to fine-tune the GNN, whilst all the testing samples from the same task in are used to evaluate the performance."
+ },
+ {
+ "section_id": "5.3.1",
+ "parent_section_id": "5.3",
+ "section_name": "V-C1 Different Wireless Channels and QoS Requirements",
+ "text": "In this part, we set MHz and Mbps for all types of services. The other parameters follow the rules in and as shown in Table. III ###reference_###. We compare the initial performance and sample efficiency of HML with four benchmarks: 1) Optimal, 2) Model-agnostic meta-learning (MAML), 3) Multi-task learning-based transfer learning (MTL Transfer), and 4) Random initialization.\nOptimal: The optimal solution is obtained by the iterative algorithm detailed in Section IV-A ###reference_###, and its optimality has been proved in [10 ###reference_b10###].\nMAML: MAML is one of the most widely used meta-learning algorithms, and its key ideas have been discussed in Section IV-D ###reference_###.\nMTL Transfer: Transfer learning improves the sample efficiency by fine-tuning the parameters of the pre-trained GNN in a task with fewer training samples. With multi-task learning (MTL), the initial performance is much better than random initialization as the GNN is pre-trained in multiple tasks [37 ###reference_b37###, 43 ###reference_b43###]. To execute MTL transfer learning, we only need to replace the initialization in line 2 ###reference_### of Algorithm 2 ###reference_### by the pre-trained parameters.\nRandom Initialization: Random initialization is the conventional method that trains the GNN from scratch with a new task.\nIn figures 5 ###reference_###-7 ###reference_###, the horizontal axis represents the training epochs used to fine-tune the GNN, and samples from are used to train the GNN. The vertical axis represents the sum of the rewards of all the users, and the average is taken over samples, i.e., testing samples from are used. We refer to it as the average sum reward.\n###figure_7### ###figure_8### In Fig. 5 ###reference_###, we consider the average sum of secrecy rates and illustrate the impacts of the number of users, channel models, and coding blocklength on the initial performance and sample efficiency of different methods.\nThe results in Fig. 
5 ###reference_### show that HML achieves the best initial average sum secrecy rate and the highest sample efficiency compared with all the benchmarks. In Fig. 5(a) ###reference_sf1###, HML can converge in training epochs. Both MAML and MTL transfer learning takes more than epochs to converge. Thus, HML can reduce the convergence time by up to . After the fine-tuning, the gap between learning methods and the optimal solution is around %. In Fig. 5(b) ###reference_sf2###, the coding blocklength in meta-testing is also different from that in meta-training. As a result, the gap between the initial performance of HML and the optimal solution is . After fine-tuning, the gap reduced to , which is larger than the gap in Fig. 5(a) ###reference_sf1###, where the blocklength is the same in meta-training and meta-testing.\n###figure_9### ###figure_10### Fig. 6 ###reference_### shows the average sum of data rates achieved by different methods. The results indicate that when the reward function and the QoS constraint in meta-testing are different from that in meta-training, the gaps between the initial performance of HML and the optimal solution increase to and in long and short blocklength regimes, respectively. After fine-tuning, the gaps between the learning methods and the optimal solution are smaller than that in Fig. 5 ###reference_###. This is because Shannon\u2019s capacity/achievable rate are two special cases of the secrecy rate in the long/short blocklength regimes when the wiretapped channels are in deep fading. It is easier to learn a good policy when the problem becomes less complicated.\n###figure_11### ###figure_12### Fig. 7 ###reference_### shows the average sum of effective capacities achieved in the meta-testing stage, where the initial parameters of the GNN are obtained from meta-training, and the GNN is trained with tasks maximizing the sum secrecy rate in the long blocklength regime. 
In other words, the QoS requirement in meta-testing is queuing delay requirement, which is quite different from the security requirement in meta-training. By comparing the results in Figs. 7 ###reference_### and 5 ###reference_###, we can observe that the gaps between the HML and the optimal solution in Fig. 7 ###reference_### are larger than the gaps in Fig. 5 ###reference_###. Nevertheless, HML can still converge in around to epochs and outperforms the other benchmarks in Fig. 7 ###reference_###.\n###figure_13### ###figure_14### ###figure_15###"
+ },
+ {
+ "section_id": "5.3.2",
+ "parent_section_id": "5.3",
+ "section_name": "V-C2 Meta-Testing with Different System Parameters",
+ "text": "In this part, we focus on secrecy rates in the long blocklength regime in both meta-training and meta-testing, and change the values of , , and to investigate their impacts on the initial performance and sample efficiency of HML in meta-testing.\n###figure_16### ###figure_17### In Fig. 8 ###reference_###, we evaluate the initial performance and sample efficiency with different in support sets and query sets in meta-training. Specifically, we set to Mbps and Mbps in meta-training in Figs. 8(a) ###reference_sf1### and 8(b) ###reference_sf2###, respectively. In Fig. 8(c) ###reference_sf3###, is randomly selected from the set Mbps in meta-training. In meta-testing, we increase from Mbps to Mbps. The results in Figs. 8(a) ###reference_sf1### and 8(b) ###reference_sf2### indicate that the gaps between zero-shot learning (with training epochs in meta-testing) and the optimal solution increase with the difference between in meta-training and in meta-testing. To increase the generalization ability, we can increase the diversity of tasks in meta-training as shown in Fig. 8(c) ###reference_sf3###. In this way, our GNN is near-optimal with zero-shot learning.\nIn Fig. 9 ###reference_###, we validated the generalization ability of our GNN with dynamic bandwidth . In meta-training, is randomly selecting from the set MHz. In meta-testing, we increase from to MHz. The results in Fig. 9 ###reference_### show that our GNN is near-optimal with different values of .\nIn Fig. 10 ###reference_###, we further validate the generalization ability of our GNN with different numbers of users.\nIn meta-training, the number of total users is randomly selected, . In meta-testing, we increase the number of total users from to . The results in Fig. 10 ###reference_### show that the proposed HML can obtain a GNN that has strong generalization ability with different numbers of users. The gap between the GNN and the optimal policy increases slightly with . 
This is because the scale of the problem increases with , and it is more difficult to learn the bandwidth allocation policy of a large-scale problem compared with that of a small-scale problem."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusion",
+ "text": "In this paper, we developed an HML approach to train a GNN-based scalable bandwidth allocation policy that can generalize well in various communication scenarios, including different number of users, wireless channels, QoS requirements, and bandwidth. The main idea is to train the initial parameters of the GNN with various tasks in meta-training, and then fine-tune the parameters with a few samples in meta-testing. Simulation results showed that the performance gap between the GNN and the optimal policy obtained by an iterative algorithm is less than % in most of the cases. For unseen communication scenarios, the GNN can converge in to training epochs, which are much faster than the existing benchmarks. Our approach can be extended beyond bandwidth allocation, such as power allocation, precoding, and repetitions. Nevertheless, the featuring engineering and the structure of GNN in other scenarios deserve further investigation."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Proof of Concavity for Secrecy Rate in Long Blocklength Regimes",
+ "text": "To prove the concavity of the secrecy rate in long blocklength regimes, we only need to prove that the second derivative of the secrecy rate is positive. We first calculate the partial derivative of the secrecy rate of the -th scheduled user as follows,\nwhere and . Since the secrecy rate of the user increases with the increasing of the allocated bandwidth, we have .\nThe second derivative of can be derived as follows,\nFor any scheduled user, we have . Thus, . Therefore, is concave. This completes the proof."
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Considered QoS Requirements in Related Works</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T1.12\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt ltx_border_tt\" id=\"S2.T1.1.1.1\" rowspan=\"2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><span class=\"ltx_text ltx_nopad\" id=\"S2.T1.1.1.1.1\"><svg height=\"21.76\" overflow=\"visible\" version=\"1.1\" width=\"52.04\"><g transform=\"translate(0,21.76) scale(1,-1)\"><path d=\"M 0,21.76 52.04,0\" stroke=\"black\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,9.61) scale(1, -1)\"><foreignobject height=\"9.61\" overflow=\"visible\" width=\"26.02\">\n<span class=\"ltx_inline-block\" id=\"S2.T1.1.1.1.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S2.T1.1.1.1.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.1.1.1.1.pic1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.pic1.1.1.1.1.1\">Refs</span></span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(26.67,9.61)\"><g transform=\"translate(0,12.15) scale(1, -1)\"><foreignobject height=\"12.15\" overflow=\"visible\" width=\"25.37\">\n<span class=\"ltx_inline-block\" id=\"S2.T1.1.1.1.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S2.T1.1.1.1.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S2.T1.1.1.1.1.pic1.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.pic1.2.1.1.1.1\">QoS</span></span>\n</span>\n</span></foreignobject></g></g></g></svg></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_border_tt\" colspan=\"2\" id=\"S2.T1.1.1.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" 
id=\"S2.T1.1.1.2.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.2.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.2.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.2.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.2.2.1.1.1.1\">Data rate</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.2.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_border_tt\" colspan=\"2\" id=\"S2.T1.1.1.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.1.1.3.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.3.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.3.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.3.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.3.2.1.1.1.1\">Latency</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.3.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_tt\" colspan=\"2\" id=\"S2.T1.1.1.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.1.1.4.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.4.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.4.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.4.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.4.2.1.1.1.1\">Security</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.4.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.12.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.12.13.1\" 
style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.12.13.1.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.12.13.1.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.12.13.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.13.1.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">Long</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.1.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.12.13.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.12.13.2.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.12.13.2.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.12.13.2.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.13.2.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">Short</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.2.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.12.13.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.12.13.3.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.12.13.3.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.12.13.3.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.13.3.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">Long</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.3.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.12.13.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.12.13.4.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.4.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.12.13.4.2.1\">\n<span 
class=\"ltx_tr\" id=\"S2.T1.12.13.4.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.13.4.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">Short</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.4.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.12.13.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.12.13.5.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.5.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.12.13.5.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.12.13.5.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.13.5.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">Long</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.5.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.12.13.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">\n<span class=\"ltx_text\" id=\"S2.T1.12.13.6.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.6.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.12.13.6.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.12.13.6.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.13.6.2.1.1.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\">Short</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.12.13.6.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S2.T1.2.2.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib21\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">21</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib20\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">20</span></a>]</cite></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_tt\" id=\"S2.T1.2.2.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S2.T1.2.2.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S2.T1.2.2.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S2.T1.2.2.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S2.T1.2.2.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S2.T1.2.2.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib22\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">22</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib23\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">23</span></a>]</cite></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.3.3.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.3.3.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.3.3.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4\">\n<td 
class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.4.4.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib24\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">24</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib25\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">25</span></a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib26\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">26</span></a>]</cite></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.4.4.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.4.4.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.4.4.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.4.4.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.4.4.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.5.5.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib27\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">27</span></a>]</cite></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.5.5.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.5.5.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td 
class=\"ltx_td ltx_border_t\" id=\"S2.T1.5.5.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.5.5.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.5.5.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.5.5.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.7.7.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib28\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">28</span></a>]</cite></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.7.7.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.7.7.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.7.7.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.6.6.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.7.7.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.7.7.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.9.9.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib30\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">30</span></a>]</cite></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.8.8.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.9.9.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.9.9.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T1.9.9.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.9.9.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.9.9.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.12.12.4\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib10\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">10</span></a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb ltx_border_t\" id=\"S2.T1.10.10.1\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.11.11.2\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb ltx_border_t\" id=\"S2.T1.12.12.3\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.12.12.5\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_bb ltx_border_t\" id=\"S2.T1.12.12.6\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n<td class=\"ltx_td ltx_border_bb ltx_border_bb ltx_border_t\" 
id=\"S2.T1.12.12.7\" style=\"padding-top:-1.5pt;padding-bottom:-1.5pt;\"></td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE I: Considered QoS Requirements in Related Works"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Key Simulation Parameters</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.18\">\n<tr class=\"ltx_tr\" id=\"S4.T2.18.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_tt\" id=\"S4.T2.18.19.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.18.19.1.1\">Simulation parameters</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_tt\" id=\"S4.T2.18.19.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.18.19.2.1\">Values</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.1.1.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Transmit power of each user, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.1.1.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">23 dBm</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.2.2.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Single-sided noise spectral density, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.2.2.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">-174 dBm/Hz</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.3.3.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Channel coherence time, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.4.4.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">\nms\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib28\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">28</span></a>]</cite>\n</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T2.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.5.5.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Duration of one time slot, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.6.6.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">\nms</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.7.7.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Decoding error probability, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.8.8.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">\n\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib28\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">28</span></a>]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.9.9.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Information leakage, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.10.10.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">\n\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib28\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">28</span></a>]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.11.11.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">QoS exponent of queuing delay, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.12.12.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">\n\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.10253v2#bib.bib28\" title=\"\"><span class=\"ltx_text\" style=\"font-size:80%;\">28</span></a>]</cite>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.14.14\">\n<td class=\"ltx_td 
ltx_align_left ltx_border_t\" id=\"S4.T2.13.13.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Size of bandwidth resource block, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.14.14.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">\n\u00a0kHz</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.16.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.15.15.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Learning rates, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.16.16.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.17.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.17.17.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Batch sizes of GNN, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.17.17.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">32</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_t\" id=\"S4.T2.18.18.1\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">Batch sizes of meta optimizer, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_t\" id=\"S4.T2.18.18.2\" style=\"padding-top:-1pt;padding-bottom:-1pt;\">4, 2</td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE II: Key Simulation Parameters"
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>System Parameters of Different Tasks</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.26\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<td class=\"ltx_td ltx_border_r ltx_border_tt ltx_border_tt\" id=\"S4.T3.4.4.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt ltx_border_tt\" id=\"S4.T3.4.4.6\">Parameters</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt ltx_border_tt\" id=\"S4.T3.2.2.2\">\n &amp; \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_tt\" id=\"S4.T3.4.4.4\">\n &amp; \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T3.6.6.3\">Network scale</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T3.6.6.4\">Number of users</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T3.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T3.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.9.9.4\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T3.9.9.4.1\">Channel models</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.7.7.1\">Path loss: \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.10.10.1\">\n<span class=\"ltx_text\" id=\"S4.T3.10.10.1.2\"></span><span class=\"ltx_text\" id=\"S4.T3.10.10.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.10.10.1.1.1\">\n<span 
class=\"ltx_tr\" id=\"S4.T3.10.10.1.1.1.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.10.10.1.1.1.2.1\">Shadowing:</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.10.10.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.10.10.1.1.1.1.1\"></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.10.10.1.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.11.11.2\">\n<span class=\"ltx_text\" id=\"S4.T3.11.11.2.2\"></span><span class=\"ltx_text\" id=\"S4.T3.11.11.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.11.11.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.11.11.2.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.11.11.2.1.1.1.1\"></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.11.11.2.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.15.15.3\">\n<span class=\"ltx_text\" id=\"S4.T3.15.15.3.4\"></span><span class=\"ltx_text\" id=\"S4.T3.15.15.3.3\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.15.15.3.3.3\">\n<span class=\"ltx_tr\" id=\"S4.T3.15.15.3.3.3.4\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.15.15.3.3.3.4.1\">Small-scale channels:</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.13.13.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.13.13.1.1.1.1.1\">,</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.14.14.2.2.2.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.14.14.2.2.2.2.1\">,</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.15.15.3.3.3.3\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.15.15.3.3.3.3.1\"></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.15.15.3.5\"></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.17.17.5\">\n<span class=\"ltx_text\" 
id=\"S4.T3.17.17.5.3\"></span><span class=\"ltx_text\" id=\"S4.T3.17.17.5.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.17.17.5.2.2\">\n<span class=\"ltx_tr\" id=\"S4.T3.16.16.4.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.16.16.4.1.1.1.1\">,</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.17.17.5.2.2.2\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.17.17.5.2.2.2.1\"></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.17.17.5.4\"></span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.18.18.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.22.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.22.22.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.22.22.5.1\">QoS</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.22.22.6\">Rewards with different QoS requirements</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.19.19.1\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T3.22.22.4\">\n<span class=\"ltx_text\" id=\"S4.T3.22.22.4.4\"></span><span class=\"ltx_text\" id=\"S4.T3.22.22.4.3\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.22.22.4.3.3.3\">\n<span class=\"ltx_tr\" id=\"S4.T3.20.20.2.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.20.20.2.1.1.1.1.1\"><span class=\"ltx_text\" id=\"S4.T3.20.20.2.1.1.1.1.1.1\" style=\"font-size:80%;\">,</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.22.22.4.3.3.3.3\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.22.22.4.3.3.3.3.2\"><span class=\"ltx_text\" id=\"S4.T3.22.22.4.3.3.3.3.2.1\" style=\"font-size:80%;\">,\n</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.22.22.4.5\"></span>\n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.24.24\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.24.24.3\">Values of QoS constraints 
(Mbps)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.23.23.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.24.24.2\">\n<span class=\"ltx_text\" id=\"S4.T3.24.24.2.2\"></span><span class=\"ltx_text\" id=\"S4.T3.24.24.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.24.24.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.24.24.2.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T3.24.24.2.1.1.1.1\">\n</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T3.24.24.2.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.26.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.26.26.3\">\n<span class=\"ltx_rule\" style=\"width:0.0pt;height:9.0pt;background:black;display:inline-block;\"></span>\nReserved bandwidth</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.26.26.4\">Constraints on reserved bandwidth (MHz)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_t\" id=\"S4.T3.26.26.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE III: System Parameters of Different Tasks"
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2401.10253v2_figure_1.png",
+ "caption": "Figure 1: GNN-based scalable bandwidth allocation.",
+ "url": "http://arxiv.org/html/2401.10253v2/x1.png"
+ },
+ "2(a)": {
+ "figure_path": "2401.10253v2_figure_2(a).png",
+ "caption": "(a) Model-agnostic meta-learning (MAML).\nFigure 2: Tasksets of meta-learning algorithms, where different shapes represent different tasks.",
+ "url": "http://arxiv.org/html/2401.10253v2/x2.png"
+ },
+ "2(b)": {
+ "figure_path": "2401.10253v2_figure_2(b).png",
+ "caption": "(b) Hybrid-task meta-learning (HML).\nFigure 2: Tasksets of meta-learning algorithms, where different shapes represent different tasks.",
+ "url": "http://arxiv.org/html/2401.10253v2/x3.png"
+ },
+ "3": {
+ "figure_path": "2401.10253v2_figure_3.png",
+ "caption": "Figure 3: Training losses with different numbers of users, where the secrecy rate in the long blocklength regime is considered, r\u03c4S,\u2110=10superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u211010r_{\\tau}^{S,\\mathcal{I}}=10italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10 Mbps, and W\u03c4S,\u2110=100superscriptsubscript\ud835\udc4a\ud835\udf0f\ud835\udc46\u2110100W_{\\tau}^{S,\\mathcal{I}}=100italic_W start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 100 MHz.",
+ "url": "http://arxiv.org/html/2401.10253v2/x4.png"
+ },
+ "4(a)": {
+ "figure_path": "2401.10253v2_figure_4(a).png",
+ "caption": "(a) Secrecy rates of scheduled users.\nFigure 4: Testing samples are selected from taskset \ud835\udcafFsuperscript\ud835\udcafF\\mathcal{T}^{\\mathrm{F}}caligraphic_T start_POSTSUPERSCRIPT roman_F end_POSTSUPERSCRIPT and \ud835\udcafEsuperscript\ud835\udcafE\\mathcal{T}^{\\mathrm{E}}caligraphic_T start_POSTSUPERSCRIPT roman_E end_POSTSUPERSCRIPT in Table. III.",
+ "url": "http://arxiv.org/html/2401.10253v2/x5.png"
+ },
+ "4(b)": {
+ "figure_path": "2401.10253v2_figure_4(b).png",
+ "caption": "(b) Sum secrecy rate.\nFigure 4: Testing samples are selected from taskset \ud835\udcafFsuperscript\ud835\udcafF\\mathcal{T}^{\\mathrm{F}}caligraphic_T start_POSTSUPERSCRIPT roman_F end_POSTSUPERSCRIPT and \ud835\udcafEsuperscript\ud835\udcafE\\mathcal{T}^{\\mathrm{E}}caligraphic_T start_POSTSUPERSCRIPT roman_E end_POSTSUPERSCRIPT in Table. III.",
+ "url": "http://arxiv.org/html/2401.10253v2/x6.png"
+ },
+ "5(a)": {
+ "figure_path": "2401.10253v2_figure_5(a).png",
+ "caption": "(a) Secrecy rate in long blocklength regime.\nFigure 5: Meta-testing with unseen channel models.",
+ "url": "http://arxiv.org/html/2401.10253v2/x7.png"
+ },
+ "5(b)": {
+ "figure_path": "2401.10253v2_figure_5(b).png",
+ "caption": "(b) Secrecy rate in short blocklength regime.\nFigure 5: Meta-testing with unseen channel models.",
+ "url": "http://arxiv.org/html/2401.10253v2/x8.png"
+ },
+ "6(a)": {
+ "figure_path": "2401.10253v2_figure_6(a).png",
+ "caption": "(a) Shannon capacity in long blocklength regime.\nFigure 6: Meta-testing with unseen QoS requirements of rate rates and unseen channels.",
+ "url": "http://arxiv.org/html/2401.10253v2/x9.png"
+ },
+ "6(b)": {
+ "figure_path": "2401.10253v2_figure_6(b).png",
+ "caption": "(b) Achievable rate in short blocklength regime.\nFigure 6: Meta-testing with unseen QoS requirements of rate rates and unseen channels.",
+ "url": "http://arxiv.org/html/2401.10253v2/x10.png"
+ },
+ "7(a)": {
+ "figure_path": "2401.10253v2_figure_7(a).png",
+ "caption": "(a) Effective capacity in long blocklength regime.\nFigure 7: Meta-testing with unseen QoS requirements and unseen channels.",
+ "url": "http://arxiv.org/html/2401.10253v2/x11.png"
+ },
+ "7(b)": {
+ "figure_path": "2401.10253v2_figure_7(b).png",
+ "caption": "(b) Effective capacity in short blocklength regime.\nFigure 7: Meta-testing with unseen QoS requirements and unseen channels.",
+ "url": "http://arxiv.org/html/2401.10253v2/x12.png"
+ },
+ "8(a)": {
+ "figure_path": "2401.10253v2_figure_8(a).png",
+ "caption": "(a) r\u03c4S,\u2110=10superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u211010r_{\\tau}^{S,\\mathcal{I}}=10italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10 Mbps in meta-training.\nFigure 8: Meta-testing with dynamic secrecy rate requirements, r\u03c4S,\u2110\u2208{1,\u22ef,10}superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u21101\u22ef10r_{\\tau}^{S,\\mathcal{I}}\\in\\{1,\\cdots,10\\}italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT \u2208 { 1 , \u22ef , 10 } Mbps, whereW\u03c4S,\u2110=100superscriptsubscript\ud835\udc4a\ud835\udf0f\ud835\udc46\u2110100W_{\\tau}^{S,\\mathcal{I}}=100italic_W start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 100 MHz and U\u03c4S,\u2110=10superscriptsubscript\ud835\udc48\ud835\udf0f\ud835\udc46\u211010U_{\\tau}^{S,\\mathcal{I}}=10italic_U start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10.",
+ "url": "http://arxiv.org/html/2401.10253v2/x13.png"
+ },
+ "8(b)": {
+ "figure_path": "2401.10253v2_figure_8(b).png",
+ "caption": "(b) r\u03c4S,\u2110=1superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u21101r_{\\tau}^{S,\\mathcal{I}}=1italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 1 Mbps in meta-training.\nFigure 8: Meta-testing with dynamic secrecy rate requirements, r\u03c4S,\u2110\u2208{1,\u22ef,10}superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u21101\u22ef10r_{\\tau}^{S,\\mathcal{I}}\\in\\{1,\\cdots,10\\}italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT \u2208 { 1 , \u22ef , 10 } Mbps, whereW\u03c4S,\u2110=100superscriptsubscript\ud835\udc4a\ud835\udf0f\ud835\udc46\u2110100W_{\\tau}^{S,\\mathcal{I}}=100italic_W start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 100 MHz and U\u03c4S,\u2110=10superscriptsubscript\ud835\udc48\ud835\udf0f\ud835\udc46\u211010U_{\\tau}^{S,\\mathcal{I}}=10italic_U start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10.",
+ "url": "http://arxiv.org/html/2401.10253v2/x14.png"
+ },
+ "8(c)": {
+ "figure_path": "2401.10253v2_figure_8(c).png",
+ "caption": "(c) r\u03c4S,\u2110\u2208{1,\u22ef,10}superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u21101\u22ef10r_{\\tau}^{S,\\mathcal{I}}\\in\\{1,\\cdots,10\\}italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT \u2208 { 1 , \u22ef , 10 } Mbps in meta-training.\nFigure 8: Meta-testing with dynamic secrecy rate requirements, r\u03c4S,\u2110\u2208{1,\u22ef,10}superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u21101\u22ef10r_{\\tau}^{S,\\mathcal{I}}\\in\\{1,\\cdots,10\\}italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT \u2208 { 1 , \u22ef , 10 } Mbps, whereW\u03c4S,\u2110=100superscriptsubscript\ud835\udc4a\ud835\udf0f\ud835\udc46\u2110100W_{\\tau}^{S,\\mathcal{I}}=100italic_W start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 100 MHz and U\u03c4S,\u2110=10superscriptsubscript\ud835\udc48\ud835\udf0f\ud835\udc46\u211010U_{\\tau}^{S,\\mathcal{I}}=10italic_U start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10.",
+ "url": "http://arxiv.org/html/2401.10253v2/x15.png"
+ },
+ "9": {
+ "figure_path": "2401.10253v2_figure_9.png",
+ "caption": "Figure 9: Meta-testing with dynamic bandwidth W\u03c4S,\u2110\u2208{10,\u22ef,100}superscriptsubscript\ud835\udc4a\ud835\udf0f\ud835\udc46\u211010\u22ef100W_{\\tau}^{S,\\mathcal{I}}\\in\\{10,\\cdots,100\\}italic_W start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT \u2208 { 10 , \u22ef , 100 } MHz in meta-training, where r\u03c4S,\u2110=10superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u211010r_{\\tau}^{S,\\mathcal{I}}=10italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10 Mbps and U\u03c4S,\u2110=10superscriptsubscript\ud835\udc48\ud835\udf0f\ud835\udc46\u211010U_{\\tau}^{S,\\mathcal{I}}=10italic_U start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10.",
+ "url": "http://arxiv.org/html/2401.10253v2/x16.png"
+ },
+ "10": {
+ "figure_path": "2401.10253v2_figure_10.png",
+ "caption": "Figure 10: Meta-testing with different numbers of users U\u03c4S,\u2110\u2208{5,10,\u22ef,50}superscriptsubscript\ud835\udc48\ud835\udf0f\ud835\udc46\u2110510\u22ef50U_{\\tau}^{S,\\mathcal{I}}\\in\\{5,10,\\cdots,50\\}italic_U start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT \u2208 { 5 , 10 , \u22ef , 50 }, where r\u03c4S,\u2110=10superscriptsubscript\ud835\udc5f\ud835\udf0f\ud835\udc46\u211010r_{\\tau}^{S,\\mathcal{I}}=10italic_r start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 10 Mbps and W\u03c4S,\u2110=100superscriptsubscript\ud835\udc4a\ud835\udf0f\ud835\udc46\u2110100W_{\\tau}^{S,\\mathcal{I}}=100italic_W start_POSTSUBSCRIPT italic_\u03c4 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S , caligraphic_I end_POSTSUPERSCRIPT = 100 MHz.",
+ "url": "http://arxiv.org/html/2401.10253v2/x17.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2401.10253v2"
+ }
20240318/2401.11969v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2401.12873v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2403.01962v2.json ADDED
@@ -0,0 +1,131 @@
+ {
+ "title": "An Efficient Model-Based Approach on Learning Agile Motor Skills without Reinforcement",
+ "abstract": "Learning-based methods have improved locomotion skills of quadruped robots through deep reinforcement learning. However, the sim-to-real gap and low sample efficiency still limit the skill transfer. To address this issue, we propose an efficient model-based learning framework that combines a world model with a policy network. We train a differentiable world model to predict future states and use it to directly supervise a Variational Autoencoder (VAE)-based policy network to imitate real animal behaviors. This significantly reduces the need for real interaction data and allows for rapid policy updates. We also develop a high-level network to track diverse commands and trajectories. Our simulated results show a tenfold sample efficiency increase compared to reinforcement learning methods such as PPO. In real-world testing, our policy achieves proficient command-following performance with only a two-minute data collection period and generalizes well to new speeds and paths. The results are shown in .",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "INTRODUCTION",
+ "text": "Learning-based methods [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###] have recently demonstrated significant advantages in acquiring agile motor skills for quadrupedal robots. In particular, model-free deep Reinforcement Learning (RL) algorithms enables them to mimic animal motions so as to attain natural and agile motor skills [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nHowever, model-free RL algorithms [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###] usually require substantial on-policy data to improve their performance. Given the cost of collecting data in simulation compared to the real world, these algorithms often train policies in simulation and then deploy them on physical robots through zero-shot transfer. However, the policies learned in simulation may not consistently perform well in real-world scenarios due to the persistent sim-to-real gap. Researchers have attempted to mitigate this gap using techniques like domain randomization [16 ###reference_b16###] and domain adaptation within simulation environments to enhance policy robustness. Nevertheless, these techniques do not provide a fundamental solution and cannot guarantee successful transfer. [17 ###reference_b17###] argues that dynamics randomization and adaptation approaches may not consistently address sim-to-real transfer challenges, leaving the sim2real gap unresolved.\n###figure_1### On the other hand, an alternative approach is to train or fine-tune the policy directly on a real robot, which can effectively address the problem. [18 ###reference_b18###] utilizes a model-free off-policy reinforcement learning algorithm for policy fine-tuning in the real world, albeit it still necessitates more than 2 hours data for fine-tuning. 
To increase sample efficiency, [19 ###reference_b19###] adopts a model-based reinforcement learning approach, which enables direct policy training on the real robot. Nevertheless, since the policy network is trained through a model-free reinforcement learning algorithm, this method still requires over one hour to train a basic policy for walking towards predefined directions.\nIn the realm of computer graphics, ControlVAE [20 ###reference_b20###] has demonstrated superior sample efficiency compared to deep reinforcement learning. It achieves this by co-training a world model with a VAE-based policy network [21 ###reference_b21###]. Building on this concept, we introduce a model-based learning framework to close the sim2real gap by directly fine-tuning policies on real robots. First we train a world model capable of predicting several consecutive states of the robot. Leveraging the differentiability of the world model, we can train an end-to-end control policy by direct backpropagation. This policy imitates reference trajectories obtained from real dogs by interacting with the world model. Additionally, we develop a high-level policy for generating latent variable within the VAE [21 ###reference_b21###]. This empowers the robot to follow various high-level commands and track diverse paths.\nIn simulated experiments, our method exhibits a tenfold improvement in sample efficiency compared to PPO [14 ###reference_b14###], both during training and adaptation. In real robot experiments, our policy effectively tracks a oblong path at speeds of 0.6m/s, 0.9m/s, and 1.2m/s with just 2 minutes of fine-tuning. 
Furthermore, we also evaluate our policy generalization ability in new speed commands and unseen paths, highlighting our method\u2019s robust generalization capability.\nIn conclusion, the main contributions of this paper are:\nWe present a model-based learning framework to acquire agile skills in quadrupedal robots within simulations and fine-tune them on real robots, substantially enhancing the sample efficiency of learning-based methods in the robotics domain.\nWe assess our approach in both simulation and the real robot, demonstrating that with only 2 minutes of fine-tuning, our robot effectively executes reference commands.\nWe establish the generalization capability of our method with real robot experiments, as the fine-tuned policy follows previously unseen commands and paths."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II RELATED WORK",
+ "text": "As model-free deep reinforcement learning algorithms [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###] continue to advance rapidly, recent research has achieved notable success in training expert policies for quadrupedal locomotion. In contrast to classical control methods [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###], learning-based approaches harness the power of deep neural networks to acquire agile motor skills. Notably, [7 ###reference_b7###, 8 ###reference_b8###] employ PPO [14 ###reference_b14###] to emulate natural motion patterns observed in animals. Moreover, [9 ###reference_b9###] introduces a novel approach that incorporates terrain information while mimicking the behavior of real animals.\nHowever, model-free deep reinforcement learning algorithms demand an enormous volume of interaction data, making it infeasible to collect on a real robot. Consequently, they often train the policy in simulation and attempt zero-shot transfer into the real world. To address the sim2real gap, [8 ###reference_b8###] introduces an environmental encoding approach, optimizing it on the real robot by maximizing total returns for swift adaptation. It\u2019s crucial to note that the effectiveness of this adaptation strategy hinges on the degree of similarity between the simulation and real-world environments and may not be universally applicable across all tasks. Furthermore, [9 ###reference_b9###, 32 ###reference_b32###, 33 ###reference_b33###, 16 ###reference_b16###] employ domain randomization techniques to cultivate a robust policy and employ domain adaptation to address the sim2real gap. [34 ###reference_b34###] reduces the sim2real gap by utilizing pre-trained representations that prove effective across various real-world robotic tasks. 
While these methods can alleviate the impact of the sim2real gap, they do not offer a fundamental solution. Notably, [17 ###reference_b17###] demonstrates that their policy can be transferred to the real robot without necessitating domain randomization, questioning the necessity of this technique. In summary, the challenge of transferring model-free reinforcement learning policies from simulation to reality remains an open problem.\nTo increase sample efficiency for reinforcement learning, recent years have witnessed great progress in model-based reinforcement learning algorithms [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###]. They first learn a dynamics model in the simulation and then improve their policy using model-free RL with imagined data produced by the learned model. To increase the prediction power of the world model, further research focuses on learning a compact latent space of world model [38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###], and also succeeds in the real robot training [19 ###reference_b19###]. In this way, the sim2real gap does not exist since they train the policy directly in the real robot. While they can train walk policy in a real robot in one hour, it is still not evaluated how much data it will take to train a more complex policy like imitating an animal or following a desired path in our task. And since the policy is trained by the model-free reinforcement learning algorithm, the sample efficiency is still limited. Meanwhile, training directly in the real robot from scratch fails to take advantage of the simulation environment. In contrast, our method trains both the world model and control policy in a supervised manner, resulting in significantly enhanced sample efficiency. Additionally, we adopt a two-stage approach that involves training the policy in simulation to create a warm-up policy, followed by fine-tuning it in the real world using just two minutes of data. 
This significantly reduces the amount of real-world data required and enables the learning of more sophisticated motor skills.\nControlVAE [20 ###reference_b20###] is an innovative technique in computer graphics that utilizes a VAE-based policy, supervised by a differentiable world model. This approach provides significantly higher sample efficiency than deep reinforcement learning algorithms, but it mainly concentrates on policy training for human motion generation within a simulation context. To expand on this concept, our proposed framework combines world model and policy learning in a supervised manner, resulting in a learning framework that enhances training efficiency during the fine-tuning stages on a real robot with a regularization term. Consequently, our approach allows for deployment on a real quadrupedal robot system with only a 2-minute fine-tuning period."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III METHODOLOGY",
+ "text": "Our framework contains two parts, i.e. a world model and a control policy, as shown in Fig. 2 ###reference_###.\nThe world model learns to approximate the unknown dynamics of the simulation and the reality. Given current robot state and action, it predicts next state.\nThe control policy learns agile behaviors by tracking motions from real animals. Instead of interacting with a simulator, it directly collects samples predicted by the trained world model.\nBoth the world model and the control policy are updated in a supervised manner and trained iteratively: we first collect state-action pairs under a fixed control policy to fit the system dynamics using the world model. Then the control policy is updated by interacting with the fixed world model. The whole process repeats until the control policy converges.\n###figure_2###"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A World Model Learning",
+ "text": "We commence by training the world model . It predicts the next state based on the current state and action, utilizing a residual form as follows:\nwhere is a neural network parameterized by , represents the robot state at time , encompassing robot position, orientation, linear velocity, angular velocity, joint positions and joint velocities. corresponds to the robot observation, including robot linear velocity, angular velocity, joint position, and joint velocity at robot local frame. is the target angle for each joint which can be converted to joint torques through a PD controller. represents the predicted state at the next time step .\nWhen training the world model, the robot interacts with the simulator under a fixed control policy to collect state-action sequences . The world model is trained in supervised learning manner with the n-step prediction loss that is conducive to prediction in a long time horizon:\nwhere is the predicted robot state and represents the ground truth robot state either from simulator or from the real robot during the fine-tuning stage."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Motion Tracking",
+ "text": "In the context of the motion tracking task, our objective is to imitate motion sequences collected from real animals. We formulate this problem into an encoder-decoder architecture, where we encode the reference motion sequence into a latent embedding and decode this latent embedding together with robot observation into joint motor action.\nWe take a VAE-based [21 ###reference_b21###] architecture with an Motion Tracking Encoder to encode the observation and a sequence of future reference motions into a latent variable . The Motor Decoder takes and and produces the action .\nBesides, we incorporate an additional state-conditional prior to disentangle distinct skills within the latent space, as emphasized in [20 ###reference_b20###].\nWe model the latent variable\u2019s prior distribution and posterior\ndistribution as Gaussian distribution:\nwhere and are neural networks parameterized by and . is a identity matrix, and is a fixed standard deviation for simplicity.\nThe tracking learning loss is defined as follows:\nwhere the joint position loss , joint velocity loss , base position loss and base velocity loss are similar to the reward function in [9 ###reference_b9###]:\nwhere and are joint position and joint velocity, and represent base position and orientation, and denote base velocity and base angular velocity. represents the states predicted by the world model and denotes the reference motion.\nSince the world model is differentiable, the gradient of the motion loss can be calculated end-to-end through the differential dynamics.\nTo ensure the latent space is well formed so that we can further find an appropriate latent variable in the downstream command following task, we incorporate a KL-divergence loss for regularization:\nTo make our policy focus on not only the next state but also a long-term horizon, the final loss term is calculated as the sum of n-step tracking loss, where the n-step roll out is predicted by the world model"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Command Following",
+ "text": "The next step is to train a policy that follows linear velocity and angular velocity specified by users. We introduce the Command Following Encoder to encode the commands into the latent space. Given a random command , where and represent the desired linear velocity in forward direction and the desired angular velocity, the posterior distribution of latent variable is computed as:\nwhere is the neural network parameterized by . Since our goal is to make the robot follow the command, the command following loss comprises both linear velocity loss and angular velocity loss :\nwhere represents the user command while is the robot states predicted by the world model.\nTo preserve the naturalness of the robot behavior, we exclusively update the command following network while keeping the prior network and motor decoder fixed during training."
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "III-D Fine-tune on a Real Robot",
+ "text": "Owing to the sim-to-real gap, the policy learned from the simulation may fail when deployed on the real robot. Hence, we fine-tune both the Command Following Encoder and the Motor Decoder on the real robot to follow the desired paths.\nTo preserve the natural behavior originating from the original Motor Encoder , we introduce a regularization term:\nThe algorithm for fine-tuning in the real world is outlined in Algorithm 1 ###reference_###."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV EXPERIMENT RESULTS",
+ "text": "In this section, we report experimental results to address the following pivotal questions:\n(i) How effective is our approach in improving sample efficiency, compared with RL methods?\n(ii) How well is our fine-tuning process on the real robot can help to close the sim-to-real gap?\n(iii) Does our fine-tuned policy exhibit sufficient generalization capacity on previously unseen tasks?\nWe conduct experiments both in simulation and real world. We compare our method with a RL baseline in terms of sample efficiency. For motion tracking, our problem setting is different from imitation learning since we only have the target motion states and lack the groundtruth actions executed on each joint motor. Therefore, we do not compare with imitation learning algorithms. In the real world experiment, we conduct the fine-tuning process on a real quadrupedal robot. To further demonstrate the generalization ability, we performance path following task on four unseen paths."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Evaluation in Simulation Environments",
+ "text": "Sample Efficiency in Motion Tracking.\nTo address the first question regarding sample efficiency, we initially train the motion tracking task from scratch using Isaac Gym. Isaac Gym is a GPU-based physical simulator simulating a batch of agents concurrently. In this task, we simultaneously employ 128 agents for training. We compare our method with PPO algorithm [14 ###reference_b14###] with respect to the number of samples collected from the simulator. The reward function for PPO is defined as . We maintain an identical policy network structure for both methods to facilitate a meaningful comparison. The mean reward during training is reported in Fig. 4 ###reference_###(a). It demonstrates that our method achieves a mean reward of 0.8 with approximately 5 million samples, as indicated in the read dashed line. In contrast, the PPO algorithm requires over 70 million samples to achieve similar results. This showcases that our method\u2019s sample efficiency surpasses that of PPO by over tenfold.\nSample Efficiency in Adapting to New Environments.\nDirectly training PPO on a real robot is dangerous and may easily damage the robot. To compare the sample efficiency in adapting to new environments, we introduce variations to physical parameters in simulation and perform the fine-tuning process to make the adaptation. For the motion tracking task, we alter a number of physical parameters as shown in Env1, TABLE I ###reference_###. For example, we significantly increase the robot\u2019s mass from 5.74 kg to 14 kg, which makes the new environment extremely difficult for the original policy. To emulate a scenario akin to real-world robot data collection, we employ 2 agents in the simulation environment for both methods. In our approach, each training iteration accumulates 3000 samples, equivalent to 1 minute of data collection given the control frequency of 50 Hz. For PPO, policy updates are conducted every 32 steps. Fig. 
4 ###reference_###(b) depicts the training curves. The plot highlights that our method attains a mean reward of 0.8 with roughly 50,000 samples (equivalent to approximately 17 minutes of data) in this challenging setting. Conversely, PPO algorithm remains subpar even with ten times the sample size.\n###figure_3### To further investigate the performance of command following, we extend this task to path following, where the robot aims at following predefined paths, as shown in Fig. LABEL:fig:paths. We employ the pure pursuit algorithm [41 ###reference_b41###] to convert the path information to commands. In this experiment, we follow the Oblong with a target speed of 0.9 m/s. We also create three distinct environments, Env2, Env3, Env4, as shown in TABLE I ###reference_###. To emulate the fine-tuning process in the real-world environment, each training iteration involves collecting 1500 samples (30 seconds data). Fig. 4 ###reference_###(c) depicts the training curves with the loss term defined in Eq. 9 ###reference_###.\nFrom the plot, we observe that our approach, under workloads of 3kg, 5kg, and 7kg,requires approximately 4 iterations (2 minutes), 6 iterations (3 minutes), and 8 iterations (4 minutes) of data to achieve a loss of less than 0.6. This result indicates a relatively good performance at these speeds. In comparison, the loss of PPO remains nearly unchanged with such a limited amount of samples, and thus we did not draw the result. In this way, we can demonstrate the high sample efficiency of our approach for both training and fine-tuning, adapting to different environments in both motion tracking task and path following tasks.\n###figure_4### ###figure_5###"
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Evaluation on Real World Experiments",
+ "text": "Adapting from Simulation to Reality.\nTo address the second question, we perform physical experiments using the real robot Max.\nDue to the sim2real gap, the policy trained in the simulation may fail to follow the path with the desired speed and can exhibit significant lag behind the target speed, especially at high target speeds. This underscores the necessity of real-world fine-tuning.\nWe perform three adaptation experiments on Oblong with target speeds of 0.6m/s, 0.9m/s, and 1.2m/s. To fine-tune the policy in the real world, each iteration involves collecting 30 seconds of data (1500 samples) to train the world model, followed by updating the policy network using data predicted by the adapted world model. Fig. 4 ###reference_###(d) displays the command following loss for four iterations (2 minutes data) with target speeds of 0.6m/s, 0.9m/s, and 1.2m/s on the real robot. TABLE II ###reference_### presents the averaged linear velocity error and angular velocity loss computed within a 30-second trajectory after each iteration during the real-world adaptation. It is evident that after the first iteration, there is a significant decreasing in losses. Particularly for a speed of 1.2m/s, the speed error decreases by more than 0.26m/s. After four iterations, the losses appear to converge, and the final performance is highly effective in tracking the commands. For example, Fig. 5 ###reference_###(a) depicts speed tracking at 1.2m/s on the real robot with real-world adaptation. It is evident that in the initial policy (iteration 0), the actual speed lags considerably behind the target speed. After the first iteration, the actual speed can somewhat follow the target, but it exhibits significant fluctuations. In iteration 4, the policy effectively tracks the target speed with minimal vibration.\nGeneralization Ability on Unseen Scenarios.\n###figure_6### To answer the last question, we evaluate our policy on unseen command velocities and paths. 
In the previous experiment, we collect real robot data, totaling 7.5 minutes of data with target speeds of 0.6m/s, 0.9m/s, and 1.2m/s. We utilize this data for off-policy fine-tuning to derive the adapted policy.\nWe test the generalization ability on unseen target velocities of 0.7m/s, 0.8m/s, and 1.0m/s on all paths including unseen Lemniscate, U-shape, and Star.\nTABLE III ###reference_### reports averaged linear velocity error (), angular velocity error (), and distance error () computed over four paths lasting 30 seconds each. The distance error is defined as ,\nwhere , are robot position and the target position at time . is derived by integrating the target speed with respect to time.\nFrom the table, it\u2019s evident that after off-policy adaptation, all of the errors have decreased by over one-half.\nFig. 5 ###reference_### vividly demonstrates the speed tracking to follow the oblong path on the real robot under the original policy and the adapted one. The original policy lags behind the target unseen speeds, whereas our adapted policy can follow them effectively with averaged linear velocity error of around 0.05m/s. Fig. 7 ###reference_### displays the real trajectories for tracking the path at different unseen target speeds.\nFrom the plot, it\u2019s evident that the original policy lags significantly behind the target trajectory, while our adapted policy can effectively track it, performing even slightly faster at higher speeds. In conclusion, the experimental results demonstrate that our adapted policy can successfully handle unseen commands and track unfamiliar paths, highlighting the generalization capability of our approach."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "CONCLUSIONS",
+ "text": "In summary, we have introduced an efficient learning framework designed to mimic the natural behavior of animals and enable path tracking for quadrupedal robots. Our approach begins by training a world model and a policy network, effectively turning it into an auto-encoder that utilizes the differential dynamics from the world model. This strategy significantly boosts sample efficiency, outperforming model-free deep reinforcement learning algorithms by over tenfold. Additionally, our method facilitates rapid policy fine-tuning on real robots, requiring only 2 minutes of data, and demonstrates robust generalization capabilities. Future directions could include developing a world model with perception information, allowing the framework to adapt to visual locomotion across challenging terrains. The fine-tuning algorithm can narrow the sim2real gap further and improve the success rate of visual locomotion in challenging environments. In conclusion, our work opens up exciting possibilities for training complex motor skills on real robots."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>The physical parameters of the original and new environments, where Ctrl Lat represents Control Latency.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.2\">Mass (kg)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.3\">Kp</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.4\">Ctrl Lat (ms)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.5\">Max Torque (Nm)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.1.1\">Original</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.2\">5.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.3\">50.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.4\">0.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.2.1.5\">18.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.3.2.1\">Env1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.2\">14.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.3\">40.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.4\">6.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.3.2.5\">16.2</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.4.3.1\">Env2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.2\">5.74+3.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.3\">50.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.4\">6.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.3.5\">18.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.5.4.1\">Env3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.2\">5.74+5.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.3\">50.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.4\">6.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.5.4.5\">18.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.1.6.5.1\">Env4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.6.5.2\">5.74+7.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.6.5.3\">50.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.6.5.4\">6.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.6.5.5\">18.0</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE I: The physical parameters of the original and new environments, where Ctrl Lat represents Control Latency."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>The averaged linear velocity error and angular velocity loss computed in a trajectory of 30s after each iteration. Iter0 refers to the original policy without any fine-tuning.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.10.7.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.10.7.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S4.T2.10.7.1.2\">Speed=0.6m/s</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S4.T2.10.7.1.3\">Speed=0.9m/s</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S4.T2.10.7.1.4\">Speed=1.2m/s</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.6\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.10.6.7\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.6.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.7.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.8.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.9.5.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.10.6.6\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.10.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.10.8.1.1\">Iter0</th>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S4.T2.10.8.1.2\">0.088</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.10.8.1.3\">0.587</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.10.8.1.4\">0.250</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.10.8.1.5\">0.612</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.10.8.1.6\">0.696</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.10.8.1.7\">0.501</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.9.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.10.9.2.1\">Iter1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.9.2.2\">0.055</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.9.2.3\">0.241</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.9.2.4\">0.194</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.9.2.5\">0.565</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.9.2.6\">0.431</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.9.2.7\">0.319</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.10.10.3.1\">Iter2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.3.2\">0.047</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.3.3\">0.232</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.3.4\">0.098</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.3.5\">0.297</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.3.6\">0.148</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.3.7\">0.276</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.11.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.10.11.4.1\">Iter3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.11.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.11.4.2.1\">0.038</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.10.11.4.3\">0.190</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.11.4.4\">0.078</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.11.4.5\">0.269</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.11.4.6\">0.103</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.11.4.7\">0.286</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.12.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T2.10.12.5.1\">Iter4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.12.5.2\">0.047</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.12.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.5.3.1\">0.189</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.12.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.5.4.1\">0.063</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.12.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.5.5.1\">0.249</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.12.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.5.6.1\">0.081</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.12.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.5.7.1\">0.240</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE II: The averaged linear velocity error and angular velocity loss computed in a trajectory of 30s after each iteration. Iter0 refers to the original policy without any fine-tuning."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>The averaged linear velocity error , angular velocity error , and distance error computed across four paths with unseen target speeds equal to 0.7m/s, 0.8m/s and 1.0m/s. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.18\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.18.13.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.18.13.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T3.18.13.1.2\">Oblong</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T3.18.13.1.3\">Lemniscate</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.6\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.12.6.7\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.7.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.8.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.9.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.10.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.11.5.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.12.6.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.14.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.18.14.2.1\">origin</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.14.2.2\">0.269</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.14.2.3\">0.578</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.14.2.4\">2.031</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.14.2.5\">0.224</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S4.T3.18.14.2.6\">0.642</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.14.2.7\">2.190</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.15.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.18.15.3.1\">adapted</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.15.3.2\">0.052</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.15.3.3\">0.239</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.15.3.4\">0.901</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.15.3.5\">0.057</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.15.3.6\">0.199</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.18.15.3.7\">0.725</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.16.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T3.18.16.4.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S4.T3.18.16.4.2\">U-shape</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T3.18.16.4.3\">Star</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.12\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.18.12.7\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.13.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.14.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.15.9.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.16.10.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.17.11.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.12.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.17.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.18.17.5.1\">origin</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.17.5.2\">0.233</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S4.T3.18.17.5.3\">0.572</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.17.5.4\">1.560</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.17.5.5\">0.287</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.17.5.6\">0.631</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.18.17.5.7\">2.031</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.18.18.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T3.18.18.6.1\">adapted</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.18.18.6.2\">0.053</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.18.18.6.3\">0.210</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.18.18.6.4\">0.771</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.18.18.6.5\">0.050</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.18.18.6.6\">0.234</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.18.18.6.7\">0.901</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE III: The averaged linear velocity error, angular velocity error, and distance error computed across four paths with unseen target speeds equal to 0.7m/s, 0.8m/s and 1.0m/s."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2403.01962v2_figure_1.png",
+ "caption": "Figure 1: Our robot Max follows the U-shape path after fine-tuned in the real world.",
+ "url": "http://arxiv.org/html/2403.01962v2/extracted/5477977/pic2.png"
+ },
+ "2": {
+ "figure_path": "2403.01962v2_figure_2.png",
+ "caption": "Figure 2: Overview of our learning framework. The gray block represents fixed parameters. For the command following task, the Motor Decoder is fixed when training from scratch and becomes trainable during real-world fine-tuning.",
+ "url": "http://arxiv.org/html/2403.01962v2/x1.png"
+ },
+ "3": {
+ "figure_path": "2403.01962v2_figure_3.png",
+ "caption": "Figure 4: (a) Training curves of the motion tracking task in the simulation. (b) Training curves of fine-tuning the motion tracking task policy in the modified simulation environment. (c) Mean loss of fine-tuning the path following policy in three workloads within the modified simulation environment. (d) Mean loss of fine-tuning the path following policy under various speeds on the real robot.",
+ "url": "http://arxiv.org/html/2403.01962v2/x2.png"
+ },
+ "4": {
+ "figure_path": "2403.01962v2_figure_4.png",
+ "caption": "Figure 5: Speed following at 1.2 m/s along the oblong path on the real robot with real-world adaptation.",
+ "url": "http://arxiv.org/html/2403.01962v2/extracted/5477977/Realrobot_speed_tracking1.22.png"
+ },
+ "5": {
+ "figure_path": "2403.01962v2_figure_5.png",
+ "caption": "Figure 6: Speed following along the oblong path on the real robot using the original policy and the adapted policy.",
+ "url": "http://arxiv.org/html/2403.01962v2/extracted/5477977/Realrobot_speed_tracking_star2.png"
+ },
+ "6": {
+ "figure_path": "2403.01962v2_figure_6.png",
+ "caption": "Figure 7: Path following on unseen paths under the original policy and the adapted one. The z-axis represents the time evolution, and the reference path is computed by integrating the target speed with respect to time.",
+ "url": "http://arxiv.org/html/2403.01962v2/x3.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Implementation of the pure pursuit path tracking algorithm",
+ "author": "R. C. Coulter et al.",
+ "venue": "Carnegie Mellon University, The Robotics Institute, 1992",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2403.01962v2"
+ }
20240318/2403.05822v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2403.05828v2.json ADDED
@@ -0,0 +1,155 @@
+ {
+ "title": "Multi-GPU-Enabled Hybrid Quantum-Classical Workflow in Quantum-HPC Middleware: Applications in Quantum Simulations The first two authors contributed equally to this work.",
+ "abstract": "Achieving high-performance computation on quantum systems presents a formidable challenge that necessitates bridging the capabilities between quantum hardware and classical computing resources. This study introduces an innovative distribution-aware Quantum-Classical-Quantum (QCQ) architecture, which integrates cutting-edge quantum software frameworks with high-performance classical computing resources to address challenges in quantum simulation for materials and condensed matter physics, including the prediction of quantum phase transitions. At the heart of this architecture is the seamless integration of Variational Quantum Eigensolver (VQE) algorithms running on Quantum Processing Units (QPUs) for efficient quantum state preparation, Tensor Network states, and Quantum Convolutional Neural Networks (QCNNs) for classifying quantum states on classical hardware.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Quantum Machine Learning (QML) integrates the disciplines of machine learning and quantum computing[1 ###reference_b1###], employing parameterized quantum circuits as statistical models[2 ###reference_b2###]. This technology has seen an increasing array of applications in the natural sciences[3 ###reference_b3###], generative modelling[4 ###reference_b4###], and classification problems[5 ###reference_b5###, 6 ###reference_b6###]. Due to its high expressivity[7 ###reference_b7###], QML demonstrates superior performance over conventional models across numerous domains. This includes stellar classification within large datasets[8 ###reference_b8###], leveraging an architecture that effectively bridges classical data with quantum algorithms. However, current applications are constrained by the limited number of Quantum Processing Unit (QPU) qubits and the fidelity of qubits in the Noisy Intermediate-Scale Quantum (NISQ) era[9 ###reference_b9###]. Despite these challenges, recent research conducted by Q-CTRL and IBM Quantum has made progress in enhancing the success probability of algorithms through an automated deterministic error-suppression workflow and quantum error mitigation technique[10 ###reference_b10###, 11 ###reference_b11###]. Nonetheless, the availability of qubits remains a significant issue in the design of quantum machine learning algorithms[9 ###reference_b9###].\nOn the other hand, the work of Huang et al. has demonstrated quantum advantage through the utilization of quantum data with a quantum computer[12 ###reference_b12###]. For instance, Monaco et al.\u2019s application of a Variational Quantum Eigensolver (VQE)-based quantum neural network model for quantum phase detection in the axial next-nearest-neighbour Ising (ANNNI) model illustrates its superiority[13 ###reference_b13###]. However, such architectures necessitate an increased number of qubits for learning more complex models[14 ###reference_b14###]. 
The demand for a large number of qubits can be addressed by integrating architectures through distributed quantum computing[15 ###reference_b15###], also referred to as quantum-centric supercomputing by IBM Quantum[16 ###reference_b16###]. In the realm of compact quantum devices, achieving high-fidelity qubit operations and their subsequent verification is relatively manageable. Nonetheless, the shift towards modular approaches mandates the transfer of quantum information between QPUs, thereby fostering effective qubit interactions across various modules. While this approach offers certain benefits, the ensuing inter-module communication is often characterized by slower speeds and reduced reliability, leading to the emergence of a challenge known as the quantum interconnect bottleneck (QIB)[17 ###reference_b17###]. Furthermore, the adoption of distributed quantum algorithms introduces vulnerabilities that necessitate the formulation of robust quantum protocols to ensure system integrity and security[18 ###reference_b18###].\nIn this study, we endeavor to develop a scheme that is conducive to high-performance computing (HPC) within the quantum domain [19 ###reference_b19###], specifically designed to enable distributed quantum computing through a hybrid quantum-classical workflow. This framework incorporates multi-GPU acceleration (via the cuQuantum SDK[20 ###reference_b20###]) to support quantum simulators (which serve as counterparts to noiseless QPUs, if QPUs are not available). Additionally, we integrate the latest Pennylane Lightning plugins[21 ###reference_b21###]. This component leverages CUDA-aware MPI (Message Passing Interface) boosted by NVLink\u2019s 600 GB/s bidirectional bandwidth for optimized GPU-to-GPU communication, enabling rapid distributed simulation tasks[22 ###reference_b22###]. This middleware scheme offers a synergy between classical GPU acceleration and quantum computing acceleration[23 ###reference_b23###, 19 ###reference_b19###]. 
The relevant conceptual architectures for Quantum-HPC middleware are discussed by Saurabh et al.[19 ###reference_b19###].\nEchoing the architectural paradigm introduced by Monaco et al.[13 ###reference_b13###], our methodology adopts a VQE-based ansatz to explore quantum phase transitions in both the transverse field Ising and the XXZ models[24 ###reference_b24###]. Different from the quantum-to-quantum or the quantum-to-classical architecture[13 ###reference_b13###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###], we propose a quantum-classical-quantum sandwich (QSandwich) architecture. This architecture capitalizes on the capabilities of classical convolutional neural networks (CNNs) to significantly reduce the qubit requirements for the quantum classifier.\nFor the implementation of the VQE-based ansatz, we also explore the potential of distributed-VQE approaches[27 ###reference_b27###, 28 ###reference_b28###, 25 ###reference_b25###] or the employment of multi-GPU configurations[29 ###reference_b29###] to meet the demands of large qubit requirements in complex scenarios. The QSandwich framework is designed to ensure that each quantum layer is effectively managed and utilized within a distributed quantum computing environment, leveraging small, yet high-fidelity QPU resources. This approach underscores our commitment to advancing the frontier of quantum computing by optimizing computational resources and fidelity in the execution of quantum simulations."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Applications and Algorithms",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A QSandwich: Quantum-Classical-Quantum framework for Quantum-HPC Processing",
+ "text": "In the domain of Quantum-HPC[19 ###reference_b19###], we propose a novel distribution-aware hybrid quantum-classical-quantum (QCQ) processing framework shown in Fig. 1 ###reference_###, representing an optimized approach to the computational challenges of distributed quantum computing for classifying phase transitions. Initiated by the VQE algorithm, this framework begins its operation within the quantum realm, where a specific quantum state is prepared to encode potential solutions. This quantum state is then converted into classical information via a process termed state feature selection, during which essential characteristics of the quantum state are identified and extracted for subsequent processing in classical CNN layers. This methodology introduces a structured integration of quantum and classical computing techniques, aiming to optimize the solution-finding process through the strategic manipulation and analysis of quantum data.\nThe classical segment of the QCQ framework harnesses the capabilities of CNNs to scrutinize the quantum state prepared by the VQE algorithm. This involves the application of convolutional layers to refine the data, augmented by pooling layers that distil the essence of the information. The refined data is then adeptly re-encoded into quantum form, traversing another quantum layer tasked with extracting a scalar output to classify the phase transition problem.\nThe scalar output generated by the process is not the end point but rather an intermediate stage requiring additional refinement via a sigmoid layer to improve the accuracy of the final result. The following section will provide a detailed explanation of this hybrid QCQ architecture, focusing on the interaction between its quantum and classical components. 
The integration of these computational realms aims to enhance the capabilities of the near-term Quantum-HPC ecosystem, addressing challenges with efficiency beyond what quantum or classical computing can achieve individually.\n###figure_1###"
+ },
+ {
+ "section_id": "2.1.1",
+ "parent_section_id": "2.1",
+ "section_name": "II-A1 VQE for State Preparation",
+ "text": "VQE is a common approach in ground state preparation in quantum computing[30 ###reference_b30###]. In our study, we focus on preparing the ground state for the phase-transition problem with VQE and tensor network ansatz[24 ###reference_b24###]. These methods enable efficient and accurate approximation of quantum states. Here are the key points of our approach:\nGround State Preparation:\nWe employ the VQE to approximate the ground states of given quantum Hamiltonians. The advantage of using VQE for state preparation, instead of loading state information from the Pennylane dataset[31 ###reference_b31###], is that, by knowing the circuit structure, we can reproduce the ground state completely. On the other hand, the classical-shadow information from the Pennylane dataset can help reproduce the state more accurately[32 ###reference_b32###]; however, it would not be exactly the same state.\nKnowing the circuit structure also reduces the size of the datasets, which speeds up the data reading process in quantum machine learning later on. The state information for an n-qubit state in the Pennylane datasets contains 2^n floats. On the other hand, our VQE state-preparation datasets record only the parameters of the ansatz circuits. The number of parameters varies depending on the structure and depth of the ansatz. For example, we used 100 float parameters for a 10-qubit circuit in state preparation, which reduced the size of the datasets (otherwise 1024 floats would be needed) and performed well in classification.\nAdditionally, building the state preparation ourselves allows us to prepare more ground states (100 states from the Pennylane dataset[31 ###reference_b31###] versus 1000 states from our preparation) and makes it possible to apply data augmentation.\nTensor Network Ansatz: Our approach replaces the standard unitary coupled cluster ansatz with tensor network ansatz states introduced by Uvarov et 
al.[24 ###reference_b24###], which were inspired by tensor networks, to prepare the ground states efficiently. There are several families of variational ansatz states, including rank-one circuits, tree tensor network circuits, and checkerboard-shaped circuits with varying depths. The structure of the checkerboard-shaped circuit is shown in Fig. 2 ###reference_### (b). These ansatz circuits are designed to capture different amounts of entanglement and are critical for accurately approximating the ground states of various Hamiltonians.\nWe choose the checkerboard-shaped circuits in our VQE due to their best performance in reducing the error of the ground state energy[24 ###reference_b24###]. For each entangling block in a checkerboard-shaped circuit, we used the Ising entangling gates shown in Fig. 2 ###reference_### (c).\nHamiltonian Decomposition: We represent the Hamiltonian as a sum of tensor products of Pauli operators. This decomposition allows us to estimate individual terms of the Hamiltonian expectation value variationally and minimize the energy using a hybrid process. More mathematical details are shown in Section II-B ###reference_###.\nOptimization: The parameters of the ansatz states are optimized using classical algorithms to minimize the expectation value of the Hamiltonian. This process iterates until the ground state is approximated within the desired accuracy.\n###figure_2###"
+ },
+ {
+ "section_id": "2.1.2",
+ "parent_section_id": "2.1",
+ "section_name": "II-A2 Data Augmentation",
+ "text": "To enhance the robustness of our model and improve its accuracy, we apply data augmentation techniques to the quantum states data. This involves applying rotations and spin flips to the VQE-prepared states, creating additional valid data points for training the classifier. This technique exploits the symmetries of the Hamiltonians to generate a more diverse training dataset.\nOur methodology represents an approach to predicting phase transitions in quantum systems by merging the capabilities of quantum simulation and quantum machine learning. Through the strategic use of tensor network ansatz states, a quantum classifier, and the computational power of multi-GPU frameworks facilitated by PennyLane, we achieve significant improvements in speed and scalability, paving the way for advanced studies in quantum phase transitions."
+ },
+ {
+ "section_id": "2.1.3",
+ "parent_section_id": "2.1",
+ "section_name": "II-A3 Quantum Convolutional Neural Network Classifier (Felix)",
+ "text": "Quantum Convolutional Neural Network (QCNN) is utilized to predict phase transitions, harnessing the combined strengths of convolutional layers and quantum computing layers [33 ###reference_b33###]. This approach allows for efficient feature extraction from extensive databases and the natural simulation of quantum data. The QCNN algorithm, as utilized in the QCQ architecture, incorporates a single convolutional layer connected with a pooling layer, multiple fully connected layers, and a quantum circuit layer. This architecture is illustrated in Fig. 1 ###reference_### and is explained below:\nConvolutional Layer: The model initiates its computational flow with a single convolutional layer. It utilizes a one-dimensional convolution, generating 30 output channels, with a kernel size identical to that of the feature. This design is specifically tailored for the initial extraction of features directly from the input data.\nMax Pooling Layer: One max pooling layer, configured with a kernel size of 1, is placed strategically to reduce spatial dimensions and hence, the complexity of the data passing through the network.\nFully Connected Layers: After flattening the feature map of the max pooling layer, the model incorporates two fully connected layers that serve to interpret the features extracted by the convolutional and pooling layers, culminating in the model\u2019s ability to make predictions. Notably, the second fully connected layer is specifically designed to interface with the quantum circuit layer, mapping the classical data into a quantum-compatible format.\nQuantum Circuit Layer: At the heart of QCNN approach is the quantum circuit layer, invoked through a predefined Pennylane circuit. The circuit is illustrated in Fig. 3 ###reference_###. This layer signifies the model\u2019s capacity to perform quantum computations, potentially exploiting quantum parallelism and entanglement to enhance the model\u2019s predictive capabilities. 
This quantum layer is followed by another fully connected layer.\nDropout: A dropout layer with a probability (p) of 0.5 is integrated to prevent overfitting by randomly omitting a subset of features during the training phase.\nPrediction: The sigmoid activation function is employed to transform the feature map of the final fully connected layer into a probability value ranging between 0 and 1. This mapping facilitates a clear decision-making process for binary classification tasks. The computation of the loss involves measuring the discrepancy between the sigmoid function\u2019s output and the designated target values. Upon completion of the training process, the model makes predictions based on a threshold criterion: outputs exceeding 0.5 are classified as 1, indicating the presence of phase transition, while those below 0.5 are classified as 0.\n###figure_3###"
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Hamiltonians for Solving Phase Transition",
45
+ "text": "In our study, we introduce a novel approach that combines quantum simulation with QML to classify phases of matter, addressing the computational challenges faced by classical simulation methods. By employing a variational quantum algorithm, we leverage the capabilities of quantum computers to prepare and classify labelled states derived from the VQE algorithm. This method effectively bypasses the data-reading slowdown typically encountered in quantum-enhanced machine learning applications, presenting a significant advancement in the field.\nOur work utilizes families of variational ansatz circuits inspired by tensor networks, enabling us to exploit tensor network theory to elucidate properties of phase diagrams. This approach is instrumental in our development of a quantum neural network. These results underscore the potential of integrating quantum simulation with QML to provide deep computational insights into quantum systems.\nWe represent the following Hamiltonian as a sum of tensor products of Pauli operators:\n$H = \\sum_{\\alpha} c_{\\alpha} \\bigotimes_{i=1}^{n} \\sigma_i^{(\\alpha_i)}$, (1)\nwhere the indices $\\alpha_i \\in \\{0, 1, 2, 3\\}$ enumerate the Pauli matrices $\\{I, X, Y, Z\\}$. With the decomposition (1), individual terms of $H$ can be estimated and variationally minimized elementwise using a classical-to-quantum process. In each iteration, one prepares the state $|\\psi(\\theta)\\rangle$ and measures each qubit in the local $X$, $Y$, or $Z$ basis, estimates the energy $\\langle H \\rangle$, and updates $\\theta$. This method can become scalable only if the number of terms in the Hamiltonian is polynomially bounded in the number of spins and the coefficients are defined up to poly(n) digits.\nIn particular, we use the transverse field Ising model (TFIM):\n$H = -J \\left( \\sum_i Z_i Z_{i+1} + h \\sum_i X_i \\right)$, (2)\nwhere $Z_i$ and $X_i$ are Pauli matrices acting on the $i$th spin. The constant $J$ is known as the energy scale; we set $J = 1$ in our study, which entails no loss of generality. The dimensionless constant $h$ corresponds to the transverse magnetic field.\nWhen $h < 1$, the system has two ground states, with opposite signs of magnetization. When $h > 1$, the system has one ground state with zero magnetization. 
The change of ground state marks the phase transition of the system, which happens at $h = 1$.\nThe other model we use is the antiferromagnetic XXZ spin chain model:\n$H = J \\sum_i \\left( X_i X_{i+1} + Y_i Y_{i+1} \\right) + J_z \\sum_i Z_i Z_{i+1}$, (3)\nwhere the sign of $J_z$ determines whether the system is ferromagnetic ($J_z < 0$) or antiferromagnetic ($J_z > 0$). The sign of $J$ is not important on a bipartite lattice because the states can be redefined by flipping $X_i \\to -X_i$ and $Y_i \\to -Y_i$ on every other site. We set $J = 1$ in our study.\nFor the antiferromagnetic XXZ model, when $J_z > 1$ the ground-state spins each take the opposite value to their neighbours, and the system is in the antiferromagnetic Ising state; when $-1 < J_z < 1$, the system is in the planar phase. A Berezinsky\u2013Kosterlitz\u2013Thouless type phase transition happens at $J_z = 1$ [34 ###reference_b34###].\nConclusively, our research demonstrates the feasibility and effectiveness of using quantum simulation and QML to classify phases of matter. By preparing approximate ground states variationally and employing them as inputs to a quantum classifier, we avoid the limitations of traditional Monte Carlo sampling methods. The success of our nearest-neighbour quantum neural network in accurately identifying phases of matter highlights the promising future of this interdisciplinary approach. This synergy between quantum computing and machine learning opens new avenues for exploring quantum systems, significantly impacting the fields of condensed matter physics and quantum computing."
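The TFIM phase structure described above can be checked directly for a small chain by building the Hamiltonian matrix with explicit Kronecker products (a pedagogical NumPy sketch, not the paper's VQE circuit; open boundary conditions are an assumption):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli Z

def op_on(site_op, site, n):
    """Embed a single-qubit operator at position `site` in an n-spin chain."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, site_op if i == site else I2)
    return out

def tfim_hamiltonian(n, h, J=1.0):
    """H = -J (sum_i Z_i Z_{i+1} + h sum_i X_i), open boundary chain."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on(Z, i, n) @ op_on(Z, i + 1, n)
    for i in range(n):
        H -= J * h * op_on(X, i, n)
    return H
```

Diagonalizing a 4-spin chain reproduces the two degenerate, oppositely magnetized ground states at h = 0 and a unique, gapped ground state deep in the disordered phase.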
46
+ },
47
+ {
48
+ "section_id": "2.3",
49
+ "parent_section_id": "2",
50
+ "section_name": "II-C Toward Distributed Quantum Computing: a Conceptual Architecture for a Distributed-Quantum-HPC Middleware",
51
+ "text": "To develop a conceptual architecture for distributed quantum computing with a QCQ workflow, we expand upon the concept based on the work of Saurabh et al. [19 ###reference_b19###]. This approach enables a deeper integration of quantum computing processes within distributed computing frameworks.\nFig. 4 ###reference_### delineates an integrated architecture designed to leverage distributed quantum resources for advanced computational tasks. The schematic begins with a classical scheduling layer, where a CPU disaggregates a complex problem into sub-problems. These are then converted into quantum states through GPUs in an encoding layer, facilitating the classical-to-quantum transition. At the core of the system lies the distributed-VQE layer [35 ###reference_b35###, 25 ###reference_b25###], where QPUs (linked by quantum internet and augmented by quantum repeaters) execute quantum algorithms. Finally, a hybrid quantum-classical neural network analyzes the quantum states, yielding a scalar output via a quantum classifier. This streamlined, multi-layered framework illustrates a scalable approach for executing computationally intensive tasks across distributed quantum systems, aiming to unlock new potentials in near-term quantum information processing.\n###figure_4###"
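The layered flow just described (scheduling, encoding, distributed VQE, hybrid classifier) can be summarized as a simple functional pipeline; the stage names below are illustrative stubs, not an actual middleware API:

```python
def qcq_pipeline(problem, disaggregate, encode, distributed_vqe, classify):
    """Schematic QCQ flow: scheduling -> encoding -> distributed VQE -> classifier."""
    subproblems = disaggregate(problem)                  # classical scheduling layer (CPU)
    states = [encode(p) for p in subproblems]            # classical-to-quantum encoding (GPU)
    grounds = [distributed_vqe(s) for s in states]       # distributed-VQE layer (QPUs)
    return [classify(g) for g in grounds]                # hybrid quantum-classical classifier
```

Each stage is injected as a callable, so the same skeleton runs with simulators today and could be rebound to real QPU backends later.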
52
+ },
53
+ {
54
+ "section_id": "3",
55
+ "parent_section_id": null,
56
+ "section_name": "III Result",
57
+ "text": "###figure_5### ###figure_6### As shown in Fig. 5 ###reference_###, we have successfully implemented a quantum machine learning classifier for phase transitions using the PennyLane package, enhanced by the computational acceleration of the cuQuantum SDK. In our classification task, we observed an increase in accuracy near the phase transition point when utilizing checkerboard states with increasing depth in our VQE algorithm. Notably, even lower-depth ansatz states such as rank-one, tree tensor network, and single-layer checkerboard states delivered comparable results due to the relatively simple nature of the Hamiltonian involved.\n###figure_7### We prepared a dataset of 100 data points employing a VQE with a four-layered checkerboard ansatz state. After shuffling, this dataset was divided into an 80% training set and a 20% test set. Impressively, the classifier achieved a 98.0% prediction accuracy rate. For the antiferromagnetic XXZ spin chain model, we generated 4000 data points to train the classifier, achieving a 94.6% accuracy rate on the test data after enhancing the classifier circuit with two additional layers.\nThe QCNN yields results that are both robust and satisfactory (Fig. 6 ###reference_###). In the initial training epochs, the accuracy of the model escalates significantly, demonstrating a rapid learning curve which subsequently reaches a plateau at approximately 98.0%. Further evaluation on a test set reveals that the model attains a remarkable accuracy of 99.5%.\nThe dataset employed for the QCNN model was generated utilizing the VQE on the XXZ model. The dataset comprised a total of 1000 data points. 
In adherence to standard machine learning practice, the dataset was partitioned in an 80-20 split, with 80% utilized for the training phase and the remaining 20% reserved for testing.\nOur approach demonstrates the potential of quantum-enhanced machine learning to classify phases of matter with high accuracy, outperforming classical Monte Carlo sampling-based methods. This technique is versatile and can be applied to any model expressible as a spin model, with the capability to be extended to multi-class classification problems. Our results underscore the advantages of integrating quantum simulations with advanced machine learning techniques, paving the way for new insights into the study of quantum phase transitions."
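The dataset handling above (shuffle, 80/20 split, accuracy on the held-out portion) follows standard practice; a minimal, framework-free sketch might look like this (the fixed seed is an assumption for reproducibility):

```python
import random

def train_test_split(samples, train_frac=0.8, seed=0):
    """Shuffle, then split into train/test partitions (e.g. 80%/20%)."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(predictions, targets):
    """Fraction of predicted phase labels matching the ground truth."""
    hits = sum(p == t for p, t in zip(predictions, targets))
    return hits / len(targets)
```

The same split function applies unchanged to the 100-point TFIM set, the 4000-point XXZ set, and the 1000-point QCNN set mentioned in the text.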
58
+ },
59
+ {
60
+ "section_id": "3.1",
61
+ "parent_section_id": "3",
62
+ "section_name": "III-A GPU Acceleration with cuQuantum for VQE and QCNN",
63
+ "text": "In this research work, our main task is to develop and optimize a Hybrid QML model with a distributed-quantum-computing framework for classifying phase transitions. In our demonstration, we use the PennyLane package, with a focus on computational efficiency through GPU acceleration with cuQuantum SDK. The inherent challenge in QML is the intensive computational requirement, often necessitating QPUs with high-fidelity qubits. To address this, we employ classical hardware for initial data processing and feature extraction\u2014a step integral to our QML model\u2019s training process. Our strategy utilizes the cuQuantum SDK, which offers a GPU-optimized collection of low-level primitives supporting Pennylane lightning backend. This approach accelerates quantum machine learning model execution on NVIDIA GPUs, substantially shortens training durations, and reduces computational costs. By running our applications on Denvr Dataworks\u2019s CUDA-compatible platform, we benefit from the enhanced processing capabilities of NVIDIA GPUs, which are designed for parallel computing and can therefore handle large datasets and complex operations with greater speed than conventional CPUs. The expected outcome of our work is a threefold acceleration in the training of our quantum machine learning models compared to CPU-only execution. This improvement is attributable to the GPUs\u2019 parallel processing power, which allows for quicker data processing and computation. The integration of cuQuantum with CUDA not only augments computational speed but also offers cost-effective solutions by eliminating the need for expensive quantum hardware and extensive infrastructure. 
Our objective is to make QML models more scalable and accessible, leveraging these technologies to advance research in quantum phase transitions and establish new performance benchmarks in the field of quantum computing.\n###figure_8### In line with our objectives, the benchmarking results showcase a comparative analysis of computational performance across different hardware configurations for these quantum simulations. The bar plot illustrates the time taken to execute specific tasks such as the VQE (in this case with 16 qubits), the hybrid QCNN (in this case with 3 qubits), and the aggregate computational process across Intel CPUs, Apple M1 chips, and NVIDIA A100 GPUs. The acceleration impact is significant when utilizing the A100 GPUs, with the most notable performance improvement observed with a configuration of four GPUs, suggesting near-linear scaling in this particular benchmarking scenario. This benchmarking also agrees with the results shown in Vallero et al.\u2019s work [36 ###reference_b36###].\n###figure_9### The accompanying line plot further elucidates the speedup achieved with 1, 2, and 4 A100 GPUs compared to CPU benchmarks. It becomes evident that the total speedup gained from GPU acceleration does not fully align with an ideal linear progression, which is visually represented by a grey dashed line. This deviation implies that while GPU acceleration substantially benefits computational speed, the returns diminish as more GPUs are added, possibly due to overhead associated with parallelization and inter-GPU communication. This reduced efficiency may arise because the qubit count in our case is not very large; in Vallero et al.\u2019s benchmarking [36 ###reference_b36###], the more qubits simulated, the greater the acceleration observed. 
These findings underscore the A100 GPUs\u2019 profound capabilities in enhancing performance for quantum simulation tasks, thereby underscoring their potential to facilitate and expedite complex quantum computations in research settings."
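The speedup-versus-ideal-linear comparison in the line plot reduces to two simple ratios; the timings in the example below are hypothetical placeholders (the measured numbers are in Fig. 7 and Fig. 8):

```python
def speedups(cpu_time, gpu_times):
    """Speedup of each GPU configuration relative to the CPU baseline."""
    return [cpu_time / t for t in gpu_times]

def parallel_efficiency(speedup_values, single_gpu_speedup, gpu_counts):
    """Achieved speedup as a fraction of ideal linear scaling
    (ideal: k GPUs deliver k times the single-GPU speedup)."""
    return [s / (single_gpu_speedup * k)
            for s, k in zip(speedup_values, gpu_counts)]
```

An efficiency below 1.0 for the 2- and 4-GPU points quantifies the deviation from the grey dashed line discussed above.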
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "IV Conclusion",
69
+ "text": "In this study, we propose an innovative QCQ architecture aimed at enhancing applications in distributed quantum computing and Quantum-HPC systems. This architecture integrates VQE algorithms and QCNNs, enabling rapid and accurate quantum simulations, exemplified by phase transition classification within this research. The hybrid QCQ algorithm demonstrated an exceptional 99.5% test accuracy in predicting phase transitions for models such as the transverse field Ising and XXZ. The architecture efficiently utilizes QPUs with a limited number of high-fidelity qubits, such as superconducting circuits[37 ###reference_b37###], ion traps[38 ###reference_b38###], neutral atoms[39 ###reference_b39###], and NV center diamonds[40 ###reference_b40###], to scale up the qubit count and achieve distributed quantum computing.\nAt the heart of the QCQ framework is the seamless integration of variational quantum eigensolver algorithms, tensor network states, and convolutional neural networks. This synergistic amalgamation allows for the effective preparation of quantum states through quantum simulations, followed by their classification using advanced machine learning techniques. The architecture finds applications in quantum reinforcement learning[41 ###reference_b41###] and can optimize the hybrid neural network structure using an approach[42 ###reference_b42###] or the fast weights approach[43 ###reference_b43###].\nThe success of this QCQ framework highlights the immense potential of merging quantum computing with machine learning. By surmounting the limitations inherent in classical algorithms, this method paves the way for deeper insights into the behavior of quantum systems across various sizes and configurations. 
It finds applications in fields such as materials science, condensed matter physics, and sustainability research.\nAs research in quantum hardware and algorithms progresses, architectures like QCQ will play a critical role in unlocking the transformative capabilities of quantum-enhanced computing. While challenges in further enhancing accuracy and extending applicability remain, the scalability and efficiency exhibited by this framework open up exciting avenues for future exploration. Integrating the strengths of both quantum and classical computing, the QCQ architecture represents a significant advancement towards fully leveraging the potential of quantum technologies in diverse scientific and technological domains."
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Future work",
75
+ "text": "Our upcoming research aims to enhance the QCQ architecture by harnessing the CUDA Quantum platform[44 ###reference_b44###], delivering substantial speedup and flexibility for quantum-classical computing workflows through distributed quantum simulation and seamless integration with machine learning frameworks."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {
81
+ "1": {
82
+ "figure_path": "2403.05828v2_figure_1.png",
83
+ "caption": "Figure 1: Pipeline of the Hybrid QSandwich Architecture.",
84
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/QCQ-framework.png"
85
+ },
86
+ "2": {
87
+ "figure_path": "2403.05828v2_figure_2.png",
88
+ "caption": "Figure 2: (a) Quantum circuit depicted as a tensor network with bonds of dimension 2. (b) Checkerboard tensor network circuit. [20] Each green block refers to a two-qubit entangler circuit. (c) The entangled block in the checkerboard circuit",
89
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/tensornetwork.png"
90
+ },
91
+ "3": {
92
+ "figure_path": "2403.05828v2_figure_3.png",
93
+ "caption": "Figure 3: The quantum circuit used in QCNN. In this circuit, a Hadamard gate is first applied to each qubit, followed by a rotational-Y gate. The angles of the rotational gates are the trainable parameters (Theta1, Theta2 and Theta3). At the end of this circuit, measurement is executed and the possibilities of finding 7 out of 8 quantum states (except \u201c111\u201d) are the output of this circuit. Thus, when regarded as a layer in the QCNN, this circuit has 3 input channels and 7 output channels.",
94
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/NNcircuit.png"
95
+ },
96
+ "4": {
97
+ "figure_path": "2403.05828v2_figure_4.png",
98
+ "caption": "Figure 4: The schematic represents a distributed quantum computing architecture, illustrating the hybrid QCQ framework for classifying phase transitions. This architecture combines CPU-based scheduling with quantum processing across multiple layers and employs GPUs for state encoding. The dashed lines represent omitted n\ud835\udc5bnitalic_n blocks in parallel due to size constraints of the image",
99
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/distributed-framework.png"
100
+ },
101
+ "5(a)": {
102
+ "figure_path": "2403.05828v2_figure_5(a).png",
103
+ "caption": "Figure 5: (a)Predicted phases as a function of h for the TFIM model. (b) The predicted probability of phases as a function of Jz for the XXZ model. Positive prediction of label II represents phase II, which is above the dashed lines. The theoretically phase II (disorder phase) is the areas on the right-hand side of the red lines (shown in red color).",
104
+ "url": "http://arxiv.org/html/2403.05828v2/x1.png"
105
+ },
106
+ "5(b)": {
107
+ "figure_path": "2403.05828v2_figure_5(b).png",
108
+ "caption": "Figure 5: (a)Predicted phases as a function of h for the TFIM model. (b) The predicted probability of phases as a function of Jz for the XXZ model. Positive prediction of label II represents phase II, which is above the dashed lines. The theoretically phase II (disorder phase) is the areas on the right-hand side of the red lines (shown in red color).",
109
+ "url": "http://arxiv.org/html/2403.05828v2/x2.png"
110
+ },
111
+ "6": {
112
+ "figure_path": "2403.05828v2_figure_6.png",
113
+ "caption": "Figure 6: The evolution of loss (left) and accuracy (right) in the QCNN model. As the number of epochs increases, the model\u2019s loss decreases exponentially. The model\u2019s accuracy improves significantly in the initial epochs and then stabilizes at around 98%. And test accuracy reaches 99.5%.",
114
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/training.png"
115
+ },
116
+ "7": {
117
+ "figure_path": "2403.05828v2_figure_7.png",
118
+ "caption": "Figure 7: Benchmarking of computation times for VQE, Hybrid QCNN, and Total processes, comparing performance across Intel CPU (i7-13700KF), Apple M1 pro (10-core CPU), and 1, 2, and 4 NVIDIA A100 GPUs as quantum simulators.",
119
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/processing.png"
120
+ },
121
+ "8": {
122
+ "figure_path": "2403.05828v2_figure_8.png",
123
+ "caption": "Figure 8: The line graph conducts a linearity performance check of multi-GPU settings for quantum machine learning training.",
124
+ "url": "http://arxiv.org/html/2403.05828v2/extracted/5475718/figures/linearity.png"
125
+ }
126
+ },
127
+ "validation": true,
128
+ "references": [
129
+ {
130
+ "1": {
131
+ "title": "https://developer.nvidia.com/cuquantum-sdk, 2021.",
132
+ "author": "NVIDIA, cuQuantum SDK: Simulating quantum circuits on GPUs.",
133
+ "venue": "Accessed: February 23, 2023.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "2": {
139
+ "title": "https://pennylane.ai/datasets/qspin/ising-model, 2023.",
140
+ "author": "D. G. Utkarsh Azad, Pennylane spin datasets.",
141
+ "venue": null,
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "3": {
147
+ "title": "url: http://dx.doi.org/10.1007/978-3-319-48487-7, doi:10.1007/978-3-319-48487-7.",
148
+ "author": "F. Franchini, An Introduction to Integrable Techniques for One-Dimensional Quantum Systems, Springer International Publishing, 2017.",
149
+ "venue": null,
150
+ "url": "http://dx.doi.org/10.1007/978-3-319-48487-7"
151
+ }
152
+ }
153
+ ],
154
+ "url": "http://arxiv.org/html/2403.05828v2"
155
+ }
20240318/2403.06467v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240318/2403.08282v2.json ADDED
@@ -0,0 +1,745 @@
1
+ {
2
+ "title": "Hierarchical Auto-Organizing System for Open-Ended Multi-Agent Navigation",
3
+ "abstract": "Due to the dynamic and unpredictable open-world setting, navigating complex environments in Minecraft poses significant challenges for multi-agent systems. Agents must interact with the environment and coordinate their actions with other agents to achieve common objectives. However, traditional approaches often struggle to efficiently manage inter-agent communication and task distribution, crucial for effective multi-agent navigation. Furthermore, processing and integrating multi-modal information (such as visual, textual, and auditory data) is essential for agents to fully comprehend their goals and navigate the environment successfully. To address this issue, we design the HAS framework to auto-organize groups of LLM-based agents to complete navigation tasks. In our approach, we devise a hierarchical auto-organizing navigation system, which is characterized by 1) a hierarchical system for multi-agent organization, ensuring centralized planning and decentralized execution; 2) an auto-organizing and intra-communication mechanism, enabling dynamic group adjustment under subtasks; 3) a multi-modal information platform, facilitating multi-modal perception to perform the three navigation tasks with one system. To assess organizational behavior, we design a series of navigation tasks in the Minecraft environment, which includes searching and exploring. We aim to develop embodied organizations that push the boundaries of embodied AI, moving it towards a more human-like organizational structure.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Drawing on large language models (LLMs) with human knowledge, communicative competence Mao et al. (2020a ###reference_b45###), and decision-making capabilities, embodied agents exhibit human-like intelligence in playing games (Park et al., 2023 ###reference_b50###; Wang et al., 2023a ###reference_b58###; Zhao et al., 2023 ###reference_b71###), programming (Qian et al., 2023 ###reference_b52###; Hong et al., 2023 ###reference_b24###), and robotic tasks (Zhang et al., 2023b ###reference_b70###; Mandi et al., 2023 ###reference_b43###).\nThe development of individual intelligence has led to the creation of a new, collaborative framework where multiple agents Chen et al. (2023a ###reference_b7###; b ###reference_b10###) work together.\nThis shift towards a multi-agent system Hong et al. (2023 ###reference_b24###); Chen et al. (2023a ###reference_b7###) uses advanced language understanding and decision-making abilities to enhance intelligence through interactions and data sharing. Agents in this system specialize in various tasks, sharing insights to enhance overall efficiency. This collaborative approach not only optimizes execution but also fosters a learning environment where agents continually improve through shared intelligence. Their interaction leads to refined skills, equipping them to handle complex, large-scale tasks through parallel processing and coordinated collaboration, a notable advancement in autonomous systems.\nMulti-modal navigation (Wijmans et al., 2019 ###reference_b63###; Yu et al., 2021 ###reference_b67###; Du et al., 2020 ###reference_b17###; Kwon et al., 2023 ###reference_b27###; Chen et al., 2021 ###reference_b8###; Moudgil et al., 2021 ###reference_b48###) stands at the forefront of contemporary AI research, representing a rapidly evolving field that aims to integrate various sensory inputs into a cohesive navigational strategy. 
This emerging domain extends the capabilities of multi-agent systems by requiring them to interpret and act upon a confluence of sensory data.\nUnlike traditional methods focusing on rendered images or static virtual environments, works on dynamic environments Deng et al. (2023 ###reference_b15###), such as multi-modal navigation in open-ended environments like Minecraft, pose a more complex challenge. Agents in Minecraft Wang et al. (2023a ###reference_b58###); Zhao et al. (2023 ###reference_b71###) navigate a world full of freedom and variability, making it an ideal testbed for embodied agent systems. Minecraft\u2019s unpredictable and open-ended environment is an ideal testing ground for advanced AI systems. These systems emulate human adaptability and intelligence, processing and sharing multi-modal data to strategically navigate and complete tasks. This approach goes beyond simple improvements, marking a significant leap in developing autonomous systems that handle complex tasks with remarkable autonomy and skill.\nIntegrating Multi-modal Language Models (MLMs) into embodied agents enhances navigational efficiency. Equipped with LLMs, these agents engage intelligently with their environment. The zero-shot performance capabilities of LLMs enable them to interpret and act upon data without explicit programming. This makes them ideal for open-ended environments.\nOur vision is centered on employing MLM-based embodied agents to revolutionize navigation within the intricate landscapes of Minecraft. Environments in this game present unique challenges, such as the need to respond to image, audio, and object cues in real-time, as highlighted in Figure 1 ###reference_###. By harnessing the zero-shot learning capabilities and the nuanced decision-making abilities of LLMs, our agents can navigate these spaces with efficiency and adaptability akin to human intuition. 
Such versatility is achieved without requiring exhaustive retraining or complex reconfigurations, marking a substantial leap towards autonomous systems that engage with their surroundings in more sophisticated and human-like ways.\nWe summarize our contributions as follows:\nWe introduce HAS, a hierarchical structure for multi-agent navigation based on LLMs in the Minecraft environment. It utilizes centralized planning with decentralized execution, enabling efficient multi-modal navigation in open-ended environments.\nWe design an auto-organizing and intra-communication mechanism to dynamically adjust the key role and action group based on the task allocation and maintain inter-group communication to ensure efficient collaboration.\nWe achieve state-of-the-art performance on the asynchronous multi-modal navigation task on image, audio, and object goals in Minecraft\u2019s open-ended environment."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Works",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Intelligent Agent in Minecraft",
21
+ "text": "As an open-ended sandbox game, Minecraft has always been an ideal setting for testing the performance of intelligent agents (Johnson et al., 2016 ###reference_b26###; Hofmann, 2019 ###reference_b23###). The agents must autonomously perform various tasks in Minecraft, such as chopping trees, crafting tools, and mining diamonds. Early on, much of the work focused on reinforcement learning (Lin et al., 2021 ###reference_b33###; Mao et al., 2022 ###reference_b46###; Skrynnik et al., 2021 ###reference_b54###; Lifshitz et al., 2023 ###reference_b32###) or imitation learning (Amiranashvili et al., 2020 ###reference_b3###; Baker et al., 2022 ###reference_b4###), without satisfactory performance. VPT (Baker et al., 2022 ###reference_b4###) and MineDojo (Fan et al., 2022 ###reference_b19###) collect internet-scale datasets for their model pre-training. VPT learns to act directly during video pre-training and uses these behaviors as exploration priors for reinforcement learning.\nYet, recent works have found that pre-trained LLMs can serve as a strong \u201cmind\u201d that provides planning ability to the agents. Voyager (Wang et al., 2023a ###reference_b58###) is a single-robot multiple agent system that leverages multiple groups of GPT-4 (OpenAI, 2023 ###reference_b49###) as a high-level planner, low-level action code generator, critic generator, and curriculum manager. Plan4MC (Yuan et al., 2023 ###reference_b68###) proposes a skill graph pre-generated by the LLMs. DEPS (Wang et al., 2023e ###reference_b62###), an interactive planning method based on LLMs, addresses multi-step reasoning issues in open-world planning. GITM (Zhu et al., 2023b ###reference_b75###) develops a set of structured actions and leverages LLMs to generate action plans for the agents to execute, achieving impressive results in various tasks."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Embodied Multimodal Model",
27
+ "text": "Embodied agents integrate sensory perceptions, physical actions, and computational intelligence to accomplish tasks and goals within their environment. Key areas are wide-ranging, including Navigation (Wijmans et al., 2019 ###reference_b63###; Yu et al., 2021 ###reference_b67###; Du et al., 2020 ###reference_b17###; Kwon et al., 2023 ###reference_b27###; Chen et al., 2021 ###reference_b8###; Moudgil et al., 2021 ###reference_b48###), Embodied Question Answering (Das et al., 2018 ###reference_b12###; Yu et al., 2019 ###reference_b66###; Datta et al., 2022 ###reference_b13###), Active Visual Tracking (Luo et al., 2018 ###reference_b38###; Zhong et al., 2021 ###reference_b73###; Luo et al., 2019 ###reference_b39###; Zhong et al., 2019 ###reference_b72###), and Visual Exploration (Liu & Okatani, 2022 ###reference_b36###; Dean et al., 2020 ###reference_b14###; Chen et al., 2018 ###reference_b9###). The field is evolving rapidly with the development of Large Language Models (LLMs) (Song et al., 2022 ###reference_b55###) and Multimodal LLMs (MLLMs) (Alayrac et al., 2022 ###reference_b2###; Zhu et al., 2023a ###reference_b74###; Li et al., 2023a ###reference_b29###; 2022 ###reference_b30###; b ###reference_b31###; Gong et al., 2023 ###reference_b21###; Lyu et al., 2023 ###reference_b40###; Ye et al., 2023 ###reference_b65###; Dai et al., 2023 ###reference_b11###; Wang et al., 2023c ###reference_b60###; Liu et al., 2023a ###reference_b34###; Maaz et al., 2023 ###reference_b42###; Su et al., 2023 ###reference_b56###; Gao et al., 2023 ###reference_b20###), integrating multiple modalities for more effective processing. A prime example of this innovation is PaLM-E (Driess et al., 2023 ###reference_b16###), a sophisticated multimodal model with 562B parameters, adept at a broad spectrum of embodied tasks and demonstrating exceptional capabilities in visual reasoning."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "LLM-based Multi-Agent Frameworks",
33
+ "text": "Large Language Models (LLMs) are skilled at completing new tasks when given prompt-based instructions. Autonomous agents based on Large Language Model-based (LLM-based) models have gained significant interest in industry and academia (Wang et al., 2023b ###reference_b59###).\nSeveral works (Wang et al., 2023d ###reference_b61###; Du et al., 2023 ###reference_b18###; Zhuge et al., 2023 ###reference_b76###; Hao et al., 2023 ###reference_b22###; Akata et al., 2023 ###reference_b1###; Zhang et al., 2023a ###reference_b69###) have augmented the problem-solving abilities of LLMs by incorporating discussions among multiple agents. Stable-Alignment (Liu et al., 2023b ###reference_b35###) generates instruction datasets by reaching a consensus on value judgments through interactions among LLM agents in a sandbox. Some works in the field of artificial intelligence focus on studying sociological phenomena. For instance, Generative Agents (Park et al., 2023 ###reference_b50###) creates a virtual \u201ctown\u201d comprising 25 agents to investigate language interaction, social understanding, and collective memory. The Natural Language-Based Society of Mind (Zhuge et al., 2023 ###reference_b76###) involves agents with different functions interacting to solve complex tasks through multiple rounds of Mindstorms. In addition, others (Cai et al., 2023 ###reference_b6###) propose a model for cost reduction by combining large models as tool makers and small models as tool users.\nSome works emphasize cooperation and competition related to planning and strategy (Bakhtin et al., 2022 ###reference_b5###), some propose LLM-based economies (Zhuge et al., 2023 ###reference_b76###), and others propose LLM-based programming (Hong et al., 2023 ###reference_b24###). Developing an LLM-based multiple embodied agent system with control capability is challenging. We face difficulties in multi-agent cooperation, such as low efficiency of global perception and generalizing dynamic groups. 
To overcome these challenges, we aim to apply a hierarchical auto-organizing system in multi-agent navigation."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "HAS: A Multi-agent Navigation Framework",
39
+ "text": "###figure_1### As shown in Figure 2 ###reference_###, the HAS is a hierarchical LLM-based multi-modal multi-agent navigation framework denoted as , which can manage and execute complex multi-agent navigation tasks on image (), object (), and audio () goals with perception on the state list of vision (), audio (), and other properties within open-ended environments by leveraging cognitive and collaborative capabilities of the multi-modal language model ():\nwhere represents the state list of vision, audio, and other properties of one conductor agent, is the original goal, and is the dynamic map. Then, we get as the action for the conductor agent .\nThe Hierarchical architecture consists of two primary operational domains: higher-order centralized planning, which is managed by the manager multi-modal language model (), and ground-level decentralized execution, which is conducted by the conductor model (), then the action of the conductor can be obtained as follows,\nwhere and represent the multi-modal language models of the conductor and manager agents."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Centralized Planning with Decentralized Execution",
45
+ "text": "Evoked from the Centralized Training with Decentralized Execution (CTDE) framework (Hu et al., 2023 ###reference_b25###; Sunehag et al., 2017 ###reference_b57###; Lowe et al., 2017 ###reference_b37###; Mao et al., 2018 ###reference_b44###) for cooperative Multi-Agent Reinforcement Learning (MARL), we propose Centralized Planning with Decentralized Execution (CPDE) for the cooperative LLM-based Multi-Agent system. This architecture leverages global state information to inform the planning process, while the execution of tasks is carried out by local agents independently. CPDE is tailored to maximize the synergy of global oversight and local autonomy, ensuring efficient navigation and task completion in complex environments.\nThe centralized planning process for a manager agent simulates a high-level understanding like that of a human planner. The process includes understanding the environment\u2019s dynamic global states, recognizing the conductor agents\u2019 capabilities and limitations, and devising a strategy. It contains consisting of 4 MLM modules with different functions for the manager as mentioned in Section 3.2 ###reference_### and a Global Multi-modal Memory for storing multi-modal information as mentioned in Section 3.3 ###reference_###.\nThe decentralized execution process is designed to capitalize on the autonomy and flexibility of conductor agents . These agents navigate the environment, perform tasks, and learn from their interactions guided by the strategic direction from the centralized planning of the manager agent. Note that these conductors can deploy several action agents for similar goals of one sub-goal. 
It contains comprising 4 MLM modules for conductors with different functions as mentioned in Section 3.2 ###reference_### and a Local Multi-modal Memory for storing multi-modal information as mentioned in Section 3.3 ###reference_###.\nThe auto-organizing mechanism is a spontaneous grouping mechanism to promote multi-agents\u2019 efficiency. In the centralized planning phase, the manager agent auto-organizes several conductor agents based on the global environment and tasks, which takes advantage of the planning capabilities of MLM and is inspired by AutoAgents (Chen et al., 2023a ###reference_b7###) to deploy agents with different roles automatically. During the decentralized execution stage, we are inspired by the Self-Organized Group (SOG) (Shao et al., 2022 ###reference_b53###) to solve the issue of zero-shot generalization ability with dynamic team composition and varying partial observability. We improve and use a novel auto-organizing mechanism. Each group has conductor-action agent consensus, where the action agents can only communicate with their conductor. Additionally, we utilized the summarization ability (Ma et al., 2023 ###reference_b41###) from LLM to summarize and distribute the messages received to all affiliate group members to hold a unified schedule."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Multi-modal Language Model",
51
+ "text": "The Multi-modal Language Model , consisting of two different types of MLM of the manager and conductor agents. , which consists of Planner, Describer, Critic, and Deployer for the manager. They formulate aligned task plans, condense and translate multi-modal data, refine strategies through feedback, and assign and direct agent subtasks respectively.\n, which consists of Actor, Curriculum, Critic, and Skill module for conductors. They translate strategic plans into executable actions, orchestrate dynamic group formations, and distribute tasks across agents, ensuring alignment with centralized directives and facilitating continuous learning and adaptation through a curriculum of complex tasks.\nOur approach integrates the environment\u2019s observations and task directives to plan actions based on the current scenario. We begin by translating multi-modal observations into textual descriptions, utilizing a method that avoids direct scene captioning by the MLM. Instead, we extract keywords for items from the STEVE-21K dataset (Zhao et al., 2023 ###reference_b71###) and employ GPT-4 to craft sentences that articulate these observations. The MLM identifies relevant condition sentences from textual observations during the planning phase. It also incorporates additional context, such as biome types and inventory levels, into text formats via predefined templates. We generate action plans by re-engaging the MLM\u2019s linguistic component with the task instructions and these descriptive texts. This methodology leverages the MLM\u2019s capabilities in a layered manner, yielding more accurate situational descriptions and plans that significantly reduce the likelihood of generating unrealistic elements compared to fully integrated models.\nHAS enhances its planning through a closed-loop feedback mechanism, automatically correcting failures by analyzing feedback and identifying errors using its self-explanation capabilities. 
Unlike other agents, it generates improved plans without human input or extra information. Additionally, HAS simulates and evaluates each plan step to identify potential flaws early, reducing the likelihood of encountering difficult situations due to plan failures. This proactive approach enables it to foresee issues like insufficient resources, which could hinder task completion."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Multi-modal Memory",
57
+ "text": "Research (Hong et al., 2023 ###reference_b24###) has shown that memory mechanisms play a crucial role in the functioning of generalist agents. Equipping HAS with multi-modal memory enables it to plan using pre-existing knowledge and real-world experiences, improving planning accuracy and consistency. The MLM in HAS allows leveraging these experiences in context without requiring additional model updates.\nWe have illustrated the design of our multi-modal memory system. At a high level, this system is a key-value memory model with multi-modal keys comprising both the task and the observation of the state when this memory entry is created. The values stored in this memory system are the plans that were successfully executed. Since the plans are in an open-ended environment, Minecraft, there could be multiple entries with the same task but different observations and plans. As a result, HAS needs to generate multi-modal queries based on the current task and situations to retrieve the relevant memory entries.\nRetrival-augmented storage (RAS) enables long-term planning capability by Retrieval-Augmented Generation (RAG) (Lewis et al., 2020 ###reference_b28###; Mao et al., 2020b ###reference_b47###). RAG improves the quality of responses generated by language models with external sources of knowledge to complement the model\u2019s internal representation. Instead of external knowledge libraries, we use the collected multi-modal memory as the knowledge library and retrieve interactive experiences as demonstration prompts to improve the planning results.\nThe formulation is as follows:\nwhere , , and represent instructions, plans, and retrieved memory entries. and denote retrieval and planning models. This retrieval-augmented planning method helps HAS to ground its internal knowledge into open-ended environments efficiently. 
It also leverages historical interaction feedback to solve hallucinations within LLMs and produce more accurate plans.\nMulti-modal retrieval (MMR) enables efficient access to a rich repository of multi-modal memories. This process is initiated with a query containing textual and visual elements. To align this query with the trajectories stored within the multi-modal memory, we utilize the manager\u2019s Describer . The Describer converts visual information into a textual format. This textual description serves as an image tag, amalgamating with other textual data or as a textual representation for audio information.\nWhen a retrieval request is made from the multi-modal memory, especially when the information is an amalgamation of image and text, the Describer module is employed to transcribe the image into text. Subsequently, this description is used to compute the similarity across the multi-modal memory entries. The top-k most similar entries are retrieved for further processing. The formalization of this retrieval process is as follows:\nwhere denotes the textual query, represents the visual query, and signifies the retrieval function. The function computes the similarity between the multi-modal memory entries and the query, using the textual description provided by .\nThe dynamic map visually represents the exploration domain, showcasing only the areas the agents have explored. It is the main way for the manager agent to see the global environment. When using the map, it will be used in the form of an image. It is updated in real-time with information from all action agents, providing a strategic overview of the environment. 
This map is instrumental in planning and executing navigation tasks as it reflects the current knowledge and discoveries made by the agent collective.\nThe following equations can formalize the dynamic map\u2019s function:\nWhere represents the dynamic map at time , is the state information from agent at time , including text data like the place name, special materials. is the function that integrates the text data into the map, and is the number of active agents. This real-time updating mechanism ensures that the dynamic map remains an accurate and current representation of the exploration field."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experiment",
63
+ "text": "Our experiments aim to achieve three goals using HAS in challenging Minecraft navigation tasks. Firstly, we want to evaluate the performance of HAS against baselines that do not fully address the issues faced by open-world agents. Secondly, we aim to understand the factors that contribute to these results. Lastly, we want to explore the potential of HAS for life-long learning and its benefits in long-horizon tasks. We will first introduce the evaluation settings, present the main comparative results and ablation studies, and conclude with an exploratory trial on long-horizon tasks."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Experimental Setups",
69
+ "text": "We select gpt-4-1106-vision-preview (Yang et al., 2023 ###reference_b64###) as the base model. Our simulation environment is based on MineDojo (Fan et al., 2022 ###reference_b19###) and uses Mineflayer (PrismarineJS, 2013 ###reference_b51###) APIs for motor controls. The maximum number of robots that can be allocated based on this environment is 8, which is also our experimental robots\u2019 upper limit."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Baselines",
75
+ "text": "Currently, no MLM-driven multi-agents (robots) work out of the box for Minecraft, so we carefully selected several representative algorithms as baselines for our experiment. They rely on extracting information from a system\u2019s backend, presenting a significant divergence from real-world scenarios.\n(Wang et al., 2023a ###reference_b58###) is a blind, single-robot system, relying only on textual grounding for perception. It has a long-term procedural memory that stores a hierarchical library of code-based grounding procedures. Complex skills can use simpler skills as sub-procedures. Voyager is known for its ability to explore areas and master the tech tree. However, its main focus is to prompt GPT-4 (OpenAI, 2023 ###reference_b49###) on background text messages in embodied agents. We use multiple hosts to deploy multiple models to work directly on the server. We convert the input of the image goal into a text task through GPT-4V, using the same Describer module as ours.\n(Zhao et al., 2023 ###reference_b71###) is a multi-modal single-robot system that combines the vision unit with the STEVE-13B with the code database. It focuses on introducing a visual module to endow the model with visual perception capabilities in processing visual perception information and handling task reasoning for skill-related execution. Similarly, using multiple hosts, we deploy multiple models to work directly on the server. This single-agent method optimizes performance. We also convert the input of the image goal into a text task by using the same Describer module as ours."
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Task setting",
81
+ "text": "We set the world on peaceful mode and agents always start in survival mode with an empty inventory so that navigation tasks can be performed without interruption. Meanwhile, we choose over 15 environment seeds of 6 different terrains from the STEVE-21K dataset (Zhao et al., 2023 ###reference_b71###) for evaluation. The tasks we chose mainly test the efficiency of long-distance directivity navigation, short-range non-directional navigation, and free-world exploration.\nA task is considered successful when the target object is close to the target or when a certain number of targets are reached. Due to the open-world nature of Minecraft, the world and initial position that the agent is spawned at could vary a lot. Therefore, we conduct at least 30 tests for each task and report the average efficiency and success rate to ensure a thorough assessment."
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "Evaluation Results",
87
+ "text": "Multi-modal goal search includes goals of Image, Object, and Audio.\nObject labels identify in-game items such as villages, pyramids, and animals. Image labels help locate objects using images. Audio labels are used to detect sounds outside of the player\u2019s range. Due to an insufficient audio library, we set the perceptible range around the target and passed it to the agent as text. It is similar to finding objects at close range, expressing distance through feedback intensity values.\nAs shown in Table 1 ###reference_###, HAS achieves the best performance as a multi-agent system. Due to the auto-organizing mechanism, multiple agents can be better planned than directly adding multiple free agents. Even with a single-agent system, our performance is still the best. This is because our system decomposes tasks layer by layer, allowing both managers and action agents to maintain focus on their specific tasks. With the help of dynamic maps, the agent can be dynamically updated to understand the environment better globally. Our method has sufficient potential for performance growth. With the improvement of robots, it does not require too many robots to find a reasonable number of robots in the audio goal at close range.\nContinuous block search is a close exploration mission to assess the agent\u2019s exploratory capabilities and proficiency in locating multiple blocks in a row. Diamond blocks are placed at every 16-block interval across the mainland map.\nAs shown in Table 2 ###reference_###, we experiment with the block-searching task (Zhao et al., 2023 ###reference_b71###) to assess the agent\u2019s exploratory capabilities and proficiency in locating specified blocks. Dynamic map is to identify as many blocks as possible within the fewest iterations, which indicates the method\u2019s efficiency. 
The dynamic map and the self-organization of the hierarchical structure allow more blocks to be found and deployed.\nMap exploration aims to let the agent update the map as much as possible. We set up the same status awareness: when in an unreached area, status information prompts the agent in text. We set each step\u2019s maximum movement distance not to exceed 50 blocks.\nAs shown in Table 3 ###reference_###, HAS has achieved the best performance, especially on multi-agent, which can better reflect the hierarchical structure. It is of considerable value for macro-control of multi-agent exploration and avoids agents repeatedly wandering in the explored area."
88
+ },
89
+ {
90
+ "section_id": "4.5",
91
+ "parent_section_id": "4",
92
+ "section_name": "Ablation Study",
93
+ "text": "To understand the impact of different components on the performance of our system, we conducted ablation studies. The results, as shown in Table 4 ###reference_###, provide insights into the effectiveness of the dynamic map and auto-organizing mechanism. Note that in w/o AO, we directly block the planner and deployer modules of the Manager to remove the auto-organizing mechanism and only retain the describer\u2019s perception of the environment, including dynamic maps into status information, thereby directly acting on the same target task on a fixed number of agents.\nDynamic map is a multi-modal information body including images and status information. It can integrate data in the simplest way to provide global understanding, reduce additional overhead and provide the possibility of macro layout for multi-person cooperation.\nAuto-organizing mechanism can dynamically adjust dynamic groups and automatically assign tasks based on the existing environment, allowing efficient multi-person cooperation."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "In conclusion, HAS brings a significant advancement to multi-agent systems for complex environment navigation. Our contributions include a hierarchical auto-organizing system that ensures effective centralized planning and decentralized execution, an innovative auto-organizing and intra-communication mechanism for dynamic task adaptation, and a multi-modal information platform that integrates diverse sensory inputs. These advancements collectively enhance the autonomy, efficiency, and adaptability of AI agents, marking a pivotal step forward in the field of Embodied AI."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.6\" style=\"width:433.6pt;height:143.6pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-2.1pt,0.7pt) scale(0.990240884444566,0.990240884444566) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T1.6.6.7.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.6.6.7.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T1.6.6.7.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.6.6.7.1.2.1\"># agents</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T1.6.6.7.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.7.1.3.1\">Image Goal</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T1.6.6.7.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.7.1.4.1\">Object Goal</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T1.6.6.7.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.7.1.5.1\">Audio Goal</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2.2\">success rate\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.3.3.3\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column 
ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4.4\">success rate\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.5.5\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.6.6\">success rate\u00a0()</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.8.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.8.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.6.6.8.1.1.1\">Voyager</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.8.1.2\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.8.1.3\">95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.8.1.4\">0.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.8.1.5\">64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.8.1.6\">0.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.8.1.7\">21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.8.1.8\">0.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.9.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.6.6.9.2.1\">3 / 2 / 5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.2.2\">45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.9.2.3\">0.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.2.4\">36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.9.2.5\">0.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.2.6\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.9.2.7\">0.85</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.10.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.10.3.1\" 
rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.6.6.10.3.1.1\">STEVE</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.10.3.2\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.10.3.3\">85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.10.3.4\">0.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.10.3.5\">71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.10.3.6\">0.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.10.3.7\">13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.10.3.8\">0.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.11.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.6.6.11.4.1\">5 / 5 / 4</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.4.2\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.11.4.3\">0.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.4.4\">29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.6.6.11.4.5\">0.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.4.6\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.4.7\">0.82</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.12.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.12.5.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.12.5.1.1\">HAS\u00a0(Ours)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.12.5.2\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.12.5.3\">27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.12.5.4\">0.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T1.6.6.12.5.5\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.12.5.6\">0.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.12.5.7\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.12.5.8\">0.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.13.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T1.6.6.13.6.1\">8 / 7 / 3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.13.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.2.1\">6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.6.6.13.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.3.1\">0.84</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.13.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.4.1\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T1.6.6.13.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.5.1\">0.95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.13.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.6.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.13.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.13.6.7.1\">0.99</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.1\">Comparison on goal search task.</span> # iters represent the average number of iterations required to finish each task (find goals) with a maximum of 100 prompting iterations\u00a0(only within the statistical range). The success rate is for task fulfillment. 
We list the one-agent and best performance with the number of agents\u00a0(from left to right represents different goals) on the bottom line for each method according to # agents. Note that due to dynamic robots, this number for our method refers to the peak number.</figcaption>\n</figure>",
106
+ "capture": "Table 1: Comparison on goal search task. # iters represent the average number of iterations required to finish each task (find goals) with a maximum of 100 prompting iterations\u00a0(only within the statistical range). The success rate is for task fulfillment. We list the one-agent and best performance with the number of agents\u00a0(from left to right represents different goals) on the bottom line for each method according to # agents. Note that due to dynamic robots, this number for our method refers to the peak number."
107
+ },
108
+ "2": {
109
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.2\" style=\"width:216.8pt;height:71.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-83.0pt,27.3pt) scale(0.56638008962837,0.56638008962837) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.2.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.2.2.2.3\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.2.2.4\"># agents</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.2.2.2.2\"># blocks\u00a0()</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.3.1.1\" rowspan=\"2\">\n<span class=\"ltx_text\" id=\"S4.T2.2.2.3.1.1.1\">Voyager</span><cite class=\"ltx_cite ltx_citemacro_citep\">(Wang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.08282v2#bib.bib58\" title=\"\">2023a</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.3.1.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.3.1.3\">34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.3.1.4\">28</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.4.2.1\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.4.2.2\">14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.4.2.3\">81</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.5.3\">\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.5.3.1\" rowspan=\"2\">\n<span class=\"ltx_text\" id=\"S4.T2.2.2.5.3.1.1\">STEVE</span><cite class=\"ltx_cite ltx_citemacro_citep\">(Zhao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.08282v2#bib.bib71\" title=\"\">2023</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.5.3.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.5.3.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.5.3.4\">65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.6.4.1\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.6.4.2\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.6.4.3\">211</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.7.5.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.7.5.1.1\">HAS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.7.5.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.7.5.3\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.7.5.4\">68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.8.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.2.8.6.1\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.2.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.8.6.2.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.2.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.8.6.3.1\">367</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T2.4.1\">Comparison on continuous block search task.</span> # iters represents the average number of iterations required to locate the first ten diamond blocks, with a maximum of 100 prompting iterations; lower values indicate higher task completion efficiency. # blocks denotes the average number of diamond blocks found over 100 iterations; higher values indicate better performance. For each method, we list the single-agent result and the best result, together with its # agents, on the bottom line.\n</figcaption>\n</figure>",
110
+ "capture": "Table 2: Comparison on continuous block search task. # iters represents the average number of iterations required to locate the first ten diamond blocks, with a maximum of 100 prompting iterations; lower values indicate higher task completion efficiency. # blocks denotes the average number of diamond blocks found over 100 iterations; higher values indicate better performance. For each method, we list the single-agent result and the best result, together with its # agents, on the bottom line.\n"
111
+ },
112
+ "3": {
113
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.2\" style=\"width:216.8pt;height:75.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-72.8pt,25.3pt) scale(0.598106665833768,0.598106665833768) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.2.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T3.2.2.2.3\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.2.2.2.4\"># agents</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.1\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.2.2.2.2\">area\u00a0()</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.3.1.1\" rowspan=\"2\">\n<span class=\"ltx_text\" id=\"S4.T3.2.2.3.1.1.1\">Voyager</span><cite class=\"ltx_cite ltx_citemacro_citep\">(Wang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.08282v2#bib.bib58\" title=\"\">2023a</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.3.1.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.3.1.3\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.3.1.4\">175</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.4.2.1\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.4.2.2\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.4.2.3\">755</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.5.3\">\n<th class=\"ltx_td 
ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.5.3.1\" rowspan=\"2\">\n<span class=\"ltx_text\" id=\"S4.T3.2.2.5.3.1.1\">STEVE</span><cite class=\"ltx_cite ltx_citemacro_citep\">(Zhao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.08282v2#bib.bib71\" title=\"\">2023</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.5.3.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.5.3.3\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.5.3.4\">161</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.6.4.1\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.6.4.2\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.6.4.3\">696</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.2.2.7.5.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.7.5.1.1\">HAS</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.7.5.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.7.5.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.7.5.4\">201</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.8.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.2.2.8.6.1\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.2.2.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.8.6.2.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.2.2.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.8.6.3.1\">1368</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T3.4.1\">Comparison on map exploration task.</span> # iters represents the average number of iterations required to explore an area of 100 blocks; lower values indicate higher task completion efficiency. # area denotes the average area, in square blocks, covered over 5 iterations; higher values indicate better performance. For each method, we list the single-agent result and the best result, together with its # agents, on the bottom line.\n</figcaption>\n</figure>",
114
+ "capture": "Table 3: Comparison on map exploration task. # iters represents the average number of iterations required to explore an area of 100 blocks; lower values indicate higher task completion efficiency. # area denotes the average area, in square blocks, covered over 5 iterations; higher values indicate better performance. For each method, we list the single-agent result and the best result, together with its # agents, on the bottom line.\n"
115
+ },
116
+ "4": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.6\" style=\"width:433.6pt;height:157.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(19.0pt,-6.9pt) scale(1.09598667000503,1.09598667000503) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S4.T4.6.6.7.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.7.1.1.1\">Setting</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S4.T4.6.6.7.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.6.6.7.1.2.1\"># agents</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T4.6.6.7.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.7.1.3.1\">Goal Search</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S4.T4.6.6.7.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.7.1.4.1\">Block Search</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T4.6.6.7.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.7.1.5.1\">Map Exploration</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.1.1\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.2.2.2.2\">success rate\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.3.3.3.3\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column 
ltx_border_r ltx_border_t\" id=\"S4.T4.4.4.4.4\"># blocks\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.5.5.5.5\"># iters\u00a0()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.6.6.6.6\">area\u00a0()</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.8.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.8.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.6.6.8.1.1.1\">w/o DM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.8.1.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.8.1.3\">53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.8.1.4\">0.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.8.1.5\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.8.1.6\">67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.8.1.7\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.8.1.8\">160</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.9.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.9.2.1\">6 / 4 / 5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.9.2.2\">22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.9.2.3\">0.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.9.2.4\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.9.2.5\">237</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.9.2.6\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.9.2.7\">624</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.10.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.10.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.6.6.10.3.1.1\">w/o AO</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.10.3.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.10.3.3\">41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.10.3.4\">0.55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.10.3.5\">35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.10.3.6\">29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.10.3.7\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.10.3.8\">172</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.11.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.11.4.1\">5 / 5 / 5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.11.4.2\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.11.4.3\">0.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.11.4.4\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.6.6.11.4.5\">106</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.11.4.6\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.11.4.7\">706</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.12.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.12.5.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.12.5.1.1\">HAS\u00a0(Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.12.5.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.12.5.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.12.5.4\">0.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.12.5.5\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.6.6.12.5.6\">68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T4.6.6.12.5.7\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.12.5.8\">201</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.6.6.13.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.6.6.13.6.1\">6 / 8 / 8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.6.6.13.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.13.6.2.1\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.6.6.13.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.13.6.3.1\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.6.6.13.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.13.6.4.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.6.6.13.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.13.6.5.1\">367</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.6.6.13.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.13.6.6.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.6.6.13.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.6.6.13.6.7.1\">1368</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.8.1\">Ablation studies</span> for multi-modal goal search, continuous block search, and map exploration. The setting is the same as the above 3 experiments. Note that w/o DM is without the dynamic map, and w/o AO is without the auto-organizing mechanism.</figcaption>\n</figure>",
118
+ "capture": "Table 4: Ablation studies for multi-modal goal search, continuous block search, and map exploration. The setting is the same as the above 3 experiments. Note that w/o DM is without the dynamic map, and w/o AO is without the auto-organizing mechanism."
119
+ }
120
+ },
121
+ "image_paths": {
122
+ "1": {
123
+ "figure_path": "2403.08282v2_figure_1.png",
124
+ "caption": "Figure 1: Illustration of the functionality of HAS, in the Minecraft environment.\nGiven a navigation goal in the form of images, objects, or audio, collectives of agents autonomously self-organize to undertake collaborative endeavors of navigation like searching and exploring.",
125
+ "url": "http://arxiv.org/html/2403.08282v2/x1.png"
126
+ },
127
+ "2": {
128
+ "figure_path": "2403.08282v2_figure_2.png",
129
+ "caption": "Figure 2: HAS framework. M-MLM and A-MLM correspond to the memory-augmented multi-modal language models of the manager and action agent, respectively. It also utilizes a multi-modal memory to store and obtain experiences as references for planning. HAS can improve its planning skills by exploring its own proposed tasks with self-instruction and using its growing memory to plan tasks it has visited before.",
130
+ "url": "http://arxiv.org/html/2403.08282v2/x2.png"
131
+ }
132
+ },
133
+ "validation": true,
134
+ "references": [
135
+ {
136
+ "1": {
137
+ "title": "Playing repeated games with large language models.",
138
+ "author": "Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz.",
139
+ "venue": "arXiv preprint, 2023.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "2": {
145
+ "title": "Flamingo: a visual language model for few-shot learning.",
146
+ "author": "Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al.",
147
+ "venue": "Advances in Neural Information Processing Systems, 35:23716\u201323736, 2022.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "3": {
153
+ "title": "Scaling imitation learning in minecraft.",
154
+ "author": "Artemij Amiranashvili, Nicolai Dorka, Wolfram Burgard, Vladlen Koltun, and Thomas Brox.",
155
+ "venue": "arXiv preprint arXiv:2007.02701, 2020.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "4": {
161
+ "title": "Video pretraining (vpt): Learning to act by watching unlabeled online videos.",
162
+ "author": "Bowen Baker, Ilge Akkaya, Peter Zhokov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune.",
163
+ "venue": "Advances in Neural Information Processing Systems, 35:24639\u201324654, 2022.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "5": {
169
+ "title": "Human-level play in the game of diplomacy by combining language models with strategic reasoning.",
170
+ "author": "Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, Jonathan Gray, Hengyuan Hu, et al.",
171
+ "venue": "Science, 378(6624):1067\u20131074, 2022.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "6": {
177
+ "title": "Large language models as tool makers.",
178
+ "author": "Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou.",
179
+ "venue": "arXiv preprint, 2023.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "7": {
185
+ "title": "Autoagents: A framework for automatic agent generation.",
186
+ "author": "Guangyao Chen, Siwei Dong, Yu Shu, Ge Zhang, Jaward Sesay, B\u00f6rje F Karlsson, Jie Fu, and Yemin Shi.",
187
+ "venue": "arXiv preprint arXiv:2309.17288, 2023a.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "8": {
193
+ "title": "History aware multimodal transformer for vision-and-language navigation.",
194
+ "author": "Shizhe Chen, Pierre-Louis Guhur, Cordelia Schmid, and Ivan Laptev.",
195
+ "venue": "Advances in neural information processing systems, 34:5834\u20135847, 2021.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "9": {
201
+ "title": "Learning exploration policies for navigation.",
202
+ "author": "Tao Chen, Saurabh Gupta, and Abhinav Gupta.",
203
+ "venue": "In International Conference on Learning Representations, 2018.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "10": {
209
+ "title": "Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents.",
210
+ "author": "Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al.",
211
+ "venue": "arXiv preprint arXiv:2308.10848, 2023b.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "11": {
217
+ "title": "Instructblip: Towards general-purpose vision-language models with instruction tuning.",
218
+ "author": "Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi.",
219
+ "venue": "arXiv preprint arXiv:2305.06500, 2023.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "12": {
225
+ "title": "Embodied question answering.",
226
+ "author": "Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra.",
227
+ "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1\u201310, 2018.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "13": {
233
+ "title": "Episodic memory question answering.",
234
+ "author": "Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, and Devi Parikh.",
235
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19119\u201319128, 2022.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "14": {
241
+ "title": "See, hear, explore: Curiosity via audio-visual association.",
242
+ "author": "Victoria Dean, Shubham Tulsiani, and Abhinav Gupta.",
243
+ "venue": "Advances in neural information processing systems, 33:14961\u201314972, 2020.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "15": {
249
+ "title": "Citygen: Infinite and controllable 3d city layout generation.",
250
+ "author": "Jie Deng, Wenhao Chai, Jianshu Guo, Qixuan Huang, Wenhao Hu, Jenq-Neng Hwang, and Gaoang Wang.",
251
+ "venue": "arXiv preprint arXiv:2312.01508, 2023.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "16": {
257
+ "title": "Palm-e: An embodied multimodal language model.",
258
+ "author": "Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al.",
259
+ "venue": "arXiv preprint arXiv:2303.03378, 2023.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "17": {
265
+ "title": "Vtnet: Visual transformer network for object goal navigation.",
266
+ "author": "Heming Du, Xin Yu, and Liang Zheng.",
267
+ "venue": "In International Conference on Learning Representations, 2020.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "18": {
273
+ "title": "Improving factuality and reasoning in language models through multiagent debate.",
274
+ "author": "Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch.",
275
+ "venue": "arXiv preprint, 2023.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "19": {
281
+ "title": "Minedojo: Building open-ended embodied agents with internet-scale knowledge.",
282
+ "author": "Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar.",
283
+ "venue": "Advances in Neural Information Processing Systems, 35:18343\u201318362, 2022.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "20": {
289
+ "title": "Llama-adapter v2: Parameter-efficient visual instruction model.",
290
+ "author": "Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al.",
291
+ "venue": "arXiv preprint arXiv:2304.15010, 2023.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "21": {
297
+ "title": "Multimodal-gpt: A vision and language model for dialogue with humans.",
298
+ "author": "Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen.",
299
+ "venue": "arXiv preprint arXiv:2305.04790, 2023.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "22": {
305
+ "title": "Chatllm network: More brains, more intelligence.",
306
+ "author": "Rui Hao, Linmei Hu, Weijian Qi, Qingliu Wu, Yirui Zhang, and Liqiang Nie.",
307
+ "venue": "arXiv preprint, 2023.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "23": {
313
+ "title": "Minecraft as ai playground and laboratory.",
314
+ "author": "Katja Hofmann.",
315
+ "venue": "In Proceedings of the annual symposium on computer-human interaction in play, pp. 1\u20131, 2019.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "24": {
321
+ "title": "Metagpt: Meta programming for multi-agent collaborative framework.",
322
+ "author": "Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Jinlin Wang, Ceyao Zhang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, et al.",
323
+ "venue": "arXiv preprint arXiv:2308.00352, 2023.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "25": {
329
+ "title": "Attention-guided contrastive role representations for multi-agent reinforcement learning.",
330
+ "author": "Zican Hu, Zongzhang Zhang, Huaxiong Li, Chunlin Chen, Hongyu Ding, and Zhi Wang.",
331
+ "venue": "arXiv preprint arXiv:2312.04819, 2023.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "26": {
337
+ "title": "The malmo platform for artificial intelligence experimentation.",
338
+ "author": "Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell.",
339
+ "venue": "In Ijcai, pp. 4246\u20134247, 2016.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "27": {
345
+ "title": "Renderable neural radiance map for visual navigation.",
346
+ "author": "Obin Kwon, Jeongho Park, and Songhwai Oh.",
347
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9099\u20139108, 2023.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "28": {
353
+ "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks.",
354
+ "author": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, et al.",
355
+ "venue": "Advances in Neural Information Processing Systems, 33:9459\u20139474, 2020.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "29": {
361
+ "title": "Otter: A multi-modal model with in-context instruction tuning.",
362
+ "author": "Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu.",
363
+ "venue": "arXiv preprint arXiv:2305.03726, 2023a.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "30": {
369
+ "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation.",
370
+ "author": "Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi.",
371
+ "venue": "In International Conference on Machine Learning, pp. 12888\u201312900. PMLR, 2022.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "31": {
377
+ "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.",
378
+ "author": "Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.",
379
+ "venue": "arXiv preprint arXiv:2301.12597, 2023b.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "32": {
385
+ "title": "Steve-1: A generative model for text-to-behavior in minecraft (abridged version).",
386
+ "author": "Shalev Lifshitz, Keiran Paster, Harris Chan, Jimmy Ba, and Sheila McIlraith.",
387
+ "venue": "In NeurIPS 2023 Workshop on Goal-Conditioned Reinforcement Learning, 2023.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "33": {
393
+ "title": "Juewu-mc: Playing minecraft with sample-efficient hierarchical reinforcement learning.",
394
+ "author": "Zichuan Lin, Junyou Li, Jianing Shi, Deheng Ye, Qiang Fu, and Wei Yang.",
395
+ "venue": "arXiv preprint arXiv:2112.04907, 2021.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "34": {
401
+ "title": "Visual instruction tuning.",
402
+ "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.",
403
+ "venue": "arXiv preprint arXiv:2304.08485, 2023a.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "35": {
409
+ "title": "Training socially aligned language models in simulated human society.",
410
+ "author": "Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi.",
411
+ "venue": "arXiv preprint arXiv:2305.16960, 2023b.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "36": {
417
+ "title": "Symmetry-aware neural architecture for embodied visual exploration.",
418
+ "author": "Shuang Liu and Takayuki Okatani.",
419
+ "venue": "In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 17221\u201317230. IEEE, 2022.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "37": {
425
+ "title": "Multi-agent actor-critic for mixed cooperative-competitive environments.",
426
+ "author": "Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch.",
427
+ "venue": "Advances in neural information processing systems, 30, 2017.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "38": {
433
+ "title": "End-to-end active object tracking via reinforcement learning.",
434
+ "author": "Wenhan Luo, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, and Yizhou Wang.",
435
+ "venue": "In International conference on machine learning, pp. 3286\u20133295. PMLR, 2018.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "39": {
441
+ "title": "End-to-end active object tracking and its real-world deployment via reinforcement learning.",
442
+ "author": "Wenhan Luo, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, and Yizhou Wang.",
443
+ "venue": "IEEE transactions on pattern analysis and machine intelligence, 42(6):1317\u20131332, 2019.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "40": {
449
+ "title": "Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration.",
450
+ "author": "Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, and Zhaopeng Tu.",
451
+ "venue": "arXiv preprint arXiv:2306.09093, 2023.",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "41": {
457
+ "title": "Large language models play starcraft ii: Benchmarks and a chain of summarization approach.",
458
+ "author": "Weiyu Ma, Qirui Mi, Xue Yan, Yuqiao Wu, Runji Lin, Haifeng Zhang, and Jun Wang.",
459
+ "venue": "arXiv preprint arXiv:2312.11865, 2023.",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "42": {
465
+ "title": "Video-chatgpt: Towards detailed video understanding via large vision and language models.",
466
+ "author": "Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan.",
467
+ "venue": "arXiv preprint arXiv:2306.05424, 2023.",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "43": {
473
+ "title": "Roco: Dialectic multi-robot collaboration with large language models.",
474
+ "author": "Zhao Mandi, Shreeya Jain, and Shuran Song.",
475
+ "venue": "arXiv preprint arXiv:2307.04738, 2023.",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "44": {
481
+ "title": "Modelling the dynamic joint policy of teammates with attention multi-agent ddpg.",
482
+ "author": "Hangyu Mao, Zhengchao Zhang, Zhen Xiao, and Zhibo Gong.",
483
+ "venue": "arXiv preprint arXiv:1811.07029, 2018.",
484
+ "url": null
485
+ }
486
+ },
487
+ {
488
+ "45": {
489
+ "title": "Learning multi-agent communication with double attentional deep reinforcement learning.",
490
+ "author": "Hangyu Mao, Zhengchao Zhang, Zhen Xiao, Zhibo Gong, and Yan Ni.",
491
+ "venue": "Autonomous Agents and Multi-Agent Systems, 34:1\u201334, 2020a.",
492
+ "url": null
493
+ }
494
+ },
495
+ {
496
+ "46": {
497
+ "title": "Seihai: A sample-efficient hierarchical ai for the minerl competition.",
498
+ "author": "Hangyu Mao, Chao Wang, Xiaotian Hao, Yihuan Mao, Yiming Lu, Chengjie Wu, Jianye Hao, Dong Li, and Pingzhong Tang.",
499
+ "venue": "In Distributed Artificial Intelligence: Third International Conference, DAI 2021, Shanghai, China, December 17\u201318, 2021, Proceedings 3, pp. 38\u201351. Springer, 2022.",
500
+ "url": null
501
+ }
502
+ },
503
+ {
504
+ "47": {
505
+ "title": "Generation-augmented retrieval for open-domain question answering.",
506
+ "author": "Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen.",
507
+ "venue": "arXiv preprint arXiv:2009.08553, 2020b.",
508
+ "url": null
509
+ }
510
+ },
511
+ {
512
+ "48": {
513
+ "title": "Soat: A scene-and object-aware transformer for vision-and-language navigation.",
514
+ "author": "Abhinav Moudgil, Arjun Majumdar, Harsh Agrawal, Stefan Lee, and Dhruv Batra.",
515
+ "venue": "Advances in Neural Information Processing Systems, 34:7357\u20137367, 2021.",
516
+ "url": null
517
+ }
518
+ },
519
+ {
520
+ "49": {
521
+ "title": "Gpt-4 technical report.",
522
+ "author": "OpenAI.",
523
+ "venue": "arXiv preprint arXiv:2303.08774, 2023.",
524
+ "url": null
525
+ }
526
+ },
527
+ {
528
+ "50": {
529
+ "title": "Generative agents: Interactive simulacra of human behavior.",
530
+ "author": "Joon Sung Park, Joseph O\u2019Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein.",
531
+ "venue": "In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pp. 1\u201322, 2023.",
532
+ "url": null
533
+ }
534
+ },
535
+ {
536
+ "51": {
537
+ "title": "Prismarinejs/mineflayer: Create minecraft bots with a powerful, stable, and high level javascript api, 2013.",
538
+ "author": "PrismarineJS.",
539
+ "venue": "URL https://github.com/PrismarineJS/mineflayer/tree/master.",
540
+ "url": null
541
+ }
542
+ },
543
+ {
544
+ "52": {
545
+ "title": "Experiential co-learning of software-developing agents.",
546
+ "author": "Chen Qian, Yufan Dang, Jiahao Li, Wei Liu, Weize Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun.",
547
+ "venue": "arXiv preprint arXiv:2312.17025, 2023.",
548
+ "url": null
549
+ }
550
+ },
551
+ {
552
+ "53": {
553
+ "title": "Self-organized group for cooperative multi-agent reinforcement learning.",
554
+ "author": "Jianzhun Shao, Zhiqiang Lou, Hongchang Zhang, Yuhang Jiang, Shuncheng He, and Xiangyang Ji.",
555
+ "venue": "Advances in Neural Information Processing Systems, 35:5711\u20135723, 2022.",
556
+ "url": null
557
+ }
558
+ },
559
+ {
560
+ "54": {
561
+ "title": "Hierarchical deep q-network from imperfect demonstrations in minecraft.",
562
+ "author": "Alexey Skrynnik, Aleksey Staroverov, Ermek Aitygulov, Kirill Aksenov, Vasilii Davydov, and Aleksandr I Panov.",
563
+ "venue": "Cognitive Systems Research, 65:74\u201378, 2021.",
564
+ "url": null
565
+ }
566
+ },
567
+ {
568
+ "55": {
569
+ "title": "Llm-planner: Few-shot grounded planning for embodied agents with large language models.",
570
+ "author": "Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su.",
571
+ "venue": "arXiv preprint, 2022.",
572
+ "url": null
573
+ }
574
+ },
575
+ {
576
+ "56": {
577
+ "title": "Pandagpt: One model to instruction-follow them all.",
578
+ "author": "Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai.",
579
+ "venue": "arXiv preprint arXiv:2305.16355, 2023.",
580
+ "url": null
581
+ }
582
+ },
583
+ {
584
+ "57": {
585
+ "title": "Value-decomposition networks for cooperative multi-agent learning.",
586
+ "author": "Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al.",
587
+ "venue": "arXiv preprint arXiv:1706.05296, 2017.",
588
+ "url": null
589
+ }
590
+ },
591
+ {
592
+ "58": {
593
+ "title": "Voyager: An open-ended embodied agent with large language models.",
594
+ "author": "Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar.",
595
+ "venue": "arXiv preprint arXiv:2305.16291, 2023a.",
596
+ "url": null
597
+ }
598
+ },
599
+ {
600
+ "59": {
601
+ "title": "A survey on large language model based autonomous agents.",
602
+ "author": "Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al.",
603
+ "venue": "arXiv preprint arXiv:2308.11432, 2023b.",
604
+ "url": null
605
+ }
606
+ },
607
+ {
608
+ "60": {
609
+ "title": "Visionllm: Large language model is also an open-ended decoder for vision-centric tasks.",
610
+ "author": "Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al.",
611
+ "venue": "arXiv preprint arXiv:2305.11175, 2023c.",
612
+ "url": null
613
+ }
614
+ },
615
+ {
616
+ "61": {
617
+ "title": "Unleashing cognitive synergy in large language models: A task-solving agent through multi-persona self-collaboration.",
618
+ "author": "Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji.",
619
+ "venue": "arXiv preprint, 2023d.",
620
+ "url": null
621
+ }
622
+ },
623
+ {
624
+ "62": {
625
+ "title": "Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents.",
626
+ "author": "Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang.",
627
+ "venue": "arXiv preprint arXiv:2302.01560, 2023e.",
628
+ "url": null
629
+ }
630
+ },
631
+ {
632
+ "63": {
633
+ "title": "Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames.",
634
+ "author": "Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra.",
635
+ "venue": "In International Conference on Learning Representations, 2019.",
636
+ "url": null
637
+ }
638
+ },
639
+ {
640
+ "64": {
641
+ "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision).",
642
+ "author": "Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang.",
643
+ "venue": "arXiv preprint arXiv:2309.17421, 9(1):1, 2023.",
644
+ "url": null
645
+ }
646
+ },
647
+ {
648
+ "65": {
649
+ "title": "mplug-owl: Modularization empowers large language models with multimodality.",
650
+ "author": "Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al.",
651
+ "venue": "arXiv preprint arXiv:2304.14178, 2023.",
652
+ "url": null
653
+ }
654
+ },
655
+ {
656
+ "66": {
657
+ "title": "Multi-target embodied question answering.",
658
+ "author": "Licheng Yu, Xinlei Chen, Georgia Gkioxari, Mohit Bansal, Tamara L Berg, and Dhruv Batra.",
659
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6309\u20136318, 2019.",
660
+ "url": null
661
+ }
662
+ },
663
+ {
664
+ "67": {
665
+ "title": "Sound adversarial audio-visual navigation.",
666
+ "author": "Yinfeng Yu, Wenbing Huang, Fuchun Sun, Changan Chen, Yikai Wang, and Xiaohong Liu.",
667
+ "venue": "In International Conference on Learning Representations, 2021.",
668
+ "url": null
669
+ }
670
+ },
671
+ {
672
+ "68": {
673
+ "title": "Plan4mc: Skill reinforcement learning and planning for open-world minecraft tasks.",
674
+ "author": "Haoqi Yuan, Chi Zhang, Hongcheng Wang, Feiyang Xie, Penglin Cai, Hao Dong, and Zongqing Lu.",
675
+ "venue": "arXiv preprint arXiv:2303.16563, 2023.",
676
+ "url": null
677
+ }
678
+ },
679
+ {
680
+ "69": {
681
+ "title": "Controlling large language model-based agents for large-scale decision-making: An actor-critic approach.",
682
+ "author": "Bin Zhang, Hangyu Mao, Jingqing Ruan, Ying Wen, Yang Li, Shao Zhang, Zhiwei Xu, Dapeng Li, Ziyue Li, Rui Zhao, et al.",
683
+ "venue": "arXiv preprint arXiv:2311.13884, 2023a.",
684
+ "url": null
685
+ }
686
+ },
687
+ {
688
+ "70": {
689
+ "title": "Building cooperative embodied agents modularly with large language models.",
690
+ "author": "Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B Tenenbaum, Tianmin Shu, and Chuang Gan.",
691
+ "venue": "arXiv preprint arXiv:2307.02485, 2023b.",
692
+ "url": null
693
+ }
694
+ },
695
+ {
696
+ "71": {
697
+ "title": "See and think: Embodied agent in virtual environment.",
698
+ "author": "Zhonghan Zhao, Wenhao Chai, Xuan Wang, Li Boyi, Shengyu Hao, Shidong Cao, Tian Ye, Jenq-Neng Hwang, and Gaoang Wang.",
699
+ "venue": "arXiv preprint arXiv:2311.15209, 2023.",
700
+ "url": null
701
+ }
702
+ },
703
+ {
704
+ "72": {
705
+ "title": "Ad-vat+: An asymmetric dueling mechanism for learning and understanding visual active tracking.",
706
+ "author": "Fangwei Zhong, Peng Sun, Wenhan Luo, Tingyun Yan, and Yizhou Wang.",
707
+ "venue": "IEEE transactions on pattern analysis and machine intelligence, 43(5):1467\u20131482, 2019.",
708
+ "url": null
709
+ }
710
+ },
711
+ {
712
+ "73": {
713
+ "title": "Towards distraction-robust active visual tracking.",
714
+ "author": "Fangwei Zhong, Peng Sun, Wenhan Luo, Tingyun Yan, and Yizhou Wang.",
715
+ "venue": "In International Conference on Machine Learning, pp. 12782\u201312792. PMLR, 2021.",
716
+ "url": null
717
+ }
718
+ },
719
+ {
720
+ "74": {
721
+ "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models.",
722
+ "author": "Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny.",
723
+ "venue": "arXiv preprint arXiv:2304.10592, 2023a.",
724
+ "url": null
725
+ }
726
+ },
727
+ {
728
+ "75": {
729
+ "title": "Ghost in the minecraft: Generally capable agents for open-world enviroments via large language models with text-based knowledge and memory.",
730
+ "author": "Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, et al.",
731
+ "venue": "arXiv preprint arXiv:2305.17144, 2023b.",
732
+ "url": null
733
+ }
734
+ },
735
+ {
736
+ "76": {
737
+ "title": "Mindstorms in natural language-based societies of mind.",
738
+ "author": "Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, R\u00f3bert Csord\u00e1s, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Abed Al Kader Hammoud, Vincent Herrmann, Kazuki Irie, et al.",
739
+ "venue": "arXiv preprint arXiv:2305.17066, 2023.",
740
+ "url": null
741
+ }
742
+ }
743
+ ],
744
+ "url": "http://arxiv.org/html/2403.08282v2"
745
+ }
20240318/2403.09195v2.json ADDED
@@ -0,0 +1,275 @@
+ {
+ "title": "SAM-Lightening: A Lightweight Segment Anything Model with Dilated Flash Attention to Achieve 30\u00d7 Acceleration",
+ "abstract": "Segment Anything Model (SAM) has garnered significant attention in segmentation tasks due to their zero-shot generalization ability.\nHowever, a broader application of SAMs to real-world practice has been restricted by their low inference speed and high computational memory demands, which mainly stem from the attention mechanism.\nExisting work concentrated on optimizing the encoder, yet has not adequately addressed the inefficiency of the attention mechanism itself, even when distilled to a smaller model, which thus leaves space for further improvement.\nIn response, we introduce SAM-Lightening, a variant of SAM, that features a re-engineered attention mechanism, termed Dilated Flash Attention.\nIt not only facilitates higher parallelism, enhancing processing efficiency but also retains compatibility with the existing FlashAttention.\nCorrespondingly, we propose a progressive distillation to enable an efficient knowledge transfer from the vanilla SAM without costly training from scratch.\nExperiments on COCO and LVIS reveal that SAM-Lightening significantly outperforms the state-of-the-art methods in both run-time efficiency and segmentation accuracy.\nSpecifically, it can achieve an inference speed of 7 milliseconds (ms) per image, for images of size 10241024 pixels, which is faster than the vanilla SAM and than the state-of-the-art.\nMoreover, it takes only memory, which is of the vanilla SAM.\nThe code and weights are available at https://anonymous.4open.science/r/SAM-LIGHTENING-BC25/.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "###figure_1### Image segmentation has been traditionally constrained by the necessity for deep learning models to be specifically trained on datasets designed for particular tasks.\nThis specialization of hand-crafted datasets often limits their generation ability.\nAddressing this constraint, the Segment Anything Model (SAM) [1 ###reference_b1###] represents a paradigmatic shift with its zero-shot learning abilities that allow itself to segment new and unseen images.\nHowever, SAM\u2019s application in varied sectors like augmented reality (AR), image editing, deployment on smartphones and medical imaging [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###] is impeded by its computational burden challenge in its image encoder, which comprises a substantial 632 million parameters.\nThis size is roughly 20 times that of conventional segmentation networks like U-Net [7 ###reference_b7###], leading to high computational demands.\nIn response to this challenge, various efforts have been initiated.\nFor example, FastSAM [8 ###reference_b8###] adopts a strategy of replacing SAM\u2019s transformer encoder with a more streamlined convolutional neural network (CNN), aiming to create a lighter model.\nHowever, this often leads to diminished accuracy, especially in complex segmentation tasks.\nAnother notable approach is MobileSAM [9 ###reference_b9###], which employs distillation techniques to transfer knowledge from SAM\u2019s encoder to a more compact ViT-tiny [10 ###reference_b10###] encoder.\nSimilarly, initiatives like EfficientSAM [11 ###reference_b11###] aim to refine the training processes of MobileSAM to improve accuracy.\nConversely, SAMFast [12 ###reference_b12###] focuses on speed optimization of the original SAM through techniques such as quantization and pruning, but these modifications have limited impact on performance enhancement.\nOur research identifies key limitations in previous works 
[9 ###reference_b9###, 11 ###reference_b11###, 12 ###reference_b12###] on SAM, primarily in terms of inefficient computation and memory usage in attention mechanisms. To address these issues, we integrate FlashAttention [13 ###reference_b13###] and dilated attention mechanisms into our SAM framework, providing orthogonal improvements over existing methods. These enhancements not only reduce memory consumption but also improve parallel processing, making them complementary to previous advancements.\nHowever, directly applying these mechanisms to SAM would necessitate a complete retraining of the model, incurring substantial computational costs. To circumvent this challenge,\nwe proposed a dynamic layer-wise distillation (DLD). DLD implements a progressive distillation scheme for the image encoder by progressively allocating feature weights, effectively facilitating the transfer of knowledge from SAM to our lightweight model.\nWe demonstrate that our model (SAM-Lightening) is not only expressive enough to represent the original SAM but is also computationally efficient, completing inference within .\nIn brief, our main contributions are four-fold:\nWe introduce a novel SAM structure, SAM-Lightening, to significantly reduce the computational complexity.\nWe design a novel dilated flash attention mechanism to replace the vanilla self-attention to enhance the efficiency and inference speed of SAM-Lightening.\nTo efficiently transfer the knowledge from vanilla SAM to SAM-Lightening, we propose a dynamic layer-wise distillation without compromising the performance.\nSAM-Lightening achieves state-of-the-art performance of 7 ms per image, which is faster than vanilla SAM."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related work",
+ "text": "Segment Anything Model:\nSAM comprises three main parts: the image encoder, prompt encoder, and mask decoder.\nNotably, the image encoder is the most parameter-intensive segment of SAM, accounting for a substantial 98.3% of its processing time [1 ###reference_b1###], which highlights the need for optimization.\nFastSAM [8 ###reference_b8###] employs a CNN encoder, specifically the YOLOv8-seg [14 ###reference_b14###], to replace the ViT encoder to enhance processing speed.\nHowever, it has been observed to compromise segmentation precision, particularly in complex scenarios and in capturing fine edge details.\nMobileSAM [9 ###reference_b9###] distill the encoder to reduce both the model size and computational requirements.\nNevertheless, the imbalance in MobileSAM\u2019s encoder structure and parameter distribution limits its potential for practical deployment and performance optimization.\nSAMFast [12 ###reference_b12###] represents another optimization strategy, focusing on enhancing the processing speed of SAM using methods like quantization and sparsification. 
While this scheme does offer some acceleration, its overall impact remains moderate.\nEfficientSAM [11 ###reference_b11###], on the other hand, improves upon MobileSAM\u2019s training methodology, specifically targeting the accuracy aspect of the MobileSAM approach.\nFlashAttention:\nThe FlashAttention mechanism [13 ###reference_b13###] introduces an efficient and accurate approach for computing attention in neural networks.\nIt achieves a significant reduction in high bandwidth memory reads and writes, primarily through strategic tiling and recomputation techniques.\nBuilding upon this, FlashAttention-2 [15 ###reference_b15###] further refines the process by enhanced matrix multiplication operations.\nThese improvements have been shown to deliver up to a twofold increase in performance in specific computational settings.\nKnowledge Distillation:\nKnowledge distillation [16 ###reference_b16###] is a technique for transferring knowledge from a complex model to a simpler one.\nThey aim to retain the performance attributes of the larger model while significantly reducing its computational footprint and model size.\nMobileSAM employs a decoupled knowledge distillation by extracting outputs from the original SAM\u2019s ViT-H image encoder and using them to distill into a pre-trained ViT-tiny encoder directly.\nThis strategy proves particularly beneficial for smaller models that already possess pre-trained parameters."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Methods",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Dilated Flash Attention",
+ "text": "To address the high computational demands in the image encoder of SAM, we design a novel attention operation with FlashAttention to expedite the inference speed.\nSegmentation and Sparsification:\nTo alleviate the computational burden in processing () in attention operation, we divide each input into equal-length parts () and then apply sparsification along the sequence dimension within each segment.\nThis sparsification involves selecting rows at fixed intervals (), thereby reducing the volume of data the attention mechanism needs to process.\nAs shown in Fig. 1 ###reference_###, the sparsification process can be formulated as:\nHere, represents the sampled sparse matrix. represents any of the variables , , or .\nParallel Processing With FlashAttention:\nSparsified segments of each input data are dense matrices that can participate in the attention calculation independently and thus can be processed in parallel.\nThis parallelism is vital for efficiently managing large-scale image datasets, significantly speeding up the processing time and enhancing the efficiency of our model for real-time image segmentation.\nIncorporating FlashAttention further increases efficiency by parallelizing dense matrix computations in the process.\nOutput Recomposition:\nIn the proposed Dilated Flash Attention framework, we process sparsified segments in parallel, implementing a softmax function applied to the product of and the transpose of , subsequently followed by multiplication with as follows:\nThe reassembly of these outputs into the cohesive final output involves a meticulously designed process:\nInitially, we establish a zero matrix that mirrors the dimensions of the original input for accumulating the outputs of the individual segments.\nFor each computed segment output , a specific offset is identified. 
This offset determines the precise starting position of within the matrix.\nEach is mapped to using a mapping operation based on its :\nThe \u201cMAP\u201d operation places each element into according to the position determined by . This guarantees the accurate alignment of each segment\u2019s output within the final output matrix , based on its original input position.\nComputation Efficiency\nWith the proposed Dilated Flash Attention mechanism, efficiency is quantitatively enhanced by a factor of , where represents the total size of the input, the length of each segment, and the interval of sparsification.\nThis mathematical relationship demonstrates that Dilated Flash Attention requires substantially fewer computations for any given input size.\nConsequently, this boosts the model\u2019s capability in efficiently processing large-scale image segmentation tasks, marking a notable improvement in both performance and practicality."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Dynamic Layer-Wise Distillation (DLD)",
+ "text": "Training the SAM-Lightening from scratch is costly, while layer adaptation is challenging due to the distinctive structures between SAM with ViT-H as the feature encoder and SAM-Lightening.\nTo enable efficient knowledge transfer from vanilla SAM to the proposed framework, we propose a novel Dynamic Layer-Wise Distillation (DLD), which dynamically modifies feature weights to enhance the layer-wise distillation between the models [17 ###reference_b17###].\nDynamic Layer-Wise Weights: \nWhen preceding layers are not well-distilled, the performance of subsequent layers can suffer from low-quality features extracted from preceding layers. By assigning greater weight to the loss of these initial layers, dynamic weighting ensures they receive more focus during the training process. This helps in better aligning the student model with the teacher model in the initial stages.\nGiven a deep neural network consisting of layers, each layer is associated with a temporal weight . This mechanism adjusts the significance of each layer in the neural network across various training stages .\nThe initial layer retains maximum emphasis () and the subsequent layers adhere to a dynamic weighting scheme, which can be mathematically represented by the piece-wise function:\nWhere denotes the epoch at which the layer commences updating its weight, and the previous layer has reached saturation, i.e., . The parameter captures the number of epochs over which the weight transitions from 0 to 1.\nFor a predefined epoch increment , each layer sequentially activates its learning potential after the preceding layer reaches its peak weight. This mechanism facilitates a cascading knowledge absorption from the teacher model.\nDecoupled Feature Distillation: The distillation process transfers knowledge from SAM\u2019s encoder (the teacher model) to our proposed encoder (the student model), as shown in Fig.1 ###reference_###. 
We have chosen the layers closest to the output for feature distillation. Since these deeper layers are directly related to the model\u2019s outputs, distilling them can more effectively transfer crucial information for prediction results. These layers are designated as \u201cFocus Layers\u201d.\nDuring the initial phase of training, layers closer to the input are given precedence. Here, the intent is to align the SAM-Lightning primary feature representations of the student model, expressed as , with those of the teacher model, , for the layers closest to the input.\nAs training advances, the layer-wise weighting dynamically shifts. The loss associated with subsequent layers is incrementally amplified. In the progress, the loss function evolves to assimilate representations from succeeding layers:\nwhere is the complete count of layers, and the coefficient is a piece-wise function determined by the training epoch and the layer . The integrated distillation loss is formulated as:\nwhere encapsulates the weighted sum of all selected feature layer losses, is the loss for the image encoder output layer, and is a scaling factor to balance the significance of the decoder output in the overall distillation process.\nAlign Decoder: Additionally, the lightweight image encoder obtained through decoupled distillation has alignment issues with the frozen decoder, especially for point-based prompt segmentation tasks. Therefore, we fine-tuned the decoder by sampling point prompts and box prompts on the SA-1B dataset to align with the image encoder.\nThe loss function is defined as follows:\nHere, IOU represents the Intersection over Union loss, while Dice loss and Focal Loss are used to address class imbalance and challenging segmentation regions, respectively."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Experiment",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Experimental Setups",
+ "text": "Our model utilizes of the SA-1B dataset for distillation and fine-tuning.\nIt features an encoder with an embedding dimension of 384, six attention heads, and a six-layer structure.\nFor the FlashAttention component, we use bfloat16.\nBoth the distillation and fine-tuning processes are conducted for 10 epochs each, with a learning rate of and a batch size of 32.\nGradient accumulation is set with a step size of 4.\nThe model is trained on two NVIDIA RTX 4090 GPUs.\nTo enhance training speed, the outputs of SAM\u2019s image encoder are saved [10 ###reference_b10###, 9 ###reference_b9###]."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Results",
+ "text": "Run-Time And Memory Efficiency Evaluation:\nWe compare the performance of our proposed SAM-Lightening with vanilla SAM (i.e., SAM-ViT-H) [1 ###reference_b1###], FastSAM [8 ###reference_b8###], MobileSAM [9 ###reference_b9###], EfficientSAM [11 ###reference_b11###], SAMFast [12 ###reference_b12###] in Table 1 ###reference_### and Table 2 ###reference_###.\nRegarding the segmentation performance, the vanilla SAM is considered as the upper bound.\nImportantly, Table 1 ###reference_### shows that SAM-Lightening outperforms all its counterparts in terms of inference latency and peak memory usage, achieving acceleration, peak memory reduction when compared to vanilla SAM, and acceleration when compared to the state-of-the-art.\nThe throughput comparison in Table 2 ###reference_### further reinforces SAM-Lightening\u2019s superior performance, which achieves the highest throughput across various batch sizes.\nConclusively, this high throughput with its low latency and memory usage, positions SAM-Lightening as a highly efficient model for image segmentation tasks.\nEnc. ms\nDec. ms\nTot. 
ms\nSAM-ViT-H\n216.1\n3.8\n219.9\n1.0\n5.7GB\nSAMFast\n23.2\n3.8\n27.0\n8.5\n4.1GB\nFastSAM\n20.7\n3.4\n24.1\n9.1\n2.6GB\nEfficientSAM\n22.3\n3.8\n26.1\n8.3\n309MB\nMobileSAM\n8.1\n3.8\n11.9\n18.5\n309MB\nSAM-ViT-H\n219.9\n944.9\nOOM\nOOM\nSAMFast\n53.6\n206.6\n438.2\n964.2\nFastSAM\n24.1\n80.1\n171.5\n349.1\nEfficientSAM\n22.3\n79.2\n157.7\n317.5\nMobileSAM\n8.1\n34.1\n72.3\n156.8\n###figure_2### Comparison In Box/Point Prompt Mode:\nWe first evaluated the performance under bounding boxes and point-based prompts.\nFor bounding box prompts, we followed the settings in vanilla SAM by leveraging the ground-truth annotation in the COCO [18 ###reference_b18###] and LVIS [19 ###reference_b19###] to synthesize bounding boxes that define areas of interest in each image.\nFor point prompts, we randomly sampled points within the ground-truth masks from images, challenging all the models to accurately segment the object or region associated with each point.\nQuantitatively, we used mean Intersection over Union (mIoU) as the metric.\nAs shown in Table 3 ###reference_###, both SAMFast and MobileSAM suffer from a performance decline when compared to vanilla SAM, particularly with point prompts.\nFastSAM, as a CNN-based model, shows an even more pronounced drop, which is especially evident in the handling LVIS dataset that contains a large number of small objects.\nThis observation reflects the limitations of CNN-based encoders in processing more complex segmentation scenarios.\nIn contrast, SAM-Lightening matches the original SAM in terms of segmentation performance to the best context.\nThis holds even in scenarios of point-based prompts, where SAM-Lightening achieves mIoU similar to the vanilla SAM.\nComparison In Anything Mode:\n###figure_3### While the segment-anything mode is an innovative approach, it is not a commonly used segmentation method and thus does not effectively represent typical segmentation tasks.\nTherefore, our analysis has primarily focused on visually 
comparing the segmentation outcomes through point-based and box-based methods, which are more prevalent in practical applications.\nHowever, for completeness and to demonstrate the versatility of the models, we have also included the outputs of the segment-anything mode in our comparison.\nFrom the representative samples demonstrated in Fig. 3 ###reference_###, both SAM-Lightening and MobileSAM exhibit segmentation results that are nearly indistinguishable from those of the vanilla SAM.\nThis similarity is notable in terms of edge clarity and detail preservation, which are hallmarks of high-quality segmentation.\nSAM-Lightening demonstrates its robustness and accuracy, aligning closely with the performance of the vanilla SAM."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Ablation study",
+ "text": "###figure_4### It\u2019s noteworthy that many previous works [4 ###reference_b4###, 20 ###reference_b20###, 21 ###reference_b21###] use smaller input sizes for SAM other than 1024.\nFor a fair comparison, we also conducted experiments in these scenarios and found that keeping FlashAttention for input sizes equal to or smaller than achieves optimal performance.\nThis indicates that the applicability of FlashAttention depends on the model\u2019s input size and specific hardware configuration.\nThe decision to use FlashAttention should be made based on the specific application context and performance requirements.\nAlthough FlashAttention accelerates training in model distillation, its impact on inference performance is determined by various hardware metrics. On our inference platform, especially for the SAM with a 1024 input size, the multi-head attention operator exhibits a more computation-intensive characteristic. As shown in Fig. 4 ###reference_###, this results in a slightly lower inference speed with FlashAttention compared to without it. Therefore, we opt to use FlashAttention during the distillation process to optimize performance while removing it during the evaluation phase."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We propose SAM-Lightening to address the primary limitations of high computational demand and slow inference speed in vanilla SAM to make it more suitable for deployment on resource-constrained devices.\nOur approach involves the redesign of the image encoder in SAM, by distilling the self-attention operators into dilated flash attentions with dynamic layer-wise distillation.\nThese optimizations contribute to a notable reduction in computational complexity and memory usage without compromising the segmentation performance.\nSpecifically, SAM-Lightening can complete inference within 7 milliseconds per image, achieving a speed up over SAM-ViT-H.\nSince SAM-Lightening is complementary to pruning and quantization, one future direction can look into the integration with them."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.8.1.1\">Table 1</span>: </span>Performance comparison on Nvidia RTX 4090 GPU, where \u201cEnc.\u201d refers to the Encoder, \u201cDec.\u201d to the Decoder, \u201cMem.\u201d to Memory usage, \u201cTot.\u201d to Total Time, and \u201cSU\u201d denotes the Speed-Up ratio.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.7.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.7.1.1\" style=\"width:74.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.7.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.7.1.2\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.6.7.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.7.1.2.1.1\">Enc.</span> ms</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.7.1.3\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.6.7.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.7.1.3.1.1\">Dec.</span> ms</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.7.1.4\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.6.7.1.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.7.1.4.1.1\">Tot.</span> ms</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.7.1.5\" style=\"width:22.8pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.7.1.5.1\">S.U.</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.7.1.6\" 
style=\"width:28.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.7.1.6.1\">Mem.</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.2\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.2.1\">SAM-ViT-H</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.3\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.3.1\">216.1</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.4\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.4.1\">3.8</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.5\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.5.1\">219.9</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1\" style=\"width:22.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.1.1.1\">1.0</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.6\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.1.1.6.1\">5.7GB</p>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.2.2.2\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.2.2.2.1\">SAMFast</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.2.2.3\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.2.2.3.1\">23.2</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.2.2.4\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.2.2.4.1\">3.8</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.2.2.5\" style=\"width:17.1pt;\">\n<p 
class=\"ltx_p ltx_align_top\" id=\"S4.T1.2.2.5.1\">27.0</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.2.2.1\" style=\"width:22.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.2.2.1.1.1\">8.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T1.2.2.6\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.2.2.6.1\">4.1GB</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.3.3.2\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.3.3.2.1\">FastSAM</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.3.3.3\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.3.3.3.1\">20.7</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.3.3.4\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.3.3.4.1\">3.4</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.3.3.5\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.3.3.5.1\">24.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.3.3.1\" style=\"width:22.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.3.3.1.1.1\">9.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.3.3.6\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.3.3.6.1\">2.6GB</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.2\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.2.1\">EfficientSAM</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.3\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.3.1\">22.3</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.4\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.4.1\">3.8</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.5\" style=\"width:17.1pt;\">\n<p 
class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.5.1\">26.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.1\" style=\"width:22.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.1.1.1\">8.3</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.4.4.6\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.4.4.6.1\">309MB</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.2\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.2.1\">MobileSAM</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.3\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.3.1\">8.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.4\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.4.1\">3.8</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.5\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.5.1\">11.9</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.1\" style=\"width:22.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.1.1.1\">18.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.5.5.6\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.5.5.6.1\">309MB</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.2\" style=\"width:74.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.6.2.1\">SAM-Lightening</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.3\" style=\"width:17.1pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.6.3.1\">3.5</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.4\" style=\"width:17.1pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" 
id=\"S4.T1.6.6.4.1\">3.4</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.5\" style=\"width:17.1pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.6.5.1\">6.9</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.1\" style=\"width:22.8pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.6.1.1.1.1\">30.1</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T1.6.6.6\" style=\"width:28.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.6.6.1\">224MB</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: Performance comparison on Nvidia RTX 4090 GPU, where \u201cEnc.\u201d refers to the Encoder, \u201cDec.\u201d to the Decoder, \u201cMem.\u201d to Memory usage, \u201cTot.\u201d to Total Time, and \u201cSU\u201d denotes the Speed-Up ratio."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.1\">Table 2</span>: </span>Parallel throughput comparison. Inference times are given in milliseconds (ms).</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.1\" style=\"width:74.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.1.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.2\" style=\"width:25.6pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.1.1.2.1\">Size 1</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.3\" style=\"width:25.6pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.1.1.3.1\">Size 4</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.4\" style=\"width:25.6pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.1.1.4.1\">Size 8</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.5\" style=\"width:34.1pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.1.1.5.1\">Size 16</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.2.2\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.2.2.1\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.2.2.1.1\">SAM-ViT-H</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.2.2.2\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S4.T2.3.2.2.2.1\">219.9</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.2.2.3\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.2.2.3.1\">944.9</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.2.2.4\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.2.2.4.1\">OOM</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.2.2.5\" style=\"width:34.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.2.2.5.1\">OOM</p>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.1\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.3.1.1\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.3.1.1.1\">SAMFast</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.3.1.2\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.3.1.2.1\">53.6</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.3.1.3\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.3.1.3.1\">206.6</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.3.1.4\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.3.1.4.1\">438.2</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.3.1.5\" style=\"width:34.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.3.1.5.1\">964.2</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.4.2\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.4.2.1\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.4.2.1.1\">FastSAM</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.4.2.2\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.4.2.2.1\">24.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" 
id=\"S4.T2.3.4.2.3\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.4.2.3.1\">80.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.4.2.4\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.4.2.4.1\">171.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.4.2.5\" style=\"width:34.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.4.2.5.1\">349.1</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.5.3\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.5.3.1\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.5.3.1.1\">EfficientSAM</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.5.3.2\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.5.3.2.1\">22.3</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.5.3.3\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.5.3.3.1\">79.2</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.5.3.4\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.5.3.4.1\">157.7</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.5.3.5\" style=\"width:34.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.5.3.5.1\">317.5</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.6.4\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.6.4.1\" style=\"width:74.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.6.4.1.1\">MobileSAM</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.6.4.2\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.6.4.2.1\">8.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.6.4.3\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.6.4.3.1\">34.1</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.6.4.4\" style=\"width:25.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.6.4.4.1\">72.3</p>\n</td>\n<td 
class=\"ltx_td ltx_align_justify\" id=\"S4.T2.3.6.4.5\" style=\"width:34.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.3.6.4.5.1\">156.8</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.7.5\">\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T2.3.7.5.1\" style=\"width:74.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.7.5.1.1\">SAM-Lightening</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T2.3.7.5.2\" style=\"width:25.6pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.7.5.2.1\">3.5</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T2.3.7.5.3\" style=\"width:25.6pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.7.5.3.1\">13.0</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T2.3.7.5.4\" style=\"width:25.6pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.7.5.4.1\">27.2</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S4.T2.3.7.5.5\" style=\"width:34.1pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.7.5.5.1\">59.2</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 2: Parallel throughput comparison. Inference times are given in milliseconds (ms)."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1.1\">Table 3</span>: </span>Segmentation performance comparison in terms of mIOU on COCO and LVIS. The labels \u201cBox\u201d, \u201c1P\u201d, and \u201c3P\u201d correspond to the use of a bounding box, one point, and three points as prompts, respectively.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.3\" style=\"width:433.6pt;height:342.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(125.8pt,-99.5pt) scale(2.38143072451332,2.38143072451332) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T3.3.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.1.1.1.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S4.T3.3.1.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.1.1.2.1\">COCO</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T3.3.1.1.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.1.1.3.1\">LVIS</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.2.1.1\">Box</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.2.2.1\">1P</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.2.3.1\">3P</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.2.4.1\">Box</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.2.5.1\">1P</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.2.6.1\">3P</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.3.3.1\">SAM-ViT-H</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.3.3.2\">80.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.3.3.3\">49.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.3.3.4\">72.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.3.3.5\">83.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.3.3.6\">60.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.3.3.7\">74.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.4.4.1\">SAMFast</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.4.4.2\">77.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.4.4.3\">44.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.4.4.4\">66.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.4.4.5\">80.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.4.4.6\">54.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.1.4.4.7\">69.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" 
id=\"S4.T3.3.1.5.5.1\">FastSAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.5.5.2\">65.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.5.5.3\">50.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.5.5.4\">52.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.5.5.5\">61.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.5.5.6\">41.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.5.5.7\">41.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.3.1.6.6.1\">EfficientSAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.6.6.2\">77.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.6.6.3\">43.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.6.6.4\">69.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.6.6.5\">79.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.6.6.6\">53.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.6.6.7\">72.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.3.1.7.7.1\">MobileSAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.7.7.2\">77.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.7.7.3\">47.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.7.7.4\">67.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.7.7.5\">78.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.7.7.6\">55.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.1.7.7.7\">66.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.1.1\">SAM-Lightening</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb 
ltx_border_t\" id=\"S4.T3.3.1.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.2.1\">78.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.1.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.3.1\">48.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.4.1\">72.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.1.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.5.1\">81.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.1.8.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.6.1\">59.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.1.8.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.8.8.7.1\">74.6</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 3: Segmentation performance comparison in terms of mIOU on COCO and LVIS. The labels \u201cBox\u201d, \u201c1P\u201d, and \u201c3P\u201d correspond to the use of a bounding box, one point, and three points as prompts, respectively."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2403.09195v2_figure_1.png",
+ "caption": "Fig. 1: The overall framework of SAM-Lightening along with the dynamic layer-wise distillation that can efficiently transfer knowledge from the vanilla SAM without training from scratch.",
+ "url": "http://arxiv.org/html/2403.09195v2/extracted/5477241/pic/123101.png"
+ },
+ "2": {
+ "figure_path": "2403.09195v2_figure_2.png",
+ "caption": "Fig. 2: Representative image segmentation results between SAM-Lightening and the vanilla SAM in prompt mode.",
+ "url": "http://arxiv.org/html/2403.09195v2/extracted/5477241/pic/123104.png"
+ },
+ "3": {
+ "figure_path": "2403.09195v2_figure_3.png",
+ "caption": "Fig. 3: Representative samples under anything mode.",
+ "url": "http://arxiv.org/html/2403.09195v2/extracted/5477241/pic/12301.png"
+ },
+ "4": {
+ "figure_path": "2403.09195v2_figure_4.png",
+ "caption": "Fig. 4: Impacts of inference time with FlashAttention over input size, where we select two embedding dimensions, namely 768 and 384, for comparison.",
+ "url": "http://arxiv.org/html/2403.09195v2/extracted/5477241/pic/12311.jpg"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cSegment anything,\u201d",
+ "author": "Kirillov et al.,",
+ "venue": "arXiv preprint arXiv:2304.02643, 2023.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cSegment anything for microscopy,\u201d",
+ "author": "Archit et al.,",
+ "venue": "Aug 2023.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cSegment anything in medical images,\u201d",
+ "author": "Ma et al.,",
+ "venue": "Apr 2023.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cSam-med2d,\u201d",
+ "author": "Cheng et al.,",
+ "venue": "arXiv preprint arXiv:2308.16184, 2023.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cTrack anything: Segment anything meets videos,\u201d",
+ "author": "Yang et al.,",
+ "venue": "Apr 2023.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cAnything-3d: Towards single-view anything reconstruction in the wild,\u201d",
+ "author": "Shen et al.,",
+ "venue": "arXiv preprint arXiv:2304.10261, 2023.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "U-Net: Convolutional Networks for Biomedical Image Segmentation, p. 234\u2013241,",
+ "author": "Ronneberger et al.,",
+ "venue": "Jan 2015.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cFast segment anything,\u201d",
+ "author": "Zhao et al.,",
+ "venue": "arXiv preprint arXiv:2306.12156, 2023.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cFaster segment anything: Towards lightweight sam for mobile applications,\u201d",
+ "author": "Zhang et al.,",
+ "venue": "arXiv preprint arXiv:2306.14289, 2023.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cTinyvit: Fast pretraining distillation for small vision transformers,\u201d",
+ "author": "Wu et al.,",
+ "venue": "Springer, Cham, 2022.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cEfficientsam: Leveraged masked image pretraining for efficient segment anything,\u201d",
+ "author": "Xiong et al.,",
+ "venue": "arXiv preprint arXiv:2312.00863, 2023.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cAccelerating generative ai,\u201d 2023.",
+ "author": "PyTorch Team,",
+ "venue": null,
+ "url": "https://pytorch.org/blog/accelerating-generative-ai/"
+ }
+ },
+ {
+ "13": {
+ "title": "\u201cFlashattention: Fast and memory-efficient exact attention with io-awareness,\u201d",
+ "author": "Dao et al.,",
+ "venue": "Advances in Neural Information Processing Systems, vol. 35, pp. 16344\u201316359, 2022.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cUltralytics yolov8,\u201d 2023.",
+ "author": "Jocher et al.,",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cFlashattention-2: Faster attention with better parallelism and work partitioning,\u201d",
+ "author": "Dao,",
+ "venue": "arXiv preprint arXiv:2307.08691, 2023.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cDistilling the knowledge in a neural network,\u201d",
+ "author": "Hinton et al.,",
+ "venue": "arXiv: Machine Learning, Mar 2015.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cShow, attend and distill: knowledge distillation via attention-based feature matching,\u201d",
+ "author": "Ji et al.,",
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, p. 7945\u20137952, Sep 2022.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cMicrosoft coco: Common objects in context,\u201d",
+ "author": "Lin et al.,",
+ "venue": "COMPUTER VISION - ECCV 2014, PT V, pp. 740\u2013755, 2014.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cLvis: A dataset for large vocabulary instance segmentation,\u201d",
+ "author": "Gupta et al.,",
+ "venue": "in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2019.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "\u201cMMDetection: Open mmlab detection toolbox and benchmark,\u201d",
+ "author": "Chen et al.,",
+ "venue": "arXiv preprint arXiv:1906.07155, 2019.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "\u201cSeggpt: Segmenting everything in context,\u201d",
+ "author": "Wang et al.,",
+ "venue": "arXiv preprint arXiv:2304.03284, 2023.",
+ "url": null
+ }
+ },
+ ],
+ "url": "http://arxiv.org/html/2403.09195v2"
+ }
20240318/2403.09473v2.json ADDED
@@ -0,0 +1,99 @@
+ {
+ "title": "Analysis of a continuous opinion and discrete action dynamics model coupled with an external observation dynamics. This work was partially supported by ANR through the grant NICETWEET No. ANR-20-CE48-0009 and the PNRR project DECIDE No. 760069.",
+ "abstract": "We consider a set of consumers in a city or town (who thus generate pollution) whose opinion is governed by a continuous opinion and discrete action (CODA) dynamics model. This dynamics is coupled with an observation signal dynamics, which defines the information the consumers have access to regarding the common pollution. We show that the external observation signal has a significant impact on the asymptotic behavior of the CODA model. When the coupling is strong, it induces either a chaotic behavior or convergence towards a limit cycle. When the coupling is weak, a more classical behavior characterized by local agreements in polarized clusters is observed. In both cases, conditions under which clusters of consumers don\u2019t change their actions are provided. Numerical examples illustrate the derived analytical results.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Opinion dynamics (OD) over social networks attracted a lot of attention during the last decades. Multi-agent systems have provided an efficient way to model opinion evolution under social interactions. The existing OD models consider that the opinions evolve either in a discrete set [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] or in a continuous set of values [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###]. While some models naturally lead to consensus [9 ###reference_b9###, 7 ###reference_b7###] some others yield a network clustering [5 ###reference_b5###, 6 ###reference_b6###, 8 ###reference_b8###, 10 ###reference_b10###]. However, all the models enumerated above consider that each individual has access to the opinion values of the neighbors. In order to more accurately describe the opinion dynamics and to recover more realistic behaviors, a mix of continuous opinion with discrete actions (CODA) was proposed in [11 ###reference_b11###]. This model reflects the fact that even if we often face binary choices or actions that are visible to our neighbors, our opinion evolves in a continuous space of values that are not accessible. A consensus-like dynamics reproducing this behavior has been proposed and analyzed in [12 ###reference_b12###] where the preservation and the propagation of actions are also characterized through the notion of robust polarized clusters. While the model in [12 ###reference_b12###] led to a clustering of the network, a similar idea was employed in [13 ###reference_b13###] to study the emergence of consensus under quantized all-to-all communication.\nIn this paper we analyze the behavior of the CODA model introduced in [12 ###reference_b12###] coupled with an external dynamics. 
Many models have been developed to characterize the pollution dynamic in urban areas, considering the fluid dynamics approach [14 ###reference_b14###], chemistry-based approach [15 ###reference_b15###], or both [16 ###reference_b16###]. Even if the time constants depend on the chemical compound considered [17 ###reference_b17###], we introduce a simple linear pollution model to estimate the local air quality.\nIn this model, the pollution level depends on the actions of the individuals which in turn are influenced both by the actions of their neighbors and the pollution level. The coupling of the two dynamics leads to a complex asymptotic behavior that can be summarized as follows. When the coupling between the dynamics is weak, one recovers the asymptotic behavior of the original CODA model in [12 ###reference_b12###]. A strong coupling between the two dynamics hampers the convergence towards a steady state and yields either chaotic oscillations or convergence towards a limit cycle. It is noteworthy that even in the simplified case when all the agents have the same initial opinion, the strong coupling with the external dynamics hampers the convergence toward a steady state and may lead to chaotic oscillations.\nThe main contributions of this paper are: i) the introduction of a mathematical model capturing the coupling between the CODA dynamics and an external one; ii) the analysis of the asymptotic behavior of the aforementioned model; iii) and the characterization of the coupling strength leading to different asymptotic behaviors.\nThe paper is structured as follows. Section II ###reference_### presents the definitions of the measures that constitute the model. Characteristics of opinion equilibrium and asymptotic behavior are analyzed in Section III ###reference_###, followed by a focus on the synchronized behavior in Section IV ###reference_###. Section V ###reference_### illustrates the different behaviors with numerical simulations. 
Finally, Section VI ###reference_### concludes our work."
10
+ },
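As a rough illustration of the coupled structure described above (actions drive a linear pollution model, while opinions react to neighbors' actions and to the pollution level), here is a minimal Python sketch. The specific update rules, parameter names, and the sign-action convention below are assumptions for illustration only, not the paper's exact equations.

```python
import numpy as np

def coda_pollution_step(theta, p, A, beta, gamma, p_bar, e_min, e_max):
    """One step of an illustrative CODA-type opinion update coupled with a
    linear pollution model. theta: opinions in [-1, 1]; p: pollution level;
    A: adjacency matrix of the interaction graph. This sketch only mirrors
    the structure of the paper's model, not its exact equations."""
    a = np.sign(theta)                                  # visible binary actions
    emissions = np.where(a > 0, e_max, e_min).sum()     # actions drive emissions
    p_next = gamma * p + emissions                      # linear pollution dynamics
    deg = np.maximum(A.sum(axis=1), 1)
    neigh_action = A @ a / deg                          # mean action of neighbors
    pollution_push = -1.0 if p > p_bar else 1.0         # threshold feedback
    theta_next = np.clip(
        beta * theta + (1 - beta) * 0.5 * (neigh_action + pollution_push),
        -1.0, 1.0)
    return theta_next, p_next
```

Iterating this step couples the two dynamics exactly as described in the text: opinions remain in a continuous, hidden space while only the binary actions are exchanged and emitted.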
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Problem formulation and preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Analysis of the model",
21
+ "text": "Before starting the analysis of the model introduced in the previous section, let us observe that extreme opinion values do not evolve in time. We also observe that the definitions of and are rigorous only if and . Therefore, the following assumption is perfectly justified by our setup.\nFor all , and ."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Characterization of opinion equilibria",
27
+ "text": "In the following, we analyze the asymptotic behavior of opinions that follows the dynamics (3 ###reference_###). In other words, we assume that the external signal has an exogenous decoupled evolution.\nTo simplify our further reasoning, we introduce the following notation\nLet , . Then for all , one of the following relation holds\nor,\nLet us first observe that , since for all . Then, using (4 ###reference_###), one rewrites (3 ###reference_###) as:\nWe continue our reasoning by induction. From equation (8 ###reference_###) it is straightforward that if then and . Reversely, if then and . Finally, if then .\n\u220e\nLet and assume that Assumption 1 ###reference_umption1### holds. If and are stationary sequences with limit and , respectively. Then the sequence of opinion converges to\nLet us note and remarks that and since those are stationary sequences in . Then .\nWith Assumption 1 ###reference_umption1###, we have , so that Lemma 1 ###reference_ma1### applies. Then by induction\nor\nThen since is a bounded monotonous sequence, it converges. Let denote that limit. Now from above inequalities, if then . Conversely, if , we have that . Finally, if then (8 ###reference_###) rewrite as\nTaking the limit of the previous cancel the first term of the left-hand side and we have that .\n\u220e\nWe will later see that opinions can either have an oscillatory chaotic behavior or they converge to a limit cycle. For a discrete-time system given by , if there is a natural number for which there exist successive convergent sub-sequences , , \u2026, , then the overall sequence converges to a limit cycle of length defined by the limits of the sub-sequences.\nUnder Assumption 1 ###reference_umption1###, if is a stationary sequence and converges to a limit cycle of length , then also converges to a limit cycle of length denoted ."
28
+ },
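The limit-cycle notion above (convergence of the q interleaved sub-sequences) can be checked numerically on a simulated trajectory. Below is a sketch; the tolerance-based periodicity test is an assumption standing in for exact sub-sequence convergence.

```python
def limit_cycle_length(x, tol=1e-6, max_q=50):
    """Estimate the length q of a limit cycle for the tail of a scalar
    sequence x, following the definition in the text: x converges to a
    limit cycle of length q if the q interleaved sub-sequences x[q*n + r]
    each converge. Numerically, we test that x[t + q] stays tol-close to
    x[t] over the tail of the sequence."""
    tail = x[len(x) // 2:]                  # discard the transient
    for q in range(1, max_q + 1):
        diffs = [abs(tail[t + q] - tail[t]) for t in range(len(tail) - q)]
        if diffs and max(diffs) < tol:
            return q                        # smallest period found
    return None                             # no cycle detected (possibly chaotic)
```

A constant tail yields length 1 (a steady state), a period-2 oscillation yields 2, and a chaotic trajectory returns no period up to `max_q`.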
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Asymptotic behavior of the external/pollution dynamics",
33
+ "text": "As pointed out in the previous subsection, the stationarity of plays a major role in the asymptotic behavior of the opinions . Consequently, in this subsection we provide a sufficient condition ensuring that is a stationary sequence. In this setting, the opinions behave as in the CODA model provided in [12 ###reference_b12###] since the players are influenced by the external dynamics uniformly with respect to time after the sequences become stationary. We can rewrite the dynamics (2 ###reference_###) by injecting (1 ###reference_###)\nTherefore, the pollution reaches an equilibrium only if (and implicitly ) is stationary. Let us suppose that\nIn this case the equilibrium is given by\nwhich now only depends on the partition of actions of the individuals in the social network.\nIn order to guarantee that is stationary one needs to ensure that is stationary. In other words, for sufficiently large the value of does not cross the threshold .\nThe sequence is stationary for any graph with individuals if there exists such that either or\n."
34
+ },
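To make the equilibrium expression concrete: if the pollution recursion takes the assumed linear form p_{t+1} = gamma * p_t + E once the total emission E becomes stationary (a form consistent with the decay rate gamma used in the simulations), its fixed point is E / (1 - gamma):

```python
def pollution_equilibrium(total_emission, gamma):
    """Fixed point of the assumed linear pollution recursion
    p_{t+1} = gamma * p_t + E with stationary total emission E:
    solving p* = gamma * p* + E gives p* = E / (1 - gamma)."""
    assert 0.0 <= gamma < 1.0, "stability requires gamma in [0, 1)"
    return total_emission / (1.0 - gamma)
```

Iterating the recursion from any initial level converges geometrically to this value, which, as noted above, depends only on the partition of actions of the individuals through E.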
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-C Preservation of action",
39
+ "text": "In the following, we investigate under which condition we have that . The following result is instrumental for our purposes.\nLet , then the following statements hold true:\nif and then ,\nif and then .\nAs proven in Lemma 1 ###reference_ma1### one of (5 ###reference_###), (6 ###reference_###) or (7 ###reference_###) holds true.\n1) If (5 ###reference_###) is verified one has meaning that implies .\nIf (6 ###reference_###) or (7 ###reference_###) holds, then . Consequently, ensures or equivalently .\n2) If (5 ###reference_###) or (7 ###reference_###) holds, then . Therefore, if yields .\nIf (6 ###reference_###) holds, one has that meaning that implies .\n\u220e\nWhen Lemma 3 ###reference_ma3### can be refined as follows.\nLet and assume that . The following statements hold:\n if and then, ,\n if and then, .\nNotice that implies . Recalling that one obtains that . Notice also that\nTherefore\nand taking into account that the desired result yields from Lemma 3 ###reference_ma3###.\n\u220e\nThroughout the paper, we denote by the cardinality of a set . We provide a definition for some cluster in the graph such that the opinion will not change through time\nWe say that a subset of agents is a weakly robust polarized cluster if the following hold:\n, ,\n, .\nIf is a weakly robust polarized cluster and for all and , then\nThe proof will be done by induction. Let us suppose that one has . Assume that for a given one has . Since the interaction graph is fixed and is a weakly robust polarized cluster the following holds true . Noticing that one obtains that which is equivalent to . Applying Lemma 3 ###reference_ma3### one gets and the induction is complete. Similar reasoning applies when .\n\u220e\nThe preservation of action in a weakly polarized cluster is subject to a constant value of over the time. 
In order to remove this constraint, we introduce the following concept.\nWe say that a subset of agents is a strongly robust polarized cluster if the following hold:\n, ,\n, .\nIf is a robust polarized cluster then\n, , .\nApply the proof of Proposition 2 ###reference_position2### while considering .\n\u220e\nIt is worth noting that for we cannot have a robust polarized cluster since and for any . This fact will be important in what follows."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "IV Analysis of the synchronized behavior",
45
+ "text": "In the case where the state of the system does not converge towards a steady state. Instead, one has oscillations that may be either chaotic or converging to a limit cycle. This is illustrated in Fig. 1 ###reference_### where we can see that for close to one we get a limit cycle while for , in a wide range, one has a chaotic behavior. For the sake of simplicity, we assume in the following that all the opinions are synchronized.\n###figure_1### We say that an opinion state is Fully Synchronized (FS) if , .\nWhen the opinion state is FS, the opinion of an agent is equal to the opinion of any other agent in . Therefore, in the remainder of this section, we will denote and the common opinion and action of all the agents at time . In other words, we omit the agent index when referring to its opinion or action.\nThe FS property is forward invariant over time, i.e. if is FS at time , then is FS.\nIf is FS then .\nIndeed, under the assumption of FS, one has that for all , .\nIn the FS regime, the action space reduces to\nNotice that each point in corresponds to a partition of the state space in four sets. For instance corresponds to In order to prove the oscillatory behavior of the system (2 ###reference_###)-(3 ###reference_###) we show that in general does not contain equilibrium points. This means that the trajectory of the system cannot remain in a certain partition which means it cannot converge towards a steady state.\nAssume that and the opinion state is FS at time . Then the points are not equilibrium in the action space.\nWe proceed by contradiction. The reasoning is similar for each of the two points so we will focus on the first one. Let us assume that is an equilibrium point. This means that if there exists such that then for any we have and . We notice that in this case, one has . Then the dynamics (8 ###reference_###) becomes\nRecalling that one deduces that . 
Consequently, one obtains\nIterating the inequality above for consecutive values of , it results that for any the following holds:\nLet us recall that . Therefore, if we consider in (11 ###reference_###) a sufficiently large (i.e. ) we get , yielding , which contradicts the assumption that is an equilibrium in the action space.\n\u220e\nAssume and that is FS. The following statements hold true.\nis an equilibrium in the action space if and only if , where . In this case is a stable equilibrium of the coupled dynamics (2 ###reference_###)-(3 ###reference_###).\nis an equilibrium in the action space if and only if , where . In this case is a stable equilibrium of the coupled dynamics (2 ###reference_###)-(3 ###reference_###).\nThe reasoning is symmetric, and it is sufficient to focus only on the first statement.\n(\u21d2) We assume that is an equilibrium and show that .\nSuppose that there exists such that . Then, for all , we have and . Therefore, for all one has and the dynamics (2 ###reference_###) becomes\nSince , the dynamics above is asymptotically stable and converges to the equilibrium defined in (10 ###reference_###) with , i.e. . We conclude that .\n\n(\u21d0) We assume that and show that is an equilibrium.\nOne proves that if is such that , then .\nNotice first that in the case under consideration . Therefore, dynamics (3 ###reference_###) becomes\nSince , one deduces from (13 ###reference_###) that as well.\nOn the other hand, implies and is equivalent to . Therefore, (12 ###reference_###) becomes\nyielding .\nBy recursive reasoning one gets that is an equilibrium point.\nWe already noticed that in the case under study is an asymptotically stable point for (2 ###reference_###), which takes the particular form (12 ###reference_###). On the other hand, the dynamics (3 ###reference_###) simplifies as (13 ###reference_###), whose stable equilibrium is .\n\u220e\nIn the general case when , neither nor is an equilibrium in the action space. 
Since no equilibrium exists in this case, the trajectory of (2 ###reference_###)-(3 ###reference_###) will switch an infinite number of times between the four sets of the partition defined by the action space ."
46
+ },
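In the FS regime the coupled system reduces to a single scalar opinion and a scalar pollution level, which makes the oscillatory behavior easy to reproduce numerically. The sketch below uses the parameter values listed in the caption of Fig. 1; the exact form of the scalar update is an assumption mirroring the structure discussed above, not the paper's equations.

```python
import numpy as np

def fs_trajectory(beta, T=500, theta0=0.4, p0=100.0, p_bar=15.0,
                  gamma=0.5, e_min=0.0, e_max=1.0, N=20):
    """Iterate an assumed fully synchronized (FS) version of the coupled
    dynamics: all N agents share one opinion theta and one action
    sign(theta), so the neighbor term reduces to the common action."""
    theta, p = theta0, p0
    out = []
    for _ in range(T):
        a = 1.0 if theta >= 0 else -1.0
        e = e_max if a > 0 else e_min
        p = gamma * p + N * e                          # pollution dynamics
        push = -1.0 if p > p_bar else 1.0              # threshold feedback
        theta = float(np.clip(beta * theta + (1 - beta) * 0.5 * (a + push),
                              -1.0, 1.0))
        out.append(theta)
    return out
```

Scanning beta over [0.5, 1] and plotting the tail of each trajectory produces a bifurcation-diagram-style picture of the equilibrium, limit-cycle, and chaotic regimes.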
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Numerical results",
51
+ "text": "First, we will illustrate the same kind of result as in [12 ###reference_b12###] over a square lattice. Finally, we will present the different behaviors we can observe for a complete graph with FS property."
52
+ },
53
+ {
54
+ "section_id": "5.1",
55
+ "parent_section_id": "5",
56
+ "section_name": "Square lattice",
57
+ "text": "###figure_2### ###figure_3### ###figure_4### Our study visualizes results with interactions based on a square lattice topology. In Figure 2 ###reference_###, we note the persistence of resilient clusters even after numerous iterations. The evolution of opinions and the corresponding state through iterations is depicted in Figure 3 ###reference_###. We can observe that opinions and state converge fast to a limit cycle. On Figure 2 ###reference_###, we see that the opinions are polarized on the graph. There are many robust clusters for both action 1 and -1. Each of them is separated by a frontier that seem to have the same length between the clusters of opposite action and them. We can see on Figure 3 ###reference_### that there is no agent with a constant opinion, the opinions are on a limit cycle. The ones that stay\nwith a constant action form robust clusters, as illustrated in Figure 2 ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "5.2",
61
+ "parent_section_id": "5",
62
+ "section_name": "Complete graph with FS",
63
+ "text": "In Figures 4 ###reference_###, we identify three unique behaviors displayed by the dynamical systems: stable equilibrium (when ), chaotic patterns, and a collection of limit cycles. These simulations were conducted on a complete graph comprising 20 nodes, all of which possess the FS property at initial time. The starting opinion is set at , and the initial state is taken at , with a threshold of . For the emission dynamics, the range is between and , and the decay rate is defined by . We represent the discrete trajectory linking each consecutive couple by a straight line. The color of the line represents the time when this shift occurs. Moreover, we present the subsequent vector field of the dynamics illustrated by the quivers. The speed of the dynamics is given by the length of the arrows.\nA quick observation reveals that, based on the given parameters, the dynamics of the opinion and state undergo significant variations in response to the value of . For instances when , both the opinion and state quickly stabilize at an equilibrium. However, as depicted in Figure 1 ###reference_###, given the same conditions, when we discern two potential behaviors. The first sees positioned outside all limit cycle intervals, resulting in chaotic behavior as illustrated in the center of Figure 4 ###reference_###. The second showcases a limit cycle, as depicted in the right of Figure 4 ###reference_###. Specifically, in the Chaos case of Figure 4 ###reference_###, the sequence never reiterates, thus forming its unique pattern. Conversely, when falls within a limit cycle interval, the sequence converge towards a limit cycle."
64
+ },
65
+ {
66
+ "section_id": "6",
67
+ "parent_section_id": null,
68
+ "section_name": "VI Conclusion",
69
+ "text": "In this paper, we have introduced and analyzed a CODA model coupled with an external dynamic state. Specifically, we consider that the external dynamics represent a very simple pollution model in which the emission level depends on the actions of the individuals in the social network. Conversely, the opinions are both influenced by the actions of the neighbors and the pollution level (above or below a given threshold). We have shown that different behaviors are possible, ranging from convergence to a steady state to chaotic behavior of the coupled dynamics. Numerical examples illustrate our theoretical results."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {},
74
+ "image_paths": {
75
+ "1": {
76
+ "figure_path": "2403.09473v2_figure_1.png",
77
+ "caption": "Figure 1: Bifurcation diagram of the opinion for 0.5<\u03b2<10.5\ud835\udefd10.5<\\beta<10.5 < italic_\u03b2 < 1. N=20\ud835\udc4120N=20italic_N = 20, \u03b8\u2062(0)=0.4\ud835\udf0300.4\\theta(0)=0.4italic_\u03b8 ( 0 ) = 0.4, p\u2062(0)=100\ud835\udc5d0100p(0)=100italic_p ( 0 ) = 100, p\u00af=15\u00af\ud835\udc5d15\\bar{p}=15over\u00af start_ARG italic_p end_ARG = 15, emin=0subscript\ud835\udc52min0e_{\\text{min}}=0italic_e start_POSTSUBSCRIPT min end_POSTSUBSCRIPT = 0, emax=1subscript\ud835\udc52max1e_{\\text{max}}=1italic_e start_POSTSUBSCRIPT max end_POSTSUBSCRIPT = 1 and \u03b3=0.5\ud835\udefe0.5\\gamma=0.5italic_\u03b3 = 0.5,",
78
+ "url": "http://arxiv.org/html/2403.09473v2/extracted/5478275/figs/bifurcation_diagram_beta_opinion_with_consensus_0.png"
79
+ },
80
+ "2": {
81
+ "figure_path": "2403.09473v2_figure_2.png",
82
+ "caption": "Figure 2: Visualization of Opinion Dynamics on a 50 \u00d7 50 Square Lattice for \u03b2=0.45\ud835\udefd0.45\\beta=0.45italic_\u03b2 = 0.45: Initial opinions are randomly distributed as i.i.d. uniform variables between -1 and 1, with the resultant opinions after 100 iterations represented by each colored square cell. Agents engage in communication with their adjacent cells (above, below, left, and right). The cells marked with crosses indicate the presence of strongly robust polarized clusters; black crosses correspond to action -1, and white crosses denote action 1.",
83
+ "url": "http://arxiv.org/html/2403.09473v2/extracted/5478275/figs/grid_graph_0.5_0.45_2500_2500_opinion.png"
84
+ },
85
+ "3": {
86
+ "figure_path": "2403.09473v2_figure_3.png",
87
+ "caption": "Figure 3: Depiction of Dynamical Evolution in the 50 \u00d7 50 Square Lattice from Figure 2: The upper panel showcases the trajectory of each agent\u2019s opinion over iterations, while the lower panel illustrates the corresponding state evolution. The red dashed line marks the state threshold.",
88
+ "url": "http://arxiv.org/html/2403.09473v2/extracted/5478275/figs/state_opinion_grid_graph_0.5_0.45_2500_2500_opinion_state.png"
89
+ },
90
+ "4": {
91
+ "figure_path": "2403.09473v2_figure_4.png",
92
+ "caption": "Figure 4: Trajectory dynamics. Left: two possible equilibria. Center: no equilibrium or limit cycle. Right: limit cycle.",
93
+ "url": "http://arxiv.org/html/2403.09473v2/extracted/5478275/figs/phase_diagram_0.5_0.98_0.5_15_20_merged_quiver_3_v2.png"
94
+ }
95
+ },
96
+ "validation": true,
97
+ "references": [],
98
+ "url": "http://arxiv.org/html/2403.09473v2"
99
+ }
20240318/2403.09701v2.json ADDED
@@ -0,0 +1,444 @@
1
+ {
2
+ "title": "A Natural Extension To Online Algorithms For Hybrid RL With Limited Coverage",
3
+ "abstract": "Hybrid Reinforcement Learning (RL), leveraging both online and offline data, has garnered recent interest, yet research on its provable benefits remains sparse. Additionally, many existing hybrid RL algorithms (Song et al.,, 2023; Nakamoto et al.,, 2023; Amortila et al.,, 2024) impose coverage assumptions on the offline dataset, but we show that this is unnecessary. A well-designed online algorithm should \u201cfill in the gaps\u201d in the offline dataset, exploring states and actions that the behavior policy did not explore.\nUnlike previous approaches that focus on estimating the offline data distribution to guide online exploration (Li et al., 2023b, ), we show that a natural extension to standard optimistic online algorithms \u2013 warm-starting them by including the offline dataset in the experience replay buffer \u2013 achieves similar provable gains from hybrid data even when the offline dataset does not have single-policy concentrability. We accomplish this by partitioning the state-action space into two, bounding the regret on each partition through an offline and an online complexity measure, and showing that the regret of this hybrid RL algorithm can be characterized by the best partition \u2013 despite the algorithm not knowing the partition itself. As an example, we propose DISC-GOLF, a modification of an existing optimistic online algorithm with general function approximation called GOLF used in Jin et al., (2021); Xie et al., 2022a , and show that it demonstrates provable gains over both online-only and offline-only reinforcement learning, with competitive bounds when specialized to the tabular, linear and block MDP cases. Numerical simulations further validate our theory that hybrid data facilitates more efficient exploration, supporting the potential of hybrid RL in various scenarios.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Reinforcement Learning (RL) encompasses two main approaches: online and offline. Online RL involves agents learning to maximize rewards through real-time interactions with their environment, essentially learning by doing. Conversely, offline RL involves agents learning optimal actions by analyzing data collected by others, akin to learning by observation.\nHowever, learning by both watching and doing, or learning from both offline pre-collected data and online exploration, often called hybrid RL, remains underexplored. Despite recent scholarly attention, (Song et al.,, 2023 ###reference_b25###; Nakamoto et al.,, 2023 ###reference_b21###; Wagenmaker and Pacchiano,, 2023 ###reference_b27###; Xie et al., 2022b, ###reference_b31###; Li et al., 2023b, ###reference_b16###; Amortila et al.,, 2024 ###reference_b1###), only Wagenmaker and Pacchiano, (2023 ###reference_b27###) and Li et al., 2023b ###reference_b16### consider the case where the offline dataset may not have single-policy concentrability.111An offline complexity measure that measures the coverability of the offline dataset (Zhan et al.,, 2022 ###reference_b34###) with respect to the state-and-action pairs covered by a single reference policy.\nLi et al., 2023b ###reference_b16### suggest dividing the state and action space within a tabular MDP into a disjoint partition . The intuition is as follows. If the offline dataset has sufficient coverage of the state and action pairs in , a good algorithm should direct its online exploration to sufficiently explore . Previous approaches (Li et al., 2023b, ###reference_b16###; Wagenmaker and Pacchiano,, 2023 ###reference_b27###) solve difficult optimization problems with the Frank-Wolfe algorithm to perform reward-free online exploration of the under-covered portion of the state and action space. 
These approaches are not generally applicable to existing state-of-the-art online algorithms for deep RL, and so we take a different approach.\nMany online algorithms explore by maintaining an experience replay buffer, minimizing the empirical risk over it to sequentially update estimates about the unknown environment (Auer et al., 2008 ###reference_b3###). One may trivially include the offline dataset in the experience buffer to obtain a hybrid RL algorithm, as others have previously noted (Song et al., 2023 ###reference_b25###; Nakamoto et al., 2023 ###reference_b21###; Amortila et al., 2024 ###reference_b1###), under coverage assumptions on the offline dataset. (Unlike these, we are able to include the entire offline dataset \u2013 we do not need to discard any offline samples.)\nThough extensively applied in empirical studies, it is not clear whether (1) simply appending the offline dataset to the experience replay buffer can lead to a provable improvement when the offline dataset is of poor quality, or (2) it ensures sufficient exploration for the portion of the state-action space without good coverage. We seek to address this gap in our paper, tackling the more difficult setting where the offline data may be of arbitrarily poor quality without single-policy concentrability, in the context of regret-minimizing online RL with general function approximation. To our knowledge, we are the first to do so."
10
+ },
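Operationally, the warm start discussed above is just a pre-load of the experience replay buffer before any online interaction. A minimal sketch (the buffer class and the (s, a, r, s') transition format are assumptions, not a prescribed implementation):

```python
from collections import deque

class ReplayBuffer:
    """Minimal experience replay buffer. 'Hybridifying' an online
    algorithm amounts to pre-loading the buffer with the offline
    dataset before the first online episode."""
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)

    def add(self, transition):
        # transition = (s, a, r, s_next), an assumed storage format
        self.buf.append(transition)

    def warm_start(self, offline_dataset):
        # keep *all* offline samples -- no coverage-based filtering
        for tr in offline_dataset:
            self.add(tr)

    def __len__(self):
        return len(self.buf)
```

The online algorithm then minimizes empirical risk over the union of offline and online samples; no offline sample needs to be discarded.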
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Problem Setup",
15
+ "text": "We consider the situation where we are given access to a function class , and aim to model the optimal Q-function using it. Below, we introduce some notation that we use throughout the paper."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Measures of Complexity",
21
+ "text": "In this section, we extend existing complexity measures for offline and online learning with general function approximation in order to use them to understand the complexity of hybrid RL. We will use each on an arbitrary partition of the state-action space, with the intuition being that the offline complexity measure should characterize the difficulty of learning only on the portion that is well-covered by the behavior policy, and the online complexity measure for the difficulty of learning on the portion that has not been explored yet. We later show that a subsequent regret bound can be determined by the complexity measures over any partition, and so the regret is characterized by the infimum over the partitions of the complexity measures on them."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Online Finetuning From Offline Data",
27
+ "text": "Here is an example. In this section, we derive an efficient regret bound for an optimistic online algorithm with general function approximation that is warm-started with offline data of arbitrarily poor quality. This regret bound demonstrates provable gains over both online-only and offline-only reinforcement learning through splitting the state-action space.555The algorithm is never aware of the partition. The partition is only a convenient, but useful, theoretical construct."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Case Studies",
33
+ "text": "Theorem 1 ###reference_1### established a regret bound for the general function approximation setting. Throughout this section, we examine case studies to demonstrate the exact improvement of hybrid RL algorithm over pure online and pure offline algorithms and characterize the set of good partitions. We defer all proofs in this section to Appendix C ###reference_###."
34
+ },
35
+ {
36
+ "section_id": "5.1",
37
+ "parent_section_id": "5",
38
+ "section_name": "Tabular MDPs.",
39
+ "text": "The most commonly considered MDP family is that of the Tabular MDPs, with a finite number of states and actions. As each function at the step can be represented as a dimensional vector, we consider the function class . For a constant , an intuitive choice of partition that corresponds closely to the choice of Li et al., 2023b ###reference_b16### is\n As such, the partial offline concentrability coefficient reduces to the supremum of density ratios over the offline partition, allowing us to bound the partial SEC by the cardinality of the online partition.\nWe can bound and . As such, with probability at least ,\nTherefore, if the offline dataset has good coverage on a subset , the complexity of online learning complexity can be reduced to the cardinality of its complement . We then obtain a regret bound that is at most a factor of off from the minimax-optimal results in the offline-only and online-only cases (Rashidinejad et al.,, 2023 ###reference_b23###; Shi et al.,, 2022 ###reference_b24###; Azar et al.,, 2017 ###reference_b4###; Xie et al., 2022b, ###reference_b31###), even though (1) DISC-GOLF is a very general model-free function-approximation algorithm, and (2) we did not perform a specialized analysis of this case beyond simply bounding the partial SEC in this setting. We anticipate that analyzing specialized versions of DISC-GOLF can achieve tighter sample complexities in the same sense that Li et al., 2023a ###reference_b15### accomplish for Q-learning. Note that in a few shot learning setting, where , the regret is approximately , where is the set of state, action and step tuples where the offline occupancy measure is unsupported."
40
+ },
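The tabular partition above is computable directly from (estimates of) the behavior policy's occupancy measure. A sketch, assuming occupancies are stored as an (H, S, A) array and c is the occupancy threshold (both the storage format and the names are illustrative):

```python
import numpy as np

def tabular_partition(occupancy, c):
    """Split the tabular state-action space by offline occupancy:
    X_off collects the (h, s, a) triples whose behavior-policy occupancy
    is at least c, and X_on is its complement. The regret bound then
    scales with the density ratios on X_off and the cardinality of X_on."""
    off_mask = occupancy >= c
    X_off = list(zip(*np.nonzero(off_mask)))
    X_on = list(zip(*np.nonzero(~off_mask)))
    return X_off, X_on
```

Sweeping the threshold c traces out the family of partitions over which the regret bound takes its infimum.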
41
+ {
42
+ "section_id": "5.2",
43
+ "parent_section_id": "5",
44
+ "section_name": "Linear MDPs.",
45
+ "text": "The family of Linear MDPs is a common MDP family that generalizes the tabular case, defined in Definition 2 ###reference_n2###. It can be shown that the linear function class for action-value function approximation: is Bellman complete (Jin et al.,, 2020 ###reference_b12###).\nAn episodic MDP is a linear MDP with a feature map , if for any , there exist unknown (signed) measures over and an unknown vector , such that for any , we have\n\nwhere for all and for all .\nWe can define a partition of the state-action space as follows. For any subset , consider the image of the feature map . We can choose and to be the subspaces spanned by and , with dimensions and respectively. That is, any partition of the state-action space induces two subspaces of through the feature map . Let and be the orthogonal projection operators onto and . We can then upper bound the complexity measures over each partition, as we show in Proposition 2 ###reference_p2###.\nLet . We have and , where is the -th largest eigenvalue. Then, with probability at least , the regret is bounded by\nWe can compare this result to the minimax lower bound from Zhou et al., (2021 ###reference_b35###), and the best known upper bound from Zanette et al., (2020 ###reference_b33###) of , for online RL in linear MDPs. It is exciting to note that by incorporating offline data into an online algorithm, we can improve the dependence on dimension of the regret incurred on the online partition from to . We accomplish this by bounding the SEC in the linear MDP case by , up to logarithmic factors. This therefore demonstrates another example of provable gains from hybrid RL."
46
+ },
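The subspaces induced by a partition, and the orthogonal projectors onto them, can be computed from the feature vectors of the state-action pairs in each cell, e.g., via a thin SVD. A sketch (the numerical rank tolerance is an assumption):

```python
import numpy as np

def projection_onto_span(Phi):
    """Orthogonal projector onto the span of the rows of Phi, where each
    row is a feature vector phi(s, a) for one state-action pair in a
    partition cell. Computed via a thin SVD; the returned rank is the
    dimension of the induced subspace."""
    U, S, Vt = np.linalg.svd(Phi, full_matrices=False)
    r = int((S > 1e-10).sum())      # numerical rank (tolerance assumed)
    V = Vt[:r].T                    # orthonormal basis of the row span
    return V @ V.T, r
```

The ranks returned here play the role of the subspace dimensions that appear in the bound of Proposition 2.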
47
+ {
48
+ "section_id": "5.3",
49
+ "parent_section_id": "5",
50
+ "section_name": "Block MDPs.",
51
+ "text": "A block MDP (BMDP) refers to an environment with a finite but unobservable latent state space , a finite action space , and a possibly infinite but observable state space (Dann et al.,, 2019 ###reference_b9###; Misra et al.,, 2019 ###reference_b19###; Du et al.,, 2021 ###reference_b10###). At each step, the environment generates a current state given the underlying latent state . This is described by the block structure outlined below.\nA block MDP is an MDP where each context uniquely determines its generating state , i.e. there is a decoding function such that is supported on .\nAny partition induces a partition on the latent state-action space and , and the offline behavior policy and a given policy induce measures and on . Then, Proposition 3 ###reference_p3### shows that the offline and online learning complexities are determined by the cardinalities of the induced partitions of the latent state space. This bound is also dependent on , but we omit it in the main text for brevity.\nIn a block MDP, and if is Bellman-complete.\nThen, with probability at least ,"
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "A Recipe for General Algorithms",
57
+ "text": "The analysis and techniques used above are by no means applicable only to DISC-GOLF.\nIn Proposition 4 ###reference_p4### below, we provide a general recipe that can be used to analyze how a general online algorithm can benefit from being initialized with access to an offline dataset.\nWe define to be the measure over induced by running algorithm for iterations at horizon . This bound depends on a set of error terms , which for example is (1) the Bellman error in the case of general function approximation with DISC-GOLF, (2) the sum of upper confidence bonus terms, estimation errors, and two martingale terms with UCBVI (Azar et al.,, 2017 ###reference_b4###) for the tabular setting, and (3) the gap multiplied by the probability each arm is pulled in the bandit case with UCB (Auer,, 2003 ###reference_b2###). We then have the following result below that provides a guarantee for the procedure of \u201chybridifying\u201d general online algorithms by initializing them with offline datasets. We defer the proof of Proposition 4 ###reference_p4### to Appendix D ###reference_###.\nLet be a general online learning algorithm that satisfies the following conditions:\nadmits the regret decomposition for some collection of random functions888This is often the Bellman error in the case of MDPs. with each a mapping from ;\nw.p. ;\nthere exists a function such that for any , it holds with a probability at least that\n for some , , and where is some measure of complexity of the algorithm and its dependence on the probability of failure ;\na coverage measure on any of 999We set as 0.\nThen, the algorithm satisfies the following regret bound w.p. 
at least :\nInformally, Proposition 4 ###reference_p4### states that given (1) a regret decomposition over the errors at each timestep, (2) a bound on the in-sample error (or just the error under the behavior policy measure), (3) an online-only regret bound for the original algorithm, and (4) an offline coverage measure, we can provide a similar guarantee to what we showed for DISC-GOLF in Theorem 1 ###reference_1###. We anticipate that one can use this or similar arguments to improve upon the minimax-optimal online-only and offline-only regret bounds when analyzing more specialized algorithms."
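The "hybridifying" procedure described above can be sketched in a few lines: take an online algorithm that maintains a dataset of transitions, and initialize that dataset with the precollected offline trajectories before any online interaction begins. `OnlineLearner` and `hybridify` below are hypothetical placeholder names for illustration, not an API from the paper.

```python
# Minimal sketch of warm-starting a general online algorithm with an
# offline dataset, as in the recipe above. All names are illustrative.

class OnlineLearner:
    """Toy stand-in for any online algorithm with a replay buffer."""

    def __init__(self):
        self.buffer = []  # transitions (s, a, r, s_next)

    def update(self, transition):
        self.buffer.append(transition)

    def n_samples(self):
        return len(self.buffer)


def hybridify(learner, offline_dataset):
    """Warm-start the learner by appending the offline dataset."""
    for transition in offline_dataset:
        learner.update(transition)
    return learner


offline = [(0, 1, 0.5, 1), (1, 0, 1.0, 0)]  # toy offline transitions
agent = hybridify(OnlineLearner(), offline)
assert agent.n_samples() == len(offline)  # ready to start online episodes
```

The point of the recipe is that nothing else about the online algorithm changes: the regret analysis then splits across the offline-covered and online partitions as in Proposition 4.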
58
+ },
59
+ {
60
+ "section_id": "7",
61
+ "parent_section_id": null,
62
+ "section_name": "Numerical Experiments",
63
+ "text": "To illustrate the notion that appending the offline dataset to the experience replay buffer can encourage sufficient exploration for the portion of the state-action space that does not have good coverage, we perform two simulation studies in the tabular and linear MDP settings respectively."
64
+ },
65
+ {
66
+ "section_id": "7.1",
67
+ "parent_section_id": "7",
68
+ "section_name": "Forest, Tabular MDP.",
69
+ "text": "###figure_1### ###figure_2### We used a simple forest management simulator from the pymdptoolbox package of Cordwell et al., (2015 ###reference_b8###). This environment has states and actions, and we used a horizon of years. Every year, the agent can choose to wait and let the forest grow, earning a reward of if the forest is years old and otherwise, or cut the forest down, earning a reward of if the forest is between years old, if the forest is years old, and otherwise. The forest burns down with probability each year (making it years old).\nWe examine how an optimistic model-based algorithm, UCBVI (Azar et al.,, 2017 ###reference_b4###), behaves when warm-started with an offline dataset. We considered three behavior policies \u2013 adversarial, uniform, and optimal. The adversarial behavior policy does the opposite of the optimal policy of the time, and takes a random action of the time. Each offline dataset consisted of trajectories. The offline partition was chosen to be the state-action pairs with occupancy at least , and the online partition was defined as its complement. In Figure 1 ###reference_###, we plot the full and partial single-policy concentrability coefficients from running UCBVI on each partition and for each behavior policy. Between this and Figure 3 ###reference_### in Appendix F ###reference_###, which depicts the cumulative visits to each partition, we see that when the behavior policy is poor or middling, hybrid RL explores more of the online partition to fill in the gaps in the offline dataset than online RL does. However, when the behavior policy is optimal, hybrid RL sticks to the online partition due to the warm-started model estimation."
70
+ },
71
+ {
72
+ "section_id": "7.2",
73
+ "parent_section_id": "7",
74
+ "section_name": "Tetris, Linear MDP.",
75
+ "text": "###figure_3### In another experiment, we consider a scaled-down version of Tetris with pieces of shape at most , where the game board has a width of . The agent can take four actions, corresponding to the degree of rotation in degree intervals, at each timestep. The reward is the negative of any additional increase in the height of the stack beyond . We examine the extent to which an optimistic RL algorithm, LSVI-UCB from Jin et al., (2020 ###reference_b12###), explores the feature space more effectively when initialized with an offline dataset of 200 trajectories of length 40 from a uniform behavior policy.\nDue to combinatorial blowup, this environment is rather difficult to explore. We therefore chose to focus on the portion of the environment that was covered by the uniform behavior policy within the simulated timesteps in the offline dataset. This was accomplished through projecting the -dimensional one-hot state-action encoding into a 60-dimensional subspace estimated through performing SVD on the offline dataset. The offline partition was chosen to be the span of the top eigenvectors, while the online partition was the span of the remaining 55. Without the projection, the results are qualitatively similar to what we have observed, except with concentrability coefficients that are orders of magnitudes higher.\nIn Figure 2 ###reference_###, we plot the all-policy concentrability coefficients from , given by the largest, -th largest, and -th largest eigenvalues of the data covariance matrix and its projections onto the offline and online partitions respectively. We see that the concentrability coefficients on the entire space, as well as the offline and online partitions, decrease much faster with the hybrid algorithm than that of the online-only algorithm. This further confirms that an online algorithm initialized with a precollected offline dataset can explore more effectively."
76
+ },
77
+ {
78
+ "section_id": "8",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion and Discussion",
81
+ "text": "We have answered through theoretical results and numerical simulations that simply appending the offline dataset to the experience replay buffer can (1) lead to an improvement when the offline dataset is of poor quality, and (2) encourage sufficient exploration for the portion of the state-action space without good coverage. This yields a general recipe for modifying existing online algorithms to incorporate offline data, and we propose DISC-GOLF, a modification of an existing optimistic online algorithm, as an example, with promising theoretical guarantees demonstrating provable gains over both offline-only and online-only learning."
82
+ }
83
+ ],
84
+ "appendix": [
85
+ {
86
+ "section_id": "Appendix 1",
87
+ "parent_section_id": null,
88
+ "section_name": "Appendix A Proof of Theorem 1",
89
+ "text": "Theorem 1 ###reference_1###.\u2003Let be an arbitrary partition over . Note that this partition induces the restricted function classes on the Bellman error and .\nAlgorithm 1 ###reference_### satisfies the following regret bound with probability at least :\nwhere\n for some constants with .\nLet be an arbitrary (not necessarily disjoint) partition of . We will bound the regret for an arbitrary partition, allowing us to take the infimum over partitions for the final regret bound.\nWe first address some notation. Recall that we defined the Bellman error by .\nFollowing the proof in Xie et al., 2022a ###reference_b30###, we use the same shorthand for the Bellman error , and the cumulative in-sample occupancy measures (without and with the offline dataset) by\nwhere is the occupancy measure induced by running the greedy policy w.r.t . We further write\nWe require the following lemma to bound the in-sample Bellman error. This is very similar to Lemma 15 of Xie et al., 2022a ###reference_b30###, except that this incorporates the offline data as well. Note that Xie et al., 2022a ###reference_b30### work with Q-functions bounded in instead of , so their bound depends on instead of . The proof can be found in Appendix E.2 ###reference_###\nWith a probability at least , for all , we have that for all\nby choosing for some constant .\nWith this, we can begin the proof. By a regret decomposition (Lemma 3 (Xie et al., 2022a, ###reference_b30###)), the total regret can be upper bounded by\nWe further decompose this decomposition (1 ###reference_###) by the partition on :\nwhere we call the first term the online term and the second term the offline term.\nWe will bound each term individually. 
The bound on the online term follows from an argument from Xie et al., 2022a ###reference_b30###, while the bound on the offline term can be obtained in a similar way, by applying Cauchy-Schwarz, performing a change of measure, and finally bounding the result by the partial concentrability coefficient.\nGoing forward, we will adopt the shorthand and .\nAs mentioned above, we upper bound the first term on the RHS in the same way Xie et al., 2022a ###reference_b30### do for the online exploration, with the SEC:\nThe second-last line follows from bounding the term on the left of the third line by the SEC, and bounding the term on the right with Lemma 1 ###reference_1###.\nWe bound the regret incurred by states and actions in directly by the offline data. We first perform a similar Cauchy-Schwarz and change-of-measure argument as before:\nWe can bound the first term with the partial all-policy concentrability coefficient. As for any it holds that\nthis reduces to the partial all-policy concentrability coefficient.\nTo bound the second term, we use the in-sample regret bound from Lemma 1 ###reference_1###. We then obtain:\nPutting it all together,\nWe therefore have the following regret bound:\nwhere we set for some constants . Finally, we choose to be so becomes a constant, to obtain our result."
90
+ },
91
+ {
92
+ "section_id": "Appendix 2",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix B Proofs About The SEC From Xie et\u00a0al., 2022a",
95
+ "text": "We first prove a general result, which will be used for various case studies. Xie et al., 2022a ###reference_b30### have shown that SEC can be bounded by the Distributional-Eluder dimension (Definition 4 ###reference_n4###).\nThe Distributional-Eluder dimension is the largest , such that there exist sequences and such that for all ,\nfor .\nWhen we restrict the SEC to the online partition to obtain the online complexity measure on the online partition of , we have that:\nwhere and . That is, the restricted SEC is bounded by a modified analogue of the coverability coefficient and the Distributional-Eluder dimension, modulo a logarithmic factor.\nThe proof is modified from the proof of Proposition 13 in Xie et al., 2022a ###reference_b30###. Our task is to ensure that the statements in the above two propositions still hold when we restrict the complexity measure to the online partition, and modify the definition of the coverability coefficient. We first prove the statement (1). Similarly to Xie et al., 2022a ###reference_b30###, we define\nWe will denote\nWe want to show that for any ,\nRecall that\nUnlike Xie et al., 2022a ###reference_b30###, we will prove this for an arbitrary , and take the maximum over over both sides of the inequality to obtain our desired result. We therefore fix and consider arbitrary sequences and This therefore induces a sequence of Bellman errors for all . 
As in Xie et al., 2022a ###reference_b30###, we define .\nConsider the stopping time\nand decompose\nWe perform the same Cauchy-Schwarz and change-of-measure argument as in the proof of Theorem 1 ###reference_1### to obtain, writing\nWe tackle the first term, representing the burn-in period, as follows:\nwhere we divide both the numerator and the denominator by in the first line, use that to go from the third to the fourth line, invoke Tonelli\u2019s theorem to swap the sum and the integral to go from the sixth to the seventh line, and invoke the definition of to bound and finally observe that to go from the third-last to the second-last line.\nNow we tackle the second term. As in Xie et al., 2022a ###reference_b30###, we observe that\nas by definition, , and rearrange the inequality in the same way as Xie et al., 2022a ###reference_b30### to find that\nIt then follows, recalling the definition of the stopping time\nthat we can bound the post-burn-in term:\nwhere we use the definition of to bound , the bounded convergence theorem to swap the sum and the integral, and a restricted version of the per-state-action elliptic potential lemma from Xie et al., 2022a ###reference_b30### in Lemma 5 ###reference_5### to bound .\nTherefore,\nso taking the max over all yields\n\u220e\nNow, we prove (2). This proof is virtually the same as that of Proposition 14 in Xie et al., 2022a ###reference_b30###, but we provide it here for completeness. We wish to show that\nWe use the same definition as in Xie et al., 2022a ###reference_b30###, but specialize it to our context:\nGeneralized -(in)dependent sequence. A distribution is (generalized) -dependent on a sequence if for all , if for some , we also have . 
We say that is (generalized) -independent if this does not hold, i.e., for some , it has but .\nNote that if , then -dependent sequence -dependent sequence, and -independent sequence -independent sequence.\nThe distributional Eluder dimension is the largest such that is generalized -independent of for some . We will refer to this as . This, as in Xie et al., 2022a ###reference_b30###, upper bounds the lengths of sequences such that for all ,\n.\nSimilarly to Xie et al., 2022a ###reference_b30###, we define , and examine\nfixing that we choose later, and writing for the number of disjoint -dependent subsequences of in .\nWe follow the proof of Xie et al., 2022a ###reference_b30###. Suppose . By definition, there exist at least disjoint subsequences of , which we call , where we have that\nand by the definition of , that\nwhich imply that if for some , then\nLet be the longest subsequence such that\nBy the same construction as in Xie et al., 2022a ###reference_b30###, there exists such that there must exist at least\n-dependent disjoint subsequences in .\nAs for all , -dependence implies -dependence, we have that after observing also that . Now, observe that\nNow if ,\nSo for any , by setting ,\nFinally, let\n\nbe the original sequence of the\nordered in a decreasing manner. By the same argument as Xie et al., 2022a ###reference_b30###, for any ,\nand for any such that , if we also have that is such that , it follows that\nWe therefore have that , and that . We then have\nwhich implies that\nFinally, choose , to find that\n\u220e"
96
+ },
97
+ {
98
+ "section_id": "Appendix 3",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix C Proofs of Case Studies",
101
+ "text": "Proposition 1 ###reference_p1###.\u2003We can bound and . As such, with probability at least ,\nBy definition,\nThe online complexity measure bound is a direct application of Lemma 2 ###reference_2###:\nFinally, choose to be so scales with , which is .\n\u220e\nProposition 2 ###reference_p2###.\u2003We have and , where is the -th largest eigenvalue of a matrix. Then, with probability at least , the regret is bounded by\nWe first bound the all-policy concentrability for the offline partition. By Bellman completeness, for any , . Since is parametrized by , we denote by the parameter for .\nNote that by Assumption LABEL:aspt:linear_MDP.\nWe then bound the SEC through the distributional Bellman-Eluder dimension through Lemma 2 ###reference_2###. It then suffices to bound the distributional Bellman-Eluder dimension as follows:\nThe following lemma states, informally, that low Bellman rank families are MDPs such that the Bellman error can be written as the inner product of feature maps of the Bellman error and feature maps of the distributions. That is, the expected Bellman error can be written as such.\nThere exist mappings and such that\nMoreover, .\nFor any , we can write . Since for all such that there exists , where and is a set of orthogonal basis of the subspace spanned by . Thus, we can write\nTherefore, we have\n\u220e\nWe then use this lemma to bound the distributional Bellman error as follows.\nThe following proof is a minor modification from the proof of Proposition 11 of Jin et al., (2020 ###reference_b12###).\nAssume that . Then let be an -independent sequence w.r.t. . By Definition 4 ###reference_n4###, there exists such that for all , and . By Lemma 3 ###reference_3###, this is equivalent to: for all ,\nFor notational simplicity, define and and . The previous argument implies that for all ,\nTherefore, we have . 
By the matrix determinant lemma,\nOn the other hand,\nTherefore, we obtain\nTaking the logarithm of both sides, we have\nwhich implies that\nCombined with Lemma 2 ###reference_2### and choosing , we have\nFinally, note that each is bounded in norm by by Lemma B.1 of Jin et al. (2020 ###reference_b12###). We then find that , so\n\u220e\nProposition 3 ###reference_p3###.\u2003 and in a BMDP with Bellman-complete . With probability ,\nwhere\n for some constant with .\nThe offline partition can be upper bounded by\nFor the online partition, we have\n\u220e"
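The matrix determinant lemma invoked at this step is the standard rank-one update identity. Since the display math is elided in this scrape, the following is a hedged reconstruction with generic symbols (a covariance-style matrix $\Lambda_{t-1}$ updated by a feature vector $x_t$), not necessarily the exact notation of the paper:

```latex
\det\!\left(\Lambda_{t-1} + x_t x_t^{\top}\right)
  = \det\!\left(\Lambda_{t-1}\right)\left(1 + x_t^{\top}\Lambda_{t-1}^{-1} x_t\right)
  = \det\!\left(\Lambda_{t-1}\right)\left(1 + \lVert x_t\rVert_{\Lambda_{t-1}^{-1}}^{2}\right)
```

Iterating this identity across the sequence and comparing the product against an upper bound on $\det(\Lambda_T)$ is what produces, after taking logarithms, the logarithmic bound on the length of an independent sequence.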
102
+ },
103
+ {
104
+ "section_id": "Appendix 4",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix D General Recipe",
107
+ "text": "Proposition 4 ###reference_p4###.\u2003Let be a general online learning algorithm that satisfies the following conditions:\nadmits the regret decomposition for some collection of random functions111111This is often the Bellman error in the case of MDPs. with each a mapping from ;\nit holds with probability at least that\nthere exists a function such that for any , it holds with a probability at least\nfor some , , and where is some measure of complexity of the algorithm and its dependence on the probability of failure ;\nfor any , there exists a measure of coverage\nThen, the algorithm satisfies the following regret bound:\nWe first use the regret decomposition in Condition 1 to obtain\nThe regret bound on the online partition follows from condition 2, as we have .\nWe then upper bound the regret of the offline term. We denote by . To proceed, we have\n\u220e"
108
+ },
109
+ {
110
+ "section_id": "Appendix 5",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix E Technical and Miscellaneous Lemmas",
113
+ "text": "Let be a bounded sequence in satisfying . Let be a positive definite matrix. For any , we define . Then, if the smallest eigenvalue of satisfies , we have\nLemma 1 ###reference_1###. \nWith a probability at least , for all , we have that for all\nby choosing for some constant .\nLemma 44 in Jin et al., (2021 ###reference_b11###) showed that with high probability: (i) any function in the confidence set has low Bellman-error over the collected Datasets as well as the distributions from which are sampled; (ii) the optimal value function is inside the confidence set. We use this to our setting as follows, with the intuition being that we pre-append a sequence of functions generated from the offline dataset from samples to the sequence.\nThat is, consider , and consider a set of functions , which we define as follows. For each , define to be any arbitrary function in the confidence set of functions constructed by the first episodes of the offline dataset (we can set an arbitrary order for the episodes in the offline dataset). For each , define . As Lemma 44 in Jin et al., (2021 ###reference_b11###) shows that (i) and (ii) hold for all , they must also hold for all .\n\u220e\nConsider an arbitrary sequence of densities , and a partition of . Define\nObserve that for all . For all , we have that\nThe lemma, and the proof, is slightly modified from Lemma 4 of Xie et al., 2022a ###reference_b30### to account for our restriction to the online partition, as well as the fact that the restricted distributions are no longer distributions. Observe that by definition, so the quantity inside the sum is within . Using the fact for any , we have\nwhere the last line follows from the observation that for all .\n\u220e"
114
+ },
115
+ {
116
+ "section_id": "Appendix 6",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix F Additional Figures",
119
+ "text": "###figure_4### ###figure_5### ###figure_6### ###figure_7###"
120
+ }
121
+ ],
122
+ "tables": {},
123
+ "image_paths": {
124
+ "1(a)": {
125
+ "figure_path": "2403.09701v2_figure_1(a).png",
126
+ "caption": "Figure 1: Coverage of the online samples averaged over 30 trials, with 1.96\u2062\u03c3^1.96^\ud835\udf0e1.96\\hat{\\sigma}1.96 over^ start_ARG italic_\u03c3 end_ARG confidence intervals. Hybrid RL explores more of the online partition and less of the offline partition than online RL when the behavior policy is poor, and vice-versa when the behavior policy is good. Lower is better.",
127
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/cov_tabular_onpart.png"
128
+ },
129
+ "1(b)": {
130
+ "figure_path": "2403.09701v2_figure_1(b).png",
131
+ "caption": "Figure 1: Coverage of the online samples averaged over 30 trials, with 1.96\u2062\u03c3^1.96^\ud835\udf0e1.96\\hat{\\sigma}1.96 over^ start_ARG italic_\u03c3 end_ARG confidence intervals. Hybrid RL explores more of the online partition and less of the offline partition than online RL when the behavior policy is poor, and vice-versa when the behavior policy is good. Lower is better.",
132
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/cov_tabular_offpart.png"
133
+ },
134
+ "2": {
135
+ "figure_path": "2403.09701v2_figure_2.png",
136
+ "caption": "Figure 2: Plot of the full and partial all-policy concentrability coefficients of the online samples from 100100100100 online episodes. The solid line represents the mean over 30303030 trials, and the shaded areas represent confidence intervals generated by 1.961.961.961.96 times the sample standard deviation. We see that hybrid RL takes fewer online episodes than online-only RL to achieve a lower concentrability coefficient.",
137
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/cov_linear.png"
138
+ },
139
+ "3(a)": {
140
+ "figure_path": "2403.09701v2_figure_3(a).png",
141
+ "caption": "Figure 3: Cumulative visits to the offline and online partitions over the 200200200200 online episodes of horizon 20202020. When the behavior policy is poor or middling, the hybrid algorithm visits the online partition more and the offline partition less than the online-only algorithm does. When the behavior policy is optimal, the converse occurs, as the model parameters in UCBVI (Azar et al.,, 2017) are warm-started by estimating them from the offline dataset, enabling the hybrid algorithm to learn that the offline partition contains the good state-action pairs. Solid lines indicate the mean over 30303030 trials, and the shaded area denotes a confidence interval of 1.961.961.961.96 sample standard deviations.",
142
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/visits_tabular_onpart.png"
143
+ },
144
+ "3(b)": {
145
+ "figure_path": "2403.09701v2_figure_3(b).png",
146
+ "caption": "Figure 3: Cumulative visits to the offline and online partitions over the 200200200200 online episodes of horizon 20202020. When the behavior policy is poor or middling, the hybrid algorithm visits the online partition more and the offline partition less than the online-only algorithm does. When the behavior policy is optimal, the converse occurs, as the model parameters in UCBVI (Azar et al.,, 2017) are warm-started by estimating them from the offline dataset, enabling the hybrid algorithm to learn that the offline partition contains the good state-action pairs. Solid lines indicate the mean over 30303030 trials, and the shaded area denotes a confidence interval of 1.961.961.961.96 sample standard deviations.",
147
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/visits_tabular_offpart.png"
148
+ },
149
+ "4": {
150
+ "figure_path": "2403.09701v2_figure_4.png",
151
+ "caption": "Figure 4: Average reward over 200200200200 episodes from running UCBVI (Azar et al.,, 2017) both in its original form and initialized with an offline dataset. When the behavior policy is optimal, the hybrid algorithm learns the optimal policy quickly. When it is not, we still gain an advantage over online-only learning, even when the behavior policy is adversarial, even though in these cases 200200200200 episodes are not sufficient to learn the optimal policy. Incidentally, the hybrid algorithm with poor behavior policies has a high reward at the start, but faces a drop in performance as it explores other states and actions due to the very large exploration bonus we chose to encourage exploration. Results averaged over 30303030 trials, with 1111 standard deviation-wide shaded areas.",
152
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/reward_tabular.png"
153
+ },
154
+ "5": {
155
+ "figure_path": "2403.09701v2_figure_5.png",
156
+ "caption": "Figure 5: Average reward of each episode when running LSVI-UCB (Jin et al.,, 2020) in its original form and initialized with an offline dataset. Results averaged over 30303030 trials, with 1111 standard deviation-wide shaded areas. The hybrid version approaches the optimal weights almost instantaneously, while the online-only version takes many more episodes to do the same.",
157
+ "url": "http://arxiv.org/html/2403.09701v2/extracted/5477221/figs/reward_linear.png"
158
+ }
159
+ },
160
+ "validation": true,
161
+ "references": [
162
+ {
163
+ "1": {
164
+ "title": "Harnessing density ratios for online reinforcement learning.",
165
+ "author": "Amortila, P., Foster, D. J., Jiang, N., Sekhari, A., and Xie, T. (2024).",
166
+ "venue": null,
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "2": {
172
+ "title": "Using confidence bounds for exploitation-exploration trade-offs.",
173
+ "author": "Auer, P. (2003).",
174
+ "venue": "J. Mach. Learn. Res., 3(null):397\u2013422.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "3": {
180
+ "title": "Near-optimal regret bounds for reinforcement learning.",
181
+ "author": "Auer, P., Jaksch, T., and Ortner, R. (2008).",
182
+ "venue": "Advances in neural information processing systems, 21.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "4": {
188
+ "title": "Minimax regret bounds for reinforcement learning.",
189
+ "author": "Azar, M. G., Osband, I., and Munos, R. (2017).",
190
+ "venue": null,
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "5": {
196
+ "title": "Robust fitted-q-evaluation and iteration under sequentially exogenous\nunobserved confounders.",
197
+ "author": "Bruns-Smith, D. and Zhou, A. (2023).",
198
+ "venue": null,
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "6": {
204
+ "title": "Adversarially trained actor critic for offline reinforcement\nlearning.",
205
+ "author": "Cheng, C.-A., Xie, T., Jiang, N., and Agarwal, A. (2022).",
206
+ "venue": null,
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "7": {
212
+ "title": "Better exploration with optimistic actor-critic.",
213
+ "author": "Ciosek, K., Vuong, Q., Loftin, R., and Hofmann, K. (2019).",
214
+ "venue": null,
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "8": {
220
+ "title": "pymdptoolbox.",
221
+ "author": "Cordwell, S., Gonzales, Y., and Theja (2015).",
222
+ "venue": "https://github.com/sawcordwell/pymdptoolbox.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "9": {
228
+ "title": "On oracle-efficient pac rl with rich observations.",
229
+ "author": "Dann, C., Jiang, N., Krishnamurthy, A., Agarwal, A., Langford, J., and\nSchapire, R. E. (2019).",
230
+ "venue": null,
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "10": {
236
+ "title": "Provably efficient rl with rich observations via latent state\ndecoding.",
237
+ "author": "Du, S. S., Krishnamurthy, A., Jiang, N., Agarwal, A., Dud\u00edk, M., and Langford,\nJ. (2021).",
238
+ "venue": null,
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "11": {
244
+ "title": "Bellman eluder dimension: New rich classes of rl problems, and\nsample-efficient algorithms.",
245
+ "author": "Jin, C., Liu, Q., and Miryoosefi, S. (2021).",
246
+ "venue": null,
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "12": {
252
+ "title": "Provably efficient reinforcement learning with linear function\napproximation.",
253
+ "author": "Jin, C., Yang, Z., Wang, Z., and Jordan, M. I. (2020).",
254
+ "venue": "In Conference on Learning Theory, pages 2137\u20132143. PMLR.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "13": {
260
+ "title": "Offline policy evaluation and optimization under confounding.",
261
+ "author": "Kausik, C., Lu, Y., Tan, K., Makar, M., Wang, Y., and Tewari, A. (2023).",
262
+ "venue": null,
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "14": {
268
+ "title": "Conservative q-learning for offline reinforcement learning.",
269
+ "author": "Kumar, A., Zhou, A., Tucker, G., and Levine, S. (2020).",
270
+ "venue": null,
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "15": {
276
+ "title": "Is q-learning minimax optimal? a tight sample complexity analysis.",
277
+ "author": "Li, G., Cai, C., Chen, Y., Wei, Y., and Chi, Y. (2023a).",
278
+ "venue": null,
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "16": {
284
+ "title": "Reward-agnostic fine-tuning: Provable statistical benefits of hybrid\nreinforcement learning.",
285
+ "author": "Li, G., Zhan, W., Lee, J. D., Chi, Y., and Chen, Y. (2023b).",
286
+ "venue": "arXiv preprint arXiv:2305.10282.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "17": {
292
+ "title": "Provably good batch reinforcement learning without great exploration.",
293
+ "author": "Liu, Y., Swaminathan, A., Agarwal, A., and Brunskill, E. (2020).",
294
+ "venue": null,
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "18": {
300
+ "title": "Pessimism in the face of confounders: Provably efficient offline\nreinforcement learning in partially observable markov decision processes.",
301
+ "author": "Lu, M., Min, Y., Wang, Z., and Yang, Z. (2023).",
302
+ "venue": null,
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "19": {
308
+ "title": "Kinematic state abstraction and provably efficient rich-observation\nreinforcement learning.",
309
+ "author": "Misra, D., Henaff, M., Krishnamurthy, A., and Langford, J. (2019).",
310
+ "venue": null,
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "20": {
316
+ "title": "Tactical optimism and pessimism for deep reinforcement learning.",
317
+ "author": "Moskovitz, T., Parker-Holder, J., Pacchiano, A., Arbel, M., and Jordan, M. I.\n(2022).",
318
+ "venue": null,
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "21": {
324
+ "title": "Cal-ql: Calibrated offline rl pre-training for efficient online\nfine-tuning.",
325
+ "author": "Nakamoto, M., Zhai, Y., Singh, A., Mark, M. S., Ma, Y., Finn, C., Kumar, A.,\nand Levine, S. (2023).",
326
+ "venue": null,
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "22": {
332
+ "title": "Toward the fundamental limits of imitation learning.",
333
+ "author": "Rajaraman, N., Yang, L. F., Jiao, J., and Ramachandran, K. (2020).",
334
+ "venue": null,
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "23": {
340
+ "title": "Bridging offline reinforcement learning and imitation learning: A\ntale of pessimism.",
341
+ "author": "Rashidinejad, P., Zhu, B., Ma, C., Jiao, J., and Russell, S. (2023).",
342
+ "venue": null,
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "24": {
348
+ "title": "Pessimistic q-learning for offline reinforcement learning: Towards\noptimal sample complexity.",
349
+ "author": "Shi, L., Li, G., Wei, Y., Chen, Y., and Chi, Y. (2022).",
350
+ "venue": null,
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "25": {
356
+ "title": "Hybrid rl: Using both offline and online data can make rl efficient.",
357
+ "author": "Song, Y., Zhou, Y., Sekhari, A., Bagnell, J. A., Krishnamurthy, A., and Sun, W.\n(2023).",
358
+ "venue": null,
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "26": {
364
+ "title": "Pessimistic model-based offline reinforcement learning under partial\ncoverage.",
365
+ "author": "Uehara, M. and Sun, W. (2023).",
366
+ "venue": null,
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "27": {
372
+ "title": "Leveraging offline data in online reinforcement learning.",
373
+ "author": "Wagenmaker, A. and Pacchiano, A. (2023).",
374
+ "venue": null,
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "28": {
380
+ "title": "Provably efficient causal reinforcement learning with confounded\nobservational data.",
381
+ "author": "Wang, L., Yang, Z., and Wang, Z. (2020).",
382
+ "venue": null,
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "29": {
388
+ "title": "Bellman-consistent pessimism for offline reinforcement learning.",
389
+ "author": "Xie, T., Cheng, C.-A., Jiang, N., Mineiro, P., and Agarwal, A. (2021).",
390
+ "venue": "Advances in neural information processing systems,\n34:6683\u20136694.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "30": {
396
+ "title": "The role of coverage in online reinforcement learning.",
397
+ "author": "Xie, T., Foster, D. J., Bai, Y., Jiang, N., and Kakade, S. M. (2022a).",
398
+ "venue": "arXiv preprint arXiv:2210.04157.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "31": {
404
+ "title": "Policy finetuning: Bridging sample-efficient offline and online\nreinforcement learning.",
405
+ "author": "Xie, T., Jiang, N., Wang, H., Xiong, C., and Bai, Y. (2022b).",
406
+ "venue": null,
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "32": {
412
+ "title": "When is realizability sufficient for off-policy reinforcement\nlearning?",
413
+ "author": "Zanette, A. (2023).",
414
+ "venue": null,
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "33": {
420
+ "title": "Learning near optimal policies with low inherent bellman error.",
421
+ "author": "Zanette, A., Lazaric, A., Kochenderfer, M., and Brunskill, E. (2020).",
422
+ "venue": null,
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "34": {
428
+ "title": "Offline reinforcement learning with realizability and single-policy\nconcentrability.",
429
+ "author": "Zhan, W., Huang, B., Huang, A., Jiang, N., and Lee, J. (2022).",
430
+ "venue": "In Conference on Learning Theory, pages 2730\u20132775. PMLR.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "35": {
436
+ "title": "Nearly minimax optimal reinforcement learning for linear mixture\nmarkov decision processes.",
437
+ "author": "Zhou, D., Gu, Q., and Szepesvari, C. (2021).",
438
+ "venue": null,
439
+ "url": null
440
+ }
441
+ }
442
+ ],
443
+ "url": "http://arxiv.org/html/2403.09701v2"
444
+ }
20240318/2403.10040v2.json ADDED
20240318/2403.11377v1.json ADDED