yilunzhao committed
Commit 2411829 · verified · Parent: 5cdbdaf

Add files using upload-large-folder tool

Files changed (50)
  1. 20240722/2006.16039v6.json +307 -0
  2. 20240722/2202.04060v4.json +661 -0
  3. 20240722/2203.00526v3.json +0 -0
  4. 20240722/2203.02180v2.json +0 -0
  5. 20240722/2203.10560v3.json +0 -0
  6. 20240722/2206.04359v2.json +149 -0
  7. 20240722/2209.02552v3.json +0 -0
  8. 20240722/2211.12592v2.json +283 -0
  9. 20240722/2212.11055v5.json +0 -0
  10. 20240722/2212.14084v2.json +0 -0
  11. 20240722/2301.02268v2.json +0 -0
  12. 20240722/2301.12554v5.json +0 -0
  13. 20240722/2303.16593v2.json +405 -0
  14. 20240722/2304.08879v3.json +205 -0
  15. 20240722/2306.02547v3.json +42 -0
  16. 20240722/2307.01836v3.json +0 -0
  17. 20240722/2307.07679v3.json +340 -0
  18. 20240722/2309.00169v3.json +0 -0
  19. 20240722/2309.10095v2.json +149 -0
  20. 20240722/2309.11966v2.json +157 -0
  21. 20240722/2309.12949v2.json +211 -0
  22. 20240722/2309.13193v2.json +171 -0
  23. 20240722/2309.15776v2.json +0 -0
  24. 20240722/2310.01967v5.json +185 -0
  25. 20240722/2310.09450v3.json +511 -0
  26. 20240722/2310.14277v2.json +0 -0
  27. 20240722/2310.17163v2.json +0 -0
  28. 20240722/2310.20204v4.json +0 -0
  29. 20240722/2311.08100v4.json +0 -0
  30. 20240722/2311.08236v2.json +194 -0
  31. 20240722/2311.12048v2.json +0 -0
  32. 20240722/2311.13348v2.json +290 -0
  33. 20240722/2311.14671v3.json +0 -0
  34. 20240722/2312.02216v3.json +11 -0
  35. 20240722/2312.05910v5.json +0 -0
  36. 20240722/2312.07962v2.json +285 -0
  37. 20240722/2312.09781v4.json +386 -0
  38. 20240722/2312.10217v3.json +0 -0
  39. 20240722/2312.12056v2.json +0 -0
  40. 20240722/2312.12544v3.json +719 -0
  41. 20240722/2312.14055v2.json +0 -0
  42. 20240722/2401.00009v3.json +65 -0
  43. 20240722/2401.00280v3.json +0 -0
  44. 20240722/2401.02413v2.json +636 -0
  45. 20240722/2401.02938v2.json +531 -0
  46. 20240722/2401.02957v2.json +0 -0
  47. 20240722/2401.04152v2.json +323 -0
  48. 20240722/2401.07598v3.json +0 -0
  49. 20240722/2401.08742v3.json +287 -0
  50. 20240722/2401.09967v4.json +0 -0
20240722/2006.16039v6.json ADDED
@@ -0,0 +1,307 @@
+ {
+ "title": "Game Comonads & Generalised Quantifiers",
+ "abstract": "Game comonads, introduced by Abramsky, Dawar and Wang and developed by Abramsky and Shah, give an interesting categorical semantics to some Spoiler-Duplicator games that are common in finite model theory. In particular they expose connections between one-sided and two-sided games, and between parameters such as treewidth and treedepth and corresponding notions of decomposition. In the present paper, we expand the realm of game comonads to logics with generalised quantifiers. In particular, we introduce a comonad graded by two parameters n and k such that isomorphisms in the resulting Kleisli category are exactly Duplicator winning strategies in Hella\u2019s n-bijection game with k pebbles. We define a one-sided version of this game which allows us to provide a categorical semantics for a number of logics with generalised quantifiers. We also give a novel notion of tree decomposition that emerges from the construction.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Model-comparison games, such as Ehrenfeucht-Fra\u00efss\u00e9 games and pebble games, play a central role in finite model theory. Recent work by Abramsky et al. [ADW17, AS18] provides a category-theoretic view of such games which yields new insights. In particular, the pebbling comonad P_k introduced in [ADW17] reveals an interesting relationship between one-sided and two-sided pebble games. The morphisms in the Kleisli category associated with P_k correspond exactly to winning strategies in the existential positive k-pebble game. This game was introduced by Kolaitis and Vardi [KV92] to study the expressive power of Datalog. A winning strategy for Duplicator in the game played on structures A and B implies that all formulas of existential positive k-variable logic true in A are also true in B. The game has found widespread application in the study of database query languages as well as constraint satisfaction problems. Indeed, the widely used k-local consistency algorithms for solving constraint satisfaction can be understood as computing the approximation to homomorphism given by such strategies [KV00]. At the same time, isomorphisms in the Kleisli category associated with P_k correspond to winning strategies in the k-pebble bijection game. This game is a variant of the bijection game introduced by Hella [Hel96] and characterises equivalence in the k-variable logic with counting. This gives a family of equivalence relations (parameterised by k) which has been widely studied as approximations of graph isomorphism. It is often called the Weisfeiler-Leman family of equivalences and has a number of characterisations in logic, algebra and combinatorics (see the discussion in [Gro17]).\nThe bijection game originally introduced by Hella is in fact the initial level of a hierarchy of games that he defined to characterise equivalence in logics with generalised (i.e. Lindstr\u00f6m) quantifiers. For each n we have a k-pebble n-bijection game that characterises equivalence with respect to an infinitary k-variable logic with quantifiers of arity at most n. In the present paper, we introduce a graded comonad associated with this game which we call the Hella comonad, or H_{n,k}. This comonad is obtained as a quotient of the comonad P_k and we are able to show that isomorphisms in the associated Kleisli category correspond to winning strategies for Duplicator in the k-pebble n-bijection game. The morphisms then correspond to a new one-way game we define, which we call the k-pebble n-function game. We are able to show that this relates to a natural logic: a k-variable positive infinitary logic with n-ary homomorphism-closed quantifiers.\nThis leads us to a systematic eight-way classification of model-comparison games based on what kinds of functions Duplicator is permitted (arbitrary functions, injections, surjections or bijections) and what the partial maps in game positions are required to preserve: just atomic information or also negated atoms. We show that each of these variations corresponds to preservation of formulas in a natural fragment of bounded-variable infinitary logic with n-ary Lindstr\u00f6m quantifiers. Moreover, winning strategies in these games also correspond to natural restrictions of the morphisms in the Kleisli category of H_{n,k} that are well-motivated from the category-theoretic point of view.\nAnother key insight provided by the work of Abramsky et al. is that coalgebras of the pebbling comonad P_k correspond exactly to tree decompositions of width less than k.\nSimilarly, the coalgebras of the Ehrenfeucht-Fra\u00efss\u00e9 comonad introduced by Abramsky and Shah characterise the treedepth of structures. This motivates us to look at coalgebras of H_{n,k} and we show that they yield a new and natural notion of generalised tree decomposition.\nIn what follows, after a review of the necessary background in Section 1, we introduce the various games and logics in Section 2 and establish the relationships between them. Section 3 contains the definition of the Hella comonad and shows that interesting classes of morphisms in the associated Kleisli category correspond to winning strategies in the games. The coalgebras of this comonad are investigated in Section 4, where the associated tree decompositions of structures are defined."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Games and Logic with Generalised Quantifiers",
+ "text": "The n-bijective k-pebble game as introduced by Hella is a model-comparison game which captures equivalence of structures over k-variable infinitary logic in which the allowed quantifiers are all generalised quantifiers of arity at most n. This game generalises a variant of the bijection game which captures equivalence over k-variable infinitary logic with counting quantifiers (a logic which is equivalent to a fragment of the former, as shown by Kolaitis and V\u00e4\u00e4n\u00e4nen [KV95]). In this section, we introduce a family of games which relax the rules of the n-bijective k-pebble game and show their correspondence to different fragments of this logic. In particular, we introduce a \u201cone-way\u201d version of the game which is crucial to our construction of a modified version of the pebbling comonad P_k for these games."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Relaxing the n-bijective k-pebble game",
+ "text": "Recall that each round of the n-bijective k-pebble game involves Duplicator selecting a bijection f between the universes of the two structures and ends with a test of whether, for the pebbled positions, a tuple of pebbled elements satisfies a relation in the first structure if, and only if, the corresponding tuple satisfies it in the second, where Duplicator loses if the test is failed. For the rest of the round, Spoiler rearranges up to n pebbles on the first structure, with the corresponding pebbles on the second structure moved according to f.\nSo, to create from this a \u201cone-way\u201d game from A to B we need to relax the condition that f be a bijection and the \u201cif, and only if\u201d in the final test. The following definition captures the most basic such relaxation:\nFor two relational structures A, B, the positive k-pebble n-function game is played by Spoiler and Duplicator. Prior to the i-th round the position consists of partial maps from the k pebble indices to A and to B. In Round i:\nDuplicator provides a function f from A to B which agrees with the position, i.e. for each currently pebbled index, f maps the element pebbled in A to the element pebbled in B.\nSpoiler picks up to n distinct pebbles and elements of A to place them on.\nThe updated position places the chosen pebbles on the chosen elements of A and on their images under f in B, leaving all other pebbles unchanged.\nIf there is some relation and some tuple of pebbled elements of A in that relation whose corresponding tuple of elements of B is not in it, then Spoiler has won the game.\nDuplicator wins by preventing Spoiler from winning.\nAs this game is to serve as the appropriate one-way game for the n-bijective k-pebble game, it is worth asking how this game relates to the existential positive k-pebble game (the one-way game for the k-pebble game) which makes no mention of functions in its definition. The answer comes in recalling Abramsky et al.\u2019s presentation of a (deterministic) strategy for Duplicator in the existential positive k-pebble game as a collection of branch maps, one for each history of Spoiler moves and pebble index. These branch maps tell us how Duplicator would respond to Spoiler moving a given pebble to any element of A given the moves that Spoiler has played in preceding rounds, and can be thought of as a function which Duplicator provides to Spoiler after Spoiler has indicated which pebble he will move. In the game in Definition 2.1, Duplicator provides this function before Spoiler indicates which pebbles are to be moved.\nIn addition to this game, we now define some other relaxations of the n-bijective k-pebble game which are important. In particular, we define the following positive games by retaining that the pebbled position need only preserve positive atoms at the end of each round but varying the condition on the function f.\nFor two relational structures A, B, the positive k-pebble n-injection (resp. surjection, bijection) game is played by Spoiler and Duplicator exactly as the positive k-pebble n-function game, except that the function f provided by Duplicator in each round is required to be an injection (resp. a surjection, bijection).\nStrengthening the test condition in each round, so that Spoiler also wins if some tuple of pebbled elements of B is in a relation while the corresponding tuple of elements of A is not (i.e. the pebbled position must preserve negated atoms as well as atoms), we get the definitions of the corresponding games with negation, where the bijective one is precisely the n-bijective k-pebble game of Hella. We recap the poset of the games we\u2019ve just defined, ordered by strengthening of the rules/restrictions on Duplicator, in the Hasse diagram in Figure 1. Here one game is above another if a Duplicator winning strategy in the first is also one in the second."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Logics with generalised quantifiers",
+ "text": "In Section 1, we introduce for each n the logic obtained by extending k-variable infinitary logic with all generalised quantifiers of arity at most n.\nIn this section we explore fragments of this logic defined by restricted classes of generalised quantifiers, which we introduce next.\nA class K of sigma-structures is homomorphism-closed if for all homomorphisms h from A to B, if A is in K then B is in K. Similarly, we say K is injection-closed (resp. surjection-closed, bijection-closed) if the same holds for all injective (resp. surjective, bijective) homomorphisms.\nWe write H_n for the class of all generalised quantifiers of arity at most n based on homomorphism-closed classes. Similarly, we write I_n, S_n and B_n for the collections of n-ary quantifiers based on injection-closed, surjection-closed and bijection-closed classes.\nIn order to define logics which incorporate these restricted classes of quantifiers, we first define a base logic without quantifiers or negation.\nFix a signature sigma. We denote by the positive base the class of positive infinitary k-variable quantifier-free formulas over sigma, that is, the k-variable fragment of the class of formulas built from atoms by (possibly infinitary) conjunctions and disjunctions. We use a second base to denote a similar class of formulas but with negation permitted on atoms.\nThis basic set of formulas can be extended into a logic by adding some set of quantifiers as described here:\nFor some collection Q of generalised quantifiers, we denote by L(Q) the smallest extension of the positive base closed under the application of quantifiers from Q, and by L(Q, \u00ac) the same logic but with negation on atoms.\nNote that each of these logics is contained in the full infinitary logic with all n-ary generalised quantifiers and, as we can always push negation down to the level of atoms in that logic, the same holds for the variants with negation on atoms.\nWith this definition we are ready to introduce our logics. These are L(H_n, \u00ac), L(I_n, \u00ac), L(S_n, \u00ac) and L(B_n, \u00ac) and their positive counterparts L(H_n), L(I_n), L(S_n) and L(B_n).\nThe obvious inclusion relationships between these logics are given by the Hasse diagram in Figure 2. As we shall see, these logics are governed exactly by the games pictured in Figure 1.\nBefore we prove the correspondence with the aforementioned games, we highlight two important facts about this family of logics. Firstly, we show that L(B_n, \u00ac) is equivalent to Hella\u2019s original infinitary logic with n-ary generalised quantifiers and, secondly, we show how these families of generalised quantifiers relate the sizes of structures."
+ },
+ {
+ "section_id": "2.2.1",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.1. Equivalence with Hella\u2019s logic",
+ "text": "Theorem 6 proves, among other things, that Duplicator has a winning strategy in the n-bijective k-pebble game if, and only if, every sentence of the k-variable logic with bijection-closed n-ary quantifiers and negation on atoms true in one structure is true in the other. However, Hella [Hel89] originally characterised such pairs of structures by equivalence in the seemingly more powerful logic with all isomorphism-closed n-ary quantifiers. Here, we show from first principles that these two logics are indeed equivalent.\nWe say that two logics L and L\u2019 are equivalent if for every signature sigma and every formula phi of L there exists an equivalent formula psi of L\u2019, i.e. one such that for any sigma-structure A and any tuple of elements a with the same length as the tuple of free variables, A satisfies phi at a if, and only if, A satisfies psi at a, and vice versa. For two such equivalent logics we will write L \u2261 L\u2019.\nTo show this equivalence for any n and k we need to overcome two differences between these logics. Firstly, the class of bijective-homomorphism-closed n-ary quantifiers is a proper subclass of the class of all isomorphism-closed n-ary quantifiers. The following observation provides a way of replacing general isomorphism-closed classes with bijective-homomorphism-closed ones by modifying the signature.\nFor an isomorphism-closed class K of sigma-structures, if the signature is extended with a new relation symbol for the complement of each relation in sigma, then the class of expansions of structures in K which interpret each new symbol as the complement of the corresponding relation is a bijective-homomorphism-closed class of structures.\nAn important consequence of this is that, for any such K, a formula quantified by the quantifier based on K is equivalent to one quantified by the quantifier based on this expanded class, in which the new relation symbols are interpreted by the negations of the formulas interpreting the original ones.\nThe second difference between these two logics is the role of negation. As defined in this section, our logic only allows negation on atoms, whereas Hella\u2019s logic allows negation throughout formulas. The following observation is important for dealing with this difference.\nA class of sigma-structures is isomorphism-closed if, and only if, its complement is.\nThis implies that the negation of a formula beginning with the quantifier based on K is equivalent to the formula beginning with the quantifier based on the complement of K.\nWe are now ready to prove the desired equivalence of logics.\nFor all n and k, the two logics are equivalent.\nClearly the logic with bijection-closed quantifiers and atomic negation is contained in the logic with all isomorphism-closed quantifiers, so we focus on translating a formula phi of the latter to an equivalent phi\u2019 in the former. This can be done by induction on the quantifier depth of phi.\nFor quantifier depth 0, there are no quantifiers to be replaced and any negation is either on atoms or can be assumed to be on atoms by appropriately distributing it over conjunction or disjunction.\nNow we assume phi has quantifier depth d > 0. Without loss of generality, we can assume that phi begins with a quantifier based on some isomorphism-closed class K of sigma-structures. Indeed, if phi contains a leading negation we can use Observation 2.2.1 to remove the negation by replacing K with its complement. Note that the formulas interpreting the quantifier have quantifier depth strictly less than d and so by induction they have equivalents in our logic. Now, using the consequence of Observation 2.2.1 mentioned above, we can define phi\u2019 as the corresponding formula over the expanded signature, with the new relation symbols interpreted by the negations of these equivalents."
+ },
+ {
+ "section_id": "2.2.2",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.2. Generalised quantifiers and size",
+ "text": "For any relational signature sigma let Str_m(sigma) denote the collection of sigma-structures whose universe has exactly m elements, let Str_{>=m}(sigma) be those with at least m elements and similarly Str_{<=m}(sigma) those with at most m elements. It is obvious that Str_m(sigma) is bijection-closed, Str_{>=m}(sigma) is injection-closed and Str_{<=m}(sigma) is surjection-closed.\nWhen sigma is the empty signature this gives us classes of sets which are closed under bijections, injections and surjections respectively.\nAs any signature sigma admits an empty interpretation into the empty signature which sends any sigma-structure to its underlying set, we can create sentences asserting that a structure has exactly, at least or at most m elements by binding the nullary quantifiers based on these three classes to this empty interpretation. As noted in the following observation, these sentences are important for comparing the sizes of structures, in any signature.\nFor all m there are sentences in the logics with bijection-closed, injection-closed and surjection-closed quantifiers respectively which express that a structure has exactly, at least or at most m elements.\nAs a direct result of this we have that if every sentence of the logic with injection-closed (resp. surjection-closed, bijection-closed) quantifiers true in A is true in B, then B has at least (resp. at most, exactly) as many elements as A."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "2.3. Games and logics correspond",
+ "text": "So far we have introduced a series of games and logics which are all variations on Hella\u2019s n-bijection k-pebble game and the corresponding logic with all n-ary generalised quantifiers. Here we show that these games and logics match up in the way that one would expect from looking at the respective refinement posets in Figures 1 and 2.\nIn order to present the proof of this in a uniform fashion, we label the corners of these cubes by three parameters, i, s and \u00ac, as indicated in Figure 3. These parameters signal the presence or absence of certain rules in the corresponding games. In particular, i and s indicate if the function provided by Duplicator in each round is required to be injective or surjective respectively, and \u00ac indicates if Spoiler wins when negated atoms are not preserved by the partial map defined at the end of a round.\nNow we define the aliases of each of the games, which modify the positive k-pebble n-function game as follows, with the games defined lining up with the games defined in Section 2.1.\nFor two sigma-structures A and B, the game with parameters (i, s, \u00ac) is played by Spoiler and Duplicator in the same fashion as the positive k-pebble n-function game with the following additional rules:\nWhen Duplicator provides a function at the beginning of a round, it is required to be injective if i is present and surjective if s is present.\nIf \u00ac is present, Spoiler wins at a move if the partial map taking the pebbled elements of A to those of B fails to preserve negated atoms as well as atoms.\nSimilarly, we define parameterised aliases for the logics introduced in Section 2.2. To lighten our notational burden, we use L to denote the positive k-variable base logic throughout this section.\nWe define the logic with parameters (i, s, \u00ac) to be L extended by all n-ary generalised quantifiers closed under all homomorphisms which are:\ninjective, if i is present; and\nsurjective, if s is present;\nand, if \u00ac is present, by negation on atoms.\nFor example, with only \u00ac present the logic extends the homomorphism-closed case with negation on atoms but contains no additional quantifiers, as all n-ary quantifiers closed under all homomorphisms are already included. On the other hand, with i and s but not \u00ac, the logic does not allow negation on atoms but allows all quantifiers that are closed under bijective homomorphisms.\nNow, to prove the desired correspondence between the parameterised games and logics, we adapt a proof from Hella [Hel96] to work for this parameterised set of games. For this we need the language of forth systems, which are used as an explicit representation of a Duplicator winning strategy. (These are called \u201ck-variable n-bijective back-and-forth sets\u201d in Hella\u2019s paper, where the \u201cback\u201d condition is implicit in the use of bijections. We drop that in the present generalisation.) We provide the appropriate generalised definition here:\nLet P be the set of all partial functions from A to B with domain of size at most k which preserve atoms (i.e. are partial homomorphisms) and, if \u00ac is present, additionally preserve negated atoms.\nA set F of maps in P is a forth system for the parameterised game if it satisfies the following properties:\nDownwards closure: If g is in F then so is any restriction of g.\nForth property: For any g in F, there exists a function f from A to B, which is injective if i is present and surjective if s is present, such that every map in P obtained from g by repebbling up to n indices with elements of A and their images under f is again in F.\nNote that in the \u201cforth\u201d condition, there is a single function f that yields the property for any choice of repebbled indices. This captures the condition in the game where Duplicator has to play this function before Spoiler chooses which pebbles to move (cf. Remark 2).\nAs this definition is essentially an unravelling of a Duplicator winning strategy for the game, we get the following.\nThere is a forth system containing the empty partial homomorphism if, and only if, Duplicator has a winning strategy for the game.\nFor the forward direction we note that if the pebbled position at the beginning of some round describes a partial homomorphism g in a forth system F, then the forth condition on F guarantees that if Duplicator plays the function f it provides in this round, then the pebbled position at the end of the round will again describe a map in F. As every map in F is in P, we know that such a move will not result in Duplicator losing the game. So if the empty map is in F, Duplicator can use F to play indefinitely without losing.\nFor the other direction, we note that the set of possible positions when playing the game according to some winning Duplicator strategy will form a forth system.\nFollowing Hella, we define the canonical forth system for a game as follows:\nThe canonical forth system D is given by the intersection of a decreasing sequence of sets F_0, F_1, F_2, ..., defined inductively as follows:\nF_0 = P.\nF_{m+1} is the set of g in F_m such that g satisfies the forth condition with respect to the set F_m.\nIt is not difficult to see that any forth system is contained in D. This means that there is a winning strategy for Duplicator in the game if, and only if, D is not empty.\nTo complete the vocabulary needed to emulate Hella\u2019s proof in this setting we introduce the following generalisations of Hella\u2019s definitions.\nFor any partial map g and a formula phi in some logic, we say that g preserves the validity of phi if for any tuple a of elements of the domain of g of the same length as the tuple of free variables of phi, if phi holds at a in A then phi holds at the image of a in B.\nDenote by V the set of all maps in P which preserve the validity of all formulas of the parameterised logic.\nLet the finitary fragment of the parameterised logic be the fragment with only finitary conjunctions and disjunctions, and denote by V_fin the set of all maps in P which preserve the validity of all of its formulas.\nNow, we directly modify Hella\u2019s argument to prove the following:\nFor finite relational structures, D, V and V_fin coincide.\nWe prove the result by showing that D is contained in V, V in V_fin and V_fin in D. The inclusion of V in V_fin is obvious so we focus on proving\n1. D is contained in V; and\n2. V_fin is contained in D.\nProof of 1. Given g in D we prove by structural induction on phi that g preserves the validity of phi. Clearly, as g is a partial homomorphism, it preserves atoms and, if \u00ac is present, negated atoms. The inductive cases for conjunction and disjunction are easy so we focus on the case where phi begins with a generalised quantifier.\nNow, g in D implies the existence of a map f such that every position obtained from g by repebbling up to n indices according to f is again in D, so, using the induction hypothesis, satisfaction of the formulas interpreting the quantifier is carried from tuples of A to their images under f. This means that f is a homomorphism between the structures defined by these interpreting formulas.\nFurthermore, in the cases where i or s (or both) are present this homomorphism is injective, surjective or bijective respectively, and the quantifier in question represents a query which is closed under injective, surjective or bijective homomorphisms, so in all of these cases satisfaction of phi is preserved by g, and we are done with Part 1 of the proof.\nProof of 2. Suppose that we have g in V_fin. We have that g is in F_0 by definition, so we prove by induction that g is in F_m, for all m. Indeed, suppose this is true for m but that g is not in F_{m+1}. Then it must be the case that for every admissible f (injective if i is present, surjective if s is present) there is some repebbling of up to n indices according to f yielding a position g_f which is not in F_m. By induction, this means that g_f fails to preserve the validity of some finitary formula, so there is a formula psi_f which holds at the pebbled tuple in A but not at its image in B.\nLet Fun denote the set of functions from A to B which are injective if i is present and surjective if s is present. Recall from Observation 2.2.2 that the existence of g implies that Fun is non-empty. Now we define two structures, one interpreting the formulas psi_f on A and one interpreting them on B. We have by construction that no f in Fun is a homomorphism from the first to the second, meaning that we can define a query Q containing the first structure but not the second which is closed under:\nall homomorphisms, if neither i nor s is present;\nall injective homomorphisms, if only i is present;\nall surjective homomorphisms, if only s is present;\nall bijective homomorphisms, if both are present.\nSo in all cases, the quantifier based on Q is allowed in the parameterised logic.\nSince each formula psi_f is in the finitary k-variable fragment, it has at most k free variables in all. By renaming these variables, we can ensure that the free variables are all from among a fixed tuple of k variables which are distinct from all other variables appearing in the formulas psi_f. Then the formula which applies the quantifier based on Q to the tuple of formulas psi_f is a formula of the parameterised logic, since each sub-formula still has at most k free variables (recall Remark 1). This formula is true on A at the pebbled tuple but false on B at its image. However, this contradicts g being in V_fin, and so g preserves the truth of all such formulas.\nWe conclude this section by showing the desired correspondence for the whole family of games and logics we have introduced.\nFor all structures A and B and all settings of the parameters i, s and \u00ac, the following are equivalent:\n1. Duplicator has a winning strategy for the parameterised game;\n2. every sentence of the parameterised logic true in A is true in B;\n3. every sentence of its finitary fragment true in A is true in B.\nFirst note that, by the definition of the canonical forth system, Duplicator wins the parameterised game if, and only if, D is not empty.\nFurthermore, V and V_fin are defined as the sets of partial maps which preserve all formulas of the parameterised logic or its finitary fragment respectively which hold on the domain of the map. So the empty map is in V or V_fin if, and only if, all sentences of the corresponding logic which are true in A are also true in B.\nApplying the result of Lemma 5 proves the equivalence of these three."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. The Hella Comonad and its Kleisli Category",
+ "text": "In this section, we show how to construct a game comonad H_{n,k} which captures the strategies of the positive k-pebble n-function game in the same way that P_k captures the strategies of the existential positive k-pebble game. We do this using a new technique for constructing new game comonads from old, based on strategy translation. We then show that different types of morphism in the Kleisli category of this new comonad correspond to Duplicator strategies for the games introduced in Section 2."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "3.1. Translating between games",
+ "text": "The pebbling comonad is obtained by defining a structure for each whose universe consists of (non-empty) lists in which we think of as sequences of moves by Spoiler in a game , with unspecified. With this in mind, we call a sequence in a -history (allowing the empty sequence). In contrast, a move in the + involves Spoiler moving up to pebbles and therefore a history of Spoiler moves is a sequence in . We call such a sequence an -history.\nWith this set-up, (deterministic) strategies are given by functions\nfor and\nfor +.\nA winning strategy for Duplicator in + can always be translated into one in . We aim now to establish conditions for when a translation can be made in the reverse direction. For this, it is useful to establish some machinery.\nThere is a natural flattening operation that takes -histories to -histories. We denote the operation by , so , where . Of course, the function is not injective and has no inverse. It is worth, however, considering functions from -histories to -histories that are inverse to in the sense that . One obvious such function takes a -history to the -history , i.e. the sequence of one-element sequences. This is, in some sense, minimal in that it imposes the minimal amount of structure on . We are interested in a maximal such function. For this, recall that the sequences in that form the elements of an -history have length at most and do not have a repeated index from . We aim to break a -history into maximal such blocks. 
This leads us to the following definition.\n{defi}\nA list is called basic if it contains fewer than or equal to pairs and the pebble indices are all distinct.\nThe -structure function is defined recursively as follows:\nif is basic\notherwise, where such that is the largest basic prefix of .\nIt is immediate from the definition that .\nIt is useful to characterise the range of the function , which we do through the following definition.\n{defi}\nAn -history is structured if whenever and are successive elements of , then either has length exactly or begins with a pair such that occurs in .\nIt is immediate from the definitions that is structured for all -histories and that an -history is structured if, and only if, .\nWe are now ready to characterise those Duplicator winning strategies for that can be lifted to . First, we define a function that lifts a position in that Duplicator must respond to, i.e. a pair where is a -history and a pebble index, to a position in , i.e. an -history.\nSuppose is a -history and is the last basic list in , so . Let be a pebble index.\nDefine the -structuring of by\nSay that a Duplicator strategy in is -consistent if for all -histories and and all pebble indices and :\nIntuitively, an -consistent Duplicator strategy in the game is one where Duplicator plays the same function in all moves that could be part of the same Spoiler move in the game +. We are then ready to prove the main result of this subsection.\nDuplicator has an -consistent winning strategy in if, and only if, it has a winning strategy in +.\nThe reverse direction is easy. Suppose first that is a Duplicator winning strategy in +. Define the strategy in such that for a -history and a pebble index , . This is easily seen to be -consistent and winning.\nFor the other direction we deal with the case of separately.\nFor , all -histories are structured. Indeed, for any\n-history and any pebble index , . 
This means that\nfor any and and the -consistent winning strategies are precisely those such that for any\n-history , pebble indices and and elements if then . This is the same as saying that the branch maps and are equal for every history and every pair of pebble indices and . We denote the common branch map at by . This then gives a strategy in the game + where after every -history , Duplicator provides the function .\nFor , suppose is an -consistent winning strategy for Duplicator in . We construct from this a winning strategy for Duplicator in +. If is a structured -history and is the last pebble index occurring in it, we can just take . To extend this to unstructured -histories, we first define the structured companion of an -history.\nSuppose is an -history that is not structured and let be a pair of consecutive sequences witnessing this. We call such a pair a bad pair. Let be the last pair occurring in and the first pair occurring in .\nLet be the prefix of ending with and let be the last element of such that appears in if there is any. We now obtain a new -history from by replacing the pair by where\nIt is clear that in this -history, neither of the pairs or is bad, so it has one fewer bad pair than . Also, this move is chosen so that responding to the moves according to does not change the partial function defined by the pebbled position after responding to the moves . Repeating the process, we obtain a structured -history which we call , the structured companion of .\nWe can now formally define the Duplicator strategy by saying for any -history , where is the structured companion of and is the last pebble index occurring in . To see why is a winning strategy, we note that as responding with to the link moves does not alter the partial function defined by the pebbled position, the function defined after responding to according to is the same as that defined after responding to according to . 
So if there is a winning -history for Spoiler against then is a winning -history for Spoiler against , a contradiction."
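The recursive definition of the structure function above amounts to repeatedly splitting off the largest basic prefix of a list of moves. A minimal Python sketch of that splitting step, purely illustrative: the bound on the number of pairs (stripped from this extraction) appears as the parameter `n`, and all names are our own.

```python
def basic_prefixes(pairs, n):
    """Split a list of (pebble_index, element) pairs into segments by
    repeatedly taking the largest basic prefix: a prefix is basic if it
    has at most n pairs and its pebble indices are all distinct."""
    segments = []
    while pairs:
        seen, cut = set(), 0
        for p, _ in pairs:
            # stop as soon as adding the next pair would break basicness
            if cut == n or p in seen:
                break
            seen.add(p)
            cut += 1
        segments.append(pairs[:cut])
        pairs = pairs[cut:]
    return segments

moves = [(1, "a"), (2, "b"), (1, "c"), (3, "d")]
# with n = 3 the first segment ends where pebble index 1 repeats
print(basic_prefixes(moves, 3))
```

The structured condition on histories then constrains how successive segments may relate; the splitting itself is all that the recursive structure function needs.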
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "3.2. Lifting the comonad to",
+ "text": "Central to Abramsky et al.\u2019s construction of the pebbling comonad is the observation that for -structures (defined in Section 1 ###reference_###), maps in the Kleisli category correspond to Duplicator winning strategies in .\nFor and -structures over the signature , there is a homomorphism if, and only if, there is a (deterministic) winning strategy for Duplicator in the game\nThe relation to strategies is clear in the context of elements representing histories of Spoiler moves up to and including the current move in the . The relational structure given to this set by Abramsky, Dawar and Wang ensures that pebbled positions preserve relations in , while the caveat here about -structures is a technicality to ensure that the pebbled positions when \u201cplaying\u201d according to a map all define partial homomorphisms, in particular they give well defined partial maps from to .\nAs we saw in Lemma 7 ###reference_7### a Duplicator winning strategy in + is given by an -consistent strategy in . The -consistency condition can be seen as saying that the corresponding map must, on certain \u201cequivalent\u201d elements of give the same value. We can formally define the equivalence relation as follows.\nFor and a relational structure. Define on the universe of as follows:\nIn general, for any structured -history , we write to denote the -equivalence class of an element with .\nThis allows us to define the main construction of this section as a quotient of the relational structure . Note that the relation is not a congruence of this structure, so there is not a canonical quotient. Indeed, given an arbitrary equivalence relation over a relational structure , there are two standard ways to define relations in a quotient . We could say that a tuple of equivalence classes is in a relation if, and only if, every choice of representatives is in or if some choice of representatives is in . 
The latter definition has the advantage that the quotient map from to is a homomorphism and it is this definition that we assume for the rest of the paper. From this definition we also see that for any homomorphism the map which sends to is a well-defined homomorphism.\nFor , and a relational signature, we define the functor by:\nOn objects .\nOn morphisms .\nWriting for the quotient map enables us to establish the following useful property.\n{obs}\nCombining this with Lemma 7 ###reference_7###, we have the appropriate generalisation of Lemma 8 ###reference_8###.\nFor -structures and , there is a homomorphism if, and only if, there is a winning strategy for Duplicator in the game +\nFrom right to left, by Lemma 7 ###reference_7### we have an -consistent winning strategy for Duplicator in . The -consistency condition implies that the Duplicator response to a Spoiler play is determined by and only. So the corresponding homomorphism respects and is a well-defined homomorphism .\nFor the other direction, note that defines a Duplicator winning strategy for which is -consistent. Thus, by Lemma 7 ###reference_7###, there is a winning strategy for Duplicator in +.\nFurthermore, we can see that the quotient map defined above is indeed a\nnatural transformation between the functors and .\nis a natural transformation.\nLet and be relational structures over the same signature and \nbe a homomorphism. To show that is natural we need to establish the equality\n. Fix an element .\nOn the right hand side of we have that and so\n. On the left hand side,\n and so as required.\nThis allows us to prove the following important lemma.\nThe counit and comultiplication for lift to well-defined natural transformations for .\nSuppose . Then by the definition of , we have and so so we can define such that . So by Observation 8 ###reference_8### this is a homomorphism for every and by Lemma 10 ###reference_10### it is natural. \nThe argument is slightly more complicated for . 
Firstly introduce defined such that the length of the list in is the same as the length of the list in and . Informally, this means replacing every appearing in with the prefix of which runs up to (and includes) that appearance of . Now it is not hard to see that for any\nNow, as is a map from to , to show that it \u201clifts\u201d to being a comultiplication for we must show that the function\nis well-defined with respect to .\nSo, for any , we prove that , as elements of . Firstly, by definition and so . We can write similar expressions for . \nAs we have that we use the above fact about to get that . As only changes the elements of a list leaving the pebble indices unchanged and is based only on the pebble indices of a list, we can deduce that . So, by the definition of , if , which is precisely the statement that .\nNaturality for follows from the naturality of and the naturality\nof the comultiplication of .\nWe call these lifted natural transformations and . As , we have that for any the notion of \u201cthe\u201d equivalence class of , is well-defined. So for any term built from composing and we have that the term , obtained by replacing by , with and with satisfies by the above proof. Now as the counit and coassociativity laws are equations in and which remain true on taking the quotient we have the following result.\nis a comonad on"
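The quotient construction at the heart of this section uses the "existential" convention: a tuple of equivalence classes is related if, and only if, some choice of representatives is related, which is exactly what makes the quotient map a homomorphism. A small Python sketch of that convention; the encoding and names are our own, not the paper's.

```python
def quotient(universe, relations, classes):
    """Quotient a relational structure by an equivalence relation.

    `relations` maps a relation symbol to a set of tuples over `universe`;
    `classes` maps each element to (a representative of) its class.  A
    tuple of classes is related iff SOME choice of representatives is
    related -- equivalently, the quotient relation is the image of the
    original relation under the class map."""
    q_universe = {classes[x] for x in universe}
    q_relations = {
        R: {tuple(classes[x] for x in t) for t in tuples}
        for R, tuples in relations.items()
    }
    return q_universe, q_relations

# Tiny example: a 4-cycle quotiented by the antipodal equivalence 0~2, 1~3.
universe = {0, 1, 2, 3}
E = {(0, 1), (1, 2), (2, 3), (3, 0)}
classes = {0: 0, 1: 1, 2: 0, 3: 1}
qU, qR = quotient(universe, {"E": E}, classes)

# The quotient map x -> classes[x] is a homomorphism: every edge's image
# is an edge of the quotient.
assert all((classes[a], classes[b]) in qR["E"] for (a, b) in E)
```

Under the other ("universal") convention the quotient map would in general fail to be a homomorphism, which is why the text fixes the existential one.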
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "3.3. Classifying the morphisms of",
+ "text": "In Abramsky et al.\u2019s treatment of the Kleisli category of [ADW17 ###reference_bx3###] they classify the morphisms according to whether their branch maps are injective, surjective or bijective. We extend this definition to the comonad . This gives us a way of classifying the morphisms to match the classification of strategies given in Section 2 ###reference_###.\nFor a Kleisli morphism of , the branch maps of are defined as the following collection of functions , indexed by the structured -histories :\nWe say that such an is branch-bijective (resp. branch-injective, -surjective) if for every\nWe denote these maps by and .\nInformally, the branch map is the response given by Duplicator in the + when playing according to the strategy represented by after Spoiler has made the series of plays in . This gives us another way of classifying the Duplicator winning strategies for the games from Section 2 ###reference_###.\nThere is a winning strategy for Duplicator in the game + (resp. +, +) if and only if (resp. , ).\nImmediate from the definitions.\nExpanding this connection between Kleisli maps and strategies, we define the following:\n{defi}\nWe say a Kleisli map is strongly branch-bijective (resp. strongly branch-injective, -surjective) if the strategy for the game (resp. ) is also a winning strategy for the game (resp. ) and we denote these maps by (resp. and ).\nNow we generalise a result of Abramsky, Dawar and Wang to the Kleisli category .\nFor finite relational structures,\nAs and are finite, the existence of an injection implies that . So, implies that and thus any injective map between the two is also surjective and vice versa. This means the first equivalence is trivial and further both of these imply \nFor the second equivalence, we first introduce some notation. Let be the finite substructure of induced on the elements . Note that for any , the Kleisli completion restricts to a bijective homomorphism for each . 
So if and are branch-bijective, we have for each a pair of bijective homomorphisms . As these are finite structures we can deduce that these are indeed isomorphisms and so is a strategy for .\nFor the final equivalence, if witnesses then we have, by induction, that is an isomorphism from to for each . So is an isomorphism witnessing . For the converse we suppose that there is an isomorphism . Then the Kleisli map is a strongly branch-bijective strategy.\nThis lemma allows us to conclude that the isomorphisms in the category correspond with equivalence of structures up to variable infinitary logic extended by all generalised quantifiers of arity at most and thus with winning strategies for Hella\u2019s -bijective -pebble game.\nFor two -structures and the following are equivalent:\n\nDuplicator has a winning strategy for\n\nImmediate from Lemma 14 ###reference_14### and Hella [Hel89 ###reference_bx16###].\nA similar result can also be obtained relating branch-injective and branch-surjective\nmaps to monomorphisms and epimorphisms respectively. However, the category in question is not\nthe full category where is seen as a comonad on\n but rather the restriction of this category where the objects are only the -structures.\nAbramsky and Shah show that this category can be obtained from the relevant game comonad\nas the Kleisli category of a relative comonad [AS18 ###reference_bx4###]."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "4. Coalgebras and Decompositions",
+ "text": "Abramsky et al. [ADW17 ###reference_bx3###] show that the coalgebras of the comonad have a surprising correspondence with objects of great interest to finite model theorists. That is, any coalgebra gives a tree decomposition of of width at most and any such tree decomposition can be turned into a coalgebra.\nThis result works because has a treelike structure where any pebble history, or branch, only witnesses the relations from the elements of which make up the pebbled position on . So a homomorphism witnesses a sort of treelike -locality of the relational structure and the -coalgebra laws are precisely enough to ensure this can be presented as a tree decomposition (of width ).\nIn lifting this comonad to , we have given away some of the restrictive -local nature of which makes this argument work. The structure witnesses many more of \u2019s relations than . Take, for example, the substructure induced on the elements , where is the empty history. This witnesses all relations in which have arity . So, in particular, if contains no relations of arity greater than , this substructure is just a copy of and the obvious embedding can be easily seen to be a -coalgebra.\nFrom this, we can see that if -coalgebras capture some notion of -generalised tree decomposition, this should clearly be more permissive than the notion of tree decomposition, allowing a controlled amount of non-locality (parameterised by ) and collapsing completely for -structures with . In this section we define the appropriate generalisation of tree decomposition and show its relation with -coalgebras."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "4.1. Generalising tree decomposition",
+ "text": "Recall the definition of a tree decomposition of a -structure, for\nexample from Definition 4.1.1 of [Gro17 ###reference_bx15###].\nA tree decomposition of a -structure is a pair with a tree and such that:\nFor every the set induces a subtree of\n; and\nfor all relational symbols and related tuples , there\nexists a node such that .\nTo arrive at a generalisation of tree decomposition which allows for the non-locality discussed above, we first introduce the following extension of ordinary tree decompositions.\nAn extended tree decomposition of a -structure is a triple with such that:\nis a tree-decomposition of where is defined by ; and\nif and then .\nIn an extended tree decomposition, the bags of the underlying tree decomposition are split into a fixed bag and a floating bag . The second condition above ensures that contains only elements for which this is their first (i.e. minimum in the tree order) appearance in .\nWidth and arity are two important properties of extended tree decompositions.\nLet be an extended tree decomposition.\nThe width, , of is .\nThe arity, , of is the least such that:\nif then ; and\nfor every tuple in every relation of , there is a such that and .\nWe note that the definition of width here differs from the width of the underlying tree decomposition . However, as we see in Lemma 16 ###reference_16###, having an ordinary tree decomposition of width is equivalent to having an extended tree decomposition of width and arity .\nWe are particularly interested in extended tree decompositions that are further well-structured, in a sense that is related to the definition of structured -histories in Section 3 ###reference_###.\n{defi}\nAn extended tree decomposition with width and arity is structured if for every there exists s.t. , for every node , , for any child of and for any a child of we have that either:\n; or\n; or"
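The two conditions in the definition of a tree decomposition recalled above (each element's bags form a connected subtree; each related tuple fits inside some single bag) can be checked mechanically. A hedged Python sketch with a finite encoding and names of our own:

```python
def is_tree_decomposition(tree_edges, bags, relations):
    """Check the two tree-decomposition conditions.

    `tree_edges` is a set of undirected edges on the node set of `bags`
    (a dict node -> bag of structure elements); `relations` maps each
    relation symbol to its set of related tuples."""
    nodes = set(bags)
    elements = set().union(*bags.values()) if bags else set()

    # Condition 1: for each element, the nodes whose bag contains it
    # induce a connected subtree (checked by a flood fill within them).
    for x in elements:
        holding = {v for v in nodes if x in bags[v]}
        seen, stack = set(), [next(iter(holding))]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            for a, b in tree_edges:
                if a == v and b in holding:
                    stack.append(b)
                elif b == v and a in holding:
                    stack.append(a)
        if seen != holding:
            return False

    # Condition 2: every related tuple is contained in some single bag.
    return all(
        any(set(t) <= bags[v] for v in nodes)
        for tuples in relations.values()
        for t in tuples
    )

# One bag per edge of the path a - b - c gives a width-1 decomposition.
bags = {0: {"a", "b"}, 1: {"b", "c"}}
ok = is_tree_decomposition({(0, 1)}, bags, {"E": {("a", "b"), ("b", "c")}})
```

The width of a decomposition checked this way is the maximum bag size minus one, matching the usual convention.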
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "4.2. Drawing extended tree decompositions and examples",
+ "text": "We draw extended tree decompositions as trees where the nodes have two labels, an upper label indicating the fixed bag at that node and the lower label denoting the floating bag. In this subsection, we give some simple examples of these decompositions.\nAny structure which has no relations of arity greater than admits a trivial arity , width extended tree decomposition with a single node. This is drawn as:\nFrom this example we see that, in particular, any graph has a trivial extended tree decomposition of arity . The next two examples show that for graphs, extended tree decompositions of arity look similar to ordinary tree decompositions.\n{exa}\nConsider the following tree as a graph.\nAs with ordinary tree decompositions a tree can be given a decomposition of width by creating a bag for each edge. The corresponding extended tree decomposition of width and arity for is the following:\nUnlike with ordinary tree decompositions, the floating bags in extended tree decompositions can be used to give more succinct decompositions (without changing the width). For example, the following is an extended decomposition of again with width and arity .\nAs we see in Lemma 16 ###reference_16###, the correspondence between ordinary tree decompositions and extended tree decompositions of arity extends beyond trees to all relational structures. However, for signatures of arity higher than increasing the arity of an extended tree decomposition can result in non-trivial decompositions of lower width as is shown by the following example.\nConsider a hypergraph constructed from above by adding ternary edges and . Such a structure contains a -clique in its Gaifman graph (see Libkin [Lib04 ###reference_bx23###] Definition 4.1) and so cannot have an ordinary tree decomposition of width less than . However, the following is an extended tree decomposition of width and arity for :"
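The two-label drawing convention described above (an upper label for the fixed bag, a lower label for the floating bag at each node) can be mimicked textually. A purely illustrative Python sketch; the data layout and names are our own, and the example data is invented to echo the single-node decomposition where every element floats at the root.

```python
def render(children, fixed, floating, node, depth=0):
    """Render an extended tree decomposition using the two-label
    convention: the upper label [..] is the fixed bag and the lower
    label (..) is the floating bag at each node."""
    pad = "  " * depth
    lines = [pad + "[" + ",".join(sorted(fixed[node])) + "]",
             pad + "(" + ",".join(sorted(floating[node])) + ")"]
    for child in children.get(node, ()):
        lines += render(children, fixed, floating, child, depth + 1)
    return lines

# A single-node decomposition: empty fixed bag, everything floats.
trivial = render({}, {0: set()}, {0: {"a", "b", "c"}}, 0)
print("\n".join(trivial))
```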
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "4.3. Preliminary results on extended tree decompositions",
+ "text": "Before proving the main result of this section we present two results which establish some basic facts about this new type of decomposition. The first establishes the equivalence of width , arity extended tree decompositions with ordinary tree decompositions of width . This is interesting as we recall from [ADW17 ###reference_bx3###] that tree decompositions of width correspond to coalgebras of whereas we will see in Theorem 18 ###reference_18### that coalgebras of give extended tree decompositions of arity and width . In this light, this result can be seen as demonstrating the extra strength of over .\nA relational structure has a tree decomposition of width if, and only if, it has an extended tree decomposition of width and arity 1\n() Without loss of generality we can assume that is a tree decomposition such that for all and if is a child of in then . We now show how to transform such a tree decomposition into an extended decomposition of width and arity .\nDefine the equivalence relation on as\nNow we can define the extended decomposition as follows:\n\nwhere is the common parent of the elements of\n\nFor non-root nodes in both and are well-defined by the definition of . For the singleton equivalence class containing the root of we choose any and define and .\nLetting we have that and so is a tree decomposition. Furthermore, by definition, so for any we have by the condition that is a connected subtree of for any . So is an extended tree decomposition.\nIt is easy to see that the maximum size of is equal to by design. So the width of is . If is a tuple in a relation of we know that there is a node such that . By definition, with . So the arity of is 1, as required.\n()\nTo go backwards we take a width , arity extended tree decomposition and we construct a tree decomposition by replacing each node with the following spider :\nwhere the children of the leaf of labelled by are the roots of the spiders such that is a child of in and . 
To see that this is a tree decomposition note firstly that is clearly a tree under this construction. Next, it is easy to see that for any , either appears in every bag of or just in a single leaf. This means that the bags containing in still form a connected subtree. Lastly, we need to show that each related tuple in is contained in some bag of . This is guaranteed by the condition that has arity , which means any time there exists such that .\nHaving established the connection between extended tree decompositions and ordinary tree decompositions we now relate extended tree decompositions to our construction in Section 3 ###reference_### with the next easy but important result. It is noteworthy here that the extended tree decompositions admitted by the structures from Section 3 ###reference_### are structured. This is important later in this section.\nFor any finite , there is a structured extended tree decomposition of of width and arity .\nRecall that the underlying set of consists of representatives of equivalence classes in where is a structured -history and . We construct an extended tree decomposition where each node is an -history appearing in one of these representatives. The tree ordering is simply given by the prefix relation. The fixed bag at , , contains up to elements which represent the at most elements which are pebbled after is played. To describe these explicitly, let be the flattening of the list and for each appearing as a pebble index in and let be the maximal prefix of which ends in for some . Then contains the -equivalence classes of each of the . As there can be at most elements in this set, our extended tree decomposition has width . The floating bag is given, more simply as . From this description it is easy to see that for any , if appears in then is a prefix of and for any with we have . 
This confirms that is a connected subtree of T and that is a singleton containing the root of that subtree.\nTo show that defines an extended tree\ndecomposition of it now suffices to show that any\nrelated tuple appears in some bag. Because of the way\nrelations are defined in we can find s.t. . By the definition of\nrelations in we know that the are totally\nordered by the prefix relation. This means that the is\nsimilarly totally ordered with largest element . The related\ntuple is contained in . Furthermore, contains the for which . As these are linearly ordered by the prefix relation it would be impossible for there to be more than distinct such lists. This means that () is indeed an extended tree decomposition of width and arity .\nTo see that is structured we rely on the fact that the sequences appearing in are themselves structured in the sense of Definition 3.1 ###reference_###. The proof is as follows. Suppose there is a node with a child where and suppose that and . We now need to show that for any node . Unpacking the definitions we have that contains elements where appears in for some . As we also know that , which means in particular that does not contain two pairs for because if it did the contributions from pebbles and to would both be . These two facts together mean that the length of must be strictly less than . Thus, as is a structured -history, we must have that the first element of is where such that the index appears in some pair in . It is not hard to see that , completing our proof.\nWe now prove the main claim of this section: the -coalgebras are in correspondence with structured extended tree decompositions of width and arity ."
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "4.4. Correspondence with coalgebras",
+ "text": "In this final subsection we establish the connection between width , arity extended decompositions of a which are structured and coalgebras . Formally stated, we prove the following theorem:\nFor a finite relational structure the following are equivalent:\nthere is a -coalgebra\nthere is a structured extended tree decomposition of with width at most and arity at most\n(1 2) Let be a coalgebra and, as , let . Recall that by Lemma 17 ###reference_17### there is a structured extended tree decomposition of with arity and width where the nodes of are labelled by structured -histories . We use this decomposition to define a decomposition on as follows:\nis the tree restricted to the set .\n.\n.\nWe now show, firstly, that this is an extended tree decomposition, secondly that it has width and arity and finally that it is structured.\nFirst of all this requires that be a tree. For any we have some with . Suppose that . It is sufficient to show that for any prefix of (including the empty sequence). This fact can be deduced from the comultiplication law that for all . The left-hand side of this equation is where and the right-hand side is where . Taking any it is not hard to see that and . From this we can conclude that for any appearing in for any we have that where is the empty sequence. This proves that all prefixes of appear in . Now we show that with defines a tree decomposition of . Indeed is a subtree because it is really the intersection of two subtrees of the original . Furthermore, for any , we have that . As is a tree decomposition, there is an with . We can assume by taking the longest prefix of which satisfies this. This works by noting that for a parent of in , . This means that and . This shows that defines an extended tree decomposition.\nAs is injective by the coalgebra law , we know that for any and by definition. As has width this means that for all and so has width . For arity, we have that for any related tuple in the tuple is related in . 
having arity means that for any . So again by the injectivity of and so has arity .\nFinally the extended tree decomposition is structured because is structured and the coalgebra laws guarantee that and for any with child node . This first equation is deduced by noting that injectivity guarantees . The reverse inequality comes from the fact that any is the equivalence class of some prefix of . As we saw before, the comultiplication law guarantees that such classes are realised as for an appropriate so we have . The second equation follows from the same reasoning. Together these ensure that the conditions for being structured which are satisfied in are also satisfied in .\n(2 1) Defining a coalgebra from a\nstructured extended tree decomposition of width \nand arity requires some careful bookkeeping which is presented\nexplicitly here. Throughout we rely on the fact that our tree \ncomes with an order and so has a root which we call . By the conditions of being structured, we have for each a -minimal node where appears in and we have that . This means in particular that at the root .\nThe general strategy in defining the coalgebra is to assign to each node a structured -history which records the elements of which have appeared in on the path from to . We then show that defines a -coalgebra for .\nStarting at the root we define to be the empty list. At each new node in with parent we define to record the elements of which appear in and persist in . As the arity of is we know that . We then form by appending to . This inductively defines on all the nodes of .\nDefining in such a way as to ensure is a structured -history requires some care with assigning pebble indices from to the elements in . To help keep track of these indices we also define a function . We say that a live prefix of is a prefix of the flattened list with final element such that no larger prefix of ends with for any . We say that is live in if it appears at the end of some live prefix . 
The end goal is that will be an -history where the live elements are exactly those in and that for each such element there is a live prefix of ending in the pair .\nAt each we partition as where is the set of new elements in and is the set of elements retained from the parent node. Firstly, we define to be equal to on . As the width of is we know that and so the number of free indices is at least as big as the number of new elements so we can assign to each element of a distinct index from . In many cases this is enough and we can pick any ordering of the elements in and set to be the list .\nWe now need to define some modifications to this to ensure that is structured. Recall that an -history is structured if and only if for every pair of successive blocks appearing immediately before in we have that either or the first pebble index in must have appeared in . To ensure this holds true for each , we need to take extra care defining in cases where or are less than .\nIf then we must choose to be an index which appeared in . To see that we can do this recall that is structured and so for each non-root node with child we have (using our new language from this proof) that at least one of the following is true\n,\n; or\n.\nIn the first case, we have so no action needs to be taken.\nIn the second case, where and then there is a spare index and we define to be and we define .\nIn the third case, there may not be a spare index but instead there is some element meaning that some element which appears in does not need to be live after . In this case we simply define .\nCollectively, these modifications ensure that is structured and so the definition is well-defined. It remains to show that is a coalgebra.\nTo show that is a homomorphism, take any related tuple . As is an extended tree decomposition there is some such that . Now as the arity of the decomposition is there are at most elements with and so . 
For all the other elements there must be some earlier with and a unique path linking and in . We must have and for all so by the definition of above we know that the index used to pebble in has not been reallocated by the end of . From this it is easy to see that the tuple (with function application defined component-wise on the tuple) is related in .\nFinally, we verify that satisfies the coalgebra laws. The counit law, is satisfied by definition. For comultiplication, it suffices to check that for any , if appears in then it appears in exactly one of the and . This can be seen to hold from the construction above, concluding our proof."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "5. Concluding Remarks",
+ "text": "The work of Abramsky et al., giving comonadic accounts of pebble games and their relationship to logic has opened up a number of avenues of research. It raises the possibility of studying logical resources through a categorical lens and introduces the notion of coresources. This view has been applied to pebble games [ADW17 ###reference_bx3###], Ehrenfeucht-Fra\u00efss\u00e9 games, bisimulation games [AS18 ###reference_bx4###] and also to quantum resources [ABdSZ17 ###reference_bx1###, ABKM19 ###reference_bx2###]. In this paper we have extended this approach to logics with generalised quantifiers.\nThe construction of the comonad introduces interesting new techniques to this project. The pebbling comonad is graded by the value of which we think of as a coresource increasing which constrains the morphisms. The new parameter provides a second coresource, increasing which further constrains the moves of Duplicator. It is interesting that the resulting comonad can be obtained as a quotient of and the strategy lifting argument developed\nin Section 3 ###reference_### could prove useful in other contexts.\nThe morphisms in the Kleisli category correspond to winning strategies in a new game we introduce which characterises a natural logic: the positive logic of homomorphism-closed quantifiers. The isomorphisms correspond to an already established game: Hella\u2019s -bijective game with pebbles. This relationship allows for a systematic exploration of variations characterising a number of natural fragments of the logic with -ary quantifiers. One natural fragment that is not yet within this framework and worth investigating is the logic of embedding-closed quantifiers of Haigora and Luosto [HL14 ###reference_bx18###].\nThis work opens up a number of perspectives. Logics with generalised quantifiers have been widely studied in finite model theory. 
They are of interest less in themselves than as tools for proving inexpressibility in specific extensions of first-order or fixed-point logic. For instance, the logics with rank operators [DGHL09 ###reference_bx7###, GP19 ###reference_bx14###], of great interest in descriptive complexity, have been analysed as fragments of a more general logic with linear-algebraic quantifiers [DGP19 ###reference_bx8###]. It would be interesting to explore whether the comonad could be combined with a vector space construction to obtain a categorical account of this logic.\nMore generally, the methods illustrated by our work could provide a way to deconstruct pebble games into their component parts and find ways of constructing entirely new forms of games and corresponding logics. The games we consider and classify are based on Duplicator playing different kinds of functions (i.e. morphisms on finite sets) and maintaining different kinds of homomorphisms (i.e. morphisms in the category of -structures). Could we build reasonable pebble games and logics on other categories? In particular, can we bring the algebraic pebble games of [DH17 ###reference_bx10###] into this framework?"
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {},
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "The Quantum Monad on Relational Structures.",
+ "author": "Samson Abramsky, Rui Soares Barbosa, Nadish de Silva, and Octavio Zapata.",
+ "venue": "In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS), volume 83 of LIPIcs, pages 35:1\u201335:19, 2017.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "A comonadic view of simulation and quantum resources.",
+ "author": "Samson Abramsky, Rui Soares Barbosa, Martti Karvonen, and Shane Mansfield.",
+ "venue": "In 34th Annual ACM/IEEE Symposium on Logic in Computer Science, (LICS), 2019.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "The pebbling comonad in finite model theory.",
+ "author": "Samson Abramsky, Anuj Dawar, and Pengming Wang.",
+ "venue": "In 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), pages 1\u201312, June 2017.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Relating structure and power: Comonadic semantics for computational resources.",
+ "author": "Samson Abramsky and Nihil Shah.",
+ "venue": "In 27th EACSL Annual Conference on Computer Science Logic, CSL, volume 119 of LIPIcs, pages 2:1\u20132:17, 2018.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Introduction to categories and categorical logic.",
+ "author": "Samson Abramsky and Nikos Tzevelekos.",
+ "venue": "In New structures for physics, pages 3\u201394. Springer, 2010.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Generalized quantifiers and logical reducibilities.",
+ "author": "Anuj Dawar.",
+ "venue": "Journal of Logic and Computation, 5(2):213\u2013226, 1995.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Logics with rank operators.",
+ "author": "Anuj Dawar, Martin Grohe, Bjarki Holm, and Bastian Laubner.",
+ "venue": "In 24th Annual IEEE Symposium on Logic In Computer Science (LICS), pages 113\u2013122, Washington, DC, USA, 2009. IEEE Computer Society.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Approximations of isomorphism and logics with linear-algebraic operators.",
+ "author": "Anuj Dawar, Erich Gr\u00e4del, and Wied Pakusa.",
+ "venue": "In 46th International Colloquium on Automata, Languages, and Programming (ICALP), volume 132 of LIPIcs, pages 112:1\u2013112:14, 2019.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "The expressive power of finitely many generalized quantifiers.",
+ "author": "Anuj Dawar and Lauri Hella.",
+ "venue": "Information and Computation, 123(2):172\u2013184, 1995.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Pebble games with algebraic rules.",
+ "author": "Anuj Dawar and Bjarki Holm.",
+ "venue": "Fundamenta Informaticae, Vol. 150, nr 3/4:281\u2013316, 2017.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Infinitary logic and inductive definability over finite structures.",
+ "author": "Anuj Dawar, Steven Lindell, and Scott Weinstein.",
+ "venue": "Information and Computation, 119(2):160\u2013175, 1995.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Extended logics: The general framework.",
+ "author": "Heinz-Dieter Ebbinghaus.",
+ "venue": "In J. Barwise and S. Feferman, editors, Model-Theoretic Logics, pages 25\u201376. Springer-Verlag, New York, 1985.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Finite Model Theory.",
+ "author": "Heinz-Dieter Ebbinghaus and J\u00f6rg Flum.",
+ "venue": "Springer, 2nd edition, 1999.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Rank logic is dead, long live rank logic!",
+ "author": "Erich Gr\u00e4del and Wied Pakusa.",
+ "venue": "The Journal of Symbolic Logic, 84(1):54\u201387, 2019.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Descriptive Complexity, Canonisation, and Definable Graph Structure Theory, volume 47 of Lecture Notes in Logic.",
+ "author": "Martin Grohe.",
+ "venue": "Cambridge University Press, 2017.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Definability hierarchies of generalized quantifiers.",
+ "author": "Lauri Hella.",
+ "venue": "Annals of Pure and Applied Logic, 43(3):235 \u2013 271, 1989.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Logical hierarchies in PTIME.",
+ "author": "Lauri Hella.",
+ "venue": "Information and Computation, 129(1):1 \u2013 19, 1996.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "On logics extended with embedding-closed quantifiers, 2014.",
+ "author": "Jevgeni Haigora and Kerkko Luosto.",
+ "venue": "arXiv:1401.6682.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Descriptive Complexity.",
+ "author": "Neil Immerman.",
+ "venue": "Springer-Verlag, 1998.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Infinitary logic for computer science.",
+ "author": "Phokion G. Kolaitis and Moshe Y. Vardi.",
+ "venue": "In 19th International Colloquium on Automata, Languages, and Programming (ICALP), pages 450\u2013473, 1992.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Generalized quantifiers and pebble games on finite structures.",
+ "author": "Phokion G. Kolaitis and Jouko A. V\u00e4\u00e4n\u00e4nen.",
+ "venue": "Annals of Pure and Applied Logic, 74(1):23 \u2013 75, 1995.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "A game-theoretic approach to constraint satisfaction.",
+ "author": "Phokion G. Kolaitis and Moshe Y. Vardi.",
+ "venue": "In Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, pages 175\u2013181, 2000.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Elements Of Finite Model Theory (Texts in Theoretical Computer Science. An Eatcs Series).",
+ "author": "Leonid Libkin.",
+ "venue": "Springer-Verlag, 2004.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "First order predicate logic with generalized quantifiers.",
+ "author": "Per Lindstr\u00f6m.",
+ "venue": "Theoria, 32(3):186\u2013195, 1966.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2006.16039v6"
+ }
20240722/2202.04060v4.json ADDED
@@ -0,0 +1,661 @@
+ {
+ "title": "Streaming word problems",
+ "abstract": "We study deterministic and randomized streaming algorithms for word problems of finitely generated groups. For finitely generated\ngroups that can be obtained from linear groups using the following operations we show the existence of randomized streaming algorithms with logarithmic\nspace complexity for their word problems: finite extensions, taking finitely generated subgroups, graph products\nand wreath products by finitely generated abelian groups. We contrast these results with several lower bounds.\nAn example of a finitely presented group, where\nthe word problem has only a linear space randomized streaming algorithm, is Thompson\u2019s group F.\nFinally, randomized streaming algorithms for subgroup membership problems in free groups and direct products\nof free groups are studied.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "1. Introduction",
+ "text": "The word problem for a finitely generated group is the following computational problem: Fix a finite set of generators for ,\nwhich means that every element of can be written as a finite product of elements from . The input for the word problem is a finite\nword over the alphabet and the question is whether this word evaluates to the group identity of .\nThe word problem was introduced by Dehn in 1911 [18 ###reference_b18###]. It is arguably the most important computational problem in group theory and has\nbeen studied by group theorists as well as computer scientists; see [54 ###reference_b54###] for a survey. In recent years, complexity theoretic investigations\nof word problems moved into the focus. For many important classes of groups it turned out that the word problem belongs to low-level complexity\nclasses. The first result in this direction was proved by Lipton and Zalcstein [43 ###reference_b43###] (if the field has characteristic zero) and Simon [66 ###reference_b66###]\n(if the field has prime characteristic): if is a finitely generated linear group over an arbitrary field (i.e., a finitely generated group of invertible\nmatrices over ), then the word problem for can be solved in deterministic logarithmic space. Related results can be found in [39 ###reference_b39###, 70 ###reference_b70###].\nThe word problem of a group with a finite generating set can be identified with a formal language consisting of all\nwords over the alphabet that evaluate to the group identity of . Language theoretic aspects of the word problem have been studied\nintensively in the past. 
For instance, Anissimov and Seifert [2 ###reference_b2###] showed that is regular if and only if is finite, and\nMuller and Schupp [57 ###reference_b57###] showed that is context-free if and only if is virtually free (if is a property or class of\ngroups, then a group is called virtually if it is a finite extension of a -group);\nsee [31 ###reference_b31###] for an overview.\nIn this paper we initiate the study of streaming algorithms for word problems. These are algorithms that do not have random access to the whole input.\nInstead, the -th input symbol is only available at time [1 ###reference_b1###]. Quite often, streaming algorithms are randomized and have a bounded error\nprobability. Usually, one is interested in the space used by a streaming algorithm, but also update times\n(i.e., the worst case time spent to process a new input symbol) have been studied. Clearly, every regular language has a deterministic streaming algorithm with constant\nspace; it is a deterministic finite automaton for . Randomized streaming algorithms for context-free languages have been studied in [5 ###reference_b5###, 9 ###reference_b9###, 22 ###reference_b22###, 48 ###reference_b48###].\nLet us now explain the main results of this paper. For a finitely generated group with finite generating set ,\nthe deterministic (resp., randomized) streaming space complexity of is the\nspace complexity of the best deterministic (resp., randomized) streaming algorithm for . 
The concrete\nchoice of the generating set has only a minor influence on the deterministic (resp., randomized) streaming space complexity\nof ; see Lemma 5.1 ###reference_definition1### for a precise statement.\nIn statements where the influence of the generating set on the streaming space complexity is blurred by\nthe Landau notation, we speak of the deterministic/randomized streaming space complexity\nof the word problem of or simply the deterministic/randomized streaming space complexity of .\nThe deterministic streaming space complexity of \nis directly linked to the growth function of the group .\nThe latter is the number of different group elements\nof that can be represented by words over the finite generating set of length at most (also here the\ngenerating set only has a minor influence). The deterministic streaming space complexity of the word problem for \nturns out to be up to a small additive constant (Theorem 6.1 ###reference_definition1###). The growth of finitely generated groups is a well investigated\ntopic in geometric group theory. A famous theorem of Gromov says that a finitely generated group has polynomial growth if and only if it is virtually nilpotent; see [17 ###reference_b17###, 51 ###reference_b51###]\nfor a discussion. Theorem 6.1 ###reference_definition1### reduces all questions about the deterministic streaming space complexity of word problems to questions about growth\nfunctions. 
Due to this, we mainly study randomized streaming algorithms for word problems in this paper.\nIn the randomized setting, the growth of still yields a lower bound:\nThe randomized streaming space complexity of the word problem of is lower bounded by\n (Theorem 6.2 ###reference_definition2###).\nA large class of groups, where this lower bound can be exactly matched by an upper bound, is the class of finitely generated linear groups.\nRecall that Lipton and Zalcstein [43 ###reference_b43###] and Simon [66 ###reference_b66###] showed that the word problem of a finitely generated linear group can be solved in logarithmic space.\nTheir algorithm can be turned into a randomized streaming algorithm with logarithmic space complexity. In order to plug these streaming algorithms\ninto closure results for randomized streaming space complexity (that are discussed below) we need the notion of a so-called\n-distinguisher for . Roughly speaking, a randomized streaming algorithm for a finitely generated group with finite generating set \nis an -distinguisher\nif for all words of length at most the following hold: (i) if and evaluate to the same element of \nthen with probability at least , and lead to the same memory state of the streaming algorithm, and (ii)\nif and evaluate to different elements of \nthen with probability at least , and lead to different memory states of the streaming algorithm; see Section 8 ###reference_###.\nThe error probability may depend on the input length .\nIt is easy to obtain from an -distinguisher for the group a randomized streaming algorithm\n for the word problem of with error probability . Moreover, the space complexity of is only twice\nthe space complexity of ; see Lemma 8.1 ###reference_definition1###.\nWe then show that for every finitely generated linear group there is an -distinguisher with\nspace complexity (Theorem 9.2 ###reference_definition2###) and inverse polynomial\nerror probability for any constant . 
If is moreover virtually nilpotent,\nthen the space complexity can be further reduced to at the cost of an inverse polylogarithmic error probability (for any constant\n); see Theorem 9.3 ###reference_definition3###.\nIn fact, using a known gap theorem\nfor the growth of linear groups [55 ###reference_b55###, 71 ###reference_b71###], it turns out that the randomized streaming space complexity of the word problem for a finitely generated\nlinear group is either (if is virtually nilpotent) or (if is not virtually nilpotent),\nsee Theorem 10.3 ###reference_mdefinition3###.\nFor non-linear groups the situation turns out to be more difficult. We show that\nthe existence of low-error distinguishers with logarithmic space complexity is preserved by\ncertain group constructions\nincluding finite extensions (Theorem 10.2 ###reference_mdefinition2###), graph products (Theorem 10.7 ###reference_mdefinition7###) and\nwreath products by finitely generated abelian groups (Corollary 10.13 ###reference_mdefinition13###).\nUsing these transfer results we obtain also non-linear groups with\na logarithmic randomized streaming space complexity, e.g., metabelian groups (Corollary 10.5 ###reference_mdefinition5###) and free solvable groups (Corollary 10.14 ###reference_mdefinition14###).\nIn Section 12 ###reference_### we prove lower bounds for the randomized streaming space complexity of word problems.\nFor wreath products of the form such that is non-abelian and is infinite, we can show that the\nrandomized streaming space complexity is by a reduction from the randomized communication complexity\nof disjointness (Theorem 11.1 ###reference_mdefinition1###). A concrete finitely presented group with randomized streaming space complexity is Thompson\u2019s\ngroup (Corollary 11.3 ###reference_mdefinition3###). Thompson\u2019s group (introduced by Richard Thompson in 1965) belongs due to its unusual properties to the most intensively studied infinite groups; see e.g. 
[12 ###reference_b12###]. From a computational perspective it is interesting to note that is co-context-free (i.e., the set of all words over any set of generators that do not evaluate to the\ngroup identity is a context-free language) [42 ###reference_b42###]. This implies that the word problem for Thompson\u2019s group is in DSPACE.\nFinally, we consider the famous Grigorchuk group [26 ###reference_b26###], which\nwas the first example of a group with intermediate word growth as well as the\nfirst example of a group that is amenable but not elementary amenable. We\nshow that the deterministic streaming space complexity of is , whereas the\nrandomized streaming space complexity of is (Theorem 11.6 ###reference_mdefinition6###).\nIn the last section of the paper we consider randomized streaming algorithms for subgroup membership problems. In a subgroup\nmembership problem one has a subgroup of a finitely generated group and for a given input word ( is again\na finite set of generators for ) one has to determine whether represents an element of . The word problem is the special\ncase where . We present a randomized streaming algorithm with logarithmic space complexity for the case where is a finitely generated free\ngroup and is a finitely generated subgroup of (Theorem 12.4 ###reference_mdefinition4###). Moreover, we show that this result extends neither to the case\nwhere is not finitely generated (Theorem 12.5 ###reference_mdefinition5###) nor the case where is a finitely generated subgroup of a direct product of two free groups of rank two (Theorem 12.6 ###reference_mdefinition6###)."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Preliminaries",
+ "text": "For integers let be the integer interval .\nWe write for the set of all probabilities.\nWe write for , where is Euler\u2019s number.\nLet be a finite alphabet.\nAs usual we write for the set of all finite words over the alphabet . The empty word\nis denoted with .\nFor a word () let \nbe its length and (for ) the symbol at position .\nA prefix of a word is a word such that for some word .\nWe denote with the set of all prefixes of .\nLet be the set of non-empty words and\n be the set of all words of length at most .\nFor a subalphabet we denote with the\nprojection homomorphism that deletes all symbols from in a word:\n for and for .\nSeveral times we will make use of the Chernoff bound. There are many variations of the Chernoff bound, the following\nform can be found for instance in [20 ###reference_b20###, equation (1)]:\nLet , , and\n be independent identically distributed Bernoulli random variables with\n and for all . Then we have:"
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Communication complexity",
+ "text": "Our lower bounds for randomized streaming space complexity will be based on randomized communication\ncomplexity. We present the necessary background from communication complexity; see [41 ###reference_b41###]\nfor a detailed introduction.\nConsider a function for some finite sets and .\nA randomized (communication) protocol for consists of two parties called Alice and Bob.\nThe input for Alice (resp., Bob) is an element and a random choice \n(resp., and a random choice ). Here, and are finite sets.\nThe goal of Alice and Bob is to compute .\nFor this, they communicate in a finite number of rounds, where in each round either Alice sends\na bit to Bob or Bob sends a bit to Alice. The protocol determines which of the two communication directions is chosen.\nAt the end, Bob outputs a bit . In a one-way protocol, only Alice sends bits to Bob.\nWe assume a probability distribution on the set (resp., )\nof Alice\u2019s (resp., Bob\u2019s) random choices.\nThe protocol computes if\nfor all we have\nThe cost of the protocol is the maximum of the number of transmitted bits, where the maximum\nis taken over all .\nThe randomized (one-way) communication complexity of is the minimal cost\namong all (one-way) randomized protocols that compute . Here, the size of the finite sets and is not restricted.\nThe choice of the constant in (2 ###reference_###) is arbitrary in the sense that changing the constant\nto any only changes the communication complexity\nby a constant (depending on ),\nsee [41 ###reference_b41###, p. 30]. Also note that we only use the private version of randomized communication protocols, where Alice\nand Bob make private random choices from the sets and , respectively, and their choices are not known to the other\nparty (in contrast to the public version of randomized communication protocols)."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Probabilistic finite automata",
+ "text": "In the following we introduce probabilistic finite automata [60 ###reference_b60###, 61 ###reference_b61###], which will be used\nas our model for randomized streaming algorithms.\nA probabilistic finite automaton (PFA) \nconsists of a finite set of states , a finite alphabet ,\nan initial state distribution ,\na transition probability function \nand a set of final states such that\n and for all , .\nIf and map into , then is a deterministic finite automaton (DFA).\nIf only is required to map into , then is called a\nsemi-probabilistic finite automaton (semiPFA).\nThis means that after choosing the initial state according to the distribution , \nproceeds deterministically.\nLet be a PFA.\nFor a random variable with values\nfrom and we define the random variable (which also takes values from ) by\nFor a word we define a random variable with values from inductively\nas follows: the random variable is defined such that\n for all . Moreover, for all and .\nThus, is the probability that is in state after reading .\nWe can define also via runs:\nA run on a word in the PFA is a sequence \nwhere . We say that ends in .\nGiven a run in we define .\nFor each the function is a probability distribution on\nthe set of all runs of on . Then, is the sum of\nall probabilities , where ends in .\nIf is a semiPFA then we can identify with a mapping ,\nwhere is the unique state with . This mapping is extended to a mapping\n in the usual way: and \nfor all and .\nWe then obtain\nFor a semiPFA and a boolean condition\n that depends on the state , we define the probability\nFor a language , a PFA and a word we define\nthe error probability of on for as"
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "2.3. Sequential transducer",
+ "text": "In Section 10.2 ###reference_2### we make use of (left-)sequential transducers; see e.g. [10 ###reference_b10###] for more details.\nA sequential transducer is a tuple , where\n is a finite set of states, is the input alphabet, is the output alphabet,\n is the initial state, and is the transition function.\nIf then this should be read as follows: if the transducer is in state and the next input\nsymbol is then it moves to state and outputs the word .\nWe extend to a mapping as follows, where\n, and :\nfor all , and\nif and then .\nFinally, we define the function computed by as follows (where\n and ):\n if and only if for some .\nIntuitively, in order to compute , reads the word \nstarting in the initial state and thereby concatenates all the outputs produced in the transitions."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. Streaming algorithms: definitions",
+ "text": "In this section we define our model of randomized streaming algorithms. It is a non-uniform model in the sense\nthat for every input length we have a separate algorithm that handles inputs of length at most .\nFormally, a (non-uniform) randomized streaming algorithm is a sequence \nof PFA over the same input alphabet .\nIf every is deterministic (resp., semi-probabilistic), we speak of a deterministic (resp., semi-randomized) streaming algorithm.\nLet be monotonically decreasing functions.\nA randomized streaming algorithm is -correct for a language \nif for every large enough and every word we have the following:\nif then and\nif then .\nIf then we also say that is -correct for .\nWe say that is a\nrandomized streaming algorithm for if it is -correct for ;\n-sided randomized streaming algorithm for if it is -correct for ;\n-sided randomized streaming algorithm for if it is -correct for ;\ndeterministic streaming algorithm for if it is deterministic and -correct for ;\nnondeterministic streaming algorithm for if it is -correct for for any monotonically decreasing function with ;\nco-nondeterministic streaming algorithm for if it is -correct for for any monotonically decreasing function with .\nThe choice of \nfor the error probability is not important. Using a standard application of the Chernoff bound, one can make\nthe error probability an arbitrarily small constant; see Theorem 4.1 ###reference_definition1### below.\nThe space complexity of the randomized streaming algorithm \nis the function , where is the state set of .\nThe motivation for this definition is that states of can be encoded by bit strings of length at most .\nThe randomized streaming space complexity of the language is the smallest possible function , where\n is a randomized streaming algorithm for . 
In an analogous way we define the -sided (resp., -sided) randomized\nstreaming space complexity, the deterministic streaming space complexity, and the (co-)nondeterministic streaming space complexity of a language .\nThe (non)deterministic streaming space complexity of a language is directly linked to the automaticity of .\nThe automaticity of is the function that maps to the number of states of a smallest DFA \nsuch that for all words we have: if and only if is accepted by .\nIf we allow the automata to be nondeterministic then we obtain the nondeterministic automaticity of .\nHence, the deterministic (resp., nondeterministic) streaming space complexity of is exactly (resp., ). The (nondeterministic)\nautomaticity of languages was studied in [23 ###reference_b23###, 65 ###reference_b65###].\nInteresting in our context is the following result of Karp [37 ###reference_b37###]: if is a non-regular language then\n for infinitely many . Hence, for every non-regular language the\ndeterministic streaming space complexity of is lower bounded by for a constant \nand infinitely many .\nAs remarked before, our model of streaming algorithms is non-uniform in the sense that for every input length we have a separate\nstreaming algorithm . (This is analogous to circuit complexity, where for every input length one\nhas a separate boolean circuit with input gates.)\nThis makes lower bounds of course stronger. On the other hand, the streaming algorithms\nthat we construct for concrete groups will be mostly uniform in the sense that there is an efficient algorithm that constructs from a given \nthe PFA ."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "4. Streaming algorithms: general results",
+ "text": "Before we investigate streaming algorithms for word problems we prove a few general results that are of independent interest.\nLet us first prove that (as stated above) the error probability of a randomized streaming algorithm can be pushed down to any constant\n at the cost of an additional constant factor in the space complexity:\nLet a monotonic function and\n a randomized streaming algorithm such that\n is -correct for the language . Then there exists a randomized streaming\nalgorithm such that and\n is -correct for the language .\nLet with .\nWe use the standard idea of running (in parallel) copies of and making a majority vote at the end.\nFormally, for an and we define the semiPFA as follows:\n,\n,\n, and\n.\nWe then define the new randomized streaming algorithm .\nIn order to bound the error probability of by we have to show that\n for every input word .\nFor this we introduce identically distributed independent Bernoulli\nrandom variables with . Then, for every we have:\nLet . With we obtain with the Chernoff bound (1 ###reference_###):\nThe space complexity of is clearly times the space complexity of .\n\u220e\nLet be a PFA and , .\nWe say that is an -isolated cutpoint for if for all words\n we have\nThe language accepted by with cut-point is the set of all words with\n.\nPaz stated in [59 ###reference_b59###, Theorem 30\u2019] that in this situation there\nexists a DFA for with states.\nA proof can be found in [60 ###reference_b60###, p. 160]; it uses\nthe proof technique for a slightly weaker result of Rabin [61 ###reference_b61###, Theorem 3].\nPaz\u2019s proof easily yields the following result:\nLet be a language with randomized streaming space complexity . Then the\ndeterministic streaming space complexity of is bounded by .\nLet be a randomized streaming algorithm for such that\n. Fix an and\nset and , so that and\n. 
We cannot directly apply the above mentioned result of Paz since is not\nnecessarily a -isolated cut-point for : (3 ###reference_###) only has to hold for words \nof length at most . But we can argue as follows:\nRecall the automaticity function of the language from Section 3 ###reference_###.\nThen the deterministic streaming space complexity of is .\nIt is shown in [34 ###reference_b34###] (see also [65 ###reference_b65###]) that is the maximal number for which there exist words \nsuch that for all with there exists a word such that\n and if and only if .\nAssume now that and fix the above words and .\nConsider with . Since \nand if and only if \nwe get\nwhenever .\nIn the proof of [59 ###reference_b59###, Theorem 30\u2019] (see [60 ###reference_b60###, p. 160]) it is shown that this implies\nWe obtain and hence .\n\u220e\nWe now turn to the connection between randomized and semi-randomized streaming algorithms.\nOur next result states that a randomized streaming algorithm can be transformed into an equivalent semi-randomized\nstreaming algorithms with a moderate blow-up in the space complexity.\nLet for all \nand let be a randomized streaming algorithm which is -correct for the language . Then there is a semi-randomized streaming\nalgorithm with and\n is -correct for the language .\nLet .\nLet us fix an and consider the PFA\n. We first transform into an\nacyclic PFA , where acyclic means that for every run\n of such that and for some we have .\nWe define the components of as follows:\n\n\nFor all states and all we set . Moreover,\n if and .\nFor states we define arbitrarily. Let us set for all \nand whenever .\nFor all states we set . 
Moreover, if .\nThe randomized streaming algorithm is also -correct for the language .\nMoreover the space complexity of is .\nWe now define a random variable , whose value is a DFA\n, as follows:\nFor every state of \nand every we choose a state\n with probability and define .\nThe initial state is chosen with probability .\nThe above choices are made independently.\nLet be the support of (the set of DFAs that have non-zero probability).\nFor every fixed word with and define\n\nby if and only if . In other words:\n if and only if\n makes an error (with respect to the language ) on the word .\nFor the expected value of we obtain\nbecause the left-hand side of the inequality is exactly the error probability of on .\nFor this, it is important that we construct the DFAs from the acyclic PFA\n: in our original PFA\n, there could be a run of the form with \nand . But runs of this form cannot occur in a DFA.\nThe rest of the proof follows the arguments from the proof of Newman\u2019s theorem from communication complexity, see e.g. [41 ###reference_b41###].\nFix a number that will be suitably chosen later.\nFor a -tuple of DFAs from \nwe construct a semi-probabilistic automaton by taking the disjoint\nunion of the . To define the initial state distribution of , let\n be the initial state of . Then we set . 
Thus, the starting state\nof a run in is chosen uniformly among the initial states of the .\nWe show that there exists a -tuple of the above form such that\nfor every input word \nthe error probability of on is at most .\nThen \nis the desired semi-randomized streaming algorithm from the theorem.\nFix again an input word and a -tuple\n.\nThen the error probability\nof on is\nWe now choose the tuple randomly by taking \nindependent copies of the random variable .\nWith the Chernoff bound (1 ###reference_###) and (i.e., )\nwe obtain\nBy the union bound, the probability that\n for some word of length\nat most (where is randomly chosen using \nindependent copies of the random variable ) is bounded by\nIf we choose then this probability is strictly below .\nWith such a the space complexity of becomes\n.\n\u220e\nNote that if and for some constant \nthen in Theorem 4.3 ###reference_definition3###.\nAlso notice that the proof of Theorem 4.3 ###reference_definition3### uses non-uniformity in a crucial way.\nOur final result in this section is a trade-off between the space complexity and the error probability\nfor semi-randomized streaming algorithms:\nLet be the deterministic streaming space complexity of the language and let\n be a semi-randomized streaming algorithm that is\n-correct for the language . Then for every large enough we have\nFix an large enough such that for every word the error probability\n is bounded by .\nLet . Hence, we have\n. If then we are done.\nTherefore, assume that , i.e., .\nThere must exist a state with . Consider the DFA\n. If there is a word such that\n then we would have , which\nyields a contradiction. Therefore we have .\nSince is a DFA with state set , we get .\n\u220e"
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "5. Groups and word problems",
+ "text": "Let be a group. The identity element will always be denoted with .\nFor a subset , we denote with the subgroup\nof generated by . It is the set of all products of elements from .\nIt can also be defined as the smallest (w.r.t. inclusion) subgroup of that contains .\nSimilarly, the normal closure of is the smallest normal subgroup of that contains .\nIt can also be defined as the subgroup .\nWe can then construct the quotient group . The commutator of is the element\n and for subsets we write for the subgroup .\nIn this paper, we only consider finitely generated (f.g.) groups. The group is finitely generated\nif there is a finite set such that . In this situation, is called\na finite generating set for . If then we say that\n is a finite symmetric generating set for . In the following we assume that all finite generating sets\nare symmetric.\nEvery word evaluates to a group element in the natural way. Here is the unique\nmorphism from the free monoid to such that for all .\nInstead of we also write\n. For a word with we define the word\n. Clearly, we have .\nLet be the Cayley graph of with respect to the finite symmetric generating set . It is the edge-labelled graph\nwhose vertex set is and that has an -labelled edge from to for all and\n. Let be the word problem for with respect to the generating set\n.\nNext we introduce free groups and some related concepts.\nFix a finite alphabet and take a copy \nof formal inverses. Let .\nWe extend the mapping () to the whole alphabet \nby setting . For a word \nthe word is defined as above.\nA word is called reduced if it contains no factor of the form \nfor . Let be the set of reduced words.\nThe free group can be defined as the set of reduced\nwords together with the following multiplication operation: Let . Then one can uniquely\nwrite and as and such that and define\nthe product of and in the free group as . For every word \nwe can define a unique reduced word as follows: if then\n and if\n for and then . 
It is important that this definition does not\ndepend on which factor is deleted in . The reduction relation for all\n and is a so-called confluent relation.\nThe reduction mapping then becomes the unique morphism mapping\na word to the element of the free group represented by .\nGroup presentations are a common way to describe groups. Let and be as in the previous paragraph\nand let . Then the quotient group is also denoted by \nand the pair is called a group presentation. The group is finitely generated (since we\nassume to be finite) and every f.g. group can be written in this form. If also is finite then the group \nis called finitely presented.\nWe are interested in streaming algorithms for word problems .\nThe following lemma is simple but important:\nLet and be finite symmetric generating sets for the group and let\n be the deterministic/randomized streaming space complexity of .\nThen there exists a constant that depends on , and \nsuch that .\nFor every generator there is a word such that .\nLet and let \nbe the homomorphism with for .\nLet be a deterministic/randomized\nstreaming algorithm for the language with space complexity . We obtain a deterministic/randomized\nstreaming algorithm for as follows: on input ,\nthe PFA simulates the PFA on the input word .\nThis yields a deterministic/randomized streaming algorithm for with space complexity .\n\u220e\nBy Lemma 5.1 ###reference_definition1###, the dependence of the streaming space complexity on the generating set is often blurred by the use\nof the Landau notation. In such situations we will speak of the deterministic/randomized streaming space complexity for the group \n(instead of the deterministic/randomized streaming space complexity of the language )."
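The free reduction described above is easy to implement with a stack: scan the word left to right and cancel whenever the next letter is the formal inverse of the letter on top of the stack. Confluence guarantees that this greedy cancellation order yields the same unique reduced word as any other order. A small sketch (the convention that an upper-case letter denotes the formal inverse of the corresponding lower-case letter is our encoding):

```python
def reduce_free(word):
    """Freely reduce a word over {a, A, b, B, ...}, where swapping case
    gives the formal inverse. By confluence, the result does not depend
    on the order in which factors x x^{-1} are cancelled."""
    stack = []
    for x in word:
        if stack and stack[-1] == x.swapcase():
            stack.pop()  # cancel a factor of the form x x^{-1}
        else:
            stack.append(x)
    return "".join(stack)
```

In this encoding, a word represents the identity of the free group, i.e. lies in the word problem, exactly when `reduce_free(w) == ""`.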
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "6. Streaming algorithms for word problems and growth",
+ "text": "Let be a finitely generated group and let be a finite symmetric generating set for .\nFor let be the ball of radius in the Cayley graph with center .\nThe growth function is defined by\nfor all .\nFor different finite generating sets of \nthe functions and are different, but their asymptotic behavior\nis the same; see e.g. [51 ###reference_b51###, Proposition 1.3] for a precise statement.\nThe (non)deterministic streaming space complexity of is directly\nlinked to the growth of by the following theorem.\nLet be a finitely generated infinite group and let be a finite symmetric generating set for . Define the function by\nThen, the deterministic streaming space complexity of is and the nondeterministic\nstreaming space complexity of is .\nWe start with the upper bound for the deterministic streaming space complexity in case is even.\nIn the following we identify the ball with its induced subgraph of the Cayley graph .\nWe define a deterministic finite automaton by taking the edge-labelled graph \nwith the initial and unique final state . It can be viewed as a partial DFA in the sense that for every \nand every , has at most one outgoing edge labelled with (that leads to if ).\nIn order to add the missing transitions we choose an element \n(here, we set ). Such an element exists because is infinite.\nIf has no outgoing -labelled edge in then we add an\n-labelled edge from to . We call those edges spurious. The resulting DFA is .\nWe claim that for every word , is accepted by \nif and only if . This is clear if no spurious edge is traversed while reading into .\nIn this case, after reading , we end up in state . Now assume that a spurious edge is traversed while reading into \nand let be the shortest prefix of such that a spurious edge is traversed while reading the last symbol of .\nLet us write . We must\nhave and . Moreover, .\nSince , we have\n. 
Moreover, is rejected by , because leads in from\nthe initial state to state and there is no path of length at most from back to the final state .\nFor the case that is odd, we take the ball . Instead of adding spurious edges we add a failure state .\nIf has no outgoing -labelled edge in , then we add an -labelled\nedge from to . Moreover, for every we add an -labelled loop at state . As for the case even, one can\nshow that the resulting DFA accepts a word if and only if .\nThe upper bound for the nondeterministic streaming space complexity follows with the same arguments. Notice that the failure state in case\n is odd is not needed in a nondeterministic automaton.\nFor the lower bound we start with the nondeterministic streaming space complexity.\nLet and choose words such that\n for all and whenever .\nThen for every we have and \nfor all . Moreover, for all . In the language of [23 ###reference_b23###], \nis a set of uniformly -dissimilar words.\nBy [23 ###reference_b23###, Lemma 3.1] this implies that the nondeterministic automaticity of satisfies\n.\nThis shows the lower bound on the nondeterministic streaming space complexity.\nFor the lower bound on the deterministic streaming space complexity, let\n be a smallest DFA such that\n.\nWe have to show that for from (4 ###reference_###).\nLet us consider two words of length at most such that\n and . We then have and .\nOn the other hand, we have , which is a contradiction (note that ).\nHence, if for two words of length at most , then .\nLet . The previous paragraph shows that\n. If is even then \nand we are done. So, let us assume that is odd.\nIf then we are again done.\nSo, let us assume that and . Then, to every state we\ncan assign a unique group element such that\nfor every word with we have if and only if .\nThe mapping is a bijection between and .\nLet us now take a state and a generator such that\n. 
Such a state and generator must exist since is infinite.\nLet be words of length at most such that \nand . We obtain .\nBut and since\n and ,\n. This is a contradiction\nsince and both have length at most . Hence, we must have .\n\u220e\nThe growth of f.g. groups is well-studied and Theorem 6.1 ###reference_definition1### basically closes the chapter\non (non)deterministic streaming algorithms for word problems. Hence, in the rest of the paper we focus on\nrandomized streaming algorithms. Here, we can still prove a lower bound (that will turn out to be sharp in some cases\nbut not always) using the randomized one-way communication complexity of the equality problem:\nLet be a finitely generated group and let be a finite symmetric generating set for .\nThe randomized streaming space complexity of is\n.\nWe make a reduction from the equality problem in communication complexity. In this problem, Alice and Bob each have a private number (say for Alice\nand for Bob) and their goal is to check whether .\nIt is known that the randomized one-way communication complexity (where Alice can send information to Bob in one round) of the equality problem is \nwhen Alice and Bob make private random choices [41 ###reference_b41###].\nFix an arbitrary bijection\nand let\nbe an injective mapping that maps every group element\n to a word such that .\nAssume now that is a randomized streaming algorithm\nfor and assume that its space complexity is .\nThen we obtain a randomized one-way communication protocol for equality on numbers from \nwith communication cost , which implies that :\nIf Alice holds the number , then she runs (using her random choices) the PFA on input\n. The state reached at the end (which can be encoded by a bit string of length at most ) is communicated to Bob. Assume that Bob holds the number\n. 
Bob then simulates (using his random choices) the PFA on input starting from state \nand accepts if and only if a final state of is reached.\nWe have if and only if in \nif and only if . This shows that we obtain indeed a randomized one-way protocol for equality.\n\u220e\nSince every f.g. infinite group has growth at least , Theorem 6.2 ###reference_definition2### has the following consequence:\nIf is a f.g. infinite group, then the randomized streaming space complexity of is .\nLater in this paper, we will make use of the following two famous results on the growth of groups, see also [17 ###reference_b17###, 51 ###reference_b51###]:\nGromov\u2019s theorem [28 ###reference_b28###]: A f.g. group has polynomial growth if and only if is virtually nilpotent (i.e., has a nilpotent subgroup of finite index).\nWolf-Milnor theorem [55 ###reference_b55###, 71 ###reference_b71###]; see also [17 ###reference_b17###, p. 202]:\nA f.g. solvable group is either virtually nilpotent (and hence has polynomial growth) or there is a constant\n such that has growth (i.e., has exponential growth).\nIt is well known that the same dichotomy also holds for f.g. linear groups. This is a consequence of Tits alternative [67 ###reference_b67###]:\nA f.g. linear group is either virtually solvable or contains a free group of rank at least two (in which case has exponential growth).\nThe dichotomy theorem of Milnor and Wolf does not generalize to all f.g. groups. Grigorchuk [26 ###reference_b26###] constructed a group\nwhose growth is lower bounded by [7 ###reference_b7###] and upper bounded by [6 ###reference_b6###].\nThe streaming space complexity of this remarkable group will be studied in Theorem 11.6 ###reference_mdefinition6###."
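To make the link between growth and streaming space concrete, consider the group Z^2 with the four standard generators (our example): the ball of radius n in its Cayley graph consists of the lattice points of l1-norm at most n and has exactly 2n^2 + 2n + 1 elements, so a deterministic streamer that walks on this ball stores roughly log2 of that many bits, i.e. Theta(log n), matching the polynomial-growth case of the theorem. A quick sketch (function names are ours):

```python
import math

def ball_size_z2(n):
    """|B(n)| in the Cayley graph of Z^2 with generators (+-1,0), (0,+-1):
    the number of lattice points of l1-norm at most n."""
    return sum(1 for x in range(-n, n + 1)
                 for y in range(-n, n + 1) if abs(x) + abs(y) <= n)

def det_space_bits(n):
    """Bits needed to store a position in the radius-n ball, i.e. the
    space used by the ball-walking deterministic streamer."""
    return math.ceil(math.log2(ball_size_z2(n)))
```

For a group of exponential growth the same log-of-ball-size bound instead gives Theta(n) deterministic space, which is why the randomized algorithms of the later sections are interesting.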
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "7. Comparison to sofic groups",
+ "text": "In this section we will discuss a relationship between randomized streaming space complexity and sofic groups.\nThere are many equivalent definitions of sofic groups. The following definition is from [4 ###reference_b4###, 13 ###reference_b13###]:\nWith we denote the symmetric group on (the set of all permutations\non together with the operation of function composition). For \nthe normalized Hamming weight is defined by\nLet be a f.g. group and be a finite symmetric generating set for . Let \nbe the canonical morphism that evaluates words in the group .\nThen is called sofic if for every there exists a and a monoid morphism\n (with ) such that\nfor every word the following holds:\nif then , and\nif then .\nIn case is sofic, we define the sofic dimension growth of (with respect to ) as\nthe function such that is the minimal\nvalue for which the above conditions hold. For different finite generating sets of \nthe functions and are different, but their asymptotic behavior\nis the same (analogously to the growth functions and ); see [13 ###reference_b13###, Proposition 3.3.2] for a precise statement.\nIt is a famous open problem whether every group is sofic. (One can define the concept\nof sofic groups also for non-finitely generated groups, but here we only talk about finitely generated groups.)\nThe connection to randomized streaming complexity can be seen as follows: Assume that\n is sofic and consider its sofic dimension growth .\nFor every let and let\n be the monoid morphism satisfying the above conditions for soficity.\nThen we obtain a semi-randomized streaming algorithm \nthat is -correct for as follows: define\n with\n,\nfor all and for ,\nfor all and , and\n.\nThe space complexity of this algorithm is .\nThe above semi-randomized streaming algorithm has\nsome particular properties:\nfor every , the transition function (for ) is\na permutation on , and\nthe initial state distribution is a uniform distribution on\na subset of .\nThe second property 
is not a real restriction. With an additional constant factor in the space complexity\none can easily ensure that is the uniform distribution on\na subset of .\nThe first property is a severe restriction that makes the existence of non-sofic\ngroups possible."
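The two soficity conditions are easy to verify for a finite group, which is a standard example of a sofic group via its regular representation: the identity element maps to a permutation of normalized Hamming weight 0, while every non-identity element maps to a fixed-point-free permutation of weight 1. A toy check for Z/5Z (our example; a permutation is encoded as a tuple p with p[i] the image of i):

```python
def hamming_weight(p):
    """Normalized Hamming weight of a permutation: the fraction of
    points that p moves."""
    return sum(1 for i, pi in enumerate(p) if pi != i) / len(p)

def regular_rep(k, N=5):
    """Image of k under the regular representation of Z/NZ: the
    permutation i -> i + k (mod N) of {0, ..., N-1}."""
    return tuple((i + k) % N for i in range(N))
```

Here every non-trivial translation moves all N points, so the weight is 1, comfortably above any threshold of the form 1 - epsilon required by the definition.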
+ },
+ {
+ "section_id": "8",
+ "parent_section_id": null,
+ "section_name": "8. Distinguishers for groups",
+ "text": "Let be a f.g. group with the finite generating set . Moreover, let\n be monotonically decreasing functions.\nA semi-randomized streaming algorithm with (a semiPFA)\nis called an -distinguisher for (with respect to ), if the following properties hold\nfor all large enough and all words :\nIf then .\nIn other words: for a randomly chosen initial state,\nthe semiPFA arrives with probability at least in the same state after reading and .\nIf then .\nIn other words: for a randomly chosen initial state, the semiPFA arrives with probability at least\n in different states after reading and .\nNote that the set of final states of is not important and we will just write\n in the following if we talk about an\n-distinguisher .\nLet be an -distinguisher for with respect to .\nThen has an -correct semi-randomized streaming algorithm with space complexity .\nLet with .\nUsing the above definition of an -distinguisher with the empty string we get for every word\n:\nIf then .\nIf then .\nThis allows us to construct an -correct\nrandomized streaming algorithm for . Thereby, the space complexity of the algorithm only doubles:\nWe define where\nfor all and if ,\nfor and , and\n.\nIt is easy to check that this semi-randomized streaming algorithm is indeed -correct for .\n\u220e\nDue to Lemma 8.1 ###reference_definition1###, our goal in the rest of the paper will be the construction of space-efficient\n-distinguishers for groups.\nWe will need -distinguishers in order\nto get transfer results for graph products and wreath products. 
For this, we need\nsome further observations on -distinguishers that we discuss in the rest of the section.\nFor equivalence relations and on a set and a subset we say that:\nrefines on if for all we have: if then ;\nequals on if for all we have: if and only if .\nFor a semiPFA and a state we define the equivalence\nrelation on as follows: if and only if .\nWhenever is clear from the context, we just write instead of .\nLet be an -distinguisher for the finitely generated group with respect\nto the finite generating set . Let .\nConsider a set .\nThen, the following statements hold, where refers to :\n,\n,\n.\nAll three statements follow from the union bound and the fact that there are unordered pairs\nof different elements from . For the first statement note that\n for all .\nFor the second statement, note that , and similarly for the third statement.\n\u220e\nRecall that for a word we write for the set of all prefixes of .\nLet be a finitely generated group with the finite generating set and let be a semiPFA with .\nConsider words such that refines on and\nlet with . Then refines on .\nAssume that are such that . We have to show that .\nIf we must have for a prefix of .\nSince , and refines on , we have .\nThis implies that . In addition we have and .\nIn this way we obtain from a word such that and\n (we might have ). In the same way, we can obtain from \na word such that and\n.\nSince we have . Since\n refines on and we get\n.\n\u220e\nLet , , and be as in Lemma 8.3 ###reference_definition3###.\nConsider words such that refines on and\nlet with . Then refines on .\nAssume that are such that . We have to show that .\nSince and , we have , i.e., .\nWe can then define the words in the same way as in the proof of\nLemma 8.3 ###reference_definition3###. We obtain .\nSince refines on we get .\n\u220e"
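The construction behind Lemma 8.1 (run the distinguisher on the input and accept iff it lands in the state it reaches on the empty word, which for an empty reference word is just the initial state) can be illustrated for G = Z with generators written 'a' and 'A'. The counter-mod-m semiPFA below is our toy distinguisher; for inputs of length at most n any modulus m > n already gives zero error on Z, so the random choices here merely mimic the shape of the general construction:

```python
import random

def run_counter(word, m, start):
    """SemiPFA for Z: the state is the exponent sum modulo m,
    started in the randomly chosen state `start`."""
    state = start
    for x in word:
        state = (state + (1 if x == 'a' else -1)) % m
    return state

def is_identity(word, n, rng):
    """Lemma 8.1 sketch: pick a random modulus and initial state, then
    accept iff the state reached on `word` equals the state reached on
    the empty word (i.e. the initial state itself)."""
    m = rng.randrange(n + 1, 2 * n + 2)  # any m > n is exact for G = Z
    start = rng.randrange(m)
    return run_counter(word, m, start) == start
```

Storing both the initial state and the current state is exactly the factor-two space overhead mentioned in the proof of the lemma.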
+ },
+ {
+ "section_id": "9",
+ "parent_section_id": null,
+ "section_name": "9. Randomized streaming algorithms for linear groups",
+ "text": "Recall that a group is linear if it is isomorphic to a group of invertible matrices over a field .\nThe group of all invertible -matrices with entries from is denoted with .\nFor every f.g. linear group, the word problem can be solved in logarithmic space. This was shown by\nLipton and Zalcstein [43 ###reference_b43###] (if the underlying field has characteristic zero) and Simon [66 ###reference_b66###]\n(if the underlying field has prime characteristic). In this section, we show that with some care, one can turn the algorithms\nfrom [43 ###reference_b43###, 66 ###reference_b66###] into -distinguishers with for a constant and space complexity .\nWe will make use of the following well-known result of DeMillo, Lipton, Schwartz and Zippel [72 ###reference_b72###, 64 ###reference_b64###, 16 ###reference_b16###]. The degree\nof a multivariate polynomial with coefficients from the field is the maximal sum \nwhere is a monomial of .\nLet be a non-zero multivariate polynomial of degree , and let be finite.\nIf is randomly chosen according to the uniform distribution, then\n.\nWe now come to the main result of this section.\nFor every f.g. linear group and every there exists a -distinguisher\nwith space complexity .\nBy [43 ###reference_b43###], is a finitely generated subgroup of , where the field is of the form\n for a prime field . Thus, is either or a finite field for a prime \nand is the field of all fractions for polynomials with .\nLet us first assume that . Let be a generating set for .\nThen every generator is a matrix, whose entries are quotients of polynomials from\n. Therefore there exists a fixed non-zero polynomial \nsuch that every matrix for has entries from .\nLet be the dimension of the matrices.\nLet be the maximal degree of and all polynomials that appear in matrices with\n. 
The parameters , , and are constants in the further considerations.\nFix an input length .\nClearly, for all matrices with we have\n if and only if .\nConsider two input words with and assume that\nDefine the matrix\nNote that all entries of the matrix are polynomials\nof degree at most and at least one of them is not the zero polynomial.\nLet , where is the value from the theorem. For a tuple and\na matrix \nlet be the integer matrix obtained from by replacing every variable by .\nFor a randomly chosen tuple , Theorem 9.1 ###reference_definition1### implies that\nLet us now consider a tuple such that .\nEvery entry in a matrix () has an absolute value of order ( is a constant)\nand also .\nTherefore, all entries in the matrix are of absolute value\n, and similarly for\n. Hence,\n is a non-zero matrix with all entries of absolute value at most .\nThe number of different prime factors\nof a number is bounded by\nsee [62 ###reference_b62###, Theorem 16].\nBy a weak form of the prime number theorem, the number of primes of size at most is .\nHence, by randomly choosing a prime of size at most we can obtain the bound\nfor large enough.\nHence, we obtain\nfor large enough.\nThe streaming algorithm for inputs of length at most is now clear: Initially, the algorithm guesses\n (for ) and a prime . All these numbers\nneed bits in total. If then the algorithm ignores the input word.\nOtherwise, the algorithm initializes a matrix ,\nwhere is the -dimensional identity matrix. Then, for every\nnew generator matrix the algorithm updates by\nAll computations are carried out in the field . If () then\nafter reading the input words and , the algorithm arrives with probability one in\nthe same state. 
On the other hand, if then the reached states differ with probability\nat least by the above error analysis.\nLet us now briefly discuss the case where the underlying prime field is for a prime .\nThen we have to work in a finite extension for some such that\n, which can be achieved by taking of size .\nBy fixing a subset of size and choosing\na tuple randomly, we obtain the bound (5 ###reference_###).\nSince an -dimensional matrix over the field can be stored in space\n ( and are constants and ), this yields the desired algorithm in the same way as for the case .\n\u220e\nA group is nilpotent if its lower central series terminates after finitely many steps in the trivial group .\nThe lower central series of a group is the series \nwhere .\nEvery nilpotent group is linear. For nilpotent groups we can improve the algorithm from the proof of Theorem 9.2 ###reference_definition2###, at least if we sacrifice\nthe inverse polynomial error probability:\nFor every f.g. nilpotent group and every constant there exists a -distinguisher\nwith space complexity .\nWe can assume that is infinite.\nWith we denote the set of all upper triangular -matrices over with all\ndiagonal entries equal to (so-called unitriangular matrices). These matrices form a f.g. nilpotent group.\nLet be a f.g. nilpotent group.\nThen has a f.g. torsion-free nilpotent subgroup\n such that the index is finite [36 ###reference_b36###, Theorem 17.2.2].\nMoreover, there exists such that\nthe finitely generated torsion-free nilpotent group \ncan be embedded into the group [36 ###reference_b36###, Theorem 17.2.5].\nBy Theorem 10.2 ###reference_mdefinition2### below it suffices to show that every has an\n-distinguisher\nwith and space complexity .\nFix a finite generating set for and an input length .\nConsider a product with and\n. 
From [45 ###reference_b45###, Proposition 4.18] it follows that the absolute\nvalue of every entry of the matrix has size at most .\nThe randomized streaming algorithm for will guess a prime number of size\n and compute the product matrix modulo . For this,\n bits are sufficient.\nConsider two input words and with .\nIf then our randomized streaming algorithm will reach with probability one the same\nstate after reading the input words and , respectively. On the other hand, if ,\nthen consider a non-zero matrix entry of the matrix .\nWe have . The number of different prime factors of is therefore bounded by\n. Hence, by randomly choosing a prime number of size\nat most we can obtain a probability of at most \nfor . Hence, with probability we reach different states after reading and , respectively.\n\u220e\nNote that if is infinite, the space bound from Theorem 9.3 ###reference_definition3### is sharp up to constant factors even if we allow a constant error probability;\nsee Remark 6.3 ###reference_definition3###.\nBy Theorem 4.4 ###reference_definition4### the inverse polylogarithmic error in Theorem 9.3 ###reference_definition3### cannot be improved if is infinite:\nConsider an -distinguisher\nwith space complexity for the infinite group . Lemma 8.1 ###reference_definition1### yields an\n-correct semi-randomized streaming algorithm for the word problem of with space complexity\n. By Theorem 6.1 ###reference_definition1###, the deterministic streaming space complexity of the word problem for is\nlower bounded by . Hence, if is large enough, we must have by Theorem 4.4 ###reference_definition4###.\nWe get for some constant , i.e., ."
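The algorithms of this section can be sketched for a concrete linear group: by Sanov's theorem, the matrices [[1,2],[0,1]] and [[1,0],[2,1]] generate a free group of rank 2 inside SL(2, Z), the entries of a length-n product of generators are 2^{O(n)}, and multiplying modulo a random prime of polynomial size preserves a non-identity product with good probability, since a non-zero entry has few prime factors. A minimal sketch (the concrete prime bound max(100, n**2) and the trial-division primality test are our simplifications of the parameters in the proofs):

```python
import random

SANOV = {  # Sanov's embedding of the free group on {a, b} into SL(2, Z)
    'a': ((1, 2), (0, 1)), 'A': ((1, -2), (0, 1)),
    'b': ((1, 0), (2, 1)), 'B': ((1, 0), (-2, 1)),
}
I2 = ((1, 0), (0, 1))

def matmul_mod(X, Y, p):
    """Product of two 2x2 matrices with all entries reduced modulo p."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

def wp_stream(word, n, rng):
    """Streamer for the word problem of the free group inside SL(2, Z):
    guess a random prime p of polynomial size, keep only the running
    product of generator matrices mod p, accept iff it is the identity."""
    primes = [m for m in range(2, max(100, n * n)) if is_prime(m)]
    p = rng.choice(primes)
    M = I2
    for x in word:
        M = matmul_mod(M, SANOV[x], p)
    return M == I2
```

One-sided error is visible here: if the word represents the identity, the product is the identity matrix over Z and hence modulo every p, so the algorithm accepts with probability one; errors can only occur when the guessed prime divides all entries of the non-zero difference matrix.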
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {},
81
+ "validation": true,
82
+ "references": [
83
+ {
84
+ "1": {
85
+ "title": "The space complexity of approximating the frequency moments.",
86
+ "author": "Noga Alon, Yossi Matias, and Mario Szegedy.",
87
+ "venue": "Journal of Computer and System Sciences, 58(1):137\u2013147, 1999.",
88
+ "url": null
89
+ }
90
+ },
91
+ {
92
+ "2": {
93
+ "title": "Zur algebraischen Charakteristik der durch kontext-freie Sprachen\ndefinierten Gruppen.",
94
+ "author": "Anatolij W. Anissimov and Franz D. Seifert.",
95
+ "venue": "Elektron. Informationsverarbeit. Kybernetik,\n11(10\u201312):695\u2013702, 1975.",
96
+ "url": null
97
+ }
98
+ },
99
+ {
100
+ "3": {
101
+ "title": "New results on noncommutative and commutative polynomial identity\ntesting.",
102
+ "author": "Vikraman Arvind, Partha Mukhopadhyay, and Srikanth Srinivasan.",
103
+ "venue": "Computational Complexity, 19(4):521\u2013558, 2010.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "4": {
109
+ "title": "Quantifying metric approximations of discrete groups.",
110
+ "author": "Goulnara Arzhantseva and Pierre-Alain Cherix.",
111
+ "venue": "arXiv:2008.12954, 2020.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "5": {
117
+ "title": "Streaming algorithms for language recognition problems.",
118
+ "author": "Ajesh Babu, Nutan Limaye, Jaikumar Radhakrishnan, and Girish Varma.",
119
+ "venue": "Theoretical Computer Science, 494:13\u201323, 2013.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "6": {
125
+ "title": "The growth of Grigorchuk\u2019s torsion group.",
126
+ "author": "Laurent Bartholdi.",
127
+ "venue": "International Mathematics Research Notices, 20:1049\u20131054,\n1998.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "7": {
133
+ "title": "Lower bounds on the growth of a group acting on the binary rooted\ntree.",
134
+ "author": "Laurent Bartholdi.",
135
+ "venue": "International Journal of Algebra and Computation,\n11(01):73\u201388, 2001.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "8": {
141
+ "title": "Groups with ALOGTIME-hard word problems and PSPACE-complete\ncompressed word problems.",
142
+ "author": "Laurent Bartholdi, Michael Figelius, Markus Lohrey, and Armin Wei\u00df.",
143
+ "venue": "ACM Transactions on Computation Theory, 14(3\u20134), 2023.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "9": {
149
+ "title": "Property testing of regular languages with applications to streaming\nproperty testing of visibly pushdown languages.",
150
+ "author": "Gabriel Bathie and Tatiana Starikovskaya.",
151
+ "venue": "In Proceedings of the 48th International Colloquium on Automata,\nLanguages, and Programming, ICALP 2021, volume 198 of LIPIcs, pages\n119:1\u2013119:17. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2021.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "10": {
157
+ "title": "Transductions and context\u2013free languages.",
158
+ "author": "J. Berstel.",
159
+ "venue": "Teubner Studienb\u00fccher, Stuttgart, 1979.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "11": {
165
+ "title": "Groups acting faithfully on trees and properly on products of trees,\n2019.",
166
+ "author": "J. Button.",
167
+ "venue": "arXiv:1910.04614.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "12": {
173
+ "title": "Introductory notes on Richard Thompson\u2019s groups.",
174
+ "author": "John W. Cannon, William J. Floyd, and Walter R. Parry.",
175
+ "venue": "L\u2019Enseignement Math\u00e9matique, 42(3):215\u2013256, 1996.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "13": {
181
+ "title": "Algorithms and Quantifications in Amenable and Sofic Groups.",
182
+ "author": "Matteo Cavaleri.",
183
+ "venue": "PhD thesis, Universita\u2019 Degli Studi Di Roma La Sapienza, 2016.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "14": {
189
+ "title": "Cellular Automata and Groups, 2nd edition.",
190
+ "author": "Tullio Ceccherini-Silberstein and Michel Coornaert.",
191
+ "venue": "Springer, 2023.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "15": {
197
+ "title": "Randomness-optimal unique element isolation with applications to\nperfect matching and related problems.",
198
+ "author": "Suresh Chari, Pankaj Rohatgi, and Aravind Srinivasan.",
199
+ "venue": "SIAM Journal on Computing, 24(5):1036\u20131050, 1995.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "16": {
205
+ "title": "A probabilistic remark on algebraic program testing.",
206
+ "author": "Richard A. DeMillo and Richard J. Lipton.",
207
+ "venue": "Information Processing Letters, 7(4):193\u2013195, 1978.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "17": {
213
+ "title": "Topics in Geometric Group Theory.",
214
+ "author": "Pierre de la Harpe.",
215
+ "venue": "University of Chicago Press, 2000.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "18": {
221
+ "title": "\u00dcber unendliche diskontinuierliche Gruppen.",
222
+ "author": "Max Dehn.",
223
+ "venue": "Mathematische Annalen, 71:116\u2013144, 1911.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "19": {
229
+ "title": "Logspace computations in graph products.",
230
+ "author": "Volker Diekert and Jonathan Kausch.",
231
+ "venue": "Journal of Symbolic Computation, 75:94\u2013109, 2016.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "20": {
237
+ "title": "Simplified Chernoff bounds with powers-of-two probabilities.",
238
+ "author": "Michael Dillencourt and Michael T. Goodrich.",
239
+ "venue": "Information Processing Letters, 182, 2023.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "21": {
245
+ "title": "Not residually finite groups of intermediate growth, commensurability\nand non-geometricity.",
246
+ "author": "Anna Erschler.",
247
+ "venue": "Journal of Algebra, 272(1):154\u2013172, 2004.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "22": {
253
+ "title": "Streaming property testing of visibly pushdown languages.",
254
+ "author": "Nathana\u00ebl Fran\u00e7ois, Fr\u00e9d\u00e9ric Magniez, Michel\nde Rougemont, and Olivier Serre.",
255
+ "venue": "In Proceedings of the 24th Annual European Symposium on\nAlgorithms, ESA 2016, volume 57 of LIPIcs, pages 43:1\u201343:17.\nSchloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2016.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "23": {
261
+ "title": "Automaticity III: polynomial automaticity and context-free\nlanguages.",
262
+ "author": "Ian Glaister and Jeffrey O. Shallit.",
263
+ "venue": "Computational Complexity, 7(4):371\u2013387, 1998.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "24": {
269
+ "title": "Interleaved group products.",
270
+ "author": "William Timothy Gowers and Emanuele Viola.",
271
+ "venue": "SIAM Journal on Computing, 48(2):554\u2013580, 2019.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "25": {
277
+ "title": "Graph Products of Groups.",
278
+ "author": "Elisabeth R. Green.",
279
+ "venue": "PhD thesis, The University of Leeds, 1990.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
+ "26": {
+ "title": "Burnside\u2019s problem on periodic groups.",
+ "author": "Rostislav I. Grigorchuk.",
+ "venue": "Functional Analysis and Its Applications, 14:41\u201343, 1980.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "On the gap conjecture concerning group growth.",
+ "author": "Rostislav I. Grigorchuk.",
+ "venue": "Bulletin of Mathematical Sciences, 4(1):113\u2013128, 2014.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "Groups of polynomial growth and expanding maps.",
+ "author": "Mikhail Gromov.",
+ "venue": "Publications Math\u00e9matiques de L\u2019Institut des Hautes\n\u00c9tudes Scientifiques, 53:53\u201378, 1981.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Residual properties of infinite soluble groups.",
+ "author": "K. W. Gruenberg.",
+ "venue": "Proceedings of the London Mathematical Society, s3-7(1):29\u201362,\n1957.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "On subgroups of the R. Thompson group and other diagram\ngroups.",
+ "author": "Victor S. Guba and Mark V. Sapir.",
+ "venue": "Matematicheskii Sbornik, 190(8):3\u201360, 1999.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "Groups, Languages and Automata, volume 88 of London\nMathematical Society Student Texts.",
+ "author": "Derek F. Holt, Sarah Rees, and Claas E. R\u00f6ver.",
+ "venue": "Cambridge University Press, 2017.",
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "On linear and residual properties of graph products.",
+ "author": "Tim Hsu and Daniel T. Wise.",
+ "venue": "Michigan Mathematical Journal, 46(2):251\u2013259, 1999.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "On representations of Artin groups and the Tits conjecture.",
+ "author": "Stephen P. Humphries.",
+ "venue": "Journal of Algebra, 169(3):847\u2013862, 1994.",
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "Running time to recognize nonregular languages by 2-way probabilistic\nautomata.",
+ "author": "Janis Kaneps and Rusins Freivalds.",
+ "venue": "In Proceedings of the 18th International Colloquium on Automata,\nLanguages and Programming, ICALP 1991, volume 510 of Lecture Notes in\nComputer Science, pages 174\u2013185. Springer, 1991.",
+ "url": null
+ }
+ },
+ {
+ "35": {
+ "title": "Stallings foldings and subgroups of free groups.",
+ "author": "Ilya Kapovich and Alexei Myasnikov.",
+ "venue": "Journal of Algebra, 248(2):608\u2013668, 2002.",
+ "url": null
+ }
+ },
+ {
+ "36": {
+ "title": "Fundamentals of the Theory of Groups, volume 62 of Graduate Texts in Mathematics.",
+ "author": "Mikhail I. Kargapolov and Yurii I. Merzljakov.",
+ "venue": "Springer-Verlag, New York, 1979.",
+ "url": null
+ }
+ },
+ {
+ "37": {
+ "title": "Some bounds on the storage requirements of sequential machines and\nTuring machines.",
+ "author": "Richard M. Karp.",
+ "venue": "Journal of the ACM, 14(3):478\u2013489, 1967.",
+ "url": null
+ }
+ },
+ {
+ "38": {
+ "title": "The parallel complexity of certain algorithmic problems in group\ntheory.",
+ "author": "Jonathan Kausch.",
+ "venue": "PhD thesis, University of Stuttgart, 2017.",
+ "url": null
+ }
+ },
+ {
+ "39": {
+ "title": "Evaluation of circuits over nilpotent and polycyclic groups.",
+ "author": "Daniel K\u00f6nig and Markus Lohrey.",
+ "venue": "Algorithmica, 80(5):1459\u20131492, 2018.",
+ "url": null
+ }
+ },
+ {
+ "40": {
+ "title": "Parallel identity testing for skew circuits with big powers and\napplications.",
+ "author": "Daniel K\u00f6nig and Markus Lohrey.",
+ "venue": "International Journal of Algebra and Computation,\n28(6):979\u20131004, 2018.",
+ "url": null
+ }
+ },
+ {
+ "41": {
+ "title": "Communication Complexity.",
+ "author": "Eyal Kushilevitz and Noam Nisan.",
+ "venue": "Cambridge University Press, 1997.",
+ "url": null
+ }
+ },
+ {
+ "42": {
+ "title": "The co-word problem for the Higman-Thompson group is context-free.",
+ "author": "J\u00f6rg Lehnert and Pascal Schweitzer.",
+ "venue": "Bulletin of the London Mathematical Society, 39(2):235\u2013241,\n2007.",
+ "url": null
+ }
+ },
+ {
+ "43": {
+ "title": "Word problems solvable in logspace.",
+ "author": "Richard J. Lipton and Yechezkel Zalcstein.",
+ "venue": "Journal of the Association for Computing Machinery,\n24(3):522\u2013526, 1977.",
+ "url": null
+ }
+ },
+ {
+ "44": {
+ "title": "Decidability and complexity in automatic monoids.",
+ "author": "Markus Lohrey.",
+ "venue": "International Journal of Foundations of Computer Science,\n16(4):707\u2013722, 2005.",
+ "url": null
+ }
+ },
+ {
+ "45": {
+ "title": "The Compressed Word Problem for Groups.",
+ "author": "Markus Lohrey.",
+ "venue": "SpringerBriefs in Mathematics. Springer, 2014.",
+ "url": null
+ }
+ },
+ {
+ "46": {
+ "title": "Streaming word problems.",
+ "author": "Markus Lohrey and Lukas L\u00fcck.",
+ "venue": "In Proceedings of the 47th International Symposium on\nMathematical Foundations of Computer Science, MFCS 2022, volume 241 of\nLIPIcs, pages 72:1\u201372:15. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr\nInformatik, 2022.",
+ "url": null
+ }
+ },
+ {
+ "47": {
+ "title": "Streaming in graph products.",
+ "author": "Markus Lohrey, Lukas L\u00fcck, and Julio Xochitemol.",
+ "venue": "To appear in Proceedings of the 49th International Symposium on\nMathematical Foundations of Computer Science, MFCS 2024.",
+ "url": null
+ }
+ },
+ {
+ "48": {
+ "title": "Recognizing well-parenthesized expressions in the streaming model.",
+ "author": "Fr\u00e9d\u00e9ric Magniez, Claire Mathieu, and Ashwin Nayak.",
+ "venue": "SIAM Journal on Computing, 43(6):1880\u20131905, 2014.",
+ "url": null
+ }
+ },
+ {
+ "49": {
+ "title": "On a theorem of Marshall Hall.",
+ "author": "Wilhelm Magnus.",
+ "venue": "Annals of Mathematics. Second Series, 40:764\u2013768, 1939.",
+ "url": null
+ }
+ },
+ {
+ "50": {
+ "title": "On isomorphic matrix representations of infinite groups.",
+ "author": "A. I. Mal\u2019cev.",
+ "venue": "Rec. Math. [Mat. Sbornik] N.S., 8(50).",
+ "url": null
+ }
+ },
+ {
+ "51": {
+ "title": "How Groups Grow.",
+ "author": "Avinoam Mann.",
+ "venue": "London Mathematical Society Lecture Note Series. Cambridge University\nPress, 2011.",
+ "url": null
+ }
+ },
+ {
+ "52": {
+ "title": "The conjugacy problem in free solvable groups and wreath products of\nabelian groups is in TC^0.",
+ "author": "Alexei Miasnikov, Svetla Vassileva, and Armin Wei\u00df.",
+ "venue": "Theory of Computing Systems, 63(4):809\u2013832, 2019.",
+ "url": null
+ }
+ },
+ {
+ "53": {
+ "title": "The occurrence problem for direct products of groups.",
+ "author": "K. A. Miha\u012dlova.",
+ "venue": "Math. USSR Sbornik, 70:241\u2013251, 1966.",
+ "url": null
+ }
+ },
+ {
+ "54": {
+ "title": "Decision problems for groups \u2013 survey and reflections.",
+ "author": "Charles F Miller III.",
+ "venue": "In G. Baumslag and Charles F Miller III, editors, Algorithms and\nclassification in combinatorial group theory, pages 1\u201359. Springer, 1992.",
+ "url": null
+ }
+ },
+ {
+ "55": {
+ "title": "Growth of finitely generated solvable groups.",
+ "author": "John Milnor.",
+ "venue": "Journal of Differential Geometry, 2(4):447\u2013449, 1968.",
+ "url": null
+ }
+ },
+ {
+ "56": {
+ "title": "Probability and Computing, 2nd edition.",
+ "author": "Michael Mitzenmacher and Eli Upfal.",
+ "venue": "Cambridge University Press, 2017.",
+ "url": null
+ }
+ },
+ {
+ "57": {
+ "title": "Groups, the theory of ends, and context-free languages.",
+ "author": "David E. Muller and Paul E. Schupp.",
+ "venue": "Journal of Computer and System Sciences, 26:295\u2013310, 1983.",
+ "url": null
+ }
+ },
+ {
+ "58": {
+ "title": "The word and geodesic problems in free solvable groups.",
+ "author": "Alexei Myasnikov, Vitaly Roman\u2019kov, Alexander Ushakov, and Anatoly Vershik.",
+ "venue": "Transactions of the American Mathematical Society,\n362(9):4655\u20134682, 2010.",
+ "url": null
+ }
+ },
+ {
+ "59": {
+ "title": "Some aspects of probabilistic automata.",
+ "author": "Azaria Paz.",
+ "venue": "Information and Control, 9(1):26\u201360, 1966.",
+ "url": null
+ }
+ },
+ {
+ "60": {
+ "title": "Introduction to Probabilistic Automata.",
+ "author": "Azaria Paz.",
+ "venue": "Academic Press, 1971.",
+ "url": null
+ }
+ },
+ {
+ "61": {
+ "title": "Probabilistic automata.",
+ "author": "Michael O. Rabin.",
+ "venue": "Information and Control, 6(3):230\u2013245, 1963.",
+ "url": null
+ }
+ },
+ {
+ "62": {
+ "title": "Estimation de la fonction de Tchebychef \u03b8 sur le\nk-i\u00e8me nombre premier et grandes valeurs de la fonction \u03c9(n),\nnombre de diviseurs premiers de n.",
+ "author": "Guy Robin.",
+ "venue": "Acta Arithmetica, 42(4):367\u2013389, 1983.",
+ "url": null
+ }
+ },
+ {
+ "63": {
+ "title": "A Course in the Theory of Groups, 2nd edition.",
+ "author": "Derek J.S. Robinson.",
+ "venue": "Springer, 1996.",
+ "url": null
+ }
+ },
+ {
+ "64": {
+ "title": "Fast probabilistic algorithms for verification of polynomial\nidentities.",
+ "author": "Jacob T. Schwartz.",
+ "venue": "Journal of the ACM, 27(4):701\u2013717, 1980.",
+ "url": null
+ }
+ },
+ {
+ "65": {
+ "title": "Automaticity I: properties of a measure of descriptional\ncomplexity.",
+ "author": "Jeffrey Shallit and Yuri Breitbart.",
+ "venue": "Journal of Computer and System Sciences, 53(1):10\u201325, 1996.",
+ "url": null
+ }
+ },
+ {
+ "66": {
+ "title": "Word problems for groups and contextfree recognition.",
+ "author": "Hans-Ulrich Simon.",
+ "venue": "In Proceedings of Fundamentals of Computation Theory, FCT 1979,\npages 417\u2013422. Akademie-Verlag, 1979.",
+ "url": null
+ }
+ },
+ {
+ "67": {
+ "title": "Free subgroups in linear groups.",
+ "author": "Jacques Tits.",
+ "venue": "Journal of Algebra, 20:250\u2013270, 1972.",
+ "url": null
+ }
+ },
+ {
+ "68": {
+ "title": "Algorithmic theory of free solvable groups: Randomized computations.",
+ "author": "Alexander Ushakov.",
+ "venue": "Journal of Algebra, 407:178\u2013200, 2014.",
+ "url": null
+ }
+ },
+ {
+ "69": {
+ "title": "On finitely generated soluble linear groups.",
+ "author": "Bertram A. F. Wehrfritz.",
+ "venue": "Mathematische Zeitschrift, 170:155\u2013167, 1980.",
+ "url": null
+ }
+ },
+ {
+ "70": {
+ "title": "A logspace solution to the word and conjugacy problem of generalized\nBaumslag-Solitar groups.",
+ "author": "Armin Wei\u00df.",
+ "venue": "In Algebra and Computer Science, volume 677 of Contemporary Mathematics. American Mathematical Society, 2016.",
+ "url": null
+ }
+ },
+ {
+ "71": {
+ "title": "Growth of finitely generated solvable groups and curvature of\nRiemannian manifolds.",
+ "author": "Joseph A. Wolf.",
+ "venue": "Journal of Differential Geometry, 2(4):421\u2013446, 1968.",
+ "url": null
+ }
+ },
+ {
+ "72": {
+ "title": "Probabilistic algorithms for sparse polynomials.",
+ "author": "Richard Zippel.",
+ "venue": "In Proceedings of the International Symposium on Symbolic and\nAlgebraic Manipulation, EUROSAM 1979, volume 72 of Lecture Notes in\nComputer Science, pages 216\u2013226. Springer, 1979.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2202.04060v4"
+ }
20240722/2203.00526v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2203.02180v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2203.10560v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2206.04359v2.json ADDED
@@ -0,0 +1,149 @@
+ {
+ "title": "Learning Non-Vacuous Generalization Bounds from Optimization",
+ "abstract": "One of the fundamental challenges in the deep learning community is to theoretically understand how well a deep neural network generalizes to unseen data.\nHowever, current approaches often yield generalization bounds that are either too loose to be informative of the true generalization error or only valid to the compressed nets.\nIn this study, we present a simple yet non-vacuous generalization bound from the optimization perspective.\nWe achieve this goal by leveraging that the hypothesis set accessed by stochastic gradient algorithms is essentially fractal-like and thus can derive a tighter bound over the algorithm-dependent Rademacher complexity.\nThe main argument rests on modeling the discrete-time recursion process via a continuous-time stochastic differential equation driven by fractional Brownian motion.\nNumerical studies demonstrate that our approach is able to yield plausible generalization guarantees for modern neural networks such as ResNet and Vision Transformer, even when they are trained on a large-scale dataset (e.g. ImageNet-1K).",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Deep neural networks (DNNs) have shown remarkable performance in a wide range of tasks over the past decade (Bengio et al., 2021 ###reference_b11###).\nA mystery is that they generalize surprisingly well on unseen data, though having far more trainable parameters than the number of training examples (Belkin et al., 2019 ###reference_b10###; Li et al., 2023 ###reference_b42###).\nThis phenomenon of benign overfitting inevitably casts shadows on\nthe classical theory of statistical learning, which posits that models with high complexity tend to overfit the training data, whereas models with low complexity tend to underfit the training data.\nTo reconcile the conflicts, some researchers argue that this is due to the regularization incurred during training, either implicitly imposed via use of stochastic gradient descent (SGD) (Advani et al., 2020 ###reference_b2###; Barrett & Dherin, 2021 ###reference_b7###; Smith et al., 2021 ###reference_b63###; Sclocchi & Wyart, 2024 ###reference_b59###) or explicitly via batch normalization (Ioffe & Szegedy, 2015 ###reference_b32###), weight decay (Krogh & Hertz, 1992 ###reference_b39###), dropout (Srivastava et al., 2014 ###reference_b64###), etc.\nHowever, Zhang et al. 
(2017 ###reference_b72###) questioned this widely received wisdom because they found that DNNs are still able to achieve\nzero training error with randomly labeled examples, which apparently cannot generalize.\nPrior to our work, there has been extensive study trying to explain the generalization behavior of DNNs and they roughly can be categorized into the following classes.\nThe first class is the so-called norm-based bounds (Neyshabur et al., 2015 ###reference_b54###; Bartlett et al., 2017 ###reference_b8###; Neyshabur et al., 2018 ###reference_b53###; Golowich et al., 2018 ###reference_b26###) that are composed of the operator norm of layerwise weight matrices.\nHowever, recent studies suggest that these norm-based bounds might be problematic\nas they abnormally increase with the number of training examples (Nagarajan & Kolter, 2019 ###reference_b52###).\nMoreover, norm-based bounds are numerically vacuous as they are even several orders of magnitude larger than the number of network parameters.\nThe second class connects the generalization to the flatness of the solution (Hochreiter & Schmidhuber, 1997 ###reference_b30###; Keskar et al., 2017 ###reference_b36###; Dziugaite & Roy, 2017 ###reference_b21###; P\u00e9rez-Ortiz et al., 2021 ###reference_b57###; Nguyen et al., 2024 ###reference_b56###), showing that flat minima usually generalize well.\nHowever, the flat minima alone do not suffice in explaining the generalization behavior of DNNs. For example, Dinh et al. (2017 ###reference_b16###) argued that sharp minima can generalize as well by reparametrizing the function space and Wen et al. 
(2023 ###reference_b66###) also successfully identified a class of non-generalizing flattest models for two-layer ReLU networks.\nAnother class involves bounding the generalization error via a compression framework (Arora et al., 2018 ###reference_b6###).\nEmpirical results suggest that we can achieve almost non-vacuous bounds on realistic neural networks (Zhou et al., 2019 ###reference_b74###; Lotfi et al., 2022 ###reference_b43###).\nNevertheless, this framework only proves the generalization of the compressed net, not of the true net found by the learning algorithm.\nLastly, stability-based (Hardt et al., 2016 ###reference_b27###) and information-theoretic (Xu & Raginsky, 2017 ###reference_b70###) bounds have also received a lot of attention, but both of them are limited in terms of practical value.\nTherefore, it remains a great challenge to search for generalization bounds that not only qualitatively but also quantitatively predict how well the model performs on the new-coming data.\n###figure_1### Indeed, one critical issue that prevents the generalization bounds from practical usage is that the Rademacher complexity (Bartlett & Mendelson, 2002 ###reference_b9###) often is evaluated on a pre-specified hypothesis set (Neyshabur et al., 2015 ###reference_b54###; Bartlett et al., 2017 ###reference_b8###; Arora et al., 2019 ###reference_b5###).\nBut, in practice, we do not want to have a bound that holds uniformly over the pre-specified hypothesis set because we are more interested in a small portion of the hypothesis set that is accessible to the learning algorithm, and our goal is to address this issue. 
Since most tasks of modern neural networks are attacked by SGD and its variants, we are particularly interested in bounding the Rademacher complexity of the hypothesis set that SGD accesses during training.\nTo this end, we propose to model the discrete-time SGD recursion through the lens of stochastic differential equations (SDEs), an approach that has been widely used to study the escaping behavior of SGD (Jastrzebski et al., 2018 ###reference_b33###; Nguyen et al., 2019 ###reference_b55###; Xie et al., 2021 ###reference_b69###).\nAn important ingredient to studying SGD from this perspective is stochastic gradient noise (SGN), which is the difference between the stochastic gradient over a mini-batch and the true gradient over the full training set.\nIn early attempts, by invoking the central limit theorem, SGN is assumed to be either Gaussian (Mandt et al., 2017 ###reference_b48###; Li et al., 2017 ###reference_b40###; Hu et al., 2019 ###reference_b31###; Chaudhari & Soatto, 2018 ###reference_b14###; Xie et al., 2021 ###reference_b69###) or L\u00e9vy stable (Simsekli et al., 2019 ###reference_b61###; Zhang et al., 2020 ###reference_b73###).\nThese assumptions are compliant with an implicit constraint that SGN incurred at different iterations is mutually independent.\nHowever, as shown in Figure 1 ###reference_###, the temporal correlation of SGN is significant, suggesting that SGN is more reasonable to be fractional Gaussian noise (FGN) rather than Gaussian noise or from L\u00e9vy stable distribution.\nRecall that FGNs are the increments of fractional Brownian motion (FBM), a self-similar random process, thus allowing us to quantify the roughness of the optimization trajectory in terms of its Hausdorff dimension.\nWhile the FBM-driven SDE representation of the SGD recursion has previously been investigated (Lucchi et al., 2022 ###reference_b45###; Tan et al., 2023 ###reference_b65###), they only focused on why SGD favors flat minima and a rigorous treatment of 
its relation to generalization is still lacking.\nAt the core of our approach lies the fact that the optimization trajectory accessed by SGD during training is restricted to a small subset of the hypothesis space, which is fractal-like due to the incurred FGNs (Klingenh\u00f6fer & Z\u00e4hle, 1999 ###reference_b37###; Lou & Ouyang, 2016 ###reference_b44###).\nWe finally note that there already exist some generalization bounds that take the fractal structure into account, for example, see Simsekli et al. (2020 ###reference_b62###); Camuto et al. (2021 ###reference_b13###); Dupuis et al. (2023 ###reference_b18###); Sachs et al. (2023 ###reference_b58###).\nHowever, these approaches only present certain complexity measures such as the tail index to compare the generalization performance of one model against that of one another.\nBoth of them are not able to quantitatively give a plausible estimate of the generalization error and their experimental results are restricted to using a constant learning rate, which is unrealistic for real-world applications.\nMore seriously, when a classification model is trained with the cross-entropy\nloss, Camuto et al. (2021 ###reference_b13###) could not even observe a clear negative or positive correlation between the complexity measure and the generalization error.\nBy contrast, our approach can yield non-vacuous generalization bounds that predict the test loss (accuracy) well.\nMeanwhile, our bound is also model-agnostic, namely, we can efficiently estimate it for any DNNs with complex architectures such as ResNet (He et al., 2016 ###reference_b29###) and Vision Transformer (Dosovitskiy et al., 2021 ###reference_b17###).\nThe remainder of the paper is organized as follows.\nWe first review some mathematical notions in Section 2 ###reference_### and then elaborate on the novel generalization bound for SGD in Section 3 ###reference_###. Before concluding, we finally present the experimental results in Section 4 ###reference_###."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Preliminaries",
+ "text": "In this section, we briefly recap several concepts that we will use throughout this paper."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Fractional Brownian Motion",
+ "text": "In probability theory, fractional Brownian motion (FBM), introduced by Mandelbrot & Van Ness (1968 ###reference_b47###), is an extension of Brownian motion and is defined as follows.\nGiven a complete probability space (\u03a9, F, P),\nFBM is an almost surely continuous centered Gaussian process (B^H_t)_{t \u2265 0}\nwith covariance function\nE[B^H_t B^H_s] = (1/2)(|t|^{2H} + |s|^{2H} - |t - s|^{2H}),\nwhere H is a real value in (0, 1) and is often referred to as the Hurst exponent.\nUnlike Brownian motion and other stochastic processes, the increments of FBM need not be independent. In particular, when H < 1/2, the increments of FBM are negatively correlated and exhibit short-range dependence, implying that it is more likely to overturn past changes. By contrast, FBM shows long-range dependence when H > 1/2. That is, if it was increasing in the past, it is persistent to keep the trend and vice versa.\nIn particular, when H = 1/2, FBM reduces to the standard Brownian motion.\nTo gain some intuition, we plot several sample paths of FBM in Figure 2 ###reference_### with different Hurst exponents.\nOne can observe that, when the Hurst exponent is small,\nthe sample path is seriously ragged. By contrast, it appears dramatically smoother when the Hurst exponent becomes relatively larger.\n###figure_2###"
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Fractal Dimension",
+ "text": "The notion of dimension is central to our analysis.\nOne that we are most familiar with is the ambient dimension. Roughly speaking, a dimension describes how much space a set occupies near each of its points.\nFor instance, R^d as a vector space has an ambient dimension of d,\nsince d different coordinates are required to identify a point in this space.\nThe fractal dimension, however, extends this notion to the fractional case.\nWhile it turns out to be particularly useful in many mathematical fields such as number theory and dynamical systems, there are many different ways to define fractal dimension, and not all the definitions are equivalent to each other.\nOf the wide variety of fractal dimensions, we focus on probably the most important box-counting and Hausdorff dimensions.\nBox-counting dimension. Suppose F is a non-empty subset of R^d,\nand the diameter of a set U \u2286 R^d is defined as |U| = sup{ |x - y| : x, y \u2208 U }.\nLet N_\u03b4(F) be the least number of subsets of diameter\nat most \u03b4 needed to cover F, that is,\nF \u2286 \u222a_{i=1}^{N_\u03b4(F)} U_i and |U_i| \u2264 \u03b4 for each i.\nThen, the lower and upper box-counting dimensions of F, respectively,\nare defined as\ndim_LB F = liminf_{\u03b4 \u2192 0} log N_\u03b4(F) / (-log \u03b4)\nand\ndim_UB F = limsup_{\u03b4 \u2192 0} log N_\u03b4(F) / (-log \u03b4).\nNote that\ndim_LB F \u2264 dim_UB F,\nand if the equality holds, the box-counting dimension of F\nis then denoted by\ndim_B F = lim_{\u03b4 \u2192 0} log N_\u03b4(F) / (-log \u03b4).\nThe popularity of the box-counting dimension is largely due to its intuitive definition and relative ease of empirical calculation.\nBy contrast, the Hausdorff dimension, which is described below, is in terms of measure theory and is mathematically convenient to work with.\nConsequently, a disadvantage of the Hausdorff dimension is that it is often difficult to estimate by computational methods.\nHowever, for a proper understanding of fractal geometry, familiarity with the Hausdorff dimension is essential.\nHausdorff dimension. Let {U_i} be a \u03b4-cover of a non-empty bounded set F \u2286 R^d,\nand for each s \u2265 0, we call\nH^s(F) = lim_{\u03b4 \u2192 0} inf { \u03a3_i |U_i|^s : {U_i} is a \u03b4-cover of F }\nthe s-dimensional Hausdorff measure of F.\nUsually, it equals 0 or \u221e.\nThe critical value of s at which H^s(F)\njumps from \u221e to 0 is referred to as the Hausdorff dimension.\nRigorously, it is defined as\ndim_H F = inf { s \u2265 0 : H^s(F) = 0 } = sup { s \u2265 0 : H^s(F) = \u221e }.\nWhile these two kinds of dimensions are the same under some regularity conditions (Mattila, 1999 ###reference_b49###, Theorem 5.7), they are not equivalent to each other.\nFor example, considering the set of rationals in [0, 1], the Hausdorff dimension is 0, while the box-counting dimension is 1.\nIn general, it holds that dim_H F \u2264 dim_LB F \u2264 dim_UB F."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Non-Vacuous Generalization Bound for SGD",
+ "text": "Assume we have access to a training set S = {z_1, ..., z_n} of n independent and identically distributed (i.i.d.) data points z_i = (x_i, y_i),\nwhere x_i \u2208 X denotes the features, y_i \u2208 Y denotes the labels, and Z = X \u00d7 Y denotes the data space that follows an unknown data distribution D.\nThe goal of supervised learning is to choose a suitable hypothesis h_w, parameterized by a vector of network parameters w \u2208 R^d, so that the generalization error (i.e. the risk on previously unseen data),\nR(w) = E_{(x, y) ~ D}[\u2113(h_w(x), y)],\nis small.\nHere, \u2113 is a non-negative loss function, and \u2113 \u2218 h_w is the composition of the loss and the hypothesis, which will also be referred to as \u201closs\u201d, with a slight abuse of notation.\nHowever, due to the unknown data distribution D, we are not able to minimize R(w) directly.\nInstead, we can only minimize the empirical error over the training set S, namely,\nR_S(w) = (1/n) \u03a3_{i=1}^{n} \u2113(h_w(x_i), y_i).\nNotice that the difference R(w) - R_S(w) is referred to as the generalization gap.\nParticularly, in the realizable case where the empirical error is zero, the generalization gap is interchangeable with the generalization error."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Problem Setup",
+ "text": "Starting from an initialization point , the SGD algorithm recursively updates the weights of the neural network as follows,\nwhere is the learning rate and is an unbiased estimate of the true gradient, which is computed by\nwhere is a set of examples (i.e. mini-batch) that are i.i.d. drawn from and is the mini-batch size.\nParticularly, when , SGD becomes the full-batch gradient descent (GD).\nWhile the SGD algorithm is random, once the training set , the initialization point , and the training steps are fixed, the total number of optimization trajectories (i.e. the collection of weights throughout training) is indeed finite (though very large).\nTo see this, notice that there are only finitely many subsets that can take.\nFor example, in the case of with-replacement sampling, there are in total mini-batches to choose from at every step.\nBy contrast, in the case of without-replacement sampling, this number can be further reduced to .\nOf course, here we require that there are no other sources of stochasticity during training such as perturbing the weights with random noise.\nMany studies (Zhu et al., 2019 ###reference_b75###; Amir et al., 2021 ###reference_b4###; Wu & Su, 2023 ###reference_b67###) have shown that training neural networks with the stochastic gradient generally outperforms with the true gradient because of the incurred stochastic gradient noise (SGN), which is defined as\nIf one assumes that the learning rate is sufficiently small and follows a zero-mean distribution, the SGD recursion (1 ###reference_###) can be seen as a first-order discretization of a continuous-time SDE (Li et al., 2017 ###reference_b40###).\nRecently, perspectives from SDEs have provided many insights on studying the generalization behavior of DNNs through the asymptotic convergence rate and local dynamic behavior of SGD (Mandt et al., 2017 ###reference_b48###; Simsekli et al., 2019 ###reference_b61###; Xie et al., 2021 ###reference_b69###; Tan et al., 2023 
###reference_b65###; Gess et al., 2024 ###reference_b25###).\nIn our analysis, we will consider the case where SGD is viewed as the Euler-Maruyama discretization of the following SDE,\nwhere is the drift coefficient, is the diffusion coefficient, and represents a -dimensional FBM with Hurst exponents .\nFor simplicity, we also assume that the random noise of different coordinates is mutually independent.\nSuch class of SDEs admits SGN produced at different iterations to be mutually interdependent, which significantly varies from previous studies where SGN is assumed either to be Gaussian (Mandt et al., 2017 ###reference_b48###; Li et al., 2019 ###reference_b41###) or follow a L\u00e9vy stable distribution (Simsekli et al., 2020 ###reference_b62###; Dupuis & \u015eim\u015fekli, 2024 ###reference_b19###).\nA pairwise correspondence between discrete-time SGD recursion (1 ###reference_###) and continuous-time SDE driven by FBM (2 ###reference_###) can be easily established.\nFor a finite number of training steps, let be the optimization trajectory that achieved by a specific run indexed by of SGD.\nWhen the learning rate is small enough,\nfor a given , we can always define a stochastic process\n as the interpolation of two successive iterates and such that for all .\nThis approach is frequently adopted in SDE literature (Mishura & Shevchenko, 2008 ###reference_b50###) and allows the trajectory to be continuous to represent the SGD recursion.\nTherefore, always can be viewed as a sample path of the solution to SDE (2 ###reference_###) in a time frame, say, without loss of generality, .\nConsequently, for a training set and an initialization point , the hypothesis set that SGD accesses is essentially a tiny space and can be defined as\nWhile is randomly drawn from a probability distribution, unless other specified, our discussion below always assumes that is fixed so that our analysis can be greatly simplified.\nThis is because most SGD solutions trained from different 
initialization points belong to the same basin in the loss landscape after proper permutation (Entezari et al., 2022 ###reference_b23###; Ainsworth et al., 2023 ###reference_b3###).\nAs a result, any generalization bounds conditioned on can also be applied to predict the generalization performance of SGD solutions that are trained from another initialization point.\nFor simplicity of notation, we will omit the dependence on and simply write instead.\nFurther, we write to denote the loss functions associated with mapping from to ,\nTo remove the dependence on , we can take a union over , yielding and to represent the set of all possible parameters and loss functions.\nFor any , our goal is to bound the following term\nwhich is algorithm-dependent and differs from what is usually studied where is replaced by a pre-specified hypothesis set.\nIn the sequel, we will present the main result in terms of the empirical Rademacher complexity (Bartlett & Mendelson, 2002 ###reference_b9###), which is defined as\nwhere the Rademacher variables are i.i.d. with .\nLet be the set of all possible loss evaluations that a loss function can achieve over the training set , namely,\nWe can further observe that the value of is the same as the Rademacher complexity of the set .\nIn the following section, we aim to control by taking into account the Hausdorff dimension of the sample paths of the solution to SDE (2 ###reference_###).\nThe Hausdorff dimension determines the raggedness of the sample path and characterizes the dynamic behavior of SGD around the local minimum."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Main Assumptions",
45
+ "text": "We will first present several assumptions used in our theoretical analysis.\nThe loss function is bounded in and -Lipschitz continuous with respect to its first argument.\nThe boundedness assumption is standard in the literature, for example, see Shalev-Shwartz & Ben-David (2014 ###reference_b60###) and Mohri et al. (2018 ###reference_b51###).\nFurthermore, if a mapping satisfies Lipschitz continuity,\nthen the Hausdorff dimension of the image is no greater than the Hausdorff\ndimension of the preimage (Falconer, 2004 ###reference_b24###, Proposition 3.3).\nThis Lipschitz assumption can be easily satisfied if the gradient of the loss function is uniformly bounded for any , for example, by gradient clipping.\n###figure_3### The drift coefficient and diffusion coefficient in SDE (2 ###reference_###) are both bounded vector fields on .\nThis assumption is reasonable due to the\nexistence of batch normalization (Ioffe & Szegedy, 2015 ###reference_b32###), weight decay (Krogh & Hertz, 1992 ###reference_b39###), and other popular tricks. Under this assumption, the existence and uniqueness of solutions to SDE (2 ###reference_###) are guaranteed if the Hurst exponent is larger than (Lyons & Qian, 2002 ###reference_b46###).\nHowever, the current study on the Hausdorff dimension of the sample paths of the solution to SDE (2 ###reference_###) is limited to the case where the Hurst exponent is the same for all coordinates (Lou & Ouyang, 2016 ###reference_b44###).\nThis obviously is not true for real-world neural networks that have millions (or even billions) of parameters (cf. Figure 1 ###reference_###).\nLuckily, when the mini-batch size is small, the norm of SGN is always much larger than the norm of the true gradient (cf. 
Figure 3 ###reference_###), suggesting that the training process is dominated by the diffusion term so that we can instead use the known results of multi-dimensional FBM.\nIn light of this, we can further impose the assumption below.\nFor each specific run indexed by of SGD, the Hausdorff dimension of the sample path of the solution to SDE (2 ###reference_###), , is upper bounded by the Hausdorff dimension of the sample path of the driven FBM, which is explicitly given by\nwhere the Hurst exponents are sorted such that and is determined by the inequality (Xiao, 1995 ###reference_b68###, Theorem 2.1).\nFurthermore, we assume the data distribution is supported on a countable set so that .\nWe note that the countability assumption is crucial to our results.\nThanks to this condition, we are able to invoke the countable stability (Falconer, 2004 ###reference_b24###, Section 3.2) of the Hausdorff dimension to control the upper bound of .\nThis assumption generally holds for image-based datasets, where each pixel is an integer from to .\nMoreover, we can further require that the Hausdorff dimension corresponding to the driven FBM does not depend on the order of the mini-batches. Namely, for any specific run of SGD, it remains the same.\nThis can be easily checked by shuffling the order of mini-batches (cf. 
Table 1 ###reference_###).\nFurthermore, we can also observe that remains approximately the same even when the model is trained with different training sets and initialization points.\nTherefore, the Hausdorff dimension estimated under any specific run of SGD essentially provides a plausible upper bound over , which is particularly useful in practice.\nLet be a non-empty bounded subset of and there exists a Borel measure on and positive numbers , , and such that and for\nwhere\nThis so-called Ahlfors regularity is often used in fractal geometry to ensure the set is regular enough so that the Hausdorff dimension is equivalent to the box-counting dimension (Mattila, 1999 ###reference_b49###, Theorem 5.7).\nThat is, under this assumption, we have .\nAs a result, we can use the covering number\ntechniques.\nRecall that is a collection of sample paths of the solution to SDE (2 ###reference_###) and thus we have as well."
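Since the Ahlfors-regularity assumption lets the Hausdorff dimension be replaced by the box-counting dimension, the latter can be estimated directly from a finite point cloud. The following sketch (our own illustrative choice of scales and test data, not the paper's implementation) counts occupied dyadic boxes and fits the slope of log N(eps) against log(1/eps).

```python
import numpy as np

def box_counting_dimension(points, scales=range(2, 8)):
    """Estimate the box-counting dimension of a point cloud in [0, 1]^d.
    N(eps) is the number of occupied boxes of side eps; the dimension is
    the slope of log N(eps) versus log(1/eps)."""
    points = np.asarray(points, dtype=float)
    log_inv_eps, log_counts = [], []
    for k in scales:
        eps = 2.0 ** (-k)
        boxes = np.unique(np.floor(points / eps), axis=0)  # occupied boxes
        log_inv_eps.append(k * np.log(2.0))
        log_counts.append(np.log(len(boxes)))
    slope, _ = np.polyfit(log_inv_eps, log_counts, 1)
    return slope

# Sanity check: a smooth curve (the diagonal of the unit square) has dimension ~1.
t = np.linspace(0.0, 1.0, 10000, endpoint=False)
dim = box_counting_dimension(np.column_stack([t, t]))
```

Rougher sample paths (lower Hurst exponents) occupy more boxes at fine scales and hence yield larger estimated dimensions.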
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Upper Bound",
51
+ "text": "Based on these assumptions, we are ready to present an upper bound over .\nLet Assumptions 1 ###reference_umption1###-4 ###reference_umption4### hold. For any i.i.d. sample , there always exists a constant such that the following inequality holds:\nFix and .\nThen, for any , satisfying , we always have for the corresponding , the following inequality\nimplying that .\nAccording to Assumption 4 ###reference_umption4###, we know that is regular enough so that .\nThis means that, when approaches zero, we have\nTherefore, for any , there always exists an integer such that for any\nChoosing and , we have for all\nSubstituting in yields\nWriting , we have\nwhere the last inequality is due to the fact that for all .\nBy appealing to Dudley\u2019s lemma (Shalev-Shwartz & Ben-David, 2014 ###reference_b60###, Lemma 27.5), the following inequality holds\nthus completing the proof.\n\u220e\nBased on the Rademacher complexity , we are now ready to present the bound over the maximal generalization gap.\nLet Assumptions 1 ###reference_umption1###-4 ###reference_umption4### hold.\nThen, for any , with probability at least over the draw of an i.i.d. sample , there always exists a constant such that the following inequality holds for all ,\nThis is a direct consequence of Mohri et al. 
(2018 ###reference_b51###, Theorem 3.3).\n\u220e\nIn the classical literature where the fractal structure of the learned hypothesis set is not taken into consideration, the Rademacher complexity scales as if we assume , see Shalev-Shwartz & Ben-David (2014 ###reference_b60###, Example 27.2).\nAs a result, this suggests that the generalization bound would increase with the number of training examples, which obviously contradicts the empirical results.\nBy contrast, our result suggests that the above bound can decrease with the number of training examples at a sublinear rate, namely, .\nFor the simplest case where , the above bound reduces to\nwhich implies that the generalization gap continues to increase until the training process saturates.\nIn addition, it also suggests that optimizing in the flat regions of the loss landscape indeed decreases the generalization gap.\nThis is because the optimization trajectories generated in the flat regions are smoother in terms of lower values of (e.g. in the case of small vs. large mini-batch size).\nHowever, it should be emphasized that a small generalization gap does not necessarily dictate a small generalization error (which additionally requires the training loss to be small).\nFor example, for an untrained neural network, the generalization gap between the training set and the test set is small, whereas the generalization error on the test set could be very large.\nNote that our bound does not explicitly depend on the number of trainable parameters .\nInstead, the Hausdorff dimension plays a similar role and quantifies the \u201ceffective\u201d complexity of the hypothesis set because in general is much smaller than .\nMoreover, the effects of other important ingredients such as the network architecture and the initialization method are implicitly absorbed in as well."
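For reference, one standard form of the entropy-integral (Dudley-type) bound invoked in the proof above is sketched below; the exact constants and statement of Shalev-Shwartz & Ben-David, Lemma 27.5 may differ, so this should be read as a generic recollection rather than the precise lemma used:

```latex
\mathfrak{R}_S(\mathcal{F})
  \;\le\; \inf_{\alpha \ge 0} \Bigg( 4\alpha
  \;+\; \frac{12}{\sqrt{n}} \int_{\alpha}^{\sup_{f \in \mathcal{F}} \|f\|_{S}}
  \sqrt{\log \mathcal{N}(\mathcal{F}, \epsilon, \|\cdot\|_{S})} \, d\epsilon \Bigg),
```

where \mathcal{N}(\mathcal{F}, \epsilon, \|\cdot\|_S) denotes the covering number of \mathcal{F} at scale \epsilon in the empirical metric. Combining such an integral with the covering-number growth rate implied by the (box-counting) dimension of the hypothesis set is what yields the dimension-dependent sublinear rate discussed above.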
52
+ },
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "Estimation",
57
+ "text": "The generalization bound of Theorem 2 ###reference_orem2### can be easily computed in practice, and we estimate it by the formula below:\nCompared to Theorem 2 ###reference_orem2###, notice that we have omitted the nuisance factor because it is essentially an artifact of the proof, and its influence is limited even when the value of is very large.\nIndeed, if is a self-similar set or generated from an iterated function system (Falconer, 2004 ###reference_b24###; Camuto et al., 2021 ###reference_b13###), the value of approximately equals one.\nApart from the already known number of training examples , there are three remaining terms to be calculated.\nWe start with the Lipschitz constant .\nAlthough we have assumed as a constant that universally holds for any , in practice, it should be restricted to the space of and therefore corresponds to a much smaller value.\nRecall that the Lipschitz continuity can be guaranteed if the gradient of the loss function is bounded, namely, for any and .\nMoreover, we have at each iteration .\nTherefore, we can approximate with the maximum value of throughout training.\nNext, we are going to estimate .\nTo this end, we need to calculate the per-example loss on the full training set until the training is finished.\nSubsequently, we can estimate the diameter of by computing the smallest bounding ball 111The code is available at https://github.com/hirsch-lab/cyminiball ###reference_###..\nHowever, this approach is computationally prohibitive when is large.\nTo circumvent this issue, we can alternatively approximate with\nwhere and are the vectors of network parameters at initialization and the end of training.\nThis is because the loss is always non-negative and generally tends to decrease during training.\nWe now continue to compute according to Equation (3 ###reference_###) to give an estimate of , for which we first need to estimate the Hurst exponent 222The code is available at 
https://github.com/CSchoel/nolds ###reference_github.com/CSchoel/nolds###. for each coordinate of the neural network.\nTo produce a series of SGN for a neural network, we run through the full training set to calculate the full-batch gradient.\nThen, we feed a number of mini-batches into the neural network, and as a result, we can obtain a series of SGN by subtracting the full-batch gradient from the mini-batch gradient.\nNotice that for very large neural networks that contain millions (even billions) of trainable parameters, due to limited memory, we are not able to generate a series of SGN for each coordinate.\nIn this case, we can randomly sample a small portion of coordinates and we find that the estimation is robust to the number of used coordinates (see Supplementary Material, Figure 1).\nFinally, we want to emphasize that these terms, theoretically, should be better estimated using the union of multiple runs with different seeds.\nIn practice, however, we find that they often lead to similar results.\nTherefore, we choose to estimate using a single run, which is particularly useful in scenarios such as neural architecture search where an instant measure is required to compare against different runs."
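The Hurst-estimation step described above can be sketched as follows. This is a minimal rescaled-range (R/S) estimator in the spirit of what packages such as nolds provide; the window sizes, doubling schedule, and white-noise sanity check are our own illustrative choices, not the paper's code.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        ratios = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            y = np.cumsum(chunk - chunk.mean())   # mean-adjusted cumulative deviations
            r = y.max() - y.min()                  # range of the deviations
            s = chunk.std()                        # chunk standard deviation
            if s > 0:
                ratios.append(r / s)
        sizes.append(size)
        rs_means.append(np.mean(ratios))
        size *= 2
    # The Hurst exponent is the slope of log(R/S) against log(window size).
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope
```

Applied to a series of SGN for one coordinate, a slope near 1/2 indicates roughly independent noise, while larger values indicate long-range dependence.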
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Numerical Studies",
63
+ "text": "In this section, we present the experimental results to demonstrate the efficacy of the proposed generalization bound."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Implementation Details",
69
+ "text": "We consider three publicly available datasets\u2014CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009 ###reference_b38###), and ImageNet-1K (Deng et al., 2009 ###reference_b15###).\nCIFAR-10 and CIFAR-100 are composed of training examples and test examples that are equally divided into 10 and 100 classes.\nBy contrast, ImageNet-1K is a large-scale dataset that consists of 1000 classes and contains approximately one million training images and validation images.\nWe do not use data augmentation in all experiments,\nsince doing so will prevent the model from consistently reaching low\ncross-entropy loss and impose uncontrollable effects on SGN as the training examples are no longer i.i.d. distributed (Dziugaite et al., 2020 ###reference_b20###; Jiang et al., 2020 ###reference_b34###).\nUnless otherwise specified, optimization uses SGD with momentum of and weight decay of .\nBy default, we use a mini-batch size of , a learning rate of , and a cosine learning rate scheduler to ensure that the models can fit the training set completely.\nDetermining when to stop the training process is important to quantitatively\nassess the generalization bounds, especially for those that can only be calculated after the training is finished.\nStopping too early or too late may produce different results.\nSlightly different from Jiang et al. (2020 ###reference_b34###); Dziugaite et al. (2020 ###reference_b20###), we terminate the training process when the training accuracy reaches the threshold of %.\nThis is because decreasing the cross-entropy loss to a very low value will result in severe overfitting."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Number of Training Examples",
75
+ "text": "Increasing the number of training examples will generally improve the generalization performance of DNNs (Kaplan et al., 2020 ###reference_b35###).\nWhile this observation is obvious, a non-negligible fact is that there are still a large number of generalization bounds that fail to (correctly) reveal this correlation (Nagarajan & Kolter, 2019 ###reference_b52###).\nIn the following, we aim to investigate how the proposed bound changes with the number of training examples.\nFirst, we need to generate a collection of subsets as follows: for CIFAR-10, we gradually increase the number of training examples (per class) from 500 to 5000 with a step size of 500; and for CIFAR-100, the number is increased from 100 to 500 with a step size of 50.\nWe then proceed to train two modern neural networks\u2014ResNet-56 (He et al., 2016 ###reference_b29###) and WideResNet-28-10 (Zagoruyko & Komodakis, 2016 ###reference_b71###)\u2014on them for 50 and 200 epochs, respectively.\nAs depicted in Figure 4 ###reference_### (and Supplementary Material, Figure 2), the generalization gap (test loss - training loss) indeed decreases as more training examples are used, and our bound correctly captures this trend.\nMore importantly, we observe that is non-vacuous and can almost recover the generalization gap when the full training set is used.\n###figure_4### ###figure_5###"
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Effects of Learning Rate and Mini-batch Size",
81
+ "text": "Another issue that hinders previous generalization bounds from wide usage is that they often anti-correlate\nwith the generalization error when changing the commonly used training hyperparameters (Jiang et al., 2020 ###reference_b34###).\nIn this part, we aim to probe the effects of learning rate and mini-batch size,\nwhich typically dominate the generalization performance of DNNs.\nTo this end, we varied the learning rate from to with a step size of and simultaneously doubled the mini-batch size from to .\nAs shown in Figure 5 ###reference_### (and Supplementary Material, Figure 3), we can observe that the upper bound indeed decreases with the ratio of the learning rate to the mini-batch size.\nThese results align with the observation that a larger ratio of learning rate to mini-batch size usually leads to a better generalization (Jastrzebski et al., 2018 ###reference_b33###; He et al., 2019 ###reference_b28###)."
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "Results on ImageNet-1K",
87
+ "text": "In this section, we continue to investigate how the proposed bound evolves with the training epoch.\nParticularly, we evaluate it on the large-scale ImageNet-1K dataset.\nFor this purpose, we trained two popular neural networks\u2014ResNet-18 and ViT-S-32 (Dosovitskiy et al., 2021 ###reference_b17###)\u2014with basic data augmentation, namely, resizing and cropping images to 224-pixel resolution and then normalizing them.\nFor ResNet-18, we trained it for 100 epochs with a mini-batch size of 256, and the optimizer is SGD with an initial learning rate of 0.1 and a weight decay of 1.0e-4.\nFor ViT-S-32, we trained it for 300 epochs with a mini-batch size of 1024, and the optimizer is AdamW with an initial learning rate of 3.0e-3 and a weight decay of 0.1.\nFor both models, a cosine schedule is used to adjust the learning rate.\n###figure_6### As shown in Figure 6 ###reference_###, we can observe that the predicted accuracy on the validation set monotonically increases as a function of the training epoch, which is consistent with the true validation accuracy.\nMore importantly, our approach is able to produce non-vacuous predictions at the end of training on the validation accuracy ( of predicted accuracy vs. of validation accuracy for ViT-S-32 and of predicted accuracy vs. of validation accuracy for ResNet-18).\nTo the best of our knowledge, these results are the tightest generalization bounds on ImageNet-1K to date."
88
+ },
89
+ {
90
+ "section_id": "4.5",
91
+ "parent_section_id": "4",
92
+ "section_name": "Comparison with Existing Estimators",
93
+ "text": "In this section, we quantitatively compare the Hausdorff dimension estimated according to Equation (3 ###reference_###) against other methods such as the upper Blumenthal-Getoor index (Simsekli et al., 2020 ###reference_b62###) and the persistent homology dimension (Birdal et al., 2021 ###reference_b12###; Dupuis et al., 2023 ###reference_b18###).\nTheoretically, these measures would be smaller if the corresponding neural network enjoys a better generalization performance.\nFor consistency, we again probe how they change with the number of training examples.\nAs illustrated in Figure 7 ###reference_### (and Supplementary Material, Figure 4), the persistent homology dimension increases with the training set size, which is undesirable because training with more examples generally yields better generalization.\nMeanwhile, the upper Blumenthal-Getoor index stays around and fails to convey any information about the training set size.\nBy contrast, our method suggests that the Hausdorff dimension decreases with the number of training examples, which is more consistent with the true generalization error.\n###figure_7###"
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "In this study, we developed a non-vacuous and tractable generalization bound for SGD from the perspective of fractal geometry, which differs from classical generalization bounds.\nEmpirical results further demonstrated its efficacy by altering the training set size and the ratio of the learning rate to the mini-batch size.\nFollowing this line, it is natural to extend our results to encompass adaptive optimizers such as Adam and RMSprop, which we leave for future study."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.20.4.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.6.3\" style=\"font-size:90%;\">Effects of different sources of stochasticity on Hausdorff dimension .\nThe first row quantifies how is affected by the different initialization points of the neural network (ResNet-20) under the same training set.\nWhen the neural network is initialized with the same weights, the second row describes how changes with the training set (i.e. random subsets of CIFAR-10).\nFinally, when both the initialization point and the training set are the same, the last row further studies the effect of the order of the mini-batches.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.18\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.18.13.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.18.13.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"4\" id=\"S3.T1.18.13.1.2\">Number of training examples (per class)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.18.14.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T1.18.14.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.18.14.2.2\">1000</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.18.14.2.3\">2000</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.18.14.2.4\">3000</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.18.14.2.5\">4000</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.10.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" 
id=\"S3.T1.10.4.5\">Initialization point</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.7.1.1\">3.12 \u00b1 0.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.8.2.2\">2.84 \u00b1 0.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.9.3.3\">2.79 \u00b1 0.07</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S3.T1.10.4.4\">2.70 \u00b1 0.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.14.8.5\">Training set</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.11.5.1\">3.02 \u00b1 0.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.12.6.2\">2.84 \u00b1 0.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.7.3\">2.78 \u00b1 0.04</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T1.14.8.4\">2.71 \u00b1 0.04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.18.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T1.18.12.5\">Mini-batch order</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.15.9.1\">3.03 \u00b1 0.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.16.10.2\">2.83 \u00b1 0.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.17.11.3\">2.77 \u00b1 0.02</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S3.T1.18.12.4\">2.71 \u00b1 0.04</td>\n</tr>\n</tbody>\n</table>\n</figure>",
106
+ "capture": "Table 1: Effects of different sources of stochasticity on Hausdorff dimension .\nThe first row quantifies how is affected by the different initialization points of the neural network (ResNet-20) under the same training set.\nWhen the neural network is initialized with the same weights, the second row describes how changes with the training set (i.e. random subsets of CIFAR-10).\nFinally, when both the initialization point and the training set are the same, the last row further studies the effect of the order of the mini-batches.\n"
107
+ }
108
+ },
109
+ "image_paths": {
110
+ "1": {
111
+ "figure_path": "2206.04359v2_figure_1.png",
112
+ "caption": "Figure 1: Histogram of Hurst exponents for all coordinates of ResNet-20. For each coordinate, we first generate a series of stochastic gradient noise (SGN) and then estimate its Hurst exponent.\nIf the elements of a time series are mutually independent, for example, in the case of the Brownian motion and the L\u00e9vy flight, the corresponding Hurst exponent would be 1/2 (Embrechts, 2009, Theorem 8.1.3).\nOtherwise, it would suggest that the elements are not independent.",
113
+ "url": "http://arxiv.org/html/2206.04359v2/x1.png"
114
+ },
115
+ "2": {
116
+ "figure_path": "2206.04359v2_figure_2.png",
117
+ "caption": "Figure 2: Sample paths of FBM in two-dimensional space. The colors indicate the evolution over time. The Hurst exponent H corresponds to the raggedness of the sample path, with a higher value leading to a smoother motion.",
118
+ "url": "http://arxiv.org/html/2206.04359v2/x2.png"
119
+ },
120
+ "3": {
121
+ "figure_path": "2206.04359v2_figure_3.png",
122
+ "caption": "Figure 3: Norm of the true gradient and the stochastic gradient as a function of training epoch, where the mini-batch size is 128.",
123
+ "url": "http://arxiv.org/html/2206.04359v2/x3.png"
124
+ },
125
+ "4": {
126
+ "figure_path": "2206.04359v2_figure_4.png",
127
+ "caption": "Figure 4: Upper bound \\varrho_{\\mathrm{bound}} and true generalization gap as a function of the number of training examples.",
128
+ "url": "http://arxiv.org/html/2206.04359v2/x4.png"
129
+ },
130
+ "5": {
131
+ "figure_path": "2206.04359v2_figure_5.png",
132
+ "caption": "Figure 5: Negative correlation between the upper bound \\varrho_{\\mathrm{bound}} and the ratio of learning rate to mini-batch size.",
133
+ "url": "http://arxiv.org/html/2206.04359v2/x5.png"
134
+ },
135
+ "6": {
136
+ "figure_path": "2206.04359v2_figure_6.png",
137
+ "caption": "Figure 6: Predicted accuracy as a function of the training epoch on the ImageNet-1K validation set. The predicted accuracy on the validation set is obtained by first estimating the validation loss (i.e. \\varrho_{\\mathrm{bound}} + training loss) and then retrieving the closest accuracy from the training curve (i.e. pairs of training loss and training accuracy).",
138
+ "url": "http://arxiv.org/html/2206.04359v2/x6.png"
139
+ },
140
+ "7": {
141
+ "figure_path": "2206.04359v2_figure_7.png",
142
+ "caption": "Figure 7: Comparison between different Hausdorff dimension estimators.",
143
+ "url": "http://arxiv.org/html/2206.04359v2/x7.png"
144
+ }
145
+ },
146
+ "validation": true,
147
+ "references": [],
148
+ "url": "http://arxiv.org/html/2206.04359v2"
149
+ }
20240722/2209.02552v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2211.12592v2.json ADDED
@@ -0,0 +1,283 @@
1
+ {
2
+ "title": "Representations of the symmetric group are decomposable in polynomial time",
3
+ "abstract": "We introduce an algorithm to decompose orthogonal matrix representations of the symmetric group over the reals into irreducible representations, which as a by-product also computes the multiplicities of the irreducible representations. The algorithm applied to a -dimensional representation of is shown to have a complexity of operations for determining which irreducible representations are present and their corresponding multiplicities, and a further operations to fully decompose representations with non-trivial multiplicities. These complexity bounds are pessimistic, and in a practical implementation using floating point arithmetic and exploiting sparsity we observe better complexity. We demonstrate this algorithm on the problem of computing multiplicities of tensor products of two irreducible representations (the Kronecker coefficients problem) as well as higher order tensor products. For hook and hook-like irreducible representations the algorithm has polynomial complexity as increases. We also demonstrate an application to constructing a basis of multivariate orthogonal polynomials with respect to a tensor product weight so that applying a permutation of variables induces an irreducible representation.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": " Introduction",
9
+ "text": "Let be an orthogonal matrix representation of the symmetric group, acting on the vector space . In other words is a homomorphism: for all we have . We consider the problem of decomposing the representation into irreducible representations, that is decomposing\nwhere are each irreducible representations acting on the vector spaces , where in the setting of the symmetric group these are each associated with a different partition of .\nWe assume that we are given the generators of a representation of as symmetric (which must also be orthogonal) matrices where are the simple transpositions, i.e., the Coxeter generators.\nGiven generators of an orthogonal matrix representation , we shall first consider the following fundamental problems:\nDetermine the irreducible representations present in the above decomposition.\nDetermine the (non-zero) multiplicities of each irreducible representation, that is, how many times does each irreducible representation occur.\nWe will then solve the question of decomposing a representation by solving an equivalent linear algebra problem:\nCompute an orthogonal matrix that block-diagonalises the representation so that\nwhere are distinct irreducible representations and are their multiplicities. Here we use the notation\nwhere and is the dimension of the irreducible representation.\nIn this paper we will prove that both problems are computable (assuming real arithmetic). In particular Problems 1 ###reference_blem1### and 2 ###reference_blem2### can be solved via Algorithm 1 ###reference_### in operations (Corollary 3 ###reference_ollary3###) and Problem 3 ###reference_blem3### can be solved via Algorithm 2 ###reference_### in a further operations (Corollary 5 ###reference_ollary5###). 
Going further, the practical realisation of Algorithm 2 ###reference_### appears to achieve operations by exploiting sparsity, but this more efficient complexity result is not yet proven, see discussion in Section 5 ###reference_###.\nPrior work on these problems begins with Dixon [5 ###reference_b5###], who constructed an iterative algorithm for reducing representations of general finite groups to irreducible components, which can be used to solve Problems 1 ###reference_blem1### and 2 ###reference_blem2### in the limit. However, it does not consider the decomposition of repeated copies of irreducible representations, hence does not solve Problem 3 ###reference_blem3###, that is, it only computes the canonical representation as defined in [20 ###reference_b20###]. Furthermore, the algorithm requires the computation of eigenvectors and eigenvalues, which relies on iterative algorithms that may fail. There is also related work on decomposing representations of compact groups [19 ###reference_b19###].\nIt is possible to solve Problem 2 ###reference_blem2### by using the orthogonality relationships of the characters with respect to the inner product\nsee for example [20 ###reference_b20###, Theorem 4]. As characters are class functions they depend only on the conjugacy class of the permutation, therefore this sum can be reduced to one over all partitions, whose number is given by the partition function . The complexity of the resulting algorithm depends on the cost of creating a representation when running through a set of permutations in the given conjugacy class. If these can be constructed directly in operations then the total complexity is . 
However, if one is only given generators (that may be dense matrices) we need to construct other representations by multiplying as many as matrices, hence the total complexity is where the depends on which matrix-multiplication algorithm is used ( with the standard definition of matrix multiplication but can be improved using fast matrix multiplication). In either case, as grows faster than algebraically it does not result in a polynomial complexity algorithm.\nIn other work, Serre [20 ###reference_b20###, Theorem 8, Proposition 8] introduced explicit projections to the invariant vector spaces corresponding to irreducible representations, and the column rank of these projectors can be used to solve Problem 2 ###reference_blem2###. However, the projections require computing sums over every element of the group, that is, in the case of the symmetric group the complexity for constructing each projector is operations. The number of projectors needed is so the total complexity is operations. Furthermore, it does not give a fast approach to solve Problem 1 ###reference_blem1###, that is, it cannot determine which irreducible representations are present (i.e., have non-zero multiplicity) without potentially checking all possibilities which grows super-algebraically with . Finally, whilst a QR decomposition of the projectors can be used to construct an orthogonal matrix that block-diagonalises a representation into irreducible representation there is no guarantee that the irreducible representations in the same isomorphism class will be identical, that is, one likely needs to apply Algorithm 2 ###reference_### (or an equivalent procedure) to enforce these representations are identical.\nThe complexity of Serre\u2019s approach for constructing projectors can be dramatically reduced to polynomial in using the ideas introduced by Hymabaccus and Pasechnik in [11 ###reference_b11###], see also the related software [12 ###reference_b12###]. 
In particular, in the case of the symmetric group the complexity of computing the projectors is reducible to operations for a total of operations to solve Problem 2 ###reference_blem2###, assuming one has computed which irreducible representations are present (i.e. solved Problem 1 ###reference_blem1###). Unfortunately, this work does not provide a fast way to solve Problem 1 ###reference_blem1### and hence the total cost is still not polynomial, unless combined with the proposed Algorithm 1 ###reference_###.\nRemark The work of [11 ###reference_b11###, 12 ###reference_b12###] takes a more computational-algebra approach and has the significant benefit that the computations can be performed exactly. As the algorithms in this paper work with reals and compute orthogonal matrices that cannot in general be represented exactly on a computer, the practical implementation inevitably requires floating point arithmetic operations, hence the calculations will not be exact due to round-off errors. However, as the algorithms are built on top of well-understood numerical linear algebra algorithms, such as those for computing eigenvalues of symmetric matrices, the computations are reliable, as demonstrated in Figure 5 ###reference_###. Going a step further, techniques from Validated Numerics may yield a rigorous implementation or verification of the results, a topic we discuss in Section 8 ###reference_###.\nSolving Problem 2 ###reference_blem2### in the special case where the representation is a tensor product of irreducible representations of is one approach to computing Kronecker coefficients, a problem that is known to be NP-hard (in particular, #P-hard [13 ###reference_b13###, 2 ###reference_b2###]). In certain settings where the corresponding partitions have a fixed length there is a combinatorial algorithm that can compute specific Kronecker coefficients in polynomial time [3 ###reference_b3###, 18 ###reference_b18###]. 
It is difficult to make a direct comparison of our complexity results when applied to the Kronecker coefficients problem: our complexity results depend on the dimension of the corresponding irreducible representations and we obtain all non-zero multiplicities without having to deduce the zero multiplicities, whereas the complexity results of [18 ###reference_b18###] largely depend on the length of the corresponding partitions (including that of the partition corresponding to the irreducible representation whose multiplicity is being computed) and give no approach to deduce which multiplicities are non-zero apart from testing all possible partitions, whose number grows faster than polynomial in .\nWe will now construct a concrete example that will be used throughout this paper. Denote the representation coming from permutation matrices as , where is the identity matrix with the rows permuted according to . It has symmetric generators with the and th rows permuted. We will consider the representation defined by:\nThis representation is block-diagonalised by the matrix\nA quick check shows that we have block-diagonalised the generators (and hence the representation) into a and three sub-blocks:\nNote that the sub-blocks are necessarily also representations of . That is, we have decomposed the permutation matrix representation\ninto three irreducible representations: one associated with the partition , one associated with the trivial partition (with multiplicity two), and one associated with the sign partition . 
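The permutation-matrix generators in this running example are easy to construct explicitly. A short sketch (the function name is illustrative, not from the paper, and n = 4 is chosen purely for illustration):

```python
import numpy as np

def transposition_generators(n):
    """Permutation matrices of the adjacent transpositions (k, k+1),
    which generate the permutation representation of S_n."""
    gens = []
    for k in range(n - 1):
        P = np.eye(n)
        P[[k, k + 1]] = P[[k + 1, k]]   # swap rows k and k+1
        gens.append(P)
    return gens

gens = transposition_generators(4)
# Each generator is symmetric and an involution, as stated above:
assert all(np.array_equal(P, P.T) for P in gens)
assert all(np.array_equal(P @ P, np.eye(4)) for P in gens)
# Adjacent generators satisfy the braid relation s1 s2 s1 = s2 s1 s2:
s1, s2, s3 = gens
assert np.array_equal(s1 @ s2 @ s1, s2 @ s1 @ s2)
```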
The algorithms we introduce compute these partitions and multiplicities as well as the matrix from the generators .\nThe paper is structured as follows:\nSection 2 ###reference_###: We review basics of representation theory of the symmetric group, including irreducible representations, their generators, the Gelfand\u2013Tsetlin (GZ) algebra and its spectral properties for irreducible representations.\nSection 3 ###reference_###: We detail how the problem of reducing an orthogonal matrix representation can be recast in terms of linear algebra, in particular as Problem 3 ###reference_blem3###.\nSection 4 ###reference_###: We discuss how the GZ algebra can be used to solve Problem 2 ###reference_blem2### via a joint spectrum problem involving commuting symmetric matrices. This also gives a way to reduce a representation into irreducible representations but cannot distinguish multiple copies of the same irreducible representation; that is, we only compute the reduction to a canonical representation.\nSection 5 ###reference_###: We discuss how representations involving multiple copies of the same irreducible representation can be fully reduced, leading to the solution of Problem 3 ###reference_blem3###.\nSection 6 ###reference_###: We outline the algorithms for solving Problem 3 ###reference_blem3### and Problem 2 ###reference_blem2### and discuss briefly their practical implementation using floating point arithmetic.\nSection 7 ###reference_###: We show examples including tensor products of irreducible representations (the Kronecker coefficients) and higher-order analogues, using a floating-point arithmetic implementation of the proposed algorithm. We also demonstrate polynomial complexity for hook and hook-like irreducible representations. 
Finally, we consider the problem of constructing a basis of orthogonal polynomials with respect to a tensor product weight in -dimensions so that permuting variables gives rise to irreducible representations of .\nSection 8 ###reference_###: We briefly discuss potential future work including better complexity algorithms, adaptation to Coxeter groups, and applications to sparse discretisations arising in numerical quadrature and solutions to partial differential equations on geometries with symmetries.\nRemark Most of the theoretical results can be adapted to non-orthogonal representations or though possibly with worse complexities. For example, the tridiagonalisation procedure in Lemma 3 ###reference_ma3### applies only to symmetric matrices and hence computing the relevant nullspaces will have worse complexity. Moreover, the practical implementation with non-orthogonal representations may be less reliable as algorithms involving non-orthogonal and non-symmetric matrices are prone to issues due to ill-conditioning, or in the case of computing an eigendecomposition may potentially fail.\nAcknowledgments: I thank Oded Yacobi (U. Sydney) for significant help in understanding the basics of representation theory and in particular [16 ###reference_b16###], as well as Peter Olver (U. Minnesota), Alex Townsend (Cornell), and Marcus Webb (U. Manchester) for helpful suggestions on drafts. We also thank the anonymous referees for their very helpful feedback. This work was completed with the support of the EPSRC grant EP/T022132/1 \u201cSpectral element methods for fractional differential equations, with applications in applied analysis and medical imaging\u201d and the Leverhulme Trust Research Project Grant RPG-2019-144 \u201cConstructive approximation\ntheory on and inside algebraic curves and surfaces\u201d."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": " Irreducible representations of the symmetric group",
15
+ "text": "In this section we review some basic facts of representation theory of the symmetric group, roughly following [16 ###reference_b16###]. The irreducible representations can be identified with partitions:\nA partition of is a tuple of integers such that . We use the notation to denote that is a partition of .\nA basis for the vector space associated with an irreducible representation can be identified with Young tableaux:\nA Young tableau is a chain of partitions where such that is equivalent to with\none entry increased or one additional entry equal to one. The set of all Young tableaux of length is denoted .\nA natural way to visualise a partition is via a Young diagram: if then a Young diagram consists of rows of boxes where the th row has exactly boxes. A natural way to visualise a Young tableau is by filling in the boxes of a Young diagram according to the\norder in which the new boxes appear in the sequence of Young diagrams corresponding to . For example, the Young tableau can be depicted\nThe number of Young tableaux corresponding to a given partition can be computed via the hook length formula. Young tableaux can be used\nto build an explicit irreducible matrix representation:\nAssociated with any partition is a canonical orthogonal matrix irreducible representation . The dimension of the irreducible representation\nis the number of Young tableaux and hence we can parameterise by an enumeration of Young tableaux . The entries of the (symmetric) generators are\ngiven by the following rules (cf. [16 ###reference_b16###, (6.5)]):\nFor the Young tableau , if box and are in the same row then .\nFor the Young tableau , if box and are in the same column then .\nFor the Young tableau , if box and are not in the same row or column then let be such that \nis equal to except with boxes and swapped. 
If is the index of box , then for the axial distance we have\nAll other entries are zero.\nWe now introduce the Gelfand\u2013Tsetlin (GZ) algebra, which is a commutative algebra that we shall utilise to determine the multiplicities of the irreducible representations as well as decompose a representation into its canonical representation.\nThe Young\u2013Jucys\u2013Murphy (YJM)-generators are\nwhere is cyclic notation for the permutation that swaps and .\nThe Gelfand\u2013Tsetlin (GZ) algebra, a sub-algebra of the group algebra of , is generated by the YJM generators [16 ###reference_b16###, Corollary 2.6]:\n\nNote that a representation induces a representation . We shall see that are commuting matrices whose joint spectrum encodes Young tableaux, which in turn encodes the multiplicities of the irreducible representations. In particular, Young tableaux appear in the spectrum as content vectors:\nA content vector satisfies the following:\n.\nfor .\nIf for some then\nThere is a bijection between content vectors and Young Tableaux, [16 ###reference_b16###, Proposition 5.3], which we denote . This map can be constructed as follows: if the box in the -th row and -th column in a Young tableau has the number in it then the -th entry of the corresponding content vector is . For example, for the Young diagram with partition the contents are\nand so the Young tableau in (2 ###reference_###) corresponds to the content vector . The map from content vectors to Young tableaux consists of filling in the boxes in the order the diagonals appear. In particular, if in the th entry of the content vector we have an integer which has appeared times then, if is non-negative, the box is equal to , otherwise, if is negative, the box is equal to .\nWe now come to the key result: are diagonal for all irreducible representations and their entries encode all content vectors associated with the corresponding partition. 
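Before stating the key diagonality result, the tableau-to-content-vector map just described can be sketched directly (the list-of-rows encoding of a tableau is an illustrative choice, not the paper's):

```python
def content_vector(tableau):
    """Content vector of a standard Young tableau: the k-th entry is
    (column - row), 0-indexed, of the box containing the number k."""
    n = sum(len(row) for row in tableau)
    c = [0] * n
    for i, row in enumerate(tableau):
        for j, entry in enumerate(row):
            c[entry - 1] = j - i
    return c

# The standard tableau with rows 1 2 4 / 3 5 of the partition (3, 2):
print(content_vector([[1, 2, 4], [3, 5]]))   # -> [0, 1, -1, 2, 0]
```

Note that the first entry of the output is 0, consistent with the defining properties of a content vector above.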
The following is equivalent to [16 ###reference_b16###, Theorem 5.8]:\nis diagonal and each position on the main diagonal gives a content vector: for each\nwhere are an enumeration of Young tableaux."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": " From representation theory to linear algebra",
21
+ "text": "A classic result in representation theory is that all representations of finite groups are decomposable:\n[7 ###reference_b7###, Proposition 1.8]\nFor any representation of a finite group there is a decomposition\nwhere are distinct irreducible representations. The decompositions of into a direct sum of the summands is unique, as are the that occur (up to isomorphism) and their multiplicities .\nIn terms of concrete linear algebra and specialising to the case of where we are given an orthogonal matrix representation , we denote the subspaces whose intersection is that are all isomorphic to where for .\nWe shall use Schur\u2019s Lemma to show that have certain orthogonality properties:\n[7 ###reference_b7###, Schur\u2019s Lemma 1.7] If and are irreducible representations of and is a -module homomorphism, then\nEither is an isomorphism, or .\nIf , then for some .\nThis lemma encodes the fact that bases corresponding to differing irreducible representations are automatically orthogonal to each other whereas bases corresponding to the same irreducible representation have inner products that are a scaled identity. We can use this to show that we can choose a basis for each so that the resulting representation is precisely of the form of Definition 3 ###reference_inition3###:\nThere exists with orthogonal columns, i.e., , such that and for we have\nConsider an orthogonal basis which spans : if has dimension then we can construct a matrix with orthogonal columns that span and note that invariance guarantees that there exists a representation such that\nBecause are isomorphic to , there exists a matrix that is invertible on so that for all we have therefore consider\nThis satisfies\nOrthogonality follows from Schur\u2019s lemma. In particular, consider the map defined by . This is a -module homomorphism:\nThus for some real constant . 
Thus define\n\u220e\nThe orthogonality property carries over to bases corresponding to different irreducible representations:\nfor some constants .\nConsider the map defined by . Similar to above this is also a -module homomorphism:\nThe corollary therefore follows from Schur\u2019s lemma, where the fact that is real follows since and have real entries.\n\u220e\nThe previous proposition guarantees that if all irreducible representations have trivial multiplicity () then is an orthogonal matrix. This is not necessarily the case when we have non-trivial multiplicities. However, we can guarantee the existence of an orthogonal matrix that fully block-diagonalises a representation by taking an appropriate linear combination of , thus guaranteeing that Problem 3 ###reference_blem3### has a solution:\nThere exists an orthogonal matrix such that\nwhere are all irreducible representations.\nRecall and consider the Cholesky decomposition\nwhere is upper-triangular. Note that is invertible as is a Gram matrix (associated with the first columns of ).\nDefine and , which has orthogonal columns as .\nWe then have\nThus satisfies the necessary properties.\n\u220e"
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": " Counting multiplicities via the Gelfand\u2013Tsetlin algebra and basis",
27
+ "text": "We now consider the solution of Problem 2 ###reference_blem2###: how do we use the existence of an orthogonal basis to determine how many copies of each irreducible representation are present in a given representation? The key will be the simultaneous diagonalisation of related commuting operators: the representations of the YJM-generators .\nLet us return to the example representation , where we have:\nNote the above are all symmetric matrices. In fact, they also commute. If we conjugate with the orthogonal matrix from (1 ###reference_###) that block-diagonalised the representation we have that the GZ generators are simultaneously diagonalised:\nThis property is general:\nIf is an orthogonal matrix representation then are symmetric and commute. In other words, there exists an orthogonal matrix that simultaneously diagonalises .\nOne approach to showing this is to note that hence is a symmetric matrix, and therefore so is .\nThe fact that commute follows since commute as outlined in [16 ###reference_b16###].\nAnother approach that is more illuminating to our problem is to appeal to Theorem 1 ###reference_orem1###: using the that fully block-diagonalises we have\nand Lemma 1 ###reference_ma1### guarantees that are diagonal.\n\u220e\nWe can use the fact that the eigenvalues of contain copies of the diagonal entries of to deduce the multiplicities from the joint spectrum.\nGiven an orthogonal matrix representation ,\n is the matrix containing the joint spectrum of . 
That is, if simultaneously diagonalises we have\nNote this is only uniquely defined up to permutation of rows: we choose the ordering so that the first column is non-decreasing, rows with the same entry in the first column are non-decreasing in the second column, and so forth.\nWe can deduce which irreducible representations are present from :\nThe rows of are content vectors corresponding to each Young tableau associated with the irreducible representation , repeated times.\nThis follows from the (second) proof of Proposition 3 ###reference_position3###.\n\u220e\nIn the case of the example representation we rearrange the eigenvalues from above into a matrix:\nThe rows of are content vectors, whose corresponding Young tableaux according to the map are:\nThat is, we have every possible Young tableau corresponding to the partitions , and (which is repeated twice). We can deduce from this that the corresponding multiplicities are and .\nWe have yet to discuss how to simultaneously diagonalise . Practically speaking, this is a well-studied problem with effective\niterative methods [1 ###reference_b1###]. However, we can use the fact that the eigenvalues are integers to ensure that the problem is solvable with a finite number of operations via a more traditional Diagonalize-One-then-Diagonalize-the-Other (DODO) approach.\ncan be simultaneously diagonalised in operations.\nWe first note that the eigenvalues of all lie between and since they correspond to entries of content vectors. A symmetric matrix with integer eigenvalues satisfying can be diagonalised in operations: first one can tridiagonalize\nwhere is a product of Householder reflections and is a symmetric tridiagonal matrix with the same eigenvalues as \nin operations. Determination of an orthogonal basis for the nullspace of a tridiagonal matrix requires operations using e.g. 
Gram\u2013Schmidt, thus can be diagonalised in operations by determination of the nullspaces of for each .\nThus we can diagonalise in operations (we know it has eigenvalues ) and conjugate for in a total of operations. These will be block-diagonalised according to the eigenvalues of hence we can deflate these matrices and repeat the process times on the sub-matrices.\n\u220e\nBy converting the content vectors associated with the joint spectrum to partitions (see Algorithm 4 ###reference_###) we can deduce the multiplicities of the irreducible representations:\nProblem 2 ###reference_blem2### can be solved in operations.\nRemark In practice, this complexity can be improved using randomised linear algebra, e.g., computing a single eigenvalue decomposition of a randomised linear combination of [9 ###reference_b9###] would reduce the complexity to ."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": " Decomposing multiple copies of the same representation",
33
+ "text": "The above results guarantee that one can compute a that simultaneously diagonalises the symmetric matrices . Moreover, we have block-diagonalised , but unfortunately multiple copies of the same irreducible representation are not necessarily decoupled. That is, is only guaranteed to reduce a representation to a canonical representation:\nSuppose simultaneously diagonalises the YJM generators , where we assume the are sorted by the corresponding partitions. Then it reduces to a canonical representation:\nwhere there exist matrices for irreducible representation of dimension such that\nMoreover, if is divided into blocks of size then each block is diagonal.\nWe know that from Theorem 1 ###reference_orem1### and are both orthogonal matrices that simultaneously diagonalise , so the only ambiguity arises due to each content vector being repeated according to the multiplicity of the irreducible representation. Thus the columns of associated with a given content vector must be linear combinations of the columns of associated with the same content vector, which ensures that has diagonal blocks.\n\u220e\nThus we further need a method to decompose a representation containing multiple copies of the same irreducible representation. As we have reduced the representation to a canonical representation we can consider each separately, which for simplicity we denote with corresponding partition . We mimic an approach for building an orthogonal basis for the eigenspace of a matrix whose corresponding eigenvalue is non-trivial: that is, the problem of finding orthogonal vectors such that . We instead find orthogonal matrices such that . Both problems consist of finding the nullspace of a matrix. 
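This nullspace problem can be made concrete with the vec/Kronecker-product identity: the intertwiner equation rho(g) X = X sigma(g) for all generators g is equivalent to vec(X) lying in a joint nullspace. A minimal numpy sketch, using the commutant of the S_3 permutation representation as a stand-in example (not an example from the paper):

```python
import numpy as np

n = 3
def perm_matrix(p):
    P = np.zeros((n, n))
    for i, j in enumerate(p):
        P[j, i] = 1.0
    return P

# Adjacent-transposition generators of S_3; imposing the intertwiner
# equation for the generators imposes it for the whole group.
gens = [perm_matrix((1, 0, 2)), perm_matrix((0, 2, 1))]

# vec(rho X - X sigma) = (I (x) rho - sigma^T (x) I) vec(X); here sigma = rho,
# so the nullspace is the commutant, of dimension 1^2 + 1^2 = 2.
I = np.eye(n)
A = np.vstack([np.kron(I, g) - np.kron(g.T, I) for g in gens])

_, s, _ = np.linalg.svd(A)
null_dim = int(sum(sv < 1e-10 for sv in s))
print(null_dim)   # -> 2
```

Here the nullspace is spanned by (the vectorisations of) the identity and the all-ones matrix, matching the multiplicities 1 and 1 of the trivial and standard representations in the permutation representation.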
A remarkable fact is that solving this nullspace problem in the na\u00efve way guarantees orthogonality.\nGiven an orthogonal matrix representation , we have\nfor an orthogonal matrix with if and only if\nfor a matrix with orthogonal columns satisfying\nwhere concatenates the columns of a matrix.\nGiven , define with . The fact that it satisfies (4 ###reference_###) follows immediately from the definition of the Kronecker product so we only need to show orthogonality. Writing we have for\nand (because the Frobenius norm squared of a matrix with orthogonal columns is equal to its rank)\nFor the other direction, define by (that is, we reshape the vector to be a matrix) and define . Note that (3 ###reference_###) is automatically satisfied by the definition of the Kronecker product so we need only show orthogonality.\nFrom Schur\u2019s lemma (Lemma 2 ###reference_ma2###) we know that for some constants . Writing this means that . Thus we have\nwhich ensures that is in fact orthogonal.\n\u220e\nIn the case where is defined as in Lemma 4 ###reference_ma4### the nullspace of as defined in (4 ###reference_###) can be recovered from the nullspace of only of its columns.\nFrom Lemma 4 ###reference_ma4### we know that has diagonal blocks and hence has only non-zero entries, or in particular each has non-zero entries, in the same locations. Dropping the known zero rows of will therefore be in the nullspace of with the corresponding columns dropped.\n\u220e\nProblem 3 ###reference_blem3### can be solved in operations.\nNote that can na\u00efvely be computed in operations via Gram\u2013Schmidt applied to the matrix in (4 ###reference_###), for a total of operations (using ). Using the previous corollary allows us to reduce the complexity by applying Gram\u2013Schmidt to an matrix, taking operations. Summing over these for completes the proof.\n\u220e\nRemark In practice many of the rows of the matrix are identically zero, and can be dropped without altering its nullspace. 
Such rows can be found in operations. In experiments this appears to reduce the overall complexity to , see Section 7 ###reference_###; however, whether this is true in general remains open and would require understanding the sparsity present in the generators of the irreducible representations ."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": " Algorithms",
39
+ "text": "Encoded in the above results are algorithms for solving Problems 1\u20133. Here we outline explicitly the stages of the algorithms and discuss briefly the practical implementation in floating point arithmetic. Algorithm 1 ###reference_### computes irreducible representation multiplicities, that is, it solves Problems 1 ###reference_blem1### and 2 ###reference_blem2###, as well as computing an orthogonal matrix that reduces a representation to canonical representations. Algorithm 2 ###reference_### builds on Algorithm 1 ###reference_### to fully decompose a representation into irreducible representations, solving Problem 3 ###reference_blem3###, including the case where there are non-trivial multiplicities.\nFor completeness we also include Algorithm 3 ###reference_###, which constructs the YJM-generators, and Algorithm 4 ###reference_###, which discusses the translation of a content vector to a partition.\nInput: Generators .\nOutput: Partitions where , corresponding multiplicities , and an orthogonal matrix that reduces a representation to a canonical representation.\nInput: Generators .\nOutput: An orthogonal matrix that fully decomposes a representation.\nInput: Generators .\nOutput: YJM generators .\nInput: Content vector\nOutput: Partition\nIn the practical implementation [17 ###reference_b17###] we use floating point arithmetic, and in particular we simultaneously diagonalise matrices using standard numerical methods for diagonalising matrices (e.g. we could use [1 ###reference_b1###] though in practice we use the simpler-to-implement diagonalize-one-then-diagonalize-the-other (DODO) approach), as opposed to the proposed exact method based on nullspace calculations. For the nullspace calculation in Algorithm 2 ###reference_### we use a standard method which is based on computing the Singular Value Decomposition (SVD) and taking the singular vectors associated with the smallest singular values, which will all be approximately zero. 
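As an illustration of the YJM construction underlying Algorithm 3, the following sketch builds the YJM images in the permutation representation of S_4 from the adjacent-transposition generators, using the conjugation identity (i, k) = s_{k-1} (i, k-1) s_{k-1} (function names are illustrative, not from the paper):

```python
import numpy as np

n = 4
def rho_s(k):                      # rho of the adjacent transposition (k, k+1), 1-indexed
    P = np.eye(n)
    P[[k - 1, k]] = P[[k, k - 1]]
    return P

s = {k: rho_s(k) for k in range(1, n)}

def rho_transposition(i, k):       # rho((i, k)) for i < k by repeated conjugation
    T = s[i]                       # start from (i, i+1)
    for j in range(i + 1, k):
        T = s[j] @ T @ s[j]        # (i, j+1) = s_j (i, j) s_j
    return T

# X_k = sum_{i < k} rho((i, k)) for k = 2, ..., n (X_1 = 0 is omitted).
X = [sum(rho_transposition(i, k) for i in range(1, k)) for k in range(2, n + 1)]

# The YJM images are symmetric, commute pairwise, and have integer eigenvalues:
assert all(np.allclose(Xk, Xk.T) for Xk in X)
assert all(np.allclose(A @ B, B @ A) for A in X for B in X)
print([np.round(np.linalg.eigvalsh(Xk)).astype(int).tolist() for Xk in X])
# -> [[-1, 1, 1, 1], [-1, 1, 2, 2], [-1, 2, 2, 3]]
```

These eigenvalue lists are the columns of the joint spectrum for the permutation representation of S_4, i.e. the second, third, and fourth entries of the content vectors (0, 1, 2, 3), (0, 1, 2, -1), (0, 1, -1, 2) and (0, -1, 1, 2).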
We also use sparse matrix data structures to further improve the computational cost. Using floating point arithmetic introduces round-off error; however, in practice the error is proportional to machine epsilon () and the values of the approximate eigenvalues can be rounded to exact integers: that is, while there is no proof of correctness, when the algorithm succeeds it can be verified, in particular as we have computed a that approximately block-diagonalises the representation and we have precise bounds on the errors in floating point matrix multiplication [10 ###reference_b10###]. Note however that the dependence on black-box linear algebra software implemented in floating point means there is a small but non-zero chance of failure."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": " Examples",
45
+ "text": "We now present some examples. In Figure 1 ###reference_### we apply Algorithm 1 ###reference_### (using floating point arithmetic) to two representations resulting from tensor products: (i.e. computing Kronecker coefficients) and triple tensor product . In Figure 2 ###reference_### we demonstrate the cubic complexity of applying Algorithm 1 ###reference_### as the dimension grows by considering increasing tensor powers\nfor fixed irreducible representation .\nNote that for generic partitions the dimension of the irreducible representation grows combinatorially fast as increases. However, for hook irreducible representations, which are those associated with partitions of the form , the growth is only quadratic. Thus in Figure 2 ###reference_### we focus on tensor powers of a hook and an almost-hook (i.e. one associated with the partition where the number of ones is such that it is a partition of ) irreducible representations to scale to larger . This example shows that the computational cost in practice primarily depends on the dimension of the representation, not .\n\n###figure_1### ###figure_2### ###figure_3### For our next example, in Figure 3 ###reference_### we consider the growth in computational cost as increases for two examples: a hook tensor product and an almost-hook tensor product . The dimensions of the irreducible representations , and are , , and , respectively, so that the tensor products have dimensions and , hence we have demonstrated that the algorithm has polynomial complexity in , in particular at worst and operations. In Figure 3 ###reference_### we plot the time taken, showing that the numerical implementation using floating point arithmetic achieves the predicted complexities.\n###figure_4### ###figure_5### Finally, we consider an example that requires solving Problem 3 ###reference_blem3### via Algorithm 2 ###reference_###. 
Let denote a family of orthogonal polynomials (what follows also applies to monomials, which can be viewed as orthogonal polynomials on the (complex) unit circle) with respect to a weight so that\nare degree multivariate orthogonal polynomials with respect to a tensor product weight . Write all degree orthogonal polynomials in a vector where\n\nis the number of monomials of degree in variables. For any permutation we have that is another basis of orthogonal polynomials with respect to the same weight. As orthogonal polynomials are unique up to their span [6 ###reference_b6###], the basis must be an orthogonal linear combination of the original basis and hence there exist matrices such that\nIn fact these matrices can be deduced directly as simple permutation matrices from the definition above.\n###figure_6### ###figure_7### Note they have the requisite group structure:\nthus defines a representation. If we compute that block-diagonalises this representation using Algorithm 2 ###reference_### then is another basis of orthogonal polynomials but one for which permutations of variables can be applied efficiently:\nIn Figure 4 ###reference_### we consider the practical implementation of this algorithm. Whilst the proven complexity is , in practice we observe complexity, resulting from dropping zero rows in the nullspace system. In Figure 5 ###reference_### we measure the errors introduced by floating point arithmetic by comparing the numerically computed with the expected formula\nThe error in this computation grows like for machine epsilon where is a small constant, an error growth rate that is indicative of a numerically stable algorithm.\n###figure_8### ###figure_9###"
46
+ },
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": " Future work",
51
+ "text": "In this work we have established algorithms for decomposing representations of that have polynomial complexity as the dimension of the representation or increase. These bounds are unlikely to be sharp, and in experiments we often achieve better complexity, though proving these sharper complexity results remains open and likely depends on a careful understanding of the sparsity present in the irreducible representations. The complexity can potentially be further reduced using randomised linear algebra. Whether the complexity can be reduced to such an extent that general Kronecker coefficients can themselves be computed with high probability in polynomial time is unclear, and, given the super-algebraic growth in the dimensions of the representations, seems highly unlikely.\nThe practical implementation of the algorithm using floating point arithmetic is not guaranteed to be rigorous: it is possible that round-off errors alter the multiplicities or irreducible representations present. Fortunately, one can overcome this issue by employing techniques from the field of Validated Numerics [21 ###reference_b21###]. This could potentially take the form of an implementation of the algorithm where the errors can be controlled rigorously by incorporating interval/ball arithmetic, see e.g. [14 ###reference_b14###] for an effective implementation of ball arithmetic that includes computing eigenvalues with rigorous bounds. More practically, the results of the current algorithm implemented with floating point arithmetic can be verified a posteriori. To briefly sketch the argument: if is an approximation to a matrix that simultaneously diagonalises (coming from Algorithm 1 ###reference_###, Algorithm 2 ###reference_###, or elsewhere) then we can write\nwhere the norm of the matrix can be bounded using interval arithmetic. Thus we can write\nwhere can be bounded explicitly using Neumann series in terms of . 
Thus we have\nThis matrix shares the same spectrum as and the errors in the computation above can be bounded rigorously using interval arithmetic. The result will be a matrix with small intervals surrounding integers on the diagonal combined with small intervals around 0 for off-diagonal entries. Gershgorin\u2019s theorem can then be combined with the fact that the eigenvalues must be integers to prove precisely which eigenvalues are present. To establish that the eigenvalues of each share a common eigenvector and hence prove the correctness of we can use the results of [22 ###reference_b22###].\nThe algorithms introduced may be useful for introducing sparsity in numerical discretisations. Representation theory has been used in the context of constructing sparse Hankel matrices associated with cubature rules on triangles, squares and hexagons [4 ###reference_b4###]. Other potential applications include solving partial differential equations that commute with symmetry actions. Discretising such equations with a basis associated with irreducible representations (such as the orthogonal polynomials constructed in Section 7 ###reference_###) will induce sparsity in the resulting discretisation. There is therefore a clear need for extension of the results of this paper to representations of general Coxeter groups, which are connected to orthogonal polynomials whose weight is invariant under symmetry actions (cf. [6 ###reference_b6###]). Explicit irreducible representations are known for the dihedral [20 ###reference_b20###, Section 5.3] and the hyperoctahedral [8 ###reference_b8###, 15 ###reference_b15###] groups, which could potentially be used to adapt the proposed algorithm to these other groups."
52
+ }
53
+ ],
54
+ "appendix": [],
55
+ "tables": {},
56
+ "image_paths": {
57
+ "1(a)": {
58
+ "figure_path": "2211.12592v2_figure_1(a).png",
59
+ "caption": "Figure 1: \n\nLeft: multiplicities of irreducible representations in the tensor product \u03c1(3,2,1)\u2297\u03c1(2,2,2)tensor-productsubscript\ud835\udf0c321subscript\ud835\udf0c222\\rho_{(3,2,1)}\\otimes\\rho_{(2,2,2)}italic_\u03c1 start_POSTSUBSCRIPT ( 3 , 2 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 2 ) end_POSTSUBSCRIPT (Kronecker coefficients). Right: multiplicities of irreducible representations in the triple tensor product \u03c1(3,2,1)\u2297\u03c1(2,2,2)\u2297\u03c1(3,3)tensor-productsubscript\ud835\udf0c321subscript\ud835\udf0c222subscript\ud835\udf0c33\\rho_{(3,2,1)}\\otimes\\rho_{(2,2,2)}\\otimes\\rho_{(3,3)}italic_\u03c1 start_POSTSUBSCRIPT ( 3 , 2 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 2 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 3 , 3 ) end_POSTSUBSCRIPT.",
60
+ "url": "http://arxiv.org/html/2211.12592v2/x1.png"
61
+ },
62
+ "1(b)": {
63
+ "figure_path": "2211.12592v2_figure_1(b).png",
64
+ "caption": "Figure 1: \n\nLeft: multiplicities of irreducible representations in the tensor product \u03c1(3,2,1)\u2297\u03c1(2,2,2)tensor-productsubscript\ud835\udf0c321subscript\ud835\udf0c222\\rho_{(3,2,1)}\\otimes\\rho_{(2,2,2)}italic_\u03c1 start_POSTSUBSCRIPT ( 3 , 2 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 2 ) end_POSTSUBSCRIPT (Kronecker coefficients). Right: multiplicities of irreducible representations in the triple tensor product \u03c1(3,2,1)\u2297\u03c1(2,2,2)\u2297\u03c1(3,3)tensor-productsubscript\ud835\udf0c321subscript\ud835\udf0c222subscript\ud835\udf0c33\\rho_{(3,2,1)}\\otimes\\rho_{(2,2,2)}\\otimes\\rho_{(3,3)}italic_\u03c1 start_POSTSUBSCRIPT ( 3 , 2 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 2 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 3 , 3 ) end_POSTSUBSCRIPT.",
65
+ "url": "http://arxiv.org/html/2211.12592v2/x2.png"
66
+ },
67
+ "2": {
68
+ "figure_path": "2211.12592v2_figure_2.png",
69
+ "caption": "Figure 2: \nTime taken to compute tensor powers (\u03c1\u2297ksuperscript\ud835\udf0ctensor-productabsent\ud835\udc58\\rho^{\\otimes k}italic_\u03c1 start_POSTSUPERSCRIPT \u2297 italic_k end_POSTSUPERSCRIPT) for a hook (\u03c1(2,1,1,\u2026,1,1)subscript\ud835\udf0c211\u202611\\rho_{(2,1,1,\\ldots,1,1)}italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 1 , 1 , \u2026 , 1 , 1 ) end_POSTSUBSCRIPT) and an almost-hook (\u03c1(2,2,1,1,\u2026,1,1)subscript\ud835\udf0c2211\u202611\\rho_{(2,2,1,1,\\ldots,1,1)}italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 1 , 1 , \u2026 , 1 , 1 ) end_POSTSUBSCRIPT) irreducible representation for varying k\ud835\udc58kitalic_k and n\ud835\udc5bnitalic_n. We plot the time taken compared to the dimension of the resulting representation to demonstrate the cubic growth in complexity.",
70
+ "url": "http://arxiv.org/html/2211.12592v2/x3.png"
71
+ },
72
+ "3(a)": {
73
+ "figure_path": "2211.12592v2_figure_3(a).png",
74
+ "caption": "Figure 3: \n\nLeft: growth in the dimension of a hook (\u03c1(n\u22121,1)\u2297\u03c1(2,1,\u2026,1)tensor-productsubscript\ud835\udf0c\ud835\udc5b11subscript\ud835\udf0c21\u20261\\rho_{(n-1,1)}\\otimes\\rho_{(2,1,\\ldots,1)}italic_\u03c1 start_POSTSUBSCRIPT ( italic_n - 1 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 1 , \u2026 , 1 ) end_POSTSUBSCRIPT) and an almost-hook (\u03c1(n\u22122,1,1)\u2297\u03c1(2,2,1,\u2026,1)tensor-productsubscript\ud835\udf0c\ud835\udc5b211subscript\ud835\udf0c221\u20261\\rho_{(n-2,1,1)}\\otimes\\rho_{(2,2,1,\\ldots,1)}italic_\u03c1 start_POSTSUBSCRIPT ( italic_n - 2 , 1 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 1 , \u2026 , 1 ) end_POSTSUBSCRIPT) tensor product. Right: time to compute corresponding Kronecker coefficients, showing the timings match the predicted rate of \ud835\udcaa\u2062(n7)\ud835\udcaasuperscript\ud835\udc5b7{\\mathcal{O}}(n^{7})caligraphic_O ( italic_n start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT ) and \ud835\udcaa\u2062(n13)\ud835\udcaasuperscript\ud835\udc5b13{\\mathcal{O}}(n^{13})caligraphic_O ( italic_n start_POSTSUPERSCRIPT 13 end_POSTSUPERSCRIPT ) operations.",
75
+ "url": "http://arxiv.org/html/2211.12592v2/x4.png"
76
+ },
77
+ "3(b)": {
78
+ "figure_path": "2211.12592v2_figure_3(b).png",
79
+ "caption": "Figure 3: \n\nLeft: growth in the dimension of a hook (\u03c1(n\u22121,1)\u2297\u03c1(2,1,\u2026,1)tensor-productsubscript\ud835\udf0c\ud835\udc5b11subscript\ud835\udf0c21\u20261\\rho_{(n-1,1)}\\otimes\\rho_{(2,1,\\ldots,1)}italic_\u03c1 start_POSTSUBSCRIPT ( italic_n - 1 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 1 , \u2026 , 1 ) end_POSTSUBSCRIPT) and an almost-hook (\u03c1(n\u22122,1,1)\u2297\u03c1(2,2,1,\u2026,1)tensor-productsubscript\ud835\udf0c\ud835\udc5b211subscript\ud835\udf0c221\u20261\\rho_{(n-2,1,1)}\\otimes\\rho_{(2,2,1,\\ldots,1)}italic_\u03c1 start_POSTSUBSCRIPT ( italic_n - 2 , 1 , 1 ) end_POSTSUBSCRIPT \u2297 italic_\u03c1 start_POSTSUBSCRIPT ( 2 , 2 , 1 , \u2026 , 1 ) end_POSTSUBSCRIPT) tensor product. Right: time to compute corresponding Kronecker coefficients, showing the timings match the predicted rate of \ud835\udcaa\u2062(n7)\ud835\udcaasuperscript\ud835\udc5b7{\\mathcal{O}}(n^{7})caligraphic_O ( italic_n start_POSTSUPERSCRIPT 7 end_POSTSUPERSCRIPT ) and \ud835\udcaa\u2062(n13)\ud835\udcaasuperscript\ud835\udc5b13{\\mathcal{O}}(n^{13})caligraphic_O ( italic_n start_POSTSUPERSCRIPT 13 end_POSTSUPERSCRIPT ) operations.",
80
+ "url": "http://arxiv.org/html/2211.12592v2/x5.png"
81
+ },
82
+ "4(a)": {
83
+ "figure_path": "2211.12592v2_figure_4(a).png",
84
+ "caption": "Figure 4: \n\nTime taken to block-diagonalise a representation generated from permuting variables of tensor product orthogonal polynomials in n\ud835\udc5bnitalic_n variables. The complexity of the implementation appears to be \ud835\udcaa\u2062(n\u2062dn,p3)\ud835\udcaa\ud835\udc5bsuperscriptsubscript\ud835\udc51\ud835\udc5b\ud835\udc5d3{\\mathcal{O}}(nd_{n,p}^{3})caligraphic_O ( italic_n italic_d start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ) operations, which is better than the proven complexity of \ud835\udcaa\u2062(n\u2062dn,p4)\ud835\udcaa\ud835\udc5bsuperscriptsubscript\ud835\udc51\ud835\udc5b\ud835\udc5d4{\\mathcal{O}}(nd_{n,p}^{4})caligraphic_O ( italic_n italic_d start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ), where dn,psubscript\ud835\udc51\ud835\udc5b\ud835\udc5dd_{n,p}italic_d start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT is the dimension of the representation, in this case the number of monomials of degree p\ud835\udc5dpitalic_p in n\ud835\udc5bnitalic_n\nvariables. Left: we fix the number of variables n\ud835\udc5bnitalic_n and let the order of the polynomials increase. Right: we fix the order p\ud835\udc5dpitalic_p of the polynomials and increase the number of variables n\ud835\udc5bnitalic_n.",
85
+ "url": "http://arxiv.org/html/2211.12592v2/x6.png"
86
+ },
87
+ "4(b)": {
88
+ "figure_path": "2211.12592v2_figure_4(b).png",
89
+ "caption": "Figure 4: \n\nTime taken to block-diagonalise a representation generated from permuting variables of tensor product orthogonal polynomials in n\ud835\udc5bnitalic_n variables. The complexity of the implementation appears to be \ud835\udcaa\u2062(n\u2062dn,p3)\ud835\udcaa\ud835\udc5bsuperscriptsubscript\ud835\udc51\ud835\udc5b\ud835\udc5d3{\\mathcal{O}}(nd_{n,p}^{3})caligraphic_O ( italic_n italic_d start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT ) operations, which is better than the proven complexity of \ud835\udcaa\u2062(n\u2062dn,p4)\ud835\udcaa\ud835\udc5bsuperscriptsubscript\ud835\udc51\ud835\udc5b\ud835\udc5d4{\\mathcal{O}}(nd_{n,p}^{4})caligraphic_O ( italic_n italic_d start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT ), where dn,psubscript\ud835\udc51\ud835\udc5b\ud835\udc5dd_{n,p}italic_d start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT is the dimension of the representation, in this case the number of monomials of degree p\ud835\udc5dpitalic_p in n\ud835\udc5bnitalic_n\nvariables. Left: we fix the number of variables n\ud835\udc5bnitalic_n and let the order of the polynomials increase. Right: we fix the order p\ud835\udc5dpitalic_p of the polynomials and increase the number of variables n\ud835\udc5bnitalic_n.",
90
+ "url": "http://arxiv.org/html/2211.12592v2/x7.png"
91
+ },
92
+ "5(a)": {
93
+ "figure_path": "2211.12592v2_figure_5(a).png",
94
+ "caption": "Figure 5: \n\nThe maximum entry-wise error comparing Q\u22a4\u2062\u03c1n,p\u2062(\u03c4k)\u2062Qsuperscript\ud835\udc44topsubscript\ud835\udf0c\ud835\udc5b\ud835\udc5dsubscript\ud835\udf0f\ud835\udc58\ud835\udc44Q^{\\top}\\rho_{n,p}(\\tau_{k})Qitalic_Q start_POSTSUPERSCRIPT \u22a4 end_POSTSUPERSCRIPT italic_\u03c1 start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT ( italic_\u03c4 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) italic_Q with the formula for the generators of the irreducible representations, where Q\ud835\udc44Qitalic_Q is computed using the Algorithm 2 with floating point arithmetic. The error is proportional to the dimension of the representation which is indicative of a stable numerical algorithm. The left/right figures correspond to the same simulations as Figure 4.",
95
+ "url": "http://arxiv.org/html/2211.12592v2/x8.png"
96
+ },
97
+ "5(b)": {
98
+ "figure_path": "2211.12592v2_figure_5(b).png",
99
+ "caption": "Figure 5: \n\nThe maximum entry-wise error comparing Q\u22a4\u2062\u03c1n,p\u2062(\u03c4k)\u2062Qsuperscript\ud835\udc44topsubscript\ud835\udf0c\ud835\udc5b\ud835\udc5dsubscript\ud835\udf0f\ud835\udc58\ud835\udc44Q^{\\top}\\rho_{n,p}(\\tau_{k})Qitalic_Q start_POSTSUPERSCRIPT \u22a4 end_POSTSUPERSCRIPT italic_\u03c1 start_POSTSUBSCRIPT italic_n , italic_p end_POSTSUBSCRIPT ( italic_\u03c4 start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) italic_Q with the formula for the generators of the irreducible representations, where Q\ud835\udc44Qitalic_Q is computed using the Algorithm 2 with floating point arithmetic. The error is proportional to the dimension of the representation which is indicative of a stable numerical algorithm. The left/right figures correspond to the same simulations as Figure 4.",
100
+ "url": "http://arxiv.org/html/2211.12592v2/x9.png"
101
+ }
102
+ },
103
+ "validation": true,
104
+ "references": [
105
+ {
106
+ "1": {
107
+ "title": "Numerical methods for simultaneous diagonalization.",
108
+ "author": "A. Bunse-Gerstner, R. Byers, and V. Mehrmann.",
109
+ "venue": "SIAM J. Mat. Anal. Appl., 14(4):927\u2013949, 1993.",
110
+ "url": null
111
+ }
112
+ },
113
+ {
114
+ "2": {
115
+ "title": "The complexity of computing Kronecker coefficients.",
116
+ "author": "P. B\u00fcrgisser and C. Ikenmeyer.",
117
+ "venue": "In Discrete Mathematics and Theoretical Computer Science, pages\n357\u2013368. Discrete Mathematics and Theoretical Computer Science, 2008.",
118
+ "url": null
119
+ }
120
+ },
121
+ {
122
+ "3": {
123
+ "title": "Computing multiplicities of Lie group representations.",
124
+ "author": "M. Christandl, B. Doran, and M. Walter.",
125
+ "venue": "In 2012 IEEE 53rd Annual Symposium on Foundations of Computer\nScience, pages 639\u2013648. IEEE, 2012.",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "4": {
131
+ "title": "A moment matrix approach to computing symmetric cubatures.",
132
+ "author": "M. Collowald and E. Hubert.",
133
+ "venue": "Technical report, hal-01188290, 2015.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "5": {
139
+ "title": "Computing irreducible representations of groups.",
140
+ "author": "J. D. Dixon.",
141
+ "venue": "Maths Comp., 24(111):707\u2013712, 1970.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "6": {
147
+ "title": "Orthogonal Polynomials of Several Variables, volume 155.",
148
+ "author": "C. F. Dunkl and Y. Xu.",
149
+ "venue": "Cambridge University Press, 2014.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "7": {
155
+ "title": "Representation Theory: a First Course, volume 129.",
156
+ "author": "W. Fulton and J. Harris.",
157
+ "venue": "Springer Science & Business Media, 2013.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "8": {
163
+ "title": "Representations of the hyperoctahedral groups.",
164
+ "author": "L. Geissinger and D. Kinch.",
165
+ "venue": "J. Algebra, 53(1):1\u201320, 1978.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "9": {
171
+ "title": "Randomized joint diagonalization of symmetric matrices.",
172
+ "author": "H. He and D. Kressner.",
173
+ "venue": "SIAM J. Matrix Anal. Appl., 45(1):661\u2013684, 2024.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "10": {
179
+ "title": "Accuracy and Stability of Numerical Algorithms.",
180
+ "author": "N. J. Higham.",
181
+ "venue": "SIAM, 2002.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "11": {
187
+ "title": "Decomposing linear representations of finite groups.",
188
+ "author": "K. Hymabaccus and D. Pasechnik.",
189
+ "venue": "arXiv preprint arXiv:2007.02459, 2020.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "12": {
195
+ "title": "Repndecomp: A GAP package for decomposing linear representations of\nfinite groups.",
196
+ "author": "K. Hymabaccus and D. Pasechnik.",
197
+ "venue": "Journal of Open Source Software, 5(50):1835, 2020.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "13": {
203
+ "title": "On vanishing of kronecker coefficients.",
204
+ "author": "C. Ikenmeyer, K. D. Mulmuley, and M. Walter.",
205
+ "venue": "Computational Complexity, 26:949\u2013992, 2017.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "14": {
211
+ "title": "Arb: a C library for ball arithmetic.",
212
+ "author": "F. Johansson.",
213
+ "venue": "ACM Communications in Computer Algebra, 47(4):166\u2013169, 2013.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "15": {
219
+ "title": "Representations of the hyperoctahedral group .",
220
+ "author": "C. Musili.",
221
+ "venue": "Representations of Finite Groups, pages 197\u2013220, 1993.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "16": {
227
+ "title": "A new approach to representation theory of symmetric groups.",
228
+ "author": "A. Okounkov and A. Vershik.",
229
+ "venue": "Selecta Mathematica New Series, 2(4):581\u2013606, 1996.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "17": {
235
+ "title": "NumericalRepresentationTheory.jl v0.3,\nhttps://github.com/dlfivefifty/NumericalRepresentationTheory.jl,\n2024.",
236
+ "author": "S. Olver.",
237
+ "venue": null,
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "18": {
243
+ "title": "On the complexity of computing Kronecker coefficients.",
244
+ "author": "I. Pak and G. Panova.",
245
+ "venue": "Computational Complexity, 26(1):1\u201336, 2017.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "19": {
251
+ "title": "Replab: A computational/numerical approach to representation theory.",
252
+ "author": "D. Rosset, F. Montealegre-Mora, and J.-D. Bancal.",
253
+ "venue": "In Quantum Theory and Symmetries, pages 643\u2013653. Springer,\n2021.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "20": {
259
+ "title": "Linear Representations of Finite Groups.",
260
+ "author": "J.-P. Serre.",
261
+ "venue": "Springer, 1977.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "21": {
267
+ "title": "Validated Numerics: a Short Introduction to Rigorous\nComputations.",
268
+ "author": "W. Tucker.",
269
+ "venue": "Princeton University Press, 2011.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "22": {
275
+ "title": "Error bounds for computed eigenvalues and eigenvectors.",
276
+ "author": "T. Yamamoto.",
277
+ "venue": "Numerische Mathematik, 34:189\u2013199, 1980.",
278
+ "url": null
279
+ }
280
+ }
281
+ ],
282
+ "url": "http://arxiv.org/html/2211.12592v2"
283
+ }
20240722/2212.11055v5.json ADDED
20240722/2212.14084v2.json ADDED
20240722/2301.02268v2.json ADDED
20240722/2301.12554v5.json ADDED
20240722/2303.16593v2.json ADDED
@@ -0,0 +1,405 @@
1
+ {
2
+ "title": "Investigating the Design Considerations for Integrating Text-to-Image Generative AI within Augmented Reality Environments",
3
+ "abstract": "Generative Artificial Intelligence (GenAI) has emerged as a fundamental component of intelligent interactive systems, enabling the automatic generation of multimodal media content. The continuous enhancement in the quality of Artificial Intelligence-Generated Content (AIGC), including but not limited to images and text, is forging new paradigms for its application, particularly within the domain of Augmented Reality (AR). Nevertheless, the application of GenAI within the AR design process remains opaque. This paper aims to articulate a design space encapsulating a series of criteria and a prototypical process to aid practitioners in assessing the aptness of adopting pertinent technologies. The proposed model has been formulated based on a synthesis of design insights garnered from ten experts, obtained through focus group interviews. Leveraging these initial insights, we delineate potential applications of GenAI in AR.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction & Related Work",
9
+ "text": "Augmented Reality (AR) serves as a means to connect the physical and digital worlds, supplementing or extending the former with rich decorative and informative visual effects. (Jung and tom Dieck, 2018 ###reference_b17###; Nebeling and Speicher, 2018 ###reference_b22###).\nThere are three major variations for AR depending on the display methods: Spatial Augmented Reality (SAR), Head-Mounted Display (HMD), and Hand-Held Display (HHD) (Wang et al., 2021 ###reference_b33###; Giunta et al., 2018 ###reference_b12###).\nEach of them presents particular advantages and disadvantages for various tasks and scenarios, as examined extensively in previous studies.\nFor instance, SAR has been identified to merge the real and virtual worlds by directly projecting light onto physical surfaces, but it poses privacy concerns as the displayed content is publicly visible (Roesner et al., 2014 ###reference_b27###; Kotsios, 2015 ###reference_b18###).\nWhilst HHD and HMD offer considerable advantages in terms of privacy protection, they inadvertently engender a somewhat isolated user experience that may impede the sharing of content (Birlo et al., 2022 ###reference_b5###).\nResearchers have also explored the integration of multiple display technologies.\nFor instance, Hartmann et al. 
proposed Augmented Augmented Reality (AAR) by combining wearable AR displays with wearable spatial augmented reality projectors to mitigate the isolated experience of using an individual AR device (Hartmann et al., 2020 ###reference_b14###).\nAll of these methods are meant to offer enhanced contextual information and immersion while retaining the user\u2019s focus on the physical world.\nTwo major components of AR content are images and text (Bach et al., 2017 ###reference_b3###; Weerasinghe et al., 2022 ###reference_b35###; Chiu et al., 2018 ###reference_b7###; Jing et al., 2019 ###reference_b16###), both of which currently require human involvement in their creation, such as 3D modeling using Unity or animation story scripting (Bassyouni and Elhajj, 2021 ###reference_b4###; V\u00e4\u00e4t\u00e4j\u00e4 et al., 2013 ###reference_b31###).\nHowever, recent advancements in Artificial Intelligence (AI) technology have improved the capabilities of AI-generated content (AIGC) to the extent that the boundary between human- and machine-generated content has become impressively blurred.\nFor instance, Large Language Models (LLMs), such as GPT and its variants (OpenAI, 2022 ###reference_b24###) and PaLM (Chowdhery et al., 2022 ###reference_b8###), are capable of generating high-quality conversational responses or completing text contextually (Dale, 2021 ###reference_b9###; Floridi and Chiriatti, 2020 ###reference_b11###).\nAnother recently prevalent type of generative model is the text-to-image model, such as Stable Diffusion (Rombach et al., 2022 ###reference_b28###), Disco Diffusion (Rafael-Patino et al., 2021 ###reference_b26###) and DALL-E 2 (Marcus et al., 2022 ###reference_b19###), which generates artistic images from given textual prompts.\nThese generative models have drawn major attention from various academic and industrial fields.\nDiverse attempts are being made to utilize them to facilitate daily routines and working processes (e.g., code 
generation, documentation translation, and illustration generation), but the study of the combination of generative AI and AR (AIGC+AR) remains underexplored and leaves room for discovery.\nThere are some existing works that applied generative AI models to specific AR scenarios, focusing on technical solutions or engineering tasks. For example, Asangika et al. proposed ARGAN (Sandamini et al., 2022 ###reference_b29###), an Augmented Reality-based fashion design system that leverages Controllable Generative Adversarial Networks to generate new attire from a sketch and theme image, allowing real-time visualization of virtual 2D apparel on a human body.\nHowever, generic design guidance and holistic system analysis of generative AI in AR have not yet been systematically addressed (Xu et al., 2023 ###reference_b36###), especially regarding the design space where AIGC is employed in AR displays.\nSuch guidance and discussion are crucial for design decision-making whenever a system involves AIGC in AR, because these designs dramatically impact the user experience and determine the success of the application.\nTherefore, in this paper, we introduce a design space that concerns multiple factors in the design process.\nThis design space is drawn from in-depth focus group interviews involving 10 experts who were instructed to use our AIGC+AR prototype and answer a set of questions.\nIn particular, our contributions are as follows:\nA prototype applying two prevalent generative models (an LLM and a text-to-image model) in three AR variations to allow the user to experience different AIGC+AR designs.\nA summary of focus-group discussions organized around \u201cuser-function-environment\u201d design thinking.\nPotential application scenarios for the combination of AIGC and AR.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "2. Prototype Implementation",
15
+ "text": "In order to enhance the more intuitive experience of subsequent interviewees and obtain more design references, we have developed an experimental prototype system called \u201cGenerativeAIR\u201d (Generative AI plus AR). It could be seen as an instance of AIGC+AR to be explored.\nThe system comprises of diverse generative models (two multimodal generative AI models) and AR devices (three AR display devices).\nIt takes speech as the interactive input of the system and then generates text and image contents through the generative models that will be displayed on different AR devices.\nRegarding the specific AR devices, we use Samsung Freestyle projector (Quin, 2022 ###reference_b25###) for SAR, HoloLens 2 (Microsoft, 2019 ###reference_b20###) for HMD, and Oneplus 10 Pro (OnePlus, 2022 ###reference_b23###) for HHD.\nIn detail, GenerativeAIR first uses the built-in microphone in the mobile phone to convert the voice of the user into text.\nNext, it leverages the application programming interface (API) provided by Google is for speech-to-text (Google, 2022 ###reference_b13###).\nAs for the AI generation part, GenerativeAIR uses ChatGPT (OpenAI, 2022 ###reference_b24###) for text-text generation and Stable Diffusion 2 (AI, 2022 ###reference_b2###) for text-image generation.\nNote that these models are mounted on the cloud rather than deployed locally on the phone.\nThe prompts for content generation and the generated results (image and text) are both transmitted through the wireless network, which inevitably leads to a certain but tolerable delay (the actual test delay in our network environment is at the millisecond level).\nFigure 1 ###reference_### shows the overall workflow of the system.\nIn the beginning, the user speaks through a microphone, and then the transcribed text is used as input into two AI models to generate corresponding images and text (as shown in Figure 1 ###reference_### (a)).\nConsequently, the generated media content is transmitted to 
different AR devices through the network and displayed (as shown in Figure 1 ###reference_### (b) to (d))."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "3. Methodology",
21
+ "text": "In order to elicit design factors for AIGC+AR in an open-ended fashion, hence, we held an internal interview and brainstormed.\nWe use focus group interviews in this process, which is particularly suitable for early exploration in identifying new problems and assessing users\u2019 needs (Morgan, 1997 ###reference_b21###).\nThe participants are allowed to freely use the GenerativeAIR and will be asked a set of questions regarding their experience with the AIGC+AR application."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "3.1. Participants",
27
+ "text": "We carried out in-depth focus group discussions with a panel of our ten authors, comprised of six males and four females. Participants were identified by the indices P1 to P10, and their backgrounds varied: 6 participants were academic researchers from different disciplines (computer science (4), mechanical engineering (1), and design (1)); and the remaining 4 were working professionals from various industries ( UI/UX design (2), telecommunications (1), and IT (1)). The mean age of the participants was 28.7 years (SD=6.90), and all of them had at least two years of experience studying the technology or design of AI or AR."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "3.2. Procedure",
33
+ "text": "We conducted three focus groups with a total of ten participants (G1=3; G2=3; G3=4), each lasting approximately 80 minutes and consisting of five steps. Firstly, the moderator introduced the research purpose (~5 mins). Secondly, participants were asked to freely experience the GenerativeAIR system and respond to any questions raised during the process (~15 minutes). Thirdly, participants provided self-introductions and shared their initial impressions of the GenerativeAIR system (~10 mins). Fourthly, In the main discussion participants freely discussed two topics: RQ1) What are the characteristics need to be considered when comparing AIGC and AR with other related technologies; RQ2) what features should be envisioned when developing the AIGC and AR technology itself (~40 mins). Finally, a summary and debriefing of the discussion was provided (~10 minutes). It is worth mentioning that in the fourth step, for the first question RQ1, we sent a questionnaire to the participants, asking them to score and compare AIGC and AR with their related technologies. More details are in the following section 4.2 ###reference_###."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "3.3. Analysis",
39
+ "text": "The discussions in each focus group were recorded and later transcribed and coded using Grounded Theory (Strauss and Corbin, 1998 ###reference_b30###) by our two authors. To ensure the validity of the motivation categorization, efforts were made to minimize the influence of less logical statements that are common in focus groups. Specifically, the moderator encouraged participants to reflect on and verbalize the underlying logical meaning behind their statements. During the coding phase, less logical statements without support from other statements were excluded as evidence. Furthermore, a visualization analysis is also involved for the first question RQ1, in order to better clarify the consideration factors in system design.\n###figure_2###"
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "4. Discussions & Findings",
45
+ "text": "In this section, we present and analyze the results of focus group discussions on the design factors that need to be paid attention to when integrating AIGC and AR by answering the following two research questions: the first question is to investigate the importance of the overall system by presenting external characteristics; the second question is for studying the performance of the system itself through clarifying internal differences."
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "4.1. RQ1: What are the characteristics need to be considered when comparing AIGC and AR with other related technologies?",
51
+ "text": "In order to identify the advantages and disadvantages of AIGC+AR in comparison to related technologies and discover more suitable application scenarios in the future, a visualization method to qualitatively compare the characteristics of distinct dimension was implemented (Feger et al., 2022 ###reference_b10###). Given the lack of related work that is similar to the overall system, we separately compared and analyzed its two components, namely, AR display and generative AI. The comparison was based on five different dimensions for both display performance (Figure 2 ###reference_### (a)) and content generation performance (Figure 2 ###reference_### (b)). The three gray pentagons of different sizes in the figure represent low, medium, and high levels, respectively. It should be noted that these ratings are based on extensive discussions with 10 participants, but they were not exhibited in any systematically reviewed literature. Therefore, these results cannot be considered as rigorous or unique findings, nor are they conclusive regarding the significance of related technologies. Rather, they are intended to address the external characteristics for design considerations.\nWith respect to display performance, most participants generally believed the AR technology is superior to traditional monitors in terms of functionality, interactivity, and immersion. However, it comes at a higher cost and with lower fidelity. Here, we define that the dimension \u201cfidelity\u201d refers to the degree of similarity between the displayed virtual objects and the physical world. For example, participant P2 expressed that using AR displays can obtain more interesting and rich experiences, but the displayed virtual objects are still very different from the physical objects in the real world by saying: \u201cI was very obsessed with Pok\u00e9mon GO, a mobile AR game. Its novel operating experience and interesting settings brought me a lot of fun. 
Yet the display effect of Pokemon in the game is not satisfactory. For example, sometimes Pikachu will appear in the air on the edge of my table or the light and shadow of the displayed trees look strange. It is easy for people to realize that these virtual objects are fake.\u201d. Considering the current gap between virtual simulation and the real world, some participants thought that a higher proportion of virtual components in the display might reduce the user\u2019s real experience of the physical world. For example, participant P6 worried that too many 3D virtual objects could aggravate her dizziness and cause discomfort, saying: \u201cI have 3D motion sickness, so I prefer AR to VR because it\u2019s less virtual and I feel better. I\u2019m looking forward to having a custom function for the displayed virtual part, so that I can easily decide its position, size, and proportion of the screen.\u201d. Certainly, there may be more complicated factors in practical cases that need to be considered in depth in future work.\nFor content generation performance, all participants agreed that generative AI has an unparalleled advantage over human generation in terms of speed and complexity, whether compared to human generation alone or to human-machine collaborative generation. For example, participant P1 greatly appreciated this convenience, saying \u201cI am a painting enthusiast. Imagine that you only need to say a few words to the machine to generate a Van Gogh-style Opera House of Sydney. Generative AI is really amazing for me!\u201d. Participant P2 also believed that this high efficiency improved her work productivity, saying \u201cChatGPT can help me program! I tried to assign some simple code tasks to it, and it completed them very well, which greatly improved my work efficiency.\u201d. 
In addition, we noticed that there was some controversy among the participants regarding the \u201caccuracy\u201d rating in Figure 2 ###reference_### (b), as accuracy refers to the gap between the generated content and the expected results of human generation, which is a relatively subjective indicator. Some participants (e.g., P1) felt that AIGC content (such as automatically generated art drawings) was better than self-made ones, saying: \u201cAI-generated paintings are better than mine\u201d, while others (e.g., P5) thought otherwise, saying: \u201cThe layout and storytelling of AI paintings are far from meeting my expectations\u201d. Hence, we hypothesize that, regardless of complicating factors such as time cost or individual ability differences, the most satisfactory results are achieved when people are involved in the generation process. Although the results generated by state-of-the-art AI models are already close to human expectations in some specific cases, we consider the \u201cmedium\u201d rating to be a cautious choice."
52
+ },
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "4.2. RQ2: What features should be envisioned when developing the AIGC and AR technology itself?",
57
+ "text": "The comparison of AR and AIGC and related technologies stimulates ideas for application scenarios. Further exploration of their technical features and details can improve interactive experience and system performance.\nRegarding display form, there are 7 participants who all involved remarks with similar meanings: the portability of AR glasses and mobile phones distinguishes them from stationary display methods like SAR. As almost everyone now owns a smartphone, it is expected to be the most common way for following related work on generative AR display. For example, the participant P3 mentioned that mobile phone AR (HHD) has unparalleled advantages in mobility and flexibility compared to the other two AR display methods by saying: \u201cI\u2019m not always comfortable wearing AR glasses, and the projector works better at night. Compared with these time and space constraints, the mobile phone is more flexible and convenient because I can carry it with me anywhere and take it out to take a look at it at any time.\u201d. Another participant, P10, chimed in that head-mounted AR displays (HMDs) are generally expensive and have limited uses. He expressed his opinion by saying: \u201cAlthough I was very impressed when I first tried Hololens 2, that feeling quickly faded away. I couldn\u2019t help but wonder if there was a good reason for me to spend over three thousand dollars on a new gadget that doesn\u2019t have much practical value. For me, the answer seems to be no.\u201d.\nOn the other hand, 4 participants who pointed out the deficiencies of HHD, i.e., the display scope of a projector is much larger than a mobile phone screen, which can hinder the recognition of generated text in AR apps. 
For example, participant P8 noted that the content displayed on a mobile phone was difficult to view while moving, saying: \u201cI realized that it was difficult for me to stay focused on what was displayed on my phone when I was moving around, especially small text content.\u201d. Therefore, tasks or scenes that rely heavily on text generation are not suitable for HHD devices.\nEight participants highlighted that privacy and accessibility are important considerations, especially in multi-user collaboration and sharing scenarios. For instance, participant P5 proposed that AIGC should be able to make some adjustments according to different situations, saying: \u201cContent privacy issues need to be taken seriously. For different scenarios or different display modes, the displayed media content can be displayed in layers according to different permissions. In the past, this matter was usually handled by humans, but now it may be handed over to AI for automatic processing.\u201d. Potential solutions to be adopted for our AIGC+AR project include combining multiple display methods to surpass their limitations and providing hierarchical permissions for different users based on AI identification and authentication.\nAdditionally, we noted from the interview conversations that, as the main carriers of information, 2D and 3D content generation differ significantly in user experience. For image generation, many participants recognized that 3D virtual images can greatly increase user immersion and improve system usability. For example, participant P9 expressed interest in trying on 3D virtual clothes, saying: \u201cThe idea of AR virtual trying on clothes is not that new, it is very interesting but also a bit troublesome because the clothes to try on always need to be manually configured by humans. Now generative AI provides new possibilities, and it may be very interesting to change clothes by speaking.\u201d. 
Nevertheless, some participants also rejected 3D content, such as the above-mentioned P6 with 3D motion sickness, who said: \u201cI don\u2019t think 3D objects in AR are necessary for me until I find a solution to my vertigo, 2D and 3D objects look fake anyway\u201d.\nText generation, meanwhile, is more sensitive to display size and requires sufficient space or dynamic display methods like scrolling or refreshing. One participant, P7, stated: \u201cAfter trying your GenerativeAIR, I found that when the text generated by AI becomes too much, it is very difficult for me to read. Firstly, because of the limited size of some display devices such as head-mounted displays or the small mobile screen. Secondly, since the generated content is not designed into a good layout and interaction, I have no way to adjust them.\u201d. Applying ChatGPT for dialog generation and display on smaller mobile phone screens, for example, may not be user-friendly; converting text interaction to audio interaction is a practicable alternative. As P7 supplemented: \u201cOf course, we don\u2019t have to use our eyes to see. For text, it is also feasible to communicate only by voice for me.\u201d."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "5. Design Space",
63
+ "text": "Via clustering and merging, we have condensed the multiple factors gathered from the focus group interviews into three overarching categories, namely \u201cuser\u201d, \u201cfunction\u201d, and \u201cenvironment\u201d. These categories are widely recognized as fundamental considerations in the design of interactive systems, as noted by previous studies (Villegas et al., 2018 ###reference_b32###; Wang et al., 2022 ###reference_b34###; Henricksen and Indulska, 2006 ###reference_b15###). Our objective is to explore the design space of AIGC+AR by investigating the relationships among these categories. Starting from user-centered thinking, our design space is structured into three aspects: what functions do the user need from the system (user-function), how the environment provides feedback to the user (user-environment) and what are the differences in the needs of different users (user-user)."
64
+ },
65
+ {
66
+ "section_id": "5.1",
67
+ "parent_section_id": "5",
68
+ "section_name": "5.1. User-Function Design",
69
+ "text": "The AIGC+AR system is envisioned to offer diverse functions for the unique user experience.\n\u201cAIGC can bring great flexibility and complement our design through its runtime content generation capability. (P5)\u201d Previously the artifacts in AR applications are created before the distribution and if there are items needed but not included in the package, it will be impossible to have them at runtime. AIGC opens up the opportunity to create high-quality content at runtime, which is particularly ideal for AR applications, because with AR, users are not isolated from their current context, where a level of personalization is attractive.\n\u201cAR systems can observe my surroundings and deliver the information to me, and this has a great potential that they can help to observe the world for people who can\u2019t. (P9 P10)\u201d The advancements in technology should promote equality instead of only benefiting the advantaged. People with disability may be unable to access certain form of context while their AR devices can and use AIGC to transform the information into the accessible form in real-time. There are existing work e.g.(Chen et al., 2020 ###reference_b6###) on using AI to help the people with disability from certain perspectives, and leveraging AR+AIGC can largely extend such efforts."
70
+ },
71
+ {
72
+ "section_id": "5.2",
73
+ "parent_section_id": "5",
74
+ "section_name": "5.2. User-Environment Design",
75
+ "text": "The interactivity with the environment makes AR different from VR, and also provides distinct design space for AIGC+AR systems.\n\u201cSometimes AR did not give me a feeling of reality, because the content cannot fit in the environment I was surrounded by. I understand the developers can\u2019t exhaustively design a collection of artifacts for every scenario, but it still somehow ruined the experience (P1 P7)\u201d The virtual scene and real environment are separated by a gap that breaks the fidelity and immersion of user experience. This gap can be filled with AIGC. One way is by assisting the content generation, through adapting the initial designed artifacts to fit the environment (\u201dAI Assist\u201d in 4.2 ###reference_###); and another way is by generating new elements for the unprepared environment (\u201dAI Generation\u201d in 4.2 ###reference_###). Dynamically combining\nthese two methods can enable a seamless experience through connecting the virtual scenes and real environments.\nThere are multiple dynamic factors involved in the AR system, typically the different AR display methods, the distinct scenarios, and various environmental factors. These can be catered by integrating AIGC into the system. For such a system, the feedback supplied by the environment to the user is mainly dependent on the presentation of the AR display. As mentioned above, the three AR display methods (SAR, HMD and HHD) have separate display performances, particularly in terms of functionality, portability and privacy. \u201cThe potential application scenarios of HHD based on mobile phones are vast and varied. (P3)\u201d. Alternatively, in different scenarios (e.g., indoors, outdoors, working and entertainment), environmental factors (e.g., light and sound) should be included in the design as well, which is beneficial for ameliorating user experience such as immersion and interactivity. AIGC can be leveraged to address such requirements. 
\u201cIt would be fun if a machine could somehow know my current mood and adapt the content and style of the generated image accordingly. (P4)\u201d."
76
+ },
77
+ {
78
+ "section_id": "5.3",
79
+ "parent_section_id": "5",
80
+ "section_name": "5.3. User-User Design",
81
+ "text": "AR systems should be aware of other users within the same space, and AIGC can serve as a piece of this puzzle.\nOne angle of user-user design is how AR+AIGC systems can coordinate to generate content for better content sharing and collaboration. The shared space can be complex and leveraging AIGC can avoid users being isolated. Specifically, users may have different roles in the same shared space, and have different needs. For instance, in Hartmann\u2019s work (Hartmann et al., 2020 ###reference_b14###), the concepts of \u201cpresentation user\u201d and \u201cexternal user\u201d were introduced. These two types of users engage with media content asymmetrically. For \u201cpresentation user\u201d, who conducts the AR device for presenting purpose, they care more about portability and privacy. For \u201cexternal users\u201d, their main needs are immersion, low cognitive cost and communication with other users. Therefore, both of AI part and AR part need to be coordinated for distinctive needs for multiple users in the same shared space, such as hierarchical user accessibility based on AI recognition and authentication, or content sharing and switching based on different AR display methods. \u201cIf the generated content involves my privacy, such as my photo albums and life vlog, I would like AI to understand which content can be shown to my friends. If it generates or shows what I don\u2019t want others to see content, it would be too embarrassing. (P2)\u201d."
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "6. Potential Applications",
87
+ "text": "###figure_3### In this section, we summarize and highlight three streams of potential applications enabled by AR+AIGC. Firstly, generative AI models can enhance real-time creative media generation with personalization, as illustrated in Figure 3 ###reference_### (a), where a boy wearing AR glasses interacts with a virtual teddy bear. Such generation can be largely personalized from various perspectives to cater the needs of users. One example is AR fitting room, which can be significantly improved functionally from the integration of AIGC. Moreover, people with disability, the minority, or people from unprivileged groups can also benefit from relevant applications with enhanced capabilities from the integration of AIGC. Secondly, AR+AIGC unlocks the potential of more smooth interactions with the environment and surroundings, as shown in Figure 3 ###reference_### (b). For example, AR games may become more realistic and AR pets may become more lively. Furthermore, GenerativeAIR can enable a better shared experience, and address privacy and privilege classification issues in multi-user scenarios by assigning hierarchical display content based on user permissions and privacy levels through id-authentication, as depicted in Figure 3 ###reference_### (c).\nTo interpret these application scenarios, we employ a user-centered design approach as discussed in section 5 ###reference_###. Specifically, (a) and (b) are intended for single users, while (c) is designed for multiple users. GenerativeAIR offers various functions to meet diverse requirements, and the interaction between user and environment remains a persistent theme."
88
+ },
89
+ {
90
+ "section_id": "7",
91
+ "parent_section_id": null,
92
+ "section_name": "7. Limitations and Future Work",
93
+ "text": "This work aims to explore the potential design space of generative AI for using a simple prototype GenerativeAIR. The limitations of our work primarily lie in the technical aspects that require further improvement. For example, our current prototype only generates 2D images using the Stable Diffusion model, and we acknowledge the potential benefits of generating 3D content in AR. Additionally, our prototype is currently offline and lacks real-time interaction, hindering its practical application. Future work will focus on implementing real-time functionality and integrating additional software and hardware to enrich the system\u2019s functions. Moreover, we have not addressed the hierarchical difficulty of privacy and permissions in multi-user scenarios, which is a critical issue for collaborative and sharing settings."
94
+ },
95
+ {
96
+ "section_id": "8",
97
+ "parent_section_id": null,
98
+ "section_name": "8. Conclusion",
99
+ "text": "This paper introduces the concept of AIGC+AR and explores its design space. We first construct a prototype by integrating two text-input generative AI models with three common AR displays in section 2 ###reference_###. More details about our focus group could be seen in section 3 ###reference_###. Next, in section 4 ###reference_###, we provide a qualitative comparison of the advantages and disadvantages of our work compared to related studies. Also we discuss factors that need to be considered in technology development itself. Furthermore, a \u201cuser-fucntion-environment\u201d design thinking is proposed and discussed section 5 ###reference_###. Last, in section 6 ###reference_###, we present and analyze potential application scenarios."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {},
104
+ "image_paths": {
105
+ "1": {
106
+ "figure_path": "2303.16593v2_figure_1.png",
107
+ "caption": "Figure 1. The workflow and display effect of our GenerativeAIR prototype: (a) The user\u2019s speech into the microphone is converted into text, which is then fed into an AI model for generating artistic images and more text; (b) Generated content in Spatial Augmented Reality (SAR): an example of Samsung Freestyle project; (c) Generated content in Head-Mounted Display (HMD): an example of Microsoft HoloLens 2; (d) Generated content in Hand-Held Display (HHD): an example of OnePlus 10 Pro.",
108
+ "url": "http://arxiv.org/html/2303.16593v2/extracted/5747470/figures/prototype_processed.png"
109
+ },
110
+ "2": {
111
+ "figure_path": "2303.16593v2_figure_2.png",
112
+ "caption": "Figure 2. The comparisons of AR display+generative AI and their related techniques: (a) display performance comparison of AR, VR and normal monitor; (b) content-generation performance comparison of generative AI (machine), AI assist (machine+human) and human.",
113
+ "url": "http://arxiv.org/html/2303.16593v2/extracted/5747470/figures/comparison_compressed.png"
114
+ },
115
+ "3": {
116
+ "figure_path": "2303.16593v2_figure_3.png",
117
+ "caption": "Figure 3. Potential Applications of GenerativeAIR: (a) Boosting real-time creative media generation; (b) Smoothening interactions with surroundings and environment; (c) Facilitating multi-user collaboration.",
118
+ "url": "http://arxiv.org/html/2303.16593v2/extracted/5747470/figures/application_compressed.png"
119
+ }
120
+ },
121
+ "validation": true,
122
+ "references": [
123
+ {
124
+ "1": {
125
+ "title": "Stable Diffusion 2.",
126
+ "author": "Stability AI. 2022.",
127
+ "venue": "https://stability.ai/blog/stable-diffusion-v2-release.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "2": {
133
+ "title": "Drawing into the AR-CANVAS: Designing embedded visualizations for augmented reality. In Workshop on Immersive Analytics, IEEE Vis.",
134
+ "author": "Benjamin Bach, Ronell Sicat, Hanspeter Pfister, and Aaron Quigley. 2017.",
135
+ "venue": "",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "3": {
141
+ "title": "Augmented reality meets artificial intelligence in robotics: A systematic review.",
142
+ "author": "Zahraa Bassyouni and Imad H Elhajj. 2021.",
143
+ "venue": "Frontiers in Robotics and AI (2021), 296.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "4": {
149
+ "title": "Utility of optical see-through head mounted displays in augmented reality-assisted surgery: A systematic review.",
150
+ "author": "Manuel Birlo, PJ Eddie Edwards, Matthew Clarkson, and Danail Stoyanov. 2022.",
151
+ "venue": "Medical Image Analysis (2022), 102361.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "5": {
157
+ "title": "Unblind your apps: Predicting natural-language labels for mobile gui components by deep learning. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering. 322\u2013334.",
158
+ "author": "Jieshan Chen, Chunyang Chen, Zhenchang Xing, Xiwei Xu, Liming Zhu, Guoqiang Li, and Jinshui Wang. 2020.",
159
+ "venue": "",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "6": {
165
+ "title": "Interactive mobile augmented reality system for image and hand motion tracking.",
166
+ "author": "Pei-Hsuan Chiu, Po-Hsuan Tseng, and Kai-Ten Feng. 2018.",
167
+ "venue": "IEEE Transactions on Vehicular Technology 67, 10 (2018), 9995\u201310009.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "7": {
173
+ "title": "Palm: Scaling language modeling with pathways.",
174
+ "author": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022.",
175
+ "venue": "arXiv preprint arXiv:2204.02311 (2022).",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "8": {
181
+ "title": "GPT-3: What\u2019s it good for?",
182
+ "author": "Robert Dale. 2021.",
183
+ "venue": "Natural Language Engineering 27, 1 (2021), 113\u2013118.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "9": {
189
+ "title": "ElectronicsAR: design and evaluation of a mobile and tangible high-fidelity augmented electronics toolkit.",
190
+ "author": "Sebastian S Feger, Lars Semmler, Albrecht Schmidt, and Thomas Kosch. 2022.",
191
+ "venue": "Proceedings of the ACM on Human-Computer Interaction 6, ISS (2022), 700\u2013721.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "10": {
197
+ "title": "GPT-3: Its nature, scope, limits, and consequences.",
198
+ "author": "Luciano Floridi and Massimo Chiriatti. 2020.",
199
+ "venue": "Minds and Machines 30, 4 (2020), 681\u2013694.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "11": {
205
+ "title": "A review of augmented reality research for design practice: looking to the future.",
206
+ "author": "Lorenzo Giunta, Jamie O\u2019Hare, James Gopsill, Elies Dekoninck, et al. 2018.",
207
+ "venue": "DS 91: Proceedings of NordDesign 2018, Link\u00f6ping, Sweden, 14th-17th August 2018 (2018).",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "12": {
213
+ "title": "Speech-to-Text.",
214
+ "author": "Google. 2022.",
215
+ "venue": "https://cloud.google.com/speech-to-text.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "13": {
221
+ "title": "AAR: Augmenting a wearable augmented reality display with an actuated head-mounted projector. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. 445\u2013458.",
222
+ "author": "Jeremy Hartmann, Yen-Ting Yeh, and Daniel Vogel. 2020.",
223
+ "venue": "",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "14": {
229
+ "title": "Developing context-aware pervasive computing applications: Models and approach.",
230
+ "author": "Karen Henricksen and Jadwiga Indulska. 2006.",
231
+ "venue": "Pervasive and mobile computing 2, 1 (2006), 37\u201364.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "15": {
237
+ "title": "SnapChart: an augmented reality analytics toolkit to enhance interactivity in a collaborative environment. In The 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry. 1\u20132.",
238
+ "author": "Allison Jing, Chenyang Xiang, Seungwon Kim, Mark Billinghurst, and Aaron Quigley. 2019.",
239
+ "venue": "",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "16": {
245
+ "title": "Augmented reality and virtual reality.",
246
+ "author": "Timothy Jung and M Cluaudia tom Dieck. 2018.",
247
+ "venue": "Ujedinjeno Kraljevstvo: Springer International Publishing AG (2018).",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "17": {
253
+ "title": "Privacy in an augmented reality.",
254
+ "author": "Andreas Kotsios. 2015.",
255
+ "venue": "International journal of law and information technology 23, 2 (2015), 157\u2013185.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "18": {
261
+ "title": "A very preliminary analysis of DALL-E 2.",
262
+ "author": "Gary Marcus, Ernest Davis, and Scott Aaronson. 2022.",
263
+ "venue": "arXiv preprint arXiv:2204.13807 (2022).",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "19": {
269
+ "title": "HoloLens AR Glasses.",
270
+ "author": "Microsoft. 2019.",
271
+ "venue": "https://www.microsoft.com/en-us/hololens/buy.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "20": {
277
+ "title": "The focus group guidebook: sage publications.",
278
+ "author": "D Morgan. 1997. Sage Publications.",
279
+ "venue": "",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "21": {
285
+ "title": "The trouble with augmented reality/virtual reality authoring tools. In 2018 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct). IEEE, 333\u2013337.",
286
+ "author": "Michael Nebeling and Maximilian Speicher. 2018.",
287
+ "venue": "",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "22": {
293
+ "title": "OnePlus 10 Pro Mobile Phone.",
294
+ "author": "OnePlus. 2022.",
295
+ "venue": "https://www.oneplus.com/au/10-pro.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "23": {
301
+ "title": "ChatGPT.",
302
+ "author": "OpenAI. 2022.",
303
+ "venue": "https://openai.com/blog/chatgpt.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "24": {
309
+ "title": "Reviews-Consumer Technology. Gadgets: Acer Conceptd 7 Spatiallabs Edition, Kelda Bubblespa, Nothing Phone (1), Zhiyun Five Ray, Zuma Lumisonic, Samsung Freestyle.",
310
+ "author": "C Quin. 2022.",
311
+ "venue": "Engineering & Technology 17, 8 (2022), 102\u2013108.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "25": {
317
+ "title": "The diffusion-simulated connectivity (DiSCo) dataset.",
318
+ "author": "Jonathan Rafael-Patino, Gabriel Girard, Rapha\u00ebl Truffet, Marco Pizzolato, Emmanuel Caruyer, and Jean-Philippe Thiran. 2021.",
319
+ "venue": "Data in Brief 38 (2021), 107429.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "26": {
325
+ "title": "Security and privacy for augmented reality systems.",
326
+ "author": "Franziska Roesner, Tadayoshi Kohno, and David Molnar. 2014.",
327
+ "venue": "Commun. ACM 57, 4 (2014), 88\u201396.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "27": {
333
+ "title": "High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 10684\u201310695.",
334
+ "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer. 2022.",
335
+ "venue": "",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "28": {
341
+ "title": "An Augmented Reality-based Fashion Design Interface with Artistic Contents Generated Using Deep Generative Models. In 2022 22nd International Conference on Advances in ICT for Emerging Regions (ICTer). IEEE, 104\u2013109.",
342
+ "author": "Asangika Sandamini, Chamodi Jayathilaka, Thisara Pannala, Kasun Karunanayaka, Prabhash Kumarasinghe, and Dushani Perera. 2022.",
343
+ "venue": "",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "29": {
349
+ "title": "Basics of qualitative research techniques.",
350
+ "author": "Anselm Strauss and Juliet Corbin. 1998.",
351
+ "venue": "(1998).",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "30": {
357
+ "title": "Exploring augmented reality for user-generated hyperlocal news content.",
358
+ "author": "Heli K V\u00e4\u00e4t\u00e4j\u00e4, Mari J Ahvenainen, Markus S Jaakola, and Thomas D Olsson. 2013.",
359
+ "venue": "In CHI\u201913 Extended Abstracts on Human Factors in Computing Systems. 967\u2013972.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "31": {
365
+ "title": "Characterizing context-aware recommender systems: A systematic literature review.",
366
+ "author": "Norha M Villegas, Cristian S\u00e1nchez, Javier D\u00edaz-Cely, and Gabriel Tamura. 2018.",
367
+ "venue": "Knowledge-Based Systems 140 (2018), 173\u2013200.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "32": {
373
+ "title": "AR/MR remote collaboration on physical tasks: A review.",
374
+ "author": "Peng Wang, Xiaoliang Bai, Mark Billinghurst, Shusheng Zhang, Xiangyu Zhang, Shuxia Wang, Weiping He, Yuxiang Yan, and Hongyu Ji. 2021.",
375
+ "venue": "Robotics and Computer-Integrated Manufacturing 72 (2021), 102071.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "33": {
381
+ "title": "Constructing Product Usage Context Knowledge Graph Using User-Generated Content for User-Driven Customization.",
382
+ "author": "Xingzhi Wang, Ang Liu, and Sami Kara. 2022.",
383
+ "venue": "Journal of Mechanical Design (2022), 1\u201348.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "34": {
389
+ "title": "Arigat\u014d: Effects of Adaptive Guidance on Engagement and Performance in Augmented Reality Learning Environments.",
390
+ "author": "Maheshya Weerasinghe, Aaron Quigley, Klen \u010copi\u010d Pucihar, Alice Toniolo, Angela Miguel, and Matja\u017e Kljun. 2022.",
391
+ "venue": "IEEE Transactions on Visualization and Computer Graphics 28, 11 (2022), 3737\u20133747.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "35": {
397
+ "title": "Generative AI-empowered Effective Physical-Virtual Synchronization in the Vehicular Metaverse.",
398
+ "author": "Minrui Xu, Dusit Niyato, Hongliang Zhang, Jiawen Kang, Zehui Xiong, Shiwen Mao, and Zhu Han. 2023.",
399
+ "venue": "arXiv preprint arXiv:2301.07636 (2023).",
400
+ "url": null
401
+ }
402
+ }
403
+ ],
404
+ "url": "http://arxiv.org/html/2303.16593v2"
405
+ }
20240722/2304.08879v3.json ADDED
@@ -0,0 +1,205 @@
1
+ {
2
+ "title": "Robotic Gas Source Localization with Probabilistic Mapping and Online Dispersion Simulation",
3
+ "abstract": "Gas source localization (GSL) with an autonomous robot is a problem with many prospective applications, from finding pipe leaks to emergency-response scenarios.\nIn this work, we present a new method to perform GSL in realistic indoor environments, featuring obstacles and turbulent flow. Given the highly complex relationship between the source position and the measurements available to the robot (the single-point gas concentration, and the wind vector) we propose an observation model that derives from contrasting the online, real-time simulation of the gas dispersion from any candidate source localization against a gas concentration map built from sensor readings. To account for a convenient and grounded integration of both into a probabilistic estimation framework, we introduce the concept of probabilistic gas-hit maps, which provide a higher level of abstraction to model the time-dependent nature of gas dispersion.\nResults from both simulated and real experiments show the capabilities of our current proposal to deal with source localization in complex indoor environments.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Mobile Robotic Olfaction (MRO) is a research field that focuses on autonomous robots with the capability of sensing volatile compounds in the air to carry out olfactory-related tasks. MRO is an active research field due to its many potential applications, which include detecting dangerous or illegal substances, locating gas pipe leaks, and assisting in rescue missions in inaccessible places.\nThe sensory devices that allow for gas sensing are usually referred to as electronic noses, or e-noses [1 ###reference_b1###, 2 ###reference_b2###], and are often composed of arrays of multiple types of sensors, including gas transducers that are sensitive to different substances, as well as thermometers, hygrometers, etc. Another sensor that is important for gas distribution mapping and source localization is an anemometer, since the airflow through an environment greatly impacts the dispersion of any volatiles.\nTwo major specific problems are addressed in MRO: gas distribution mapping (GDM) [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###] and gas source localization (GSL) [7 ###reference_b7###]. In this paper, we exploit the connection between both problems to design a source localization method that relies on building a map of the gas distribution and comparing it to the predictions of a dispersion model.\nGas source localization is particularly challenging because it is a non-observable estimation problem. The sensory information available to the robotic agent \u2013the gas concentration and the airflow vector at one specific point in the environment\u2013 is only indirectly related to the location of the source. 
Relating both, sensor measurements and the location of the source, requires an observation model that takes into account the complexities of gas dispersion phenomena.\nOne possible way to derive such an observation model is to rely on predictive dispersion modeling, which takes the source\u2019s parameters and boundary conditions of the environment as input to predict the gas concentration at each point in the environment at a future point in time. It is then possible to numerically derive the observation model by comparing the robot sensor measurements with the predictions of the dispersion model when considering all the potential gas source locations.\nSo far, this strategy has been adopted in previous works by assuming very simple analytical dispersion models such as the Gaussian Plume, and/or making strong assumptions about the environmental conditions: e.g. laminar, constant flow, absence of obstacles, known release rates.\nEliminating these assumptions not only greatly increases the computational complexity of applicable prediction models, but also further complicates the mathematical relationship between the measurements and the potential source locations. In such a scenario, there are many configurations of source position and airflow that give rise to the same concentration value at a given point in the environment. This makes the design of a source observation model based on a single-point measurement intractable.\n###figure_1### In this work, we tackle this issue with a novel GSL method that brings the following novelties:\nA systematic way to estimate the source position from a predictive gas dispersion model. We employ the filament model to allow for real-time computation of the predictions, which makes it possible to dynamically adapt the predictions to the observed environmental conditions (i.e. 
the airflow).\nThe results of this predictive model are then used to probabilistically estimate the source location through the method described in Section IV ###reference_###. Due to the aforementioned computational complexity of simulating gas dispersion, a crucial part of this contribution is an iterative refinement (coarse-to-fine) method that allows the agent to intelligently allocate its computation time \u2013simulating in detail only the most likely scenarios, and discarding unlikely source positions quickly.\nAn observation model for the source location that uses a map of the gas dispersion as its observation, rather than single-point measurements.\nFurthermore, this map is abstracted from concentration readings to a more stable and reliable hit-map, where instead of having a continuous concentration value for each cell in the map, we deal with a binary variable representing the presence or absence of gas in that cell. Within the GSL pipeline we treat the resulting hit probability map as a \u201cvirtual\u201d, more general observation of the gas present in the environment, which ultimately leads to improved performance of the estimation process (explained in detail in Section III ###reference_###).\nAn exploration strategy that aims to maximize the information about the source location that is gained with new measurements. We discuss the precedents in this subject, and the main differences between them and our proposal in Section V ###reference_###.\nFigure 1 ###reference_### shows an overview of the structure of the algorithm. Each individual step is explored in detail in Sections III ###reference_###-V ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II State of the Art",
15
+ "text": "In this section, we will first give a broad overview of the GSL methods proposed in the past, and then discuss in more detail methods that are directly related to our proposal. For a more general review of the state of the art, see [8 ###reference_b8###, 9 ###reference_b9###]."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Background on GSL Methods",
21
+ "text": "Some existing methods frame the problem of GSL in terms of purely reactive navigation, where the sensory input feeds a control loop that steers the movements of the robot [10 ###reference_b10###, 11 ###reference_b11###]. These methods usually rely on either the direction of the airflow (anemotaxis) or the direction of the gas concentration gradient (chemotaxis) as the main guide for the movement. While this was the most popular way to tackle the problem of source localization for a long time, reactive algorithms have declined in popularity in recent years, because they cannot cope with the complexity of many real-world situations. For example, in cases where the airflow in the environment is heavily turbulent or time-dependent, no continuous plume or clear concentration gradient may exist, and so the operating principle of these algorithms is invalidated.\nMore elaborate approaches use the sensory input to estimate a probabilistic belief of the state of the environment, which can include the position of the source itself, the conditions of the airflow, the shape of the gas plume, etc. When the estimated variables include more source parameters than just its position, the problem is commonly referred to as source term estimation (STE) [12 ###reference_b12###, 13 ###reference_b13###].\nThese probabilistic solutions offer a more robust way to process the measurements the robot gathers (i.e. wind and gas concentration), since noisy, spurious or unrepresentative measurements are far less likely to throw off the search process.\n###figure_2### The main challenge in applying a probabilistic framework is the need for a grounded observation model that relates the state of the environment with the measurements. Both the e-noses and the anemometers that robots are often equipped with are single-point sensors and, as was mentioned in the introduction, there is no analytical model that allows one to reliably infer the source position from such limited information. 
Still, multiple options exist, requiring different compromises and offering different advantages.\nFor example, if we can assume simplistic environmental conditions (i.e. homogeneous airflow, constant gas release rate, absence of obstacles), simple models that define the gas concentration as a function of the sampled position (e.g. Gaussian Plume, Isotropic Plume) can be employed [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###]. With such models, one can infer the location of the source, and even perform STE to obtain extra information about the characteristics of the gas release.\nAnother possibility is to apply numerical simulations (i.e. Computational Fluid Dynamics, CFD), which can produce very good estimations of the way gas would disperse under a certain set of environmental conditions. This approach has two main problems. The first one is its computational complexity, as even a single CFD simulation can take hours of computation on a powerful machine. The second one is the fact that CFD models require precise knowledge of the boundary conditions in order to generate an accurate prediction. Some methods [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###] have been proposed that get around these limitations by pre-computing a wide range of scenarios and storing the results in a database. During the search, the observations gathered by the robot are contrasted with the simulations in the database to find the most likely one. Despite the superior precision of the models used, this approach presents several important problems. Firstly, it requires the environment to be known, so that all the simulations can be carried out in advance. 
It also does not scale well with the size or the complexity (number of inlets/outlets, or source positions) of the environment, as the number of simulations that would be required to cover all options dramatically increases.\nIn a previous work [20 ###reference_b20###], the authors presented GrGSL, an algorithm that is also designed for source localization in indoor environments with obstacles. This algorithm was based on the propagation of short-range directional estimations by using the geometry of the environment, in a process that treats the occupancy map as a graph and employs a technique loosely based on Dijkstra\u2019s algorithm. While this method showed good results in initial testing, the very heuristic nature of its estimation process means that it has difficulty dealing with more complex scenarios (see Section VI-B ###reference_###).\nRecently, other works have exploited the connection between GDM and GSL to attempt to locate a source by building a map of the gas concentration that is then compared to the predictions of a model that runs in real-time. In [21 ###reference_b21###], the authors present an STE method that relies on building a concentration map with the Kernel DM+V/W algorithm [22 ###reference_b22###] and comparing it with the map predicted by a Pseudo-Gaussian plume model \u2013with the limitation that the model does not account for the existence of obstacles. In [23 ###reference_b23###] the authors employ machine learning with a convolutional neural network trained on CFD results as a surrogate for a gas dispersion model, thus being able to obtain an estimate of the shape of the gas plume for a specific set of source parameters in real time, even in the presence of obstacles."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B The Filament Dispersion Model",
27
+ "text": "Our proposal for probabilistic gas source localization relies on a measurement model that stems from the filament-based gas propagation model [24 ###reference_b24###]. It was proposed by Farrell et al. as a relatively lightweight method for simulating the short time-scale variations in concentration caused by turbulence, as opposed to the time-averaged nature of models like the Gaussian Plume. This model has been used before in the context of robotics, but only for the generation of simulated scenarios [25 ###reference_b25###]. To the best of our knowledge, this is its first application to online source localization.\nThere are two main problems that need to be addressed in order to design a source localization method that utilizes the filament model.\nThe first one is that despite being much faster to compute than numerical CFD models, running many filament simulations to compare their results with the robot measurements is still a time-consuming task, and naively simulating all potential source locations in even a small environment is not doable in real-time with current hardware. This problem is explored in detail in Section IV-C ###reference_###.\nThe second problem is that filament simulations require knowledge about the airflow in the entire environment, which is not available through sensory measurements, as those are strictly local. Therefore, the application of the filament model to robotic source search requires some method for extrapolating local wind measurements to estimate the global airflow. In an outdoor application, one could make the assumption that the airflow is homogeneous in the search space, even if it changes over time [18 ###reference_b18###, 26 ###reference_b26###]. 
Indoors, this is not an acceptable assumption, as the presence of walls and obstacles forces the airflow to conform to the geometry of the environment.\n###figure_3### Since the focus of our current method is on addressing the case of indoor GSL, we will build upon a method proposed by Monroy et al. [28 ###reference_b28###] to estimate the airflow based on Gaussian Markov Random Fields (GMRF). This method estimates the wind vectors over a 2D lattice of cells, imposing constraints on how the airflow can vary from one cell to its neighboring ones and how it must adapt to the shape of the obstacles. By doing this, it is possible to generate a prediction of the direction of the airflow for the entire environment from a few sparse wind measurements. In order to define the constraints necessary to solve for this vector field, this method requires the geometry of the environment to be known, including the positions of possible inlets/outlets (doors, windows, etc.). It does not, however, need to know the boundary conditions at those points, or even whether they are actively functioning as inlets/outlets. That is to say that, for example, the map should include the location of all windows, but does not need to specify whether they are currently open."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Probabilistic Plume Mapping",
33
+ "text": "The core idea of the method we present here is to iteratively estimate the source location through the gas distribution mapping (GDM) in the environment. This map is compared to the gas predictions that a filament model generates from a candidate source location, which provides us with a likelihood of the source being in that position (Fig. 2 ###reference_###)."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Gas Hit Map",
39
+ "text": "While combining GDM and GSL is not a new idea, it is worth discussing in which ways our current proposal deviates from the conventions and techniques of GDM methods, and the reasoning behind these modifications. The main difference is that, rather than building a grid map of gas concentration values (a continuous random variable), we build a grid map of binary values, where a cell can either contain a measurable amount of gas (a gas hit) or not (a miss), and the exact concentration value is abstracted away.\nThis presence of gas is treated as a random variable, and so we will deal with it in terms of the probability of a cell containing gas. Similarly to the Occupancy Grid Maps (OGM) employed in mobile robotics [29 ###reference_b29###], the value at each grid cell represents the probability that the cell contains enough gas to trigger a gas detection event at an arbitrary instant of time.\nFrom a frequentist perspective, these hit probabilities can be interpreted as the proportion of time that a cell contains enough gas to trigger a hit. Notice that this concept resembles the idea of plume mapping defined by Farrell et al. in [30 ###reference_b30###], as this probability may be seen as the degree to which each cell belongs to the shape of the time-averaged gas plume: some cells are \u201cstably in the plume\u201d (i.e. always contain a concentration of the target gas above a minimum threshold), while others are only partially so, as fluctuations in the shape of the plume can cause them to not contain gas at certain times.\nThe reason to abstract away the concentration values is to better match the degree of precision that is attainable with predictive dispersion models \u2013particularly during an online search, where many of the relevant parameters (boundary conditions, source release rate, etc.) are uncertain. 
In this context, the comparison of specific concentration values measured by the sensor with those simulated by the dispersion model is not meaningful, as the simulations cannot be expected to match reality to such a degree of accuracy. Working at a higher level of abstraction attenuates the impact of these limitations \u2013for example, the general shape of a gas plume, as represented by the gas hit variable, is largely independent of the release rate of the source, while the concentration values would greatly depend on it.\nFormally, a gas-hit map is defined as a vector of binary random variables, , where is the set of all cells in the environment. We will refer to the set of sensory observations as , using a subscript to denote the position at which the observation was taken and a superscript to denote the time instant (e.g. is the observation taken at time in cell ). These superscripts and subscripts will be omitted for the sake of readability when they are not relevant to the discussion. We will refer to the probability of cell containing enough gas to trigger a gas hit as .\nIt should be noted that the probabilistic formulation presented in this article is agnostic to the specific definition of gas hit. For the purposes of the implementation and experimental validation presented here, a certain observation is considered a hit or a miss simply by comparing the sensor reading to a fixed concentration threshold. However, other approaches which use an adaptive threshold or consider the time-derivative of the concentration reading to define the concept of a hit \u2013see, for example [31 ###reference_b31###]\u2013 could be used."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-B Building the Gas Hit Map",
45
+ "text": "The estimation of the gas-hit map \u2013\u2013 is carried out recursively by Bayesian filtering. Starting from an arbitrary prior , each measurement taken in cell will modify the estimated probability of through the conditional probability . We can define this conditional probability as a piece-wise function that depends on whether the observation is a hit or a miss:\nBecause of the complexity of the gas dispersion phenomenon, there is no clear way of setting grounded values for and , thus they are left as input parameters to the algorithm, with the only requirement that\nInterested readers can find a similar solution for occupancy mapping in [32 ###reference_b32###, p. 33-35]. In that work, the author provides a solid discussion on arbitrarily defining conditional probabilities for a binary random variable given sensory observations.\nEq. 1 ###reference_### defines the conditional gas-hit probability of a cell only for a measurement taken in the same cell. Building a hit probability map only from this information would require an unfeasible number of measurements. Thus, some kind of inference to neighboring cells is required. For that, we resort to the wind vectors and the known geometry of the environment to compute an estimation of when , so a map can be built from sparse measurements.\nIntuitively, a hit measured at any specific cell should have no effect on the hit probability of far-away cells, but must change the probability of their surrounding cells according to their distance and position. Concretely, cells that are located upwind or downwind from the measurement location should be more strongly affected, as a noticeable airflow causing advection will create a stronger correlation between the state of cells whose relative position aligns with the wind vector.\nSeveral methods have been proposed in the field of GDM to encode this dependency between cells based on distance and airflow alignment. 
In this work, similarly to the Kernel DM+V/W method [33 ###reference_b33###], we apply a dependency model given by a 2D Gaussian centered at the measurement location and aligned with the wind vector. Thus, the influence factor of a measurement taken in cell over the conditional probability of is given by:\nwhere stands for the coordinates of any cell , are the coordinates of the cell in which the measurement is taken, and is the covariance matrix of the 2D Gaussian, stretched and rotated according to the wind vector as described in [33 ###reference_b33###] (see Fig. 3 ###reference_###A). The value of is scaled by to set the maximum value of the influence (when ) to 1.\nWe use this influence value to linearly interpolate between two extreme cases for following the expression:\n###figure_4### When (the cell is being directly observed), and we are applying the expression in Eq. 1 ###reference_###. When the cell is very far away from cell , and , meaning that and can be considered independent (see Fig. 3 ###reference_###B)."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-C Obstacles",
51
+ "text": "An additional consideration is required for this method to be applicable to real scenarios where obstacles (walls, furniture, etc.) may exist between cells and . This issue can be addressed by modifying the calculation of in such a way that its value is not based on the relative positions of cell and , but rather on the direction and length of the shortest free path between them [34 ###reference_b34###]. To efficiently find these paths, we employ the propagation method described in [20 ###reference_b20###].\nAs a brief overview, the method is based on treating the grid of cells as a graph, where each free cell (i.e. not occupied by an obstacle) is a vertex, and two vertices are connected if their corresponding cells are both free and neighboring each other. On this graph we apply a search technique based on Dijkstra\u2019s algorithm, starting from the set of cells that are neighboring the robot\u2019s position () to find all the cells in the grid which should be assigned to each of the members of based on traversable distance. The result of this can be seen as a graph partition, where each group of vertices is defined by the vertex which is closest to all the members of the group.\nBased on this graph partition and the paths calculated in the process of creating it, we can redefine the value of for a cell as:\nwhere is the vector normalized by its length (i.e. unit vector from to ) and is the length of the shortest path connecting to (see Fig. 4 ###reference_###). Note that this expression becomes the expression in Eq. 2 ###reference_### for cells in the immediate vicinity of the robot, but will produce slightly different results for cells that are further away even if no obstacles exist along the path, as the direction of might not perfectly line up with . Implementations of this algorithm may choose to add an extra check for this case and apply Eq. 
2 ###reference_### whenever a direct line of sight exists, but our testing shows no difference in the effectiveness of the algorithm in either case."
52
+ },
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-D Bayesian Update",
57
+ "text": "We have so far omitted the time superscript on the measurements, and described only the conditional probability of , for a single measurement . However, the actual estimation of the map of hit probabilities is based on combining the information obtained from all the accumulated measurements. This process can be done recursively through Bayesian filtering, where we consider the conditional probability discussed in the previous sections, , as the inverse sensor model.\nSince the variable we are interested in when building the map is a binary random variable, we can use the log-odds form of the binary Bayes filter [29 ###reference_b29###]:\nwhere denotes the log-odds:\nand the actual probability value can be recovered with:"
58
+ },
59
+ {
60
+ "section_id": "3.5",
61
+ "parent_section_id": "3",
62
+ "section_name": "III-E Confidence Value",
63
+ "text": "An important consideration when using the generated hit probability map to estimate the location of the source is that the hit probability values that are estimated at some locations are based on measurements in the near vicinity of that cell, while others are only due to extrapolation, or even still equal to the uninformative prior. Trivially, the estimations about the presence of gas that are based on extensive observations should have a stronger effect on the predicted source location probabilities.\nThus, it becomes necessary to quantify the uncertainty about the hit probabilities at each cell, which can be defined as a function of how many measurements have been gathered, and how close to the cell those measurements were taken. In this work, we will use the confidence measure introduced in [27 ###reference_b27###], which is calculated as follows:\nwhere is the length of the shortest path between , (the position of the cell being updated), and (the position at which the robot took a measurement at timestep ). Both and are parameters that control how much confidence is gained from each individual measurement. For more information about these parameters, see [27 ###reference_b27###]."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "IV Source Position Estimation",
69
+ "text": "In this section, we will discuss the process of using the map of that was built from the measurements to generate an estimation of the position of the source."
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-A Hit Map Comparison",
75
+ "text": "We define a random variable to represent the source location as a discrete variable whose possible values are each of the free cells in the environment. The probability distribution that we want to calculate is thus .\nAs explained in Section III ###reference_###, the core idea of our proposal is that we can estimate the probability of a certain cell being the source location by comparing the plume that would result from having the source there (according to a dispersion model), to the plume we have constructed from measurements. Specifically, when we talk about \u201ccomparing the plumes\u201d, we mean comparing the hit probabilities, where these probabilities can be understood as relative frequencies. We define and as these predicted relative frequencies given the measurements and given the predictive model for a source in cell , respectively. The absolute difference between these two values, , can then be used as a measure of the similarity between the measured and the predicted states of cell :\nWe can then define the conditional probability of with the following expression:\nA value of that is based on a very low confidence estimation () does not give meaningful information about the source, and so the conditional probability distribution of the source is uniform \u2013 . As approaches 1, the probability distribution of the source favors the locations which predict a value of similar to .\nAssuming conditional independence of each given , the probability distribution of the source is then calculated as:\nwhere denotes the set . If we assume the prior probability distribution of the source location to be uniform, the term is equal for all , and since we assume there is a single gas source, we can simply omit it from the calculation and normalize the resulting values to obtain a valid probability distribution:\n###figure_5###"
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "IV-B Filament model",
81
+ "text": "The formulation discussed in the previous section requires a predictive gas dispersion model that, given a source position, produces a map of hit probabilities for the expected gas plume. Because of the types of environmental conditions that we are considering (indoor environments with obstacles), we opt for a simplified version of the filament model.\nIn the original filament model, as proposed by Farrell et al. [24 ###reference_b24###], the dispersion of gas is simulated by tracking the movement of discrete units (the filaments), which in turn represent three-dimensional spatial distributions of concentration \u2013usually modeled as normal distributions centered on the filament\u2019s position. The filaments are assumed to move mostly through advection, and the effects of diffusion are accounted for by changing the parameters of the concentration distribution that each filament represents. That is, a filament that was emitted a long time ago represents a normal distribution of higher than one that was just released from the source, even though both distributions contain the same number of moles of the released gas.\nFor the purposes of this work, the filament model has been simplified in two ways. The first and most important one is that the simulation takes place in two dimensions rather than three. This is partially to alleviate the computational complexity of the proposed method, but also to account for the fact that only 2D airflow information is available from the GMRF estimations.\nThe second way in which our implementation deviates from the filament model is that, since we are not interested in computing concentration values (for the reasons discussed in Section III ###reference_###), we are not modeling the filaments as normal distributions of concentration. Instead, we consider that filaments have a discrete radius that grows as a function of the length of time that has passed since their emission. 
During the simulation, we record the number of time instants that each cell is close enough to the position of at least one filament to be occupied by gas. The relative frequency of this event is the value of ."
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "IV-C Iterative Coarse-to-Fine Refinement",
87
+ "text": "Despite the adoption of a simplified model, the filament model still has a significant computational cost, as a complete simulation needs to be carried out for each cell in the environment. In order to alleviate this computational load, some optimizations can be implemented so that a higher proportion of the time is spent on the calculations that will significantly impact the algorithm\u2019s predictions about the source location.\nConsider, for example, a simulation with the source in cell whose predicted plume map does not match at all the measurements that have been taken so far. It is trivial that simulations with the source in the immediate neighbors of will produce similar plumes, which will also be poor matches for the real measurements, and thus those cells will be evaluated as unlikely source locations. On the contrary, areas of the map that are evaluated positively benefit much more from increased resolution, as it may be required to differentiate between several likely source candidates. We propose to utilize this idea by employing a progressive refinement (coarse-to-fine) strategy, where only a few source locations are simulated initially as representatives of coarse regions, and those regions which produce the most promising results are recursively subdivided to obtain more precision in the final estimations.\n###figure_6### We refer to the proportion of cells that gets subdivided in each successive step of this process as the refinement fraction (). That is, for , half of the cells (the half with the highest estimated probability of containing the source) are subdivided for finer simulation in the next step. Figure 5 ###reference_### shows how affects the computation time for updating the source\u2019s probability distribution and the quality of the resulting estimation as the search process progresses. 
The quality of the estimation is measured as the Kullback-Leibler Divergence (KLD) between the probability distribution of obtained with that refinement fraction, and the probability distribution of that results from considering all cells. Note that the KLD for is trivially always 0, as its resulting probability distribution is the one being approximated.\nIt can be observed that for the KLD becomes negligible as the number of iterations of the algorithm advances. Conversely, the results obtained with a very small refinement fraction worsen over time. This effect is caused by the variance of the source\u2019s distribution decreasing. As the search advances, a small number of nearby cells accumulate most of the probability of containing the source. When this happens, subdividing that small area of the map with a high probability of is enough to obtain a probability distribution very close to the one being approximated. For very small refinement fractions, the KLD increases because can vary more drastically for nearby cells at this stage of the search than when only part of the map has been observed, and thus the very limited number of subdivisions is not enough to capture the high-frequency variations in the estimated probability of the source location.\nWe tested two different methods for generating these regions, both of which produce rectangular cells and allow for a maximum cell size to be specified to make sure the regions are not so coarse as to have the environmental conditions change significantly inside of them. The first method is to generate a quadtree from the occupancy map, which allows for large unoccupied regions to be grouped together into a single leaf of the tree, while allowing obstacles to serve as boundaries that force areas to be considered separately. 
The second method is to greedily fuse the free cells in the occupancy map with their neighbors, in an arbitrary order, with the only constraint being that the resulting cells must remain rectangular. While both of these methods allowed for a significant speed increase from the naive approach (see Figure 6 ###reference_###), the biggest speedup was obtained by combining both: generating a quadtree, and then fusing the resulting neighboring leaves.\nNote that none of these methods are guaranteed to generate an optimal number of subdivisions of the map according to our constraints. A more in-depth study of this problem is left for future work."
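The refinement loop described above can be sketched as follows. This is only an illustration: `simulate_source` is a stub standing in for running the dispersion model with the source at a region's centre and scoring the match against the measured hit map, and all names are ours, not the paper's.

```python
# Sketch of the iterative coarse-to-fine refinement described above.
# `simulate_source` is a hypothetical stand-in for the filament-model
# scoring step; here it simply favours regions whose centre is near an
# assumed true source at (3, 7) so the loop logic can be exercised.

def simulate_source(region):
    x0, y0, x1, y1 = region
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return 1.0 / (1.0 + (cx - 3.0) ** 2 + (cy - 7.0) ** 2)

def subdivide(region):
    # Quadtree-style split of a rectangle into four quadrants.
    x0, y0, x1, y1 = region
    mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return [(x0, y0, mx, my), (mx, y0, x1, my),
            (x0, my, mx, y1), (mx, my, x1, y1)]

def refine(regions, rho, steps):
    """Each step, the top `rho` fraction of regions by score is split
    for finer simulation; the rest are kept at their current size."""
    for _ in range(steps):
        ranked = sorted(regions, key=simulate_source, reverse=True)
        n_keep = max(1, int(rho * len(ranked)))
        promising, rest = ranked[:n_keep], ranked[n_keep:]
        regions = rest + [q for r in promising for q in subdivide(r)]
    return regions

fine = refine([(0.0, 0.0, 10.0, 10.0)], rho=0.5, steps=3)
```

The maximum-cell-size constraint and the obstacle-aware quadtree/fusing steps described in the text are omitted here for brevity.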
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Movement Strategy",
93
+ "text": "In this section we will address the subject of selecting the next position where the robot will take a measurement, including discussion of existing literature and areas of potential future improvement."
94
+ },
95
+ {
96
+ "section_id": "5.1",
97
+ "parent_section_id": "5",
98
+ "section_name": "Information Value",
99
+ "text": "One of the most common strategies for planning the movements of the robot is to maximize the information about the source location that is gained with each new measurement. This idea has been extensively explored by previous methods [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###], in what is usually referred to as \u201cinformation-theoretic\u201d movement strategies. One of the most notable examples of this idea is Infotaxis [38 ###reference_b38###], which popularized the use of the expected change in the entropy of the source distribution as a measure for the information gain. Later works [37 ###reference_b37###, 20 ###reference_b20###] have proposed using the Kullback-Leibler Divergence instead, since aiming only to decrease entropy can cause the robot to refuse exploration and only focus on confirming its current belief. Some recent works [35 ###reference_b35###, 39 ###reference_b39###] have proposed combining the Infotaxis paradigm with rapidly-exploring random trees (RRT) as a way to handle the presence of obstacles for selecting the new sampling location.\nAll of these approaches have a common problem: they require estimating what the next measurement will be if the robot moves to a given position. With this hypothetical next measurement, they simulate an update on the source position belief, and compare the result to the current one. This requires a method for reliably estimating what the next measurement will be, which is not trivial, and also requires performing an iteration of the source estimation procedure for each considered movement, which can lead to a prohibitive computational cost if the number of possible movements is large. Since our algorithm has a slow update process, requiring many filament simulations, this approach is not feasible.\n###figure_7### ###figure_8### ###figure_9### Instead, we propose using the already-computed filament simulations to identify the most interesting areas. 
When trying to discriminate between two possible source positions, the areas of most interest for taking measurements are those in which their respective predicted plumes are most different (see Figure 7 ###reference_###). We generalize this to all possible source positions and use their estimated probabilities of containing the source as a weight for how much their prediction influences the interest of a measurement location. This is represented by the variance of . The information value of a particular measuring location, , is therefore calculated as follows:\nThe term is the expected value of the predicted relative frequency of hits in cell (), calculated as the average of weighted by the estimated probability of each source location . This expected value is then used to compute the variance of , thus quantifying how much the presence of gas in cell depends on where the source is located.\nThe resulting variance is finally multiplied by the term , where is the confidence value introduced in Eq. 6. This serves to make areas already visited by the robot less interesting, which is desirable because it avoids having the robot constantly re-observe areas of low uncertainty, where no more information about the source can be obtained.\nIt should be noted that the value of represents only the expected amount of information to be gained, and does not consider the cost of selecting cell as the next location \u2013where this cost could be defined simply in terms of the time required for the robot to navigate to the selected cell. 
Therefore, even if the value of is assumed to be an accurate estimation of the information that will be gained, there is no guarantee that always moving to maximize will lead to the fastest possible convergence of the algorithm, even though it should minimize the number of required iterations.\nThe subject of balancing information gain and navigational cost has been extensively explored in the literature, for example for the problem of generating occupation maps [40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###]. Thus any existing method could be applied to the case of this algorithm. A detailed analysis of the results obtained with these methods, as well as the effect of assigning specific values to their input parameters, is beyond the scope of this paper, and is left for future work. For the purposes of the experimental results presented in the next section, the strategy employed is a greedy maximization of the expected information value."
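One concrete (hypothetical) reading of the computation described above: the information value of a cell is the variance, over candidate source locations weighted by their current probability, of the hit frequency each candidate predicts there, down-weighted by a (1 − confidence) factor for already-visited cells. All names below are illustrative, not taken from the paper.

```python
# Sketch of the probability-weighted variance used as information value.
# hit_pred, source_prob and confidence are illustrative data structures.

def information_value(hit_pred, source_prob, confidence, cell):
    """hit_pred[s][cell]: predicted hit frequency at `cell` for source s;
    source_prob[s]: current source-location probability;
    confidence[cell]: how well-observed `cell` already is (0..1)."""
    expected = sum(p * hit_pred[s][cell] for s, p in source_prob.items())
    variance = sum(p * (hit_pred[s][cell] - expected) ** 2
                   for s, p in source_prob.items())
    return (1.0 - confidence[cell]) * variance

# Cell 0 discriminates between the two candidates; cell 1 does not.
hit_pred = {"A": {0: 0.9, 1: 0.1}, "B": {0: 0.1, 1: 0.1}}
source_prob = {"A": 0.5, "B": 0.5}
confidence = {0: 0.0, 1: 0.0}
iv = {c: information_value(hit_pred, source_prob, confidence, c) for c in (0, 1)}
```

Cells where the candidate plumes disagree score high; cells where every candidate predicts the same thing, or that are already well observed, score near zero.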
100
+ },
101
+ {
102
+ "section_id": "5.2",
103
+ "parent_section_id": "5",
104
+ "section_name": "Exploration Phase",
105
+ "text": "A problem with using the previously defined information gain metric, , is that it relies on being able to apply the dispersion model to predict the shape of the gas plumes. However, since no information about the boundary conditions of the environment is provided to the algorithm, the estimation of the airflow during the first few iterations is not reliable, and neither is the filament model.\nWe have tackled this limitation by introducing an initial exploration step of an arbitrary number of iterations, defined as an input parameter of the algorithm\u2013during which the robot always moves to maximize the total observed area. To do this, it uses , the confidence value, of both the cell being considered as the next location and all other cells in its neighborhood to compute which movement has the best exploration value. The expression applied to calculate this exploration value is as follows:\nwhere is the set of cells that are near cell \u2013for an arbitrarily defined distance limit\u2013, and is the distance of the shortest path between cells and . In practice, the distance limit is just a computational optimization, as the influence of other cells on the evaluation quickly approaches 0 as increases.\nIn the process of trying to cover the largest possible area, the robot will gather sparse measurements of both gas and airflow that span a significant portion of the map. Thus, when the exploration phase ends, both a rough outline of the gas map and a tentative estimation of the airflow are present, making the source estimation process possible.\nIt should be noted that this exploration process is not optimal, as it does not actively consider the airflow estimation, but simply how much the observation will expand the amount of observed area. Designing a more optimized strategy is challenging and beyond the scope of this paper, as it would require predicting how future measurements would change the predicted airflow map after the GMRF is updated."
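The exploration-value expression itself is elided in the text; the following is only one plausible sketch consistent with its description, where cells with low observation confidence near the candidate raise its value, and the influence decays with the shortest-path distance. Function and variable names are ours.

```python
# Hypothetical exploration value: favour moves toward poorly-observed
# neighbourhoods, with influence falling off with path distance.

def exploration_value(cell, neighbours, confidence, dist):
    """confidence[n] in [0, 1]; dist(a, b): shortest-path distance."""
    return sum((1.0 - confidence[n]) / (1.0 + dist(cell, n))
               for n in neighbours)

# 1-D toy map with five cells; the robot evaluates cell 2.
d = lambda a, b: abs(a - b)
v_unexplored = exploration_value(2, range(5), {i: 0.0 for i in range(5)}, d)
v_explored = exploration_value(2, range(5), {i: 1.0 for i in range(5)}, d)
```

A fully observed neighbourhood yields a value of zero, so the robot is pushed toward unexplored territory, matching the behaviour the text describes.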
106
+ },
107
+ {
108
+ "section_id": "6",
109
+ "parent_section_id": null,
110
+ "section_name": "VI Experimental Validation",
111
+ "text": ""
112
+ },
113
+ {
114
+ "section_id": "6.1",
115
+ "parent_section_id": "6",
116
+ "section_name": "VI-A Accuracy of Hit Maps",
117
+ "text": "As discussed in Section IV ###reference_###, the probability distribution for the source location is derived from the similarity between the measured and simulated gas hit maps. However, there is an open question as to whether those maps accurately reflect the state of the gas dispersion in the environment. In this section, we will look at a few example environments, where these reconstructed hit maps are compared to the ground-truth hit frequency extracted directly from a full 3D, CFD-based simulation.\n###figure_10### Figure 8 ###reference_### presents three different environments for comparison purposes, corresponding to experiments A2, B2, and E2. The leftmost column showcases the reference map extracted from the CFD simulation, while each subsequent column displays a map generated by our method, intended for comparison with the reference map. Underneath each map, the similarity measure is shown, calculated as , where represents the estimated hit frequency in a given cell , denotes the corresponding value in the ground-truth map, and is the total number of cells.\nThe second column shows the hit maps generated by the simplified 2D filament model, with airflow reconstructed using the GMRF technique [28 ###reference_b28###]. Notably, in experiment A2, the simulated hit map closely matches the ground truth, while experiments B2 and E2 display more significant differences. These disparities mainly arise from the more pronounced three-dimensionality of airflow in these scenarios, which the 2D filament model cannot capture. Experiment B2, in particular, exhibits the largest deviation, where the model predicts an absence of gas in the bottom-right corner, contrary to reality. 
This sheds light on the results presented in Section VI, where experiment B2 is highlighted as having the highest error in the source position declared by our method.\nMoving to the third column, we see the hit maps as reconstructed from distributed sensory measurements taken on a 1 m-spaced grid. This validates that, with a sufficient number of measurements, our method accurately reconstructs the gas plume\u2019s shape. It can be observed that in all cases depicted the results closely align with the ground-truth map.\nFinally, the last column illustrates the gas hit maps reconstructed from measurements during a search process for the source, specifically at the moment of source declaration. It\u2019s worth noting that in all cases, a significant portion of the map remains unobserved, with the hit probability remaining equal to the prior. This emphasizes our method\u2019s capability to declare the source position without requiring a perfect, comprehensive gas map, allowing for faster and sparser exploration.\n###figure_11### ###figure_12### ###figure_13### ###figure_14###"
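The similarity expression itself is elided in the text above; its description (a per-cell comparison of estimated and ground-truth hit frequencies, normalized by the total number of cells) suggests a mean-absolute-difference form such as this sketch. The function name is ours.

```python
# Hypothetical hit-map similarity: 1 minus the mean absolute difference
# between estimated and ground-truth per-cell hit frequencies.

def hit_map_similarity(estimated, ground_truth):
    """Both arguments are equal-length sequences of per-cell hit
    frequencies in [0, 1]; returns 1.0 for a perfect match."""
    n = len(ground_truth)
    mad = sum(abs(e - g) for e, g in zip(estimated, ground_truth)) / n
    return 1.0 - mad

s_perfect = hit_map_similarity([0.2, 0.8, 0.0], [0.2, 0.8, 0.0])
s_opposite = hit_map_similarity([1.0, 0.0], [0.0, 1.0])
```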
118
+ },
119
+ {
120
+ "section_id": "6.2",
121
+ "parent_section_id": "6",
122
+ "section_name": "VI-B Setup",
123
+ "text": "The experimental validation of the proposed algorithm was carried out by performing both simulation and real-world testing. The benefit of using simulation is that it allows for easily repeatable experimentation under a wide range of scenarios and conditions, while the real-world experiments serve to verify that the results obtained in the simulations still hold under real, uncontrolled conditions. All configuration files for these experiments (both simulated and real), which include the input parameters of the algorithm, can be found on an online repository 111https://github.com/MAPIRlab/Gas-Source-Localization ###reference_alization###. In the case of the simulated experiments, this includes not only the configuration of the robot and algorithm, but also all the data about the airflow, inlets and outlets, source release rate, etc.\nFor the simulations, we rely on GADEN [25 ###reference_b25###], a 3D gas dispersion simulator that employs CFD-based airflow. We tested the algorithm\u2019s performance under 20 different scenarios, taking place in five distinct environments (Fig. 9 ###reference_###). Two of these scenarios correspond to the real-world experiments. The environments are as follows:\nEnvironment A is a simplified version of a generic indoor location, featuring multiple rooms and walls, but no limited-height obstacles to fully explore the three-dimensionality of the gas dispersion process.\nEnvironments B and C are models of real houses, and the specific scenarios considered have been selected from the VGR dataset [43 ###reference_b43###]. These scenarios include significantly more complex geometry that prevents the airflow from being consistent in the vertical axis.\nEnvironments D and E are the scenarios for the two real world experiments. Environment D is a house, with similar characteristics to B and C. Environment E is a research laboratory. 
Both of these environments have also been used in simulation, to compare the results obtained there with those of the real experiments.\nWe label experiments that take place in the same environment with a number \u2013 e.g. experiments A1 and A2 both take place in environment A, but with a different configuration: the source is placed at a different location and/or the airflow inlets and outlets have changed.\nWe compare the currently presented algorithm (labeled in the results as PMFS) to the algorithm presented in [20 ###reference_b20###], (labeled GrGSL) and Surge-Cast plume tracking [44 ###reference_b44###]. Each of the experiments comprised 30 runs for each of the algorithms, which are considered to end once the variance of the probability distribution of the source location falls below a fixed threshold \u2013in this case, . In the case of Surge-Cast, which does not have a probability distribution for the source, the algorithm is allowed to run uninterrupted for 300s.\nThe real-world experiments correspond to simulations D1 and E1, respectively. The gas source was an ultrasonic vibration humidifier loaded with a 96% ethanol solution. We used a Giraff robot equipped with a photoionization detector (PID) and an ultrasonic anemometer for GSL, and with a 2D laser scanner and RGB-D camera for navigation. Because these experiments are vastly more time-consuming to carry out, only 10 runs of each algorithm were recorded as a way to validate the more extensive simulation results."
124
+ },
125
+ {
126
+ "section_id": "6.3",
127
+ "parent_section_id": "6",
128
+ "section_name": "VI-C Results",
129
+ "text": "Results are shown in Fig. 11 ###reference_###. For PMFS and GrGSL, the \u201cerror\u201d value displayed in Fig. 11 ###reference_### is the distance in meters between the ground-truth source location and the source position declared by the algorithm after reaching the convergence criterion. For Surge-Cast, since it is a purely navigational algorithm that does not have a mechanism for source declaration, the value displayed is the minimum distance between the robot position and the source achieved during the search. While this is certainly a different metric, we show it alongside the error in source declaration to provide a reference value that should give the reader an indication of the complexity of the setup \u2013i.e. experiments where Surge-Cast performs well can be assumed to have clear, uninterrupted gas plumes that extend to the source position.\n###figure_15### Figure 11 ###reference_### also shows the total time in seconds required for the algorithm to find the gas source. Again, in the case of Surge-Cast the lack of a source declaration mechanism means that we cannot report the same metric as with the other two algorithms, instead showing the amount of time it took to reach the closest position to the source (the one reported in Figure 11 ###reference_###). Similarly to the error figure, this cannot be interpreted as a direct comparison, but rather as a baseline reference value.\nIt can be observed that the proposed method outperforms GrGSL in the majority of the more complex environments, while obtaining comparable results in the simpler cases. Experiment B2 proves to be particularly challenging, with both algorithms producing estimations that are, on average, more than 2m away from the actual source position. 
This shows one of the main limitations of our proposed method, which is that it very strongly relies on obtaining an accurate estimation of the direction of the airflow, and is unable to do so when the three-dimensionality of the environment plays an important role (Fig. 12 ###reference_### shows a stream trace of the airflow around the point where the source is placed). In other cases (see scenarios B1, C1 and C2), the new method is able to produce much better estimations than the previous algorithm, because GrGSL tends to produce an estimation that is as far upwind as possible inside the gas plume, while the reasoning of the new method (comparing the plume that would be produced if the source was in a specific location to the currently mapped plume) allows it to produce estimations that are not aligned with the airflow currents.\nRegarding the time required for convergence, it can be observed that neither PMFS nor GrGSL manages a clear improvement over the other. There is high variance in which method achieves the best time for any given experiment, and both of them tend to be comparable to the amount of time required for a reactive navigation algorithm like Surge-Cast. Further testing with multiple movement strategies (for example, the concept discussed in Section V-A ###reference_###, where the cost of each movement is taken into consideration alongside its information value) would be required to draw more relevant conclusions about the search time.\nThe results obtained in the real-world experiments are coherent with what might be expected given the simulations. It can be observed that both methods obtain comparable results, both in terms of the error in the final estimation, and in the amount of time required to produce it. 
While the error recorded from PMFS is on average lower in both scenarios, the variance of the results and the lower repetition count of the real-world experiments compared to the simulations make this relatively small difference not very meaningful. Figure 13 ###reference_### shows a snapshot of the probabilities calculated by PMFS (of each , and of ) during the search in environment D. It can be observed that the area that receives the highest probability of containing the source is at the start of the plume (following it upwind), but that, given that many cells remain unobserved (with and ), there still exists significant uncertainty in ."
130
+ },
131
+ {
132
+ "section_id": "7",
133
+ "parent_section_id": null,
134
+ "section_name": "VII Conclusions and Future Work",
135
+ "text": "The gas source localization algorithm presented in this work revolves around the idea of using a forward gas dispersion model to estimate the gas plume that would be produced if the source was in a certain position, and comparing that prediction with the data that has been obtained so far.\nThe specifics of the model used and the calculations done with those results, as presented here, should not be considered a finalized solution, but merely a first attempt at making a feasible implementation of the concept. Indeed, many of the most important limitations of the method (mainly those related to the 3D phenomena) could be addressed by using more robust, more accurate methods for the airflow estimation and the gas dispersion, although a compromise between accuracy and computational complexity will always be required to perform on-line simulations during the search.\nAnother limitation that should be addressed by future work is the need to know the map of the environment in advance. Developing methods for estimating the airflow and simulating the gas dispersion in partially observed environments with uncertainty would be of great importance in making the concepts presented here fit for real applications."
136
+ }
137
+ ],
138
+ "appendix": [],
139
+ "tables": {},
140
+ "image_paths": {
141
+ "1": {
142
+ "figure_path": "2304.08879v3_figure_1.png",
143
+ "caption": "Figure 1: Pipeline of the proposed GSL method. We propose an observation model based on the comparison of the hit-map derived from sensor measurements and those predicted by a dispersion model.",
144
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/Diagrama_PMFS.png"
145
+ },
146
+ "2": {
147
+ "figure_path": "2304.08879v3_figure_2.png",
148
+ "caption": "Figure 2: (A) Gas-hit probability map built from e-nose measurements. The color scale goes from blue (lowest) to red (highest). (B) Gas-hit maps predicted by the dispersion model for two candidate source positions (marked in the images with a black dot). (C) Probability distribution of the source location, estimated by comparing A and B.",
149
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/HitP-SourceP.png"
150
+ },
151
+ "3": {
152
+ "figure_path": "2304.08879v3_figure_3.png",
153
+ "caption": "Figure 3: (A) The influence of a measurement (\u03bbisubscript\ud835\udf06\ud835\udc56\\lambda_{i}italic_\u03bb start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT) over the probability of Hisubscript\ud835\udc3b\ud835\udc56H_{i}italic_H start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is calculated by sampling a 2D gaussian kernel of initial \u03c3=\u03c30\ud835\udf0esubscript\ud835\udf0e0\\sigma=\\sigma_{0}italic_\u03c3 = italic_\u03c3 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, centered on the measurement location, that is stretched and rotated by the wind vector. The values of a\ud835\udc4eaitalic_a and b\ud835\udc4fbitalic_b are calculated using the method outlined in [27]. (B) One-dimensional simplification of the relation between \u03bbi\u2062ksubscript\ud835\udf06\ud835\udc56\ud835\udc58\\lambda_{ik}italic_\u03bb start_POSTSUBSCRIPT italic_i italic_k end_POSTSUBSCRIPT and P\u2062(Hi|zk)\ud835\udc43conditionalsubscript\ud835\udc3b\ud835\udc56subscript\ud835\udc67\ud835\udc58P(H_{i}|z_{k})italic_P ( italic_H start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT | italic_z start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ), as described by Equation 3.",
154
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/gaussian+kernel.png"
155
+ },
156
+ "4": {
157
+ "figure_path": "2304.08879v3_figure_4.png",
158
+ "caption": "Figure 4: The vector v^nsubscript^\ud835\udc63\ud835\udc5b\\hat{v}_{n}over^ start_ARG italic_v end_ARG start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT used in Eq. 4 is the direction from the considered cell i\ud835\udc56iitalic_i to the cell n\u2208N\ud835\udc5b\ud835\udc41n\\in Nitalic_n \u2208 italic_N that is part of the shortest path between i\ud835\udc56iitalic_i and k\ud835\udc58kitalic_k. \u03b4i\u2062ksubscript\ud835\udeff\ud835\udc56\ud835\udc58\\delta_{ik}italic_\u03b4 start_POSTSUBSCRIPT italic_i italic_k end_POSTSUBSCRIPT is the total length of said path.",
159
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/obstacles.png"
160
+ },
161
+ "5": {
162
+ "figure_path": "2304.08879v3_figure_5.png",
163
+ "caption": "Figure 5: Effect of modifying the fraction of cells that are subdivided on each coarse-to-fine step (\u03c1\ud835\udf0c\\rhoitalic_\u03c1). (A) Computation time for updating the source probability distribution. (B) Kullback-Leibler divergence of the resulting source location probability distribution with respect to the distribution predicted by considering all the cells.",
164
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/refining.png"
165
+ },
166
+ "6": {
167
+ "figure_path": "2304.08879v3_figure_6.png",
168
+ "caption": "Figure 6: Average time required to update the source position distribution with the discussed optimizations in three different environments, with \u03c1=0.5\ud835\udf0c0.5\\rho=0.5italic_\u03c1 = 0.5",
169
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/simulation_times.png"
170
+ },
171
+ "8": {
172
+ "figure_path": "2304.08879v3_figure_8.png",
173
+ "caption": "Figure 8: Comparison of the hit maps involved in the algorithm. Column 1 shows the ground truth value, extracted from Gaden, which uses a CFD simulation and the full 3D filament model. Column 2 shows the predicted hit map using the online simplified 2D filament model and reconstructed airflow. Column 3 shows the map produced from sensory measurements taken at regular 1 meter intervals. Column 4 shows the map generated during the GSL process. The real source location is marked in columns 1 and 4 with a circle, and the final declared source position appears in column 4 with a square.",
174
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/Hitmaps.png"
175
+ },
176
+ "9": {
177
+ "figure_path": "2304.08879v3_figure_9.png",
178
+ "caption": "Figure 9: (1) 3D models of the scenarios in which the experiments take place. Details of the environment configuration for each experiment can be found in the online repository. (2) The robot and sensory equipment utilized for the real-world experiments.",
179
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/setup.png"
180
+ },
181
+ "10": {
182
+ "figure_path": "2304.08879v3_figure_10.png",
183
+ "caption": "Figure 10: Average error in meters between the final estimated source location and the ground truth in each of the experiments. For Surge-Cast \u2013lacking source declaration\u2013, the reported error is the smallest distance achieved between the robot and the source position.\n",
184
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/distances.png"
185
+ },
186
+ "11": {
187
+ "figure_path": "2304.08879v3_figure_11.png",
188
+ "caption": "Figure 11: Time required to finish the search in each experiment. For Surge-Cast, the value shown is the time it took to reach the smallest distance to the source, as reported in Fig. 10.\n",
189
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/times.png"
190
+ },
191
+ "12": {
192
+ "figure_path": "2304.08879v3_figure_12.png",
193
+ "caption": "Figure 12: Stream trace of the airflow near the source position in scenario B2. The strong three-dimensionality of the dispersion process poses a challenge for the proposed method, since the dispersion model used by PMFS only considers two dimensions.",
194
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/3d-airflow_edges_withAxes.png"
195
+ },
196
+ "13": {
197
+ "figure_path": "2304.08879v3_figure_13.png",
198
+ "caption": "Figure 13: Snapshot of the calculated probabilities during the search in the real-world environment. (A) Hit probability map, with the arrows inside the cells showing the estimated wind direction. (B) Source location probability distribution. The red arrow is the pose of the robot, and the blue lines indicate the measurements of the laser scanner.",
199
+ "url": "http://arxiv.org/html/2304.08879v3/extracted/5747209/img/exp_real_screenshots.png"
200
+ }
201
+ },
202
+ "validation": true,
203
+ "references": [],
204
+ "url": "http://arxiv.org/html/2304.08879v3"
205
+ }
20240722/2306.02547v3.json ADDED
@@ -0,0 +1,42 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "An Euler-type method for Volterra integro-differential equations",
3
+ "abstract": "We describe an algorithm, based on Euler\u2019s method, for solving Volterra\nintegro-differential equations. The algorithm approximates the relevant\nintegral by means of the composite Trapezium Rule, using the discrete nodes\nof the independent variable as the required nodes for the integration\nvariable. We have developed an error control device, using Richardson\nextrapolation, and we have achieved accuracy better than for all\nnumerical examples considered.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Many techniques exist for solving Volterra integro-differential equations\n(IDEs), such as Adomian decomposition [1 ###reference_b1###], Laplace decomposition\n[2 ###reference_b2###], Galerkin methods [3 ###reference_b3###], Haar functions [4 ###reference_b4###], homotopy perturbation [5 ###reference_b5###] and more [6 ###reference_b6###][14 ###reference_b14###], including Runge-Kutta methods [15 ###reference_b15###][16 ###reference_b16###].\nIn this paper, we focus our attention on Volterra IDEs of the form\nwith an appropriate set of initial conditions defined at and where\nthe kernel has the structure\nThe last of these is said to be separable.\nWe will develop a straightforward one-step method, in the spirit of Euler,\nwhich, combined with Richardson extrapolation, will be seen to yield very\naccurate results.\nThroughout this paper, we assume that all occurring functions are\nreal-valued and as smooth as our analysis requires."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Algorithm",
15
+ "text": "Initially, we will describe our algorithm for the case of in (1 ###reference_###). The more general case will be described later. We partition the\ninterval of interest, denoted by means of the\nequispaced nodes\nThe spacing between the nodes, known as the stepsize, is denoted . The stepsize must be constant in order for our error control device (based\non Richardson extrapolation) to be implemented successfully.\nWe assume that we have an initial value\nand we compute the solution at via\nThis is an explicit Euler approximation to\nThen, we compute\nto obtain an approximation to . Again, this step has\nan explicit Eulerian character.\nBut how to find ? To this end, we use the\ninformation already determined, in the form\nwhere and denote the kernel evaluated at and respectively. This approximation is recognized as the Trapezium\nRule, wherein we have and .\nTo find we compute\nwhere the approximation to the integral is now seen to be the composite Trapezium Rule, with and\nContinuing in this manner yields the general algorithm\nFor the kernel in (2 ###reference_###), we simply express the derivative as\nand for kernel , we have\ni.e. we factor out of the integral since it is not\ndependent on . For those kernels that are dependent on or , we have\nWhen in (1 ###reference_###), we have the system\nand when we have\nObviously, the initial values and must be specified for the\nfirst system, and and must be specified for the second system."
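A minimal sketch of the scheme described above, for y'(x) = f(x) + ∫_{x₀}^{x} K(x, t) y(t) dt: each Euler step uses the composite Trapezium Rule over the nodes computed so far. The test problem is our own choice with a known closed-form solution, not one of the paper's examples.

```python
# Explicit Euler march for y'(x) = f(x) + integral_{x0}^{x} K(x, t) y(t) dt,
# with the memory integral approximated by the composite Trapezium Rule.
import math

def solve_ide(f, K, y0, x0, x_end, n):
    h = (x_end - x0) / n
    xs = [x0 + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(n):
        xi = xs[i]
        if i == 0:
            integral = 0.0  # the integral over an empty interval
        else:
            # endpoints weighted by 1/2, interior nodes by 1, scaled by h
            integral = 0.5 * (K(xi, xs[0]) * ys[0] + K(xi, xi) * ys[i])
            integral += sum(K(xi, xs[j]) * ys[j] for j in range(1, i))
            integral *= h
        ys.append(ys[i] + h * (f(xi) + integral))
    return xs, ys

# Test problem (not from the paper): y'(x) = 1 - int_0^x y(t) dt, y(0) = 0,
# i.e. f(x) = 1 and K(x, t) = -1, whose exact solution is y(x) = sin(x).
xs, ys = solve_ide(lambda x: 1.0, lambda x, t: -1.0, 0.0, 0.0, 1.0, 1000)
err = max(abs(y - math.sin(x)) for x, y in zip(xs, ys))
```

Because K may depend on x, the quadrature sum is recomputed at every step, giving O(n²) cost overall; for a kernel depending only on t, or a separable one as noted in the text, the sum can instead be maintained incrementally.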
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Error control",
+ "text": "The Eulerian character of our algorithm, together with the use of the\nTrapezium Rule, ensures that we cannot expect an error better than\nfirst-order. However, this is quite acceptable, since we can deploy\nRichardson extrapolation to achieve higher-order approximations from\nfirst-order results. We have provided detail regarding Richardson\nextrapolation elsewhere [17 ###reference_b17###], and we simply state here the\nprocess we use to construct solutions of order as high as five.\nLet denote the solution obtained at using a\nstepsize (i.e. the nodes in (3 ###reference_###)). Let \ndenote the solution obtained at using a stepsize . Such a\ncomputation uses the equispaced nodes\nwhere each intermediate node is located midway between \nand . We can similarly obtain the solutions and using appropriate\nnode distributions. Now, we form the linear combinations\nwhich yield 2nd-, 3rd-, 4th- and 5th-order solutions, respectively, at . We will be interested in the 3rd-order solution in our numerical\nexamples. If we assume the 3rd- and 5th-order solutions have error terms of\nthe form\nrespectively, then\nfor suitably small . Since and are known, we have\nas a good estimate for the error coefficient . Consequently, a\nsuitable stepsize for a desired accuracy is found from\nwhere the safety factor is Naturally,\nsuch a value for is computed at each and the smallest such\nvalue is the one chosen. This chosen value is then used to rerun the\nalgorithm, with the resulting output satisfying the specified tolerance If we wish to control relative error, we compute\nat each and, as before, take the smallest such value and rerun the\nalgorithm."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Examples",
+ "text": "We consider a variety of examples, indicated in the tables below.\nFor each example, we solve the IDE on the interval (see\nthe Appendix for commentary in this regard). The parameters and refer to the number of nodes in (3 ###reference_###)) needed to achieve\ntolerances of and \nrespectively, using the Richardson process described above. These examples\nspan the various possibilities in (1 ###reference_###) and (2 ###reference_###). We have also included two examples of systems of IDEs (see Table 3). Initial\nvalues used were determined from the given solutions, and so have not been\nlisted.\nThe solution for #1 is an approximation, as given in [2 ###reference_b2###]. In\n#4, we have\nOn our computational platform [18 ###reference_b18###], these calculations were\nphysically fast, requiring no more than five seconds, and usually much less,\nfor each case."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We have reported on an algorithm, based on Euler\u2019s method, for solving a\nbroad class of Volterra integro-differential equations. Our algorithm\napproximates the relevant integral by means of the composite Trapezium Rule,\nusing the discrete nodes of the independent variable as the required\nnodes for the integration variable . We use Richardson extrapolation to\nenhance the quality of the solution, achieving accuracy better than for all the numerical examples considered. The algorithm has very\ngeneral character, is easy to implement and, on our computational platform,\nis fast.\nNevertheless, further work is required. The algorithm is explicit, and we\nhave not considered stability issues in this work. It is possible that an\nimplicit form of the algorithm may be necessary to solve certain problems,\nand the feasibility of such a version should be investigated. We believe\nthat for a nonseparable kernel a modification to the algorithm will\nbe necessary, and we will combine this task with that of creating an\nimplicit version. Lastly, we have not considered weakly singular problems\nusing our algorithm and this, too, should be a topic for further study."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {},
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2306.02547v3"
+ }
20240722/2307.01836v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2307.07679v3.json ADDED
@@ -0,0 +1,340 @@
+ {
+ "title": "Sharp Convergence Rates for Matching Pursuit",
+ "abstract": "We study the fundamental limits of matching pursuit, or the pure greedy algorithm, for approximating a target function by a linear combination of elements from a dictionary. When the target function is contained in the variation space corresponding to the dictionary, many impressive works over the past few decades have obtained upper and lower bounds on the error of matching pursuit, but they do not match. The main contribution of this paper is to close this gap and obtain a sharp characterization of the decay rate, , of matching pursuit. Specifically, we construct a worst case dictionary which shows that the existing best upper bound cannot be significantly improved. It turns out that, unlike other greedy algorithm variants which converge at the optimal rate , the convergence rate is suboptimal. Here, is determined by the solution to a certain non-linear equation.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Matching pursuit [20 ###reference_b20###] is a widely used algorithm in signal processing that approximates a target signal by selecting a sparse linear combination of elements from a given dictionary.\nOver the years, matching pursuit has garnered significant attention due to its effectiveness in capturing essential features of a signal with a parsimonious representation, offering reduced storage requirements, efficient signal reconstruction, and enhanced interpretability of the underlying signal structure. Because of this, its applications span various domains, including image, video, and audio processing and compression [2 ###reference_b2###, 22 ###reference_b22###].\nWhile previous works have explored the convergence properties of matching pursuit, several open questions and challenges remain. In particular, the relationship between the characteristics of the target signal, the chosen dictionary, and the convergence rate warrants further investigation.\nThe main objective of this paper is to provide a comprehensive analysis of the convergence properties of matching pursuit. Understanding the convergence rate is crucial for assessing the algorithm\u2019s efficiency and determining the number of iterations required to achieve a desired level of approximation accuracy.\nLet be a Hilbert space and be a symmetric collection of unit vectors, i.e., for and implies , called a dictionary. Non-linear dictionary approximation methods, which attempt to approximate a target function by a sparse linear combination\nwhere both the sequence of dictionary elements and the coefficients depend upon the function to be approximated, are common method in machine learning and signal processing. 
Such methods aim to generate an approximation of the form (1) with a small number of terms , and include gradient boosting [11], -boosting [4], basis pursuit [5], and matching pursuit [20].\nIn this work, we consider matching pursuit [20], which is a classical method for algorithmically generating a convergent non-linear dictionary expansion of the form\nMatching pursuit is also known as the pure greedy algorithm [9], and is given by\nwhere is the residual at step . We remark that the inner product here is the inner product of the Hilbert space , which is typically the inner product with respect to a probability distribution in practical applications. An equivalent way of writing this method, which explains the name pure greedy algorithm, is\nIn each step we add the single term which reduces the error the most, hence the name pure greedy algorithm. In other words, we fit a single term to the residual in each step.\nAn important generalization of this algorithm is the pure greedy algorithm with shrinkage , see [32, p. 375], given by\nHere we scale down the greedy term by a factor , called the shrinkage factor, in each step.\nFinding the optimal term in (5) could potentially involve solving a high-dimensional, non-convex optimization problem. To address this computational issue, gradient tree boosting [11] (i.e., boosting) features an additional step in which is itself greedily optimized. Here the dictionary consists of normalized piecewise constant functions (or decision trees), which are fit to the residuals via the CART algorithm [3]. While CART is usually not motivated as a greedy way to optimize the inner product over a collection of normalized piecewise constant functions, it can be equivalently formulated as such. 
Except for decision stumps (or depth-one trees), which involve only a 2-dimensional optimization, understanding the optimization gap for this greedy heuristic (CART) in general is currently an open problem, one that we do not address in the present paper. That is, we assume herein that the optimization problem is solved exactly in (5).\nDespite the practical success and great interest generated by the method of matching pursuit, the precise convergence properties of the algorithm (3) for general dictionaries have not been determined. To describe this problem, we introduce the variation norm [15, 6] with respect to the dictionary , defined by\nwhere the set is the closed convex hull of , i.e.,\nThis norm is a common measure of complexity when studying non-linear dictionary approximation, since it is well-defined and useful for any abstract dictionary [6, 32]. Results for the variation space can also often be extended to interpolation spaces which allow an even wider class of target functions to be analyzed [1].\nFor some context, let us give a few examples of dictionaries of interest and their corresponding variation spaces. For specific dictionaries of interest, the variation spaces have often been characterized and other descriptions are available. For example, the variation space for the dictionary\nof Gaussian bumps is equivalent to the Besov space , which consists of functions satisfying\nwhere is the second order modulus of smoothness in given by\nThis equivalence was proved in [21]. 
For more information on Besov spaces, we refer to [7, 8].\nAnother example is given by the spaces for the dictionaries of ridge functions\nwhich correspond to shallow neural networks; these have been characterized and intensively studied [24, 25, 26, 30, 28, 13, 10]. Here is the popular ReLUk activation function.\nTypically, the convergence of abstract greedy algorithms is studied on the variation space [32]. For instance, it is shown in [9] that the pure greedy algorithm (3) satisfies\nThis was subsequently improved by Konyagin and Temlyakov [14] to\nand finally by Sil\u2019nichenko [31] to\nwhere and is a root of the non-linear equation\nFor the pure greedy algorithm with shrinkage (5) the method of Sil\u2019nichenko implies that\nwhere and is a root of the non-linear equation\nAs , the exponent (cf. the exponent\n obtained in [23]). We remark that for , the pure greedy algorithm is stationary. This manifests itself in the fact that although the exponent approaches , the constant in the upper bound approaches as .\nOn the other hand, for functions it is known [27, 12] that for each there exists an approximation\nsuch that , and that the exponent is optimal for all dictionaries and [15].\nThus, in general the best we could hope for in the convergence of matching pursuit is a convergence rate like , which is attained by other greedy algorithms such as the orthogonal or relaxed greedy algorithm [9, 12] (in fact, the orthogonal greedy algorithm has been shown to converge even faster for compact dictionaries [29, 16]). 
Remarkably, the convergence of matching pursuit is strictly worse. Specifically, it was shown in [19] that there exists a dictionary and an such that the iterates of the pure greedy algorithm (3) satisfy\nThis estimate was finally improved in [18] to\nComparing the estimates (13) and (19) we see that there is still a significant, though small, gap between the best upper and lower bounds. The goal of this work is to close this gap nearly completely. We show the following.\nLet be the root of the equation (14). Then for every , there exists a dictionary and a function such that the iterates of the pure greedy algorithm (i.e., matching pursuit) (3) satisfy\nCombined with the upper bound (13), this gives a precise characterization of the exponent in the convergence rate of matching pursuit\nand therefore makes significant progress towards solving an open problem posed in [32], that is, to find the order of decay of the pure greedy algorithm. Additionally, this shows that the rate of convergence with shrinkage is strictly better than for , i.e., that any amount of shrinkage improves the algorithm in the worst case, lending theoretical support to the empirical observation that some shrinkage in gradient tree boosting can result in significant performance improvements.\nFinally, we remark that although the pure greedy algorithm is significantly worse than the relaxed or orthogonal greedy algorithm in the worst case over the variation space , it achieves the same rate of convergence on some interpolation spaces between the variation space and the space , see [17] for details.\nAlthough we do not give all of the details, our method shows that the rate obtained by Sil\u2019nichenko (15) is optimal also for shrinkage for a numerically computable value . 
However, for sufficiently small shrinkage our method breaks down and it is an open problem to determine the rate of convergence as ."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Construction of the Worst-Case Dictionary",
+ "text": "In this section, we give the proof of Theorem 1 ###reference_orem1###. This is based upon an extension and optimization of the constructions developed in [19 ###reference_b19###, 18 ###reference_b18###]."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Basic Construction",
+ "text": "We let and denote by its standard basis. Let . We will attempt to construct a realization of the pure greedy algorithm (3 ###reference_###), i.e., a dictionary and an initial iterate , for which the convergence rate is bounded below by\nWe will show that this construction succeeds as long as\nLet us begin by verifying that this implies Theorem 1 ###reference_orem1###. It suffices to show that with equality in (21 ###reference_###), we have\nwhere is the root of (14 ###reference_###). Solving the relation (22 ###reference_###) for in terms of we get\nPlugging this into (14 ###reference_###), we readily see that (14 ###reference_###) is equivalent to equality in (21 ###reference_###).\nLet us proceed with the lower bound construction. It will be convenient in what follows to work with the residual form of the pure greedy algorithm, given by\nThe construction, which is quite technical and follows the methods developed in [19 ###reference_b19###, 18 ###reference_b18###], will depend upon four fundamental parameters in addition to : an integer , an integer , a real number , and a function .\nDefine two sequences of elements and inductively by\nwhere is defined by (recall that is one of our parameters)\nHere and are numbers which are chosen so that the following three conditions are satisfied:\nThen, we set (recall that and are parameters)\nFrom (25 ###reference_###) and (27 ###reference_###), we see that .\nThis implies that .\nIt is also clear that , so that .\nThe idea is that the iteration in (25 ###reference_###) should follow the execution of the pure greedy algorithm applied to the dictionary with initial residual (note that for notational convenience we begin indexing at instead of at ). In order to do this, we need to show that our parameters can be chosen appropriately so that the conditions in (27 ###reference_###) can always be satisfied at every iteration, and so that the inner product inequalities\nhold for every and with . 
This is divided into the following two technical results.\nSuppose that the smooth function satisfies the following conditions:\nFor some , we have for .\nsatisfies the following integral equality:\nsatisfies the following two integral inequalities:\nThen for sufficiently large and sufficiently small , the conditions (27) can always be satisfied for every and the conditions (29) can be satisfied for every and with (by making appropriate choices of and ). Thus, the iteration (25) defines a realization of the pure greedy algorithm with dictionary and initial residual .\nThis Proposition is proved in Section 3 and borrows many of the ideas from [18]. The method in [18] can essentially be viewed as choosing to be a smoothed version of\nwhere is chosen to satisfy (30), and then optimizing in and . By generalizing to an arbitrary smooth function we are able to improve upon this analysis. That we can close the gap between existing upper and lower bounds and determine the sharp exponent in the rate of convergence this way is contained in the following Proposition.\nFor any satisfying the condition (21), there exists a smooth function satisfying the conditions of Proposition 1.\nThis Proposition is proved in Section 4. Combining these results, we obtain Theorem 1."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Proof of Proposition 1",
+ "text": "We begin by determining conditions on the values and which are required to ensure that (27 ###reference_###) is satisfied.\nWe calculate by noting that\nso that\nWe are free to choose , and following the argument in [18 ###reference_b18###] we make the choice .\nThe conditions (27 ###reference_###) and the construction (25 ###reference_###) imply that\nwhich means that must be chosen so that satisfies\nNext, since , we see by induction that . Using the construction (25 ###reference_###) this implies that\nThis means that must also satisfy\nFinally, we solve equation (38 ###reference_###) for and choose to be the positive root."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Inner Product Formulas",
+ "text": "Next, we derive formulas for the inner products appearing in (29 ###reference_###), following [18 ###reference_b18###, 19 ###reference_b19###]. Consider first the case . We will proceed by induction on . Combining the last two equations in (25 ###reference_###) we see that\nPlugging the choice into this, we get\nTaking the inner product with , noting that , and that by assumption, we get\nUsing (27 ###reference_###) and the last line in (25 ###reference_###), we see that by construction . From this and equation (42 ###reference_###) we see by induction on that\nfor .\nThe case of is a bit more complicated. We first note that since the formula (28 ###reference_###) implies that\nTaking the inner product of (41 ###reference_###) with , we see that\nCombining the base case (44 ###reference_###) and the inductive step (45 ###reference_###), we get\nNext, we consider the case . In this case the relations (25 ###reference_###) give, for\nsince . Further, we see that\nShifting the indices in (47 ###reference_###) we also get\nSolving this equation for gives\nand plugging that back into equation (48 ###reference_###) gives\nFinally, plugging this into (47 ###reference_###), we get that"
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Parameter Asymptotics",
+ "text": "The estimates required to prove Proposition 1 ###reference_position1### are quite technical and require the comparison of a variety of sums with corresponding integrals. For this purpose, we will need asymptotic formulas for some of the parameters introduced so far. These asymptotics were first derived in [18 ###reference_b18###, 19 ###reference_b19###].\nWe have from (35 ###reference_###) that\nfor a constant (the exact value of will not be important for us). This implies that\nfor a (different) constant .\nFrom this, we obtain the following asymptotics for ,\nThese two formulas give us the asymptotics of the right hand side of (37 ###reference_###)\nFinally, we will also need the asymptotics for the inductive factor in (52 ###reference_###)\nThe asymptotics for (55 ###reference_###) and the asymptotics for (54 ###reference_###) imply that\nMultiplying these, we get\nNote that here and in the following, we use the notation to denote a quantity depending upon the parameter whose limit is as .\nWe also adopt the convention that the constants in big- expressions and the rate of decay to in little- expressions will be uniform in any lower case indices or which do not appear in the argument (or the subscript if the argument is ), but may depend upon other parameters of the construction. This modified big- notation is necessary to simplify the comparisons between sums and integrals in the following.\nFinally, we recall the basic fact, which we will use without reference in the following, that (using this convention)"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Estimates for the Conditions (27)",
+ "text": "In this Section, we show that for sufficiently large , the parameters and can be chosen to satisfy the conditions (27 ###reference_###). For this, we will need the following formula for the coefficients of , which will also be useful later.\nThe following formula holds for\nFurthermore, for , we have\nSince we also have for , this lemma gives the components of all of the iterates .\nWe prove this formula for any fixed by induction on , with the base case being . The base case follows in the case where from\nsince . When , we use the definition of to get (62 ###reference_###) in the case .\nThe inductive step follows from (41 ###reference_###) since\nUsing the inductive hypothesis, this gives\nwhen , and similarly when .\n\u220e\nUsing this, we prove the following.\nLet be a non-zero function. Then for sufficiently large , equations (26 ###reference_###) and (37 ###reference_###) are solvable for . In addition, is non-negative and the resulting value of satisfies (39 ###reference_###), so that is defined via (38 ###reference_###). Finally, the construction satisfies\nNote that since is non-negative, we have\nWe begin by calculating the inner product for using Lemma 1 ###reference_ma1### and the definition of in (26 ###reference_###). We get\nFor sufficiently large , we now show by induction on that we can solve (26 ###reference_###) and (37 ###reference_###) for , and that the resulting will satisfy (37 ###reference_###) and , which implies that for the satisfying (38 ###reference_###) we will have .\nSo assume inductively that for all we have and . (Note that in the base case there is nothing to assume.) 
Since , we use the inductive assumption on to remove the initial double sum on the right hand side of (68) and get the inequality\nUsing the asymptotics (54), (55) and (56), and comparing the Riemann sum in the previous equation with an integral, a straightforward calculation shows that can be chosen sufficiently large so that (for an appropriate constant ) and\nwhich implies that . Moreover, since we readily see that\nas , which implies that .\n\u220e"
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "Estimates for the Inner Product Conditions",
+ "text": "In the following analysis, we assume that has been chosen sufficiently large so that the conclusion of Proposition 3 ###reference_position3### holds, and turn to proving that the conditions (29 ###reference_###) are satisfied under the assumptions of Proposition 1 ###reference_position1### for sufficiently large and small .\nFor this, we will need the following estimate relating sums to integrals of .\nSuppose that is a function such that for some , we have on . Let , then we have\nHere the constant in only depends upon and not upon and .\nConsider the sequence of points\nThis sequence of points forms a partition of the interval and the gap (or mesh/norm) of satisfies\nMoreover, we have the following estimate\nSince is bounded, we obtain\nFinally, since is smooth and is increasing and bounded on , we obtain, by comparing with the Riemann-Stieltjes integral and using that the mesh satisfies ,\n\u220e\nSimilar to the argument given in [18 ###reference_b18###], the next step is to carefully study the sequence . We also give an asymptotic formula for the coefficients of .\nSuppose that satisfies the condition (30 ###reference_###) in Proposition 1 ###reference_position1###.\nThen, in the preceding construction, with chosen large enough such that the conclusion of Proposition 3 ###reference_position3### holds, the sequence in (26 ###reference_###) will satisfy\nMore precisely, this convergence will be such that\nIn addition, under these conditions as we have the following asymptotics for the coefficients of\nfor any . (Recall that by our convention goes to as uniformly in .)\nThe proof is quite similar to the proof of Lemma 2 in [18 ###reference_b18###]. Suppose that\nfor some .\nUsing the formula (68 ###reference_###), we see that\nWe proceed to evaluate the limits in the above equation. 
Using the fact that is Riemann integrable, that is fixed and that as , we see that\nFurther, applying Lemma 2 to , we see that\nFrom this, by comparing a Riemann sum with an integral, we see that\nsince for large enough , for (recall that is assumed to vanish on for some ).\nPlugging this into (82), we get\nUtilizing the asymptotics (54) and (56), we obtain that\nTogether with (86), this implies that (assuming that (81) holds)\nIn an entirely analogous manner, we prove that (for any ) implies that\nWe proceed to use the same fixed point argument from [18] to complete the proof. Define by\nProposition 3 implies that . Thus, setting and , we see inductively that (88) implies that\nwhile (89) implies that\nThus (78) will be proved if we can show that (here the exponent represents function composition).\nThe function defined in (90) satisfies .\nRewrite as\nwhere , , and .\nThe assumption (30) implies that , i.e., that . We wish to show that iterating the map converges to this fixed point. This follows from the following simple calculation\nsince .\n\u220e\nLemma 3 completes the proof of (78).\nNext, we prove the asymptotic formula (80). We may assume that and use Lemma 1 and the fact that (see Proposition 3) to get\nUsing the definition of (26), we get\nUsing Lemma 2 and noting that (since on ), we get\nas desired.\nFinally, we will prove (79). 
For this, we note that (37), combined with the asymptotics (56), implies that\nWe use the formula (41) to rewrite this as\nUsing the asymptotics (54) and (56), we see that\nfor a constant . In addition, using the definition of , the fact that , and that is smooth, we see that\nfor a constant .\nThis means that the last term in (99) is\nfor a constant .\nNow, we calculate, using the definition of ,\nSimilarly, we obtain\nNext, we observe that\nUsing that is smooth, we see that\nApplying the asymptotics (80) and comparing the Riemann sum with an integral, we get\nSimilarly, we obtain\nUsing all of this, we obtain\nSince is non-negative, the integral in the denominator above is non-zero and we obtain\nfor a constant . Using that and combining the above estimates with (99) and (98), we get\nfor some constant . Finally, since\nis uniformly bounded in (since the sequence converges to ), we must in fact have , which completes the proof.\n\u220e\nNext, we prove that for sufficiently large , the conditions (29) will be satisfied. We consider first the case . We have the following result.\nSuppose that satisfies the conditions of Proposition 4 and also the inequality (31) from Proposition 1.\nThen there is a sufficiently large such that for we have .\nWe utilize the formula (52). If we can show that for some , we have\nfor every , then the asymptotics (59) combined with the base case (which holds by construction) will imply the desired result by induction.\nDefine . 
From the definition of , we see that\nUsing (79), (78), and that is bounded, we get\nWe rewrite the term in parentheses as\nUtilizing the asymptotics (58) and the boundedness of , this can be rewritten as\nNext, we use that is infinitely differentiable and that to obtain\nRestricting (since otherwise ), we get\nSince is smooth (and thus is bounded), we get\nWe finally calculate, using the asymptotic formula (80) for the components of to get\nComparing this Riemann sum with an integral, setting , using that (and thus the integrand) is , we get\nCombined with the assumption (31), this completes the proof, since for sufficiently large the term in brackets above will be strictly less than in magnitude uniformly in .\n\u220e\nNext, we consider the case where . Here we need to choose both large enough and small enough.\nSuppose that satisfies the condition of Proposition 4 and also the inequality (32) from Proposition 1.\nThen there is a sufficiently large such that for we have . Further, by increasing if necessary, the denominator in the definition of will be non-zero and we can choose small enough so that we also have for all .\nWe will need to use the formulas (43) and (46). For this, we will need to estimate\nUtilizing the inductive definition of (25), we get\nWe consider the last term above first. 
From the definition of (26 ###reference_###) we get\nSince by Proposition 3 ###reference_position3### and by Proposition 4 ###reference_position4###, we see that\nApplying Lemma 2 ###reference_ma2###, we see that\nSetting we write this as\nNext, we consider the term\nUsing again (78 ###reference_###), we get\nand Lemma 2 ###reference_ma2### implies (recalling the choice )\nComparing a Riemann sum with an integral finally yields\nFinally, we consider the first term in (124 ###reference_4###). Using (80 ###reference_###), we get\nUtilizing the asymptotics (54 ###reference_###), (55 ###reference_###), and (78 ###reference_###), we get\nProceeding as before, using Lemma 2 ###reference_ma2###, recalling the choice , and comparing a Riemann sum with an integral, we finally obtain\nCombining the estimates (135 ###reference_5###), (132 ###reference_2###), and (128 ###reference_8###), we get\nOur assumption (32 ###reference_###) on now imply that for sufficiently large , the sum will satisfy the bounds\nuniformly in and . We now use equation (43 ###reference_###) to see that\nThe bound (137 ###reference_7###) implies that for sufficiently large the final absolute value above is strictly smaller than uniformly in . Thus, by choosing potentially larger, we can guarantee that for all ,\nas well. Fix large enough to guarantee this. Then we get\nas desired.\nFinally, we bound . Consider the inner product formula (46 ###reference_###). Utilizing the definition of , we obtain\nwhere\nWe note that since , we get\nSince is assumed to vanish on we get that\nand so this sum is bounded uniformly in (but depending upon ).\nPlugging these bounds into (46 ###reference_###) we see that\nfor a constant depending upon . Utilizing the bound (139 ###reference_9###), we see that for sufficiently small (depending upon and thus ), we finally get (since )\nfor all . This completes the proof.\n\u220e"
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Proof of Proposition 2",
57
+ "text": "In this Section, we prove Proposition 2 ###reference_position2###, which guarantees the existence of a function satisfying the conditions of Proposition 1 ###reference_position1###.\nTo construct such a function , let be a bump function which is supported on and satisfies\nLet and write .\nFix a parameter to be determined later. For a continuously differentiable function , we extend to the whole real line by setting for and for . Then, we smooth using the bump function and normalize so that (30 ###reference_###) is satisfied. Specifically, we set\nand , where the constant is chosen to satisfy (30 ###reference_###).\nSince in as and both functions vanish in a neighborhood of for , we see that\nThis means that if satisfies\nthen the normalizing constant will satisfy as .\nUsing this, we calculate that if satisfies\nand\nin addition to (148 ###reference_8###), then for sufficiently small , will satisfy (30 ###reference_###), (31 ###reference_###), and (32 ###reference_###).\nIndeed, for any , we clearly have and on if . Now, since vanishes in a neighborhood of , we see that the integral in (32 ###reference_###) converges to the corresponding integral for uniformly in . For the condition (31 ###reference_###), we note that\nas uniformly in . Further, converges to in , so that the only problematic term arises from the term in (31 ###reference_###). However, converges uniformly to for outside of . For this term, we note that\nis uniformly continuous in and (for smaller values of the entire integral in (31 ###reference_###) vanishes). On the interval the function increases rapidly from to , so the contribution from the integral of this term over converges to\nuniformly in as . For , the integral in (31 ###reference_###) reduces to\nSince is bounded and we see that this converges to\nas uniformly in . Since (setting in (149 ###reference_9###)), for sufficiently small we will have for all (as increases to on the interval and ). 
Thus satisfies all of the conditions of Proposition 1 ###reference_position1###. The task now is to choose appropriately and to find such an .\nThe key to this is to introduce the function\nand to rewrite the conditions on in terms of . The condition (148 ###reference_8###) then becomes\nIntegrating by parts to remove the term in (149 ###reference_9###) and observing that\nwe obtain\nTo determine the function , we choose to set (note that this is also what is needed to make Sil\u2019nichenko\u2019s analysis tight)\nfor a positive constant . Combined with the definition (156 ###reference_6###), which implies that , and the endpoint condition (157 ###reference_7###), this forces\nand\nSince must hold, we obtain the following condition on and\nDifferentiating the integrand in (150 ###reference_0###) with respect to , rewriting in terms of , and noting that for the integrals vanish, we see that the condition (150 ###reference_0###) becomes\nPlugging (162 ###reference_2###) into these conditions and noting that the integral above is minimized in when the integrand is , which occurs for\nand is maximized when , we obtain the following two conditions on , and\nA routine calculation shows that condition (163 ###reference_3###) means that must be chosen so that\nLet be the value of which gives equality in the above bound. By choosing , we see that it suffices for the inequalities (165 ###reference_5###) and (166 ###reference_6###) to hold (strictly, of course) for . Plugging in this value and performing a routine, yet tedious, calculation, we see that (165 ###reference_5###) becomes\nand (166 ###reference_6###) becomes\nWe see that (168 ###reference_8###) holds for any since the left hand side is and the right hand side is . 
Thus the only condition on which must be satisfied is (169 ###reference_9###), which is exactly the relation (21 ###reference_###).\nThe only step left in the proof of Theorem 1 ###reference_orem1### is to show that the equation (156 ###reference_6###) can be solved for when is given by (162 ###reference_2###). Differentiating (156 ###reference_6###) with respect to , we obtain the following integro-differential equation on :\nWe will use the following Lemma, based on the Schauder fixed point theorem, to solve this equation.\nSuppose that is continuously differentiable and satisfies the bound\nIn addition, assume that there exist functions on such that\nand\nThen the equation (170 ###reference_0###) has a continuously differentiable solution .\nConsider the map defined by\nFor a fixed to be determined later, define the set\nBy the Arzela-Ascoli theorem, is a compact subset of . We will show that for sufficiently large , maps into itself. Indeed, if , then we clearly have (since and thus are non-negative)\nand\nby assumption on and . Further, taking derivatives, we see that\nWe now integrate the last integral above by parts to obtain\nSince is fixed, and by assumption, collecting the preceding equations gives a bound of\nwhere is a constant only depending upon and not upon the bound . If , we can choose to guarantee that as well. This proves that maps into itself. The proof is now completed by invoking the Schauder fixed point theorem.\n\u220e\nFinally, we verify the assumptions of Lemma 4 ###reference_ma4### for the function in (170 ###reference_0###). A straightforward calculation gives that the integral in (171 ###reference_1###) is maximized when and its value is given by\nPlugging in the values and which give equality in equations (169 ###reference_9###) and (167 ###reference_7###) for , a straightforward numerical calculation gives that . 
Since this value is continuous in and , the same holds for close enough and .\nFinally, we verify the existence of the functions and in Lemma 4 ###reference_ma4###. We consider the modified map\nStarting with the function , we iterate to obtain the sequence . The map is monotone, so that and . Consequently, if for some , then and will satisfy the conditions of Lemma 4 ###reference_ma4###. This follows since by the monotonicity of , implies that and implies that . Finally, implies that and coincide, which shows that the conditions of the Lemma are satisfied.\nWe verify numerically for given in (170 ###reference_0###), with and (the optimal values for ), that the iterates satisfy . For reference, a plot of , and is shown in Figure 1 ###reference_###. Since depends continuously on (and thus on and ), this completes the proof of Proposition 2 ###reference_position2###. Finally, in Figure 1 ###reference_### we solve equation (170 ###reference_0###) numerically to give an idea of what the optimal looks like.\n###figure_1### ###figure_2###"
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Acknowledgements",
63
+ "text": "We would like to thank Andrew Barron, Ron DeVore, Jinchao Xu, Vladimir Temlyakov, and Matias Cattaneo for helpful discussions. JWS was supported in part by the National Science Foundation through DMS-2424305 and CCF-2205004 as well as the Office of Naval Research MURI N00014-20-1-2787. JMK was supported in part by the National Science Foundation through CAREER DMS-2239448, DMS-2054808, and HDR TRIPODS CCF-1934924."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {},
68
+ "image_paths": {
69
+ "1(a)": {
70
+ "figure_path": "2307.07679v3_figure_1(a).png",
71
+ "caption": "Figure 1: Left: The first 3 iterates of \\tilde{T}_{G}, which demonstrate that f_{3}>0. Right: The numerically calculated solution to (170) for the G corresponding to \\beta^{*} and \\tau^{*} for s=1.",
72
+ "url": "http://arxiv.org/html/2307.07679v3/x1.png"
73
+ },
74
+ "1(b)": {
75
+ "figure_path": "2307.07679v3_figure_1(b).png",
76
+ "caption": "Figure 1: Left: The first 3 iterates of \\tilde{T}_{G}, which demonstrate that f_{3}>0. Right: The numerically calculated solution to (170) for the G corresponding to \\beta^{*} and \\tau^{*} for s=1.",
77
+ "url": "http://arxiv.org/html/2307.07679v3/x2.png"
78
+ }
79
+ },
80
+ "validation": true,
81
+ "references": [
82
+ {
83
+ "1": {
84
+ "title": "The Annals of Statistics 36(1), 64\u201394 (2008)",
85
+ "author": "Barron, A.R., Cohen, A., Dahmen, W., DeVore, R.A.: Approximation and learning\nby greedy algorithms.",
86
+ "venue": null,
87
+ "url": null
88
+ }
89
+ },
90
+ {
91
+ "2": {
92
+ "title": "In: Proceedings., International Conference on Image Processing,\nvol. 1, pp. 53\u201356. IEEE (1995)",
93
+ "author": "Bergeaud, F., Mallat, S.: Matching pursuit of images.",
94
+ "venue": null,
95
+ "url": null
96
+ }
97
+ },
98
+ {
99
+ "3": {
100
+ "title": "Belmont, Calif.: Wadsworth International Group, c1984. (1984).",
101
+ "author": "Breiman, L., Friedman, J., Olshen, R., Stone, C.: Classification and Regression\nTrees.",
102
+ "venue": "DOI https://doi.org/10.1201/9781315139470",
103
+ "url": null
104
+ }
105
+ },
106
+ {
107
+ "4": {
108
+ "title": "Journal of the American Statistical Association 98(462),\n324\u2013339 (2003)",
109
+ "author": "B\u00fchlmann, P., Yu, B.: Boosting with the L2 loss: regression and\nclassification.",
110
+ "venue": null,
111
+ "url": null
112
+ }
113
+ },
114
+ {
115
+ "5": {
116
+ "title": "SIAM Review 43(1), 129\u2013159 (2001)",
117
+ "author": "Chen, S.S., Donoho, D.L., Saunders, M.A.: Atomic decomposition by basis\npursuit.",
118
+ "venue": null,
119
+ "url": null
120
+ }
121
+ },
122
+ {
123
+ "6": {
124
+ "title": "Acta Numerica 7, 51\u2013150 (1998)",
125
+ "author": "DeVore, R.A.: Nonlinear approximation.",
126
+ "venue": null,
127
+ "url": null
128
+ }
129
+ },
130
+ {
131
+ "7": {
132
+ "title": "Springer Science & Business Media (1993)",
133
+ "author": "DeVore, R.A., Lorentz, G.G.: Constructive approximation, vol. 303.",
134
+ "venue": null,
135
+ "url": null
136
+ }
137
+ },
138
+ {
139
+ "8": {
140
+ "title": "Transactions of the American Mathematical Society 335(2),\n843\u2013864 (1993)",
141
+ "author": "DeVore, R.A., Sharpley, R.C.: Besov spaces on domains in .",
142
+ "venue": null,
143
+ "url": null
144
+ }
145
+ },
146
+ {
147
+ "9": {
148
+ "title": "Advances in Computational Mathematics 5(1), 173\u2013187 (1996)",
149
+ "author": "DeVore, R.A., Temlyakov, V.N.: Some remarks on greedy algorithms.",
150
+ "venue": null,
151
+ "url": null
152
+ }
153
+ },
154
+ {
155
+ "10": {
156
+ "title": "Constructive Approximation 55(1), 369\u2013406 (2022)",
157
+ "author": "E, W., Ma, C., Wu, L.: The barron space and the flow-induced function spaces\nfor neural network models.",
158
+ "venue": null,
159
+ "url": null
160
+ }
161
+ },
162
+ {
163
+ "11": {
164
+ "title": "The Annals of Statistics 29(5), 1189 \u2013 1232 (2001).",
165
+ "author": "Friedman, J.H.: Greedy function approximation: A gradient boosting machine.",
166
+ "venue": "DOI 10.1214/aos/1013203451.",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "12": {
172
+ "title": "The Annals of Statistics 20(1), 608\u2013613 (1992)",
173
+ "author": "Jones, L.K.: A simple lemma on greedy approximation in hilbert space and\nconvergence rates for projection pursuit regression and neural network\ntraining.",
174
+ "venue": null,
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "13": {
180
+ "title": "IEEE Transactions on Information Theory 64(12), 7649\u20137656\n(2018)",
181
+ "author": "Klusowski, J.M., Barron, A.R.: Approximation by combinations of ReLU and\nsquared ReLU ridge functions with and controls.",
182
+ "venue": null,
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "14": {
188
+ "title": "East J. Approx 5(4), 493\u2013499 (1999)",
189
+ "author": "Konyagin, S., Temlyakov, V.: Rate of convergence of pure greedy algorithm.",
190
+ "venue": null,
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "15": {
196
+ "title": "IEEE Transactions on Information Theory 47(6), 2659\u20132665\n(2001)",
197
+ "author": "Kurkov\u00e1, V., Sanguineti, M.: Bounds on rates of variable-basis and\nneural-network approximation.",
198
+ "venue": null,
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "16": {
204
+ "title": "arXiv preprint arXiv:2304.13332 (2023)",
205
+ "author": "Li, Y., Siegel, J.: Entropy-based convergence rates of greedy algorithms.",
206
+ "venue": null,
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "17": {
212
+ "title": "Mathematical Notes 76, 497\u2013510 (2004)",
213
+ "author": "Livshits, E.D.: Rate of convergence of pure greedy algorithms.",
214
+ "venue": null,
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "18": {
220
+ "title": "Izvestiya: Mathematics 73(6), 1197 (2009)",
221
+ "author": "Livshits, E.D.: Lower bounds for the rate of convergence of greedy algorithms.",
222
+ "venue": null,
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "19": {
228
+ "title": "Constructive Approximation 19(4), 509\u2013523 (2003)",
229
+ "author": "Livshitz, E., Temlyakov, V.: Two lower estimates in greedy approximation.",
230
+ "venue": null,
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "20": {
236
+ "title": "IEEE Transactions on Signal Processing 41(12), 3397\u20133415\n(1993)",
237
+ "author": "Mallat, S.G., Zhang, Z.: Matching pursuits with time-frequency dictionaries.",
238
+ "venue": null,
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "21": {
244
+ "title": "37. Cambridge University Press (1992)",
245
+ "author": "Meyer, Y.: Wavelets and Operators: Volume 1.",
246
+ "venue": null,
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "22": {
252
+ "title": "IEEE Transactions on Circuits and Systems for Video Technology\n7(1), 158\u2013171 (1997)",
253
+ "author": "Neff, R., Zakhor, A.: Very low bit-rate video coding based on matching\npursuits.",
254
+ "venue": null,
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "23": {
260
+ "title": "Proceedings of the Steklov Institute of Mathematics 280(1),\n227\u2013239 (2013)",
261
+ "author": "Nelson, J.L., Temlyakov, V.N.: Greedy expansions in Hilbert spaces.",
262
+ "venue": null,
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "24": {
268
+ "title": "In: International Conference on Learning Representations (ICLR 2020)\n(2019)",
269
+ "author": "Ongie, G., Willett, R., Soudry, D., Srebro, N.: A function space view of\nbounded norm infinite width ReLU nets: The multivariate case.",
270
+ "venue": null,
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "25": {
276
+ "title": "The Journal of Machine Learning Research 22(1), 1960\u20131999\n(2021)",
277
+ "author": "Parhi, R., Nowak, R.D.: Banach space representer theorems for neural networks\nand ridge splines.",
278
+ "venue": null,
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "26": {
284
+ "title": "SIAM Journal on Mathematics of Data Science 4(2), 464\u2013489\n(2022)",
285
+ "author": "Parhi, R., Nowak, R.D.: What kinds of functions do deep neural networks learn?\nInsights from variational spline theory.",
286
+ "venue": null,
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "27": {
292
+ "title": "S\u00e9minaire Analyse fonctionnelle (dit \u201cMaurey-Schwartz\") pp.\n1\u201312 (1981)",
293
+ "author": "Pisier, G.: Remarques sur un r\u00e9sultat non publi\u00e9 de B. Maurey.",
294
+ "venue": null,
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "28": {
300
+ "title": "arXiv preprint arXiv:2106.15002 (2021)",
301
+ "author": "Siegel, J.W., Xu, J.: Characterization of the variation spaces corresponding to\nshallow neural networks.",
302
+ "venue": null,
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "29": {
308
+ "title": "IEEE Transactions on Information Theory 68(5), 3354\u20133361\n(2022)",
309
+ "author": "Siegel, J.W., Xu, J.: Optimal convergence rates for the orthogonal greedy\nalgorithm.",
310
+ "venue": null,
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "30": {
316
+ "title": "Foundations of Computational Mathematics pp. 1\u201357 (2022)",
317
+ "author": "Siegel, J.W., Xu, J.: Sharp bounds on the approximation rates, metric entropy,\nand -widths of shallow neural networks.",
318
+ "venue": null,
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "31": {
324
+ "title": "Mathematical Notes 76(3), 582\u2013586 (2004)",
325
+ "author": "Sil\u2019nichenko, A.: Rate of convergence of greedy algorithms.",
326
+ "venue": null,
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "32": {
332
+ "title": "Cambridge University Press (2011)",
333
+ "author": "Temlyakov, V.: Greedy approximation, vol. 20.",
334
+ "venue": null,
335
+ "url": null
336
+ }
337
+ }
338
+ ],
339
+ "url": "http://arxiv.org/html/2307.07679v3"
340
+ }
20240722/2309.00169v3.json ADDED
The diff for this file is too large to render.
 
20240722/2309.10095v2.json ADDED
@@ -0,0 +1,149 @@
1
+ {
2
+ "title": "A Semi-Supervised approach for Event Identification",
3
+ "abstract": "The problem of event identification in power systems is increasingly recognised as one of the key steps toward a more reliable, secure, and stable electrical grid.\nOn the other hand, the increasing deployment of Phasor Measurement Units (PMUs) across the grid, along with the advancement in machine learning technologies and data science, provides invaluable opportunities to investigate more advanced data-driven event identification methods. However, given the fact that using expert knowledge for labeling various types of events is expensive and tedious, the availability of properly labeled eventful PMU data is still an ongoing challenge in the literature within this context. In this paper, we propose a novel semi-supervised framework to investigate the efficacy of including unlabeled samples on the performance of the event identification task. We investigate the performance of two categories of classical semi-supervised approaches, i.e., self-training and graph-based methods. By categorizing each event via a set of physically interpretable features obtained from modal analysis of synthetic eventful PMU data, we illustrate that our proposed approach can effectively identify generation loss and line trip events, even in an extremely limited labeled-data regime.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The\nproblem of event identification is increasingly recognised as one of the key steps toward a more reliable, secure, and stable electrical grid.\nExtensive research has been carried out on this problem which can be broadly categorized into traditional model based methods (see e.g., [Model-based1, Model-based2, model-based3, model-based5]) and the state-of-the-art data-driven methods which have received considerable critical attention in recent years.\nThe roots for the increasing significance of data-driven event identification in a wide variety of power system studies (e.g., monitoring and operation) stem from the following two main factors:\n1) The performance of model based methods highly depends on the accuracy of dynamic models of the system components (e.g., generators, loads, etc.). Given the ongoing integration of renewable energy technologies as well as unconventional loads with power electronic interfaces, it is difficult to develop accurate and sufficiently low order dynamical models which in turn limits the practical application of such methods in real world problems.\n2) The increasing deployment of Phasor Measurement Units (PMUs) across the grid along with the advancements in machine learning technologies and data science provide invaluable opportunities to investigate more advanced data-driven based event identification methods.\nThe main advantage of such methods is their ability to distinguish between different types of power system disturbances from the collection of high-dimensional spatio-temporally correlated time-synchronized phasor measurements with high resolution rather than relying on the dynamic modeling of the power system components.\nExtensive research has been carried out to investigate and highlight the efficacy of such methods which can be broadly categorized from two different perspectives: (a) the methods by which they process the time series PMU data to infer information regarding different types of events, and (b) 
the availability of labeled eventful PMU data.\nWe briefly outline these in Sections I ###reference_###.A and I.B, respectively."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Processing the Time-Series PMU Data",
15
+ "text": "The first step in any data-driven event identification scheme is to process the time-series data to infer information regarding the specific type of event. Within this perspective, available literature in the context of event identification can be broadly categorized into two subgroups depending on whether or not they rely on the physics of the system to process the PMU data.\n(i) model-free feature extraction methods: References such as [heuristic_1, Unsupervised1-Ellipsoid, Ellipsoid_time] extract features based on the properties (e.g., volume, rate of change of volume, center coordinates, projection of axes, etc.) of the minimum volume enclosing ellipsoid (MVEE), which is constructed from the collection of time-series PMU data (a main drawback being that this is computationally heavy).\nWithin the same category, references [supervised1-DT-Vittal, Supervised6-CNN, Supervised2-KNN, Supervised3-SVM, Supervised4-ELM, CNN_new1, TextMining_Sara, DNN_new, LSTM_new] are examples of machine learning based event identification methods that use various model-free feature extraction techniques to transform the raw time series PMU data, or their pruned version (see, for example, [TextMining_Sara]), into numerical features that characterize different types of events.\n(ii) physics-based feature extraction methods:\nReferences such as [Brahma-DynEvents, SignalProc-1, SignalProc-2, li2018real] rely on well-established signal processing techniques (e.g., ) to extract physically interpretable features which can characterize various types of events based on the underlying dynamical behavior of the system."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "The Availability of Labeled Eventful Time-Series PMU Data",
21
+ "text": "The majority of existing literature in the context of event identification [supervised1-DT-Vittal, Supervised6-CNN, Supervised2-KNN, Supervised3-SVM, Supervised4-ELM, CNN_new1, TextMining_Sara, DNN_new, LSTM_new] belongs to the supervised learning paradigm.\nThese methods require properly labeled data with detailed event types. However, given the fact that using expert knowledge for labeling various types of events can be expensive and tedious, properly labeled eventful PMU data is often scarce.\nUnsupervised and semi-supervised learning are common practices in machine learning when dealing with limited or no labeled data.\nUnsupervised learning aims to infer the underlying structure within the unlabeled data. Although such methods can distinguish between clusters of events [Unsupervised_YangWeng, Unsupervised1-Ellipsoid, Unsupervised3-Ensemble, Unsupervised4-Kmeans, Unsupervised2-PCA, Yangweng_limitedlabel], they do not possess the ground truth to associate each cluster with its real-world meaning. Furthermore, when there is access to even a small amount of labeled data, supervised learning has been shown to perform better than unsupervised learning methods [Unsupervised_YangWeng, Yangweng_limitedlabel]. Semi-supervised learning approaches, on the other hand, aim to label unlabeled data points using knowledge learned from a small number of labeled data points, which can significantly enhance the performance of a classification task [SS-approaches]. They can be broadly categorized into i) generative methods [ss-zhu2009introduction], ii) graph-based methods [zhu2002LP, ss_LP_1, ss_wang2013semi, ss_chapelle2005semi], iii) disagreement-based methods [ss-disagreement-zhou2010semi], and iv) semi-supervised support vector machines (S3VMs) [ss-joachims1999transductive, ss-tsvm2]. 
In this paper, we focus on the first two categories for their interpretability and explainability.\nWe propose a novel semi-supervised event identification approach to investigate the efficacy of including unlabeled samples on the performance of two categories of classical semi-supervised approaches. Within this framework, we illustrate that our proposed approach to extract a set of physically interpretable features based on modal analysis of PMU data (see our previous work [NT_TPS] for further details) can effectively capture the underlying dynamical behavior of the system in response to a specific type of event. Further, we also illustrate that even a subset of features obtained from a filtering technique [NT_TPS], or the low dimensional representation of the feature vectors obtained from principal component analysis (PCA) of the samples, are explainable enough to identify the two types of events in this study."
22
+ },
23
+ {
24
+ "section_id": "1.3",
25
+ "parent_section_id": "1",
26
+ "section_name": "Related Work",
27
+ "text": "Reference [Detection_localization_and_classification_SS] proposes a framework for event detection, localization, and classification in the power grid based on semi-supervised learning. Since the majority of power system events are unlabeled, a pseudo-label (PL) technique is leveraged to classify events with limited labels.\nReference [Razavi-Far-TPS-Industrial] uses several state-of-the-art approaches for feature extraction and semi-supervised feature reduction. Based on [Razavi-Far-TPS-Industrial], relationships between labeled and unlabeled data are mainly extracted based on three fundamental semi-supervised assumptions: i) manifold assumption: data can be represented on a low dimensional manifold (most graph-based schemes rely on this assumption), ii) cluster assumption: data samples belonging to the same cluster are assumed to be of the same class, and iii) smoothness assumption: samples in the dense regions share the same class label.\nThe remainder of the paper is organized as follows. Section II ###reference_### describes the simulation process to generate the synthetic eventful PMU data. We explain the proposed semi-supervised event identification framework in Section IV ###reference_###. In Section V ###reference_###, we further elaborate on the pseudo labeling process of the unlabeled samples and the semi-supervised classification models. We discuss the simulation results in Section VI ###reference_###. Finally, Section VII ###reference_### concludes the paper."
28
+ },
29
+ {
30
+ "section_id": "2",
31
+ "parent_section_id": null,
32
+ "section_name": "II Generation of the Synthetic Eventful Time-series PMU Data",
33
+ "text": "Consider an electric grid with buses, lines, generators, loads, and installed PMUs, denoted as , , , , , respectively.\nIn this study, we consider three different types of events, denoted as where GL, LT, and BF represent generation loss, line trip, and bus fault events, respectively. Further, we define a set of different operating conditions by randomly changing the power consumption of each load (i.e., ) individually, so as to increase or decrease the system net load within a specific range (in this study, to of the normal system loading condition). is the total number of loading condition scenarios that are considered in the study.\nRecall that each PMU has multiple channels through which we can obtain different types of measurements relative to the bus where the PMU is installed.\nFor the sake of clarity, we merely focus on the positive sequence voltage magnitude () and corresponding angle (), and frequency () channels in this study (other channels could be included as well).\nFor any given channel where , let , , and denote the channel measurement obtained from PMU at sample with a sampling period of . The total number of samples is . Correspondingly, we concatenate all the measurements obtained from PMUs into a matrix, denoted as where for , and . Finally, for the event, we define by combining all the phasor measurements obtained from PMUs, PMU channels, and for samples.\nWithin this setting, we developed a publicly available Python code which leverages the PSS\u00aeE software Python Application Program Interface (API) to generate synthetic eventful PMU data. We outline the process by which we generate the synthetic PMU data in Algorithm 1."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Structure of the Event Dataset Based on the Generated PMU Data",
39
+ "text": "###table_1### The first step in identifying a system event is to extract a set of delineating features that are likely to contain information regarding the event type (henceforth referred to as event class). Using the fact that temporal effects in a power system are driven by the interacting dynamics of system components, we propose to use mode decomposition as the framework with which to extract features. More specifically, we assume that each PMU data stream after an event consists of a superposition of a small number of dominant dynamic modes. Thus, the features will be the frequency and damping ratio of these modes, as well as the residual coefficients indicating the quantity of each mode present in each data stream. We refer readers to our previous work [NT_TPS] for further details.\nWe assume that after an event consists of a superposition of common damped sinusoidal modes as\nwhere for any given channel , represents the noise in the PMU measurement and is the mode associated with the event. We represent each mode as where and and are the damping factor and angular frequency of the mode, respectively. Furthermore, residue corresponding to each mode and PMU measurement is defined by its magnitude and angle .\nNote that, given the fact that typically modes include only complex\nconjugate pairs and no real modes (i.e., complex conjugate\npairs, yielding modes in total), we only keep one mode from each pair. Thus, we keep only modes in the feature vector for each event. Moreover, for any given channel, , since only a small portion of the PMUs () capture the dynamic response of the system after an event, we only keep the residues of a set of PMUs with the largest magnitudes in the vector of features of that channel, denoted as . 
Note that the PMUs are not necessarily the same PMUs for different events (see, [NT_TPS] for further details).\nUsing channel measurements obtained from multiple PMUs, we define a row vector of features, , as follows:\nwhich consists of angular frequencies, damping factors and the corresponding magnitude and angle of the residues for each of the PMUs (with the largest residue magnitudes) and the modes."
40
+ },
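The feature-vector construction described in this section (retained modes plus the residues of the PMUs with the largest magnitudes) can be sketched as follows. This is an illustrative reconstruction, not the paper's released code; `mode_features`, its argument names, and the ordering of entries are assumptions.

```python
import numpy as np

def mode_features(modes, residues, n_top):
    """Build a per-channel feature row from estimated dynamic modes.

    modes:    list of (sigma, omega) pairs, one per retained complex-conjugate
              mode pair (damping factor and angular frequency).
    residues: complex array of shape (n_pmus, n_modes); only the n_top PMUs
              with the largest residue magnitudes are kept, as in the text.
    """
    residues = np.asarray(residues)
    # Rank PMUs by their largest residue magnitude across modes.
    strength = np.abs(residues).max(axis=1)
    top = np.argsort(strength)[::-1][:n_top]
    feats = []
    for k, (sigma, omega) in enumerate(modes):
        feats.extend([omega, sigma])                    # mode parameters
        for p in top:                                   # residue magnitude/angle
            feats.extend([np.abs(residues[p, k]), np.angle(residues[p, k])])
    return np.array(feats)
```

Concatenating such rows over the selected channels yields the event feature vector used in the following sections.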
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A Structure of the feature vector for each event",
45
+ "text": "Taking into account the modal analysis results obtained from modal analysis from various PMU channels, each event can be described as a set\nof features obtained from modal analysis of PMU data and a label which\ndescribes the class of an event and is number of event classes (i.e, considering line trip, and generation\nloss, in this study is ).\nWe define the matrix of event features and the labels matrix where is the total number of samples. Further, we define which can be alternatively expressed as and is defined in Table. I ###reference_###. Note that for the notation simplicity we keep the subscript indices, but they can represent different subset of samples which we explicitly clarify this in Table. I ###reference_###."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "IV Proposed Framework to Investigate the Impact of the Unlabeled Data",
51
+ "text": "The overview of the proposed framework to evaluate the performance of various semi-supervised algorithms under various labeled vs. unlabeled samples ratio scenarios is shown in Fig. 3 ###reference_###.\n###figure_1### To investigate the impact of incorporating unlabeled samples on event identification performance, we utilize the k-fold cross-validation technique. First, we shuffle samples and partition the data into equally sized folds. We use folds as a training set, denoted as with samples, and reserve the remaining fold as a test set, denoted as with samples. We repeat this process times, with each fold serving as the validation set once.\nTo further investigate how the distribution of labeled and unlabeled samples affects the performance of various semi-supervised algorithms, we shuffle the samples in the training set for times and split it into a subset of labeled samples , and a subset of unlabeled samples by ignoring their ground truth labels, denoted as where .\nTo illustrate the impact of the number of included unlabeled samples, denoted as , we define a subset of unlabeled training samples as .\nNote that, semi-supervised learning is not guaranteed to improve supervised models and depends on certain underlying assumptions for it to work properly. These assumptions include the smoothness and cluster assumptions [ref], which state that high-density regions are likely to have similar outputs and that points in the same cluster are likely to belong to the same class, respectively. To account for these assumptions, we sort the unlabeled samples in the based on their proximity to the nearest labeled sample, resulting in a sorted subset of unlabeled samples denoted as .\nConcatenating the labeled training samples, , in the fold, and split, with a subset of sorted unlabeled training samples, , we obtain a training data set with mixed labeled and unlabeled samples, denoted as . 
We can alternatively represent the labeled and unlabeled training samples in the matrix format as described below.\nWe define the matrix of event features with labeled samples as and the corresponding matrix of labels as where . For the subset of unlabeled samples in the shuffle of the training set, we define event features of unlabeled samples as , . For the sake of notation coherency as well as implementation considerations (e.g., learning the classification models), we assign an integer value of to the unlabeled samples, i.e., . Hence, the mixed labeled and unlabeled training set can be expressed as\nwhere\nSimilarly, the test in the fold can be represented in the matrix format as \nwhere and , and .\nFor future references, the columns of the matrix of event features for any subset of samples (i.e., ) are denoted as .222The symbol represents any variation of subscripts or superscripts of a variable that has been used throughout the paper."
52
+ },
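The labeled/unlabeled split and the proximity-based sorting of unlabeled samples described in this section can be sketched as below. This is a minimal illustration, assuming Euclidean distance as stated in the text; the function name and signature are hypothetical.

```python
import numpy as np

def split_and_sort(X, n_labeled, rng):
    """Split a training fold into labeled/unlabeled index sets and sort the
    unlabeled samples by distance to their nearest labeled sample (ascending),
    reflecting the smoothness/cluster assumptions discussed in the text."""
    idx = rng.permutation(len(X))                  # shuffle the fold
    lab, unl = idx[:n_labeled], idx[n_labeled:]
    # Distance from every unlabeled point to every labeled point.
    d = np.linalg.norm(X[unl][:, None, :] - X[lab][None, :, :], axis=2)
    order = np.argsort(d.min(axis=1))              # closest-to-a-label first
    return lab, unl[order]

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
lab, unl_sorted = split_and_sort(X, n_labeled=5, rng=rng)
```

Taking the first `N_u` entries of `unl_sorted` then yields the subset of unlabeled samples added to the labeled training set.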
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Learning the classification models and model validation process",
57
+ "text": "In order to investigate the performance of various (more-established, classical?) semi-supervised machine learning algorithms, we use the obtained training set with mixed labeled an unlabeled samples, to learn a classification model and evaluate its efficacy in classifying the labeled samples in the test set, .\nMore specifically, to evaluate the performance of a given classifier, denoted as , in the fold, and the split of the training set and using the unlabeled samples, we use the matrix of event features and the corresponding matrix of labels in the , we learn a classifier and identify the label of the events in the , denoted as .\nTo quantitatively evaluate and compare the performance of different semi-supervised learning algorithms across various scenarios, we employ the area under curve (AUC) of the receiver operator characteristic (ROC) as a metric denoted as , and . This metric enables the characterization of the accuracy of classification for different discrimination thresholds [zebari2020comprehensiveFS]. The discrimination threshold determines the probability at which the positive class is preferred over the negative class. The ROC curve plots the relationship between the true positive rate and the false positive rate for varying threshold settings. The ROC AUC value, which ranges from 0 to 1, provides an estimate of the classifier\u2019s ability to classify events. A value of AUC closer to 1 indicates a better classification performance."
58
+ },
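The ROC AUC metric used above can be computed without sweeping thresholds explicitly, via the rank-sum identity AUC = P(score of a random positive > score of a random negative), with ties counted as 1/2. The sketch below illustrates this for the binary case; a practical pipeline would typically use a library routine instead.

```python
import numpy as np

def roc_auc(y_true, scores):
    """Binary ROC AUC via the Mann-Whitney rank-sum identity.
    Pairwise comparison; fine for modest sample counts."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

A value of 1.0 means every positive is scored above every negative; 0.5 corresponds to a chance-level classifier.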
59
+ {
60
+ "section_id": "5.1",
61
+ "parent_section_id": "5",
62
+ "section_name": "Dimensionality Reduction \u2013 filter method",
63
+ "text": "Although mode decomposition is meant to focus on only the physically meaningful features of the dataset, there are still simply too many of them (typically 100s in our dataset) to consider in one model. At the same time, considering the problem of limited labeled historical events, using too many features at once will lead to overfitting, which in turn causes degraded performance. Hence, we use filter methods to avoid overfitting due to the unnecessary large number of features while ensuring that events can be distinguished by the same subset of features,\nSince filter methods, in contrast with wrapper and embedded methods, are independent from classification models [zebari2020comprehensiveFS], they are computationally inexpensive and are more efficient for real time applications.\nWe will rely on a well-known approach in machine learning, bootstrapping to specifically address the problem of limited number of labeled data. Bootstrapping is a technique of sampling with replacement to create multiple datasets from the original dataset, thereby selecting the most informative features with some degree of statistical confidence.333Since feature selection is not the focus of this paper, we will not provide a full explanation. We point the reader to our previous work [NT_TPS] for a more detailed discussion.\nIn order to robustly find a subset features, we compute the percentile of the correlation measure (i.e., mutual information [NT_TPS]) of each feature with the vector of event labels over bootstrapped datasets and select features, i.e., , where . Using the selected features, we obtain a reduced order event feature vectors, , and ."
64
+ },
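The bootstrapped filter selection above can be sketched as follows. For brevity this sketch scores features by absolute Pearson correlation with the labels rather than the mutual information used in the paper; the function name, percentile choice, and shapes are illustrative assumptions.

```python
import numpy as np

def bootstrap_select(X, y, n_boot, pct, n_keep, rng):
    """Score each feature by the pct-th percentile (over n_boot bootstrap
    resamples) of its absolute correlation with the labels, then keep the
    n_keep highest-scoring features."""
    n, d = X.shape
    scores = np.empty((n_boot, d))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample with replacement
        Xb, yb = X[idx], y[idx]
        yc = yb - yb.mean()
        Xc = Xb - Xb.mean(axis=0)
        denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
        scores[b] = np.abs(Xc.T @ yc) / denom
    robust = np.percentile(scores, pct, axis=0)   # low percentile = conservative
    return np.argsort(robust)[::-1][:n_keep]
```

Using a low percentile of the per-bootstrap scores keeps only features whose association with the labels is consistently strong across resamples.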
65
+ {
66
+ "section_id": "5.2",
67
+ "parent_section_id": "5",
68
+ "section_name": "Dimensionality Reduction \u2013 Principal compoment analysis",
69
+ "text": "To further investigate the feature space and its low-dimensional representation, we use a well-established dimensionality reduction technique, i.e., Principal component analysis (PCA) which has been widely used in the literature as well as for industry applications, when dealing with a dataset with high-dimensional feature space. To ensure the interpertability of the data, preserving the most of the information while reducing the data dimensions, PCA leverages techniques from linear algebra and statistics to linearly transform the data into a low-dimensional subspace with a new coordinate system where the data can be described based on the direction of maximum variances.\nGiven the original mixed training set with mixed labeled and unlabeled samples, where , we project the matrix of event features onto the low-dimensional subspace of the first principal components, i.e., , , thereby obtaining a reduced order matrix of event features, , and . Similarly, the low-dimensional representation of test samples, denoted as is obtained by projecting on the new coordinate system of principal components.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7###"
70
+ },
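The projection step described above can be sketched with an SVD-based PCA: the principal directions are learned from the (mixed labeled/unlabeled) training features, and the same centering and projection are then applied to the test features. This is a minimal illustration; the function name and shapes are assumptions.

```python
import numpy as np

def pca_fit_transform(X_train, X_test, n_components):
    """Project onto the first n_components principal directions learned
    from the training features, applying the same transform to the test set."""
    mu = X_train.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    W = Vt[:n_components].T                      # (d, n_components)
    return (X_train - mu) @ W, (X_test - mu) @ W
```

Fitting on the mixed training set only (never on test data) keeps the evaluation free of information leakage.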
71
+ {
72
+ "section_id": "5.3",
73
+ "parent_section_id": "5",
74
+ "section_name": "Model Learning and Validation \u2013 Semi-Supervised Setting",
75
+ "text": "In general, semi-supervised approaches utilize both labeled and unlabeled samples, but they are different in the way they incorporate the information from unlabeled samples in the learning process. In the remainder of this section, we provide further details for the two classical semi-supervised approaches that are used int this paper, i.e., 1) generative self-training methods considering two well-known classical classification algorithms (SVM vs. GB) as base classifiers, and 2) Graph-based methods including label propagation (LP) vs. label spreading (LS)."
76
+ },
77
+ {
78
+ "section_id": "5.3.1",
79
+ "parent_section_id": "5.3",
80
+ "section_name": "V-C1 Generative model \u2013 Self-training with SVM vs. GB base classifier",
81
+ "text": "For any given base classifier, we learn a model from the labeled samples of the which is obtained from a subset of samples including labeled and unlabeled training samples after the split. In this paper we merely focus on the SVM and GB classification algorithms. Then using the learned model, we predict the labels for each unlabeled point, i.e., psuedo-labeled points. Finally we use the pseudo-labeled sample from the previous step and learn a new classifier and evaluate the performance of the learned model on the never-seen-before test examples, , given the fact that now all the data in the are labeled."
82
+ },
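The self-training scheme above (fit on labeled data, pseudo-label the unlabeled points, refit on the union) can be sketched as follows. For a self-contained example, a tiny nearest-centroid classifier stands in for the SVM/GB base classifiers used in the paper, and a practical version would also threshold pseudo-labels by confidence.

```python
import numpy as np

class NearestCentroid:
    """Tiny stand-in base classifier (the paper uses SVM or GB)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def self_train(X_lab, y_lab, X_unl, base=NearestCentroid):
    """One round of self-training: fit on labeled data, pseudo-label the
    unlabeled samples, then refit on the union."""
    clf = base().fit(X_lab, y_lab)
    pseudo = clf.predict(X_unl)                   # pseudo-labels
    return base().fit(np.vstack([X_lab, X_unl]),
                      np.concatenate([y_lab, pseudo]))
```

When the cluster assumption holds, the pseudo-labels refine the decision boundary beyond what the few labeled points alone support.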
83
+ {
84
+ "section_id": "5.3.2",
85
+ "parent_section_id": "5.3",
86
+ "section_name": "V-C2 Graph-based methods \u2013 label propagation (LP) vs. label spreading (LS)",
87
+ "text": "Consider a graph which is constructed over the mixed labeled and unlabeled training set in the split of the original training dataset into the labeled and unlabeled samples, i.e., . Each sample, , in the can be represented as a graph node, , i.e., , and . Furthermore, we define a notion of weighted edge, i.e., where its row and column (corresponding to any pair of samples ), denoted as , can be obtained as , and represents the Euclidean distance. As a result, the closer the samples are, they will have larger weights. 444Other choices of the distance metric are possible and can be selected based upon the distribution of the samples. Then the intuition is that similar samples (i.e., with closer distance) have similar label, and labels propagate from labeled samples to unlabeled ones through weighted edges where the weights carry the notion of similarity. In other words, the problem is that to estimate from , and .\nGiven a probabilistic transition matrix where each element in the row, and column, denoted as can be obtained based on (5 ###reference_###).\nwhich represents the probability of going from node to . Further, we define the label matrix where the first rows correspond to the labeled samples and the remaining rows correspond to the unlabeled samples. A pseudo code for the label propagation algorithm based on [zhu2002LP] is shown in .\nThe performance of the proposed event identification framework based on\nthe label propagation algorithm as well as its relaxed version, i.e., label spreading have been investigated in this paper. Due to the space limitation, we are not providing details regarding the label spreading algorithm."
88
+ },
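The graph construction and propagation described above can be sketched in a few lines, in the spirit of Zhu and Ghahramani [zhu2002LP]: RBF edge weights from pairwise Euclidean distances, a row-normalized transition matrix, and iteration with the labeled rows clamped. This is a minimal dense sketch; `sigma` and `n_iter` are illustrative choices.

```python
import numpy as np

def label_propagation(X, y, labeled_mask, sigma=1.0, n_iter=200):
    """Propagate labels from labeled to unlabeled nodes over an RBF graph."""
    n = len(X)
    classes = np.unique(y[labeled_mask])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2 * sigma ** 2))            # edge weights (similarity)
    T = W / W.sum(axis=1, keepdims=True)          # row-stochastic transitions
    Y = np.zeros((n, len(classes)))
    Y[labeled_mask, np.searchsorted(classes, y[labeled_mask])] = 1.0
    clamp = Y[labeled_mask].copy()
    for _ in range(n_iter):
        Y = T @ Y                                 # labels flow along edges
        Y[labeled_mask] = clamp                   # re-clamp known labels
    return classes[Y.argmax(axis=1)]
```

With well-separated clusters and one labeled seed per cluster, the iteration fills in the remaining labels cluster by cluster.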
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "VI Simulation Results",
93
+ "text": ""
94
+ },
95
+ {
96
+ "section_id": "7",
97
+ "parent_section_id": null,
98
+ "section_name": "VII conclusion",
99
+ "text": ""
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.28.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S3.T1.29.2\" style=\"font-size:90%;\"> Indices definition.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.26\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4\">\n represents the sample in the data set and and is the total number of samples</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.9.9.5.4\">\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8.4.3.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.8.8.4.3.3.3\">represents a subset of samples in the training set of the fold, and represents the sample,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9.5.4.4\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.9.9.5.4.4.1\">and is the number of training samples.</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.14.14.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.14.14.5.4\">\n<tr class=\"ltx_tr\" id=\"S3.T1.13.13.4.3.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.13.13.4.3.3.3\">represents a subset of samples in the test set of the fold, represents the sample,</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S3.T1.14.14.5.4.4\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.14.14.5.4.4.1\">and is the number of test samples</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.20.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.20.20.6\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.20.20.6.5\">\n<tr class=\"ltx_tr\" id=\"S3.T1.19.19.5.4.4\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.19.19.5.4.4.4\">\n represents the labeled sample in the split of the training dataset, and the fold into labeled</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.20.20.6.5.5\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.20.20.6.5.5.1\">and unlabeled part, and is the total number of labeled samples.</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.26.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.26.26.6\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.26.26.6.5\">\n<tr class=\"ltx_tr\" id=\"S3.T1.25.25.5.4.4\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.25.25.5.4.4.4\">\n represents the unlabeled sample in the split of the training dataset, and the fold into labeled</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.26.26.6.5.5\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S3.T1.26.26.6.5.5.1\">and unlabeled part, and is the total number of unlabeled samples.</td>\n</tr>\n</table>\n</td>\n</tr>\n</table>\n</figure>",
106
+ "capture": "TABLE I: Indices definition."
107
+ }
108
+ },
109
+ "image_paths": {
110
+ "1": {
111
+ "figure_path": "2309.10095v2_figure_1.png",
112
+ "caption": "Figure 1: Overview of the proposed framework",
113
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/SS_process_v1_d.pdf"
114
+ },
115
+ "2": {
116
+ "figure_path": "2309.10095v2_figure_2.png",
117
+ "caption": "Figure 2: Overview of the proposed framework",
118
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/SS_process_v1_dp_mi.pdf"
119
+ },
120
+ "3": {
121
+ "figure_path": "2309.10095v2_figure_3.png",
122
+ "caption": "Figure 3: Overview of the proposed framework",
123
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/SS_process_v1_dp_pca.pdf"
124
+ },
125
+ "4": {
126
+ "figure_path": "2309.10095v2_figure_4.png",
127
+ "caption": "Figure 4: Support vector machines - without using unlabeled samplesr",
128
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/v32_N_train%20=%20745N_L=%2010SVM.png"
129
+ },
130
+ "5": {
131
+ "figure_path": "2309.10095v2_figure_5.png",
132
+ "caption": "Figure 5: Gradient boosting - without using unlabeled samples",
133
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/v32_N_train%20=%20745N_L=%2010GB.png"
134
+ },
135
+ "6": {
136
+ "figure_path": "2309.10095v2_figure_6.png",
137
+ "caption": "Figure 6: Semi-supervised algorithm:graph-based (label propagation vs. label spreading)",
138
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/v32_N_train%20=%20745N_L=%2010LP-LS.png"
139
+ },
140
+ "7": {
141
+ "figure_path": "2309.10095v2_figure_7.png",
142
+ "caption": "Figure 7: Semi-supervised algorithm:self-training (svm vs. gb as the base classifie)",
143
+ "url": "http://arxiv.org/html/2309.10095v2/Figures/v32_N_train%20=%20745N_L=%2010Self-%20Training%20.png"
144
+ }
145
+ },
146
+ "validation": true,
147
+ "references": [],
148
+ "url": "http://arxiv.org/html/2309.10095v2"
149
+ }
20240722/2309.11966v2.json ADDED
@@ -0,0 +1,157 @@
1
+ {
2
+ "title": "NeuralLabeling: A versatile toolset for labeling vision datasets using Neural Radiance Fields",
3
+ "abstract": "We present NeuralLabeling, a labeling approach and toolset for annotating 3D scenes using either bounding boxes or meshes and generating segmentation masks, affordance maps, 2D bounding boxes, 3D bounding boxes, 6DOF object poses, depth maps, and object meshes.\nNeuralLabeling uses Neural Radiance Fields (NeRF) as a renderer, allowing labeling to be performed using 3D spatial tools while incorporating geometric clues such as occlusions, relying only on images captured from multiple viewpoints as input.\nTo demonstrate the applicability of NeuralLabeling to a practical problem in robotics, we added ground truth depth maps to 30000 frames of transparent object RGB and noisy depth maps of glasses placed in a dishwasher captured using an RGBD sensor, yielding the Dishwasher30k dataset.\nWe show that training a simple deep neural network with supervision using the annotated depth maps yields a higher reconstruction performance than training with the previously applied weakly supervised approach.\nWe also show how instance segmentation and depth completion datasets generated using NeuralLabeling can be incorporated into a robot application for grasping transparent objects placed in a dishwasher with an accuracy of 83.3%, compared to 16.3% without depth completion.\nSupplementary URI: https://florise.github.io/neural_labeling_web/.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "Deep learning requires large datasets, which are time-intensive and expensive to create.\nThere are various approaches to avoid this, such as using foundation models or weakly supervised training methods like cyclic adversarial learning [Zhu_2017_ICCV].\nHowever, despite being trained on massive datasets, foundation models such as Segment Anything [kirillov2023] and CLIP [radford2021a] still rely on inference data to be similar to the training data, which is not always the case.\nModels trained using weakly supervised learning might outperform state-of-the-art models when the SOTA models are not trained on task-specific data, but their performance is lower than SOTA models evaluated on evaluation data more similar to their training data.\nThus there is a need for tools that can support large dataset creation in a time-efficient low-cost manner.\nWe hope to contribute to solving this problem by introducing a labeling tool for computer vision datasets that uses the power of Neural Radiance Fields (NeRF) [mildenhall2020] for photorealistic rendering and geometric understanding.\nBecause 3D Vision can take advantage of 3D consistency, labels on a single scene can be applied to images from multiple viewpoints.\nThis property works particularly well with photorealistic renderings such as NeRF, where richly annotated data with many views is available with only simple manual 3D labeling.\nThis not only saves significant labeling time but is also useful in automatically generating a consistent dataset.\nSpecialized labeling tools are essential for labeling vision datasets, and both academic researchers and commercial entities have released such tools.\nMost existing labeling tools (such as Segment Anything Labeling Tool [salt] and Roboflow [roboflow]) use single images and therefore require significant human effort to annotate long sequences, use sequential data but have no geometric understanding so they cannot be used for annotating 6DOF poses [cheng2022xmem], 
or require depth data to obtain geometric information [lai2012, zimmer2019, singh2021].\nOur toolkit, NeuralLabeling, operates on sequences of images and can thus be used to more rapidly label large datasets.\nBy using manual scaling and NeRF depth reconstruction [mildenhall2020], NeuralLabeling does not rely on input depth data except when used for generating datasets for depth completion tasks.\nDue to improvements in the training time of NeRFs [muller2022], NeuralLabeling does not rely on slow dense mesh reconstruction and instead only requires camera pose estimation, which takes around an hour per scene of approximately 500 images, and could be further reduced by selecting key frames and interpolating camera poses between them [takeda2023] or avoided using NeRF recording applications such as NeRFCapture [NeRFCapture].\n###figure_1### This paper has two main contributions:\n(1) We present NeuralLabeling, a novel labeling system that is deeply integrated into a NeRF-based photorealistic rendering system (Section III ###reference_###).\n(2) We construct the Dishwasher30k dataset, which can be used for NeRF-based transparent object depth completion research, and release it on our web page.\nFurthermore, we perform the following experiments to validate our approach:\n(1) We evaluate the accuracy of NeuralLabeling for generating transparent object datasets for depth completion (Section IV-A ###reference_###).\n(2) We evaluate the accuracy of NeuralLabeling for generating datasets for object segmentation, taking into account occlusions (Section IV-B ###reference_###).\n(3) We demonstrate how training a transparent object depth completion network using a dataset generated by NeuralLabeling leads to improved performance compared to unsupervised datasets (Section IV-C ###reference_###).\n(4) We show that networks trained using datasets generated by NeuralLabeling can be integrated into a robot manipulation system (Section IV-D ###reference_###).\n###figure_2###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II BACKGROUND",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Vision data labeling tools",
21
+ "text": "###figure_3### NeuralLabeling was inspired by various recent tools for creating labeled datasets but qualitatively improves upon each of them.\nProgressLabeller [chen2022d] is a state-of-the-art labeling tool that uses mesh alignment and posed camera images.\nRapidPoseLabels [singh2021] is an RGBD-based labeling tool, allowing for labeling objects with pose annotations.\nBecause it uses RGBD data as input it cannot be used if depth data is unavailable or unreliable.\n3D-DAT [suchi2023] is a mesh-based labeling tool implemented as a Blender plugin.\nIt uses NeRF for automated alignment of objects with NeRF geometry, but it requires meshes to be provided as input.\nIt also does not support NeRF-to-mesh occlusions.\nNerfing It [blomqvist2023b] is a NeRF-based labeling tool, but it does not support mesh-based labeling.\nIt also uses a vanilla NeRF implementation that is not optimized for speed, and thus requires long training times to prepare scenes for labeling.\nTable I ###reference_### compares NeuralLabeling with various state-of-the-art labeling tools.\nOur work resembles the pipeline used for preparing the HANDAL dataset [guo2023].\nTheir work uses a bi-methodical 3D-bounding-box-based and mesh-based labeling approach, similar to what we present in this paper.\nAn advantage of HANDAL is that it also supports labeling dynamic scenes.\nAn advantage of our tool is that it can be used to generate depth maps for transparent objects.\nNeuralLabeling can generate segmentation masks that can be used for training neural networks to perform object segmentation, whereas the HANDAL pipeline relies on segmentation masks generated using a pre-trained tracker [cheng2022xmem].\nTheir work uses automatic scaling based on depth input, whereas our work relies on manual scaling using a scaling tool.\nInspired by their work, we added an affordance labeling tool to NeuralLabeling.\nNeuralLabeling enables the labeling of existing scenes using NeRF, however in the parallel 
work PEGASUS [meyer2024pegasus] we allow generating datasets by inserting objects into an existing scene and rendering them using 3D Gaussian Splatting.\nBy inserting custom objects into a scene, a wider variety of object configurations can be generated, thus leading to more variety in the generated datasets.\nHowever, the PEGASUS renderer is unaware of scene-specific lighting, whereas for NeuralLabeling the objects inherit natural scene lighting."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Transparent Object Depth Completion",
27
+ "text": "NeuralLabeling started as a tool to label transparent objects with accurate depth estimates to enable robots to estimate depth and shape of glasses and cups, without relying on expensive photorealistic simulations.\nDeep learning approaches have greatly contributed to solving the problem of transparent object depth completion [sajjan2020clear], however most existing datasets consist of glasses placed in simple environments such as on tables and floors [sajjan2020clear, chen2022clearpose, zhu2021a].\nState-of-the-art pretrained models underperform when applied to more complex environments such as a dishwasher [erich2023fakingdepth].\nWeakly supervised training methods can outperform state-of-the-art supervised models, but still underperform compared to the performance of the state-of-the-art models on data that is more similar to their training data.\nWe show that NeuralLabeling can be used to easily create supervised datasets for a complex environment such as a dishwasher, and that a network trained on such a supervised dataset can outperform a network trained on a weakly supervised dataset.\nUsing NeuralLabeling, it took roughly one workweek to construct this dataset, which contains NeRFs, mesh models, alignment configurations of the meshes with the NeRF, generated depth, and generated segmentation masks.\nWe release this dataset, which we name Dishwasher30k."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III METHODOLOGY",
33
+ "text": "We support labeling using either 3D bounding-boxes or meshes (Fig. 1 ###reference_###).\n3D-bounding-box-based labeling is useful when scenes are uncluttered and/or high quality object meshes for applying labels to the scene are not available.\nMesh-based labeling is useful when scenes are cluttered or if we already have object meshes available.\nWe support mesh extraction using bounding-boxes, which enables a novel pipeline where we obtain mesh models for objects placed in an uncluttered manner, and then reuse these models in a cluttered scene.\nFig. 2 ###reference_### gives a more detailed overview of the combined labeling pipeline.\nWe aim to generate semantic segmentation masks, 2D and 3D bounding boxes, 6DOF object poses, depth maps and object meshes for each frame in a RGB image sequence (Fig. 3 ###reference_###).\nSegmentation masks are further classified into binary, instance and class segmentation masks.\n2D bounding boxes are defined by the lower left corner and upper right corner.\n3D bounding boxes are defined by the lower left front corner, upper right back corner and an orientation.\nWhen using the 3D-bounding-box-based labeling workflow, we can either directly use the labeled bounding-boxes or we can optimize the bounding-boxes to tightly fit their geometry.\n6DOF object poses are defined by translation and rotation of objects relative to the camera pose.\nDepth maps are defined by depth elements of rays cast perpendicular from the camera plane to the nearest surface, or if no nearest surface exists for a depth element.\nObject meshes are defined using the common Wavefront OBJ format [zotero-3323].\nIn the downstream tasks presented in this paper we use object meshes, semantic segmentation masks and depth maps.\nThe other output types were added to increase the flexibility of the toolset.\nTo annotate an object with affordances, sub-bounding-boxes can be added, which are stored as a JSON file alongside the exported geometry and 
automatically loaded when inserting exported meshes in new scenes.\nPer-object affordance maps can be exported in a similar way as segmentation masks."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Uncluttered scene pipeline",
39
+ "text": "In this pipeline an uncluttered scene is annotated using bounding-boxes.\nRecord RGB frames of a scene containing objects to label: , where is number of frames, is width and is height.\nObtain camera extrinsics and intrinsics for each frame using Structure-from-Motion algorithms such as COLMAP [schoenberger2016sfm, schoenberger2016mvs] or hloc [sarlin2019coarse], where is camera rotation matrix and is camera translation vector.\nDetermine scale by comparing keypoints or using AR marker [meyer2023], and rescale positions .\nRender NeRF using and .\nLabel objects using bounding-boxes, by inserting boxes, translating and rotating them to surround target objects.\nExport geometry contained in bounding-boxes by querying density of NeRF in bounding-box areas, apply density filter and run marching cubes [lorensen1987]."
40
+ },
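The geometry-export step of the pipeline above (query the NeRF density inside a user-placed bounding box, filter by a density threshold) can be sketched as follows. A real pipeline would then run marching cubes [lorensen1987] on the resulting volume; here `density_fn` is a hypothetical stand-in for the NeRF density query, and the sphere is dummy data.

```python
import numpy as np

def extract_occupancy(density_fn, bbox_min, bbox_max, res=32, thresh=5.0):
    """Sample a density field on a grid inside a bounding box and threshold
    it into a boolean occupancy volume (the marching-cubes input)."""
    axes = [np.linspace(lo, hi, res) for lo, hi in zip(bbox_min, bbox_max)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    density = density_fn(pts).reshape(res, res, res)
    return density > thresh                       # occupancy grid

# Dummy density: a solid sphere of radius 0.5 at the origin.
sphere = lambda p: np.where(np.linalg.norm(p, axis=1) < 0.5, 10.0, 0.0)
occ = extract_occupancy(sphere, (-1, -1, -1), (1, 1, 1))
```

Running marching cubes on `occ` (or directly on the density with `thresh` as the iso-level) then yields the exported mesh.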
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-B Cluttered scene pipeline",
45
+ "text": "In this pipeline, a cluttered scene is labeled using polygonal meshes.\nIf we have access to the physical objects in the scene, meshes can be obtained through the uncluttered scene pipeline.\nThis pipeline repeats steps 1-4 from the uncluttered scene pipeline but replaces steps 5 and 6 with the following:\nLabel objects using mesh models, by inserting meshes, translating and rotating them to align with the NeRF rendering of the objects.\nExport semantic segmentation masks, 2D and 3D bounding boxes, 6DOF object poses and depth maps."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-C Implementation details",
51
+ "text": "Because our labeling functionality is specialized, we implemented NeuralLabeling as a fork of instant-ngp [muller2022] instead of merging our changes into the main project.\ninstant-ngp allows for parallel training and rendering, and with our fork also for labeling.\nRendering of geometry extracted using marching cubes and rendering of (re)inserted meshes is implemented using OpenGL.\nIn the bounding-box-based pipeline, we support real-time geometry previews from NeRF.\nRendering of overlay effects such as 2D and 3D bounding boxes is implemented using ImGui (https://github.com/ocornut/imgui).\nManipulating objects (translation, rotation, scaling) is implemented using ImGuizmo (https://github.com/CedricGuillemet/ImGuizmo).\nWe implemented an accelerated algorithm for object alignment, using multi-threading and CUDA kernels.\nNeRF-to-mesh occlusions are handled by comparing the estimated depth of rays traced in the NeRF rendering with the depth of the rendered mesh fragments.\nWe integrate improved transparent object depth estimation via Dex-NeRF [ichnowski2021a].\nWe enable scripting of NeuralLabeling through Python bindings, which is useful for automated dataset generation."
52
+ },
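The NeRF-to-mesh occlusion handling described above (comparing NeRF ray depth against rendered mesh fragment depth) amounts to a per-pixel visibility test. A minimal sketch, with an assumed depth tolerance and our own naming conventions rather than the fork's actual implementation:

```python
import numpy as np

def mesh_visibility(nerf_depth, mesh_depth, eps=0.005):
    """A mesh fragment is visible only where it exists and is not
    behind the NeRF-estimated surface (tolerance eps, in meters)."""
    has_mesh = np.isfinite(mesh_depth) & (mesh_depth > 0)  # fragment rendered
    return has_mesh & (mesh_depth <= nerf_depth + eps)     # not occluded

nerf = np.array([[1.0, 1.0], [1.0, 1.0]])  # depth traced through the NeRF
mesh = np.array([[0.9, 1.2], [1.0, 0.0]])  # rendered mesh fragment depth
vis = mesh_visibility(nerf, mesh)          # 1.2 is occluded, 0.0 is no mesh
```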
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "IV EVALUATION",
57
+ "text": "Our toolkit can be used to easily and quickly label photorealistic scenes that would be hard to model manually, and to generate various useful outputs for downstream deep learning tasks.\nWe demonstrate this by (1) evaluating the base performance of depth generation using object annotations in Section IV-A ###reference_###, (2) evaluating segmentation performance when annotating opaque objects with a high degree of environment occlusion in Section IV-B ###reference_###, (3) annotating scenes containing transparent objects in a complex environment and training neural networks using generated data in Section IV-C ###reference_### and (4) evaluating the performance of a holistic robotic transparent object manipulation system using neural networks trained or fine-tuned on generated datasets in Section IV-D ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-A Ground truth depth label accuracy",
63
+ "text": "To evaluate the optimal performance of using NeuralLabeling, we recorded 30 samples of color and depth data of glasses placed in a scene.\nEach glass was then replaced with an opaque clone placed in the same position and the depth data was recaptured.\nTo place the opaque clone in the same position as the original glass, we took a picture of the original scene using a camera and rendered an overlay image.\nThis is a typical approach for creating real-world validation data for transparent object depth completion [sajjan2020clear].\nIn addition to capturing ground truth depth by manually aligning opaque clones, we also captured a NeRF scene recording.\nWe created meshes of the glasses by recording two environments with opaque clones of the glasses placed facing upwards and downwards (Fig. 4 ###reference_###).\nWe used opaque clones instead of the original glasses for creating meshes, as this produced higher-quality meshes because NeRF could not correctly estimate the inner surface of the original glasses.\nWe applied our cluttered scene pipeline to this dataset to generate experimental data by aligning the scanned meshes with the original glasses.\nTo evaluate the accuracy of the pipeline, we compared the generated depth of the labels applied to the transparent scene with the ground truth depth generated from aligning opaque clones.\nThis experiment resulted in a median error of 4mm and an MAE (mean absolute error) of 9mm at a mean working distance of 649mm (1.4% relative error), which is similar to the stated depth estimation error of the depth sensor (2% at 2 meters working distance).\nExporting depth using NeuralLabeling without mesh annotations, but using Dex-NeRF-like [ichnowski2021a] transparent object depth estimation, resulted in a median error of 5mm and an MAE of 16mm (2.4% relative error).\nWe can conclude that our method for labeling transparent object depth is at least as accurate as the applied depth sensor is on opaque objects, and more accurate than Dex-NeRF.\n###figure_4###"
64
+ },
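The median error and MAE figures above can be computed by comparing the labeled depth map against the opaque-clone ground truth over valid pixels. A small sketch of such an evaluation helper (our own code, not the paper's evaluation script):

```python
import numpy as np

def depth_errors(pred, gt):
    """Median absolute error, MAE and mean relative error over pixels
    with valid (finite, non-zero) ground-truth depth."""
    valid = np.isfinite(gt) & (gt > 0) & np.isfinite(pred)
    err = np.abs(pred[valid] - gt[valid])
    return np.median(err), err.mean(), (err / gt[valid]).mean()

gt = np.array([0.5, 0.6, 0.0])     # 0 marks a missing depth element
pred = np.array([0.51, 0.58, 0.7])
median, mae, rel = depth_errors(pred, gt)
```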
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-B NeRF occlusion for generating segmentation masks",
69
+ "text": "One of the unique functions of NeuralLabeling is to use NeRF occlusions to generate accurate segmentation masks.\nWe performed a small experiment to measure its effectiveness.\nWe labeled a sequence of three heavily occluded frames of the basket scene (scene B of Fig. 3 ###reference_###) with ground truth segmentation masks, then calculated F1-score, Intersection-over-Union (IoU), accuracy, precision and recall.\nWe compare our method with Segment Anything (SAM) [kirillov2023], prompted with 2D bounding boxes, and with XMem [cheng2022xmem], using the first frame as input.\nQuantitative and qualitative results can be found in Table II ###reference_### and in the supplemental materials respectively.\nOur method outperforms SAM in almost every metric, while performing similarly to XMem.\nSome qualitative benefits of our approach are that NeuralLabeling does not require all objects to be visible in the first frame of a sequence, as XMem does, and does not need per-frame 2D bounding boxes, as SAM does.\nFor generating NeRF occlusions we rely on extracting an accurate depth estimate from NeRF, which is difficult for objects with highlights and reflections.\nOur method, for example, struggles to generate accurate segmentation masks of the towel from scene B, which is wrapped in plastic.\nCompared to the other methods, the segmentation masks generated by NeuralLabeling are more conservative, which decreases the recall score."
70
+ },
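The metrics reported in Table II can be computed from a pair of binary masks as below (our own helper, assuming boolean arrays; precision, recall, F1 and IoU follow the standard confusion-matrix definitions):

```python
import numpy as np

def mask_metrics(pred, gt):
    """Precision, recall, F1-score and IoU for binary segmentation masks."""
    tp = np.sum(pred & gt)      # predicted foreground that is foreground
    fp = np.sum(pred & ~gt)     # predicted foreground that is background
    fn = np.sum(~pred & gt)     # missed foreground
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, iou

pred = np.array([[True, True], [False, False]])
gt = np.array([[True, False], [True, False]])
p, r, f1, iou = mask_metrics(pred, gt)
```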
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-C Training networks for depth completion",
75
+ "text": "###figure_5### In a previous study [erich2023fakingdepth] we evaluated the usage of unpaired training data with a cyclic adversarial training approach [Zhu_2017_ICCV] for transparent object depth completion.\nWe used the same dataset and network design as the previous study but added supervised ground truth depth maps and instance segmentation masks using NeuralLabeling.\nThe RGB images from the original dataset were used for determining camera poses and NeRF rendering."
76
+ },
77
+ {
78
+ "section_id": "4.3.1",
79
+ "parent_section_id": "4.3",
80
+ "section_name": "IV-C1 Dataset preparation",
81
+ "text": "For transparent objects, a marching cubes threshold can be used to tune the mesh geometry similarly to Dex-NeRF [ichnowski2021a]; however, the observed mesh quality was still lower than when using opaque clones.\nWe merged the upwards- and downwards-facing meshes using MeshLab [journals/ercim/CignoniCR08] to produce complete meshes of the glasses.\nWe want to show that good results can be obtained using low-cost methods, so we avoided more advanced techniques such as expensive camera setups [erich2023neuralscanning] or commercial 3D scanners.\nThe meshes are manually aligned with the NeRF rendering.\nWe generated camera pose estimates for 59 out of 60 scenes; camera pose estimation failed on one scene.\nWe calibrated the camera pose scales for each scene by measuring the distance between two points where the real-world distance was known, taking about a minute per scene.\nIt then took two working days to label the 59 scenes with the meshes.\nAn automated process generated the depth maps for the 59 scenes, which took around three minutes per scene.\nNeRF-to-mesh occlusions could not reliably be generated due to the difficulty of estimating the depth elements of the inner surfaces of glasses using NeRF.\nInstead, we use sensor depth elements to occlude the generated depth elements.\nSensor depth elements for transparent objects are inaccurate due to missing elements (), background depth elements and noisy surface depth elements.\nBy using sensor depth elements for calculating occlusions, we can fill in missing depth elements and correct background depth elements to be on the object surface, but some noisy surface depth elements that were inaccurately estimated as being too close to the camera might remain.\nFig. 5 ###reference_### shows a sample from the dishwasher dataset, with the original depth recorded by the depth sensor, the generated depth estimate from mesh annotations, and finally the combined sensor and mesh depth that is used as ground truth for training our network.\nWe reuse the validation set of the original paper [erich2023fakingdepth] (), containing scenes in which glasses were manually aligned with opaque clones (using the same process described in Section IV-A ###reference_###).\nAll evaluation samples are patches with dimensions extracted from the center of the sensor frame with dimensions , where depth is clipped to the mm range.\nFor training we first randomly crop frames horizontally to dimensions and then resize to dimensions using nearest-neighbor interpolation.\nFor evaluation we center crop the frames horizontally before resizing.\n is the number of channels for the modality: for depth-only, for RGB only, for RGBD.\nFor each channel we map the values to the domain from the original domains for RGB and for depth."
82
+ },
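The sensor/mesh depth combination described in this subsection (missing sensor elements are filled in, background elements are corrected onto the object surface, while noisy near-surface elements may remain) can be sketched with a simple per-pixel rule. This is a minimal illustration under our own assumptions; the toolset's exact rule may differ.

```python
import numpy as np

def fuse_depth(sensor, mesh, mask):
    """Inside the object mask, replace missing (zero) or background
    (behind the mesh) sensor depth with mesh-rendered depth; sensor
    elements nearer than the mesh are kept, so near-camera noise remains."""
    fused = sensor.copy()
    replace = mask & ((sensor == 0) | (sensor > mesh))
    fused[replace] = mesh[replace]
    return fused

sensor = np.array([0.0, 0.9, 0.55, 0.7])  # missing, background, noisy-near, outside mask
mesh = np.array([0.6, 0.6, 0.6, 0.0])     # depth rendered from the aligned mesh
mask = np.array([True, True, True, False])
fused = fuse_depth(sensor, mesh, mask)
```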
83
+ {
84
+ "section_id": "4.3.2",
85
+ "parent_section_id": "4.3",
86
+ "section_name": "IV-C2 Networks and training",
87
+ "text": "Whereas in the previous study we used two generator networks and two discriminator networks, in this study we use only a single generator network for each evaluated modality.\nThe generator network in both the previous and the current study is a simple U-Net, based on Pix2pix [isola2017image].\nWe evaluated three modalities: RGBD2Depth, Depth2Depth and RGB2Depth.\nIn the previous study we evaluated Depth2Depth and RGBD2RGBD modalities (the method required the input and output types to be symmetric, so the target output was RGBD data of scenes containing opaque clones of the original glasses, created by spray painting them).\nIn both the original study and the current study we trained for iterations."
88
+ },
89
+ {
90
+ "section_id": "4.3.3",
91
+ "parent_section_id": "4.3",
92
+ "section_name": "IV-C3 Results",
93
+ "text": "###table_1### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### Table III ###reference_### and Table IV ###reference_### contain quantitative and qualitative results.\nCyclic adversarial measurements are sourced from our previous paper [erich2023fakingdepth].\nMetrics used are Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Relative error (Rel), and the proportion of depth elements with less than 5% error (1.05), less than 10% error (1.10) and less than 25% error (1.25).\nWe apply the metrics to the depth elements covered by transparent objects.\nRegardless of the modality used, we could obtain a significant improvement by using supervised data created using NeuralLabeling.\nThe weakly supervised approach required recording a separate dataset containing 60 scenes of opaque clones, which took about 4 hours to collect, but our current method does not require this.\nFor the current supervised approach, we had to record two scenes of opaque clones to extract meshes, and then label the original transparent objects in the dishwasher scenes.\nCreating opaque meshes took around 8 hours.\nAligning the opaque meshes with the transparent scenes took around 16 hours, with the time per scene varying based on the number of objects in the scene.\nTraining time for the original approach was around 4 times longer, as four networks had to be trained instead of a single network.\nNeuralLabeling requires COLMAP camera estimates, which took around an hour per scene, and we pretrained the NeRFs for around an hour per scene to allow for faster labeling.\nPredictions using the newly trained networks are slightly blurrier than those of the CycleGAN approach due to not using a discriminator, but because the depth maps are for robot consumption this was not considered an issue.\nWe conclude that the NeuralLabeling approach requires more time to prepare the dataset but allows for more accurate depth estimates and efficient training for downstream tasks."
94
+ },
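The 1.05 / 1.10 / 1.25 columns in the tables report the proportion of depth elements whose prediction-to-ground-truth ratio falls under a threshold. A small helper reproducing that metric (our own code, with illustrative inputs):

```python
import numpy as np

def delta_accuracy(pred, gt, thresh):
    """Proportion of depth elements with max(pred/gt, gt/pred) below
    the threshold, evaluated on the pixels passed in (here: only
    elements covered by transparent objects)."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < thresh))

pred = np.array([1.00, 1.08, 0.85])
gt = np.array([1.0, 1.0, 1.0])
acc_105 = delta_accuracy(pred, gt, 1.05)
acc_110 = delta_accuracy(pred, gt, 1.10)
acc_125 = delta_accuracy(pred, gt, 1.25)
```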
95
+ {
96
+ "section_id": "4.4",
97
+ "parent_section_id": "4",
98
+ "section_name": "IV-D Robot experiment and demonstration",
99
+ "text": "We implemented ROS nodes for transparent object depth completion using the depth-to-depth network and a Detectron2 [wu2019detectron2] instance segmentation network fine-tuned on transparent object data generated using NeuralLabeling.\nThe robot used is the RT Corporation Sciurus17.\nGrasps are evaluated on two objects, a tall glass and a wine glass, which were part of the dataset for training the depth completion network and fine-tuning the segmentation network.\nWe placed the objects in 9 positions inside the dishwasher, and performed 3 trials per position, for a total of 54 trials.\nThe overall grasp success rate using the system is 83.3%.\nWine glass grasp success rate was 92.3% and tall glass grasp success rate was 75%.\nWe performed the same experiment with our predicted segmentation masks but without using depth completion (i.e. using the original sensor depth).\nThe overall grasp success rate without depth completion was 16.3%.\nWine glass grasp success rate without depth completion was 29.6% and tall glass grasp success rate without depth completion was 0%.\nIn future work, we plan to explore more advanced neural network designs for more accurate depth completion, as well as mechanical improvements to the gripper to allow for a larger error tolerance.\nAs shown in the supplemental material, our robot system can also perform sequential grasping of transparent objects placed in a dishwasher environment."
100
+ },
101
+ {
102
+ "section_id": "5",
103
+ "parent_section_id": null,
104
+ "section_name": "DISCUSSION AND CONCLUSION",
105
+ "text": "We presented NeuralLabeling, a labeling approach and toolset for annotating NeRF renderings and generating datasets for downstream deep learning applications.\nWith NeuralLabeling we were able to rapidly create datasets of transparent objects in a complex environment and use the datasets to greatly improve the performance of transparent object depth completion and to perform instance segmentation in a transparent object manipulation example.\nThe main limitation of NeuralLabeling is the significant time required to record scenes and generate camera extrinsics for each captured frame; however, this is mostly automated and could be further automated in the future.\nIn future work, we plan to apply NeuralLabeling to larger scenes such as supermarkets and convenience stores for generating datasets to fine-tune vision-language models.\nWe also plan to investigate how NeuralLabeling can be applied to dynamic scenes and how high-quality object meshes can be used to insert objects into scenes where the objects were not originally located."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Comparing unique aspects of labeling tools. All tools support segmentation masks. NDR\u00a0=\u00a0No Input Depth Required, G\u00a0=\u00a0Geometry, M\u00a0=\u00a0Mesh, 6D\u00a0=\u00a06DOF poses, O\u00a0=\u00a0Occlusion masks, A\u00a0=\u00a0Affordance maps, OD\u00a0=\u00a0Object Depth</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S1.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S1.T1.1.1.1.2\">Inputs</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S1.T1.1.1.1.3\">Selection</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"4\" id=\"S1.T1.1.1.1.4\">Outputs</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.1.2.2.1\">Tool</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.1.2.2.2\">NDR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S1.T1.1.2.2.3\">G</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S1.T1.1.2.2.4\">M</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S1.T1.1.2.2.5\">6D</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S1.T1.1.2.2.6\">O</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S1.T1.1.2.2.7\">A</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S1.T1.1.2.2.8\">OD</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.1.3.1\">\n<td class=\"ltx_td 
ltx_align_left ltx_border_r ltx_border_t\" id=\"S1.T1.1.3.1.1\">ProgressLabeller\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">chen2022d</span>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.1.3.1.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.1.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.1.3.1.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.1.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.1.6\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.1.7\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.1.8\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.4.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.1.4.2.1\">3D-DAT\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">suchi2023</span>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.4.2.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.2.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.4.2.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.2.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.2.6\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.2.7\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.2.8\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.1.5.3.1\">Nerfing It\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">blomqvist2023b</span>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S1.T1.1.5.3.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.3.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.5.3.4\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.3.5\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.3.6\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.3.7\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.3.8\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.6.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.1.6.4.1\">RapidPoseLabels\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">singh2021</span>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.6.4.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.4.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.6.4.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.4.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.4.6\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.4.7\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.4.8\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.7.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.1.7.5.1\">HANDAL\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<span class=\"ltx_ref ltx_missing_citation ltx_ref_self\">guo2023</span>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.7.5.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.7.5.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.1.7.5.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.7.5.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.7.5.6\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S1.T1.1.7.5.7\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.7.5.8\">\u2717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.8.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S1.T1.1.8.6.1\">NeuralLabeling (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S1.T1.1.8.6.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.1.8.6.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S1.T1.1.8.6.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.1.8.6.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.1.8.6.6\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.1.8.6.7\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.1.8.6.8\">\u2713</td>\n</tr>\n</tbody>\n</table>\n</figure>",
112
+ "capture": "TABLE I: Comparing unique aspects of labeling tools. All tools support segmentation masks. NDR\u00a0=\u00a0No Input Depth Required, G\u00a0=\u00a0Geometry, M\u00a0=\u00a0Mesh, 6D\u00a0=\u00a06DOF poses, O\u00a0=\u00a0Occlusion masks, A\u00a0=\u00a0Affordance maps, OD\u00a0=\u00a0Object Depth"
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Quantitative results of masking using NeRF occlusion. Higher score is better.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"5\" id=\"S4.T2.1.1.1.2\">Binary</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T2.1.1.1.3\">Category</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S4.T2.1.2.2.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.2\">F1-score</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.3\">IoU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.4\">Accuracy</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.5\">Precision</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.2.2.6\">Recall</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.7\">F1-score</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.8\">IoU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.9\">Accuracy</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.10\">Precision</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.2.2.11\">Recall</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.1\">\n<th 
class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.1.1\">SAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.2\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.3\">0.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.4\">0.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.5\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.1.6\">0.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.7\">0.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.8\">0.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.9\">1.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.10\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.11\">0.85</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.4.2.1\">XMem</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.2\">0.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.3\">0.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.4\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.5\">0.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.4.2.6\">0.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.7\">0.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.8\">0.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.9\">1.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.10\">0.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.11\">0.84</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T2.1.5.3.1\">Ours</th>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.2\">0.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.3\">0.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.4\">0.98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.5\">0.95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.1.5.3.6\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.7\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.8\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.9\">1.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.10\">0.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.5.3.11\">0.71</td>\n</tr>\n</tbody>\n</table>\n</figure>",
116
+ "capture": "TABLE II: Quantitative results of masking using NeRF occlusion. Higher score is better."
117
+ },
118
+ "3": {
119
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Transparent object depth completion using weakly supervised methods versus strongly supervised methods</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.1.1.1.1.1\">Training regime</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.1.1.1.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.1.1.2.1.1\">Modality</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.1.3\">RMSE (m) \u2193</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.1.4\">MAE (m) \u2193</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.1.5\">Rel \u2193</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.1.6\">1.05 \u2191</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.1.7\">1.10 \u2191</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.1.8\">1.25 \u2191</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.2\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.2.2.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.2.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.2.2.1.1.1\">Joint Bilateral Filter</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.2.2.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.2.2.2.1\">\n<span class=\"ltx_p\" 
id=\"S4.T3.1.2.2.2.1.1\">RGBD2Depth</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2.3\">0.067</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2.4\">0.048</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2.5\">0.083</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2.6\">0.477</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2.7\">0.688</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2.8\">0.950</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.3\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S4.T3.1.3.3.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.3.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.3.3.1.1.1\">ClearGrasp</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S4.T3.1.3.3.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.3.3.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.3.3.2.1.1\">RGBD2Depth</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.3.3\">0.090</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.3.4\">0.057</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.3.5\">0.120</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.3.6\">0.404</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.3.7\">0.555</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.3.8\">0.840</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.4\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.4.4.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.4.4.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.4.4.1.1.1\">Cyclic adversarial</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.4.4.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.4.4.2.1\">\n<span class=\"ltx_p\" 
id=\"S4.T3.1.4.4.2.1.1\">RGBD2RGBD</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.4.4.3\">0.061</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.4.4.4\">0.040</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.4.4.5\">0.072</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.4.4.6\">0.528</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.4.4.7\">0.767</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.4.4.8\">0.940</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.5\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S4.T3.1.5.5.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.5.5.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.5.5.1.1.1\">Cyclic adversarial</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S4.T3.1.5.5.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.5.5.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.5.5.2.1.1\">Depth2Depth</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.5.3\">0.058</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.5.4\">0.035</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.5.5\">0.061</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.5.6\">0.589</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.5.7\">0.861</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.5.8\">0.954</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6.6\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.6.6.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.6.6.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.6.6.1.1.1\">Dishwasher30k supervised</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.6.6.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.6.6.2.1\">\n<span 
class=\"ltx_p\" id=\"S4.T3.1.6.6.2.1.1\">RGBD2Depth</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.6.6.3\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.1.6.6.3.1\">0.037</em></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.6.6.4\">0.023</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.6.6.5\">0.039</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.6.6.6\">0.725</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.6.6.7\">0.880</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.6.6.8\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.1.6.6.8.1\">0.959</em></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.7\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S4.T3.1.7.7.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.7.7.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.7.7.1.1.1\">Dishwasher30k supervised</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S4.T3.1.7.7.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.7.7.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.7.7.2.1.1\">Depth2Depth</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7.3\">0.043</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7.4\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.1.7.7.4.1\">0.021</em></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7.5\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.1.7.7.5.1\">0.038</em></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7.6\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.1.7.7.6.1\">0.800</em></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7.7\"><em class=\"ltx_emph ltx_font_italic\" id=\"S4.T3.1.7.7.7.1\">0.895</em></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7.8\">0.955</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.8.8\">\n<th 
class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T3.1.8.8.1\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.8.8.1.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.8.8.1.1.1\">Dishwasher30k supervised</span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T3.1.8.8.2\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T3.1.8.8.2.1\">\n<span class=\"ltx_p\" id=\"S4.T3.1.8.8.2.1.1\">RGB2Depth</span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.8.8.3\">0.045</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.8.8.4\">0.028</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.8.8.5\">0.049</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.8.8.6\">0.676</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.8.8.7\">0.861</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.8.8.8\">0.948</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE III: Transparent object depth completion using weakly supervised methods versus strongly supervised methods"
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Qualitative results of our supervised method and previous best cyclic adversarial method.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T4.30\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.30.31.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.30.31.1.1\">Captured Color</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.30.31.1.2\">Captured Depth</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.30.31.1.3\">Our result</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.30.31.1.4\">CycleGAN result</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.30.31.1.5\">Ground truth depth</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.30.32.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"5\" id=\"S4.T4.30.32.2.1\">Three samples with the lowest MAE using our method</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.1\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.1.1.1.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.1.1.1.1.g1\" src=\"extracted/5746185/results/supervised/17-input-color.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.2.2.2\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.2.2.2.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.2.2.2.1.g1\" src=\"extracted/5746185/results/supervised/17-input-depth.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.3.3\">\n<span 
class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.3.3.3.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.3.3.3.1.g1\" src=\"extracted/5746185/results/supervised/17-output-depth.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.3.3.3.1.1\">0.012</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.4\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.4.4.4.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.4.4.4.1.g1\" src=\"extracted/5746185/results/old_method/17-depth-output.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.4.4.4.1.1\">0.032</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.5.5.5\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.5.5.5.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.5.5.5.1.g1\" src=\"extracted/5746185/results/old_method/17-depth-ground-truth.png\" width=\"538\"/>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.6.6.1\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.6.6.1.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.6.6.1.1.g1\" src=\"extracted/5746185/results/supervised/24-input-color.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.7.7.2\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.7.7.2.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.7.7.2.1.g1\" src=\"extracted/5746185/results/supervised/24-input-depth.png\" 
width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.8.8.3\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.8.8.3.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.8.8.3.1.g1\" src=\"extracted/5746185/results/supervised/24-output-depth.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.8.8.3.1.1\">0.013</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.9.9.4\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.9.9.4.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.9.9.4.1.g1\" src=\"extracted/5746185/results/old_method/24-depth-output.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.9.9.4.1.1\">0.025</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.10.10.5\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.10.10.5.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.10.10.5.1.g1\" src=\"extracted/5746185/results/old_method/24-depth-ground-truth.png\" width=\"538\"/>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.11.11.1\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.11.11.1.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.11.11.1.1.g1\" src=\"extracted/5746185/results/supervised/16-input-color.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.12.12.2\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.12.12.2.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" 
id=\"S4.T4.12.12.2.1.g1\" src=\"extracted/5746185/results/supervised/16-input-depth.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.3\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.13.13.3.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.13.13.3.1.g1\" src=\"extracted/5746185/results/supervised/16-output-depth.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.13.13.3.1.1\">0.014</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.14.14.4\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.14.14.4.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.14.14.4.1.g1\" src=\"extracted/5746185/results/old_method/16-depth-output.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.14.14.4.1.1\">0.033</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.15.15.5\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.15.15.5.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.15.15.5.1.g1\" src=\"extracted/5746185/results/old_method/16-depth-ground-truth.png\" width=\"538\"/>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.30.33.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"5\" id=\"S4.T4.30.33.3.1\">Three samples with the highest MAE using our method</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.16.16.1\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.16.16.1.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.16.16.1.1.g1\" 
src=\"extracted/5746185/results/supervised/09-input-color.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.17.17.2\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.17.17.2.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.17.17.2.1.g1\" src=\"extracted/5746185/results/supervised/09-input-depth.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.18.18.3\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.18.18.3.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.18.18.3.1.g1\" src=\"extracted/5746185/results/supervised/09-output-depth.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.18.18.3.1.1\">0.034</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.19.19.4\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.19.19.4.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.19.19.4.1.g1\" src=\"extracted/5746185/results/old_method/09-depth-output.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.19.19.4.1.1\">0.066</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.20.20.5\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.20.20.5.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.20.20.5.1.g1\" src=\"extracted/5746185/results/old_method/09-depth-ground-truth.png\" width=\"538\"/>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.25.25\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.21.21.1\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" 
id=\"S4.T4.21.21.1.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.21.21.1.1.g1\" src=\"extracted/5746185/results/supervised/08-input-color.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.22.22.2\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.22.22.2.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.22.22.2.1.g1\" src=\"extracted/5746185/results/supervised/08-input-depth.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.23.23.3\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.23.23.3.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.23.23.3.1.g1\" src=\"extracted/5746185/results/supervised/08-output-depth.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.23.23.3.1.1\">0.035</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.24.24.4\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.24.24.4.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.24.24.4.1.g1\" src=\"extracted/5746185/results/old_method/08-depth-output.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.24.24.4.1.1\">0.054</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.25.25.5\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.25.25.5.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.25.25.5.1.g1\" src=\"extracted/5746185/results/old_method/08-depth-ground-truth.png\" width=\"538\"/>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.30.30\">\n<td class=\"ltx_td 
ltx_align_center ltx_border_bb\" id=\"S4.T4.26.26.1\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.26.26.1.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.26.26.1.1.g1\" src=\"extracted/5746185/results/supervised/04-input-color.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.27.27.2\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.27.27.2.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.27.27.2.1.g1\" src=\"extracted/5746185/results/supervised/04-input-depth.png\" width=\"538\"/>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.28.28.3\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.28.28.3.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.28.28.3.1.g1\" src=\"extracted/5746185/results/supervised/04-output-depth.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.28.28.3.1.1\">0.036</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.29.29.4\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.29.29.4.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.29.29.4.1.g1\" src=\"extracted/5746185/results/old_method/04-depth-output.png\" width=\"538\"/>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T4.29.29.4.1.1\">0.050</span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.30.30.5\">\n<span class=\"ltx_inline-block ltx_minipage ltx_align_top\" id=\"S4.T4.30.30.5.1\" style=\"width:60.7pt;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_square\" height=\"538\" id=\"S4.T4.30.30.5.1.g1\" 
src=\"extracted/5746185/results/old_method/04-depth-ground-truth.png\" width=\"538\"/>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE IV: Qualitative results of our supervised method and previous best cyclic adversarial method."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.11966v2_figure_1.png",
+ "caption": "Figure 1: NeuralLabeling supports two pipelines for labeling NeRFs: Bounding-box-based labeling for uncluttered scenes and mesh-based labeling for cluttered scenes.",
+ "url": "http://arxiv.org/html/2309.11966v2/extracted/5746185/labeling_tools.png"
+ },
+ "2": {
+ "figure_path": "2309.11966v2_figure_2.png",
+ "caption": "Figure 2: A scene can be labeled using either bounding-boxes or using meshes. Bounding boxes can be used to extract meshes from a scene.",
+ "url": "http://arxiv.org/html/2309.11966v2/extracted/5746185/workflow.png"
+ },
+ "3": {
+ "figure_path": "2309.11966v2_figure_3.png",
+ "caption": "Figure 3: \nNeuralLabeling supports a wide variety of outputs.\nCircled letter references the scene: (A) Mostly Lambertian objects placed upright for mesh extraction, second row shows the annotated bounding boxes, third row shows the geometry generated using the bounding boxes.\n(B) Most of the objects from (A) placed in a shopping basket and annotated using the meshes generated from (A), towel was captured separately, second row shows 3D bounding boxes based on the mesh annotations, third row shows 6DOF poses based on the mesh annotations. Second column of (B) shows instance masks, category masks and binary masks, each using NeRF-to-mesh occlusions rendered directly by NeuralLabeling to improve segmentation accuracy.\n(C) Lambertian objects placed on a lunch plate. We use YCB objects for which we use openly available meshes based on 3D scans using the Google Scanner, second row shows the meshes rendered directly in the scene, third row shows 2D bounding boxes generated based on mesh geometry.",
+ "url": "http://arxiv.org/html/2309.11966v2/extracted/5746185/demo.png"
+ },
+ "4": {
+ "figure_path": "2309.11966v2_figure_4.png",
+ "caption": "Figure 4: Opaque clones of glasses placed up- and down-facing, rendered using NeRF. Using the bounding-box labeling pipeline we extract meshes that are used for annotating the dishwasher scenes.",
+ "url": "http://arxiv.org/html/2309.11966v2/extracted/5746185/glasses.png"
+ },
+ "5": {
+ "figure_path": "2309.11966v2_figure_5.png",
+ "caption": "Figure 5: Non-Lambertian objects in a complicated environment, annotated using opaque clone NeRF meshes, second column shows sensor depth estimate using RealSense D415, third column shows estimated object depth based on mesh annotations, fourth column shows the combination of generated depth with noisy sensor depth, which can be used as ground truth data for training a deep neural network.",
+ "url": "http://arxiv.org/html/2309.11966v2/extracted/5746185/dishwasher_depth.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2309.11966v2"
+ }
20240722/2309.12949v2.json ADDED
@@ -0,0 +1,211 @@
+ {
+ "title": "Guaranteed Private Communication with Secret Block Structure",
+ "abstract": "A novel private communication framework is proposed where privacy is induced by transmitting over a channel instances of linear inverse problems that are identifiable to the legitimate receiver but unidentifiable to an eavesdropper. The gap in identifiability is created in the framework by leveraging secret knowledge between the transmitter and the legitimate receiver. Specifically, the case where the legitimate receiver harnesses a secret block structure to decode a transmitted block-sparse message from underdetermined linear measurements in conditions where classical compressed sensing would provably fail is examined. The applicability of the proposed scheme to practical multiple-access wireless communication systems is discussed. The protocol\u2019s privacy is studied under a single transmission, and under multiple transmissions without refreshing the secret block structure. It is shown that, under a specific scaling of the channel dimensions and transmission parameters, the eavesdropper can attempt to overhear the block structure from the fourth-order moments of the channel output. Computation of a statistical lower bound suggests that the proposed fourth-order moment secret block estimation strategy is near optimal. The performance of a spectral clustering algorithm is studied to that end, defining scaling laws on the lifespan of the secret key before the communication is compromised. Finally, numerical experiments corroborating the theoretical findings are conducted.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "While communication privacy is often ensured at higher network layers [Yu, Tomasin, Schmitt], and can be achieved via cryptographic means; there are new methods in\nphysical layer security [bloch2011physical], which can leverage the structural properties of a communication channel to generate privacy. Physical layer privacy can\nstrengthen security in modern data exchange protocols, such as next-generation wireless systems, the Internet of Things, and satellite constellations. Physical layer security offers numerous complementary guarantees to usual cryptography: It can protect users\u2019 identities, physical locations, or even conceal the existence of a communication to an eavesdropper; and can be implemented opportunistically over wireless channels with no or little computational overhead. There is interest in realizing the theoretical promises of physical layer security in realistic systems [poor2017wireless].\nTraditional physical layer privacy schemes exploit channel differences to share information with Bob without Eve\u2019s knowledge, which often comes with the assumption that Bob and Eve\u2019s channels are distinct. Typical strategies involve the use of artificial noise [goel2008GuaranteeingSecrecy, tomasin2022BeamformingArtificial, rajiv2022securing, krunz2023secure]. The noise can be either injected into the nullspace of channel state information (CSI) and mitigated by exploiting CSI or directly injected noise into the transmitted message and resolved by the legitimate receiver side by exploiting a secret key [zhang2018CovertCommunication, schaefer2018SecureBroadcasting]. 
Other privacy schemes involve random and adversarial beamforming design [ayyalasomayajula2023users, Checa], or the injection of fake paths over geometric channels to diminish the capability of an eavesdropper to distinguish between true and fake paths and challenge the estimation of CSI [li2023ChannelState, tran2024physical] by an eavesdropper.\nThe previously mentioned physical layer security schemes induce privacy by performing a linear action on the transmitted message that is statistically hard to invert without additional knowledge. In a related fashion, the compressed sensing framework [donoho2006compressed] assumes a non-linear prior on the input message and has been exploited to ensure privacy [zhang2016review]. If the sensing matrix is kept secret to an eavesdropper, perfect secrecy can be guaranteed in the information-theoretic sense [liang2009information] under restrictive conditions [bianchi2015analysis]. Typical sensing matrices are functions of the CSI. The computational secrecy of this approach has also been investigated [orsdemir2008security, rachlin2008secrecy], restricting Eve\u2019s ability to recover the encoded message via a polynomial time algorithm.\nMotivated by applications to multiple access wireless systems, we focus here, instead, on a novel model where the sensing matrix (e.g. the channel matrix) is imposed by the environment and is not under the control of the transmitter. Privacy is achieved by sharing an additional structure with the legitimate receiver, easing the decoding of the message [baraniuk2010model]. From the eavesdropper\u2019s perspective, the decoding amounts to solving a bilinear inverse problem, which is known to demand much more stringent assumptions to be identifiable [choudhary2018properties, choudhary2013identifiability2, da2019self, li2016identifiability, lee2018fast, ahmed2013blind]. Thus, statistical hardness is exploited to provide privacy."
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Linear Inverse Problem Based Privacy",
+ "text": "We consider the classical secret communication problem with side information: A transmitter (Alice) wishes to privately transmit a vector to a legitimate receiver (Bob) over a public channel111For clarity purposes, signals are assumed to be real-valued, but the present model and analysis are extendable to the complex case.. The noisy channel output received by Bob and the eavesdropper (Eve) are and , with noise and , respectively. To achieve privacy and prevent Eve from recovering the message , Alice and Bob may communicate a low information rate signal over a secure channel inaccessible to Eve. The secure channel assumption is common in physical layer security and has been previously used\nwith collaborative inference strategies [mohapatra2016CapacityTwoUser], or in covert communication [zhang2021CovertCommunication]. This secure channel can be constructed, for example, through coding such as in the context of a wiretap channel with side information [oggier2011SecrecyCapacity, chen2008wiretap].\n###figure_1### In the proposed setting, the effect of the Alice\u2013Bob and Alice\u2013Eve channels are assumed to be linear and modeled by a \u201cfat\u201d matrices , and , respectively, with so that channel outputs and write\nwhere and are white Gaussian noise. The matrices and are imposed by the environment; is known by Bob and is known by Eve. Finally, we assume that Eve is aware of the communication protocol established by Alice. The overall communication model is depicted in Figure 1 ###reference_###.\nFor the purposes of our analysis, we will assume the channel matrices and to satisfy certain incoherence properties, which are detailed in the sequel. The privacy results in Section III ###reference_### are given in terms of incoherence and hold regardless of the specific realizations of the Alice\u2013Bob and Alice\u2013Eve channels. 
For this reason, and to improve clarity, the subscripts \u201cB\u201d and \u201cE\u201d referring to Bob and Eve\u2019s model parameters are dropped in the rest of the paper unless a disambiguation is explicitly needed.\nTo ensure privacy, Alice, who designs the message and the side information, must ensure two properties. First, Bob must be able to provably recover from the observation via the side information from the secure channel. Second, Eve cannot provably recover without knowing the side information. Thus, Alice is left to design an inverse problem that is identifiable to Bob but unidentifiable to Eve. These goals can typically be jointly achieved by imposing an additional structure on and privately sharing this structure over the secure channel. For practicality, this structure must be comprised of a small number of bits and reusable over multiple transmissions. Building onto our prior work [dacosta2022FrameworkPrivate], we propose that Alice shares a secret block structure with Bob and encodes her message as a block-sparse signal whose support follows this secret structure. Harnessing a block-sparse prior to recovering signals through underdetermined linear measurements has been extensively shown to allow exact recovery in conditions where classical compressed sensing would provably fail [eldar2009robust, eldar2009block, baraniuk2010model, gribonval2003sparse]. We leverage these results to establish the existence of a private communication regime where Alice and Bob achieve secrecy by transmitting single instances of an unidentifiable compressed sensing problem over a public channel. Then, as refreshing the secret block structure at each channel transmission is impractical, we study the privacy of the communication from multiple transmissions while reusing the same secret block structure. We propose a near-optimal method for Eve to eavesdrop on the block structure based on the spectral clustering of the fourth-order moments of the channel output. 
An upper bound on the number of transmissions before the secret structure and the messages are compromised is derived, and the trade-off between key reuse and secrecy is discussed. Spectral clustering is a fast and robust method for recovering low-dimensional structures in high-dimensional datasets. It has been applied, for instance, to recovering partitions and cliques in high-dimensional graphs [rohe2011SpectralClustering] as well as for unsupervised classification in machine learning [von2007tutorial].\nThe proposed signaling scheme is motivated by its applicability to modern multi-user wireless communication protocols. As an example, we assume an uplink scenario in which many transmitters send, within a symbol interval, a message using a precoding scheme through a linear channel imposed by the environment. The received message at the base station classically reads . When the channel users parsimoniously transmit at a given symbol interval, that is, a random fraction of users remain inactive, the channel input can be modeled with a group-sparse prior. If this prior is only known by the legitimate base station (Bob), the relative identifiability of block-sparse signals versus unstructured sparse signals can be exploited to induce privacy against an eavesdropper.\nMany massive access communication schemes rely on sporadic channel traffic [wu2020massiveAccess] to allow more robust decoding on the receiver side, even from an under-determined channel output. We pinpoint two practical schemes where our framework is applicable:\nIn overloaded CDMA communications, the transmitters rely on unique sequences , known to the base station, to spread the messages onto a higher-dimensional space before transmission [verdu1999spectral, chen2001multicarrier]. 
Sparse coded multiple access schemes have been considered to improve user detection when the system is overloaded [alam2018NonOrthogonalMultiple], and adapted coding sequences are proposed in [liu2020IdenticalCode, zhu2011ExploitingSparse]. However, the privacy benefits of overloading have not yet been considered in that context.\nIn massive MIMO communications, the number of identifiable spatial streams equals the number of receive antennas. If the transmitter has more antennas than the receiver, she intermittently activates sub-groups of antennas according to a pattern shared with the receiver and transmits on the active sub-groups at each symbol interval, at the price of a reduced bit rate. Such MIMO systems have been considered to minimize implementation cost [ni2016HybridBlock] or improve spectral efficiency [wang2019non, liu2018gaussian]."
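The linear observation model above can be sketched numerically as follows; the variable names (`H_B`, `H_E`, `sigma_B`, ...) and all dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                       # channel input length (assumed value)
m_B, m_E = 32, 32            # "fat" channels: fewer outputs than inputs (m < n)
sigma_B, sigma_E = 0.01, 0.01  # noise standard deviations (assumed values)

H_B = rng.standard_normal((m_B, n))  # Alice-Bob channel, imposed by the environment
H_E = rng.standard_normal((m_E, n))  # Alice-Eve channel

x = rng.standard_normal(n)           # placeholder input (block-sparse in the sequel)

y_B = H_B @ x + sigma_B * rng.standard_normal(m_B)  # Bob's noisy observation
y_E = H_E @ x + sigma_E * rng.standard_normal(m_E)  # Eve's noisy observation
```

Both channels act on the same transmitted vector; only the realizations of the matrices and of the noise differ between Bob and Eve.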
+ },
+ {
+ "section_id": "1.2",
+ "parent_section_id": "1",
+ "section_name": "Contributions and Paper Organization",
+ "text": "We build upon our prior work [dacosta2022FrameworkPrivate] and present an improved eavesdropping scheme based on fourth moments with complete proofs and numerical simulations. Computation of a statistical lower bound suggests that the improved eavesdropping scheme is asymptotically near-optimal. In Section II ###reference_###, we propose a novel communication protocol that leverages the advantageous recoverability of block-sparse signals to ensure privacy. We provide Alice and Bob\u2019s encoding and decoding strategies, respectively. In our design, Alice transmits secretly to Bob a block structure and uses this structure to encode her message, which can be done at a very low transmission rate, while the channel matrix cannot be designed by Alice and is provided by nature. To the authors\u2019 knowledge, the proposed protocol is the first linear inverse problem-based privacy method that does not require the matrix to be secretly shared. Additionally, unlike most pre-existing physical layer privacy designs, neither co-location nor distinct locations are needed for the proposed scheme. Furthermore, Corollary 3 ###reference_orem3### guarantees that Alice can adjust the block length and the sparsity level of the message she transmits so that the transmission is provably identifiable for Bob and unidentifiable to Eve as the signal length increases.\nIn Section III ###reference_###, we consider the possibility of Eve recovering the secret block structure from the observation of multiple snapshots of the observation that Alice has generated with the same block structure . We show in Proposition 9 ###reference_orem9### that, depending on Alice\u2019s choice of the block length and sparsity level, it is possible to extract from the fourth-order moments of the observation and propose an eavesdropping algorithm to that end. 
We investigate the case of a finite number of snapshots and derive an upper bound on the rate at which Alice must generate a new to prevent Eve from deciphering Bob\u2019s messages.\nWe present numerical results that validate our theoretical findings in Section IV ###reference_###. Section V ###reference_### draws a conclusion, and further research directions are discussed."
+ },
+ {
+ "section_id": "1.3",
+ "parent_section_id": "1",
+ "section_name": "Notations",
+ "text": "Vectors of and matrices of are denoted by boldface letters and capital boldface letters , respectively. The entry of a matrix is written as . The matrix norms , , and refer to the spectral norm, the Frobenius norm, and the maximal absolute value of the entries in , respectively. Given a positive semi-definite matrix , we write and as its smallest eigenvalue, and th-largest eigenvalue (with multiplicity), respectively. The Hadamard product between two matrices and is denoted as . We write by the identity matrix and by the all-one matrix in dimension . Given a random vector , we denote by its covariance matrix. A block structure over into blocks is described by a mapping , and is associated with the indicator matrix of defined by\nWe denote by the subvector of with entries ensuring . The \u201cblock--norm\u201d of a vector is defined as and counts the number of blocks in that are not exactly equal to . For two functions and , we use the Landau notation to denote that the ratio tends to as ."
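For concreteness, the indicator matrix associated with a block structure can be formed as below, assuming (since the defining formula is elided here) the standard convention that entry (i, j) equals 1 exactly when i and j share a block, and that the structure is represented by an array `labels` with `labels[i]` the block index of coordinate i.

```python
import numpy as np

def indicator_matrix(labels):
    """Indicator matrix of a block structure: entry (i, j) is 1.0 iff
    coordinates i and j belong to the same block."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)
```

For example, `indicator_matrix([0, 0, 1, 1])` is a 4 x 4 symmetric matrix with two all-ones diagonal blocks.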
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Privacy with Block Sparsity",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Alice\u2019s Encoding",
+ "text": "In the proposed protocol, Alice constructs her message as follows. Given the knowledge of the channel dimension, Alice initializes the communication by randomly selecting a block structure . Alice sends this structure to Bob over the secret channel. We highlight that this exchange only requires bits of information, which is significantly less than schemes relying on exchanging the entire matrix ( infinite precision numbers). Although not required in practice, we assume for simplicity that the blocks have equal block size , i.e. . Next, Alice selects a probability of block activation , where is assumed for convenience in the analysis, and encodes her message in a block-sparse vector . In the sequel, we assume that is distributed according to a block Bernoulli\u2013Gaussian distribution such that\nwhere is a random i.i.d. standard Gaussian vector of dimension . A visualization of the block sparsity encoding is provided in Figure 2 ###reference_###.\n###figure_2###"
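Alice's encoding step can be sketched in a few lines; the helper names, the equal-block-size layout, and all parameter values are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def random_block_structure(n, d, rng):
    """Randomly partition {0, ..., n-1} into n/d blocks of equal length d."""
    assert n % d == 0
    labels = np.empty(n, dtype=int)
    labels[rng.permutation(n)] = np.repeat(np.arange(n // d), d)
    return labels

def block_bernoulli_gaussian(labels, p, rng):
    """Draw x: each block is active independently with probability p; active
    entries are i.i.d. standard Gaussian, inactive blocks are exactly zero."""
    active = rng.random(labels.max() + 1) < p
    x = rng.standard_normal(labels.size) * active[labels]
    return x, active
```

The structure `labels` is what Alice would share once over the secure channel; fresh draws of `x` reuse it across transmissions.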
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Bob\u2019s Decoding",
+ "text": "At the public channel output, Bob receives a vector and leverages that Alice securely sent to recover the ground truth message . To do so, Bob formulates the block-compressed sensing problem:\nwhere is a parameter that scales proportionally with the standard deviation of the noise .\nHarnessing a block-sparse prior in compressed sensing has been extensively shown in the literature to enhance the identifiability of (4 ###reference_###) and to allow an exact reconstruction of the message with far fewer measurements than classical compressed sensing [eldar2009robust, gribonval2003sparse]. However, directly solving (4 ###reference_###) remains NP-hard in the general case due to the combinatorics inherent to the minimization of . Thus, Bob computes, instead, an estimate of using a polynomial time algorithm of his choice. Among the many algorithms proposed in the literature, Block Matching Pursuit (Block MP) [bach2008consistency], Block Iterative Hard Thresholding (Block IHT), Block Basis Pursuit (Block BP) [eldar2010block], and block-based CoSaMP [baraniuk2010model] have been shown to enjoy provable performance guarantees.\nIn the sequel, we denote as the redundancy parameter, defined as the ratio between the number of measurements at the channel output and the expected number of non-zero entries in the block-sparse input vector . We remark that is trivially needed to decode the message successfully. Asymptotic phase transitions for the success of greedy algorithms to recover the block-sparse ground truth have been studied in the literature [baraniuk2010model]. Proposition 1 ###reference_orem1### reinterprets this result in terms of the parameter , the block-length , and the transmission parameter in the asymptotics .\nSuppose that is a matrix with i.i.d. random Gaussian entries and assume a noise-free environment . 
If\nin the limit where , then Bob can stably recover asymptotically almost surely.\nAdditionally, denoising bounds on the estimate of the input vector are provided in the presence of noise [baraniuk2010model]."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "II-C Privacy Guarantees under a Single Snapshot",
+ "text": "If only one snapshot is observed, it is impossible for Eve to reliably infer , which remains ambiguous even with perfect knowledge of . Therefore, from her perspective, the best possible approach consists of attempting to recover without leveraging the existence of a latent block structure in the message. This amounts to solving a classical compressed sensing program\nThe identifiability condition of (6 ###reference_###) is well-understood to be related to the Restricted Isometry Property (RIP) of the measurement operator [candes2008restricted]. In the case of a Gaussian matrix , the following proposition links the asymptotic failure of (6 ###reference_###) to a function of the model\u2019s parameters, translating results in [blanchard2011compressed] to our context.\nSuppose that is a matrix with i.i.d. random Gaussian entries and assume a noise-free environment . If\nholds in the limit where , then the solution of (6 ###reference_###) is different from with overwhelming probability.\nAltogether, Propositions 1 ###reference_orem1### and 2 ###reference_orem2### suggest that, given the dimensions and of , Alice can select the parameters and so that (5 ###reference_###) and (7 ###reference_###) are jointly satisfied, which is summarized in the sequel.\n###figure_3### If Alice selects a diverging redundancy parameter with and , then the protocol is asymptotically private to the exchange of a single message in the limit .\nAs an example, we discuss the scaling law of the parameters when the number of observations is fixed while the channel input gets large, and assume Alice allows the block length to grow with the channel input at a rate for some .\nIn this setup, Proposition 1 ###reference_orem1### ensures the region is identifiable to Bob for any , while Proposition 2 ###reference_orem2### indicates is non-identifiable to Eve. 
Hence, is asymptotically private.\nThis result suggests that the parameter intervals for the private regime widen with the channel length. This highlights that the proposed communication protocol benefits from larger channel dimensions. Larger channel dimensions can be realized in practice by selecting longer spreading sequences in CDMA systems or increasing the number of antennas in MIMO systems.\nIn practice, Alice wants to maximize the quantity of information transmitted to Bob in a single message by transmitting messages with a maximum number of non-zero entries while remaining in the private regime. In the above example, this is achieved by selecting .\nFigure 4 ###reference_### shows the success rates of Bob and Eve in recovering via the Block-BP and BP algorithms, respectively, for different values of the ratio . We see that as gets small, the success rates for both Bob and Eve diminish. This is intuitive as measures the number of observations relative to the number of active components. The lower the activity level, the fewer non-zero signals that are sent. However, it is also clear that given , there is a sweet spot at , where Bob achieves good performance while Eve does not.\n###figure_4###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Eavesdropping via Higher Order Moments",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Structure of the Moments",
+ "text": "In effect, our results above are for a one-time key, i.e. a new block structure is created for each message to be sent [bianchi2015analysis].\nTo reduce the usage of the secure channel, we want to understand the reusability of in transmitting several independent signals . In this scenario, if Eve can acquire multiple snapshots of observation given by , and under the knowledge of the prior distribution (3 ###reference_###) of , she can attempt to gain statistical information about without having to reconstruct the messages by studying the posterior distribution of . When additional structure can be assumed for the block structure, such as contiguity of the blocks, intra-block correlation can be exploited in the block-sparse Bayesian learning framework (BSBL) to attempt to recover both the block structure and the transmitted messages simultaneously [zhang2013ExtensionSBL, fang2013PatternCoupledSparse]. In the absence of additional information on the block structure, Eve can study the moments of the posterior distribution of . In particular, we observe that given our block signaling, the mean and covariance of carry no information about the block structure . However, the even fourth-order moments of do provide information about the block structure, , as seen below:\nAdditionally, as the odd fourth-order moments of equal zero, the terms in (III-A ###reference_###) are the moments of smallest order containing information about the block structure . As the number of samples that is necessary to estimate moments increases with their order, Eve can restrict herself to the study of the covariance of the vector in an attempt to eavesdrop from the observation of the channel output. Given this observation, understanding the reusability of the block structure is equivalent to understanding Eve\u2019s capability to learn these fourth moments.\nFor notational convenience, let and . 
Moreover, we define the matrices , , and where each component is given, respectively, by,\nThe next proposition, whose proof is presented in Appendix A ###reference_###, gives an expression for the covariance as a function of the matrices , , and of the block structure matrix .\nLet . If is drawn according to (3 ###reference_###) then the covariance of is given by\nwhere the matrix is given by\nProposition 4 ###reference_orem4### proposes a decomposition of the covariance matrix into two main terms:\nThe term , which captures properties of the block structure .\nThe term , which only depends on the block activation probability , on the channel , and on the noise power .\nIn the sequel, we propose a strategy to exploit this structure to learn ."
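Why the elementwise-squared signal carries block information can be checked numerically. The sketch below is a simplified illustration with an identity channel and assumed names: for a block Bernoulli-Gaussian input, same-block coordinates of z = x ⊙ x are positively correlated (theoretical covariance p(1 - p)) while cross-block coordinates are uncorrelated.

```python
import numpy as np

def squared_signal_covariance(X):
    """Empirical covariance of z = x ⊙ x across snapshots (columns of X)."""
    Z = X * X
    Zc = Z - Z.mean(axis=1, keepdims=True)
    return Zc @ Zc.T / (Z.shape[1] - 1)

rng = np.random.default_rng(3)
n, d, p, T = 8, 4, 0.4, 100000
labels = np.repeat(np.arange(n // d), d)               # two blocks of length 4
active = (rng.random((n // d, T)) < p).astype(float)   # per-snapshot block activations
X = rng.standard_normal((n, T)) * active[labels, :]    # block Bernoulli-Gaussian snapshots

C = squared_signal_covariance(X)
# Same-block off-diagonal entries concentrate near p(1-p) = 0.24,
# while cross-block entries concentrate near 0, revealing the structure.
```

With a non-trivial channel, the same statistic is computed on the squared channel output, which is why the correction terms of Proposition 4 are needed.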
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Reconstruction via Spectral Clustering",
+ "text": "In this section, we propose a provable spectral clustering-based algorithm for Eve to infer the block structure from observing a finite number of snapshots . In our setting, the block structure matrix that Eve aims to recover has a rank equal to the number of blocks , which is assumed to be much smaller than the ambient signal dimension . As a result, the reliability of spectral clustering can be anticipated for inferring the low-dimensional block structure.\nWe first review Algorithm 1 ###reference_###. This is a straightforward algorithm that employs the matrix , whose columns are sampled from the channel output, to determine an estimate of the covariance matrix in Equation (10 ###reference_###). This equation is subsequently \u201cinverted\u201d, yielding an estimator of the indicator matrix . As the -leading eigenvectors of the indicator matrix identify exactly the block structure , an estimate of the true block structure is constructed by clustering the rows of the leading eigenvectors of the matrix , following a -means-type procedure described by Algorithm 2 ###reference_###.\nThe rest of this section is dedicated to the theoretical analysis of the estimation procedure proposed by Algorithm 1 ###reference_###. Under incoherence assumptions on the channel matrix , we first assess Eve\u2019s capability to eavesdrop using Algorithm 2 ###reference_### when she has access to infinitely many channel outputs , and thus to the ground truth covariance matrix . Then, we consider the case where Eve observes a finite number of channel outputs."
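The clustering step can be sketched greedily as below; the function name, the fixed tolerance, and the first-fit assignment rule are assumptions for illustration, not the paper's exact Algorithm 2.

```python
import numpy as np

def cluster_eigen_rows(Q_hat, K, tol):
    """Cluster the rows of the K leading eigenvectors of Q_hat: each row is
    attached to the first existing centroid within distance tol, otherwise
    it opens a new cluster (a greedy, k-means-flavored sketch)."""
    _, vecs = np.linalg.eigh(Q_hat)
    V = vecs[:, -K:]                  # K leading eigenvectors as columns
    centroids, labels = [], np.empty(V.shape[0], dtype=int)
    for i, v in enumerate(V):
        dists = [np.linalg.norm(v - mu) for mu in centroids]
        if dists and min(dists) < tol:
            labels[i] = int(np.argmin(dists))
        else:
            centroids.append(v)
            labels[i] = len(centroids) - 1
    return labels
```

On the exact indicator matrix, rows of the leading eigenvectors are identical within a block and well separated across blocks, so any reasonable tolerance recovers the partition.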
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Conditions for Exact Clustering",
+ "text": "Eve\u2019s ability to estimate sufficiently close to the indicator matrix is a determining factor in her attempt to recover . When the spectral distance is small enough, the eigenvectors of will align with those of , and the block structure will become identifiable by spectral clustering.\nWe start the theoretical derivations by finding, in Proposition 5 ###reference_orem5###, a sufficient condition on under which the K-means clustering procedure described by Algorithm 2 ###reference_### returns exactly the secret block structure .\nAssume is the indicator matrix of a block structure with . Then, for any with , the output of Algorithm 1 ###reference_### applied to the matrix that is composed of the leading eigenvectors of exactly recovers the block structure, i.e. .\nFirst, it is easy to confirm from Equation (2 ###reference_###) that . As both and are Hermitian matrices, they have orthogonal bases of eigenvectors. We write the matrices whose columns are the eigenvectors corresponding to the leading eigenvalues of and , respectively. By the Davis\u2013Kahan eigenvector perturbation theorem [davis1970RotationEigenvectors], there exists an orthogonal matrix such that\nwhere we used when in the second inequality.\nNext, we denote by and the th columns of the matrices and , respectively. From the expression (2 ###reference_###) of , the vector indicates the block to which the th element belongs, more precisely we have\nSuppose that and let , which represents the rotated true centroid of the th block. Equation (III-C ###reference_a###) implies that . Therefore, this also implies that the estimated centroid of the th block satisfies at each step of the algorithm.\nFrom the triangle inequality, we have,\nBy orthogonality of the eigenvectors and , we also have that for any . 
Hence if we may write\nHence for any , and we conclude with (14 ###reference_###) and (III-C ###reference_###) that at the th iteration, Algorithm 2 ###reference_### associates if there is an element in that is in the th cluster, and otherwise associates to a new cluster . This results in at the algorithm\u2019s output.\u220e"
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "III-D Asymptotic Vulnerability",
+ "text": "In this subsection, we assume that Eve can sample infinitely many channel outputs that have been produced with the same secret block structure , and we wish to understand Eve\u2019s capability to recover from Algorithm 1 ###reference_###. Of particular interest, in this setting Eve knows the probability distribution and consequently has access to the ground truth covariance matrix given in (10 ###reference_###).\nUnder the additional pessimistic hypothesis that Eve knows the activation probability (in more practical considerations, the transmission parameter can be estimated by Eve from the covariance of the channel output as ), the block length , the channel matrix , and the statistics of the noise , she can compute the matrices and in Proposition 4 ###reference_orem4###, and the constant defined in the fourth step of Algorithm 1 ###reference_###. Hence, she can formulate the estimate of the block structure as\nand achieves a spectral distance to the ground truth indicator matrix\nThe crux is to understand when (17 ###reference_###) matches the sufficiency criterion of Proposition 5 ###reference_orem5###, granting Eve perfect recovery , and thus revealing the vulnerability of the proposed scheme.\nTo that end, we must note that the matrices , and introduced in (9 ###reference_###) are summations of fourth-order moments of the matrices and . Furthermore, even if the entries of the matrix are assumed to be drawn i.i.d., the products considered in (9 ###reference_###) are coupled, and the summations are over dependent terms. As a result, additional statistical assumptions on the distribution of the channel matrix are needed to control the estimate of the block structure . Therefore, we provide Definition 6 ###reference_orem6###, which introduces a new notion of coherence relevant to our spectral clustering context.\nFor an matrix , we let and . 
Given two positive numbers and , a matrix is said to be -coherent if and only if the following bounds hold:\nFirst-order bounds:\nSecond-order bound:\nFourth-order bounds:\nFor any block structure over elements with maximal block length , and for , the fourth order matrix satisfies\nwhere , and the fourth order matrices and satisfy\nThe matrix is invertible and\nThe parameter is raised to different exponents in (18 ###reference_###) to maintain homogeneity across the different matrix norms. Understanding when a matrix is -coherent is crucial for applying our theoretical analysis of Algorithm 1 ###reference_###. However, finding coherence parameters when assuming the entries of to be drawn i.i.d. from a known prior distribution can be particularly challenging, as the quantities defined in (9 ###reference_###) are summations of fourth and eighth-order terms in the matrix . As a result, the terms in those summations are dependent, and the usual incoherence bounds for matrix sensing [candes2006robust, davenport2016overview] cannot be directly applied.\nNonetheless, an interesting class of matrices to consider is the one whose columns are drawn i.i.d. according to a unitary and isotropic distribution. In that case, we have\nUnder the additional assumption that the columns of have a bounded inner product, i.e. if there exists a small enough such that\nfor all , then we can show that -coherence holds with high probability. Indeed, (18a ###reference_.1###) holds because of the unitary isotropic assumption on , (18b ###reference_.2###) is induced by the bounded, hence sub-Gaussian, concentration of the matrix (see e.g. [tropp2015introduction]), (18c ###reference_.3###) is immediate from (20 ###reference_###), and (18g ###reference_.7###) follows from given a small enough . Finally, Lemma 7 ###reference_orem7### validates (18d ###reference_.4###) with high probability, and its proof is provided in Appendix C-A ###reference_###. 
The proofs of the two latter bounds (18e ###reference_.5###) and (18f ###reference_.6###) are omitted for brevity and can be re-derived by following analogous reasoning.\nSuppose that the columns of are drawn i.i.d. according to a unitary isotropic random distribution and that (20 ###reference_###) holds for some . There exists a constant such that (18d ###reference_.4###) is satisfied with probability greater than .\nA numerical validation of the -coherence assumption is presented in Section IV ###reference_### when the channel matrix is i.i.d. Gaussian, or with columns drawn i.i.d. uniformly on the sphere.\nThe -coherence assumption on the matrix can be exploited with to control the spectral distance (17 ###reference_###) as\nA direct application of Proposition 5 ###reference_orem5### with (III-D ###reference_b###) yields the following characterization of the asymptotic vulnerability of the communication protocol proposed in Section II ###reference_### from an eavesdropper attempting to learn the secret block structure via Algorithm 2 ###reference_###.\nSuppose that is -coherent. Then, if\nEve can recover the block structure by applying Algorithm 1 ###reference_### provided access to infinitely many samples of the channel outputs .\nThis result suggests that the communication protocol between Alice and Bob proposed in Section II ###reference_### is compromised, given knowledge of the ground truth covariance and a channel of large enough output dimension, under constant reuse of the secret key. We call this regime asymptotic vulnerability."
+ },
+ {
+ "section_id": "3.5",
+ "parent_section_id": "3",
+ "section_name": "III-E Estimation with a Finite Number of Snapshots",
+ "text": "In practice, Eve can access a limited number of snapshots before Alice terminates the communication or refreshes the structure . Consequently, the true covariance always remains unknown to Eve. Instead, she can attempt to estimate from the empirical estimator of the covariance given by\n\nwhere and is the operator that stacks the diagonal elements of a matrix into an -dimensional vector. Proposition 9 ###reference_orem9### provides recovery guarantees for Eve provided she accesses a large enough number of snapshots .\nLet the quantity be as defined in (22 ###reference_###) and suppose that ; then there exists a constant such that if satisfies\nthen the output of Algorithm 1 ###reference_### satisfies with probability greater than .\nThe proof of Proposition 9 ###reference_orem9### is presented in Appendix B ###reference_###. We observe from (23 ###reference_###) that even in the absence of noise on the Alice\u2013Eve channel, Eve still needs a non-trivial number of snapshots to provably recover the block structure ."
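As a concrete, simplified finite-snapshot experiment (identity channel and no noise, assumptions that remove the correction term of Proposition 4), the block structure can be read off the Gram matrix of the leading eigenvectors of the empirical covariance of the squared snapshots:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, p, T = 16, 4, 0.3, 100000  # assumed toy dimensions and snapshot budget
K = n // d

labels = np.empty(n, dtype=int)
labels[rng.permutation(n)] = np.repeat(np.arange(K), d)  # secret block structure
active = (rng.random((K, T)) < p).astype(float)
X = rng.standard_normal((n, T)) * active[labels, :]      # T snapshots, key reused

Z = X * X
Zc = Z - Z.mean(axis=1, keepdims=True)
C = Zc @ Zc.T / (T - 1)          # empirical Cov(x ⊙ x); here ≈ p(1-p) Q + 2p I

_, vecs = np.linalg.eigh(C)
V = vecs[:, -K:]                 # K leading eigenvectors
Q_hat = V @ V.T > 1.0 / (2 * d)  # Gram matrix ≈ Q / d; threshold halfway
Q_true = labels[:, None] == labels[None, :]
```

With enough snapshots `Q_hat` matches `Q_true`; shrinking T below the concentration requirement is what Alice exploits when she rate-limits key reuse.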
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Numerical simulations",
+ "text": "Next, we provide experiments to show the efficacy of our proposed scheme and validate the theoretical results. We underscore that our assumptions are very favorable to Eve, who is assumed to know: (1) the channel matrix , (2) the probability of block activation , and (3) the block length . More practical conditions (errors in the estimate of ) are considered in Figure 10 ###reference_###, where Eve\u2019s performance degrades even further."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Coherence Assumption",
+ "text": "We start by validating the scaling laws of the coherence metric proposed in Definition 6 ###reference_orem6###. Figure 5 ###reference_### shows the empirical probability of a channel to be -coherent for varying values of the parameters . Two isotropic probability distributions are considered for the channel: 1) i.i.d. Gaussian entries; 2) i.i.d. columns drawn uniformly on the sphere. Under the ratio , the numerical simulations suggest that selecting and (respectively, and ) is enough to guarantee -coherence of the Gaussian channel (respectively, uniform spherical channel) with high probability, independently of the channel dimensions, provided a large enough . Furthermore, given fixed channel dimensions, the uniform spherical channel has sharper tails than the Gaussian one, resulting in the more favorable coherence parameters seen in Figure 5 ###reference_###."
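The two channel ensembles compared in Figure 5 can be drawn as below, together with the maximal column inner product whose boundedness is assumed in (20); the helper names are assumptions for illustration.

```python
import numpy as np

def spherical_channel(m, n, rng):
    """Channel whose columns are drawn i.i.d. uniformly on the unit sphere of R^m."""
    H = rng.standard_normal((m, n))
    return H / np.linalg.norm(H, axis=0, keepdims=True)

def max_column_coherence(H):
    """Largest |<h_i, h_j>| over pairs of distinct normalized columns."""
    Hn = H / np.linalg.norm(H, axis=0, keepdims=True)
    G = np.abs(Hn.T @ Hn)
    np.fill_diagonal(G, 0.0)
    return G.max()
```

A Gaussian channel `rng.standard_normal((m, n))` can be passed to the same coherence check, since the columns are normalized internally.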
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Validation of the Spectral Clustering Method",
+ "text": "In this section, we validate the theoretical findings presented in Section II-C ###reference_###, Section III-D ###reference_###, and Section III ###reference_### through numerical simulations. Herein, the block compressed sensing problem (4 ###reference_###) and the compressed sensing problem (6 ###reference_###) are solved using the block-basis pursuit (Block-BP) and basis pursuit (BP) convex relaxations with Matlab and the SPGL1 package [van2009probing]. For a unitary and isometric matrix , the signal-to-noise ratio (SNR) at the channel output is defined as . We subsequently select at random with independent Gaussian entries .\n###figure_5### ###figure_6### ###figure_7### ###figure_8### We consider the clustering capabilities of Algorithm 1 ###reference_###. Figure 6 ###reference_### shows the clusters returned by the subroutine Algorithm 2 ###reference_### for different numbers of snapshots and different SNRs, for the case where and and ; that is, due to the block structure, we have clusters. It is clear that the value of (number of snapshots) impacts whether we can identify the clusters and, thus, the block structure. Additionally, high SNR values result in better identifiability of the clusters, especially under a limited number of snapshots, when the signal and the noise empirical covariances are not yet decoupled.\n###figure_9### ###figure_10### Next, we evaluate the probability for Eve to recover the correct block structure from the output of Algorithm 1 ###reference_### as a function of the number of observed snapshots, , that she has acquired without a refresh of the block structure. We evaluate the empirical error rate of Algorithm 1 ###reference_###, defined as the fraction of random problem instances where . 
To assess the secrecy of the proposed protocol, we compare this empirical error rate with the error rate of a Hoeffding test between the probability distribution of the channel output produced by the true block structure and the probability distribution produced by another block structure . Given the Kullback\u2013Leibler divergence between those two distributions, Hoeffding\u2019s error rate is given by for some , where the minimum is taken over all possible block structures of -blocks of length that are not equal to . Hoeffding\u2019s error rate is an asymptotic statistical lower bound on the error probability for hypothesis testing [hoeffding1965asymptotically]. As and are Gaussian mixtures in dimension with classes, calculating the KL-divergence by a Monte Carlo method is computationally prohibitive, and we evaluate instead its variational approximation [hershey2007approximating]. The findings shown in Figure 7 ###reference_### suggest that larger values of increase Eve\u2019s learning rate of the secret block structure, which corroborates the theoretical results of Proposition 9 ###reference_orem9### as in fixed SNR settings. Additionally, for larger values of , we observe that Algorithm 1 ###reference_### achieves an error exponent close to Hoeffding\u2019s rate, indicating the near-optimality of the proposed moment method for eavesdropping on the block structure in the asymptotic .\n###figure_11### ###figure_12###"
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C Applications to MIMO systems",
+ "text": "Motivated by communication applications, we consider the downlink of a massive MIMO system. We assume Alice parsimoniously transmits messages encoded on a block-sparse BPSK constellation to Bob, meaning that is drawn according to a block-Bernoulli probability distribution, i.e. within an active block with independent and equal probability , and within a non-active block. We define the bit-error-rate (BER) as the ratio of entries that are in the active support of Alice\u2019s message () and that are incorrectly decoded by the receiver, i.e. . Assuming that Eve relies on her estimate of the block structure obtained from the output of Algorithm 1 ###reference_###, we empirically evaluate Bob\u2019s and Eve\u2019s BERs as a function of the number of snapshots in Figure 9 ###reference_###, and as a function of the SNR in Figure 9 ###reference_###. The figures suggest that larger values of ease both Bob\u2019s and Eve\u2019s decoding. Eve can achieve the same BER as Bob if the secret structure is reused sufficiently many times. Additionally, for a fixed number of snapshots, Eve\u2019s decoding is more impeded by the noise than Bob\u2019s, and the BER margin between Bob and Eve increases with the redundancy parameter . Hence, for fixed channel dimensions, if Alice reduces her communication rate with Bob by selecting a smaller block activation probability , she can harden Eve\u2019s decoding. This observation shows the trade-off between the communication rate Alice can achieve and the secrecy against an eavesdropper the protocol can induce. For the example considered, it is clear that for and Eve cannot decode while Bob can.\nIn practice, the channel is usually harder to estimate for Eve than for Bob. Figure 10 ###reference_### concludes the numerical results by comparing Bob\u2019s and Eve\u2019s BER when the eavesdropper has perfect and imperfect knowledge of . Herein, the estimate of the channel matrix is modeled as , where is an matrix with i.i.d. 
Gaussian entries. The SNR on Eve\u2019s estimate of the channel is defined as . Additionally, a baseline performance comparison is performed with a classical MIMO channel of dimension where BPSK symbols are transmitted without any physical layer security scheme and retrieved by a receiver with a maximum likelihood (ML) decoder. As expected, imperfect knowledge of the channel matrix further diminishes Eve\u2019s decoding performance, enhancing the privacy of the communication protocol. The baseline ML decoding illustrates the trade-off between enabling privacy and achieving good error rates.\n###figure_13###"
112
+ },
113
+ {
114
+ "section_id": "5",
115
+ "parent_section_id": null,
116
+ "section_name": "Conclusions and Future Work",
117
+ "text": "This article introduced a novel communication protocol with provable privacy guarantees. The proposed method harnesses a secret block-sparse prior to recovering the initial message from underdetermined linear measurements gathered at the output of a fat channel matrix. As block sparsity allows exact recovery in conditions where classical compressed sensing would provably fail, we established the existence of a secure transmission regime to a single snapshot between Alice and Bob. We studied the privacy guarantees of this communication protocol for multiple transmissions without refreshing the shared secret and proposed an algorithm for an eavesdropper to learn the block structure via the method of moments. The proposed block structure estimator appears to be asymptotically near-optimal. We validated the privacy benefits of this framework through numerical experiments.\nPossible extensions of this work include a comprehensive study of the trade-off between the communication rate that Alice and Bob can achieve and the lifespan of the secret block structure. Additionally, the proposed scheme paves the way for further linear inverse problem-based implementation of private communication protocols over the physical layer."
118
+ }
119
+ ],
120
+ "appendix": [
121
+ {
122
+ "section_id": "Appendix 1",
123
+ "parent_section_id": null,
124
+ "section_name": "Appendix A Proof of Proposition 4",
125
+ "text": "Let and for convenience purposes. Moreover let . We have that\nWe aim to derive the expression of the covariance of . First, the independence between and implies the independence between and . Additionally, the assumptions and imply that and . This yields\nHence the covariance matrix of the random vector reduces to\nWe derive in the sequel the expression of each of the three matrices on the right-hand side of (26 ###reference_###)."
126
+ },
127
+ {
128
+ "section_id": "Appendix 2",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix B Proof of Proposition 9",
131
+ "text": "We start the proof by noticing that from Proposition 4 ###reference_orem4###, the indicator matrix of the block structure matrix is given by\nwhere is defined in (11 ###reference_###) and is independent of . The spectral distance can be bounded with the triangle inequality, the -incoherence of the matrix , and the assumption as follows\nWe obtain from (B ###reference_1###) and Proposition 5 ###reference_orem5### that Algorithm 1 ###reference_### outputs the true block structure if\nwhere the right-hand side of (B ###reference_5###) is non-negative by the assumption . The estimated covariance error on the left-hand side of (B ###reference_5###) can be made arbitrarily small for a sufficiently large number of snapshots . Lemma 10 ###reference_orem10### provides a high-probability bound on the error on the estimated covariance error as in terms of the problem parameters.\nUnder the hypothesis of Proposition 9 ###reference_orem9###, there exist a constant such that the event\nholds with probability greater than .\nFor readability, the proof of Lemma 10 ###reference_orem10### is deferred to Appendix C-B ###reference_###.\nIt suffices to replace the left-hand side of inequation (B ###reference_5###) with the high probability bound given Lemma 10 ###reference_orem10### to yield the desired statement. \u220e"
132
+ },
133
+ {
134
+ "section_id": "Appendix 3",
135
+ "parent_section_id": null,
136
+ "section_name": "Appendix C Proof of the Technical Lemmas",
137
+ "text": "We start by studying the expected value of the diagonal terms of . We have that\nand we write for each\nBy the isotropy assumption on the matrix , is constant for different values of , and we may write . Moreover, by (19 ###reference_###), the right hand side of (44 ###reference_###) is a summation over elements yielding\nCounting the number of occurrences in each case, we have\n\nAdditionally, under the lemma\u2019s conditions, (43 ###reference_###) and (44 ###reference_###) imply\nwhere we used in the second inequality the assumption . As a result, by the isometry assumption, the terms of the summation in the right-hand side of (C-A ###reference_7###) are independent and bounded by and when and , respectively. Hence, the Chernoff bound can be applied [boucheron2013concentration], and we have\nOn the off-diagonal, because of the isotropy assumption, the random variable with has an even distribution for all and whenever . Therefore its expected value is null, that is . Denote by the matrix with off-diagonal terms equal to with diagonal entries for all . Relying on the symmetrization principle, we introduce the Rademacher random variable , where denotes the signum function. We note that are pair-wise independent. Furthermore, we can decompose the matrix as the sum\nNext, we recall in Proposition 11 ###reference_orem11### (see e.g. [tropp2015introduction, Theorem 4.1.1]) a matrix norm concentration inequality for matrices with Rademacher entries.\nConsider a fixed symmetric matrix of dimension . Let be a finite sequence of independent Rachemacher variables, and introduce the matrix Rademacher series\nLet be the matrix variance statistic of the Rademacher sum defined as\n\nthen for all we have\nTo bound using Proposition 11 ###reference_orem11###, we evaluate the matrix variance from the decomposition (48 ###reference_###). 
It yields\nwhere the quantity to maximize in the last inequality is constant across different values for and can be evaluated for without loss of generality. The inner summation in (C-A ###reference_1###) is taken over terms, which are equal to when and , and equal to when or . After counting the occurrences, we may reduce (C-A ###reference_1###) to\nApplying the matrix concentration inequality of Proposition 11 ###reference_orem11### with yields\nWe are now ready to establish the desired statement. First, by the triangle inequality, we have\nIt suffices to substitute the probability bounds (47 ###reference_###) and (53 ###reference_###) into (C-A ###reference_5###) with the union bound to yield\nwith probability greater than . The statement of Lemma 7 ###reference_orem7### follows by selecting the incoherence parameter .\u220e\nWe seek to upper bound the quantity with overwhelming probability. We start the proof by recalling in Proposition 12 ###reference_orem12### the matrix Bernstein concentration inequality in the case of covariance estimation [tropp2015introduction].\nAssume that there exists a constant such that for all , we have that\nHence, providing a high-probability bound on is sufficient to prove the desired statement. To that end, we apply the triangle inequality on (10 ###reference_###). This yields\nNow, we individually bound each element on the right-hand side of (C-B ###reference_6###). We recall that is controlled by the -coherence assumption on . Furthermore, we recall that for any Hermitian matrices of the same dimension, we have (see e.g. [johnson1990matrix, 113]). This implies that\nWe are now ready to bound to derive an upper bound on . 
Applying the triangle inequality on the expression of given in (11 ###reference_###) gives\nSubstituting (58 ###reference_###) into (C-B ###reference_###) and leveraging the -coherence assumption on the matrix yield\nFinally, we can substitute (C-B ###reference_###) into (C-B ###reference_6###) to obtain\nWe achieve the desired statement with and by letting in the matrix Bernstein bound (56 ###reference_###). \u220e"
138
+ }
139
+ ],
140
+ "tables": {},
141
+ "image_paths": {
142
+ "1": {
143
+ "figure_path": "2309.12949v2_figure_1.png",
144
+ "caption": "Figure 1: Communication model with secure channel.",
145
+ "url": "http://arxiv.org/html/2309.12949v2/x1.png"
146
+ },
147
+ "2": {
148
+ "figure_path": "2309.12949v2_figure_2.png",
149
+ "caption": "Figure 2: Example of block sparse encoding in dimension n=12\ud835\udc5b12n=12italic_n = 12, with r=3\ud835\udc5f3r=3italic_r = 3 blocks of length d=4\ud835\udc514d=4italic_d = 4.",
150
+ "url": "http://arxiv.org/html/2309.12949v2/x2.png"
151
+ },
152
+ "3": {
153
+ "figure_path": "2309.12949v2_figure_3.png",
154
+ "caption": "Figure 3: Regions of (non)identifiability for Eve and Bob in the single snapshot case for a block-length d=n\u2062log\u2212\u03b4\u2061(n)\ud835\udc51\ud835\udc5bsuperscript\ud835\udeff\ud835\udc5bd=n\\log^{-\\delta}(n)italic_d = italic_n roman_log start_POSTSUPERSCRIPT - italic_\u03b4 end_POSTSUPERSCRIPT ( italic_n ) with \u03b4>0\ud835\udeff0\\delta>0italic_\u03b4 > 0.",
155
+ "url": "http://arxiv.org/html/2309.12949v2/x3.png"
156
+ },
157
+ "4": {
158
+ "figure_path": "2309.12949v2_figure_4.png",
159
+ "caption": "Figure 4: Success Rate of Bob and Eve to recover \ud835\udc99\ud835\udc99\\bm{x}bold_italic_x for different values of \u03b2\ud835\udefd\\betaitalic_\u03b2 in the absence of noise. The parameters are set to m=200\ud835\udc5a200m=200italic_m = 200, while r\ud835\udc5fritalic_r is set to the divisor of n\ud835\udc5bnitalic_n closest to log102\u2061(n)superscriptsubscript102\ud835\udc5b\\log_{10}^{2}(n)roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( italic_n ) (i.e. r\u2243log102\u2061(n)similar-to-or-equals\ud835\udc5fsuperscriptsubscript102\ud835\udc5br\\simeq\\log_{10}^{2}(n)italic_r \u2243 roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( italic_n ) and d\u2243n\u2062log10\u22122\u2061(n)similar-to-or-equals\ud835\udc51\ud835\udc5bsuperscriptsubscript102\ud835\udc5bd\\simeq n\\log_{10}^{-2}(n)italic_d \u2243 italic_n roman_log start_POSTSUBSCRIPT 10 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT ( italic_n )). The results are averaged over 1000 trials.",
160
+ "url": "http://arxiv.org/html/2309.12949v2/x4.png"
161
+ },
162
+ "5(a)": {
163
+ "figure_path": "2309.12949v2_figure_5(a).png",
164
+ "caption": "Figure 5: The empirical probabilities of inequality (18) holding for different values of coherence parameters (\u03bc,\u03bd)\ud835\udf07\ud835\udf08(\\mu,\\nu)( italic_\u03bc , italic_\u03bd ). Top row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A is a random Gaussian matrix with i.i.d. entries. Bottom row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A has columns drawn i.i.d according to a unitary spherical distribution. In blue: n=50\ud835\udc5b50n=50italic_n = 50, in red: n=200\ud835\udc5b200n=200italic_n = 200, in yellow: n=400\ud835\udc5b400n=400italic_n = 400. Herein, we set mn=12\ud835\udc5a\ud835\udc5b12\\frac{m}{n}=\\frac{1}{2}divide start_ARG italic_m end_ARG start_ARG italic_n end_ARG = divide start_ARG 1 end_ARG start_ARG 2 end_ARG. Experiments are averaged over 5000 trials.",
165
+ "url": "http://arxiv.org/html/2309.12949v2/x5.png"
166
+ },
167
+ "5(b)": {
168
+ "figure_path": "2309.12949v2_figure_5(b).png",
169
+ "caption": "Figure 5: The empirical probabilities of inequality (18) holding for different values of coherence parameters (\u03bc,\u03bd)\ud835\udf07\ud835\udf08(\\mu,\\nu)( italic_\u03bc , italic_\u03bd ). Top row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A is a random Gaussian matrix with i.i.d. entries. Bottom row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A has columns drawn i.i.d according to a unitary spherical distribution. In blue: n=50\ud835\udc5b50n=50italic_n = 50, in red: n=200\ud835\udc5b200n=200italic_n = 200, in yellow: n=400\ud835\udc5b400n=400italic_n = 400. Herein, we set mn=12\ud835\udc5a\ud835\udc5b12\\frac{m}{n}=\\frac{1}{2}divide start_ARG italic_m end_ARG start_ARG italic_n end_ARG = divide start_ARG 1 end_ARG start_ARG 2 end_ARG. Experiments are averaged over 5000 trials.",
170
+ "url": "http://arxiv.org/html/2309.12949v2/x6.png"
171
+ },
172
+ "5(c)": {
173
+ "figure_path": "2309.12949v2_figure_5(c).png",
174
+ "caption": "Figure 5: The empirical probabilities of inequality (18) holding for different values of coherence parameters (\u03bc,\u03bd)\ud835\udf07\ud835\udf08(\\mu,\\nu)( italic_\u03bc , italic_\u03bd ). Top row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A is a random Gaussian matrix with i.i.d. entries. Bottom row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A has columns drawn i.i.d according to a unitary spherical distribution. In blue: n=50\ud835\udc5b50n=50italic_n = 50, in red: n=200\ud835\udc5b200n=200italic_n = 200, in yellow: n=400\ud835\udc5b400n=400italic_n = 400. Herein, we set mn=12\ud835\udc5a\ud835\udc5b12\\frac{m}{n}=\\frac{1}{2}divide start_ARG italic_m end_ARG start_ARG italic_n end_ARG = divide start_ARG 1 end_ARG start_ARG 2 end_ARG. Experiments are averaged over 5000 trials.",
175
+ "url": "http://arxiv.org/html/2309.12949v2/x7.png"
176
+ },
177
+ "5(d)": {
178
+ "figure_path": "2309.12949v2_figure_5(d).png",
179
+ "caption": "Figure 5: The empirical probabilities of inequality (18) holding for different values of coherence parameters (\u03bc,\u03bd)\ud835\udf07\ud835\udf08(\\mu,\\nu)( italic_\u03bc , italic_\u03bd ). Top row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A is a random Gaussian matrix with i.i.d. entries. Bottom row: \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A has columns drawn i.i.d according to a unitary spherical distribution. In blue: n=50\ud835\udc5b50n=50italic_n = 50, in red: n=200\ud835\udc5b200n=200italic_n = 200, in yellow: n=400\ud835\udc5b400n=400italic_n = 400. Herein, we set mn=12\ud835\udc5a\ud835\udc5b12\\frac{m}{n}=\\frac{1}{2}divide start_ARG italic_m end_ARG start_ARG italic_n end_ARG = divide start_ARG 1 end_ARG start_ARG 2 end_ARG. Experiments are averaged over 5000 trials.",
180
+ "url": "http://arxiv.org/html/2309.12949v2/x8.png"
181
+ },
182
+ "6": {
183
+ "figure_path": "2309.12949v2_figure_6.png",
184
+ "caption": "Figure 6: Projections of the clusters estimated by Algorithm 2 unto \u211d3superscript\u211d3\\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT for different numbers of snapshots and SNRs. Rows (from top to bottom): L=500\ud835\udc3f500L=500italic_L = 500, L=750\ud835\udc3f750L=750italic_L = 750, L=1500\ud835\udc3f1500L=1500italic_L = 1500. Columns (from left to right): SNR=\u221240\u2062d\u2062BSNR40dB\\operatorname{SNR}=-40\\mathrm{dB}roman_SNR = - 40 roman_d roman_B, SNR=0\u2062d\u2062BSNR0dB\\operatorname{SNR}=0\\mathrm{dB}roman_SNR = 0 roman_d roman_B, SNR=40\u2062d\u2062BSNR40dB\\operatorname{SNR}=40\\mathrm{dB}roman_SNR = 40 roman_d roman_B. Other system parameters are n=400\ud835\udc5b400n=400italic_n = 400, m=200\ud835\udc5a200m=200italic_m = 200, \u03b2=2.5\ud835\udefd2.5\\beta=2.5italic_\u03b2 = 2.5 and r=5\ud835\udc5f5r=5italic_r = 5.",
185
+ "url": "http://arxiv.org/html/2309.12949v2/x9.png"
186
+ },
187
+ "7": {
188
+ "figure_path": "2309.12949v2_figure_7.png",
189
+ "caption": "Figure 7: Probability of failure of Algorithm 1 as a function of the number of snapshots L\ud835\udc3fLitalic_L for different communications rates \u03b2\ud835\udefd\\betaitalic_\u03b2. Dashed lines represent Hoeffding\u2019s error rates pHoeffsubscript\ud835\udc5dHoeffp_{\\mathrm{Hoeff}}italic_p start_POSTSUBSCRIPT roman_Hoeff end_POSTSUBSCRIPT detailed in Section IV for the corresponding values of \u03b2\ud835\udefd\\betaitalic_\u03b2. Herein, we set n=200\ud835\udc5b200n=200italic_n = 200, m=100\ud835\udc5a100m=100italic_m = 100, r=5\ud835\udc5f5r=5italic_r = 5, and SNR=0\u2062d\u2062BSNR0dB\\operatorname{SNR}=0\\mathrm{dB}roman_SNR = 0 roman_d roman_B. Experiments are averaged over 105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT trials.",
190
+ "url": "http://arxiv.org/html/2309.12949v2/x10.png"
191
+ },
192
+ "8(a)": {
193
+ "figure_path": "2309.12949v2_figure_8(a).png",
194
+ "caption": "Figure 8: BER as a function of the number of snapshots L\ud835\udc3fLitalic_L for different communication rates \u03b2\ud835\udefd\\betaitalic_\u03b2. Herein, we set n=400\ud835\udc5b400n=400italic_n = 400, m=200\ud835\udc5a200m=200italic_m = 200, r=20\ud835\udc5f20r=20italic_r = 20, and SNR=0SNR0\\operatorname{SNR}=0roman_SNR = 0dB. Experiments are averaged over 105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT trials.",
195
+ "url": "http://arxiv.org/html/2309.12949v2/x11.png"
196
+ },
197
+ "8(b)": {
198
+ "figure_path": "2309.12949v2_figure_8(b).png",
199
+ "caption": "Figure 8: BER as a function of the number of snapshots L\ud835\udc3fLitalic_L for different communication rates \u03b2\ud835\udefd\\betaitalic_\u03b2. Herein, we set n=400\ud835\udc5b400n=400italic_n = 400, m=200\ud835\udc5a200m=200italic_m = 200, r=20\ud835\udc5f20r=20italic_r = 20, and SNR=0SNR0\\operatorname{SNR}=0roman_SNR = 0dB. Experiments are averaged over 105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT trials.",
200
+ "url": "http://arxiv.org/html/2309.12949v2/x12.png"
201
+ },
202
+ "9": {
203
+ "figure_path": "2309.12949v2_figure_9.png",
204
+ "caption": "Figure 10: BER as a function of SNR. Herein, we set \u03b2=10\ud835\udefd10\\beta=10italic_\u03b2 = 10, n=400\ud835\udc5b400n=400italic_n = 400, m=200\ud835\udc5a200m=200italic_m = 200, r=20\ud835\udc5f20r=20italic_r = 20, and L=400\ud835\udc3f400L=400italic_L = 400. For the cases with non-perfect knowledge of \ud835\udc68\ud835\udc68\\bm{A}bold_italic_A, it is assumed that Eve only has access to \ud835\udc68+\ud835\udc7e\ud835\udc68\ud835\udc68subscript\ud835\udc7e\ud835\udc68\\bm{A}+\\bm{W}_{\\bm{A}}bold_italic_A + bold_italic_W start_POSTSUBSCRIPT bold_italic_A end_POSTSUBSCRIPT, where each element in \ud835\udc7e\ud835\udc68subscript\ud835\udc7e\ud835\udc68\\bm{W}_{\\bm{A}}bold_italic_W start_POSTSUBSCRIPT bold_italic_A end_POSTSUBSCRIPT is white Gaussian noise, and we define SNRA\u225cE\u2062[\u2016\ud835\udc68\u2016\ud835\udda52]E\u2062[\u2016\ud835\udc7e\ud835\udc68\u2016\ud835\udda52]\u225csubscriptSNR\ud835\udc34Edelimited-[]subscriptsuperscriptnorm\ud835\udc682\ud835\udda5Edelimited-[]subscriptsuperscriptnormsubscript\ud835\udc7e\ud835\udc682\ud835\udda5\\operatorname{SNR}_{A}\\triangleq\\frac{\\mathrm{E}\\left[\\|\\bm{A}\\|^{2}_{{\\mathsf%\n{F}}}\\right]}{\\mathrm{E}\\left[\\|\\bm{W}_{\\bm{A}}\\|^{2}_{{\\mathsf{F}}}\\right]}roman_SNR start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT \u225c divide start_ARG roman_E [ \u2225 bold_italic_A \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT sansserif_F end_POSTSUBSCRIPT ] end_ARG start_ARG roman_E [ \u2225 bold_italic_W start_POSTSUBSCRIPT bold_italic_A end_POSTSUBSCRIPT \u2225 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT sansserif_F end_POSTSUBSCRIPT ] end_ARG. Experiments are averaged over 105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT trials.",
205
+ "url": "http://arxiv.org/html/2309.12949v2/x13.png"
206
+ }
207
+ },
208
+ "validation": true,
209
+ "references": [],
210
+ "url": "http://arxiv.org/html/2309.12949v2"
211
+ }
20240722/2309.13193v2.json ADDED
@@ -0,0 +1,171 @@
1
+ {
2
+ "title": "SurrealDriver: Designing LLM-powered Generative Driver Agent Framework based on Human Drivers\u2019 Driving-thinking Data",
3
+ "abstract": "Leveraging advanced reasoning capabilities and extensive world knowledge of large language models (LLMs) to construct generative agents for solving complex real-world problems is a major trend. However, LLMs inherently lack embodiment as humans, resulting in suboptimal performance in many embodied decision-making tasks. In this paper, we introduce a framework for building human-like generative driving agents using post-driving self-report driving-thinking data from human drivers as both demonstration and feedback. To capture high-quality, natural language data from drivers, we conducted urban driving experiments, recording drivers\u2019 verbalized thoughts under various conditions to serve as chain-of-thought prompts and demonstration examples for the LLM-Agent. The framework\u2019s effectiveness was evaluated through simulations and human assessments. Results indicate that incorporating expert demonstration data significantly reduced collision rates by 81.04% and increased human likeness by 50% compared to a baseline LLM-based agent. Our study provides insights into using natural language-based human demonstration data for embodied tasks. The driving-thinking dataset is available at https://github.com/AIR-DISCOVER/Driving-Thinking-Dataset.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "Recently, remarkable advancements have been achieved in large language models (LLMs) known for their zero-shot prompting and common sense reasoning capabilities [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nIn addition to natural language tasks, LLMs, when equipped with specific sensory and control modules [6 ###reference_b6###, 7 ###reference_b7###], can act as the decision-making core in executing embodied tasks, such as robotics and autonomous driving [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nPrevious research has validated the effectiveness of LLMs\u2019 advanced reasoning and extensive knowledge in embodied tasks [9 ###reference_b9###, 10 ###reference_b10###], but has also highlighted limitations in complex scenarios, like generating implausible sequences [11 ###reference_b11###, 12 ###reference_b12###] and a lack of operational experience [13 ###reference_b13###].\nHowever, traditional demonstrations of embodied tasks are seldom suitable as examples for few-shot learning.\nCurrent approaches primarily involve adjusting or constraining the LLM\u2019s task scope [12 ###reference_b12###] and enabling the LLM Agent to independently accumulate experience through environmental interactions [13 ###reference_b13###, 14 ###reference_b14###].\nIn the context of autonomous driving, agents analyze multimodal data, such as vectors [15 ###reference_b15###] and images [16 ###reference_b16###], to make end-to-end driving decisions, demonstrated by projects like Driving with LLMs [15 ###reference_b15###] and DriveGPT4 [16 ###reference_b16###].\nCompared to traditional fine-tuning, prompt-based methods with LLMs offer cost-effective and generalizable solutions [17 ###reference_b17###].\nApproaches like Drive As You Speak [18 ###reference_b18###] and DiLu [19 ###reference_b19###] integrate memory for coherent decision-making, and Drive Like a Human [20 ###reference_b20###] 
incorporates expert feedback to enhance performance.\nHowever, these so-called human-like driving behaviors primarily rely on the human common sense inherent in LLMs.\nLLMs acquire this common sense in a non-embodied manner from noisy internet text corpora, lacking integration of professional, task-specific human data for embodied tasks [21 ###reference_b21###].\nFor LLM-based agents, employing human demonstrations [22 ###reference_b22###] and feedback [23 ###reference_b23###] for reinforcement learning in embodied tasks such as driving proves prohibitively expensive.\nA persistent challenge in this field is the lack of high-quality demonstrations and supervised human data.\nTo this end, in this paper, we innovatively leverage post-driving self-reports from human drivers, analyzing their thought processes as chain-of-thought prompts to enhance driving performance and human alignment in LLM-based agents. This approach offers new insights for aligning LLM-based agents with human drivers in embodied driving tasks.\nWe collected post-driving self-reports from 24 real-world drivers, detailing their considerations and decision-making processes during driving.\nWe then designed \u2018SurrealDriver\u2019, an LLM-based framework for urban driving, grounded in four design considerations: a basic driving pipeline, a safety mechanism, a memory mechanism, and human-aligned long-term driving guidelines, informed by demonstrations of human driving thought processes.\nOur framework was evaluated through simulation experiments and human assessments, confirming the effectiveness of its design.\nTherefore, the contributions of this paper are as follows:\nThe first high-quality natural-language driving-thinking dataset from human drivers, collected through an urban driving experiment;\nA generative driver agent framework designed based on LLMs with human drivers\u2019 driving-thinking data as chain-of-thought prompts and implemented in Carla Simulator;\nAn empirical validation of the effectiveness 
of our framework through simulation ablation experiments and human evaluation."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Driving-thinking Dataset",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Driving Experiment and data collections",
21
+ "text": "To collect high-quality human drivers\u2019 language-type demonstration data, we invited 24 drivers (10 expert drivers and 14 novice drivers) to this driving-thinking Data collection session. Ten expert drivers were recruited through a formal career recruitment platform. They had extensive driving experience, ranging from 12 to 28 years, and their ages ranged from 35 to 48 years (M = 39.9, SD = 4.18). Novice drivers were recruited through social media, resulting in a group of 14 individuals aged between 20 and 25 years (M = 21.93, SD =1.49), with driving experience ranging from 1 to 4 years. This study was approved by the Institutional Review Board of the authors\u2019 institution. Before the experiment, all participants were ensured informed consent, acknowledging potential risks and their right to discontinue the study. To preserve participant confidentiality, all personal and confidential information has been anonymized, and the research results presented below have been subjected to de-identification.\nTo ensure the consistency between the collected natural language demonstrations and actual driving behaviors, we first had them participate in an actual complex urban road driving experiment and then we conducted post-driving interviews to collect their thinking-aloud data for safety reasons. For reviewing the driving experiment details in interview sessions, we recorded the driving process using multiple in-car cameras, including the driver\u2019s eye-tracking device (Tobii Glass 3111https://www.tobii.com/products/eye-trackers/wearables/tobii-pro-glasses-3), roof-mounted 360-degree panoramic camera (Insta360 X3222https://www.insta360.com/product/insta360-x3), and in-car motion camera (Dji OSMO Action 3333https://store.dji.com/hk-en/product/osmo-action-3).\nDuring the interviews, the drivers vocalized their decision-making process behind each driving behaviour as they reviewed the recorded footage. 
Besides, drivers were asked to contemplate the potential reasons behind their judgments and driving actions in complex driving scenarios during the experiment."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Data Analysis and Dataset Construction",
27
+ "text": "Our data consists of 24 driver interview videos, with a duration ranging from 1.5 to 2 hours. We transcribed the audio recordings into written documents and organized the participants\u2019 descriptions of their driving decision processes for each scenario encountered during the experiments. Each participant\u2019s data was processed by two to three trained coders, and a coding consistency check was performed.\nFrom our findings, an expert human driver doesn\u2019t just exhibit good driving behaviors by chance or intuition but continuously summarizes rules and patterns of driving behaviors. The construction of a thought chain progresses from strategic-level thinking to tactical-level decision-making and further to operational-level execution.\nFor example, most expert drivers reported that they observed different directions systematically while turning, no matter which direction they went in. As D11 (expert) shared,\nD11 (expert): \u201dNo matter right or left, I must look at the direction that I turn to first because that\u2019s the road that I will take. However, I also look in the opposite direction. Basically, I look twice. The first time is to look at both sides; the second time is to confirm. Then I take the turns.\u201d\nMoreover, the expert drivers also had systematic, well-developed behavioral patterns when they interacted with other road users. For example, before entering the main road, the expert drivers evaluated the status of cars on the main road to decide when and how they got onto the main road.\nD06 (expert): \u201dLook at the left rearview mirror first, mainly about the speed of the back car. If the speed is slow, I can step on gases and go directly. If the speed is fast, I can pause and wait. I can go after they pass by.\u201d\nWe can see the thought chain of expert drivers is composed of multiple interconnected decision points, each based on the current traffic conditions and anticipated future changes. 
Such patterns not only enable human drivers to form muscle memory through repeated practice but can also be summarized into explicit chains of thought to teach LLM-based autonomous driving algorithms.\nThus, we think that by using the driving-thinking data of expert drivers as prompts, these excellent driving behavior patterns can be expanded and generalized through LLMs. We compiled the \u201cdriving-thinking\u201d data, along with demographic information and driving-related questionnaire data from the participants, into a dataset. This facilitates future research on driving behavior and the development of autonomous driving algorithms.\n###figure_1###"
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III SurrealDriver Framework",
33
+ "text": ""
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Framework Design",
39
+ "text": "Designing an agent capable of driving requires it to comprehend the complexity and diversity of driving environments, execute a continuous series of intricate operations, ensure safety, and harmonize with other human-driven vehicles. Based on these considerations, we have established the following framework as shown in Fig. 1 ###reference_###:"
40
+ },
41
+ {
42
+ "section_id": "3.1.1",
43
+ "parent_section_id": "3.1",
44
+ "section_name": "III-A1 Perception: Atomic Scene and Atomic Actions.",
45
+ "text": "Human driving scenarios are diverse, requiring agents to understand complex situations in detail. Traditional driving simulation methods train across a wide range of scenarios, which is costly.\nOur approach breaks down driving scenarios into discrete parameters for the LLMs. These parameters help the agent assess situations using common sense. We also simplify driving actions in the simulator into basic operations, enabling the agent to combine these for complex driving behaviors."
46
+ },
47
+ {
48
+ "section_id": "3.1.2",
49
+ "parent_section_id": "3.1",
50
+ "section_name": "III-A2 Execution: Short-Term Driving Memory.",
51
+ "text": "Effective car driving demands seamless and continuous actions, minimizing abrupt braking or sharp turns whenever feasible. Additionally, actions such as overtaking and following entail a fusion of fundamental maneuvers (e.g., acceleration, lane changing), rendering driving actions relatively intricate.\nTo maintain smooth driving, we capture the agent\u2019s recent driving behavior over a few steps in the short-term driving memory module. These short-term driving memories aid the agent in sustaining consistency in decision-making. Moreover, the agent can employ these driving memories to amalgamate several basic driving operations for executing complex driving behaviors."
52
+ },
53
+ {
54
+ "section_id": "3.1.3",
55
+ "parent_section_id": "3.1",
56
+ "section_name": "III-A3 Planning: Long-Term Human-like Driving Guidelines.",
57
+ "text": "The agent must align its planning with that of human drivers. This module facilitates the agent in emulating the process by which humans learn from expert drivers to amass expertise and continually enhance their driving skills.\nTo this end, we designed CoachAgent to assess the DriverAgent\u2019s driving behaviors and impart guidelines that must be adhered to. These guidelines are consistently integrated, contributing to the ongoing enhancement of the DriverAgent\u2019s driving proficiency."
58
+ },
59
+ {
60
+ "section_id": "3.1.4",
61
+ "parent_section_id": "3.1",
62
+ "section_name": "III-A4 Overall Process: Strict Safety Criteria.",
63
+ "text": "Ensuring safety is the most critical requirement for driving behavior simulation. Any simulated driving system must prioritize safety and establish rules within its framework to ensure the agent\u2019s safety.\nThus, throughout the entire driving process, safety should be consistently ensured through safety redundancy mechanisms. The agent is provided with stringent safety criteria to ensure the fundamental safety of the driving process."
64
+ },
65
+ {
66
+ "section_id": "3.2",
67
+ "parent_section_id": "3",
68
+ "section_name": "III-B Implementation",
69
+ "text": "###figure_2### We built the SurrealDriver framework in the CARLA simulator [24 ###reference_b24###], including the basic driving pipeline, the memory and safety mechanism, and the human-aligned long-term driving guidelines."
70
+ },
71
+ {
72
+ "section_id": "3.2.1",
73
+ "parent_section_id": "3.2",
74
+ "section_name": "III-B1 Basic Driving Pipeline.",
75
+ "text": "As shown in Fig. 2 ###reference_###, the basic driving pipeline consists of three main processes: perception, decision-making, and control.\nIn perception, DriverAgent receives and integrates vehicle and environmental data from the CARLA simulator. This data, provided as parameters, is analyzed based on predefined prompts and common sense, enabling DriverAgent to understand the vehicle\u2019s current situation.\nFollowing perception, DriverAgent decides on the next steps, prioritizing safety and efficiency. It then proceeds to the control phase, where it sends JSON-formatted commands to CARLA, choosing from actions like stopping, maintaining speed, lane changing, or adjusting speed. These atomic actions allow DriverAgent to execute complex maneuvers based on the scenario."
76
+ },
77
+ {
78
+ "section_id": "3.2.2",
79
+ "parent_section_id": "3.2",
80
+ "section_name": "III-B2 Memory and Safety Mechanisms",
81
+ "text": "The memory and safety mechanisms are built on top of the basic driving pipeline to store the information needed by the DriverAgent. It consists of three modules: Safety criteria and Short-term memory.\nSafety Criteria: We implemented stringent safety criteria set to prevent hazardous maneuvers. The safety redundancy mechanism has two tiers. The first, mandatory tier, mandates actions like stopping if a vehicle or pedestrian is within 10 meters or at a red traffic light. The second, optional but recommended tier, includes decelerating when nearing vehicles or pedestrians within 20 meters, slowing down at intersections, keeping a minimum distance of 1 meter from moving cars, and optimizing energy use by reducing unnecessary speed changes.\nShort-term Memory: To ensure the continuity and complexity of driving, we will store the driving behaviors of the current agent from the past few iterations and continuously update them, replacing the oldest with the latest to maintain a certain number of stored behaviors. These behaviors will then be provided to the DriverAgent again, becoming part of its perception."
82
+ },
83
+ {
84
+ "section_id": "3.2.3",
85
+ "parent_section_id": "3.2",
86
+ "section_name": "III-B3 Human aligned Long-term Driving Guideline",
87
+ "text": "To better align SurrealDriver with human drivers, we utilize the driving-thinking data of expert drivers collected in Section LABEL:Thnking-aloud a chain-of-thought prompt. While designing examples, we followed a three-dimensional approach: situation, reasoning, and action as shown in Fig. 3 ###reference_###. Situation provided specific road conditions during driver operations, and for each comparison case, we set the same road conditions, referencing the road conditions real drivers faced during their interviews. Reasoning was designed based on the content of driver interviews, with irrelevant information removed to make our examples concise and efficient in demonstrating human thinking and guiding the agent to learn human thought patterns.\n###figure_3###"
88
+ },
89
+ {
90
+ "section_id": "4",
91
+ "parent_section_id": null,
92
+ "section_name": "IV Evaluation",
93
+ "text": "We conducted driving experiments using agents from different frameworks in the same scenario, analyzing variations in their behaviors to understand how directives from different frameworks influence their driving. We evaluated the agents based on two primary dimensions: safety-driving capability and human-likeness. Safety-driving capability was assessed using an algorithmic experiment, while human-likeness was assessed through a human experiment."
94
+ },
95
+ {
96
+ "section_id": "4.1",
97
+ "parent_section_id": "4",
98
+ "section_name": "IV-A Algorithm Experiment",
99
+ "text": ""
100
+ },
101
+ {
102
+ "section_id": "4.1.1",
103
+ "parent_section_id": "4.1",
104
+ "section_name": "IV-A1 Experiment Environment Set-up",
105
+ "text": "The experimental setup on a ThundeRobot Zero desktop computer. The simulation environment was built upon the CARLA simulator version 0.9.14 [24 ###reference_b24###] and operated on Python 3.7 with Unreal Engine 4. The simulated environment was chosen to be Town10, and the Audi TT was the designated vehicle for all experiments, with fixed starting and continuously, randomly generated ending points for its path. Upon reaching the endpoint, another endpoint is randomly generated for continuous experiments. This process continues until the required number of driving rounds are completed. We leverage OpenAI\u2019s GPT-4 APIs for simulating drivers\u2019 driving decisions and solving related problems in a simulated environment. However, it takes several seconds for GPT-4 to make a decision, which is too long in a driving context for making immediate decisions. Therefore, we slowed down CARLA\u2019s simulation time based on the required token count by setting a fixed time step of 0.0006-0.0015 seconds."
106
+ },
107
+ {
108
+ "section_id": "4.1.2",
109
+ "parent_section_id": "4.1",
110
+ "section_name": "IV-A2 Results",
111
+ "text": "The overall experiment lasted 108405.90s (30.11 hours); the average experiment time for each condition was 7079.67s, 13730.6s, 23870.28s, and 63725.35s, respectively. We conducted statistical analyses separately for collision rates per unit distance and collision rates per unit time. The detailed results are shown in Table I ###reference_###. Notably, we adjusted the algorithms controlling other vehicles and pedestrians to make them more prone to sudden maneuvers (e.g. abrupt lane changes, running red lights).\nThese edge cases aim to increase the risk level of the driving environment for the agent vehicle, making its driving performance more observable.\nFor the Safety Module, collision rate data shows that the framework with the safety module has a collision rate 57.46% lower than the one without it. For example, in the absence of Safety Criteria, when the vehicle was at a distance of 5 meters from the preceding vehicle, the DriverAgent initiated a lane change, leading to a collision with the front vehicle. However, when running a framework with Safety Criteria, the vehicle encountered a situation where the distance to the preceding vehicle was 7 meters. Based on the information provided by the safety criteria, it initiated a stop behavior, safely coming to a halt behind the lead vehicle.\nFor the Short-term Memory Module, collision rate data shows that the framework with Short-term Memory has a collision rate 82.96% lower than the one without it. We found that short-term memory plays an important role in enhancing the continuity of the agent\u2019s driving decisions. For example, in one experimental trial, the vehicle initially accelerated for a few steps, and when DriverAgent had to decide its next action, it had two options: to continue accelerating or to maintain its current speed. 
Considering its previous acceleration actions, it chose to maintain its current speed.\nFor the Long-term Guidelines Module, collision rate data shows that the framework with Long-term Guidelines has a collision rate 83.03% lower than the one without them. With long-term guidelines, the DriverAgent demonstrated an improvement in driving skills. For example, in one experimental trial, CoachAgent analyzed the initial driving behaviors and classified them as \u2018Bad\u2019. The reason for this assessment was the excessive frequency of stopping. The guideline \u2018Maintain a consistent and safe speed\u2019 was generated, which made the agent perform more human-like driving behavior."
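A small helper makes the relative collision-rate comparisons concrete. The per-meter rates below are the values from Table I; note that the percentages reported in the text are the paper's own aggregates and need not equal these pairwise per-table ratios, so no such equality is assumed here.

```python
def relative_reduction(rate_without, rate_with):
    """Percentage reduction of rate_with relative to rate_without."""
    return 100.0 * (rate_without - rate_with) / rate_without

# Collision rates per meter, from Table I
rate_none = 0.01453958      # w/o safety, memory, or guidelines
rate_safety = 0.00923361    # + safety criteria
rate_memory = 0.005046864   # + short-term memory
rate_full = 0.002757353     # full framework (+ long-term guidelines)

print(f"adding safety:     {relative_reduction(rate_none, rate_safety):.1f}%")
print(f"adding memory:     {relative_reduction(rate_safety, rate_memory):.1f}%")
print(f"adding guidelines: {relative_reduction(rate_memory, rate_full):.1f}%")
```

Each successive module roughly halves, or better, the residual per-meter collision rate of the previous configuration.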
112
+ },
113
+ {
114
+ "section_id": "4.2",
115
+ "parent_section_id": "4",
116
+ "section_name": "IV-B Human Evaluation Experiment",
117
+ "text": "A single-factor within-subjects design was used to investigate how people rate each framework used in the algorithm experiment (see in Section IV-A ###reference_###)."
118
+ },
119
+ {
120
+ "section_id": "4.2.1",
121
+ "parent_section_id": "4.2",
122
+ "section_name": "IV-B1 Experiment Design and Materials",
123
+ "text": "The independent variable was the framework, which included the \u201cw/o safety, memory, or guideline framework\u201d without safety criteria, short-term memory, or long-term guidelines; the \u201cw/o memory or guideline framework\u201d with safety criteria only; the \u201cw/o guideline framework\u201d with both safety criteria and short-term memory; and the \u201cfull framework\u201d with safety criteria, short-term memory, and long-term guidelines. Therefore, the guideline framework was the full framework of SurrealDriver. The video of each framework was created by recording experiments in the algorithm experiment (see in Section IV-A ###reference_###). The length of each video is around 30 seconds."
124
+ },
125
+ {
126
+ "section_id": "4.2.2",
127
+ "parent_section_id": "4.2",
128
+ "section_name": "IV-B2 Participants and Procedures",
129
+ "text": "We invited another 24 adult participants (aged 29.3\u00b14.9, male = 17, no overlap with participants in the Driving-thinking data collection experiment) with legal driving licenses to our human evaluation experiment. The experiment was conducted through online surveys. The survey started with demographic information questions including participants\u2019 age, gender, phone number, driving silence status, years of driving experience, and kilometers of driving per month. Then the survey guided participants to watch videos embedded in the survey. All participants watched the videos in random order. After watching each video, they rate items that measure human likeness by asking whether the driver demonstrated driving operations like those conducted by human drivers using a 5-point Likert scale where 1 represented \u201cnot at all\u201d and 5 presented \u201calmost all.\u201d"
130
+ },
131
+ {
132
+ "section_id": "4.2.3",
133
+ "parent_section_id": "4.2",
134
+ "section_name": "IV-B3 Results",
135
+ "text": "A one-way repeated measure ANOVAs were conducted to compare ratings among the four frameworks.\nFor human-likeness, the Huynh-Feldt correction was used because Mauchly\u2019s test of sphericity was significant with epsilon values larger than 0.75. We found significant differences among the four frameworks: , . The Bonferroni post hoc test revealed that the scores of the guideline framework were significantly higher than those of the w/o safety, memory, or guideline framework, ."
136
+ },
137
+ {
138
+ "section_id": "5",
139
+ "parent_section_id": null,
140
+ "section_name": "Conclusion",
141
+ "text": "In our research, we developed SurrealDriver, an LLM-based driver agent framework. The results of both algorithm experiments and human evaluation indicate that this LLM-based driver agent framework offers better performance than the basic approach for driver simulations, bringing driver agent behavior closer to human-like driving and, consequently, simulating more realistic traffic environments.\nBy integrating human Driving-thinking data with LLMs, agents can utilize natural language and examples to add rules more conveniently, allowing for easier rule adjustments.\nThus, we provide the agent with the driving-thinking data of real drivers\u2019 behaviors obtained through interviews conducted during real vehicle experiments. The agent uses its capabilities based on LLMs to autonomously assess the quality of its driving behavior compared to detailed driving behaviour reasoning. It then enhances its driving skills based on the behavior of expert drivers. This approach differs from traditional reinforcement learning and other training methods by enabling the agent to learn directly from driver transcripts, similar to humans, without the need for translation into code. Our research provided valuable insights for future human-aligned agent generation."
142
+ }
143
+ ],
144
+ "appendix": [],
145
+ "tables": {
146
+ "1": {
147
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.3.2\" style=\"font-size:90%;\">Collision Rate of Algorithm Experiment</span></figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.1.1\">Framework</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.1.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4.1.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.2.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.1.2.1.1.1\">Collision Rate by</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.2.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.1.2.1.2.1\">Distance (per meter)</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.1.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4.1.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.1.3.1.1.1\">Collision Rate by</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.3.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.1.3.1.2.1\">Time (per second)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4.2.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.1.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.2.1.1.1.1\">w/o safety criteria,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.1.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.2.1.1.2.1\">w/o short-term memory,</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.4.2.1.1.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.2.1.1.3.1\">w/o long-term guidelines</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.2\">0.01453958</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.3\">0.041315485</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4.3.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3.1.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.3.1.1.1.1\">with safety criteria,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3.1.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.3.1.1.2.1\">w/o short-term memory,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3.1.1.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.3.1.1.3.1\">w/o long-term guidelines</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.2\">0.00923361</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3\">0.02366976</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.4.4.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.1.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.4.1.1.1.1\">with safety criteria,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.1.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.4.1.1.2.1\">with short-term memory,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.1.1.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.4.4.1.1.3.1\">w/o long-term guidelines</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.2\">0.005046864</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T1.4.4.3\">0.009530682</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.1\">Full framework</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.2\">0.002757353</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.4.5.3\">0.005100011</td>\n</tr>\n</table>\n</figure>",
148
+ "capture": "TABLE I: Collision Rate of Algorithm Experiment"
149
+ }
150
+ },
151
+ "image_paths": {
152
+ "1": {
153
+ "figure_path": "2309.13193v2_figure_1.png",
154
+ "caption": "Figure 1: The framework of SurrealDriver.",
155
+ "url": "http://arxiv.org/html/2309.13193v2/extracted/5731060/fig/framework-new.jpg"
156
+ },
157
+ "2": {
158
+ "figure_path": "2309.13193v2_figure_2.png",
159
+ "caption": "Figure 2: The Details of DriverAgent.",
160
+ "url": "http://arxiv.org/html/2309.13193v2/extracted/5731060/fig/DriverAgent.png"
161
+ },
162
+ "3": {
163
+ "figure_path": "2309.13193v2_figure_3.png",
164
+ "caption": "Figure 3: The CoachAgent for human alignment.",
165
+ "url": "http://arxiv.org/html/2309.13193v2/extracted/5731060/fig/Coach.png"
166
+ }
167
+ },
168
+ "validation": true,
169
+ "references": [],
170
+ "url": "http://arxiv.org/html/2309.13193v2"
171
+ }
20240722/2309.15776v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2310.01967v5.json ADDED
@@ -0,0 +1,185 @@
1
+ {
2
+ "title": "Efficient Frontier Management for Collaborative Active SLAM",
3
+ "abstract": "In autonomous robotics, a critical challenge lies in developing robust solutions for Active Collaborative SLAM, wherein multiple robots collaboratively explore and map an unknown environment while intelligently coordinating their movements and sensor data acquisitions. In this article, we present an efficient centralized frontier sharing approach that maximizes exploration by taking into account information gain in the merged map, distance, and reward computation among frontier candidates and encourages the spread of agents into the environment. Eventually, our method efficiently spreads the robots for maximum exploration while keeping SLAM uncertainty low. Additionally, we also present two coordination approaches, synchronous and asynchronous to prioritize robot goal assignments by the central server. The proposed method is implemented in ROS and evaluated through simulation and experiments on publicly available datasets and similar methods, rendering promising results.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "Autonomous robotics has emerged as a transformative force in the exploration of complex and uncharted environments. From planetary exploration missions to disaster relief operations, the deployment of autonomous robots has demonstrated a revolutionary potential across a diverse range of applications. At the heart of this success lies the robot\u2019s ability to autonomously explore an environment while gathering data and constructing detailed maps of the surrounding environment in real-time\u2014a process known as Active Simultaneous Localization and Mapping (A-SLAM).\nMany research works have recently focused on Active Collaborative SLAM (AC-SLAM), which capitalizes on the power of multiple robots working in collaboration. The potential advantages are manifold, from accelerated mapping of terrains to resilient operation in challenging and dynamic scenarios. However, the utilization of multiple robots in collaborative SLAM is not without its challenges. Coordination, resource allocation, and sensor fusion become critical facets that demand careful consideration. Furthermore, the seamless integration of individual robot efforts into a coherent, unified map poses a non-trivial computational and algorithmic challenge.\nWe propose an implementation of an AC-SLAM algorithm and extend the work in [1 ###reference_b1###] to a multi-agent system, where multiple robots collaboratively map an environment. To achieve this aim, we propose an effective method to distribute robots in the environment hence favoring exploration and considering agent priorities using reward, distance-based, and merged map information gain metrics to optimize goal selection. We also propose two communication strategies namely synchronous and asynchronous respectively in a centralized approach with a central server, to establish effective communication and coordination of goals among the agents. 
We implement the proposed approach in ROS using the client-server model and provide extensive simulation results and real-world experiments.\nThe subsequent sections are organized as follows: Section II ###reference_### provides a review of related work, Section III ###reference_### explains the methodology of the proposed approach, Section IV ###reference_### shows the simulation and experimental results, and finally Section V ###reference_### concludes, summarizing our contributions and prospects for future work."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II RELATED WORK",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Active SLAM",
21
+ "text": "In A-SLAM, the robot can actively choose its actions, such as selecting views or locations to investigate, to reduce the uncertainty of its localization and map representation for environment exploration. Thus intelligently planning and executing robot operations to minimize uncertainty, with the objective to increase the efficiency and accuracy of SLAM as described in [2 ###reference_b2###], [3 ###reference_b3###].\nOnce the robot has established a map of its surroundings, it proceeds to locate frontier points. [4 ###reference_b4###] defines frontier as the boundary separating known map locations from unknown ones, as observed by the robot\u2019s sensors. After identifying these goal frontiers, the robot computes a cost or utility function. This function relies on the potential reward associated (optimality criterion) as debated in [5 ###reference_b5###] with selecting the optimal action from a set of all possible actions. Theory of Optimal Experimental Design (TOED) [6 ###reference_b6###] and concepts from Information Theory (IT) [7 ###reference_b7###] are used to provide optimality criterion for reward computation by the utility function. TOED is used to provide a scalar mapping of pose graph covariance matrix as described in [8 ###reference_b8###] and [2 ###reference_b2###] debating on its determinant and Eigenvalues (D-Optimally criterion) to guide the reward function to the goal location. While in IT joint Entropy is used. Interested readers are guided to [9 ###reference_b9###], [10 ###reference_b10###], [11 ###reference_b11###], [12 ###reference_b12###] for discussion of uncertainty quantification methods."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Frontiers-based Approaches",
27
+ "text": "Frontiers play a pivotal role in augmenting the precision of robot localization by enabling intelligent exploration and data acquisition strategies, effectively reducing uncertainty, and enhancing the map-building and localization processes. In [13 ###reference_b13###] an active exploration strategy is proposed where each frontier is weighted based on distance and surrounding unknown cells. While in [14 ###reference_b14###] each frontier is segmented, a trajectory is planned for each segment, and the trajectory with the highest map-segment covariance is selected from the global-cost map. The work presented in [15 ###reference_b15###] uses frontier exploration for autonomous exploration a utility function based on Shannon\u2019s and Renyi entropy is used for the computation of the utility of paths. The method described by [16 ###reference_b16###] uses a cost function that is somewhat similar to [17 ###reference_b17###], which takes into consideration the discovery of the target area of a robot by another member of the swarm and switches from a frontier to a distance-based navigation function to guide the robot toward the goal frontier.\nFrontiers-based coverage approaches in [18 ###reference_b18###] divide the perception task into a broad exploration layer and a detailed mapping layer, making use of heterogeneous robots to carry out the two tasks while solving a Fixed Start Open Traveling Salesman Problem (FSOTSP). Once a frontier has been identified, the robot can use path planning algorithms to reach it and maximize the exploration while minimizing its SLAM uncertainly."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Active Collaborative SLAM",
33
+ "text": "In AC-SLAM, the frontier detection and uncertainty quantification approaches described earlier are also applicable with additional constraints of managing computational and communication resources, and the ability to recover from network failure. The exchanged parameters are entropy [15 ###reference_b15###] [19 ###reference_b19###], Kullback\u2013Leibler Divergence (KLD) [20 ###reference_b20###, 21 ###reference_b21###], localization info [17 ###reference_b17###], visual features [22 ###reference_b22###], and frontier points. The authors of [23 ###reference_b23###, 24 ###reference_b24###], incorporate these multirobot constraints by adding the future robot paths while minimizing the optimal control function which takes into account the future steps and observations and minimizing the robot state and map uncertainty and adding them into the belief space (assumed to be Gaussian).\n[19 ###reference_b19###] presents a decentralized method for a long-planning horizon of actions for exploration and maintains estimation uncertainties at a certain threshold. The active path planner uses a modified version of RRT* and an action is chosen that best minimizes the entropy change per distance traveled. The main advantage of this approach is that it maintains good pose estimation and encourages loop-closure trajectories. An interesting solution is given by a similar approach to the method proposed by [21 ###reference_b21###] using a relative entropy (RE)-optimization method which integrates motion planning with robot localization and selects trajectories that minimize the localization error and associated uncertainty bound. A planning-cost function is computed, which includes the uncertainty in the state in addition to the state and control cost.\nWhen considering multi-robot systems, two primary aspects come into play. 
Firstly, teams can be either homogeneous, consisting of robots of the same type, or heterogeneous, [25 ###reference_b25###], with various robot types working together. Secondly, the system\u2019s architecture can be centralized, decentralized, or distributed, [9 ###reference_b9###]. Centralized control offers precise coordination but is susceptible to delays and single points of failure. On the other hand, decentralized systems distribute control for enhanced robustness and scalability while requiring effective coordination. Distributed systems empower individual robots for autonomous decision-making, providing fault tolerance and adaptability while demanding efficient communication protocols. Sometimes, systems can combine centralized and distributed elements [25 ###reference_b25###], sharing computational tasks among agents while central nodes handle decision-making."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III METHODOLOGY",
39
+ "text": "While many research works have focused on collaborative strategies for SLAM, or on single-robot active SLAM, only a few have dealt with AC-SLAM. These approaches share common limitations: (a) they have high computational costs associated with the number of frontiers processed; (b) they fail to encourage the spread of robots into the environment; (c) the uncertainty is quantified by a scalar mapping of the entire pose graph covariance matrix, which may become very large, especially in landmark-based SLAM methods, increasing the computational cost. Furthermore, they do not explicitly implement strategies for efficient management of frontiers to speed up map discovery and robot localization. In this work, we propose an AC-SLAM approach that overcomes these limitations. Our method outlines a strategy aimed at reducing the number of frontiers used for reward computation and at distributing robots within the environment, thereby facilitating exploration and mapping. We leverage a combination of reward metrics, distance-based evaluations, and merged map information gain to refine the goal selection.\nIn the context of single-robot A-SLAM, the work of [1 ###reference_b1###] uses Open Karto (http://wiki.ros.org/open_karto ###reference_iki.ros.org/open_karto###) in ROS Noetic (http://wiki.ros.org/noetic ###reference_iki.ros.org/noetic###) as the SLAM back-end and proposes a modern, computationally inexpensive D-optimality criterion for uncertainty quantification. This D-optimality criterion is computed from the number of spanning trees of the weighted graph Laplacian. The reward for each frontier candidate is weighted by this optimality criterion and is passed to the path planner to guide the robot in performing A-SLAM. For a set of frontiers, each robot computes a matrix of rewards as shown in Equation 1 ###reference_###.\nIn this article, we extend [1 ###reference_b1###] to multi-robot AC-SLAM and propose an efficient frontier sharing and exploration method. We propose two exploration approaches for goal assignment to robots, namely synchronous and asynchronous (Section III-C ###reference_###). Additionally, we present an efficient spread policy (Section III-B ###reference_###) to encourage exploration.\nWe developed our approach in ROS Noetic using the ROS actionlib (http://wiki.ros.org/actionlib ###reference_iki.ros.org/actionlib###) library. We add a central server that receives the list of local frontier points from each robot, computes a global list, and replies with the next target to be reached by the robot. As shown in Figure 1 ###reference_###, each robot detects local frontiers in its map using OpenCV- and RRT-based frontier detection from [1 ###reference_b1###] and passes them to the manager node, which acts as a communication gateway between the server and the robot. The merge points action server creates a unique list of frontier points to be used by all the agents, and the choose goals action server selects the best goal position for each agent based on the reward matrix (Equation 1 ###reference_###) and the spread criterion. Finally, the assigner node receives the chosen goal frontier and executes the path planning action using Dijkstra\u2019s algorithm and the DWA planner from the ROS navigation stack (http://wiki.ros.org/navigation ###reference_iki.ros.org/navigation###) as global and local planners, respectively. Figure 2 ###reference_### shows the resulting architecture of the proposed method in ROS with the corresponding namespaces. The orange and pink nodes represent the assigner node and the central server nodes from Figure 1 ###reference_###. The grey node is the map merging node, responsible for taking the local maps from each agent and computing a merged map (http://wiki.ros.org/multirobot_map_merge ###reference_###). Filtering with percentage and update rewards & goal selection are explained in Sections III-A ###reference_### and III-B ###reference_### respectively. Throughout this article, we use the words robots and agents interchangeably, and the same applies to frontiers and points, as they carry the same meaning in this context.\nIn the following sections, we elaborate on the frontier management policy (Section III-A ###reference_###) and the spreading policy used to speed up exploration (Section III-B ###reference_###). We also present two communication methodologies, i.e., synchronous and asynchronous (Section III-C ###reference_###), which deal with goal assignment to robots.\n###figure_1### ###figure_2###"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Frontiers Management",
+ "text": "Each agent identifies a list of frontier points that are merged on the server side. Depending on the extent of the map, the final global list may consist of several points, which can lead to high computational time on the server side. For this reason, a strategy to reduce the overall number of frontiers was developed. Also, since we are working with multiple robots, some of the points that are considered frontiers in a local map may be located in a region that is fully mapped when considering the global map. To solve both of the aforementioned problems, we decided to consider only those points that have a given percentage of unknown cells within a given radius, using a discretized circle and the global merged map.\nFor each detected frontier in the robot frame, we compute its homogeneous coordinates in the merged map as p^m = T^m_r p^r, where p^r denotes the homogeneous coordinates in the robot frame and T^m_r is the transformation matrix between the merged map and the robot frame.\nFigure 3 ###reference_### shows an example list of two points P and Q in a partially discovered global Occupancy Grid (O.G) map. For both points, a circle of known radius RAD is drawn, and we compute the percentage of unknown cells over the total inside the circle. Once the percentage is computed, the point is kept or discarded based on the threshold PER_UNK. In this specific case, opportunely setting PER_UNK, point P will be added to the global list, i.e., considered as a border point, whereas point Q will be discarded.\nThe usage of a discretized circle can lead to two types of error: an inclusion error, where the discretized circle includes some cells outside the circular boundary, leading to false positives; and an exclusion error, where the discretized circle excludes some cells within the actual circular boundary, leading to false negatives.\n###figure_3### The magnitude of the error depends on the resolution used for the O.G map: higher resolutions provide a more accurate approximation of the circle and consequently negligible errors. Unfortunately, using this approach to reduce the list of frontier points may not be sufficient to meet time constraints on the server side. Therefore, we devised an algorithm aimed at further reducing the number of points through the adjustment of the radius considered before. The algorithm checks whether the global number of points in the list is above a certain threshold MAX_PTS: in this case, it recomputes a new list of frontiers by increasing the radius RAD by 0.25; conversely, if the number of points is below a fixed threshold MIN_PTS, the list is reprocessed by decreasing PER_UNK, the a-priori fixed percentage of unknown cells within the given radius, by 10%. This strategy yields a sufficient number of frontier points in the list for the robots to compute their rewards."
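The filtering and threshold-adaptation steps described above can be sketched as follows, assuming the usual ROS occupancy-grid convention (-1 unknown, 0 free, 100 occupied); the grid indexing, function names, and the `adapt_thresholds` helper are illustrative assumptions, not the paper's code.

```python
import numpy as np

UNKNOWN = -1  # ROS occupancy-grid convention for unexplored cells

def unknown_percentage(grid, cx, cy, rad_cells):
    """Percentage of unknown cells inside a discretized circle of
    radius rad_cells (in cells) centred on (cx, cy)."""
    h, w = grid.shape
    total, unknown = 0, 0
    for y in range(max(0, cy - rad_cells), min(h, cy + rad_cells + 1)):
        for x in range(max(0, cx - rad_cells), min(w, cx + rad_cells + 1)):
            if (x - cx) ** 2 + (y - cy) ** 2 <= rad_cells ** 2:
                total += 1
                unknown += int(grid[y, x] == UNKNOWN)
    return 100.0 * unknown / total

def filter_frontiers(grid, frontiers, rad_cells, per_unk):
    """Keep only frontiers whose neighbourhood is at least per_unk % unknown."""
    return [(x, y) for x, y in frontiers
            if unknown_percentage(grid, x, y, rad_cells) >= per_unk]

def adapt_thresholds(n_points, rad, per_unk,
                     max_pts=10, min_pts=0, rad_step=0.25, unk_step=10):
    """Re-parameterization mirroring the strategy above: too many
    points -> grow RAD; too few -> relax PER_UNK by 10 points."""
    if n_points > max_pts:
        rad += rad_step
    elif n_points < min_pts:
        per_unk -= unk_step
    return rad, per_unk
```

A fully unknown neighbourhood keeps the point; a fully mapped one (like point Q in Figure 3) discards it.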
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Spread Policy",
+ "text": "To choose navigation targets that allow the agents to explore the map efficiently, a specific spread policy has been implemented. The server keeps track of the already assigned goals. When a target goal for one agent is selected, the server updates the reward for all other agents from the old reward by applying a subtractive factor, as shown in Equation 2 ###reference_###.\nThe numerator in Equation 3 ###reference_### is set at run-time, since it depends on the maximum reward of each agent and on the number of targets already assigned. The denominator represents the Euclidean distance computed between the last chosen goal and the frontier points in the matrix. In other words, when the server assigns a target to a robot, it reprocesses all the reward matrices of the other agents, updating the rewards with a subtractive factor that strictly depends on the position of the target just assigned.\nSince the subtractive factor is inversely dependent on the distance, the closer the points are to already chosen goals, the less likely they are to be chosen as the next goals, thus achieving the task of spreading the goals in the environment. Normalizing in Equation 4 ###reference_### with the size of the rewards in each matrix allows for a subtractive factor that is scaled with respect to the reward matrix of each agent, thus taking into account the number of already selected points and distributing the reward \u201cbudget\u201d among them. By dividing the maximum reward by the total number of selected points, when the number of targets already explored becomes significant, each point only receives a smaller portion of the total reward, resulting in a more limited effect of the subtractive factor.\nIn the case of the asynchronous approach discussed in Section III-C ###reference_###, the priority assigned to robots can lead to one or more low-priority robots being stuck because they are always preempted by higher-priority agents. To avoid this issue, the server also keeps track of the number of unserved requests of each agent. Once this number exceeds a predefined threshold GOAL_SKIP_WAIT, the corresponding agent is assigned the highest priority. This approach avoids having robots stuck and distributes goals more uniformly."
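Since the symbols of Equations 2-4 were lost in extraction, the following is a hedged reconstruction of the update rule from the text alone: the penalty's numerator is the agent's maximum reward divided by the number of goals already assigned, and its denominator is the Euclidean distance to the last assigned goal. All names are assumptions, not the paper's notation.

```python
import numpy as np

def apply_spread_penalty(rewards, frontiers, assigned_goals):
    """Subtract F = (max reward / #assigned goals) / distance from every
    frontier reward, penalising frontiers near the last assigned goal."""
    if not assigned_goals:
        return rewards  # nothing assigned yet, rewards unchanged
    rewards = np.asarray(rewards, dtype=float)
    pts = np.asarray(frontiers, dtype=float)
    last_goal = np.asarray(assigned_goals[-1], dtype=float)
    numerator = rewards.max() / len(assigned_goals)
    dist = np.linalg.norm(pts - last_goal, axis=1)
    # Guard against a frontier coinciding with the goal (dist = 0).
    return rewards - numerator / np.maximum(dist, 1e-6)
```

Frontiers close to an already assigned goal lose more reward, which is exactly what pushes the agents apart.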
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Synchronous and Asynchronous Approach",
+ "text": "The communication between the agents and the server has been implemented with two policies: synchronous and asynchronous. In the synchronous approach, during the execution of the program, each agent receives the same number of goals. Moreover, each agent waits for all the other robots in the system to reach their goals before starting a new goal procedure. In this case, the central server (Figure 1 ###reference_###) has to manage the different agents at the same time and, during the reward computation, the server is given the reward matrices (Equation 1 ###reference_###), one for each robot. A priority among agents has been set so that goal assignment respects this sequence: given two agents i and j with i < j, agent i is assigned a goal before agent j.\nIn the asynchronous approach, each agent is assigned in sequence as many goals as it can reach, without waiting for the other agents. In this case, the priority is used to choose the winning agent when multiple agents make a request at the same time. Since with this policy a low-priority agent can be stuck for a long time, a counter keeps track of this prioritization among the robots and, when a low-priority agent has not been considered for a long time, automatically assigns it the highest priority so that the server satisfies its request as soon as possible. Since the server handles one request at a time and chooses a goal for one robot at a time, it stores each assigned goal; this prevents the server from choosing an already chosen goal and allows it to update the rewards to spread the agents, taking into account all the goals set so far."
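The arbitration logic just described (fixed priority plus a starvation counter) can be sketched as follows; the value of GOAL_SKIP_WAIT and all class and method names are illustrative assumptions rather than the paper's implementation.

```python
GOAL_SKIP_WAIT = 3  # illustrative threshold, not the paper's value

class GoalServer:
    """Minimal sketch of the asynchronous arbitration policy:
    lower agent index = higher priority, with starvation protection."""

    def __init__(self, n_agents):
        self.skipped = [0] * n_agents  # consecutive unserved requests

    def pick_winner(self, requesting):
        # Agents skipped too many times jump to the front of the queue.
        starved = [a for a in requesting
                   if self.skipped[a] >= GOAL_SKIP_WAIT]
        winner = min(starved if starved else requesting)
        for a in requesting:
            if a != winner:
                self.skipped[a] += 1
        self.skipped[winner] = 0
        return winner
```

With two agents repeatedly requesting at once, the higher-priority one wins until the other's skip counter reaches the threshold, at which point the starved agent is served.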
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV RESULTS",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Simulation Results",
+ "text": "The simulations (YouTube link: https://youtu.be/MsZqoaEA0gY) were carried out on ROS Noetic, Gazebo, and Ubuntu 20.04 on an Intel Xeon\u00ae W-2235 CPU at 3.80GHz x 12, with 64 GB RAM and an Nvidia Quadro RTX 4000 GPU. As described earlier, we extended the approach of [1 ###reference_b1###] to multiple robots and implemented the proposed approach of Section III ###reference_### using Open Karto as the SLAM back-end, RosBots (https://husarion.com/ ###reference_husarion.com/###) equipped with Lidar, and planners from the ROS navigation stack.\nWe used open-source maps of a modified Willow Garage (W.G) environment from the Gazebo simulator and the AWS hospital environment (HOS, https://github.com/aws-robotics ###reference_github.com/aws-robotics###), measuring 2072 and 1243 respectively. Figure 4 ###reference_### shows the Gazebo images and the resulting O.G map of HOS, indicating the initial and final poses and the resulting pose graphs. The ground-truth O.G maps were generated using the gazebo_2Dmap_plugin (https://github.com/marinaKollmitz/gazebo ###reference_###), which uses wavefront exploration.\nWe compared our proposed approach against: 1) the Frontier Detection based Exploration (Frontier, http://wiki.ros.org/frontier_exploration ###reference_###) of [4 ###reference_b4###], which uses a greedy frontier exploration strategy without any SLAM uncertainty quantification; and 2) [1 ###reference_b1###] converted into a multi-robot system, namely MAGS. For environment exploration, we evaluate the metrics of percentage of area covered, goal point reduction, percentage of unknown cells (PER_UNK), and radius values (RAD). Regarding map quality, we compared the Structural Similarity Index Measure (SSIM), Root Mean Square Error (RMSE), and Alignment Error (AE) with reference to the ground-truth maps. We conducted 15 simulations of 20 minutes each for both W.G and HOS using the Frontier, MAGS, and our methods, rendering a total simulation time of 15 hours. PER_UNK, RAD, MIN_PTS, and MAX_PTS were initialized to 60 %, 1, 0, and 10 respectively.\n###figure_4### ###figure_5### ###figure_6### Figure 5 ###reference_### shows the percentage of the maps discovered using Our approach (blue), MAGS (red), and Frontier (green), where R is the number of robots and S & A denote the synchronous and asynchronous approaches. It can be observed that Our approach covers on average 10% and 7.5% more area in W.G and HOS compared to the MAGS and Frontier approaches, and that the agents controlled asynchronously were able to discover a higher percentage of the map than those controlled synchronously. Furthermore, we observe that Frontier outperforms MAGS, as it performs frontier exploration without any uncertainty quantification, resulting in more exploration.\nFigure 6 ###reference_### shows the average rate of exploration for W.G, along with the standard deviation, using 3 robots. We can conclude that Our approach manages to explore 6.7% and 13% more area than Frontier and MAGS.\n###figure_7### ###figure_8### ###figure_9### From Figure 7 ###reference_### we can deduce that in both W.G and HOS, with the synchronous and asynchronous approaches, the number of points is reduced significantly: by 80%, 78%, 80%, and 65% for the respective cases in W.G, and by 85%, 84%, 72%, and 83% for HOS. This reduces the computational cost required by the reward processing on the server side, thanks to the adoption of the frontier management strategies of Section III-A ###reference_### to limit the number of global frontiers. Furthermore, the average number of detected points is lower in the synchronous case, as the agents have to wait for each other, lowering the overall number of points but at the cost of less exploration, as evident from Figure 5 ###reference_###.\nTable I ###reference_### and Table II ###reference_### show the usage of PER_UNK, i.e., the percentage of unknown cells to be considered within the radius for computing the information gain of a frontier candidate, and of RAD, i.e., the radius values changed when the list of points is recomputed, using 3 robots with the asynchronous approach. We can observe that in W.G, PER_UNK often drops to 40%, indicating re-computation of the list, because W.G has fewer obstacles than HOS, resulting in a lower frontier neighbour percentage. Furthermore, we observe that RAD and PER_UNK mostly remain at 1.00 and 40% respectively, indicating less re-computation of the list on the server and consequently a lower computational cost.\nRegarding the visual analysis of the maps and the map quality metrics, the results on average appear promising, as shown in Table III ###reference_### using 3 robots. In almost all cases, Our method rendered reduced RMSE and AE and increased SSIM compared to the MAGS and Frontier methods. We further conclude that the Frontier method, when compared to MAGS, explores the environment more, as shown in Figure 5 ###reference_###, but has higher RMSE and AE and lower SSIM.\n###figure_10### ###figure_11### ###table_1###"
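The map quality metrics used above can be computed as sketched below. Note that this SSIM is a simplified single-window variant (standard implementations, such as scikit-image's structural_similarity, average the statistic over local windows), so it is an illustrative sketch rather than the exact metric used in the paper.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two grayscale maps."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a, b, dynamic_range=255.0):
    """SSIM computed over a single global window."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical maps give an SSIM of 1 and an RMSE of 0; lower RMSE/AE and higher SSIM against the ground-truth map indicate better map quality.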
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Experimental Results",
+ "text": "Experiments in a real environment were performed using two ROSBot 2R robots (https://husarion.com/manuals/rosbot/ ###reference_###) with RPLidar A2 (Figure 8a ###reference_sf1###), running ROS on Ubuntu 20.04.6 (LTS). The robots are equipped with an Intel Core i7\u00ae CPU, 32GB of system RAM, and an NVIDIA RTX 1000 GPU. The environment consists of a room and two corridors measuring 81 in total, as shown in Figure 8b ###reference_sf2###. Figure 9 ###reference_### shows the resulting O.G map along with the Karto SLAM pose graphs using two robots. From Figure 9a ###reference_sf1### we can observe that, using Our approach, each agent effectively spreads out and explores the environment, with a total explored area of 70.31 compared to only 55.80 for MAGS (Figure 9b ###reference_sf2###). We performed four experiments, two using Our method and two with MAGS, each with an experimental time of 20 minutes.\n###figure_12### ###figure_13### Figure 10 ###reference_### shows the average rate of the percentage of the global map discovered by the robots with the Our (2 experiments) and MAGS (2 experiments) methods. It is evident that using Our approach we manage to cover 26% more map area than MAGS.\n###figure_14### ###figure_15### ###figure_16### The box plot in Figure 11 ###reference_### shows the reduction in the number of processed points used for exploration with the asynchronous and synchronous methods of Our approach. We can observe that the average number of points reduces from 6 to 5 and from 3 to 2 points for the two methods, respectively. We also observe that fewer points are detected in the synchronous approach because the robots wait for each other to reach their goals before processing new points; thus we observe more points in the asynchronous approach. We observe far fewer points overall compared to the simulated environments because of the large difference in environment size.\n###figure_17###"
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "CONCLUSIONS",
+ "text": "We proposed a method for the coordination of multiple robots performing AC-SLAM in a collaborative exploration domain. We proposed a strategy to efficiently manage the global frontiers, reducing the computational cost and spreading the robots in the environment. Two different coordination approaches were presented for efficient exploration of the environment. We presented an extensive simulation analysis on publicly available datasets, compared our approach to similar methods using ROS, and performed experiments to validate the efficiency and usefulness of our approach in a real-world scenario. Future work could explore strategies to implement the proposed architecture in a decentralized way, thus dividing the computational load among all the agents, and to use visual sensors for extracting features as potential frontier candidates."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell ltx_flex_size_2\">\n<figure class=\"ltx_figure ltx_figure_panel ltx_minipage ltx_align_bottom\" id=\"S4.T2.2\" style=\"width:119.5pt;\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_figure\">TABLE I: </span><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.2.4.1\">PER_UNK</span> usage</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.3.1.1.1\">Env.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.3.1.2\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold\" id=\"S4.T2.2.2.3.1.2.1\">PER_UNK</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.3.1.3.1\">Used</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.4.2.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.2.2.4.2.1.1\">W.G</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.4.2.2\">60 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.4.2.3\">34.5 %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.2.5.3.1\">50 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.2.5.3.2\">1.4 %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.1.1\">\n 40 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T2.1.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.2.1\">64.0</span> %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.6.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.2.2.6.4.1.1\">HOS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.6.4.2\">60 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.6.4.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.6.4.3.1\">67.3</span> %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.7.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.2.7.5.1\">50 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.2.7.5.2\">4.3 %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.2.2.2.1\">\n 40 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.2.2.2.2\">28.2 %</td>\n</tr>\n</tbody>\n</table>\n</figure>\n</div>\n<div class=\"ltx_flex_cell ltx_flex_size_2\">\n<figure class=\"ltx_figure ltx_figure_panel ltx_minipage ltx_align_bottom\" id=\"S4.T2.4\" style=\"width:119.5pt;\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_figure\">TABLE II: </span><span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.4.4.1\">RAD</span> usage</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.2.3.1.1.1\">Env</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.3.1.2\"><span class=\"ltx_text ltx_font_typewriter 
ltx_font_bold\" id=\"S4.T2.4.2.3.1.2.1\">RAD</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.2.3.1.3.1\">Used</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2.4.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.4.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.4.2.4.1.1.1\">W.G</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.4.1.2\">1.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.4.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.2.4.1.3.1\">87.0</span> %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2.5.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.2.5.2.1\">1.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.2.5.2.2\">1.8%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.3.1.1.1\">\n 1.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.3.1.1.2\">9.7%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2.6.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.6.3.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.4.2.6.3.1.1\">HOS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.6.3.2\">1.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.4.2.6.3.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.2.6.3.3.1\">76.2</span> %</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2.7.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.2.7.4.1\">1.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.2.7.4.2\">5.1 %</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T2.4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.4.2.2.1\">\n 1.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.4.2.2.2\">8.5 %</td>\n</tr>\n</tbody>\n</table>\n</figure>\n</div>\n</div>\n</figure>",
+ "capture": "TABLE I: PER_UNK usage"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>MAP QUALITY METRICES.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.1\">Env</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.2.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.3.1\">SSIM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.4.1\">RMSE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.5.1\">AE</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.1\"><span class=\"ltx_text\" id=\"S4.T3.1.2.2.1.1\">W.G</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.2\">Our (Asynch)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.3\">0.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.4\">5.43</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.5\">25.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.1\"><span class=\"ltx_text\" 
id=\"S4.T3.1.3.3.1.1\">W.G</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.2\">MAGS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.3.3.3.1\">0.86</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.4\">6.34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.5\">28.39</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.1\"><span class=\"ltx_text\" id=\"S4.T3.1.4.4.1.1\">W.G</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.2\">Frontier</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.1.4.4.3.1\">0.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.4.4.4.1\">10.04</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.4.4.5.1\">40.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.1\"><span class=\"ltx_text\" id=\"S4.T3.1.5.5.1.1\">HOS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.2\">Our (Asynch)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.5.5.3.1\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.4\">4.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.5\">25.39</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T3.1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.1\"><span class=\"ltx_text\" id=\"S4.T3.1.6.6.1.1\">HOS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.2\">MAGS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.3\">0.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.4\">6.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.5\">29.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.1\"><span class=\"ltx_text\" id=\"S4.T3.1.7.7.1.1\">HOS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.2\">Frontier</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.3\">0.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.7.4.1\">12.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.7.5.1\">42.89</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE III: MAP QUALITY METRICS."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2310.01967v5_figure_1.png",
+ "caption": "Figure 1: Central server (red), and local nodes (yellow, green) communication.",
+ "url": "http://arxiv.org/html/2310.01967v5/x1.png"
+ },
+ "2": {
+ "figure_path": "2310.01967v5_figure_2.png",
+ "caption": "Figure 2: The architecture of resultant system.",
+ "url": "http://arxiv.org/html/2310.01967v5/x2.png"
+ },
+ "3": {
+ "figure_path": "2310.01967v5_figure_3.png",
+ "caption": "Figure 3: O.G map representing two points and their radius.",
+ "url": "http://arxiv.org/html/2310.01967v5/extracted/5747126/images/mapped_area_circle.png"
+ },
+ "4(a)": {
+ "figure_path": "2310.01967v5_figure_4(a).png",
+ "caption": "(a) W.G\nFigure 4: Environments used and the resulting O.G map showing the initial (red) and final positions (green) of robots.",
+ "url": "http://arxiv.org/html/2310.01967v5/extracted/5747126/images/willow_SS.png"
+ },
+ "4(b)": {
+ "figure_path": "2310.01967v5_figure_4(b).png",
+ "caption": "(b) HOS\nFigure 4: Environments used and the resulting O.G map showing the initial (red) and final positions (green) of robots.",
+ "url": "http://arxiv.org/html/2310.01967v5/extracted/5747126/images/hospital_orig_pic.png"
+ },
+ "4(c)": {
+ "figure_path": "2310.01967v5_figure_4(c).png",
+ "caption": "(c) Resulting O.G map of AWS Hospital environment.\nFigure 4: Environments used and the resulting O.G map showing the initial (red) and final positions (green) of robots.",
+ "url": "http://arxiv.org/html/2310.01967v5/x3.png"
+ },
+ "5(a)": {
+ "figure_path": "2310.01967v5_figure_5(a).png",
+ "caption": "(a) W.G\nFigure 5: % of map discovered in W.G and HOS Environments.",
+ "url": "http://arxiv.org/html/2310.01967v5/x4.png"
+ },
+ "5(b)": {
+ "figure_path": "2310.01967v5_figure_5(b).png",
133
+ "caption": "(b) HOS\nFigure 5: % of map discovered in W.G and HOS Environments.",
134
+ "url": "http://arxiv.org/html/2310.01967v5/x5.png"
135
+ },
136
+ "6": {
137
+ "figure_path": "2310.01967v5_figure_6.png",
138
+ "caption": "Figure 6: % of map explored using Our, MAGS and Frontier approaches using 3 robots on W.G environment.",
139
+ "url": "http://arxiv.org/html/2310.01967v5/x6.png"
140
+ },
141
+ "7(a)": {
142
+ "figure_path": "2310.01967v5_figure_7(a).png",
143
+ "caption": "(a) W.G\nFigure 7: Number of Points vs the applied approach.",
144
+ "url": "http://arxiv.org/html/2310.01967v5/x7.png"
145
+ },
146
+ "7(b)": {
147
+ "figure_path": "2310.01967v5_figure_7(b).png",
148
+ "caption": "(b) HOS\nFigure 7: Number of Points vs the applied approach.",
149
+ "url": "http://arxiv.org/html/2310.01967v5/x8.png"
150
+ },
151
+ "8(a)": {
152
+ "figure_path": "2310.01967v5_figure_8(a).png",
153
+ "caption": "(a) RosBot 2\nFigure 8: Robot and experimental environment used.",
154
+ "url": "http://arxiv.org/html/2310.01967v5/extracted/5747126/images/ROSbot2R.png"
155
+ },
156
+ "8(b)": {
157
+ "figure_path": "2310.01967v5_figure_8(b).png",
158
+ "caption": "(b) Environment\nFigure 8: Robot and experimental environment used.",
159
+ "url": "http://arxiv.org/html/2310.01967v5/extracted/5747126/images/exp_lab_2robots.jpg"
160
+ },
161
+ "9(a)": {
162
+ "figure_path": "2310.01967v5_figure_9(a).png",
163
+ "caption": "(a) Our\nFigure 9: Final O.G map using Our and MAGS methods indicating initial (red) and final positions of agents (green)",
164
+ "url": "http://arxiv.org/html/2310.01967v5/x9.png"
165
+ },
166
+ "9(b)": {
167
+ "figure_path": "2310.01967v5_figure_9(b).png",
168
+ "caption": "(b) MAGS\nFigure 9: Final O.G map using Our and MAGS methods indicating initial (red) and final positions of agents (green)",
169
+ "url": "http://arxiv.org/html/2310.01967v5/x10.png"
170
+ },
171
+ "10": {
172
+ "figure_path": "2310.01967v5_figure_10.png",
173
+ "caption": "Figure 10: % of explored map evolution in experiments",
174
+ "url": "http://arxiv.org/html/2310.01967v5/x11.png"
175
+ },
176
+ "11": {
177
+ "figure_path": "2310.01967v5_figure_11.png",
178
+ "caption": "Figure 11: Points reduction in experiments",
179
+ "url": "http://arxiv.org/html/2310.01967v5/x12.png"
180
+ }
181
+ },
182
+ "validation": true,
183
+ "references": [],
184
+ "url": "http://arxiv.org/html/2310.01967v5"
185
+ }
20240722/2310.09450v3.json ADDED
@@ -0,0 +1,511 @@
+ {
+ "title": "Non-intrusive Enforcement of Decentralized Stability Protocol for IBRs in AC Microgrids",
+ "abstract": "This paper presents a decentralized, passivity-based stability protocol for inverter-based resources (IBRs) in AC microgrids and a non-intrusive approach that enforces the protocol. By \u201cnon-intrusive\u201d we mean that the approach does not require reprogramming IBRs\u2019 controllers to enforce the stability protocol. Implementing the approach requires only minimal information about IBR dynamics, and sharing such information with non-IBR-manufacturer parties raises no intellectual-property concerns.\nEnforcing the protocol allows for plug-and-play operation of IBRs while maintaining microgrid stability.\nThe proposed method is tested by simulating a grid-connected, grid-following IBR and two networked microgrids with lines and grid-forming IBRs modeled in the electromagnetic transient (EMT) time scale. Simulations show that oscillations with increasing amplitudes can occur when two stable AC microgrids are networked. Simulations also suggest that the proposed approach can mitigate such a system-level symptom.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "As many countries are decarbonizing their energy infrastructure, a growing number of Inverter-based Resources (IBRs), e.g., energy storage, rooftop solar panels, and electric vehicle charging stations, are emerging in power distribution grids [1 ###reference_b1###]. However, integrating large-scale IBRs will pose unprecedented challenges to distribution grid management, since today\u2019s distribution grids are not designed for hosting tens of thousands of IBRs, and distribution system operators (DSOs) generally cannot directly control IBRs at grid edges. With the concept of microgrids [2 ###reference_b2###], a large amount of IBRs in a distribution grid can be managed via a \u201cdivide-and-conquer\u201d strategy: the distribution grid can be divided into several networked microgrids, and each microgrid manages its own generation and loads [3 ###reference_b3###]. With such an architecture, the management complexity for DSOs is significantly reduced, as the DSOs only need to coordinate several microgrids, instead of controlling massive IBRs in a centralized manner [4 ###reference_b4###]. A microgrid has three operational modes: a grid-connected mode [2 ###reference_b2###], an islanded mode [2 ###reference_b2###], and a hybrid mode [5 ###reference_b5###]. Under normal conditions, a microgrid can enter the grid-connected mode where the loads in the microgrid can be balanced by the energy from both local generation and the host distribution system. When the host distribution grid fails to deliver energy, a microgrid can either balance its load autonomously by its local generation (i.e., the islanded mode), or network with its neighboring microgrids and balance loads collaboratively (i.e., the hybrid mode) [5 ###reference_b5###].\nOne key challenge of operating microgrids in the islanded or hybrid mode is how to ensure the microgrid stability [6 ###reference_b6###]. 
Compared with large-scale transmission systems whose dynamics are governed by thousands of giant rotating machines, the microgrids powered by IBRs are more sensitive to disturbances that include connection or disconnection of IBRs, renewable fluctuations and line faults, due to lack of physical inertia in generation resources and the small scale of the microgrids. As a result, the disturbances may compromise the quality of electricity services by incurring sustained oscillations or even instability. Exacerbating the challenge, today\u2019s IBR manufacturers tune their IBRs at a device level without much consideration of system-level performance of networked IBRs. However, the non-manufacturer parties (NMPs), e.g., DSOs, microgrid operators (GOs), and IBR owners, who concern security of networked IBRs, typically do not know the detailed control schemes of IBRs and cannot reprogram the IBRs\u2019 controllers. This is because the manufacturers are reluctant to share their detailed control schemes with the NMPs due to concerns on intellectual property (IP) privacy. Without the consideration of the system-level performance, IBRs might fight with other, causing undesirable oscillations or instability. Such incidences occurred in transmission systems, e.g., the sub-synchronous control interactions (SSCI) in Texas [7 ###reference_b7###] and oscillations in High Voltage DC systems that contain multiple converters [8 ###reference_b8###]. In the context of microgrids, it is possible that networking two stable microgrids leads to oscillations with increasing amplitudes (shall be shown in Section V ###reference_###). Therefore, as more and more IBRs are emerging at grid edges, it is imperative to develop technologies that certify system-level stability of networked IBRs.\nExisting approaches to stability certification for electrical energy systems can be classified into three categories: centralized, impedance-based, and passivity-based approaches. 
In the centralized approaches, system operators (SOs) are assumed to be able to collect dynamical models of key components in the systems, and they assess the system stability by performing time-domain simulations [9 ###reference_b9###], by conducting small-signal analysis [10 ###reference_b10###], or by searching for system behavior-summary functions, e.g., the Lyapunov functions [4 ###reference_b4###, 11 ###reference_b11###], and energy functions [12 ###reference_b12###, 13 ###reference_b13###]. The drawbacks of these centralized approaches are listed as follows: 1) IBR manufacturers can only share a \u201cblack-box\u201d model with SOs for simulation purposes, due to concerns on IP privacy. Consequently, detailed IBRs\u2019 models are not available for performing analytical stability assessment [4 ###reference_b4###, 13 ###reference_b13###, 12 ###reference_b12###, 11 ###reference_b11###]. 2) Some approaches [9 ###reference_b9###, 14 ###reference_b14###] are computationally intractable when addressing high-order systems. For an IBR-rich microgrid, wide-range behaviors of interested lie in the EMT time scale, and they are described by high-order dynamics.\n3) Most approaches [13 ###reference_b13###, 12 ###reference_b12###, 11 ###reference_b11###] cannot provide SOs with actionable guidance of enforcing system stability. Beyond stability analysis, controls enforcing stability are much needed.\nThe impedance-based and passivity-based approaches address the drawbacks of the centralized approaches by developing device-level stability protocol for IBRs. The device-level stability protocol entails conditions that each IBR needs to satisfy locally to ensure the stability of its host system. One way to design such protocol is by checking if the impedance ratio satisfies the Nyquist stability criterion, where the impedance ratio is defined by the IBR output impedance and the equivalent impedance of the host grid. 
For example, reference [15 ###reference_b15###] proposes impedance specifications for stable DC resources and a data-driven way to measure the specifications. Reference [16 ###reference_b16###] reviews impedance specifications for stability assessment of AC generation resources. Reference [17 ###reference_b17###] points out that different impedance-based criteria should be used for assessing stability of voltage-source systems and current-source systems. Reference [18 ###reference_b18###] generalizes the impedance-based stability criteria from a single-converter-infinity-bus system to a network with multiple converters. Based on the impedance-based analysis, reference [19 ###reference_b19###] proposes a participation function that aims to pinpoint root causes of instability. Reference [20 ###reference_b20###] performs the impedance-stability assessment with black-box converter models. In addition to stability assessment, there is a large body of literature that enforces the impedance-based stability protocol by tuning IBR control parameters [21 ###reference_b21###, 22 ###reference_b22###], and adding active dampers [23 ###reference_b23###]. The passivity theory is another common tool for designing the device-level protocol. For example, reference [24 ###reference_b24###] introduces the concept of self-disciplined stabilization in the context of DC microgrids. The stability protocol for each IBR is the passivity of the single-input-single-output (SISO) transfer function of the IBR.\nReference [25 ###reference_b25###] proposes the distributed, passivity-like stability protocol based on low-order nodal dynamics and power flow equations.\nReference [26 ###reference_b26###] develops the stability protocol for conventional generators in transmission systems based on the passivity shortage framework. 
Reference [27 ###reference_b27###] learns a neural network-structured storage function for each IBR and leverages the storage function as stability protocol to certify microgrid stability. Reference [28 ###reference_b28###] presents the passivity-based stability protocol for IBRs to assess small-signal stability of both fast and slow behaviors of IBR interconnections.\nUnfortunately, the existing impedance/passivity-based approaches have the following limitations:\n1) In references [21 ###reference_b21###, 22 ###reference_b22###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###] and [28 ###reference_b28###], the protocol is enforced in an intrusive manner, i.e., one has to reprogram the controllers of generation resources to enforce the protocol. This is undesirable for both NMPs and IBR manufacturers. The IBR controllers are typically packaged into the inverters and cannot be reprogrammed by the NMPs, for protecting IP privacy and reducing IBRs\u2019 vulnerability to cyberattacks. The control schemes of commercial inverters are typically deliberately designed and extensively tested by IBR manufacturers for achieving certain functions, such as voltage and current regulation. Hence, the IBR manufacturers might be reluctant to completely abandon or radically change their mature control schemes for enforcing the stability protocol [28 ###reference_b28###]. Besides, since many IBRs have been installed in the grid, it is costly or even infeasible to reprogram the controllers of these existing IBRs.\n2) The complexity of dynamics of IBR-dominated, AC microgrids is ignored by [24 ###reference_b24###, 25 ###reference_b25###, 27 ###reference_b27###, 26 ###reference_b26###].\nFor example, reference [24 ###reference_b24###] only considers the SISO dynamics of converter interfaces in DC microgrids, while the IBR\u2019s dynamics in an AC microgrid can have multiple inputs and outputs. 
References [25 ###reference_b25###, 27 ###reference_b27###, 26 ###reference_b26###] only address the slow dynamics of generation units but ignores the interactions among network dynamics and fast IBR controllers in the EMT time scale. Modelling full-order network dynamics is necessary in an IBR-rich microgrid, as some inverters may have high-frequency dynamics [29 ###reference_b29###].\n3) References [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 27 ###reference_b27###] and [28 ###reference_b28###] only address stability assessment in a distributed manner without providing guidance of how to stabilize an unstable microgrid. 4) Some impedance-based approaches [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###] simplify the dynamics of the host systems of an IBR as an ideal voltage source in series with impedance. Such a simplification is valid when the IBR connects to a strong grid (e.g., a large-scale transmission/distribution system). However, when an IBR connects to a microgrid, the complexity of dynamics of its host microgrid cannot be ignored. 5) While developed based on the \u201cblack-box\u201d IBR models, some impedance-based approaches [23 ###reference_b23###] require topology information of the host grid including line parameters and network connectivity. However, since the topology information can change dynamically due to potentially open boundaries among microgrids, stability assessment results and protocol enforcement performance may change accordingly, making it challenging to achieve the plug-and-play operation of IBRs.\nThis paper introduces a first-of-its-kind, non-intrusive, and decentralized approach to enforcing stability protocol of IBRs in AC microgrids. 
In this paper, we address both aspects of identifying stability protocol and designing a non-intrusive approach to enforcing the protocol, by leveraging the passivity theory and by designing a novel power-electronic (PE) interface. These two aspects together contribute the paper\u2019s novelty that allows NMPs to enforce stability of the AC microgrid in a non-intrusive and decentralized fashion. The contribution of this paper is\nsummarized as follows:\nThe approach enforces the stability protocol in a non-intrusive, and decentralized fashion. The \u201cnon-intrusive, and decentralized\u201d is in the sense that the design and operation of the PE interface does not require reprogramming IBR controllers and the topology information. This allows the NMPs to enforce the protocol that enables plug-and-play operation of IBRs. The non-intrusive feature cannot be achieved by the methods in [21 ###reference_b21###, 22 ###reference_b22###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 28 ###reference_b28###, 27 ###reference_b27###].\nDesigning the PE interface only needs a scalar that encapsulates input-output dynamics of an IBR, and does not require the detailed control schemes of the IBR or network topology information. Exposing such a scalar to NMPs will not cause any IP concerns for IBR manufacturers, as the detailed IBR control schemes cannot be inferred only based on the scalar. 
Compared with our approach, some existing methods require either the detailed IBR models [21 ###reference_b21###, 22 ###reference_b22###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###] or the topology information [23 ###reference_b23###] to enforce stability.\nThe proposed approach can address the high-order dynamics due to the tight interaction among voltage and current controllers, and network dynamics in the EMT time scale, whereas such complexity of dynamics of the IBR network is ignored by some existing methods [24 ###reference_b24###, 25 ###reference_b25###, 27 ###reference_b27###, 26 ###reference_b26###].\nThe rest of this paper is organized as follows: Section II ###reference_### mathematically describes the dynamics of an IBR-dominated microgrid; Section III ###reference_### presents the decentralized stability protocol; Section IV ###reference_### introduces the interface that aims to enforce the stability protocol; Section V ###reference_### tests the performance of the interface; and Section VI ###reference_### summarizes this paper."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Microgrid Dynamics",
+ "text": "This section considers an AC microgrid with IBRs. We describe the nodal and network dynamics of the microgrid. Then the microgrid dynamics is organized into a feedback architecture that lends itself to developing the stability protocol."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Dynamics of IBRs",
+ "text": "This paper considers two types of IBRs: grid-forming (GFM) and grid-following (GFL) IBRs. Figures 1 ###reference_### and 2 ###reference_### present the representative architectures of these two types of IBRs. The dynamics of the representative GFM and GFL IBRs are elaborated in Appendices A ###reference_### and B ###reference_###. It can be observed from Figures 1 ###reference_### and 2 ###reference_### that both GFM and GFL IBRs interact with the rest of the microgrid via terminal voltages and terminal currents , while they are governed by different internal state vector 111 will be , if the -th IBR is GFM and its dynamics is presented in Appendix A ###reference_###; will be , if the -th IBR is GFL and its dynamics is presented in Appendix B ###reference_###. Each state in is explained in the Appendices..\nThis paper concerns the fast dynamics of microgrids in the EMT time scale.\nThe small-signal dynamics of an IBR in such a time scale can be described by\nwhere the \u201c\u201d variables are the deviations of the corresponding variables from their steady states; () is the terminal current represented in the direct-quadrature (d-q) reference frame of the -th IBR; () is the terminal voltage represented in the d-q frame; and matrices , , and are derived from the IBR dynamics presented in Appendices A ###reference_### and B ###reference_###. The input-output relationship of the dynamics of IBR is shown in the central block of Figure 3 ###reference_###-(a). The input and output interact with the rest of the microgrid in a common reference frame (i.e., D-Q frame). 
Next, we present the reference frame transformation [29 ###reference_b29###, 30 ###reference_b30###] that converts variables in the d-q frame to the D-Q frame.\n###figure_1### ###figure_2### ###figure_3### In Figure 3 ###reference_###-(a), the output is obtained by\n\nwhere\nNote that is assumed to be a constant, since it changes much slower than the states in the time scale of interest.\nSimilarly, the relationship between and are described by\n.\nWith the above definitions, IBR can be viewed as a dynamic system that is driven by while outputting , as shown in Figure 3 ###reference_###-(b)."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Dynamics of Microgrid Network",
+ "text": "Assume that the microgrid with IBRs is three-phase balanced and hosts constant-impedance load. By the Kron reduction technique, the microgrid network can be reduced to a network with node and branches. One of the node is the neutral/reference point of the microgrid. Let set collect the nodal indices of the Kron-reduced network where \u201c\u201d denotes the nodal index for the neutral point. Let set collect branch indices of the reduced network. Another way to represent branch is to use a pair where correspond to the two nodes of the two terminals of branch . Suppose that , we define the positive direction assigned to branch is from node to .\nThe branches in the Kron-reduced network can be divided into two categories. Let collect the branches connecting to the neutral point via an IBR, while set collects the rest of the branches. The dynamics of branches in are governed by equations presented in Section II-A ###reference_###, whereas the dynamic behaviors of the branches in are modeled by RL circuits with resistor and inductance :\nwhere ; the subscript \u201cb\u201d reminds readers that the corresponding variables are used for describe branches without IBRs; the subscripts \u201cD\u201d and \u201cQ\u201d suggest the corresponding variables are in the common reference frame (the D-Q frame);\n and are the bus voltage differences of branch in the D- and Q- axis, i.e., and .\n###figure_4### To characterize the relationship between branch currents for , we introduce a reduced incidence matrix whose entries are with and . 
Each entry in matrix is defined as follows: if branch is incident at node , and the reference direction of branch is away from node ; if branch is incident at node , and the reference direction of branch is toward to node ; and if branch is not incident at node .\nWith the reference direction defined before, one can assign indices of nodes and branches such that the reduced incidence matrix has the following structure [31 ###reference_b31###]\nwhere is the first columns of matrix ; and is a -dimension identity matrix.\nNext, we present the compact form of Kirchhoff\u2019s Current Law (KCL), with the incident matrix . Let be . The KCL of the microgrid network in terms of direct/quadrature current leads to\nwhere with , ; and with , .\nPlugging (4 ###reference_###) into (5 ###reference_###) leads to\nMoreover, the relationship between the voltages across branches and the nodal voltages can be described by\nIn (7 ###reference_###), and , where the voltages across branches ; ; and nodal voltages , and where and are obtained by casting and to the D-Q frame by (2 ###reference_###).\nPlugging (4 ###reference_###) into (7 ###reference_###) leads to [31 ###reference_b31###]\nDefine the following vectors:\nThe branch dynamics (3 ###reference_###) can be organized into\nwhere ; ; ; and\nSince (11 ###reference_###) is linear, the following equations also hold:\nwhere the \u201c\u201d variables are the deviations of the original variables from their steady states."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "II-C A Feedback Perspective of Microgrid Dynamics",
+ "text": "The interaction between the IBRs and the microgrid network can be interpreted from a feedback perspective shown in Figure 5 ###reference_###. The IBR dynamics (= 1, 2, \u2026, N) constitute the feed-forward loop , whereas the feedback loop results from the network dynamics (11 ###reference_###). The input of is defined by\n\nwhere the negative sign results from the reference directions of and defined before: recall that the positive reference direction of points into the IBR , while the positive reference direction of points into the network. The output of is which drives the network dynamics (11 ###reference_###).\nWith Figure 5 ###reference_###, the dynamics of the microgrid with IBRs can be interpreted as follows. At time step , current for drives the dynamics of system which updates the internal state variables and outputs\nvoltage . The voltages further drive the dynamics of the microgrid network to update the internal state variables of the network and produces . The updated currents drive the dynamics of the IBRs, and the process described above repeats. Such a feedback perspective lends itself to introducing the transient stability protocol based on the passivity theory.\n###figure_5###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Decentralized Stability Protocol",
+ "text": "This section aims to answer the question of what condition each IBR should satisfy so that the IBRs collectively establish a stable microgrid. We term this condition the decentralized stability protocol. This section first introduces some definitions from control theory. Then we present a lemma that provides guidance for designing the protocol. Finally, the protocol is formally described and justified."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Stability of Interconnected Systems",
+ "text": "The closed-loop dynamics of Figure 5 ###reference_### can be described by\nwhere vector collects the IBR states in for , and the network states in ; and function defines the evolution of in terms of time. Recall that the equilibrium point of (12 ###reference_###) is the origin . The asymptotic stability of is rigorously described by the following definition:\n(Asymptotic stability [32 ###reference_b32###]) The equilibrium point of the system (12 ###reference_###) is asymptotically stable, if\nand if for some ,\nFor a system with input and output , the next two definitions examine the input-output properties of :\n(OFP [33 ###reference_b33###]) The system is output feedback passive (OFP), if for all square integrable and some ,\nwith a zero initial condition. Moreover, is called the passivity index.\n( Gain [33 ###reference_b33###]) The system has finite gain if for all square integrable\nwith a zero initial condition.\nThe link between asymptotic stability and the output feedback passivity is established by the following lemma [33 ###reference_b33###]:\n(Corollary 1 in [33 ###reference_b33###]) The equilibrium point of the closed-loop system in Figure 5 ###reference_### is asymptotically stable, if both subsystems and are output feedback passive.\nLemma 1 ###reference_ma1### guides one to design a decentralized protocol for each IBR to ensure system-level stability. Subsection III-B ###reference_### examines the OFP property of the feedback loop in Figure 5 ###reference_###. Subsection III-C ###reference_### introduces the protocol that ensures the OFP property of the feed-forward loop ."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Output Feedback Passivity of Microgrid Networks",
+ "text": "To establish the asymptotic stability, Lemma 1 ###reference_ma1### requires the RL network to be OFP. While it is well known that a RL network is passive, how to quantify the extent that the RL network is passive has not been well studied yet in the power and energy community. The OFP property of the network dynamics (11 ###reference_###) in the DQ frame is established by the following theorem:\n(Network Passivity Index) The microgrid network dynamics (11 ###reference_###) is OFP with input and output , if matrix has at least one positive eigenvalue.\nBy definition,\nNote that and is a scalar. Then,\nEquation (15 ###reference_###) leads to , implying\nBased on (15 ###reference_###) and (16 ###reference_###),\nwhere and . As matrices ,\nwhere is the minimal eigenvalue of ; is the maximal eigenvalue of ; and as .\nThe third line of (17 ###reference_###) is due to the fact that\nThe inequality (13 ###reference_###) is evaluated with a zero initial condition. By setting , it follows that and dynamics (11 ###reference_###) is OFP with passivity index .\n\u220e\nRemark: The proof of Theorem 1 ###reference_orem1### reveals that the passivity index of an RL network depends not only on the minimal branch resistance, but also on the branches\u2019 connectivity."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C IBR-level Stability Protocol",
+ "text": "Theorem 1 ###reference_orem1### suggests that the feedback loop in Figure 5 ###reference_### is OFP. According to Lemma 1 ###reference_ma1###, the system-level asymptotic stability can be established, if the feed-forward loop is OFP. This observation inspires us to design the following IBR-level stability protocol that leads to the microgrid-level stability:\nProtocol 1: For , the dynamics of IBR with input and output is OFP.\nThe \u201cP(assive)\u201d in Protocol 1 should not be confused with the \u201cpassive element\u201d defined in the circuit theory [34 ###reference_b34###]. In the circuit theory, the passive element is an element that is \u201cnot capable of generating energy\u201d [34 ###reference_b34###]. However, whether an OFP component in the sense of Definition 2 ###reference_inition2### is capable of generating energy or not depends on the definition of its inputs and outputs. If an IBR follows Protocol 1, it does not mean that the IBR cannot produce energy that powers its host microgrid, and it essentially means that the IBR cannot produce energy that leads disturbances to be sustained or amplified. Section V ###reference_### shows an example that a IBR follows Protocol 1 but produces energy. Next we show following Protocol 1 leads to asymptotic stability.\nThe equilibrium point of the closed-loop system in Figure 5 ###reference_### is asymptotically stable if Protocol 1 is followed.\nProtocol 1 requires each IBR to be OFP, i.e., there exist such that, for ,\nAccording to Figure 3 ###reference_###-(a), and , then\nNote that . This leads to\nDefine . It follows that\nfor . By summing up the inequalities in (22 ###reference_###), we have\nSince is finite, the finite summation and integration operators can be interchanged, i.e.,\nNote that and . This leads to\nBy Definition 2 ###reference_inition2###, the subsystem in Figure 5 ###reference_### is OFP with passivity index . 
In addition, since subsystem is OFP according to Theorem 1 ###reference_orem1###, the asymptotic stability of equilibrium of the system in Figure 5 ###reference_### is established by Lemma 1 ###reference_ma1###.\n\u220e\nAs Protocol 1 is not straightforward for IBR manufacturers or NMPs to implement directly, how can they enforce it? This is answered in the next section."
58
+ },
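The output-feedback passivity property that Protocol 1 asks of each IBR can be illustrated numerically. The sketch below is a hypothetical toy example, not the paper's IBR model: it simulates a first-order system y' = -y + u (which is OFP) with forward Euler and checks the supply-rate inequality ∫u·y dt ≥ ρ·∫y² dt along a trajectory.

```python
import numpy as np

def ofp_surplus(u_fn, rho, a=1.0, T=10.0, dt=1e-3):
    """Simulate y' = -a*y + u from y(0) = 0 (forward Euler) and return
    the passivity surplus  integral(u*y) - rho * integral(y^2).
    A nonnegative surplus along trajectories is consistent with the
    system being OFP with passivity index rho."""
    n = int(T / dt)
    y = 0.0
    supply, out_energy = 0.0, 0.0
    for k in range(n):
        u = u_fn(k * dt)
        supply += u * y * dt
        out_energy += y * y * dt
        y += dt * (-a * y + u)
    return supply - rho * out_energy

# For y' = -y + u, integral(u*y) = y(T)^2/2 + integral(y^2) exactly,
# so with rho slightly below 1 the surplus should stay nonnegative.
print(ofp_surplus(np.sin, rho=0.99))
```

The surplus stays nonnegative for this system regardless of the input signal, which is the trajectory-level fingerprint of the OFP property used in the proof above.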
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "IV Non-intrusive Protocol Enforcement",
63
+ "text": "In this section, we first illustrate the basic idea of enforcing Protocol 1. Then we conceptualize the architecture of an interface that enforces Protocol 1 in a non-intrusive way. We also define the information needed to design the interface."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-A Basic Idea of Protocol Enforcement",
69
+ "text": "Protocol 1 at IBR can be enforced by the scheme shown in Figure 6 ###reference_### where , , and are tunable parameters; and is an identity matrix. The next lemma guides one to tune , , and to follow Protocol 1:\n###figure_6### (Theorem 4 in [35 ###reference_b35###])\nThe closed-loop system in Figure 6 ###reference_### with input and output is OFP with , if\nwhere is the gain of the IBR with input and output in Figure 6 ###reference_###.\nSuppose that an IBR manufacturer provides an gain . Then the NMPs can leverage condition (23 ###reference_###) to find , , and . As a result, the closed-loop system shown in Figure 6 ###reference_### follows Protocol 1. The remaining question is: how does the IBR manufacturer compute ?"
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-B Gain for IBRs",
75
+ "text": "The following lemma can be leveraged by IBR manufacturers to obtain :\n[32 ###reference_b32###] Assume that the real part of every eigenvalue of matrix in (1 ###reference_###) is strictly negative. Let . Then, the gain of dynamics (1 ###reference_###) is .\nIn Lemma 3 ###reference_ma3###, is the norm; transfer functions can be obtained by the \u201css2tf\u201d function in MATLAB based on matrices , , and ; ; and is the norm of [32 ###reference_b32###] which can be obtained by the \u201chinfnorm\u201d function in MATLAB, given . Lemma 3 ###reference_ma3### requires a stable matrix . This is not a restrictive assumption, as IBR control designers typically perform small-signal analysis to ensure device-level stability."
76
+ },
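Lemma 3 points to MATLAB's "ss2tf" and "hinfnorm"; one of its ingredients, the H-infinity norm of a state-space transfer matrix, can also be approximated without MATLAB by gridding the imaginary axis. The sketch below is a minimal frequency-gridding approximation (not the toolbox algorithm), and the example matrices are hypothetical: a stable first-order system G(s) = 1/(s + 2), whose H-infinity norm is 0.5.

```python
import numpy as np

def hinf_norm(A, B, C, D, w_grid=None):
    """Approximate the H-infinity norm of G(s) = C (sI - A)^{-1} B + D
    by taking the peak largest singular value of the frequency
    response over a logarithmic grid of frequencies."""
    if w_grid is None:
        w_grid = np.logspace(-3, 6, 4000)
    n = A.shape[0]
    peak = 0.0
    for w in w_grid:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        peak = max(peak, np.linalg.svd(G, compute_uv=False)[0])
    return peak

# Hypothetical stable SISO example: G(s) = 1/(s + 2), peak gain 0.5 at w = 0.
A = np.array([[-2.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
print(hinf_norm(A, B, C, D))  # close to 0.5
```

Gridding only lower-bounds the true norm; a dense grid (or a bisection method as in "hinfnorm") tightens the estimate. The stability assumption of Lemma 3 matters here as well: an unstable A would make the norm unbounded.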
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "IV-C Architecture of Protocol Enforcement Interfaces (PEI)",
81
+ "text": "This subsection conceptualizes an interface that enforces Protocol 1; in doing so, the theoretical result in [35 ###reference_b35###] is translated into electric energy systems for the first time. The physical layer of the interface is shown in Figure 7 ###reference_###. The interface comprises a three-phase controlled voltage source and a three-phase controlled current source. The voltage of the voltage source and the current of the current source are determined by the terminal voltage measurement and current measurement of the IBR . This paper focuses on the control law that establishes the link between and ; the internal design of the controlled voltage and current sources is beyond the scope of this paper.\n###figure_7### ###figure_8### Figure 8 ###reference_### presents the cyber layer of the interface. In Figure 8 ###reference_###, the three-phase variables and are first transformed into the d-q frame by the Park transformation: ; and \nwhere [36 ###reference_b36###]\nIn the above equation, , and can be obtained locally by a phase-locked loop [37 ###reference_b37###]. Second, the deviation vectors and are obtained by subtracting the steady-state values and from and . Third, and are computed by\nFinally, the vectors in the d-q frame and are transformed back to the three-phase frame.\nEquation (24 ###reference_###) is justified by transforming Figure 7 ###reference_### in the three-phase frame to the d-q frame. Figure 9 ###reference_### presents the circuit in the d-q frame. According to Figure 6 ###reference_###, we have\nIn Figure 9 ###reference_###, based on Kirchhoff\u2019s circuit laws, we have\nPlugging (26 ###reference_###) into (25 ###reference_###) leads to (24 ###reference_###).\nIt is worth noting that designing the interface shown in Figures 7 ###reference_### and 8 ###reference_### only requires an IBR manufacturer to provide the gains of their IBRs, which can be easily obtained via Lemma 3 ###reference_ma3### by the manufacturer. 
The interface design does not require detailed information about the internal IBR control. While the IBR manufacturer may be reluctant to share such information with the NMPs due to privacy concerns over intellectual property, revealing the gains of the IBRs does not lead to such privacy issues, as it is impossible to infer the detailed control design of an IBR merely from its gains.\n###figure_9###"
82
+ },
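The Park transformation step in the cyber layer of Figure 8 can be sketched as follows. The amplitude-invariant convention below is an assumption for illustration (the paper's exact scaling follows [36]), and the balanced three-phase test signal is hypothetical.

```python
import numpy as np

def park(abc, theta):
    """Amplitude-invariant Park (abc -> dq0) transformation.
    `theta` is the rotating-frame angle, obtained locally by a PLL."""
    T = (2.0 / 3.0) * np.array([
        [np.cos(theta),  np.cos(theta - 2*np.pi/3),  np.cos(theta + 2*np.pi/3)],
        [-np.sin(theta), -np.sin(theta - 2*np.pi/3), -np.sin(theta + 2*np.pi/3)],
        [0.5, 0.5, 0.5],
    ])
    return T @ abc

# A balanced three-phase signal aligned with the PLL angle maps to a
# constant d-q vector, here [1, 0] with zero-sequence component 0.
t = 0.01
w = 2 * np.pi * 60                      # 60 Hz grid
theta = w * t
abc = np.array([np.cos(w*t), np.cos(w*t - 2*np.pi/3), np.cos(w*t + 2*np.pi/3)])
print(park(abc, theta))                 # approximately [1, 0, 0]
```

The inverse transformation (dq0 back to abc) is the last step of Figure 8, applied to the computed deviation signals before they drive the controlled sources.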
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Case Study",
87
+ "text": "This section tests the effectiveness of the PEIs by simulating a grid-connected GFL IBR and two networked microgrids."
88
+ },
89
+ {
90
+ "section_id": "5.1",
91
+ "parent_section_id": "5",
92
+ "section_name": "Grid-connected Grid-following IBR",
93
+ "text": ""
94
+ },
95
+ {
96
+ "section_id": "5.1.1",
97
+ "parent_section_id": "5.1",
98
+ "section_name": "V-A1 A motivating example",
99
+ "text": "Figure 10 ###reference_### shows a GFL IBR connected to a distribution grid. The dynamics of the GFL IBR and the associated simulation parameters are described in Appendix B ###reference_###. At time s, the distribution grid\u2019s frequency changes from Hz to Hz. Figure 11 ###reference_###-(a) visualizes the three-phase terminal currents of the GFL IBR in Figure 10 ###reference_### from s to s, while Figure 11 ###reference_###-(b) shows the zoomed-in version of the currents during different periods. Before the change, it can be observed that the GFL IBR operates stably. After s, the peak values of the currents become around times larger than those before the change. The significantly increased currents can trigger an overcurrent protection relay to trip the GFL IBR, preventing the GFL IBR from integrating into the grid. The poor dynamical performance of the GFL IBR can also be observed in Figure 12 ###reference_###, which visualizes the terminal currents in the d-q reference frame.\n###figure_10### ###figure_11### ###figure_12### ###figure_13###"
100
+ },
101
+ {
102
+ "section_id": "5.1.2",
103
+ "parent_section_id": "5.1",
104
+ "section_name": "V-A2 System responses with the protocol enforcement interface",
105
+ "text": "Next, we test the performance of the PEI using the GFL IBR with the same setting as Section V-A1 ###reference_.SSS1###. Here, the GFL IBR is connected to a PEI as shown in Figure 7 ###reference_###. The manufacturer of the GFL IBR can use Lemma 3 ###reference_ma3### to obtain the gain of the IBR. The gain of the GFL IBR in Figure 10 ###reference_### is . With the gain , NMPs can find the parameters of the PEI, i.e., , , and , via condition (23 ###reference_###). It is worth noting that the manufacturer does not need to share the detailed model of their IBRs with the NMPs to enable them to design the PEI. The interface parameters for the GFL IBR are: , , and . The resulting is .\nFigure 13 ###reference_### shows the performance of the PEI with the above parameters. It can be observed that after the grid frequency change at s, the three-phase current magnitudes are constant after some moderate transients.\nFigure 14 ###reference_### visualizes the d-q components of the GFL IBR\u2019s terminal currents. It can be observed that the PEI can significantly reduce the current increase shown in Figure 11 ###reference_###.\n###figure_14### ###figure_15### ###figure_16###"
106
+ },
107
+ {
108
+ "section_id": "5.2",
109
+ "parent_section_id": "5",
110
+ "section_name": "Networked Microgrids with Two IBRs",
111
+ "text": ""
112
+ },
113
+ {
114
+ "section_id": "5.2.1",
115
+ "parent_section_id": "5.2",
116
+ "section_name": "V-B1 A motivating example",
117
+ "text": "The test system in Figure 15 ###reference_### contains two microgrids. All control parameters of IBR can be found in [29 ###reference_b29###]. For IBR , , and the rest of the parameters are from [29 ###reference_b29###]. The two loads are constant-impedance, and the per-phase impedances of Loads and are 25 and 20, respectively. Before time s, Microgrids 1 and 2 are in the islanded mode. At s, the two small microgrids are networked via the tie line and they enter the hybrid mode. Figure 16 ###reference_### visualizes the three-phase terminal currents at both IBRs, i.e., and , from s to s. In Figure 16 ###reference_###, it can be observed that the magnitudes of and are constant before the two microgrids are networked, i.e., s. This suggests that the two microgrids are stable in the islanded mode. However, after the two microgrids are networked, i.e., s, the magnitudes of and keep oscillating. Figure 17 ###reference_### examines the three-phase currents and in the d-q frame: before s, both and can be stabilized at their nominal values. However, after the switch is closed at s, both and keep oscillating with increasing amplitudes, suggesting that the two networked microgrids become unstable.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### Next, we examine the importance of modeling fast, high-order dynamics of IBRs. If we only model the dynamics with the slow states, i.e., the droop controllers, under the disturbance at s, the real power output of IBR 1 is visualized by the orange-dashed waveform in Figure 18 ###reference_###. With the simplified model, it can be observed that the two networked IBRs are stable. However, if the dynamics of both fast and slow states are modeled, under the same disturbance, the blue-solid curve in Figure 18 ###reference_### visualizes , and it suggests that the two networked IBRs are actually unstable, since a growing oscillation arises. 
Such instability cannot be observed from the simulation with the simplified model. Therefore, modeling the dynamics of the fast states is also important for the stability analysis.\n###figure_22###"
118
+ },
119
+ {
120
+ "section_id": "5.2.2",
121
+ "parent_section_id": "5.2",
122
+ "section_name": "V-B2 System responses with protocol enforcement interface",
123
+ "text": "With the same setting as Section V-B1 ###reference_.SSS1###, each IBR is connected to a PEI as shown in Figure 7 ###reference_###. The manufacturer of each IBR can use Lemma 3 ###reference_ma3### to obtain the gain of the IBR. The gains and for the two IBRs are and , respectively. Based on the gains, the parameters for the PEIs are: , , , , , and . The resulting and are and , respectively.\nFigures 19 ###reference_### and 20 ###reference_### show the performance of the PEIs. It can be observed that after the two microgrids are networked at s, the three-phase current magnitudes are constant after some transients. Figure 21 ###reference_### visualizes the d-q components and : the PEIs can stabilize the currents at constant values after the two IBRs are networked, while both and would keep oscillating with increasing amplitudes if no PEI is installed (shown in Figure 17 ###reference_###).\n###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28###"
124
+ },
125
+ {
126
+ "section_id": "5.2.3",
127
+ "parent_section_id": "5.2",
128
+ "section_name": "V-B3 Energy changed by PEIs",
129
+ "text": "Do the PEIs consume a significant amount of energy to stabilize the microgrids? We answer this question by comparing the energy consumed by the interfaces with the energy produced by the IBRs. For , denote by , , and the real power produced by IBR , the real power consumed by the three-phase, shunt current source in the PEI at IBR , and the real power consumed by the three-phase, series voltage source in the PEI at IBR , respectively. Denote by , , and the energy produced by IBR , the energy consumed by the three-phase current source in the PEI at IBR , and the energy consumed by the three-phase voltage source in the PEI at IBR , over a period.\nFigures 22 ###reference_### and 23 ###reference_### visualize , , and .\nIn Figure 22 ###reference_###, it can be observed that the real power used for stabilizing the microgrids, i.e., and , is much less than . By integrating , , and over a period, , , and over that period can be computed. Table I ###reference_### presents , , and over the transient process (i.e., the process from s to s) and the steady state (i.e., the process from s to s). Let for . It can be seen that the PEI at IBR 1 only takes a very small amount of energy, i.e., of the total energy produced by IBR 1 during the transients, to stabilize the microgrids. In the steady state, the energy consumed by the PEI is only of the total energy produced by the IBR .\nSimilarly, Figure 23 ###reference_### shows that the absolute value of real power consumed by the interface at IBR is much smaller than the real power produced by IBR . The values of , and over the transient process (s - s) and the steady state (s - s) are reported in Table I ###reference_###. Compared with the energy produced by IBR , the energy consumed for the stabilization purpose at IBR is very small, i.e., of during the transients and of during the steady state.\n###figure_29### ###figure_30###"
130
+ },
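The energy bookkeeping behind Table I is a straightforward numerical integration of the power traces. The sketch below uses synthetic placeholder traces (not the simulation data behind Figures 22-23) to show the computation: integrate each real-power trace to obtain energy over the window, then form the PEI's share of the IBR's produced energy.

```python
import numpy as np

def energy(p, t):
    """Trapezoidal integration of a power trace (W) over time (s) -> energy (J)."""
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))

# Synthetic placeholder traces over a 1 s window; the real traces come
# from simulation measurements of the IBR and its PEI sources.
t = np.linspace(0.0, 1.0, 1001)
p_ibr = 800.0 + 50.0 * np.exp(-5.0 * t)   # real power produced by the IBR
p_cs = 4.0 * np.exp(-5.0 * t)             # shunt current source of the PEI
p_vs = 2.0 * np.exp(-5.0 * t)             # series voltage source of the PEI

E_ibr, E_cs, E_vs = energy(p_ibr, t), energy(p_cs, t), energy(p_vs, t)
eta = 100.0 * (E_cs + E_vs) / E_ibr       # PEI energy as % of IBR energy
print(f"E_ibr = {E_ibr:.1f} J, PEI share = {eta:.3f} %")
```

With these placeholder numbers the PEI share comes out well under one percent, qualitatively matching the observation above that the stabilization energy is a small fraction of the energy produced by the IBR.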
131
+ {
132
+ "section_id": "5.2.4",
133
+ "parent_section_id": "5.2",
134
+ "section_name": "V-B4 Partial coverage of protocol enforcement interfaces",
135
+ "text": "In the simulation presented in Sections V-B2 ###reference_.SSS2### and V-B3 ###reference_.SSS3###, all IBRs are equipped with the PEIs. Next, we remove the PEI installed at IBR and keep the PEI at IBR . With the same setting described in Section V-B1 ###reference_.SSS1###, Figure 24 ###reference_### presents the evolution of and . It can be observed that the PEI at IBR 2 can stabilize the networked microgrids alone.\n###figure_31### ###figure_32###"
136
+ },
137
+ {
138
+ "section_id": "5.2.5",
139
+ "parent_section_id": "5.2",
140
+ "section_name": "V-B5 Performance in the presence of constant-power loads",
141
+ "text": "Next, we examine the performance of the PEIs in the presence of constant-power loads. The two constant-impedance loads in Section V-B1 ###reference_.SSS1### are replaced by two constant-power loads. In our simulation, the two constant-power loads are modeled by the Simulink block called \u201cThree-Phase Dynamic Load\u201d with the \u201cExternal Control of PQ\u201d option selected. The real power for Loads and is W and W, respectively; neither load draws reactive power. At s, the two microgrids are networked. Figure 25 ###reference_### presents the terminal currents of the two IBRs in the d-q frame, and it shows instability after s. With each IBR equipped with a PEI, Figure 26 ###reference_### presents the terminal currents of the two IBRs in the d-q frame. It can be observed that the system-level symptom shown in Figure 25 ###reference_### is mitigated by the PEIs.\n###figure_33### ###figure_34### ###figure_35### ###figure_36###"
142
+ },
143
+ {
144
+ "section_id": "5.2.6",
145
+ "parent_section_id": "5.2",
146
+ "section_name": "V-B6 Impact of the PEI on power sharing",
147
+ "text": "Without the proposed solution, the microgrids with the detailed model cannot be stabilized, as shown in Figure 18 ###reference_###. As a result, the desired power sharing characteristic can only be observed based on the simplified model. To examine the desired power sharing characteristic defined by the droop control in each IBR, we first simulate the simplified IBR dynamics only involving droop control under a load change. The orange-dashed curves in Figure 28 ###reference_### present the desired power sharing behaviors defined by the droop control. Then, under the same load change, we simulate the microgrids with the PEIs and the detailed dynamics. The blue curves in Figure 28 ###reference_### present the power sharing behaviors of the two IBRs with the PEIs. It can be observed that there are small power sharing errors which are and of the prescribed real power outputs at IBRs 1 and 2, respectively, due to the PEIs.\n###figure_37### ###figure_38### Such power sharing errors can be addressed in two ways. One way is to enable the PEIs only if instability is observed. Another way is to tune the parameters of the PEIs to minimize the power sharing errors. Note that the PEIs\u2019 parameters are not unique. After we update the parameters with , , , , , , the real power outputs are visualized in Figure 28 ###reference_###. It can be observed that with the updated parameters, the two IBRs output the real power prescribed by their droop controllers. Future work will explore a systematic way to tune PEIs\u2019 parameters to achieve accurate power sharing.\n###figure_39### ###figure_40###"
148
+ },
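The power-sharing baseline referenced above follows from each IBR's frequency-droop characteristic, P_i = (ω_set − ω)/m_i, with all IBRs settling at a common frequency ω. The sketch below uses hypothetical droop gains and load (not the parameters from [29]) to show the steady-state sharing rule: outputs are inversely proportional to the droop gains.

```python
import math

# Steady-state power sharing of two frequency-droop IBRs serving P_load:
# each follows P_i = (w_set - w) / m_i at the common frequency w.
w_set = 2 * math.pi * 60      # no-load angular frequency set point (rad/s)
m1, m2 = 1e-4, 2e-4           # hypothetical droop gains (rad/s per W)
P_load = 900.0                # total load (W)

# Power balance P1 + P2 = P_load determines the common frequency w.
w = w_set - P_load / (1.0 / m1 + 1.0 / m2)
P1 = (w_set - w) / m1
P2 = (w_set - w) / m2
print(P1, P2)                 # sharing inversely proportional to droop gains
```

With m2 twice m1, IBR 1 carries twice the power of IBR 2 (600 W vs. 300 W here); this is the "prescribed" sharing against which the PEI-induced errors above are measured.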
149
+ {
150
+ "section_id": "5.2.7",
151
+ "parent_section_id": "5.2",
152
+ "section_name": "V-B7 Comparison studies",
153
+ "text": "A conventional centralized method is based on small-signal analysis [29 ###reference_b29###], which collects the detailed dynamics of all IBRs and network information, derives the system matrix, and tunes the IBR parameters such that no eigenvalue of the system matrix lies in the right-half plane. We use the IBR parameters in [29 ###reference_b29###] that lead to a stable linear system. With the parameters in [29 ###reference_b29###], the orange-dashed curves in Figure 29 ###reference_### present the terminal currents under the disturbance at s. With the PEIs, the blue-solid curves in Figure 29 ###reference_### present the microgrid response under the same disturbance. The PEIs can stabilize the microgrids much faster and with much smaller overshoots, compared with the centralized approach [29 ###reference_b29###]. While it may be possible to finely tune the IBR parameters in a centralized manner such that the IBRs react faster to the disturbance with smaller overshoots/undershoots than the proposed method through trial and error, the key feature of our approach is that it does not require the NMPs to collect detailed dynamics of all IBRs or reprogram the internal IBR controllers. Such a desirable feature cannot be achieved by the conventional centralized approach based on small-signal analysis [29 ###reference_b29###].\n###figure_41### ###figure_42### Next, we compare the proposed approach with an existing passivity-based approach in [25 ###reference_b25###]. Note that the approach in [25 ###reference_b25###] requires one to reprogram the internal IBR controllers, which may be infeasible for NMPs, whereas the proposed approach can stabilize the system in a non-intrusive manner. 
The method in [25 ###reference_b25###] is implemented by replacing the frequency droop controller with angle droop controllers, and tuning the control parameters based on the condition derived in [25 ###reference_b25###].\nUnder the disturbance, the terminal currents of the two IBRs are visualized by the orange-dashed curves in Figure 30 ###reference_###. It can be observed that the method in [25 ###reference_b25###] can stabilize the microgrids. With the PEIs, the terminal currents are presented by the blue-solid curves in Figure 30 ###reference_###, suggesting that the PEIs can stabilize the microgrids with much smaller overshoots/undershoots. It is not surprising that the two approaches exhibit distinct behaviors under the same disturbances, due to different controllers. Figure 30 ###reference_### suggests that both methods can stabilize the microgrids with a settling time of less than s. However, the proposed PEIs achieve this goal without reprogramming the controllers.\n###figure_43### ###figure_44###"
154
+ },
155
+ {
156
+ "section_id": "5.3",
157
+ "parent_section_id": "5",
158
+ "section_name": "Networked Microgrids with Three IBRs",
159
+ "text": ""
160
+ },
161
+ {
162
+ "section_id": "5.3.1",
163
+ "parent_section_id": "5.3",
164
+ "section_name": "V-C1 A motivating example",
165
+ "text": "The test system shown in Figure 31 ###reference_### contains two networked microgrids. Microgrid is powered by two IBRs, and Microgrid is powered by one IBR. The parameters of the three IBRs are the same as the ones in [29 ###reference_b29###] except . The two loads are constant-impedance. The two microgrids are networked at s. Figure 32 ###reference_### presents the d-q components of the terminal currents , , and . It can be observed that closing the tie-line in Figure 31 ###reference_### incurs sustained oscillations throughout the system.\n###figure_45### ###figure_46### ###figure_47### ###figure_48###"
166
+ },
167
+ {
168
+ "section_id": "5.3.2",
169
+ "parent_section_id": "5.3",
170
+ "section_name": "V-C2 System responses with full and partial coverage of PEIs",
171
+ "text": "To mitigate the system-level symptom shown in Figure 32 ###reference_###, each IBR in Figure 31 ###reference_### is equipped with a PEI. Figure 33 ###reference_### presents the responses of terminal currents of the three IBRs, and it suggests that the two networked microgrids are stabilized. Next, we remove the PEIs equipped at IBRs 1 and 2. With the event described in Section V-C1 ###reference_.SSS1###, Figure 34 ###reference_### shows the terminal currents in the d-q reference frame, and it suggests that the PEI equipped at IBR 3 can stabilize the networked microgrids alone.\n###figure_49### ###figure_50### ###figure_51### ###figure_52### ###figure_53### ###figure_54###"
172
+ },
173
+ {
174
+ "section_id": "5.3.3",
175
+ "parent_section_id": "5.3",
176
+ "section_name": "V-C3 Performance in the presence of constant power loads",
177
+ "text": "Here, the two loads in Figure 31 ###reference_### are modeled as the constant-power loads described in Section V-B5 ###reference_.SSS5###. After the two microgrids are networked at s, sustained oscillations can be observed (the waveform of the oscillations is omitted in this paper for brevity). Figure 35 ###reference_### presents the terminal currents at the three IBRs in the d-q reference frame, and it suggests that the PEIs can stabilize the networked microgrids in the presence of constant-power loads.\n###figure_55### ###figure_56### ###figure_57###"
178
+ },
179
+ {
180
+ "section_id": "6",
181
+ "parent_section_id": null,
182
+ "section_name": "VI Conclusion",
183
+ "text": "This paper introduces a passivity-based stability protocol for IBRs in AC microgrids. The protocol is enforced by a novel interface at the grid edge in a decentralized, non-intrusive manner.\nThe proposed method is tested by simulating a grid-connected GFL IBR and two networked microgrids with benchmark parameters. Simulations show that growing oscillations can occur when two stable AC microgrids are networked, and they also suggest that the proposed interface can mitigate such a system-level symptom. The design of PEIs still requires IBR manufacturers to compute the gain. Future work will develop data-driven methods that eliminate this requirement.\nAnother research direction is to investigate the power-electronics implementation of the PEIs."
184
+ }
185
+ ],
186
+ "appendix": [
187
+ {
188
+ "section_id": "Appendix 1",
189
+ "parent_section_id": null,
190
+ "section_name": "Appendix A Dynamics of Grid-forming IBRs",
191
+ "text": "Suppose that the -th IBR is grid-forming. As shown in Figure 1 ###reference_###, the GFM IBR includes a DC voltage source, an inverter, a resistor-inductor-capacitor (RLC) low-pass filter, a power controller, a voltage controller, and a current controller. The dynamics of each block in Figure 1 ###reference_### are introduced as follows."
192
+ },
193
+ {
194
+ "section_id": "Appendix 2",
195
+ "parent_section_id": null,
196
+ "section_name": "Appendix B Dynamics of Grid-following IBRs",
197
+ "text": "Suppose that the th IBR is grid-following (GFL). The cyber-physical architecture of the GFL IBR is summarized in Figure 2 ###reference_###. The dynamics of the RLC output filter and the current controller in Figure 2 ###reference_### can be characterized by (27 ###reference_###) and (32 ###reference_###). Next, we elaborate on the phase-locked loop (PLL) and the block that generates the current set points for the current controller."
198
+ }
199
+ ],
200
+ "tables": {
201
+ "1": {
202
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span><span class=\"ltx_text\" id=\"S5.T1.34.1\" style=\"color:#000000;\">Energy Analysis for Networked Microgrids with Two IBRs</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.32\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.5.1\">Period</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1\">\n (J)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.2.2.2\">\n (J)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T1.3.3.3\">\n (J)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.4.4\">\n (%)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.2\">\ns - s</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.7.7.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.8.8.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.10.10.6\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.12.12.2\">\ns - s</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.13.13.3\"></th>\n<th 
class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.14.14.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.15.15.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.16.16.6\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.20.20.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.20.20.5.1\">Period</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.17.17.1\">\n (J)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.18.18.2\">\n (J)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.19.19.3\">\n (J)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.20.20.4\">\n (%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.26.26\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.22.22.2\">\ns - s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.25.25.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.26.26.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.32.32\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.28.28.2\">\ns - s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.29.29.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.30.30.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T1.31.31.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb 
ltx_border_t\" id=\"S5.T1.32.32.6\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
203
+ "capture": "TABLE I: Energy Analysis for Networked Microgrids with Two IBRs"
204
+ }
205
+ },
206
+ "image_paths": {
207
+ "1": {
208
+ "figure_path": "2310.09450v3_figure_1.png",
209
+ "caption": "Figure 1: Cyber and physical architecture of a grid-forming IBR.",
210
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/GFM-Block-Diagram.png"
211
+ },
212
+ "2": {
213
+ "figure_path": "2310.09450v3_figure_2.png",
214
+ "caption": "Figure 2: Cyber and physical architecture of a grid-following IBR.",
215
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/GFL-Block-Diagram.png"
216
+ },
217
+ "3": {
218
+ "figure_path": "2310.09450v3_figure_3.png",
219
+ "caption": "Figure 3: (a) Reference frame transformation; and (b) an input-output perspective of IBR dynamics",
220
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/FrameTrans.png"
221
+ },
222
+ "4": {
223
+ "figure_path": "2310.09450v3_figure_4.png",
224
+ "caption": "Figure 4: Branch (i,j)msubscript\ud835\udc56\ud835\udc57\ud835\udc5a(i,j)_{m}( italic_i , italic_j ) start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT in \u21302subscript\u21302\\mathcal{E}_{2}caligraphic_E start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT",
225
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Branch.png"
226
+ },
227
+ "5": {
228
+ "figure_path": "2310.09450v3_figure_5.png",
229
+ "caption": "Figure 5: A feedback perspective of microgrid dynamics",
230
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Feedback.png"
231
+ },
232
+ "6": {
233
+ "figure_path": "2310.09450v3_figure_6.png",
234
+ "caption": "Figure 6: Basic idea of enforcing the Stability Protocol",
235
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Passivation.png"
236
+ },
237
+ "7": {
238
+ "figure_path": "2310.09450v3_figure_7.png",
239
+ "caption": "Figure 7: Physical layer of the protocol enforcement interface",
240
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Physical-Layer.png"
241
+ },
242
+ "8": {
243
+ "figure_path": "2310.09450v3_figure_8.png",
244
+ "caption": "Figure 8: Cyber layer of the protocol enforcement interface.",
245
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Cyber-Layer.png"
246
+ },
247
+ "9": {
248
+ "figure_path": "2310.09450v3_figure_9.png",
249
+ "caption": "Figure 9: Physical layer of the protocol enforcement interface in the d-q frame",
250
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/physical_layer_dq.png"
251
+ },
252
+ "10": {
253
+ "figure_path": "2310.09450v3_figure_10.png",
254
+ "caption": "Figure 10: A grid-following IBR connected to its host distribution grid",
255
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Grid-tie-GFL.png"
256
+ },
257
+ "11(a)": {
258
+ "figure_path": "2310.09450v3_figure_11(a).png",
259
+ "caption": "(a)\nFigure 11: Without PEI, (a) three-phase terminal currents of GFL IBR from 0.20.20.20.2s to 0.70.70.70.7s; and (b) zoomed-in version of the terminal currents.",
260
+ "url": "http://arxiv.org/html/2310.09450v3/x1.png"
261
+ },
262
+ "11(b)": {
263
+ "figure_path": "2310.09450v3_figure_11(b).png",
264
+ "caption": "(b)\nFigure 11: Without PEI, (a) three-phase terminal currents of GFL IBR from 0.20.20.20.2s to 0.70.70.70.7s; and (b) zoomed-in version of the terminal currents.",
265
+ "url": "http://arxiv.org/html/2310.09450v3/x2.png"
266
+ },
267
+ "12": {
268
+ "figure_path": "2310.09450v3_figure_12.png",
269
+ "caption": "Figure 12: Time-domain evolution of the d-q components of the terminal currents of the GFL IBR without the PEI.",
270
+ "url": "http://arxiv.org/html/2310.09450v3/x3.png"
271
+ },
272
+ "13(a)": {
273
+ "figure_path": "2310.09450v3_figure_13(a).png",
274
+ "caption": "(a)\nFigure 13: With PEI enabled, (a) three-phase terminal currents of the GFL IBR from 0.20.20.20.2s to 0.70.70.70.7s; and (b) zoomed-in version of the terminal currents.",
275
+ "url": "http://arxiv.org/html/2310.09450v3/x4.png"
276
+ },
277
+ "13(b)": {
278
+ "figure_path": "2310.09450v3_figure_13(b).png",
279
+ "caption": "(b)\nFigure 13: With PEI enabled, (a) three-phase terminal currents of the GFL IBR from 0.20.20.20.2s to 0.70.70.70.7s; and (b) zoomed-in version of the terminal currents.",
280
+ "url": "http://arxiv.org/html/2310.09450v3/x5.png"
281
+ },
282
+ "14": {
283
+ "figure_path": "2310.09450v3_figure_14.png",
284
+ "caption": "Figure 14: Time-domain evolution of the terminal currents of the GFL IBR in the d-q frame.",
285
+ "url": "http://arxiv.org/html/2310.09450v3/x6.png"
286
+ },
287
+ "15": {
288
+ "figure_path": "2310.09450v3_figure_15.png",
289
+ "caption": "Figure 15: Two networked microgrids",
290
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/TwoIBR.png"
291
+ },
292
+ "16(a)": {
293
+ "figure_path": "2310.09450v3_figure_16(a).png",
294
+ "caption": "(a)\nFigure 16: Time-domain evolution of (a) \ud835\udc22abc\u20621subscript\ud835\udc22abc1\\mathbf{i}_{\\text{abc}1}bold_i start_POSTSUBSCRIPT abc 1 end_POSTSUBSCRIPT and (b) \ud835\udc22abc\u20622subscript\ud835\udc22abc2\\mathbf{i}_{\\text{abc}2}bold_i start_POSTSUBSCRIPT abc 2 end_POSTSUBSCRIPT in the presence of constant-impedance loads.",
295
+ "url": "http://arxiv.org/html/2310.09450v3/x7.png"
296
+ },
297
+ "16(b)": {
298
+ "figure_path": "2310.09450v3_figure_16(b).png",
299
+ "caption": "(b)\nFigure 16: Time-domain evolution of (a) \ud835\udc22abc\u20621subscript\ud835\udc22abc1\\mathbf{i}_{\\text{abc}1}bold_i start_POSTSUBSCRIPT abc 1 end_POSTSUBSCRIPT and (b) \ud835\udc22abc\u20622subscript\ud835\udc22abc2\\mathbf{i}_{\\text{abc}2}bold_i start_POSTSUBSCRIPT abc 2 end_POSTSUBSCRIPT in the presence of constant-impedance loads.",
300
+ "url": "http://arxiv.org/html/2310.09450v3/x8.png"
301
+ },
302
+ "17(a)": {
303
+ "figure_path": "2310.09450v3_figure_17(a).png",
304
+ "caption": "(a)\nFigure 17: (a) Time-domain evolution of (a) {iod\u20621,ioq\u20621}subscript\ud835\udc56od1subscript\ud835\udc56oq1\\{i_{\\text{od}1},i_{\\text{oq}1}\\}{ italic_i start_POSTSUBSCRIPT od 1 end_POSTSUBSCRIPT , italic_i start_POSTSUBSCRIPT oq 1 end_POSTSUBSCRIPT } and (b) {iod\u20622,ioq\u20622}subscript\ud835\udc56od2subscript\ud835\udc56oq2\\{i_{\\text{od}2},i_{\\text{oq}2}\\}{ italic_i start_POSTSUBSCRIPT od 2 end_POSTSUBSCRIPT , italic_i start_POSTSUBSCRIPT oq 2 end_POSTSUBSCRIPT } in the presence of constant-impedance loads",
305
+ "url": "http://arxiv.org/html/2310.09450v3/x9.png"
306
+ },
307
+ "17(b)": {
308
+ "figure_path": "2310.09450v3_figure_17(b).png",
309
+ "caption": "(b)\nFigure 17: (a) Time-domain evolution of (a) {iod\u20621,ioq\u20621}subscript\ud835\udc56od1subscript\ud835\udc56oq1\\{i_{\\text{od}1},i_{\\text{oq}1}\\}{ italic_i start_POSTSUBSCRIPT od 1 end_POSTSUBSCRIPT , italic_i start_POSTSUBSCRIPT oq 1 end_POSTSUBSCRIPT } and (b) {iod\u20622,ioq\u20622}subscript\ud835\udc56od2subscript\ud835\udc56oq2\\{i_{\\text{od}2},i_{\\text{oq}2}\\}{ italic_i start_POSTSUBSCRIPT od 2 end_POSTSUBSCRIPT , italic_i start_POSTSUBSCRIPT oq 2 end_POSTSUBSCRIPT } in the presence of constant-impedance loads",
310
+ "url": "http://arxiv.org/html/2310.09450v3/x10.png"
311
+ },
312
+ "18": {
313
+ "figure_path": "2310.09450v3_figure_18.png",
314
+ "caption": "Figure 18: Response comparison between detailed and simplified models: Instability can be observed only in the simulation with the detailed, high-order model.",
315
+ "url": "http://arxiv.org/html/2310.09450v3/x11.png"
316
+ },
317
+ "19(a)": {
318
+ "figure_path": "2310.09450v3_figure_19(a).png",
319
+ "caption": "(a)\nFigure 19: (a) Time-domain evolution of instantaneous currents (curr.) iabc\u20621subscriptiabc1\\textbf{i}_{\\text{abc}1}i start_POSTSUBSCRIPT abc 1 end_POSTSUBSCRIPT at IBR 1 with the passivisation interface; (b) Zoomed-in version of iabc\u20621subscriptiabc1\\textbf{i}_{\\text{abc}1}i start_POSTSUBSCRIPT abc 1 end_POSTSUBSCRIPT during the transients (the upper panel) and the steady state (the lower panel). The two loads are constant-impedance.",
320
+ "url": "http://arxiv.org/html/2310.09450v3/x12.png"
321
+ },
322
+ "19(b)": {
323
+ "figure_path": "2310.09450v3_figure_19(b).png",
324
+ "caption": "(b)\nFigure 19: (a) Time-domain evolution of instantaneous currents (curr.) iabc\u20621subscriptiabc1\\textbf{i}_{\\text{abc}1}i start_POSTSUBSCRIPT abc 1 end_POSTSUBSCRIPT at IBR 1 with the passivisation interface; (b) Zoomed-in version of iabc\u20621subscriptiabc1\\textbf{i}_{\\text{abc}1}i start_POSTSUBSCRIPT abc 1 end_POSTSUBSCRIPT during the transients (the upper panel) and the steady state (the lower panel). The two loads are constant-impedance.",
325
+ "url": "http://arxiv.org/html/2310.09450v3/x13.png"
326
+ },
327
+ "20(a)": {
328
+ "figure_path": "2310.09450v3_figure_20(a).png",
329
+ "caption": "(a)\nFigure 20: (a) Time-domain evolution of instantaneous currents (curr.) iabc\u20622subscriptiabc2\\textbf{i}_{\\text{abc}2}i start_POSTSUBSCRIPT abc 2 end_POSTSUBSCRIPT at IBR 2 with the passivisation interface; (b) Zoomed-in version of iabc\u20622subscriptiabc2\\textbf{i}_{\\text{abc}2}i start_POSTSUBSCRIPT abc 2 end_POSTSUBSCRIPT during the transients (the upper panel) and the steady state (the lower panel). The two loads are constant-impedance.",
330
+ "url": "http://arxiv.org/html/2310.09450v3/x14.png"
331
+ },
332
+ "20(b)": {
333
+ "figure_path": "2310.09450v3_figure_20(b).png",
334
+ "caption": "(b)\nFigure 20: (a) Time-domain evolution of instantaneous currents (curr.) iabc\u20622subscriptiabc2\\textbf{i}_{\\text{abc}2}i start_POSTSUBSCRIPT abc 2 end_POSTSUBSCRIPT at IBR 2 with the passivisation interface; (b) Zoomed-in version of iabc\u20622subscriptiabc2\\textbf{i}_{\\text{abc}2}i start_POSTSUBSCRIPT abc 2 end_POSTSUBSCRIPT during the transients (the upper panel) and the steady state (the lower panel). The two loads are constant-impedance.",
335
+ "url": "http://arxiv.org/html/2310.09450v3/x15.png"
336
+ },
337
+ "21(a)": {
338
+ "figure_path": "2310.09450v3_figure_21(a).png",
339
+ "caption": "(a)\nFigure 21: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT with the PEIs in the presence of constant-impedance loads.",
340
+ "url": "http://arxiv.org/html/2310.09450v3/x16.png"
341
+ },
342
+ "21(b)": {
343
+ "figure_path": "2310.09450v3_figure_21(b).png",
344
+ "caption": "(b)\nFigure 21: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT with the PEIs in the presence of constant-impedance loads.",
345
+ "url": "http://arxiv.org/html/2310.09450v3/x17.png"
346
+ },
347
+ "22": {
348
+ "figure_path": "2310.09450v3_figure_22.png",
349
+ "caption": "Figure 22: Time-domain evolution of P1subscript\ud835\udc431P_{1}italic_P start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, Pv\u20621subscript\ud835\udc43v1P_{\\text{v}1}italic_P start_POSTSUBSCRIPT v 1 end_POSTSUBSCRIPT, and Pc\u20621subscript\ud835\udc43c1P_{\\text{c}1}italic_P start_POSTSUBSCRIPT c 1 end_POSTSUBSCRIPT at IBR 1.",
350
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Power_IBR1.png"
351
+ },
352
+ "23": {
353
+ "figure_path": "2310.09450v3_figure_23.png",
354
+ "caption": "Figure 23: Time-domain evolution of P2subscript\ud835\udc432P_{2}italic_P start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, Pv\u20622subscript\ud835\udc43v2P_{\\text{v}2}italic_P start_POSTSUBSCRIPT v 2 end_POSTSUBSCRIPT, and Pc\u20622subscript\ud835\udc43c2P_{\\text{c}2}italic_P start_POSTSUBSCRIPT c 2 end_POSTSUBSCRIPT at IBR 2.",
355
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/Power_IBR2.png"
356
+ },
357
+ "24(a)": {
358
+ "figure_path": "2310.09450v3_figure_24(a).png",
359
+ "caption": "(a)\nFigure 24: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT with IBR 2 equipped with the PEI.",
360
+ "url": "http://arxiv.org/html/2310.09450v3/x18.png"
361
+ },
362
+ "24(b)": {
363
+ "figure_path": "2310.09450v3_figure_24(b).png",
364
+ "caption": "(b)\nFigure 24: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT with IBR 2 equipped with the PEI.",
365
+ "url": "http://arxiv.org/html/2310.09450v3/x19.png"
366
+ },
367
+ "25(a)": {
368
+ "figure_path": "2310.09450v3_figure_25(a).png",
369
+ "caption": "(a)\nFigure 25: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT without PEIs in the presence of constant-power loads.",
370
+ "url": "http://arxiv.org/html/2310.09450v3/x20.png"
371
+ },
372
+ "25(b)": {
373
+ "figure_path": "2310.09450v3_figure_25(b).png",
374
+ "caption": "(b)\nFigure 25: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT without PEIs in the presence of constant-power loads.",
375
+ "url": "http://arxiv.org/html/2310.09450v3/x21.png"
376
+ },
377
+ "26(a)": {
378
+ "figure_path": "2310.09450v3_figure_26(a).png",
379
+ "caption": "(a)\nFigure 26: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT with the PEIs in the presence of constant-power loads.",
380
+ "url": "http://arxiv.org/html/2310.09450v3/x22.png"
381
+ },
382
+ "26(b)": {
383
+ "figure_path": "2310.09450v3_figure_26(b).png",
384
+ "caption": "(b)\nFigure 26: Time-domain evolution of (a) iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT and (b) iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT with the PEIs in the presence of constant-power loads.",
385
+ "url": "http://arxiv.org/html/2310.09450v3/x23.png"
386
+ },
387
+ "27(a)": {
388
+ "figure_path": "2310.09450v3_figure_27(a).png",
389
+ "caption": "(a)\nFigure 27: Real power outputs at IBR 1 (a) and IBR 2 (b), with original PEI parameters.",
390
+ "url": "http://arxiv.org/html/2310.09450v3/x24.png"
391
+ },
392
+ "27(b)": {
393
+ "figure_path": "2310.09450v3_figure_27(b).png",
394
+ "caption": "(b)\nFigure 27: Real power outputs at IBR 1 (a) and IBR 2 (b), with original PEI parameters.",
395
+ "url": "http://arxiv.org/html/2310.09450v3/x25.png"
396
+ },
397
+ "28(a)": {
398
+ "figure_path": "2310.09450v3_figure_28(a).png",
399
+ "caption": "(a)\nFigure 28: Real power outputs at IBR 1 (a) and IBR 2 (b), with updated PEI parameters.",
400
+ "url": "http://arxiv.org/html/2310.09450v3/x26.png"
401
+ },
402
+ "28(b)": {
403
+ "figure_path": "2310.09450v3_figure_28(b).png",
404
+ "caption": "(b)\nFigure 28: Real power outputs at IBR 1 (a) and IBR 2 (b), with updated PEI parameters.",
405
+ "url": "http://arxiv.org/html/2310.09450v3/x27.png"
406
+ },
407
+ "29(a)": {
408
+ "figure_path": "2310.09450v3_figure_29(a).png",
409
+ "caption": "(a)\nFigure 29: Comparison of the terminal currents of the proposed method (blue curves) and the conventional centralized method under the disturbance that the two microgrids are networked.",
410
+ "url": "http://arxiv.org/html/2310.09450v3/x28.png"
411
+ },
412
+ "29(b)": {
413
+ "figure_path": "2310.09450v3_figure_29(b).png",
414
+ "caption": "(b)\nFigure 29: Comparison of the terminal currents of the proposed method (blue curves) and the conventional centralized method under the disturbance that the two microgrids are networked.",
415
+ "url": "http://arxiv.org/html/2310.09450v3/x29.png"
416
+ },
417
+ "30(a)": {
418
+ "figure_path": "2310.09450v3_figure_30(a).png",
419
+ "caption": "(a)\nFigure 30: Comparison of the terminal currents of the proposed method (blue curves) and the intrusive method in [25] (orange-dashed curves) under the disturbance that the two microgrids are networked.",
420
+ "url": "http://arxiv.org/html/2310.09450v3/x30.png"
421
+ },
422
+ "30(b)": {
423
+ "figure_path": "2310.09450v3_figure_30(b).png",
424
+ "caption": "(b)\nFigure 30: Comparison of the terminal currents of the proposed method (blue curves) and the intrusive method in [25] (orange-dashed curves) under the disturbance that the two microgrids are networked.",
425
+ "url": "http://arxiv.org/html/2310.09450v3/x31.png"
426
+ },
427
+ "31": {
428
+ "figure_path": "2310.09450v3_figure_31.png",
429
+ "caption": "Figure 31: Two networked microgrids with three IBRs",
430
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/ThreeIBRs.png"
431
+ },
432
+ "32(a)": {
433
+ "figure_path": "2310.09450v3_figure_32(a).png",
434
+ "caption": "(a)\nFigure 32: The d-q components of currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT without the PEIs in the three-IBR microgrid. The two loads are constant-impedance.",
435
+ "url": "http://arxiv.org/html/2310.09450v3/x32.png"
436
+ },
437
+ "32(b)": {
438
+ "figure_path": "2310.09450v3_figure_32(b).png",
439
+ "caption": "(b)\nFigure 32: The d-q components of currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT without the PEIs in the three-IBR microgrid. The two loads are constant-impedance.",
440
+ "url": "http://arxiv.org/html/2310.09450v3/x33.png"
441
+ },
442
+ "32(c)": {
443
+ "figure_path": "2310.09450v3_figure_32(c).png",
444
+ "caption": "(c)\nFigure 32: The d-q components of currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT without the PEIs in the three-IBR microgrid. The two loads are constant-impedance.",
445
+ "url": "http://arxiv.org/html/2310.09450v3/x34.png"
446
+ },
447
+ "33(a)": {
448
+ "figure_path": "2310.09450v3_figure_33(a).png",
449
+ "caption": "(a)\nFigure 33: The d-q components of currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with the PEIs in the three-IBR microgrid. The two loads are constant-impedance.",
450
+ "url": "http://arxiv.org/html/2310.09450v3/x35.png"
451
+ },
452
+ "33(b)": {
453
+ "figure_path": "2310.09450v3_figure_33(b).png",
454
+ "caption": "(b)\nFigure 33: The d-q components of currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with the PEIs in the three-IBR microgrid. The two loads are constant-impedance.",
455
+ "url": "http://arxiv.org/html/2310.09450v3/x36.png"
456
+ },
457
+ "33(c)": {
458
+ "figure_path": "2310.09450v3_figure_33(c).png",
459
+ "caption": "(c)\nFigure 33: The d-q components of currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with the PEIs in the three-IBR microgrid. The two loads are constant-impedance.",
460
+ "url": "http://arxiv.org/html/2310.09450v3/x37.png"
461
+ },
462
+ "34(a)": {
463
+ "figure_path": "2310.09450v3_figure_34(a).png",
464
+ "caption": "(a)\nFigure 34: Terminal currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with a PEI installed at IBR 3333 in the d-q frame.",
465
+ "url": "http://arxiv.org/html/2310.09450v3/x38.png"
466
+ },
467
+ "34(b)": {
468
+ "figure_path": "2310.09450v3_figure_34(b).png",
469
+ "caption": "(b)\nFigure 34: Terminal currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with a PEI installed at IBR 3333 in the d-q frame.",
470
+ "url": "http://arxiv.org/html/2310.09450v3/x39.png"
471
+ },
472
+ "34(c)": {
473
+ "figure_path": "2310.09450v3_figure_34(c).png",
474
+ "caption": "(c)\nFigure 34: Terminal currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with a PEI installed at IBR 3333 in the d-q frame.",
475
+ "url": "http://arxiv.org/html/2310.09450v3/x40.png"
476
+ },
477
+ "35(a)": {
478
+ "figure_path": "2310.09450v3_figure_35(a).png",
479
+ "caption": "(a)\nFigure 35: Terminal currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with two constant-power loads.",
480
+ "url": "http://arxiv.org/html/2310.09450v3/x41.png"
481
+ },
482
+ "35(b)": {
483
+ "figure_path": "2310.09450v3_figure_35(b).png",
484
+ "caption": "(b)\nFigure 35: Terminal currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with two constant-power loads.",
485
+ "url": "http://arxiv.org/html/2310.09450v3/x42.png"
486
+ },
487
+ "35(c)": {
488
+ "figure_path": "2310.09450v3_figure_35(c).png",
489
+ "caption": "(c)\nFigure 35: Terminal currents iodq\u20621subscriptiodq1\\textbf{i}_{\\text{odq}1}i start_POSTSUBSCRIPT odq 1 end_POSTSUBSCRIPT, iodq\u20622subscriptiodq2\\textbf{i}_{\\text{odq}2}i start_POSTSUBSCRIPT odq 2 end_POSTSUBSCRIPT, and iodq\u20623subscriptiodq3\\textbf{i}_{\\text{odq}3}i start_POSTSUBSCRIPT odq 3 end_POSTSUBSCRIPT with two constant-power loads.",
490
+ "url": "http://arxiv.org/html/2310.09450v3/x43.png"
491
+ },
492
+ "36(a)": {
493
+ "figure_path": "2310.09450v3_figure_36(a).png",
494
+ "caption": "(a)\nFigure 36: Time-domain evolution of normalized P1subscript\ud835\udc431P_{1}italic_P start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03d5d\u20621subscriptitalic-\u03d5d1\\phi_{\\text{d}1}italic_\u03d5 start_POSTSUBSCRIPT d 1 end_POSTSUBSCRIPT.",
495
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/time_scale_time_P1.png"
496
+ },
497
+ "36(b)": {
498
+ "figure_path": "2310.09450v3_figure_36(b).png",
499
+ "caption": "(b)\nFigure 36: Time-domain evolution of normalized P1subscript\ud835\udc431P_{1}italic_P start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \u03d5d\u20621subscriptitalic-\u03d5d1\\phi_{\\text{d}1}italic_\u03d5 start_POSTSUBSCRIPT d 1 end_POSTSUBSCRIPT.",
500
+ "url": "http://arxiv.org/html/2310.09450v3/extracted/5748157/Fig/time_scale_time_phi_d1.png"
501
+ },
502
+ "37": {
503
+ "figure_path": "2310.09450v3_figure_37.png",
504
+ "caption": "Figure 37: Stabilization time of key variables",
505
+ "url": "http://arxiv.org/html/2310.09450v3/x44.png"
506
+ }
507
+ },
508
+ "validation": true,
509
+ "references": [],
510
+ "url": "http://arxiv.org/html/2310.09450v3"
511
+ }
20240722/2310.14277v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2310.17163v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2310.20204v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2311.08100v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2311.08236v2.json ADDED
@@ -0,0 +1,194 @@
1
+ {
2
+ "title": "MeLo: Low-rank Adaptation is Better than Fine-tuning for Medical Image Diagnosis",
3
+ "abstract": "The common practice in developing computer-aided diagnosis (CAD) models based on transformer architectures usually involves fine-tuning from ImageNet pre-trained weights. However, with recent advances in large-scale pre-training and the practice of scaling laws, Vision Transformers (ViT) have become much larger and less accessible to medical imaging communities.\nAdditionally, in real-world scenarios, the deployments of multiple CAD models can be troublesome due to problems such as limited storage space and time-consuming model switching.\nTo address these challenges, we propose a new method MeLo (Medical image Low-rank adaptation), which enables the development of a single CAD model for multiple clinical tasks in a lightweight manner. It adopts low-rank adaptation instead of resource-demanding fine-tuning.\nBy fixing the weight of ViT models and only adding small low-rank plug-ins, we achieve competitive results on various diagnosis tasks across different imaging modalities using only a few trainable parameters.\nSpecifically, our proposed method achieves comparable performance to fully fine-tuned ViT models on four distinct medical imaging datasets using about 0.17% trainable parameters.\nMoreover, MeLo adds only about 0.5MB of storage space and allows for extremely fast model switching in deployment and inference.\nOur source code and pre-trained weights are available here.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In the last decade, deep learning in computer vision has undergone a revolution and significantly impacted the field of medical image analysis.\nEspecially Vision Transformer (ViT) [1 ###reference_b1###] has demonstrated remarkable capabilities in learning complex representations in a data-driven manner.\nHowever, training ViTs from scratch typically requires large annotated datasets, which are challenging to collect in healthcare due to privacy concerns and expensive annotation [2 ###reference_b2###].\nThus, transfer learning by leveraging pre-trained weight on ImageNet [3 ###reference_b3###] has gained popularity by serving as a warm starting point.\nMore recently, the community has witnessed an increase in model scale [4 ###reference_b4###, 5 ###reference_b5###], especially ViT[6 ###reference_b6###] which exhibits superior generalizability and robustness compared to smaller ImageNet pre-trained ViT when transferred to other domains.\nThus, the success of ViT and its variants, which are typically called visual foundation models, has inspired subsequent research on medical foundational models aiming to build more powerful CAD systems.\nAlthough these vision foundation models have shown unprecedented capabilities, their deployment and maintenance present several challenges in real clinical scenarios.\nConstantly updating images from new devices and new epidemics means frequent fine-tuning, which is time-consuming and resource-intensive since the models are extremely large.\nMoreover, in clinical practice, a paramount consideration resides in the optimization of storage space utilization, the reduction of GPU resource demands, and the expeditious execution of diverse medical image processing tasks. 
Nonetheless, the process of fine-tuning multiple foundational models not only imposes substantial storage space overhead but also exacerbates latency due to frequent model switching on the GPU.\nTo address the challenges above, we propose MeLo (Medical image Low-rank adaptation), a novel approach that leverages low-rank adaptation instead of full fine-tuning to efficiently transfer a pre-trained vision foundation model to a powerful CAD.\nMeLo freezes the original weights of visual foundation models while adding small low-rank plug-ins that can achieve remarkable results using only a small fraction of trainable weights.\nTrained with less than 0.175% of original parameters, the ViT-based models can achieve performance comparable to the fully fine-tuned counterparts.\nThis substantial reduction in trainable parameters translates to a significant reduction in computational resources and training time, making it more feasible for researchers and practitioners to deploy and maintain high-performing models.\nSince MeLo is a very small plugin, a ViT-based foundation model and the MeLo for multiple tasks can be loaded all at once, and the corresponding MeLo module can be activated when dealing with a specific diagnosis task, so as to achieve fast task response.\nTo demonstrate the effectiveness and versatility of our proposed MeLo, we have conducted extensive experiments on various diagnosis tasks across different imaging modalities.\nIn each of these tasks, MeLo consistently matches or even outperforms the performance of fully fine-tuned models, while utilizing significantly fewer trainable parameters.\nWe also test the deployment and inference of MeLo when confronting multitasking, and demonstrate that MeLo not only significantly alleviates the demand for GPU memory but also facilitates expedited switching between different tasks.\nIn summary, the proposed MeLo offers a powerful and efficient alternative to the fully fine-tuned methods for medical image analysis.\nBy 
utilizing a small fraction of the trainable parameters and preserving pre-trained weights, MeLo enables researchers and practitioners to develop high-performing and robust models that can be easily deployed and maintained in a wide range of medical imaging applications.\nWe are committed to making MeLo widely available to the research and clinical communities.\nTo this end, we have made the MeLo system publicly accessible, along with pre-trained MeLo module weights.\nWith these lightweight weights, users can obtain a diagnosis model that boasts similar or superior performance to their fine-tuned counterparts while benefiting from the reduced complexity and resource requirements."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "method",
15
+ "text": "In this section, we will first present the methodology of the proposed MeLo in detail, and then introduce the datasets and the implementation details in our experiments."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Medical Image Low-rank Adaptation (MeLo)",
21
+ "text": "To efficiently and effectively transfer a vision foundation model to a specific CAD model, we turn to Low-Rank Adaptation (LoRA) [7 ###reference_b7###], a popular parameter-efficient fine-tuning technique, to build our proposed MeLo method in this study.\nLoRA was first proposed to fine-tune large language models according to the hypothesis that the weight change of a pre-trained large model is highly sparse and has a low intrinsic rank during fine-tuning.\nThe main idea of LoRA is to freeze the pre-trained model weights and inject trainable rank decomposition matrices into each layer of the Transformer architecture, thus greatly reducing the number of trainable parameters when fine-tuning large models.\nA lot of studies turn to LoRA instead of full fine-tuning for building their own large models without having access to intensive computational resources.\nAs for MeLo, we employ LoRA to efficiently adapt ViT based models to different diagnosis applications as illustrated in Figure 1 ###reference_###.\nFor a specific clinical scenario, we add LoRA weights into each self-attention layer of a pre-trained ViT as depicted in Figure 2 ###reference_###.\nFor the pre-trained query and value projection matrices (denoted as and ) in a self-attention layer,\nthe added LoRA weights constrain their updates by representing them with a low-rank decomposition during fine-tuning, which can be expressed as:\nwhere and stand for the input and output features, respectively.\nTwo low-rank matrices, and , compose the weight change of the pre-trained weight .\nThe ranks of these low-rank matrices are much smaller than the model dimension , and we empirically set in our experiments.\nBy switching different MeLo modules, one pre-trained ViT can effectively handle different medical image diagnosis tasks, thus greatly reducing the computation budget for building a versatile CAD system especially when the training data is limited."
22
+ },
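The low-rank update described in the section above can be sketched in a few lines. This is a minimal illustration of the technique from the LoRA paper, not the authors' implementation; the dimension `d = 768` (typical of ViT-base) and the rank `r = 4` are illustrative assumptions, and the zero initialization of `B` follows the common LoRA convention so the adapted layer starts out identical to the frozen one:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 768, 4  # model dimension and low rank, with r << d (both illustrative)

W0 = rng.standard_normal((d, d)) / np.sqrt(d)  # frozen pre-trained projection (e.g. W_q or W_v)
A = 0.01 * rng.standard_normal((r, d))         # trainable low-rank factor, small random init
B = np.zeros((d, r))                           # trainable; zero init => Delta W = B @ A = 0 at start

def adapted_forward(x):
    # h = W0 x + B A x; only A and B receive updates during fine-tuning
    return W0 @ x + B @ (A @ x)

x = rng.standard_normal(d)
# Before any training the adapter is a no-op: the output matches the frozen model.
assert np.allclose(adapted_forward(x), W0 @ x)

# Trainable fraction contributed by the adapter for this one matrix: 2rd / d^2 = 2r/d.
fraction = (A.size + B.size) / W0.size
assert abs(fraction - 2 * r / d) < 1e-12
```

Because the adapter touches only the `2rd` entries of `A` and `B` per adapted matrix, swapping MeLo modules amounts to swapping these small factors while `W0` stays resident, which is what enables the fast task switching described above.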
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Datasets and Implementation Details",
27
+ "text": "To comprehensively evaluate the utility of MeLo, we conduct experiments using four datasets for three different medical image diagnosis tasks, including thoracic disease diagnosis in chest X-ray (CXR) images, breast malignancy diagnosis in mammography images, and blood cell identification in microscopic slides, which are described below.\nShenzhen Hospital Chest X-ray dataset was collected by Shenzhen No.3 Hospital in Shenzhen, Guangdong Province, China. It consists of 326 normal CXR images and 336 abnormal CXR images showing various manifestations of tuberculosis. Our task is to diagnose whether a CXR image displays tuberculosis. We randomly split the dataset with 80% of the images for training and 20% for testing.\nThe dataset is available here ###reference_ownloads.html#tuberculosis-image-data-sets###.\nNIH Chest X-ray 14 dataset [8 ###reference_b8###]\ncomprises 112,120 frontal-view CXR images annotated with 14 common thoracic diseases, and our task is to diagnose the diseases contained in each CXR image. We randomly allocate 70% of the images for training, 10% for validation, and the remaining 20% for testing.\nINBreast dataset [9 ###reference_b9###] includes 410 digital mammography images from 115 patients, consisting of 339 non-malignant and 71 malignant ones.\nThe diagnosis task follows the BI-RADS assessment of masses [10 ###reference_b10###] to classify these mammography images into non-malignant and malignant ones.\nWe randomly split the dataset with 80% of the images for training and 20% for testing.\nBloodCell dataset contains 12,500 augmented images of four blood cell subtypes, including Eosinophil, Lymphocyte, Monocyte, and Neutrophil. Our task is to identify the type of each blood cell. We use the dataset\u2019s own partitioning, and the dataset is available here ###reference_ymooney/blood-cells/data###.\nAll models in the experiments were trained by ourselves. 
During training, we use a learning rate of and the Adam optimizer to train MeLo-equipped ViTs for 200 epochs until convergence, and save the weights with the best validation performance as the final test model.\nIn Experiment 3.1 ###reference_###, we use the ViT-base model pre-trained on ImageNet.\nIn Experiment 3.2 ###reference_###, we turn to ViTs pre-trained with CLIP [11 ###reference_b11###] of varying model sizes, including the ViT-base, ViT-huge, and ViT-giga models.\nIn Experiment 3.3 ###reference_###, we use the ViT-giga model pre-trained with CLIP for evaluation.\nThe pre-trained weights of all ViT models used in our experiments are provided in [12 ###reference_b12###].\nAll experiments are conducted using a single Nvidia A100 80G GPU."
28
+ },
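The low-rank adaptation that MeLo injects into each self-attention layer (the A and B matrices around the query/value projections in Fig. 2) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual PyTorch implementation: the hidden size d = 768 (ViT-base) and rank r = 4 are illustrative assumptions, since the rank is not stated in this excerpt.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Frozen projection W augmented with a trainable low-rank update:
    y = x W^T + alpha * x (B A)^T -- the kind of adaptation MeLo injects
    into the query/value projections of each self-attention layer."""
    return x @ W.T + alpha * (x @ A.T) @ B.T

d, r = 768, 4                           # ViT-base hidden size; rank r=4 is assumed
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))         # frozen pre-trained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

x = rng.standard_normal((2, d))
# With B = 0, the adapted layer is exactly the frozen pre-trained layer.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

trainable = A.size + B.size             # 2 * d * r = 6144 per adapted projection
full = W.size                           # 589824 for one full projection matrix
print(trainable, full)
```

Only A and B are updated during training, which is why the trainable parameter count (0.14M in Table 1) stays tiny relative to the 81M-parameter backbone.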
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Experiments",
33
+ "text": ""
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Performance on Different Diagnosis Tasks",
39
+ "text": "We test the image classification performance of MeLo using four distinct datasets comprising various modalities of medical images: chest X-rays, blood smears, and mammograms. The experimental results are presented in Table 1 ###reference_###.\nIt is evident that MeLo achieves similar or improved performance across all datasets on various evaluation metrics.\nIn contrast to the fully fine-tuned model, which updates 81 million parameters, MeLo has significantly fewer trainable parameters, i.e., 0.14 million.\nThese experiments affirm that MeLo is substantially more efficient than full fine-tuning."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Performance on Different ViT Models",
45
+ "text": "We test the effectiveness of MeLo on ViT models of varying sizes using the Shenzhen Hospital Chest X-ray dataset.\nThe results in Figure 3 ###reference_### show that the AUC increases as the ViT model size grows.\nMoreover, while the model capacity of ViTs increases substantially with scale, the corresponding MeLo modules maintain a consistently low number of trainable parameters across model sizes.\nFor example, MeLo has only 1.22 million trainable parameters for the ViT-giga model, which comprises 1759 million parameters.\nThese findings indicate that, as pre-trained ViTs continue to grow in size, performance improvements can still be achieved at a minimal cost in trainable parameters."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Performance on Deployment and Inference",
51
+ "text": "To comprehensively evaluate the deployment and inference of our proposed MeLo, we conduct a simulation experiment to verify its effectiveness.\nWe first collect 100 medical images of varying imaging modalities by randomly selecting 25 images from each dataset.\nThen two data processing situations are simulated, i.e., one in which the images from a specific dataset are processed sequentially one by one, and the other in which all the images from different datasets are shuffled and processed in a fully random order.\nWe load a ViT-giga model together with four MeLo modules for the different diagnosis tasks, and compare this deployment with four ViT-giga models that are fully fine-tuned on the corresponding datasets.\nThere are two deployment strategies for the fully fine-tuned models, i.e., one is to temporarily load the corresponding model when encountering a specific task, and the other is to preload all models at once.\nThe experimental results in Table 2 ###reference_### illustrate that MeLo provides significant latency and memory benefits during deployment and inference.\nSpecifically, the model equipped with MeLo achieves lower inference time and smaller GPU memory usage when dealing with multiple diagnosis tasks in different orders.\nAs for the fully fine-tuned models deployed with the second strategy (2nd row in Table 2 ###reference_###), it is worth noting that clinical practice typically involves dozens, if not hundreds, of different diagnostic tasks, where a correspondingly large number of models would need to be deployed.\nSuch a deployment strategy would quickly exhaust GPU memory, making it infeasible in a real-world scenario.\n###figure_1###"
52
+ },
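The parameter totals in Table 2 follow from simple accounting: one shared ViT-giga backbone plus four lightweight MeLo modules, versus four full copies of the backbone preloaded at once. A quick sketch (sizes in millions of parameters, taken from the table; the per-module size is derived from the totals rather than reported directly):

```python
# Parameter accounting behind Table 2 (all sizes in millions of parameters).
backbone = 1758.6                          # one ViT-giga model (row 1)
melo_total = 1759.9                        # shared backbone + four MeLo modules (row 3)
per_module = (melo_total - backbone) / 4   # ~0.33M per task, derived, not reported

full_preload = 4 * backbone                # four fully fine-tuned models in memory at once
melo_deploy = backbone + 4 * per_module    # one backbone, four tiny adapters

print(round(full_preload, 1))              # matches row 2 of Table 2 (7034.4M)
print(round(melo_deploy, 1))               # matches row 3 of Table 2 (1759.9M)
```

Because each MeLo module is under a million parameters, adding more diagnosis tasks grows memory usage almost not at all, whereas preloading one fully fine-tuned backbone per task grows it linearly.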
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "CONCLUSION AND DISCUSSION",
57
+ "text": "In this work, we propose MeLo, a highly efficient and easily accessible approach that leverages low-rank adaptation to transfer a pre-trained vision foundation model to a versatile CAD system.\nMeLo has been evaluated across various medical image diagnosis tasks using different imaging modalities and ViT models of different sizes.\nThe experiments consistently demonstrate similar or superior performance compared with full fine-tuning while utilizing significantly fewer trainable parameters, along with lower latency and memory usage in practical deployments.\nThis strong performance highlights the potential to establish a community focused on creating high-performing, robust, and equitable models that can be readily deployed and maintained across a broad spectrum of medical image diagnosis applications within a short timeframe."
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "COMPLIANCE WITH ETHICAL STANDARDS",
63
+ "text": "This research study was conducted retrospectively using human subject data made available in open access. Ethical approval was not required as confirmed by the license attached with the open access data."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.1\">Table 1</span>: </span>Performance comparison of fine-tuning and MeLo on three medical image diagnosis tasks using four different datasets.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.3\" style=\"width:506.5pt;height:119.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-90.2pt,21.3pt) scale(0.737411316654212,0.737411316654212) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.1.1.2\">Task</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.1.1.3\">Classes</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.4\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.5\">Parameters</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.6\">Trainable</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.7\">ACC</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.8\">SEN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.9\">PRE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.1.1.1.10\">F1S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" 
id=\"S2.T1.3.1.1.1.11\">AUC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.2.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.2.1.1.1\">Shenzhen Hospital Chest X-ray</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.2.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.2.1.2.1\">Tuberculosis Diagnosis</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.2.1.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.2.1.3.1\">2</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.4\">Fine-tuning</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.5\">81.825M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.6\">81.825M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.7\">0.812</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.8\">0.813</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.9\">0.813</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.10\">0.812</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.2.1.11\">0.894</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.1\">MeLo</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.2\">81.966M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.3\">0.142M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.3.2.4.1\">0.835</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.3.2.5.1\">0.833</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S2.T1.3.1.3.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.3.2.6.1\">0.836</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.3.2.7.1\">0.834</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.3.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.3.2.8.1\">0.898</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.4.3.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.4.3.1.1\">BloodCell</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.4.3.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.4.3.2.1\">Blood Cell Identification</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.4.3.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.4.3.3.1\">4</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.4\">Fine-tuning</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.5\">81.827M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.6\">81.827M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.7\">0.859</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.4.3.8.1\">0.877</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.9\">0.888</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.10\">0.868</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.4.3.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.4.3.11.1\">0.983</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.1\">MeLo</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S2.T1.3.1.5.4.2\">81.968M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.3\">0.144M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.5.4.4.1\">0.930</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.5\">0.875</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.5.4.6.1\">0.958</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.5.4.7.1\">0.910</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.5.4.8\">0.947</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.6.5.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.6.5.1.1\">INbreast</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.6.5.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.6.5.2.1\">Breast Malignancy Diagnosis</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.3.1.6.5.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.6.5.3.1\">2</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.4\">Fine-tuning</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.5\">81.825M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.6\">81.825M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.6.5.7.1\">0.748</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.8\">0.594</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.6.5.9.1\">0.674</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.10\">0.554</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.6.5.11\">0.684</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.7.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.1\">MeLo</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.2\">81.966M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.3\">0.142M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.4\">0.745</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.7.6.5.1\">0.604</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.6\">0.615</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.7.6.7.1\">0.572</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.3.1.7.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.7.6.8.1\">0.687</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S2.T1.3.1.8.7.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.8.7.1.1\">NIH Chest X-ray14</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S2.T1.3.1.8.7.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.8.7.2.1\">Thoracic Disease Diagnosis</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S2.T1.3.1.8.7.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T1.3.1.8.7.3.1\">14</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.4\">Fine-tuning</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.5\">81.825M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.6\">81.825M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S2.T1.3.1.8.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.8.7.7.1\">0.369</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.8\">0.094</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.9\">0.316</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.10\">0.132</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.1.8.7.11\">0.788</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.1.9.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.1\">MeLo</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.2\">81.980M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.3\">0.157M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.4\">0.357</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.9.8.5.1\">0.106</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.9.8.6.1\">0.319</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.9.8.7.1\">0.142</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.3.1.9.8.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.9.8.8.1\">0.794</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
70
+ "capture": "Table 1: Performance comparison of fine-tuning and MeLo on three medical image diagnosis tasks using four different datasets."
71
+ },
72
+ "2": {
73
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.14.1.1\">Table 2</span>: </span>Performance on 100 images from four datasets (25 per dataset) in dataset-specific order and random order. IT is the total time for model initialization, ST is the total time for model switching, A-ST is the average time per switch, TT is the total inference time for all images, and Parameter is the total number of model parameters. Fine-tune<sup class=\"ltx_sup\" id=\"S2.T2.15.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S2.T2.15.2.1\">\u2217</span></sup> loads four fine-tuned ViT models for each dataset into GPU memory all at once.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T2.11\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T2.11.10.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T2.11.10.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T2.11.10.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S2.T2.11.10.1.2\">In Order</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S2.T2.11.10.1.3\">Random</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.10.1.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S2.T2.11.10.1.4.1\">Parameter</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.11.11.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.11.2.1\">IT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.11.2.2\">ST</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.11.2.3\">A-ST</th>\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T2.11.11.2.4\">TT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.11.2.5\">IT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.11.2.6\">ST</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T2.11.11.2.7\">A-ST</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T2.11.11.2.8\">TT</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T2.11.12.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T2.11.12.1.1\">Fine-tuing</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.11.12.1.2.1\">25.9s</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.3\">11.5s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.4\">2.9s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T2.11.12.1.5\">19.6s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.11.12.1.6.1\">25.4s</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.7\">214.0s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.8\">2.9s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T2.11.12.1.9\">218.5s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.11.12.1.10\">1758.6M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.7.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.3.1.1\">Fine-tuing<sup class=\"ltx_sup\" id=\"S2.T2.3.1.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T2.7.5.6\">118.1s</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S2.T2.4.2.2\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T2.5.3.3\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.7.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.7.5.7.1\">7.9s</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T2.7.5.8\">121.3s</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T2.6.4.4\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T2.7.5.5\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.7.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.7.5.9.1\">8.2s</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T2.7.5.10\">7034.4M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.11.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S2.T2.11.9.5\">MeLo</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.11.9.6\">34.0s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.8.6.1\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.9.7.2\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S2.T2.11.9.7\">8.9s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.11.9.8\">34.4s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.10.8.3\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.11.9.4\">\n0.1s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S2.T2.11.9.9\">10.6s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T2.11.9.10\">1759.9M</td>\n</tr>\n</tbody>\n</table>\n</figure>",
74
+ "capture": "Table 2: Performance on 100 images from four datasets (25 per dataset) in dataset-specific order and random order. IT is the total time for model initialization, ST is the total time for model switching, A-ST is the average time per switch, TT is the total inference time for all images, and Parameter is the total number of model parameters. Fine-tune\u2217 loads four fine-tuned ViT models for each dataset into GPU memory all at once."
75
+ }
76
+ },
77
+ "image_paths": {
78
+ "1": {
79
+ "figure_path": "2311.08236v2_figure_1.png",
80
+ "caption": "Fig. 1: The motivation of MeLo. The large-scale vision foundation model is just like a watermelon, and our proposed MeLo can conveniently adjust it to different clinical tasks by few additional parameters.",
81
+ "url": "http://arxiv.org/html/2311.08236v2/x1.png"
82
+ },
83
+ "2": {
84
+ "figure_path": "2311.08236v2_figure_2.png",
85
+ "caption": "Fig. 2: The illustration of our proposed MeLo. For a specific medical image diagnosis task, we inject low-rank decomposition matrices (denoted as A\ud835\udc34Aitalic_A and B\ud835\udc35Bitalic_B) into the pre-trained query and value projection matrices (denoted as WQsubscript\ud835\udc4a\ud835\udc44W_{Q}italic_W start_POSTSUBSCRIPT italic_Q end_POSTSUBSCRIPT and WVsubscript\ud835\udc4a\ud835\udc49W_{V}italic_W start_POSTSUBSCRIPT italic_V end_POSTSUBSCRIPT) of each self-attention layer. Different module colors respond to different clinical tasks.",
86
+ "url": "http://arxiv.org/html/2311.08236v2/x2.png"
87
+ },
88
+ "3": {
89
+ "figure_path": "2311.08236v2_figure_3.png",
90
+ "caption": "Fig. 3: The AUC gradually increases as the ViT model size expands while the trainable parameters of corresponding MeLo modules remain consistently low.",
91
+ "url": "http://arxiv.org/html/2311.08236v2/extracted/5746448/figures/ChinaSet_AUC_diffViT.png"
92
+ }
93
+ },
94
+ "validation": true,
95
+ "references": [
96
+ {
97
+ "1": {
98
+ "title": "\u201cAn image is worth 16x16 words: Transformers for image recognition at scale,\u201d",
99
+ "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.,",
100
+ "venue": "arXiv preprint arXiv:2010.11929, 2020.",
101
+ "url": null
102
+ }
103
+ },
104
+ {
105
+ "2": {
106
+ "title": "\u201cDeep learning for medical image processing: Overview, challenges and the future,\u201d",
107
+ "author": "Muhammad Imran Razzak, Saeeda Naz, and Ahmad Zaib,",
108
+ "venue": "Classification in BioApps: Automation of Decision Making, pp. 323\u2013350, 2018.",
109
+ "url": null
110
+ }
111
+ },
112
+ {
113
+ "3": {
114
+ "title": "\u201cImagenet: A large-scale hierarchical image database,\u201d",
115
+ "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei,",
116
+ "venue": "in 2009 IEEE conference on computer vision and pattern recognition. Ieee, 2009, pp. 248\u2013255.",
117
+ "url": null
118
+ }
119
+ },
120
+ {
121
+ "4": {
122
+ "title": "\u201cReproducible scaling laws for contrastive language-image learning,\u201d",
123
+ "author": "Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, and Jenia Jitsev,",
124
+ "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 2818\u20132829.",
125
+ "url": null
126
+ }
127
+ },
128
+ {
129
+ "5": {
130
+ "title": "\u201cScaling vision transformers to gigapixel images via hierarchical self-supervised learning,\u201d",
131
+ "author": "Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood,",
132
+ "venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16144\u201316155.",
133
+ "url": null
134
+ }
135
+ },
136
+ {
137
+ "6": {
138
+ "title": "\u201cScaling vision transformers to 22 billion parameters,\u201d",
139
+ "author": "Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Andreas Peter Steiner, Mathilde Caron, Robert Geirhos, Ibrahim Alabdulmohsin, et al.,",
140
+ "venue": "in International Conference on Machine Learning. PMLR, 2023, pp. 7480\u20137512.",
141
+ "url": null
142
+ }
143
+ },
144
+ {
145
+ "7": {
146
+ "title": "\u201cLora: Low-rank adaptation of large language models,\u201d",
147
+ "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen,",
148
+ "venue": "arXiv preprint arXiv:2106.09685, 2021.",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "8": {
154
+ "title": "\u201cChestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,\u201d",
155
+ "author": "Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers,",
156
+ "venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 2097\u20132106.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "9": {
162
+ "title": "\u201cInbreast: toward a full-field digital mammographic database.,\u201d",
163
+ "author": "In\u00eas Moreira, Igor Amaral, In\u00eas Domingues, Ant\u00f3nio Cardoso, Maria Jo\u00e3o Cardoso, and Jaime S. Cardoso,",
164
+ "venue": "Academic Radiology, vol. 19, pp. 236\u2013248, 2012.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "10": {
170
+ "title": "\u201cBreast imaging reporting and data system (bi-rads).,\u201d",
171
+ "author": "Laura Liberman and Jennifer H. Menell,",
172
+ "venue": "Radiologic Clinics of North America, vol. 40, pp. 409\u2013430, 2002.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "11": {
178
+ "title": "\u201cLearning transferable visual models from natural language supervision,\u201d",
179
+ "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.,",
180
+ "venue": "in International Conference on Machine Learning. PMLR, 2021, pp. 8748\u20138763.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "12": {
186
+ "title": "\u201cPytorch image models,\u201d https://github.com/rwightman/pytorch-image-models, 2019.",
187
+ "author": "Ross Wightman,",
188
+ "venue": null,
189
+ "url": null
190
+ }
191
+ }
192
+ ],
193
+ "url": "http://arxiv.org/html/2311.08236v2"
194
+ }
20240722/2311.12048v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2311.13348v2.json ADDED
@@ -0,0 +1,290 @@
1
+ {
2
+ "title": "MergeSFL: Split Federated Learning with Feature Merging and Batch Size Regulation",
3
+ "abstract": "Recently, federated learning (FL) has emerged as a popular technique for edge AI to mine valuable knowledge in edge computing (EC) systems.\nTo boost the performance of AI applications, large-scale models have received increasing attention due to their excellent generalization abilities.\nHowever, training and transmitting large-scale models\nwill incur significant computing and communication burdens on the resource-constrained workers, and the exchange of entire models may violate model privacy.\nTo relax the burden on workers and protect model privacy, split federated learning (SFL) has been proposed by integrating both data and model parallelism.\nBeyond resource limitations, SFL also faces two other critical challenges in EC systems, i.e., statistical heterogeneity and system heterogeneity.\nTo address these challenges, we propose a novel SFL framework, termed MergeSFL, by incorporating feature merging and batch size regulation into SFL.\nConcretely, feature merging merges the features from workers into a mixed feature sequence, which is approximately equivalent to the features derived from IID data and is employed to promote model accuracy, while batch size regulation assigns diverse and suitable batch sizes to heterogeneous workers to improve training efficiency.\nMoreover, MergeSFL jointly optimizes these two strategies based on their coupled relationship to better enhance the performance of SFL.\nExtensive experiments are conducted on a physical platform with 80 NVIDIA Jetson edge devices, and the experimental results show that MergeSFL can improve the final model accuracy by 5.82% to 26.22%, with a speedup of about 1.39 to 4.14 compared to the baselines.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "As an emerging and popular technique in edge AI, federated learning (FL) is proposed to train a globally-shared model through collaboration among workers (e.g., IoT devices) in a data-parallel fashion [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###].\nUnder the coordination of the parameter server (PS), participating workers periodically train deep learning (DL) models on their local datasets, and then push the models to the PS for global aggregation without exposing their raw data.\nFL has been leveraged by Google to develop the Gboard application with improved user experience in a privacy-preserving manner [7 ###reference_b7###].\nTo boost the performance of AI applications or services, it is usually practical and effective to augment the parameters of DL models [8 ###reference_b8###, 9 ###reference_b9###].\nHowever, training large-scale models is challenging for resource-constrained workers due to their limited CPU and memory resources [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\nAdditionally, transmitting large-scale models between workers and the PS incurs significant communication latency, and the exchange of entire models may violate model privacy [13 ###reference_b13###, 14 ###reference_b14###].\nTo mitigate the computing/communication burden on the resource-constrained workers and better protect model privacy, split federated learning (SFL) has been proposed by incorporating both data parallelism and model parallelism [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###].\nSFL splits an entire model into two submodels, i.e., a bottom model and a top model, at a certain neural layer, termed the split layer.\nThe bottom model (close to the input) is trained on the workers, while the training of the top model (close to the output) is offloaded to the relatively resource-rich PS.\nThus, SFL 
significantly reduces the computing load on the workers, which makes it feasible and efficient to train larger-scale models [14 ###reference_b14###, 16 ###reference_b16###, 17 ###reference_b17###].\nDifferent from typical FL, only the bottom models plus the features (also called activations or smashed data) and the gradients of the split layer are exchanged between workers and the PS.\nSince the size of the bottom model or features/gradients is much smaller than that of an entire model, the communication load is greatly reduced.\nFor example, the size of a 16-layer VGG16 [18 ###reference_b18###] is about 321MB, whereas the sizes of its bottom model and the features/gradients (with a batch size of 64) are about 56MB and 3MB, respectively, when splitting the model at the 13th layer.\nAs workers only have access to the bottom models and process their data locally, the privacy of user data and models is effectively protected [13 ###reference_b13###, 15 ###reference_b15###].\nBesides, existing privacy-preserving techniques such as Differential Privacy [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###] and Homomorphic Encryption [22 ###reference_b22###] can be employed to further protect the privacy of features/gradients in SFL.\nAlthough SFL provides the aforementioned advantages, it still suffers from two other critical challenges in practical applications.\n1) Statistical Heterogeneity. 
The workers always collect local data based on their locations and/or user preferences [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###].\nBesides, the raw data of workers is not shared with others to prevent privacy leakage, resulting in non-independent and identically distributed (non-IID) data across all workers, i.e., statistical heterogeneity [28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###].\nThe non-IID data decelerates the convergence rate and even compromises the accuracy of the trained models [33 ###reference_b33###, 34 ###reference_b34###].\n2) System Heterogeneity. In EC systems, workers commonly possess varying and limited capabilities [35 ###reference_b35###, 34 ###reference_b34###].\nThe computing capabilities (e.g., CPU frequency) and communication capabilities (e.g., bandwidth, throughput) of workers could differ from each other by more than tenfold [36 ###reference_b36###, 37 ###reference_b37###, 38 ###reference_b38###].\nSystem heterogeneity significantly impacts synchronous training, as fast workers may be forced to wait for slow ones, leading to increased waiting time and decreased training efficiency.\nSo far, the existing SFL works have mainly focused on training a large-scale DL model on resource-constrained workers, without simultaneously resolving the aforementioned system and statistical heterogeneity [13 ###reference_b13###, 14 ###reference_b14###, 12 ###reference_b12###].\nFor instance, SplitFed [13 ###reference_b13###] is the first to demonstrate the feasibility of SFL, and aggregates bottom models after each local update.\nSuch frequent aggregation results in high network traffic consumption.\nTo reduce traffic consumption, LocFedMix-SL [16 ###reference_b16###] proposes to reduce the aggregation frequency of bottom models, but it cannot fully utilize the capacities 
of heterogeneous workers.\nAs an advanced solution, AdaSFL [17 ###reference_b17###] assigns adaptive and diverse batch sizes to different workers to address system heterogeneity, but still cannot deal with statistical heterogeneity.\nPrior to the emergence of SFL, many solutions to the heterogeneity challenges [39 ###reference_b39###, 24 ###reference_b24###, 40 ###reference_b40###, 41 ###reference_b41###, 34 ###reference_b34###, 37 ###reference_b37###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###] have been studied in typical FL scenarios.\nIn order to alleviate the negative effect of system heterogeneity, some works [43 ###reference_b43###, 44 ###reference_b44###, 42 ###reference_b42###] investigate optimizing the batch sizes of different workers.\nIn addition, other works [40 ###reference_b40###, 34 ###reference_b34###, 37 ###reference_b37###] propose to employ worker selection to simultaneously address heterogeneity issues.\nFor example, PyramidFL [34 ###reference_b34###] proposes a fine-grained worker selection strategy that focuses on the divergence between the selected and remaining workers to fully exploit the computing resources and data of different workers.\nHowever, these FL approaches cannot be directly applied to SFL, since workers maintaining only the bottom models in SFL must complete the whole training procedure by interacting with the top model residing on the PS.\nTo better address the heterogeneity challenges in SFL, we review the distinct properties of SFL compared to those of FL, and propose a novel SFL framework, termed MergeSFL.\nThe design of MergeSFL is based on two fundamental observations.\n1) In SFL, the top model can be regarded as a classifier [36 ###reference_b36###, 45 ###reference_b45###],\nand the features derived from non-IID data always mislead the convergence direction of the top model, leading to the degradation of model accuracy.\nAs illustrated in Section II-B 
###reference_###, if we merge the features from different workers to form a mixed feature sequence, which is approximately equivalent to the features derived from an IID mini-batch, the top model will be updated along a reasonably optimal direction.\n2) Inspired by previous FL works [44 ###reference_b44###, 43 ###reference_b43###], assigning appropriate batch sizes to different workers helps accommodate their diverse capacities.\nFor example, if the workers with high computing capacities are assigned large batch sizes [44 ###reference_b44###], the time spent performing forward and backward propagation can be essentially the same across workers, and the system heterogeneity is expected to be addressed.\nMotivated by the above insights, MergeSFL builds an efficient SFL system by combining feature merging and batch size regulation.\nThe difficulty of system design lies in the interactions between feature merging and batch size regulation.\nOn one hand, to make full use of local data across workers, it is desirable to collect enough features (indicating large batch sizes) from different workers at each iteration.\nHowever, considering resource limitations and system heterogeneity, MergeSFL needs to assign appropriate (but relatively small) batch sizes to the workers to balance their training time.\nOn the other hand, since the merged feature sequence is composed of the mini-batches from different workers, given workers with diverse batch sizes, MergeSFL should dynamically select suitable workers and arrange their features to form a large IID mini-batch.\nOnly by jointly optimizing feature merging and batch size regulation can MergeSFL effectively tackle the heterogeneity challenges and realize efficient SFL, which, to the best of our knowledge, has not been investigated in the existing literature.\nIn a nutshell, the main contributions of this paper are summarized as follows:\nWe review the characteristic properties of 
SFL, and propose a novel SFL framework, termed MergeSFL, which incorporates feature merging and batch size regulation to overcome the challenges of system and statistical heterogeneity.\nWe analyze the joint influence of feature merging and batch size regulation on training performance and derive their coupled relationship.\nThen, MergeSFL dynamically selects and arranges a subset of workers under the heterogeneity restrictions,\nthereby promoting model accuracy as well as training efficiency.\nThe performance of MergeSFL is evaluated on a physical platform with a total of 80 NVIDIA Jetson edge devices.\nThe experimental results show that MergeSFL improves the final model accuracy by 5.82% to 26.22%, with a speedup of about 1.39× to 4.14×, compared to the baselines.\nThe rest of the paper is organized as follows.\nSection II ###reference_### presents the background of split federated learning, and introduces the motivations of our MergeSFL system.\nSection III ###reference_### illustrates the overview of MergeSFL.\nThen we elaborate on the detailed design of\nMergeSFL in Section IV ###reference_###.\nThe experimental evaluation is presented in Section V ###reference_###.\nWe review some related works in Section VI ###reference_### and conclude the paper in Section VII ###reference_###."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Background and Motivation",
+ "text": "###figure_1###"
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Split Federated Learning",
+ "text": "Considering an EC system with a parameter server (PS) and a set of workers, split federated learning (SFL) is proposed to perform deep learning tasks through a loose federation of workers coordinated by the PS.\nThe basic idea of SFL is to split the model into two submodels at the split layer, i.e., a bottom (sub-)model and a top (sub-)model.\nFor ease of description, we take the CNN model as an example.\nThe bottom model usually consists of the input layer and convolutional layers, whereas the top model includes fully-connected layers and the output layer.\nIn SFL, the PS maintains the top model, while each worker trains a bottom model using its local data.\nBesides, the workers complete the whole training procedure by interacting with the top model residing on the PS.\nThe goal of SFL is to find the optimal model that minimizes the overall loss function, which combines the loss of the bottom model on each worker with that of the top model.\nDue to the intrinsic complexity of most deep learning tasks, it is usually challenging to obtain a closed-form solution of Eq. (1 ###reference_###).\nNevertheless, Eq. 
(1 ###reference_###) can be solved by the mini-batch stochastic gradient descent (SGD) algorithm in SFL [13 ###reference_b13###, 14 ###reference_b14###].\nFor ease of expression, some important notations in this paper are listed in Table I ###reference_###.\n###table_1### The basic training process of SFL involves three main stages, i.e., forward/backward propagation of the worker-specific bottom models, forward/backward propagation of the top model, and global aggregation of the bottom models on the PS.\nFirstly, each worker performs forward propagation with a batch of data samples, and delivers the features (also called smashed data) of the split layer to the PS.\nSubsequently, the PS performs forward/backward propagation to update the top model.\nThen, the PS sends the backpropagated gradients back to the workers, which update the bottom models by performing backward propagation.\nSuch a complete process of forward/backward propagation is regarded as a local iteration.\nAfter several local iterations, the PS aggregates the bottom models from all workers and sends the aggregated bottom model back to the workers for further training.\nThe above whole training process is regarded as a communication round.\nAt each iteration, every worker updates its bottom model by subtracting the learning rate times the stochastic gradient computed on its mini-batch.\nAccordingly, the top model is updated at each iteration with the stochastic gradient of the top loss function, computed on the outputs of the bottom models (i.e., the features).\nAfter local updating, the PS receives and aggregates the bottom models from all workers."
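The per-iteration interaction described above (bottom forward on the worker, top forward/backward on the PS, gradient dispatch at the split layer, bottom backward on the worker) can be sketched end to end. Below is a minimal NumPy sketch assuming a toy linear bottom/top model with MSE loss; all names, shapes and hyperparameters are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sfl_iteration(Wb, Wt, X, y, lr):
    """One SFL local iteration for a toy linear split model."""
    n = X.shape[0]
    # Worker side: bottom-model forward pass; H is the split-layer
    # feature ("smashed data") delivered to the PS.
    H = X @ Wb
    # PS side: top-model forward pass and MSE loss gradient.
    err = (H @ Wt) - y
    grad_Wt = H.T @ err / n          # top-model gradient, applied on the PS
    grad_H = err @ Wt.T / n          # gradient dispatched back at the split layer
    # Worker side: bottom-model backward pass with the dispatched gradient.
    grad_Wb = X.T @ grad_H
    return Wb - lr * grad_Wb, Wt - lr * grad_Wt

# Synthetic regression task to check that the split updates reduce the loss.
X = rng.normal(size=(64, 8))
y = X @ rng.normal(size=(8, 1))
Wb, Wt = rng.normal(size=(8, 4)) * 0.1, rng.normal(size=(4, 1)) * 0.1
loss = lambda: float(np.mean((X @ Wb @ Wt - y) ** 2))
loss_before = loss()
for _ in range(300):
    Wb, Wt = sfl_iteration(Wb, Wt, X, y, lr=0.05)
loss_after = loss()
```

Note how neither side ever sees the other's submodel: only features and split-layer gradients cross the boundary, which is the property SFL exploits for privacy and load reduction.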
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Importance of Feature Merging",
+ "text": "Different from the assumption of an IID data distribution in traditional centralized training, the distributions of local data on geographically diverse workers vary significantly, which causes the non-IID issue (i.e., statistical heterogeneity) and deteriorates the training performance [4 ###reference_b4###, 30 ###reference_b30###].\nIn typical SFL (e.g., LocFedMix-SL [16 ###reference_b16###]), the PS directly applies the features of each worker to complete the forward/backward propagation of the top model, and sends the corresponding gradients back to the workers in sequence.\nHowever, the features of different workers with non-IID data may hinder the model from being updated along the optimal directions, which reduces model accuracy.\nTo tackle the non-IID issue, we introduce the strategy of feature merging in typical SFL.\nThe difference between typical SFL and SFL with feature merging is illustrated in Fig. 1 ###reference_###.\nThe idea of feature merging is to merge the features (with small batch sizes) from different workers to form one mixed feature sequence (with a large batch size), which is approximately equivalent to the features derived from an IID mini-batch and is utilized to conduct forward/backward propagation of the top model.\nSubsequently, the PS segments the mixed large-size gradient into multiple small-size gradients corresponding to each worker, and then dispatches the gradients back to the workers to update the bottom models.\nWith feature merging, both the top model and bottom models will be updated along relatively accurate directions in the optimization space, which contributes to improving model accuracy.\n###figure_2### ###figure_3### To demonstrate the effectiveness of feature merging, we conduct a set of pre-experiments for training AlexNet on 10 workers with typical SFL (denoted as SFL-T for short) and SFL with feature merging (denoted as SFL-FM).\nWe distribute non-IID data samples from the CIFAR-10 
dataset to the participating workers, whose combined data is IID.\nWe record the training process and final test accuracy of models trained with SFL-T and SFL-FM.\nAs shown in Figs. 2 ###reference_### and 3 ###reference_###, SFL-FM improves test accuracy by about 18.2%, compared to SFL-T.\nFurthermore, we perform one iteration of model updating with SFL-FM and SFL-T, respectively, and investigate the effects of feature merging on gradients.\nThe iteration starts with the same top and bottom models, and the mini-batches across workers are non-IID, while the union of the mini-batches follows an IID distribution.\nIn Fig. 4(a) ###reference_sf1###, the backpropagated gradients from the top models of SFL-T and SFL-FM are visualized in the 2D vector space by performing principal component analysis (PCA) [46 ###reference_b46###].\nBesides, we also conduct standalone model training of the entire model (i.e., a combination of the top and bottom models) for one iteration w.r.t. the above (IID) mini-batch union.\nThe gradient derived by the standalone SGD generally indicates the right optimization direction.\nMoreover, the gradients corresponding to the bottom models of three randomly selected workers are illustrated in Fig. 4(b) ###reference_sf2###, where the dashed arrows and solid arrows denote the gradient vectors in SFL-T and SFL-FM, respectively.\n###figure_4### ###figure_5### ###figure_6### ###figure_7### From Fig. 4(a) ###reference_sf1###, we observe that the gradient derived by SFL-FM is much closer to that of the standalone SGD, since the role of feature merging in SFL-FM can be regarded as a regularization operation for the gradients, which ensures updating the top model along a much more accurate direction than SFL-T.\nBesides, the dispatched gradients in Fig. 4(b) ###reference_sf2### also exhibit quite different directions compared to the gradients of SFL-T, and help to update the bottom models more efficiently, as validated in Fig. 2(a) ###reference_sf1###.\nIn a nutshell, SFL-FM enables a faster convergence rate and higher test accuracy for the trained model than SFL-T, which demonstrates the advantages of feature merging in addressing statistical heterogeneity.\n###figure_8###"
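The merge-then-segment mechanics behind SFL-FM can be sketched in a few lines. This is a hedged illustration assuming per-worker feature batches of different sizes; the function names are ours, not the paper's.

```python
import numpy as np

def merge_features(feature_batches):
    # PS side: concatenate per-worker feature batches into one mixed
    # sequence, remembering each worker's batch size for later segmentation.
    sizes = [f.shape[0] for f in feature_batches]
    return np.concatenate(feature_batches, axis=0), sizes

def segment_gradients(merged_grad, sizes):
    # PS side: split the large-batch gradient back into per-worker pieces
    # so each worker can run backward propagation on its own mini-batch.
    return np.split(merged_grad, np.cumsum(sizes)[:-1], axis=0)

rng = np.random.default_rng(1)
# Three workers with different (small) batch sizes and 16-dim split-layer features.
batches = [rng.normal(size=(b, 16)) for b in (8, 12, 4)]
merged, sizes = merge_features(batches)          # mixed sequence of 24 samples
per_worker_grads = segment_gradients(rng.normal(size=merged.shape), sizes)
```

Because the top model's forward/backward pass runs once on the concatenated sequence, it sees a mini-batch whose composition can be made close to IID even though each individual worker's batch is skewed.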
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "II-C Importance of Batch Size Regulation",
+ "text": "Due to system heterogeneity, the computing time of bottom models and the transmission time of features at each iteration may vary significantly across workers.\nIf the workers are assigned identical and fixed batch sizes at each iteration, the fast workers are forced to wait for the slow ones, incurring idle waiting time and inevitably impairing the training efficiency.\nAccordingly, we propose to adaptively assign different batch sizes to workers with diverse capacities, termed batch size regulation, so as to greatly reduce the waiting time and address the system heterogeneity.\nGenerally, the workers with higher computing and communication capabilities are configured with larger batch sizes, and can process more data at each iteration, while those with lower capabilities are assigned smaller batch sizes.\nTo illustrate the efficiency of SFL with batch size regulation (denoted as SFL-BR), we record the average waiting time and completion time of model training with SFL-BR and SFL-T.\nAs shown in Figs. 2 ###reference_### and 3 ###reference_###, with the help of batch size regulation, SFL-BR only takes 5,943s to achieve the target accuracy of about 65%, while SFL-T takes 10,585s to reach a similar accuracy.\nBesides, as shown in Fig. 2(b) ###reference_sf2###, SFL-BR reduces the average per-round waiting time by about 67%, compared to SFL-T, which demonstrates its superiority in addressing system heterogeneity."
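A minimal sketch of the regulation idea, assuming a worker's per-iteration duration is simply its batch size times a per-sample compute-plus-transmit time; the helper name and the numbers are illustrative assumptions.

```python
def regulate_batch_sizes(per_sample_times, max_batch=64):
    """Give the fastest worker max_batch samples per iteration and scale
    the others so that duration = batch_size * per_sample_time aligns."""
    fastest = min(per_sample_times)
    return [max(1, round(max_batch * fastest / t)) for t in per_sample_times]

# Illustrative per-sample compute+transmit times (seconds) for four workers.
times = [0.01, 0.02, 0.05, 0.04]
batch_sizes = regulate_batch_sizes(times)
durations = [b * t for b, t in zip(batch_sizes, times)]
```

With this rule the per-iteration durations end up nearly equal, so no worker sits idle at the synchronization barrier while still processing as much data as its capacity allows.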
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "II-D Discussion",
+ "text": "Motivated by the above findings, it is necessary to incorporate feature merging and batch size regulation in SFL to simultaneously cope with system and statistical heterogeneity.\nGiven workers with diverse batch sizes matching their heterogeneous capacities, it is usually infeasible to directly merge the features from all workers for model training, since the underlying distribution of the mixed feature sequence may deviate far from that of the features derived from IID data.\nAn intuitive way is to select and arrange a subset of workers, whose combined data approximately follows the IID assumption, for feature merging.\nHowever, there may still exist a large gap between the distribution of the merged features and the IID distribution.\nSince the batch size influences both the distribution of the merged features and the computing/communication overhead, we further reconfigure the batch sizes of the selected workers to better tackle statistical and system heterogeneity.\nMoreover, the inherent resource limitation in EC complicates the optimization problem of SFL, which is elaborated in Section IV ###reference_###."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III System Overview",
+ "text": "As illustrated in Fig. 5 ###reference_###, MergeSFL consists of two key modules, i.e., the control module and the training module.\nAt the beginning of each round, the control module of the PS first collects the state information (e.g., label distribution, computing and communication capabilities) of all candidate workers ( 1).\nSubsequently, the control module estimates the worker states to dynamically select a subset of workers with the consideration of statistical and system heterogeneity.\nOnce the control module has made the decision, it assigns the configurations of feature merging and batch size ( 2) to the selected workers, which are further distributed with the bottom models ( 3) and activated for model training.\nIn the training module, the selected workers train the bottom models using their local data and interact with the PS to update the top model at each iteration.\nConcretely, each selected worker performs forward propagation with its assigned batch size, and then pushes the features ( 4) to the PS.\nAt each iteration, the PS strives to obtain a large-size mixed feature sequence by merging the features from multiple workers to overcome the statistical heterogeneity.\nAfterwards, the PS completes forward and backward propagation with the mixed feature sequence to update the top model.\nFollowing the backward propagation of the top model, the PS divides the mixed large-size gradients into small gradients corresponding to each worker.\nSubsequently, the PS dispatches the corresponding gradients ( 5) back to the selected workers.\nAfter a certain number of local iterations, the PS aggregates the bottom models from all selected workers ( 6) to get the updated bottom model for next-round training.\nIt is worth noting that, since the selected workers are configured with different batch sizes, the bottom models are assigned adaptive aggregation weights related to their batch sizes, so as to guarantee the convergence of the bottom model when performing 
model aggregation."
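The batch-size-aware aggregation in step 6 can be sketched as follows, assuming each bottom model is a flat parameter vector; weighting each model by its batch size is our reading of "aggregation weights related to their batch sizes", not a confirmed formula from the paper.

```python
import numpy as np

def aggregate_bottom_models(models, batch_sizes):
    # Weighted average of bottom-model parameters, with weights
    # proportional to each worker's batch size (illustrative assumption).
    w = np.asarray(batch_sizes, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

# Two toy "models" as flat parameter vectors; the larger-batch worker
# contributes proportionally more to the aggregate.
models = [np.full(4, 1.0), np.full(4, 3.0)]
aggregated = aggregate_bottom_models(models, [16, 48])
```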
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV System Design",
+ "text": "In this section, we elaborate on the detailed design of the control and training modules in MergeSFL."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Control Module",
+ "text": "In each communication round, the control module of the PS estimates the state information of all available workers, and further generates specific configurations of feature merging and batch size for the workers upon their heterogeneous properties.\nThe workers that meet the selection requirements are arranged to participate in the model training.\nWorker State Estimation. \nIn order to make effective decisions, it is crucial to collect information about the current working states of the PS (e.g., ingress bandwidth) as well as all workers (e.g., label distribution, computing and communication capabilities).\nConcretely, in EC, the available ingress bandwidth of the PS is usually limited in each round, and the PS always consumes a large portion of bandwidth to exchange features/gradients with the workers.\nIt is essential to ensure that the occupied bandwidth of the PS does not exceed the bandwidth budget to prevent the PS from becoming a bottleneck.\nTherefore, at the beginning of each round, estimating the available ingress bandwidth becomes vital to determine the number of selected workers and their corresponding batch sizes in MergeSFL.\nWe analyze the statistical distribution of the ingress bandwidth based on the behavior of the PS in the previous rounds, and employ the statistical results to estimate the available ingress bandwidth in the current round.\nTo update the model along a relatively optimal direction and tackle the non-IID issue, MergeSFL merges the features from different workers to form a mixed feature sequence, which is approximately equivalent to the features derived from IID data.\nHerein, the label distribution, a vector V that parameterizes a categorical distribution of class labels over all classes, is required to assist the implementation of feature merging.\nAs the workers deliver the features with corresponding labels to continue forward propagation of the top model on the PS in typical SFL [13 ###reference_b13###, 14 
###reference_b14###, 12 ###reference_b12###], the PS can directly collect the labels of workers\u2019 features and obtain the label distribution of each worker w.r.t. the mini-batch.\nHowever, if the workers are unwilling or not permitted to share labels, each worker will derive the label distribution based on its whole local data and report it to the PS before training, which protects the label information of specific samples from being exposed.\nConsidering that privacy leakage is an important challenge in SFL, some popular privacy protection techniques, e.g., Differential Privacy [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###], the Distance Correlation Technique [47 ###reference_b47###] and Homomorphic Encryption [22 ###reference_b22###], can be applied to protect the privacy of raw data, features/gradients and models, and are orthogonal to the main focus of MergeSFL.\nThe estimation of the time-varying computing and communication capacities of workers is critical for MergeSFL to develop appropriate strategies of batch size regulation for participating workers.\nAs proxy metrics, we adopt the computing time for processing one data sample and the corresponding transmission time, which can be recorded by the workers directly during model training, to indicate the computing and communication capacities of each worker in each round, respectively.\nPrior to starting model training in each round, the PS collects the latest computing time and transmission time from all workers.\nBesides, we introduce a moving average over the historical states of workers to improve the robustness of the estimation [48 ###reference_b48###].\nAccordingly, the PS estimates the computing time and the corresponding transmission time for each worker in the current round by calculating a moving average with a smoothing factor (set empirically in our experiments).\nBesides, it is worth noting that advancing the techniques for state estimation is not our focus, and other existing advanced estimation techniques [49 
###reference_b49###, 50 ###reference_b50###] can be applied in MergeSFL.\nInput: the estimated state information of all workers and the bandwidth budget.\nOutput:\nThe worker set with specific configurations for training.\nWorker Arrangement and Configuration. \nBased on the estimated state information, the control module tries to select and arrange a subset of workers, which are dynamically configured with appropriate batch sizes and a feature merging plan.\nThe detailed process is presented in Alg. 1 ###reference_###.\nThe local updating frequency is fixed and identical for all workers during training, as in typical SFL like LocFedMix-SL [16 ###reference_b16###].\nTherefore, given the estimated computing time and the corresponding transmission time of each worker, we formulate its duration time (including computing and communication time) in a round, which scales with its batch size.\nWhen training with all workers, the completion time of a round equals the duration time of the slowest worker.\nThus, the waiting time of a worker is the difference between the completion time and its own duration time, and the average waiting time of all workers is formulated accordingly.\nTo minimize the average waiting time, MergeSFL regulates the batch sizes of all workers so as to align their duration times.\nThis ensures that the average waiting time will be small enough to mitigate the negative impacts of the synchronization barrier and improve the training efficiency.\nThe regulation rule assigns the fastest worker the default maximum batch size and scales the batch sizes of the other workers to match its duration time.\nAccording to Eq. (9 ###reference_###), we can obtain the specific batch sizes for all workers in the round (Lines 1-2 of Alg. 
1 ###reference_###).\nDue to the constraint of the available ingress bandwidth in each round, it is actually infeasible to allow all the workers to participate in training.\nTherefore, MergeSFL selects workers and constructs the worker set for feature merging and model training.\nThe bandwidth occupied by the PS communicating with the workers in this set is limited by the bandwidth budget, where each worker\u2019s share is proportional to its batch size and the bandwidth occupied by transmitting the features of one data sample.\nTo tackle the non-IID issue, the mixed feature sequence needs to be approximately equivalent to the features derived from IID data.\nWe first define the IID reference distribution; if the data of all workers follows the IID distribution, it can be obtained from the label distributions of the workers.\nConsidering the selected worker set in a round, the label distribution of the data from its workers is the mixture of their individual label distributions.\nThe mixed feature sequence of the worker set for feature merging is expected to meet the requirement that its label distribution is approximately consistent with the IID distribution.\nWe introduce the KL-divergence to measure the gap between the two distributions [51 ###reference_b51###, 52 ###reference_b52###].\nIn order to balance the contribution of all workers to model training, we define the participating frequency to keep track of the number of times that each worker engages in training.\nThen, the priority of selecting a worker for future training is defined such that the workers with small participating frequencies have a large priority to be selected.\nWe employ the genetic algorithm (GA) [53 ###reference_b53###, 54 ###reference_b54###, 55 ###reference_b55###] to construct the worker set with the minimum KL-divergence under the resource constraint in Eq. (10 ###reference_###).\nIn particular, we select a number of workers based on their priority as the initial population, and\nencode each gene as whether the worker is selected or not (Lines 3-5 of Alg. 
1 ###reference_###).\nIn practice, there may still exist a large gap between the label distribution of the worker set and the IID distribution.\nThus, we need to continue regulating the batch sizes of the workers in this set to further minimize the KL-divergence and ensure that it falls below a predefined threshold (close to zero).\nHowever, the regulation of batch sizes inevitably violates Eq. (9 ###reference_###) and increases the average waiting time of the worker set.\nThe increased waiting time at each iteration depends on the difference between each worker\u2019s batch size before and after batch size regulation.\nWe fine-tune the batch sizes so as to minimize the increased waiting time under the KL-divergence constraint.\nTo this end, we formulate the above problem as a Lagrange dual problem, which can be well solved as in [56 ###reference_b56###, 57 ###reference_b57###] (Line 6 of Alg. 1 ###reference_###).\nAfter that, we scale up or down the batch sizes proportionally to maximize the utilization of the bandwidth resource under the constraint in Eq. (10 ###reference_###) (Line 7 of Alg. 1 ###reference_###)."
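The selection criterion can be illustrated with a small sketch: mix per-worker label distributions (here weighted by batch size, which is our assumption) and score the mixture against an IID reference via KL-divergence. The uniform reference and all names are illustrative, not the paper's notation.

```python
import numpy as np

def mixed_label_distribution(label_dists, batch_sizes):
    # Label distribution of the merged feature sequence: a batch-size
    # weighted mixture of per-worker label distributions (assumption).
    w = np.asarray(batch_sizes, dtype=float)
    return (w @ np.asarray(label_dists)) / w.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) with clipping to avoid log(0).
    p = np.clip(np.asarray(p, dtype=float), eps, None)
    q = np.clip(np.asarray(q, dtype=float), eps, None)
    return float(np.sum(p * np.log(p / q)))

# Two complementary skewed workers: each is far from uniform (our stand-in
# for the IID reference), but their equal-batch mixture is exactly uniform.
iid_ref = np.array([0.5, 0.5])
d1, d2 = np.array([0.8, 0.2]), np.array([0.2, 0.8])
mix = mixed_label_distribution([d1, d2], [32, 32])
```

A search procedure such as the GA described above would evaluate candidate worker sets with exactly this kind of score, preferring sets whose mixture drives the divergence toward zero.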
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-B Training Module",
63
+ "text": "After the control module generates the worker selection decision, it distributes the feature merging and batch size configurations to the selected workers, which are employed to guide the subsequent model training process.\nBesides, the PS broadcasts the latest bottom models to the selected workers and runs the training module.\nThe training module consists of four phases, i.e., bottom model training, feature merging, gradient dispatching and bottom model aggregation.\nBottom Model Training.\nFor a certain worker in round , we adopt the mini-batch SGD algorithm with batch size to update the bottom model.\nIn order to guarantee model convergence, we formulate a rule to guide the setting of worker-specific local learning rate , which is proportional to the batch size of each worker as suggested in [44 ###reference_b44###].\nAccordingly, the process of updating the bottom model at iteration is expressed as:\nFeature Merging.\nOnce worker performs forward propagation at iteration in round , it delivers its features and the corresponding labels to the PS.\nIn terms of the feature merging configuration, the PS will merge the received features from the selected workers and obtain a mixed feature sequence, which is expected to be the features derived from an IID mini-batch.\nThe mixed feature sequence from workers (including from worker to worker in the worker set ) is denoted as .\nThus, the PS performs forward/backward propagation with the mixed feature sequence to update the top model at iteration in round as follows:\nwhere and is the learning rate of top model in round .\nIf the workers only deliver their features without labels, the PS would return the output logits of the top model to the workers for calculating the loss locally.\nThen the workers send the loss back to the PS for calculating the gradients and completing the backward propagation.\nGradient Dispatching.\nAfter performing backward propagation of the top model, the PS obtains the mixed 
backpropagated gradients at iteration in round .\nIn order to correctly update the bottom models of different workers, it is necessary for the workers to obtain the gradients corresponding to their features uploaded at the feature merging phase.\nConcretely, the PS first segments the mixed large-size gradients into multiple small-size gradients for the selected workers, including from worker to worker in worker set .\nThen, the PS dispatches the corresponding gradients to the selected workers for completing backward propagation according to Eq. (15 ###reference_###).\nBottom Model Aggregation.\nAfter performing totally iterations in round , the selected workers in worker set push their bottom models to the PS for central aggregation.\nConsidering the workers are configured with different batch sizes for model training, the bottom models are updated and trained to varying degrees, which calls for adaptive weight aggregation to guarantee the performance of the aggregated bottom model [38 ###reference_b38###, 58 ###reference_b58###].\nTherefore, the PS aggregates the bottom models with adaptive weights related to the batch sizes of selected workers as follows:\nThe aggregated bottom model is stored in the PS and will be distributed to future selected workers to continue further training, or be combined with the top model to form a complete model used for AI tasks."
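The merging, dispatching, and aggregation bookkeeping above can be illustrated with a small NumPy sketch. This is a simplified stand-in for the PyTorch training module, with hypothetical function names: features are concatenated into one mixed sequence in worker order, the top model's backpropagated gradient is split back along the same slice boundaries, and bottom models are averaged with batch-size-proportional weights.

```python
import numpy as np

def merge_features(features):
    """Concatenate per-worker feature batches into one mixed sequence,
    remembering each worker's slice size for later dispatching."""
    sizes = [f.shape[0] for f in features]
    return np.concatenate(features, axis=0), sizes

def dispatch_gradients(merged_grad, sizes):
    """Segment the mixed backpropagated gradient so each worker receives
    exactly the rows matching the features it uploaded."""
    cuts = np.cumsum(sizes)[:-1]
    return np.split(merged_grad, cuts, axis=0)

def aggregate_bottom_models(models, batch_sizes):
    """Adaptive aggregation: each bottom model (here a dict of parameter
    arrays) is weighted by its worker's batch size."""
    w = np.asarray(batch_sizes, dtype=float)
    w /= w.sum()
    return {k: sum(wi * m[k] for wi, m in zip(w, models))
            for k in models[0]}
```

Because `dispatch_gradients` reuses the sizes recorded by `merge_features`, the gradient slice each worker receives is guaranteed to match the feature rows it uploaded, which is the property the gradient dispatching phase relies on.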
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Experiments and Evaluation",
69
+ "text": "###table_2###"
70
+ },
71
+ {
72
+ "section_id": "5.1",
73
+ "parent_section_id": "5",
74
+ "section_name": "Experimental Settings",
75
+ "text": "System Deployment.\nWe conduct extensive experiments to evaluate the performance of MergeSFL on an edge computing hardware prototype system.\nSpecifically, we employ a deep learning GPU workstation as the PS, which is equipped with an Intel(R) Core(TM) i9-10900X CPU, four NVIDIA GeForce RTX 2080Ti GPUs and 256 GB RAM.\nIn addition, we specify 80 NVIDIA Jetson kits,\nincluding 30 Jetson TX2 devices, 40 Jetson NX devices, and 10 Jetson AGX devices, as workers to construct a heterogeneous system.\nThe detailed technical specifications of Jetson TX2, NX and AGX are listed in Table II ###reference_###.\nNotably, the TX2 showcases a 256-core Pascal GPU and a CPU cluster consisting of a 2-core Denver2 and a 4-core ARM CortexA57.\nThe NX is outfitted with a 384-core NVIDIA Volta GPU and a 6-core NVIDIA Carmel ARMv8.2 CPU.\nRunning the NVIDIA software stack, the Jetson Xavier NX delivers up to 10× the performance of the Jetson TX2.\nLastly, the AGX stands out with a 512-core NVIDIA Volta GPU and an 8-core NVIDIA Carmel ARMv8.2 CPU.\nIn the experiments, we build the software platform based on Docker Swarm [59 ###reference_b59###, 60 ###reference_b60###] and the PyTorch deep learning library [61 ###reference_b61###].\nThe Docker Swarm, a distributed software development kit, facilitates the construction of a distributed system and enables the monitoring of each device\u2019s operational status.\nThe PyTorch library facilitates the implementation of model training on devices.\nAdditionally, to streamline communication among devices, we implement MPI (Message Passing Interface) [62 ###reference_b62###], which includes a collection of sending and receiving functions.\nThe experimental source code is available at https://github.com/ymliao98/MergeSFL.\n###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### Setting of System Heterogeneity.\nTo enable the workers with heterogeneous computing and communication 
capabilities, we present the following experimental settings.\n1) For Computation.\nAll the Jetson TX2, NX and AGX can be configured to work with different modes, specifying the number of working CPUs and the frequency of CPU/GPU, so that they can work with different computing capacities.\nSpecifically, TX2 can work in one of four modes while NX and AGX work in eight modes each.\nFor instance, the AGX with highest performance mode (i.e., mode 0 of AGX) achieves training about 100× faster than the TX2 with lowest performance mode (i.e., mode 1 of TX2).\nTo further reflect the time-varying on-device resources, we randomly change the modes for devices every 20 communication rounds.\n2) For Communication.\nAll devices are connected to the PS via WiFi routers.\nWe group the devices into four groups, each containing 20 devices.\nThese groups are then placed at different locations, i.e., 2m, 8m, 14m, and 20m away from the WiFi routers.\nDue to random channel noise and competition among devices, the bandwidth between the PS and devices dynamically varies during the training.\nThe bandwidth of devices is measured by iperf3 [63 ###reference_b63###], which fluctuates between 1Mb/s and 30Mb/s.\nApplications and Models.\nWe evaluate the performance of MergeSFL on four classical datasets and four DNN models.\n1) Human Activity Recognition.\nWe adopt the Human Activity Recognition (HAR) dataset [64 ###reference_b64###] in this application, which is collected from 30 individuals and includes 7,352 samples for training and 2,947 for test.\nWe train a plain CNN model [4 ###reference_b4###] with three 5×5 convolutional layers and two fully-connected layers, which is tailored to the HAR dataset and represented as CNN-H.\n2) Speech Recognition.\nThe Google Speech dataset [65 ###reference_b65###] (expressed as Speech for short) is adopted for the task of speech recognition, which allows a computer or device to recognize and interpret spoken language.\nThe dataset includes 85,511 and 4,890 
audio clips for training and test, respectively.\nThe model trained on Speech is a CNN network (denoted as CNN-S) with four 1-D convolutional layers and one fully-connected layer.\n3) Object Recognition.\nWe adopt the CIFAR-10 dataset [66 ###reference_b66###] for the evaluation, which is an image dataset composed of 60,000 32×32 colour images (50,000 for training and 10,000 for test) across 10 categories.\nWe utilize an 8-layer AlexNet with size of 136MB [67 ###reference_b67###] for CIFAR-10.\nThe AlexNet is composed of three 3×3 convolutional layers, one 7×7 convolutional layer, one 11×11 convolutional layer, two fully-connected hidden layers, and one softmax output layer.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### 4) Image Classification.\nImageNet [68 ###reference_b68###] is a dataset for image recognition that consists of 1,281,167 training images, 50,000 validation images and 100,000 test images from 1000 categories.\nTo adapt to the resource-constrained workers, we create a subset of ImageNet, called IMAGE-100, which contains 100 out of 1,000 categories, and each sample is resized with the shape of 64×64×3.\nFor the most complex tasks, we adopt a famous large model VGG16 with size of 321MB [18 ###reference_b18###], which is much larger than the size of AlexNet, to classify the images in IMAGE-100.\nThe VGG16 consists of 13 convolutional layers with 3×3 kernels, two fully-connected layers and a softmax output layer.\nSetting of Statistical Heterogeneity.\nIn the experiments, training samples of each worker are drawn independently according to a vector v.\nTo create non-IID datasets, we draw from a Dirichlet distribution [69 ###reference_b69###, 70 ###reference_b70###], i.e., , where q characterizes a prior class distribution, and is a concentration parameter controlling the identicalness among workers.\nWith , all workers have identical distributions to the prior class distribution (i.e., 
IID); with , each worker holds data samples from only one class, which indicates high degree of statistical heterogeneity.\nWe specify 6 values (e.g., , 1, 0.5, 0.25, 0.2, 0.1) for to generate different data distributions that cover a spectrum of identicalness, and define (i.e., ) to quantify the non-IID levels.\nThe degree of statistical heterogeneity increases as increases, and = 0 is a special case of IIDness.\nBaselines.\nWe measure the effectiveness of MergeSFL through a comparison with three baselines.\n1) FedAvg [4 ###reference_b4###] is a famous FL approach that trains the entire models on all participating workers using identical batch size, and aggregates them to derive a global model.\n2) LocFedMix-SL [16 ###reference_b16###] is a typical and advanced SFL approach.\nIt proposes to reduce the aggregation frequency of bottom models to save the traffic consumption, but can not fully utilize the capacities of heterogeneous workers.\n3) AdaSFL [17 ###reference_b17###] is a state-of-the-art SFL approach.\nIt assigns adaptive and diverse batch sizes for different workers to address system heterogeneity, but still cannot deal with the statistical heterogeneity.\n4) PyramidFL [34 ###reference_b34###] is a state-of-the-art FL approach with fine-grained worker selection, and it focuses on the divergence between the selected and the unselected workers to fully exploit the computing resource and data of different workers.\nMetrics.\nWe adopt the following metrics to evaluate the performance of MergeSFL and the baselines.\n1) Test Accuracy reflects the accuracy of the models trained by different approaches on the test datasets, and is measured by the proportion of the data correctly predicted by the models to all the test data.\nSpecifically, we evaluate the test accuracy of the global model (a combination of the bottom and top models in SFL) in each round, and record the final test accuracy for different approaches.\n2) Time-to-Accuracy is denoted as the total wall 
clock time taken for training a model to achieve a target accuracy (i.e., training time).\nFor fair comparison, we set the target accuracy as the achievable accuracy by all approaches.\nWe record the completion time of each round and sum up to get the total training time.\nIn addition, we also record the average waiting time to reflect the training efficiency of different approaches.\n3) Network Traffic is calculated by summing the traffic for transmitting models or features/gradients between the PS and workers when achieving a target accuracy.\nExperimental Parameters.\nBy default, each set of experiments will run 150 communication rounds for CNN-H, and 250 communication rounds for CNN-S, AlexNet and VGG16.\nFor CNN-H, the learning rate is initialized as 0.1 and the decay rate is specified as 0.98.\nThe learning rates and decay rates for CNN-S, AlexNet and VGG16 are identical, and are initialized as 0.1 and 0.993 [29 ###reference_b29###, 71 ###reference_b71###], respectively.\nWe set the local updating frequency =10 for CNN-H, =30 for CNN-S and AlexNet, =40 for VGG16.\nFor the SFL approaches, we separately split the CNN-H, CNN-S, AlexNet, and VGG16 at the 3rd, 4th, 5th, and 13th layer."
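The Dirichlet-based data partition used in these experiments can be reproduced with a short NumPy sketch. The helper below is an illustrative reconstruction, not the paper's code (its name and seeding are ours): each class's sample indices are split across workers according to Dirichlet(alpha) proportions, so a small alpha concentrates a class on few workers (highly non-IID) while a large alpha approaches IID.

```python
import numpy as np

def dirichlet_partition(labels, num_workers, alpha, seed=0):
    """Split sample indices among workers: for each class, draw mixing
    proportions from Dirichlet(alpha) and cut that class's (shuffled)
    indices accordingly."""
    rng = np.random.default_rng(seed)
    shards = [[] for _ in range(num_workers)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(num_workers))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for shard, part in zip(shards, np.split(idx, cuts)):
            shard.extend(part.tolist())
    return shards
```

Every index is assigned to exactly one worker, so the shards always form a partition of the dataset regardless of the skew that alpha induces.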
76
+ },
77
+ {
78
+ "section_id": "5.2",
79
+ "parent_section_id": "5",
80
+ "section_name": "Overall Performance",
81
+ "text": "Firstly, we conduct a set of experiments on the IID datasets to evaluate the performance of MergeSFL and the baselines.\nThe training processes of these approaches are presented in Fig. 6 ###reference_###.\nBy the results, all the approaches achieve similar test accuracy eventually.\nHowever, MergeSFL achieves the fastest convergence rate, outperforming the other approaches by a significant margin on all the four datasets.\nFor instance, by Fig. 6(a) ###reference_sf1###, MergeSFL takes 1,130s to achieve 87% accuracy for CNN-H on HAR, while PyramidFL, AdaSFL, LocFedMix-SL and FedAvg consume 1,946s, 1,471s, 2,939s, 4,401s, respectively.\nBy Fig. 6(b) ###reference_sf2###, MergeSFL also outperforms the other approaches in terms of total completion time for CNN-S on Speech.\nBesides, Fig. 6(c) ###reference_sf3### shows that MergeSFL reduces the total completion time for AlexNet on CIFAR-10 by about 39%, 24%, 54% and 69%, compared to PyramidFL, AdaSFL, LocFedMix-SL and FedAvg, respectively.\nMoreover, for VGG16 on IMAGE-100, as shown in Fig. 6(d) ###reference_sf4###, MergeSFL can separately speed up training by about 1.74×, 1.39×, 2.46× and 4.14×, compared to PyramidFL, AdaSFL, LocFedMix-SL and FedAvg.\nThese results demonstrate the superiority of MergeSFL in addressing system heterogeneity.\nSecondly, we also conduct a set of experiments of these approaches on all the datasets with non-IID level =10, and the results are presented in Fig. 7 ###reference_###.\nWe observe that MergeSFL maintains a similar convergence rate to that in the IID scenario, and achieves the highest accuracy among these approaches.\nFor instance, by Fig. 7(a) ###reference_sf1###, MergeSFL achieves 86.8% accuracy in 1,484s for CNN-H on HAR, while PyramidFL, AdaSFL, LocFedMix-SL, and FedAvg take 2,207s, 1,887s, 3,154s, and 4,745s to reach accuracies of 79.44%, 72.53%, 72.38%, and 72.82%, respectively.\nSimilarly, Fig. 
7(b) ###reference_sf2### illustrates that MergeSFL separately improves the final test accuracy by about 5.82%, 25.10%, 25.70% and 26.22% for CNN-S on Speech, compared to PyramidFL, AdaSFL, LocFedMix-SL and FedAvg.\nBesides, by Fig. 7(c) ###reference_sf3###, when achieving a similar test accuracy of around 60% for AlexNet on CIFAR-10, MergeSFL reduces the total completion time by about 67%, 73%, 85% and 89%, compared to PyramidFL, AdaSFL, LocFedMix-SL and FedAvg, respectively.\nMoreover, as shown in Fig. 7(d) ###reference_sf4###, for VGG16 on IMAGE-100 with the same training time of 5,200s, MergeSFL improves the test accuracy by about 17.18%, 19.68%, 30.98% and 45.84%, compared to PyramidFL, AdaSFL, LocFedMix-SL and FedAvg, respectively.\nThese results demonstrate that MergeSFL is effective in simultaneously tackling the heterogeneity challenges with feature merging and batch size regulation.\n###figure_25### ###figure_26### ###figure_27### ###figure_28### Thirdly, to illustrate the advantage of MergeSFL in saving communication resource, we present the network traffic consumption of these approaches when achieving different target accuracies in Fig. 8 ###reference_###.\nBy the results, the network traffic consumption of all approaches increases with the target accuracy for all the four datasets.\nFurthermore, MergeSFL always consumes the least network traffic among all approaches.\nIn addition, model splitting (i.e., MergeSFL, AdaSFL and LocFedMix-SL) helps to save much more network traffic compared to typical FL approaches (i.e., PyramidFL and FedAvg).\nAdaSFL, with its adaptive local updating frequency, reduces the network traffic consumption, while MergeSFL, with adaptive worker arrangement, reduces it further.\nAs shown in Fig. 
8(b) ###reference_sf2###, when achieving 87% accuracy, MergeSFL, AdaSFL and LocFedMix-SL consume 1,229MB, 1,694MB and 2,398MB, respectively, while PyramidFL and FedAvg consume 3,397MB and 4,036MB for CNN-S on Speech.\nBesides, as illustrated in Fig. 8(d) ###reference_sf4###, MergeSFL saves network traffic consumption by about 49%, 19%, 38% and 58% when achieving 65% accuracy for VGG16 on IMAGE-100, compared to the baselines (i.e., PyramidFL, AdaSFL, LocFedMix-SL and FedAvg).\n###figure_29### ###figure_30### To further demonstrate the robustness of MergeSFL towards system heterogeneity, we illustrate the average waiting time of the five approaches on the four datasets in Fig. 9 ###reference_###.\nAdaSFL with adaptive and diverse batch sizes for heterogeneous workers achieves the least waiting time, but the waiting time of MergeSFL is close to AdaSFL and is much less than that of other approaches.\nFor instance, by Fig. 9(a) ###reference_sf1###, the average waiting time of MergeSFL is 1.2s for CNN-H on HAR while PyramidFL, AdaSFL, LocFedMix-SL and FedAvg incur average waiting time of 3.4s, 1.1s, 5.9s and 6.1s, respectively.\nConsidering the workers with varying capacities, LocFedMix-SL and FedAvg use fixed and identical batch size for model training, without considering system heterogeneity, thus lead to non-negligible waiting time.\nPyramidFL with adaptive worker selection to fully exploit the computing resource and data of different workers reduces the average waiting time to a certain extent.\nConcretely, by Fig. 9(d) ###reference_sf4###, MergeSFL can reduce the average waiting time to train VGG16 on IMAGE-100 by about 65%, 79% and 81%, compared to PyramidFL, LocFedMix-SL and FedAvg.\n###figure_31### ###figure_32###"
82
+ },
83
+ {
84
+ "section_id": "5.3",
85
+ "parent_section_id": "5",
86
+ "section_name": "Effect of Non-IID Levels",
87
+ "text": "To demonstrate the effectiveness of MergeSFL in handling non-IID data, we present the test accuracy of different approaches at varying non-IID levels in Fig. 10 ###reference_###, where the horizontal axis denotes the non-IID level of the datasets.\nAs shown in Fig. 10 ###reference_###, the test accuracy of the models trained by the five approaches on all datasets decreases as the non-IID level increases.\nHowever, MergeSFL consistently outperforms the other approaches on all datasets.\nLocFedMix-SL, AdaSFL and FedAvg, without considering the challenges of system and statistical heterogeneity, exhibit the lowest model accuracy on non-IID datasets.\nPyramidFL, which focuses on the divergence between the selected workers and the remaining to fully exploit the computing resource and data of different workers, can mitigate the impact of non-IID data on model training to some extent.\nSpecifically, by Fig. 10(a) ###reference_.sf1###, with non-IID level of =10 on HAR, MergeSFL and PyramidFL achieve 86.8% and 79.44% accuracy, while LocFedMix-SL, AdaSFL and FedAvg only achieve 72.53%, 72.38% and 72.82% accuracy.\nMoreover, as shown in Fig. 10(b) ###reference_.sf2###, while transitioning from IID to non-IID level of\n=10 on Google Speech, MergeSFL and PyramidFL suffer from only 1.93% and 7.23% loss in accuracy, while the accuracy loss for AdaSFL, LocFedMix-SL and FedAvg is 18.54%, 18.74% and 18.91%, respectively.\nNotably, by Fig. 10(c) ###reference_.sf3###, MergeSFL can achieve improvement of test accuracy by about 12.50%, 27.36%, 27.98%, 26.66% on CIFAR-10 with non-IID level of =10, compared to the baselines (i.e., PyramidFL, AdaSFL, LocFedMix-SL, FedAvg)."
88
+ },
89
+ {
90
+ "section_id": "5.4",
91
+ "parent_section_id": "5",
92
+ "section_name": "Effect of Key Strategies",
93
+ "text": "There are two key strategies, i.e., feature merging and batch size regulation, that are developed to enhance the performance of SFL.\nHerein, we conduct several sets of experiments for AlexNet on CIFAR-10 with IID distribution (=0) and non-IID distribution (=10) to evaluate the effectiveness of the two critical strategies.\nWe adopt the MergeSFL without feature merging (MergeSFL w/o FM) and MergeSFL without batch size regulation (MergeSFL w/o BR) as the baselines.\nConcretely, in MergeSFL w/o FM, the PS directly applies the features of workers with diverse batch sizes to separately perform forward/backward propagation without feature merging.\nIn MergeSFL w/o BR, all workers are assigned an identical batch size, i.e., the average of the batch sizes in MergeSFL, for feature merging and model training.\nBy Fig. 11 ###reference_###, MergeSFL w/o FM converges as fast as MergeSFL on the IID dataset, while MergeSFL w/o BR achieves similar test accuracy as MergeSFL on the non-IID dataset.\nPowered by feature merging and batch size regulation, MergeSFL can speed up training by about 2.17× compared to MergeSFL w/o BR, and improve the final test accuracy by about 28.83% compared to MergeSFL w/o FM, which reflects the positive roles of the two strategies."
94
+ },
95
+ {
96
+ "section_id": "5.5",
97
+ "parent_section_id": "5",
98
+ "section_name": "Effect of System Scales",
99
+ "text": "In this section, to demonstrate the robustness of MergeSFL, we evaluate the performance of MergeSFL and baselines with different scales of participating workers.\nWe train AlexNet on CIFAR-10 with four scales (i.e., 100, 200, 300, 400) through extensive simulation experiments, which are conducted on an AMAX deep learning workstation equipped with an Intel(R) Xeon(R) Gold 5218R CPU, 8 NVIDIA GeForce RTX 3090 GPUs and 256 GB RAM.\nThe results of completion time to achieve 80% accuracy for these approaches are presented in Fig. 12(a) ###reference_.sf1###, while the training processes of different scales for MergeSFL are presented in Fig. 12(b) ###reference_.sf2###.\nAs the number of participating workers increases, all approaches achieve faster convergence.\nFor instance, MergeSFL with 400 workers achieves a speedup of 1.68×, 1.47× and 1.23×, compared to MergeSFL with 100, 200 and 300 workers, respectively.\nThe reason is that the number of samples on a worker is limited and more workers contribute more local data for training.\nIn addition, MergeSFL reaches the target accuracy 1.47×–2.85× faster than the baselines (i.e., PyramidFL, AdaSFL, LocFedMix-SL, FedAvg) regarding the different scales of workers."
100
+ },
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "VI Related Work",
105
+ "text": "The existing split federated learning (SFL) studies were initially proposed to offload the computing tasks on resource-constrained workers when training large-scale DL models, but they are unable to simultaneously overcome the system and statistical heterogeneity [13 ###reference_b13###, 14 ###reference_b14###, 12 ###reference_b12###, 17 ###reference_b17###, 16 ###reference_b16###].\nFor instance, Thapa et al. [13 ###reference_b13###] demonstrate the feasibility of SFL and pioneer the first SFL method, termed SplitFed, which aggregates bottom models after each local updating.\nSuch frequent aggregation results in high network traffic consumption.\nTo save the traffic consumption, Han et al. [14 ###reference_b14###] propose LocSplitFed and allow the workers to avoid sending features to the PS by using local-loss-based training.\nThen, Oh et al. [16 ###reference_b16###] propose LocFedMix-SL, which is implemented to maintain all the benefits of SplitFed and LocSplitFed with multiple local updating frequency, but cannot fully utilize the capacities of heterogeneous workers.\nAlthough Liao et al. 
[17 ###reference_b17###] propose an advanced solution AdaSFL to assign adaptive and diverse batch sizes for different workers, AdaSFL still cannot address the statistical heterogeneity.\nDespite these notable advancements, none of the existing SFL works has yet explored simultaneously tackling both heterogeneity issues.\nPrior to the emergence of SFL, the system and statistical heterogeneity issues have been studied and addressed in many typical FL works [39 ###reference_b39###, 24 ###reference_b24###, 41 ###reference_b41###, 37 ###reference_b37###, 31 ###reference_b31###].\nOn one hand, in order to alleviate the negative effect of system heterogeneity, some works [72 ###reference_b72###, 38 ###reference_b38###, 43 ###reference_b43###, 44 ###reference_b44###, 42 ###reference_b42###] investigate optimizing the local updating frequencies and batch sizes of different workers.\nFor example, Xu et al. [38 ###reference_b38###] propose FedLamp to assign the relatively high-performance workers (with high computing/communication capacities) with larger local updating frequencies.\nBesides, Ma et al. [44 ###reference_b44###] propose to assign adaptive batch sizes and scaled learning rates for heterogeneous workers.\nOn the other hand, some works [31 ###reference_b31###, 73 ###reference_b73###, 74 ###reference_b74###] actively select high-utility data samples to address statistical heterogeneity.\nFor instance, Li et al. [73 ###reference_b73###] propose to prioritize client training samples with higher importance in FL, while Shin et al. [31 ###reference_b31###] propose FedBalancer, which introduces a deadline control strategy to optimize the time-to-accuracy performance.\nIn addition, other works [40 ###reference_b40###, 34 ###reference_b34###, 37 ###reference_b37###] propose to employ worker selection to simultaneously address system and statistical heterogeneity.\nSpecifically, Li et al. 
[34 ###reference_b34###] develop PyramidFL, a fine-grained worker selection strategy that focuses on the divergence between the selected workers and the remaining workers to fully exploit the computing resource and data of different workers.\nHowever, those FL approaches may be infeasible if the resource-constrained workers do not have enough memory to run the program of training large-scale models.\nMoreover, the relevant optimization techniques probably cannot be directly applied to SFL and MergeSFL, since workers in SFL maintain only the bottom models and must continuously exchange features/gradients with the PS that possesses the top model."
106
+ },
107
+ {
108
+ "section_id": "7",
109
+ "parent_section_id": null,
110
+ "section_name": "VII Conclusion",
111
+ "text": "In this paper, we have designed and implemented a novel SFL framework, termed MergeSFL, which incorporated feature merging and batch size regulation to address the system and statistical heterogeneity.\nBy assigning diverse as well as suitable batch sizes for heterogeneous workers, and merging the features from workers into the mixed feature sequence, MergeSFL could promote model accuracy and training efficiency for SFL.\nThe experimental results showed that MergeSFL significantly outperformed the baselines, providing a speedup of 1.39×–4.14× with an improvement in final model accuracy of 5.82%–26.22%."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {
116
+ "1": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Key Notations.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T1.35\">\n<tr class=\"ltx_tr\" id=\"S2.T1.35.36\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.35.36.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.35.36.1.1\">Notation</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S2.T1.35.36.2\">\n<span class=\"ltx_text\" id=\"S2.T1.35.36.2.1\"></span> <span class=\"ltx_text\" id=\"S2.T1.35.36.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.35.36.2.2.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.35.36.2.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.35.36.2.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.35.36.2.2.1.1.1.1\">Semantics</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S2.T1.35.36.2.3\"></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.1.1.2\">number of workers</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.3.3.2\">local dataset of worker \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.5.5.2\">top model model in round \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.8.8.3\">bottom model on worker in round \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.9.9\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S2.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.9.9.2\">loss function of the top model</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.11.11.2\">loss function of bottom model on worker \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.14.14.3\">batch size of worker in round \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.17.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.17.17.3\">duration time of worker in round \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.19.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.18.18.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.19.19.2\">average waiting time of round \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.20.20.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.20.20.2\">the available ingress bandwidth of the PS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.21.21\">\n<td class=\"ltx_td\" id=\"S2.T1.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.21.21.1\">in round \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.22.22\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.22.22.2\">the worker set for feature merging</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.23.23\">\n<td class=\"ltx_td\" id=\"S2.T1.23.23.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.23.23.1\">and model training in round \n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S2.T1.24.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.24.24.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.24.24.2\">computing time of processing one data</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.26.26\">\n<td class=\"ltx_td\" id=\"S2.T1.26.26.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.26.26.2\">sample in round on worker \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.27.27\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.27.27.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.27.27.2\">communication time of transmitting one</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.29.29\">\n<td class=\"ltx_td\" id=\"S2.T1.29.29.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.29.29.2\">data sample in round on worker \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.31.31\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.30.30.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.31.31.2\">the label distribution of worker \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.32.32\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.32.32.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.32.32.2\">the label distribution of data from</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.33.33\">\n<td class=\"ltx_td\" id=\"S2.T1.33.33.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.33.33.1\">workers in \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.35.35\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_b ltx_border_t\" id=\"S2.T1.34.34.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_b ltx_border_t\" id=\"S2.T1.35.35.2\">the participating frequency of worker \n</td>\n</tr>\n</table>\n</figure>",
118
+ "capture": "TABLE I: Key Notations."
119
+ },
120
+ "2": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Device technical specifications.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T2.1\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.2.1\">AI Performance</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.3.1\">GPU Type</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.2.1\">Jetson TX2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2\">1.33 TFLOPs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.3\">256-core Pascal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.3.1\">Jetson NX</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2\">21 TOPs</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.3\">384-core Volta</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.4.1\">Jetson AGX</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.2\">32 TOPs</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3\">512-core Volta</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T2.1.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.5.2.1\">CPU Type</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.5.3.1\">ROM</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6\">\n<td 
class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.6.1\">Jetson TX2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.6.2\">Denver 2 and ARM 4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.6.3\">8 GB LPDDR4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.1.7.1\">Jetson NX</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.2\">6-core Carmel ARM 8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.3\">8 GB LPDDR4x</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T2.1.8.1\">Jetson AGX</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.1.8.2\">8-core Carmel ARM 8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.1.8.3\">32 GB LPDDR4x</td>\n</tr>\n</table>\n</figure>",
122
+ "capture": "TABLE II: Device technical specifications."
123
+ }
124
+ },
125
+ "image_paths": {
126
+ "1": {
127
+ "figure_path": "2311.13348v2_figure_1.png",
128
+ "caption": "Figure 1: Illustration of typical SFL (left) and SFL with feature merging (right).",
129
+ "url": "http://arxiv.org/html/2311.13348v2/x1.png"
130
+ },
131
+ "2(a)": {
132
+ "figure_path": "2311.13348v2_figure_2(a).png",
133
+ "caption": "(a) Test Accuracy\nFigure 2: Test accuracy and average waiting time of three approaches with non-IID data.",
134
+ "url": "http://arxiv.org/html/2311.13348v2/x2.png"
135
+ },
136
+ "2(b)": {
137
+ "figure_path": "2311.13348v2_figure_2(b).png",
138
+ "caption": "(b) Average Waiting time\nFigure 2: Test accuracy and average waiting time of three approaches with non-IID data.",
139
+ "url": "http://arxiv.org/html/2311.13348v2/x3.png"
140
+ },
141
+ "3(a)": {
142
+ "figure_path": "2311.13348v2_figure_3(a).png",
143
+ "caption": "(a) Training Time\nFigure 3: Training performance of three approaches with non-IID data.",
144
+ "url": "http://arxiv.org/html/2311.13348v2/x4.png"
145
+ },
146
+ "3(b)": {
147
+ "figure_path": "2311.13348v2_figure_3(b).png",
148
+ "caption": "(b) Test Accuracy\nFigure 3: Training performance of three approaches with non-IID data.",
149
+ "url": "http://arxiv.org/html/2311.13348v2/x5.png"
150
+ },
151
+ "4(a)": {
152
+ "figure_path": "2311.13348v2_figure_4(a).png",
153
+ "caption": "(a) Top Model\nFigure 4: Visualization of gradients of the top and bottom models.",
154
+ "url": "http://arxiv.org/html/2311.13348v2/x6.png"
155
+ },
156
+ "4(b)": {
157
+ "figure_path": "2311.13348v2_figure_4(b).png",
158
+ "caption": "(b) Bottom Model\nFigure 4: Visualization of gradients of the top and bottom models.",
159
+ "url": "http://arxiv.org/html/2311.13348v2/x7.png"
160
+ },
161
+ "5": {
162
+ "figure_path": "2311.13348v2_figure_5.png",
163
+ "caption": "Figure 5: System workflow of MergeSFL.",
164
+ "url": "http://arxiv.org/html/2311.13348v2/x8.png"
165
+ },
166
+ "6(a)": {
167
+ "figure_path": "2311.13348v2_figure_6(a).png",
168
+ "caption": "(a) HAR\nFigure 6: Test accuracy of five approaches on the four IID datasets.",
169
+ "url": "http://arxiv.org/html/2311.13348v2/x9.png"
170
+ },
171
+ "6(b)": {
172
+ "figure_path": "2311.13348v2_figure_6(b).png",
173
+ "caption": "(b) Speech\nFigure 6: Test accuracy of five approaches on the four IID datasets.",
174
+ "url": "http://arxiv.org/html/2311.13348v2/x10.png"
175
+ },
176
+ "6(c)": {
177
+ "figure_path": "2311.13348v2_figure_6(c).png",
178
+ "caption": "(c) CIFAR-10\nFigure 6: Test accuracy of five approaches on the four IID datasets.",
179
+ "url": "http://arxiv.org/html/2311.13348v2/x11.png"
180
+ },
181
+ "6(d)": {
182
+ "figure_path": "2311.13348v2_figure_6(d).png",
183
+ "caption": "(d) IMAGE-100\nFigure 6: Test accuracy of five approaches on the four IID datasets.",
184
+ "url": "http://arxiv.org/html/2311.13348v2/x12.png"
185
+ },
186
+ "7(a)": {
187
+ "figure_path": "2311.13348v2_figure_7(a).png",
188
+ "caption": "(a) HAR\nFigure 7: Test accuracy of five approaches on the four non-IID datasets.",
189
+ "url": "http://arxiv.org/html/2311.13348v2/x13.png"
190
+ },
191
+ "7(b)": {
192
+ "figure_path": "2311.13348v2_figure_7(b).png",
193
+ "caption": "(b) Speech\nFigure 7: Test accuracy of five approaches on the four non-IID datasets.",
194
+ "url": "http://arxiv.org/html/2311.13348v2/x14.png"
195
+ },
196
+ "7(c)": {
197
+ "figure_path": "2311.13348v2_figure_7(c).png",
198
+ "caption": "(c) CIFAR-10\nFigure 7: Test accuracy of five approaches on the four non-IID datasets.",
199
+ "url": "http://arxiv.org/html/2311.13348v2/x15.png"
200
+ },
201
+ "7(d)": {
202
+ "figure_path": "2311.13348v2_figure_7(d).png",
203
+ "caption": "(d) IMAGE-100\nFigure 7: Test accuracy of five approaches on the four non-IID datasets.",
204
+ "url": "http://arxiv.org/html/2311.13348v2/x16.png"
205
+ },
206
+ "8(a)": {
207
+ "figure_path": "2311.13348v2_figure_8(a).png",
208
+ "caption": "(a) HAR\nFigure 8: Network traffic consumption of five approaches when achieving different target accuracies.",
209
+ "url": "http://arxiv.org/html/2311.13348v2/x17.png"
210
+ },
211
+ "8(b)": {
212
+ "figure_path": "2311.13348v2_figure_8(b).png",
213
+ "caption": "(b) Speech\nFigure 8: Network traffic consumption of five approaches when achieving different target accuracies.",
214
+ "url": "http://arxiv.org/html/2311.13348v2/x18.png"
215
+ },
216
+ "8(c)": {
217
+ "figure_path": "2311.13348v2_figure_8(c).png",
218
+ "caption": "(c) CIFAR-10\nFigure 8: Network traffic consumption of five approaches when achieving different target accuracies.",
219
+ "url": "http://arxiv.org/html/2311.13348v2/x19.png"
220
+ },
221
+ "8(d)": {
222
+ "figure_path": "2311.13348v2_figure_8(d).png",
223
+ "caption": "(d) IMAGE-100\nFigure 8: Network traffic consumption of five approaches when achieving different target accuracies.",
224
+ "url": "http://arxiv.org/html/2311.13348v2/x20.png"
225
+ },
226
+ "9(a)": {
227
+ "figure_path": "2311.13348v2_figure_9(a).png",
228
+ "caption": "(a) HAR\nFigure 9: Average waiting time of five approaches on the four datasets.",
229
+ "url": "http://arxiv.org/html/2311.13348v2/x21.png"
230
+ },
231
+ "9(b)": {
232
+ "figure_path": "2311.13348v2_figure_9(b).png",
233
+ "caption": "(b) Speech\nFigure 9: Average waiting time of five approaches on the four datasets.",
234
+ "url": "http://arxiv.org/html/2311.13348v2/x22.png"
235
+ },
236
+ "9(c)": {
237
+ "figure_path": "2311.13348v2_figure_9(c).png",
238
+ "caption": "(c) CIFAR-10\nFigure 9: Average waiting time of five approaches on the four datasets.",
239
+ "url": "http://arxiv.org/html/2311.13348v2/x23.png"
240
+ },
241
+ "9(d)": {
242
+ "figure_path": "2311.13348v2_figure_9(d).png",
243
+ "caption": "(d) IMAGE-100\nFigure 9: Average waiting time of five approaches on the four datasets.",
244
+ "url": "http://arxiv.org/html/2311.13348v2/x24.png"
245
+ },
246
+ "10(a)": {
247
+ "figure_path": "2311.13348v2_figure_10(a).png",
248
+ "caption": "(a) HAR\nFigure 10: Test accuracy varies with different non-IID levels.",
249
+ "url": "http://arxiv.org/html/2311.13348v2/x25.png"
250
+ },
251
+ "10(b)": {
252
+ "figure_path": "2311.13348v2_figure_10(b).png",
253
+ "caption": "(b) Speech\nFigure 10: Test accuracy varies with different non-IID levels.",
254
+ "url": "http://arxiv.org/html/2311.13348v2/x26.png"
255
+ },
256
+ "10(c)": {
257
+ "figure_path": "2311.13348v2_figure_10(c).png",
258
+ "caption": "(c) CIFAR-10\nFigure 10: Test accuracy varies with different non-IID levels.",
259
+ "url": "http://arxiv.org/html/2311.13348v2/x27.png"
260
+ },
261
+ "10(d)": {
262
+ "figure_path": "2311.13348v2_figure_10(d).png",
263
+ "caption": "(d) IMAGE-100\nFigure 10: Test accuracy varies with different non-IID levels.",
264
+ "url": "http://arxiv.org/html/2311.13348v2/x28.png"
265
+ },
266
+ "11(a)": {
267
+ "figure_path": "2311.13348v2_figure_11(a).png",
268
+ "caption": "(a) IID\nFigure 11: Effects of feature merging and batch size regulation.",
269
+ "url": "http://arxiv.org/html/2311.13348v2/x29.png"
270
+ },
271
+ "11(b)": {
272
+ "figure_path": "2311.13348v2_figure_11(b).png",
273
+ "caption": "(b) Non-IID\nFigure 11: Effects of feature merging and batch size regulation.",
274
+ "url": "http://arxiv.org/html/2311.13348v2/x30.png"
275
+ },
276
+ "12(a)": {
277
+ "figure_path": "2311.13348v2_figure_12(a).png",
278
+ "caption": "(a) Completion Time\nFigure 12: Performance comparison with different numbers of workers.",
279
+ "url": "http://arxiv.org/html/2311.13348v2/x31.png"
280
+ },
281
+ "12(b)": {
282
+ "figure_path": "2311.13348v2_figure_12(b).png",
283
+ "caption": "(b) Training Process\nFigure 12: Performance comparison with different numbers of workers.",
284
+ "url": "http://arxiv.org/html/2311.13348v2/x32.png"
285
+ }
286
+ },
287
+ "validation": true,
288
+ "references": [],
289
+ "url": "http://arxiv.org/html/2311.13348v2"
290
+ }
20240722/2311.14671v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2312.02216v3.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "title": "DragVideo: Interactive Drag-style Video Editing",
3
+ "abstract": "Video generation models have shown their superior ability to generate photo-realistic video. However, how to accurately control (or edit) the video remains a formidable challenge. The main issues are: 1) how to perform direct and accurate user control in editing; 2) how to execute editings like changing shape, expression, and layout without unsightly distortion and artifacts to the edited content; and 3) how to maintain spatio-temporal consistency of video after editing. To address the above issues, we propose DragVideo, a general drag-style video editing framework. Inspired by DragGAN [draggan], DragVideo addresses issues 1) and 2) by proposing the drag-style video latent optimization method which gives desired control by updating noisy video latent according to drag instructions through video-level drag objective function. We amend issue 3) by integrating the video diffusion model with sample-specific LoRA and Mutual Self-Attention in DragVideo to ensure the edited result is spatio-temporally consistent. We also present a series of testing examples for drag-style video editing and conduct extensive experiments across a wide array of challenging editing cases, showing DragVideo can edit video in an intuitive, faithful-to-user-intention manner, with nearly unnoticeable distortion and artifacts, while maintaining spatio-temporal consistency. While traditional prompt-based video editing fails to do the former two and directly applying image drag editing fails in the last, DragVideo\u2019s versatility and generality are emphasized. Project page: https://dragvideo.github.io/",
4
+ "sections": [],
5
+ "appendix": [],
6
+ "tables": {},
7
+ "image_paths": {},
8
+ "validation": true,
9
+ "references": [],
10
+ "url": "http://arxiv.org/html/2312.02216v3"
11
+ }
20240722/2312.05910v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2312.07962v2.json ADDED
@@ -0,0 +1,285 @@
1
+ {
2
+ "title": "Treewidth is Polynomial in Maximum Degree on Weakly Sparse Graphs Excluding a Planar Induced Minor",
3
+ "abstract": "A graph contains a graph as an induced minor if can be obtained from after vertex deletions and edge contractions.\nWe show that for every -vertex planar graph , every graph excluding as an induced minor and as a subgraph has treewidth at most where denotes the maximum degree of .\nWithout requiring the absence of a subgraph, Korhonen [JCTB \u201923] has shown the upper bound of whose dependence in is exponential.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A graph contains a graph as a minor if can be obtained from by vertex deletions, edge deletions, and edge contractions.\nThe notion of induced minor is defined similarly except edge deletions are disallowed.\nThe celebrated Grid Minor theorem [26 ###reference_b26###, 27 ###reference_b27###] implies that graphs without large grid minors have low treewidth.\nWhat can be said about the treewidth of graphs solely excluding grids as induced minor?\nTheir treewidth can be arbitrarily large, as exemplified by cliques.\nHowever, a notable result by Korhonen is that their treewidth can be upperbounded by a function of their maximum degree .\nEvery graph excluding a fixed -vertex planar graph as an induced minor has treewidth at most for some universal constant .\nIn this paper, we obtain a polynomial dependence in if, further, arbitrarily large bicliques are excluded.\nThere is an such that every graph without as a subgraph nor fixed -vertex planar graph as an induced minor has treewidth at most .\nWe actually prove the following stronger statement.\nThere is an such that every graph without as a subgraph, and excluding as induced minors a -vertex planar graph and an -vertex graph has treewidth at most .\nOur tools combine well with classes of graphs that admit a product structure; see Section 8 ###reference_### for the definition of the strong product of two graphs.\nMore precisely, we prove the following.\nLet be a graph of treewidth at most , and be a path.\nLet be a subgraph of excluding a -vertex planar graph as an induced minor.\nThen the treewidth of is at most .\nA dependence in is necessary.\nThere are subgraphs of the strong product of a path with a star (hence a graph of treewidth 1) avoiding a planar induced minor, but whose treewidth is a growing function of the number of vertices.\nTake the grid, remove the \u201cvertical\u201d edges, and add in each \u201ccolumn\u201d a vertex adjacent to every vertex in the column; see Figure 1 
###reference_###.\nThis construction found by Pohoata [24 ###reference_b24###], and rediscovered by Davies [8 ###reference_b8###], has treewidth but avoids the grid as an induced minor.\nThe figure is a proof-by-picture that these graphs are indeed subgraphs of strong products of a path and a star.\nChudnovsky [4 ###reference_b4###, Open problem 4.1] asks if, when , the treewidth of graphs excluding the grid as an induced minor and the biclique as a subgraph is .\nOur results give a first answer to this question: The treewidth of these graphs is at most polylogarithmic.\nAt first sight, Chudnovsky\u2019s question centered around forbidden induced subgraphs may look somewhat different from the setting of Theorem 1.2 ###reference_theorem2###.\nThe two statements match since forbidding large cliques and bicliques as induced subgraphs is, by Ramsey\u2019s theorem [25 ###reference_b25###], equivalent to excluding large bicliques as subgraphs, and forbidding a subdivision of a large wall or the line graph of a subdivision of a large wall as an induced subgraph is the same as excluding a large grid as an induced minor.\nAnother simplifying feature of working with induced minors rather than induced subgraphs is that excluding as induced minor a large grid, or a large wall, or a planar graph of large treewidth are all equivalent.\nThe motivation behind the condition in Chudnovsky\u2019s question is that the treewidth could in principle be logarithmic in as well.\nThis would yield polynomial-time algorithms for several problems including Max Independent Set.\nWe come slightly short of proving it, but Theorem 1.2 ###reference_theorem2### does imply a quasipolynomial-time algorithm for Max Independent Set (and several other problems) on these graphs.\nIt is possible (and believed) that graphs excluding a -vertex planar graph as an induced minor have treewidth , for some function , even without requiring the absence of subgraph.\nThis also is motivated by fast algorithms for 
Max Independent Set, as it would imply a subexponential-time algorithm running in .\nDallard, Milani\u010d, and \u0160torgel [5 ###reference_b5###] even ask whether a (quasi)polynomial-time algorithm always exists in the absence of a fixed planar induced minor.\nAfter Korhonen [18 ###reference_b18###] gave the first (very slightly) subexponential algorithm, Korhonen and Lokshtanov [19 ###reference_b19###] provided an algorithm running in time , which extends to the case when the forbidden induced minor is non-planar.\nThere have been several recent developments in (quasi)polynomial algorithms for Max Independent Set on graphs excluding a planar induced minor [1 ###reference_b1###, 2 ###reference_b2###, 6 ###reference_b6###, 7 ###reference_b7###, 14 ###reference_b14###, 16 ###reference_b16###, 17 ###reference_b17###, 23 ###reference_b23###], some phrased in terms of forbidden induced subgraphs instead.\nThe most motivating next step would be to show Theorem 1.2 ###reference_theorem2### without requiring our graphs to exclude a fixed biclique as a subgraph.\nLet us explicitly mention the potential further improvements by increasing difficulty.\nDoes every graph excluding a fixed -vertex planar graph as an induced minor have, for some function , treewidth at most ? treewidth at most ? treewidth at most ? 
treewidth at most ?\nWe note that Gartland and Lokshtanov [15 ###reference_b15###] conjecture the following, which would in particular imply a positive answer to every case of the above question.\nThere is a function such that every graph excluding a fixed -vertex planar graph as an induced minor has a balanced separator dominated by at most vertices.\nFully spelled out, the conjecture says that for every excluding a -vertex planar graph as an induced minor, there is a set of size at most such that has no connected component of size larger than .\nIn particular, these graphs would have balanced separators of size , known to imply treewidth [13 ###reference_b13###].\nIf true, by a simple win-win argument, Max Independent Set could be solved in time on -vertex graphs excluding a -vertex planar graph as an induced minor."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": "If are two integers, we denote by the set of integers , and by , the set .\nWe denote by and the set of vertices and edges of a graph , respectively.\nFor , the subgraph of induced by , denoted , is obtained by removing from all the vertices that are not in (together with their incident edges).\nThen is a short-hand for .\nA star is a tree with at most one non-leaf vertex.\nWe denote by and , the open, respectively closed, neighborhood of in .\nFor , we set and .\nWe may omit the subscript if is clear from the context.\nWe denote by the maximum degree of a graph , and by , its treewidth.\nA coloring of is a mapping for some natural .\nIt is proper if holds for every .\nWe may call a -coloring.\nThe sets are then called color classes, with for each .\nA star coloring of is a proper coloring such every two color classes induce a star forest, i.e., a disjoint union of stars.\nThe star chromatic number (resp. chromatic number) of is the minimum such that admits a star coloring (resp. proper coloring) with color classes.\nThe radius of a graph is defined as , where is the number of edges in a shortest path between and .\nThe radius of a subset of vertices is simply defined as .\nNote that two vertices can be further away in than in .\nA depth- minor of , denoted by , is a minor of with branch sets satisfying for every .\nIn particular depth-0 minors correspond to subgraphs.\nThe theory of graph sparsity pioneered by Ne\u0161et\u0159il and Ossona de Mendez [21 ###reference_b21###] introduces the following invariants for a graph and a class :\nA class of graphs is said to have bounded expansion if for every .\nWe say that a graph has expansion , or that bounds the expansion of , if for every ."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Contraction\u2013uncontraction technique",
21
+ "text": "We will need a treewidth sparsifier, i.e., the extraction of a subcubic subgraph of large treewidth in a graph of larger treewidth.\nWe could here use the Grid Minor theorem [27 ###reference_b27###], but the following result of Chekuri and Chuzhoy provides a better lower bound in the resulting treewidth.\nThere is a constant such that every graph of treewidth admits a subcubic subgraph of treewidth at least .\nThe next lemma abstracts out the contraction\u2013uncontraction technique of the third author which, in [18 ###reference_b18###], is specifically used over radius-2 balls.\nLet be a positive integer, be a graph, and be such that every connected component of the graph has at most vertices.\nThen, admits an induced subgraph such that\nin , every vertex is incident to at most three edges of , and\n, with the constant of Theorem 3.1 ###reference_theorem1###.\nLet be the partition of into the vertex sets of the connected components of .\nIt follows that for every .\nIn particular, .\nIndeed, a tree-decomposition of of width at most could be turned into a tree-decomposition of of width at most , simply by flattening the parts of in each bag, leading to a contradiction.\nOn the other hand, since is obtained from by performing edge contractions, as each is connected.\nBy Theorem 3.1 ###reference_theorem1### applied to , there is a subcubic subgraph of with\nWe now build an induced subgraph of having as a minor (hence at least its treewidth) such that every vertex of is incident to at most three edges of .\nAs is subcubic, each is incident to at most three edges of .\nFrom each , let us keep a minimal subset such that is connected and still contains as a subgraph, where .\nBy minimal we mean that for each , the removal of any vertex in breaks one of the latter conditions.\nNote that each comprises up to three terminals realizing the up-to-three edges in , plus a minimal subset connecting these three terminals in .\nTherefore, if would contain a vertex with 
more than three neighbors in , we could delete one of its neighbors by taking shortest paths from to the terminals in and deleting a neighbor not used in these shortest paths.\nThis implies that every vertex of is incident to at most three edges of in , since no edge of can have exactly one endpoint in .\nThus we set , and get ."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Star coloring with constantly many colors",
27
+ "text": "Building on a classic result by K\u00fchn and Osthus [20 ###reference_b20###], Dvo\u0159\u00e1k showed the following.\nFor every non-negative integer and graph , there is a function such that every graph without subgraph nor induced subdivision of has expansion .\nK\u00fchn and Osthus showed the same statement with the weaker conclusion that the degeneracy is bounded by a function of and .\nIn turn, by the work of Ne\u0161et\u0159il and Ossona de Mendez, graphs of bounded expansion have bounded star chromatic number.\nEvery graph class with bounded expansion has bounded star chromatic number.\nWe also observe the following.\n{observation}\nEvery graph excluding a graph as an induced minor also excludes as an induced subdivision.\nCombining Theorems 4.1 ###reference_theorem1###, 4.2 ###reference_theorem2### and 4 ###reference_### we get a bounded star coloring for our graphs of interest.\nThere is a function such that every graph without as a subgraph nor fixed -vertex graph as an induced minor admits a star -coloring.\nNote that need not be planar in Theorem 4.3 ###reference_theorem3###."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Reduced number of sparsification rounds",
33
+ "text": "We now use the contraction\u2013uncontraction technique [18 ###reference_b18###]; see Section 3 ###reference_###.\nIn a first step, we lower the maximum degree.\nIn a second step, we simply use Korhonen\u2019s result (see Theorem 1.1 ###reference_theorem1###) on an induced subgraph of low maximum degree.\nThe crucial difference with [18 ###reference_b18###] is that the number of rounds does not depend on the \u201cinitial\u201d maximum degree but solely on and (such that is not a subgraph of , and excludes a -vertex planar graph).\nWe successively apply Lemma 3.2 ###reference_theorem2### times, where is the function of Theorem 4.3 ###reference_theorem3###, on stars formed by every pair of color classes in a star coloring.\nLet be a positive integer, and be a fixed -vertex graph.\nEvery graph without as a subgraph nor as an induced minor has an induced subgraph such that\n, and\n,\nwhere is as in Theorem 4.3 ###reference_theorem3###.\nLet be the color classes of a star coloring of given by Theorem 4.3 ###reference_theorem3###.\nFor every unordered pair , is a star forest, a property that is closed under taking induced subgraphs.\nWe set and .\nWe build a chain for the induced subgraph relation , in the following way.\nWe (bijectively) list the unordered pairs from 1 to .\nWe obtain , where corresponds to the pair , by applying Lemma 3.2 ###reference_theorem2### on the triple , , and .\nWe recall that Lemma 3.2 ###reference_theorem2### takes in addition to a graph (here ), an edge subset , and an integer .\nAs is a star forest, so is its induced subgraph .\nThus we indeed have that every connected component has at most vertices.\nWe get that .\nIt thus eventually holds that .\nFix any and .\nFor every , at most three edges of can be incident to in .\nSo has degree at most in .\nThus satisfies the claimed properties.\nWe can now prove our main theorem, whose statement we recall for convenience.\nThere is an such that every graph without as a subgraph nor 
fixed -vertex planar graph as an induced minor has treewidth at most .\nLet be as in Theorem 4.3 ###reference_theorem3###, , be the constant of Theorem 3.1 ###reference_theorem1###, be that of Theorem 1.1 ###reference_theorem1###, and be the largest integer such that .\nWe can assume that since otherwise the validity of the theorem statement is clear.\nBy Lemma 5.1 ###reference_theorem1###, admits an induced subgraph with maximum degree at most and treewidth at least .\nAs satisfies the same hereditary properties as , by Theorem 1.1 ###reference_theorem1###, its treewidth is at most .\nTherefore,\nEither (and we are done as long as ) or .\nIn the latter case,\nWe conclude by choosing .\nAs Lemma 5.1 ###reference_theorem1### does not require the excluded induced minor to be planar, we proved:\nThere is an such that every graph without as a subgraph, and excluding as induced minors a -vertex planar graph and an -vertex graph has treewidth at most ."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Clustered edge-colorings",
39
+ "text": "The combination of Sections 3 ###reference_### and 4 ###reference_### suggests the use of (non-necessarily proper) edge-colorings any connected component induced by any monochromatic component of which has small size.\nThis is referred to as clustered edge-coloring.\nMore precisely, an edge-coloring of a graph has clustering if for every monochromatic component , every connected component of has at most vertices.\nFor instance, an edge-coloring with clustering 2 is a proper edge-coloring.\nLet be a positive integer, be a graph, and be the color classes of an edge-coloring of with clustering .\nThen, admits an induced subgraph such that\n, and\n, with the constant of Theorem 3.1 ###reference_theorem1###.\nSet .\nFor every going from to , let be the induced subgraph of obtained by applying Lemma 3.2 ###reference_theorem2### with edge subset .\nWe then define as .\nBy the first item of Lemma 3.2 ###reference_theorem2###, every vertex of has at most three incident edges in , hence has degree at most .\nThe second item readily follows from that of Lemma 3.2 ###reference_theorem2###.\nWe show an upper bound on the treewidth of graphs excluding a grid as an induced minor and admitting edge-colorings with few colors and moderately large clustering.\nEvery graph excluding a -vertex planar graph as an induced minor and admitting an -edge-coloring with clustering has treewidth at most .\nBy Lemma 6.1 ###reference_theorem1###, admits an induced subgraph of maximum degree at most and\nwith the constant of Theorem 3.1 ###reference_theorem1###.\nAs excludes a -vertex planar graph as an induced minor, so does .\nThus by Theorem 1.1 ###reference_theorem1###,\nfor some universal constant .\nFrom the two previous inequalities, we get that\nIf , we get that\nas claimed.\nIf instead , the statement of the lemma also holds, as then ."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Clusters of bounded treewidth",
45
+ "text": "The goal of this section is to relax the notion of clustering of edge-colorings so that Lemma 6.3 ###reference_theorem3### still holds.\nNamely, we now allow clusters to be arbitrarily large, however, we want their treewidth to be bounded.\nThis can be converted into an edge-coloring (still with few colors) with bounded clustering.\nIndeed, we show that a graph of bounded treewidth admits a -edge-coloring with clustering .\nThe main tool we plan to use is the notion of tree-partitions of graphs.\nA pair is a tree-partition of a graph if is a tree, and is a partition of such that for every , there exist a pair of equal or adjacent vertices such that .\nThe width of a tree-partition is defined as the maximum cardinality of an element of .\nThe tree-partition width of a graph , denoted , is the minimum width of a tree-partition of .\nAn anonymous referee of [9 ###reference_b9###] showed that every graph has tree-partition width of at most (see also [28 ###reference_b28###, 10 ###reference_b10###]).\nEvery graph admits a -edge-coloring with clustering , which is in particular .\nLet be a tree-partition of of width .\nWe root at an arbitrary vertex .\nAssign to each vertex its distance to in , denoted .\nWe define a coloring as follows.\nLet , and let be such that and .\nWithout loss of generality assume that .\nIf , then we set , and otherwise, we set .\nEvery monochromatic connected component of color is contained in a single part for some , and so, its cardinality is at most .\nOn the other hand, for every monochromatic connected component of color or , there exists a single part for some such that every edge in the component is incident to a vertex in .\nIt follows that the size of this monochromatic component is at most .\nNow, we state and prove a relaxed version of Lemma 6.3 ###reference_theorem3###.\nSuppose graph excludes as an induced minor a -vertex planar graph and admits an edge-coloring with color classes such that for each , the graph has 
treewidth at most .\nThen the treewidth of is at most .\nBy Lemma 7.1 ###reference_theorem1###, for each , the graph admits a -edge-coloring with clustering .\nSince is a partition of , the above edge-colorings give a -edge-coloring of .\nConsider the product edge-coloring col of and of , that is, for every .\nObserve that col uses at most colors and has clustering .\nFinally, by Lemma 6.3 ###reference_theorem3###, we obtain\nas claimed."
46
+ },
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "Product structure",
51
+ "text": "The strong product of graphs and , denoted by , is the graph with vertex set such that there is an edge whenever either and , or and , or and .\nWe prove the following theorem.\nLet be a graph of treewidth at most , and be a path.\nLet be a subgraph of excluding a -vertex planar graph as an induced minor.\nThen the treewidth of is at most .\nWe claim that admits a -edge-coloring such that if is any of its color classes, then the graph has treewidth at most .\nFirst, note that this suffices to prove the theorem.\nIndeed, we can restrict this edge-coloring to and apply Lemma 7.3 ###reference_theorem3### with to end the proof.\nLet us justify the initial claim.\nWe construct a coloring .\nLet .\nWe set the color of each edge such that to , and each edge such that to , where the positive integer satisfying .\nThe graph restricted to edges of color is simply a disjoint union of copies of , hence, it has treewidth at most .\nOn the other hand, the graph restricted to edges of color or is a disjoint union of copies of the graph .\nThus , which ends the proof."
52
+ }
53
+ ],
54
+ "appendix": [],
55
+ "tables": {},
56
+ "image_paths": {},
57
+ "validation": true,
58
+ "references": [
59
+ {
60
+ "1": {
61
+ "title": "Sparse graphs with bounded induced cycle packing number have\nlogarithmic treewidth.",
62
+ "author": "Marthe Bonamy, \u00c9douard Bonnet, Hugues D\u00e9pr\u00e9s, Louis Esperet,\nColin Geniet, Claire Hilaire, St\u00e9phan Thomass\u00e9, and Alexandra\nWesolek.",
63
+ "venue": "In Nikhil Bansal and Viswanath Nagarajan, editors, Proceedings\nof the 2023 ACM-SIAM Symposium on Discrete Algorithms, SODA 2023,\nFlorence, Italy, January 22\u201325, 2023, pages 3006\u20133028. SIAM, 2023.",
64
+ "url": null
65
+ }
66
+ },
67
+ {
68
+ "2": {
69
+ "title": "Maximum independent set when excluding an induced minor: and .",
70
+ "author": "\u00c9douard Bonnet, Julien Duron, Colin Geniet, St\u00e9phan Thomass\u00e9, and\nAlexandra Wesolek.",
71
+ "venue": "In Inge Li G\u00f8rtz, Martin Farach-Colton, Simon J. Puglisi, and\nGrzegorz Herman, editors, 31st Annual European Symposium on Algorithms\n(ESA 2023), volume 274 of Leibniz International Proceedings in\nInformatics (LIPIcs), pages 23:1\u201323:15, Dagstuhl, Germany, 2023. Schloss\nDagstuhl \u2013 Leibniz-Zentrum f\u00fcr Informatik.",
72
+ "url": null
73
+ }
74
+ },
75
+ {
76
+ "3": {
77
+ "title": "Degree-3 treewidth sparsifiers.",
78
+ "author": "Chandra Chekuri and Julia Chuzhoy.",
79
+ "venue": "In Piotr Indyk, editor, Proceedings of the Twenty-Sixth Annual\nACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA,\nJanuary 4\u20136, 2015, pages 242\u2013255. SIAM, 2015.",
80
+ "url": null
81
+ }
82
+ },
83
+ {
84
+ "4": {
85
+ "title": "Vertex Partitioning in Graphs: From Structure to Algorithms\n(Dagstuhl Seminar 22481).",
86
+ "author": "Maria Chudnovsky, Neeldhara Misra, Dani\u00ebl Paulusma, Oliver Schaudt, and\nAkanksha Agrawal.",
87
+ "venue": "Dagstuhl Reports, 12(11):109\u2013123, 2023.",
88
+ "url": null
89
+ }
90
+ },
91
+ {
92
+ "5": {
93
+ "title": "Treewidth versus clique number. I. graph classes with a forbidden\nstructure.",
94
+ "author": "Cl\u00e9ment Dallard, Martin Milani\u010d, and Kenny \u0160torgel.",
95
+ "venue": "SIAM J. Discret. Math., 35(4):2618\u20132646, 2021.",
96
+ "url": null
97
+ }
98
+ },
99
+ {
100
+ "6": {
101
+ "title": "Treewidth versus clique number. III. tree-independence number of\ngraphs with a forbidden structure.",
102
+ "author": "Cl\u00e9ment Dallard, Martin Milani\u010d, and Kenny \u0160torgel.",
103
+ "venue": "CoRR, abs/2206.15092, 2022.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "7": {
109
+ "title": "Treewidth versus clique number. II. tree-independence number, 2021.",
110
+ "author": "Cl\u00e9ment Dallard, Martin Milani\u010d, and Kenny \u0160torgel.",
111
+ "venue": "doi:10.48550/ARXIV.2111.04543.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "8": {
117
+ "title": "Oberwolfach report 1/2022.",
118
+ "author": "James Davies.",
119
+ "venue": "2022.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "9": {
125
+ "title": "Some results on tree decomposition of graphs.",
126
+ "author": "Guoli Ding and Bogdan Oporowski.",
127
+ "venue": "Journal of Graph Theory, 20(4):481\u2013499, 1995.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "10": {
133
+ "title": "Tree-partitions with small bounded degree trees, 2023.",
134
+ "author": "Marc Distel and David R. Wood.",
135
+ "venue": "arXiv:2210.12577.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "11": {
141
+ "title": "Graph product structure for non-minor-closed classes.",
142
+ "author": "Vida Dujmovi\u0107, Pat Morin, and David R. Wood.",
143
+ "venue": "Journal of Combinatorial Theory, Series B, 162:34\u201367, 2023.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "12": {
149
+ "title": "Induced subdivisions and bounded expansion.",
150
+ "author": "Zdenek Dvor\u00e1k.",
151
+ "venue": "European Journal of Combinatorics, 69:143\u2013148, 2018.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "13": {
157
+ "title": "Treewidth of graphs with balanced separations.",
158
+ "author": "Zden\u011bk Dvo\u0159\u00e1k and Sergey Norin.",
159
+ "venue": "J. Comb. Theory, Ser. B, 137:137\u2013144, 2019.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "14": {
165
+ "title": "Independent set on -free graphs in quasi-polynomial time.",
166
+ "author": "Peter Gartland and Daniel Lokshtanov.",
167
+ "venue": "In Sandy Irani, editor, 61st IEEE Annual Symposium on\nFoundations of Computer Science, FOCS 2020, Durham, NC, USA, November\n16\u201319, 2020, pages 613\u2013624. IEEE, 2020.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "15": {
173
+ "title": "private communication, 2023.",
174
+ "author": "Peter Gartland and Daniel Lokshtanov.",
175
+ "venue": null,
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "16": {
181
+ "title": "Maximum weight independent set in graphs with no long claws in\nquasi-polynomial time.",
182
+ "author": "Peter Gartland, Daniel Lokshtanov, Tom\u00e1\u0161 Masa\u0159\u00edk, Marcin Pilipczuk,\nMicha\u0142 Pilipczuk, and Pawe\u0142 Rz\u0105\u017cewski.",
183
+ "venue": "CoRR, abs/2305.15738, 2023.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "17": {
189
+ "title": "Finding large induced sparse subgraphs in -free graphs in\nquasipolynomial time.",
190
+ "author": "Peter Gartland, Daniel Lokshtanov, Marcin Pilipczuk, Micha\u0142 Pilipczuk, and\nPawe\u0142 Rz\u0105\u017cewski.",
191
+ "venue": "In Samir Khuller and Virginia Vassilevska Williams, editors, STOC \u201921: 53rd Annual ACM SIGACT Symposium on Theory of Computing,\nVirtual Event, Italy, June 21\u201325, 2021, pages 330\u2013341. ACM, 2021.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "18": {
197
+ "title": "Grid induced minor theorem for graphs of small degree.",
198
+ "author": "Tuukka Korhonen.",
199
+ "venue": "Journal of Combinatorial Theory, Series B, 160:206\u2013214, 2023.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "19": {
205
+ "title": "Induced-minor-free graphs: Separator theorem, subexponential\nalgorithms, and improved hardness of recognition.",
206
+ "author": "Tuukka Korhonen and Daniel Lokshtanov.",
207
+ "venue": "CoRR, abs/2308.04795, 2023.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "20": {
213
+ "title": "Induced subdivisions in -free graphs of large average\ndegree.",
214
+ "author": "Daniela K\u00fchn and Deryk Osthus.",
215
+ "venue": "Comb., 24(2):287\u2013304, 2004.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "21": {
221
+ "title": "Sparsity - Graphs, Structures, and Algorithms, volume 28 of\nAlgorithms and combinatorics.",
222
+ "author": "Jaroslav Nesetril and Patrice Ossona de Mendez.",
223
+ "venue": "Springer, 2012.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "22": {
229
+ "title": "Grad and classes with bounded expansion i. decompositions.",
230
+ "author": "Jaroslav Ne\u0161et\u0159il and Patrice Ossona de Mendez.",
231
+ "venue": "Eur. J. Comb., 29(3):760\u2013776, 2008.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "23": {
237
+ "title": "Quasi-polynomial-time algorithm for independent set in -free\ngraphs via shrinking the space of induced paths.",
238
+ "author": "Marcin Pilipczuk, Micha\u0142 Pilipczuk, and Pawe\u0142 Rz\u0105\u017cewski.",
239
+ "venue": "In Hung Viet Le and Valerie King, editors, 4th Symposium on\nSimplicity in Algorithms, SOSA 2021, Virtual Conference, January 11\u201312,\n2021, pages 204\u2013209. SIAM, 2021.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "24": {
245
+ "title": "Unavoidable induced subgraphs of large graphs.",
246
+ "author": "Andrei Cosmin Pohoata.",
247
+ "venue": "Senior theses, Princeton University, 2014.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "25": {
253
+ "title": "On a problem of formal logic.",
254
+ "author": "Frank P. Ramsey.",
255
+ "venue": "In Proc. London Math. Soc. series 2, volume 30 of 264\u2013286, 1930.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "26": {
261
+ "title": "Graph minors. V. excluding a planar graph.",
262
+ "author": "Neil Robertson and Paul D. Seymour.",
263
+ "venue": "Journal of Combinatorial Theory, Series B, 41(1):92\u2013114, 1986.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "27": {
269
+ "title": "Quickly excluding a planar graph.",
270
+ "author": "Neil Robertson, Paul D. Seymour, and Robin Thomas.",
271
+ "venue": "Journal of Combinatorial Theory, Series B, 62(2):323\u2013348,\n1994.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "28": {
277
+ "title": "On tree-partition-width.",
278
+ "author": "David R. Wood.",
279
+ "venue": "European Journal of Combinatorics, 30(5):1245\u20131253, 2009.",
280
+ "url": null
281
+ }
282
+ }
283
+ ],
284
+ "url": "http://arxiv.org/html/2312.07962v2"
285
+ }
20240722/2312.09781v4.json ADDED
@@ -0,0 +1,386 @@
1
+ {
2
+ "title": "GSQA: An End-to-End Model for Generative Spoken Question Answering",
3
+ "abstract": "In recent advancements in spoken question answering (SQA), end-to-end models have made significant strides. However, previous research has primarily focused on extractive span selection. While this extractive-based approach is effective when answers are present directly within the input, it falls short in addressing abstractive questions, where answers are not directly extracted but inferred from the given information. To bridge this gap, we introduce the first end-to-end Generative Spoken Question Answering (GSQA) model that empowers the system to engage in abstractive reasoning. The challenge in training our GSQA model lies in the absence of a spoken abstractive QA dataset. We propose using text models for initialization and leveraging the extractive QA dataset to transfer knowledge from the text generative model to the spoken generative model. Experimental results indicate that our model surpasses the previous extractive model by 3% on extractive QA datasets. Furthermore, the GSQA model has only been fine-tuned on the spoken extractive QA dataset. Despite not having seen any spoken abstractive QA data, it can still closely match the performance of the cascade model. In conclusion, our GSQA model shows the potential to generalize to a broad spectrum of questions, thus further expanding the SQA capabilities of abstractive QA.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Question Answering (QA) tasks have consistently emerged as one of the foundational challenges [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###]. As a touchstone for measuring machine comprehension of textual content, QA presents unique complexities that have long captivated researchers. If answers are directly in the text and just need to be found, we categorize it as extractive QA. In contrast, when the answers are not explicitly present, needing a deeper dive involving inference and synthesis to craft a response, we refer it to as abstractive QA. With the rise of smart speakers and voice assistants, an emerging challenge in QA is Spoken Question Answering (SQA) to facilitate interactions between humans and these assistants. SQA requires both the questions and answers to be vocalized. Unlike text, speech encompasses richer details such as the identity of the speaker, their tone, and more, which introduces added intricacies to the SQA tasks.\nAn intuitive approach to tackle this challenge is to cascade models. Here, an Automatic Speech Recognition (ASR) system first transcribes the input speech of both questions and passages and then feeds transcriptions into a textual question answering language model that predicts the corresponding answers. Last, answers are transformed back into speech through a Text-To-Speech (TTS) model. The cascaded method\u2019s primary advantage is its ability to harness the great power of text-based language models trained on vast textual datasets, ensuring it can tackle both extractive and abstractive QA tasks. However, the cascaded model has its drawbacks. The primary issue stems from error propagations, since the text-based language models are not typically trained on ASR-erroneous data, the presence of ASR errors can notably undermine the performance of cascaded models. 
Moreover, some natural languages have no written form, which rules out text-based language models for them altogether. Thus, to tackle these issues, an end-to-end speech model is needed.\n\n###figure_1### Recently, end-to-end, textless approaches (e.g., DUAL [5 ###reference_b5###]) have emerged as a solution to the cascade method\u2019s limitations. These works [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###] rely on the encoded representations of acoustic models, like HuBERT [8 ###reference_b8###], and apply K-means clustering to transform speech representations into discrete units. These units, encapsulating both questions and passages, are fed into the model, which predicts the span of the answer in the passage using start and end positions. By adopting this textless approach, DUAL effectively sidesteps error propagation from ASR systems [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###], improving model robustness to noisy speech. However, there is a notable limitation to the DUAL model: it cannot handle abstractive QA scenarios, as it is designed to output only start and end positions within the input spoken passage.\nFig. 1 ###reference_### shows the differences among our proposed method, the cascaded method, and DUAL. We combine the advantages of both the cascaded method and the end-to-end (E2E) textless approach to introduce the first-ever textless E2E model capable of handling both extractive and abstractive QA. Mirroring DUAL\u2019s methodology, we utilize HuBERT to convert the input speech (questions and passages) into discrete units [6 ###reference_b6###]. A sequence-to-sequence model then processes these units to generate answers in the form of discrete units. 
Motivated by the strengths and limitations of the approaches above, we propose GSQA (Generative Spoken Question Answering), the first textless end-to-end model capable of handling both extractive and abstractive QA.\nFurthermore, given the current absence of a spoken abstractive QA corpus, we begin by initializing GSQA with a generative Text QA (TQA) model. This TQA model has been trained on several extractive and abstractive datasets using a generative approach. Subsequently, we fine-tune the TQA model on NMSQA, a dataset dedicated to extractive spoken QA. The primary objective of this procedure is to impart the natural language understanding prowess of the TQA model to GSQA. We also use Microsoft\u2019s Azure TTS service to create a synthesized test set for evaluation. Notably, during the transfer learning phase, GSQA achieved an 18% relative gain in BLEU1 score in a zero-shot abstractive spoken QA setting.\nThe contributions of this paper are summarized below:\nThe introduction of GSQA, the first end-to-end textless generative model for spoken question answering.\nThe establishment of a method where pre-training on textual QA, followed by fine-tuning on extractive spoken QA, leads to zero-shot abstractive spoken QA.\nDemonstrating that our proposed model has competitive performance against textless extractive models."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "The existing SQA methods primarily utilize Automatic Speech Recognition (ASR) [12 ###reference_b12###, 13 ###reference_b13###] transcripts, processed by a large language model (LLM) for QA tasks. A key challenge is that ASR errors significantly impact the LLM\u2019s performance. For instance, less robust ASR systems produce incorrect transcripts, leading to erroneous LLM predictions. In contrast, more advanced ASR systems, such as Whisper-large with 1550M parameters, offer lower error rates and provide more accurate input for the LLM, improving QA task accuracy. However, these high-parameter models also result in longer inference times. In extractive SQA, models identify answer spans within spoken documents. DUAL [5 ###reference_b5###] is noteworthy for its ability to determine these spans, but it fails when answers are not present in the input audio. TWIST [14 ###reference_b14###] addresses SpeechLM\u2019s [15 ###reference_b15###] limitations, particularly its lack of semantic understanding, which often results in irrelevant or incorrect responses. While TWIST improves SpeechLM\u2019s initialization, it still shows limited progress in transferring semantic knowledge from text to speech."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Overview & Formulation",
27
+ "text": "In the realm of Question Answering (QA), we divide the problem into three components: the Question (Q), the Passage (P), and the Answer (A). These entities can manifest in two formats: textual data (, , ) and speech discrete unit-based data (, , ).\nOur method lies in the ambition to create a textless generative end-to-end language model that is capable of question answering. Initially, we follow [6 ###reference_b6###, 7 ###reference_b7###] to quantize speech representations into discrete units. We then utilize a discrete unit-based sequence generative model to craft the answer units, which would be subsequently transformed into speech via a Unit-based HiFi-GAN vocoder [16 ###reference_b16###]. To boost the logical understanding capabilities of the discrete unit-based sequence generative model, we initialize this generative SQA model using weights from a textual model [14 ###reference_b14###]. Fig.2 ###reference_### shows the entire model architecture and training pipeline.\n\n###figure_2###"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Speech Quantization",
33
+ "text": "HuBERT [8 ###reference_b8###] is used for efficient speech representation, converting speech signals into discrete units through mask segment prediction training. This involves clustering each 20ms frame into one of K categories using k-means clustering, where the centroid ID of each cluster represents the frame\u2019s discrete unit. To achieve this conversion, we utilize the HuBERT-base model with a 100-cluster configuration, following Lee et al.\u2019s methods [6 ###reference_b6###]. Additionally, we employ run-length encoding to condense consecutive identical units into a single unit, thus reducing the length of input unit sequences."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Text Question Answering Pretraining",
39
+ "text": "Due to the scarcity of training data in abstractive spoken QA, our model\u2019s performance could be limited. To address this, we propose using a pretrained textual model to enhance semantic understanding. This model is initially trained as a generative language model and then adapted for SQA, facilitating the transfer of textual QA capabilities to speech units.\nHandling the longer sequences in speech, even after deduplication, presents a challenge, as many generative textual models struggle with such long inputs [17 ###reference_b17###]. Therefore, our chosen initialized model must be generative and capable of processing sequences longer than 4096 tokens [18 ###reference_b18###, 19 ###reference_b19###].\nWe select LongT5 [19 ###reference_b19###] for this purpose. LongT5, an extension of the T5 encoder, incorporates global-local attention mechanisms to efficiently manage longer inputs. It combines the attention efficiency of ETC with PEGASUS\u2019s summarization-focused pre-training, offering improved performance. We further pretrain LongT5 on textual QA datasets, denoted as LongT5-TQA.\nThe primary goal during this phase is to train the model to process a textual question and passage and generate the corresponding answer . This pretraining is crucial, equipping our model with foundational QA skills for the subsequent phases of our training methodology."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "GSQA: Generative Spoken QA Model",
45
+ "text": "Fig.2 ###reference_### illustrates our GSQA training procedure. As mentioned in Sec.3.3 ###reference_###, we pretrain the LongT5 on Text Question Answering (TQA) tasks to produce LongT5-TQA, which is capable of text-level extractive and abstractive QA. This model is then fine-tuned on the Extractive Spoken QA dataset, NMSQA [5 ###reference_b5###]. To unify the dataset format in abstractive QA form, we modify the labels in NMSQA from time spans to the answers\u2019 waveform. Here, the given spoken passages, questions, and answers would go through a quantization model and would be converted into discrete units. Since LongT5-TQA does not recognize discrete units, we introduced them as new tokens. The embeddings of units are initialized by random sampling from other text token embeddings, ensuring proper tokenization and a seamless transition from text to speech. By these modifications, LongT5-TQA would be fine-tuned to predict answers in discrete units instead of text, where we denote the fine-tuned model as Unit-LongT5-TQA. Note that if the model\u2019s denotation is not appended with TQA, it means that we don\u2019t pretrain the model on TQA tasks. Last, to achieve speech-to-speech, we employ a HiFi-GAN vocoder [20 ###reference_b20###] to convert discrete units back to speech. This vocoder includes a duration prediction module for deduplicated unit synthesis.\nNMSQA-dev\nNMSQA-test\nSpoken-NarrativeQA-test\n\nModel\nNum. Param.\nF1 Score\nEM Score\nF1 Score\nEM Score\nBLEU1\nROUGE-L\n\nCascade model (w/ ASR transcriptions)\n1025M\n49.1\n32.0\n47.3\n30.4\n13.5\n19.9\n\nDUAL\n452M\n39.4\n21.9\n33.6\n21.2\n-\n-\n\nUnit-LongT5\n312M\n25.5\n12.6\n20.1\n9.4\n6.8\n10.4\n\nUnit-LongT5-TQA (Proposed)\n312M\n41.8\n24.9\n36.0\n24.0\n8.0\n11.8"
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Experimental Setup",
51
+ "text": ""
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Dataset",
57
+ "text": "For TQA pretraining data, We utilized four extractive QA datasets: SQuAD [24 ###reference_b24###], MultiRC [25 ###reference_b25###], NewsQA [26 ###reference_b26###], and Drop [27 ###reference_b27###], along with one abstractive dataset, NarrativeQA [28 ###reference_b28###], to pretrain LongT5-TQA, and use it as foundation model of GSQA. On the other hand, for SQA training, we use NMSQA [5 ###reference_b5###] as our downstream dataset for training Unit-LongT5. The NMSQA is derived from SQuAD v1.1 [29 ###reference_b29###] and consists of Wikipedia paragraphs with human-written questions. While its train and dev sets feature synthesized speech using Amazon Polly, the test set contains audio from 60 human speakers (30 males and 30 females). In addition, we propose a new spoken abstractive SQA evaluation set, named Spoken-NarrativeQA. This dataset is built upon the NarrativeQA [28 ###reference_b28###] test set, an abstractive text question answering dataset for reading comprehension from books and movie scripts with complex narrative understanding. Note that many answers in this dataset are not directly extractable from passages. We transform the text to speech using the TTS system and specifically select test data where answers are absent in the passage. There are 1626 testing samples in the Spoken-NarrativeQA dataset.\nMethod\nSpeech-to-(text/unit) model\nQA Model\nTotal\n\nCascade model\nWhisper-medium.en (764M)\nLongT5-TQA (261M)\n1025M\n\nDUAL\nHuBERT-large-128-layer1-22 (304M)\nLongFormer (148M)\n452M\n\nUnit-LongT5-TQA\nHuBERT-base-100-layer1-6 (51M)\nLongT5-TQA (261M)\n312M\nNMSQA-dev\nNMSQA-test\nSpoken-NarrativeQA-test\n\n8.1\n14.8\n6.3\nQuestion\nWhat did Tancred\u2019s destiny seem to be?\n\nAnswer\nTo live the life of a normal member of the British ruling class.\n\nUnit-LongT5\nlive the life of any\n\nUnit-LongT5-TQA\nlive the life of any conventional member of the British ruling class"
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Cascaded Pipeline",
63
+ "text": "The conventional approach for SQA tasks typically follows a cascaded pipeline, as illustrated in the middle of Fig. 1 ###reference_###. The passages and questions are first transcribed by an ASR system, followed by a text-based language model that generates text-based answers, and a TTS system converts the text-based answers into speech answers. To be more specific, we use Whisper-medium.en [13 ###reference_b13###] as the ASR system, the fine-tuned LongT5-TQA model as the text-based language model, and the Azure TTS service as the TTS system. We record the cascaded pipeline\u2019s experiment result in the Cascade Model column in Table 1 ###reference_###. The total number of parameters of the cascaded model is shown in Table 2 ###reference_###, and the WERs of the ASR system in given datasets are listed in Table 3 ###reference_###."
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "Automatic Performance Evaluation",
69
+ "text": "Since all our models ultimately output in speech, what we focus on is whether the content of this waveform contains the answer. We then use an ASR model to convert the generated waveform back to text, allowing us to use text-based metrics to evaluate the content of the model. To reduce errors from the ASR during evaluation, we utilize the state-of-the-art ASR model, whisper-large-v2, during automatic performance evaluation. We then compare the transcriptions of the outputs from whisper-large-v2 with the text ground truth. We calculate the F1-score and Exact-matched (EM) scores for extractive QA tasks. The BLEU1 score [21 ###reference_b21###] and the Rouge score [22 ###reference_b22###] are used for abstractive QA tasks. Lastly, the experiment results of all models are reported in Table 1 ###reference_###."
70
+ },
71
+ {
72
+ "section_id": "4.4",
73
+ "parent_section_id": "4",
74
+ "section_name": "Implementation details",
75
+ "text": "As the previous section mentioned, we pretrain LongT5 on the 5 TQA datasets for 13 epochs, the pretraining learning rate is set to 0.0005, and the weight decay is 0.01. On the other hand, we fine-tune the pretrained LongT5 on the downstream unit-based dataset, NMSQA, for 25 epochs, the downstream learning rate is set to 0.0003, and the weight decay is 0.001. Last, when we inference our model on the abstractive test set(e.g., Spoken-NarrativeQA-test), to encourage GSQA output longer answers, we use beam search for decoding with beam size of 5, and set length penalty to 2."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Results",
81
+ "text": "As illustrated in Table 1 ###reference_###, we evaluate our models on both extractive QA dataset NMSQA-dev, NMSQA-test, and on abstractive QA dataset Spoken-Narrative QA. Among the models evaluated, the proposed Unit-LongT5-TQA exhibits remarkable performance. On the NMSQA-dev set, it achieves an F1 score of 41.8% and an EM score of 24.9%. Its performance remains consistent on the NMSQA-test dataset, registering scores of 36.0% and 24.0% for F1 and EM, respectively. Furthermore, this model outperforms the others on the generative Spoken-NarrativeQA-test set, securing a BLEU1 score of 8.0% and a ROUGE-L score of 11.8%."
82
+ },
83
+ {
84
+ "section_id": "5.1",
85
+ "parent_section_id": "5",
86
+ "section_name": "The effectiveness of TQA pretraining",
87
+ "text": "From Table1 ###reference_###, we can observe that Unit-LongT5-TQA has huge improvements on both NMSQA-dev and NMSQA-test, which gets 16% F1-scores gain than Unit-LongT5 and gets two times than it on EM scores. It shows that pretraining language model on TQA tasks brings significant advantages for unit-based model initialization to learn semantic information in speech. On the other hand, in the Spoken-NarrativeQA-test, an abstractive spoken QA dataset, Unit-LongT5-TQA also outperforms Unit-LongT5 on both BLEU1 and ROUGE-L scores, which gets 1.2 and 1.4 increase, respectively. In our showcase, we have observed that TQA pretraining could further help the model generating more fluent sentences. The following results in Table 4 ###reference_### are from our test set. Without TQA pretraining, our prediction is \u201dlive the life of any,\u201d where the sentence is truncated in the middle, resulting in an incoherent meaning. However, after TQA training, the model would predict complete sentences."
88
+ },
89
+ {
90
+ "section_id": "5.2",
91
+ "parent_section_id": "5",
92
+ "section_name": "Performance at different WERs",
93
+ "text": "###figure_3### Fig. 3 ###reference_### illustrates the superior stability and efficiency of our E2E model for SQA, especially compared to the cascaded model\u2019s sensitivity to ASR WERs. The cascaded model, using ASR systems Whisper-medium.en and Whisper-small.en, demonstrates marked performance degradation with increasing WERs, whereas our E2E model maintains consistency even in high WER scenarios. Additionally, the cascaded model, particularly with Whisper-medium.en, requires significantly larger parameters, escalating computational resource demands. While enhancing the ASR system improves cascaded model performance, this approach requires extensive paired text-speech data, larger model scales, and considerable training costs for both LLMs and the TTS system, making it extremely expensive. In contrast, our smaller E2E model potentially offers comparable performance with greater training efficiency. Furthermore, the E2E model circumvents the limitations of languages without written forms, which are insurmountable for ASR-based systems. These advantages firmly position our E2E model as a more stable, resource-efficient, and universally applicable solution for SQA tasks."
94
+ },
95
+ {
96
+ "section_id": "5.3",
97
+ "parent_section_id": "5",
98
+ "section_name": "Models parameters",
99
+ "text": "For a fair comparison, we strive to ensure the number of parameters across all models remains relatively consistent. Upon examination, the cascaded method is found to have a large number of parameters. This is primarily because it requires stacking three huge models. Since the performance of each model should not be compromised, opting for smaller models is not feasible, leading to a larger overall parameter size. In contrast, end-to-end models rely on a single model to accomplish SQA tasks, resulting in a reduced number of parameters. We observed that for abstractive QA, there isn\u2019t a stringent requirement for a compelling speech-to-unit model. It is sufficient to encode the content\u2019s information into discrete units. Instead, a more potent QA model is needed to infer the answers to the questions. Detailed parameter counts are presented in Table 2 ###reference_###. Our model has the fewest parameters among all models. Notably, our model outperforms the others across various metrics."
100
+ },
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "Conclusion",
105
+ "text": "In conclusion, we introduce the first end-to-end Generative Spoken Question Answering (GSQA) model, bridging the gap between spoken extractive and abstractive QA tasks. Combining the advantages of cascaded and textless models, our GSQA model demonstrates superior performance on extractive QA datasets, outperforming the previous E2E extractive SQA models. Furthermore, in the challenging abstractive zero-shot domain, our model exhibits competitive capabilities. We also highlighted the importance of pretraining the language model on textual data to enhance the unit-to-unit model\u2019s capability of semantic understanding. Our future research would concentrate on improving the model\u2019s performance with human narrations and exploring training strategies to enhance generalization ability across diverse SQA tasks."
106
+ },
107
+ {
108
+ "section_id": "7",
109
+ "parent_section_id": null,
110
+ "section_name": "Acknowledgement",
111
+ "text": "We thank the National Center for High-performance Computing (NCHC) of National Applied Research Laboratories (NARLabs) in Taiwan for providing computational and storage resources."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {
116
+ "1": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.2\" style=\"width:433.6pt;height:89.1pt;vertical-align:-0.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-48.4pt,9.9pt) scale(0.81755,0.81755) ;\">\n<p class=\"ltx_p\" id=\"S3.T1.2.1\"><span class=\"ltx_text\" id=\"S3.T1.2.1.1\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T1.2.1.1.1\" style=\"width:530.4pt;height:109pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S3.T1.2.1.1.1.1\"><span class=\"ltx_text\" id=\"S3.T1.2.1.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.1.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.2.1.1.1.1.1.1.1.1\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.2.1.1.1.1.1.1.1.2\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S3.T1.2.1.1.1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.1.3.1\">NMSQA</span>-dev</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S3.T1.2.1.1.1.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.1.4.1\">NMSQA</span>-test</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S3.T1.2.1.1.1.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.1.5.1\">Spoken-NarrativeQA</span>-test</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.1.1.1.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.2.1\">Model</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.2.2\">Num. 
Param.</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.2.3\">F1 Score</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.2.4\">EM Score</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.2.5\">F1 Score</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.2.6\">EM Score</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.2.7\">BLEU1</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.2.8\">ROUGE-L</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.1.1.1.1.1.1.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.1\">Cascade model (w/ ASR transcriptions)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.2\">1025M</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.3\">49.1</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.4\">32.0</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.5\">47.3</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.6\">30.4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.7\">13.5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.3.8\">19.9</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.1.1.1.1.1.1.4\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.1\">DUAL</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.2\">452M</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.3\">39.4</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.4\">21.9</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T1.2.1.1.1.1.1.1.4.5\">33.6</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.6\">21.2</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.7\">-</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.1.1.1.1.1.4.8\">-</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.1.1.1.1.1.1.5\">\n<span class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.1.1.1.1.1.1.5.1\">Unit-LongT5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.5.2\">312M</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.5.3\">25.5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.5.4\">12.6</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.5.5\">20.1</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.5.6\">9.4</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.5.7\">6.8</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.1.1.1.1.1.5.8\">10.4</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.1.1.1.1.1.1.6\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.2.1.1.1.1.1.1.6.1\">Unit-LongT5-TQA (Proposed)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.6.2\">312M</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.1.1.1.1.1.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.6.3.1\">41.8</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.6.4.1\">24.9</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.1.1.1.1.1.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.6.5.1\">36.0</span></span>\n<span class=\"ltx_td ltx_align_center 
ltx_border_bb ltx_border_r\" id=\"S3.T1.2.1.1.1.1.1.1.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.6.6.1\">24.0</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.1.1.1.1.1.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.6.7.1\">8.0</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.1.1.1.1.1.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1.1.1.6.8.1\">11.8</span></span></span>\n</span></span></span>\n</span></span></span></p>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.36.1.1\">Table 1</span>: </span>The table presents the speech-to-speech evaluation of our models and the number of parameters for each method. For extractive QA datasets, we utilized F1 and exact match (EM) as evaluation metrics. On the other hand, for abstractive QA datasets, we employed BLEU1<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib21\" title=\"\">21</a>]</cite> and Rouge-L<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib22\" title=\"\">22</a>]</cite> for assessment. The cascaded model in this table includes an ASR system and an LM, which is <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.37.2\">Whisper-medium.en</span> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib13\" title=\"\">13</a>]</cite> and <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.38.3\">LongT5-TQA</span> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib19\" title=\"\">19</a>]</cite>, respectively. 
DUAL<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib5\" title=\"\">5</a>]</cite> combines a <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.39.4\">HuBERT-base</span> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib8\" title=\"\">8</a>]</cite>, a <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.40.5\">K-means model</span> with 128 clusters, and a <span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.41.6\">Longformer</span> <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.09781v4#bib.bib23\" title=\"\">23</a>]</cite>.</figcaption>\n</figure>",
118
+ "capture": "Table 1: The table presents the speech-to-speech evaluation of our models and the number of parameters for each method. For extractive QA datasets, we utilized F1 and exact match (EM) as evaluation metrics. On the other hand, for abstractive QA datasets, we employed BLEU1[21] and Rouge-L[22] for assessment. The cascaded model in this table includes an ASR system and an LM, which is Whisper-medium.en [13] and LongT5-TQA [19], respectively. DUAL[5] is combined a HuBERT-base [8], K-means model for 128 clusters, and a Longformer [23]."
119
+ },
120
+ "2": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T2.2\"><span class=\"ltx_text\" id=\"S4.T2.2.2\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.2.2.2\" style=\"width:375.4pt;height:72pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S4.T2.2.2.2.2\"><span class=\"ltx_text\" id=\"S4.T2.2.2.2.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.2.2.2.2.2\">\n<span class=\"ltx_tr\" id=\"S4.T2.2.2.2.2.2.2.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.2.2.2.2.2.2.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.2.2.3.1.1\">Method</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.2.2.2.2.2.2.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.2.2.3.2.1\">Speech-to-(text/unit) model</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.2.2.2.2.2.2.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.2.2.3.3.1\">QA Model</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.2.2.2.2.2.2.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.2.2.3.4.1\">Total</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.2.2.2.2.2.2.4\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.2.2.2.2.2.2.4.1\">Cascade model</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2.2.2.4.2\">Whisper-medium.en (764M)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2.2.2.4.3\">LongT5-TQA (261M)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2.2.2.4.4\">1025M</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left\" id=\"S4.T2.1.1.1.1.1.1.1.2\">DUAL</span>\n<span class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.1.1.1.1.1.1.1.1\">HuBERT-large-128-layer<sub class=\"ltx_sub\" id=\"S4.T2.1.1.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.1.1.1.1.1.1.1.1\">1-22</span></sub> (304M)</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.1.1.1.1.1.3\">LongFormer (148M)</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.1.1.1.1.1.4\">452M</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.2.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.2.2.2.2.2.2.2.2\">Unit-LongT5-TQA</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.2.2.2.2.2.2.1\">HuBERT-base-100-layer<sub class=\"ltx_sub\" id=\"S4.T2.2.2.2.2.2.2.2.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.2.2.2.2.2.2.2.1.1.1\">1-6</span></sub> (51M)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.2.2.2.2.2.2.3\">LongT5-TQA (261M)</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.2.2.2.2.2.2.2.4\">312M</span></span>\n</span></span></span>\n</span></span></span><span class=\"ltx_text\" id=\"S4.T2.2.3\" style=\"font-size:90%;\"></span></p>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.1.1\">Table 2</span>: </span>The respective number of parameters in our experiment. Note that the E2E method, like DUAL and our work, does not need an ASR system but needs a Unit-Extraction model to generate units, and the number after the name of that is the number of clusters.</figcaption>\n</figure>",
122
+ "capture": "Table 2: The respective number of parameters in our experiment. Note that the E2E method, like DUAL and our work, does not need an ASR system but needs a Unit-Extraction model to generate units, and the number after the name of that is the number of clusters."
123
+ },
124
+ "3": {
125
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T3.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.2.1.1\" style=\"width:240.7pt;height:36pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S4.T3.2.1.1.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.2.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.2.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.2.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1.1.1.1.1.1.1.1\">NMSQA</span>-dev</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.2.1.1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1.1.1.1.1.1.2.1\">NMSQA</span>-test</span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.2.1.1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1.1.1.1.1.1.3.1\">Spoken-NarrativeQA</span>-test</span></span>\n<span class=\"ltx_tr\" id=\"S4.T3.2.1.1.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.2.1.1.1.1.1.2.1\">8.1</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.2.1.1.1.1.1.2.2\">14.8</span>\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.2.1.1.1.1.1.2.3\">6.3</span></span>\n</span></span></span>\n</span></span></span><span class=\"ltx_text\" id=\"S4.T3.2.2\" style=\"font-size:90%;\"></span></p>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.1\">Table 3</span>: </span>Word Error Rate (WER) of the pretrained Whisper-medium.en ASR system on different datasets for Spoken QA 
tasks.</figcaption>\n</figure>",
126
+ "capture": "Table 3: Word Error Rate (WER) of the pretrained Whisper-medium.en ASR system on different datasets for Spoken QA tasks."
127
+ },
128
+ "4": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T4.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T4.2.1.1\" style=\"width:362.0pt;height:72pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S4.T4.2.1.1.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.1.1.1.1.1.1.1.1\">Question</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.1.2\">What did Tancred\u2019s destiny seem to be?</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.1.1.1.2\">\n<span class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.1.1.1.1.1.2.1.1\">Answer</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.2.2\">To live the life of a normal member of the British ruling class.</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.1.1.1.3\">\n<span class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.1.1.1.1.1.3.1.1\">Unit-LongT5</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.3.2\">live the life of any</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.1.1.1.4\">\n<span class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.4.1\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T4.2.1.1.1.1.1.4.1.1\">Unit-LongT5-TQA</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1.1.1.4.2\">live the life of any conventional member of the British ruling class</span></span>\n</span></span></span>\n</span></span></span><span class=\"ltx_text\" id=\"S4.T4.2.2\" style=\"font-size:90%;\"></span></p>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.1.1\">Table 4</span>: </span>Comparison of Results</figcaption>\n</figure>",
130
+ "capture": "Table 4: Comparison of Results"
131
+ }
132
+ },
133
+ "image_paths": {
134
+ "1": {
135
+ "figure_path": "2312.09781v4_figure_1.png",
136
+ "caption": "Fig. 1: GSQA compared to other baselines: The Cascade Method accommodates both abstractive and extractive QA but risks error propagation. DUAL is an end-to-end textless approach, exclusive to extractive QA. GSQA is a textless, end-to-end generative method, capable of handling both extractive and abstractive QA.",
137
+ "url": "http://arxiv.org/html/2312.09781v4/extracted/5746284/GSQA-17.png"
138
+ },
139
+ "2": {
140
+ "figure_path": "2312.09781v4_figure_2.png",
141
+ "caption": "Fig. 2: Left: The process of discrete unit quantization from synthesis data. Right: Model Training Procedure: A depiction of the transition from textual QA pretraining to spoken QA fine-tuning.",
142
+ "url": "http://arxiv.org/html/2312.09781v4/extracted/5746284/GSQA-16.png"
143
+ },
144
+ "3": {
145
+ "figure_path": "2312.09781v4_figure_3.png",
146
+ "caption": "Fig. 3: We sample the 8000 data within NMSQA-dev to verify the impact on the cascaded model under different Word Error Rates (WERs) with different ASR systems.",
147
+ "url": "http://arxiv.org/html/2312.09781v4/extracted/5746284/WER.png"
148
+ }
149
+ },
150
+ "validation": true,
151
+ "references": [
152
+ {
153
+ "1": {
154
+ "title": "\u201cGenerative question answering: Learning to answer the whole question,\u201d",
155
+ "author": "Mike Lewis and Angela Fan,",
156
+ "venue": "in 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "2": {
162
+ "title": "\u201cEnd-to-end synthetic data generation for domain adaptation of question answering systems,\u201d",
163
+ "author": "Siamak Shakeri et al.,",
164
+ "venue": "in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, November 16-20, 2020.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "3": {
170
+ "title": "\u201cContrastive domain adaptation for question answering using limited text corpora,\u201d",
171
+ "author": "Zhenrui Yue et al.,",
172
+ "venue": "in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "4": {
178
+ "title": "\u201cDomain adaptation for question answering via question classification,\u201d",
179
+ "author": "Zhenrui Yue et al.,",
180
+ "venue": "in Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "5": {
186
+ "title": "\u201cDUAL: discrete spoken unit adaptive learning for textless spoken question answering,\u201d",
187
+ "author": "Guan-Ting Lin et al.,",
188
+ "venue": "in Interspeech 2022, 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "6": {
194
+ "title": "\u201cUnity: Two-pass direct speech-to-speech translation with discrete units,\u201d",
195
+ "author": "Hirofumi Inaguma et al.,",
196
+ "venue": "in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023, Toronto, Canada, July 9-14, 2023.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "7": {
202
+ "title": "\u201cTextless speech-to-speech translation on real data,\u201d",
203
+ "author": "Ann Lee et al.,",
204
+ "venue": "in NAACL 2022, Seattle, WA, USA, July 10-15, 2022.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "8": {
210
+ "title": "\u201cHubert: Self-supervised speech representation learning by masked prediction of hidden units,\u201d",
211
+ "author": "Wei-Ning Hsu et al.,",
212
+ "venue": "IEEE ACM Trans. Audio Speech Lang. Process., pp. 3451\u20133460, 2021.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "9": {
218
+ "title": "\u201cSpoken squad: A study of mitigating the impact of speech recognition errors on listening comprehension,\u201d",
219
+ "author": "Chia-Hsuan Li et al.,",
220
+ "venue": "in Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "10": {
226
+ "title": "\u201cMitigating the impact of speech recognition errors on spoken question answering by adversarial domain adaptation,\u201d",
227
+ "author": "Chia-Hsuan Lee, Yun-Nung Chen, and Hung-yi Lee,",
228
+ "venue": "in ICASSP 2019, Brighton, United Kingdom, May 12-17, 2019.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "11": {
234
+ "title": "\u201cKnowledge distillation for improved accuracy in spoken question answering,\u201d",
235
+ "author": "Chenyu You et al.,",
236
+ "venue": "in ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "12": {
242
+ "title": "\u201cwav2vec 2.0: A framework for self-supervised learning of speech representations,\u201d",
243
+ "author": "Alexei Baevski et al.,",
244
+ "venue": "in NeurIPS 2020, December 6-12, 2020.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "13": {
250
+ "title": "\u201cRobust speech recognition via large-scale weak supervision,\u201d",
251
+ "author": "Alec Radford et al.,",
252
+ "venue": "in International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, Proceedings of Machine Learning Research.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "14": {
258
+ "title": "\u201cTextually pretrained speech language models,\u201d 2023.",
259
+ "author": "Michael Hassid et al.,",
260
+ "venue": null,
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "15": {
266
+ "title": "\u201cSpeechlm: Enhanced speech pre-training with unpaired textual data,\u201d 2023.",
267
+ "author": "Ziqiang Zhang, Sanyuan Chen, Long Zhou, Yu Wu, Shuo Ren, Shujie Liu, Zhuoyuan Yao, Xun Gong, Lirong Dai, Jinyu Li, and Furu Wei,",
268
+ "venue": null,
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "16": {
274
+ "title": "\u201cHifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis,\u201d",
275
+ "author": "Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae,",
276
+ "venue": "in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "17": {
282
+ "title": "\u201cBART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,\u201d",
283
+ "author": "Mike Lewis et al.,",
284
+ "venue": "in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "18": {
290
+ "title": "\u201cLlama: Open and efficient foundation language models,\u201d 2023.",
291
+ "author": "Hugo Touvron et al.,",
292
+ "venue": null,
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "19": {
298
+ "title": "\u201cLongt5: Efficient text-to-text transformer for long sequences,\u201d",
299
+ "author": "Mandy Guo et al.,",
300
+ "venue": "in Findings of the Association for Computational Linguistics: NAACL 2022, Seattle, WA, United States, July 10-15, 2022.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "20": {
306
+ "title": "\u201cDirect speech-to-speech translation with discrete units,\u201d",
307
+ "author": "Ann et al. Lee,",
308
+ "venue": "in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, May 2022, pp. 3327\u20133339, Association for Computational Linguistics.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "21": {
314
+ "title": "\u201cBleu: a method for automatic evaluation of machine translation,\u201d",
315
+ "author": "Papineni et al.,",
316
+ "venue": "in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "22": {
322
+ "title": "\u201cROUGE: A package for automatic evaluation of summaries,\u201d",
323
+ "author": "Chin-Yew Lin,",
324
+ "venue": "in Text Summarization Branches Out, Barcelona, Spain, July 2004, pp. 74\u201381, Association for Computational Linguistics.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "23": {
330
+ "title": "\u201cLongformer: The long-document transformer,\u201d 2020.",
331
+ "author": "Iz Beltagy, Matthew E. Peters, and Arman Cohan,",
332
+ "venue": null,
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "24": {
338
+ "title": "\u201cKnow what you don\u2019t know: Unanswerable questions for squad,\u201d",
339
+ "author": "Pranav Rajpurkar et al.,",
340
+ "venue": "in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "25": {
346
+ "title": "\u201cSuperglue: A stickier benchmark for general-purpose language understanding systems,\u201d",
347
+ "author": "Alex Wang et al.,",
348
+ "venue": "in Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "26": {
354
+ "title": "\u201cNewsqa: A machine comprehension dataset,\u201d",
355
+ "author": "Adam Trischler et al.,",
356
+ "venue": "in Proceedings of the 2nd Workshop on Representation Learning for NLP, Rep4NLP@ACL 2017, Vancouver, Canada, August 3, 2017.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "27": {
362
+ "title": "\u201cDROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs,\u201d",
363
+ "author": "Dheeru Dua et al.,",
364
+ "venue": "in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "28": {
370
+ "title": "\u201cThe narrativeqa reading comprehension challenge,\u201d",
371
+ "author": "Tom\u00e1s Kocisk\u00fd et al.,",
372
+ "venue": "Trans. Assoc. Comput. Linguistics.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "29": {
378
+ "title": "\u201cSquad: 100, 000+ questions for machine comprehension of text,\u201d",
379
+ "author": "Pranav Rajpurkar et al.,",
380
+ "venue": "in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016.",
381
+ "url": null
382
+ }
383
+ }
384
+ ],
385
+ "url": "http://arxiv.org/html/2312.09781v4"
386
+ }
20240722/2312.10217v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2312.12056v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2312.12544v3.json ADDED
@@ -0,0 +1,719 @@
1
+ {
2
+ "title": "The Dark Side of NFTs: A Large-Scale Empirical Study of Wash Trading",
3
+ "abstract": "NFTs (Non-Fungible Tokens) have seen significant growth since they first captured public attention in 2021. However, the NFT market is plagued by fake transactions and economic bubbles, e.g., NFT wash trading. Wash trading typically refers to a transaction involving the same person or two colluding individuals, and has become a major threat to the NFT ecosystem. Previous studies only detect NFT wash trading from the financial aspect, while the real-world wash trading cases are much more complicated (e.g., not aiming at inflating the market value). There is still a lack of multi-dimension analysis to better understand NFT wash trading. Therefore, we present the most comprehensive study of NFT wash trading, analyzing 8,717,031 transfer events and 3,830,141 sale events from 2,701,883 NFTs. We identify three types of NFT wash trading and propose identification algorithms. Our experimental results reveal 824 transfer events and 5,330 sale events (accounting for a total of $8,857,070.41) and 370 address pairs related to NFT wash trading behaviors, causing a minimum loss of $3,965,247.13. Furthermore, we provide insights from six aspects, i.e., marketplace design, profitability, NFT project design, payment token, user behavior, and NFT ecosystem.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction",
9
+ "text": "NFTs (Non-Fungible Tokens) (Wikipedia, 2023 ###reference_b44###) are blockchain-enabled digital assets, which users can buy and sell without third-party participation. Due to the rising enthusiasm for the concept, the NFT trading volume increased to an astonishing $25.1 billion in 2021 (Hayward, 2022 ###reference_b23###). Many NFT marketplaces have been created to facilitate NFT trading. Take OpenSea, the NFT marketplace with over one million registered users (Hayes, 2022 ###reference_b22###), as an example. It facilitated transactions (TXNs) for around $5 billion alone in January 2022 (Hypebeast, 2023 ###reference_b24###). However, the market value of NFTs may not justify such a thriving market. According to Binance, almost 45% of all NFT trading volume may be fraudulent due to wash trading events (Coinfomania, 2022 ###reference_b16###), where users manipulate the market by buying and selling the same financial product (Sergeenkov, 2022 ###reference_b34###). A notorious case (Tahmasbi and Fuchsberger, 2022 ###reference_b37###; Staff, 2021 ###reference_b36###; pastel, 2022 ###reference_b31###; Baker, 2021 ###reference_b12###; ANDREW ROSSOW, 2022 ###reference_b11###) involves the wash trading on Cryptopunk #9998 (an NFT)111The reference of all the NFTs, TXN hashes, and Ethereum accounts mentioned in each section can be found at REFERENCE.csv ###reference__Side_of_NFTs/blob/main/REFERENCE.csv###, where the buyer initially obtained loans from multiple sources to purchase the NFT, then immediately sold the NFT to the origin holder for the same price, and finally repaid the loans. 
Another example is that Meebits #13824 was traded twice between 0x35D0CA and 0xA99A76 for 14,700 WETH (Wra, 2023 ###reference_b5###)(a cryptocurrency) and 15,000 WETH, around 40 times the previous trading price.\nThe existence of NFT wash trading has been proven in the previous works (von Wachter et al., 2022 ###reference_b40###; Das et al., 2021 ###reference_b19###; Serneels, 2023 ###reference_b35###; Tariq and Sifat, 2022 ###reference_b38###; Wen et al., 2023 ###reference_b42###), with several features abstracted, e.g., von Wachter et al. (von Wachter et al., 2022 ###reference_b40###) discovered that 2.04% of NFTs\u2019 sale events trigger suspicions of market abuse, while Das et al. (Das et al., 2021 ###reference_b19###) adopted the strongly/weakly connected component of the graph constructed by NFTs\u2019 events to detect NFT wash trading. However, they mainly abstracted NFT wash trading as graph patterns and only focused on its financial losses, while ignoring TXNs themselves (e.g., the trading price) and other aspects of the NFT ecosystem. Investigating NFT wash trading remains challenging due to the lack of a broader discussion on its definition, a researcher-friendly dataset, and more valuable insights. To fill the gap, we present the most comprehensive study on NFT wash trading, with findings from six aspects.\nIn this paper, we first collect NFTs\u2019 sale/transfer events through API access and related block/ERC-20 TXNs via open-source datasets. Then, we define three types of NFT wash trading, i.e., Round-trip Trading, Unprofitable Trading, and Hidden Trading. Next, we design heuristic algorithms to detect each type and adopt FP-Growth (Han et al., 2004 ###reference_b21###) to identify wash trading address pairs/groups. We flag 824 transfer events, 5,330 sale events, 370 address pairs, and 29 address groups related to wash trading, accounting for tokens worth around $8,857,070.41. 
Based on the experimental results, we offer insights from six aspects, i.e., marketplace design, profitability, NFT project design, payment token, user behavior, and the NFT ecosystem. The main contributions of this paper are summarized as follows:\nTo the best of our knowledge, this work is the most comprehensive study on NFT wash trading. We identify three forms of NFT wash trading and propose heuristic algorithms for their detection.\nWe contribute an extensive dataset of 2,701,883 NFTs\u2019 event sequences from 285 most popular collections. We have released the open-source processed datasets to help researchers uncover further studies222 https://github.com/NFTWashTrading/The_Dark_Side_of_NFTs ###reference__Side_of_NFTs###.\nWe systematically evaluate the NFT wash trading results, including their financial impact, trend, market liquidity, and insights from six aspects. In addition, we provide practical advice to NFT marketplaces."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "2. Background",
15
+ "text": "In this section, we introduce the background knowledge of Ethereum and NFTs."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "2.1. Ethereum",
21
+ "text": "Blockchain is a decentralized, peer-to-peer network system that relies on cryptographic algorithms to secure data and consensus mechanisms to validate transactions. These technologies work together to ensure the integrity and transparency of the blockchain, making it a reliable system for various applications (wik, 2023a ###reference_b2###). Based on blockchain technology, Ethereum is a distributed ledger platform with programmable features. With the design of smart contracts and the account-based model, Ethereum allows people to initiate TXNs and develop applications for users to interact with on blockchains (wik, 2023b ###reference_b4###). It increases the diversity of the decentralized world, including the prosperity of NFTs.\nSmart contracts (wik, 2023c ###reference_b8###) are tamper-proof programs stored on the blockchain that run when predetermined conditions are met. They facilitate TXNs in the decentralized system. Ethereum has two account types, i.e., externally-owned account (EOA) and contract account (contributors, 2023 ###reference_b18###). EOAs are controlled by anyone with private keys, while the contract account is associated with the smart contract code (Vuji\u010di\u0107 et al., 2018 ###reference_b41###). Accounts in Ethereum are anonymous but traceable. Moreover, a single user can hold multiple accounts without providing personal information.\nEther (ETH) is the native token circulating in Ethereum, used as a payment system for verifying TXNs (wik, 2023b ###reference_b4###). ERC-20 token is a standard for creating alternative tokens developed by smart contracts (OpenZeppelin, 2023 ###reference_b30###).\nA block TXN (Zheng et al., 2020 ###reference_b45###) constructs the body of the block in Ethereum and refers to the TXN where the sender sends ETH or other tokens to the receiver with some additional information (e.g., smart contract function calls). 
An internal TXN refers to the TXN that occurs during the execution of a smart contract (Chan and Olmsted, 2017 ###reference_b13###). When a smart contract is called, it may execute multiple functions, and each function may trigger calls to other contracts. The invocation and interaction between these contracts are achieved through internal TXNs. In this paper, we refer to all TXNs involving the transfer of ERC-20 tokens between addresses as ERC-20 token TXNs.\n###figure_1### ###figure_2###"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "2.2. NFT Trading",
27
+ "text": "An NFT (Non-Fungible Token) is a unique on-chain identifier that typically refers to digital files, such as photos. Most NFTs are built using the ERC-721 standard (off, 2023a ###reference_b3###), which defines the primary interface for users to own, trade, and manage their tokens through a smart contract. NFTs are transferred on-chain using functions displayed by Figure 2 ###reference_1###. The transfer of an NFT involves the from and to addresses, which represent the propagation of the NFT, and the tokenId and contract address, which together represent a unique NFT on the blockchain333We only discuss ERC-721 NFT in this paper.. Within a collection, each tokenId is different. The first transfer of an NFT is also called the minted event, which shows the publishing of an NFT on the blockchain.\nNFTs are often purchased on various trading platforms, such as OpenSea. Regarding OpenSea, as shown in Figure 1 ###reference_1###, there are two main ways to purchase an NFT. 1) instant sale, where an NFT is listed at a fixed price, and a buyer can purchase it immediately, with a portion of the trading value being transferred to OpenSea\u2019s official wallet as service fees. 2) auction, where buyers place bids on an NFT, and sellers can accept the offer. In this case, the sellers pay the service fees (part of the auction price) to OpenSea. The payment token for the auction is WETH (Wra, 2023 ###reference_b5###)."
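The (contract address, tokenId) identity described above can be sketched in a few lines of Python. This is an illustration only; the collection addresses are hypothetical, and the normalization choice is ours:

```python
def nft_id(contract: str, token_id: int) -> tuple:
    """Globally unique on-chain NFT identifier: (contract address, tokenId).

    Within one collection (one ERC-721 contract), every tokenId is distinct;
    the same tokenId under another contract names a different NFT.
    """
    # Ethereum addresses are case-insensitive hex, so normalize them
    return (contract.lower(), token_id)

# Hypothetical collection addresses, for illustration only
assert nft_id("0xCollectionA", 7) != nft_id("0xCollectionB", 7)
assert nft_id("0xCollectionA", 7) == nft_id("0xcollectiona", 7)
```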
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "3. WASH TRADING/TRADER TYPES",
33
+ "text": "In this section, we delve into the high-level ideas of three wash trading types, providing detailed examples. Then, we define wash trading pairs/groups to flag wash traders and confirm a related case of colluding addresses."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "3.1. Type 1: Round-trip Trading",
39
+ "text": "Explanation. Round-trip Trading is the unethical practice of repeatedly buying and selling the same securities to manipulate the finance market (SCOTT, 2023 ###reference_b33###). In the NFT market, Round-trip Trading happens when someone purchases an NFT and promptly resells it, either directly or through multiple addresses, to the original NFT holder.\nExample. OG:Crystal #4015 was traded 24 times at almost the same price between 0xCF6FF6 and 0xC17D7c within 9 hours, accounting for 126.2 ETH."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "3.2. Type 2: Unprofitable Trading",
45
+ "text": "Explanation. We define Unprofitable Trading as an NFT TXN where the buyer is either funded beforehand by the seller or receives the seller\u2019s return amount, indicating that the TXN is not intended for profit. Normally, the transfer of the NFT (seller \u2192 buyer) and the transfer of the TXN amount (buyer \u2192 seller) occur simultaneously within a block TXN, which is guaranteed by smart contracts. However, Unprofitable Trading comes with an additional value transfer, e.g., the seller transfers an amount of ETH similar to the NFT trading price to the buyer shortly before or after the TXN. Another variant uses a certain amount of ERC-20 tokens with market value instead of ETH. An early value transfer can be interpreted as a form of funding, while the restitution of the amount illustrates that the NFT TXN does not profit from a third party.\nExample. For Omnimorph #3980, 0x6149ca (seller) transferred 0.23 ETH (104.5% of the trading price) to 0x2beba3 (buyer) three minutes before the NFT sale event. Twenty minutes later, 0x2beba3 (new seller) returned 0.2098 ETH (95% of the trading price) to 0x0f767ef (new buyer) four minutes after the next sale event. Neither NFT seller profited from their TXN, which is abnormal for a usual sale event. They conspired to complete the wash trading process."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "3.3. Type 3: Hidden Trading",
51
+ "text": "Explanation. Hidden Trading is the third type of wash trading we identify, characterized by collusive actions between sellers and designated buyers. Hidden Trading implies the existence of a series of continuous private trades where the NFT sellers designate the NFT buyers. On OpenSea, NFT holders can reserve items for specific buyers (Goodman, 2022 ###reference_b20###), i.e., only the addresses approved by the holders have the right to purchase the NFT. Private trades disregard the impact of market liquidity and user sentiment because the sellers and buyers know each other before the TXNs occur.\nExample. Wash traders raised the price of VeeFriends #7582 from 3.750 ETH through 3.780 ETH and 4 ETH to 7.4 ETH, all through continuous private trades."
52
+ },
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "3.4. Wash trading pairs/groups",
57
+ "text": "Explanation. We define wash trading pairs as address pairs with high-frequency wash trading behaviors, while a wash trading group consists of multiple relevant wash trading pairs. Suspicious addresses will no longer be regarded as incidentally or innocently involved in NFT wash trading if they frequently participate in Round-trip Trading, Unprofitable Trading, and Hidden Trading. Specifically, address pairs/groups involved in the three types of NFT wash trading more than a certain number of times can be marked as wash trading pairs/groups. If a common address/TXN connects two pairs, we assume that the two pairs come from the same wash trading group.\nExample. Loot #2157 was wash traded ten times between 0xB7639A and 0x996665 for 1 ETH between 2022-08-10T12:11:14Z and 2022-08-11T03:25:36Z. Shortly after, it was sold by 0xB7639A to 0xcc8990, and the same wash trading behavior happened again between 0xcc8990 and 0xBF1eD4, and more address pairs. Within a day, 178 transactions transpired back and forth at nearly the same price, and the domain names of the sellers and buyers were highly similar, all in the form of Chinese license plates. Here, 0xB7639A and 0x996665, as well as 0xcc8990 and 0xBF1eD4, can be treated as two wash trading pairs, while 0xB7639A, 0x996665, 0xcc8990, and 0xBF1eD4 form a wash trading group."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "4. DATA COLLECTION",
63
+ "text": "In this section, we detail the methodology used to construct our dataset, which is derived from API access and open-source datasets."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "4.1. Event sequence",
69
+ "text": "The OpenSea API allows users to fetch metadata and core elements of NFTs (OpenSea, 2023 ###reference_b29###), e.g., sale/transfer events. The transfer events for each NFT reveal its circulation flow, while the sale events additionally provide specific trading information. We retrieve 8,717,031 transfer events and 3,830,141 sale events of 2,701,883 NFTs from Ethereum\u2019s 285 most popular collections. We start counting each NFT\u2019s event sequence from when it was minted (the 0th record), and the i-th record\u2019s fields are marked as timestamp_i, etc. Table 1 ###reference_### displays an example of the event sequence, i.e., since Alpha Shark (collection) #9 (tokenId) was minted, it was transferred to 0x1c2fd0 (to_0), then to 0x9164e3 (to_1), then bought by 0x99264d (to_2) for 2.9 (numToken_2) ETH (payToken_2) at 2022-06-15T16:56:53 (timestamp_2), while each ETH cost $1215.68 (usdToken_2). Also, isPrivate indicates whether the NFT is reserved for a specific buyer."
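The event-sequence record can be represented as below. A minimal sketch: field names follow Table 1, `frm` stands in for the Python reserved word `from`, and the type choices are ours:

```python
from dataclasses import dataclass

@dataclass
class NFTEvent:
    """One record of an NFT's event sequence (fields as in Table 1)."""
    timestamp: str    # e.g. '2022-06-15T16:56:53'
    frm: str          # sender address ('from' is a Python keyword)
    to: str           # receiver address
    numToken: float   # trading price in payment-token units (0 for transfers)
    payToken: str     # payment token symbol, e.g. 'ETH'
    usdToken: float   # USD price of one payment token at trade time
    isPrivate: bool   # whether the NFT was reserved for a specific buyer

# The Alpha Shark #9 sale event from Table 1
ev = NFTEvent("2022-06-15T16:56:53", "0x9164e3", "0x99264d",
              2.9, "ETH", 1215.68, False)
usd_value = ev.numToken * ev.usdToken  # trade value in USD
```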
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "4.2. Block TXN and ERC-20 token TXN",
75
+ "text": "XBlock (xbl, 2023 ###reference_b9###) is a data platform for the blockchain community. This data source (Zheng et al., 2020 ###reference_b45###) provides researchers with information about block TXNs and ERC-20 token TXNs. To enable the identification of ETH and ERC-20 token transfers, we filter out 184,008,844 block TXNs and 48,513,194 ERC-20 token TXNs in the field of the same from or to as that of each NFT sale/transfer event."
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "4.3. Historical market price data",
81
+ "text": "CoinGecko API enables users to obtain crypto prices, historical market data, etc (CoinGecko, 2023 ###reference_b17###). We adopt it to collect the historical market price of ERC-20 tokens. Specifically, we identify 233,618 ERC-20 token smart contracts involved in related ERC-20 token TXNs. Since not all ERC-20 tokens have a market price reference value, we finally collect 2,373,787 historical price records of 2,982 ERC-20 tokens."
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "4.4. Dataset overview",
87
+ "text": "Table 2 ###reference_### demonstrates the final composition of our dataset, and we make some preliminary calculations on it, e.g., the holder of each NFT changes on average 3.2 times each year, indicating that the NFT market does not enjoy high liquidity. Therefore, consecutive TXNs within a short period are abnormal and worth concern. Also, the selected block TXNs and ERC-20 token TXNs contain the ETH and ERC-20 token transfer information between the from and to of each NFT sale event, exposing that the related addresses engage in additional behaviors before/after the NFT TXNs, since a common deal on OpenSea involves no direct transfer of ETH or ERC-20 tokens between the two trading addresses, except for auctions, which use WETH as the payment token."
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "5. DETECTION",
93
+ "text": "###figure_3### As demonstrated in Figure 3 ###reference_###, by using NFTs\u2019 event sequences and selected block/ERC-20 token TXNs, we develop a series of heuristic algorithms to identify wash trading/traders."
94
+ },
95
+ {
96
+ "section_id": "5.1",
97
+ "parent_section_id": "5",
98
+ "section_name": "5.1. Approach on Round-trip Trading",
99
+ "text": ""
100
+ },
101
+ {
102
+ "section_id": "5.1.1",
103
+ "parent_section_id": "5.1",
104
+ "section_name": "5.1.1. Approach design",
105
+ "text": "Our preliminary exploration of Round-trip Trading suggests that the NFTs are purchased repeatedly within a short time interval. Based on its definition, we build a multi-edge directed graph for each NFT\u2019s event sequence, where nodes represent addresses and edges denote the circulation of the NFT. Suspicious activities related to Round-trip Trading are then detected through cycles. During the identification process, a few challenges exist:\nThe definition of a short time interval is ambiguous, since there is no empirical value for how far apart two TXNs must be to escape suspicion of wash trading. To address this, we design a time window segmentation method.\nWhether a transfer event contains trading information is uncertain, as a transfer event may essentially be a real NFT trade. To extend the detection, we include transfer events in our algorithm."
106
+ },
107
+ {
108
+ "section_id": "5.1.2",
109
+ "parent_section_id": "5.1",
110
+ "section_name": "5.1.2. Time window",
111
+ "text": "We define a period of consecutive TXNs with a short time interval as a non-overlapping time window, and we set the following rules to segment the time windows. We use ATI = (timestamp_end - timestamp_start) / num as the threshold, where ATI represents the average time interval between two adjacent events in each time window, timestamp_end is the timestamp of the last record in the window, timestamp_start is that of the first one, and num denotes the number of adjacent time intervals. If timestamp_i is the timestamp_end of the current time window and timestamp_{i+1} - timestamp_i \u2264 ATI, we consider that the (i+1)-th record should be included in the time window; otherwise, the (i+1)-th record is the first record of a new time window. Although we disregard the relationships between TXNs separated for a long time, we care about potential continuous Round-trip Trading. Thus, regardless of the ATI, the (i+1)-th record will be included if the to of the (i+1)-th record has ever occurred in the current time window. We initialize the ATI with an empirical value of 86,400 seconds (one day)444Choosing different values for the key parameters in our algorithm produces different results. We actually provide a value reference while guaranteeing the correctness of the results and allowing researchers to adjust them according to datasets of various sizes. See how we determine the value from here ###reference__Side_of_NFTs/blob/main/thresholdExplanation.md###.."
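The segmentation rule can be sketched as follows. A minimal sketch, assuming Unix-second timestamps, one day (86,400 s) as the initial ATI, and our reading of "has ever occurred in the window" as membership among all addresses seen so far:

```python
from dataclasses import dataclass

ONE_DAY = 86_400  # initial ATI (average time interval), in seconds

@dataclass
class Event:
    timestamp: int  # Unix seconds
    frm: str        # sender address ('from' is a Python keyword)
    to: str         # receiver address

def segment_time_windows(events):
    """Split one NFT's event sequence into non-overlapping time windows.

    A record joins the current window when its gap to the previous record
    is at most the window's average time interval (ATI), or when its
    receiver already appeared in the window (potential Round-trip Trading,
    included regardless of the ATI).
    """
    if not events:
        return []
    windows = [[events[0]]]
    for ev in events[1:]:
        win = windows[-1]
        num = len(win) - 1  # number of adjacent time intervals so far
        ati = (win[-1].timestamp - win[0].timestamp) / num if num else ONE_DAY
        seen = {a for e in win for a in (e.frm, e.to)}
        if ev.timestamp - win[-1].timestamp <= ati or ev.to in seen:
            win.append(ev)
        else:
            windows.append([ev])
    return windows
```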
112
+ },
113
+ {
114
+ "section_id": "5.1.3",
115
+ "parent_section_id": "5.1",
116
+ "section_name": "5.1.3. Cycles finding",
117
+ "text": "For each collection, we represent the graph constructed by event sequences in each time window of an NFT with a unique identifier G_ij, where i is the NFT\u2019s tokenId and j is the index of the time window, starting from 0. In G_ij, the nodes are the set of addresses, and the directed edges indicate the NFT\u2019s propagating direction. We record all edges going in the same direction into a dictionary D, where the key is a tuple of the start and end nodes of each edge, and the value is the list of edges with the same direction. We adopt Depth First Search (DFS) (Tarjan, 1972 ###reference_b39###) to find all cycles in G_ij and establish rules to confirm whether Round-trip Trading exists by investigating the potential paths from D."
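A minimal sketch of the dictionary D and the DFS cycle search. The deduplication trick (only visiting nodes that compare greater than the cycle's start) is our simplification for reporting each cycle once, not necessarily the paper's exact implementation:

```python
from collections import defaultdict

def build_edge_dict(events):
    """Dictionary D: group parallel edges by direction, (from, to) -> event indices."""
    d = defaultdict(list)
    for i, (frm, to) in enumerate(events):
        d[(frm, to)].append(i)
    return dict(d)

def find_cycles(edge_dict):
    """Enumerate simple directed cycles (as node sequences) via DFS.

    Parallel edges are collapsed here; edge_dict keeps their multiplicity
    for the later walk-counting step.
    """
    adj = defaultdict(set)
    for u, v in edge_dict:
        adj[u].add(v)
    cycles = []

    def dfs(start, node, path):
        for nxt in adj[node]:
            if nxt == start:
                cycles.append(path[:])
            elif nxt not in path and nxt > start:
                # only visit nodes "greater" than the start node, so each
                # cycle is reported exactly once (from its smallest node)
                dfs(start, nxt, path + [nxt])

    for v in sorted(adj):
        dfs(v, v, [v])
    return cycles
```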
118
+ },
119
+ {
120
+ "section_id": "5.1.4",
121
+ "parent_section_id": "5.1",
122
+ "section_name": "5.1.4. Wash trading confirmation",
123
+ "text": "If a cycle\u2019s D takes the form (v1, v2): [e1, e2]; (v2, v3): [e3]; (v3, v1): [e5, e6], the total number of suspicious walks for the cycle is 2 * 1 * 2 = 4. To prevent misjudgment and ensure stringent identification, each cycle is confirmed as Round-trip Trading behavior only when one of the following rules is met. 1) We consider a cycle to be Round-trip Trading when its number of repetitive walks is no less than the strict threshold, 10 * 10 = 100 (see footnote 4 ###reference_te4###), meaning at least ten back-and-forth tradings happen between two addresses; or 2) if all the events in one of the walks are sale events, the cycle is identified as Round-trip Trading."
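The walk count is the product of the parallel-edge multiplicities along the cycle, and both confirmation rules can be sketched as below (a sketch; `is_sale` maps an event index to whether that event is a sale):

```python
from math import prod

STRICT_THRESHOLD = 100  # 10 * 10: at least ten back-and-forth tradings

def cycle_pairs(cycle):
    """Directed node pairs along a cycle, closing back to the start node."""
    return list(zip(cycle, cycle[1:] + cycle[:1]))

def count_walks(cycle, edge_dict):
    """Distinct walks realizing the cycle: the product of the parallel-edge
    multiplicities along its directed edges."""
    return prod(len(edge_dict.get(p, [])) for p in cycle_pairs(cycle))

def is_round_trip(cycle, edge_dict, is_sale):
    """Rule 1: the walk count reaches the strict threshold.
    Rule 2: some walk consists of sale events only, which holds iff every
    edge of the cycle offers at least one sale event."""
    if count_walks(cycle, edge_dict) >= STRICT_THRESHOLD:
        return True
    return all(any(is_sale(i) for i in edge_dict.get(p, []))
               for p in cycle_pairs(cycle))
```

On the paper's example, D = {(v1, v2): [e1, e2], (v2, v3): [e3], (v3, v1): [e5, e6]} yields 2 * 1 * 2 = 4 walks.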
124
+ },
125
+ {
126
+ "section_id": "5.2",
127
+ "parent_section_id": "5",
128
+ "section_name": "5.2. Approach on Unprofitable Trading",
129
+ "text": "ETH and ERC-20 token transfers constitute additional significant evidence for distinguishing wash trading behavior. We establish three rules to detect each Unprofitable Trading behavior. 1) The from and to of the related ERC-20 token/block TXNs match those of the corresponding sale event. 2) Only block TXNs with null input data are considered, meaning that they contain pure ETH transfers without any invocation of smart contracts. Specifically, the time threshold is 20 minutes (see footnote 4 ###reference_te4###), i.e., only ETH transfers that occurred within 20 minutes before/after the sale event are included. 3) For ERC-20 token transfers, the time threshold is 80 minutes (see footnote 4 ###reference_te4###), i.e., only ERC-20 token transfers that occurred within 80 minutes before/after the sale event are included."
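The three rules can be sketched as a filter over the transfers surrounding a sale. This is an illustrative sketch: the dict field names are ours, and reading rule 1 as seller-to-buyer direction is our interpretation:

```python
ETH_WINDOW = 20 * 60    # seconds, for plain ETH transfers (null input data)
ERC20_WINDOW = 80 * 60  # seconds, for ERC-20 token transfers

def suspicious_transfers(sale, transfers):
    """Return value transfers that may fund or refund an NFT sale.

    `sale` and each transfer carry 'from', 'to', and 'timestamp'; transfers
    additionally carry a 'kind' of 'eth' or 'erc20'. A transfer is kept
    when it moves value from the NFT seller to the NFT buyer within the
    kind's time window around the sale event.
    """
    hits = []
    for t in transfers:
        window = ETH_WINDOW if t['kind'] == 'eth' else ERC20_WINDOW
        seller_to_buyer = t['from'] == sale['from'] and t['to'] == sale['to']
        if seller_to_buyer and abs(t['timestamp'] - sale['timestamp']) <= window:
            hits.append(t)
    return hits
```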
130
+ },
131
+ {
132
+ "section_id": "5.3",
133
+ "parent_section_id": "5",
134
+ "section_name": "5.3. Approach on Hidden Trading",
135
+ "text": "We use the isPrivate field of each record to indicate whether a trade is private. For each NFT\u2019s event sequence, each group of three or more continuous private trades is detected as a Hidden Trading behavior."
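Detecting runs of three or more consecutive private trades amounts to a linear scan over the event sequence; a minimal sketch:

```python
def hidden_trading_groups(events, min_run=3):
    """Find runs of `min_run` or more consecutive private sale events.

    `events` is a sequence of (index, is_private) pairs in chronological
    order; each returned group is the list of indices in one run.
    """
    groups, run = [], []
    for idx, is_private in events:
        if is_private:
            run.append(idx)
        else:
            if len(run) >= min_run:
                groups.append(run)
            run = []
    if len(run) >= min_run:  # flush a run ending at the sequence tail
        groups.append(run)
    return groups
```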
136
+ },
137
+ {
138
+ "section_id": "5.4",
139
+ "parent_section_id": "5",
140
+ "section_name": "5.4. Detection of wash trader",
141
+ "text": "The probability of addresses appearing simultaneously as a pair or a group provides the reference for wash trader detection. We utilize FP-Growth (Han et al., 2004 ###reference_b21###), an association-rule mining algorithm, on addresses related to Round-trip Trading, Unprofitable Trading, and Hidden Trading, to find the suspicious address pairs. We define the tuple R: (from, to) for Round-trip Trading, U: (from, to) for Unprofitable Trading, to represent the wash trading pairs. Considering the colluding relationship is not displayed by each sale event of Hidden Trading, we include every address of the continuous private trading, i.e., the tuple H: (address1, address2, address3, \u2026). Given a set of addresses, frequent pattern mining can find all the itemsets with a frequency greater than the support. The matrix is the final input for FP-Growth, where the elements for each row vector come from the tuples of R, U, or H. If the address pair/group\u2019s occurrence frequency exceeds the support, the address pair/group is detected as a wash trading pair/group. Also, we combine all wash trading pairs with at least one common address/TXN between each other as a wash trading group."
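The pair-mining step can be sketched as follows. The paper uses FP-Growth; for 2-itemsets, a plain co-occurrence counter returns the same frequent pairs and keeps this sketch dependency-free, so it stands in for FP-Growth here:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(rows, support=0.0005):
    """Flag address pairs whose co-occurrence count exceeds the support.

    `rows` is the mining matrix described above: one tuple of addresses
    per flagged event (from the R, U, or H tuples).
    """
    min_count = len(rows) * support
    counts = Counter()
    for row in rows:
        for pair in combinations(sorted(set(row)), 2):
            counts[pair] += 1
    return {p for p, c in counts.items() if c > min_count}

def merge_groups(pairs):
    """Union pairs sharing a common address into wash trading groups."""
    groups = []
    for pair in pairs:
        merged, rest = set(pair), []
        for g in groups:
            if g & merged:
                merged |= g
            else:
                rest.append(g)
        groups = rest + [merged]
    return groups
```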
142
+ },
143
+ {
144
+ "section_id": "6",
145
+ "parent_section_id": null,
146
+ "section_name": "6. EXPERIMENTAL RESULTS",
147
+ "text": "This section demonstrates the results of our identification methodology for NFT wash trading and wash traders."
148
+ },
149
+ {
150
+ "section_id": "6.1",
151
+ "parent_section_id": "6",
152
+ "section_name": "6.1. Results",
153
+ "text": ""
154
+ },
155
+ {
156
+ "section_id": "6.1.1",
157
+ "parent_section_id": "6.1",
158
+ "section_name": "6.1.1. Results overview",
159
+ "text": "We identify 824 transfer events and 5,330 sale events related to NFT wash trading, accounting for $8,857,070.41, which is around 0.12% of the total trading amount ($7,216,023,387.61). It suggests that the financial impact of NFT wash trading on popular collections is relatively small and overrated. However, as shown in Table 3 ###reference_###, which lists the top five collections (with their market value) ranked by the number of wash trading behaviors, OG:Crystal, despite its lagging market-value ranking, is wash traded the most, far exceeding the other collections. This indicates that the harm wash trading inflicts on the NFT market might not be fully exposed through trading volume, motivating us to explore broader aspects of its impact."
160
+ },
161
+ {
162
+ "section_id": "6.1.2",
163
+ "parent_section_id": "6.1",
164
+ "section_name": "6.1.2. Results of Round-trip Trading",
165
+ "text": "Our results show that 3,041 Round-trip Trading events from 2,948 time windows are detected, accounting for $6,453,831.51. We find that each NFT has around one wash-traded time window, which supports our time window division, i.e., continuous Round-trip Trading behaviors are detected together. For instance, 0x837E6f and 0xABE3aE wash traded Bean #16738 in multiple time periods, but eventually, the events involved were included in a single time window containing 14 TXNs."
166
+ },
167
+ {
168
+ "section_id": "6.1.3",
169
+ "parent_section_id": "6.1",
170
+ "section_name": "6.1.3. Results of Unprofitable Trading",
171
+ "text": "The results include two parts. 1) We identify 2,238 sale events with suspicious ETH transfers, accounting for $2,247,915.26. 59.30% of sale events occur when the NFT sellers transfer ETH to the NFT buyers to fund the purchase activities 20 minutes before the trading, while the remaining events happen as the NFT sellers return ETH to the NFT buyers 20 minutes after. 2) There are 768 sale events with suspicious ERC-20 token transfers, accounting for $1,709,390.72. Among them, the most used ERC-20 token for Unprofitable Trading is WETH, with 99.3%."
172
+ },
173
+ {
174
+ "section_id": "6.1.4",
175
+ "parent_section_id": "6.1",
176
+ "section_name": "6.1.4. Results of Hidden Trading",
177
+ "text": "We detect 968 groups with three or more continuous private trades, including 4,257 sale events and accounting for $1,556,267.23. Besides, NFTs\u2019 prices rise in 62.29% of Hidden Trading cases, and 70.81% of those even maintain an upward trend across all private trades, e.g., the price of Bored Ape Yacht Club #5332 goes up from 3.55, 5.63, 6.5432 to 8.95 ETH, all through Hidden Trading. However, as we search for segments of three or more adjacent sale events with continuously increasing prices in each NFT\u2019s event sequence, we find that less than 5% of the NFTs meet the requirement. It suggests wash traders may adopt Hidden Trading to inflate the prices of NFTs.\n###figure_4###"
178
+ },
179
+ {
180
+ "section_id": "6.1.5",
181
+ "parent_section_id": "6.1",
182
+ "section_name": "6.1.5. Results of wash trading pairs/groups",
183
+ "text": "We identify 370 wash trading pairs and 29 wash trading groups, indicating these addresses occur at least 24,311 * 0.0005 = 12 times, where 24,311 is the number of row vectors in our FP-Growth matrix, and 0.0005 is the support. To observe whether our results present obvious evidence of suspicious activities for users, we visualize all the related events of addresses from each wash trading group. As shown in Figure 4 ###reference_1### (the partial result of the visualization), each node marked in red represents an address from a group, and each directed edge between nodes represents the propagation of the NFT. It demonstrates obvious conspiracy in the wash trading groups."
184
+ },
185
+ {
186
+ "section_id": "6.1.6",
187
+ "parent_section_id": "6.1",
188
+ "section_name": "6.1.6. Trend of wash trading events.",
189
+ "text": "We investigate the trend of sale/transfer events related to NFT wash trading. In particular, we exclude OG:Crystal to avoid its extremely leading role in our trend assessment. Figure 5 ###reference_1### shows the change in the number of wash trading events in our results from 2021-06-28T15:00:34Z to 2022-06-20T18:28:10Z. The number of events grew and then stabilized after June 2021, reaching its peak on January 11, 2022. This indicates that wash trading has been employed deliberately since June 2021. The peak is mostly attributable to the launch of LooksRare (off, 2023b ###reference_b6###), an NFT marketplace with an incentive reward plan that encourages frequent TXNs (LooksRare, 2023 ###reference_b28###) and thus spawns a bunch of wash trading behaviors (Serneels, 2023 ###reference_b35###; La Morgia et al., 2022 ###reference_b27###; Cho et al., 2023 ###reference_b14###) (see 7.1 ###reference_### for details).\n###figure_5### ###figure_6###"
+ },
+ {
+ "section_id": "6.1.7",
+ "parent_section_id": "6.1",
+ "section_name": "6.1.7. Market liquidity.",
+ "text": "We then study the impact of NFT wash trading on market liquidity from the perspective of time windows. The interval between adjacent time windows provides evidence that Round-trip Trading stimulates NFT sale events: it takes an average of 20.80 days for a wash-traded time window to transition to the following non-wash-traded one. In contrast, as shown in Figure 6 ###reference_1###, the transition takes six days longer on average across the 1,725,783 cases where the current time window is non-wash-traded, suggesting that wash trading potentially accelerates NFT liquidity."
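The comparison above boils down to averaging transition intervals, split by whether the current window was wash-traded. A toy sketch with made-up gaps that merely mirror the reported direction:

```python
from statistics import mean

def mean_transition_days(transitions):
    """transitions: (gap_in_days, current_window_was_wash_traded) pairs.
    Returns the average gap after wash-traded windows and after others."""
    washed = [g for g, w in transitions if w]
    others = [g for g, w in transitions if not w]
    return mean(washed), mean(others)

# Made-up gaps mirroring the section: ~20.8 days after wash-traded windows
# versus ~26.8 days otherwise.
avg_washed, avg_others = mean_transition_days(
    [(20.0, True), (21.6, True), (26.0, False), (27.6, False)])
assert avg_washed < avg_others
```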
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "7. FINDINGS FROM MULTIPLE DIMENSIONS",
+ "text": "In this section, we provide an in-depth analysis of six key aspects related to NFT wash trading, including marketplace design, profitability, NFT project design, payment token, user behavior, and the NFT ecosystem."
+ },
+ {
+ "section_id": "7.1",
+ "parent_section_id": "7",
+ "section_name": "7.1. Marketplace design",
+ "text": "As shown in Figure 5 ###reference_1###, the trend of NFT wash trading events peaks on January 11, 2022. Noticeably, LooksRare was released one day before. We investigate Meebits, the collection with the most wash trading events at the peak, and find that nearly 93% of its TXNs occur on LooksRare, suggesting that the wash trading surge may have been led by the launch of LooksRare. One of LooksRare\u2019s policies is to reward users who trade on the platform with LOOKS (its native token). For example, Bored Ape Yacht Club #4937 was sold at 91 ETH, and both the seller and buyer received around $3.5K worth of LOOKS. According to our experimental results, the NFTs from Meebits associated with the wash trading peak were all sold for over 20 ETH, driven by the marketplace incentive policies.\nFinding 1: Policies implemented by NFT marketplaces can encourage wash trading behaviors."
+ },
+ {
+ "section_id": "7.2",
+ "parent_section_id": "7",
+ "section_name": "7.2. Profitability",
+ "text": "Existing research has neither explored whether wash traders can profit from benign users nor measured the potential losses of victims under the influence of wash trading. To fill this gap, we conduct the following analysis.\nWe investigate, through the results of Round-trip Trading, whether wash traders can profit from benign users without considering incentive rewards from NFT marketplaces. In other words, we assess whether wash traders can offset the service fees (charged by NFT marketplaces for their services) by reselling the NFTs. Our analysis focuses on the scenario where a non-wash-traded time window follows a wash-traded time window and the total service fees from round-trip sale events in the former window are less than or equal to the price of the first sale event in the latter. To strengthen our analysis, we adopt OpenSea\u2019s service fee rate (2.5%), the highest among major NFT marketplaces. Our analysis identifies 1,752 wash-traded time windows that satisfy this scenario. Among them, wash traders achieve profitability in around 60% of cases when transitioning to the next non-wash-traded time window. Figure 8 ###reference_### shows the distribution of all wash-traded time windows regarding gain or loss; most are concentrated around 0, indicating that most wash traders do not suffer significant losses after deducting the service fees. For example, in the case of OG:Crystal #895, two traders (0x89a09c and 0x0B7742) traded the NFT six times at prices ranging from 0.06 ETH to 0.09 ETH during the first time window. The NFT was finally sold for 1 ETH in the first sale event of the second time window, resulting in a profit of 0.99 ETH for the wash traders.\n###figure_7### ###figure_8### In addition, to explore the resale prices of NFTs after they are wash traded, we compare the price of the final sale event in a wash-traded time window with that of the first one in the next non-wash-traded time window. The result demonstrates that 48.62% of wash traders resell the NFT at a lower price when transitioning into the next time window, whereas this percentage is 36.86% for ordinary users. This implies that wash traders tend to list NFTs at lower prices to accelerate sales. Moreover, to measure the impact on benign users in the non-wash-traded time windows that follow wash-traded ones, we compare the subsequent trading prices paid by users with the last trading price in the wash-traded time window. Across all 1,752 cases, there are up to seven TXNs after the wash-traded time window. Figure 8 ###reference_### shows the average difference between the trading prices. Subsequent users sell their NFTs at higher prices as the number of TXNs increases, indicating that wash trading behaviors help raise the overall price trend. If we treat the difference as each user\u2019s loss, this portion of Round-trip Trading causes a total loss of at least $3,965,247.13 for users.\nFinding 2: Most wash traders do not suffer significant losses, and some even profit from reselling NFTs to benign users after conducting Round-trip Trading. They tend to list NFTs at lower prices to attract more victims and accelerate sales."
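The fee-offset check can be sketched as simple arithmetic, assuming a flat 2.5% service fee on each round-trip sale and ignoring gas costs and incentive rewards (the helper name is ours):

```python
def resale_net_gain(round_trip_prices, resale_price, fee_rate=0.025):
    """Net gain for a wash-trading pair: the cost of the round-trip sales
    between colluding addresses is the marketplace service fee on each
    sale; the gain is the eventual resale to a benign user."""
    total_fees = sum(p * fee_rate for p in round_trip_prices)
    return resale_price - total_fees

# OG:Crystal #895-style scenario: six wash sales between 0.06 and 0.09 ETH,
# then a 1 ETH sale to a benign user; the accumulated fees are easily offset.
gain = resale_net_gain([0.06, 0.07, 0.075, 0.08, 0.085, 0.09], 1.0)
assert gain > 0.98
```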
+ },
+ {
+ "section_id": "7.3",
+ "parent_section_id": "7",
+ "section_name": "7.3. NFT project design",
+ "text": "###figure_9### As shown in Table 3 ###reference_###, OG:Crystal is the collection with the most wash trading behaviors, totaling 9,412 instances and accounting for $1,031,442.16. However, the main purpose of wash trading on OG:Crystal is not to inflate trading volume but to enhance the appearance of the NFTs. The design and working mechanism of OG:Crystal are responsible for this (ogw, 2023 ###reference_b7###; La Morgia et al., [n.\u2009d.] ###reference_b26###): each OG:Crystal transforms and evolves with each purchase made by a new collector, and the appearance of each piece is generated based on the properties of the owner\u2019s crypto wallet. Specifically, each OG:Crystal will be \u2018locked\u2019, meaning that new TXNs will no longer affect its shape, structure, or rarity, two months after the initial sale date or after it has grown by seven generations through multiple trades. For example, Figure 9 ###reference_1### shows the evolution of OG:Crystal #1861 from the 5th generation to the 7th generation through Round-trip Trading between 0x9FFFB8 and 0xB05Cb1. The overall results on OG:Crystal further corroborate this working mechanism: 1) wash traders of OG:Crystal generally appear in pairs, and the average number of walks formed by each NFT\u2019s TXNs is nearly 35, meaning that a pair may resell the NFT to each other around six times; 2) in 95.73% of cases, wash trading ceases after the NFT is \u2018locked\u2019. Both findings suggest that the design of OG:Crystal can lead to the wash trading problem, turning the project\u2019s vision of creating a public work of art for all users into personal wash trading activity.\nOn the contrary, Kaiju Kingz is a collection that exhibits minimal wash trading behavior, owing to a deliberate feature of its design (KaijuKingz, 2023 ###reference_b25###). Owners of Kaiju Kingz\u2019s NFTs generate 5 RWASTE per day, the circulating token for Kaiju Kingz used for breeding, and can create a so-called Kaiju Kingz baby (a new NFT) with 750 RWASTE. In this sense, owners are more inclined to hold their NFTs for a longer period of time.\nMoreover, we manually investigate the design mechanisms of all collections with wash trading but find no other factors that drive TXNs and potentially facilitate wash trading. This illustrates that OG:Crystal is a unique presence in the NFT market and deserves the attention of future NFT designers.\nFinding 3: The operational mechanism of an NFT project can be designed in ways that encourage wash trading activities."
+ },
+ {
+ "section_id": "7.4",
+ "parent_section_id": "7",
+ "section_name": "7.4. Payment token",
+ "text": "We do not find a wide variety of ERC-20 tokens involved in Unprofitable Trading in our results; the main circulating ERC-20 token involved is WETH, accounting for 99.3%. WETH enables users to submit pre-authorized bids that will be automatically fulfilled later without the bidder\u2019s further permission. In our results, WETH mainly appears in two situations. 1) NFT holders invoke the smart contract, Wrapped Ether, to transfer WETH to NFT bidders, and afterward, the NFT holders accept the bidders\u2019 offers to obtain the WETH back through the auction. For example, Table 4 ###reference_### demonstrates how 0xcF5e38 and 0xe67753 execute Unprofitable Trading: before the bidder 0xe67753 made an offer of 0.1 WETH to buy Chibi Dino #5723, 0xcF5e38 transferred a total of 0.1 + 0.07 + 0.3 + 0.47 = 0.94 WETH to 0xe67753. Two minutes later, 0xcF5e38 accepted the offer. This is a completely self-directed purchase that contributes false trading volume. 2) The second situation arises when another NFT sale event, with from and to reversed relative to the current one, occurred 80 minutes earlier. The WETH was transferred from from to to through the previous bidding auction and is transferred back through the current one. This is Round-trip Trading with multiple bids won, e.g., 0xeB1543 and 0xB893AE accept each other\u2019s offers on OG:Crystal #5609 and OG:Crystal #6255, respectively, within one minute, and their offer prices are identical, i.e., 0.13 WETH.\nDuring the auction process, we notice even more obvious evidence of address collusion involving WETH: some auctions are essentially near-free transfers of NFTs, making the NFT\u2019s historical price curve sag severely and creating a huge deviation from the market reference value. For example, the accepted offer price for dotdotdot #2663 on December 30, 2021, is less than 0.0001 WETH, while its previous trading price was 3 ETH. Such abnormal behavior suggests that the NFT holder and bidder know each other; indeed, both of their user names start with blitmonk. To quantify this observation, we define the i-th PF to represent the price fluctuation between an NFT\u2019s adjacent sale events, counting from 0, i.e., the 0th PF is the price fluctuation between an NFT\u2019s first and second sale events. To capture a severe price sag, we require that both the i-th PF and the (i+1)-th PF be greater than 1,000. The results show 231 such cases, accounting for only $366.12. This suggests that this suspicious auction pattern, overlooked by previous works, has been conducted on a small scale, and the collusive behavior of the addresses involved deserves further research.\nFinding 4: Among the payment tokens used in NFT wash trading, WETH deserves the most attention."
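The severe-sag filter can be sketched as follows; here PF is taken as the ratio of the larger to the smaller of two adjacent sale prices, which is our reading of the definition above rather than a verbatim reproduction:

```python
def severe_price_sags(prices, threshold=1_000):
    """Return the indices i where the price collapses by more than
    `threshold`x between sales i and i+1 and rebounds by more than
    `threshold`x between sales i+1 and i+2 (the 'severe sag' pattern)."""
    def pf(a, b):
        lo, hi = sorted((a, b))
        return hi / lo if lo > 0 else float("inf")
    return [i for i in range(len(prices) - 2)
            if pf(prices[i], prices[i + 1]) > threshold
            and pf(prices[i + 1], prices[i + 2]) > threshold]

# The dotdotdot #2663 pattern: 3 ETH, then under 0.0001, then back near market.
assert severe_price_sags([3.0, 0.0001, 2.8]) == [0]
```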
+ },
+ {
+ "section_id": "7.5",
+ "parent_section_id": "7",
+ "section_name": "7.5. User behavior",
+ "text": "###figure_10### In Unprofitable Trading, sellers collude with buyers to create unprofitable TXNs. For example, just 4 seconds before the sale event 0x1a4521, 0x4EDE98 completed a transfer (0.96 ETH) fully sufficient to support 0xE1f14d in buying Animeta #4302 (0.435 ETH) in a seemingly normal TXN 0x536fa8. To understand the extent to which ETH transfers support each sale event, we define A_in as the total amount received by sellers, A_out as the total amount returned by sellers, and P as the related NFT trading price. We then adopt the Pearson correlation coefficient (Cohen et al., 2009 ###reference_b15###) to measure the strength of the linear relationship between (A_in, P) and (A_out, P). The results are 0.7999 for (A_in, P) and 0.9840 for (A_out, P), indicating that the amount of each ETH transfer closely tracks the NFT trading price. Furthermore, we count the number of ETH transfer events detected at a finer granularity (i.e., 60 seconds). Figure 10 ###reference_.1### shows that the closer a time point is to a sale event (before or after), the more ETH transfer behaviors occur; wash traders make ETH transfers most frequently between two and four minutes before/after a sale event, after which the number gradually decreases. The high correlations in both price and time demonstrate that these ETH transfers directly serve the NFT TXNs.\nFinding 5: Users can adopt ETH transfers with the intention of directly servicing wash trading."
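The Pearson correlation coefficient used above is standard; a self-contained version with toy numbers (not the paper's data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear series correlate at +1 / -1.
assert abs(pearson([1, 2, 3, 4], [2, 4, 6, 8]) - 1.0) < 1e-9
assert abs(pearson([1, 2, 3], [6, 4, 2]) + 1.0) < 1e-9
```

Values near 1, such as the 0.9840 reported above, indicate a strong linear relationship between transfer amounts and trading prices.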
+ },
+ {
+ "section_id": "7.6",
+ "parent_section_id": "7",
+ "section_name": "7.6. NFT ecosystem",
+ "text": "To better understand the impact of wash trading pairs/groups on NFT wash trading, we identify all 10,594 TXNs related to these address pairs and exclude their trading volume. For instance, if a TXN in Hidden Trading is flagged, we exclude the trading volume of all its consecutive private trades. As shown in Table 5 ###reference_###, the results show a decrease of around 30% in wash trading amount for Round-trip and Hidden Trading, while Unprofitable Trading experiences a far larger decline, revealing the prevalence of these wash traders in Unprofitable Trading. Interestingly, the addresses involved in wash trading pairs/groups account for only 0.082% of the total, meaning that a small group of users can cause considerable wash trading, and a sizable share of wash trading behaviors could be eliminated by identifying and flagging them. It suggests that, to curb wash trading and alert users, NFT marketplaces need not keep all trading addresses under governance but can instead mark a small proportion of malicious accounts in event sequences.\nFinding 6: A small proportion of users can cause a considerable wash trading amount."
+ },
+ {
+ "section_id": "8",
+ "parent_section_id": null,
+ "section_name": "8. DISCUSSION",
+ "text": ""
+ },
+ {
+ "section_id": "8.1",
+ "parent_section_id": "8",
+ "section_name": "8.1. Threats to validity",
+ "text": ""
+ },
+ {
+ "section_id": "8.1.1",
+ "parent_section_id": "8.1",
+ "section_name": "8.1.1. Internal validity",
+ "text": "1) Adjusting the support for FP-Growth may affect the experimental results; e.g., the number of wash trading pairs/groups would increase if it were set loosely. Moreover, the support should be reconsidered for different datasets. To ensure the reliability of our results, we set a strict value, i.e., address pairs are considered suspicious only when they participate in wash trading at least twelve times. 2) Our method may miss unknown wash trading patterns. We have not yet investigated more complex trading networks, e.g., incorporating internal TXNs to understand the intention of NFT traders from a more fundamental perspective and thus reveal more NFT wash trading patterns. Nevertheless, ours remains the most comprehensive study of NFT wash trading to date."
+ },
+ {
+ "section_id": "8.1.2",
+ "parent_section_id": "8.1",
+ "section_name": "8.1.2. External validity",
+ "text": "The current tools (e.g., the OpenSea API) for collecting NFT events have shortcomings, such as mislabeling transfer/sale/minted events. Nevertheless, we address the related issues and release a validated dataset for researchers.555We have open-sourced our data, both before and after processing, at https://drive.google.com/drive/folders/1bddfHZgk3BSmDUN0aTAub7mJ-_1_36ff ###reference_dfHZgk3BSmDUN0aTAub7mJ-_1_36ff###, which includes transfer and sale events for 285 NFT collections."
+ },
+ {
+ "section_id": "8.2",
+ "parent_section_id": "8",
+ "section_name": "8.2. Implications for NFT Marketplace",
+ "text": "We find no restrictions or warnings in place to limit wash trading on major NFT marketplaces. For instance, OpenSea only reports NFTs involved in suspicious activities and compromised accounts but does not provide explanations for these reports. Alarmingly, when we manually investigate the NFTs involved in wash trading, we find no alerting information at all. Interestingly, OpenSea has altered the way it records private trading in NFT event sequences, switching from the generic \u2018Sale\u2019 to \u2018Sale - Reserved\u2019. This positive change offers NFT participants more insight into the nature of a trade. Our algorithm has the potential to detect wash trading activities promptly by analyzing the historical trail of an NFT. It would benefit the overall health of the NFT ecosystem if our technology were integrated into the front end of major NFT marketplaces to send alerts about wash trading when players purchase NFTs."
+ },
+ {
+ "section_id": "9",
+ "parent_section_id": null,
+ "section_name": "9. RELATED WORK",
+ "text": ""
+ },
+ {
+ "section_id": "9.1",
+ "parent_section_id": "9",
+ "section_name": "9.1. Data collection for NFT research.",
+ "text": "NFT researchers mainly collect data through external services. Von Wachter et al. (von Wachter et al., 2022 ###reference_b40###) utilized the OpenSea API to collect NFTs\u2019 events and adopted the CoinGecko API to retrieve historical USD prices for crypto. White et al. (White et al., 2022 ###reference_b43###) utilized a moving window of Unix timestamps to retrieve 5,252,252 sale events via the OpenSea API. Apart from API access, Das et al. (Das et al., 2021 ###reference_b19###) collected additional data through web scraping."
+ },
+ {
+ "section_id": "9.2",
+ "parent_section_id": "9",
+ "section_name": "9.2. Detection for NFT wash trading.",
+ "text": "Several studies have been published on detecting wash trading behavior for crypto assets. For Bitcoin, Aloosh et al. (Aloosh and Li, 2019 ###reference_b10###) provided direct evidence of \u2018fake volume\u2019 in cryptocurrency exchanges through trading records leaked by hackers. For NFTs, current researchers pay more attention to NFT TXN networks than to distinguishing different types of wash trading, such as evidence from block/ERC-20 TXNs and Private Trading. For example, von Wachter et al. (von Wachter et al., 2022 ###reference_b40###) presented methods to identify graph patterns of wash trading, indicating that 2.04% of sale events are suspicious. Building on that, La Morgia et al. (La Morgia et al., [n.\u2009d.] ###reference_b26###) explored more suspicious graph patterns. Das et al. (Das et al., 2021 ###reference_b19###) focused on SCCs and WCCs to discover wash trading behavior. Wen et al. (Wen et al., 2023 ###reference_b42###) provided a novel visualization method for identification. While La Morgia et al. (La Morgia et al., [n.\u2009d.] ###reference_b26###) analyzed the profitability of wash trading, we are the first to provide a broader discussion of the forms and impacts of NFT wash trading."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1\" style=\"font-size:80%;\">timestamp</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1\" style=\"font-size:80%;\">collection</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.3.1\" style=\"font-size:80%;\">tokenId</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.4.1\" style=\"font-size:80%;\">from</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.5.1\" style=\"font-size:80%;\">to</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.6.1\" style=\"font-size:80%;\">type</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.7.1\" style=\"font-size:80%;\">isPrivate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.8.1\" style=\"font-size:80%;\">payToken</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.9.1\" 
style=\"font-size:80%;\">numToken</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.10.1\" style=\"font-size:80%;\">usdToken</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.1\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.1.1\" style=\"font-size:80%;\">2021-12-19T01:00:08</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.2\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.2.1\" style=\"font-size:80%;\">Alpha Shark</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.3\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.3.1\" style=\"font-size:80%;\">9</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.4\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.4.1\" style=\"font-size:80%;\">0x000000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.5\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.5.1\" style=\"font-size:80%;\">0x1c2fd0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.6\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.6.1\" style=\"font-size:80%;\">transfer</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.7\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.7.1\" style=\"font-size:80%;\">NaN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.8\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.8.1\" style=\"font-size:80%;\">NaN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.9\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.9.1\" style=\"font-size:80%;\">NaN</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.2.2.10\"><span class=\"ltx_text\" id=\"S4.T1.1.2.2.10.1\" style=\"font-size:80%;\">NaN</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.1\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.1.1\" style=\"font-size:80%;\">2021-12-21T15:59:40</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.2\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.2.1\" style=\"font-size:80%;\">Alpha Shark</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.3\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.3.1\" style=\"font-size:80%;\">9</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.4\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.4.1\" style=\"font-size:80%;\">0x1c2fd0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.5\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.5.1\" style=\"font-size:80%;\">0x9164e3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.6\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.6.1\" style=\"font-size:80%;\">transfer</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.7\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.7.1\" style=\"font-size:80%;\">NaN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.8\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.8.1\" style=\"font-size:80%;\">NaN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.9\"><span class=\"ltx_text\" id=\"S4.T1.1.3.3.9.1\" style=\"font-size:80%;\">NaN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.3.3.10\"><span class=\"ltx_text\" 
id=\"S4.T1.1.3.3.10.1\" style=\"font-size:80%;\">NaN</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.1\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.1.1\" style=\"font-size:80%;\">2022-06-15T16:56:53</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.2\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.2.1\" style=\"font-size:80%;\">Alpha Shark</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.3\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.3.1\" style=\"font-size:80%;\">9</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.4\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.4.1\" style=\"font-size:80%;\">0x9164e3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.5\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.5.1\" style=\"font-size:80%;\">0x99264d</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.6\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.6.1\" style=\"font-size:80%;\">sale</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.7\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.7.1\" style=\"font-size:80%;\">FALSE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.8\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.8.1\" style=\"font-size:80%;\">ETH</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.4.4.9\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.9.1\" style=\"font-size:80%;\">2.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S4.T1.1.4.4.10\"><span class=\"ltx_text\" id=\"S4.T1.1.4.4.10.1\" style=\"font-size:80%;\">1215.68</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>An example of event sequence: <span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.6.1\">Alpha Shark #9</span>\u2019s event sequence.</figcaption>\n</figure>",
+ "capture": "Table 1. An example of event sequence: Alpha Shark #9\u2019s event sequence."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:433.6pt;height:179.6pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-1.7pt,0.7pt) scale(0.992371431755214,0.992371431755214) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_rr ltx_border_tt\" id=\"S4.T2.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.1.1\" style=\"font-size:90%;\">Source</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_rr ltx_border_tt\" id=\"S4.T2.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.2.1\" style=\"font-size:90%;\">No. of</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.3.1\" style=\"font-size:90%;\">Statistic</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S4.T2.1.1.2.1.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1.1.1\" style=\"font-size:90%;\">OpenSea API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S4.T2.1.1.2.1.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1.2.1\" style=\"font-size:90%;\">collection</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1.3.1\" style=\"font-size:90%;\">285</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.3.2.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.3.2.1.1\" style=\"font-size:90%;\">OpenSea 
API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.3.2.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.3.2.2.1\" style=\"font-size:90%;\">NFT</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.3.2.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.3.2.3.1\" style=\"font-size:90%;\">2,701,883</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.4.3.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.4.3.1.1\" style=\"font-size:90%;\">OpenSea API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.4.3.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.4.3.2.1\" style=\"font-size:90%;\">address</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.4.3.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.4.3.3.1\" style=\"font-size:90%;\">902,571</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.5.4.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.5.4.1.1\" style=\"font-size:90%;\">OpenSea API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.5.4.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.5.4.2.1\" style=\"font-size:90%;\">sale event</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.5.4.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.5.4.3.1\" style=\"font-size:90%;\">3,830,141</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.6.5.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.1.1\" style=\"font-size:90%;\">OpenSea API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.6.5.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.2.1\" style=\"font-size:90%;\">transfer event</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.6.5.3\"><span class=\"ltx_text\" 
id=\"S4.T2.1.1.6.5.3.1\" style=\"font-size:90%;\">8,717,031</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.7.6.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.7.6.1.1\" style=\"font-size:90%;\">XBlock</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.7.6.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.7.6.2.1\" style=\"font-size:90%;\">selected block TXN</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.7.6.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.7.6.3.1\" style=\"font-size:90%;\">184,008,844</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.8.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.8.7.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.8.7.1.1\" style=\"font-size:90%;\">XBlock</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.8.7.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.8.7.2.1\" style=\"font-size:90%;\">selected ERC-20 token TXN</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.8.7.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.8.7.3.1\" style=\"font-size:90%;\">48,513,194</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.9.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.9.8.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.9.8.1.1\" style=\"font-size:90%;\">CoinGecko API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T2.1.1.9.8.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.9.8.2.1\" style=\"font-size:90%;\">ERC-20 token</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T2.1.1.9.8.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.9.8.3.1\" style=\"font-size:90%;\">2,982</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.10.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_rr\" id=\"S4.T2.1.1.10.9.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.10.9.1.1\" 
style=\"font-size:90%;\">CoinGecko API</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_rr\" id=\"S4.T2.1.1.10.9.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.10.9.2.1\" style=\"font-size:90%;\">historical price record</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.10.9.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.10.9.3.1\" style=\"font-size:90%;\">2,373,787</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>The final look of our dataset applied to the identification algorithm.</figcaption>\n</figure>",
296
+ "capture": "Table 2. The final look of our dataset applied to the identification algorithm."
297
+ },
298
+ "3": {
299
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T3.1\" style=\"width:433.6pt;height:151.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(62.2pt,-21.7pt) scale(1.40241610971426,1.40241610971426) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1.1.1\">Collection</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1.2.1\">No. of behaviors</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1.3.1\">Rank of Market value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.2.1.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T3.1.1.2.1.1.1\">OG:Crystal</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.2.1.2\">9412</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S6.T3.1.1.2.1.3\">204th</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S6.T3.1.1.3.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T3.1.1.3.2.1.1\">Apes R Us</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.3.2.2\">99</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T3.1.1.3.2.3\">237th</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" 
<span class=\"ltx_text ltx_font_italic\" id=\"S6.T3.1.1.4.3.1.1\">Meebits</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.4.3.2\">94</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T3.1.1.4.3.3\">9th</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S6.T3.1.1.5.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T3.1.1.5.4.1.1\">Bored Ape Yacht Club</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.5.4.2\">57</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T3.1.1.5.4.3\">2nd</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S6.T3.1.1.6.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T3.1.1.6.5.1.1\">hashmasks</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T3.1.1.6.5.2\">45</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S6.T3.1.1.6.5.3\">26th</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3. </span>Top five collections by the number of wash trading behaviors</figcaption>\n</figure>",
300
+ "capture": "Table 3. Top five collections by the number of wash trading behaviors"
301
+ },
302
+ "4": {
303
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T4\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S7.T4.4\" style=\"width:433.6pt;height:178.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(64.1pt,-26.5pt) scale(1.41989157893269,1.41989157893269) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S7.T4.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T4.4.4.5.1\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S7.T4.4.4.5.1.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.4.4.5.1.1.1\">time (+UTC)</span></th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S7.T4.4.4.5.1.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.4.4.5.1.2.1\">event</span></th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.4.4.5.1.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T4.4.4.5.1.3.1\">value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T4.1.1.1\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_t\" id=\"S7.T4.1.1.1.2\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.1.1.1.2.1\" style=\"background-color:#EFEFEF;\">Aug-26-2021 05:46 AM</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_t\" id=\"S7.T4.1.1.1.1\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.1.1.1.1.1\" style=\"background-color:#EFEFEF;\">WETH transfer: 0xcF5e38 0xe67753</span></td>\n<td class=\"ltx_td ltx_nopad_l 
ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S7.T4.1.1.1.3\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.1.1.1.3.1\" style=\"background-color:#EFEFEF;\">0.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.2.2.2\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.2.2.2.2\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.2.2.2.2.1\" style=\"background-color:#EFEFEF;\">Aug-26-2021 05:49 AM</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.2.2.2.1\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.2.2.2.1.1\" style=\"background-color:#EFEFEF;\">WETH transfer: 0xcF5e38 0xe67753</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left\" id=\"S7.T4.2.2.2.3\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.2.2.2.3.1\" style=\"background-color:#EFEFEF;\">0.07</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.3.3.3\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.3.3.3.2\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.3.3.3.2.1\" style=\"background-color:#EFEFEF;\">Aug-26-2021 05:50 AM</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.3.3.3.1\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.3.3.3.1.1\" style=\"background-color:#EFEFEF;\">WETH transfer: 0xcF5e38 0xe67753</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left\" id=\"S7.T4.3.3.3.3\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.3.3.3.3.1\" 
style=\"background-color:#EFEFEF;\">0.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.4.4.4\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.4.4.4.2\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.4.2.1\" style=\"background-color:#EFEFEF;\">Aug-26-2021 05:52 AM</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.4.4.4.1\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.4.1.1\" style=\"background-color:#EFEFEF;\">WETH transfer: 0xcF5e38 0xe67753</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left\" id=\"S7.T4.4.4.4.3\" style=\"background-color:#EFEFEF;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.4.3.1\" style=\"background-color:#EFEFEF;\">0.47</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.4.4.6.1\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.4.4.6.1.1\" style=\"background-color:#C0C0C0;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.6.1.1.1\" style=\"background-color:#C0C0C0;\">Aug-26-2021 05:56 AM</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S7.T4.4.4.6.1.2\" style=\"background-color:#C0C0C0;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.6.1.2.1\" style=\"background-color:#C0C0C0;\">Offer made: 0xe67753</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left\" id=\"S7.T4.4.4.6.1.3\" style=\"background-color:#C0C0C0;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.6.1.3.1\" style=\"background-color:#C0C0C0;\">0.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.4.4.7.2\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_bb ltx_border_r\" 
id=\"S7.T4.4.4.7.2.1\" style=\"background-color:#C0C0C0;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.7.2.1.1\" style=\"background-color:#C0C0C0;\">Aug-26-2021 05:58 AM</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_bb ltx_border_r\" id=\"S7.T4.4.4.7.2.2\" style=\"background-color:#C0C0C0;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.7.2.2.1\" style=\"background-color:#C0C0C0;\">Offer accepted: 0xcF5e38</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S7.T4.4.4.7.2.3\" style=\"background-color:#C0C0C0;padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S7.T4.4.4.7.2.3.1\" style=\"background-color:#C0C0C0;\">0.1</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 4. </span>An example of Unprofitable Trading using ERC-20 token: <span class=\"ltx_text ltx_font_italic\" id=\"S7.T4.6.1\">Chibi Dino #5723</span></figcaption>\n</figure>",
304
+ "capture": "Table 4. An example of Unprofitable Trading using ERC-20 token: Chibi Dino #5723"
305
+ },
306
+ "5": {
307
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S7.T5.1\" style=\"width:433.6pt;height:97.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(56.9pt,-12.8pt) scale(1.35564859704158,1.35564859704158) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S7.T5.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T5.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S7.T5.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.1.1.1.1.1.1\">form</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S7.T5.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.1.1.1.1.2.1\">amount / before</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S7.T5.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.1.1.1.1.3.1\">amount / after</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S7.T5.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.1.1.1.1.4.1\">decrease</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T5.1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S7.T5.1.1.2.1.1\">Round-trip Trading</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.1.1.2.1.2\">$6,453,831.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.1.1.2.1.3\">$ 4,298,251.79</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T5.1.1.2.1.4\">33.40%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.1.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S7.T5.1.1.3.2.1\">Unprofitable Trading</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S7.T5.1.1.3.2.2\">$2,247,915.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.1.1.3.2.3\">$606,487.54</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T5.1.1.3.2.4\">73.02%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.1.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S7.T5.1.1.4.3.1\">Hidden Trading</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.1.1.4.3.2\">$1,556,267.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.1.1.4.3.3\">$1,100,436.56</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T5.1.1.4.3.4\">29.29%</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5. </span>Statistical results of decreased wash trading behavior</figcaption>\n</figure>",
308
+ "capture": "Table 5. Statistical results of decreased wash trading behavior"
309
+ }
310
+ },
311
+ "image_paths": {
312
+ "1": {
313
+ "figure_path": "2312.12544v3_figure_1.png",
314
+ "caption": "Figure 1. Two main ways to purchase NFTs on OpenSea: instant sale (Black), auction (Green).",
315
+ "url": "http://arxiv.org/html/2312.12544v3/x1.png"
316
+ },
317
+ "2": {
318
+ "figure_path": "2312.12544v3_figure_2.png",
319
+ "caption": "Figure 2. NFT transfer functions standardized by ERC721 (off, 2023a)",
320
+ "url": "http://arxiv.org/html/2312.12544v3/x2.png"
321
+ },
322
+ "3": {
323
+ "figure_path": "2312.12544v3_figure_3.png",
324
+ "caption": "Figure 3. The workflow of wash trading and wash trader identification",
325
+ "url": "http://arxiv.org/html/2312.12544v3/x3.png"
326
+ },
327
+ "4": {
328
+ "figure_path": "2312.12544v3_figure_4.png",
329
+ "caption": "Figure 4. Obvious evidence in visualization for wash trading groups.",
330
+ "url": "http://arxiv.org/html/2312.12544v3/x4.png"
331
+ },
332
+ "5": {
333
+ "figure_path": "2312.12544v3_figure_5.png",
334
+ "caption": "Figure 5. The trend for the number of events related to wash trading, excluding OG:Crystal.",
335
+ "url": "http://arxiv.org/html/2312.12544v3/x5.png"
336
+ },
337
+ "6": {
338
+ "figure_path": "2312.12544v3_figure_6.png",
339
+ "caption": "Figure 6. Time interval differences when transitioning to the next non-wash-traded time window between wash-traded and non-wash-traded time windows",
340
+ "url": "http://arxiv.org/html/2312.12544v3/x6.png"
341
+ },
342
+ "7": {
343
+ "figure_path": "2312.12544v3_figure_7.png",
344
+ "caption": "Figure 7. The distribution of all wash-traded time windows in terms of gain or loss\n",
345
+ "url": "http://arxiv.org/html/2312.12544v3/x7.png"
346
+ },
347
+ "8": {
348
+ "figure_path": "2312.12544v3_figure_8.png",
349
+ "caption": "Figure 8. The difference between the last trading price in wash-traded time window with each trading price for the following 1st to 7th users\n",
350
+ "url": "http://arxiv.org/html/2312.12544v3/x8.png"
351
+ },
352
+ "9": {
353
+ "figure_path": "2312.12544v3_figure_9.png",
354
+ "caption": "Figure 9. The evolution of OG:Crystal #1861 from the 5th generation to its final form through Round-trip Trading (Reef, 2022)",
355
+ "url": "http://arxiv.org/html/2312.12544v3/x9.png"
356
+ },
357
+ "10": {
358
+ "figure_path": "2312.12544v3_figure_10.png",
359
+ "caption": "Figure 10. The total number of ETH transfer events at different times before and after an NFT sale event.",
360
+ "url": "http://arxiv.org/html/2312.12544v3/x10.png"
361
+ }
362
+ },
363
+ "validation": true,
364
+ "references": [
365
+ {
366
+ "1": {
367
+ "title": "Blockchain.",
368
+ "author": "2023a.",
369
+ "venue": "",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "2": {
375
+ "title": "ERC721 standard.",
376
+ "author": "2023a.",
377
+ "venue": "",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "3": {
383
+ "title": "Ethereum.",
384
+ "author": "2023b.",
385
+ "venue": "",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "4": {
391
+ "title": "Information of Wrapped Ether.",
392
+ "author": "2023.",
393
+ "venue": "",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "5": {
399
+ "title": "LooksRare.",
400
+ "author": "2023b.",
401
+ "venue": "",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "6": {
407
+ "title": "Official explanation for OG:Crystal.",
408
+ "author": "2023.",
409
+ "venue": "",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "7": {
415
+ "title": "Smart Contract.",
416
+ "author": "2023c.",
417
+ "venue": "",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "8": {
423
+ "title": "XBlock.",
424
+ "author": "2023.",
425
+ "venue": "",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "9": {
431
+ "title": "Direct evidence of bitcoin wash trading.",
432
+ "author": "Arash Aloosh and Jiasun Li. 2019.",
433
+ "venue": "Available at SSRN 3362153 (2019).",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "10": {
439
+ "title": "Scams Explained: What is an NFT Wash Trade? Is It a Crime?",
440
+ "author": "ESQ. ANDREW ROSSOW. 2022.",
441
+ "venue": "",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "11": {
447
+ "title": "An NFT Just Sold for $532 Million, But Didn\u2019t Really Sell at All.",
448
+ "author": "Nick Baker. 2021.",
449
+ "venue": "",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "12": {
455
+ "title": "Ethereum transaction graph analysis. In 2017 12th international conference for internet technology and secured transactions (ICITST). IEEE, 498\u2013500.",
456
+ "author": "Wren Chan and Aspen Olmsted. 2017.",
457
+ "venue": "",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "13": {
463
+ "title": "Non-fungible token transactions: Data and challenges.",
464
+ "author": "Jason B Cho, Sven Serneels, and David S Matteson. 2023.",
465
+ "venue": "Data Science in Science 2, 1 (2023), 2151950.",
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "14": {
471
+ "title": "Pearson correlation coefficient.",
472
+ "author": "Israel Cohen, Yiteng Huang, Jingdong Chen, Jacob Benesty, Jacob Benesty, Jingdong Chen, Yiteng Huang, and Israel Cohen. 2009.",
473
+ "venue": "Noise reduction in speech processing (2009), 1\u20134.",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "15": {
479
+ "title": "Nearly 45% of all NFT Trading Volume on Ethereum are Fake: Report.",
480
+ "author": "Coinfomania. 2022.",
481
+ "venue": "",
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "16": {
487
+ "title": "CoinGecko API document.",
488
+ "author": "CoinGecko. 2023.",
489
+ "venue": "",
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "17": {
495
+ "title": "ETHEREUM ACCOUNTS.",
496
+ "author": "Ethereum contributors. 2023.",
497
+ "venue": "",
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "18": {
503
+ "title": "Understanding security issues in the NFT ecosystem.",
504
+ "author": "Dipanjan Das, Priyanka Bose, Nicola Ruaro, Christopher Kruegel, and Giovanni Vigna. 2021.",
505
+ "venue": "arXiv preprint arXiv:2111.08893 (2021).",
506
+ "url": null
507
+ }
508
+ },
509
+ {
510
+ "19": {
511
+ "title": "How To Sell An NFT Privately.",
512
+ "author": "Luci Goodman. 2022.",
513
+ "venue": "",
514
+ "url": null
515
+ }
516
+ },
517
+ {
518
+ "20": {
519
+ "title": "Mining frequent patterns without candidate generation: A frequent-pattern tree approach.",
520
+ "author": "Jiawei Han, Jian Pei, Yiwen Yin, and Runying Mao. 2004.",
521
+ "venue": "Data mining and knowledge discovery 8 (2004), 53\u201387.",
522
+ "url": null
523
+ }
524
+ },
525
+ {
526
+ "21": {
527
+ "title": "How many users on opensea? Is OpenSea the most popular?",
528
+ "author": "Herman Hayes. 2022.",
529
+ "venue": "",
530
+ "url": null
531
+ }
532
+ },
533
+ {
534
+ "22": {
535
+ "title": "NFT Sales in 2022 Nearly Matched the 2021 Boom, Despite Market Crash.",
536
+ "author": "Andrew Hayward. 2022.",
537
+ "venue": "",
538
+ "url": null
539
+ }
540
+ },
541
+ {
542
+ "23": {
543
+ "title": "OpenSea Sets New Record With $5 Billion USD in Monthly NFT Sales.",
544
+ "author": "Hypebeast. 2023.",
545
+ "venue": "",
546
+ "url": null
547
+ }
548
+ },
549
+ {
550
+ "24": {
551
+ "title": "Official explanation for KaijuKingz.",
552
+ "author": "KaijuKingz. 2023.",
553
+ "venue": "",
554
+ "url": null
555
+ }
556
+ },
557
+ {
558
+ "25": {
559
+ "title": "A Game of NFTs: Characterizing NFT Wash Trading in the Ethereum Blockchain.",
560
+ "author": "Massimo La Morgia, Alessandro Mei, Alberto Maria Mongardini, and Eugenio Nerio Nemmi. [n.\u2009d.].",
561
+ "venue": "([n.\u2009d.]).",
562
+ "url": null
563
+ }
564
+ },
565
+ {
566
+ "26": {
567
+ "title": "NFT Wash Trading in the Ethereum Blockchain.",
568
+ "author": "Massimo La Morgia, Alessandro Mei, Alberto Maria Mongardini, and Eugenio Nerio Nemmi. 2022.",
569
+ "venue": "arXiv preprint arXiv:2212.01225 (2022).",
570
+ "url": null
571
+ }
572
+ },
573
+ {
574
+ "27": {
575
+ "title": "Trading Rewards.",
576
+ "author": "LooksRare. 2023.",
577
+ "venue": "",
578
+ "url": null
579
+ }
580
+ },
581
+ {
582
+ "28": {
583
+ "title": "OpenSea API document.",
584
+ "author": "OpenSea. 2023.",
585
+ "venue": "",
586
+ "url": null
587
+ }
588
+ },
589
+ {
590
+ "29": {
591
+ "title": "ERC-20.",
592
+ "author": "OpenZeppelin. 2023.",
593
+ "venue": "",
594
+ "url": null
595
+ }
596
+ },
597
+ {
598
+ "30": {
599
+ "title": "The Story of Cryptopunk #9998: Did it really sell for 532 million?",
600
+ "author": "pastel. 2022.",
601
+ "venue": "",
602
+ "url": null
603
+ }
604
+ },
605
+ {
606
+ "31": {
607
+ "title": "Details of OG Crystals #1861.",
608
+ "author": "Organic Growth: Crystal Reef. 2022.",
609
+ "venue": "",
610
+ "url": null
611
+ }
612
+ },
613
+ {
614
+ "32": {
615
+ "title": "Round-Trip Trading Definition, Legitimate & Unethical Examples.",
616
+ "author": "GORDON SCOTT. 2023.",
617
+ "venue": "",
618
+ "url": null
619
+ }
620
+ },
621
+ {
622
+ "33": {
623
+ "title": "What Is NFT Wash Trading?",
624
+ "author": "Andrey Sergeenkov. 2022.",
625
+ "venue": "",
626
+ "url": null
627
+ }
628
+ },
629
+ {
630
+ "34": {
631
+ "title": "Detecting wash trading for nonfungible tokens.",
632
+ "author": "Sven Serneels. 2023.",
633
+ "venue": "Finance Research Letters 52 (2023), 103374.",
634
+ "url": null
635
+ }
636
+ },
637
+ {
638
+ "35": {
639
+ "title": "$500M CryptoPunk sale was just wash trading, because of course it was.",
640
+ "author": "Protos Staff. 2021.",
641
+ "venue": "",
642
+ "url": null
643
+ }
644
+ },
645
+ {
646
+ "36": {
647
+ "title": "Non-fungible Tokens-Exploring Suspicious Washtrader Communities in NFT Networks.",
648
+ "author": "Nargess Tahmasbi and Alexander Fuchsberger. 2022.",
649
+ "venue": "(2022).",
650
+ "url": null
651
+ }
652
+ },
653
+ {
654
+ "37": {
655
+ "title": "Suspicious trading in nonfungible tokens (nfts): Evidence from wash trading.",
656
+ "author": "Syed Ahzam Tariq and Imtiaz Sifat. 2022.",
657
+ "venue": "Available at SSRN 4097642 (2022).",
658
+ "url": null
659
+ }
660
+ },
661
+ {
662
+ "38": {
663
+ "title": "Depth-first search and linear graph algorithms.",
664
+ "author": "Robert Tarjan. 1972.",
665
+ "venue": "SIAM journal on computing 1, 2 (1972), 146\u2013160.",
666
+ "url": null
667
+ }
668
+ },
669
+ {
670
+ "39": {
671
+ "title": "NFT wash trading: Quantifying suspicious behaviour in NFT markets.",
672
+ "author": "Victor von Wachter, Johannes Rude Jensen, Ferdinand Regner, and Omri Ross. 2022.",
673
+ "venue": "arXiv preprint arXiv:2202.03866 (2022).",
674
+ "url": null
675
+ }
676
+ },
677
+ {
678
+ "40": {
679
+ "title": "Blockchain technology, bitcoin, and Ethereum: A brief overview. In 2018 17th International Symposium on INFOTEH-JAHORINA, INFOTEH 2018-Proceedings.",
680
+ "author": "D Vuji\u010di\u0107, D Jagodic, and S Ran\u0111i\u0107. 2018.",
681
+ "venue": "",
682
+ "url": null
683
+ }
684
+ },
685
+ {
686
+ "41": {
687
+ "title": "NFTDisk: Visual Detection of Wash Trading in NFT Markets.",
688
+ "author": "Xiaolin Wen, Yong Wang, Xuanwu Yue, Feida Zhu, and Min Zhu. 2023.",
689
+ "venue": "arXiv preprint arXiv:2302.05863 (2023).",
690
+ "url": null
691
+ }
692
+ },
693
+ {
694
+ "42": {
695
+ "title": "Characterizing the OpenSea NFT marketplace. In Companion Proceedings of the Web Conference 2022. 488\u2013496.",
696
+ "author": "Bryan White, Aniket Mahanti, and Kalpdrum Passi. 2022.",
697
+ "venue": "",
698
+ "url": null
699
+ }
700
+ },
701
+ {
702
+ "43": {
703
+ "title": "Non-fungible token.",
704
+ "author": "Wikipedia. 2023.",
705
+ "venue": "",
706
+ "url": null
707
+ }
708
+ },
709
+ {
710
+ "44": {
711
+ "title": "XBlock-ETH: Extracting and exploring blockchain data from Ethereum.",
712
+ "author": "Peilin Zheng, Zibin Zheng, Jiajing Wu, and Hong-Ning Dai. 2020.",
713
+ "venue": "IEEE Open J. Comput. Soc. 1 (May 2020), 95\u2013106.",
714
+ "url": null
715
+ }
716
+ }
717
+ ],
718
+ "url": "http://arxiv.org/html/2312.12544v3"
719
+ }
20240722/2312.14055v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2401.00009v3.json ADDED
@@ -0,0 +1,65 @@
1
+ {
2
+ "title": "Turing\u2019s Test, a Beautiful Thought Experiment",
3
+ "abstract": "In the wake of the latest trends of artificial intelligence (AI), there has been a resurgence of claims and questions about the Turing test and its value, which are reminiscent of decades of practical \u201cTuring\u201d tests. If AI were quantum physics, by now several \u201cSchr\u00f6dinger\u2019s\u201d cats would have been killed.\nIt is time for a historical reconstruction of Turing\u2019s beautiful thought experiment. This paper presents a wealth of evidence, including new archival sources, and gives original answers to several open questions about Turing\u2019s 1950 paper, including its relation with early AI.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "WHAT IS THE TURING TEST?",
9
+ "text": "In 1950, Alan Turing (Fig. 1 ###reference_###)\npublished \u201cComputing Machinery and Intelligence\u201d [turing1950], the second of his three seminal papers.111The other two being \u201cOn Computable Numbers\u201d [turing1936] and \u201cThe Chemical Basis of Morphogenesis\u201d [turing1952b].\nThe text has 28 pages, divided in seven sections, \u00a71-\u00a77. Three main logical steps can be identified in his argument: the proposal (\u00a71-\u00a73, in pp.), the science (\u00a74-\u00a75, in 6 pp.), and the discussion (\u00a76-\u00a77, in 18+ pp.). This structure, with its thematic order and size distribution, can be revealing about the nature of Turing\u2019s paper and argument.\nThe proposal sought to replace with the imitation game the question \u201cCan machines think?,\u201d which he considered \u201ctoo meaningless to deserve discussion\u201d (p. 442; Turing was coming from unstructured multidisciplinary debates in two editions of a seminar, \u201cMind and Machine,\u201d held at the Philosophy Department of Manchester University in October and December, 1949.)222Of the December edition, a participant wrote: \u201cI wish you had been with us a few days ago we had an amusing evening discussion with Thuring [sic], Williams, Max Newman, Polyani [sic], Jefferson, J Z Young & myself An electronic analyser and a digital computer (universal type) might have sorted the arguments out a bit.\u201d Christmas postcard from Jules Y. Bogue to Warren McCulloch, c. December, 1949. American Philosophical Society, W. S. McCulloch Papers, Mss.B.M139_005. Thanks to Jonathan Swinton for this archival finding.\nThe purpose of the proposal was to change the common meaning of the word \u201cmachine\u201d (e.g., a steam engine, a bulldozer) in light of the new mathematical science of \u201cuniversal\u201d digital computing. 
The imitation game would allow for a grounded discussion of \u201cmachine\u201d and \u201cthinking,\u201d seeking to expand the meaning of \u201cthinking\u201d and detach it from the human species, much as the meaning of \u201cuniverse\u201d was once detached from the Earth, but also as a critique of anthropocentrism.\nIn 1950, one of the OED definitions of \u201cmachine\u201d was:333New English Dictionary. Oxford, Vol. VI, Part II, M-N, p. 7.\n\u201ca combination of parts moving mechanically as contrasted with a being having life, consciousness and will Hence applied to a person who acts merely from habit or obedience to a rule, without intelligence, or to one whose actions have the undeviating precision and uniformity of a machine.\u201d\nThus, by definition, common sense did not allow the meanings of \u201cmachine\u201d and \u201cthinking\u201d to overlap. Despite Turing\u2019s emphasis in his opening paragraph that he did not intend to discuss how these words were \u201ccommonly used\u201d (p. 433), the hostility to his proposal can be seen from one of the first reactions, from a participant in the 1949 Manchester seminars, who quoted the above OED definition to appeal to common sense [mays1952, p. 149].\nThe new question, which Turing considered to have a \u201cmore accurate form\u201d [turing1950, p. 442], would be based on a vivid image, his \u201ccriterion for \u2018thinking\u2019\u2006\u201d (p. 436), which he called interchangeably the \u201cimitation game\u201d and his \u201ctest.\u201d444Turing referred to his \u201ctest\u201d four times \u2014 in pp. 446\u2013447, 454. He also referred to it as an \u201cexperiment\u201d \u2014 once on p. 436, twice on p. 455, and twice again on p. 
457.\nThe new question was whether a machine playing A, the deceiver, could imitate a woman, a man, a human being, or a different machine playing B, the assistant, in a remotely played conversation game to pass as B in the eyes of an average human interrogator playing C, the judge.\nHowever, the details and exact conditions of the imitation game as an experiment slipped through Turing\u2019s text in a series of variations that defies interpretation. A close reading of the text identifies four different conditions of the game with respect to players A-B, namely, man-woman (p. 433), machine-woman (p. 434), machine-machine (pp. 441, 451-452), and machine-man (p. 442).\nThese different conditions relate to four variants of the \u201cnew\u201d question that Turing posed to replace his \u201coriginal\u201d question (see Box 1).\nIn addition to varying the species (types) of the players, he also increased the storage and speed of the machine and provided it with a hypothetically appropriate program (), and suggested a base time for the interrogation session (). Other seemingly relevant parameters were not mentioned, such as the number of interrogators used to arrive at a statistically sound conclusion, although their profile is mentioned \u2014 they should be \u201caverage\u201d \u2014, and later reiterated \u2014 they \u201cshould not be expert about machines.\u201d555\u201cCan automatic calculating machines be said to think?,\u201d Broadcast on BBC Third Programme, 14 and 23 Jan. 1952. Archives Centre, King\u2019s College, Cambridge, AMT/B/6.\nBOX 1\n \n\nThe various questions and conditions of Turing\u2019s test\n: \u201cI propose to consider the question, \u2018Can machines think?\u2019\u2006\u201d (p. 
433)\n: \u201cWe now ask the question, \u2018What will happen when a machine takes the part of A in this game?\u2019 Will the interrogator decide wrongly as often when the game is played like this [machine-woman] as he does when the game is played between a man and a woman? These questions replace our original, \u2018Can machines think?\u2019\u2006\u201d (pp. 433-434)\n: \u201cThere are already a number of digital computers in working order, and it may be asked, \u2018Why not try the experiment straight away? It would be easy to satisfy the conditions of the game. A number of interrogators could be used, and statistics compiled to show how often the right identification was given.\u2019 The short answer is that we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.\u201d (p. 436)\n: \u201cIt was suggested tentatively that the question []\nshould be replaced by [] \nBut in view of the universality property we see that either of these questions is equivalent to this,\n\u2018Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?\u2019\u2006\u201d (p. 442)\n: \u201cI believe that in about fifty years\u2019 time it will be possible to programme computers, with a storage capacity of about , to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning.\u201d (p. 442)\nThis last version of the test appears at the beginning of \u00a76 of the 1950 paper. 
As we will see shortly, this is the version that Turing most directly associates with the idea of a future experiment. As the least underspecified version of the test (cf. Box 1), it has been the one picked up by promoters of practical \u201cTuring\u201d tests. In that passage, Turing expresses his belief that in \u201cabout fifty years\u201d an \u201caverage interrogator\u201d would miss the identification in at least 30% of the test sessions.\nTwo sentences later, Turing states a second belief:\nI believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.\n[turing1950, p. 442]\nOnce again we are brought to a crucial moment at the end of the century. But this second stated belief neatly reformulates the first, to which it is almost juxtaposed, and seems to reveal in common language what is meant by the rhetoric of achieving 30% of misidentification in the imitation game: it expresses in rough round numbers the experience of living in the culture he envisions, where \u201cone will be able to speak of machines thinking without expecting to be contradicted.\u201d\nCould such a cultural shift come with the future of digital computing? How remote was it? These discursive questions are arguably the real questions he addresses. What about running his test?\nHaving gone through nine objections to machine intelligence discussed on the basis of the imitation game, we come to the crux: Turing\u2019s reference to \u201cexperiment\u201d at the beginning of his \u00a77, \u201cLearning Machines.\u201d\nThis passage, which comes after his revisiting of Lady Lovelace\u2019s Objection, has received little attention:\nThese last two paragraphs do not claim to be convincing arguments. 
They should rather be described as \u2018recitations tending to produce belief.\u2019\n[Continues]\nThe only really satisfactory support that can be given for the view expressed at the beginning of \u00a76 [his two stated beliefs]\nwill be that provided by waiting for the end of the century and then doing the experiment described.\n[Concludes]\nBut what can we say in the meantime? What steps should be taken now if the experiment is to be successful?\n[turing1950, p. 455]\nThis passage, in three sentences followed by two rhetorical questions, sums up the sophistication of Turing\u2019s rhetoric. The suggestion to run the experiment comes juxtaposed with his indication that he is actively engaged in propaganda (\u201crecitations tending to produce belief\u201d). Again, he pushes \u201cthe experiment\u201d to \u201cthe end of the century,\u201d but now lands in the present, in a call to arms for research into \u201clearning machines\u201d so that the experiment \u2014 an iconic representation of the change he expects to see in talk of \u201cmachines thinking\u201d \u2014 could be successful.\nThe contrast between what he proposes for the future and for the present is revealing. If the research is done as he suggests, by the time \u201cthe experiment\u201d is to be conducted, he expects the machines to be so advanced that talk of \u201cmachines thinking\u201d will be commonplace. In his major 1948 \u201cIntelligent Machinery\u201d report,6Archives Centre, King\u2019s College, Cambridge, AMT/C/11.\nthe rhetoric of a crucial experiment does not appear. It was rather \u201cthe actual production of the machines\u201d that \u201cwould probably have some effect\u201d in convincing critics and opponents, because \u201cthe idea of \u2018intelligence\u2019 is itself emotional rather than mathematical\u201d (p. 
3).\nBut in the crucial year of 1949, as we will see later, Turing faced the strongest wave of opposition from contemporaries, leading to his test.\nBOX 2\n \n\nTuring\u2019s mathematical concept of imitation\nDear Miss Worsley,\nI was interested in your work on the relation between computers and\nTuring machines. I think it would be better though if you could try and find a realtion [sic] between T machines and infinite computers, rahter [sic] than between finite T machines and computers. The relation that you suggest is rather too trivial. The fact is that the motions of either a finite T machine or a finite computer are ultimately periodic, and therefore any sequence computed by them is ultimately periodic. It is easy therefore in theory to make one imitate the other, though the size of the imitating machine will (if this technique is adopted) have to be of the order of the exponential of the size of the imitated machine. Probably your methods could prove that this exponential relation could be reduced to a multiplicative factor. \nYours sincerely, A. M. Turing7\nTuring to B. H. Worsley, June 11, 1951, typeset; emphasis added. Unpublished writings of Alan Turing, copyright The Provost and Scholars of King\u2019s College Cambridge 2023. B.H. Worsley Collection, Archives Center, National Museum of American History, Smithsonian Institution. Quoted with permission. Thanks to Mark Priestley for this archival finding."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "IMITATION: FROM 1936 TO 1950",
+ "text": "Because the machine must imitate stereotypes of what it is not, Turing\u2019s proposal has often been criticized for encouraging fakes and tricks. But this view is related to the reading that Turing would have meant his test as a practical experiment.\nSuch a literal reading of Turing\u2019s test misses the point of his use of irony [goncalves2023irony], and misses the fact that his notion of imitation in 1950 was largely in continuity with his 1936 paper [turing1936]. This was hinted at in the words of the director of the National Physical Laboratory in a BBC broadcast in late 1946;8\u201cabout twelve years ago, a young Cambridge Mathematician by the name of Turing, wrote a paper which appeared in one of the mathematical journals, in which he worked out by strict logical principles, how far a machine could be imagined which would imitate the processes of thought\u201d (emphasis added). The Listener, Nov. 14, 1946, p. 663.\nand both in Turing\u2019s lecture in early 1947 and in his NPL report in mid-1948;9\u201cLecture to L.M.S. Feb. 20 1947\u201d and \u201cIntelligent Machinery.\u201d Archives Centre, King\u2019s College, Cambridge, AMT/B/1 and AMT/C/11.\nand as newly discovered correspondence with the Mexican-Canadian computer pioneer Beatrice Worsley (1921-1972) helps to clarify (see Box 2).\nIn his letter to Worsley, Turing seems to be more interested in the relations between \u201cthe motions\u201d of Turing machines and infinite computers, whose behavior can be non-periodic.\nPerhaps he thought of the living human brain as an infinite computer, in the sense that it has a continuous interface with its environment, which constantly intervenes and changes its logical structure.10\u201cIntelligent Machinery\u201d (op. cit.).\nNow, the imitation game puts into empirical form the relation between digital computers, whose behavior is ultimately periodic, and the behavior of the human players. 
Can the behavior of their brains be approximated by a digital computer? Turing pursued this question. For his May 1951 broadcast, he wrote:\n\u201cthe view which I hold myself, that it is not altogether unreasonable to describe digital computers as brains \nIf it is accepted that real brains, as found in animals, and in particular in men, are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain.\u201d11\u201cCan digital computers think?,\u201d broadcast on BBC Third Programme, 15 May 1951. Archives Centre, King\u2019s College, Cambridge, AMT/B/5.\nBOX 3\n \n\n\u201c\u2026any sharp line between what machine and brain can do will fail\u201d\nDear Miss Worsley, \nI do not think you will be able to find any clue to essential differences between brains and computing machines (if there are any), in neuron behaviour. So long as what we know about a neuron can be embodied in the description of stochastic processes, the behaviour of any mechanism embodying such neurons can, in principle, be calculated by a suitable enlarged and speeded up Ferranti [Mark II] machine.12\u201cFerranti\u201d is typed and erased, and \u2018Mark II\u2019 (a version of the Manchester electronic computer) is added in pencil.\nMore accurately I should say that one can calculate random samples of its behaviour. I think any attempt to draw any sharp line between what machine and brain can do will fail. I think it is largely a quantitative matter. Probably one needs immensely more storage capacity then [sic] we have got, and possibly more than we shall ever have. Perhaps we may have enough capacity, but just won\u2019t find an appropriate programme. Naturally one won\u2019t make a man that way ever. It\u2019ll just be another species of the thinking genus.\nYours sincerely, A. M. Turing13\nTuring to B. H. Worsley, circa June, 1951, Turing\u2019s emphasis. 
Credit for this source is exactly the same as for that of Box 2. Quoted with permission.\nEven if the human brain can only be compared to an infinite computer, could it not be simulated by a digital computer equipped with a sufficiently large memory? An excerpt of another newly discovered Turing letter to Worsley from mid-1951 can give more contour and provide further insight into Turing\u2019s views (see Box 3). A highlight in this excerpt is Turing\u2019s view that to the extent that the behavior of a neuron can be described as a stochastic process, it would be possible to \u201ccalculate random samples\u201d of the mechanism that embodies the brain and then imitate it.\nAn effective imitation of the brain by a machine would require knowledge of the anatomy and physiology of the brain to inspire an appropriate program, as well as much more storage and speed than was available to the Ferranti Mark I at the time (see Fig. 2).\nAnother important element in the excerpt is Turing\u2019s point that, even if a thinking machine is possible, the relation he has in mind is not one of identity but one of analogy: \u201cIt\u2019ll just be another species of the thinking genus.\u201d\nAn original answer to the question of why design a test based on imitation, which can be seen as encouraging deception, is that imitation was actually Turing\u2019s fundamental principle of the new science of universal digital computing. He conceived his 1950 paper partly in continuity with his 1936 paper. Both were based on his core concepts of machine and imitation, i.e., what it takes for a machine to imitate another machine. A key difference that breaks the continuity is that, by 1950, he had generalized the logical architecture of his universal machine. It would not only follow instructions, but would also be able to acquire new cognitive skills by learning. Using Turing\u2019s 1948 language,14\u201cIntelligent Machinery\u201d (op. 
cit.).\nuniversality can be achieved by starting with an \u201corganized\u201d machine (1936), or with an \u201cunorganized\u201d machine (1948/1950). Whereas in 1936 the machine would be given an a priori, well-defined and fixed table of instructions for each task, in 1950 it would also be able to perform a new task by changing its logical structure as a result of learning by experience, much as the brain does \u201cby changing its neuron circuits by the growth of axons and dendrites.\u201d15Turing to Ross Ashby, circa November 19, 1946. British Library, Collection \u201cW. Ross Ashby: correspondence of W. Ross Ashby,\u201d Add MS 89153/26.\nThe use of psychological tricks in practical \u201cTuring\u201d tests has little to do with Turing\u2019s 1950 proposal. In 1951, he warned: \u201cIt would be quite easy to arrange the [machine\u2019s] experiences in such a way that they automatically caused the structure of the machine to build up into a previously intended form, and this would obviously be a gross form of cheating, almost on a par with having a man inside the machine.\u201d16\u201cIntelligent machinery, a heretical theory,\u201d a lecture given to \u201c51 Society\u201d at Manchester, c. 1951. Archives Centre, King\u2019s College, Cambridge, AMT/B/4.\nThe \u201chuman fallibility\u201d that Turing encouraged the machine to show was meant as a by-product of learning by experience [turing1950, p. 459]:\n\u201cAnother important result of preparing our machine for its part in the imitation game by a process of teaching and learning is that \u2018human fallibility\u2019 is likely to be omitted [from the teaching] in a rather natural way, i.e., [learned] without special \u2018coaching\u2019.\u201d\nThat is, for a machine to be a valid player of Turing\u2019s test, it cannot be specially prepared for it. This means that we have never seen a practical Turing test."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "THE METHOD OF THOUGHT EXPERIMENTS",
+ "text": "The various rhetorical questions Turing posed to replace the original question can be generalized as follows [goncalves2023experiment]:\nQuestion: could player A imitate intellectual stereotypes associated with player B\u2019s type successfully (well enough to deceive player C), despite A and B\u2019s physical differences?\nIt has been largely unnoticed that the various questions instantiating it can be viewed as following a case-control methodology, applied in two stages.\nAt the more obvious intra-game level, A plays the case, and B plays the control. However, at the inter-game level, two variants set the case (machine-woman and machine-man) and the other two set the control (man-woman and machine-machine). While the first two are open, creating suspense around the test, the latter two are resolved as follows.\nIt is known that a man (A) can imitate gender stereotypes associated with a woman (B) to deceive an interrogator (C) despite their physical differences, as assumed in parlor games and as Turing illustrates on p. 434: \u201cMy hair is shingled\u2026\u201d\nFurther, regarding the machine-machine variant, it is also known that a digital computer (A), because of its universality property [turing1950, \u00a7\u00a74, 5], can imitate any discrete-state machine (B) and even a continuous one (p. 451), despite their physical differences.\nWe can now explore how Turing\u2019s presentation of his test fits Mach\u2019s description of \u201cthe basic method of thought experiments,\u201d which is variation, continuously if possible.\nMach is the author of perhaps the most classic text on thought experiments in the modern scientific tradition [mach1897], in which he developed observations and insights based on examples from the history of modern physics, mathematics, and common sense experience. 
He wrote: \u201cBy varying the conditions (continuously if possible), the scope of ideas (expectations) tied to them is extended: by modifying and specializing the conditions we modify and specialize the ideas, making them more determinate, and the two processes alternate\u201d (p. 139).\nMach illustrated his point with the process of discovery of universal gravitation:\nA stone falls to the ground. Increase the stone\u2019s distance from the earth, and it would go against the grain to expect that this continuous increase would lead to some discontinuity. Even at lunar distance the stone will not suddenly lose its tendency to fall. Moreover, big stones fall like small ones: the moon tends to fall to the earth. Our ideas would lose the requisite determination if one body were attracted to the other but not the reverse, thus the attraction is mutual and remains so with unequal bodies, for the cases merge into one another continuously \u2026 discontinuities are quite conceivable, but it is highly improbable that their existence would not have betrayed itself by some experience. [mach1897, pp. 138-139]\nThe conditions, i.e., the distance of the fall and the size of the stones, are continuously varied in the physicist\u2019s mind and eventually stretched to the celestial scale. 
Reciprocally, the concept of a celestial body, such as the Earth or the Moon, becomes interchangeable with the concept of a stone, and quite unequal stones can then become mutually attracted.\nThe cases continuously merge into one another, and a conceptual integration is established that connects near-earth bodies to celestial bodies under a unified concept.\nTuring\u2019s imitation game extended the scope of ideas and expectations established earlier in his 1936 paper, moving from machine-machine and restricted human-machine imitation in 193617\u201cWe may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions\u201d [turing1936, p. 231].\nto more general human-machine imitation in 1950.\nTo understand this better, let us take a brief look at Turing\u2019s 1948 report \u201cIntelligent Machinery\u201d (op. cit.).\nIn section \u00a73, \u2018Varieties of machinery,\u2019 he noted: \u201cAll machinery can be regarded as continuous, but when it is possible to regard it as discrete it is usually best to do so.\u201d\nA brain, he wrote, \u201cis probably\u201d a \u2018continuous controlling\u2019 machine, but in light of the digital nature of neural impulses, it \u201cis very similar to much discrete machinery.\u201d\nIn section \u00a76, \u201cMan as Machine,\u201d he referred to the imitation of \u201cany small part of a man\u201d by machines: \u201cA great positive reason for believing in the possibility of making thinking machinery is the fact that it is possible to make machinery to imitate any small part of a man\u201d (p. 
420).\nIn light of this, he argued:\n\u201cOne way of setting about our task of building a \u2018thinking machine\u2019 would be to take a man as a whole and to try to replace all the parts of him by machinery.\u201d\nBut Turing dismissed such a method as \u201caltogether too slow and impracticable,\u201d and later alluded to moral and aesthetic reasons as well.18For the May 1951 broadcast (op. cit.), he wrote: \u201cI certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics such as the shape of the human body; it appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers.\u201d\nWe can now follow Turing\u2019s use of the method of continuous variation in the design of his imitation tests.\nThe central question Turing asks is whether the intellectual and cultural performances (the stereotypes)19Susan Sterrett first emphasized the importance of stereotypes in the imitation game [sterrett2000].\nassociated with woman, man, machine (the types) could be imitated, and thus softly transposed.\nNote that for any arbitrarily chosen type, say, a woman, further specific subtypes can be continuously conceived and considered as varied conditions of the imitation game: women having a certain property, a subproperty, and so on. For any two arbitrarily chosen types, a new type can be conceived, whether as a specialization or a modification (\u201cany small part of a man\u201d). 
Because concepts are fluid abstractions, there is an evolving continuum of levels and types.\nThe question across the various versions of the game can be posed this way: how does C\u2019s perception of A\u2019s performance against B\u2019s performance change as the game\u2019s conditions are (continuously) varied?\nWill it change if gendered verbal behavior is required as a subtype of human verbal behavior?\nWill it change if the machine\u2019s hardware is increased and/or its learning program is modified?\nFor Turing, there is no conceptual discontinuity between the various conditions instantiating his thought experiment. It is often asked whether the imitation goal changes from the machine-woman test (p. 434) to the machine-man test (p. 442). Note that this open question vanishes under this interpretation of the test, which observes material and structural facts of Turing\u2019s text: he describes such goals only very loosely, and, more importantly, continuously varies the design of the game. His focus is on the power of universal digital computing to imitate and deconstruct arbitrary types, not on setting this or that type for the sake of promoting a particular test.\nFrom 1948 to 1952, Turing presented various imitation tests based on both the game of chess and conversation, even bringing back chess at the end of his 1950 paper (p. 460), and referring to his (\u201cmy\u201d) various \u201cimitation tests\u201d in 1952.20\u2018Can automatic calculating machines be said to think?\u2019, January 1952 (op. cit.). Why would he present various tests, as opposed to a well-defined, controlled experiment? This is a historically sound question, because it does not struggle with the materiality of Turing\u2019s texts and their chronological coherence. Nor does it erase some of his tests in favor of others, or overlook the historical conditions of his proposal. 
This paper has provided an answer by reconstructing what can be called Turing\u2019s use of the method of thought experiments, whose context will now be explored."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "1949, THE CRUCIAL YEAR",
+ "text": "As is often the case with thought experiments, Turing proposed his test out of a controversy [goncalves2023argument]. He was coming from his continuing disputes with the physicist and computer pioneer, Fellow of the Royal Society (FRS), Douglas Hartree (1897-1958),\nover the meaning of the newly existing digital computers, which had started in 1946 [goncalves2023lovelace]. Now, in mid-1949, new opponents had arrived, most notably the neurosurgeon Geoffrey Jefferson (1886-1961),\nand the chemist and philosopher Michael Polanyi (1891-1976),\nboth also FRS and based at the same institution as Turing, the University of Manchester, where Turing had spent a year as a Reader in the Department of Mathematics [hodges1983].\nThese three thinkers challenged Turing\u2019s claims about the future possibilities and limitations of digital computers.\nIn June 1949, Hartree published his Calculating Instruments and Machines [hartree1949], in which Ada Lovelace\u2019s work was acknowledged seemingly for the first time by a twentieth-century computer pioneer [goncalves2023lovelace]. Since November 1946, Hartree had been opposing the use of the term \u201celectronic brain.\u201d He wrote in a letter to the Times: \u201cThese machines can only do precisely what they are instructed to do by the operators who set them up.\u201d21\u201cThe \u2018Electronic Brain\u2019: A Misleading Term; No Substitute for Thought,\u201d Times, November 7, 1946.\nNow in 1949, Hartree added strength to his argument by quoting the words of Ada Lovelace from the 1840s about Charles Babbage\u2019s machine: \u201cThe Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform\u201d (her emphasis) [hartree1949, p. 70].\nNoting Hartree\u2019s anachronism in taking Lovelace\u2019s words out of their time and place, Turing further developed his earlier, 1947 response to Hartree\u2019s challenge,22\u2018Lecture to L.M.S. Feb. 20 1947\u2019 (op. 
cit.), p. 22.\nnow calling it \u201c(6) Lady Lovelace\u2019s objection\u201d [turing1950, p. 450]. Turing argued that intelligent behavior is the result of learning, a capability he had no problem attributing to future digital computers.\nHe also questioned the implicit assumption of Hartree\u2019s challenge: \u201cWho can be certain that \u2018original work\u2019 that he has done was not simply the growth of the seed planted in him by teaching, or the effect of following well-known general principles\u201d (p. 450).\nIn the imitation game, Turing suggested, the interrogator would be able to evaluate the machine\u2019s ability to learn: \u201cThe game (with the player B omitted) is frequently used in practice under the name of viva voce to discover whether some one really understands something or has \u2018learnt it parrot fashion\u2019\u2006\u201d (p. 446). But then, what is player B doing in the imitation game? Following the 1949 events will suggest an answer.\nOn June 9, in London, Jefferson delivered his prestigious Lister Oration on \u201cThe Mind of Mechanical Man,\u201d which was published in the debuting British Medical Journal on June 25 [jefferson1949]. His lecture was headlined in the Times on June 10,23\u201cNo Mind For Mechanical Man.\u201d Times, 10 June 1949, p. 2.\nemphasizing his claim that \u201cNot until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain\u201d (p. 1110).\nThis prompted Turing\u2019s famous response: \u201cI do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.\u201d24\u201cThe Mechanical Brain.\u201d Times, 11 June 1949, p. 
4.\nIn October and December 1949, the two seminars on \u201cMind and Machine\u201d were organized by Polanyi et al., and attended by Jefferson, Turing et al., at the Philosophy Department in Manchester [polanyi1958, p. 275]. These seminar discussions, followed by Jefferson giving Turing an offprint of his Lister Oration,25This may have happened in the evening of the December meeting of the Manchester seminar (op. cit.), when, according to a later letter from Jefferson to Ethel S. Turing, Turing and J.Z. Young went to dinner at Jefferson\u2019s house [sara1959, p. xx].\nwhich Turing read and marked with a pencil,26Off-print, \u201cThe mind of mechanical man\u201d by Geoffrey Jefferson. Archives Centre, King\u2019s College, Cambridge, AMT/B/44.\nled him to write his 1950 paper and propose his test [goncalves2023argument].\nJefferson had characterized intelligence as an emergent property of the animal nervous system.\nHe emphasized that \u201csex hormones introduce peculiarities of behaviour often as inexplicable as they are impressive\u201d (p. 
1107).\nBecause \u201cmodern automata\u201d are not moved by male and female sex hormones, they could not exhibit such peculiarities to imitate the actions of animals or \u201cmen.\u201d Specifically, he used a thought experiment to criticize Grey Walter\u2019s mechanical turtles by suggesting that gendered behavior is causally related to the physiology of sex hormones (ibid.):\n[\u2026It] should be possible to construct a simple animal such as a tortoise (as Grey Walter ingeniously proposed) that would show by its movements that it disliked bright lights, cold, and damp, and be apparently frightened by loud noises, moving towards or away from such stimuli as its receptors were capable of responding to.\nIn a favourable situation the behaviour of such a toy could appear to be very lifelike \u2014 so much so that a good demonstrator might cause the credulous to exclaim \u2018This is indeed a tortoise.\u2019 I imagine, however, that another tortoise would quickly find it a puzzling companion and a disappointing mate.\nIn reaction to Grey Walter and his transgressive tortoises [hayward2001],27Jefferson would attack Walter\u2019s automata again in speeches to learned societies in Manchester in 1953 [jefferson1953] and 1956 [jefferson1956] in the wake of Walter\u2019s The Living Brain [walter1953].\nJefferson offered the image of a genuine individual of a species placed side by side with an artificial one to emphasize the latter\u2019s artificiality. The function of the genuine individual is to expose the artificiality of the impostor. The means of exposure is the failure to demonstrate the authentic (sexual) behavior of the original species. 
This can explain Turing\u2019s introduction of a (gendered) control player B, who appears in Turing\u2019s 1950 test, whose design was prompted by his reading of Jefferson, but not in Turing\u2019s 1948, 1951, and 1952 tests.\nIn discussing \u201c(4) The Argument from Consciousness,\u201d Turing addressed Jefferson directly and quoted in full his conditions for agreeing \u201cthat machine equals brain,\u201d including \u201cbe warmed by flattery\u201d and \u201cbe charmed by sex\u201d [turing1950, pp. 445-446].28Jefferson\u2019s response was [jefferson1953, p. 73]: \u201cBut there are those like Dr. Turing who believe that we have no right to deny self-consciousness to the machines since they fulfill the definition of mind as given above \u2014 the ability to make choices.\u201d\nIn discussing the \u201c(5) Argument from Various Disabilities\u201d (p. 447), Turing again addressed Jefferson (p. 450) and argued that to say that a machine could never \u201cfall in love\u201d or \u201cmake someone fall in love with it\u201d was a flawed scientific induction from the capabilities of present machines.\nTuring\u2019s test design may have been an ironic response to Jefferson\u2019s suggestion that gendered behavior is causally related to the physiology of male and female sex hormones. As a repressed homosexual [hodges1983] and non-conformist in postwar England [goncalves2023irony], Turing might have been prepared to refer to a gender test. However, we have just seen that a basic version of this idea was actually available to him in the form of Jefferson\u2019s reaction to Walter\u2019s tortoises.\nApart from Turing\u2019s 1950 paper, in which he is in direct dialogue with Jefferson, he links his views on machine intelligence to sex and gender in one other known source, again with Jefferson in the background. At the end of a letter to a friend written after the Wilmslow police challenged his testimony,29Turing to Norman Routledge, circa mid-Feb., 1952. 
Archives Centre, King\u2019s College, Cambridge, AMT/D/14a.\nhe comments on the BBC broadcast of January 1952 (op. cit.): \u201cGlad you enjoyed the broadcast. J. [Jefferson] certainly was rather disappointing though. I\u2019m rather afraid that the following syllogism may be used by some in the future[:] Turing believes machines think. Turing lies with men. Therefore machines do not think.\u201d\nTuring\u2019s choice of conversation for his test remains to be explored.\nSurviving minutes of the \u201cMind and Machine\u201d seminar held on October 27, 1949, were published in 2000 by a participant, Wolfe Mays [mays2000].\nIn the first session, Polanyi presented a statement, \u201cCan the mind be represented by a machine?,\u201d30 Polanyi, Michael. Papers, Box 32, Folder 6, Hanna Holborn Gray Special Collections Research Center, University of Chicago Library.\nwhich was a G\u00f6delian argument that humans can do things that machines cannot. Although Turing had already addressed this argument in his 1947 lecture (op. cit.), Polanyi\u2019s insistence may help explain Turing\u2019s inclusion of \u201c(3) The Mathematical Objection\u201d [turing1950, p. 444]. Further, the minutes of the Manchester seminar show that Polanyi tried to distinguish the formal \u201crules of the logical system\u201d from the informal \u201crules which determine our own behaviour,\u201d and this helps explain Turing\u2019s inclusion of \u201c(8) The Argument from Informality of Behaviour\u201d (p. 452). Polanyi\u2019s argument came too late, as Turing had long been involved in such debates in the Moral Sciences Club at Cambridge University, both before and after World War II.31 Minutes and other papers of the Moral Sciences Club, 1878\u20132018, Cambridge University Library, GBR/0265/UA/Min.IX.39-48\u2217, 56-6\u2217 etc.\nYears later [polanyi1958, p. 
275], Polanyi remembered \u201ca communication to a Symposium held on \u2018Mind and Machine\u2019 at Manchester University in October, 1949,\u201d in which \u201cA.M. Turing has shown\nthat it is possible to devise a machine which will both construct and assert as new axioms an indefinite sequence of G\u00f6delian sentences.\u201d32 Polanyi added that \u201cthis is foreshadowed\u201d in Turing\u2019s 1938 paper based on his Ph.D. thesis, \u201cSystems of Logic Based on Ordinals,\u201d J. London Math. Soc. s2-45(1), 161-228.\nPolanyi resumed, showing that he assimilated the punch: \u201cAny heuristic process of a routine character\u2014for which in the deductive sciences the G\u00f6delian process is an example\u2014could likewise be carried out automatically.\u201d\nHowever, Polanyi used the same argument to dismiss the game of chess as a testbed for machine intelligence, noting: \u201cA routine game of chess can be played automatically by a machine, and indeed, all arts can be performed automatically to the extent to which the rules of the art can be specified.\u201d\nChess, not conversation, had been Turing\u2019s chosen field to illustrate, develop, and test machine intelligence since at least February 1946.33 \u201cProposed electronic calculator,\u201d February 1946. Archives Centre, King\u2019s College, Cambridge, AMT/C/32. On p. 16, Turing asks: \u201cCan the machine play chess?\u201d\nIn his 1948 \u2018Intelligent Machinery\u2019 (op. cit., pp. 21-22), Turing had discussed a tradeoff between convenient and impressive intellectual fields for exploring machine intelligence. After discussing \u201cvarious games e.g. 
chess,\u201d he wrote: \u201cOf the above possible fields the learning of languages would be the most impressive, since it is the most human of these activities.\u201d\nHowever, he avoided language learning because it seemed \u201cto depend rather too much on sense organs and locomotion to be feasible,\u201d stuck with chess, and ended up describing a chess-based imitation game.\nNow, in October 1949, he saw chess being dismissed as unimpressive to make the case for machine intelligence because its rules could be specified.\nSome time later, probably around Christmas 1949, Turing read Jefferson\u2019s Lister Oration [jefferson1949] and marked the passage quoting Ren\u00e9 Descartes (p. 1106), which starts: \u201cDescartes made the point, and a basic one it is, that a parrot repeated only what it had been taught and only a fragment of that; it never used words to express its own thoughts.\u201d Overall, Jefferson suggested \u2018speech\u2019 to be the distinguishing feature of human intelligence compared to other kinds of animal intelligence: \u201cGranted that much that goes on in our heads is wordless, we certainly require words for conceptual thinking as well as for expression \u2026 It is here that there is the sudden and mysterious leap from the highest animal to man, and it is in the speech areas of the dominant hemisphere that Descartes should have put the soul, the highest intellectual faculties\u201d (p. 1109).\nUnlike chess, which is governed by definite rules, good performance in conversation cannot be easily specified. Turing\u2019s 1950 choice for \u201cthe learning of languages\u201d as the intellectual field addressed in his test can be best understood as yet another concession to Jefferson and, in this case, to Polanyi as well.\nFinally, Jefferson also argued that the nervous impulse is not a purely electrical phenomenon but also a chemical one that depends on the continuity of specific physical quantities (p. 1108). 
It would thus be incommensurable with the activity of a digital computer. In response, Turing formulated \u201c(7) The Argument from Continuity in the Nervous System\u201d [turing1950, p. 451], in which he used the imitation game in its machine-machine version to neutralize that physical difference. A digital computer (a discrete-state machine) could imitate a differential analyzer (a continuous-state machine) to compute a transcendental number such as \u03c0 up to a certain digit. Turing gave this as a proof of concept that such a difference in nature disappears with the power of universal digital computing, given sufficient storage.\nIn summary, there is enough evidence to suggest that Turing varied the design of his imitation tests in response to the challenges posed by Hartree, Polanyi, and Jefferson.\nTuring\u2019s test was a response to critics. But was it also intended as a positive proposition?\nIn the 1990s, Turing\u2019s former student, close friend, and literary executor, Robin Gandy, wrote that Turing\u2019s 1950 paper \u201cwas intended not so much as a penetrating contribution to philosophy but as propaganda.\u201d Gandy added: \u201cTuring thought the time had come for philosophers and mathematicians and scientists to take seriously the fact that computers were not merely calculating engines but were capable of behaviour which must be accounted as intelligent; he sought to persuade people that this was so. He wrote this paper, unlike his mathematical papers, quickly and with enjoyment. I can remember him reading aloud to me some of the passages, always with a smile, sometimes with a giggle\u201d [gandy1996, p. 125]. We can now explore the effect of Turing\u2019s propaganda for machine intelligence on the other side of the North Atlantic."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "TURING\u2019S TEST AND EARLY AI",
33
+ "text": "Claude Shannon visited Turing in Manchester in October 1950,34 Claude E. Shannon, an oral history conducted in 1982 by Robert Price. IEEE History Center, Piscataway, NJ, USA.\nand may have returned to the United States with an offprint of Turing\u2019s 1950 paper,35 The first reprint of Turing\u2019s \u2018Computing Machinery and Intelligence\u2019 in the US appears to be that of James R. Newman (Ed.), The World of Mathematics, vol. 4 (New York: Simon and Schuster), first published January 1, 1956.\nwhich he would cite in his \u201cComputers and Automata\u201d [shannon1953] published in Proc. IRE in October 1953. \u201cRereading Samuel Butler\u2019s Erewhon today,\u201d Shannon wrote, \u201cone finds \u2018The Book of the Machines\u2019 disturbingly prophetic\u201d (p. 1235). Butler\u2019s novel [butler1872] envisioned a revolution of the machines against a satire of the Victorians representing the human species. It was invoked as a dystopia in June 1949, first by a Catholic priest in a letter to The\u2005Times published on the 14th (p. 5), and then in an editorial, \u201cMind, Machine, and Man,\u201d of the British\u2005Medical\u2005Journal introducing Jefferson\u2019s article on the 25th (pp. 1129-1130). Both urged scientists to disassociate themselves from Turing\u2019s research program. Turing responded with irony, not without a point. In his writings of 1950 and 1951 he referred to Butler\u2019s work for an image of his vision of the future of machines in nature and society [goncalves2023irony].\nBy May 1953, Shannon was working with John McCarthy on their collection Automata Studies [mccarthy1956], which revolved around \u201cthe theory of Turing machines\u201d (p. vii), and to which they invited Turing to contribute.36 Shannon and McCarthy to Turing, May 18, 1953. 
Alan Turing Papers (Additional), University of Manchester Library, GB133 TUR/Add/123.\nTuring declined the invitation, saying that he had been working for the last two years on \u201cthe mathematics of morphogenesis,\u201d although he expected \u201cto get back to cybernetics very shortly.\u201d37 Turing to Shannon, June 3, 1953 (ibid.).\nOne year and four days later, Turing was dead, and early AI would not note his biological turn.\nCommenting on \u201cthe Turing definition of thinking\u201d (p. vi), McCarthy and Shannon found it \u201cinteresting\u201d because it \u201chas the advantages of being operational or, in the psychologists\u2019 term, behavioristic \u2026 No metaphysical notions of consciousness, ego and the like are involved.\u201d They also thought that this very strength could be a weakness, because it has \u201cthe disadvantage\u201d of being susceptible to a memorizing machine playing the imitation game by looking up \u201ca suitable dictionary.\u201d\nMcCarthy and Shannon referred interchangeably to \u201cdefinition\u201d and to a word that Turing actually used, \u201ccriterion:\u201d\n\u201cWhile certainly no machines at the present time can even make a start at satisfying this rather strong criterion, Turing has speculated that within a few decades it will be possible to program general purpose computers in such a way as to satisfy this test\u201d [mccarthy1956, p. v, emphasis added].\nIn 1955, before the publication of Automata Studies, McCarthy and Shannon, together with Marvin Minsky and Nathaniel Rochester, co-authored their well-known \u201cProposal\u201d for AI research [mccarthy1955]. 
Unlike Turing himself, they seem to have thought of machine intelligence in complete continuity with Turing machines, as their opening paragraph suggests: \u201cThe study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.\u201d\nTuring\u2019s postwar view, which we have seen in connection with his concepts of machine and imitation above, seems instead to be that machines would learn their behavior primarily from experience, growing in intelligence like a human child, not always by being given precise instructions on how to do things.\nIn any case, the proponents of the Dartmouth seminar did follow Turing closely in writing: \u201cFor the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving\u201d (p. 7). This definition \u2014 compare it with \u201cthe Turing definition of thinking\u201d \u2014 would stay. Intelligent machines would be machines designed to match or exceed human performance in a range of cognitive tasks. The implication, emphasized by both Turing and Norbert Wiener but not by the founders of AI, was that humans could no longer be needed for most white-collar jobs in the labor market.\nIn the early 1960s, E. Feigenbaum and J. Feldman noted in Computers and Thought [feigenbaum1963] that Turing\u2019s 1950 paper \u201cappeared five years before concrete developments in intelligent behavior by machine began to occur;\u201d and \u201cyet,\u201d they continued, \u201cit remains today one of the most cogent and thorough discussions in the literature on the general question \u201cCan a machine think?\u201d (pp. 9-10). 
They observed Turing\u2019s \u201cbehavioristic posture relative to the question,\u201d which \u201cis to be decided by an unprejudiced comparison of the alleged \u2018thinking behavior\u2019 of the machine with normal \u2018thinking behavior\u2019 in human beings\u201d (emphasis added). They concluded: \u201cHe proposes an experiment \u2014 commonly called \u2018Turing\u2019s test\u2019 \u2014 in which the unprejudiced comparison could be made \u2026 Though the test has flaws, it is the best that has been proposed to date.\u201d\nMinsky, in the preface to his 1967 collection [minsky1968], reiterates the definition of AI as \u201cthe science of making machines do things that would require intelligence if done by men\u201d (p. v).\nAround the same time, Minsky collaborated with Stanley Kubrick and Arthur Clarke on their 1968 screenplay, also written as a novel, 2001: A Space Odyssey [clarke1968], which featured a futuristic computer named HAL:\nWhether HAL could actually think was a question which had been settled by the British mathematician Alan Turing back in the 1940s. Turing had pointed out that, if one could carry out a prolonged conversation with a machine \u2014 whether by typewriter or microphone was immaterial \u2014 without being able to distinguish between its replies and those that a man might give, then the machine was thinking, by any sensible definition of the word. 
HAL could pass the Turing test with ease.\nThe \u201cTuring definition of thinking\u201d was to become legendary.\nMcCarthy and Shannon\u2019s memorizing machine objection was studied in depth by Stuart Shieber, who elaborated on its assumptions and concluded that it is invalid [shieber2014].\nBut McCarthy\u2019s concept of memorizing may have been more elastic, as his later comment on Deep Blue\u2019s defeat of Garry Kasparov suggests [mccarthy1997].\nHe expressed disappointment that it was mostly an achievement of computational power rather than thinking, and gave a clear argument why he thought so. Essentially, computer chess advanced by replacing heuristic techniques, which relied on the expertise of human players to prune the search space of possible moves, with brute force computing. \u201c[I]t is a measure of our limited understanding of the principles of artificial intelligence,\u201d McCarthy wrote, \u201cthat this level of play requires many millions of times as much computing as a human chess player does.\u201d That may be, but Turing had already hinted that the problem was \u201clargely a quantitative matter\u201d in his letter of c. June 1951 (Box 3).\nTen years after Deep Blue vs. Kasparov, McCarthy referred to Turing\u2019s 1947 lecture (op. cit.)\nas \u201cthe first scientific discussion of human level machine intelligence,\u201d and to Turing\u2019s 1950 paper as \u201camplifying\u201d that discussion into a \u201cgoal\u201d [mccarthy2007, p. 1174].\nIn 1992, Minsky co-authored a work of fiction, The Turing Option (Warner, New York), in which Turing\u2019s test is featured in the preface. In 1995, Minsky took a stand against Loebner\u2019s Weizenbaum experiments, pleading to \u201crevoke his stupid prize, save himself some money, and spare us the horror of this obnoxious and unproductive annual publicity campaign.\u201d38 \u2018Annual Minsky Loebner Prize Revocation Prize 1995 Announcement,\u2019 2 March 1995. 
Available at: https://groups.google.com/g/comp.ai/c/dZtU8vDD_bk/m/QYaYB18qAToJ. Accessed 25 Nov 2023.\nIn 2013, when asked about the Turing test in a taped interview, Minsky said: \u201cThe Turing test is a joke, sort of, about saying \u2018A machine would be intelligent if it does things that an observer would say must be being done by a human\u2019 \u2026 it was suggested by Alan Turing as one way to evaluate a machine but he had never intended it as being the way to decide whether a machine was really intelligent.\u201d39 \u2018Marvin Minsky on AI: the Turing test is a joke!\u2019, from 23\u2019 35\u201d to 24\u2019 45\u201d. Available at https://www.singularityweblog.com/marvin-minsky/. Accessed Jun. 10, 2024.\nThis materially connects McCarthy et al.\u2019s definition of \u201cthe AI problem\u201d with Turing\u2019s test, if material evidence were still needed.\nOverall, it seems that all of these AI pioneers understood and were inspired by Turing\u2019s test at the level of conceptual foundations. Even if some of them also used the term \u201cexperiment,\u201d none of them took it literally as a practical experiment, which would indeed imply an astonishing lack of imagination on their part.\nThe Turing test helped move the burgeoning field of AI away from unproductive debates about the meaning of words, allowing Minsky, for example, to write in 1967 [minsky1967]: \u201cTuring discusses some of these issues in his brilliant article, \u2018Computing Machines and Intelligence\u2019 [sic], and I will not recapitulate his arguments \u2026 They amount, in my view, to a satisfactory refutation of many such objections\u201d (p. 
107).\nThe value of Turing\u2019s test is that it has long been and still is a unifying \u201cdefinition,\u201d a \u201ccriterion,\u201d a \u201cgoal\u201d for, in the words of McCarthy et al., the science and engineering of \u201cmaking a machine behave in ways that would be called intelligent if a human were so behaving.\u201d\nFor better or worse, every time AI succeeds in automating a new task that was once reserved for humans because it requires intelligence, \u201cthe Turing definition of thinking\u201d conquers new territory, and the significance of Turing\u2019s early message to his contemporaries becomes clearer."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "CONCLUSION",
39
+ "text": "This paper presented a new perspective on Turing\u2019s test.\nNew light has been shed on Turing\u2019s concept of imitation, suggesting that it does not give a license for deception in AI. Rather, imitation was for Turing a mathematical concept, largely in continuity with his 1936 paper, although he later generalized how it could be achieved. It was also suggested that Turing\u2019s presentation of the various versions of his test fits what Mach called \u201cthe basic method of thought experiments\u201d in the history of science. The historical conditions of Turing\u2019s proposal were reconstructed, showing that the basic idea of a gender test had been raised originally by Jefferson, and Turing\u2019s imitation game comes out of that context. Conversational performance was also a concession to his opponents, and overall Turing\u2019s test was a response to critics. But Turing also took the opportunity to promote his positive views. The known primary and secondary sources indicate that he became actively engaged in machine intelligence propaganda, and it was also in this spirit that he proposed his test, hoping to influence contemporaries and future generations of scientists. The question of the value of Turing\u2019s test and its relation to early AI was revisited, arguing that \u201cthe Turing definition of thinking\u201d provided McCarthy, Minsky, and others with a definition of the AI problem at the level of conceptual foundations that arguably still drives AI research today.\nBut whatever its utility, we can now appreciate that there is more to the imitation game. With its structural elements neatly designed as lighthearted concessions to opponents, and at the same time able to demonstrate the power of digital computing as early as 1950, Turing\u2019s test has secured its place as one of the most beautiful thought experiments in the history of science."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "ACKNOWLEDGMENTS",
45
+ "text": "The author thanks Andrew Hodges, Jim Miles, and H. V. Jagadish for their valuable comments on an earlier version of this article; Mark Priestley for the gift of the Turing letters to Worsley; and Fabio Cozman and Murray Shanahan for their support. The author is solely responsible for the accuracy of this work.\nThe author thanks the Center for Artificial Intelligence (C4AI-USP) and the support from the S\u00e3o Paulo Research Foundation (FAPESP grants nos. 2019/07665-4, 2019/21489-4, and 2022/16793-9) and from the IBM Corporation. This article is a result of the project \u201cThe Future of Artificial Intelligence: The Logical Structure of Alan Turing\u2019s Argument\u201d.\nBernardo Gon\u00e7alves is currently a researcher at the Center for Artificial Intelligence (C4AI), University of S\u00e3o Paulo, Brazil, and a Visiting Fellow at King\u2019s College, University of Cambridge, UK. He works on Alan Turing, AI and computer science. He received Ph.D. degrees in Philosophy from the University of S\u00e3o Paulo and in Computational Modeling from the National Laboratory for Scientific Computing, Brazil."
46
+ }
47
+ ],
48
+ "appendix": [],
49
+ "tables": {},
50
+ "image_paths": {
51
+ "1": {
52
+ "figure_path": "2401.00009v3_figure_1.png",
53
+ "caption": "Figure 1: Alan Turing (1912-1954). Photographs of Alan Turing, copyright The Provost and Scholars of King\u2019s College Cambridge 2023. Archives Centre, King\u2019s College, Cambridge, AMT/K/7/12. Reproduced with permission.",
54
+ "url": "http://arxiv.org/html/2401.00009v3/extracted/5747886/turing-kings-min.jpg"
55
+ },
56
+ "2": {
57
+ "figure_path": "2401.00009v3_figure_2.png",
58
+ "caption": "Figure 2: Console of Ferranti Mark I and a group with Turing\u2019s secretary at the Computing Machine Laboratory, Sylvia Robinson (n\u00e9e Wagstaff), pretending to play chess with the machine, c. 1955. Courtesy of The University of Manchester.",
59
+ "url": "http://arxiv.org/html/2401.00009v3/extracted/5747886/ferranti-chess-min.jpg"
60
+ }
61
+ },
62
+ "validation": true,
63
+ "references": [],
64
+ "url": "http://arxiv.org/html/2401.00009v3"
65
+ }
20240722/2401.00280v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2401.02413v2.json ADDED
@@ -0,0 +1,636 @@
1
+ {
2
+ "title": "Simulation-Based Inference with Quantile Regression",
3
+ "abstract": "We present Neural Quantile Estimation (NQE), a novel Simulation-Based Inference (SBI) method based on conditional quantile regression.\nNQE autoregressively learns individual one dimensional quantiles for each posterior dimension, conditioned on the data and previous posterior dimensions.\nPosterior samples are obtained by interpolating the predicted quantiles using monotonic cubic Hermite splines, with specific treatment for the tail behavior and multi-modal distributions.\nWe introduce an alternative definition for the Bayesian credible region using the local Cumulative Density Function (CDF), offering substantially faster evaluation than the traditional Highest Posterior Density Region (HPDR).\nIn case of a limited simulation budget and/or known model misspecification, a post-processing calibration step can be integrated into NQE to ensure the unbiasedness of the posterior estimation with negligible additional computational cost.\nWe demonstrate that NQE achieves state-of-the-art performance on a variety of benchmark problems.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Given the likelihood p(x|\u03b8) of a stochastic forward model and observation data x, Bayes\u2019 theorem postulates that the underlying model parameters \u03b8 follow the posterior distribution p(\u03b8|x) \u221d p(x|\u03b8) p(\u03b8), where p(\u03b8) represents the prior.\nIn many applications, however, we are restricted to simulating the data x given \u03b8, while the precise closed form of p(x|\u03b8) remains unavailable.\nSimulation-Based Inference (SBI), also known as Likelihood-Free Inference (LFI) or Implicit Likelihood Inference (ILI), conducts Bayesian inference directly from these simulations, circumventing the need to explicitly formulate a tractable likelihood function.\nEarly research in this field primarily consists of Approximate Bayesian Computation (ABC) variants, which employ a distance metric in the data space and approximate true posterior samples using realizations whose simulated data are \u201cclose enough\u201d to the observation (e.g. Tavar\u00e9 et al., 1997; Pritchard et al., 1999; Beaumont et al., 2002, 2009). 
However, these methods are prone to the curse of dimensionality and prove inadequate for higher-dimensional applications.\nIn recent years, a series of neural-network-based SBI methods have been proposed, which can be broadly categorized into three groups.\nNeural Likelihood Estimation (NLE, Papamakarios et al., 2019b; Lueckmann et al., 2019) fits the likelihood using a neural density estimator, typically based on Normalizing Flows.\nThe posterior is then evaluated by multiplying the likelihood with the prior, and posterior samples can be drawn using Markov Chain Monte Carlo (MCMC).\nNeural Posterior Estimation (NPE, Papamakarios & Murray, 2016; Lueckmann et al., 2017; Greenberg et al., 2019) uses neural density estimators to approximate the posterior, thereby enabling direct posterior sample draws without running MCMC.\nNeural Ratio Estimation (NRE, Hermans et al., 2020) employs classifiers to estimate density ratios, commonly selected as the likelihood-to-evidence ratio.\nIndeed, Durkan et al. (2020) demonstrate that NRE can be unified with specific types of NPE under a general contrastive learning framework.\nEach method has its sequential counterpart, namely SNLE, SNPE, and SNRE, respectively.\nWhereas standard NLE, NPE, and NRE allocate new simulations based on the prior, allowing them to be applied to any observation data (i.e., they are amortized), their sequential counterparts allocate new simulations based on the inference results from previous iterations and must be trained specifically for each observation.\nThese neural-network-based methods typically surpass traditional ABC methods in terms of inference accuracy under given simulation budgets.\nSee Cranmer et al. (2020) for a review and Lueckmann et al. 
(2021) for a comprehensive benchmark of prevalent SBI methods.\nQuantile Regression (QR), as introduced by Koenker & Bassett Jr (1978), estimates the conditional quantiles of the response variable over varying predictor variables.\nMany Machine Learning (ML) algorithms can be extended to quantile regression by simply transitioning to a weighted loss (e.g. Meinshausen & Ridgeway, 2006; Rodrigues & Pereira, 2020; Tang et al., 2022).\nIn this paper, we introduce Neural Quantile Estimation (NQE), a new family of SBI methods supplementing the existing NPE, NRE and NLE approaches.\nNQE successively estimates the one dimensional quantiles of each dimension of \u03b8, conditioned on the data x and the previous dimensions of \u03b8.\nWe interpolate the discrete quantiles with monotonic cubic Hermite splines, adopting specific treatments to account for the tail behavior and potential multimodality of the distribution.\nPosterior samples can then be drawn by successively applying inverse transform sampling for each dimension of \u03b8.\nWe also develop a post-processing calibration strategy, leading to guaranteed unbiased posterior estimation as long as one provides enough simulations to accurately calculate the empirical coverage.\nTo the best of our knowledge, this constitutes the first demonstration that QR-based SBI methods can attain state-of-the-art performance, matching or surpassing the benchmarks set by existing methods.\nThe structure of this paper is as follows:\nIn Section 2, we introduce the methodology of NQE, along with an alternative definition for Bayesian credible regions and a post-processing calibration scheme to ensure the unbiasedness of the inference results.\nIn Section 3, we demonstrate that NQE attains state-of-the-art performance across a variety of benchmark problems, together with a realistic application to high dimensional cosmology 
data.\nSubsequently, in Section 4, we discuss related works in the literature and potential avenues for future research.\nThe results in this paper can be reproduced with the publicly available NQE package based on pytorch (Paszke et al., 2019).\n1 https://github.com/h3jia/nqe"
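To make the quantile-regression building block concrete, the sketch below shows the weighted L1 ("pinball") loss whose minimizer, over a constant prediction, is the tau-th quantile of the samples. This is a self-contained NumPy illustration of the general idea, not code from the nqe package.

```python
import numpy as np

def pinball_loss(theta, q, tau):
    """Weighted L1 ("pinball") loss; over a constant prediction q, its
    minimizer is the tau-th quantile of the samples theta."""
    diff = theta - q
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Sanity check: scanning q over a grid, the loss is minimized near the
# empirical 0.9-quantile of a standard Gaussian sample.
rng = np.random.default_rng(0)
theta = rng.normal(size=10_000)
grid = np.linspace(-3.0, 3.0, 601)
best_q = grid[np.argmin([pinball_loss(theta, q, 0.9) for q in grid])]
print(best_q, np.quantile(theta, 0.9))  # both close to 1.28
```

In NQE the analogous loss is minimized by a neural network that predicts quantiles conditioned on the data x, jointly over many quantile levels and, for multidimensional parameters, autoregressively over the parameter dimensions, rather than over a single constant as in this sketch.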
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Methodology",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Quantile Estimation And Interpolation",
21
+ "text": "[Figure 1] The cornerstone of most contemporary SBI methods is some form of conditional density estimator, which is used to approximate the likelihood, the posterior, or the likelihood-to-evidence ratio. Essentially, every generative model can function as a density estimator. While Generative Adversarial Networks (Goodfellow et al., 2020) and more recently Diffusion Models (Dhariwal & Nichol, 2021) have shown remarkable success in generating high-quality images and videos, the SBI realm is primarily governed by Normalizing Flows (NF, e.g. Rezende & Mohamed, 2015; Papamakarios et al., 2019a), which offer superior inductive bias for the probability distributions with up to dozens of dimensions frequently encountered in SBI tasks. Our proposed NQE method can also be viewed as a density estimator, as it reconstructs the posterior distribution autoregressively from its 1-dim conditional quantiles.\n\n[Figure 2] In a typical SBI setup, one first samples the model parameters \u03b8 from the prior p(\u03b8), and then runs the forward simulations to generate the corresponding observations x.\nFor simplicity, let us start with the scenario of 1-dim \u03b8.\nGiven a dataset of (\u03b8, x) pairs and a neural network q parameterized by \u03c6, one can estimate the median (mean) of \u03b8 conditioned on x by minimizing the L1 (L2) loss 2 Not to be confused with the quantile losses defined below. 
between and .\nAs a straightforward generalization, one can estimate the -th quantile of conditioned on by minimizing the following weighted loss,\nHere one can introduce an additional -dependent weight ,\nsimilar to the fact that one can use simulations allocated from an arbitrary prior to train SNLE.\nA discussion regarding the choice of can be found in Appendix B ###reference_###.\nTo reconstruct the full posterior, we require the quantiles at multiple \u2019s, for which we aggregate the individual loss functions,\nWithout loss of generality, we assume the prior of is zero outside some interval .\nIf the prior is positive everywhere on , one can choose such that the prior mass outside it is negligible.\nFor example, one can set to for a standard Gaussian prior; in case of heavy-tailed priors, one can also use the (inverse) prior CDF to map the prior support to .\nWe then equally divide the interval into bins, and estimate the corresponding quantiles with .\nIn this work, we choose to be a Multi-Layer Perceptron (MLP) with outputs followed by a softmax layer, such that the -th quantile of is parameterized as , and we add shortcut connections (the input layer of MLP is concatenated to every hidden layer) to facilitate more efficient information propagation throughout the network.\nMoreover, an optional embedding network (e.g. Jiang et al., 2017 ###reference_b19###; Radev et al., 2020 ###reference_b42###) can be added before the MLP to more efficiently handle high dimensional data (e.g. 
the cosmology example in Section 3.3 ###reference_###).\nFor multidimensional , we successively apply the aforementioned method to each dimension , conditioned on not only the data but also all the previous dimensions .\nIn other words, in Equations 1 ###reference_### and 2 ###reference_### is replaced by , since is effectively treated as observation data for the inference of .\nAn illustration of the NQE architecture can be found in the top panel of Figure 1 ###reference_###.\nSimilar to Flow Matching Posterior Estimation (FMPE, Dax et al., 2023 ###reference_b5###), NQE has an unconstrained architecture which does not require specialized NFs.\n\n###figure_3### The estimated conditional quantiles must be interpolated to enable sampling from them. We achieve this by interpolating the Cumulative Distribution Function (CDF) using Piecewise Cubic Hermite Interpolating Polynomial with Exponential Tails (PCHIP-ET), a modified version of the PCHIP scheme (Fritsch & Carlson, 1980 ###reference_b10###), which preserves monotonicity of input data and continuity of first derivatives, ensuring a well-defined Probability Distribution Function (PDF). As depicted in the 1st row of Figure 2 ###reference_###, the original PCHIP algorithm presents discernible interpolation artifacts, primarily because polynomials cannot decay rapidly enough to align with the true PDF in the tail regime. To address this issue, we substitute the polynomials with Gaussians within bins identified as tails. A more detailed description of our PCHIP-ET scheme is available in Appendix A ###reference_###. We observe that a satisfactory reconstruction of unimodal distributions can be achieved with quantiles, while incorporating additional bins may facilitate better convergence in multimodal cases. 
Samples can then be drawn using inverse transform sampling with the interpolated CDF.\nNQE requires\none neural network for each posterior dimension, which can be trained independently on multiple devices to reduce the training wall time.\nIn principle, one can also train NQE by maximizing the joint PDF, similar to the training of NPE.\nHowever, such approach will be less efficient than minimizing in Equation 2 ###reference_###, since one needs to compute the PCHIP-ET interpolation for the PDF, while only depends on the individual quantiles.\nNQE can also be used to estimate distributions with no observation to condition on.\nIn this case, we do not need neural networks for the first dimension , which can be directly interpolated from the empirical quantiles.\nIn Figure 3 ###reference_###, we demonstrate that NQE can successfully model two complicated distributions from Grathwohl et al. (2018 ###reference_b13###)."
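The weighted loss above is the standard quantile ("pinball") loss from quantile regression. Below is a minimal NumPy sketch; the grid-search check and variable names are illustrative only, not the NQE implementation, which trains an MLP with this loss in pytorch.

```python
import numpy as np

def pinball_loss(theta, pred, tau):
    """Quantile ("pinball") loss: its minimizer over `pred` is the tau-th
    quantile of theta; tau = 0.5 reduces to (half) the L1 loss."""
    diff = theta - pred
    return float(np.mean(np.maximum(tau * diff, (tau - 1.0) * diff)))

# Sanity check: the minimizer over a grid approaches the empirical quantile.
rng = np.random.default_rng(0)
theta = rng.normal(size=100_000)           # stand-in for posterior samples
grid = np.linspace(-3.0, 3.0, 601)
best = grid[int(np.argmin([pinball_loss(theta, q, 0.9) for q in grid]))]
# `best` should be close to the N(0, 1) 0.9-quantile (about 1.28)
```

In NQE the same loss is evaluated at many quantile levels at once, on network outputs conditioned on the observation (and, autoregressively, on the previous parameter dimensions).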
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Regularization",
27
+ "text": "Numerical derivatives are inherently noisier than integrals, and similarly for the PDF compared with the CDF.\nTo mitigate this issue, we propose the following regularization scheme to improve the smoothness of NQE PDF predictions.\nIntuitively, a \u201csmooth distribution\u201d means the averaged PDF within every 1-dim bin for quantile prediction, , should be close to the interpolated value between its neighboring bins,\nwith and , which leads to the following loss for regularization,\nwhere is the Heaviside function.\nWith Equation 4 ###reference_###, we only penalize cases where , since we will have between the peaks in multimodal problems, which is therefore a possible feature in the ground truth solution that should not be penalized.\nFor similar reasons, in Equation 3 ###reference_### is set to be larger than the naive average of and , so that the regularization is only activated when necessary.\nThe total loss is then defined as\nNote that a linear rescaling of changes while remains invariant, which motivates our choice of above.\nWe find 0.1 to be a generally reasonable choice for , although one may reduce for examples with e.g. sharp spikes or edges in the posterior distribution, if one has such prior knowledge of the typical shape of the posterior."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Empirical Coverage",
33
+ "text": "Analogous to frequentist confidence regions, Bayesian statistics utilizes credible regions to define the reasonable space for model parameters given .\nThe most popular choice of Bayesian credible region, namely the highest posterior density region (HPDR, e.g. McElreath, 2020 ###reference_b31###), encloses the samples with the highest PDF for the credible region,\nachieving the smallest volume for any given credibility level.\nTo test whether a posterior estimator is biased, one checks the empirical coverage, namely the probability of the true model parameters to fall into the credible region over the simulation data.\nIf such probability is larger (smaller) than , the posterior estimator is over-conservative (biased) 333Note that being well calibrated is a necessary yet not sufficient condition for an estimator to predict the Bayesian optimal posterior, as exemplified by the extreme case where the posterior estimator always outputs the prior..\nTo compute the empirical coverage in practice, one needs to pick pairs of from the simulation data, and generate samples for each of them to get the rank of PDF, leading to neural network calls for NPE and NQE 444We ignore the factor for NQE as we define one network call as one evaluation of the whole estimator..\nFor NLE and NRE, such cost is further multiplied by , the number of posterior evaluations per effective MCMC sample\n555For one may circumvent MCMC using Importance Sampling, which however becomes inefficient as the dimensionality of grows..\nTypically one needs to set both and to so as to get a reliable estimate of the empirical coverage, leading to a moderate computational cost especially for NLE and NRE methods.\nA unique characteristics of NQE is that it predicts the distribution quantiles, which explicitly contains the information regarding the global properties of the posterior and enables us to propose the following quantile mapping credible region (QMCR) 666Not to be confused with the quantile 
mapping technique used to e.g. correct the bias for simulated climate data (Maraun, 2013 ###reference_b30###)., a generalization of the 1-dim equal-tailed credible interval (e.g. McElreath, 2020 ###reference_b31###) for multidimensional distributions.\nTalts et al. (2018 ###reference_b49###) shows the rank of any 1-dim statistic can be used to define the Bayesian credible region, with HPDR a special case that chooses such statistic as the posterior PDF.\nWith the conditional quantiles predicted by NQE, we introduce an auxiliary distribution , which we typically set to a multivariate standard Gaussian.\nWe then define a bijective mapping that establishes a one-to-one correspondence between and with the same 1-dim conditional CDF, and , across all the dimensions 777If is set to a multivariate standard Gaussian, there is no correlation between the different dimensions so we indeed have ..\nThe defining statistic of the credible region is chosen as with , whose rank can be computed analytically using the distribution since is Gaussian.\nIf the interpolation indicates that includes multiple modes, we use the local CDF within the mode containing to define the mapping , such that the low PDF regions between the modes are excluded from the credible regions.\nA comparison of HPDR and QMCR for a toy distribution can be found in the 2nd row of Figure 2 ###reference_###, together with the mapping illustrated in the 4th row.\nHeuristically, the limit of QMCR encloses the (conditional) median across all the dimensions for unimodal distributions, as opposed to the global maximum of the PDF for HPDR.\nTherefore, unlike HPDR, QMCR is invariant under any 1-dim monotonic transforms of , as long as such reparameterization does not give rise to a different identification of multimodality during the CDF interpolation.\nAs shown with the examples below, QMCR typically leads to similar conclusions regarding the (un)biasedness of the posterior estimators as HPDR, but only requires network 
calls to evaluate as one no longer needs to generate samples for each observation.\nSuch speed-up allows us to perform posterior calibration in the next subsection with negligible computational cost.\nFor simplicity, in the rest of this paper we will use the term coverage (coverage) for empirical coverage computed with HPDR (QMCR).\nIn addition, we note that due to its autoregressive structure, one can compute the coverage of NQE for the leading dimensions without additional training, which is useful if the unbiasedness of certain dimensions takes precedence over others."
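For concreteness, here is a toy sampling-based HPDR-coverage check for an exactly calibrated 1-dim setup (both the "true" posterior and the estimate are N(0, 1)); the nested loop over simulations and posterior samples is exactly the cost that the analytic rank evaluation of QMCR avoids.

```python
import numpy as np

rng = np.random.default_rng(1)

def hpdr_coverage(alpha, n_sims=2000, n_samples=500):
    """Empirical HPDR coverage for a toy, exactly calibrated 1-dim setup:
    the true theta ~ N(0, 1) and the estimated posterior is also N(0, 1).
    theta lies inside the alpha-HPDR iff the fraction of posterior samples
    with higher density (i.e. smaller |theta|) is below alpha."""
    hits = 0
    for _ in range(n_sims):
        theta_true = rng.normal()
        samples = rng.normal(size=n_samples)   # draws from the estimated posterior
        rank = np.mean(np.abs(samples) < abs(theta_true))
        hits += rank < alpha
    return hits / n_sims

cov80 = hpdr_coverage(0.8)   # close to 0.8 for a calibrated estimator
```

For a well-calibrated estimator the empirical coverage matches the credibility level up to binomial noise; a biased estimator would return a value below it.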
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Posterior Calibration",
39
+ "text": "Hermans et al. (2021 ###reference_b17###) demonstrates that all existing SBI methods may produce biased results when the simulation budget is limited.\nIntuitively, a biased posterior is too narrow to enclose the true model parameters, so we propose the following calibration strategy as illustrated in the bottom panel of Figure 1 ###reference_###.\nTo make a distribution broader, we fix the medians of all 1-dim conditional posteriors and increase the distance between the medians and all other quantiles by a global broadening factor.\nSimilar to the coverage evaluation, we utilize the local quantiles within modes for multimodal distributions.\nWe remove the quantiles that escape from the boundary of the prior and/or the boundary between different modes, and redistribute the corresponding posterior mass to the bins still within the boundary based on the bin mass, so that the local posterior shape is preserved.\nThe effect of such broadening transform is shown in the 3rd row of Figure 2 ###reference_###.\nWe then solve for the minimum broadening factor such that the calibrated posterior is unbiased across a series of credibility levels, which we set to throughout this paper.\nNote that ideally, a good estimator should have empirical coverage that matches the credibility level.\nHowever, if this is not possible due to limited training data, over-conservative inference should be preferred over biased results.\nThe broadening factor can also be smaller than 1, in case the original posterior is already too conservative.\nWhile one has the freedom to choose the definition of the coverage for the calibration process, the broadened posterior is only guaranteed to be unbiased at the calibrated credibility levels under the same coverage definition.\nWhile similar calibration tricks may also be developed for other SBI methods, it will likely be considerably more expensive than NQE in practice, for the following reasons.\nFirstly, the evaluation of coverage is 
exclusive to NQE, which is faster by at least a factor of than traditional coverage (with an additional factor of if MCMC is required for sampling).\nMore importantly, we have developed a broadening strategy for NQE that preserves not only the local correlation structure of the posterior but also the ability of fast sampling without MCMC.\nWe are not aware of any similar techniques for existing SBI methods, which estimate the local PDF with no explicit global information of the distribution.\nFor example, while one can broaden an NF-based probability distribution by lowering its temperature, i.e. replacing with , , this will necessitate MCMC sampling for NPE (NLE and NRE need MCMC even without broadening).\nIn addition, with the analytical rank evaluation of coverage, the NQE network outputs can also be reused between different iterations, thus reducing the total network calls by another factor of .\nWe compare the computational cost of broadening calibration for different methods in Table 1 ###reference_###.\nSuch post-processing calibration relies on a reliable calculation of the coverage.\nThe (pointwise) error of empirical coverage due to stochastic sampling can be estimated using binomial distribution (S\u00e4ilynoja et al., 2022 ###reference_b45###); with , the maximum error is smaller than , regardless of the dimensionality of and 888See Appendix E ###reference_### for more discussion on this..\nIn other words, for any inference task, with the broadening calibration, one only needs simulations in the validation dataset to ensure the unbiasedness of the posterior, if there is no model misspecification.\nNevertheless, the number of network calls required for broadening is different across the various algorithms as compared in Table 1 ###reference_###.\nUsing NQE and coverage, one only needs calls of the NQE network for the broadening, which is typically negligible compared with the cost for running the simulations and training the neural estimators.\nIn 
addition, similar calibration tricks can be used to mitigate partially known model misspecification, as exemplified in Section 3.3 ###reference_### below.\nNote that we use the same validation dataset during the training and broadening calibration of NQE, as the one-parameter broadening transform is unlikely to overfit.\nWe summarize the proposed NQE method in Algorithm 1 ###reference_###.\nIn this paper, we focus on the simple broadening calibration, which is guaranteed to converge with validation simulations, regardless of and .\nWith more simulations, it may be beneficial to employ a more sophisticated calibration scheme to remove the bias without over-broadening the predicted posterior.\nWe plan to conduct a comprehensive survey of such calibration schemes in a follow-up paper.\nOne example is the quantile shifting calibration demonstrated with the cosmology example in Section 3.3 ###reference_###: for each quantile of predicted by NN, we check if we indeed have probability that the true is smaller than the predicted quantile (on the validation dataset) 999For multi-modal distributions, we use the local quantile within the mode that contains the true , similar to the definition of the coverage in Section 2.3 ###reference_###..\nIf not, we calculate the shift required for the quantile such that this statement is true.\nNote that we apply a shift of quantile that is different for each and , but the same for all and .\nIn other words, we effectively calculate the bias averaged over the prior, and shift the predicted quantiles accordingly to remove the bias.\nStrictly speaking, such quantile shifting scheme calibrates the coverage of all the individual 1-dim conditional posteriors, but not necessarily the coverage of the multi-dimensional joint posterior.\nIn addition, the number of simulations required for this scheme depends on the dimensionality of , in contrast to the global broadening scheme which always converges with validation simulations.\nWe leave a more 
detailed investigation of such methods for future research; nevertheless, for the cosmology example in Section 3.3 ###reference_###, the posterior calibrated with quantile shifting has an almost diagonal empirical coverage and is much narrower than the posterior calibrated with simple global broadening, when there is a significant bias in the uncalibrated posterior due to model misspecification.\n\n###figure_4### \n###figure_5### \n###figure_6###"
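The global broadening transform can be sketched as follows for the unimodal case; the full scheme additionally uses local quantiles within modes and redistributes the mass of quantiles that escape the prior support, whereas this illustrative version simply clips.

```python
import numpy as np

def broaden_quantiles(quantiles, factor, lo, hi):
    """Global broadening (sketch for the unimodal case): keep the median
    fixed, scale every quantile's distance to the median by `factor`, then
    clip to the prior support [lo, hi]. The full scheme also redistributes
    the mass of quantiles that escape the support; here we simply clip."""
    q = np.asarray(quantiles, dtype=float)
    median = q[len(q) // 2]        # assumes an odd, symmetric quantile grid
    return np.clip(median + factor * (q - median), lo, hi)

q = np.array([0.30, 0.40, 0.50, 0.60, 0.70])   # toy quantile grid
wide = broaden_quantiles(q, 1.5, 0.0, 1.0)
# median stays at 0.50; the tails move outwards to [0.20, 0.35, 0.50, 0.65, 0.80]
```

A factor below 1 contracts the quantiles instead, for the case where the original posterior is already too conservative.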
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Numerical Experiments",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "SBI Benchmark Problems",
51
+ "text": "We assess the performance of NQE on six benchmark problems,\nwith detailed specifications provided in Appendix C ###reference_###.\nAll results for methods other than NQE are adopted from Lueckmann et al. (2021 ###reference_b29###).\nAs discussed in Appendix F ###reference_###, we conduct a mild search of hyperparameters for NQE, but in the end use the same set of hyperparameters across all the benchmark problems,\nalthough it is possible to further improve the performance by tuning the hyperparameters based on specific posterior structures.\nFor example, increasing the number of predicted quantiles will be beneficial for multimodal problems with large simulation budgets.\nTo evaluate the performance of SBI algorithms, we employ Classifier-based 2-Sample Testing (C2ST) as implemented in the sbibm package (Lopez-Paz & Oquab, 2016 ###reference_b24###; Lueckmann et al., 2021 ###reference_b29###). Lower C2ST values denote superior results, with 0.5 signifying a perfect posterior and 1.0 indicating complete failure.\nWe plot the C2ST results for the benchmark problems in Figure 4 ###reference_###, showing that (uncalibrated) NQE achieves state-of-the-art performance across all the examples.\nIn Figure 5 ###reference_###, we compare the NQE coverage before and after broadening: with the broadening calibration, NQE consistently predicts unbiased posterior for all the problems.\nWhile Figure 5 ###reference_### utilizes simulations to enhance the smoothness of the coverage curves, a convergence test in Appendix E ###reference_### shows that simulations are sufficient for most cases.\nThe exact values of the broadening factor can be found in Figure 15 ###reference_###.\nIn Figure 16 ###reference_###, we find that the C2ST is generally similar or slightly worse after the global broadening calibration: this is likely due to the nature of the C2ST metric, since a conservative posterior will be similarly penalized as a biased posterior, although the former should be 
preferred over the latter for most scientific applications (e.g. Hermans et al., 2021 ###reference_b17###; Delaunoy et al., 2022 ###reference_b6###)."
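As a reference point, here is a dependency-free C2ST sketch that uses a 1-nearest-neighbour classifier in place of the small neural network trained by sbibm (so absolute values will differ from the paper's benchmarks): two indistinguishable sample sets score near 0.5, clearly separated ones near 1.0.

```python
import numpy as np

rng = np.random.default_rng(2)

def c2st_1nn(p_samples, q_samples):
    """Classifier 2-Sample Test with a 1-nearest-neighbour classifier in
    place of sbibm's small neural network. 0.5 means the two sample sets
    are indistinguishable (perfect posterior), 1.0 means complete failure."""
    x = np.concatenate([p_samples, q_samples])
    y = np.concatenate([np.zeros(len(p_samples)), np.ones(len(q_samples))])
    idx = rng.permutation(len(x))
    x, y = x[idx], y[idx]
    half = len(x) // 2
    x_tr, y_tr, x_te, y_te = x[:half], y[:half], x[half:], y[half:]
    dist = np.abs(x_te[:, None] - x_tr[None, :])   # 1-dim pairwise distances
    pred = y_tr[np.argmin(dist, axis=1)]           # label of nearest neighbour
    return float(np.mean(pred == y_te))

same = c2st_1nn(rng.normal(size=1000), rng.normal(size=1000))
apart = c2st_1nn(rng.normal(size=1000), rng.normal(size=1000) + 10.0)
```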
52
+ },
53
+ {
54
+ "section_id": "3.2",
55
+ "parent_section_id": "3",
56
+ "section_name": "Order of Model Parameters",
57
+ "text": "Due to its autoregressive structure, NQE\u2019s performance may be affected by the order of dimensions.\nWhile each 1-dim conditional distribution is estimated independently, the 1-dim marginal posterior does depend on the estimation for all the previous that are correlated with , therefore one may expect the marginals for the latter dimensions to be less accurate than the former dimensions as the error will accumulate.\nTo study this effect, we compute all the 1-dim marginal C2ST\u2019s for the benchmark problems and plot them with respect to the dimension indices in Figure 6 ###reference_###.\nContrary to the conjecture above, we find no clear dependence between the marginal C2ST and the dimension index.\nNevertheless, this may be due to the relative low posterior dimensionality of the benchmark problems, such that the accumulation of per-dimension error has not become the dominant contribution.\nWe still recommend ordering the dimensions based on the relative importance of the parameters, especially for applications to higher () dimensional posteriors.\nWe note that similar to the TMNRE approach (Miller et al., 2021 ###reference_b33###), one may estimate the individual marginal posteriors with NQE, if the high dimensionality makes it impractical to accurately model the joint posterior."
58
+ },
59
+ {
60
+ "section_id": "3.3",
61
+ "parent_section_id": "3",
62
+ "section_name": "Application to Cosmology",
63
+ "text": "###figure_7### \n###figure_8### The cosmological large scale structures contain ample information regarding the origin and future of our universe, which can be inferred from the locations and/or shapes of the galaxies (e.g. Dodelson & Schmidt, 2020 ###reference_b8###), however the optimal strategy to extract the information remains an unsolved problem.\nWhile at larger scales the power spectra carry most of the information and can be well modeled with a Gaussian likelihood, at smaller scales the highly nonlinear evolution render SBI methods necessary for the optimal inference.\nUnfortunately, the small-scale baryonic physics is still poorly understood, leading to potential model misspecification which can bias the SBI inference (e.g. Modi et al., 2023 ###reference_b34###).\nAs we do not know the exact forward model for our Universe, the best we can do is to make sure our SBI estimator is unbiased on all the well-motivated baryonic physics models, which requires a massive amount of expensive cosmological hydrodynamic simulations (e.g. Villaescusa-Navarro et al., 2021 ###reference_b53###).\nHowever, with NQE one can first train it using cheap (therefore less realistic) simulations and then calibrate it using all available high fidelity (therefore much more expensive) simulations to make sure the uncertainties of baryonic physics have been properly accounted for 101010Here we assume the model misspecification is at least partially known, in the sense that our selection of baryonic physics models \u201cincludes\u201d the correct model for our Universe. 
The post-processing calibration cannot mitigate completely unknown model misspecification.\nNote that one only needs simulations for each baryonic model to calibrate NQE, which is far fewer than the amount required to directly train field-level SBI with them.\nSuch an approach is demonstrated in Figures 7 ###reference_### and 8 ###reference_###, where we show that the bias due to model misspecification can be mitigated by the calibration of NQE.\nAs the model misspecification introduces a large systematic bias, we find that the global broadening calibration makes the posterior over-conservative, while the quantile shifting scheme eliminates the bias without over-broadening the posterior, highlighting the benefits of such more advanced calibration methods, which will be examined more thoroughly in a follow-up paper.\nMore details regarding this example can be found in Appendix D ###reference_###."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Discussion",
69
+ "text": "The main contribution of this work is to introduce Neural Quantile Estimation (NQE), a novel class of SBI methods that incorporate the concept of quantile regression, with competitive performance across various examples.\nStrictly speaking, our paper presents Neural Quantile Posterior Estimation, a method that can be extended to Neural Quantile Likelihood Estimation, which fits the likelihood with conditional quantiles.\nWe note that the idea of interpolating predicted quantiles has been explored for e.g. time series forecasting (Gasthaus et al., 2019 ###reference_b11###; Sun et al., 2023 ###reference_b48###).\nNonetheless, to our knowledge our paper is the first work that implements this idea in the SBI framework, with a dedicated interpolation scheme that minimizes the potential artifacts.\nIn addition, Jeffrey & Wandelt (2020 ###reference_b18###) uses a similar architecture to predict the moments of the posterior.\nMontel et al. (2023 ###reference_b36###) proposes to autoregressively apply marginal NRE estimators to obtain the joint distribution, which outperforms standard NRE in their benchmarks.\nAs shown in Hermans et al. 
(2021 ###reference_b17###), all existing SBI methods may predict biased results in practice: while the Bayesian optimal posterior has perfect calibration, there is no guarantee regarding the unbiasedness of SBI algorithms trained with an insufficient number of simulations.\nHowever, with the post-processing calibration step, NQE is guaranteed to be unbiased should there be no unknown model misspecification, in the sense that the credible regions of the posterior will enclose no fewer samples than their corresponding credibility levels, as long as one has validation data to reliably compute the empirical coverage for the broadening calibration.\nWhile Balanced Neural Ratio Estimation (BNRE, Delaunoy et al., 2022 ###reference_b6###) pursues similar goals of robust SBI inference, the unbiasedness of BNRE depends on the choice of its regularization parameter, so in principle one needs to tune this parameter for each task to obtain the best results.\nUnfortunately, the coverage evaluation is considerably more expensive for NRE methods, which rely on MCMC sampling, making the coverage-based\ntuning of BNRE computationally prohibitive for higher-dimensional applications.\nOn the other hand, the broadening calibration of NQE can be applied with negligible computational cost, with the calibrated NQE manifestly unbiased as the empirical coverage has been explicitly corrected during the broadening process.\nIn addition, one can also mitigate the bias due to partially known model misspecification by calibrating the NQE posterior.\nBefore concluding this paper, we enumerate several promising directions for future study.\nFirst of all, NQE can be straightforwardly generalized to Sequential NQE (SNQE), which will be presented in a separate paper.\nSecond, while our PCHIP-ET scheme shows competitive performance across various problems, it does not have continuous PDF derivatives, which may be improved by a higher-order interpolation scheme.\nMoreover, in this work we mostly
restrict to a global broadening transform for the calibration of NQE, which eliminates the bias at the cost of being possibly too conservative for certain credibility levels.\nAs shown in Section 3.3 ###reference_###, a more advanced calibration strategy would be useful, in particular for problems with a large systematic bias, so that one can calibrate biased posteriors without losing too much constraining power."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix 1",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix A Piecewise Cubic Hermite Interpolating Polynomial with Exponential Tails",
77
+ "text": "We interpolate the CDF of the conditional 1-dim distributions using the quantiles predicted by NQE.\nOur interpolation scheme is based on Piecewise Cubic Hermite Interpolating Polynomial (PCHIP, Fritsch & Carlson, 1980 ###reference_b10###; Moler, 2004 ###reference_b35###), which preserves the monotonicity of the input data and has continuous first order derivatives.\nThe values of the interpolated function at the th and th nodes, and , match the values of the target function, while the derivatives, and , are given by the two-side scheme for non-boundary points,\nFor boundary points, we use the following one-side scheme for the left end (similarly for the right end),\nwhich however is clipped to for and for , with a hyperparameter typically set to .\nNote that for well-defined CDF data, one always has in Equations 6 ###reference_### and 7 ###reference_###.\nFritsch & Carlson (1980 ###reference_b10###) shows a sufficient condition for the interpolation to preserve monotonicity is and 111111Indeed 3 is the largest number for the criterion of this form., which is satisfied by Equations 6 ###reference_### and 7 ###reference_###.\nWith , , and , the interpolation gives\nAs shown in Figure 2 ###reference_###, this interpolation scheme generates notable artifacts in the PDF, due to the challenge posed by fitting polynomials to the exponentially declining tail of the probability density.\nIn response to this challenge, we fit the local distribution with Gaussian tails whenever necessary.\nIn this regime, the fitting PDF is given by\nwith and its first derivative continuous at the end point of the bin.\nWe then solve the parameter by requiring that the PDF has correct normalization within the bin, which can be computed via the following indefinite integrals.\nFor , we have\nwhile for ,\nwhere and are the error function and imaginary error function, respectively.\nFor and , we use the following expressions which are analytically equivalent but numerically more 
stable,\nwhere is Dawson\u2019s integral and is the scaled complementary error function.\nNonetheless, in rare cases where we set and give up the continuity condition for the first derivative of PDF, and instead solve for the correct normalization within the bin.\nOur criterion for deciding whether a bin should be fitted with exponential tails is as follows.\nFirst of all, the leftmost and rightmost bins have one-sided exponential tails as long as their averaged PDF is smaller than 0.6 times the averaged PDF in the bin next to them, otherwise the edge bins likely have a hard truncation by the prior and are therefore fitted with polynomials.\nIn addition, we also allow other bins to have double, i.e. from left endpoint towards right and from right endpoint towards left, exponential tails to account for potential multimodality.\nFor each bin , we attempt to fit the distribution with double exponential tails, and compute\nNote that the PDF is no longer strictly continuous at the bin endpoints when fitted with double exponential tails, and quantifies such discontinuity.\nWe then switch to double exponentials only for bins with local minimum , and stick with the PCHIP polynomials for the remaining bins.\nThe rationale behind this is intuitive: a smaller implies a likely gap between two isolated peaks of the PDF (see, for instance, the top right panel of Figure 2 ###reference_###), which can be better fitted with two exponential tails extending from both sides.\nOur PCHIP-ET scheme incorporates the inductive bias that for most SBI problems the tails of probabilistic distributions can be well modeled by Gaussians; if this is not the case, one may replace the Gaussian with e.g. student\u2019s or Cauchy for long-tailed distributions."
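To illustrate the interpolation-then-sampling step, here is a sketch using SciPy's plain monotone PCHIP (`PchipInterpolator`) applied to the quantile function directly, which sidesteps CDF inversion; the exponential-tail modification of PCHIP-ET is omitted, and Exp(1) quantiles stand in for the conditional quantiles an NQE network would predict.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Exact Exp(1) quantiles at equally spaced levels, standing in for the
# quantiles an NQE network would predict for one conditional 1-dim slice.
taus = np.linspace(0.02, 0.98, 25)
quants = -np.log1p(-taus)

qf = PchipInterpolator(taus, quants)   # monotone, C1 quantile function

# Inverse-transform sampling: evaluate the quantile function at uniforms.
rng = np.random.default_rng(3)
samples = qf(rng.uniform(taus[0], taus[-1], size=200_000))

grid = qf(np.linspace(0.02, 0.98, 1001))   # should be monotone throughout
```

Because PCHIP preserves the monotonicity of the (tau, quantile) data, the interpolated quantile function is a valid sampler; PCHIP-ET additionally replaces the polynomial in tail bins with Gaussian tails as described above.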
78
+ },
79
+ {
80
+ "section_id": "Appendix 2",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix B Weights in",
83
+ "text": "In this work, we use NQE to predict the quantiles equally spaced between , which tends to put more emphasis on the regions with larger PDF where the neighboring quantiles are closer to each other, leading to potential instability in the tail regions.\nInstead of directly weighting the different terms in , we adopt the following dropout strategy: for each training batch, we only keep of the terms in using a no-replacement multinomial sampling with weights proportional to , , with and by default.\nThis will effectively upweight the quantiles where the PDF is small, while the no-replacement sampling prevents specific terms from having too large weights that dominate the whole loss function."
84
+ },
85
+ {
86
+ "section_id": "Appendix 3",
87
+ "parent_section_id": null,
88
+ "section_name": "Appendix C Benchmark Problems",
89
+ "text": "We use the following problems from Lueckmann et al. (2021 ###reference_b29###) to benchmark the performance of the SBI methods.\nThe \u201cground truth\u201d posterior samples are available for all the problems.\nA toy problem with complicated global (bimodality) as well as local (crescent shape) structures.\nA challenging problem designed to have a simple likelihood and a complex posterior, with uninformative dimensions (distractors) added to the observation.\nInference of a 10-parameter Generalized Linear Model (GLM) with raw Bernoulli observations.\nInferring the common mean of a mixture of two Gaussians, one with much broader covariance than the other.\nAn epidemiological model describing the numbers of individuals in three possible states: susceptible , infectious , and recovered or deceased .\nAn influential ecology model describing the dynamics of two interacting species."
90
+ },
91
+ {
92
+ "section_id": "Appendix 4",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix D Details of the Cosmology Application",
95
+ "text": "We run dark-matter-only Particle Mesh (PM) simulations with particles in (Mpc/h)^3 boxes using the pmwd code (Li et al., 2022a ###reference_b22###, b ###reference_b23###), and generate two projected overdensity fields from each simulation by dividing the box into two halves along the axis as the observation data.\nWe use 80% of the simulations for training, 10% for validation, and 10% for testing. We evaluate the calibration of NQE with the validation data, and plot Figures 7 ###reference_### and 8 ###reference_### with the test data.\nThe model parameters are \u03a9_m, the total matter density today, and \u03c3_8, the RMS matter fluctuation today in linear theory, with uniform priors and .\nAs a proof-of-concept example, we substitute the expensive cosmological hydrodynamic simulations with a post-processing scale-independent bias model (here the bias means any deviation of the actual observed field with respect to the dark-matter-only density field) over the density fields from the dark-matter-only simulations, i.e. with (but we still require that ).\nIn other words, we train NQE with simulations but require the inference to be unbiased for , which is achieved via the calibration of NQE.\nA ResNet (He et al., 2016 ###reference_b15###) with 10 convolutional layers is utilized as the embedding network for more efficient inference with the high-dimensional data."
96
+ },
97
+ {
98
+ "section_id": "Appendix 5",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix E Convergence Test of Coverage Evaluation",
101
+ "text": "We check the convergence of the coverage evaluation in Figures 9 ###reference_###, 10 ###reference_### and 11 ###reference_###.\nWhile Figure 5 ###reference_### in the main paper uses simulations to enhance the smoothness of the coverage curves, in most cases simulations should be sufficient for the evaluation of coverage.\nIn fact, the (pointwise) standard error of the empirical coverage can be estimated using the properties of the binomial distribution as , where is the number of simulations for the coverage evaluation (S\u00e4ilynoja et al., 2022 ###reference_b45###).\nTherefore, with , one has for all .\n\n###figure_9### \n###figure_10### \n###figure_11###"
102
+ },
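The binomial error estimate above can be made concrete with a two-line helper. The elided formula is assumed to be the standard binomial-proportion standard error, sqrt(c(1-c)/N_sim), which matches the surrounding argument (the error is maximal at c = 0.5 and shrinks as 1/sqrt(N_sim)).

```python
# Sketch: pointwise binomial standard error of the empirical coverage curve,
# assuming the standard binomial-proportion form se(c) = sqrt(c*(1-c)/N_sim).
import math

def coverage_stderr(c, n_sim):
    """Standard error of an empirical coverage estimate at credibility level c."""
    return math.sqrt(c * (1.0 - c) / n_sim)

# The error is maximized at c = 0.5 and decays like 1/sqrt(n_sim).
worst_case = coverage_stderr(0.5, 2500)  # ~0.01
```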
103
+ {
104
+ "section_id": "Appendix 6",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix F Hyperparameter Choices",
107
+ "text": "We train all the models on NVIDIA A100 MIG GPUs using the AdamW optimizer (Loshchilov & Hutter, 2017 ###reference_b25###), and find the wall time of NQE training to be comparable to that of existing methods like NPE.\nOur PCHIP-ET scheme has been implemented with Cython (Behnel et al., 2010 ###reference_b3###), so that its evaluation is much faster than the quantile regression network calls for typical real-world examples.\nWe conduct a mild search for {, , } in Figures 12 ###reference_### and 13 ###reference_###, which leads to our baseline choice in Table 2 ###reference_###.\nWe reduce the stepsize by 10% after every 5 epochs, and terminate the training if the loss does not improve after 30 epochs or when the training reaches 300 epochs.\n\n###figure_12### \n###figure_13### We find that some tasks require a different stepsize while others exhibit significant overfitting, so we train 9 realizations for each network with {initial step size = 5e-4, 1e-4, 2e-5} \u00d7 {AdamW weight decay = 0, 1, 10}, and choose the realization with the smallest loss.\nNevertheless, most problems have a clear preference regarding these two parameters, so it should be straightforward to tune them for specific problems in practice."
108
+ },
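The schedule and stopping rule described above (stepsize reduced by 10% every 5 epochs; stop after 30 epochs without improvement or at 300 epochs) can be sketched framework-agnostically. `run_epoch` is a hypothetical stand-in for one NQE training epoch; only the bookkeeping logic reflects the text.

```python
# Sketch of the training schedule described above. `run_epoch` is a stub
# returning a loss that decreases and then plateaus, to exercise the logic.
def run_epoch(epoch):
    return max(1.0 - 0.05 * epoch, 0.4)

def train(initial_lr=5e-4, patience=30, max_epochs=300):
    lr, best_loss, since_best = initial_lr, float("inf"), 0
    for epoch in range(max_epochs):
        if epoch > 0 and epoch % 5 == 0:   # reduce stepsize by 10% every 5 epochs
            lr *= 0.9
        loss = run_epoch(epoch)
        if loss < best_loss:
            best_loss, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:     # no improvement for `patience` epochs
                break
    return epoch + 1, lr, best_loss

epochs_run, final_lr, best = train()
```

With the stub loss plateauing at epoch 12, training stops 30 epochs later, after 43 epochs in total.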
109
+ {
110
+ "section_id": "Appendix 7",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix G Additional Plots",
113
+ "text": "###figure_14### \n###figure_15### \n###figure_16###"
114
+ }
115
+ ],
116
+ "tables": {
117
+ "1": {
118
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Computational cost of the broadening calibration, with NQE being significantly faster than other methods. : number of iterations to solve for the desired coverage. : number of simulated observations for coverage computation. : number of samples per observation for the rank of PDF. : number of posterior evaluations per effective MCMC sample. We assume there is no broadening technique for NPE that does not necessitate MCMC sampling.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.23.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.23.15.16.1\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.23.15.16.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.23.15.16.1.2\">coverage</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.23.15.16.1.3\">simulations</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T1.23.15.16.1.4\">network calls</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.11.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.11.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.11.3.3.4.1\">NQE</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.9.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.10.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.11.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.14.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.14.6.6.4\">NQE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.12.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.13.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S2.T1.14.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.17.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.17.9.9.4\">NLE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.15.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.16.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.17.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.20.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.20.12.12.4\">NPE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.18.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.19.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.20.12.12.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.23.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T1.23.15.15.4\">NRE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.21.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.22.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.23.15.15.3\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
119
+ "capture": "Table 1: Computational cost of the broadening calibration, with NQE being significantly faster than other methods. : number of iterations to solve for the desired coverage. : number of simulated observations for coverage computation. : number of samples per observation for the rank of PDF. : number of posterior evaluations per effective MCMC sample. We assume there is no broadening technique for NPE that does not necessitate MCMC sampling."
120
+ },
121
+ "2": {
122
+ "table_html": "<figure class=\"ltx_table\" id=\"A6.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Our baseline choice of NQE hyperparameters. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A6.T2.11.11\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.12.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.12.1.1\">hyperparameter</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.11.11.12.1.2\">value</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"A6.T2.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.1.2\">0.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.3.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.5.5.5.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.7.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.7.7.7.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.9.9.9.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.10.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.10.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.10.10.10.2\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.13.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.13.2.1\"># of MLP hidden layers</th>\n<td class=\"ltx_td 
ltx_align_center\" id=\"A6.T2.11.11.13.2.2\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.14.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.14.3.1\"># of MLP hidden neurons per layer</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.11.11.14.3.2\">512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.11.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A6.T2.11.11.11.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.11.11.11.2\">16</td>\n</tr>\n</tbody>\n</table>\n</figure>",
123
+ "capture": "Table 2: Our baseline choice of NQE hyperparameters. "
124
+ }
125
+ },
126
+ "image_paths": {
127
+ "1": {
128
+ "figure_path": "2401.02413v2_figure_1.png",
129
+ "caption": "Figure 1: \n(Top) Network architecture of our NQE method, which autoregressively learns 1-dim conditional quantiles for each posterior dimension.\nThe estimated quantiles are then interpolated to reconstruct the full distribution.\n(Bottom) A post-processing calibration step can be employed to ensure the unbiasedness of NQE inference results.",
130
+ "url": "http://arxiv.org/html/2401.02413v2/x1.png"
131
+ },
132
+ "2": {
133
+ "figure_path": "2401.02413v2_figure_2.png",
134
+ "caption": "Figure 2: \n(1st row) Interpolation of Gaussian and Gaussian Mixture distributions.\nWhile the original PCHIP algorithm shows significant interpolation artifacts, our modified PCHIP-ET scheme decently reconstructs the distributions with only ~15 quantiles.\n(2nd row) Comparison of the 68.3% and 95.4% credible regions for a mixture of two asymmetric modes, evaluated with HPDR (p-coverage) and QMCR (q-coverage).\n(3rd row) Broadening of the interpolated posterior, with the broadening factors indicated in the legend.\n(4th row) The bijective mapping f_qm establishes a one-to-one correspondence between \u03b8 and \u03b8\u2032 with the same 1-dim conditional CDF across all the \u03b8^(i) dimensions. The p-coverage and q-coverage are based on the ranking of p(\u03b8) and q_aux(\u03b8\u2032), respectively.",
135
+ "url": "http://arxiv.org/html/2401.02413v2/x2.png"
136
+ },
137
+ "3": {
138
+ "figure_path": "2401.02413v2_figure_3.png",
139
+ "caption": "Figure 3: Probability density estimation for two toy examples from Grathwohl et al. (2018).\nDespite the intricate multimodal structures, NQE is able to faithfully reconstruct both distributions.",
140
+ "url": "http://arxiv.org/html/2401.02413v2/x3.png"
141
+ },
142
+ "4": {
143
+ "figure_path": "2401.02413v2_figure_4.png",
144
+ "caption": "Figure 4: Comparison of C2ST as a function of simulation budget for the six benchmark problems, with lower C2ST values representing better performance of the algorithm.\nThe error bars are estimated using the 25%, 50% and 75% quantiles of C2ST over ten realizations for each problem.\n(Uncalibrated) NQE achieves state-of-the-art performance across all the examples.",
145
+ "url": "http://arxiv.org/html/2401.02413v2/x4.png"
146
+ },
147
+ "5": {
148
+ "figure_path": "2401.02413v2_figure_5.png",
149
+ "caption": "Figure 5: \n(Top) NQE q-coverage for the benchmark problems. Like other SBI methods, with limited simulation budgets, NQE may predict biased posteriors.\n(Bottom) Calibrated NQE predicts unbiased posteriors for all the problems.\nErrorbars are small and thus not plotted.\nSee Appendix E for a convergence test and Figure 14 for a similar plot with p-coverage.",
150
+ "url": "http://arxiv.org/html/2401.02413v2/x5.png"
151
+ },
152
+ "6": {
153
+ "figure_path": "2401.02413v2_figure_6.png",
154
+ "caption": "Figure 6: \nThe C2ST values for the 1-dim uncalibrated marginal posteriors. We do not observe a clear trend of increasing C2ST with respect to the ordering of the dimensions.",
155
+ "url": "http://arxiv.org/html/2401.02413v2/x6.png"
156
+ },
157
+ "7": {
158
+ "figure_path": "2401.02413v2_figure_7.png",
159
+ "caption": "Figure 7: \n(Left) Sample image of the simulated data. The task is to infer two parameters of our Universe, \u03a9_m and \u03c3_8, from such images.\n(Right) The q-coverage for uncalibrated NQE without model misspecification (No MM), uncalibrated NQE with model misspecification (With MM), and NQE with model misspecification but calibrated using a broadening factor of 4.2 (Broadening) and using the quantile shifting method (Shifting).\nBoth calibration methods eliminate the bias due to known model misspecification, with quantile shifting achieving almost exact empirical coverage whereas global broadening is over-conservative.",
160
+ "url": "http://arxiv.org/html/2401.02413v2/x7.png"
161
+ },
162
+ "8": {
163
+ "figure_path": "2401.02413v2_figure_8.png",
164
+ "caption": "Figure 8: \nComparison of the uncalibrated posterior and the posteriors calibrated with two different schemes.\nThe quantile shifting scheme removes the bias without over-broadening the posterior.",
165
+ "url": "http://arxiv.org/html/2401.02413v2/x8.png"
166
+ },
167
+ "9": {
168
+ "figure_path": "2401.02413v2_figure_9.png",
169
+ "caption": "Figure 9: \nSimilar to Figure 5, but using 2,000 simulations for the evaluation of q-coverage.",
170
+ "url": "http://arxiv.org/html/2401.02413v2/x9.png"
171
+ },
172
+ "10": {
173
+ "figure_path": "2401.02413v2_figure_10.png",
174
+ "caption": "Figure 10: \nSimilar to Figure 5, but using 1,000 simulations for the evaluation of q-coverage.",
175
+ "url": "http://arxiv.org/html/2401.02413v2/x10.png"
176
+ },
177
+ "11": {
178
+ "figure_path": "2401.02413v2_figure_11.png",
179
+ "caption": "Figure 11: \nSimilar to Figure 5, but using 500 simulations for the evaluation of q-coverage.",
180
+ "url": "http://arxiv.org/html/2401.02413v2/x11.png"
181
+ },
182
+ "12": {
183
+ "figure_path": "2401.02413v2_figure_12.png",
184
+ "caption": "Figure 12: \nA survey of NQE performance across different choices of hyperparameters.\nFrom left to right, we set f_0 as (0, 0, 0, 0, 1, 1, 1, 1), and set \u03bb_reg as (0, 0.01, 0.1, 1, 0, 0.01, 0.1, 1).\nAll other parameters are the same as Table 2.",
185
+ "url": "http://arxiv.org/html/2401.02413v2/x12.png"
186
+ },
187
+ "13": {
188
+ "figure_path": "2401.02413v2_figure_13.png",
189
+ "caption": "Figure 13: \nSame as Figure 12, but using 25 quantile bins. Increasing the number of bins is helpful for multimodal problems (e.g. TM) with large simulation budgets.",
190
+ "url": "http://arxiv.org/html/2401.02413v2/x13.png"
191
+ },
192
+ "14": {
193
+ "figure_path": "2401.02413v2_figure_14.png",
194
+ "caption": "Figure 14: \nEmpirical coverage results using p-coverage, while the calibration is still evaluated using q-coverage. We find that the p-coverage results are qualitatively similar to the q-coverage in most cases, and the broadening calibration with q-coverage in the main text also mitigates the bias for the p-coverage. Nevertheless, one can always solve for the broadening factor directly with p-coverage if one wishes the p-coverage to be strictly unbiased, at the cost of more network calls than using q-coverage.",
195
+ "url": "http://arxiv.org/html/2401.02413v2/x14.png"
196
+ },
197
+ "15": {
198
+ "figure_path": "2401.02413v2_figure_15.png",
199
+ "caption": "Figure 15: The actual broadening factor applied to remove the bias for the benchmark problems.",
200
+ "url": "http://arxiv.org/html/2401.02413v2/x15.png"
201
+ },
202
+ "16": {
203
+ "figure_path": "2401.02413v2_figure_16.png",
204
+ "caption": "Figure 16: Similar to Figure 4, but for NQE calibrated with the global broadening scheme. The C2ST of calibrated NQE is generally similar to or slightly worse than uncalibrated NQE in Figure 4.",
205
+ "url": "http://arxiv.org/html/2401.02413v2/x16.png"
206
+ }
207
+ },
208
+ "validation": true,
209
+ "references": [
210
+ {
211
+ "1": {
212
+ "title": "Approximate Bayesian computation in population genetics.",
213
+ "author": "Beaumont, M. A., Zhang, W., and Balding, D. J.",
214
+ "venue": "Genetics, 162(4):2025\u20132035, 2002.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "2": {
220
+ "title": "Adaptive approximate Bayesian computation.",
221
+ "author": "Beaumont, M. A., Cornuet, J.-M., Marin, J.-M., and Robert, C. P.",
222
+ "venue": "Biometrika, 96(4):983\u2013990, 2009.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "3": {
228
+ "title": "Cython: The best of both worlds.",
229
+ "author": "Behnel, S., Bradshaw, R., Citro, C., Dalcin, L., Seljebotn, D. S., and Smith, K.",
230
+ "venue": "Computing in Science & Engineering, 13(2):31\u201339, 2010.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "4": {
236
+ "title": "The frontier of simulation-based inference.",
237
+ "author": "Cranmer, K., Brehmer, J., and Louppe, G.",
238
+ "venue": "Proceedings of the National Academy of Sciences, 117(48):30055\u201330062, 2020.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "5": {
244
+ "title": "Flow matching for scalable simulation-based inference.",
245
+ "author": "Dax, M., Wildberger, J., Buchholz, S., Green, S. R., Macke, J. H., and Sch\u00f6lkopf, B.",
246
+ "venue": "arXiv preprint arXiv:2305.17161, 2023.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "6": {
252
+ "title": "Towards reliable simulation-based inference with balanced neural ratio estimation.",
253
+ "author": "Delaunoy, A., Hermans, J., Rozet, F., Wehenkel, A., and Louppe, G.",
254
+ "venue": "arXiv preprint arXiv:2208.13624, 2022.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "7": {
260
+ "title": "Diffusion models beat GANs on image synthesis.",
261
+ "author": "Dhariwal, P. and Nichol, A.",
262
+ "venue": "Advances in Neural Information Processing Systems, 34:8780\u20138794, 2021.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "8": {
268
+ "title": "Modern cosmology.",
269
+ "author": "Dodelson, S. and Schmidt, F.",
270
+ "venue": "Academic press, 2020.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "9": {
276
+ "title": "On contrastive learning for likelihood-free inference.",
277
+ "author": "Durkan, C., Murray, I., and Papamakarios, G.",
278
+ "venue": "In International Conference on Machine Learning, pp. 2771\u20132781. PMLR, 2020.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "10": {
284
+ "title": "Monotone piecewise cubic interpolation.",
285
+ "author": "Fritsch, F. N. and Carlson, R. E.",
286
+ "venue": "SIAM Journal on Numerical Analysis, 17(2):238\u2013246, 1980.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "11": {
292
+ "title": "Probabilistic forecasting with spline quantile function RNNs.",
293
+ "author": "Gasthaus, J., Benidis, K., Wang, Y., Rangapuram, S. S., Salinas, D., Flunkert, V., and Januschowski, T.",
294
+ "venue": "In The 22nd international conference on artificial intelligence and statistics, pp. 1901\u20131910. PMLR, 2019.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "12": {
300
+ "title": "Generative adversarial networks.",
301
+ "author": "Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y.",
302
+ "venue": "Communications of the ACM, 63(11):139\u2013144, 2020.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "13": {
308
+ "title": "Ffjord: Free-form continuous dynamics for scalable reversible generative models.",
309
+ "author": "Grathwohl, W., Chen, R. T., Bettencourt, J., Sutskever, I., and Duvenaud, D.",
310
+ "venue": "arXiv preprint arXiv:1810.01367, 2018.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "14": {
316
+ "title": "Automatic posterior transformation for likelihood-free inference.",
317
+ "author": "Greenberg, D., Nonnenmacher, M., and Macke, J.",
318
+ "venue": "In International Conference on Machine Learning, pp. 2404\u20132414. PMLR, 2019.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "15": {
324
+ "title": "Deep residual learning for image recognition.",
325
+ "author": "He, K., Zhang, X., Ren, S., and Sun, J.",
326
+ "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770\u2013778, 2016.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "16": {
332
+ "title": "Likelihood-free MCMC with amortized approximate ratio estimators.",
333
+ "author": "Hermans, J., Begy, V., and Louppe, G.",
334
+ "venue": "In International Conference on Machine Learning, pp. 4239\u20134248. PMLR, 2020.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "17": {
340
+ "title": "A trust crisis in simulation-based inference? your posterior approximations can be unfaithful.",
341
+ "author": "Hermans, J., Delaunoy, A., Rozet, F., Wehenkel, A., Begy, V., and Louppe, G.",
342
+ "venue": "arXiv preprint arXiv:2110.06581, 2021.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "18": {
348
+ "title": "Solving high-dimensional parameter inference: marginal posterior densities & moment networks.",
349
+ "author": "Jeffrey, N. and Wandelt, B. D.",
350
+ "venue": "arXiv preprint arXiv:2011.05991, 2020.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "19": {
356
+ "title": "Learning summary statistic for approximate Bayesian computation via deep neural network.",
357
+ "author": "Jiang, B., Wu, T.-y., Zheng, C., and Wong, W. H.",
358
+ "venue": "Statistica Sinica, pp. 1595\u20131618, 2017.",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "20": {
364
+ "title": "A contribution to the mathematical theory of epidemics.",
365
+ "author": "Kermack, W. O. and McKendrick, A. G.",
366
+ "venue": "Proceedings of the royal society of london. Series A, Containing papers of a mathematical and physical character, 115(772):700\u2013721, 1927.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "21": {
372
+ "title": "Regression quantiles.",
373
+ "author": "Koenker, R. and Bassett Jr, G.",
374
+ "venue": "Econometrica: journal of the Econometric Society, pp. 33\u201350, 1978.",
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "22": {
380
+ "title": "pmwd: A differentiable cosmological particle-mesh N-body library.",
381
+ "author": "Li, Y., Lu, L., Modi, C., Jamieson, D., Zhang, Y., Feng, Y., Zhou, W., Kwan, N. P., Lanusse, F., and Greengard, L.",
382
+ "venue": "arXiv preprint arXiv:2211.09958, 2022a.",
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "23": {
388
+ "title": "Differentiable cosmological simulation with adjoint method.",
389
+ "author": "Li, Y., Modi, C., Jamieson, D., Zhang, Y., Lu, L., Feng, Y., Lanusse, F., and Greengard, L.",
390
+ "venue": "arXiv preprint arXiv:2211.09815, 2022b.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "24": {
396
+ "title": "Revisiting classifier two-sample tests.",
397
+ "author": "Lopez-Paz, D. and Oquab, M.",
398
+ "venue": "arXiv preprint arXiv:1610.06545, 2016.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "25": {
404
+ "title": "Decoupled weight decay regularization.",
405
+ "author": "Loshchilov, I. and Hutter, F.",
406
+ "venue": "arXiv preprint arXiv:1711.05101, 2017.",
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "26": {
412
+ "title": "Analytical note on certain rhythmic relations in organic systems.",
413
+ "author": "Lotka, A. J.",
414
+ "venue": "Proceedings of the National Academy of Sciences, 6(7):410\u2013415, 1920.",
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "27": {
420
+ "title": "Flexible statistical inference for mechanistic models of neural dynamics.",
421
+ "author": "Lueckmann, J.-M., Goncalves, P. J., Bassetto, G., \u00d6cal, K., Nonnenmacher, M., and Macke, J. H.",
422
+ "venue": "Advances in neural information processing systems, 30, 2017.",
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "28": {
428
+ "title": "Likelihood-free inference with emulator networks.",
429
+ "author": "Lueckmann, J.-M., Bassetto, G., Karaletsos, T., and Macke, J. H.",
430
+ "venue": "In Symposium on Advances in Approximate Bayesian Inference, pp. 32\u201353. PMLR, 2019.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "29": {
436
+ "title": "Benchmarking simulation-based inference.",
437
+ "author": "Lueckmann, J.-M., Boelts, J., Greenberg, D., Goncalves, P., and Macke, J.",
438
+ "venue": "In International Conference on Artificial Intelligence and Statistics, pp. 343\u2013351. PMLR, 2021.",
439
+ "url": null
440
+ }
441
+ },
442
+ {
443
+ "30": {
444
+ "title": "Bias correction, quantile mapping, and downscaling: Revisiting the inflation issue.",
445
+ "author": "Maraun, D.",
446
+ "venue": "Journal of Climate, 26(6):2137\u20132143, 2013.",
447
+ "url": null
448
+ }
449
+ },
450
+ {
451
+ "31": {
452
+ "title": "Statistical rethinking: A Bayesian course with examples in R and Stan.",
453
+ "author": "McElreath, R.",
454
+ "venue": "CRC press, 2020.",
455
+ "url": null
456
+ }
457
+ },
458
+ {
459
+ "32": {
460
+ "title": "Quantile regression forests.",
461
+ "author": "Meinshausen, N. and Ridgeway, G.",
462
+ "venue": "Journal of machine learning research, 7(6), 2006.",
463
+ "url": null
464
+ }
465
+ },
466
+ {
467
+ "33": {
468
+ "title": "Truncated marginal neural ratio estimation.",
469
+ "author": "Miller, B. K., Cole, A., Forr\u00e9, P., Louppe, G., and Weniger, C.",
470
+ "venue": "Advances in Neural Information Processing Systems, 34:129\u2013143, 2021.",
471
+ "url": null
472
+ }
473
+ },
474
+ {
475
+ "34": {
476
+ "title": "Sensitivity analysis of simulation-based inference for galaxy clustering, 2023.",
477
+ "author": "Modi, C., Pandey, S., Ho, M., Hahn, C., Blancard, B. R.-S., and Wandelt, B.",
478
+ "venue": null,
479
+ "url": null
480
+ }
481
+ },
482
+ {
483
+ "35": {
484
+ "title": "Numerical computing with MATLAB.",
485
+ "author": "Moler, C. B.",
486
+ "venue": "SIAM, 2004.",
487
+ "url": null
488
+ }
489
+ },
490
+ {
491
+ "36": {
492
+ "title": "Scalable inference with autoregressive neural ratio estimation.",
493
+ "author": "Montel, N. A., Alvey, J., and Weniger, C.",
494
+ "venue": "arXiv preprint arXiv:2308.08597, 2023.",
495
+ "url": null
496
+ }
497
+ },
498
+ {
499
+ "37": {
500
+ "title": "Fast \u03b5-free inference of simulation models with Bayesian conditional density estimation.",
501
+ "author": "Papamakarios, G. and Murray, I.",
502
+ "venue": "Advances in neural information processing systems, 29, 2016.",
503
+ "url": null
504
+ }
505
+ },
506
+ {
507
+ "38": {
508
+ "title": "Normalizing flows for probabilistic modeling and inference.",
509
+ "author": "Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., and Lakshminarayanan, B.",
510
+ "venue": "arXiv preprint arXiv:1912.02762, 2019a.",
511
+ "url": null
512
+ }
513
+ },
514
+ {
515
+ "39": {
516
+ "title": "Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows.",
517
+ "author": "Papamakarios, G., Sterratt, D., and Murray, I.",
518
+ "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 837\u2013848. PMLR, 2019b.",
519
+ "url": null
520
+ }
521
+ },
522
+ {
523
+ "40": {
524
+ "title": "Pytorch: An imperative style, high-performance deep learning library.",
525
+ "author": "Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.",
526
+ "venue": "Advances in neural information processing systems, 32, 2019.",
527
+ "url": null
528
+ }
529
+ },
530
+ {
531
+ "41": {
532
+ "title": "Population growth of human y chromosomes: a study of y chromosome microsatellites.",
533
+ "author": "Pritchard, J. K., Seielstad, M. T., Perez-Lezaun, A., and Feldman, M. W.",
534
+ "venue": "Molecular biology and evolution, 16(12):1791\u20131798, 1999.",
535
+ "url": null
536
+ }
537
+ },
538
+ {
539
+ "42": {
540
+ "title": "Bayesflow: Learning complex stochastic models with invertible neural networks.",
541
+ "author": "Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., and K\u00f6the, U.",
542
+ "venue": "IEEE transactions on neural networks and learning systems, 33(4):1452\u20131466, 2020.",
543
+ "url": null
544
+ }
545
+ },
546
+ {
547
+ "43": {
548
+ "title": "Variational inference with normalizing flows.",
549
+ "author": "Rezende, D. and Mohamed, S.",
550
+ "venue": "In International conference on machine learning, pp. 1530\u20131538. PMLR, 2015.",
551
+ "url": null
552
+ }
553
+ },
554
+ {
555
+ "44": {
556
+ "title": "Beyond expectation: Deep joint mean and quantile regression for spatiotemporal problems.",
557
+ "author": "Rodrigues, F. and Pereira, F. C.",
558
+ "venue": "IEEE transactions on neural networks and learning systems, 31(12):5377\u20135389, 2020.",
559
+ "url": null
560
+ }
561
+ },
562
+ {
563
+ "45": {
564
+ "title": "Graphical test for discrete uniformity and its applications in goodness-of-fit evaluation and multiple sample comparison.",
565
+ "author": "S\u00e4ilynoja, T., B\u00fcrkner, P.-C., and Vehtari, A.",
566
+ "venue": "Statistics and Computing, 32(2):32, 2022.",
567
+ "url": null
568
+ }
569
+ },
570
+ {
571
+ "46": {
572
+ "title": "Adaptive approximate bayesian computation tolerance selection.",
573
+ "author": "Simola, U., Cisewski-Kehe, J., Gutmann, M. U., and Corander, J.",
574
+ "venue": "Bayesian analysis, 16(2):397\u2013423, 2021.",
575
+ "url": null
576
+ }
577
+ },
578
+ {
579
+ "47": {
580
+ "title": "Sequential monte carlo without likelihoods.",
581
+ "author": "Sisson, S. A., Fan, Y., and Tanaka, M. M.",
582
+ "venue": "Proceedings of the National Academy of Sciences, 104(6):1760\u20131765, 2007.",
583
+ "url": null
584
+ }
585
+ },
586
+ {
587
+ "48": {
588
+ "title": "Neural spline search for quantile probabilistic modeling.",
589
+ "author": "Sun, R., Li, C.-L., Arik, S. \u00d6., Dusenberry, M. W., Lee, C.-Y., and Pfister, T.",
590
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 9927\u20139934, 2023.",
591
+ "url": null
592
+ }
593
+ },
594
+ {
595
+ "49": {
596
+ "title": "Validating bayesian inference algorithms with simulation-based calibration.",
597
+ "author": "Talts, S., Betancourt, M., Simpson, D., Vehtari, A., and Gelman, A.",
598
+ "venue": "arXiv preprint arXiv:1804.06788, 2018.",
599
+ "url": null
600
+ }
601
+ },
602
+ {
603
+ "50": {
604
+ "title": "Nonparametric quantile regression: Non-crossing constraints and conformal prediction.",
605
+ "author": "Tang, W., Shen, G., Lin, Y., and Huang, J.",
606
+ "venue": "arXiv preprint arXiv:2210.10161, 2022.",
607
+ "url": null
608
+ }
609
+ },
610
+ {
611
+ "51": {
612
+ "title": "Inferring coalescence times from dna sequence data.",
613
+ "author": "Tavar\u00e9, S., Balding, D. J., Griffiths, R. C., and Donnelly, P.",
614
+ "venue": "Genetics, 145(2):505\u2013518, 1997.",
615
+ "url": null
616
+ }
617
+ },
618
+ {
619
+ "52": {
620
+ "title": "Approximate bayesian computation scheme for parameter inference and model selection in dynamical systems.",
621
+ "author": "Toni, T., Welch, D., Strelkowa, N., Ipsen, A., and Stumpf, M. P.",
622
+ "venue": "Journal of the Royal Society Interface, 6(31):187\u2013202, 2009.",
623
+ "url": null
624
+ }
625
+ },
626
+ {
627
+ "53": {
628
+ "title": "The camels project: Cosmology and astrophysics with machine-learning simulations.",
629
+ "author": "Villaescusa-Navarro, F., Angl\u00e9s-Alc\u00e1zar, D., Genel, S., Spergel, D. N., Somerville, R. S., Dave, R., Pillepich, A., Hernquist, L., Nelson, D., Torrey, P., et al.",
630
+ "venue": "The Astrophysical Journal, 915(1):71, 2021.",
631
+ "url": null
632
+ }
633
+ }
634
+ ],
635
+ "url": "http://arxiv.org/html/2401.02413v2"
636
+ }
20240722/2401.02938v2.json ADDED
@@ -0,0 +1,531 @@
1
+ {
2
+ "title": "Fast and Effective Weight Update for Pruned Large Language Models",
3
+ "abstract": "Pruning large language models (LLMs) is a challenging task due to their enormous size. The primary difficulty is fine-tuning the model after pruning, which is needed to recover the lost performance caused by dropping weights. Recent approaches have either ignored fine-tuning entirely, focusing on efficient pruning criteria, or attempted layer-wise weight updates, preserving the behavior of each layer. However, even layer-wise weight updates can be costly for LLMs, and previous works have resorted to various approximations.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large language models (LLMs) (Brown et al., 2020 ###reference_b7###; Zhang et al., 2022 ###reference_b45###; Touvron et al., 2023a ###reference_b36###; b ###reference_b37###) have displayed impressive performance in different tasks, but deploying them can be challenging due to their large size and high memory demands.\nIn this work, we introduce a one-shot pruning and weight update algorithm for LLMs that is both fast and effective. Our algorithm produces state-of-the-art results for LLM pruning while imposing minimal computational overhead (Table 1 ###reference_###).\nNeural networks are usually compressed by either quantization or weight pruning.\nLLM quantization (Dettmers et al., 2022 ###reference_b11###; Dettmers & Zettlemoyer, 2023 ###reference_b10###; Ahmadian et al., 2023 ###reference_b2###; Xiao et al., 2023 ###reference_b41###) compresses LLMs by storing weights using a small number of bits.\nOn the other hand, pruning compresses models by dropping irrelevant weights (LeCun et al., 1989 ###reference_b26###; Han et al., 2015 ###reference_b19###; Zhu & Gupta, 2018 ###reference_b48###).\nPruning can be helpful for LLMs since, during inference, the main bottleneck is the memory bandwidth needed to load weights to the processing unit (Xia et al., 2023 ###reference_b40###).\nHowever, the main challenge in deploying LLM pruning is that the network needs to be fine-tuned (Blalock et al., 2020 ###reference_b5###; Liu et al., 2018 ###reference_b27###), which is not feasible with LLMs due to their extensive computational and memory footprint. For example, Agarwalla et al. 
(2024 ###reference_b1###) needed retraining on 45\u2013100 billion tokens to recover the performance lost by pruning.\nAlso, memory-efficient fine-tuning like LoRA (Hu et al., 2021 ###reference_b23###) is not applicable for LLM weight pruning since we cannot easily merge the low-rank update with the sparsified matrix.\n###table_1### A feasible alternative is one-shot pruning, where one is given a trained model with a small amount of calibration data and has to compress the model in a single forward pass using limited computational resources.\nThis is typically done via layer-wise pruning, where the pruning problem is split into layer-wise subproblems. In each layer, one aims to select a pruning mask and update weights to minimize the reconstruction error.\nAdaprune (Hubara et al., 2021 ###reference_b24###) solves layer-wise reconstruction by updating weights directly via gradient descent (using the Adam optimizer). However, it needs many iterations to achieve convergence.\nOptimal brain compression (OBC) (Frantar & Alistarh, 2022 ###reference_b12###) removes weights one by one. In each step, it calculates the optimal weight to remove and also the optimal update.\nHowever, this approach is also very time-consuming for pruning LLMs.\nThe first practical approach applicable to LLMs was SparseGPT (Frantar & Alistarh, 2023 ###reference_b13###), which uses approximations on top of the OBC approach to make the problem feasible, albeit at the cost of decreased reconstruction quality.\nRecently, Wanda (Sun et al., 2023 ###reference_b35###) showed that LLMs can be pruned by removing the weights with the smallest product of weight magnitude and corresponding input activation norm. This selection approach without the weight update is competitive with SparseGPT at lower sparsities (up to 60%).\nOur results. In this paper, we introduce an efficient layer-wise weight update algorithm based on the alternating direction method of multipliers (ADMM) (Boyd et al., 2011 ###reference_b6###). 
Our algorithm sidesteps all of the problems of previous solutions. We do not need many gradient descent iterations, nor do we need any heuristic approximation for calculating the weight update.\nWe only need a single inversion of a matrix similar in size to the weight matrix and very few simple iterations to achieve accurate weight updates for a given pruning mask.\nFurthermore, we extend our algorithm with gradual pruning (Zhu & Gupta, 2018 ###reference_b48###), where in each step, we prune more and more weights. This simple extension allows us to obtain state-of-the-art pruning results at a very low additional cost."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Large language models and transformers",
21
+ "text": "Large language models (like LLaMA) use the transformer (Vaswani et al., 2017 ###reference_b38###) architecture and are trained to predict the next word in a text. A transformer consists of multiple repeating blocks. Each block has a multihead attention and a feed-forward subblock, both of which contain multiple linear transformations. Our work focuses on pruning the weights of these linear transformations."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "One-shot and layer-wise pruning",
27
+ "text": "We consider a scenario of post-training pruning, where we prune an already trained model to a desired sparsity (we assume that the sparsity is the same in each pruned layer).\nSince manipulating the whole LLM at once leads to huge computational and memory requirements, we follow the works of Hubara et al. (2021 ###reference_b24###); Frantar & Alistarh (2022 ###reference_b12###; 2023 ###reference_b13###). We prune the LLM during one forward pass (one-shot pruning) and split pruning into multiple layer-wise subproblems. During the forward pass, we capture the calibration inputs for each layer and then prune and update each layer accordingly.\nMore specifically, for each block in the model, we run a forward pass through it, capture inputs for each layer, prune and update weights, and then rerun a forward pass through the whole block to get outputs after pruning.\nWe are given the original weights for each layer and calibration inputs .\nOur goal is to find a binary weight mask and updated weights \nsuch that the following reconstruction error is minimized:\nFor now, we assume that pruning mask was found via a separate method and focus only on finding\nupdated weights .\nAssuming that our layer has output neurons and inputs, one can just solve independent linear regressions to solve the problem optimally. Since the mask for each output is different, each one of outputs requires a separate matrix inversion of the relevant submatrix of , which in total takes time.\nThis is infeasible even for small neural networks.\nIt is possible to use various approximations to compute updates faster, as done in SparseGPT (Frantar & Alistarh, 2023 ###reference_b13###). 
However, we demonstrate in our experiments that this compromises the quality of the solution.\nAnother approximation is to not update weights and prune weights with the lowest product of magnitude and input activation norm, as done in Wanda (Sun et al., 2023 ###reference_b35###).\nAnother possible solution is to update weights iteratively via gradient descent as in Adaprune (Hubara et al., 2021 ###reference_b24###).\nHere, one update step is proportional to . Assuming is precomputed, one update step takes time. While this looks much better than solving linear regressions, Frantar & Alistarh (2023 ###reference_b13###) as well as our own experiments show that one needs many iterations to achieve reasonable convergence."
28
+ },
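The layer-wise objective described above can be written down directly; the following is a minimal NumPy sketch of the reconstruction error (names and shapes are our own illustration, not from any released code):

```python
import numpy as np

def reconstruction_error(W_hat, W, X):
    """Layer-wise reconstruction error ||X W_hat^T - X W^T||_F^2.

    W, W_hat: (d_out, d_in) original and pruned/updated weights
    X:        (n, d_in) calibration inputs
    """
    return float(np.linalg.norm(X @ W_hat.T - X @ W.T, "fro") ** 2)
```

Minimizing this quantity over the masked weights is exactly the subproblem each method in this section approximates differently.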
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Alternating Direction Method of Multipliers",
33
+ "text": "Alternating direction method of multipliers (ADMM) (Boyd et al., 2011 ###reference_b6###) is an optimization method for solving problems in the form:\nwhere and are convex functions.\nADMM forms the augmented Lagrangian with dual variables and penalty factor :\nTypically, ADMM is stated using the scaled dual variable in the form:\nThe Lagrangian is then optimized via the following iterations:\nIt can be shown that ADMM converges to the optimal solution when and are convex and some other mild assumptions hold (Boyd et al., 2011 ###reference_b6###).\nThe (extended-real-valued) functions and are closed, proper, and convex.\nThe unaugmented Lagrangian has a saddle point, i.e. there exists where for all :\nLet Assumptions 1 and 2 hold. Then:\nas , i.e. the iterates approach feasibility.\napproaches the optimal value as\nOne application of ADMM is solving constrained optimization over a convex set , i.e.:\nThis problem can be rewritten into ADMM form using the indicator function , where if , and otherwise:\nIn this case, the ADMM update becomes:\nHere, is the Euclidean projection onto the set .\nAlso, note that the update is just the original unconstrained problem with a simple quadratic penalty term."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Methods",
39
+ "text": "Here, we propose an alternative solution to finding updated weights in the layer-wise pruning problem.\nOur solution has the same per-iteration complexity as gradient descent but converges much faster.\nRecall that we are given a set of calibration inputs and a mask and are looking for updated weights such that the reconstruction error is minimized.\nWe observe that when the set of zeroed weights is fixed, the valid weight matrices form a convex set .\nIn other words, we are solving the following constrained optimization problem (we omit the subscript for clarity):\nOur objective is also convex, and thus we can use ADMM to solve our optimization problem.\nWe denote our objective as and\nwe will use the indicator function which takes value if and otherwise.\nUsing formulation (1 ###reference_###) and updates (2 ###reference_###), we get the following iterations:\nIn our case, the -update is just a projection onto the set of valid matrices, thus:\nUpdating is very similar to ridge regression and can be computed as:\nFor a fixed calibration input and mask \niterates 3 ###reference_### (with updates 4 ###reference_###, 5 ###reference_###) converge to the optimal solution of the weight update problem.\nSince our algorithm uses ADMM iterations, we only need to prove that assumptions 1 and 2 hold.\n and are clearly closed, proper, and convex functions; thus, assumption 1 holds.\nTo show that assumption 2 holds, we need to prove that there exists such that for all :\n where .\nThere is a globally optimal solution (which can be found by independent linear regressions), where:\n and thus\n.\nIf ( is unmasked and can have any value), then we set .\nIf , then and we set .\nThen all and derivatives of are zero (or must be 0 due to masking) at , and since is convex in and , we have a global optimum for the given and thus .\nAnd thus, assumption 2 holds.\n\u220e\nWe can precompute and cache and , and then one update iteration has complexity, which is the same as the complexity of gradient descent.\nNote that the theoretical results do not say anything about the speed of convergence. In the experimental section, we show that, in practice, we can get high-quality solutions after very few iterations.\nOne can also view the update as a way of pulling pruned weights towards zero. Note that for unpruned weights, the penalty term only limits the step size, but for pruned weights, the value of will have a different sign than the value of , and thus they will be strongly pulled towards zero."
40
+ },
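The projection z-update and ridge-regression-like x-update described in the section above can be sketched in a few lines of NumPy. This is our own illustrative reconstruction, not the paper's code; the variable names, the default `rho`, and the optional diagonal dampening are assumptions:

```python
import numpy as np

def admm_weight_update(W0, X, mask, rho=1.0, iters=20, damp=0.0):
    """ADMM weight update for a fixed pruning mask (illustrative sketch).

    W0:   (d_out, d_in) original dense weights
    X:    (n, d_in) calibration inputs
    mask: (d_out, d_in) boolean, True = weight is kept
    """
    d_in = W0.shape[1]
    XtX = X.T @ X
    if damp > 0:  # optional dampening of the diagonal for numerical stability
        XtX = XtX + damp * np.mean(np.diag(XtX)) * np.eye(d_in)
    # single matrix inversion, reused by every iteration
    inv = np.linalg.inv(XtX + rho * np.eye(d_in))
    W0XtX = W0 @ XtX
    Z = W0 * mask               # feasible starting point
    U = np.zeros_like(Z)
    for _ in range(iters):
        # x-update: ridge-regression-like solve, pulled toward Z - U
        W = (W0XtX + rho * (Z - U)) @ inv
        # z-update: Euclidean projection onto the masked set
        Z = (W + U) * mask
        # dual update
        U = U + W - Z
    return Z
```

The inverse of `XtX + rho * I` is computed once up front and reused, which is why one iteration costs no more than a single gradient step.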
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Mask selection and preconditioning",
45
+ "text": "In the previous section, we described how to update weights when we are given the sparsity mask.\nNow, we will discuss how to select the mask for pruning.\nWanda (Sun et al., 2023 ###reference_b35###) is a simple rule for selecting a high-quality mask for pruning LLMs.\nInstead of selecting the weights with the largest value (magnitude pruning), it selects weights with the highest product of weight absolute value and input neuron norm, i.e. .\nIn our implementation, we follow this selection rule, but we use the norm of the inputs as preconditioning.\nWe multiply the weight matrix by the feature norms and divide the calibration inputs by their feature norms, run the ADMM algorithm, and then normalize the weight matrix back.\nNote that after the preconditioning, selecting the mask by weight magnitude is equivalent to the Wanda algorithm and that the diagonal of contains only ones.\nThe Wanda paper also suggests keeping a constant number of weights per output. We found that in our case with weight update, this constraint is actually slightly detrimental, and in our work, we select the top weights for the whole layer."
46
+ },
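The selection rule and the norm preconditioning above can be sketched as follows (our own illustrative code; the function names and the `eps` guard against zero-norm features are assumptions):

```python
import numpy as np

def wanda_mask(W, X, sparsity):
    """Keep the top weights of the whole layer by |W_ij| * ||X_:,j||_2."""
    score = np.abs(W) * np.linalg.norm(X, axis=0)
    k = int(round(W.size * sparsity))        # number of weights to prune
    order = np.argsort(score, axis=None)     # flat indices, ascending score
    mask = np.ones(W.size, dtype=bool)
    mask[order[:k]] = False                  # prune the k lowest-scoring
    return mask.reshape(W.shape)

def precondition(W, X, eps=1e-8):
    """Scale so magnitude selection on W_s matches the Wanda rule on W."""
    norms = np.linalg.norm(X, axis=0) + eps
    return W * norms, X / norms              # undo later with W_s / norms
```

After `precondition`, the input features of `X / norms` have (near-)unit norm, so ranking by the magnitude of `W * norms` reproduces the Wanda score.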
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Gradual pruning",
51
+ "text": "Until now, we considered a scenario where one first selects the pruning mask and then updates the weights.\nHere, we propose a simple extension to our algorithm, which progressively prunes more and more weights and simultaneously computes the weight update. Note that this still happens during one forward pass; we just apply multiple iterations to each layer-wise problem.\nWe adopt the cubic sparsity schedule from Zhu & Gupta (2018 ###reference_b48###), where the sparsity at step is computed as\n, where is the final sparsity and is the number of sparsification steps.\nIn each step, we set weights to zero and then proceed with the ADMM update. Note that the only overhead of gradual pruning is the mask selection added to each step.\nWhile represents the current valid solution, we found that it is slightly better to use for selecting the weights to prune. This is actually the optimal choice if our constraint (function ) were a specific sparsity rather than a predefined mask.\nWe summarize our pruning algorithm in Algorithm 1 ###reference_###.\nWe also extend gradual pruning to structured 2:4 sparsity using the following straightforward idea. Our final sparsity will be . If in step our target sparsity is , then we always keep the two highest elements from each group of four and then prune weights from the remaining ones."
52
+ },
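The cubic schedule referenced above can be sketched in one function. The exact formula is our reading of Zhu & Gupta (2018), s_t = s_f * (1 - (1 - t/T)^3), not taken from this paper's code:

```python
def cubic_sparsity(step, total_steps, final_sparsity):
    """Cubic sparsity schedule: prune aggressively early, gently near the end.

    Assumed form from Zhu & Gupta (2018): s_t = s_f * (1 - (1 - t/T)^3).
    """
    t = step / total_steps
    return final_sparsity * (1.0 - (1.0 - t) ** 3)
```

With, say, 15 sparsification steps and a 60% target, the schedule starts at 0% sparsity and increases monotonically to 60% at the final step.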
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Comparison with SparseGPT and Wanda",
57
+ "text": "Compared to SparseGPT (Frantar & Alistarh, 2023 ###reference_b13###), our algorithm computes a more accurate weight update since it does not rely on approximations (we also verify this later in the experimental section). It is difficult to say which mask selection algorithm is better in theory: we gradually prune the whole weight matrix, while SparseGPT iteratively performs optimal selection on groups of columns of the weight matrix. However, in our experiments, our mask selection leads to better results.\nOur algorithm can also be thought of as Wanda (Sun et al., 2023 ###reference_b35###) with added weight updates and gradual pruning."
58
+ },
59
+ {
60
+ "section_id": "3.4",
61
+ "parent_section_id": "3",
62
+ "section_name": "Note on using ADMM with penalty",
63
+ "text": "It is possible to use ADMM to optimize functions under constraint heuristically. This was previously done by Zhang et al. (2018 ###reference_b46###); Ye et al. (2019 ###reference_b42###); Gui et al. (2019 ###reference_b18###). While some of the papers claim that this approach is \"systematic\", in reality, using ADMM with constraint is just a heuristic since the constraint is not convex. Moreover, in our preliminary experiments, we found that ADMM with constraint is very sensitive to the choice of , and for some choices, it will actually run in cycles and not converge."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Experiments",
69
+ "text": "General setup. We implement our algorithms by extending the Wanda (Sun et al., 2023 ###reference_b35###) codebase, which relies on PyTorch and the Hugging Face library.\nSimilarly to Wanda and SparseGPT, we use 128 calibration samples from the C4 training dataset (Raffel et al., 2020 ###reference_b32###).\nWe run pruning on a machine with two Quadro RTX 5000 GPUs (each with 16GB of GPU memory). Since we prune layers sequentially in order, we only need to load one layer into GPU memory at a time. This allows us to prune 70B-parameter LLaMA models using relatively small GPUs. Unless stated otherwise, we prune for iterations, using sparsification steps, and set the dampening factor to and the ADMM penalty factor to .\nWe compare our methods to Wanda (Sun et al., 2023 ###reference_b35###), which does not perform a weight update and simply prunes the weights with the lowest product of magnitude and activation norm, and SparseGPT (Frantar & Alistarh, 2023 ###reference_b13###), which uses multiple approximations to select pruned weights and calculate weight updates. For both methods, we use their public implementations and default hyperparameter settings.\nModels and evaluation. We test our methods on LLaMA (Touvron et al., 2023a ###reference_b36###) and LLaMA2 (Touvron et al., 2023b ###reference_b37###) models. Similarly to previous works (Frantar & Alistarh, 2023 ###reference_b13###; Sun et al., 2023 ###reference_b35###), we measure the performance of pruned models on language modeling and zero-shot tasks.\nOur main focus is perplexity on held-out WikiText (Merity et al., 2016 ###reference_b30###), considered a go-to metric for evaluating language model compression (Frantar & Alistarh, 2023 ###reference_b13###).\nAs additional verification, we use the same seven tasks as Wanda from the EleutherAI LM Harness (Gao et al., 2021 ###reference_b15###).\n###figure_1### ###figure_2###"
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Reconstruction error convergence",
75
+ "text": "As a first experiment, we study the quality of our update algorithm. We use a fixed sparsity mask derived using Wanda with 50% sparsity and observe reconstruction error convergence in one layer.\nWe compare our algorithm to gradient-based approaches using Adam and SGD optimizers with varying learning rates. We also compare it to the SparseGPT update (without mask selection) used in the Wanda paper.\nThe results for selected layers of LLaMA-7b are presented in Figure 1.\nOur ADMM-based algorithm is superior to both gradient-based algorithms and SparseGPT as it converges almost instantly after computing the initial matrix inverse. We also note that ADMM works well with the default setting of and does not require learning rate tuning, which starkly contrasts with SGD and Adam, which have different optimal learning rates in different layers."
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "Weight update quality comparison",
81
+ "text": "In this experiment, we first prune each layer of LLaMA-7B to 60% or 80% sparsity using Wanda mask selection and then update the weights using either a gradient-based (via Adam) or the ADMM update. We select the pruning mask in a single step, i.e., we do not do any gradual mask selection. We test using 1, 10, 20, 50, and 100 update steps. We also test the performance of the SparseGPT weight update and, for reference, include results of running SparseGPT with its own gradual mask selection.\nWe measure perplexity on Wikitext and time overhead (over the forward pass) for each update option.\nUsing just one update step, we can almost beat SparseGPT and all gradient-based algorithms (Figure 2 ###reference_###). The ADMM update almost converges within ten update steps, while the gradient-based algorithms need more than 100 steps.\nADMM is thus clearly a faster and superior weight update algorithm compared to the gradient-based update.\nOur algorithm also provides a better weight update than the SparseGPT update, and at 60% sparsity, it is even better than SparseGPT with its own iterative mask selection.\nFurthermore, we explicitly compare the SparseGPT and ADMM weight updates over different weight masks.\nWe select either the Wanda or the SparseGPT mask and apply the SparseGPT or ADMM weight update (in the case of the SparseGPT mask, the SparseGPT update is a no-op, and for the ADMM update, we rewind the weights and keep the selected mask).\nResults are summarized in Table 2 ###reference_###. Our ADMM weight update is always better than the SparseGPT update. Note that our mask selection is also better than the SparseGPT one (9.22 vs. 9.92 perplexity)."
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "Pruning LLaMA-7B",
87
+ "text": "Based on previous observations, we set the number of update iterations to 20, which should provide a pruning overhead similar to SparseGPT (Table 3 ###reference_###) and also guarantee reasonable convergence of weight updates.\nWe compare our weight update after mask selection without gradual pruning (ADMM1), our gradual pruning algorithm, which computes the mask over 15 iterations (ADMM-Grad) with Wanda and SparseGPT pruning.\nWe prune LLaMA-7b to various sparsities and also with 2:4 structured sparsity.\nFirst, we measure Wikitext perplexity (Table 1 ###reference_###). We see that our weight update over a fixed Wanda mask (ADMM1) produces better results than any other algorithm at 50%, 60%, and 2:4 sparsities. Note that SparseGPT generates the pruning mask iteratively, which gives it a slight edge in higher sparsities.\nWhen selecting the mask gradually, we are superior to all previously developed algorithms, especially at higher sparsities.\nFinally, we measure performance on seven zero-shot tasks (we use the same selection as the authors of Wanda):\nBoolQ (Clark et al., 2019 ###reference_b8###), RTE (Wang et al., 2018 ###reference_b39###), HellaSWAG (Zellers et al., 2019 ###reference_b44###), WinoGrande (Sakaguchi et al., 2021 ###reference_b33###), ARC easy and challenge (Clark et al., 2018 ###reference_b9###), and OpenbookQA (Mihaylov et al., 2018 ###reference_b31###).\nOur results (Table 4 ###reference_###) show that our algorithm is superior to the previous ones except for the RTE task. We note that results for the RTE task are slightly erratic (e.g. there is better performance at 60% sparsity than at 50%). We attribute this to the small RTE dataset size (277 samples).\nNotably, we recover 30-40% of the performance drop of SparseGPT on the BoolQ task at 50-70% sparsities and also on WinoGrande task using 50-60% sparsities. When using 2:4 sparsity, we recover 20-25% of the performance drop on WinoGrande and ARC-e tasks."
88
+ },
89
+ {
90
+ "section_id": "4.4",
91
+ "parent_section_id": "4",
92
+ "section_name": "Pruning LLaMA-2 variants",
93
+ "text": "Our algorithm generalizes and scales to bigger LLMs. We test it on variants of LLaMA-2 at various sparsity levels. Table 5 ###reference_### shows that our method is superior\nto previous ones, except at 2:4 sparsity on LLaMA2-70B.\nWe note quite a substantial improvement of our algorithm over previous ones at 60% sparsity and also at 2:4 sparsity on 7B and 13B models."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Related Work",
99
+ "text": "General neural network pruning. Post-training network pruning aims to compress neural networks by removing some of their parts (weights, neurons, layers) (LeCun et al., 1989 ###reference_b26###; Han et al., 2015 ###reference_b19###; Blalock et al., 2020 ###reference_b5###; Liu et al., 2018 ###reference_b27###).\nPruning criteria vary from simple magnitude pruning (Zhu & Gupta, 2018 ###reference_b48###) to sophisticated second-order approximations (Singh & Alistarh, 2020 ###reference_b34###). Nowadays, there is also a focus on methods that use limited calibration data and do very little fine-tuning (Frantar & Alistarh, 2022 ###reference_b12###; Hubara et al., 2021 ###reference_b24###).\nLLM pruning algorithms. Due to the sheer size of LLMs, weight pruning algorithms have focused mainly on pruning with limited calibration data and fine-tuning. SparseGPT (Frantar & Alistarh, 2023 ###reference_b13###) solves the layer-wise pruning problem using multiple approximations.\nWanda (Sun et al., 2023 ###reference_b35###) shows that a simple product of weight magnitude and input activation norm provides a competitive pruning criterion. DST (Zhang et al., 2023 ###reference_b47###) provides an iterative mask improvement algorithm.\nAnother possibility for LLM pruning is structured pruning. One can either remove individual neurons (Ma et al., 2023 ###reference_b28###; Ashkboos et al., 2024 ###reference_b3###) or remove whole layers (Men et al., 2024 ###reference_b29###; Gromov et al., 2024 ###reference_b16###).\nTarget-specific distillation and tuning. One can also make neural networks smaller by using knowledge distillation (Hinton et al., 2015 ###reference_b21###). In the LLM context, this is usually done with a specific task in mind (Hsieh et al., 2023 ###reference_b22###; Fu et al., 2023 ###reference_b14###; Gu et al., 2023 ###reference_b17###; Ko et al., 2024 ###reference_b25###), where a large general model's knowledge (logits) is distilled into a smaller task-specific model. 
This is in contrast with our method, which aims to preserve the general ability of the original LLM."
100
+ },
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "Conclusions and Future Work",
105
+ "text": "In this work, we presented a simple, fast, and effective post-pruning weight update algorithm based on the alternating direction method of multipliers. We showed that our algorithm converges much faster than any previously available option. Our weight update method is also theoretically sound and does not rely on any heuristic decisions or approximations.\nWe further improved the pruning performance by doing gradual mask selection and weight updates.\nThis achieves state-of-the-art performance in the layer-wise pruning setting, much better than previous solutions like Wanda or SparseGPT.\nOur main limitation is that our update rule runs over dense matrices, and thus, during update computation, we have no time or space savings from potential sparsity. We hope to address this in future work.\nAnother limitation is that one-shot pruned large models are still inferior to smaller dense ones.\nThe pruning results can certainly be improved by using nonuniform sparsity across layers (Yin et al., 2023 ###reference_b43###); for now, we leave this as future work. Another option for improvement is to use a more accurate mask selection rule, such as the one in Optimal Brain Surgeon (Hassibi et al., 1993 ###reference_b20###).\nFinally, our algorithm provides an efficient update rule for sparse matrices and can be used in some advanced optimizers like FOOF (Benzing, 2022 ###reference_b4###)."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>WikiText perplexity of pruned LLaMA-7B. Our ADMM-based methods are superior to previous ones.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S1.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.1.1.1\">Method</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.1.1.2\">Sparsity</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.1.1.3\">Perplexity</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.2.2.1\">Dense</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.2.2.2\">0 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.2.2.3\">5.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.3.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.3.2\">50 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.3.3.3\">7.26</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.4.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.4.2\">50 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.4.4.3\">7.22</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.5.5\" style=\"background-color:#F3F3F3;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.5.1\"><span class=\"ltx_text\" id=\"S1.T1.1.5.5.1.1\" style=\"background-color:#F3F3F3;\">ADMM1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.5.2\"><span class=\"ltx_text\" id=\"S1.T1.1.5.5.2.1\" style=\"background-color:#F3F3F3;\">50 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.5.5.3\"><span class=\"ltx_text\" 
id=\"S1.T1.1.5.5.3.1\" style=\"background-color:#F3F3F3;\">7.20</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.6.6\" style=\"background-color:#E8E8E8;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.6.1\"><span class=\"ltx_text\" id=\"S1.T1.1.6.6.1.1\" style=\"background-color:#E8E8E8;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.6.2\"><span class=\"ltx_text\" id=\"S1.T1.1.6.6.2.1\" style=\"background-color:#E8E8E8;\">50 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.1.6.6.3.1\" style=\"background-color:#E8E8E8;\">7.06</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.7.7.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.7.7.2\">60 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.7.7.3\">10.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.8.8.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.8.8.2\">60 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.8.8.3\">10.51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.9.9\" style=\"background-color:#F3F3F3;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.9.9.1\"><span class=\"ltx_text\" id=\"S1.T1.1.9.9.1.1\" style=\"background-color:#F3F3F3;\">ADMM1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.9.9.2\"><span class=\"ltx_text\" id=\"S1.T1.1.9.9.2.1\" style=\"background-color:#F3F3F3;\">60 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.9.9.3\"><span class=\"ltx_text\" id=\"S1.T1.1.9.9.3.1\" style=\"background-color:#F3F3F3;\">9.96</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.10.10\" style=\"background-color:#E8E8E8;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.10.10.1\"><span class=\"ltx_text\" id=\"S1.T1.1.10.10.1.1\" 
style=\"background-color:#E8E8E8;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.10.10.2\"><span class=\"ltx_text\" id=\"S1.T1.1.10.10.2.1\" style=\"background-color:#E8E8E8;\">60 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.10.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.1.10.10.3.1\" style=\"background-color:#E8E8E8;\">9.22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.11.11.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.11.11.2\">70 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.11.11.3\">85.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.12.12.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.12.12.2\">70 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.12.12.3\">26.30</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.13.13\" style=\"background-color:#F3F3F3;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.13.13.1\"><span class=\"ltx_text\" id=\"S1.T1.1.13.13.1.1\" style=\"background-color:#F3F3F3;\">ADMM1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.13.13.2\"><span class=\"ltx_text\" id=\"S1.T1.1.13.13.2.1\" style=\"background-color:#F3F3F3;\">70 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.13.13.3\"><span class=\"ltx_text\" id=\"S1.T1.1.13.13.3.1\" style=\"background-color:#F3F3F3;\">26.31</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.14.14\" style=\"background-color:#E8E8E8;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.14.14.1\"><span class=\"ltx_text\" id=\"S1.T1.1.14.14.1.1\" style=\"background-color:#E8E8E8;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.14.14.2\"><span class=\"ltx_text\" id=\"S1.T1.1.14.14.2.1\" style=\"background-color:#E8E8E8;\">70 %</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.14.14.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.1.14.14.3.1\" style=\"background-color:#E8E8E8;\">18.66</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.15.15.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.15.15.2\">80 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.15.15.3\">5e3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.16.16.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.16.16.2\">80 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.16.16.3\">154.75</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.17.17\" style=\"background-color:#F3F3F3;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.17.17.1\"><span class=\"ltx_text\" id=\"S1.T1.1.17.17.1.1\" style=\"background-color:#F3F3F3;\">ADMM1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.17.17.2\"><span class=\"ltx_text\" id=\"S1.T1.1.17.17.2.1\" style=\"background-color:#F3F3F3;\">80 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.17.17.3\"><span class=\"ltx_text\" id=\"S1.T1.1.17.17.3.1\" style=\"background-color:#F3F3F3;\">202.04</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.18.18\" style=\"background-color:#E8E8E8;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.18.18.1\"><span class=\"ltx_text\" id=\"S1.T1.1.18.18.1.1\" style=\"background-color:#E8E8E8;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.18.18.2\"><span class=\"ltx_text\" id=\"S1.T1.1.18.18.2.1\" style=\"background-color:#E8E8E8;\">80 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.18.18.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.1.18.18.3.1\" style=\"background-color:#E8E8E8;\">69.46</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.19.19\">\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.19.19.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.19.19.2\">2:4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.1.19.19.3\">11.53</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.20.20\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.20.20.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.20.20.2\">2:4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.20.20.3\">11.00</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.21.21\" style=\"background-color:#F3F3F3;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.21.21.1\"><span class=\"ltx_text\" id=\"S1.T1.1.21.21.1.1\" style=\"background-color:#F3F3F3;\">ADMM1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.21.21.2\"><span class=\"ltx_text\" id=\"S1.T1.1.21.21.2.1\" style=\"background-color:#F3F3F3;\">2:4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.1.21.21.3\"><span class=\"ltx_text\" id=\"S1.T1.1.21.21.3.1\" style=\"background-color:#F3F3F3;\">10.38</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.1.22.22\" style=\"background-color:#E8E8E8;\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.1.22.22.1\"><span class=\"ltx_text\" id=\"S1.T1.1.22.22.1.1\" style=\"background-color:#E8E8E8;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.1.22.22.2\"><span class=\"ltx_text\" id=\"S1.T1.1.22.22.2.1\" style=\"background-color:#E8E8E8;\">2:4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S1.T1.1.22.22.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.1.22.22.3.1\" style=\"background-color:#E8E8E8;\">9.90</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: WikiText perplexity of pruned LLaMA-7B. Our ADMM-based methods are superior to previous ones."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparison of weight update quality between ADMM and SparseGPT on Llama-7B using 60% sparsity.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.1.1\">Mask selection</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.1.2\">Weight update</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.1.1.1.3\">Perplexity</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.2.1.2\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.3\">10.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.3.2.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.3.2.2\">ADMM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.3\">9.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.3.2\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.3.3\">10.51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.4.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.4.2\">ADMM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.3\">9.92</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 2: Comparison of weight update quality between ADMM and SparseGPT on Llama-7B using 60% sparsity."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Total pruning time for Llama-7B</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S4.T3.1.1.1.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T3.1.1.1.2\">Total seconds</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.1.1\">Wanda</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.2\">245</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.3.2.1\">SparseGPT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.2\">850</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.4.3.1\">ADMM1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.2\">832</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.5.4.1\">ADMM-Grad</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.2\">869</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 3: Total pruning time for Llama-7B"
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Zero shot accuracies on various tasks during pruning of LLaMA-7B</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.1\">Sparsity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.2\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.3\">BoolQ</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.4\">RTE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.5\">HellaSwag</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.6\">WinoGrande</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.7\">ARC-e</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.8\">ARC-c</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.9\">OBQA</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T4.1.1.1.10\">Mean</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.1\">0 %</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.2\">Dense</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.3\">75.05</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.4\">66.43</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.5\">56.92</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" 
id=\"S4.T4.1.2.2.6\">69.93</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.7\">75.34</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.8\">41.89</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.9\">34.40</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.2.2.10\">59.99</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T4.1.3.1.1.1\">50%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.2\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.3\">71.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.3.1.4.1\">55.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.5\">51.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.6\">66.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.7\">69.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.8\">36.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.9\">28.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.3.1.10\">54.21</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.2\">73.05</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.3\">52.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.4\">51.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.5\">68.42</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T4.1.4.2.6\">70.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.7\">36.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.8\">28.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.2.9\">54.39</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.5.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.1\"><span class=\"ltx_text\" id=\"S4.T4.1.5.3.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.2.1\" style=\"background-color:#EEEEEE;\">73.63</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.3\"><span class=\"ltx_text\" id=\"S4.T4.1.5.3.3.1\" style=\"background-color:#EEEEEE;\">52.34</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.4.1\" style=\"background-color:#EEEEEE;\">52.33</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.5.1\" style=\"background-color:#EEEEEE;\">69.13</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.6.1\" style=\"background-color:#EEEEEE;\">70.74</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.7.1\" style=\"background-color:#EEEEEE;\">37.88</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.8.1\" style=\"background-color:#EEEEEE;\">30.20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.3.9.1\" style=\"background-color:#EEEEEE;\">55.18</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.1\" rowspan=\"3\"><span 
class=\"ltx_text\" id=\"S4.T4.1.6.4.1.1\">60%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.2\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.3\">69.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.4\">59.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.5\">43.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.6\">62.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.7\">62.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.8\">30.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.9\">25.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.4.10\">50.43</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.2\">70.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.7.5.3.1\">62.09</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.4\">44.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.5\">65.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.6\">64.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.7\">30.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.8\">25.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.5.9\">51.93</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.8.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.1\"><span class=\"ltx_text\" id=\"S4.T4.1.8.6.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.2.1\" style=\"background-color:#EEEEEE;\">72.41</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T4.1.8.6.3\"><span class=\"ltx_text\" id=\"S4.T4.1.8.6.3.1\" style=\"background-color:#EEEEEE;\">58.84</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.4.1\" style=\"background-color:#EEEEEE;\">46.61</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.5.1\" style=\"background-color:#EEEEEE;\">66.77</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.6.1\" style=\"background-color:#EEEEEE;\">64.52</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.7.1\" style=\"background-color:#EEEEEE;\">31.65</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.8.1\" style=\"background-color:#EEEEEE;\">26.20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.8.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.6.9.1\" style=\"background-color:#EEEEEE;\">52.43</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.9.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T4.1.9.7.1.1\">70%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.2\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.3\">59.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.4\">58.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.5\">28.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.6\">50.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.7\">32.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.8\">18.85</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.9\">14.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.9.7.10\">37.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.10.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.2\">62.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.10.8.3.1\">55.95</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.4\">33.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.5\">59.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.6\">45.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.7\">23.97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.8\">17.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.10.8.9\">42.61</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.11.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.1\"><span class=\"ltx_text\" id=\"S4.T4.1.11.9.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.2.1\" style=\"background-color:#EEEEEE;\">66.05</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.3\"><span class=\"ltx_text\" id=\"S4.T4.1.11.9.3.1\" style=\"background-color:#EEEEEE;\">53.79</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.4.1\" style=\"background-color:#EEEEEE;\">36.29</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.5.1\" style=\"background-color:#EEEEEE;\">59.74</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.6.1\" 
style=\"background-color:#EEEEEE;\">50.84</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.7.1\" style=\"background-color:#EEEEEE;\">25.50</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.8.1\" style=\"background-color:#EEEEEE;\">18.60</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.11.9.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.11.9.9.1\" style=\"background-color:#EEEEEE;\">44.40</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.12.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T4.1.12.10.1.1\">80%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.2\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.3\">37.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.4\">48.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.5\">26.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.6\">48.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.7\">27.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.12.10.8.1\">20.56</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.9\">13.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.12.10.10\">31.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.13.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.2\">41.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.13.11.3.1\">52.70</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.4\">27.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.5\">48.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.6\">30.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.7\">18.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.13.11.8.1\">13.40</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.13.11.9\">33.32</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.14.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.1\"><span class=\"ltx_text\" id=\"S4.T4.1.14.12.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.14.12.2.1\" style=\"background-color:#EEEEEE;\">56.14</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.14.12.3.1\" style=\"background-color:#EEEEEE;\">52.70</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.14.12.4.1\" style=\"background-color:#EEEEEE;\">28.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.14.12.5.1\" style=\"background-color:#EEEEEE;\">50.74</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.14.12.6.1\" style=\"background-color:#EEEEEE;\">31.56</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.7\"><span class=\"ltx_text\" id=\"S4.T4.1.14.12.7.1\" style=\"background-color:#EEEEEE;\">18.94</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.14.12.8\"><span class=\"ltx_text\" id=\"S4.T4.1.14.12.8.1\" style=\"background-color:#EEEEEE;\">12.40</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T4.1.14.12.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.14.12.9.1\" style=\"background-color:#EEEEEE;\">35.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.15.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T4.1.15.13.1.1\">2:4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.2\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.3\">69.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.4\">51.99</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.5\">42.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.6\">62.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.7\">60.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.8\">28.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.9\">24.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.15.13.10\">48.53</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.16.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.16.14.2.1\">70.46</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.16.14.3.1\">60.65</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.4\">42.99</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.5\">64.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.6\">61.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.7\">30.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.8\">23.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.16.14.9\">50.60</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T4.1.17.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.1\"><span class=\"ltx_text\" id=\"S4.T4.1.17.15.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.2\"><span class=\"ltx_text\" id=\"S4.T4.1.17.15.2.1\" style=\"background-color:#EEEEEE;\">70.27</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.3\"><span class=\"ltx_text\" id=\"S4.T4.1.17.15.3.1\" style=\"background-color:#EEEEEE;\">55.59</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.17.15.4.1\" style=\"background-color:#EEEEEE;\">44.88</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.17.15.5.1\" style=\"background-color:#EEEEEE;\">66.14</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.17.15.6.1\" style=\"background-color:#EEEEEE;\">64.18</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.17.15.7.1\" style=\"background-color:#EEEEEE;\">30.97</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.17.15.8.1\" style=\"background-color:#EEEEEE;\">25.20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.17.15.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.17.15.9.1\" style=\"background-color:#EEEEEE;\">51.03</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 4: Zero shot accuracies on various tasks during pruning of LLaMA-7B"
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Perplexity of pruned LLaMA-2 variants on WikiText</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T5.1.1.1.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T5.1.1.1.2\">Sparsity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T5.1.1.1.3\">7B</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T5.1.1.1.4\">13 B</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T5.1.1.1.5\">70B</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.1.2.2.1\">Dense</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.1.2.2.2\">0 %</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.1.2.2.3\">5.12</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.1.2.2.4\">4.57</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.1.2.2.5\">3.12</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.3.1.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.3.1.2\">50 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.3.1.3\">6.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.3.1.4\">5.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.3.1.5\">3.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T5.1.4.2.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.4.2.2\">50 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.4.2.3\">6.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.4.2.4\">5.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.4.2.5\">3.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.5.3\" style=\"background-color:#EEEEEE;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.5.3.1\"><span class=\"ltx_text\" id=\"S4.T5.1.5.3.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.5.3.2\"><span class=\"ltx_text\" id=\"S4.T5.1.5.3.2.1\" style=\"background-color:#EEEEEE;\">50 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.5.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.5.3.3.1\" style=\"background-color:#EEEEEE;\">6.33</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.5.3.4.1\" style=\"background-color:#EEEEEE;\">5.52</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.5.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.5.3.5.1\" style=\"background-color:#EEEEEE;\">3.95</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.6.4.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.6.4.2\">60 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.6.4.3\">9.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.6.4.4\">7.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.6.4.5\">4.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.7.5.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.7.5.2\">60 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.7.5.3\">9.58</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.7.5.4\">7.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.7.5.5\">4.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.8.6\" style=\"background-color:#EEEEEE;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.8.6.1\"><span class=\"ltx_text\" id=\"S4.T5.1.8.6.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.8.6.2\"><span class=\"ltx_text\" id=\"S4.T5.1.8.6.2.1\" style=\"background-color:#EEEEEE;\">60 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.8.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.8.6.3.1\" style=\"background-color:#EEEEEE;\">8.70</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.8.6.4.1\" style=\"background-color:#EEEEEE;\">7.09</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.8.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.8.6.5.1\" style=\"background-color:#EEEEEE;\">4.81</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.9.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.9.7.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.9.7.2\">80 %</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.9.7.3\">5e3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.9.7.4\">2e3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.9.7.5\">1e2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.10.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.10.8.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.10.8.2\">80 %</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.10.8.3\">108.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.10.8.4\">94.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.10.8.5\">25.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.11.9\" 
style=\"background-color:#EEEEEE;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.11.9.1\"><span class=\"ltx_text\" id=\"S4.T5.1.11.9.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.11.9.2\"><span class=\"ltx_text\" id=\"S4.T5.1.11.9.2.1\" style=\"background-color:#EEEEEE;\">80 %</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.11.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.11.9.3.1\" style=\"background-color:#EEEEEE;\">55.93</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.11.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.11.9.4.1\" style=\"background-color:#EEEEEE;\">43.58</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.11.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.11.9.5.1\" style=\"background-color:#EEEEEE;\">18.84</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.12.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.12.10.1\">Wanda</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.12.10.2\">2:4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.12.10.3\">11.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.12.10.4\">8.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.12.10.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.12.10.5.1\">5.16</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.13.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.13.11.1\">SparseGPT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.13.11.2\">2:4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.13.11.3\">10.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.13.11.4\">8.32</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.13.11.5\">5.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.14.12\" style=\"background-color:#EEEEEE;\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_b\" id=\"S4.T5.1.14.12.1\"><span class=\"ltx_text\" id=\"S4.T5.1.14.12.1.1\" style=\"background-color:#EEEEEE;\">ADMM-Grad</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.14.12.2\"><span class=\"ltx_text\" id=\"S4.T5.1.14.12.2.1\" style=\"background-color:#EEEEEE;\">2:4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.14.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.14.12.3.1\" style=\"background-color:#EEEEEE;\">9.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.14.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.14.12.4.1\" style=\"background-color:#EEEEEE;\">7.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.14.12.5\"><span class=\"ltx_text\" id=\"S4.T5.1.14.12.5.1\" style=\"background-color:#EEEEEE;\">5.19</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
128
+ "capture": "Table 5: Perplexity of pruned LLaMA-2 variants on WikiText"
129
+ }
130
+ },
131
+ "image_paths": {
132
+ "1": {
133
+ "figure_path": "2401.02938v2_figure_1.png",
134
+ "caption": "Figure 1: Reconstruction error over time (in seconds) during optimization of weights in selected layers of LLaMA-7B. The mask was derived by Wanda using 50% sparsity. We compare our proposed ADMM algorithm to SGD with momentum and Adam using various learning rates. We also compare to the SparseGPT update. Our ADMM update converges much faster than other methods and is better than the SparseGPT update.",
135
+ "url": "http://arxiv.org/html/2401.02938v2/x1.png"
136
+ },
137
+ "2": {
138
+ "figure_path": "2401.02938v2_figure_2.png",
139
+ "caption": "Figure 2: WikiText perplexity vs time overhead for ADMM, Adam, and SparseGPT weight update on LLaMA-7B.\nWe run ADMM and Adam for 1, 10, 20, 50 and 100 update steps and test Adam with various learning rates. The top plot shows 60% sparsity. The bottom one uses 80% sparsity. SparseGPT full refers to normal SparseGPT, which also selects the pruning mask gradually. All other options just update weights over a fixed mask selected by Wanda. Our weight update is better than the one in SparseGPT and better than gradient-based methods.",
140
+ "url": "http://arxiv.org/html/2401.02938v2/x2.png"
141
+ }
142
+ },
143
+ "validation": true,
144
+ "references": [
145
+ {
146
+ "1": {
147
+ "title": "Enabling high-sparsity foundational llama models with efficient pretraining and deployment.",
148
+ "author": "Abhinav Agarwalla, Abhay Gupta, Alexandre Marques, Shubhra Pandit, Michael Goin, Eldar Kurtic, Kevin Leong, Tuan Nguyen, Mahmoud Salem, Dan Alistarh, et al.",
149
+ "venue": "arXiv preprint arXiv:2405.03594, 2024.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "2": {
155
+ "title": "Intriguing properties of quantization at scale.",
156
+ "author": "Arash Ahmadian, Saurabh Dash, Hongyu Chen, Bharat Venkitesh, Stephen Gou, Phil Blunsom, Ahmet \u00dcst\u00fcn, and Sara Hooker.",
157
+ "venue": "arXiv preprint arXiv:2305.19268, 2023.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "3": {
163
+ "title": "Slicegpt: Compress large language models by deleting rows and columns.",
164
+ "author": "Saleh Ashkboos, Maximilian L Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman.",
165
+ "venue": "arXiv preprint arXiv:2401.15024, 2024.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "4": {
171
+ "title": "Gradient descent on neurons and its link to approximate second-order optimization.",
172
+ "author": "Frederik Benzing.",
173
+ "venue": "In International Conference on Machine Learning, pp. 1817\u20131853. PMLR, 2022.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "5": {
179
+ "title": "What is the state of neural network pruning?",
180
+ "author": "Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag.",
181
+ "venue": "Proceedings of machine learning and systems, 2:129\u2013146, 2020.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "6": {
187
+ "title": "Distributed optimization and statistical learning via the alternating direction method of multipliers.",
188
+ "author": "Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al.",
189
+ "venue": "Foundations and Trends\u00ae in Machine learning, 3(1):1\u2013122, 2011.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "7": {
195
+ "title": "Language models are few-shot learners.",
196
+ "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.",
197
+ "venue": "Advances in neural information processing systems, 33:1877\u20131901, 2020.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "8": {
203
+ "title": "Boolq: Exploring the surprising difficulty of natural yes/no questions.",
204
+ "author": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova.",
205
+ "venue": "arXiv preprint arXiv:1905.10044, 2019.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "9": {
211
+ "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge.",
212
+ "author": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.",
213
+ "venue": "arXiv preprint arXiv:1803.05457, 2018.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "10": {
219
+ "title": "The case for 4-bit precision: k-bit inference scaling laws.",
220
+ "author": "Tim Dettmers and Luke Zettlemoyer.",
221
+ "venue": "In International Conference on Machine Learning, pp. 7750\u20137774. PMLR, 2023.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "11": {
227
+ "title": "Llm. int8 (): 8-bit matrix multiplication for transformers at scale.",
228
+ "author": "Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer.",
229
+ "venue": "arXiv preprint arXiv:2208.07339, 2022.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "12": {
235
+ "title": "Optimal brain compression: A framework for accurate post-training quantization and pruning.",
236
+ "author": "Elias Frantar and Dan Alistarh.",
237
+ "venue": "Advances in Neural Information Processing Systems, 35:4475\u20134488, 2022.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "13": {
243
+ "title": "Sparsegpt: Massive language models can be accurately pruned in one-shot.",
244
+ "author": "Elias Frantar and Dan Alistarh.",
245
+ "venue": "In International Conference on Machine Learning, pp. 10323\u201310337. PMLR, 2023.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "14": {
251
+ "title": "Specializing smaller language models towards multi-step reasoning.",
252
+ "author": "Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot.",
253
+ "venue": "In International Conference on Machine Learning, pp. 10421\u201310430. PMLR, 2023.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "15": {
259
+ "title": "A framework for few-shot language model evaluation.",
260
+ "author": "Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, et al.",
261
+ "venue": "Version v0. 0.1. Sept, 2021.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "16": {
267
+ "title": "The unreasonable ineffectiveness of the deeper layers.",
268
+ "author": "Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, and Daniel A Roberts.",
269
+ "venue": "arXiv preprint arXiv:2403.17887, 2024.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "17": {
275
+ "title": "Minillm: Knowledge distillation of large language models.",
276
+ "author": "Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang.",
277
+ "venue": "In The Twelfth International Conference on Learning Representations, 2023.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "18": {
283
+ "title": "Model compression with adversarial robustness: A unified optimization framework.",
284
+ "author": "Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, and Ji Liu.",
285
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "19": {
291
+ "title": "Learning both weights and connections for efficient neural network.",
292
+ "author": "Song Han, Jeff Pool, John Tran, and William Dally.",
293
+ "venue": "Advances in neural information processing systems, 28, 2015.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "20": {
299
+ "title": "Optimal brain surgeon and general network pruning.",
300
+ "author": "Babak Hassibi, David G Stork, and Gregory J Wolff.",
301
+ "venue": "In IEEE international conference on neural networks, pp. 293\u2013299. IEEE, 1993.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "21": {
307
+ "title": "Distilling the knowledge in a neural network.",
308
+ "author": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean.",
309
+ "venue": "arXiv preprint arXiv:1503.02531, 2015.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "22": {
315
+ "title": "Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes.",
316
+ "author": "Cheng-Yu Hsieh, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister.",
317
+ "venue": "arXiv preprint arXiv:2305.02301, 2023.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "23": {
323
+ "title": "Lora: Low-rank adaptation of large language models.",
324
+ "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen.",
325
+ "venue": "arXiv preprint arXiv:2106.09685, 2021.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "24": {
331
+ "title": "Accelerated sparse neural training: A provable and efficient method to find n: m transposable masks.",
332
+ "author": "Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, Joseph Naor, and Daniel Soudry.",
333
+ "venue": "Advances in neural information processing systems, 34:21099\u201321111, 2021.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "25": {
339
+ "title": "Distillm: Towards streamlined distillation for large language models.",
340
+ "author": "Jongwoo Ko, Sungnyun Kim, Tianyi Chen, and Se-Young Yun.",
341
+ "venue": "arXiv preprint arXiv:2402.03898, 2024.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "26": {
347
+ "title": "Optimal brain damage.",
348
+ "author": "Yann LeCun, John Denker, and Sara Solla.",
349
+ "venue": "Advances in neural information processing systems, 2, 1989.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "27": {
355
+ "title": "Rethinking the value of network pruning.",
356
+ "author": "Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell.",
357
+ "venue": "arXiv preprint arXiv:1810.05270, 2018.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "28": {
363
+ "title": "Llm-pruner: On the structural pruning of large language models.",
364
+ "author": "Xinyin Ma, Gongfan Fang, and Xinchao Wang.",
365
+ "venue": "Advances in neural information processing systems, 36:21702\u201321720, 2023.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "29": {
371
+ "title": "Shortgpt: Layers in large language models are more redundant than you expect.",
372
+ "author": "Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, and Weipeng Chen.",
373
+ "venue": "arXiv preprint arXiv:2403.03853, 2024.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "30": {
379
+ "title": "Pointer sentinel mixture models.",
380
+ "author": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher.",
381
+ "venue": "arXiv preprint arXiv:1609.07843, 2016.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "31": {
387
+ "title": "Can a suit of armor conduct electricity? a new dataset for open book question answering.",
388
+ "author": "Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal.",
389
+ "venue": "arXiv preprint arXiv:1809.02789, 2018.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "32": {
395
+ "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.",
396
+ "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.",
397
+ "venue": "The Journal of Machine Learning Research, 21(1):5485\u20135551, 2020.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "33": {
403
+ "title": "Winogrande: An adversarial winograd schema challenge at scale.",
404
+ "author": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi.",
405
+ "venue": "Communications of the ACM, 64(9):99\u2013106, 2021.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "34": {
411
+ "title": "Woodfisher: Efficient second-order approximation for neural network compression.",
412
+ "author": "Sidak Pal Singh and Dan Alistarh.",
413
+ "venue": "Advances in Neural Information Processing Systems, 33:18098\u201318109, 2020.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "35": {
419
+ "title": "A simple and effective pruning approach for large language models.",
420
+ "author": "Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter.",
421
+ "venue": "arXiv preprint arXiv:2306.11695, 2023.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "36": {
427
+ "title": "Llama: Open and efficient foundation language models.",
428
+ "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.",
429
+ "venue": "arXiv preprint arXiv:2302.13971, 2023a.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "37": {
435
+ "title": "Llama 2: Open foundation and fine-tuned chat models.",
436
+ "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.",
437
+ "venue": "arXiv preprint arXiv:2307.09288, 2023b.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "38": {
443
+ "title": "Attention is all you need.",
444
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin.",
445
+ "venue": "Advances in neural information processing systems, 30, 2017.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "39": {
451
+ "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding.",
452
+ "author": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman.",
453
+ "venue": "arXiv preprint arXiv:1804.07461, 2018.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "40": {
459
+ "title": "Flash-llm: Enabling cost-effective and highly-efficient large generative model inference with unstructured sparsity.",
460
+ "author": "Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, and Shuaiwen Leon Song.",
461
+ "venue": "arXiv preprint arXiv:2309.10285, 2023.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "41": {
467
+ "title": "Smoothquant: Accurate and efficient post-training quantization for large language models.",
468
+ "author": "Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han.",
469
+ "venue": "In International Conference on Machine Learning, pp. 38087\u201338099. PMLR, 2023.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "42": {
475
+ "title": "Adversarial robustness vs. model compression, or both?",
476
+ "author": "Shaokai Ye, Kaidi Xu, Sijia Liu, Hao Cheng, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, and Xue Lin.",
477
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 111\u2013120, 2019.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "43": {
483
+ "title": "Outlier weighed layerwise sparsity (owl): A missing secret sauce for pruning llms to high sparsity.",
484
+ "author": "Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Mykola Pechenizkiy, Yi Liang, Zhangyang Wang, and Shiwei Liu.",
485
+ "venue": "arXiv preprint arXiv:2310.05175, 2023.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "44": {
491
+ "title": "Hellaswag: Can a machine really finish your sentence?",
492
+ "author": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi.",
493
+ "venue": "arXiv preprint arXiv:1905.07830, 2019.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "45": {
499
+ "title": "Opt: Open pre-trained transformer language models.",
500
+ "author": "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al.",
501
+ "venue": "arXiv preprint arXiv:2205.01068, 2022.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "46": {
507
+ "title": "A systematic dnn weight pruning framework using alternating direction method of multipliers.",
508
+ "author": "Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, and Yanzhi Wang.",
509
+ "venue": "In Proceedings of the European conference on computer vision (ECCV), pp. 184\u2013199, 2018.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "47": {
515
+ "title": "Dynamic sparse no training: Training-free fine-tuning for sparse llms.",
516
+ "author": "Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, and Rongrong Ji.",
517
+ "venue": "arXiv preprint arXiv:2310.08915, 2023.",
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "48": {
523
+ "title": "To prune, or not to prune: Exploring the efficacy of pruning for model compression.",
524
+ "author": "Michael Zhu and Suyog Gupta.",
525
+ "venue": "In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Workshop Track Proceedings. OpenReview.net, 2018.",
526
+ "url": null
527
+ }
528
+ }
529
+ ],
530
+ "url": "http://arxiv.org/html/2401.02938v2"
531
+ }
20240722/2401.02957v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2401.04152v2.json ADDED
@@ -0,0 +1,323 @@
1
+ {
2
+ "title": "Cross-speaker encoding network for multi-talker speech recognition",
3
+ "abstract": "End-to-end multi-talker speech recognition has garnered great interest as an effective approach to directly transcribe overlapped speech from multiple speakers.\nCurrent methods typically adopt either 1) single-input multiple-output (SIMO) models with a branched encoder, or 2) single-input single-output (SISO) models based on attention-based encoder-decoder architecture with serialized output training (SOT).\nIn this work, we propose a Cross-Speaker Encoding (CSE) network to address the limitations of SIMO models by aggregating cross-speaker representations.\nFurthermore, the CSE model is integrated with SOT to leverage both the advantages of SIMO and SISO while mitigating their drawbacks.\nTo the best of our knowledge, this work represents an early effort to integrate SIMO and SISO for multi-talker speech recognition.\nExperiments on the two-speaker LibrispeechMix dataset show that the CES model reduces word error rate (WER) by 8% over the SIMO baseline.\nThe CSE-SOT model reduces WER by 10% overall and by 16% on high-overlap speech compared to the SOT model.\nCode is available at https://github.com/kjw11/CSEnet-ASR.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Automatic speech recognition (ASR) aims to transcribe human speech into text.\nThanks to the rapid progress of deep learning, advanced models like Conformer [1 ###reference_b1###], RNN-Transducer (RNN-T) [2 ###reference_b2###, 3 ###reference_b3###], and Attention-based Encoder-Decoder (AED) [4 ###reference_b4###, 5 ###reference_b5###] have achieved superior performance in single-speaker ASR tasks.\nMeanwhile, multi-talker ASR has emerged as an active research area [6 ###reference_b6###], seeking to further empower ASR systems to handle more complex conversational scenarios.\nIn particular, natural conversational speech often contains one or multiple speakers, with varying degrees of overlap.\nThese challenges necessitate dedicated models specifically designed to address them.\nThere have been diverse approaches proposed for multi-talker ASR.\nConventional cascaded systems utilize a speaker separation model as a front-end, followed by a regular ASR model for recognition [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###].\nMore recently, end-to-end models have gained attention due to their promising performance.\nEnd-to-end models can be classified into two types: single input multiple output (SIMO) and single input single output (SISO).\nSIMO models employ branch-based architectures that internally separate the mixed speech into isolated branches, followed by shared recognition blocks to transcribe different speakers in parallel [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###].\nTo align branches with respective speakers, permutation invariant training (PIT) [15 ###reference_b15###, 16 ###reference_b16###] or heuristic error assignment training (HEAT) [12 ###reference_b12###, 10 ###reference_b10###] are applied to calculate the ASR loss.\nCompared to cascaded systems, SIMO models combine separation and recognition in a unified structure and without separation loss in 
training.\nIn contrast, SISO models serialize transcriptions of different speakers into a single stream, relying on auto-regressive AED frameworks with serialized output training (SOT) [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###].\nCompared to SIMO models that require pre-defined numbers of speakers and branches, SISO models with SOT leverage an attention-based decoder to transcribe speakers in chronological order, which allows for flexibility in the number of speakers.\nDespite recent advances, limitations persist for both SIMO and SISO approaches.\nSIMO models propagate speakers\u2019 encoding through isolated branches, which could cause repeated or omitted transcriptions [17 ###reference_b17###, 21 ###reference_b21###].\nMoreover, incorporating SIMO with other structures like AED and RNN-T introduces complexity due to the multiple output streams [11 ###reference_b11###].\nMeanwhile, SISO models rely heavily on attention mechanisms to disambiguate speakers without explicitly modeling separation.\nThe lack of prior information for distinguishing speakers\u2019 characteristics could cause performance degradation when facing highly overlapped speech.\nIn this work, we propose a Cross-Speaker Encoding (CSE) network to address the limitations of SIMO approaches. Further, the CSE is integrated with the SOT strategy to leverage the advantages of SIMO and SISO while mitigating their drawbacks. Specifically, we attribute the drawbacks of SIMO to isolating speakers into separate branches. This \u201cone-time deal\u201d precludes the possibility of different branches conditioning on each other, and impedes potential inter-dependencies between speakers. In our proposed CSE network, a cross-encoder, in conjunction with a joint-HEAT module, is employed to jointly encode cross-speaker representations. 
The cross-encoder enables separate branches to condition on each other, while the joint-HEAT simultaneously improves the single-talker performance of the original HEAT and merges the model outputs into a unified stream. Building on the CSE architecture, we further introduce CSE-SOT as the first attempt to integrate these two methods, reflecting a novel combination of their strengths.\nWe conducted experiments on simulated conversational speech with varying speaker numbers and overlap degrees.\nResults demonstrate that CSE outperforms the branch-based baseline with lower complexity.\nAdditionally, CSE-SOT significantly surpasses the SOT model while retaining the capability to generalize to more speakers than seen in the training data.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Cross-Speaker Encoding Network",
15
+ "text": "In this section, we will first review SIMO models and discuss the limitations, then introduce the proposed Cross-Speaker Encoding (CSE) network.\nFor clear demonstration, we focus on the two-speaker case when explaining our method."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Branch-based SIMO model",
21
+ "text": "The branch-based SIMO model provides a unified framework for joint speech separation and recognition.\nAs shown in Figure 1 ###reference_### (a), given a mixture speech feature and ground truth transcripts for two speakers, the model first encodes with a mixture encoder.\nA separation module with two branches then generates separated representations and for each speaker.\nThis separation module can employ either speaker differentiator (SpkrDiff) encoders to encode individual speaker, or masking encoders to mask out unwanted speaker(s).\nThe separated representations are then fed to a shared recognition encoder, which serves as a standard ASR encoder to predict transcripts separately for each of the two branches.\nA key challenge is associating branch output with corresponding target labels .\nPermutation Invariant Training (PIT) [15 ###reference_b15###] addresses this by permuting all mappings and picking the one with minimum loss to update the model, i.e.,\nwhere can be any ASR loss such as connectionist temporal classification (CTC) [22 ###reference_b22###].\nThe PIT method does not assume any prior knowledge of the mixing conditions, making it applicable to all cases, including fully mixed speech with limited evidence for identifying speakers.\nAs a promising alternative to PIT,\nHeuristic Error Assignment Training (HEAT) has been explored in [10 ###reference_b10###, 12 ###reference_b12###].\nBased on the chronological appearance of speakers, HEAT directly assigns the speaker order to simplify the complexity of PIT during training.\nHence the loss can be calculated by:\nFor instance, in the two-speaker overlapped speech scene, one branch will be assigned to always transcribe the first-talking speaker, while another branch for the latter speaker.\nThis strategy has shown to be superior to PIT in streaming multi-talker ASR systems [12 ###reference_b12###]."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Cross Speaker Encoding network",
+ "text": "The limitations of SIMO models have been discussed in prior studies [6 ###reference_b6###, 21 ###reference_b21###].\nFirst, the separate encoding branches propagate errors monotonically through the model layers.\nHence, separation errors in early layers can persist into the recognition encoders, yielding repeated and omitted transcriptions.\nSecond, as SIMO models output multiple streams, incorporating them into frameworks such as AED and RNN-T incurs extra computational cost [11 ###reference_b11###].\nWe attribute the drawbacks of SIMO to isolating speakers into separate branches. This \u201cone-time deal\u201d precludes different branches from conditioning on each other, and ignores potential inter-dependencies between speakers.\nAdditionally, outputting separate branches misaligns with the single-stream manner of common ASR architectures.\nTo address these two points, we propose a Cross-Speaker Encoding (CSE) model comprising two improvements: a cross-encoder and joint-HEAT.\nCross-encoder.\nThe cross-encoder is proposed to model inter-speaker dependencies, as illustrated in Figure 1 ###reference_### (b) and (d).\nIt comprises four steps: 1) concatenating the outputs of the separated branches and the mixture encoding into a joint encoding;\n2) adding a learnable partition-wise positional embedding to the joint encoding, where frames belonging to the same partition share the same positional embedding;\n3) feeding the derived representations into Conformer blocks [1 ###reference_b1###], where the self-attention layer provides a global view, allowing the branches to attend to each other mutually.\nThis enables omission errors to be compensated for by the mixture encoding, and repetitions to be suppressed based on the other branch.\n4) Finally, the joint encoding is clipped back into per-branch segments, i.e., encoded versions of the separated representations with additional context, allowing the shared recognition encoders to generate the respective transcriptions.\nNote that one can directly feed the concatenated encoding into the recognition encoders without clipping.\nHowever, our preliminary experiments show this brings no additional performance gain while introducing considerable computational cost due to the quadratic complexity of self-attention.\nJoint-HEAT.\nWe introduce joint-HEAT as a straightforward solution to unify the separate output streams.\nFirst, we concatenate the outputs of the different branches, then adopt the HEAT loss to disambiguate the labels based on the speaking order, as used in [12 ###reference_b12###].\nSpecifically, during training we concatenate the text labels of multi-talker speech around an sc token, where the first label is associated with the first-talking speaker and the second with the later one.\nThe sc token indicates the speaker-change boundary between texts from separate speakers.\nJoint-HEAT can also improve single-talker performance compared to the original HEAT.\nOur preliminary studies show that on single-speaker speech, models trained with HEAT produce omission errors at the end of sentences.\nThis may be because the separation module attempts to model only the \u201cfirst-talking speaker\u201d, hence omitting part of the tokens.\nConcatenating the outputs makes the predictions consider both branches, empirically alleviating this problem."
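The four cross-encoder steps can be sketched with scalar frames for clarity. Here `blocks` stands in for the joint Conformer blocks, and the per-partition offsets are placeholders for the learnable partition-wise positional embeddings; all names are hypothetical, and equal-length partitions are assumed.

```python
def cross_encode(h1, h2, h_mix, blocks):
    """Sketch of the cross-encoder: concatenate, add partition-wise
    positional embeddings (PPE), encode jointly, then clip per branch."""
    T = len(h1)
    ppe = {"s1": 0.1, "s2": 0.2, "mix": 0.3}  # placeholder PPE, shared within a partition
    joint = ([x + ppe["s1"] for x in h1]      # steps 1-2: concatenate and add PPE
             + [x + ppe["s2"] for x in h2]
             + [x + ppe["mix"] for x in h_mix])
    joint = blocks(joint)                     # step 3: joint Conformer blocks (global view)
    return joint[:T], joint[T:2 * T]          # step 4: clip back into the two branches
```

The clipping in step 4 is what lets the shared recognition encoder stay a standard single-stream ASR encoder while each branch still carries cross-speaker context.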
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Integrated CSE-SOT model",
+ "text": "As joint-HEAT unifies the separate output streams, we explore a hybrid SIMO-SISO system by attaching an attention decoder to a CSE encoder and using SOT to guide the decoder training.\nThe integrated CSE-SOT model complements the weaknesses of each method when used alone.\nFirst, the attention decoder can better handle speech context and temporal dependencies than SIMO models alone.\nSecond, the SIMO structure explicitly models speaker separation, which facilitates speaker disambiguation for the SISO decoder."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Experimental setup",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Dataset",
+ "text": "We use LibriSpeechMix (LSM) [17 ###reference_b17###] as the benchmark dataset in our experiments.\nThis dataset is simulated from the 960-hour LibriSpeech (LS) [23 ###reference_b23###] corpus with 2-speaker (LSM-2mix) and 3-speaker (LSM-3mix) conditions.\nSince LibriSpeechMix only provides standard development and test sets, we simulated a 2-speaker training set following the same protocol as in [17 ###reference_b17###].\nSpecifically, we randomly sample two utterances from the LS training set and mix them with a random delay offset and speed perturbation.\nWe combine the original LS training set and this simulated data, then sample a subset of 400k utterances (1.7k hours) for efficient training.\nFor a realistic evaluation, we expect a multi-talker ASR system to handle both single-talker and multi-talker speech.\nTherefore, we use both the LS and LSM datasets to examine all models.\nFurthermore, given the diverse overlap conditions within the test set, we partition the LSM test set into three subsets, denoted as the low-overlap, median-overlap, and high-overlap scenarios respectively.\nThe corresponding overlap ratios are bounded by (0, 0.2], (0.2, 0.5], and (0.5, 1.0].\nThe overlap ratio here is defined as the number of overlapped frames divided by the total number of frames."
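The overlap-ratio definition and the three-way partition above can be sketched as follows. This is a sketch under the assumption that the "total number of frames" is the union span of the two utterances in the mixture; the function names are hypothetical.

```python
def overlap_ratio(seg1, seg2):
    """Overlapped frames divided by total frames of the 2-speaker mixture.
    seg1 and seg2 are (start, end) frame intervals of the two utterances."""
    (s1, e1), (s2, e2) = seg1, seg2
    overlapped = max(0, min(e1, e2) - max(s1, s2))
    total = max(e1, e2) - min(s1, s2)  # assumed: union span of the mixture
    return overlapped / total

def overlap_bucket(ratio):
    """Map a ratio to the (0, 0.2], (0.2, 0.5], (0.5, 1.0] subsets."""
    if ratio <= 0.2:
        return "low"
    return "median" if ratio <= 0.5 else "high"
```

For example, two 100-frame utterances with a 50-frame delay overlap for 50 of 150 total frames, an overlap ratio of about 0.33, i.e., the median-overlap subset.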
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Model settings",
+ "text": "We implement all models based on the Conformer ASR encoder using the ESPnet2 [24 ###reference_b24###] toolkit.\nWe use 80-dimensional Mel-filterbank features as input, with the speed perturbation described above.\nFor the SIMO model, on top of the original convolutional subsampling layer, we use the same CNN layer as the Mix encoder, 4 Conformer blocks as the SpkrDiff encoder for each branch, and finally 8 Conformer blocks as the shared recognition encoder.\nThere are therefore 16 Conformer blocks in total, each with a 4-head self-attention of 256 hidden units and two 1024-dimensional feed-forward layers (macaron style).\nOn top of the SIMO model, the CSE model uses the same Mix and SpkrDiff encoders, but 2 Conformer blocks as the cross-encoder and 6 Conformer blocks as the recognition encoder.\nTherefore, the SIMO and CSE models have the same number of parameters (33.20M).\nAs for the SOT baseline, a Conformer encoder with 16 blocks is adopted, followed by an 8-block Transformer decoder.\nEach Transformer block comprises a self-attention layer with 4 attention heads and 256 hidden units, but a 2048-dimensional feed-forward layer.\nThe CSE-SOT model uses the same encoder as the CSE model and the same decoder as the SOT model.\nAs a result, both the CSE-SOT and SOT models have 45.24M parameters."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Training settings & Metrics",
+ "text": "For all of our experiments, we train the models for 35 epochs and average the best 10 checkpoints on the dev set to obtain the final models.\nWe use the Adam optimizer with a warm-up learning-rate schedule.\nFor the SOT and CSE-SOT models, we use joint CTC/attention [25 ###reference_b25###] as the training objective.\nFor single-speaker cases, we evaluate using the standard word error rate (WER). For multi-talker samples, we apply the permutation-invariant WER as in [17 ###reference_b17###], which chooses the speaker order with the lowest WER for scoring.\nHowever, we observed that the overall WER is dominated by samples with only mild overlaps (i.e., over 40% of samples have overlap ratios no higher than 0.2). Therefore, we further employ an overlap-aware WER (OA-WER) that averages WERs across the different overlap-ratio subsets, in order to assess the models\u2019 ability to handle different degrees of overlap."
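The scoring described above can be sketched as follows: `wer` is a standard edit-distance WER, `pi_wer` picks the speaker order with the lowest error, and `oa_wer` averages per-subset WERs. This is a simplified sketch with hypothetical names; real permutation-invariant scoring may pool errors over total reference words rather than averaging per-utterance rates.

```python
import itertools

def wer(ref, hyp):
    """Word error rate via Levenshtein distance over word lists."""
    d = [[i if j == 0 else (j if i == 0 else 0) for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

def pi_wer(refs, hyps):
    """Permutation-invariant WER: score with the speaker order
    that yields the lowest average error."""
    return min(
        sum(wer(r, h) for r, h in zip(refs, perm)) / len(refs)
        for perm in itertools.permutations(hyps)
    )

def oa_wer(wer_by_subset):
    """Overlap-aware WER: average WERs across overlap-ratio subsets."""
    return sum(wer_by_subset.values()) / len(wer_by_subset)
```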
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Results and Discussions",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Results of CSE",
+ "text": "PIT vs. HEAT.\nFirst, we compare the baseline SIMO approach, as depicted in Figure 1 ###reference_### (a), trained with either the PIT or the HEAT loss.\nAs shown in Table 1 ###reference_###, the HEAT model (system A2) is significantly worse than the PIT model (system A1) on single-talker cases.\nAs discussed in Section 2.2 ###reference_###, our experiments observed that the HEAT model frequently produces omission errors at the end of sentences.\nThis may be because, in our setting, HEAT models use only one branch output to transcribe single-talker speech, without any constraint on the other branch to suppress potential token leakage.\nIn contrast, on the LibrispeechMix-2mix (LSM-2mix) multi-talker set, the HEAT model shows superior accuracy to PIT, which aligns with the observations in [12 ###reference_b12###].\nThis improvement is consistent across different overlap ratios, especially for high-overlap samples (16.5% vs. 17.4%).\nWe argue that explicitly assigning one SpkrDiff encoder to capture one specific speaker (e.g., the first-talking one) can well guide this encoder to learn speaker-specific patterns.\nProposed methods.\nIn system B1 of Table 1 ###reference_###, we first replace HEAT with our proposed joint-HEAT.\nThe performance on single-talker utterances is boosted by a large margin as expected (7.1% vs. 9.9% on test-clean), while the performance on multi-talker speech is equivalent (12.3% vs. 12.2% on OA-WER).\nBuilding on system B1, system B2 demonstrates the effectiveness of the proposed CSE model, which is further equipped with a cross-encoder.\nFor single-talker cases, the CSE model further improves the performance and closes the gap with the PIT model.\nFor multi-talker cases, the introduction of the cross-encoder leads to additional improvement, possibly due to the benefit of information sharing between the two branches.\nWe discuss this hypothesis in the visualization section below.\nAblation study on the cross-encoder.\nTo validate the effectiveness of the two components of the cross-encoder, we conducted ablation studies by removing internal components. First, system C1 removes the partition-wise positional embedding (PPE).\nWithout PPE, the cross-encoder is not explicitly informed of which frames belong to which partition, and hence relies solely on distance information from relative positional encoding.\nThis leads to a considerable performance decline, especially for high-overlap speech (16.8% vs. 16.1%).\nSystem C2 removes the mixture encoding from the joint encoding (i.e., only the representations of the two branches are concatenated).\nIn this case, information omitted by both branches may not be recovered.\nThis could explain the consistent performance degradation on multi-talker speech."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Results of CSE-SOT",
+ "text": "CSE-SOT vs. SOT.\nWe compare the baseline SOT model with the proposed CSE-SOT model, shown as systems D1 and D2 in Table 1 ###reference_###.\nIt is not surprising that CSE-SOT shows no improvement on single-talker cases, since SOT inherits a standard AED architecture designed for single-talker ASR.\nFor multi-talker cases, CSE-SOT shows equivalent performance on low-overlap speech (7.2% vs. 7.3%), while showing remarkable improvement in both the median and high-overlap scenarios.\nIn particular, compared to the SOT baseline, the CSE-SOT model attains a 16% relative improvement (8.3% vs. 9.9%) on median-overlap speech, and a 14% relative improvement (12.0% vs. 13.9%) on high-overlap speech.\nThese findings indicate that SOT, relying solely on cross-attention, has limitations in transcribing overlapping speech beyond mild conditions.\nIn contrast, the proposed CSE-SOT, serving as a hybrid SIMO-SISO framework, explicitly models separation with a SIMO structure and offers a straightforward solution to mitigate the above drawback.\nGeneralizing to more speakers.\nOne benefit of the SOT model is its ability to generalize to more speakers than seen in the training data.\nWe also evaluate the CSE-SOT model under this condition.\nAs shown in Table 2 ###reference_###, the CSE-SOT model retains this ability even though there are only two branches in the CSE module.\nNote that the results of CSE-SOT are slightly worse than the SOT baseline.\nA possible reason is that one of the branches must encode two speakers simultaneously, which may make it harder for the decoder to further distinguish these two.\nMore investigation will be conducted in our future work for more insight into this phenomenon."
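The relative improvements quoted against Table 1 follow the usual relative WER reduction; a small helper (hypothetical name) makes such figures easy to check:

```python
def relative_wer_reduction(baseline_wer, system_wer):
    """Relative WER reduction of a system over a baseline, in percent."""
    return 100.0 * (baseline_wer - system_wer) / baseline_wer
```

For instance, 9.9% -> 8.3% on median-overlap speech is about a 16% relative reduction, and 13.9% -> 12.0% on high-overlap speech about 14%.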
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Visualization",
+ "text": "To better understand the CSE model, we investigate the self-attention layer in the Conformer blocks of the cross-encoder.\nFigure 2 ###reference_### illustrates the attention matrices of the 4 attention heads from the last Conformer block, where each row represents a weight vector indicating how an output frame attends to the input frames.\nThe visualizations show that different attention heads have distinct roles. One head attends to frame-level cues (diagonal patterns in head (a)) while another focuses on partition-level details (dense patterns in head (b)). Heads (c) and (d) exhibit a combination of both patterns. Interestingly, head (a) shows the two branches mutually attending to each other: the output of one branch mainly focuses on the partition of the other, implying that the two branches may be swapped. This serves a similar function to PIT but without the extra training complexity."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusions",
+ "text": "###figure_2### In this work, we discussed the limitations of the commonly used branch-based SIMO models for the multi-talker ASR task, then proposed a cross-speaker encoding (CSE) network consisting of a cross-encoder and a joint-HEAT module.\nExperiments validated that the cross-encoder allows separate branches to condition on each other, while joint-HEAT both enhances the single-talker performance of the original HEAT and merges the model outputs into a uniform stream.\nFurther, CSE is integrated with the SOT strategy to leverage the advantages of both SIMO and SISO while mitigating their drawbacks.\nCompared to the SOT baseline, the integrated CSE-SOT model reduces WER by 10% overall and by 14% on high-overlap speech, demonstrating promising potential for further investigation."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Acknowledgements",
+ "text": "This work is supported by the HKSARG Research Grants Council\u2019s Theme-based Research Grant Scheme (Project No. T45-407/19N) and the CUHK Stanley Ho Big Data Decision Research Centre."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.4.1.1\">Table 1</span>: </span>\nPerformance comparison (WER%) of different systems on Librispeech and LibrispeechMix-2mix evaluation set. The Test (Conditional) set involves three subsets with different overlap ratios.\nFor simplification, we denote these three subsets as low, median, and high-overlap in the text. OA-WER refers to the averaged results across these three subsets.\n</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.5\" style=\"width:446.4pt;height:194.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-24.8pt,10.8pt) scale(0.9,0.9) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.5.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.5.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1\" style=\"font-size:90%;\">ID</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.5.1.1.1.2\" rowspan=\"3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.2.1\" style=\"font-size:90%;\">System</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T1.5.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.3.1\" style=\"font-size:90%;\">Librispeech</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"6\" id=\"S3.T1.5.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.4.1\" style=\"font-size:90%;\">LibrispeechMix-2mix</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.2.2\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S3.T1.5.1.2.2.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.2.2.1.1\" style=\"font-size:90%;\">dev-clean</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.2.2.2\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.2.2.2.1\" style=\"font-size:90%;\">test-clean</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.2.2.3\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.2.2.3.1\" style=\"font-size:90%;\">Dev</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.2.2.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.2.2.4.1\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block\" id=\"S3.T1.5.1.2.2.4.1.1\">\n<span class=\"ltx_p\" id=\"S3.T1.5.1.2.2.4.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.2.2.4.1.1.1.1\">Test</span></span>\n<span class=\"ltx_p\" id=\"S3.T1.5.1.2.2.4.1.1.2\">(Overall)</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"4\" id=\"S3.T1.5.1.2.2.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.2.2.5.1\" style=\"font-size:90%;\">Test</span><span class=\"ltx_text\" id=\"S3.T1.5.1.2.2.5.2\" style=\"font-size:90%;\"> (Conditional)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.3.3.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.3.3.1.1\" style=\"font-size:90%;\">(0, 0.2]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.3.3.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.3.3.2.1\" style=\"font-size:90%;\">(0.2, 0.5]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.3.3.3\"><span class=\"ltx_text\" id=\"S3.T1.5.1.3.3.3.1\" style=\"font-size:90%;\">(0.5, 1.0]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.3.3.4\"><span 
class=\"ltx_text\" id=\"S3.T1.5.1.3.3.4.1\" style=\"font-size:90%;\">OA-WER</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.4.4\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row\" colspan=\"10\" id=\"S3.T1.5.1.4.4.1\"><span class=\"ltx_rule\" style=\"width:100%;height:0.9pt;background:black;display:inline-block;\">\u00a0</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.5.5.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.1.1\" style=\"font-size:90%;\">A1</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.5.5.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.2.1\" style=\"font-size:90%;\">SIMO w/ PIT</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.5.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.5.5.3.1\" style=\"font-size:90%;\">6.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.5.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.5.5.4.1\" style=\"font-size:90%;\">6.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.5.5.5\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.5.1\" style=\"font-size:90%;\">12.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.5.5.6\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.6.1\" style=\"font-size:90%;\">11.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.5.5.7\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.7.1\" style=\"font-size:90%;\">8.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.5.5.8\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.8.1\" style=\"font-size:90%;\">12.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.5.5.9\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.5.9.1\" style=\"font-size:90%;\">17.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.5.5.10\"><span 
class=\"ltx_text\" id=\"S3.T1.5.1.5.5.10.1\" style=\"font-size:90%;\">12.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.6.6.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.6.6.1.1\" style=\"font-size:90%;\">A2</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.6.6.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.6.6.2.1\" style=\"font-size:90%;\">SIMO w/ HEAT</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.6.6.3\"><span class=\"ltx_text\" id=\"S3.T1.5.1.6.6.3.1\" style=\"font-size:90%;\">9.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.6.6.4\"><span class=\"ltx_text\" id=\"S3.T1.5.1.6.6.4.1\" style=\"font-size:90%;\">9.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.6.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.6.6.5.1\" style=\"font-size:90%;\">12.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.6.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.6.6.6.1\" style=\"font-size:90%;\">11.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.6.6.7.1\" style=\"font-size:90%;\">8.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.6.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.6.6.8.1\" style=\"font-size:90%;\">11.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.6.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.6.6.9.1\" style=\"font-size:90%;\">16.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.6.6.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.6.6.10.1\" style=\"font-size:90%;\">12.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" 
id=\"S3.T1.5.1.7.7.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.1.1\" style=\"font-size:90%;\">B1</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.7.7.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.2.1\" style=\"font-size:90%;\">SIMO w/ Joint-HEAT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.7.7.3\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.3.1\" style=\"font-size:90%;\">7.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.7.7.4\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.4.1\" style=\"font-size:90%;\">7.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.7.7.5\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.5.1\" style=\"font-size:90%;\">12.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.7.7.6\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.6.1\" style=\"font-size:90%;\">11.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.7.7.7\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.7.1\" style=\"font-size:90%;\">8.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.7.7.8\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.8.1\" style=\"font-size:90%;\">11.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.7.7.9\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.9.1\" style=\"font-size:90%;\">16.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.7.7.10\"><span class=\"ltx_text\" id=\"S3.T1.5.1.7.7.10.1\" style=\"font-size:90%;\">12.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.8.8.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.8.8.1.1\" style=\"font-size:90%;\">B2</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" 
id=\"S3.T1.5.1.8.8.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.8.8.2.1\" style=\"font-size:90%;\">CSE</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.3.1\" style=\"font-size:90%;\">6.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.4.1\" style=\"font-size:90%;\">6.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.5.1\" style=\"font-size:90%;\">11.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.8.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.6.1\" style=\"font-size:90%;\">10.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.8.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.7.1\" style=\"font-size:90%;\">8.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.8.8.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.8.1\" style=\"font-size:90%;\">11.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.8.8.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.9.1\" style=\"font-size:90%;\">16.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.8.8.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.8.8.10.1\" style=\"font-size:90%;\">11.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.9.9.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.1.1\" style=\"font-size:90%;\">C1</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.9.9.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.2.1\" style=\"font-size:90%;\">\u00a0\u00a0\u2003- PPE</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T1.5.1.9.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.9.9.3.1\" style=\"font-size:90%;\">6.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.9.9.4\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.4.1\" style=\"font-size:90%;\">6.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.9.9.5\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.5.1\" style=\"font-size:90%;\">11.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.9.9.6\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.6.1\" style=\"font-size:90%;\">11.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.9.9.7\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.7.1\" style=\"font-size:90%;\">8.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.9.9.8\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.8.1\" style=\"font-size:90%;\">11.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.9.9.9\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.9.1\" style=\"font-size:90%;\">16.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.9.9.10\"><span class=\"ltx_text\" id=\"S3.T1.5.1.9.9.10.1\" style=\"font-size:90%;\">12.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.10.10.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.1.1\" style=\"font-size:90%;\">C2</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.5.1.10.10.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.2.1\" style=\"font-size:90%;\">\u00a0\u00a0\u2003- mix. 
encoding</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.10.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.10.10.3.1\" style=\"font-size:90%;\">6.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.10.10.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.10.10.4.1\" style=\"font-size:90%;\">6.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.10.10.5\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.5.1\" style=\"font-size:90%;\">11.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.10.10.6\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.6.1\" style=\"font-size:90%;\">10.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.10.10.7\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.7.1\" style=\"font-size:90%;\">8.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.10.10.8\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.8.1\" style=\"font-size:90%;\">11.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.10.10.9\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.9.1\" style=\"font-size:90%;\">16.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.10.10.10\"><span class=\"ltx_text\" id=\"S3.T1.5.1.10.10.10.1\" style=\"font-size:90%;\">12.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.11.11.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.1.1\" style=\"font-size:90%;\">D1</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.11.11.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.2.1\" style=\"font-size:90%;\">SOT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.11.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.11.11.3.1\" style=\"font-size:90%;\">4.2</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.11.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.11.11.4.1\" style=\"font-size:90%;\">5.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.11.11.5\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.5.1\" style=\"font-size:90%;\">9.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.11.11.6\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.6.1\" style=\"font-size:90%;\">9.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.11.11.7\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.7.1\" style=\"font-size:90%;\">7.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.11.11.8\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.8.1\" style=\"font-size:90%;\">9.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.11.11.9\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.9.1\" style=\"font-size:90%;\">13.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.11.11.10\"><span class=\"ltx_text\" id=\"S3.T1.5.1.11.11.10.1\" style=\"font-size:90%;\">10.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S3.T1.5.1.12.12.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.12.12.1.1\" style=\"font-size:90%;\">D2</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S3.T1.5.1.12.12.2\"><span class=\"ltx_text\" id=\"S3.T1.5.1.12.12.2.1\" style=\"font-size:90%;\">CSE-SOT</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.5.1.12.12.3\"><span class=\"ltx_text\" id=\"S3.T1.5.1.12.12.3.1\" style=\"font-size:90%;\">4.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.5.1.12.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.4.1\" 
style=\"font-size:90%;\">5.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.5.1.12.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.5.1\" style=\"font-size:90%;\">8.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.5.1.12.12.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.6.1\" style=\"font-size:90%;\">8.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.5.1.12.12.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.7.1\" style=\"font-size:90%;\">7.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.5.1.12.12.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.8.1\" style=\"font-size:90%;\">8.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.5.1.12.12.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.9.1\" style=\"font-size:90%;\">12.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.5.1.12.12.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.12.12.10.1\" style=\"font-size:90%;\">9.2</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 1: \nPerformance comparison (WER%) of different systems on Librispeech and LibrispeechMix-2mix evaluation set. The Test (Conditional) set involves three subsets with different overlap ratios.\nFor simplification, we denote these three subsets as low, median, and high-overlap in the text. OA-WER refers to the averaged results across these three subsets.\n"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.1.1\">Table 2</span>: </span>Performance comparison (WER%) between the SOT baseline and CSE-SOT model on the LibrispeechMix-3mix evaluation sets. Both models are train on single-talker plus 2-talker data, and evaluated on 3-talker data.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.5\" style=\"width:249.0pt;height:81pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-13.8pt,4.5pt) scale(0.9,0.9) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.5.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S4.T2.5.1.1.1.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.1.1.1.1\" style=\"font-size:90%;\">System</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.5.1.1.1.2\" rowspan=\"2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.1.1.2.1\" style=\"font-size:90%;\">Dev</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.5.1.1.1.3\" rowspan=\"2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.1.1.3.1\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block\" id=\"S4.T2.5.1.1.1.3.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.5.1.1.1.3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.1.1.3.1.1.1.1\">Test</span></span>\n<span class=\"ltx_p\" id=\"S4.T2.5.1.1.1.3.1.1.2\">(Overall)</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T2.5.1.1.1.4\" 
style=\"padding-left:4.0pt;padding-right:4.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.1.1.4.1\" style=\"font-size:90%;\">Test</span><span class=\"ltx_text\" id=\"S4.T2.5.1.1.1.4.2\" style=\"font-size:90%;\"> (Conditional)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.2.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.2.2.1.1\" style=\"font-size:90%;\">(0,20]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.2.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.2.2.2.1\" style=\"font-size:90%;\">(20,50]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.2.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.2.2.3.1\" style=\"font-size:90%;\">(50,100]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.1.2.2.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.2.2.4.1\" style=\"font-size:90%;\">OA-WER</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.3.3\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left\" colspan=\"7\" id=\"S4.T2.5.1.3.3.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_rule\" style=\"width:100%;height:0.9pt;background:black;display:inline-block;\">\u00a0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.5.1.4.4.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.4.4.1.1\" style=\"font-size:90%;\">SOT</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.4.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.4.4.2.1\" style=\"font-size:90%;\">24.2</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.1.4.4.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.4.4.3.1\" style=\"font-size:90%;\">24.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.4.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.4.4.4.1\" style=\"font-size:90%;\">18.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.4.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.4.4.5.1\" style=\"font-size:90%;\">24.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.4.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.4.4.6.1\" style=\"font-size:90%;\">31.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.1.4.4.7\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.4.4.7.1\" style=\"font-size:90%;\">24.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.5.1.5.5.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.5.5.1.1\" style=\"font-size:90%;\">CSE-SOT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.5.1.5.5.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.5.2.1\" style=\"font-size:90%;\">24.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.5.1.5.5.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.5.5.3.1\" style=\"font-size:90%;\">24.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.5.1.5.5.4\" 
style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.5.4.1\" style=\"font-size:90%;\">18.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.5.1.5.5.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.5.5.5.1\" style=\"font-size:90%;\">24.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.5.1.5.5.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.5.5.6.1\" style=\"font-size:90%;\">31.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.5.1.5.5.7\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.5.1.5.5.7.1\" style=\"font-size:90%;\">24.7</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 2: Performance comparison (WER%) between the SOT baseline and CSE-SOT model on the LibrispeechMix-3mix evaluation sets. Both models are trained on single-talker plus 2-talker data, and evaluated on 3-talker data."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2401.04152v2_figure_1.png",
+ "caption": "Fig. 1: The architecture of (a) standard branch-based SIMO model, (b) proposed Cross Speaker Encoding (CSE) network, (c) Integrated model with CSE and serialized output training (SOT), and (4) cross-encoder. SpkrDiff. refers to speaker differentiator, Rec. refers to recognition, and PPE refers to partition-wise positional embedding. \u2295direct-sum\\oplus\u2295 stands for concatenation and X^^\ud835\udc4b\\hat{X}over^ start_ARG italic_X end_ARG stands for mixture encoding.",
+ "url": "http://arxiv.org/html/2401.04152v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2401.04152v2_figure_2.png",
+ "caption": "Fig. 2: \nAttention matrices of the last conformer block in the cross-encoder. (a)-(d) refers to 4 attention heads.\nX^^\ud835\udc4b\\hat{X}over^ start_ARG italic_X end_ARG, S1subscript\ud835\udc461S_{1}italic_S start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and S2subscript\ud835\udc462S_{2}italic_S start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT denote the input partitions. X\u00af\u00af\ud835\udc4b\\bar{X}over\u00af start_ARG italic_X end_ARG, S^1subscript^\ud835\udc461\\hat{S}_{1}over^ start_ARG italic_S end_ARG start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and S^2subscript^\ud835\udc462\\hat{S}_{2}over^ start_ARG italic_S end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT denote the corresponding output partitions.",
+ "url": "http://arxiv.org/html/2401.04152v2/x2.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cConformer: Convolution-augmented transformer for speech recognition,\u201d",
+ "author": "Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al.,",
+ "venue": "arXiv preprint arXiv:2005.08100, 2020.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cSequence transduction with recurrent neural networks,\u201d",
+ "author": "Alex Graves,",
+ "venue": "arXiv preprint arXiv:1211.3711, 2012.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cExploring architectures, data and units for streaming end-to-end speech recognition with rnn-transducer,\u201d",
+ "author": "Kanishka Rao, Ha\u015fim Sak, and Rohit Prabhavalkar,",
+ "venue": "in 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2017, pp. 193\u2013199.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cAttention-based models for speech recognition,\u201d",
+ "author": "Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio,",
+ "venue": "Advances in neural information processing systems, vol. 28, 2015.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cListen, attend and spell: A neural network for large vocabulary conversational speech recognition,\u201d",
+ "author": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals,",
+ "venue": "in 2016 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2016, pp. 4960\u20134964.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cRecent advances in end-to-end automatic speech recognition,\u201d",
+ "author": "Jinyu Li et al.,",
+ "venue": "APSIPA Transactions on Signal and Information Processing, vol. 11, no. 1, 2022.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "\u201cSupervised speech separation based on deep learning: An overview,\u201d",
+ "author": "DeLiang Wang and Jitong Chen,",
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 10, pp. 1702\u20131726, 2018.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cMulti-microphone neural speech separation for far-field multi-talker speech recognition,\u201d",
+ "author": "Takuya Yoshioka, Hakan Erdogan, Zhuo Chen, and Fil Alleva,",
+ "venue": "in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5739\u20135743.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cSingle-channel multi-talker speech recognition with permutation invariant training,\u201d",
+ "author": "Yanmin Qian, Xuankai Chang, and Dong Yu,",
+ "venue": "Speech Communication, vol. 104, pp. 1\u201311, 2018.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cEnd-to-end multi-talker overlapping speech recognition,\u201d",
+ "author": "Anshuman Tripathi, Han Lu, and Hasim Sak,",
+ "venue": "in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6129\u20136133.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cEnd-to-end multi-speaker speech recognition with transformer,\u201d",
+ "author": "Xuankai Chang, Wangyou Zhang, Yanmin Qian, Jonathan Le Roux, and Shinji Watanabe,",
+ "venue": "in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6134\u20136138.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cStreaming end-to-end multi-talker speech recognition,\u201d",
+ "author": "Liang Lu, Naoyuki Kanda, Jinyu Li, and Yifan Gong,",
+ "venue": "IEEE Signal Processing Letters, vol. 28, pp. 803\u2013807, 2021.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "\u201cA sidecar separator can convert a single-talker speech recognition system to a multi-talker one,\u201d",
+ "author": "Lingwei Meng, Jiawen Kang, Mingyu Cui, Yuejiao Wang, Xixin Wu, and Helen Meng,",
+ "venue": "in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1\u20135.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cUnified modeling of multi-talker overlapped speech recognition and diarization with a sidecar separator,\u201d",
+ "author": "Lingwei Meng, Jiawen Kang, Mingyu Cui, Haibin Wu, Xixin Wu, and Helen Meng,",
+ "venue": "in Proceedings of Interspeech, 2023, pp. 3467\u20133471.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cPermutation invariant training of deep models for speaker-independent multi-talker speech separation,\u201d",
+ "author": "Dong Yu, Morten Kolb\u00e6k, Zheng-Hua Tan, and Jesper Jensen,",
+ "venue": "in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 241\u2013245.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cMultitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks,\u201d",
+ "author": "Morten Kolb\u00e6k, Dong Yu, Zheng-Hua Tan, and Jesper Jensen,",
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 10, pp. 1901\u20131913, 2017.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cSerialized output training for end-to-end overlapped speech recognition,\u201d",
+ "author": "Naoyuki Kanda, Yashesh Gaur, Xiaofei Wang, Zhong Meng, and Takuya Yoshioka,",
+ "venue": "arXiv preprint arXiv:2003.12687, 2020.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cStreaming multi-talker asr with token-level serialized output training,\u201d",
+ "author": "Naoyuki Kanda, Jian Wu, Yu Wu, Xiong Xiao, Zhong Meng, Xiaofei Wang, Yashesh Gaur, Zhuo Chen, Jinyu Li, and Takuya Yoshioka,",
+ "venue": "arXiv preprint arXiv:2202.00842, 2022.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cM2met: The icassp 2022 multi-channel multi-party meeting transcription challenge,\u201d",
+ "author": "Fan Yu, Shiliang Zhang, Yihui Fu, Lei Xie, Siqi Zheng, Zhihao Du, Weilong Huang, Pengcheng Guo, Zhijie Yan, Bin Ma, et al.,",
+ "venue": "in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 6167\u20136171.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "\u201cJoint speaker counting, speech recognition, and speaker identification for overlapped speech of any number of speakers,\u201d",
+ "author": "Naoyuki Kanda, Yashesh Gaur, Xiaofei Wang, Zhong Meng, Zhuo Chen, Tianyan Zhou, and Takuya Yoshioka,",
+ "venue": "arXiv preprint arXiv:2006.10930, 2020.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "\u201cBa-sot: Boundary-aware serialized output training for multi-talker asr,\u201d",
+ "author": "Yuhao Liang, Fan Yu, Yangze Li, Pengcheng Guo, Shiliang Zhang, Qian Chen, and Lei Xie,",
+ "venue": "arXiv preprint arXiv:2305.13716, 2023.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "\u201cConnectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,\u201d",
+ "author": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber,",
+ "venue": "in Proceedings of the 23rd international conference on Machine learning, 2006, pp. 369\u2013376.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "\u201cLibrispeech: an asr corpus based on public domain audio books,\u201d",
+ "author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur,",
+ "venue": "in 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2015, pp. 5206\u20135210.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "\u201cESPnet: End-to-end speech processing toolkit,\u201d",
+ "author": "Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, and Tsubasa Ochiai,",
+ "venue": "in Proceedings of Interspeech, 2018, pp. 2207\u20132211.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "\u201cHybrid ctc/attention architecture for end-to-end speech recognition,\u201d",
+ "author": "Shinji Watanabe, Takaaki Hori, Suyoun Kim, John R Hershey, and Tomoki Hayashi,",
+ "venue": "IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1240\u20131253, 2017.",
+ "url": null
+ }
+ },
+ ],
+ "url": "http://arxiv.org/html/2401.04152v2"
+ }
20240722/2401.07598v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240722/2401.08742v3.json ADDED
@@ -0,0 +1,287 @@
+ {
+ "title": "Efficient4D: Fast Dynamic 3D Object Generation from a Single-view Video",
+ "abstract": "Generating a dynamic 3D object from a single-view video is challenging due to the lack of 4D labeled data.\nAn intuitive approach is to\nextend previous image-to-3D pipelines by transferring off-the-shelf image generation models such as score distillation sampling.\nHowever, this approach would be\nslow and expensive to scale due to the need for back-propagating the information-limited supervision signals through a large pretrained model.\nTo address this, we propose an efficient video-to-4D object generation framework called Efficient4D.\nIt generates high-quality spacetime-consistent images under different camera views, and then uses them as labeled data to directly reconstruct the 4D content through a 4D Gaussian splatting model.\nImportantly, our method can achieve real-time rendering under continuous camera trajectories. To enable robust reconstruction under sparse views, we introduce an inconsistency-aware confidence-weighted loss design, along with a lightly weighted score distillation loss.\nExtensive experiments on both synthetic and real videos show that Efficient4D offers a remarkable 10-fold increase in speed when compared to prior art alternatives while preserving the quality of novel view synthesis. For example, Efficient4D takes only 10 minutes to model a dynamic object, vs. 120 minutes by the prior art model Consistent4D.\nOur code is publicly available at https://github.com/fudan-zvg/Efficient4D.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Humans possess a remarkable capacity to comprehensively comprehend the spatial and temporal characteristics of a dynamic object in a brief video, even with a limited perspective, enabling them to predict its appearance in unseen viewpoints over time.\nDespite the significant advancement of 3D object generation,\nexisting works [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] mostly consider static scenes or objects.\nWith the availability of large-scale 3D datasets [5 ###reference_b5###, 6 ###reference_b6###],\ntraining generalizable models capable of directly generating multi-view images becomes possible [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. These generated images can be turned into a 3D object through reconstruction techniques [10 ###reference_b10###, 11 ###reference_b11###].\nBy further augmenting these generated static objects with predefined animations [12 ###reference_b12###], dynamic 3D content can be generated. However, this approach is still limited due to the need for fine-grained meshes as well as rigid restrictions.\nDirectly generating 4D object/scene content from text description has been recently attempted [13 ###reference_b13###].\nTo bypass the need of exhaustively labeled training data pairs in form of (text, 4D), it trains a Neural Radiance Fields (NeRF)-like representation [10 ###reference_b10###] via score distillation sampling [1 ###reference_b1###] and separates the modeling of static scene and its dynamics. 
Not only is this method computationally inefficient, owing to heavy supervision back-propagation through a large pretrained model, but its textual condition is also highly ambiguous in expressing the intended visual content.\nIn pursuit of the aforementioned human capability,\na recent work [14 ###reference_b14###] proposed to generate dynamic 3D object images from a single-view video (a statically captured monocular video from a fixed view), namely video-to-4D object generation. However, similar to [13 ###reference_b13###], this method is also slow to train (e.g., 120 minutes to model a single dynamic object) in addition to its complex design, hence unscalable and expensive in practice.\nTo address the identified limitations, we formulate an efficient two-stage video-to-4D object generation method called Efficient4D.\nIn the first stage, we generate spacetime-consistent images across different camera views as synthetic training data.\nThis is realized by imposing temporal smoothing into a multi-view image generator (e.g., SyncDreamer [8 ###reference_b8###]) in tandem with frame interpolation.\nIn the second stage, we use these training data to optimize a 4D Gaussian splatting model [15 ###reference_b15###].\nThis is an extension of the 3D Gaussian splatting [16 ###reference_b16###], originally designed for static 3D scene representation, with an additional temporal dimension, allowing for real-time rendering under continuous camera trajectories.\nUsing a Gaussian representation brings a further computational efficiency gain when compared with NeRF-based designs (Figure 1 ###reference_###).\nTo tackle the challenging discontinuity between the generated sparse frames, we design an inconsistency-aware loss function based on the confidence in the consistency of a frame with its adjacent frames.\nAlong with a lightly weighted score distillation sampling loss for smooth viewpoint transitions, this technique enables robust 4D reconstruction. 
Notably, although our method takes a single-view video as input, Efficient4D can be easily extended to the image-to-4D task by leveraging an image-to-video diffusion model [17 ###reference_b17###].\nOur contributions are summarized as follows:\n(i)\nWe consider for the first time the efficiency challenge in the under-studied video-to-4D object generation problem.\n(ii)\nWe propose an efficient video-to-4D object generation pipeline,\nEfficient4D, characterized by directly generating high-quality training data without the heavy supervision back-propagation through a large pretrained model suffered by most 3D/4D object generation approaches. We also extend Efficient4D to the image-to-4D task.\n(iii)\nWe introduce an inconsistency-aware confidence-weighted loss for reconstructing a 4D Gaussian splatting model using the generated training data efficiently and robustly.\n(iv)\nExtensive experiments on both synthetic and real data validate the\nsignificant efficiency advantage (e.g., 10\u00d7 speedup) of our Efficient4D over the prior art,\nwhilst maintaining the quality of novel view synthesis.\nAlso, our method works well under the more challenging few-shot setting where only a handful of key frames are available for training, further extending the application scope.\n###figure_1### ###figure_2### ###figure_3###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related work",
+ "text": "3D generation\n3D generation takes two main settings: text-to-3D and image-to-3D.\nThe pioneering work, DreamFusion [1 ###reference_b1###], introduces the score distillation sampling (SDS) loss for optimizing 3D shapes with diffusion models.\nSDS\u2019s generality has prompted numerous subsequent efforts in both text-to-3D tasks [2 ###reference_b2###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 3 ###reference_b3###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 4 ###reference_b4###, 27 ###reference_b27###] and image-to-3D tasks [28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###] across various dimensions.\nHowever, SDS-based approaches often suffer from difficulty converging and extended optimization times. Conversely, some efficient methods [32 ###reference_b32###, 33 ###reference_b33###, 7 ###reference_b7###, 34 ###reference_b34###, 8 ###reference_b8###] have emerged. 
Notably, Point-E [32 ###reference_b32###] and Shap-E [33 ###reference_b33###] train models to directly generate 3D point clouds or meshes.\nZero123 [7 ###reference_b7###] focuses on generating a 2D image from an unseen view based on a single image, convertible to a 3D shape through SDS or [34 ###reference_b34###].\nImportantly, SyncDreamer [8 ###reference_b8###] produces multi-view consistent images, offering inspiration for reconstructing 3D objects.\n4D representation\nEfforts to synthesize videos with free-viewpoint control in dynamic scenes have a well-documented history [35 ###reference_b35###].\nFor example, pre-NeRF [10 ###reference_b10###] approaches challenges in reconstructing intricate scene details.\nRecent advancements in 4D representations, particularly those based on neural rendering, include D-NeRF [36 ###reference_b36###], DeVRF [37 ###reference_b37###] and HyperNeRF [38 ###reference_b38###], which decouple geometry and motion, utilizing a canonical space and a learned deformation field.\nDynIBaR [39 ###reference_b39###] deploys an image-based rendering paradigm for representing long videos with complex camera and object motions.\nAnother group of methods [40 ###reference_b40###, 41 ###reference_b41###, 42 ###reference_b42###] adopt tensor decomposition of 4D volumes to represent dynamic 3D scenes.\nRecently, Gaussian Splatting [16 ###reference_b16###] has received widespread attention for its real-time high-fidelity rendering, especially its explicit point-based representation which holds great potential in modeling dynamic scenes. Consequently, a significant amount of work has been proposed to explore its extension to dynamic scene modeling.\nAmong them,\nDeformable 3D Gaussians [43 ###reference_b43###] and 4DGaussian [44 ###reference_b44###] integrated the deformation field with 3D Gaussian Splatting for the joint learning of the scene\u2019s geometry and dynamics. 
SC-GS [45 ###reference_b45###] represents the motion with a set of sparse control points to achieve reconstruction and motion editing. Unlike the previous radiance field-based representations often involving complex training schedules or suffering from slow convergence, these methods can be optimized efficiently, achieving real-time rendering while surpassing the past methods in terms of quality.\nOur work is orthogonal to all the above works, where any of them can be deployed in our reconstruction phase.\nBut considering both optimization efficiency and expressive capability, we choose to represent dynamic 3D assets by a set of native 4D scene primitives, which is proven to be superior in 4DGS [15 ###reference_b15###].\n4D generation\nThere are a few recent works dedicated to the more challenging 4D object generation.\nFor instance, MAV3D [13 ###reference_b13###] deals with a text-to-4D problem by training Hexplane [41 ###reference_b41###] with a video diffusion model and SDS loss.\nInstead of text input, Consistent4D [14 ###reference_b14###] conditions the generation of a 4D object over time on a monocular video with richer and more specific information.\nHowever, it is computationally inefficient due to inheriting the previous SDS loss, along with a complex pipeline design.\nTo overcome this limitation, we present a novel two-stage pipeline with a generation-and-reconstruction strategy, drastically boosting the training speed by 20\u00d7 whilst maintaining the quality of novel view synthesis."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Preliminary",
+ "text": "4D Gaussian splatting\nThe 4D Gaussian splatting (4DGS [15 ###reference_b15###]) builds upon the 3D Gaussian splatting technique introduced in [16 ###reference_b16###], originally designed for static scene representation. To address the complexities of dynamic scenes, 4DGS represents each Gaussian defined as:\nwhere is the mean vector, and is the anisotropic covariance matrix. The input represents a spacetime position with a spatial coordinate and time . The covariance matrix decomposes into a diagonal scaling matrix and a rotation matrix through . The 4D rotation is represented by a pair of iso-rotations, each characterized by a quaternion.\nFor rendering, each Gaussian includes opacity and view-dependent color represented by spherical harmonics (SH). Given an arbitrary view defined by intrinsic and extrinsic parameters, we render the pixel at position with timestamp by blending visible Gaussians:\nHere, indexes the visible Gaussians sorted by depth, refers to the direction of the pixel under the view , and denotes the influence of a Gaussian on this position. To obtain the influence, unlike 3D Gaussian Splatting [16 ###reference_b16###] which directly projects 3D Gaussians to image space, we need condition 4D Gaussians on time and then project them. More specifically, is expressed by:\nwhere is the marginal distribution of the 4D Gaussian in time, and is the projected version of the conditional 3D Gaussian with\nThe projection operation [16 ###reference_b16###, 46 ###reference_b46###] projects the world point to image space and transforms the covariance by where is the extrinsic matrix of and is the Jacobian of the affine approximation of the projective transformation.\nScore distillation sampling\nScore distillation sampling (SDS) is first introduced by DreamFusion [1 ###reference_b1###], which is used to distill the knowledge from a pretrained diffusion model . Specifically, given an image rendered from a scene representation (e.g. 
3DGS) parameterized by θ, the gradient of the SDS loss is calculated as:\nwhere xₜ is the image perturbed with noise at time step t, and c is the condition (e.g. one frame of the input video in this paper)."
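The time-conditioning step described above (splitting a 4D Gaussian into a 1D temporal marginal and a 3D conditional Gaussian) follows standard Gaussian identities. A minimal NumPy sketch, with illustrative variable names not taken from the paper:

```python
import numpy as np

def condition_on_time(mu, cov, t):
    """Split a 4D (x, y, z, t) Gaussian into its 1D temporal marginal
    and the 3D Gaussian conditioned on timestamp t, using the standard
    Schur-complement identities for Gaussian conditioning."""
    mu_xyz, mu_t = mu[:3], mu[3]
    S_xx = cov[:3, :3]   # spatial block of the 4x4 covariance
    S_xt = cov[:3, 3]    # space-time cross-covariance
    s_tt = cov[3, 3]     # temporal variance
    # Conditional 3D Gaussian p(x, y, z | t)
    cond_mu = mu_xyz + S_xt * (t - mu_t) / s_tt
    cond_cov = S_xx - np.outer(S_xt, S_xt) / s_tt
    # Density of the 1D temporal marginal p(t)
    marginal_t = np.exp(-0.5 * (t - mu_t) ** 2 / s_tt) / np.sqrt(2.0 * np.pi * s_tt)
    return cond_mu, cond_cov, marginal_t
```

The influence of a Gaussian on a pixel is then the product of the temporal marginal and the image-space projection of the conditional 3D Gaussian.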
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Method",
27
+ "text": "###table_1### ###figure_4### Our Efficient4D addresses the challenge of efficiently generating dynamic objects under novel views from a single-view video. The input single-view video can be either provided by the user or generated by a video generation model; the latter extends the application of our method beyond video-to-4D. For example, we can also achieve image-to-4D through an image-to-video diffusion model [17 ###reference_b17###].\nAs illustrated in Figure 2 ###reference_###, our framework comprises two key components:\nAn image synthesis pipeline (Figure 2 ###reference_###(A)) generates images across views and timestamps, ensuring sufficient geometric and temporal consistency.\nA robust reconstruction process (Figure 2 ###reference_###(B)) efficiently utilizes the synthetic images for accurate dynamic object reconstruction and novel view synthesis.\nWe elaborate on these components in Sections 4.1 ###reference_### and 4.2 ###reference_###, respectively."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Image synthesis across views and timestamps",
33
+ "text": "Due to the difficulty of obtaining calibrated 4D scans, our approach directly generates high-quality, consistent 4D data from a single-view video, which is much easier to capture (Figure 2 ###reference_###(a)). Specifically, we seek to produce an image matrix of 2D images with geometrical and temporal consistency, where rows index timestamps and columns index views, with each matrix element corresponding to an image (Figure 2 ###reference_###(c)). This approach combines conventional video generation (capturing time variation, represented by a single column in the image matrix) and 3D generation (capturing view variation, represented by a single row in the image matrix) [47 ###reference_b47###, 8 ###reference_b8###], offering comprehensive information for modeling a dynamic object.\nTo initiate the image matrix, we set the first view (i.e., the first column) with frames from the input video and proceed to generate the remaining views. Our task involves generating multi-view consistent images from a single image for each row. Existing image-to-3D methods, such as SyncDreamer [8 ###reference_b8###], can be leveraged for this purpose. However, these methods often struggle with temporal inconsistency within a specific view (i.e., continuity in the column direction) because the multi-frame images are synthesized independently.\nTo address this issue, we propose an enhanced version of SyncDreamer with improved temporal continuity, referred to as SyncDreamer-T.\nSpecifically, SyncDreamer generates multi-view images of a static object using a synchronized multi-view noise predictor that predicts synchronized noise for the noisy multi-view images. 
The noise predictor is conditioned on information correlated with all views.\nCross-view conditioning is achieved through a spatial feature volume, unprojected from the noisy multi-view images, which injects 3D-aware features into the shared noise predictor, ensuring geometrical consistency across views at static moments.\nTo impose temporal consistency, the information from the spatial volumes at different timestamps is aggregated using a time-synchronous spatial volume we design here (Figure 2 ###reference_###(b)). A smoothing filter layer is introduced across the spatial volumes of different frames/timestamps, incorporating a weight vector which serves as the smoothing filter. At each denoising step, time-synchronized spatial volumes for each input frame are constructed as:\nThis synchronization ensures consistent features across past and future frames during the denoising process, thus enhancing temporal consistency. With these time-synchronized spatial volumes, the proposed SyncDreamer-T is entirely training-free, built on the pretrained SyncDreamer.\nTo further improve temporal resolution, video frame interpolation (e.g., RIFE [48 ###reference_b48###]) can be applied after generating the image matrix.\nThe midpoint interpolation is applied twice recursively, giving three additional frames between each pair of consecutive frames and thus almost quadrupling the number of images in each column of the matrix.\n###table_2### ###figure_5### ###figure_6### Analysis on temporal synchronization\nFor a clearer insight into our temporal synchronization design, we undertake a simplified experiment involving two input frames, each with its own feature volume. The fusion process combines the two volumes as follows:\nwhere the fusion ratio controls how strongly the two volumes are mixed. We systematically vary the ratio from 0 to 0.5, resulting in the distinct columns of \u201cFeature fusion\u201d illustrated in Figure 3 ###reference_###. 
For a ratio of 0, the original generation process is recovered, where the spatial volumes of the two frames are independent, leading to temporal inconsistency. As the ratio approaches 0.5, we observe a gradual convergence of the generated astronauts conditioned on different input images, achieving similarity in both texture and geometry. This suggests that smoothing at the feature level is effective in aligning frames over time.\nHowever, a challenge arises as the fused features may induce similar motion: while the bottom-left astronaut is stepping forward, the one in the bottom-right column does not exhibit forward motion like the top astronaut. Thus, a trade-off is necessary between achieving temporal consistency and preserving motion independence. In practical terms, it is recommended to set the ratio only slightly above the lower end of this range, striking a balance that ensures temporal texture consistency at a moderate cost of entangled geometry. Also note that the results in Figure 3 ###reference_### are not sensitive to the exact choice."
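The smoothing filter over per-frame spatial volumes can be sketched as a normalized sliding-window average. The border handling (index clamping) is an illustrative assumption, not specified by the paper:

```python
import numpy as np

def smooth_volumes(volumes, weights):
    """Sliding-window smoothing of per-frame spatial feature volumes.

    volumes: array of shape (N, ...) -- one feature volume per frame.
    weights: 1D filter of odd length, normalized to sum to 1 so that
    feature magnitude is preserved. Sequence borders are handled by
    clamping frame indices (a simplifying assumption).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # normalize so the weights sum to 1
    half = len(w) // 2
    n = len(volumes)
    out = np.empty_like(volumes, dtype=float)
    for i in range(n):
        acc = np.zeros_like(volumes[0], dtype=float)
        for k, wk in enumerate(w):
            j = min(max(i + k - half, 0), n - 1)  # clamp at borders
            acc += wk * volumes[j]
        out[i] = acc
    return out
```

With a two-frame input and the filter concentrated at the center, this reduces to the two-volume fusion analyzed above.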
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "4D generation through reconstruction",
39
+ "text": "Aiming for 3D dynamic object modeling, discrete images do not suffice. Our next goal is to model truly 4D content from the image matrix. For efficient modeling, we have formulated the 4DGS representation model in Section 3 ###reference_###, departing from previous slow-to-train 4D reconstruction models [40 ###reference_b40###].\nOptimization\nIn the training of 4DGS, optimization is performed on the mean (μ), covariance (Σ), opacity, and spherical harmonic (SH) coefficients, as well as density control including densification and pruning for each Gaussian.\nThe original loss function, as presented in [15 ###reference_b15###], involves both an RGB loss and an SSIM loss with fixed balancing weights. Specifically, the loss is defined as:\nwhere the first term is the loss in RGB space, the second is the SSIM loss [49 ###reference_b49###], and λ is the balancing weight hyper-parameter.\nHowever, such an optimization approach assumes clean training data, which may not be valid for synthetic data with inherent imperfections. To address this, we first introduce an inconsistency-aware loss formulation with adaptive balancing weights:\nwhere λ is a fixed weight and c is the adaptive confidence score of a generated image, calculated as\nwhere the reference is the image unwarped from adjacent frames using estimated optical flow. In this way, the confidence plays an adaptive role in controlling the loss and gradients by assigning lower weights to inconsistent regions, thus enhancing overall reconstruction quality.\nThe confidence design can guarantee temporal consistency, but achieving high-quality novel view synthesis remains challenging due to the sparsity of the generated images. Therefore, we also incorporate an SDS loss with a relatively small weight for smooth transitions across different supervised views. We use the image-to-3D diffusion model [17 ###reference_b17###] in the SDS loss, conditioned on the frame of the input video at each timestamp. The total loss function is expressed by:\nwith a small SDS weight."
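The confidence weighting can be sketched as follows. The exponential mapping from flow-warp error to confidence and the value of sigma are illustrative assumptions; the paper's exact formula is not reproduced here, and the SSIM term is omitted for brevity:

```python
import numpy as np

def confidence_map(generated, unwarped, sigma=0.1):
    """Per-pixel confidence from the discrepancy between a generated
    frame and the same frame unwarped from its neighbours via optical
    flow. Low discrepancy -> confidence near 1; high discrepancy ->
    confidence near 0 (illustrative exponential form)."""
    err = np.abs(generated - unwarped).mean(axis=-1)  # mean over RGB
    return np.exp(-err / sigma)

def weighted_recon_loss(render, target, conf, lam=0.2):
    """Confidence-weighted L1 term of the reconstruction loss:
    inconsistent regions (low conf) contribute smaller gradients."""
    l1 = (conf * np.abs(render - target).mean(axis=-1)).mean()
    return (1.0 - lam) * l1
```

During optimization, this per-pixel weight down-scales the supervision from inconsistent regions while the redundancy of other views compensates for the missing information.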
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Experiments",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "Experiment setup",
51
+ "text": "Competitors\nFor comparison, we mainly focus on the video-to-4D and image-to-4D tasks. The competitors include Consistent4D [14 ###reference_b14###] (ICLR2024), 4DGen [18 ###reference_b18###] (ArXiv2024) and STAG4D [19 ###reference_b19###] (ECCV2024) for video-to-4D, and Animate124 [50 ###reference_b50###] (ArXiv2023) and DreamGaussian4D [51 ###reference_b51###] (ArXiv2024) for image-to-4D.\nWe obtained their results by running their released official code. We also partially compare with SyncDreamer [8 ###reference_b8###] by substituting it for SyncDreamer-T in the ablation study (Section 5.6 ###reference_###).\nEvaluation data\nTo showcase the versatility of our proposed method, we conducted extensive experiments using a diverse set of data sources.\nFor video-to-4D, we focused on 36 sequences: 32 sequences released by [14 ###reference_b14###] and 4 sequences processed by ourselves. Among the released data, seven sequences are synthetic, with ground truth available. For image-to-4D, we used the ten images released by [50 ###reference_b50###].\nOur four sequences, which contain only two frames each, are used for the sparse input evaluation in Section 5.5 ###reference_###. Three of them, named dragon, guard, and yoxi, are rendered from 3D animated models obtained from Sketchfab [52 ###reference_b52###]. The other, named yellow face, was collected from the internet. All the data are publicly available.\nEvaluation metrics\n\nAs 4D generation research is still at an early stage, there is no well-established metric yet. We therefore adopt multiple metrics from related works for comprehensive evaluation. 
For evaluation on synthetic data, we use the LPIPS score [53 ###reference_b53###] and CLIP similarity [54 ###reference_b54###] between rendered images and ground truth, following [14 ###reference_b14###].\nFor the cases without ground truth, we also use CLIP similarity between generated images and input frames, as in [55 ###reference_b55###, 51 ###reference_b51###], to measure image quality. For temporal smoothness, following [18 ###reference_b18###, 56 ###reference_b56###, 57 ###reference_b57###], we use CLIP-T to measure the similarity between adjacent frames of a generated video from different views, including front (CLIP-T-f), side (CLIP-T-s) and back (CLIP-T-b) views.\nTo evaluate a 4D object completely for the different methods, we render 320 images uniformly distributed in space and time, covering 16 viewpoints and 20 timestamps.\nImplementation details\nIn equation (8 ###reference_###), the spatial volumes are smoothed locally in a sliding-window style, so the consistency of generated images may weaken as the frame number increases. In practice, we adjust the weights following two rules: (i) the middle weight should be the largest; (ii) at least half of the volumes should have a positive weight. Here we provide some sample weights which work well:\nwhere N denotes the frame number. The weights are normalized to ensure they sum to 1.\nIn the denoising process, we follow the default setting of [8 ###reference_b8###] and use an improved sampling strategy introduced by HarmonyView [58 ###reference_b58###]. HarmonyView redefines the score estimation by decomposing consistency and diversity; please refer to [58 ###reference_b58###] for details.\nIn the reconstruction stage, Gaussians are initialized randomly inside a sphere of radius 0.5, with identity rotations and an initial count of 5,000. Training of 4D Gaussian splatting is carried out using the Adam optimizer for 1,500 iterations with batch size 1. 
All other 4DGS hyperparameters remain consistent with those in [15 ###reference_b15###]. The balancing weights in the loss function (equation 13 ###reference_###) are kept fixed.\nIn equation (12 ###reference_###), the estimated image is computed by frame interpolation using an optical-flow-based model [48 ###reference_b48###]. For each generated frame, we use its four adjacent frames to interpolate the estimated frame as follows:\nThen we can compute the mean RGB confidence score:\nThe SSIM confidence score is computed analogously using equation (12 ###reference_###).\nIn terms of speed, our proposed method takes only about 2 minutes for image generation when parallel denoising is available, plus 8 minutes for reconstruction, on one A6000 GPU."
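A filter construction consistent with the two rules above (dominant middle weight, positive support over at least half of the volumes) can be sketched with a triangular window. The paper's actual sample weights are not reproduced, so this construction is an illustrative choice:

```python
import numpy as np

def make_smoothing_weights(num_frames):
    """Build a normalized triangular smoothing filter whose support is
    the smallest odd length covering at least half of the frames, so
    the middle weight dominates and at least half of the volumes get a
    positive weight (illustrative; not the paper's exact weights)."""
    support = num_frames // 2 + 1
    if support % 2 == 0:
        support += 1               # keep the window length odd
    half = support // 2
    # Triangular profile peaking at the center, e.g. [1, 2, 3, 2, 1]
    w = np.array([half + 1 - abs(k - half) for k in range(support)], dtype=float)
    return w / w.sum()             # normalize so the weights sum to 1
```

For example, with 8 frames this yields a 5-tap filter proportional to [1, 2, 3, 2, 1].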
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "Evaluation on synthetic data",
57
+ "text": "###table_3### ###figure_7### ###table_4### We first present the results on synthetic data in Figure 4 ###reference_###, with ground truth shown. Since the data only include videos, we compare with three video-to-4D methods: Consistent4D [14 ###reference_b14###], 4DGen [18 ###reference_b18###] and STAG4D [19 ###reference_b19###].\nOur visual results exhibit superior accuracy in both texture and geometry when compared to the ground truth, such as the direction of the dinosaur\u2019s head, the color of the trump figure\u2019s arm, and the teeth of the skull. The quantitative metrics are presented in Table 1 ###reference_###; the better CLIP and LPIPS scores further validate our superior generation results."
58
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "Video-to-4D comparison",
63
+ "text": "###table_5### ###figure_8### ###table_6### ###figure_9### ###table_7### For data without ground truth, we present qualitative comparisons in Figure 5 ###reference_###.\nWhen assessing texture quality, it is important to note that all methods fall under the category of lifting 2D to 4D. However, Consistent4D produces watercolor-like images with low fidelity, exemplified by the blurry edges of its rendered images.\nWe attribute this blur to conflicts from multiple supervisory signals during prolonged optimization. In contrast, our method excels in directly generating high-quality 2D images, thus providing strong and consistent supervision in the reconstruction stage. Although 4DGen and STAG4D also use pseudo labels, their images are inconsistent along the temporal coordinate and the SDS loss still dominates the optimization. As a result, 4DGen lacks details and STAG4D suffers from floaters and blur.\nBesides, our method\u2019s reconstruction stage supports rendering from arbitrary viewpoints, while the images generated in stage 1 are discrete and sparse with a fixed elevation angle. Overall, our method consistently outperforms the baseline methods in most cases.\nIn Table 2 ###reference_###, we compare our method with the baselines quantitatively, using the metrics and data described in Section 5.1 ###reference_###.\nWe draw several observations:\n(i) Our method achieves superior image quality and temporal smoothness on all cases, as validated by the higher CLIP and CLIP-T scores;\n(ii) Our method significantly accelerates the generation process, achieving an over 10\u00d7 speed improvement (10 mins vs. 120 mins).\nMore specifically, the speed improvement is attributed to (i) our image supervision design, which speeds up convergence and reduces the number of training iterations, and (ii) each iteration in our method requiring much less time thanks to the efficiency of the Gaussian representation."
64
+ },
65
+ {
66
+ "section_id": "5.4",
67
+ "parent_section_id": "5",
68
+ "section_name": "Image-to-4D comparison",
69
+ "text": "Next, we evaluate our method on the image-to-4D task. In Figure 6 ###reference_###, we present the visual results of three different methods. It can be observed that our extended method generates 4D assets of higher quality than those produced by state-of-the-art image-to-4D methods: Animate124 generates textures lacking detail, and DreamGaussian4D tends to produce fragmented meshes. In contrast, the results of our Efficient4D exhibit both good geometry and high-quality textures. Additionally, Table 2 ###reference_### supports a conclusion similar to the video-to-4D comparison: our method is superior.\nAlthough Animate124 achieves the best CLIP-T score, this score merely indicates that the range of motion is small, not that the motion is continuous. From Figure 6 ###reference_### we can easily see that the quality of Animate124\u2019s results is lower than that of the other methods.\nAnother notable point is that the overall CLIP scores for image-to-4D methods are lower than those for video-to-4D methods. This may be because the reference single-view videos in image-to-4D are generated by video diffusion models, which are less consistent and of lower quality than real-world videos."
70
+ },
71
+ {
72
+ "section_id": "5.5",
73
+ "parent_section_id": "5",
74
+ "section_name": "Sparse input case",
75
+ "text": "###table_8### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### We assessed our method\u2019s performance in an extremely sparse input scenario comprising only two discrete frames. In such cases, we adjust the weights in equation (9 ###reference_###) accordingly. We also modified the code of Consistent4D [14 ###reference_b14###] to adapt it to this case. As illustrated in Figure 7 ###reference_###, our approach successfully generates images featuring smooth motion and high spatiotemporal consistency. In contrast, Consistent4D fails to operate effectively under such conditions.\nConsider a scenario where we seek 4D modeling of a static toy. While the toy can take different poses, it lacks autonomous movement, posing a challenge for continuous video capture. In these instances, our method demonstrates its effectiveness by requiring only a few key frames to produce dynamic content, thereby expanding the potential applications of the 4D generation task."
76
+ },
77
+ {
78
+ "section_id": "5.6",
79
+ "parent_section_id": "5",
80
+ "section_name": "Ablation studies",
81
+ "text": ""
82
+ },
83
+ {
84
+ "section_id": "5.6.1",
85
+ "parent_section_id": "5.6",
86
+ "section_name": "5.6.1 Ablation of image generation",
87
+ "text": "###table_9### Single\n\nview\n###figure_16### ###figure_17### ###figure_18### ###figure_19### No\n\ntime-sync\n###figure_20### ###figure_21### ###figure_22### ###figure_23### No\n\ninterp\n###figure_24### ###figure_25### ###figure_26### ###figure_27### Full\n\nsetting\n###figure_28### ###figure_29### ###figure_30### ###figure_31### ###table_10### We first performed ablation studies to assess the influence of different components in our image generation stage. To better evaluate the image quality itself, we reconstruct the 4D Gaussians without the SDS loss.\nWe compared our full method against three baseline settings: (1) only the input video is used for reconstruction; (2) the time-synchronous spatial volume is excluded; (3) frame interpolation is excluded. Note that setting (2) is equivalent to replacing SyncDreamer-T with the original SyncDreamer. The results corresponding to these settings can be found in Figure 8 ###reference_###.\nTable 3 ###reference_### also gives quantitative evaluations.\nImportance of synthetic data\nWe compare training on our generated image matrix against using only the input video, with the same 4D Gaussian representation model.\nAs shown in the first row of Figure 8 ###reference_###, when relying only on a single-view video, the model cannot produce any meaningful results for novel views. This indicates the importance of constructing proper training data.\nEffect of time-synchronous spatial volume\nWe assess the impact of the time-synchronous spatial volume introduced in SyncDreamer-T, as depicted in the contrast between the second and last rows of Figure 8 ###reference_###. Without the time-synchronous spatial volume, the back of the toy Spiderman exhibits inconsistencies, leading to distorted geometry. 
In contrast, the proposed time-synchronous spatial volume enhances both spatial and temporal consistency while preventing geometry collapse, resulting in more visually appealing image generation and higher CLIP-T scores in Table 3 ###reference_###.\nEffect of frame interpolation\nAs illustrated by the contrast between the third and last rows of Figure 8 ###reference_###, frame interpolation is effective in mitigating the blurring observed in images rendered from novel views, thus also delivering higher CLIP-T scores in Table 3 ###reference_###. This blurring is attributed to the low frame rate of the image matrix, which results in noticeable discontinuities."
88
+ },
89
+ {
90
+ "section_id": "5.6.2",
91
+ "parent_section_id": "5.6",
92
+ "section_name": "5.6.2 Ablation of reconstruction stage",
93
+ "text": "###table_11### ###figure_32### ###table_12### ###figure_33### In the reconstruction stage, we study the effect of three components: (1) the confidence map, (2) the supervision from generated images, and (3) the SDS loss. The visual results are shown in Figures 9 ###reference_### and 10 ###reference_###.\nEffect of confidence map\nThe integration of a confidence-weighted loss in our design serves as a strategy to mitigate training data noise.\nOn the left of Figure 9 ###reference_###, we study the effect of the confidence map when obvious inconsistencies exist in the generated images. Our temporal smoothing mechanism maintains consistency in most areas, whilst a small fraction of lower-quality regions may introduce conflicting gradients in the reconstruction stage, hurting the overall quality.\nTo mitigate this, our confidence-aware design comes into play, weakening the supervision from those inconsistent regions.\nSince these weakly supervised regions occupy only a small proportion across all views, the missing texture or geometry information can be compensated by the redundancy of other views and the generalization ability of 4D Gaussians.\nAs shown on the left of Figure 9 ###reference_###, the inclusion of confidence maps effectively reduces blurry and inconsistent rendering even when the generated images are plausibly consistent, resulting in a significant overall improvement in quality and temporal smoothness.\nImportance of image supervision and SDS loss\nIn Figure 10 ###reference_###, we compare the full setting of our method with baselines without image supervision (w/o image) or without the SDS loss (w/o SDS). The absence of anchored image supervision results in poor geometry, such as the hole in the elephant\u2019s ears and the multiple legs of the lion. This can also be attributed to the conflicts of the SDS loss during prolonged optimization. 
Without the SDS loss, in turn, the sparsity of the generated images means that novel views may not be reconstructed well, leading to blur and floaters. By contrast, our full method avoids both disadvantages and renders clean images with good geometry."
94
+ },
95
+ {
96
+ "section_id": "6",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "This study introduces a new framework, Efficient4D, designed for efficiently generating dynamic 4D objects from monocular videos captured by a stationary camera. Efficient4D consists of two main stages: first, generating multi-view videos with spatial and temporal coherence, and second, rapidly producing 4D object reconstructions. Our approach, using image supervision with a lightly weighted SDS loss, significantly accelerates the generation process, achieving speeds about 10 times faster than previous works while still delivering superior reconstruction and novel view synthesis results. Moreover, our model is effective in extremely sparse input scenarios, requiring as few as two input images, thereby expanding its application scope."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.1\">Quantitative evaluation</span> on synthetic data. We report CLIP and LPIPS scores between rendered images and ground truth images.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T1.2.2\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2.3\">\n<td class=\"ltx_td ltx_border_r ltx_border_tt ltx_border_tt\" id=\"S5.T1.2.2.3.1\" style=\"padding-left:12.8pt;padding-right:12.8pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T1.2.2.3.2\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">Consistent4D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib14\" title=\"\">14</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T1.2.2.3.3\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">4DGen\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib18\" title=\"\">18</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T1.2.2.3.4\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">STAG4D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T1.2.2.3.5\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">Efficient4D(Ours)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1.1\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">CLIP \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.1.1.1.2\" 
style=\"padding-left:12.8pt;padding-right:12.8pt;\">0.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.1.1.1.3\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">0.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.1.1.1.4\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">0.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.1.1.1.5\" style=\"padding-left:12.8pt;padding-right:12.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.1.1.5.1\">0.92</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_bb ltx_border_r\" id=\"S5.T1.2.2.2.1\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">LPIPS \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T1.2.2.2.2\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">0.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T1.2.2.2.3\" style=\"padding-left:12.8pt;padding-right:12.8pt;\">0.14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T1.2.2.2.4\" style=\"padding-left:12.8pt;padding-right:12.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.2.2.4.1\">0.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T1.2.2.2.5\" style=\"padding-left:12.8pt;padding-right:12.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.2.2.5.1\">0.13</span></td>\n</tr>\n</table>\n</figure>",
106
+ "capture": "Table 1: Quantitative evaluation on synthetic data. We report CLIP and LPIPS scores between rendered images and ground truth images."
107
+ },
108
+ "2": {
109
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.1\">Quantitative comparisons</span> with state-of-the-art methods on both video-to-4D and image-to-4D generation.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T2.5.5\">\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.5\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt ltx_border_tt\" id=\"S5.T2.5.5.5.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">Method</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T2.1.1.1.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">CLIP\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T2.2.2.2.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">CLIP-T-f\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T2.3.3.3.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">CLIP-T-s\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T2.4.4.4.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">CLIP-T-b\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt ltx_border_tt\" id=\"S5.T2.5.5.5.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">Generation time\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.6\">\n<td class=\"ltx_td ltx_nopad_l ltx_align_left ltx_border_tt\" colspan=\"6\" id=\"S5.T2.5.5.6.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.5.5.6.1.1\">- Video-to-4D comparison</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.7\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T2.5.5.7.1\" 
style=\"padding-left:1.6pt;padding-right:1.6pt;\">Consistent4D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib14\" title=\"\">14</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.7.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.8471</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.7.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9692</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.7.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.7.4.1\">0.9658</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.7.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9697</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.7.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">120 mins</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.8\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S5.T2.5.5.8.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">4DGen\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib18\" title=\"\">18</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.8.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.8.2.1\">0.8730</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.8.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9568</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.8.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9568</td>\n<td 
class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.8.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9573</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.8.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">130 mins</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.9\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S5.T2.5.5.9.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">STAG4D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.9.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.8398</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.9.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.9.3.1\">0.9766</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.9.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.9.4.1\">0.9731</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.9.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.9.5.1\">0.9760</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.9.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">70 mins</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.10\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S5.T2.5.5.10.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">Efficient4D (Ours)</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.10.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.5.5.10.2.1\">0.8745</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.10.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.10.3.1\">0.9766</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.10.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9609</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.10.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.10.5.1\">0.9780</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.10.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.10.6.1\">10 mins</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.11\">\n<td class=\"ltx_td ltx_nopad_l ltx_align_left ltx_border_tt\" colspan=\"6\" id=\"S5.T2.5.5.11.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.5.5.11.1.1\">- Image-to-4D comparison</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.12\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T2.5.5.12.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">Animate124\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib50\" title=\"\">50</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.12.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.8076</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.12.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.12.3.1\">0.9673</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" 
id=\"S5.T2.5.5.12.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.12.4.1\">0.9639</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.12.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.12.5.1\">0.9541</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S5.T2.5.5.12.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">540 mins</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.13\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_r\" id=\"S5.T2.5.5.13.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">DreamGaussian4D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.08742v3#bib.bib51\" title=\"\">51</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.13.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.13.2.1\">0.8242</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.13.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.8999</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.13.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9072</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.13.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">0.9038</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.13.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">12 mins</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.14\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_border_bb ltx_border_bb ltx_border_r\" id=\"S5.T2.5.5.14.1\" style=\"padding-left:1.6pt;padding-right:1.6pt;\">Efficient4D (Ours)</td>\n<td 
class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T2.5.5.14.2\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.14.2.1\">0.8350</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T2.5.5.14.3\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.14.3.1\">0.9346</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T2.5.5.14.4\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.14.4.1\">0.9297</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T2.5.5.14.5\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S5.T2.5.5.14.5.1\">0.9321</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_bb\" id=\"S5.T2.5.5.14.6\" style=\"padding-left:1.6pt;padding-right:1.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.14.6.1\">10 mins</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 2: Quantitative comparisons with state-of-the-art methods on both video-to-4D and image-to-4D generation."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.1\" style=\"color:#000000;\">Quantitative ablation study<span class=\"ltx_text ltx_font_medium\" id=\"S5.T3.6.1.1\"> on all our modules mesured on 4 sequences: alpaca, astronaut, rabbit and spiderman.</span></span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.4.4\">\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.4.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.1.1.1.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">CLIP \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.2.2.2.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">CLIP-T-f \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.3.3.3.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">CLIP-T-s \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.4.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">CLIP-T-b \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.5.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">Single-view</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.5.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.6684</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.5.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.5.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">-</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_tt ltx_border_t\" id=\"S5.T3.4.4.5.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.4.4.6.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">No time-sync</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.6.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.8270</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.6.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.9258</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.6.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.9197</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.6.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.8819</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T3.4.4.7.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">No interp</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.7.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.8595</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.7.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.9409</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.7.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.9443</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.4.4.7.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">0.9336</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_b ltx_border_r\" id=\"S5.T3.4.4.8.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">Full setting</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_b\" id=\"S5.T3.4.4.8.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.4.4.8.2.1\">0.8702</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_b\" id=\"S5.T3.4.4.8.3\" 
style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.4.4.8.3.1\">0.9689</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_b\" id=\"S5.T3.4.4.8.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.4.4.8.4.1\">0.9673</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_b\" id=\"S5.T3.4.4.8.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.4.4.8.5.1\">0.9684</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 3: Quantitative ablation study on all our modules measured on 4 sequences: alpaca, astronaut, rabbit and spiderman."
+ }
+ },
+ "image_paths": {
+ "1(a)": {
+ "figure_path": "2401.08742v3_figure_1(a).png",
+ "caption": "Figure 1: Examples of video-to-4D generation.\nInput: A brief video of a dynamic object, as represented by 2 frames per case;\nOutput: Generated novel views at different timestamps. The generation time is also shown for each method. More visualized comparisons can be found in Figure 5.",
+ "url": "http://arxiv.org/html/2401.08742v3/x1.png"
+ },
+ "1(b)": {
+ "figure_path": "2401.08742v3_figure_1(b).png",
+ "caption": "Figure 1: Examples of video-to-4D generation.\nInput: A brief video of a dynamic object, as represented by 2 frames per case;\nOutput: Generated novel views at different timestamps. The generation time is also shown for each method. More visualized comparisons can be found in Figure 5.",
+ "url": "http://arxiv.org/html/2401.08742v3/x2.png"
+ },
+ "1(c)": {
+ "figure_path": "2401.08742v3_figure_1(c).png",
+ "caption": "Figure 1: Examples of video-to-4D generation.\nInput: A brief video of a dynamic object, as represented by 2 frames per case;\nOutput: Generated novel views at different timestamps. The generation time is also shown for each method. More visualized comparisons can be found in Figure 5.",
+ "url": "http://arxiv.org/html/2401.08742v3/x3.png"
+ },
+ "2": {
+ "figure_path": "2401.08742v3_figure_2.png",
+ "caption": "Figure 2: Overview of our Efficient4D approach.\nGiven as the input (a) a brief video depicting a dynamic object from a single perspectives,\nour model aims to generate this object with geometrical and temporal consistency under any specific view and time.\nEfficient4D comprises two components:\n(A) Image sequence synthesis through (b) time-synchronous spatial volumes, resulting in (c) an image matrix\nwhere each row consists of multi-view geometrically consistent images\nand each column consists of view-specific temporally consistent images.\n(B) 4D Reconstruction using the generated images in (A). The 4D Gaussian representation can be trained efficiently and robustly under the confidence-weighted loss \u2112img-confsubscript\u2112img-conf\\mathcal{L}_{\\text{img-conf}}caligraphic_L start_POSTSUBSCRIPT img-conf end_POSTSUBSCRIPT and the low-weighted SDS loss \u2112S\u2062D\u2062Ssubscript\u2112\ud835\udc46\ud835\udc37\ud835\udc46\\mathcal{L}_{SDS}caligraphic_L start_POSTSUBSCRIPT italic_S italic_D italic_S end_POSTSUBSCRIPT.",
+ "url": "http://arxiv.org/html/2401.08742v3/x4.png"
+ },
+ "3(a)": {
+ "figure_path": "2401.08742v3_figure_3(a).png",
+ "caption": "Figure 3: \nExample analysis on temporal synchronization.\nIn this illustration, we manipulate the fusion ratio (w\ud835\udc64witalic_w) across a range from 0 to 0.5 for the spatial feature volumes of two input frames. The results indicate that a moderate ratio can achieve superior outcomes by balancing both temporal consistency and motion independence.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/time_sync/input_sync.png"
+ },
+ "3(b)": {
+ "figure_path": "2401.08742v3_figure_3(b).png",
+ "caption": "Figure 3: \nExample analysis on temporal synchronization.\nIn this illustration, we manipulate the fusion ratio (w\ud835\udc64witalic_w) across a range from 0 to 0.5 for the spatial feature volumes of two input frames. The results indicate that a moderate ratio can achieve superior outcomes by balancing both temporal consistency and motion independence.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/time_sync/feature_fusion.png"
+ },
+ "4": {
+ "figure_path": "2401.08742v3_figure_4.png",
+ "caption": "Figure 4: Qualitative evaluation against ground truth (GT) on synthetic data. We compare our Efficient4D with Consistent4D [14], 4DGen [18] and STAG4D [19].",
+ "url": "http://arxiv.org/html/2401.08742v3/x5.png"
+ },
+ "5": {
+ "figure_path": "2401.08742v3_figure_5.png",
+ "caption": "Figure 5: Qualitative comparisons on video-to-4D generation. We compare our Efficient4D with Consistent4D [14], 4DGen [18] and STAG4D [19]. For each case, we show four images per method with 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT elevation. Our Efficient4D comprises two stages: image generation stage (Our stage-1, 30\u2218superscript3030^{\\circ}30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT elevation) and reconstruction stage (Our stage-2, 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT elevation).",
+ "url": "http://arxiv.org/html/2401.08742v3/x6.png"
+ },
+ "6": {
+ "figure_path": "2401.08742v3_figure_6.png",
+ "caption": "Figure 6: Qualitative comparisons on image-to-4D generation. We compare our Efficient4D with Animate124 [50] and DreamGaussian4D [51] (DG4D). For each case, we show four images per method with 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT elevation. Our Efficient4D comprises two stages: image generation stage (Our stage-1, 30\u2218superscript3030^{\\circ}30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT elevation) and reconstruction stage (Our stage-2, 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT elevation).",
+ "url": "http://arxiv.org/html/2401.08742v3/x7.png"
+ },
+ "7(a)": {
+ "figure_path": "2401.08742v3_figure_7(a).png",
+ "caption": "Figure 7: Given only two input frames, our method is able to generate smooth dynamics. For each case, we show three internal images from two novel views.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/2input/dragon_input.png"
+ },
+ "7(b)": {
+ "figure_path": "2401.08742v3_figure_7(b).png",
+ "caption": "Figure 7: Given only two input frames, our method is able to generate smooth dynamics. For each case, we show three internal images from two novel views.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/2input/dragon_ours.png"
+ },
+ "7(c)": {
+ "figure_path": "2401.08742v3_figure_7(c).png",
+ "caption": "Figure 7: Given only two input frames, our method is able to generate smooth dynamics. For each case, we show three internal images from two novel views.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/2input/dragon_c4d.png"
+ },
+ "7(d)": {
+ "figure_path": "2401.08742v3_figure_7(d).png",
+ "caption": "Figure 7: Given only two input frames, our method is able to generate smooth dynamics. For each case, we show three internal images from two novel views.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/2input/guard_input.png"
+ },
+ "7(e)": {
+ "figure_path": "2401.08742v3_figure_7(e).png",
+ "caption": "Figure 7: Given only two input frames, our method is able to generate smooth dynamics. For each case, we show three internal images from two novel views.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/2input/guard_ours.png"
+ },
+ "7(f)": {
+ "figure_path": "2401.08742v3_figure_7(f).png",
+ "caption": "Figure 7: Given only two input frames, our method is able to generate smooth dynamics. For each case, we show three internal images from two novel views.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/2input/guard_c4d.png"
+ },
+ "8(a)": {
+ "figure_path": "2401.08742v3_figure_8(a).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_1view.png"
+ },
+ "8(b)": {
+ "figure_path": "2401.08742v3_figure_8(b).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_1view.png"
+ },
+ "8(c)": {
+ "figure_path": "2401.08742v3_figure_8(c).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_1view2.png"
+ },
+ "8(d)": {
+ "figure_path": "2401.08742v3_figure_8(d).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_1view2.png"
+ },
+ "8(e)": {
+ "figure_path": "2401.08742v3_figure_8(e).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_sync.png"
+ },
+ "8(f)": {
+ "figure_path": "2401.08742v3_figure_8(f).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_sync.png"
+ },
+ "8(g)": {
+ "figure_path": "2401.08742v3_figure_8(g).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_sync2.png"
+ },
+ "8(h)": {
+ "figure_path": "2401.08742v3_figure_8(h).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_sync2.png"
+ },
+ "8(i)": {
+ "figure_path": "2401.08742v3_figure_8(i).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_interp.png"
+ },
+ "8(j)": {
+ "figure_path": "2401.08742v3_figure_8(j).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_interp.png"
+ },
+ "8(k)": {
+ "figure_path": "2401.08742v3_figure_8(k).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_interp2.png"
+ },
+ "8(l)": {
+ "figure_path": "2401.08742v3_figure_8(l).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_interp2.png"
+ },
+ "8(m)": {
+ "figure_path": "2401.08742v3_figure_8(m).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_full.png"
+ },
+ "8(n)": {
+ "figure_path": "2401.08742v3_figure_8(n).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_full.png"
+ },
+ "8(o)": {
+ "figure_path": "2401.08742v3_figure_8(o).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/img_full2.png"
+ },
+ "8(p)": {
+ "figure_path": "2401.08742v3_figure_8(p).png",
+ "caption": "Figure 8: Ablation study on image generation, time-synchronous spatial volumes and frame interpolation. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/extracted/5746397/image/ablation/rec_full2.png"
+ },
+ "9": {
+ "figure_path": "2401.08742v3_figure_9.png",
+ "caption": "Figure 9: Ablation study on confidence maps. The images follow a chronological order from left to right.",
+ "url": "http://arxiv.org/html/2401.08742v3/x8.png"
+ },
+ "10": {
+ "figure_path": "2401.08742v3_figure_10.png",
+ "caption": "Figure 10: Ablation study on image supervision and SDS loss.",
+ "url": "http://arxiv.org/html/2401.08742v3/x9.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2401.08742v3"
+ }
20240722/2401.09967v4.json ADDED
The diff for this file is too large to render. See raw diff