yilunzhao commited on
Commit
8f5b997
·
verified ·
1 Parent(s): a25920f

Add files using upload-large-folder tool

Files changed (50)
  1. 20240127/2002.05907v2.json +0 -0
  2. 20240127/2111.02764v2.json +456 -0
  3. 20240127/2112.04870v2.json +255 -0
  4. 20240127/2206.05581v3.json +700 -0
  5. 20240127/2208.07462v4.json +132 -0
  6. 20240127/2209.07661v3.json +0 -0
  7. 20240127/2210.03888v2.json +0 -0
  8. 20240127/2210.13008v3.json +266 -0
  9. 20240127/2212.00403v2.json +159 -0
  10. 20240127/2302.11529v2.json +0 -0
  11. 20240127/2303.15198v2.json +0 -0
  12. 20240127/2304.01295v4.json +0 -0
  13. 20240127/2305.03939v4.json +0 -0
  14. 20240127/2305.07984v3.json +0 -0
  15. 20240127/2305.11467v4.json +198 -0
  16. 20240127/2306.06230v3.json +157 -0
  17. 20240127/2306.06397v4.json +103 -0
  18. 20240127/2308.10335v5.json +457 -0
  19. 20240127/2308.12608v3.json +0 -0
  20. 20240127/2308.14993v2.json +421 -0
  21. 20240127/2309.16742v4.json +100 -0
  22. 20240127/2309.17194v2.json +0 -0
  23. 20240127/2310.18446v5.json +462 -0
  24. 20240127/2311.00604v2.json +0 -0
  25. 20240127/2311.02340v2.json +0 -0
  26. 20240127/2311.04892v2.json +0 -0
  27. 20240127/2312.10623v3.json +0 -0
  28. 20240127/2401.09720v2.json +587 -0
  29. 20240127/2401.10124v2.json +634 -0
  30. 20240127/2401.11113v2.json +0 -0
  31. 20240127/2401.11723v2.json +0 -0
  32. 20240127/2401.13998v2.json +403 -0
  33. 20240127/2401.14132v2.json +0 -0
  34. 20240127/2401.15254v1.json +334 -0
  35. 20240127/2401.15258v1.json +276 -0
  36. 20240127/2401.15265v1.json +0 -0
  37. 20240127/2401.15275v1.json +0 -0
  38. 20240127/2401.15279v1.json +0 -0
  39. 20240127/2401.15282v1.json +0 -0
  40. 20240127/2401.15287v1.json +0 -0
  41. 20240127/2401.15290v1.json +71 -0
  42. 20240127/2401.15291v1.json +66 -0
  43. 20240127/2401.15293v1.json +268 -0
  44. 20240127/2401.15304v1.json +313 -0
  45. 20240127/2401.15307v1.json +0 -0
  46. 20240127/2401.15308v1.json +54 -0
  47. 20240127/2401.15312v1.json +192 -0
  48. 20240127/2401.15317v1.json +117 -0
  49. 20240127/2401.15319v1.json +0 -0
  50. 20240127/2401.15323v1.json +290 -0
20240127/2002.05907v2.json ADDED
The diff for this file is too large to render.
 
20240127/2111.02764v2.json ADDED
@@ -0,0 +1,456 @@
+ {
+ "title": "Stabilization and Variations to the Adaptive Local Iterative Filtering Algorithm: the Fast Resampled Iterative Filtering Method",
+ "abstract": "Non-stationary signals are ubiquitous in real life. Many techniques have been proposed in the last decades which allow decomposing multi-component signals into simple oscillatory mono-components, like the groundbreaking Empirical Mode Decomposition technique and the Iterative Filtering method. When a signal contains mono-components with rapidly varying instantaneous frequencies, think, for instance, of chirps or whistles, it becomes particularly hard for most techniques to properly factor out these components.\nThe Adaptive Local Iterative Filtering technique has recently gained interest in many applied fields of research for being able to deal with non-stationary signals presenting amplitude and frequency modulation.\nIn this work, we address the open question of how to guarantee a priori convergence of this technique, and propose two new algorithms.\nThe first method, called Stable Adaptive Local Iterative Filtering, is a stabilized version of the Adaptive Local Iterative Filtering that we prove to be always convergent. The stability, however, comes at the cost of a higher computational complexity. The second technique, called Resampled Iterative Filtering, is a new generalization of the Iterative Filtering method. We prove that Resampled Iterative Filtering is guaranteed to converge a priori for any kind of signal. Furthermore, in the discrete setting, by leveraging the mathematical properties of the matrices involved, we show that its calculations can be accelerated drastically. Finally, we present some artificial and real-life examples to show the power and performance of the proposed methods.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "The analysis and decomposition of non-stationary signals is an active research direction in both Mathematics and Signal Processing. In the last decades many new techniques have been proposed. Among them, the Iterative Filtering (IF) algorithm [20] was proposed a decade ago as an alternative to the celebrated Empirical Mode Decomposition (EMD) technique [17] and its variants [28, 30, 25, 31]. The EMD and its variants, in fact, lacked a rigorous mathematical analysis, due to their use of a number of heuristic and ad hoc elements. Some results have been presented in the literature [15, 26, 16], but a complete rigorous mathematical analysis is still missing nowadays.\nThe EMD-like methods are based on the iterative computation of the signal moving average via envelopes connecting its extrema. The computation of the signal moving average allows us to split the signal itself into a small number of simple oscillatory components, called Intrinsic Mode Functions (IMFs), which are separated in frequency and almost uncorrelated [11]. The IF method has been developed following the same structure as EMD, but with a key difference: the moving average is now obtained through an iterated convolutional filtering operation on the signal, with the aim of singling out all its non-stationary components, starting from the highest frequency one.\nThe IF algorithm structure allowed the development of a complete mathematical analysis of this method [8, 12, 14]. On the other side, this method is \u201crigid\u201d in the sense that it allows the extraction only of IMFs which are amplitude modulated, but almost stationary in frequency. This is a clear limitation if the signal contains chirps or whistles, i.e. components with quickly changing instantaneous frequencies. For this reason in [12] the authors proposed a generalization of IF called Adaptive Local IF (ALIF). ALIF no longer suffers from the rigidity of IF in extracting IMFs containing rapidly varying instantaneous frequencies. However, this new technique loses most of the mathematical background of IF. Even though the algorithm has gained visibility since its introduction five years ago, see, for instance, [1, 2, 3, 4, 5, 18, 19, 21, 22, 23, 29], an initial mathematical analysis has been developed only recently [10, 13], and much more study on extensions, variations and stabilization methods is currently ongoing, see, for instance, [6].\nDue to the missing theoretical background of the ALIF method, in this paper we introduce two new algorithms for which such an analysis is possible. The first, called the Stable ALIF (SALIF) method, is always convergent, even in the presence of noise, but it comes with an increased computational cost with respect to ALIF. The second, called the Resampled IF (RIF) method, is actually a modification of the IF algorithm that preserves the convergence property of IF but, at the same time, presents the same flexibility as ALIF. Furthermore, in the discrete case, the RIF method can be made highly computationally efficient via the FFT computation of the convolutions, in what is called the Fast Resampled IF (FRIF) method.\nThe rest of this paper is structured as follows. Section 2 reviews the IF and ALIF methods, and introduces the new SALIF method. Here we compare their features, stressing their strengths and weaknesses. Section 3 is dedicated to the RIF algorithm, its analysis, properties and acceleration via FFT, in what is called the FRIF technique. In this section we show how RIF combines the convergence and stability of IF with the flexibility of ALIF, and how it can be made computationally efficient. In Section 4 we compare these algorithms on artificial and real data, reporting the efficiency and accuracy of each method. Eventually, in Section 5, we draw conclusions and suggest future lines of research."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Iterative Filtering based Methods",
+ "text": "Throughout this document, a signal is intended to be a real function , and we study its behaviour in the reference interval . Outside this interval the signal is usually not known, so we have to impose some boundary conditions, discussed for example in [9] and [24]. In particular, in [24], the authors show how any signal can be pre-extended and made periodical at the boundaries. Therefore, from now on, for simplicity and without loss of generality, we will assume that the signals to be decomposed are always periodical at the boundaries.\nThe Iterative Filtering (IF) method mimics the EMD algorithm in the application of a moving average that captures the main trend of the signal, and allows us to decompose it into simple IMF components. If we call the moving average, then both the EMD and IF algorithms extract the first IMF as\nRepeating the same procedure iteratively on , we can extract all the IMFs until becomes a trend signal, meaning that it possesses at most two extrema.\nThe difference between these two algorithms is that, while for EMD the moving average operator changes at each iteration and depends completely on the shape of the given signal, in IF it can be rewritten as the convolution of with what is called a filter .\nHere a filter is an even, nonnegative, bounded and measurable real function with compact support and unit mass, meaning .\nA generalization of the IF method is called Adaptive Local Iterative Filtering (ALIF), and utilizes as moving average the convolution with a family of filters , whose support is in , i.e. it varies with . Therefore, the moving average operator can be written as\nFollowing [12], we can rewrite the same expression as\nwhere is a filter with constant support in and is a measurable function. In the following subsections we report the most common choice for the filters, and a description of the resulting method."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Linear ALIF",
+ "text": "When we talk about the ALIF method, we usually refer to Linear ALIF. After having fixed a filter with support , and a positive \u201clength\u201d function , the Linear ALIF method prescribes\nNotice that is a filter with support for every .\nGiven now a signal , we can compute a length function , which usually depends on the relative positions of local extrema in if the signal does not contain noise, and apply the iteration in (1) with the appropriate filter.\nRepeating the same procedure iteratively on , we obtain a decomposition of the signal into IMFs. Notice that changes after we identify each different IMF.\nHere we report the resulting algorithm.\nThe operation is designed to capture the fluctuating part of the signal, which usually presents high frequency. The operation is iterated until a stopping criterion is satisfied, usually regarding the norm of the difference , or the number of iterations themselves. For more details on the stopping criterion, we refer the interested reader to [12, 20].\nThe IMFs are thus extracted from the signal until it becomes just a trend signal with two or fewer extrema. Since the sum of all the IMFs and the trend signal returns the original signal, it can effectively be called a decomposition.\nRegarding the identification of the length function in signals containing noise, we observe that it is always possible to first run a time-frequency representation (TFR) algorithm, see [27] for a comprehensive review of modern TFR techniques, and then use the acquired information to design the optimal . This procedure is really important for the ALIF algorithm, but it is also a research topic per se. This is why, from now on, we assume that the length function can be computed accurately, and we postpone the analysis of how to actually compute it to a future work.\nConceptually, the ALIF method separates non-stationary components of the signal, even with varying amplitudes, starting from the highest frequencies.\nFor example, on real data, the method first extracts high frequency noise IMFs, and then starts to produce clean components. The main feature of the produced IMFs is that their instantaneous frequencies are pointwise sorted in decreasing order. In formulae, if is the instantaneous frequency of the -th IMF at the point , we have that\nThe method, albeit being very powerful and having already been utilized in a variety of applications, still lacks a theoretical analysis proving the convergence of (4), except in a few notorious cases [6, 10, 12].\nIn the next sections, we report some of the available convergence results for the discrete version of the algorithm."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Discrete ALIF and Stabilization",
+ "text": "Usually, in a discrete setting, a signal is given as a vector of sampled values where and for . As a consequence, one can discretize the relation (2) with a simple quadrature formula.\nIn turn, this lets us write the sampling vector of , called , as a matrix-vector multiplication. In fact, if we assume all the indexes start from zero,\nIn the Linear ALIF paradigm, we fix a filter and choose a length function depending on the signal, to produce our family of filters . The resulting algorithm is reported here.\nFrom the algorithm, it is evident that the convergence of the internal loop only depends on the spectral properties of the matrix . In fact, since , we find that a necessary condition for convergence is\nIf the zero eigenvalue of has equal geometric and algebraic multiplicities, and it is the only eigenvalue for which , then the condition is also sufficient. From the same analysis, one can notice that the algorithm actually produces a projection of the signal onto the approximated null space of .\nNotice that if is an invertible matrix, then , meaning that substituting in the algorithm does not significantly change the output. As a consequence, we can always suppose that is a stochastic matrix, so that we do not have to worry about large . Nonetheless, we still cannot rule out negative or complex eigenvalues which do not fulfil the relation (7). Recent studies [10, 6] show that for large and continuous functions , , almost all eigenvalues of the matrix are real and nonnegative, but this is still not enough to establish the convergence of the method. Moreover, it has been ascertained experimentally that such cases may arise, especially with a fast-changing function .\nA simple way to stabilize the method is to choose , so that is a nonnegative matrix that is also positive semidefinite, with all the eigenvalues bounded by . Notice that we can also use instead of , where , or in general any constant satisfying . We call the resulting method Stable ALIF (SALIF).\nThe method is called stable since a perturbation of the matrix does not prevent the convergence of the inner loop. Moreover, we will show in the experiments, see Section 4, that SALIF is able to produce more accurate solutions than the other methods.\nThe algorithm, though, comes with an increased computational cost with respect to ALIF, mainly due to two factors.\nThe iterative step in the SALIF algorithm takes at least double the time with respect to the corresponding step in the ALIF algorithm. Since the number of iterations is usually much smaller than , even computing beforehand does not improve the speed.\nThe order of the smallest eigenvalues of is approximately the square of the smallest ones in . The algorithm thus requires more iterations to attain the same accuracy as ALIF, since it must separate eigenspaces that are now closer.\nA different way to stabilize the method is to take constant, producing a much faster algorithm, i.e. the IF algorithm, though its scope of application is more limited."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "IF and Discrete IF",
+ "text": "When we talk about the IF method, we refer to the Linear ALIF method with constant length function in (4), or equivalently, where in (3).\nThe IF method only separates IMF components of the signal which are amplitude modulated, but quasi-stationary in frequency, starting from the highest frequencies. Nevertheless, it has been proved [15] that in this case the iterations (4) always converge whenever is a filter with nonnegative Fourier transform. The condition is satisfied, for example, by , where is a generic filter and is the convolution operator.\nIn the discrete setting, the IF algorithm has the advantage of a fast implementation based on the FFT, in what is called Fast Iterative Filtering (FIF), and an advanced theoretical analysis [8, 14]. Recall that we only know the signal on the interval , so we can always suppose that the original signal is -periodic (for example, by reflecting the signal on both sides and making it decay [24]). We can thus rewrite the moving average (2) as\nand discretize it on a regular grid of . Here, the integral is always well-defined, since the filter has compact support. Moreover is inversely proportional to the target frequency of the extracted IMF, and usually indicates that we already have a trend signal , so we always suppose .\nFollowing the same steps as in the ALIF algorithm, we find\nNotice that the above formula can be expressed through a Hermitian circulant matrix with first row\nwhere .\nThe sampling vector of on the points can thus be rewritten as a matrix-vector multiplication\nThe resulting algorithm is thus the same as Algorithm 2, but where is Hermitian and circulant.\nThe IF method is consequently much faster than the ALIF algorithm, since the multiplication can be performed very efficiently through an FFT. Actually, in [14] we can find an even faster implementation, the so-called FIF algorithm, and the proof that is also positive semidefinite.\nKeeping in mind that, as in ALIF, we can always multiply by a diagonal matrix and make it stochastic, we have the following result.\nGiven a double-convoluted filter , then for the IF operator in (8) the limit\nconverges for any function .\nMoreover, if , then for the IF matrix in (9) the limit\nconverges for any vector .\nTo summarize:\nthe IF and FIF algorithms always converge and are very fast, but cannot capture non-stationary components with quickly varying frequencies;\nthe ALIF algorithm is flexible enough to extract fully non-stationary components, but its convergence is not guaranteed;\nthe SALIF algorithm is always convergent and its output is more accurate than that of the ALIF algorithm, but it is very slow.\nIn the next section, we show how to design an alternative method that is flexible enough to perform non-stationary analysis on the signals, but at the same time fast and provably convergent."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Resampled Iterative Filtering",
+ "text": "The Linear ALIF method makes use of a length function to locally stretch a fixed filter so that the convolution with the signal smooths out the highly oscillatory behaviour. The idea behind the Resampled Iterative Filtering (RIF) algorithm is to set a fixed length for and instead modify the signal through a global resampling function. In a sense, we want to locally stretch the signal, making the component of highest frequency approximately stationary, so that we are able to identify it through the fast IF algorithm.\nGiven a resampling function that is increasing and regular enough, the moving average for the RIF method will coincide with the IF one applied on , as\nwhere we assume that the resampled signal is -periodic. If we consider the first-order expansion of , and after a change of variable , , we have\nwhich is analogous to the Linear ALIF moving average in (4), where, equivalently,\nWith (12), we now have a way to derive the resampling function from the length . The full RIF algorithm is thus reported as Algorithm 4.\nFrom the algorithm it is evident that, after the resampling, the steps are the same as in the IF algorithm. In fact, we always extract almost stationary IMFs from the resampled signal, and then we apply the inverse sampling to obtain the respective IMFs for the original signal. Moreover, we point out that depends on , so it must be computed every time we want to extract a new component.\nThis observation is also enough to show that the internal loop always converges to some IMF. In the next section we see how these properties carry over to the discrete case.\nNotice that RIF is actually a particular ALIF method, since\nwith in (3).\n\nFrom the relations (3), we can say that Linear ALIF is a first-order approximation of RIF, and since RIF is a convergent method, we could ask whether it produces the same output as Linear ALIF. The answer is provided in the following result.\nThe RIF method produces the same output as the Linear ALIF algorithm only when is a constant function, i.e. when the Linear ALIF algorithm reduces to IF.\nThe proof follows from the observation that the derivation of equation (3) holds true only if\nmeaning that is constant for every ."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Fast Resampled Iterative Filtering",
+ "text": "First of all, we review how to implement a discrete version of RIF. One way is by discretizing the IF moving average on the resampled signal, as in\nNotice that has domain , so we need to discretize it on the regular grid for . Recall that and that in the IF algorithm a constant indicates that is already a trend signal. This shows that we can safely assume and thus .\nWe now extend the signal cyclically on the real line, meaning that for every and every . The quadrature rule on the discretization points yields\nNotice that the above formula coincides with the IF moving average with length , and can be expressed through a Hermitian circulant matrix with first row\nwhere .\nThe moving average thus becomes\nwhere is still a Hermitian and circulant matrix, so that the matrix-vector multiplication can be performed efficiently through an FFT. In particular,\nwhere stands for the Hadamard (or element-wise) product between vectors, and DFT, iDFT stand for the Discrete Fourier Transform and its inverse, respectively. Moreover, since\nand since the stopping criterion can be checked on , we can further accelerate the method by computing the DFTs on and and the iDFT outside the loop, thus avoiding repeated computations of Fourier transforms.\nThe resulting method is reported in Algorithm 5.\nNotice that while the internal loop only consists of Hadamard multiplications among vectors, and its convergence properties can be analysed with the same tools used for the IF algorithm [8, 14], in the outer loop we perform operations that may lead to a loss in accuracy of the method. We can thus adopt a spline interpolation to mitigate the accuracy loss, and even in this case, the computational cost of the outer loop is still operations due to the Fourier transforms.\nAs for the previous algorithms, the matrix , and thus the vector , can be multiplied by a constant to upper bound its eigenvalues, and from Lemma 1 one can state an analogous convergence result.\nGiven a double-convoluted filter , then the inner loop of the RIF Algorithm 4 converges for any initial function .\nMoreover, if , then in the FRIF Algorithm 5 the limit\nconverges for any vector .\nWe have seen that the FRIF algorithm is provably convergent, and that its computational time is comparable with that of the FIF method. In the numerical examples, we will also show that empirically it produces sensible decompositions, but first let us address another property of the method."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Anti-Aliasing Property",
+ "text": "In the discrete setting, the resampling of the signal may in theory come with an undersampling of the highest frequencies, leading to aliasing effects. Here we show that in the FRIF algorithm this is actually not a problem.\nSuppose that the signal can be split into components , where has the highest instantaneous frequency among all the components.\nIn the FRIF algorithm we choose the resampling where and . The resampled signal thus has domain , but in the discrete setting we treat it as a signal over , so we are actually working with\nThe signal now presents a new decomposition into components , where and, if was the instantaneous frequency of , then the respective frequency of is .\nNotice that the function is chosen so that is now approximately a stationary signal, so\nfor some constant and for every . As a consequence,\nwhich is surely less than .\nMoreover, since is increasing and , then\nmeaning that still has the highest instantaneous frequency among the . This proves that the resampling does not create artificial high frequency components, so the FRIF algorithm does not suffer from aliasing problems."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Avoiding Interpolation",
+ "text": "As pointed out before, the interpolations may introduce a loss in accuracy in the output of Algorithm 5. One can, though, formulate a different, but equivalent, version of the continuous algorithm that does not require a resampling of the signal. Starting from (3), let , and , so that .\nAs a consequence, we can discretize the relation\nby applying a quadrature rule on the points , as\nwhich coincides with multiplying the discretized signal by the matrix , where\nNotice that is positive definite, since from (12), and is symmetric since the filter is an even function.\nIf we call the matrix in the case , then by Corollary 3 of [14], is positive semidefinite. Since, for a big enough , the matrix is approximated up to an arbitrarily small error by a principal submatrix of the matrix , we can conclude that is also positive semidefinite. As a consequence,\nand all its eigenvalues are real and less than 1. Eventually, as in the preceding algorithms, the matrix can be multiplied by a constant so that its eigenvalues are upper bounded, for example by 1, so that and the method becomes provably convergent.\nThe resulting algorithm is thus equivalent in its continuous version to Algorithm 4, and in its discrete version it avoids the need to interpolate the signal twice per IMF. Moreover, its internal loop has been proved to be convergent and it presents the same flexibility properties as ALIF.\nAt the same time, though, the matrix is not circulant, so we lose the fast implementation that was possible in Algorithm 5.\nFor this reason, we do not test this version of the RIF algorithm in the following numerical experiments."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Numerical Experiments",
+ "text": "In this section we show and compare the performance of all the reviewed techniques. In order to study the signals and their decompositions in time-frequency, we will rely on the so-called IMFogram, a recently developed algorithm [7], which allows us to represent the frequency content of all IMFs. The IMFogram proves to be a robust, fast and reliable way to obtain the time-frequency representation of a signal, and it has been shown to converge, in the limit, to the well-known spectrogram based on the FFT [11].\nThe following tests have been conducted using MATLAB R2021a installed on a 64-bit Windows 10 Pro computer equipped with an 11th Gen Intel Core i7-1165G7 processor at 2.80GHz and 8GB RAM. All tested examples and algorithms are freely available at www.cicone.com."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Example 1",
+ "text": "We consider the artificial signal , plotted in the left panel, bottom row, of Figure 1, which contains two nonstationary components with exponentially changing frequencies, and , plus a trend . In particular\nwhere varies in and is sampled over points.\nThe and components and the signal are plotted in the left panel of Figure 1, whereas the and frequencies are shown in the central panel.\nIn Table 1 we report the computational time required by ALIF, SALIF and FRIF with a fixed stopping criterion.\nIn the same table we summarize the performance of the three techniques in terms of the inner loop iterations required to produce the two IMFs and the relative error, measured as the ratio between the 2-norm of the difference between the computed IMF and the corresponding ground truth, and the 2-norm of the ground truth itself.\nFrom the results in Table 1 it is clear that FRIF converges quickly to a really accurate solution. In fact, it takes less than a second to produce a decomposition whose relative error is orders of magnitude smaller than those produced using the ALIF and SALIF methods. Furthermore, the ALIF and SALIF decompositions require more than 16 and 26 seconds, respectively, to converge. This is confirmed by the results shown in the right panel of Figure 1, where we compare the 2-norm relative error of the IMFs obtained using the ALIF, SALIF, and FRIF algorithms for subsequent steps in the inner loops when we remove the stopping condition.\nALIF initially tends toward the right solution. At 35 steps the relative error reaches its minimum value of , and then, after that, the instabilities of the method show up and drive the solution far away from the right one. SALIF, instead, is clearly convergent; in fact, the solution moves steadily towards the exact one. However, the convergence rate of SALIF is small, as proven by the relative error, which decays slowly. In fact, after 500 inner loop steps, the relative error is still around .\nFinally, FRIF quickly converges to a really good approximation of the right solution; at 73 steps the error is minimal, with a relative error of . After this step, the relative error starts growing again due to the chosen stopping criterion. It is important to remember, in fact, that, in general, the ground truth is not known. This is the reason why the stopping criterion adopted in these techniques does not rely on knowledge of the ground truth. Hence, as a consequence, FRIF, ALIF and SALIF do not necessarily stop when the actual best approximation of the ground truth is achieved. For example, one can see that the ALIF algorithm does not stop in the computation of the second IMF of the signal.\nStudying what an ideal stopping criterion could be, and how to tune it properly, is outside the scope of this work."
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Example 2",
75
+ "text": "In this second example, we start from the artificial signal which contains two nonstationary components, and , and a trend ,\nwhere vary in and is sampled over 8000 points.\nThe , , the trend component, and signal are plotted in the left column of Figure 2 ###reference_###, whereas and frequencies are shown in the right panel.\n###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### In Table 2 ###reference_### we report the performance of the ALIF, SALIF and FRIF techniques. In Figure 3 ###reference_### we show the differences between the IMFs produced by the different methods and the known ground truth. It is evident both from the table and the figure that the proposed FRIF method outperforms the other approaches from both the computational and the accuracy point of view."
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Example 3",
81
+ "text": "In this example we show the robustness of the proposed FRIF approach to noise. To do so, we consider the signal studied in Example 2 and perturb it with additive Gaussian noise. In the left panel of Figure 4 ###reference_### we plot the perturbed signal when the signal-to-noise ratio (SNR) is 8.6 dB. In the right panel we report the decomposition produced by FRIF. It is evident that the method can properly separate the random perturbation, in the first row, from the deterministic components in the following three rows.\n###figure_9### ###figure_10### This result is confirmed even when we decrease the SNR to 1.3 dB, left panel of Figure 5 ###reference_###. It is evident from this figure that this level of noise is quite high. Nevertheless, the FRIF method still proves able to separate the deterministic signal from the additive Gaussian contribution, as shown in the right panel of Figure 5 ###reference_###.\n###figure_11### ###figure_12###"
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "Example 4",
87
+ "text": "We conclude the numerical section with an example based on a real life signal. We consider the recording of the sound emitted by a bat, shown in the left panel of Figure 6 ###reference_###. In the central panel, we show the associated time-frequency plot obtained using the IMFogram [7 ###reference_b7###]. From this plot we observe that the signal appears to contain three main simple oscillatory components which present rapid changes in frequency. These are classical examples of so-called chirps. By using a curve extraction method, it is possible to derive from the IMFogram the instantaneous frequency curves plotted in the right panel of Figure 6 ###reference_###. As briefly mentioned earlier, the identification of these instantaneous frequency curves is of fundamental importance for the proper functioning of FRIF, but it is also a research topic per se. In this work, we assume that they can be computed accurately and we postpone the analysis of how to compute them in a robust and accurate way to future work.\n###figure_13### ###figure_14### ###figure_15### By leveraging the extracted curves, we run the FRIF algorithm and derive the decomposition shown in the leftmost panel of Figure 7 ###reference_###. The first three IMFs produced correspond to the three main chirps observed in the IMFogram, depicted in the central panel of Figure 6 ###reference_###. This is confirmed by running the IMFogram separately on the first three IMFs produced by FRIF. The results are shown in the rightmost three panels of Figure 7 ###reference_###. From these plots it becomes clear that the algorithm is able to cleanly separate the three chirps contained in the signal.\n###figure_16### ###figure_17### ###figure_18### ###figure_19###"
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Conclusions",
93
+ "text": "Following the success of the Empirical Mode Decomposition (EMD) method for the decomposition of non-stationary signals, and given that its mathematical understanding is still very limited, in recent years first the Iterative Filtering (IF) and then the Adaptive Local Iterative Filtering (ALIF) methods have been proposed. They inherit the same structure as EMD, but rely on convolution for the computation of the signal moving average. On the one hand, the mathematical understanding of IF is now quite advanced; this includes its acceleration in what is called Fast Iterative Filtering (FIF) and its complete convergence analysis. On the other hand, IF proved to be limited in separating, in a physically meaningful way, components which exhibit quick changes in their frequencies, like chirps or whistles. For this reason ALIF was proposed as a generalization of IF which overcomes these limitations. However, even though some advances have been obtained in recent years, the theoretical understanding of ALIF is far from complete. In particular, it is not yet clear under which assumptions it is possible to guarantee a priori its convergence.\nFor this reason, in this work we introduced the Resampled Iterative Filtering (RIF) and, in the discrete setting, the Stable Adaptive Local Iterative Filtering (SALIF) and the Fast Resampled Iterative Filtering (FRIF), which are capable of decomposing non-stationary signals into simple oscillatory components, even in the presence of fast changes in their instantaneous frequencies, as in chirps. We have analyzed them from a theoretical standpoint, showing, among other things, that it is possible to guarantee a priori their convergence. Furthermore, we have tested them on several artificial and real life examples.\nMore is yet to be said on this subject. In particular, all these methods depend on the computation of a length function which is, de facto, the reciprocal of the instantaneous frequency curve associated with each component contained in the signal. This function is required to guide the aforementioned methods, including ALIF itself, in the extraction of physically meaningful IMFs. The identification of the instantaneous frequency curves associated with each component contained in a given signal is a research topic per se, and it is outside the scope of the present work. This is why we plan to study this problem in future works.\nAnother open problem regards the selection of an optimal stopping criterion and its tuning for this kind of method. The implemented stopping criterion can significantly influence the performance of these techniques. We plan to work in this direction in the future.\nFinally, we plan to work on the extension of the proposed techniques to handle multidimensional and multivariate signals."
94
+ }
95
+ ],
96
+ "appendix": [],
97
+ "tables": {
98
+ "1": {
99
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.23\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.23.24.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.23.24.1.1\">Example 1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.23.24.1.2\">ALIF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.23.24.1.3\">SALIF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.23.24.1.4\">FRIF</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_tt\" id=\"S4.T1.3.3.4\">time(s)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.6.6.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.7.7.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.9.9.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T1.10.10.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.11.11.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.15.15.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.16.16.1\">num of iter \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.19.19.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.23.23\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.20.20.1\">num of iter \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.22.22.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.23.23.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.25.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.26.2\" style=\"font-size:90%;\">performance of various techniques when applied on Example 1, measured as relative errors in norm 2 and number of iterations.</span></figcaption>\n</figure>",
100
+ "capture": "Table 1: performance of various techniques when applied on Example 1, measured as relative errors in norm 2 and number of iterations."
101
+ },
102
+ "2": {
103
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.23\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.23.24.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T2.23.24.1.1\">Example 2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.23.24.1.2\">ALIF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.23.24.1.3\">SALIF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.23.24.1.4\">FRIF</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_tt\" id=\"S4.T2.3.3.4\">time(s)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T2.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.6.6.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.7.7.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T2.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.9.9.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T2.10.10.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.11.11.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T2.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.15.15.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T2.16.16.1\">num of iter \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.19.19.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.23.23\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T2.20.20.1\">num of iter \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.22.22.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.23.23.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.25.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S4.T2.26.2\" style=\"font-size:90%;\">Example 2 performance of ALIF, SALIF and FRIF, measured as relative errors in norm 2 and iteration number.</span></figcaption>\n</figure>",
104
+ "capture": "Table 2: Example 2 performance of ALIF, SALIF and FRIF, measured as relative errors in norm 2 and iteration number."
105
+ }
106
+ },
107
+ "image_paths": {
108
+ "1(a)": {
109
+ "figure_path": "2111.02764v2_figure_1(a).png",
110
+ "caption": "Figure 1: Example 1. Left panel: the components f1subscript\ud835\udc531f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and f2subscript\ud835\udc532f_{2}italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, respectively first and second row, the trend, third row, and the signal f\ud835\udc53fitalic_f, bottom row. Central panel: exponential instantaneous frequencies of f1subscript\ud835\udc531f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and f2subscript\ud835\udc532f_{2}italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right panel: relative error in norm 2 between the ground truth and IMF1subscriptIMF1\\textrm{IMF}_{1}IMF start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT produced by ALIF, SALIF, and FRIF algorithms.",
111
+ "url": "http://arxiv.org/html/2111.02764v2/x1.png"
112
+ },
113
+ "1(b)": {
114
+ "figure_path": "2111.02764v2_figure_1(b).png",
115
+ "caption": "Figure 1: Example 1. Left panel: the components f1subscript\ud835\udc531f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and f2subscript\ud835\udc532f_{2}italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, respectively first and second row, the trend, third row, and the signal f\ud835\udc53fitalic_f, bottom row. Central panel: exponential instantaneous frequencies of f1subscript\ud835\udc531f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and f2subscript\ud835\udc532f_{2}italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right panel: relative error in norm 2 between the ground truth and IMF1subscriptIMF1\\textrm{IMF}_{1}IMF start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT produced by ALIF, SALIF, and FRIF algorithms.",
116
+ "url": "http://arxiv.org/html/2111.02764v2/x2.png"
117
+ },
118
+ "1(c)": {
119
+ "figure_path": "2111.02764v2_figure_1(c).png",
120
+ "caption": "Figure 1: Example 1. Left panel: the components f1subscript\ud835\udc531f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and f2subscript\ud835\udc532f_{2}italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, respectively first and second row, the trend, third row, and the signal f\ud835\udc53fitalic_f, bottom row. Central panel: exponential instantaneous frequencies of f1subscript\ud835\udc531f_{1}italic_f start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and f2subscript\ud835\udc532f_{2}italic_f start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Right panel: relative error in norm 2 between the ground truth and IMF1subscriptIMF1\\textrm{IMF}_{1}IMF start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT produced by ALIF, SALIF, and FRIF algorithms.",
121
+ "url": "http://arxiv.org/html/2111.02764v2/x3.png"
122
+ },
123
+ "2(a)": {
124
+ "figure_path": "2111.02764v2_figure_2(a).png",
125
+ "caption": "Figure 2: Example 2. Left panel: the components h1subscript\u210e1h_{1}italic_h start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and h2subscript\u210e2h_{2}italic_h start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, respectively first and second row, and the signal h\u210ehitalic_h, bottom row. Right panel: exponential instantaneous frequencies of h1subscript\u210e1h_{1}italic_h start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and h2subscript\u210e2h_{2}italic_h start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.",
126
+ "url": "http://arxiv.org/html/2111.02764v2/x4.png"
127
+ },
128
+ "2(b)": {
129
+ "figure_path": "2111.02764v2_figure_2(b).png",
130
+ "caption": "Figure 2: Example 2. Left panel: the components h1subscript\u210e1h_{1}italic_h start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and h2subscript\u210e2h_{2}italic_h start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, respectively first and second row, and the signal h\u210ehitalic_h, bottom row. Right panel: exponential instantaneous frequencies of h1subscript\u210e1h_{1}italic_h start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and h2subscript\u210e2h_{2}italic_h start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.",
131
+ "url": "http://arxiv.org/html/2111.02764v2/x5.png"
132
+ },
133
+ "3(a)": {
134
+ "figure_path": "2111.02764v2_figure_3(a).png",
135
+ "caption": "Figure 3: Example 2. Difference between the ground truth and the derived decomposition via ALIF (left), SALIF (central), FRIF (right).",
136
+ "url": "http://arxiv.org/html/2111.02764v2/x6.png"
137
+ },
138
+ "3(b)": {
139
+ "figure_path": "2111.02764v2_figure_3(b).png",
140
+ "caption": "Figure 3: Example 2. Difference between the ground truth and the derived decomposition via ALIF (left), SALIF (central), FRIF (right).",
141
+ "url": "http://arxiv.org/html/2111.02764v2/x7.png"
142
+ },
143
+ "3(c)": {
144
+ "figure_path": "2111.02764v2_figure_3(c).png",
145
+ "caption": "Figure 3: Example 2. Difference between the ground truth and the derived decomposition via ALIF (left), SALIF (central), FRIF (right).",
146
+ "url": "http://arxiv.org/html/2111.02764v2/x8.png"
147
+ },
148
+ "4(a)": {
149
+ "figure_path": "2111.02764v2_figure_4(a).png",
150
+ "caption": "Figure 4: Example 3. Left panel, the noisy signal compared with the noiseless signal h\u210ehitalic_h defined in Example 2. The SNR is around 8.6 dB. Right panel, the IMF decomposition derived by FRIF.",
151
+ "url": "http://arxiv.org/html/2111.02764v2/x9.png"
152
+ },
153
+ "4(b)": {
154
+ "figure_path": "2111.02764v2_figure_4(b).png",
155
+ "caption": "Figure 4: Example 3. Left panel, the noisy signal compared with the noiseless signal h\u210ehitalic_h defined in Example 2. The SNR is around 8.6 dB. Right panel, the IMF decomposition derived by FRIF.",
156
+ "url": "http://arxiv.org/html/2111.02764v2/x10.png"
157
+ },
158
+ "5(a)": {
159
+ "figure_path": "2111.02764v2_figure_5(a).png",
160
+ "caption": "Figure 5: Example 3. Left panel, the noisy signal with SNR around 1.3 dB compared with the noiseless signal h\u210ehitalic_h of Example 2. Right panel, the corresponding FRIF decomposition compared with the ground truth.",
161
+ "url": "http://arxiv.org/html/2111.02764v2/x11.png"
162
+ },
163
+ "5(b)": {
164
+ "figure_path": "2111.02764v2_figure_5(b).png",
165
+ "caption": "Figure 5: Example 3. Left panel, the noisy signal with SNR around 1.3 dB compared with the noiseless signal h\u210ehitalic_h of Example 2. Right panel, the corresponding FRIF decomposition compared with the ground truth.",
166
+ "url": "http://arxiv.org/html/2111.02764v2/x12.png"
167
+ },
168
+ "6(a)": {
169
+ "figure_path": "2111.02764v2_figure_6(a).png",
170
+ "caption": "Figure 6: Example 4. Left panel, sound produced by a bat. Central panel, the corresponding IMFogram time-frequency plot. Right panel, instantaneous frequency curves inferred from the IMFogram plot.",
171
+ "url": "http://arxiv.org/html/2111.02764v2/x13.png"
172
+ },
173
+ "6(b)": {
174
+ "figure_path": "2111.02764v2_figure_6(b).png",
175
+ "caption": "Figure 6: Example 4. Left panel, sound produced by a bat. Central panel, the corresponding IMFogram time-frequency plot. Right panel, instantaneous frequency curves inferred from the IMFogram plot.",
176
+ "url": "http://arxiv.org/html/2111.02764v2/x14.png"
177
+ },
178
+ "6(c)": {
179
+ "figure_path": "2111.02764v2_figure_6(c).png",
180
+ "caption": "Figure 6: Example 4. Left panel, sound produced by a bat. Central panel, the corresponding IMFogram time-frequency plot. Right panel, instantaneous frequency curves inferred from the IMFogram plot.",
181
+ "url": "http://arxiv.org/html/2111.02764v2/x15.png"
182
+ },
183
+ "7(a)": {
184
+ "figure_path": "2111.02764v2_figure_7(a).png",
185
+ "caption": "Figure 7: Example 4. Left most panel, IMF decomposition produced by FRIF. From central left to right most panel, the IMFogram time-frequency plots associated with the first, second and third row in the FRIF decomposition, respectively.",
186
+ "url": "http://arxiv.org/html/2111.02764v2/x16.png"
187
+ },
188
+ "7(b)": {
189
+ "figure_path": "2111.02764v2_figure_7(b).png",
190
+ "caption": "Figure 7: Example 4. Left most panel, IMF decomposition produced by FRIF. From central left to right most panel, the IMFogram time-frequency plots associated with the first, second and third row in the FRIF decomposition, respectively.",
191
+ "url": "http://arxiv.org/html/2111.02764v2/x17.png"
192
+ },
193
+ "7(c)": {
194
+ "figure_path": "2111.02764v2_figure_7(c).png",
195
+ "caption": "Figure 7: Example 4. Left most panel, IMF decomposition produced by FRIF. From central left to right most panel, the IMFogram time-frequency plots associated with the first, second and third row in the FRIF decomposition, respectively.",
196
+ "url": "http://arxiv.org/html/2111.02764v2/x18.png"
197
+ },
198
+ "7(d)": {
199
+ "figure_path": "2111.02764v2_figure_7(d).png",
200
+ "caption": "Figure 7: Example 4. Left most panel, IMF decomposition produced by FRIF. From central left to right most panel, the IMFogram time-frequency plots associated with the first, second and third row in the FRIF decomposition, respectively.",
201
+ "url": "http://arxiv.org/html/2111.02764v2/x19.png"
202
+ }
203
+ },
204
+ "validation": true,
205
+ "references": [
206
+ {
207
+ "1": {
208
+ "title": "Local rub-impact fault diagnosis of a rotor system based on adaptive\nlocal iterative filtering.",
209
+ "author": "X. An.",
210
+ "venue": "Transactions of the Institute of Measurement and Control,\n39(5):748\u2013753, 2017.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "2": {
216
+ "title": "Application of adaptive local iterative filtering and approximate\nentropy to vibration signal denoising of hydropower unit.",
217
+ "author": "X. An, C. Li, and F. Zhang.",
218
+ "venue": "Journal of Vibroengineering, 18(7):4299\u20134311, 2016.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "3": {
224
+ "title": "Wind turbine bearing fault diagnosis based on adaptive local\niterative filtering and approximate entropy.",
225
+ "author": "X. An and L. Pan.",
226
+ "venue": "Proceedings of the Institution of Mechanical Engineers, Part C:\nJournal of Mechanical Engineering Science, 231(17):3228\u20133237, 2017.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "4": {
232
+ "title": "Vibration signal analysis of a hydropower unit based on adaptive\nlocal iterative filtering.",
233
+ "author": "X. An, W. Yang, and X. An.",
234
+ "venue": "Proceedings of the Institution of Mechanical Engineers, Part C:\nJournal of Mechanical Engineering Science, 231(7):1339\u20131353, 2017.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "5": {
240
+ "title": "Demodulation analysis based on adaptive local iterative filtering for\nbearing fault diagnosis.",
241
+ "author": "X. An, H. Zeng, and C. Li.",
242
+ "venue": "Measurement, 94:554\u2013560, 2016.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "6": {
248
+ "title": "Conjectures on spectral properties of alif algorithm, 2021.",
249
+ "author": "G. Barbarino and A. Cicone.",
250
+ "venue": "arXiv:2009.00582.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "7": {
256
+ "title": "Time-frequency representation of nonstationary signals: the imfogram.",
257
+ "author": "P. Barbe, A. Cicone, W. Suet Li, and H. Zhou.",
258
+ "venue": "Pure and Applied Functional Analysis, 2021.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "8": {
264
+ "title": "Iterative filtering as a direct method for the decomposition of\nnonstationary signals.",
265
+ "author": "A. Cicone.",
266
+ "venue": "Numerical Algorithms, pages 1\u201317, 2020.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "9": {
272
+ "title": "Study of boundary conditions in the iterative filtering method for\nthe decomposition of nonstationary signals.",
273
+ "author": "A. Cicone and P. Dell\u2019Acqua.",
274
+ "venue": "Journal of Computational and Applied Mathematics, 373:112248,\n2020.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "10": {
280
+ "title": "Spectral and convergence analysis of the discrete alif method.",
281
+ "author": "A. Cicone, C. Garoni, and S. Serra-Capizzano.",
282
+ "venue": "Linear Algebra and its Applications, 580:62\u201395, 2019.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "11": {
288
+ "title": "New theoretical insights in the decomposition and time-frequency\nrepresentation of nonstationary signals: the imfogram algorithm.",
289
+ "author": "A. Cicone, W. S. Li, and H. Zhou.",
290
+ "venue": "preprint, 2021.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "12": {
296
+ "title": "Adaptive local iterative filtering for signal decomposition and\ninstantaneous frequency analysis.",
297
+ "author": "A. Cicone, J. Liu, and H. Zhou.",
298
+ "venue": "Applied and Computational Harmonic Analysis, 41(2):384\u2013411,\n2016.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "13": {
304
+ "title": "Convergence analysis of adaptive locally iterative filtering and sift\nmethod.",
305
+ "author": "A. Cicone and H.-T. Wu.",
306
+ "venue": "submitted, 2021.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "14": {
312
+ "title": "Numerical analysis for iterative filtering with new efficient\nimplementations based on fft.",
313
+ "author": "A. Cicone and H. Zhou.",
314
+ "venue": "Numerische Mathematik, 147(1):1\u201328, 2021.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "15": {
320
+ "title": "Convergence of a convolution-filtering-based algorithm for empirical\nmode decomposition.",
321
+ "author": "C. Huang, L. Yang, and Y. Wang.",
322
+ "venue": "Advances in Adaptive Data Analysis, 1(04):561\u2013571, 2009.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "16": {
328
+ "title": "Introduction to the hilbert\u2013huang transform and its related\nmathematical problems.",
329
+ "author": "N. E. Huang.",
330
+ "venue": "Hilbert\u2013Huang transform and its applications, pages 1\u201326,\n2014.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "17": {
336
+ "title": "The empirical mode decomposition and the hilbert spectrum for\nnonlinear and non-stationary time series analysis.",
337
+ "author": "N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, H. H. Shih, Q. Zheng, N.-C. Yen,\nC. C. Tung, and H. H. Liu.",
338
+ "venue": "Proceedings of the Royal Society of London. Series A:\nmathematical, physical and engineering sciences, 454(1971):903\u2013995, 1998.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "18": {
344
+ "title": "A multiscale computation for highly oscillatory dynamical systems\nusing empirical mode decomposition (emd)\u2013type methods.",
345
+ "author": "S. J. Kim and H. Zhou.",
346
+ "venue": "Multiscale Modeling & Simulation, 14(1):534\u2013557, 2016.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "19": {
352
+ "title": "The entropy algorithm and its variants in the fault diagnosis of\nrotating machinery: A review.",
353
+ "author": "Y. Li, X. Wang, Z. Liu, X. Liang, and S. Si.",
354
+ "venue": "IEEE Access, 6:66723\u201366741, 2018.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "20": {
360
+ "title": "Iterative filtering as an alternative algorithm for empirical mode\ndecomposition.",
361
+ "author": "L. Lin, Y. Wang, and H. Zhou.",
362
+ "venue": "Advances in Adaptive Data Analysis, 1(04):543\u2013560, 2009.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "21": {
368
+ "title": "Classification of partial discharge signals by combining adaptive\nlocal iterative filtering and entropy features.",
369
+ "author": "I. Mitiche, G. Morison, A. Nesbitt, M. Hughes-Narborough, B. G. Stewart, and\nP. Boreham.",
370
+ "venue": "Sensors, 18(2):406, 2018.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "22": {
376
+ "title": "Adaptive local iterative filtering: A promising technique for the\nanalysis of nonstationary signals.",
377
+ "author": "M. Piersanti, M. Materassi, A. Cicone, L. Spogli, H. Zhou, and R. G. Ezquer.",
378
+ "venue": "Journal of Geophysical Research: Space Physics,\n123(1):1031\u20131046, 2018.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "23": {
384
+ "title": "Automatic sleep stages classification based on iterative filtering of\nelectroencephalogram signals.",
385
+ "author": "R. Sharma, R. B. Pachori, and A. Upadhyay.",
386
+ "venue": "Neural Computing and Applications, 28(10):2959\u20132978, 2017.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "24": {
392
+ "title": "New insights and best practices for the successful use of empirical\nmode decomposition, iterative filtering and derived algorithms.",
393
+ "author": "A. Stallone, A. Cicone, and M. Materassi.",
394
+ "venue": "Scientific Reports, 10:15161, 2020.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "25": {
400
+ "title": "A complete ensemble empirical mode decomposition with adaptive noise.",
401
+ "author": "M. E. Torres, M. A. Colominas, G. Schlotthauer, and P. Flandrin.",
402
+ "venue": "In 2011 IEEE International Conference on Acoustics, Speech and\nSignal Processing (ICASSP), pages 4144\u20134147. IEEE, 2011.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "26": {
408
+ "title": "Filter bank property of multivariate empirical mode decomposition.",
409
+ "author": "N. Ur Rehman and D. P. Mandic.",
410
+ "venue": "IEEE Transactions on Signal Processing, 59(5):2421\u20132426, 2011.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "27": {
416
+ "title": "Current state of nonlinear-type time-frequency analysis and\napplications to high-frequency biomedical signals.",
417
+ "author": "H.-T. Wu.",
418
+ "venue": "Current Opinion in Systems Biology, 23:8\u201321, 2020.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "28": {
424
+ "title": "Ensemble empirical mode decomposition: a noise-assisted data analysis\nmethod.",
425
+ "author": "Z. Wu and N. E. Huang.",
426
+ "venue": "Advances in Adaptive Data Analysis, 1(01):1\u201341, 2009.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "29": {
432
+ "title": "Oscillation mode analysis for power grids using adaptive local\niterative filter decomposition.",
433
+ "author": "D. Yang, B. Wang, G. Cai, and J. Wen.",
434
+ "venue": "International Journal of Electrical Power & Energy Systems,\n92:25\u201333, 2017.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "30": {
440
+ "title": "Complementary ensemble empirical mode decomposition: A novel noise\nenhanced data analysis method.",
441
+ "author": "J.-R. Yeh, J.-S. Shieh, and N. E. Huang.",
442
+ "venue": "Advances in Adaptive Data Analysis, 2(02):135\u2013156, 2010.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "31": {
448
+ "title": "Partly ensemble empirical mode decomposition: An improved\nnoise-assisted method for eliminating mode mixing.",
449
+ "author": "J. Zheng, J. Cheng, and Y. Yang.",
450
+ "venue": "Signal Processing, 96:362\u2013374, 2014.",
451
+ "url": null
452
+ }
453
+ }
454
+ ],
455
+ "url": "http://arxiv.org/html/2111.02764v2"
456
+ }
20240127/2112.04870v2.json ADDED
@@ -0,0 +1,255 @@
1
+ {
2
+ "title": "Eigenfunction martingale estimators for interacting particle systems and their mean field limit",
3
+ "abstract": "We study the problem of parameter estimation for large exchangeable interacting particle systems when a sample of discrete observations from a single particle is known. We propose a novel method based on martingale estimating functions constructed by employing the eigenvalues and eigenfunctions of the generator of the mean field limit, where the law of the process is replaced by the (unique) invariant measure of the mean field dynamics. We then prove that our estimator is asymptotically unbiased and asymptotically normal when the number of observations and the number of particles tend to infinity, and we provide a rate of convergence towards the exact value of the parameters. Finally, we present several numerical experiments which show the accuracy of our estimator and corroborate our theoretical findings, even in the case where the mean field dynamics exhibits more than one steady state.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Interacting particle systems and, more generally interacting multiagent models, appear frequently in the natural and social sciences. In addition to the well known applications, e.g., plasma physics [22 ###reference_b22###] and stellar dynamics [7 ###reference_b7###], new applications include, e.g., the modeling of chemotaxis [40 ###reference_b40###], pedestrian dynamics [24 ###reference_b24###, 30 ###reference_b30###], crowd dynamics [32 ###reference_b32###], urban modeling [14 ###reference_b14###], models for opinion formation [18 ###reference_b18###, 21 ###reference_b21###], collective behavior [11 ###reference_b11###], and models for systemic risk [20 ###reference_b20###]. In many of these applications, the phenomenological models involve unknown parameters that need to be estimated from data. This is particularly the case for multiagent models used in the social sciences and in economics, where no physics-informed choices of parameters are available. Learning parameters or even models, in a nonparametric setting, from data is becoming an increasingly important aspect of the overall mathematical modeling strategy. This is particularly the case in view of the huge quantity of available data in different areas, which allows the development of accurate data-driven techniques for learning parameters from data.\nIn this paper we study the problem of inference for systems of (weakly) interacting diffusions for which the mean field limit exists and is described by a nonlinear diffusion process of McKean type, obtained in the limit as the number of interacting processes goes to infinity. When the number of interacting stochastic differential equations (SDEs) is large, the inference problem can become computationally intractable and it is often useful to study the problem of parameter estimation for the limiting mean field SDE. 
This is related, but distinct, from the problem of inference for multiscale diffusions [1 ###reference_b1###, 2 ###reference_b2###, 17 ###reference_b17###, 35 ###reference_b35###, 37 ###reference_b37###] where the objective is to learn the parameters in the homogenized (limiting) SDE from observations of the full dynamics. Our goal is to show how the inference methodology using eigenfunction martingale estimating functions that was applied in [2 ###reference_b2###] to multiscale diffusions can be modified so that it can also be applied to interacting diffusions with a well defined mean field limit. It is useful to keep in mind the analogy between the homogenization and mean field limits, in the context of parameter estimation.\nInference for large interacting systems has attracted considerable attention, starting from the work of Kasonga [26 ###reference_b26###], in which the maximum likelihood estimator (MLE) was considered. In particular, it was proved that the MLE for estimating parameters in the drift, when the drift is linearly dependent on the parameters, given continuous time observations of all the particles of the -particle system, is consistent and asymptotically normal in the limit as . In this setting, it is possible to test whether the particles are interacting or not, at least in the linear case, i.e., for a system of interacting Ornstein\u2013Uhlenbeck processes. Consistency and asymptotic normality of the sieve estimator and an approximate MLE estimator, i.e., when discrete observations of all the particles are given, was studied in [8 ###reference_b8###] in the same framework of linear dependence on the parameters for the drift and known diffusion coefficient. Moreover, MLE inference of the mean field Ornstein\u2013Uhlenbeck SDE was also considered. Properties of the MLE for the McKean SDE, when a continuous path of the SDE is observed, were studied in [43 ###reference_b43###]. 
Consistency of the MLE was proved and an application to a model for ionic diffusion was presented. The MLE estimator for the McKean SDE was also considered in [29 ###reference_b29###] and numerical experiments for the mean field Ornstein\u2013Uhlenbeck process were presented. The combined large-particle and long-time asymptotics of the MLE for the case of a quadratic interaction, i.e., for interacting Ornstein\u2013Uhlenbeck processes, was studied in [10 ###reference_b10###]. Unlike the previous works mentioned in this literature review, the case where only a single particle trajectory is observed was considered in that work. It was shown that the parameters in the drift can be estimated with optimal rate of convergence simultaneously in the mean-field limit and in the long-time dynamics. Offline and online inference for the McKean SDE was studied in [39 ###reference_b39###]. Consistency and asymptotic normality of the offline MLE for the interacting particle system in the limit as the number of particles tends to infinity was shown. In addition, an online parameter estimator for the mean field SDE was proposed, which evolves according to a continuous-time stochastic gradient descent algorithm on the asymptotic log-likelihood of the interacting particle system.\nIn this paper we consider systems of exchangeable weakly interacting diffusions for which uniform propagation of chaos results are known [4 ###reference_b4###, 5 ###reference_b5###, 12 ###reference_b12###, 31 ###reference_b31###, 33 ###reference_b33###] and for which the mean field SDE has a unique invariant measure. We assume that we are given a sample of discrete-time observations of a single particle. Due to exchangeability, this amount of information should be sufficient to infer parameters in the mean field SDE, in the joint asymptotic limit as the number of observations and the number of particles go to infinity. 
Our approach consists of constructing martingale estimating functions [6 ###reference_b6###, 27 ###reference_b27###] based on the eigenvalues and the eigenfunctions of the generator of the mean field dynamics. Then, our eigenfunction estimator is the zero of the estimating function. The martingale estimator based on the eigenfunctions of the generator was used to study the inference problem for multiscale diffusions in [2 ###reference_b2###]. Unlike the finite dimensional case, the mean field SDE is a measure-valued process and the generator is a nonlinear operator, dependent on the law of the process. A direct application of the martingale eigenfunction estimator would require the solution of a nonlinear eigenvalue problem that can be computationally demanding and that would also lead to eigenfunctions depending on time via their dependence on the law of the process. We circumvent this difficulty by replacing the law of the process with the (unique) invariant measure of the mean field dynamics. This leads to a standard Sturm\u2013Liouville type of eigenvalue problem that we can analyze and also solve numerically at a low computational cost. In this paper we consider the framework where the invariant measure of the mean field SDE is unique. We remark, however, that our numerical experiments show that our methodology applies to McKean SDEs that exhibit phase transitions, i.e., that have multiple stationary measures, as long as we are below the transition point, or the form of the invariant measure is known up to a finite set of parameters, e.g., moments.\nWhen the mean field dynamics has a unique invariant measure, we first show the existence of the estimator with high probability when the number of available data and particles is large enough, and then analyze its consistency proving the asymptotic convergence towards the true value of the unknown parameter and providing a rate. Moreover, we prove that the estimator is asymptotically normal. 
We also note that the relationship between the number of observations and particles plays an important role in the study of the asymptotic properties of the estimator; in particular, the latter must be sufficiently greater than the former in order for the previous results to hold. We then present a series of numerical experiments which confirm our theoretical results, and we show the advantages of our method with respect to the MLE. In particular, in contrast with our estimator, the MLE is biased when we have sparse observations, i.e., when the sampling rate is far from its vanishing asymptotic limit."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Problem setting",
15
+ "text": "In this work we consider a system of interacting particles in one dimension moving in a confining potential over the time interval whose interaction is governed by an interaction potential\nwhere is the number of particles, are standard independent one dimensional Brownian motions, and are the confining and interaction potentials, respectively, which depend on some parameters , and is the diffusion coefficient. The functions and are then the derivatives of and with respect to their first argument. We assume chaotic initial conditions, i.e., that the particles are initially distributed according to the same measure .\nWe consider the case when the particles move in one dimension for the clarity of exposition. In fact, the proposed method and our rigorous results can be easily generalized to the case of interacting particles moving in dimension . However in higher dimensions the problem becomes more complex and expensive from the computational point of view.\nWe place ourselves in the same framework of [31 ###reference_b31###], which is summarized in the following assumption.\nThe confining and interaction potentials and , respectively, satisfy:\nis uniformly convex and polynomially bounded along with its derivatives uniformly in ;\nis even, convex and polynomially bounded along with its derivatives uniformly in .\nIt is well-known (see, e.g., [36 ###reference_b36###, Chapter 4]) that under Assumption 2.2 ###reference_theorem2### the dynamics described by the system (2.1 ###reference_###) is geometrically ergodic with unique invariant measure given by the Gibbs measure , where\nand is defined by\nfor with and the set of admissible parameters. The main goal of this paper is the estimation of the unknown parameter , given discrete observations of the path of one single particle. We are interested in applications involving large interacting particle systems, i.e., when , hence studying the whole system is not practical and can be computationally unfeasible. 
Therefore, our approach consists of considering the mean field limit, which has already been thoroughly studied (see, e.g., [11 ###reference_b11###, 19 ###reference_b19###]). Letting the number of particles go to infinity we obtain the nonlinear, in the sense of McKean, SDE\nwhere is the density with respect to the Lebesgue measure of the law of and the nonlinearity means that the drift of the SDE (2.4 ###reference_###) depends on the law of the process. The density is the solution of the nonlinear Fokker\u2013Planck (McKean\u2013Vlasov) equation\nwith initial condition . It is well known that, in contrast to the finite dimensional dynamics, the mean field limit (2.4 ###reference_###) can have, in the non-convex case, more than one invariant measure [9 ###reference_b9###, 11 ###reference_b11###]. The density of the stationary state(s) satisfies the stationary Fokker\u2013Planck equation\nwhere the second variable emphasizes the fact that depends on the parameters and of the potentials and , respectively. However, under Assumption 2.2 ###reference_theorem2### it has been proven in [31 ###reference_b31###] that there exists a unique invariant measure which is the solution of\nwhere is the normalization constant\nA particular choice for the interaction potential is the Curie\u2013Weiss quadratic interaction [11 ###reference_b11###], which is also known as the harmonic potential. 
We take and consider the confining potential\nThe interacting particle system (2.1 ###reference_###) becomes, for all\nwhere denotes the empirical mean\nThis interaction term creates a tendency for the particles to relax toward the center of gravity of the ensemble and the parameter measures the strength of the interaction between the agents, hence this model provides a simple example of cooperative interaction.\nThe mean field limit (2.4 ###reference_###) then becomes\nwhere denotes the expectation of , , and its unique (when the confining potential is convex) invariant measure is given by\nwith the constraint for the expectation with respect to the invariant measure\nand where\nEquation (2.14 ###reference_###) is the self-consistency equation [11 ###reference_b11###, 15 ###reference_b15###, 23 ###reference_b23###] that enables us to calculate the invariant measure and, then, the stationary state(s). In the case where the confining potential is quadratic, we have a system of linear SDEs and the mean field limit reduces to the mean field Ornstein\u2013Uhlenbeck SDE. In this case the first moment vanishes, , and the invariant measure is unique (this is the case, of course, of arbitrary strictly convex confining potentials). The inference problem for the linear interacting particle system and for the corresponding mean field limit is easier than that of the general case. We emphasize that, unlike the present work, most earlier papers, e.g., [8 ###reference_b8###, 26 ###reference_b26###], focus on this linear case, i.e., on systems of weakly interacting linear stochastic differential equations. The estimator proposed and studied in this paper can be applied to arbitrary non-quadratic interaction and confining potentials."
16
+ },
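The Curie\u2013Weiss dynamics described in this section can be sketched with a short Euler\u2013Maruyama simulation. This is a minimal illustration rather than the paper's code: the quadratic confining potential V(x) = x^2/2, all parameter values, and the function name `simulate_curie_weiss` are assumptions made for the sketch.

```python
import numpy as np

def simulate_curie_weiss(n_particles=50, n_steps=2000, dt=1e-3,
                         theta=1.0, sigma=0.5, x0=1.0, seed=0):
    """Euler-Maruyama sketch of the Curie-Weiss interacting particle
    system (2.10): each particle feels the confining force -V'(x)
    (here V(x) = x^2/2, an illustrative convex choice) plus a pull of
    strength theta toward the empirical mean, with noise intensity
    sqrt(2*sigma)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_particles, float(x0))   # same initial condition for all particles
    path = np.empty((n_steps + 1, n_particles))
    path[0] = x
    for k in range(n_steps):
        drift = -x - theta * (x - x.mean())   # -V'(x) - theta*(x - xbar)
        x = x + drift * dt + np.sqrt(2.0 * sigma * dt) * rng.standard_normal(n_particles)
        path[k + 1] = x
    return path
```

Averaging a row of `path` over particles gives the empirical mean that the interaction term pulls the ensemble toward.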
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Parameter estimation problem",
21
+ "text": "We now present our method for the estimation of the unknown parameter , given discrete observations of a single particle of the system (2.1 ###reference_###). Consider equidistant observation times , let be the sampling rate and let be a realization of the -th particle of the solution of the system (2.1 ###reference_###) for some . We then aim to estimate the unknown parameter given a sample of the realization where and . We want to construct martingale estimating functions based on the eigenfunctions and the eigenvalues of the generator of the dynamics, a technique which was initially proposed in [27 ###reference_b27###] for single-scale SDEs and then successfully applied to multiscale SDEs in [2 ###reference_b2###]. In principle, the methodology developed in [27 ###reference_b27###] can be applied to the particle system. However, this would require solving the eigenvalue problem for the generator of an dimensional diffusion process, which is computationally expensive. Moreover, our fundamental assumption is that we are observing a single particle and thus we do not have complete knowledge of the system. Therefore, we construct the martingale estimating functions employing the generator of the mean field dynamics, which is a good approximation of the path of a single particle when the number of particles is large [41 ###reference_b41###]. Let be the generator of the mean field limit SDE (2.4 ###reference_###)\nand let be the generator obtained by replacing the density with the density of the invariant measure\nWe remark that now the generator is time-independent. We then consider the eigenvalue problem , which reads\nand from the well-known spectral theory of diffusion processes (see, e.g., [25 ###reference_b25###]) we deduce the existence of a countable set of eigenvalues whose corresponding eigenfunctions form an orthonormal basis of the weighted space . 
In fact, even if the SDE (2.4 ###reference_###) is nonlinear, once the law of the process is replaced by the invariant measure the solution behaves like a classical diffusion process with drift function , hence the spectral theory for diffusion processes still holds. We also state here the variational formulation of the eigenvalue problem, which will be employed to implement numerically the proposed methodology. Let be a test function and multiply equation (2.18 ###reference_###) by , where the density of the invariant measure is defined in (2.7 ###reference_###). Then, integrating over and by parts we obtain\nWe are now ready to present how to employ the eigenvalue problem in the construction of the martingale estimating function and afterwards in the definition of our estimator. Let be a positive integer and let for be arbitrary functions dependent on the parameter which satisfy Assumption 2.5 ###reference_theorem5### below, and define the martingale estimating function as\nwhere\nand is the set of observations of the -th particle from the system with particles. The estimator we propose is then given by the solution of the -dimensional nonlinear system\nwhere denotes the vector with all components equal to zero. An intuition for why the solution of equation (2.8) is a good estimator is the following, and will become clearer later. Let defined in (4.4 ###reference_###) be the estimating function where the observations from the interacting particle system have been replaced by the observations from the corresponding mean field limit. Then, employing formula (4.3 ###reference_###) we have\nwhich means that the zero of the expectation of the estimating function with observations from the mean field limit is exactly the true unknown coefficient. The main steps needed to obtain the estimator are summarized in Algorithm 1 ###reference_###. 
For further details about the implementation and for discussions about the choice of the arbitrary functions we refer to Appendix B and Remark 2.6 in [2 ###reference_b2###].\nThe main limitation of our approach is that the knowledge of the invariant measure is required in order to construct the martingale estimating function (step 1 in Algorithm 1 ###reference_###). However, it is often the case that the invariant measure is known up to a set of parameters, such as moments, i.e., only the functional form of the invariant measure is known. These parameters (moments) are obtained by solving appropriate self-consistency equations [15 ###reference_b15###, Section 2.3]. When such a situation arises, it is possible to first learn these parameters using the available data, e.g., estimate the moments that appear in the invariant measure by employing the law of large numbers. Then, we are in the setting where our technique applies and we can proceed in the same way, as shown in the numerical experiments in Sections 3.5 ###reference_### and 3.6 ###reference_###. In summary, it is sufficient to replace step 1 in Algorithm 1 ###reference_### with \u201cestimate the moments in the invariant measure \u201d.\nWe finally introduce a technical hypothesis which will be needed for the proofs of our main results.\nLet be a compact set. 
Then the following hold for all and for all :\nis continuously differentiable with respect to for all ;\nall components of , , , are polynomially bounded uniformly in ;\nthe potentials and are such that , and all components of , are polynomially bounded uniformly in ;\nwhere the dot denotes either the Jacobian matrix or the gradient with respect to .\nAssumption 2.5 ###reference_theorem5###(i) together with [38 ###reference_b38###, Sections 2 and 6] gives the continuous differentiability of the vector-valued function with respect to the unknown parameter .\n1: Find the invariant measure .\n2: Consider the equation\n3: Compute the first eigenvalues and eigenfunctions .\n4: Construct the function .\n5: Construct the score function .\n6: Let be the solution of the nonlinear system .\nIn this paper we always assume that the diffusion coefficient in (2.1 ###reference_###) is known. We remark that this is not an essential limitation of our methodology; in fact, if the diffusion coefficient is also unknown, we can consider the parameter set to be estimated to be and repeat the same procedure. The estimator is then obtained as the solution of the nonlinear system of dimension corresponding to (2.22 ###reference_###). A numerical experiment illustrating this procedure is given in Section 3.3 ###reference_###. Moreover, our main theoretical results remain valid and the proofs do not need any major changes. Alternatively, if the sampling rate is sufficiently small, it is possible to first estimate the diffusion coefficient using the quadratic variation and then proceed with the methodology proposed in this paper.\nLet us consider the Curie\u2013Weiss quadratic interaction introduced in Example 2.3 ###reference_theorem3### as well as a quadratic Ornstein\u2013Uhlenbeck confining potential . 
In this case the only unknown parameter is and the eigenvalue problem (2.18 ###reference_###) reads\nso that the eigenvalue and eigenfunctions can be computed analytically [2 ###reference_b2###, Section 3.1]. In particular, the first eigenvalue and eigenfunction are given by and , respectively. Therefore, letting we have an explicit expression for our estimator\nFor additional details regarding the eigenvalue problem (2.24 ###reference_###) we refer to [2 ###reference_b2###, Section 3.1]. We also remark that when the drift coefficient of the Ornstein\u2013Uhlenbeck process is unknown, i.e., if we consider the confining potential then the eigenvalue problem reads\nwhich only depends on the sum and not on the single parameters alone. Therefore, in this case it is not possible to estimate the unknown coefficients and , but we can only estimate their sum. This is in contrast with the set up in [26 ###reference_b26###], where all the particles are observed in continuous time. When this amount of information is available, it is possible to check whether or not the particles are interacting, i.e., to check whether or not (see [26 ###reference_b26###, Section 4])."
22
+ },
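For the Ornstein\u2013Uhlenbeck example above, a closed-form version of the estimator can be sketched in a few lines. Since the displayed formula (2.25) is elided in the extracted text, this is a hedged reconstruction: we assume the first eigenpair is phi_1(x) = x with eigenvalue equal to the total drift rate, so the estimator reduces to the logarithm of an empirical one-step correlation ratio minus the known confining rate. The function name and the default unit confining rate are hypothetical.

```python
import numpy as np

def eigenfunction_estimator_theta(obs, delta, confining_rate=1.0):
    """Hedged sketch of the closed-form estimator (2.25) for the
    Curie-Weiss/Ornstein-Uhlenbeck example: with first eigenfunction
    phi_1(x) = x and eigenvalue (confining_rate + theta), matching the
    empirical one-step correlation of the observations against
    exp(-(confining_rate + theta) * delta) yields
        theta_hat = -log(sum_n x_n x_{n+1} / sum_n x_n^2) / delta - confining_rate.
    The offset by the known confining rate is our reading of the text."""
    obs = np.asarray(obs, dtype=float)
    ratio = np.sum(obs[:-1] * obs[1:]) / np.sum(obs[:-1] ** 2)
    return -np.log(ratio) / delta - confining_rate
```

On data whose one-step correlation decays at rate confining_rate + theta, the log-ratio recovers that total rate, and subtracting the known confining rate leaves an estimate of theta.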
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Main results",
27
+ "text": "In this section we present the main theoretical results of this work. In particular, we prove that our estimator is asymptotically unbiased (consistent) and asymptotically normal as the number of observations and particles go to infinity and we compute the rate of convergence towards the true value of the parameter, which we denote by . Part of the proof of the consistency of the estimator, which will be presented in detail in Section 4 ###reference_###, is inspired by our previous work [2 ###reference_b2###, Section 5]. In that work we studied the asymptotic properties of a similar estimator for multiscale SDEs letting the number of observations go to infinity and the multiscale parameter vanish. The proofs of our results in the present work also require us to perform a rigorous asymptotic analysis with respect to two parameters, the number of observations and the number of particles.\nWe first define the Jacobian matrix of the function introduced in (2.21 ###reference_###) with respect to the parameter , with denoting the outer product in ,\nas well as the following quantity\nWe remark that whenever we write we mean that and similarly for the other probability measures.\nWe now present our main results. In Theorem 2.9 ###reference_theorem9### we prove that our estimator is consistent.\nLet be a positive integer and let be a set of observations obtained from system (2.1 ###reference_###) with true parameter . Under Assumptions 2.2 ###reference_theorem2### and 2.5 ###reference_theorem5### and if\nthere exists such that for all an estimator , which solves the system , exists with probability tending to one as goes to infinity. 
Moreover, the estimator is asymptotically unbiased, i.e.,\nand if\nThen, in Theorem 2.10 ###reference_theorem10### we provide a rate of convergence for our estimator.\nLet the assumptions of Theorem 2.9 ###reference_theorem9### hold, and let us introduce the notation\nThen, for all there exists such that\nand if\nFinally, in Theorem 2.11 ###reference_theorem11### we show that our estimator is asymptotically normal.\nLet the assumptions of Theorem 2.9 ###reference_theorem9### hold with . Then, the estimator is asymptotically normal, i.e.,\nwhere\nWe note that the technical assumption (2.29 ###reference_###) is not a serious limitation of the validity of the theorem; in fact, it is a nondegeneracy hypothesis which holds true in all nonpathological cases and is equivalent to [27 ###reference_b27###, Condition 4.2(a)] and [2 ###reference_b2###, Assumption 3.1]. Moreover, it is not necessary to assume that the matrix in Theorem 2.11 ###reference_theorem11### is indeed a covariance matrix because, due to the particular form of the estimating function, this follows directly from the central limit theorem as explained in [27 ###reference_b27###].\nFor the proof of the main results, we need to assume that, roughly speaking, the number of particles goes to infinity faster than the number of observations. It is not clear whether this assumption is strictly necessary. We expect that noncommutativity issues between the different distinguished limits may arise in the case where the mean field dynamics exhibits phase transitions, i.e., when the stationary state is not unique, see [13 ###reference_b13###]. We will study the consequences of this noncommutativity due to phase transitions to the performance of our estimator and, more generally, to the inference problem in future work."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Numerical experiments",
33
+ "text": "In this section we present a series of numerical experiments to validate our theoretical results and demonstrate the effectiveness of our estimator in estimating unknown drift parameters of interacting particle systems. In order to generate synthetic data we employ the Euler\u2013Maruyama method with a time step to solve numerically system (2.1 ###reference_###) and obtain for all . Notice that in order to preserve the exchangeability property of the system it is important to set the same initial condition for all the particles, hence we take for all . We then randomly choose a value and we assume that we know a sample of observations obtained from the -th particle with sampling rate . We remark that the parameters and are not related to each other; in fact, the former is only used to generate the data, while the latter is the actual distance between two consecutive observations. We repeat the same procedure for different realizations of the Brownian motions and then we compute the average of the values obtained employing our estimator . In the following, we first perform a sensitivity analysis with respect to the number of observations , particles and eigenvalues and eigenfunctions employed in the estimation , then we confirm our theoretical results given in Theorems 2.9 ###reference_theorem9###, 2.10 ###reference_theorem10###, and 2.11 ###reference_theorem11### and finally we test our technique with more challenging academic examples which do not exactly fit into the theory."
34
+ },
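The distinction drawn above between the fine integration step used to generate the data and the sampling rate of the observations can be sketched as a simple subsampling of one particle's trajectory. The helper name and arguments are hypothetical.

```python
import numpy as np

def observe_single_particle(path, dt, delta, particle=0):
    """From a fine Euler-Maruyama trajectory `path` of shape
    (n_steps + 1, n_particles) computed with time step `dt`, extract
    discrete observations of one particle at sampling rate `delta`.
    This mirrors the distinction made in the text: `dt` only generates
    the data, while `delta` is the actual distance between two
    consecutive observations."""
    stride = int(round(delta / dt))
    if stride < 1:
        raise ValueError("sampling rate delta must be at least dt")
    return np.asarray(path)[::stride, particle]
```

The returned one-dimensional array plays the role of the observed sample from a single particle used throughout the experiments.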
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Sensitivity analysis and rate of convergence",
39
+ "text": "###figure_1### \n###figure_2### ###figure_3### ###figure_4### ###figure_5### We consider the setting of Example 2.8 ###reference_theorem8### choosing , i.e., the interacting particle system reads\nand we aim to estimate the interaction parameter , so we write . We set and the number of eigenvalues and eigenfunctions with , so that we can employ the analytical expression of our estimator given in (2.25 ###reference_###). In Fig. 1 ###reference_### we perform a sensitivity analysis for the estimator fixing , varying the number of observations and of particles and choosing as other parameter respectively and , for which convergence has been reached. The blue line is the estimation given by a single particle while the red line is obtained by averaging the estimations computed employing all the different particles. We notice that convergence is reached when both and are large enough and, as expected, the estimation computed by averaging over all the particles stabilizes faster. Moreover, in Fig. 2 ###reference_### we fix and and we compare the results for different numbers of eigenvalues and eigenfunctions employed in the construction of the estimating function. We observe that increasing the value of does not significantly improve the results, hence it seems preferable to always choose in order to reduce the computational cost. Finally, in Fig. 3 ###reference_### we verify that the rates of convergence of the estimator towards the exact value with respect to the number of observations and particles are consistent with the theoretical results given in Theorem 2.10 ###reference_theorem10###. In particular, we observe that approximately it holds"
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Comparison with the maximum likelihood estimator",
+ "text": "###figure_6### \n###figure_7### We keep the same setting of Section 3.1 ###reference_### and we compare the results of our estimator with a maximum likelihood estimator. In particular, in [26 ###reference_b26###] the MLE for the interacting particles system with continuous observations is rigorously derived. Since for large values of all the particles are approximately independent and identically distributed and we are assuming to observe only one particle, we replace the sample mean with the expectation with respect to the invariant measure, i.e., , and we ignore the sum over all the particles. We then discretize the integrals in the formulation obtaining a modified MLE\nIn Fig. 4 ###reference_### we fix the final time and we repeat the estimation for different values of with . We observe that, differently from our estimator, the MLE is unbiased only for small values of the sampling rate , i.e., when the discrete observations approximate well the continuous trajectory. Notice also that, as highlighted by the numerical experiments, our estimator and the MLE defined respectively in (2.25 ###reference_###) and (3.3 ###reference_###) coincide in the limit of vanishing . In fact, we can rewrite equation (2.25 ###reference_###) as\nobserve that the fraction in the argument of the logarithm is and employ the asymptotic expansion for ."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Diffusion coefficient",
+ "text": "###figure_8### ###figure_9### \n###figure_10### We still consider the setting of Example 2.8 ###reference_theorem8###, but, differently from Section 3.1 ###reference_###, we now assume the diffusion coefficient to be unknown and we aim to retrieve the correct values of the interaction parameter and the diffusion coefficient, which are given by and , respectively. We set the number of particles and the number of observations . A first approach consists in first estimating the diffusion coefficient alone employing the quadratic variation and then infer the interaction parameter as in the previous numerical experiments. In particular, the diffusion coefficient can be approximated as\nHowever, this estimator is asymptotically unbiased only in the limit of vanishing and is therefore reliable only if the sampling rate is sufficiently small. In fact, one can prove that\nwhich, due to the fact that in the framework of Example 2.8 ###reference_theorem8### at stationarity is a Gaussian process with zero mean and covariance function\nimplies\nwhere the right-hand side converges to if goes to zero. This is also shown in Fig. 5 ###reference_### where we estimate the diffusion coefficient for different values of the sampling rate with . Hence, if is far from its vanishing limit we have to follow a different procedure. We now fix and aim to simultaneously infer the diffusion coefficient and the interaction parameter using our eigenfunction martingale estimators. We then write and in order to construct the estimating functions we employ eigenvalues and eigenfunctions with functions . We remark that in the particular case of the Ornstein\u2013Uhlenbeck process it is possible to express the eigenvalues and eigenfunctions analytically and the first two are given by\nNote that the first eigenvalue and eigenfunction do not depend on the diffusion coefficient and therefore they alone do not provide enough information, hence it is important to choose at least . In Fig. 
6 ###reference_### we show the numerical results. On the left, we plot the estimations computed employing one single particle, for all the particles, and we observe that the estimators are concentrated around the exact values. On the other hand, on the right, we average all the estimations previously computed and we plot the results varying the number of observations . We notice that the estimations stabilize quickly near the correct coefficients."
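The quadratic-variation step of the first approach can be sketched as follows; the factor 2 assumes the noise convention sqrt(2*sigma) dW used in these sketches (for a sigma dW convention, drop it), and as discussed above the estimator is reliable only for small sampling rates.

```python
import numpy as np

def quadratic_variation_estimator(x, delta):
    """Estimate the diffusion coefficient sigma from M+1 equispaced
    observations via the realized quadratic variation,
        sigma_hat = sum (x_{m+1} - x_m)^2 / (2 * M * delta),
    for the convention dX = b(X) dt + sqrt(2*sigma) dW.
    Asymptotically unbiased only as delta -> 0."""
    increments = np.diff(x)
    return np.sum(increments**2) / (2.0 * increments.size * delta)
```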
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "Central limit theorem",
+ "text": "We keep the same setting of Section 3.1 ###reference_### and we validate numerically the central limit theorem which we proved theoretically in Theorem 2.11 ###reference_theorem11###. In this particular case, the asymptotic variance can be computed analytically. In fact, the mean field limit of (3.1 ###reference_###) at stationarity is\nand its solution is a Gaussian process, i.e., , where and\nMoreover, we have\nand therefore we obtain\nWe then fix the number of particles , the number of observations and the sampling rate . In Fig. 7 ###reference_### we plot the quantity for any particle and for realizations of the Brownian motion and we observe that it is approximately distributed as accordingly to the theoretical result.\n###figure_11###"
+ },
+ {
+ "section_id": "3.5",
+ "parent_section_id": "3",
+ "section_name": "Double well potential",
+ "text": "###figure_12### ###figure_13### ###figure_14### ###figure_15### \n###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### \n###figure_21### We consider the setting of Example 2.3 ###reference_theorem3### and we analyse the double well potential, i.e., we let the confining potential be\nwith , which is the parameter that we aim to estimate, so we write . Moreover, we set the interaction term and the number of observations with sampling rate . Finally, to construct the estimating functions we use eigenfunctions and eigenvalues and we employ the function . We remark that this example does not fit in Assumption 2.2 ###reference_theorem2###, but if the diffusion coefficient is chosen sufficiently large, then we are below the phase transition and the mean field limit admits a unique invariant measure [11 ###reference_b11###], so the theory applies. However, when the diffusion coefficient is below the critical noise strength, then a continuous phase transition occurs and two stationary states exist [23 ###reference_b23###]. In particular, the transition point occurs at with these data. We therefore perform two numerical experiments, one below and one above the phase transition, setting and . In the former we have a unique invariant measure, so we can follow the usual approach, while in the latter we do not know in which state the data are converging. Nevertheless, the invariant distribution is known up to the first moment by equation (2.13 ###reference_###), so we first estimate the expectation using the law of large numbers with the available observations and then repeat the same procedure as in the previous case. In Figs. 8 ###reference_### and 9 ###reference_### we plot the results of these two experiments. On the top of the figures we plot the evolution of our estimator varying the number of observations for two different values of the number of particles, in particular and . 
We observe that the estimator approaches the correct drift coefficient as the number of observations increases and, as expected, the final approximation is better when the number of particles is sufficiently large. Moreover, at the bottom of the same figures we show the scatter plot of the estimations obtained from each particle with observations and we can see that they are concentrated around the exact drift coefficient . We finally remark that we do not notice significant differences between the two cases, indicating that the initial estimation of the first moment of the invariant measure does not affect the final results and thus that our methodology can be employed even when multiple stationary states exist."
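The dynamics of this experiment can be sketched with a standard double-well stand-in V(x) = theta*(x^4/4 - x^2/2) and a linear attractive interaction of strength `kappa`; this parameterization and the parameter names are illustrative, not necessarily the paper's exact choices.

```python
import numpy as np

def double_well_drift(x, theta=1.0):
    """Gradient drift -V'(x) for the stand-in double-well confining
    potential V(x) = theta * (x**4/4 - x**2/2)."""
    return -theta * (x**3 - x)

def em_step(X, dt, sigma, theta, kappa, rng):
    """One Euler-Maruyama step of the double-well particle system with a
    linear attractive interaction of strength kappa toward the empirical
    mean, and noise written as sqrt(2*sigma) dW."""
    Xbar = X.mean()
    drift = double_well_drift(X, theta) - kappa * (X - Xbar)
    return X + drift * dt + np.sqrt(2.0 * sigma * dt) * rng.standard_normal(X.size)
```

Below the phase transition (large `sigma`) the empirical mean hovers around zero; above it, the ensemble settles near one of the two wells, which is why the first moment has to be estimated from the data before applying the estimator.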
+ },
+ {
+ "section_id": "3.6",
+ "parent_section_id": "3",
+ "section_name": "Nonsymmetric confining potential",
+ "text": "We still consider the same setting of Example 2.3 ###reference_theorem3### and we now study the case of a nonsymmetric potential. In particular, we let the confining potential be\nwith , which is the unknown parameter that we want to infer, hence we set . Notice that the confining potential is given by the sum of the double well potential and a linear term which breaks the symmetry. This type of potentials of the form , where , , and , which is used in the study of metastability and phase transitions and may have arbitrarily deep double wells, has been analyzed in [42 ###reference_b42###, 44 ###reference_b44###]. Similarly to the experiment in Section 3.5 ###reference_###, this example does not satisfy Assumption 2.2 ###reference_theorem2### and more stationary states can exist. In particular, in [42 ###reference_b42###] it has been proved the existence of an invariant measure around each critical point of the potential. We therefore adopt the same strategy as in the second part of Section 3.5 ###reference_### and, since the invariant measure is known up to the first moment by equation (2.13 ###reference_###), we first approximate the expectation using the sample mean of the available observations, and then proceed with the following steps of the algorithm. We further set the interaction term , the diffusion coefficient , the number of particles and the number of observations with sampling rate . Moreover, to construct the estimating functions we use eigenfunctions and eigenvalues and we employ the function . In Fig. 10 ###reference_### we plot the results of the inference procedure considering two components of the three-dimensional drift coefficient at a time and the single components alone. We observe that the majority of the estimations obtained from all particles are concentrated around the exact values and that their average provides a reliable approximation of the true unknown. 
A peculiarity of this numerical experiment is the relationship between the first and second components of the estimated drift coefficient: one increases when the other decreases and vice versa, meaning that the two approximations appear to be correlated."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Proof of the main results",
+ "text": "In this section we present the proof of Theorems 2.9 ###reference_theorem9###, 2.10 ###reference_theorem10###, and 2.11 ###reference_theorem11###, which are the main results of this work. We first recall that due to [16 ###reference_b16###, Lemma 2.3.1] the solution of the interacting particle system and of its mean field limit have bounded moments of any order, in particular there exists a constant independent of such that for all , and\nMoreover, in [31 ###reference_b31###, Theorem 3.3] it is shown that each particle converges to the solution of the mean field limit with the same Brownian motion in , i.e, that\nwhere the constant is also independent of the final time . We also state here a formula which has been proved in [27 ###reference_b27###] and will be crucial in the last part of the proof\nwhere is the true parameter which generates the path and denotes the fact that . Before entering the main part of the proof, we introduce some notation and technical results which will be used later. We finally remark that all the constants will be denoted by and their value can change from line to line."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Limits of the estimating function and its derivative",
+ "text": "Let us first define the following vector-valued functions and matrix-valued functions\nThe following lemma then shows that these quantities are bounded in a suitable norm and thus well defined.\nUnder Assumptions 2.2 ###reference_theorem2### and 2.5 ###reference_theorem5### there exists a constant independent of such that for all\nSince the argument is similar for four cases, we only write the details of . Using the triangle inequality we have\nand due to the Cauchy\u2013Schwarz inequality we obtain\nFinally, bound (4.1 ###reference_###) together with the fact that and are polynomially bounded for all by Assumption 2.5 ###reference_theorem5### gives the desired result.\n\u220e\nIn the next proposition we study the behaviour of the estimating function as the number of observations and particles go to infinity.\nUnder Assumptions 2.2 ###reference_theorem2### and 2.5 ###reference_theorem5### it holds for all\nMoreover, there exists a constant independent of and such that\nResults and are direct consequences of [6 ###reference_b6###, Lemma 3.1] and of the ergodicity of the processes and given by [23 ###reference_b23###, Section 1] and [31 ###reference_b31###, Theorem 3.16], respectively. Let us now consider cases and . Using the triangle inequality we have\nwhere\nand applying the mean value theorem we obtain\nThen, employing the H\u00f6lder inequality with exponents and since are polynomially bounded by Assumption 2.5 ###reference_theorem5### and have bounded moments of any order by (4.1 ###reference_###) we deduce\nwhich due to (4.2 ###reference_###) proves , which directly implies . 
Finally, the proofs of results and are similar to cases and , respectively, and are omitted here.\n\u220e\nUnder Assumptions 2.2 ###reference_theorem2### and 2.5 ###reference_theorem5### it holds for all\nEmploying the triangle inequality we have\nwhere the right-hand side vanishes by and in Proposition 4.2 ###reference_theorem2###, yielding the desired result.\n\u220e\nThe limits considered in Proposition 4.2 ###reference_theorem2### are summarized schematically in the following diagram\nwhere .\nNotice that all the results in this section also hold true for the derivatives , , with respect to the parameter defined in (4.4 ###reference_###). Since the arguments are analogous we omit the details here."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Zeros of the limits of the estimating function",
+ "text": "The goal of this section is to show that the limits of the estimating functions previously defined admit zeros and to study their asymptotic limit. We already know by (4.3 ###reference_###) that , where is the true parameter. Then, in the following lemma we consider the zero of the function and its limit as .\nUnder Assumptions 2.2 ###reference_theorem2### and 2.5 ###reference_theorem5### and if there exists such that for all there exists which solves the system and satisfies . Moreover, there exists a constant independent of such that\nWe first remark that by (4.3 ###reference_###) we have and, without loss of generality, we can assume that . Let sufficiently small, by point in Proposition 4.2 ###reference_theorem2### and Remark 4.4 ###reference_theorem4### we know that converges to uniformly in and therefore there exist and such that for all and for all\nHence, due to equation (4.17 ###reference_###) and applying the inverse function theorem we deduce the existence of such that\nNotice that the radius can be chosen independently of . In fact, by the proof of [34 ###reference_b34###, Theorem 2.3] and [28 ###reference_b28###, Lemma 1.3] we observe that is dependent on the radius of the ball and the quantity , which can be bounded independently of due to estimate (4.18 ###reference_###). Moreover, since\nthen there exists such that for all we have . Therefore, setting for all there exists such that , which proves the existence. Furthermore, equation (4.17 ###reference_###) gives . It now remains to show estimate (4.16 ###reference_###). Since the set is compact, there exist and a subsequence such that\nBy point in Proposition 4.2 ###reference_theorem2### the function converges to uniformly in , thus we have\nwhich yields . This is guaranteed by the fact that can be previously chosen sufficiently small such that is the only zero of the function in . Since is the unique limit point for the subsequence , it follows that the whole sequence converges. 
Then, applying the mean value theorem we obtain\nwhich implies\nSince converges to as goes to infinity, then\nwhere the right-hand side is well defined because . Therefore, if is sufficiently large there exists a constant independent of such that\nwhich together with point in Proposition 4.2 ###reference_theorem2### yields estimate (4.16 ###reference_###) and concludes the proof.\n\u220e\nIn the next lemma we study the zero of the random function and its limit as . This result is almost the same as [27 ###reference_b27###, Theorem 4.3].\nLet the assumptions of Lemma 4.5 ###reference_theorem5### hold. Then, an estimator , which solves the equation and is such that , exists with a probability tending to one as . Moreover,\nand\nwhere is defined in (2.38 ###reference_###).\nThe existence of the estimator which solves the equation with a probability tending to one as and its asymptotic unbiasedness and normality are given by [27 ###reference_b27###, Theorem 4.3], whose proof can be found in [6 ###reference_b6###, Theorem 3.2] and is based on [3 ###reference_b3###, Theorem A.1]. Moreover, by the last line of the proof of [6 ###reference_b6###, Theorem 3.2] or by (A.5) in [27 ###reference_b27###, Theorem 4.3] we have\nwhere by assumption. Hence, there exists such that if\nthen . Moreover, for large enough it holds\nwhere as . Let us now define the events\nand notice that by the first part of the proof we have where as . Then, using the basic properties of probability measures we obtain\nwhere the last term tends to one as , which gives the desired result.\n\u220e\nWe now consider the zero of the actual estimating function and we first analyze its limit as .\nLet the assumptions of Theorem 2.9 ###reference_theorem9### hold. Then, there exists such that for all an estimator , which solves the system , exists with a probability tending to one as goes to infinity. 
Moreover, there exist solving such that\nand\nwhere is a positive definite covariance matrix such that where is defined in (2.38 ###reference_###).\nFirst, by Lemma 4.5 ###reference_theorem5### there exists such that for all there exists such that\nThen, the results are equivalent to Lemma 4.6 ###reference_theorem6### and therefore the argument follows the same steps of its proof, which is given in detail in [6 ###reference_b6###, Theorem 3.2] and is based on [3 ###reference_b3###, Theorem A.1]. Finally, the convergence of the covariance matrix is implied by (4.2 ###reference_###).\n\u220e\nWe then study the limit of the zero of as .\nLet the assumptions of Lemma 4.7 ###reference_theorem7### hold and let . Then, the estimator satisfies for some solving and for a constant independent of and\nThe existence of the estimators , such that and , and , such that , with a probability tending to one as goes to infinity is guaranteed by Lemmas 4.7 ###reference_theorem7### and 4.6 ###reference_theorem6###, respectively. Then, all the following events are considered as conditioned on the existence of and and the fact that . Let us now define the function as\nwhere denotes the -th component of the vector , and the vectors and whose -th components for are given by\nwhere is the set of observations and are the corresponding realizations of the mean field limit. Notice that due to Assumptions 2.5 ###reference_theorem5### and 2.6 ###reference_theorem6### and by definition we have\nTherefore, applying the implicit function theorem there exist and a continuously differentiable function such that for all . Hence, if is close enough to then there must be one such that . Then, employing Jensen\u2019s inequality and by estimate (4.2 ###reference_###) we have\nwhere the constant is independent of and . 
Therefore, letting and applying Markov\u2019s inequality we obtain\nDefining the event and using the law of total expectation conditioning on we deduce\nwhich since , a compact set, and due to estimate (4.42 ###reference_###) implies\nIt now remains to study the first term in the right-hand side. Applying the mean value theorem we obtain\nwhich implies\nUsing H\u00f6lder inequality with exponents and its conjugate such that we have\nwhere\nEmploying the inequality , which holds for any positive random variable , point in Proposition 4.2 ###reference_theorem2### and estimate (4.42 ###reference_###), the second term in the right-hand side can be bounded by\nwhere the last inequality is justified by the fact that and by changing the value of the constant . We now have to bound the first term in the right-hand side of equation (4.47 ###reference_###). Employing the inequality , which holds for any square nonsingular matrix , we have\nSince we are conditioning on the event , by the first part of the proof, we know that and, by taking sufficiently small, we can always find small enough, but still finite, such that the absolute value of the determinant in the denominator is lower bounded by a constant independent of and because and by (4.29 ###reference_###) it converges in probability to , which is invertible. Hence, applying Jensen\u2019s inequality we obtain\nwhich due to Lemma 4.1 ###reference_theorem1###, Remark 4.4 ###reference_theorem4###, the property , which holds for any positive random variable , and estimate (4.42 ###reference_###) yields\nwhich together with equations (4.44 ###reference_###), (4.47 ###reference_###) and (4.49 ###reference_###) gives the desired result.\n\u220e\nThe results of this section are summarized in the following diagram\nwhere stands for convergence in probability.\nAll the previous results only prove the existence of such estimators with high probability and do not guarantee their uniqueness. 
However, as we will see in the next section, any of these estimators converges to the exact value of the unknown."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Proof of the main theorems",
+ "text": "In this section we finally present the proofs of the main results of this work, i.e., Theorems 2.9 ###reference_theorem9###, 2.10 ###reference_theorem10###, and 2.11 ###reference_theorem11###.\nFirst, by Lemma 4.7 ###reference_theorem7### we deduce the existence of such that for all the estimator exists with a probability tending to one as goes to infinity. Then, we prove separately equations (2.30 ###reference_###), (2.31 ###reference_###) and (2.32 ###reference_###). \nProof of (2.30 ###reference_###). By Lemmas 4.7 ###reference_theorem7### and 4.5 ###reference_theorem5### we have\nwhich proves (2.30 ###reference_###). \nProof of (2.31 ###reference_###). By Lemma 4.8 ###reference_theorem8### the estimator converges to in as goes to infinity and hence in probability. Therefore, applying Lemma 4.6 ###reference_theorem6### we obtain\nwhich shows (2.31 ###reference_###). \nProof of (2.32 ###reference_###). We introduce the following decomposition\nwhere is defined in Lemma 4.6 ###reference_theorem6### and due to Lemma 4.8 ###reference_theorem8### the first quantity satisfies\nwith the constant independent of and . Therefore, since , estimate (4.56 ###reference_###) together with Lemma 4.6 ###reference_theorem6### and the fact that convergence in implies convergence in probability gives the desired result (2.32 ###reference_###) and ends the proof.\n\u220e\nThe existence of the estimator is given by Theorem 2.9 ###reference_theorem9###. Then, we prove separately equations (2.34 ###reference_###), (2.35 ###reference_###) and (2.36 ###reference_###). \nProof of (2.34 ###reference_###). Let be defined in Lemma 4.5 ###reference_theorem5###. Using basic properties of probability measures we have\nwhich implies\nand we now study two terms in the right-hand side separately. First, letting and go to infinity by Lemma 4.7 ###reference_theorem7### we obtain\nwhere the right-hand side can be made arbitrarily small by taking sufficiently large. 
Moreover, we have\nwhere the right-hand side is identically equal to zero if we set , where the constant is given by Lemma 4.5 ###reference_theorem5###. Hence, for all we can take sufficiently large such that\nwhich proves (2.34 ###reference_###). \nProof of (2.35 ###reference_###). Let be defined in Lemma 4.6 ###reference_theorem6###. Repeating a procedure similar to (4.57 ###reference_###) and applying Markov\u2019s inequality we get\nand we now study two terms in the right-hand side separately. First, by Lemma 4.6 ###reference_theorem6### we have\nwhere the right-hand side can be made arbitrarily small by taking sufficiently large. Moreover, by Lemma 4.8 ###reference_theorem8### we have\nwhere the constant is independent of and . Hence, for all we can take sufficiently large such that\nwhich shows (2.35 ###reference_###). \nProof of (2.36 ###reference_###). Equation (2.36 ###reference_###) is obtained following verbatim the proof of (2.35 ###reference_###) in the previous step and using the fact that to show that the right-hand side in equation (4.64 ###reference_###) vanishes.\n\u220e\nThe existence of the estimator is given by Theorem 2.9 ###reference_theorem9###. Then, let us introduce the following decomposition\nwhere is defined in Lemma 4.6 ###reference_theorem6###. We now study two terms in the right-hand side separately. By Lemma 4.8 ###reference_theorem8### we have\nwhere the constant is independent of and , hence since by hypothesis we obtain\nMoreover, by Lemma 4.6 ###reference_theorem6### we know that\nwhere the covariance matrix is defined in (2.38 ###reference_###). Finally, limits (4.68 ###reference_###) and (4.69 ###reference_###) together with Slutsky\u2019s theorem imply the desired result.\n\u220e"
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this work we considered inference problems for large systems of exchangeable interacting particles. When the number of particles is large, then the path of a single particle is well approximated by its mean field limit. The limiting mean field SDE is on the one hand more complex because it is a nonlinear SDE (in the sense of McKean), but on the other hand more tractable from a computational viewpoint as it reduces an -dimensional SDE to a one dimensional one. Our aim was to infer unknown parameters of the dynamics, in particular of the confining and interaction potentials, from a set of discrete observations of a single particle. We propose a novel estimator which is obtained by computing the zero of a martingale estimating function based on the eigenvalues and the eigenfunctions of the generator of the mean field limit, where the law of the process is replaced by the (unique) invariant measure of the mean field dynamics. We showed both theoretically and numerically the asymptotic unbiasedness and normality of our estimator in the limit of infinite data and particles, providing also a rate of convergence towards the true value of the unknown parameter. In particular, we observed that these properties hold true if the number of particles is much larger than the number of observations. Even though our theoretical results require uniqueness of the steady state for the mean field dynamics, our numerical experiments suggest that our method works well even when phase transitions are present, i.e., when there are more than one stationary states. Moreover, we compared our estimator with the maximum likelihood estimator, demonstrating that our approach is more robust with respect to small values of the sampling rate. 
We believe, therefore, that the inference methodology proposed and analyzed in this paper can be very efficient when learning parameters in mean field SDE models from data.\nThe work presented in this paper can be extended in several interesting directions. First, the main limitation of our methodology is the fact that in order to construct the martingale estimating function we have to know the functional form of the invariant measure of the mean field SDE, possibly parameterized in terms of a finite number of moments. There are many interesting examples of mean field PDEs where the self-consistency equation cannot be solved analytically or, at least, its solution depends on the unknown parameters in the model. Therefore, it would be interesting to lift this assumption by first learning the invariant measure from data and then applying our martingale eigenfunction estimator approach. This leads naturally to our second objective, namely the extension of our methodology to a nonparametric setting, i.e., when the functional form of the confining and interaction potentials are unknown. Thirdly, we want to obtain more detailed information on the computational complexity of the proposed algorithm, in particular when more eigenfunctions are needed for our martingale estimator and when we are in higher dimensions in space. We will return to these problems in future work."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1(a)": {
106
+ "figure_path": "2112.04870v2_figure_1(a).png",
107
+ "caption": "Figure 1: Sensitivity analysis for the Ornstein\u2013Uhlenbeck potential with respect to the number M\ud835\udc40Mitalic_M of observations and N\ud835\udc41Nitalic_N of particles, for the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1.",
108
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/sensitivity_OU_Mobservations.png"
109
+ },
110
+ "1(b)": {
111
+ "figure_path": "2112.04870v2_figure_1(b).png",
112
+ "caption": "Figure 1: Sensitivity analysis for the Ornstein\u2013Uhlenbeck potential with respect to the number M\ud835\udc40Mitalic_M of observations and N\ud835\udc41Nitalic_N of particles, for the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1.",
113
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/sensitivity_OU_Nparticles.png"
114
+ },
115
+ "2": {
116
+ "figure_path": "2112.04870v2_figure_2.png",
117
+ "caption": "Figure 2: Sensitivity analysis for the Ornstein\u2013Uhlenbeck potential with respect to the number J\ud835\udc3dJitalic_J of eigenvalues and eigenfunctions, for the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT.",
118
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/sensitivity_OU_Jeigen.png"
119
+ },
120
+ "3(a)": {
121
+ "figure_path": "2112.04870v2_figure_3(a).png",
122
+ "caption": "Figure 3: Rates of convergence for the Ornstein\u2013Uhlenbeck potential with respect to the number M\ud835\udc40Mitalic_M of observations and N\ud835\udc41Nitalic_N of particles, for the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1.",
123
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/rate_OU_Mobservations.png"
124
+ },
125
+ "3(b)": {
126
+ "figure_path": "2112.04870v2_figure_3(b).png",
127
+ "caption": "Figure 3: Rates of convergence for the Ornstein\u2013Uhlenbeck potential with respect to the number M\ud835\udc40Mitalic_M of observations and N\ud835\udc41Nitalic_N of particles, for the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/rate_OU_Nparticles.png"
+ },
+ "4(a)": {
+ "figure_path": "2112.04870v2_figure_4(a).png",
+ "caption": "Figure 4: Comparison between the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 (left) and the maximum likelihood estimator \u03b8~M,NMLEsuperscriptsubscript~\ud835\udf03\ud835\udc40\ud835\udc41MLE\\widetilde{\\theta}_{M,N}^{\\mathrm{MLE}}over~ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_MLE end_POSTSUPERSCRIPT (right) varying the distance \u0394\u0394\\Deltaroman_\u0394 between two consecutive observations for the Ornstein\u2013Uhlenbeck potential.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/comparison_OU_MLE_1.png"
+ },
+ "4(b)": {
+ "figure_path": "2112.04870v2_figure_4(b).png",
+ "caption": "Figure 4: Comparison between the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 (left) and the maximum likelihood estimator \u03b8~M,NMLEsuperscriptsubscript~\ud835\udf03\ud835\udc40\ud835\udc41MLE\\widetilde{\\theta}_{M,N}^{\\mathrm{MLE}}over~ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_MLE end_POSTSUPERSCRIPT (right) varying the distance \u0394\u0394\\Deltaroman_\u0394 between two consecutive observations for the Ornstein\u2013Uhlenbeck potential.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/comparison_OU_MLE_2.png"
+ },
+ "5": {
+ "figure_path": "2112.04870v2_figure_5.png",
+ "caption": "Figure 5: Inference of the diffusion coefficient based on the quadratic variation varying the distance \u0394\u0394\\Deltaroman_\u0394 between two consecutive observations for the Ornstein\u2013Uhlenbeck potential.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/diffusion_QV.png"
+ },
+ "6(a)": {
+ "figure_path": "2112.04870v2_figure_6(a).png",
+ "caption": "Figure 6: Simultaneous inference of the interaction and diffusion coefficients for the Ornstein\u2013Uhlenbeck potential. Left: estimation \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT obtained from each particle with J=2\ud835\udc3d2J=2italic_J = 2. Right: average of the estimations varying the number of observations.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/diffusion_3_color.png"
+ },
+ "6(b)": {
+ "figure_path": "2112.04870v2_figure_6(b).png",
+ "caption": "Figure 6: Simultaneous inference of the interaction and diffusion coefficients for the Ornstein\u2013Uhlenbeck potential. Left: estimation \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT obtained from each particle with J=2\ud835\udc3d2J=2italic_J = 2. Right: average of the estimations varying the number of observations.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/diffusion_2_color.png"
+ },
+ "7": {
+ "figure_path": "2112.04870v2_figure_7.png",
+ "caption": "Figure 7: Central limit theorems for the Ornstein\u2013Uhlenbeck potential, for the estimator \u03b8^M,NJsubscriptsuperscript^\ud835\udf03\ud835\udc3d\ud835\udc40\ud835\udc41\\widehat{\\theta}^{J}_{M,N}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/CLT_OU.png"
+ },
+ "8(a)": {
+ "figure_path": "2112.04870v2_figure_8(a).png",
+ "caption": "Figure 8: Inference of the two-dimensional drift coefficient of the double well potential below the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/bistable_path_Nsmall.png"
+ },
+ "8(b)": {
+ "figure_path": "2112.04870v2_figure_8(b).png",
+ "caption": "Figure 8: Inference of the two-dimensional drift coefficient of the double well potential below the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/bistable_path.png"
+ },
+ "8(c)": {
+ "figure_path": "2112.04870v2_figure_8(c).png",
+ "caption": "Figure 8: Inference of the two-dimensional drift coefficient of the double well potential below the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/legend_gray.png"
+ },
+ "8(d)": {
+ "figure_path": "2112.04870v2_figure_8(d).png",
+ "caption": "Figure 8: Inference of the two-dimensional drift coefficient of the double well potential below the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/bistable_scatter_Nsmall.png"
+ },
+ "8(e)": {
+ "figure_path": "2112.04870v2_figure_8(e).png",
+ "caption": "Figure 8: Inference of the two-dimensional drift coefficient of the double well potential below the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/legend_gray.png"
+ },
+ "9(a)": {
+ "figure_path": "2112.04870v2_figure_9(a).png",
+ "caption": "Figure 9: Inference of the two-dimensional drift coefficient of the double well potential above the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/bistable_path_Nsmall_mean_unknown.png"
+ },
+ "9(b)": {
+ "figure_path": "2112.04870v2_figure_9(b).png",
+ "caption": "Figure 9: Inference of the two-dimensional drift coefficient of the double well potential above the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/bistable_path_mean_unknown.png"
+ },
+ "9(c)": {
+ "figure_path": "2112.04870v2_figure_9(c).png",
+ "caption": "Figure 9: Inference of the two-dimensional drift coefficient of the double well potential above the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/legend_gray.png"
+ },
+ "9(d)": {
+ "figure_path": "2112.04870v2_figure_9(d).png",
+ "caption": "Figure 9: Inference of the two-dimensional drift coefficient of the double well potential above the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/bistable_scatter_Nsmall_mean_unknown.png"
+ },
+ "9(e)": {
+ "figure_path": "2112.04870v2_figure_9(e).png",
+ "caption": "Figure 9: Inference of the two-dimensional drift coefficient of the double well potential above the phase transition. Top: average of the estimations \u03b8^M,NJsuperscriptsubscript^\ud835\udf03\ud835\udc40\ud835\udc41\ud835\udc3d\\widehat{\\theta}_{M,N}^{J}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_M , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_J end_POSTSUPERSCRIPT with J=1\ud835\udc3d1J=1italic_J = 1 varying the number of observations. Bottom: scatter plot of the estimations obtained from each particle.",
+ "url": "http://arxiv.org/html/2112.04870v2/extracted/5371924/figures/legend_gray.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Preprint arXiv:2104.10587, 2021.",
+ "author": "A. Abdulle, G. A. Pavliotis, and A. Zanoni, Eigenfunction martingale\nestimating functions and filtered data for drift estimation of discretely\nobserved multiscale diffusions.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Fundamentals and applications.",
+ "author": "T. D. Frank, Nonlinear Fokker-Planck equations, Springer Series\nin Synergetics, Springer-Verlag, Berlin, 2005.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Preprint arXiv:2109.03132, 2021.",
+ "author": "G. Garegnani and A. Zanoni, Robust estimation of effective\ndiffusions from multiscale data.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "An introduction to the microscopic modeling of crowds, With a\nforeword by Laure Saint-Raymond.",
+ "author": "B. Maury and S. Faure, Crowds in equations, Advanced Textbooks in\nMathematics, World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2019.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Diffusion processes, the Fokker-Planck and Langevin equations.",
+ "author": "G. A. Pavliotis, Stochastic processes and applications, vol. 60 of\nTexts in Applied Mathematics, Springer, New York, 2014.",
+ "venue": null,
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2112.04870v2"
+ }
20240127/2206.05581v3.json ADDED
@@ -0,0 +1,700 @@
+ {
+ "title": "Federated Offline Reinforcement Learning",
+ "abstract": "Evidence-based or data-driven dynamic treatment regimes are essential for personalized medicine, which can benefit from offline reinforcement learning (RL). Although massive healthcare data are available across medical institutions, they are prohibited from sharing due to privacy constraints. Besides, heterogeneity exists in different sites. As a result, federated offline RL algorithms are necessary and promising to deal with the problems. In this paper, we propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites. The proposed model makes the analysis of the site-level features possible. We design the first federated policy optimization algorithm for offline RL with sample complexity. The proposed algorithm is communication-efficient, which requires only a single round of communication interaction by exchanging summary statistics. We give a theoretical guarantee for the proposed algorithm,\nwhere the suboptimality for the learned policies is comparable to the rate as if data is not distributed. Extensive simulations demonstrate the effectiveness of the proposed algorithm. The method is applied to a sepsis dataset in multiple sites to illustrate its use in clinical settings.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "The construction of evidence-based or data-driven dynamic treatment regimes (DTRs) (Murphy et al., 2001 ###reference_b45###; Lavori and Dawson, 2004 ###reference_b35###) is a central problem for personalized medicine. Reinforcement learning (RL) models, which often treat the observations by the episodic Markov decision process (MDP) (Sutton and Barto, 2018 ###reference_b61###) have shown remarkable success for these applications. For instance, RL has been used for deciding optimal treatments and medication dosage in sepsis management (Raghu et al., 2017 ###reference_b50###; Sonabend et al., 2020 ###reference_b59###) and HIV therapy selection (Parbhoo et al., 2017 ###reference_b48###).\nHowever, many healthcare problems only allow for retrospective studies due to their offline nature (Lange et al., 2012 ###reference_b34###). Exploring new policies/treatments on patients is subject to ethical, financial, and legal constraints due to their associated significant risks and costs. Hence, it is typically infeasible or impractical to collect data online for finding DTR. With increasingly available massive healthcare datasets such as the electronic\nhealth records (EHR) data (Johnson et al., 2016 ###reference_b30###) and mobile health data (Xu et al., 2021 ###reference_b67###), and due to the high risk of direct interventions on patients and the expensive cost of conducting clinical trials (Chakraborty and Murphy, 2014 ###reference_b8###), offline RL is usually preferred for finding DTRs (Murphy, 2003 ###reference_b44###; Robins, 2004 ###reference_b52###; Chakraborty, 2013 ###reference_b7###), whose objective is to learn an optimal policy based on existing datasets (Levine et al., 2020 ###reference_b37###; Kidambi et al., 2020 ###reference_b32###; Sonabend-W et al., 2023 ###reference_b60###). 
For example, mobile health data are employed for controlling blood glucose levels in patients with type diabetes (Luckett et al., 2020 ###reference_b41###) and the diagnosis of depression (Xu et al., 2021 ###reference_b67###) due to the increasing popularity of mobile devices (Free et al., 2013 ###reference_b16###).\nHowever, healthcare data is often distributed across different sites. For instance, the EHR data are always stored locally in different hospitals, and the mobile health data are recorded and stored in the users\u2019 own mobile devices (Xu et al., 2021 ###reference_b67###). Although the aggregation of multi-site data can improve the quality of the models, for the protection of individual information (Agu et al., 2013 ###reference_b3###; Cao et al., 2017 ###reference_b6###), different medical institutions are often not allowed to share data (Hao et al., 2019 ###reference_b21###), which hinders the direct aggregation of multi-site data (Duchi et al., 2014 ###reference_b13###) and affects the model accuracy significantly (Li et al., 2019 ###reference_b38###).\nBesides, heterogeneity exists in different sites. For instance, patients are only given some specific drugs in a hospital due to the local suppliers, so models trained on a single site can mislead agents because of the limited actions. Similarly, doctors in the same hospital may follow a common treatment procedure, yielding insufficient exploration of the MDP. Thus, the trajectories in one hospital may distribute substantially differently from that induced by the optimal policy, which is known as distribution shift (Levine et al., 2020 ###reference_b37###)."
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Overview of the Proposed Model and Algorithm",
+ "text": "To resolve these challenges, in this paper, we propose a multi-site MDP model that allows heterogeneity among different sites to address previously discussed issues. Specifically, we consider the setting with sites generated from the episodic linear MDP (Puterman, 2014 ###reference_b49###; Sutton and Barto, 2018 ###reference_b61###) with the state space , action space and horizon detailed in Section 2 ###reference_###. We incorporate heterogeneity by allowing the transition kernel and the reward function to be different for the sites specified as follows:\nfor , where and are the state and action in the time , respectively, \u2019s are unknown measures over , and and are known feature maps. The feature maps can be thought of as the representation of relevant time-varying covariates. The effects of , which includes site-level covariates such as the size of the healthcare system, are assumed to be common across sites; while the effects of are site-specific and hence capture the cross-site heterogeneity. Although the linear MDP model assumes that the transition kernel and expected rewards are the linear functions of the feature maps, the maps themselves can be nonlinear.\nThe proposed model allows the analysis of site-level information, which is helpful and sometimes necessary to learn optimal policies for sequential clinical decision-making (Gottesman et al., 2018 ###reference_b19###). For example, the hospital-level information, such as the number of intensive care units (ICU) and the ratio between doctors/nurses and patients, is related to the mortality of COVID- (Roomi et al., 2021 ###reference_b53###). Due to patient and site heterogeneity, personalized treatment is needed, and the optimal treatments can vary significantly among care units (Zhang et al., 2020 ###reference_b72###). It is thus important to incorporate site-level information into policy optimization. 
We account for potential differences in the transition probability (1 ###reference_###) across various sites by the site-specific measure . The reward function defined in (2 ###reference_###) has two components. The first term is homogeneous among the sites and can depend on the site-level features, while the second term is heterogeneous for each site.\nUnder this model, we propose a two-step Federated Dynamic Treatment Regime algorithm (FDTR) illustrated in Figure 1 ###reference_### to estimate the model parameters and learn the optimal policy. First, the local dynamic treatment regime algorithm (LDTR) adapted from the pessimistic variant of the value iteration algorithm (PEVI) (Jin et al., 2021 ###reference_b28###) is run at each site using its data only to learn the optimal policies and the corresponding value functions for each site. PEVI uses an uncertainty quantifier as the penalty function to eliminate spurious correlations from uninformative trajectories. Next, the summary statistics involving the learned value functions are shared across the sites. By utilizing these summary statistics, each site updates its policy using pessimism again. With the two steps, FDTR achieves efficient communication by sharing necessary summary statistics only once.\n###figure_1###"
+ },
+ {
+ "section_id": "1.2",
+ "parent_section_id": "1",
+ "section_name": "Related Work",
+ "text": "With single-source data, offline RL, such as Q-learning and PEVI, has been developed to estimate optimal DTR (Levine et al., 2020 ###reference_b37###; Jin et al., 2021 ###reference_b28###). Existing offline RL methods typically require strong assumptions on the collected data such as sufficient coverage (Fujimoto et al., 2019 ###reference_b17###; Agarwal et al., 2020 ###reference_b2###; Gulcehre et al., 2020 ###reference_b20###), the ratio between the visitation measure of the target policy and behavior policies (Jiang and Li, 2016 ###reference_b27###; Thomas and Brunskill, 2016 ###reference_b62###; Zhang et al., 2020 ###reference_b71###), or a similar ratio over the state-action space (Antos et al., 2008 ###reference_b4###; Liu et al., 2019 ###reference_b40###; Scherrer et al., 2015 ###reference_b57###). However, these assumptions either tend to be violated in healthcare settings or are not verifiable.\nOne approach to overcome such challenges is to train optimal DTR using data from multiple sites. To further mitigate data-sharing constraints, federated learning has been developed to enable co-training algorithms based on shared summary-level data, with significant success in various tasks including prediction (Min et al., 2019 ###reference_b43###; Hard et al., 2018 ###reference_b22###), and classification or clustering (Kone\u010dn\u1ef3 et al., 2016 ###reference_b33###; McMahan et al., 2017 ###reference_b42###; Yang et al., 2019 ###reference_b68###). 
Compared to stochastic gradient descent (SGD)-based algorithms (Rothchild et al., 2020 ###reference_b54###), communication-efficient algorithms based on exchanging summary statistics are more practical for many healthcare problems since due to privacy concerns, only the summary-level data are ready for research and can be shared with researchers (Hong et al., 2021 ###reference_b25###), while SGD-based algorithms still need to be performed based on patient-level data locally, which requires the researchers to have access to the patient-level data. Communication-efficient algorithms with statistical guarantees have been developed for parametric likelihood framework (Jordan et al., 2019 ###reference_b31###; Battey et al., 2018 ###reference_b5###; Duan et al., 2022 ###reference_b11###)\nand the modern machine learning models (Wang et al., 2019 ###reference_b63###; Elgabli et al., 2020 ###reference_b14###).\nIn recent years, federated RL algorithms have also been proposed to co-train the underlying model overcoming data sharing constraints (Zhuo et al., 2019 ###reference_b73###; Nadiger et al., 2019 ###reference_b47###; Lim et al., 2020 ###reference_b39###). However, these algorithms focus on the online RL setting, which prevents them from being used for finding DTRs; additionally, many of them lack theoretical guarantees. Until recently, several online algorithms have been proposed with guaranteed convergences (Fan et al., 2021 ###reference_b15###; Chen et al., 2023 ###reference_b10###; Xie and Song, 2023 ###reference_b66###; Jadbabaie et al., 2022 ###reference_b26###). Federated RL is also different from multi-agent RL (Chen et al., 2022 ###reference_b9###; Zhang et al., 2019 ###reference_b70###), which considers the interaction of several agents in a common environment as illustrated by Zhuo et al. (2019 ###reference_b73###). Nevertheless,\nlimited methods exist for federated offline RL. 
Under homogeneous settings where the underlying models are assumed to be identical across sites, one may train local DTRs within each site and obtain a combined estimate by averaging the local DTR estimates across sites. However, such an approach cannot accommodate cross-site heterogeneity. As far as we are aware, no existing methods can perform co-training federated offline RL models in the presence of heterogeneity."
+ },
+ {
+ "section_id": "1.3",
+ "parent_section_id": "1",
+ "section_name": "Our Contributions",
+ "text": "To the best of our knowledge, FDTR is the first federated policy optimization algorithm for offline RL with sample complexity.\nOur primary innovation in the algorithm aspect is the multi-site MDP model, which takes site-level covariates into account to address heterogeneity and bias when performing federated learning. When site-level covariates are constant within each site, the PEVI algorithm cannot estimate site-level coefficients using single-site data. Although PEVI could be applied to pooled patient-level data from all sites, this approach is not permissible under privacy constraints. To overcome this issue, we have developed an efficient two-step algorithm for federated learning in the presence of privacy constraints, using only summary-level data.\nOur second innovation lies in our theoretical contributions. We provide theoretical justification for FDTR,\nwithout the assumption of sufficient action coverage, a strong assumption which is often violated for clinical trials and observational healthcare data, due to the limited feasible treatments and sample size (Gottesman et al., 2018 ###reference_b19###, 2019 ###reference_b18###). Instead, we assume independence of the trajectories, a weaker assumption that is easier to be satisfied in reality within many disease contexts such as cancer and sepsis, as a standard simplification in theoretical development (Nachum et al., 2019 ###reference_b46###; Xie et al., 2021 ###reference_b64###; Xie and Jiang, 2021 ###reference_b65###; Zhan et al., 2022 ###reference_b69###). We provide an error rate for homogeneous coefficients that behaves as if all data were pooled together, which is particularly relevant for federated learning (Theorem 3 ###reference_orem3###). 
We offer an explicit rate (Corollary 1 ###reference_ollary1###) under the well-explored dataset assumption further and extend the linear MDP to a nonparametric model (Theorem 4 ###reference_orem4###) to enhance the generalizability of the proposed method.\nIn the rest of the paper, we detail the multi-site MDP model in Section 2 ###reference_###, present FDTR in Section 3 ###reference_###, and show its theoretical property in Section 4 ###reference_###.\nSimulations are conducted in Section 5.1 ###reference_### and a real data application is given in Section 5.2 ###reference_###.\nFinally, discussions are shown in Section 6 ###reference_###."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Multi-site MDP Model",
+ "text": "We define as -norm, as operator norm. For vectors , . For integer , . For matrix , is its Moore-Penrose inverse. denotes the identity matrix. For symmetric matrices , we say if is positive semi-definite. For , . We use when is bounded by up to logarithmic factors.\nIn this section, we detail the multi-site MDP model. Given the th site for , a dataset is collected a priori where at each step of each trajectory (e.g., patient) , the experimenter (e.g., clinician) takes the action at the state , receives the reward satisfying (2 ###reference_###), and observes the next state satisfying (1 ###reference_###). The feature maps and in (1 ###reference_###) and (2 ###reference_###) are pre-specified. To guarantee sample complexity and model identifiability, we assume no co-linearity between and (detailed in Remark 4 ###reference_ark4###). The transition probabilities only depend on features specified in and we let include site-level features and features with effects that are common across sites. For example, we can allow the age categories (e.g., children or adults) to have a common effect across sites, which are contained by , while allowing for a small degree of heterogeneity by including linear age effects in . All trajectories in for are assumed to be independent. We impose no constraint on the behavior policies \u2019s and allow them to vary across the sites. For any policy , we define the (state-)value function and the action-value function (Q-function) for the th site at each step as\nHere the expectation in (3 ###reference_###) and (4 ###reference_###) is taken with respect to the randomness of the trajectory induced by , which is obtained by taking the action at the state and observing the next state at each step .\nMeanwhile, we fix in (3 ###reference_###) and in (4 ###reference_###). 
Bellman equation implies\nwhere is the inner product over , is the Bellman operator defined as for any function , with taken with respect to the randomness of the reward and next state where . By (1 ###reference_###) and (2 ###reference_###), for any function , there exists such that\nwhere . Therefore, the coefficients and can be estimated through linear regression if the values are known, which inspires us to derive the FDTR algorithm."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Federated Dynamic Treatment Regimes Algorithm",
+ "text": "Suppose that we are treating the th site. Inspired by (5 ###reference_###), we notice the key step is to construct estimates of and of based on and the summary statistics of . As mentioned in Section 1 ###reference_###, pessimism plays an important role in the control of suboptimality. Define . Following the line of Jin et al. (2021 ###reference_b28###), we achieve pessimism by the notion of multi-site confidence bound as follows.\nFor the th site, we say is a -multi-site confidence bound of with respect to if the event\nsatisfies . Here the value functions and can depend on . Specifically, if both of them only depend on , then .\nBy definition, quantifies the approximation error of for , which is important in eliminating the spurious correlation as discussed by Jin et al. (2021 ###reference_b28###). Before we present the details of FDTR, we introduce the following notations for simplicity:\nwhere with , and is some value function. Besides, let where and . We define the empirical mean squared Bellman error:\nIf the individual-level information is shareable across the sites, we can aggregate to estimate and for simultaneously. However, as we state above, it is usually not possible in reality due to privacy concerns. On the contrary, we assume that we have some preliminary estimators for the value functions . Here we adopt the PEVI of linear MDP algorithm in Jin et al. (2021 ###reference_b28###) to obtain the preliminary estimators , which is summarized in Algorithm 1 ###reference_###.\nWe consider the objective function for :\nFor the th site, we only need and to estimate the value functions. We do not impose the regularization on for for the simplicity of theoretical analysis, which can be revealed in the proof of Theorem 2 ###reference_orem2###. The objective function (9 ###reference_###) has the explicit minimizer\nwhere . Let be the projection matrix to the column space of . We define\nWe then set\n\nMeanwhile, we construct based on as\nat each step . 
Here is a scaling parameter to be specified later according to the theoretical rate. In addition, we construct based on \nas\nNotice that the above procedures only require summary statistics from other sites. To be specific, only and , for are required when we are treating the th site.\nThe FDTR algorithm is summarized in Algorithm 2 ###reference_###. The communication cost is for getting the summary statistics from the other sites and the computational complexity is for calculating the summary statistics and conducting linear regressions. For the homogeneous parameter , one may use the average estimators from the site defined as in practice, while the rate of is the same as for each as revealed by Corollaries 1 ###reference_ollary1### and 2 ###reference_ollary2###.\nIn comparison, using SGD for linear regression necessitates multiple communication rounds, each costing due to gradient exchanges across sites for time points. While SGD demands iterations for an\n-optimal solution in convex problems (Harold et al., 1997 ###reference_b23###), leading to a total computational cost of among sites, FDTR, assuming , is more time-efficient than SGD when , a frequent scenario in RL (Agarwal et al., 2021 ###reference_b1###)."
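The one-round communication pattern described above — each site ships only low-dimensional summary statistics, and the treating site solves a single aggregated regression — can be sketched as follows. The helper names are ours, and the Gram matrix and moment vector stand in for the paper's summary statistics:

```python
import numpy as np

def site_summary(Phi, y):
    """Summary statistics a site shares: Gram matrix and moment vector."""
    return Phi.T @ Phi, Phi.T @ y

def federated_ridge(summaries, lam=1.0):
    """Aggregate per-site (Gram, moment) pairs and solve one ridge regression.

    Mathematically equivalent to fitting on the pooled data, but only
    d x d and d x 1 summaries ever leave a site, so one communication
    round of O(K * d^2) suffices.
    """
    G = sum(g for g, _ in summaries)
    m = sum(v for _, v in summaries)
    d = G.shape[0]
    return np.linalg.solve(G + lam * np.eye(d), m)

# Sanity check: aggregating summaries matches pooling the raw data.
rng = np.random.default_rng(0)
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
sites = []
for _ in range(3):
    Phi = rng.normal(size=(50, 4))
    y = Phi @ beta_true + 0.01 * rng.normal(size=50)
    sites.append((Phi, y))

beta_fed = federated_ridge([site_summary(Phi, y) for Phi, y in sites], lam=1e-6)
Phi_all = np.vstack([Phi for Phi, _ in sites])
y_all = np.concatenate([y for _, y in sites])
beta_pooled = np.linalg.solve(
    Phi_all.T @ Phi_all + 1e-6 * np.eye(4), Phi_all.T @ y_all
)
```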
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Theoretical Analysis",
+ "text": "We now state the theoretical properties of FDTR.\nFor the multi-site MDP , we use , and to denote the optimal policy, Q-function, and value function for the th site, respectively. We have and the Bellman optimality equation\n, \nMeanwhile, the optimal policy is specified by\n and \nwhere the maximum is taken over all functions mapping from to distributions over . We aim to learn a policy that maximizes the expected cumulative reward. Correspondingly, we define the performance metric at the th site as ,\nwhich is the suboptimality of the policy with regard to given the initial state .\nWe are presenting both data-dependent and explicit rate results. The data-dependent results are more adaptive to the concrete realization of the data and require fewer assumptions, while the explicit rate results are more straightforward to compare and give more insights into how the suboptimality and the estimators depend on , , and . For technical simplicity, we assume that for all , , , which can be guaranteed after suitable normalization, where with abuse of notation, we define\n. Theorems\n1 ###reference_orem1### and 2 ###reference_orem2### characterize the data-dependent suboptimality of Algorithms 1 ###reference_### and 2 ###reference_###.\nIn Algorithm 1 ###reference_###, we set , , where ,\n is an absolute constant and is the confidence parameter. Then in Algorithm 1 ###reference_### is a -multi-site confidence bound of . For any , in Algorithm 1 ###reference_### satisfies \nwith probability at least , for any .\nHere is taken with respect to the trajectory induced by in the underlying MDP given the fixed .\nIn Algorithm 2 ###reference_###, we set\nHere is an absolute constant and is the confidence parameter. Then specified in (12 ###reference_###) is a -multi-site confidence bound of in Algorithm 2 ###reference_###. 
Besides, for any , with probability at least , for any , in Algorithm 2 ###reference_### satisfies\nHere is taken concerning the trajectory induced by in the underlying MDP given the fixed matrix .\nThe theoretical results of FDTR are structural, which means that even if the local estimator were to be altered from PEVI to another choice such as the standard value iteration algorithm (i.e., setting in Algorithm 1 ###reference_###),\nanalogous theoretical results for FDTR would still be applicable. Our examination of the proof of Theorem 2 ###reference_orem2### elucidates that FDTR\u2019s theoretical outcomes are maintained provided the initial estimates are encompassed within the function class delineated in (S2) of Supplementary S2.\nBy the definition of in (11 ###reference_###), it is easy to show that , which implies\nthat the right-hand side of (14 ###reference_###) is finite irrespective of the behavior policy employed and\n.\nComparing Theorem 1 ###reference_orem1### and Theorem 2 ###reference_orem2###, the scaling parameters and only have a difference in the logarithm term, while Theorem 2 ###reference_orem2### has a sharper bound for the expected term, which is owing to the utilization of multi-site information.\nFor specified in Theorem 2 ###reference_orem2###, satisfies for any with probability .\nRecall that is not estimable when is a constant vector for the trajectories in the same site (i.e., only contains the site-level covariates), since for is a rank-one matrix. However, when we combine sites, if , under mild conditions, will be a full rank matrix and can be estimated with theoretical guarantee, whose details will be given later.\nThe above theorems are data-dependent. To present the explicit rates related to the sample sizes , we need to impose more assumptions on the data generation mechanism. For notational simplicity, we define the following (uncentered) covariance matrices\nfor , , and . 
By definition, is the covariance of homogeneous features, is the covariance of heterogeneous features and is their joint covariance matrix.\nIn what follows, we illustrate the necessity of FDTR by considering the setting where the data-collecting process well explores the state-action space. Recall that is the number of trajectories of all the sites. We have the following corollary.\nAssume that there exists an absolute constant such that for any and and and . If we choose the tuning parameters as (13 ###reference_###), it holds with probability at least that and .\nAccordingly, we have satisfying\n.\nUnder the well-coverage assumption in Corollary 1 ###reference_ollary1###, we can get similar results for LDTR. Specifically, and with probability at least ,\nwhere is the estimated preliminary homogeneous coefficient corresponding to the first elements of defined in line 6 of Algorithm 1 ###reference_###.\nNotice that . The additional assumption that and implies that the norm is uniformly distributed among and , which is only needed to obtain the explicit rate dependent on and . Otherwise, the upper bound of can be replaced by\nBesides, the assumption that implies that and can not be fully linear dependent, that is, a feature exists in both and . Otherwise, may not be full rank, violating the lower bounded eigenvalue assumption. However, the features in and can be correlated, as the age covariate example in Section 2 ###reference_###.\nThe assumption that assumes that the dataset well explores the state and action space (Duan et al., 2020 ###reference_b12###; Jin et al., 2021 ###reference_b28###) in the sense that each direction in the feature space is well explored in the dataset . When considering the well-explored dataset, Corollary 1 ###reference_ollary1### shows that FDTR attains the suboptimality of order , while the suboptimality of the preliminary estimator is (Jin et al., 2021 ###reference_b28###). So FDTR will have a tighter rate than LDTR when . 
Thus, FDTR is more efficient than LDTR in eliminating the suboptimality arising from the site-level features. In specific, when , the suboptimality of FDTR is which achieves the desirable rate of . Given our assumption of heterogeneity across the sites as outlined in models (1 ###reference_###) and (2 ###reference_###), only the homogeneous component can be universally shared among sites, boasting an effective sample size of . In contrast, the heterogeneous component maintains an effective sample size of , precisely mirroring the rate of as defined in Corollary 1 ###reference_ollary1###. Besides, in Corollary 1 ###reference_ollary1###, we further show that the statistical rate of the homogeneous parameter is even if , which is better than the rate of LDTR with a factor of . When and are fixed, it achieves the standard parametric rate which performs like pooling all data together.\nConsider the setting where the dataset well explores the state-action space, which recovers the online setting after the exploration. For example, when and , our setting recovers the homogeneity setting where the data of all the sites follow the same distribution. In such a setting, our sample complexity for achieving the -optimal solution is at the order of , which matches the rate of Fan et al. (2021 ###reference_b15###). Given that our estimators reach these optimal rates (up to logarithmic factors), there is no statistical advantage to permitting multiple communication rounds.\nHowever, the assumption that will be violated when , which happens when and is constant in the same site. To this end, cannot be estimated by using the single site data only. However, can still be estimated by FDTR as shown by the following corollary.\nWe assume that there exists an absolute constant such that and\nand and , . If we choose the tuning parameters as (13 ###reference_###), it holds with probability at least that and . 
Accordingly, we have\n.\nBy imposing the assumption in (16 ###reference_###), we get the same suboptimality and statistical rate as in Corollary 1 ###reference_ollary1###.\nIn particular, the assumption in (16 ###reference_###) allows for for a single site , which allows to be constant within each site. Recall that is the (uncentered) covariance between the homogeneous feature and the heterogeneous feature . The inequality in (16 ###reference_###) holds when such a correlation is relatively small. Besides, we require to be relatively large, which holds when the space of site-level features is well-explored by the dataset ."
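The rank argument behind the corollaries above — a site-level feature that is constant within a site yields a rank-one per-site Gram matrix, while pooling distinct sites restores full rank — can be checked numerically. The site-level feature vectors here are toy values of our own choosing:

```python
import numpy as np

# Each site's site-level feature vector w is constant within the site, so
# the per-site (uncentered) covariance w w^T is rank one; pooling several
# sites with distinct feature vectors restores full rank.
site_feats = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

single_site_cov = np.outer(site_feats[0], site_feats[0])
pooled_cov = sum(np.outer(w, w) for w in site_feats) / len(site_feats)
```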
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Extension to Non-parametric Estimation",
+ "text": "Theorems 1 ###reference_orem1### and 2 ###reference_orem2### rely on the linear MDP assumption, which is restrictive in many real-world applications. However, FDTR is robust to model misspecification when the linear MDP structure is violated. As long as the discrepancy is not significant, FDTR still enjoys theoretical guarantees as shown in Supplementary S8. To further enhance the generalizability of FDTR, we extend it to non-parametric estimation. Specifically, instead of imposing the parametric assumptions\n(1 ###reference_###) and (2 ###reference_###) on the MDP, we consider using basis expansions with the number of basis functions growing with sample size to approximate the transition kernels and reward functions. Assume that where is the homogeneous feature and is the heterogeneous feature, and , respectively, and let . For simplicity, we assume that the homogeneous features in remain constant during the whole horizon, and and are bounded, which are common in many real-world applications. We also assume that and have th order continuous and bounded derivative with respect to , and . Let and be the basis functions of and , respectively. By the property of multivariate basis expansion (Hastie et al., 2009 ###reference_b24###), the basis functions of can be represented by , where are the basis functions of . We define \nas the homogeneous basis vector with the number of bases , as the heterogeneous basis vector with the number of bases and . Examples of basis functions are B-splines or Fourier series basis terms. Then we can use the basis functions and to estimate the rewards and value functions. In addition, we define a new multi-site confidence bound as\nwhere and are two tuning parameters.\nWith the tuning parameters set as (13 ###reference_###) and for some constant . 
Then we have, for any and for any , satisfies \nwith probability at least for some constant dependent on and .\nOther basis functions, such as multivariate trigonometric polynomials, can also be used, with slightly different smoothness requirements for the transition kernel and rewards while yielding similar results. More discussion on the approximation property of these basis functions can be found in Schultz (1969 ###reference_b58###). Under the same well-explored assumption as Corollary 1 ###reference_ollary1###, we have the following corollary.\nAssume that there exists an absolute constant such that for any and and and . If we choose the tuning parameters as in Theorem 4 ###reference_orem4###, it holds\n\nwith probability at least .\nWhen , Corollary 3 ###reference_ollary3### reveals that the suboptimality of FDTR is . When is large, indicating sufficiently smooth transition and reward functions, the term approximates . This suggests that the rate from Corollary 3 ###reference_ollary3### converges to the optimal rate of ."
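As one concrete, purely illustrative instance of the multivariate basis construction described above, a tensor-product basis can be built from products of univariate bases, one per coordinate. The choice of Fourier terms and the function names are ours:

```python
import numpy as np

def univariate_fourier_basis(x, m):
    """First m Fourier basis terms on [0, 1]: 1, cos(pi x), sin(pi x), ..."""
    cols = [np.ones_like(x)]
    for j in range(1, m):
        freq = (j + 1) // 2
        cols.append(np.cos(np.pi * freq * x) if j % 2 == 1
                    else np.sin(np.pi * freq * x))
    return np.stack(cols, axis=-1)

def tensor_product_basis(X, m):
    """Multivariate basis via products of univariate bases, one per coordinate.

    X: (n, p) points; returns an (n, m**p) design matrix whose columns are
    all products b_{j1}(x_1) * ... * b_{jp}(x_p).
    """
    n, p = X.shape
    B = np.ones((n, 1))
    for k in range(p):
        Bk = univariate_fourier_basis(X[:, k], m)          # (n, m)
        B = (B[:, :, None] * Bk[:, None, :]).reshape(n, -1)
    return B

rng = np.random.default_rng(2)
X = rng.uniform(size=(5, 2))
B = tensor_product_basis(X, m=3)
```

With p = 2 coordinates and m = 3 univariate terms each, the design matrix has m**p = 9 columns, and the first column (the product of the constant terms) is identically one.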
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Simulations",
+ "text": "We perform extensive simulations to evaluate the performance of FDTR. In particular, we focus on how well the proposed methods work as we vary the dimension and cardinality of the state and action space, respectively, episode length, and the number of sites. We simulate the data according to the following linear MDP. We generate random vectors , for and matrix following element-wise distribution, which are then normalized to satisfy the assumptions in Section 4 ###reference_###. We generate rewards using a Gaussian distribution centered at according to (2 ###reference_###). Importance sampling is used to sample states from the state transitions density: given state action pair we draw the next state from a proposal distribution: and re-weigh samples using with and we use column in as .\nWe clip state vectors and normalize rewards such that , for all , , . Finally, we use linear functions with an action-interaction term to represent the treatment effect for and , specifically, for .\nBased on the theoretical analysis presented in Theorems 1 ###reference_orem1### and 2 ###reference_orem2###, we set the parameter . The parameter defines the probabilities for the suboptimality to be valid and is assigned a value nearing one, specifically . The remaining hyper-parameter within is determined using -fold cross-validation on the training data to optimize the estimated value functions across sites and trajectories, resulting in . We implement different benchmark methods for comparison. The first one is LDTR in Algorithm 1 ###reference_###, which yields a policy for a fixed . We then use the locally trained LDTRs to define LDTR (MV), which given a state, uses majority voting across the policies to select an action. We also train a -learning policy in a single hospital site. We use ordinary least squares to estimate the -functions, as these are linear on . There are three variations of this method. 
The first, -learn uses a single -function for any time-step trained locally in each site. The second selects the most popular action among the locally trained functions; we call this -learn (1-MV). The third one, -learn , is also locally trained -learning; however, this one uses a different function for each of the time steps.\n###figure_2### ###figure_3### ###figure_4### Figure 2 ###reference_### shows empirical results for the performance of FDTR, LDTR, and its majority voting version, and their -learning counterparts for different settings. It is clear that across settings, FDTR outperforms in terms of the mean value. These results are expected since FDTR efficiently aggregates data across sites to estimate the policy. As the state dimension becomes large all methods decrease in performance for any given sample size, this is natural as estimated parameters have larger standard errors. In these cases such as Figure (1(c) ###reference_sf3###), FDTR still performs best relative to local and majority voting methods. It is worth noting that the larger sample size benefits FDTR in terms of better policy estimation and also decreases uncertainty in terms of its performance, as illustrated by the narrower standard errors. Additionally, FDTR can estimate coefficients at the hospital level, which other methods cannot do locally. Hospital-level covariate estimation allows FDTR to tailor the policy to local hospital characteristics, translating into a better policy function."
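The LDTR (MV) benchmark described above — given a state, take the action most commonly proposed by the locally trained policies — can be sketched as follows. This is an illustrative helper with a first-site tie-breaking rule of our own choosing:

```python
from collections import Counter

def majority_vote_policy(local_policies):
    """Combine locally trained policies by majority vote: each site's policy
    proposes an action for the given state and the most common proposal is
    taken. Ties are broken in favour of the earliest site's proposal."""
    def policy(state):
        votes = [pi(state) for pi in local_policies]
        counts = Counter(votes)
        best = max(counts.values())
        for a in votes:                 # first-site tie-breaking
            if counts[a] == best:
                return a
    return policy

# Toy example with three constant site policies: two sites vote for action 1.
pis = [lambda s: 0, lambda s: 1, lambda s: 1]
pi_mv = majority_vote_policy(pis)
```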
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "FDTR for Sepsis Treatment Across Intensive Care Units",
+ "text": "We further illustrate the performance of FDTR in optimizing the treatment of sepsis, an acute and life-threatening condition in which the body reacts to infection improperly. Clinicians must determine fluid management strategies (actions) based on a patient\u2019s clinical state including vital signs and lab tests (state) at different stages (time) (Rhodes et al., 2017 ###reference_b51###). While survival is an important long-term reward to target, it is not a short-term parameter that can be tracked by physicians to adjust treatment strategies during hospitalization. We use an alternative reward based on serum lactate level which is an established marker for systemic tissue hypoperfusion that reflects cellular dysfunction in sepsis and is predictive of sepsis mortality (Lee and An, 2016 ###reference_b36###; Ryoo et al., 2018 ###reference_b56###). We identify optimal fluid management strategies for sepsis patients using longitudinal serum lactate levels as short-term rewards based on the MIMIC-IV EHR data (Johnson et al., 2020 ###reference_b29###; Raghu et al., 2017 ###reference_b50###).\nUsing publicly available SQL queries222https://github.com/yugangjia/Team-Sepsis/blob/main/Sepsis_cohort.sql ###reference_lob/main/Sepsis_cohort.sql###, we extracted MIMIC-IV sepsis cohort which consists of data from different care units with patient trajectories (episodes) (Sonabend et al., 2020 ###reference_b59###). The site-level covariates include an indicator for whether the care unit is not an ICU (noICU) and the patient flow (Flow) defined as the total duration of all patients at each care unit (normalized by the total duration of all patients at all care units), which characterizes the size of the care unit. Patient-level covariates include measurements of weight, temperature, systolic blood pressure, hemoglobin, and potassium levels. We use the feature maps for to incorporate the second order action-interaction effect, yielding a state-space dimension of . 
Each time step consists of a four-hour interval, and trajectories consist of time steps. The reward is inversely proportional to the lactic acid level, which is a clinical index measuring the severity of sepsis (Lee and An, 2016 ###reference_b36###), and we transform it to be between and .\nThe action space corresponds to the dosage median for intravenous fluids, yielding three different actions. We use a step-importance sampling estimator (Thomas and Brunskill, 2016 ###reference_b62###; Gottesman et al., 2018 ###reference_b19###) for the value function, which we use to compare methods. We use a - split at each care unit for training and test data, respectively. We train all methods described in Section 5.1 ###reference_### and evaluate them at each unit. The hyperparameters are chosen in the same way as in Section 5.1 ###reference_###.\nFigure (2(a) ###reference_sf1###) shows the value function estimate averaged over test sets. Estimates are computed for all methods along with the confidence interval. FDTR performs significantly better than the rest of the methods. This is because FDTR aggregates information across care units and can estimate site-specific effects, which yields a superior policy function. As a sensitivity analysis, we have further fit a more complex model including non-linear bases and shown that including non-linear effects does not improve the value function, in part due to the bias-variance trade-off with limited sample size. The details and results are given in Supplementary S9.\n###figure_5### ###figure_6### Finally, we show the homogeneous coefficients estimated by FDTR averaged over all care units in Figure (2(b) ###reference_sf2###). First, we observe that all of the estimated coefficients are positive. For example, the positive coefficients related to the variable \u201cFlow\u201d show that patients in larger care units (with a higher patient flow) might experience better outcomes, given the same treatment action. 
This could be due to factors such as the availability of resources, staffing levels, or specific expertise present in larger care units that may contribute to the more effective initial management of sepsis (Rudd et al., 2018 ###reference_b55###). Furthermore, coefficients of site-level covariates tend to decrease over time. The trend implies that the influence of site-level covariates on treatment decisions decreases as the treatment progresses. This is expected because treatment decisions at later steps are more likely to be driven by patient-specific factors, such as their response to previous treatments and evolving clinical conditions, rather than site-level factors (Zhang et al., 2020 ###reference_b72###).\nIn conclusion, the clinical interpretation of the FDTR results for sepsis management underscores the importance of understanding the influence of care unit characteristics and treatment actions on patient outcomes. This knowledge can help healthcare providers make more informed decisions and tailor their treatment strategies for better patient care."
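The step-importance sampling estimator used above (Thomas and Brunskill, 2016) re-weights the reward at step t by the cumulative likelihood ratio of only the first t steps. A minimal sketch, with a function signature and toy trajectories of our own:

```python
import numpy as np

def step_is_estimate(trajectories, pi, mu, gamma=1.0):
    """Step-wise importance sampling estimate of a policy's value.

    Each trajectory is a list of (state, action, reward) tuples collected
    under the behaviour policy mu; pi(a, s) and mu(a, s) return action
    probabilities. The reward at step t is re-weighted by the cumulative
    likelihood ratio of the first t steps only.
    """
    values = []
    for traj in trajectories:
        rho, v = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= pi(a, s) / mu(a, s)
            v += (gamma ** t) * rho * r
        values.append(v)
    return float(np.mean(values))

# Toy check: evaluating the behaviour policy against itself reduces every
# likelihood ratio to one, recovering the empirical mean return.
trajs = [[(0, 0, 1.0), (0, 1, 2.0)], [(0, 1, 0.5)]]
same = lambda a, s: 0.5
```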
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Discussion",
+ "text": "The current implementation of FDTR allows users to specify features with common effects across sites and features with site-specific effects. It is also plausible to allow for more data-adaptive co-training by imposing shrinkage estimation on the learned site-specific parameters, following strategies similar to the heterogeneity-aware federated regression methods (Duan et al., 2022 ###reference_b11###). We show that the sample complexity of FDTR is comparable to the homogeneous case under our model, and it would be interesting to determine whether shrinkage estimation admits a similar sample complexity.\nAn interesting extension to our method would be to equip FDTR with doubly robust (DR) models. For example, if we additionally model the treatment propensity observed in the data, a DR version of FDTR could potentially achieve suboptimality even if the linear MDP assumption is incorrect. Another extension is to develop communication-efficient FDTR algorithms for more general models, which warrants future exploration."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1": {
+ "figure_path": "2206.05581v3_figure_1.png",
+ "caption": "Figure 1: An illustration of FDTR.",
+ "url": "http://arxiv.org/html/2206.05581v3/x1.png"
+ },
+ "2(a)": {
+ "figure_path": "2206.05581v3_figure_2(a).png",
+ "caption": "(a) (d,|\ud835\udc9c|,H)=(8,6,15)\ud835\udc51\ud835\udc9c\ud835\udc3b8615(d,|\\mathcal{A}|,H)=(8,6,15)( italic_d , | caligraphic_A | , italic_H ) = ( 8 , 6 , 15 )\nFigure 2: Mean value function for FDTR and benchmarks trained on K=\ud835\udc3eabsentK=italic_K =5 sites for (1(a)), (1(c)) and K=10\ud835\udc3e10K=10italic_K = 10 for (1(b)). We show the value function averaged over the K\ud835\udc3eKitalic_K sites for increasing sample size. Error bars show 95%percent9595\\%95 % CI. Finally, (d,|\ud835\udc9c|,H)\ud835\udc51\ud835\udc9c\ud835\udc3b(d,|\\mathcal{A}|,H)( italic_d , | caligraphic_A | , italic_H ) stands respectively for the dimension of state space, cardinality of action space, and episode length.",
+ "url": "http://arxiv.org/html/2206.05581v3/extracted/5372643/figures/State1.png"
+ },
+ "2(b)": {
+ "figure_path": "2206.05581v3_figure_2(b).png",
+ "caption": "(b) (d,|\ud835\udc9c|,H)=(8,6,15)\ud835\udc51\ud835\udc9c\ud835\udc3b8615(d,|\\mathcal{A}|,H)=(8,6,15)( italic_d , | caligraphic_A | , italic_H ) = ( 8 , 6 , 15 )\nFigure 2: Mean value function for FDTR and benchmarks trained on K=\ud835\udc3eabsentK=italic_K =5 sites for (1(a)), (1(c)) and K=10\ud835\udc3e10K=10italic_K = 10 for (1(b)). We show the value function averaged over the K\ud835\udc3eKitalic_K sites for increasing sample size. Error bars show 95%percent9595\\%95 % CI. Finally, (d,|\ud835\udc9c|,H)\ud835\udc51\ud835\udc9c\ud835\udc3b(d,|\\mathcal{A}|,H)( italic_d , | caligraphic_A | , italic_H ) stands respectively for the dimension of state space, cardinality of action space, and episode length.",
+ "url": "http://arxiv.org/html/2206.05581v3/extracted/5372643/figures/State2.png"
+ },
+ "2(c)": {
+ "figure_path": "2206.05581v3_figure_2(c).png",
+ "caption": "(c) (d,|\ud835\udc9c|,H)=(20,2,5)\ud835\udc51\ud835\udc9c\ud835\udc3b2025(d,|\\mathcal{A}|,H)=(20,2,5)( italic_d , | caligraphic_A | , italic_H ) = ( 20 , 2 , 5 )\nFigure 2: Mean value function for FDTR and benchmarks trained on K=\ud835\udc3eabsentK=italic_K =5 sites for (1(a)), (1(c)) and K=10\ud835\udc3e10K=10italic_K = 10 for (1(b)). We show the value function averaged over the K\ud835\udc3eKitalic_K sites for increasing sample size. Error bars show 95%percent9595\\%95 % CI. Finally, (d,|\ud835\udc9c|,H)\ud835\udc51\ud835\udc9c\ud835\udc3b(d,|\\mathcal{A}|,H)( italic_d , | caligraphic_A | , italic_H ) stands respectively for the dimension of state space, cardinality of action space, and episode length.",
+ "url": "http://arxiv.org/html/2206.05581v3/extracted/5372643/figures/State3.png"
+ },
+ "3(a)": {
+ "figure_path": "2206.05581v3_figure_3(a).png",
+ "caption": "(a) Estimated value function.\nFigure 3: (2(a)) Value function estimates on the sepsis\nheld out data across 9999 ICU. (2(b)) Homogeneous coefficients estimated by FDTR at different time points.",
+ "url": "http://arxiv.org/html/2206.05581v3/extracted/5372643/figures/sepsis_results_jasa.png"
+ },
+ "3(b)": {
+ "figure_path": "2206.05581v3_figure_3(b).png",
+ "caption": "(b) Estimated homogeneous coefficients.\nFigure 3: (2(a)) Value function estimates on the sepsis\nheld out data across 9999 ICU. (2(b)) Homogeneous coefficients estimated by FDTR at different time points.",
+ "url": "http://arxiv.org/html/2206.05581v3/x2.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "On the theory of policy gradient methods: Optimality, approximation, and distribution shift.",
+ "author": "Agarwal, A., S. M. Kakade, J. D. Lee, and G. Mahajan (2021).",
+ "venue": "The Journal of Machine Learning Research 22(1), 4431\u20134506.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "An optimistic perspective on offline reinforcement learning.",
+ "author": "Agarwal, R., D. Schuurmans, and M. Norouzi (2020).",
+ "venue": "In International Conference on Machine Learning, pp. 104\u2013114. PMLR.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "The smartphone as a medical device: Assessing enablers, benefits and challenges.",
+ "author": "Agu, E., P. Pedersen, D. Strong, B. Tulu, Q. He, L. Wang, and Y. Li (2013).",
+ "venue": "In 2013 IEEE International Workshop of Internet-of-Things Networking and Control (IoT-NC), pp. 48\u201352. IEEE.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Fitted Q-iteration in continuous action-space MDPs.",
+ "author": "Antos, A., C. Szepesv\u00e1ri, and R. Munos (2008).",
+ "venue": "In J. Platt, D. Koller, Y. Singer, and S. Roweis (Eds.), Advances in Neural Information Processing Systems, Volume 20. Curran Associates, Inc.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Distributed testing and estimation under sparse high dimensional models.",
+ "author": "Battey, H., J. Fan, H. Liu, J. Lu, and Z. Zhu (2018).",
+ "venue": "Annals of Statistics 46(3), 1352.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "DeepMood: Modeling mobile phone typing dynamics for mood detection.",
+ "author": "Cao, B., L. Zheng, C. Zhang, P. S. Yu, A. Piscitello, J. Zulueta, O. Ajilore, K. Ryan, and A. D. Leow (2017).",
+ "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 747\u2013755.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Statistical methods for dynamic treatment regimes.",
+ "author": "Chakraborty, B. (2013).",
+ "venue": "Springer.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Dynamic treatment regimes.",
+ "author": "Chakraborty, B. and S. A. Murphy (2014).",
+ "venue": "Annual Review of Statistics and Its Application 1, 447\u2013464.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Communication-efficient policy gradient methods for distributed reinforcement learning.",
+ "author": "Chen, T., K. Zhang, G. B. Giannakis, and T. Ba\u015far (2022).",
+ "venue": "IEEE Transactions on Control of Network Systems 9(2), 917\u2013929.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Byzantine-robust online and offline distributed reinforcement learning.",
+ "author": "Chen, Y., X. Zhang, K. Zhang, M. Wang, and X. Zhu (2023).",
+ "venue": "In International Conference on Artificial Intelligence and Statistics, pp. 3230\u20133269. PMLR.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Heterogeneity-aware and communication-efficient distributed statistical inference.",
+ "author": "Duan, R., Y. Ning, and Y. Chen (2022).",
+ "venue": "Biometrika 109(1), 67\u201383.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Minimax-optimal off-policy evaluation with linear function approximation.",
+ "author": "Duan, Y., Z. Jia, and M. Wang (2020).",
+ "venue": "In International Conference on Machine Learning, pp. 2701\u20132709. PMLR.",
+ "url": null
+ }
+ },
+ {
+ "13": {
212
+ "title": "Privacy aware learning.",
213
+ "author": "Duchi, J. C., M. I. Jordan, and M. J. Wainwright (2014).",
214
+ "venue": "Journal of the ACM (JACM) 61(6), 1\u201357.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "14": {
220
+ "title": "GADMM: Fast and communication efficient framework for distributed machine learning.",
221
+ "author": "Elgabli, A., J. Park, A. S. Bedi, M. Bennis, and V. Aggarwal (2020).",
222
+ "venue": "Journal of Machine Learning Research 21(76), 1\u201339.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "15": {
228
+ "title": "Fault-tolerant federated reinforcement learning with theoretical guarantee.",
229
+ "author": "Fan, X., Y. Ma, Z. Dai, W. Jing, C. Tan, and B. K. H. Low (2021).",
230
+ "venue": "Advances in Neural Information Processing Systems 34, 1007\u20131021.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "16": {
236
+ "title": "The effectiveness of mobile-health technologies to improve health care service delivery processes: a systematic review and meta-analysis.",
237
+ "author": "Free, C., G. Phillips, L. Watson, L. Galli, L. Felix, P. Edwards, V. Patel, and A. Haines (2013).",
238
+ "venue": "PLoS Med 10(1), e1001363.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "17": {
244
+ "title": "Off-policy deep reinforcement learning without exploration.",
245
+ "author": "Fujimoto, S., D. Meger, and D. Precup (2019).",
246
+ "venue": "In International Conference on Machine Learning, pp. 2052\u20132062. PMLR.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "18": {
252
+ "title": "Guidelines for reinforcement learning in healthcare.",
253
+ "author": "Gottesman, O., F. Johansson, M. Komorowski, A. Faisal, D. Sontag, F. Doshi-Velez, and L. A. Celi (2019).",
254
+ "venue": "Nature medicine 25(1), 16\u201318.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "19": {
260
+ "title": "Evaluating reinforcement learning algorithms in observational health settings.",
261
+ "author": "Gottesman, O., F. Johansson, J. Meier, J. Dent, D. Lee, S. Srinivasan, L. Zhang, Y. Ding, D. Wihl, X. Peng, et al. (2018).",
262
+ "venue": "arXiv preprint arXiv:1805.12298.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "20": {
268
+ "title": "RL Unplugged: Benchmarks for offline reinforcement learning.",
269
+ "author": "Gulcehre, C., Z. Wang, A. Novikov, T. L. Paine, S. G. Colmenarejo, K. Zolna, R. Agarwal, J. Merel, D. Mankowitz, C. Paduraru, et al. (2020).",
270
+ "venue": "arXiv preprint arXiv:2006.13888.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "21": {
276
+ "title": "Efficient and privacy-enhanced federated learning for industrial artificial intelligence.",
277
+ "author": "Hao, M., H. Li, X. Luo, G. Xu, H. Yang, and S. Liu (2019).",
278
+ "venue": "IEEE Transactions on Industrial Informatics 16(10), 6532\u20136542.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "22": {
284
+ "title": "Federated learning for mobile keyboard prediction.",
285
+ "author": "Hard, A., K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage (2018).",
286
+ "venue": "arXiv preprint arXiv:1811.03604.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "23": {
292
+ "title": "Stochastic approximation and recursive algorithms and applications.",
293
+ "author": "Kushner, H. J. and G. Yin (1997).",
294
+ "venue": "Applications of Mathematics 35.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "24": {
300
+ "title": "The elements of statistical learning: data mining, inference, and prediction, Volume 2.",
301
+ "author": "Hastie, T., R. Tibshirani, and J. H. Friedman (2009).",
302
+ "venue": "Springer.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "25": {
308
+ "title": "Clinical knowledge extraction via sparse embedding regression (KESER) with multi-center large scale electronic health record data.",
309
+ "author": "Hong, C., E. Rush, M. Liu, D. Zhou, J. Sun, A. Sonabend, V. M. Castro, P. Schubert, V. A. Panickan, T. Cai, et al. (2021).",
310
+ "venue": "NPJ digital medicine 4(1), 1\u201311.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "26": {
316
+ "title": "Byzantine-robust federated linear bandits.",
317
+ "author": "Jadbabaie, A., H. Li, J. Qian, and Y. Tian (2022).",
318
+ "venue": "In 2022 IEEE 61st Conference on Decision and Control (CDC), pp. 5206\u20135213. IEEE.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "27": {
324
+ "title": "Doubly robust off-policy value evaluation for reinforcement learning.",
325
+ "author": "Jiang, N. and L. Li (2016).",
326
+ "venue": "In International Conference on Machine Learning, pp. 652\u2013661. PMLR.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "28": {
332
+ "title": "Is pessimism provably efficient for offline RL?",
333
+ "author": "Jin, Y., Z. Yang, and Z. Wang (2021).",
334
+ "venue": "In International Conference on Machine Learning, pp. 5084\u20135096. PMLR.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "29": {
340
+ "title": "MIMIC-IV (version 0.4). PhysioNet.",
341
+ "author": "Johnson, A., L. Bulgarelli, T. Pollard, S. Horng, L. A. Celi, and R. Mark. (2020).",
342
+ "venue": null,
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "30": {
348
+ "title": "MIMIC-III, a freely accessible critical care database.",
349
+ "author": "Johnson, A. E., T. J. Pollard, L. Shen, H. L. Li-Wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark (2016).",
350
+ "venue": "Scientific data 3(1), 1\u20139.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "31": {
356
+ "title": "Communication-efficient distributed statistical inference.",
357
+ "author": "Jordan, M. I., J. D. Lee, and Y. Yang (2019).",
358
+ "venue": "Journal of the American Statistical Association 114(526), 668\u2013681.",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "32": {
364
+ "title": "MOReL: Model-based offline reinforcement learning.",
365
+ "author": "Kidambi, R., A. Rajeswaran, P. Netrapalli, and T. Joachims (2020).",
366
+ "venue": "Advances in Neural Information Processing Systems 33, 21810\u201321823.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "33": {
372
+ "title": "Federated learning: Strategies for improving communication efficiency.",
373
+ "author": "Kone\u010dn\u00fd, J., H. B. McMahan, F. X. Yu, P. Richt\u00e1rik, A. T. Suresh, and D. Bacon (2016).",
374
+ "venue": "arXiv preprint arXiv:1610.05492.",
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "34": {
380
+ "title": "Batch reinforcement learning.",
381
+ "author": "Lange, S., T. Gabel, and M. Riedmiller (2012).",
382
+ "venue": "In Reinforcement learning, pp. 45\u201373. Springer.",
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "35": {
388
+ "title": "Dynamic treatment regimes: practical design considerations.",
389
+ "author": "Lavori, P. W. and R. Dawson (2004).",
390
+ "venue": "Clinical trials 1(1), 9\u201320.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "36": {
396
+ "title": "New clinical criteria for septic shock: serum lactate level as new emerging vital sign.",
397
+ "author": "Lee, S. M. and W. S. An (2016).",
398
+ "venue": "Journal of thoracic disease 8(7), 1388.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "37": {
404
+ "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems.",
405
+ "author": "Levine, S., A. Kumar, G. Tucker, and J. Fu (2020).",
406
+ "venue": "arXiv preprint arXiv:2005.01643.",
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "38": {
412
+ "title": "Privacy-preserving federated brain tumour segmentation.",
413
+ "author": "Li, W., F. Milletar\u00ec, D. Xu, N. Rieke, J. Hancox, W. Zhu, M. Baust, Y. Cheng, S. Ourselin, M. J. Cardoso, et al. (2019).",
414
+ "venue": "In International Workshop on Machine Learning in Medical Imaging, pp. 133\u2013141. Springer.",
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "39": {
420
+ "title": "Federated reinforcement learning for training control policies on multiple IoT devices.",
421
+ "author": "Lim, H.-K., J.-B. Kim, J.-S. Heo, and Y.-H. Han (2020).",
422
+ "venue": "Sensors 20(5), 1359.",
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "40": {
428
+ "title": "Neural trust region/proximal policy optimization attains globally optimal policy.",
429
+ "author": "Liu, B., Q. Cai, Z. Yang, and Z. Wang (2019).",
430
+ "venue": "Advances in Neural Information Processing Systems 32, 10565\u201310576.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "41": {
436
+ "title": "Estimating dynamic treatment regimes in mobile health using V-learning.",
437
+ "author": "Luckett, D. J., E. B. Laber, A. R. Kahkoska, D. M. Maahs, E. Mayer-Davis, and M. R. Kosorok (2020).",
438
+ "venue": "Journal of the American Statistical Association 115(530), 692\u2013706.",
439
+ "url": null
440
+ }
441
+ },
442
+ {
443
+ "42": {
444
+ "title": "Communication-efficient learning of deep networks from decentralized data.",
445
+ "author": "McMahan, B., E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas (2017).",
446
+ "venue": "In Artificial Intelligence and Statistics, pp. 1273\u20131282. PMLR.",
447
+ "url": null
448
+ }
449
+ },
450
+ {
451
+ "43": {
452
+ "title": "Predictive modeling of the hospital readmission risk from patients\u2019 claims data using machine learning: a case study on COPD.",
453
+ "author": "Min, X., B. Yu, and F. Wang (2019).",
454
+ "venue": "Scientific reports 9(1), 1\u201310.",
455
+ "url": null
456
+ }
457
+ },
458
+ {
459
+ "44": {
460
+ "title": "Optimal dynamic treatment regimes.",
461
+ "author": "Murphy, S. A. (2003).",
462
+ "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology) 65(2), 331\u2013355.",
463
+ "url": null
464
+ }
465
+ },
466
+ {
467
+ "45": {
468
+ "title": "Marginal mean models for dynamic regimes.",
469
+ "author": "Murphy, S. A., M. J. van der Laan, J. M. Robins, and C. P. P. R. Group (2001).",
470
+ "venue": "Journal of the American Statistical Association 96(456), 1410\u20131423.",
471
+ "url": null
472
+ }
473
+ },
474
+ {
475
+ "46": {
476
+ "title": "DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections.",
477
+ "author": "Nachum, O., Y. Chow, B. Dai, and L. Li (2019).",
478
+ "venue": "Advances in Neural Information Processing Systems 32.",
479
+ "url": null
480
+ }
481
+ },
482
+ {
483
+ "47": {
484
+ "title": "Federated reinforcement learning for fast personalization.",
485
+ "author": "Nadiger, C., A. Kumar, and S. Abdelhak (2019).",
486
+ "venue": "In 2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), pp. 123\u2013127. IEEE.",
487
+ "url": null
488
+ }
489
+ },
490
+ {
491
+ "48": {
492
+ "title": "Combining kernel and model based learning for HIV therapy selection.",
493
+ "author": "Parbhoo, S., J. Bogojeska, M. Zazzi, V. Roth, and F. Doshi-Velez (2017).",
494
+ "venue": "AMIA Summits on Translational Science Proceedings 2017, 239.",
495
+ "url": null
496
+ }
497
+ },
498
+ {
499
+ "49": {
500
+ "title": "Markov decision processes: discrete stochastic dynamic programming.",
501
+ "author": "Puterman, M. L. (2014).",
502
+ "venue": "John Wiley & Sons.",
503
+ "url": null
504
+ }
505
+ },
506
+ {
507
+ "50": {
508
+ "title": "Continuous state-space models for optimal sepsis treatment: a deep reinforcement learning approach.",
509
+ "author": "Raghu, A., M. Komorowski, L. A. Celi, P. Szolovits, and M. Ghassemi (2017).",
510
+ "venue": "In Machine Learning for Healthcare Conference, pp. 147\u2013163. PMLR.",
511
+ "url": null
512
+ }
513
+ },
514
+ {
515
+ "51": {
516
+ "title": "Surviving sepsis campaign: international guidelines for management of sepsis and septic shock: 2016.",
517
+ "author": "Rhodes, A., L. E. Evans, W. Alhazzani, M. M. Levy, M. Antonelli, R. Ferrer, A. Kumar, J. E. Sevransky, C. L. Sprung, M. E. Nunnally, et al. (2017).",
518
+ "venue": "Intensive care medicine 43(3), 304\u2013377.",
519
+ "url": null
520
+ }
521
+ },
522
+ {
523
+ "52": {
524
+ "title": "Optimal structural nested models for optimal sequential decisions.",
525
+ "author": "Robins, J. M. (2004).",
526
+ "venue": "In Proceedings of the second seattle Symposium in Biostatistics, pp. 189\u2013326. Springer.",
527
+ "url": null
528
+ }
529
+ },
530
+ {
531
+ "53": {
532
+ "title": "Declining intensive care unit mortality of COVID-19: a multi-center study.",
533
+ "author": "Roomi, S., S. O. Shah, W. Ullah, S. U. Abedin, K. Butler, K. Schiers, B. Kohl, E. Yoo, M. Vibbert, and J. Jallo (2021).",
534
+ "venue": "Journal of clinical medicine research 13(3), 184.",
535
+ "url": null
536
+ }
537
+ },
538
+ {
539
+ "54": {
540
+ "title": "FetchSGD: Communication-efficient federated learning with sketching.",
541
+ "author": "Rothchild, D., A. Panda, E. Ullah, N. Ivkin, I. Stoica, V. Braverman, J. Gonzalez, and R. Arora (2020).",
542
+ "venue": "In International Conference on Machine Learning, pp. 8253\u20138265. PMLR.",
543
+ "url": null
544
+ }
545
+ },
546
+ {
547
+ "55": {
548
+ "title": "The global burden of sepsis: barriers and potential solutions.",
549
+ "author": "Rudd, K. E., N. Kissoon, D. Limmathurotsakul, S. Bory, B. Mutahunga, C. W. Seymour, D. C. Angus, and T. E. West (2018).",
550
+ "venue": "Critical Care 22(1), 1\u201311.",
551
+ "url": null
552
+ }
553
+ },
554
+ {
555
+ "56": {
556
+ "title": "Lactate level versus lactate clearance for predicting mortality in patients with septic shock defined by Sepsis-3.",
557
+ "author": "Ryoo, S. M., J. Lee, Y.-S. Lee, J. H. Lee, K. S. Lim, J. W. Huh, S.-B. Hong, C.-M. Lim, Y. Koh, and W. Y. Kim (2018).",
558
+ "venue": "Critical care medicine 46(6), e489\u2013e495.",
559
+ "url": null
560
+ }
561
+ },
562
+ {
563
+ "57": {
564
+ "title": "Approximate modified policy iteration and its application to the game of tetris.",
565
+ "author": "Scherrer, B., M. Ghavamzadeh, V. Gabillon, B. Lesner, and M. Geist (2015).",
566
+ "venue": "Journal of Machine Learning Research 16, 1629\u20131676.",
567
+ "url": null
568
+ }
569
+ },
570
+ {
571
+ "58": {
572
+ "title": "L\u221e-multivariate approximation theory.",
573
+ "author": "Schultz, M. H. (1969).",
574
+ "venue": "SIAM Journal on Numerical Analysis 6(2), 161\u2013183.",
575
+ "url": null
576
+ }
577
+ },
578
+ {
579
+ "59": {
580
+ "title": "Expert-supervised reinforcement learning for offline policy learning and evaluation.",
581
+ "author": "Sonabend, A., J. Lu, L. A. Celi, T. Cai, and P. Szolovits (2020).",
582
+ "venue": "In Advances in Neural Information Processing Systems, Volume 33, pp. 18967\u201318977.",
583
+ "url": null
584
+ }
585
+ },
586
+ {
587
+ "60": {
588
+ "title": "Semi-supervised off-policy reinforcement learning and value estimation for dynamic treatment regimes.",
589
+ "author": "Sonabend-W, A., N. Laha, A. N. Ananthakrishnan, T. Cai, and R. Mukherjee (2023).",
590
+ "venue": "Journal of Machine Learning Research 24(323), 1\u201386.",
591
+ "url": null
592
+ }
593
+ },
594
+ {
595
+ "61": {
596
+ "title": "Reinforcement learning: An introduction.",
597
+ "author": "Sutton, R. S. and A. G. Barto (2018).",
598
+ "venue": "MIT press.",
599
+ "url": null
600
+ }
601
+ },
602
+ {
603
+ "62": {
604
+ "title": "Data-efficient off-policy policy evaluation for reinforcement learning.",
605
+ "author": "Thomas, P. and E. Brunskill (2016).",
606
+ "venue": "In International Conference on Machine Learning, pp. 2139\u20132148. PMLR.",
607
+ "url": null
608
+ }
609
+ },
610
+ {
611
+ "63": {
612
+ "title": "Distributed inference for linear support vector machine.",
613
+ "author": "Wang, X., Z. Yang, X. Chen, and W. Liu (2019).",
614
+ "venue": "Journal of Machine Learning Research 20.",
615
+ "url": null
616
+ }
617
+ },
618
+ {
619
+ "64": {
620
+ "title": "Bellman-consistent pessimism for offline reinforcement learning.",
621
+ "author": "Xie, T., C.-A. Cheng, N. Jiang, P. Mineiro, and A. Agarwal (2021).",
622
+ "venue": "Advances in neural information processing systems 34, 6683\u20136694.",
623
+ "url": null
624
+ }
625
+ },
626
+ {
627
+ "65": {
628
+ "title": "Batch value-function approximation with only realizability.",
629
+ "author": "Xie, T. and N. Jiang (2021).",
630
+ "venue": "In International Conference on Machine Learning, pp. 11404\u201311413. PMLR.",
631
+ "url": null
632
+ }
633
+ },
634
+ {
635
+ "66": {
636
+ "title": "FedKL: Tackling data heterogeneity in federated reinforcement learning by penalizing KL divergence.",
637
+ "author": "Xie, Z. and S. Song (2023).",
638
+ "venue": "IEEE Journal on Selected Areas in Communications 41(4), 1227\u20131242.",
639
+ "url": null
640
+ }
641
+ },
642
+ {
643
+ "67": {
644
+ "title": "FedMood: Federated learning on mobile health data for mood detection.",
645
+ "author": "Xu, X., H. Peng, L. Sun, M. Z. A. Bhuiyan, L. Liu, and L. He (2021).",
646
+ "venue": "arXiv preprint arXiv:2102.09342.",
647
+ "url": null
648
+ }
649
+ },
650
+ {
651
+ "68": {
652
+ "title": "Federated machine learning: Concept and applications.",
653
+ "author": "Yang, Q., Y. Liu, T. Chen, and Y. Tong (2019).",
654
+ "venue": "ACM Transactions on Intelligent Systems and Technology (TIST) 10(2), 1\u201319.",
655
+ "url": null
656
+ }
657
+ },
658
+ {
659
+ "69": {
660
+ "title": "Offline reinforcement learning with realizability and single-policy concentrability.",
661
+ "author": "Zhan, W., B. Huang, A. Huang, N. Jiang, and J. Lee (2022).",
662
+ "venue": "In Conference on Learning Theory, pp. 2730\u20132775. PMLR.",
663
+ "url": null
664
+ }
665
+ },
666
+ {
667
+ "70": {
668
+ "title": "Multi-agent reinforcement learning: A selective overview of theories and algorithms.",
669
+ "author": "Zhang, K., Z. Yang, and T. Ba\u015far (2019).",
670
+ "venue": "arXiv preprint arXiv:1911.10635.",
671
+ "url": null
672
+ }
673
+ },
674
+ {
675
+ "71": {
676
+ "title": "GenDICE: Generalized offline estimation of stationary values.",
677
+ "author": "Zhang, R., B. Dai, L. Li, and D. Schuurmans (2020).",
678
+ "venue": "arXiv preprint arXiv:2002.09072.",
679
+ "url": null
680
+ }
681
+ },
682
+ {
683
+ "72": {
684
+ "title": "Individualized fluid administration for critically ill patients with sepsis with an interpretable dynamic treatment regimen model.",
685
+ "author": "Zhang, Z., B. Zheng, and N. Liu (2020).",
686
+ "venue": "Scientific Reports 10(1), 1\u20139.",
687
+ "url": null
688
+ }
689
+ },
690
+ {
691
+ "73": {
692
+ "title": "Federated deep reinforcement learning.",
693
+ "author": "Zhuo, H. H., W. Feng, Y. Lin, Q. Xu, and Q. Yang (2019).",
694
+ "venue": "arXiv preprint arXiv:1901.08277.",
695
+ "url": null
696
+ }
697
+ }
698
+ ],
699
+ "url": "http://arxiv.org/html/2206.05581v3"
700
+ }
20240127/2208.07462v4.json ADDED
@@ -0,0 +1,132 @@
1
+ {
2
+ "title": "Speeding up random walk mixing by starting from a uniform vertex",
3
+ "abstract": "The theory of rapid mixing random walks plays a fundamental role in the study of modern randomised algorithms.\nUsually, the mixing time is measured with respect to the worst initial position.\nIt is well known that the presence of bottlenecks in a graph hampers mixing and, in particular, starting inside a small bottleneck significantly slows down the diffusion of the walk in the first steps of the process.\nThe average mixing time is defined to be the mixing time starting at a uniformly random vertex and hence is not sensitive to the slow diffusion caused by these bottlenecks.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction",
9
+ "text": "Random walks on graphs are one of the fundamental tools for sampling (see, e.g., [38 ###reference_b38###]).\nApplications are numerous in areas such as computer science, discrete mathematics and statistical physics.\nProminent examples include the polynomial-time algorithm to estimate the volume of a convex body [19 ###reference_b19###], computing the matrix permanent [28 ###reference_b28###] or the use of Glauber dynamics to sample from Gibbs distributions, in particular from proper colourings [42 ###reference_b42###].\nMost usually, the size of the sampling space is exponential in the input size, and fully exploring this space is computationally intractable.\nThe Markov chain Monte Carlo (MCMC) method consists of running a random walk in an appropriately chosen graph, whose vertex set is the sample space, until its distribution is arbitrarily close to equilibrium, regardless of the initial state.\nAt that time we say the walk has mixed, and the time until it does is called the (worst-case) mixing time.\nTo obtain efficient sampling algorithms it suffices to prove that the mixing time is poly-logarithmic in the input size.\nThe connection between rapid mixing and expanders is well-established.\nIn the context of random walks, expansion is measured by means of a graph parameter called conductance; see Section 2.2 ###reference_### for the precise definition.\nJerrum and Sinclair [28 ###reference_b28###] gave an upper bound on the mixing time depending on the conductance and the logarithm of the minimum stationary value.\nThis bound is central in the theory of Markov chains.\nRandom environments are particularly interesting sampling spaces and, in the last 20 years, researchers have developed the theory of random walks on random graphs.\nAs expected, the good expansion properties of random graphs ensure rapid mixing.\nBy the Jerrum-Sinclair bound, graphs with conductance bounded away from zero mix in logarithmically many steps and usually exhibit cut-off, 
that is, the distribution converges rapidly to the stationary distribution in a small window of time.\nGood examples are random graph models with control on the degrees, such as random regular graphs [34 ###reference_b34###], random graphs with given degree sequences [6 ###reference_b6###, 4 ###reference_b4###], their directed analogues [9 ###reference_b9###, 12 ###reference_b12###], or graphs perturbed by random perfect matchings [27 ###reference_b27###].\nNonetheless, the presence of small obstructions slows down the mixing.\nA canonical example is the giant component of a sparse Erd\u0151s-R\u00e9nyi graph with .\nThis component contains relatively small bottlenecks, that is, connected sets that only have few edges connecting them to the rest of the graph.\nIn such cases, tools like the Jerrum-Sinclair bound fail to pin down the correct order of the mixing time.\nFountoulakis and Reed [23 ###reference_b23###] introduced a strengthening of the bound that is sensitive to small bottlenecks and used it to show that the mixing time of the largest component in is asymptotically almost surely (a.a.s. 
for short) [24 ###reference_b24###].\nIndeed, this is the correct order as the component contains paths of degree vertices (also referred to as bare paths) whose length is of order .\nStarting at the centre of such paths, a random walk takes steps in expectation to escape from it.\nWe remark that the mixing time in the supercritical random graph was also bounded independently by Benjamini, Kozma and Wormald [5 ###reference_b5###], using a different approach investigating the anatomy of the giant component.\nHowever, these local bottlenecks are a negligible part of the giant component and the rest of the component has good expansion properties.\nThis suggests that, if the random walk started outside the bottlenecks, the mixing time would decrease.\nThis was implicit in the work of Benjamini, Kozma and Wormald [5 ###reference_b5###] and their description of the giant component, and such a speeding up of mixing time was also conjectured explicitly by Fountoulakis and Reed [24 ###reference_b24###].\nBerestycki, Lubetzky, Peres and Sly [6 ###reference_b6###] confirmed their prediction, showing that there exists such that the mixing time starting at a uniformly random vertex is asymptotically with high probability (they in fact proved much more, establishing the value of precisely as well as cut-off for the random walk).\nThis result reinforces the idea that, in certain heterogeneous scenarios, averaging over the starting position yields more efficient sampling algorithms.\nThe goal of this paper is to provide a general framework to show logarithmic average-case mixing time for random walks on graphs with small bottlenecks."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "1.1. Average mixing times",
15
+ "text": "Given an -vertex graph , the lazy random walk over is a Markov chain with state space which can be defined as follows.\nIf at any given time we are in a vertex , the lazy random walk stays in with probability , and with probability it moves to a uniformly random neighbour of in .\nIf is a connected graph, it is well known that the lazy random walk over is ergodic and its distribution converges to the (unique) stationary distribution (see, e.g., [33 ###reference_b33###] for a comprehensive review of random walks and mixing times).\nThe total variation distance between two probability distributions and on the vertex set of a graph is defined as\nLet be the transition matrix of the lazy random walk over .\nFor , the -mixing time of this lazy random walk is defined as\nwhere is the distribution supported entirely on .\nIf instead of considering the worst-case initial vertex we consider a uniformly random vertex , then the quantity is a random variable. We define the average -mixing time of the lazy random walk, to be the time at which the expectation of this random variable falls below the . That is,\nIn this work, we will focus on the quantity , which we believe is a natural candidate for tracking mixing times starting from a uniform vertex. 
Nonetheless, other related quantities have been used to measure the mixing time from a uniform starting point.\nIndeed, for a vertex , define\nand consider the random variable , where is a vertex chosen uniformly at random from .\nThis notion was the one studied by Berestycki, Lubetzky, Peres and Sly [6 ###reference_b6###].\nIt is natural to compare to : in the first case, we average the total variation distance over starting vertices and take the smallest time when this average is smaller than ; in the second one, we average the mixing times over the starting vertices (see Figure 1 ###reference_###).\nIn general as functions, neither of these notions is stronger than the other, in that one can design examples of trajectories for total variation distances for different vertices , showing that cannot be bounded by a function of and vice versa.\nHowever, bounding either or implies that is small with high probability.\nIn the first case this is a direct application of Markov\u2019s inequality. In the second one, define , for a vertex , then is the time at which the expected value of (averaged over starting points) is less than . By Markov\u2019s inequality, with probability at least .\n###figure_1### A related but very different notion is the time it takes to mix starting at , the uniform distribution over :\nA similar notion has been studied for directed graphs, where the initial distribution is the in-degree one; see, e.g., [9 ###reference_b9###, Theorem 3]. 
In general, this latter notion of average mixing time is much smaller than the previous notions and we expect this to also be the case in the settings studied here, although we do not explore this direction.\nIn the literature, the mixing time of the random walk is often defined as , since the distance to the stationary distribution is contractive after this time.\nHowever, this might not be the case for .\nConsider for instance the lollipop graph : a clique on vertices and a path on vertices joined by an edge incident to one of the endpoints of the path.\nIf and are both very large, then, after one step, the total variation distance is roughly if we start at the clique (almost all the mass of is supported on the clique), and roughly if we start at the path.\nTaking , then\nIf , then .\nHowever, the time required to further decrease the distance to the stationary distribution is of order , as this is the time required for the walk starting at a typical vertex in the path to hit the clique."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "1.2. Our results",
21
+ "text": "Our results will apply to graphs satisfying certain natural structural conditions, which we formalise in the following definition.\nLet be an -vertex graph.\nFor , we say that a set is -thin in if\nFor , we say that a set is -loaded in if\nWe say that is an -spreader graph if it satisfies the following three properties:\nFor all ,\nthe number of -connected -thin sets with is less than .\nFor all , the number of -connected -loaded sets with is less than .\nNo set with is -loaded in .\nNote that, for , one has that , and thus the conditions on an -vertex graph being an -spreader graph guarantee that there are no -connected vertex subsets of size between and that have too few edges leaving the set ###reference_i1### or too many edges contained inside the set ###reference_i2###.\nThese pseudo-random conditions on expansion and edge distribution arise naturally in the context of random graph models.\nIndeed, the density of a random graph within any vertex set and across any vertex partition is expected to be the same as the density of the whole graph, and concentration inequalities in conjunction with union bounds can be used to derive the non-existence of such bad connected vertex sets with high probability.\nMoreover, the conditions of Definition 1.3 ###reference_definition3### bound the number of bad vertex sets of size between and , with exponential decay as one can expect from concentration inequalities on binomial random variables.\nIn the context of the current work, the conditions on spreader graphs will guarantee that all bottlenecks are small and they are scarce in the graph.\nTo digest the notion of spreader graphs, one can think of as an arbitrarily small constant and as arbitrarily large.\nThe parameter controls conditions ###reference_i1### and ###reference_i2### in the sense that, as shrinks, these conditions become easier to satisfy and thus the definition of spreader graphs captures more graphs.\nSimilarly, the parameter controls 
###reference_i3### and imposes in particular that the spreader graphs are sparse with bounded average degree.\nIt should be noted that, due to appearing in ###reference_i1### and ###reference_i2### and appearing in ###reference_i3###, our definition is not actually monotone in these parameters.\nThis is a technical subtlety that is needed in our proof to guarantee a trade-off between the conditions.\nHowever, in all applications, the restraints given by in ###reference_i1### and ###reference_i2### and in ###reference_i3### are never critical, as we have very good control over the edge distribution in all linear sets.\nWe also remark that the constant could be replaced by any constant .\nIndeed, for sets smaller than , we impose no restriction.\nThe point is that, as we will focus on connected spreader graphs , even if a small set is an extreme bottleneck, the random walk will not get stuck there for too long before exploring the set enough to escape.\nIn our proof, these bottlenecks contribute a factor (due to a connected set of size having conductance at least and Theorem 2.1 ###reference_definition1### giving a quadratic dependence on conductance, see Section 2.2 ###reference_### for details), hence a choice of guarantees that this contribution is negligible. 
It may be possible to replace this constraint of by but not beyond this.\nFinally, we remark that the constants and could be replaced with functions that depend on and the definition of spreader graphs could be adjusted so that our main theorem would still give bounds on average mixing times.\nHowever, as our focus is on sparse graphs with constant average degree, we do not pursue this direction here.\nThe definition of -spreader graphs bears resemblance to that of -AN graphs (or -decorated expanders) introduced in [5 ###reference_b5###].\nAn -AN graph is defined in terms of the existence of an expander subgraph whose complement is formed by a small number of small components, similar to what can be deduced from ###reference_i1###- ###reference_i3###, and additionally requiring that not too many components of are connected to each .\nThe backbone of the main result in [5 ###reference_b5###] is to show that random walks on -AN graphs mix in steps.\nOur main theorem provides a tool to prove logarithmic average mixing time for -spreader graphs.\nFor all , and , there exists a such that the following holds for all sufficiently large.\nSuppose is an -vertex connected -spreader graph.\nThen,\nWe believe that in many cases, as in our two applications below, this theorem can be used to quickly derive optimal bounds for average mixing times in settings where worst-case mixing times are established via conductance bounds.\nThe proof of Theorem 1.5 ###reference_definition5### bears some similarities to the proof in [6 ###reference_b6###].\nBoth use the idea of contracting badly connected sets and coupling the random walks in the original and the contracted graphs.\nHowever, our proof is conceptually simpler as it does not use the anatomy of the giant component [16 ###reference_b16###], a powerful description of the largest component in the supercritical regime.\nInstead, we rely on the Fountoulakis-Reed bound for mixing [23 ###reference_b23###] and recent progress on 
hitting time lemmas [35 ###reference_b35###]."
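The two edge counts that the thin and loaded conditions of Definition 1.3 compare can be computed directly. Below is a minimal sketch assuming a plain adjacency-dict graph representation; the function name `edge_counts` is ours, and the definition's actual thresholds depend on parameters not reproduced here, so only the raw counts are shown.

```python
# Sketch: compute e(S), the edges inside a vertex set S, and
# e(S, V \ S), the edges leaving it, which the thin/loaded
# conditions compare against |S| (thresholds omitted).

def edge_counts(adj, S):
    """Return (e(S), e(S, V \\ S)) for a vertex set S."""
    S = set(S)
    internal = boundary = 0
    for u in S:
        for v in adj[u]:
            if v in S:
                internal += 1  # each internal edge seen from both ends
            else:
                boundary += 1
    return internal // 2, boundary

# Example: the 4-cycle 0-1-2-3-0.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
inside, leaving = edge_counts(cycle, {0, 1})  # one internal edge, two leaving
```

Intuitively, a set is thin when `leaving` is small relative to its size, and loaded when `inside` is large relative to it.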
+ },
+ {
+ "section_id": "1.3",
+ "parent_section_id": "1",
+ "section_name": "1.3. Application 1: Smoothed analysis on connected graphs",
+ "text": "The idea of studying the effect of random perturbations on a given structure arose naturally in several distinct settings.\nIn theoretical computer science, Spielman and Teng [40 ###reference_b40###] (see also [41 ###reference_b41###]) introduced the notion of smoothed analysis of algorithms.\nBy randomly perturbing an input to an algorithm, they could interpolate between a worst-case analysis and an average-case analysis, leading to a better understanding of the practical performance of algorithms on real-life instances.\nThis has been hugely influential, leading to the study of smoothed analysis in a host of different settings, including numerical analysis [39 ###reference_b39###, 43 ###reference_b43###], satisfiability [22 ###reference_b22###, 14 ###reference_b14###], data clustering [3 ###reference_b3###], multilinear algebra [7 ###reference_b7###] and machine learning [29 ###reference_b29###].\nAlmost simultaneously, in graph theory, Bohman, Frieze and Martin [8 ###reference_b8###] introduced the model of randomly perturbed graphs which, as with smoothed analysis, allows one to understand the interplay between an extremal and a probabilistic viewpoint.\nThe majority of work on the subject has focused on dense graphs [10 ###reference_b10###, 11 ###reference_b11###, 26 ###reference_b26###].\nIn the context of random walk mixing, it can be seen that small random perturbations cannot speed up the mixing time on dense graphs significantly.\nIndeed, the canonical examples leading to torpid mixing (e.g., two cliques connected by a long path) are robust with respect to that property.\nSmoothed analysis of sparse graphs was introduced by Krivelevich, Reichman and Samotij [31 ###reference_b31###].\nHere one starts with a connected graph of bounded degree (in fact, bounded degeneracy often suffices) and applies a small random perturbation by adding a copy of the binomial random graph for small .\nAlthough this perturbation is very slight, they showed that 
it greatly improves the expansion properties of the graph.\nA graph is said to be -degenerate if there is some ordering of the vertices of such that each vertex has at most neighbours in that precede it in the ordering.\nTo be precise, Krivelevich, Reichman and Samotij proved that, for any and , if is an -vertex -degenerate connected graph and , then a.a.s. satisfies that .\nBy considering, for example, a path on vertices, which has mixing time , we see a vast improvement after a slight random perturbation.\nWe also note that the result is tight on such examples, as the randomly perturbed path a.a.s. contains bare paths of length .\nOur first application of Theorem 1.5 ###reference_definition5### shows that we can improve the mixing time yet further in this model by starting from a uniformly chosen vertex, as in this case we avoid the small bottlenecks that remain if the initial graph had poor expansion.\nFor any and , there exists a such that the following holds.\nLet be an -vertex -degenerate connected graph, choose and let .\nThen, a.a.s.\nTheorem 1.6 ###reference_definition6### is tight, up to the constant factor , for all graphs with maximum degree .\nIndeed, this follows from the fact that for all vertices , where denotes the number of vertices that are at distance at most from in and is an upper bound on the average degree in .\nSuch an upper bound can be shown easily by induction, see for example [13 ###reference_b13###], and setting for sufficiently small shows that at least half of the vertices cannot be reached from in steps and hence .\nNonetheless, the converse of the inequality in Theorem 1.6 ###reference_definition6### is not true for all -degenerate graphs.\nConsider for instance a star: it is -degenerate, but the mixing time of the randomly perturbed star is as we mix in the step after visiting the centre of the star for the first time.\nSome time before the systematic study of random perturbations in the combinatorial and theoretical computer 
science communities discussed above, the notion appeared in physics literature with the study of so-called small-world networks.\nHere we will concentrate on a model introduced by Newman and Watts [37 ###reference_b37###, 36 ###reference_b36###] where, for some fixed , and large, one starts with -vertices of the graph ordered as , adds all edges for which (with addition modulo ), and then adds all remaining edges independently with probability .\nWe denote the resulting random graph as .\nIt is easy to see that this graph fits into the framework of Krivelevich, Reichman and Samotij [31 ###reference_b31###], and so their result implies that, for any and , the Newman-Watts small world network a.a.s. satisfies .\nIn fact, this was established before their work by Addario-Berry and Lei [1 ###reference_b1###], improving on a previous bound of due to Durrett [18 ###reference_b18###].\nHere, as a direct consequence of Theorem 1.6 ###reference_definition6###, we conclude that the average mixing time on the Newman-Watts small world network is of order .\nFor all and , there exists a such that the following holds.\nThe Newman-Watts small world network a.a.s. satisfies"
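The Newman-Watts construction just described (a ring lattice plus independent random edges) can be sketched directly. This is an illustrative implementation, with `n`, `k` and `p` standing in for the paper's ring size, neighbourhood radius and perturbation probability, whose symbols were lost in extraction.

```python
import random

# Sketch of the Newman-Watts small world: join each vertex to its k
# nearest neighbours on each side (indices mod n), then add every
# remaining pair independently with probability p.

def newman_watts(n, k, p, seed=None):
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for d in range(1, k + 1):
            edges.add(frozenset({i, (i + d) % n}))
    ring_edges = len(edges)  # equals n * k whenever n > 2k
    for i in range(n):
        for j in range(i + 1, n):
            e = frozenset({i, j})
            if e not in edges and rng.random() < p:
                edges.add(e)
    return edges, ring_edges

edges, ring_edges = newman_watts(20, 2, 0.1, seed=0)
```

With `p = 0` this reduces to the pure ring lattice, and with `p = 1` it fills in the complete graph, matching the two extremes of the model.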
+ },
+ {
+ "section_id": "1.4",
+ "parent_section_id": "1",
+ "section_name": "1.4. Application 2: Giant components in random subgraphs of expanders",
+ "text": "For and a graph , we define to be the graph with the same vertex set where each edge of is retained in independently with probability .\nThe graph is called the host graph, and the random subgraph , the -percolated one.\nPercolation on graphs is a well-established topic in probability theory.\nMost classically, if the host graph is the complete graph on vertices , then its -percolated subgraph is the Erd\u0151s-R\u00e9nyi graph .\nFor any graph , let denote a largest connected component in and let denote its order.\nIn their seminal paper [21 ###reference_b21###], Erd\u0151s and R\u00e9nyi proved a phase transition for .\nNamely, writing for some constant , if then a.a.s. , while if then a.a.s. and the largest component, which is the unique component of linear size, is known as the giant component.\nA central question in random graph theory is whether other host graphs exhibit the same phenomenon [2 ###reference_b2###].\nOne quickly observes that, in order for to have a sharp threshold for the component structure, the host graph should satisfy some additional properties.\nA natural property to consider is the pseudo-random notion of expansion.\nThere is a strong connection between expansion and the graph spectrum.\nGiven the eigenvalues of the adjacency matrix of a -regular graph , say , we let be the second largest eigenvalue.\nWe then define an -graph to be a -regular graph on vertices with .\nWhen is small compared to , an -graph is said to be an expander and it enjoys many of the same properties as a random graph with the same density.\nWe refer the reader to the excellent survey of Krivelevich and Sudakov [32 ###reference_b32###] on the subject.\nIn terms of percolation, Frieze, Krivelevich and Martin [25 ###reference_b25###] proved that, if is an -graph with , then undergoes a phase transition at .\nThey obtained the following description of the supercritical regime: for and , and provided that , a.a.s. 
for some .\nMoreover, as in , the largest component is a.a.s. the unique component of linear size.\nVery recently, Diskin and Krivelevich [17 ###reference_b17###] studied the mixing time of percolated -graphs.\nMore precisely, they showed that, in the supercritical regime, there exists such that a.a.s. .\nThis is indeed optimal for some graphs, in particular for Erd\u0151s-R\u00e9nyi random graphs [24 ###reference_b24###, 5 ###reference_b5###], as discussed above.\nThe next application of our main result shows that for percolated pseudo-random graphs the average mixing time is logarithmic.\nFor all sufficiently small and all ,\nthere exists a such that, if and is an -graph with , then a.a.s.\nAs in Remark 1.7 ###reference_definition7###, it can be proven that Theorem 1.9 ###reference_definition9### is tight up to a multiplicative constant for all -graphs.\nAs a consequence, for we obtain the following.\nFor all sufficiently small and all , there exists a such that, for , a.a.s.\nBy Remark 1.1 ###reference_definition1###, a.a.s., where is chosen uniformly at random from . This result aligns with [6 ###reference_b6###], although theirs is much stronger, showing cut-off for as previously mentioned."
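The percolation operation defined at the start of this section, where each edge of the host graph is retained independently with probability `p`, can be sketched in a few lines; the helper name `percolate` and the edge-list representation are our assumptions, not the paper's notation.

```python
import random

# Sketch of bond percolation: keep each edge of the host graph
# independently with probability p.

def percolate(edges, p, seed=None):
    rng = random.Random(seed)
    return [e for e in edges if rng.random() < p]

# Host graph K_5, the complete graph on five vertices; percolating the
# complete graph recovers the Erdos-Renyi model discussed above.
K5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
subgraph = percolate(K5, 0.5, seed=0)
```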
+ },
+ {
+ "section_id": "1.5",
+ "parent_section_id": "1",
+ "section_name": "1.5. Organisation",
+ "text": "The rest of this paper is organised as follows.\nIn Section 2 ###reference_### we introduce all the necessary notation, definitions and tools for our proofs.\nWe use these to prove Theorem 1.5 ###reference_definition5### in Section 3 ###reference_###.\nThis section is structured in subsections where we build different tools to be used in our main proof; in particular, we discuss the main ideas of the proof in Section 3.1 ###reference_###.\nSections 4 ###reference_### and 5 ###reference_### are devoted to proving Theorems 1.6 ###reference_definition6### and 1.9 ###reference_definition9###, respectively.\nFinally, we discuss some open problems in Section 6 ###reference_###."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Preliminaries",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Basic notation",
+ "text": "Let denote the set of non-negative integers.\nIf is a positive integer, we set .\nThroughout, we will consider both simple graphs and multigraphs.\nThe word graph will refer to simple graphs, that is, each pair of vertices forms at most one edge.\nOur multigraphs, which will be allowed to have parallel edges but no loops, will be clearly identified as such.\nAll our graphs are labelled, so whenever we discuss an -vertex (multi)graph , we implicitly assume that .\nGiven a (multi)graph and disjoint sets , we write for the (multi)set of edges of contained in , and for the (multi)set of edges with one endpoint in and the other in .\nWe set and .\nIf , we write , and similarly in all related notation.\nFor simplicity, we write .\nWe write .\nWe say that is -connected if is connected.\nFor each vertex , we write for its degree.\nWe denote and .\nIn many of our statements we will consider an -vertex graph satisfying a set of conditions or a conclusion, which are often asymptotic in nature.\nThis is in fact an abuse of notation.\nTo be precise, one must consider a sequence of graphs on an increasing number of vertices so that the graphs in the sequence satisfy the conditions.\nThis abuse of notation greatly simplifies the statements, so we will assume it throughout.\n(This also includes any asymptotic statements about random graphs.)\nFor any sequence of graphs with , we say that a graph property holds asymptotically almost surely (a.a.s.) if ."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Random walks",
+ "text": "Given an arbitrary connected multigraph , the lazy random walk over is a Markov chain on state space defined by the transition matrix given by\nThat is, the lazy random walk is a sequence of random variables with probability distributions , respectively, over , where is the starting distribution and, for each , the distribution of is obtained from the distribution of as .\nThe sequence of distributions thus depends only on and the starting distribution.\nIn the special case when there is a vertex such that , we will write to denote the resulting sequence of distributions.\nIf is connected, the lazy random walk over converges to a stationary distribution (that is, a distribution satisfying ), independently of the starting distribution .\nIt is well known (see, e.g., [33 ###reference_b33###]) that this stationary distribution satisfies\nfor all .\nGiven a set , we define .\nIt follows from (2.2 ###reference_###) that\nWe define and .\nRecall the definition of mixing times in the introduction.\nThe mixing time of a random walk on a connected (multi)graph is deeply tied with the concept of conductance.\nGiven a set , we define\nwhere the equality follows from (2.1 ###reference_###) and (2.2 ###reference_###).\nObserve that .\nFinally, we define the conductance of as\nFrom the definitions in (2.3 ###reference_###), (2.4 ###reference_###) and (2.5 ###reference_###) and the fact that for any set , it follows that\nOur approach to estimate the mixing time of the lazy random walk over a multigraph is based on ideas of Fountoulakis and Reed [23 ###reference_b23###, 24 ###reference_b24###].\nRoughly speaking, their main contribution is the fact that the mixing time of an abstract irreducible, reversible, aperiodic Markov chain (which we may represent using a weighted graph on its state space) can be bounded from above using the conductances of different -connected sets of states of various sizes.\nThe fact that we may restrict ourselves to -connected sets is 
crucial to obtain tighter bounds than would be obtained through other classical means.\nFor simplicity, here we only state a version of the result of Fountoulakis and Reed [23 ###reference_b23###] which is applicable to our setting.\nFor any , we let be the minimum conductance over all -connected sets such that (if no such set exists, we set ).\nLet be a connected multigraph.\nThere exists an absolute constant such that\nAnother parameter of interest is the hitting time to a vertex (or set of vertices) in the random walk on a multigraph .\nGiven any and the lazy random walk with starting distribution , we define the hitting time to as\nIn more generality, given any set , we define the hitting time to as\nGiven any vertex , let be the matrix obtained from the transition matrix by removing the row and column corresponding to .\nIf is primitive (i.e., all entries of are positive for some ), by Perron-Frobenius, the largest eigenvalue of , denoted by , is real, of multiplicity and satisfies .\nWe will make use of the first visit time lemma of Cooper and Frieze [15 ###reference_b15###].\nHere we state a more recent version with weaker hypotheses due to Manzo, Quattropani and Scoppola [35 ###reference_b35###].\nLet be an -vertex connected multigraph.\nSuppose that there exist a real number and a diverging sequence such that the following conditions hold:\nFast mixing: .\nSmall : .\nLarge : .\nThen, for all , we have\nand\nwhere\nis the expected number of indices for which the lazy random walk on starting at satisfies .\nFrom the intuitive point of view, the theorem says that the hitting time to is roughly distributed as a geometric random variable with success probability .\nIf one wants to hit by independently sampling vertices according to , then it would be a geometric random variable with success probability .\nThe factor is the price to pay for taking into account the geometry of the graph: the more likely it is to return from to , the less connected is to the rest 
of the graph, and the smaller the probability of hitting it at a given (large) time.\nIn the proof of Theorem 2.2 ###reference_definition2###, one can check that, if we only want (2.7 ###reference_###) to hold for a given , then ###reference_i2### can be replaced by\nSmall : .\nThe following holds as a corollary of Theorem 2.2 ###reference_definition2###.\nFor any fixed and -vertex connected multigraph satisfying ###reference_i1###, ###reference_i2### (or ###reference_i2###) and ###reference_i3### with the additional property that , if and is such that , then\nIndeed, for , Theorem 2.2 ###reference_definition2### implies that .\nThen, satisfies .\nMoreover, as is connected, we have that and so .\nTherefore,\nwhich can be made arbitrarily small by taking small with respect to .\nThis establishes (2.9 ###reference_###)."
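The lazy transition rule and the conductance of a vertex set described in this subsection can be illustrated concretely. The sketch below uses exact rational arithmetic; the conductance computed here is the standard cut-over-volume ratio, which may differ in normalisation from the paper's formula (the exact expression was lost in extraction).

```python
from fractions import Fraction

# Lazy random walk: stay put with probability 1/2, otherwise move to a
# uniformly random neighbour.

def lazy_step(adj, dist):
    """Apply one lazy-walk step to a distribution over the vertices."""
    out = {v: Fraction(0) for v in adj}
    for v, mass in dist.items():
        out[v] += mass / 2
        for w in adj[v]:
            out[w] += mass / (2 * len(adj[v]))
    return out

def conductance(adj, S):
    """Standard conductance e(S, V \\ S) / min(vol(S), vol(V \\ S))."""
    S = set(S)
    cut = sum(1 for u in S for w in adj[u] if w not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    return Fraction(cut, min(vol_S, vol_rest))

# 4-cycle: one lazy step from the point mass at vertex 0.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
dist = lazy_step(cycle, {0: Fraction(1)})
```

On this regular graph the stationary distribution is uniform, consistent with a stationary distribution proportional to vertex degrees.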
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. A general approach to average mixing times",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "3.1. Proof overview",
+ "text": "As discussed in the introduction, the main tool we will use to bound the mixing times of random walks is the result of Fountoulakis and Reed (Theorem 2.1 ###reference_definition1###) which relates the (worst-case) mixing time of a random walk in a graph to the conductance (see (2.5 ###reference_###)) of the -connected vertex subsets of .\nWe think of vertex subsets whose conductance is poor (those for which ) as bottlenecks: they have more edges internally in than leaving and so the random walk is likely to get held up in .\nThe spreader graphs (see Definition 1.3 ###reference_definition3###) we are interested in studying here have only few small bottlenecks.\nIndeed, any vertex subset which can lead to small bottlenecks must either be thin or loaded and our upper bounds on the number of these sets in a spreader graph readily imply that any vertex set with poor conductance is of at most polylogarithmic size (size to be precise, see Remark 3.3 ###reference_definition3###).\nNow, if a set with poor conductance is very small (size at most ), it will not slow down mixing significantly as our random walk will not get stuck for very long in these sets before leaving them.\nTherefore, it is the intermediate size sets which pose a problem, and we will first show in Section 3.2 ###reference_### (see Lemma 3.4 ###reference_definition4###) that the set of bad vertices contained in some intermediate set which has poor conductance contains a negligible proportion of the overall vertex set of our spreader graph.\nIntuitively, we can then see how starting at an average vertex in speeds up the mixing time.\nIndeed, we are very unlikely to start at a bad vertex in and, moreover, we are in fact very unlikely to visit a vertex in in the first time steps, by which time we aim to show that the distribution of the random walk is already well-mixed.\nIn order to formalise this intuition, we adjust our spreader graph by shrinking the intermediate sets with poor conductance and 
thus removing troublesome small bottlenecks.\nThe resulting (multi-)graph we will call .\nUsing that the number of bad vertices is negligible, or rather that the number of edges incident to , , is negligible (Lemma 3.4 ###reference_definition4###), we show in Section 3.3 ###reference_### that switching from to does not have a big effect on the edge distribution and that, in particular, the stationary distributions on and are comparable.\nWe then show in Section 3.4 ###reference_### that, after contracting intermediate sets with poor conductance, we can apply Theorem 2.1 ###reference_definition1### of Fountoulakis and Reed to conclude that the worst-case mixing time in is logarithmic.\nHere we will need that is defined carefully to preserve connectivity between sets after contractions (see (3.7 ###reference_###)).\nFinally, we will prove Theorem 1.5 ###reference_definition5### by coupling the random walk from an average vertex on with the random walk in .\nAs the random walk in from any starting point mixes rapidly, we can conclude that the random walk in also mixes rapidly as long as the two random walks stay coupled for long enough.\nFor this, our final ingredient is to show that the random walk in is unlikely to hit our bad vertices , which we do in Section 3.5 ###reference_### by appealing to the First Visit Time Lemma (Theorem 2.2 ###reference_definition2###) of Manzo, Quattropani and Scoppola."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "3.2. Badly connected sets",
+ "text": "We will make use of the following simple definition.\nFor and a connected multigraph , we say a set is -bad in if\nand that it is -good otherwise.\nThe following lemma gives us a basic property of bad sets in a connected multigraph and quickly ties them with our notion of -spreader graphs.\nNote that the notions of thin and loaded sets extend naturally to multigraphs.\nLet be a multigraph.\nFor , if a set is -bad in , then\n,\nand\neither is -thin in or it is -loaded in (or both).\nThe assertion ###reference_i1### follows easily since, if , then\na contradiction.\nFor the second assertion, suppose that is neither -thin nor -loaded in .\nThen, we have that\na contradiction.\nHere we used that from ###reference_i1### in the first inequality and the definitions of -thin and -loaded in the second.\n\u220e\nWe will also make use of the following simple observation.\nLet , and let be an -vertex graph satisfying ###reference_i1### and ###reference_i2###.\nThen, there are no -connected -thin or -loaded sets of size .\nGiven any and , we define and .\nGiven any -vertex graph , let\nFurther, we define to be the set of vertices that lie in sets in , that is,\nWe now give bounds on and .\nLet and .\nLet be an -vertex connected graph which satisfies ###reference_i1### and ###reference_i2###.\nIf is sufficiently large, then\nand\nIn particular,\nLet and .\nBy Lemma 3.2 ###reference_definition2### ###reference_i2###, all sets must be -thin or -loaded.\nBy ###reference_i1### and ###reference_i2###, for each , the number of -thin or -loaded sets of size in is less than .\nMoreover, as stated in Remark 3.3 ###reference_definition3###, there are no -thin or -loaded sets of size .\nThus,\nTherefore, by the definition of , the bound on the size of the largest , and assuming is sufficiently large, we conclude that\nWe now turn our attention to (3.4 ###reference_###).\nBy the definition of , we have that\nand using Lemma 3.2 ###reference_definition2### ###reference_i1### this 
simplifies to\nNow, for each , let be some -connected set such that and .\nNote that this is possible because is connected and every has size less than .\nBy ###reference_i2### and the bounds on , for each we have that\nTherefore, using (3.6 ###reference_###) and for sufficiently large,\nIn particular, since is connected (so ), it follows from (2.3 ###reference_###) that"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "3.3. Stationary distributions",
+ "text": "Let and be given, let be some -vertex graph, and consider the set .\nWe now define to be the multigraph obtained by contracting all the connected components of to single vertices.\nTo be more precise, let be a partition of into sets each of which induces a connected component in and let be a set of new vertices.\nThen, let and, for each , let\nIn particular, .\nFinally, we define the multiset\nObserve that is connected if and only if is connected, and that must be an independent set in .\nGiven some connected graph , we want to compare the behaviour of the lazy random walk on and its contracted form .\nIn particular, we wish to compare their stationary distributions.\nIn order to do this, we need to make them comparable by having them on the same state space.\nLet us describe this in full generality.\nLet and be two connected multigraphs (possibly with ).\nThen, we define an auxiliary multigraph as the union of and .\nGiven the stationary distributions and , we define two distributions and on , where, for each ,\nFor the sake of notation, for any vertex we set and, similarly, for any we set .\nWith this setup, we abuse notation slightly and write for .\nLet and .\nLet be an -vertex connected graph which satisfies ###reference_i1### and ###reference_i2###.\nIf is sufficiently large, we have that\nLet , and let .\nWe also fix , and recall the distributions defined in (3.8 ###reference_###).\nObserve that\nThe equality uses the fact that, for all , we have .\nIn the inequality, we simply note that by definition and that, by the definition of , this gives an upper bound for the second sum.\nMoreover, from (1.1 ###reference_###) and the triangle inequality we have\nThe second term in the sum can be evaluated as\nIntroducing this in (3.3 ###reference_2###) together with (3.9 ###reference_###) and using Lemma 3.4 ###reference_definition4### and that , we conclude that"
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "3.4. Mixing time after contractions",
+ "text": "The following result shows the mixing properties of contracted spreader graphs.\nFor all , and , there exists a such that the following holds for all sufficiently large.\nSuppose is an -vertex connected -spreader graph.\nThen,\nIn order to prove Proposition 3.6 ###reference_definition6###, we will rely on the following lemma.\nFor all and , the following holds for all sufficiently large.\nSuppose is an -vertex connected -spreader graph.\nThen, taking , for all -connected such that we have that .\nLet and .\nRecall our definition of from (3.7 ###reference_###).\nThe proof will make use of the following claim.\nAny -connected set with and such that is -good in .\nTake any such and let .\nNote that must be -connected.\nWe claim that the bounds on imply that it must be -good in .\nIndeed, if was -bad in , it would be a -connected subset of (recall (3.1 ###reference_###) and (3.2 ###reference_###)) and so would be mapped by to a single vertex.\nAs is surjective and , this is clearly not possible.\nIt also follows from the definition of that and .\nTherefore,\nand so is -good in .\n\u220e\nWe will also need the fact that none of the sets from the statement are too large.\nLet be a -connected set such that .\nThen, .\nAssume for a contradiction that there is such a set with .\nLetting , we have that and , implying that\nwhere we used here that is -connected and the fact that .\nLet be some set of size such that , noting that this is possible since, by Lemma 3.4 ###reference_definition4###, we have .\nNow, by ###reference_i3###, we have that , a contradiction, where we again appealed to the facts that and .\n\u220e\nNow suppose that there exists some -connected set with and .\nBy the bound on the conductance, it follows from (2.6 ###reference_###) that is -bad in and so, by Lemma 3.2 ###reference_definition2### ###reference_i1###, we have .\nThis implies that\nwhere in the last inequality we used the fact that from Lemma 3.4 ###reference_definition4### and the 
fact that as is connected.\nObserve that, since and is connected, no set with can be -bad, so we must have .\nThen, Claim 1 ###reference_im1### implies that has size or .\nIf , then (again by Lemma 3.4 ###reference_definition4###), and we know this cannot happen by Claim 2 ###reference_im2###, so we must have .\nAs, trivially, any vertex set has , we have that .\nThis contradicts (3.4 ###reference_1###).\n\u220e\nWith this, we can prove Proposition 3.6 ###reference_definition6###.\nLet .\nObserve that ###reference_i3### implies .\nBy Lemma 3.7 ###reference_definition7###, for any -connected set with we have .\nConsider now any -connected set with (in particular, ).\nThe fact that is connected together with (2.4 ###reference_###) and (2.5 ###reference_###) ensures that\nLet denote the set of indices such that , and note that .\nIt then follows that\nwhere in the last inequality we use the definition of .\nBy Theorem 2.1 ###reference_definition1### we have\nwhere is some absolute constant.\nSince the total variation distance decreases exponentially fast after the mixing time (see, e.g., [33 ###reference_b33###, section 4.5]), we get\nand the proposition holds by taking appropriately.\n\u220e"
+ },
+ {
+ "section_id": "3.5",
+ "parent_section_id": "3",
+ "section_name": "3.5. Hitting time of bad vertices",
+ "text": "Let and and consider a connected -spreader graph .\nWe now wish to study the hitting time to the set of bad vertices in and show that a.a.s. it is not too small.\nLet and .\nLet be an -vertex connected -spreader graph, and let .\nThen,\nIn this proof we will use a new auxiliary multigraph .\nLet us introduce it here.\nConsider the multigraph , and let .\nRecall that the definition of implies that it is an independent set in .\nNow, is obtained from by contracting to a single new vertex .\nThe fact that is an independent set in guarantees that , and since vertices outside do not see their degree changed by this operation, it follows that for all and .\nLet and denote lazy random walks on and starting on some vertex .\nLet us denote and .\nDefine the natural coupling as follows: for any , while , let ; if there is a such that , then for the smallest such we let ; otherwise (that is, for all ), we let and evolve independently.\nObserve that this is indeed a valid coupling since for all we have .\nWith this natural coupling, conditional on , we have and so ; in particular, for any we have\nWe will study the hitting times using Theorem 2.2 ###reference_definition2### on with and .\nWe start with the following claim (which we make no efforts to optimise).\nWe have that .\nFirst we prove that, for any -connected set such that , we have that .\nIndeed, fix such an and define as\nFurther, let for some be a decomposition of into -connected components (note that if ).\nNow, as and , we have that and hence for all .\nMoreover, as is an independent set in , we have that and .\nReturning to analyse the conductance of , from (2.6 ###reference_###), we have that\nletting be a minimising index.\nIf , then we are done due to the fact that as is connected.\nIf , then , using that from ###reference_i3###.\nTherefore, using (2.6 ###reference_###) and (3.13 ###reference_###), we have that\nwhere we used Lemma 3.7 ###reference_definition7### in last inequality and the fact 
that in the penultimate inequality.\nSo we have established that for all -connected sets with .\nNow notice that due to ###reference_i3###, and hence, when applying Theorem 2.1 ###reference_definition1###, there are logarithmically many terms in the sum.\nThis establishes the desired upper bound on .\n\u220e\nUsing (1.1 ###reference_###) and Claim 3 ###reference_im3### and as the total variation distance decreases exponentially fast after the mixing time (see, e.g., [33 ###reference_b33###, section 4.5]), we have\nand ###reference_i1### is satisfied.\nLet us now prove that is small.\nBy Remark 2.3 ###reference_definition3###, to prove our statement it suffices to have ###reference_i2### for .\nBy Lemma 3.4 ###reference_definition4### (3.5 ###reference_###), .\nMoreover, as mentioned earlier, .\nBy Lemma 3.5 ###reference_definition5###, we have that\nIt follows that and ###reference_i2### holds.\nFinally, since is a connected -spreader graph with fixed and , is a connected multigraph with , by ###reference_i3###.\nIt follows by (2.2 ###reference_###) that and ###reference_i3### is satisfied.\nNow, recalling the relevant definitions from Theorem 2.2 ###reference_definition2### and letting , as goes to infinity we have\nwhere we used , (3.14 ###reference_###) and .\nTherefore, appealing to Remark 2.4 ###reference_definition4###, we have that\nTogether with (3.12 ###reference_###), this completes the proof of the lemma.\n\u220e"
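The hitting time studied in this subsection can also be estimated by direct simulation. Below is a Monte Carlo sketch under our own assumptions (the 6-cycle example, the adjacency-dict format and the helper name are illustrative); it counts lazy-walk steps until the walk first enters a target set.

```python
import random

# Monte Carlo sketch of the hitting time: number of lazy-walk steps
# before first entering the target set.

def hitting_time(adj, start, target, rng):
    """Steps of a lazy walk from start until it first enters target."""
    v, t = start, 0
    while v not in target:
        t += 1
        if rng.random() >= 0.5:  # with probability 1/2 move, else stay
            v = rng.choice(sorted(adj[v]))
    return t

rng = random.Random(0)
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
samples = [hitting_time(cycle, 0, {3}, rng) for _ in range(200)]
```

Averaging such samples gives the empirical analogue of the roughly geometric behaviour described after Theorem 2.2.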
+ },
+ {
+ "section_id": "3.6",
+ "parent_section_id": "3",
+ "section_name": "3.6. Proof of the main theorem",
+ "text": "We are finally ready to prove Theorem 1.5 ###reference_definition5###.\nLet and .\nLet and denote lazy random walks on and , respectively.\nFor , let .\nSimilarly as in the proof of Lemma 3.8 ###reference_definition8###, we consider a natural coupling of the random walks so that for we let and otherwise we let the walks evolve independently.\nFirst, observe that, by Lemma 3.5 ###reference_definition5### and the triangle inequality (and adopting the abuse of notation introduced in Section 3.3 ###reference_###), for any we have\nFor every and , we can write\nwhere we let if .\nLet .\nIt follows that\nLet be the constant given by Proposition 3.6 ###reference_definition6### with playing the role of , and let .\nBy Proposition 3.6 ###reference_definition6###, we have that .\nCombining these bounds with (3.15 ###reference_###) and (3.6 ###reference_0###), we obtain for all that\nBy Lemma 3.4 ###reference_definition4### (3.3 ###reference_###) and Lemma 3.8 ###reference_definition8###, we conclude that\nand , concluding the proof.\n\u220e"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "4. Smoothed analysis on connected graphs",
+ "text": "We next want to show applications of Theorem 1.5 ###reference_definition5###.\nWe use this section to prove Theorem 1.6 ###reference_definition6###.\nLet and be a sufficiently small constant.\nIf is a connected -spreader graph, the statement follows from Theorem 1.5 ###reference_definition5###.\nSince is connected by assumption, it suffices to show that a.a.s. is an -spreader graph.\nWe need to verify that ###reference_i1###, ###reference_i2### and ###reference_i3### hold a.a.s. in .\nThe fact that ###reference_i2### and ###reference_i3### hold a.a.s. follows directly from the fact that is -degenerate and that for all (see, for example, [31 ###reference_b31###, Lemma 8]).\nThe fact that property ###reference_i1### holds a.a.s. in follows from the proof of [31 ###reference_b31###, Theorem 3].\nIndeed, for each , let denote the number of -connected -thin sets with .\nThen (see [31 ###reference_b31###, eq. (4)] and the claim immediately after), one can check that the expected number of such sets satisfies\nwhere is some absolute constant.\nFollowing again the proof in [31 ###reference_b31###] and choosing sufficiently small and sufficiently large, one concludes that\nProperty ###reference_i1### then follows by Markov\u2019s inequality and a union bound over all .\n\u220e"
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "5. Random subgraphs of expanders",
+ "text": "In order to prove Theorem 1.9 ###reference_definition9###, we will rely on several known properties of the giant component of a random subgraph of an -graph.\nRecall from Section 2.1 ###reference_### that when we refer to asymptotic statements holding in an -graph, implicitly what is meant is that the statement holds for any sequence of -graphs that satisfy the stated condition.\nLet be a sufficiently small constant and let be an -graph with .\nLet and let be a largest component in .\nThen, a.a.s. the following properties hold:\nhas vertices, where as tends to .\nThere exists some absolute constant such that, for any -connected with , we have that\nThere exists some absolute constant such that, for any with , we have that\nStatement (a) ###reference_i1### follows from [25 ###reference_b25###, Theorem 1]; see also the discussion following [17 ###reference_b17###, Theorem 1.1].\nStatement (b) ###reference_i2### is a consequence of [17 ###reference_b17###, Theorem 1 (1)], while (c) ###reference_i3### is given in [17 ###reference_b17###, Theorem 2].\n\u220e\nWith this, we can prove Theorem 1.9 ###reference_definition9###.\nLet .\nFix , (where and are the absolute constants from Lemma 5.1 ###reference_definition1###(b) ###reference_i2### and (c) ###reference_i3###) and .\nBy Theorem 1.5 ###reference_definition5###, it suffices to show that a.a.s. is an -spreader graph.\nThat is, we need to show that (for sufficiently small ) a.a.s. satisfies properties ###reference_i1###\u2013 ###reference_i3###.\nWe are first going to show that ###reference_i1### and ###reference_i2### hold a.a.s. 
in , rather than , for sets of size at least .\nSimilarly as happened in the proof of Theorem 1.6 ###reference_definition6###, in this case properties ###reference_i1### (for ) and ###reference_i2### can be obtained by following the proofs of [17 ###reference_b17###, Theorem 1 (1)] and [17 ###reference_b17###, Lemma 2.4], respectively.\nLet us give here a brief sketch.\nLet us first consider ###reference_i1### (for ).\nFor each , let denote the number of -connected sets with which are -thin.\nThen, following [17 ###reference_b17###, Theorem 1 (1)], we have .\nBy Markov\u2019s inequality and a union bound over all , we conclude that a.a.s.\nfor all , the number of -connected -thin sets with is less than .\nSimilarly, for each , one can check from the proof of [17 ###reference_b17###, Lemma 2.4] that, if we let denote the number of -connected sets with such that , then .\nAgain by Markov\u2019s inequality and a union bound over all , we conclude that a.a.s.\nfor all , the number of -connected -loaded sets with is less than .\nWe next work towards property ###reference_i3###.\nWe are going to show that no set with is -loaded in .\nFix some and let .\nIf is sufficiently small, we have that .\nNow, by the expander mixing lemma (see, for example, [17 ###reference_b17###, Lemma 2.1]), for any set with we have that\nusing that .\nHence, the probability that contains edges in is at most\nTaking a union bound over all possible sets of size , we have that the probability of there existing a -loaded set of size in is at most\nBy a union bound over all we conclude that a.a.s.\nno set with is -loaded in .\nCondition on the event that Lemma 5.1 ###reference_definition1###(a) ###reference_i1### and (c) ###reference_i3### as well as (d) ###reference_i1###, (e) ###reference_i1### and (f) ###reference_i1### hold in , which a.a.s. 
occurs.\nLet by (a) ###reference_i1###.\nIt follows that and so, by (d) ###reference_i1###, ###reference_i1### holds in for sets of size at most ; similarly, ###reference_i2### holds by (e) ###reference_i1###, and ###reference_i3### holds by (f) ###reference_i1###.\nThus, it only remains to establish ###reference_i1### for sets such that .\nFor , this is immediate from (c) ###reference_i3###, and we can also use (c) ###reference_i3### for larger sets .\nIndeed, suppose and let .\nIt follows from (a) ###reference_i1###, by taking a sufficiently small , that\nwhere in the last two inequalities we use that tends to as tends to due to the fact that .\nWe also have that , using also here that is sufficiently small.\nHence, we can apply (c) ###reference_i3### to and we obtain that"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "6. Open problems",
+ "text": "Theorem 1.5 ###reference_definition5### is only effective on graphs where the mixing is slowed down by few small bottlenecks.\nThis is the case in the two applications presented.\nNevertheless, there are other cases where both small and large bottlenecks exist.\nIt would be interesting to study average-case mixing times in such scenarios and determine what improvement over the worst case can be attained.\nOne such example is the small-world model of Kleinberg [30 ###reference_b30###], whose mixing time has been studied in [20 ###reference_b20###].\nIn recent years, the theory of random walks in random directed graphs has attracted a considerable amount of attention.\nAs in the case of random regular graphs, under mild conditions on the bidegree sequence, the mixing time is logarithmic [9 ###reference_b9###, 12 ###reference_b12###].\nFrom the point of view of smoothed analysis, a natural question is whether randomly perturbing a deterministic strongly connected digraph can yield logarithmic mixing time.\nConductance-based bounds such as Jerrum-Sinclair and Fountoulakis-Reed are not valid in the non-reversible setting, which requires new ideas.\nFinally, we mention that an analogue of the result on mixing times in randomly perturbed connected graphs by Krivelevich, Reichman and Samotij [31 ###reference_b31###] has been obtained by Hermon, Sly and Sousi [27 ###reference_b27###] for graphs perturbed by a random perfect matching.\nConsidering such a model in the directed setting would also be interesting.\nAcknowledgements. The authors would like to thank Matteo Quattropani for fruitful discussions on the First Visit Time Lemma (FVTL). They would also like to thank the anonymous referees for their insightful comments, in particular for spotting a misuse of the FVTL and for pointing out the non-contractivity of the average mixing time (see Remark 1.2 ###reference_definition2###)."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1": {
+ "figure_path": "2208.07462v4_figure_1.png",
+ "caption": "Figure 1. Schematic plot of the total variation distance starting at different vertices and the two average mixing times for \\epsilon=0.05.\nIn red, the function \\frac{1}{n}\\sum_{u\\in V(G)}d_{\\mathrm{TV}}(\\mu_{0}^{u}P_{G}^{t},\\pi_{G}) and the dot representing \\bar{t}_{\\mathrm{mix}}(G,\\epsilon).\nIn blue, the average of mixing times at different thresholds and the dot representing \\mathbb{E}(t_{\\mathrm{mix}}^{(U_{n})}(G,\\epsilon)).",
+ "url": "http://arxiv.org/html/2208.07462v4/extracted/5372373/Figure_1.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2208.07462v4"
+ }
20240127/2209.07661v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2210.03888v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2210.13008v3.json ADDED
@@ -0,0 +1,266 @@
+ {
+ "title": "Consistent inference for diffusions from low frequency measurements",
+ "abstract": "Let be a reflected diffusion process in a bounded convex domain in , solving the stochastic differential equation",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Diffusion describes a random process for the evolution over time of phenomena such as heat flow, electric conductance, chemical reactions, or molecular dynamics, to name just a few examples. The density of a diffusing substance in an insulated medium, say a bounded convex subset of , , is described by the solutions to the parabolic partial differential equation (PDE) known as the heat equation, , with a divergence form elliptic second order differential operator\nand equipped with Neumann boundary conditions. Here is a positive scalar \u2018diffusivity\u2019 function and is a \u2018force\u2019 potential inducing a Gibbs measure with (Lebesgue-) probability density . If is a -dimensional Brownian motion then the corresponding \u2018microscopic\u2019 statistical model for a diffusing particle is provided by solutions to the stochastic differential equation (SDE)\nstarted at . The process is reflected when hitting the boundary of its state space: is a \u2018local time\u2019 process acting only when and is the (inward) pointing normal vector at . When are Lipschitz maps on , a continuous time Markov process giving a unique pathwise solution to (1 ###reference_###) exists [69 ###reference_b69###].\n###figure_1### ###figure_2### Real world observations of diffusion are necessarily discrete and often subject to a lower bound on the time that elapses between consecutive measurements. We denote this \u2018observation distance\u2019 by and assume for simplicity that it is the same at each measurement. The data is for some , that is, we are tracking the trajectory of a given particle along discrete points in time, see Fig. 1 ###reference_###. In practice one may be observing several independent particles which essentially corresponds to (linearly) augmenting sample size \u2013 we consider the one-particle model without loss of generality. 
We investigate the possibility to infer and the transition operator of the Markov process both at and at \u2018unobserved\u2019 times by a statistical algorithm, that is, by a computable function of . We are interested in the scenario where is fixed (but known) as sample size . This is often the most appropriate observational model: for instance the speed at which particles or molecules transverse the medium may be much faster than the frequency at which images can be taken. Following [29 ###reference_b29###] we refer to this as the \u2018low measurement frequency\u2019 scenario. See [33 ###reference_b33###, 34 ###reference_b34###] or also Ch. 4 in [49 ###reference_b49###] for such situations in the biological sciences, or [48 ###reference_b48###, 62 ###reference_b62###, 44 ###reference_b44###] in the context of data assimilation problems.\nThe invariant \u2018equilibrium\u2019 distribution of the Markov process (1 ###reference_###) is well known ([5 ###reference_b5###], Sec.1.11.3) to equal and hence identifies the potential via its probability density . The infinite-dimensional parameter (and thus ) can then be estimated from a discrete sample by standard linear density estimators that smooth the empirical measure of the \u2019s near any point (cf. [29 ###reference_b29###] or also, with continuous data, [19 ###reference_b19###, 66 ###reference_b66###, 28 ###reference_b28###]). Using exponentially fast mixing of ergodic averages of the Markov process towards their -expectations (e.g., via [60 ###reference_b60###] combined with Thm 4.9.3 and Sec.1.11.3 in [5 ###reference_b5###], or also with [15 ###reference_b15###]) one can then obtain excellent statistical guarantees for in relevant norms (e.g., as after (30) in [56 ###reference_b56###]). But the invariant measure contains no information about the diffusivity in eq. 
(1 ###reference_###), and in a \u2018low frequency\u2019 measurement scheme, standard statistics of the data such as the quadratic variation (\u2018mean square displacements\u2019) of the process provide no valid inference on either (not even along the observed path). We conclude not only that recovering is a much harder problem than estimation of , but also that the problems essentially decouple and can be treated separately. Therefore, to simplify the exposition of our main contributions we henceforth assume that in (1 ###reference_###) and consider the model\nstarted uniformly at random . We denote by the resulting probability law of (in path space). Our statistical results could be generalised to the case of unknown in (1 ###reference_###) as we discuss in Remark 4 ###reference_ark4### below.\nThe problem to determine diffusivity parameters from data has a long history in mathematical inverse problems \u2013 we mention here [13 ###reference_b13###, 41 ###reference_b41###, 68 ###reference_b68###, 52 ###reference_b52###, 73 ###reference_b73###, 1 ###reference_b1###] in the context of the Calder\u00f3n problem as well as [63 ###reference_b63###, 22 ###reference_b22###, 67 ###reference_b67###, 38 ###reference_b38###, 10 ###reference_b10###, 27 ###reference_b27###, 54 ###reference_b54###] in the context of Darcy\u2019s flow problem, and the many references therein. All these settings consider a simplified observational model where one is given a \u2018steady state\u2019 measurement of diffusion, returning the (typically \u2018noisy\u2019) solution of a time-independent elliptic PDE. 
The potential inferential barrier arising with low frequency measurements disappears in the reduction from a time evolution equation to the elliptic PDE and hence does not inform the statistical setting investigated here.\nAs the invariant measure is identical for all , the information contained in low frequency discrete data from (2 ###reference_###) is encoded in the transition operator of the underlying Markov process . Little is known about how to conduct statistically valid inference in this setting, with notable exceptions being the one-dimensional case studied in [29 ###reference_b29###, 56 ###reference_b56###]. We also mention the consistency results [75 ###reference_b75###, 32 ###reference_b32###] as well as [46 ###reference_b46###] for Markovian transition operators, but these do not concern the conductivities themselves. A first question is whether the task of identifying from for fixed observation distance is even well-posed, that is, whether the (non-linear) map is injective. The answer to this question is positive at least if is prescribed near . Denote by the Hilbert space of square Lebesgue integrable functions on .\nSuppose positive diffusion coefficients are bounded away from zero on and such that near . Then if coincide as bounded linear operators on for some , we must have on .\nSee Theorem 5 ###reference_orem5### for details. That should be known near can be explained by the fact that the reflection (which is independent of ) dominates the local dynamics near .\nStatistical algorithms are often motivated by \u2018population version\u2019 identification equations for unknown parameters, as in the one-dimensional case considered in [29 ###reference_b29###, 56 ###reference_b56###], who use ordinary differential equation (ODE) techniques to derive identities for in terms of the first eigenfunction of the transition operator . This approach appears of limited use in the present multi-dimensional context . 
Instead we shall maintain as our statistical model for natural choices of parameter spaces of sufficiently smooth, positive, functions. This makes available the algorithmic toolbox of Bayesian statistics in infinite-dimensional parameter spaces which does not require any identification equations or inversion formulae. Instead one employs a Gaussian process prior for the function-valued parameter , see [76 ###reference_b76###, 67 ###reference_b67###, 24 ###reference_b24###, 54 ###reference_b54###], and updates according to Bayes\u2019 rule: if are the transition densities of (fundamental solutions), the posterior distribution is\nAs the \u2018forward map\u2019 can be evaluated by numerical PDE techniques for parabolic equations, one can leverage ideas from [16 ###reference_b16###] (see also [30 ###reference_b30###, 18 ###reference_b18###, 8 ###reference_b8###]) to propose computationally feasible MCMC methodology that draws approximate samples from , and the resulting ergodic averages approximate the posterior mean vector, which in turn gives an estimated output for . See Section 2.3 ###reference_###, specifically Remark 3 ###reference_ark3###, for details.\nRecent progress in Bayesian theory for non-linear inverse problems [53 ###reference_b53###, 50 ###reference_b50###, 59 ###reference_b59###, 57 ###reference_b57###], [54 ###reference_b54###] has clarified that such Bayesian methods can solve non-linear problems without \u2018inversion formulae\u2019 as long as appropriate stability estimates for the forward map, here , are available. Following this strategy we prove here a first statistical consistency result in multi-dimensional diffusion models with such \u2018low frequency\u2019 measurements.\nLet and consider data generated from the diffusion (2 ###reference_###) in a bounded smooth convex domain . Assume the ground truth is sufficiently regular in a Sobolev sense and equals near . 
Assign an appropriate Gaussian process prior to , form , and consider the random field\narising from the posterior mean function. Then the posterior inference for the transition operators as well as for is consistent, that is, as and in -probability,\nwhere denotes the operator norm on .\nSee Theorems 9 ###reference_orem9### and 10 ###reference_orem10### for details. Next to the stability estimates underlying Theorem 1 ###reference_orem1###, a main ingredient of our proofs is an estimate (Theorem 11 ###reference_orem11###) on the \u2018information\u2019 (Kullback-Leibler) distance of the underlying statistical experiment in terms of a negative Sobolev norm on . This result is of independent interest and also sharp (in view of Theorems 3 ###reference_orem3###, 4 ###reference_orem4###).\nOur proofs provide a rate of convergence in the last limits, and the rate obtained for cannot be improved (as we show) at the \u2018observed time\u2019 , corresponding to \u2018prediction risk\u2019. For the parameters and , our inversion rates are potentially slow (i.e., only inverse logarithmic in ). The question of optimal recovery in these non-linear inverse problems is delicate as they (implicitly or explicitly) involve solving a \u2018backward heat equation\u2019 from knowledge of alone. We shed some light on the issue and exhibit infinite-dimensional parameter spaces of \u2019s where faster than logarithmic rates (algebraic in ) can be obtained. These are based on certain spectral \u2018symmetry\u2019 hypotheses on the domain and on the diffusion process. For these hypotheses are always satisfied and our theory thus recovers the one-dimensional results from [29 ###reference_b29###, 56 ###reference_b56###] as a special case (but with novel proofs based on PDE theory). 
In multi-dimensions and for in a -neighbourhood of the constant function, we show that the required symmetries of can be related to the \u2018hot spots conjecture\u2019 from spectral geometry [4 ###reference_b4###, 12 ###reference_b12###, 36 ###reference_b36###, 3 ###reference_b3###, 64 ###reference_b64###, 37 ###reference_b37###], providing further incentives for the study of this topic. The topic of \u2018fast\u2019 rates beyond that conjecture will be investigated in future research.\nIn principle, the Bayesian approach can be expected to give valid inferences for any measurement regime and hence should work irrespectively of whether or not. In fact, a \u2018high frequency\u2019 regime is explicitly investigated in the recent contribution [35 ###reference_b35###] who show posterior consistency if sufficiently fast compared to (but still such that the observation horizon ). We also refer to Sec. 3.3 in [28 ###reference_b28###] for a discussion of the hypothetical case when the entire trajectory of is observed. More generally, the recent contributions [65 ###reference_b65###, 55 ###reference_b55###, 28 ###reference_b28###, 2 ###reference_b2###] to non-parametric inference for multi-dimensional diffusions (Bayesian or not) contain many further references."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Main results",
+ "text": "We are given discrete observations of the solution of the SDE (2 ###reference_###) where , that is, the diffusion is started in its (constant) invariant distribution. If for some fixed , then our proofs work as well in view of the exponentially fast mixing (36 ###reference_###) of the process towards the uniform law , by just discarding the \u2018burn-in phase\u2019, that is, by letting the process evolve for a while before one starts to record measurements. We emphasise again that the time interval between consecutive observations remains fixed in the asymptotics.\nThe domain supporting our diffusion process is a bounded convex open subset of , and to avoid technicalities we assume that the boundary of is smooth, ensuring in particular the existence of all \u2018reflecting\u2019 normal vectors at . Throughout will denote the Hilbert space of square integrable functions for Lebesgue measure on . We also assume (solely for notational convenience) that the volume of is normalised to one, .\nThe physical model underlying (2 ###reference_###) describes the intensity of diffusion in an insulated medium by the equation for flux (e.g., p.361f. in [70 ###reference_b70###], and after (31 ###reference_###) below). For smooth test functions , let the elliptic operator be given by the action\nwhere denote the gradient, divergence and Laplace operator, respectively. Then solves the heat equation for with Neumann boundary conditions on . Its fundamental solutions describe the probabilities for the position of a diffusing particle to lie in a region at time when it was at at time . More generally the transition operator describes a self-adjoint action on ,\nThe process from (2 ###reference_###) is the unique Markov random process with these transition probabilities, infinitesimal generator , and equilibrium (invariant) probability density on . 
The generator with Neumann boundary condition is characterised by an infinite sequence of (orthonormal) eigen-pairs where is the constant eigenfunction corresponding to . By ellipticity the first eigenvalue satisfies the spectral gap estimate (see (25 ###reference_###) below). The transition operators from (4 ###reference_###) can be described in this eigen-basis via the eigenvalues , and their densities are uniformly bounded over . These well-known facts are reviewed in Sec. 3 ###reference_###.\nSome more notation: denotes the space of uniformly continuous functions on . The Sobolev and H\u00f6lder spaces of maps defined on are defined as all functions that have partial derivatives up to order defining elements of , respectively, and we set , by convention. Attaching the subscript to any of the preceding spaces denotes the linear subspaces of such functions of compact support within . The Sobolev sub-spaces of are the completions of for the -norms. The symbols denote the operator and Hilbert-Schmidt (HS) norm of a linear operator on a Banach space , respectively. We denote by the supremum norm and by the norm of a normed space , with dual space . Throughout, denotes inequalities (in the last case two-sided) up to fixed multiplicative constants, while means that a random variable has law ."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Optimal recovery of the transition operator",
+ "text": "Given our data, can be estimated directly at by evaluating a suitable set of basis functions of at the observed \u2018transition pairs\u2019 . For instance if we take the linear span of the first eigenfunctions of the Neumann Laplacian , then a projection estimator for is described in (3.5.1 ###reference_4###) below. Our first theorem establishes a bound on the convergence rate for recovery of in operator norm if the approximating space is of sufficiently high dimension depending on the Sobolev regularity of .\nConsider data at fixed observation distance , from the reflected diffusion model (2 ###reference_###) on a bounded convex domain with smooth boundary, started at , with such that , . Then the estimator from (3.5.1 ###reference_4###) with choice satisfies,\nwith constants in the notation.\nOur proof gives a non-asymptotic concentration inequality for the bound in (5 ###reference_###), see Proposition 5 ###reference_position5###. Moreover, as in Corollary 2 ###reference_ollary2### below we can deduce from (5 ###reference_###) the convergence rate\nfor (stronger) operator norms on the spaces. This rate is optimal in an information theoretic \u2018minimax\u2019 sense (cf. Ch.6 in [26 ###reference_b26###]), as we now show for the case relevant below.\nIn the setting of Theorem 3 ###reference_orem3###, there exists a bounded convex domain with smooth boundary and a constant such that\nwhere the infimum extends over all estimators of (i.e., measurable functions of the taking values in the space of bounded linear operators on ).\nThe proof relies on some results from spectral geometry that require an appropriate choice of domain, in fact is the \u2018smoothed\u2019 hyperrectangle in (15 ###reference_###) below for and large enough. The lower bound remains valid when restricting the supremum to \u2019s that are constant near . 
The above theorems show that the minimax rate in the class of reflected diffusion processes is faster by the power of a -factor than the minimax rate of recovery of a general Markovian transition operator in the same regularity class, cf. Thm 2.2 in [46 ###reference_b46###]."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Injectivity of ,",
+ "text": ""
+ },
+ {
+ "section_id": "2.2.1",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.1 Stability estimates",
+ "text": "We now turn to the problem of guaranteeing validity of inference on , and in turn also for for any . When in the asymptotics, ideas from stochastic calculus come into force and the inference problem becomes tractable either by direct techniques that identify the parameter \u2013 see [35] and references therein; or by steady state approximations to the diffusion equation (discussed in the introduction).\nThe \u2018low frequency\u2019 regime where is fixed was studied in [29, 56] when . The key idea of [29] is to infer from a principal component analysis (PCA) of the operator . Following their line of work when is not possible as they rely on explicit identification equations for based on ODE techniques (see Section 3.1 in [29]), and in particular on the simplicity of the first non-zero eigenvalue of \u2013 both ideas do not extend to . Instead we follow the route via \u2018stability estimates\u2019 used recently in work on non-linear statistical inverse problems, see [50], [27, 1] and also [54]. We are not aware of an explicit reference that establishes the injectivity of the \u2018forward\u2019 map for arbitrary fixed (and ), let alone a stability estimate. Our first result achieves this when is known near the boundary of .\nLet be a bounded convex domain in with smooth boundary. Let be bounded from below by a constant , suppose on for some compact subset of and that for some .
Then there exists a positive constant depending on such that\nIn particular, if coincide as linear operators on for some , we must have on .\nThe proof consists of a combination of the functional calculus identity\nwith injectivity estimates for the non-linear map for appropriate (which have been developed earlier in related contexts, see, e.g., [58, 54] and references therein).\nIt is of interest to improve the logarithmic modulus of continuity in (8). We now show that at least in some regions of the parameter space of \u2019s this is possible. The proof strategy is substantially different from Theorem 5: instead of functional calculus it relies on a spectral \u2018pseudo-linearisation\u2019 identity for obtained from perturbation theory for parabolic PDE. This identity simplifies when testing against eigenfunctions of , and allows one to identify if a certain transport operator (related to the stability estimates for ) is injective. Stability of this transport operator can be reduced to a hypothesis on the eigenfunctions of , which in turn can be tackled with techniques from spectral geometry.\nTo this end, define the first block of eigenfunctions of from (3) as\nwhere is the first (non-zero) eigenvalue. Note that the last sum is necessarily finite and is any vector of scalars. The following theorem shows that under certain assumptions on to be discussed, a Lipschitz (or H\u00f6lder) stability estimate holds true.\nIn addition to the hypotheses of Theorem 5, assume also for some and that\nfor some and some vector . Then we have\nfor a constant .\nBy standard interpolation inequalities for Sobolev norms (p.44 in [45]) and Proposition 3 with some , the bound (11) directly implies a H\u00f6lder stability estimate\nfor .
Whenever we can let as .\nAs we can choose we only need to find one linear combination of eigenfunctions in the eigenspace for that satisfies the hypothesis (10). As multiplicities of eigenvalues reflect symmetries of on , one could regard the added flexibility as a \u2018blessing of symmetry\u2019.\nWe can write for the operator functional on the spectrum of , see the identity (32). For the map is for some and one deduces from operator-norm Lipschitz properties (e.g., Lemma 3 in [42]) that then . This is intuitive, as the forward heat map is a smooth integral operator (the Chapman-Kolmogorov equations). In contrast, in the case , the operator functional does not have a bounded Lipschitz constant on the spectrum. The last two theorems combined with Theorem 11 below (for there, and via the continuous imbedding ) imply the following stability estimates for the dependence of the \u2018backward heat operator\u2019 on : Under the hypotheses of Theorem 5 and assuming for , we can bound the -Hilbert Schmidt norms as\nwhere the modulus of continuity can be taken to be , and with constant now depending also on . In light of the exponential growth of the Lipschitz constant of in the tail of the spectrum of , one may think that such a logarithmic modulus of continuity is necessary. However, under the hypothesis (10) we can obtain a stronger H\u00f6lder modulus from our techniques. For the proof, we combine Theorem 6 (in fact (12)) and Theorem 11 below.\nUnder the hypotheses of Theorem 6 we have\nwhere is as in (12) and where .\nIn the one-dimensional setting , Lemma 6.1 and Proposition 6.5 in [29] prove simplicity of and the strict monotonicity in any closed subinterval of of the corresponding eigenfunction (for any ).
This entails that the derivative cannot vanish on and verifies the key hypothesis (10) of Theorem 6 for some and all large enough depending on (finite if ).\nWe next discuss an approach to verify (10) also in multi-dimensions based on the \u2018hot spots\u2019 conjecture from spectral geometry. Ways to obtain \u2018H\u00f6lder\u2019 stability estimates that involve eigenfunctions for multiple distinct eigenvalues (rather than just the first) are conceivable too and will be investigated in future research."
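The functional calculus step underlying the proof of Theorem 5 — recovering the generator from the transition operator via a logarithm, schematically L = (1/t) log(e^{tL}) — can be sanity-checked in finite dimensions. A minimal sketch, where the symmetric negative semi-definite matrix is a hypothetical stand-in for a discretised generator (not the paper's actual operator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a discretised generator: symmetric, negative
# semi-definite (a finite-dimensional analogue, NOT the paper's operator).
B = rng.standard_normal((6, 6))
L = -(B @ B.T)

t = 0.5
lam, U = np.linalg.eigh(L)                 # spectral decomposition L = U diag(lam) U^T
P_t = U @ np.diag(np.exp(t * lam)) @ U.T   # "transition operator" e^{tL}

# Functional-calculus inversion L = (1/t) log(P_t), via the spectrum of P_t.
mu, V = np.linalg.eigh(P_t)
L_rec = V @ np.diag(np.log(mu) / t) @ V.T

assert np.allclose(L_rec, L, atol=1e-8)
```

In infinite dimensions the same identity holds on the spectrum, but the logarithm has an unbounded Lipschitz constant in the tail, which is exactly why only a logarithmic stability modulus is obtained without further hypotheses.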
+ },
+ {
+ "section_id": "2.2.2",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.2 Reflected diffusion and \u2018hot spots\u2019",
+ "text": "While (10) is satisfied in dimension (Remark 2), this is less clear in higher dimensions. Indeed, if the first eigenfunction has a critical point in with non-positive Laplacian (e.g., consider near of the form ), the condition (10) does not hold. The hope is that eigenfunctions have special properties that exclude such situations, at least in regions one can identify.\nLet us start with some simple examples where the condition is satisfied when . For the Laplacian () on the unit cube, the first eigenfunctions of corresponding to are cosines in one of the axial variables, constant otherwise, and vanishes only at the respective corners of . Moreover is bounded on any compact and so we can verify (10) for large enough, appropriate , and such . The argument just given extends to cylindrical domains with base equal to a convex domain in :\nConsider a cylinder of height and with convex base of diameter . Then (10) holds for , any compact , some , and constants depending on .\nOur proof shows that when , the first eigenvalue is simple and its eigenfunction satisfies (10). When , the eigenspace of is possibly multi-dimensional, but there always exists one eigenfunction in that eigenspace that satisfies (10).\nThe proof of the last proposition is not difficult (see Section 3.7) \u2013 it draws inspiration from [40] and provides one of the few elementary examples for the validity of Rauch\u2019s hot spots conjecture [4, 11], which is concerned precisely with domains for which the gradient of any eigenfunction of corresponding to has all its zeros at the boundary . As the eigenfunctions are smooth in the interior of , this conjecture implies (10) for and any compact as we can then choose large enough depending on .
The hot spots conjecture is believed to be true whenever is convex but, with the exception of cylinders, has been proved only in special 2-dimensional cases so far; see [36, 3, 37, 64] and references therein for positive results and [12], who show that the conjecture may fail in non-convex domains. Next to convexity, symmetry properties of the domain play a key role in these proofs \u2013 in the context of Proposition 1 the central axis of symmetry of the cylinder \u2018dominates the spectrum\u2019 when the base is small enough, providing what is necessary to verify the conjecture in this case. The case from Remark 2 can in this sense be regarded as a degenerately symmetric special case.\nIn this article we consider smooth domains, but the preceding \u2018cylinder\u2019 is not smooth near the boundary of its base. However, we can \u2018round the corners\u2019 of the cylinder without distorting the relevant spectral properties of . For example consider and a hyperrectangle for to be chosen, and define\nThen the are bounded convex domains that have smooth boundaries for all , and we will show that the conclusion of Proposition 1 remains valid for large enough. Moreover, to lend more credence to (10) for different from constant , we can extend the result to for in a -neighbourhood of the constant function. This gives meaningful infinite-dimensional models for which the H\u00f6lder stability estimates from the previous subsection apply, and for which \u2018fast convergence rates\u2019 will be obtained in the next section. Incidentally, they are also used to prove the lower bound in Theorem 4. For simplicity we only consider the case of simple (first) eigenvalues in the following result.\nA) Consider domains for .
Then we can choose large enough such that the Laplacian on has a simple eigenvalue and the corresponding eigenfunction satisfies (10) for any compact subset of , with constant depending on .\nB) The conclusions in A) remain valid if we replace by for any that satisfies as well as for some small enough, with constants now depending also on ."
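The unit-cube example above is easy to verify numerically in dimension two. A minimal check, under the assumption that condition (10) amounts to the gradient of the first eigenfunction being bounded away from zero on compact interior subsets (the exact form of (10) is not reproduced here, and the subset [0.1, 0.9] is an illustrative choice):

```python
import numpy as np

# First nontrivial Neumann eigenfunction on the unit square: u(x, y) = cos(pi x),
# eigenvalue pi^2. Its gradient (-pi sin(pi x), 0) vanishes only on the two
# boundary faces x = 0 and x = 1, as the hot spots picture predicts.
xs = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
grad_norm = np.pi * np.abs(np.sin(np.pi * X))   # |grad u| on the grid

# On the compact subset K = [0.1, 0.9] x [0, 1] the gradient is bounded
# away from zero, here by pi * sin(0.1 * pi) > 0.9.
K = (X >= 0.1) & (X <= 0.9)
assert grad_norm[K].min() > 0.9
```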
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Bayesian inference in the diffusion model",
+ "text": "While we have shown injectivity of the non-linear map , there is no obvious inversion formula (unless ), and so the estimate from Theorem 3 does not obviously translate into one for . The paradigm of Bayesian inversion [67] can in principle overcome such issues. A natural Bayesian model for is obtained by placing a prior probability measure on a -field of some parameter space\nso that unique pathwise solutions to (2) exist for all , with transition densities as after (3). If denotes the Borel -field of , and if the maps are jointly Borel measurable from , then basic arguments (cf. [24] and also [56]) show that the posterior distribution is given by\nThis formula exposes the relationship of our setting to Bayesian non-linear inverse problems with PDEs [67, 54], since the non-linear solution map of (the fundamental solution of) a parabolic PDE features in the likelihood term. Even though our measurement model is much more complex than the additive Gaussian noise models considered in [67, 54], we can still leverage computational ideas from this literature \u2013 see Remark 3 for details.\nThe priors we consider will be of Gaussian process type. With an eye on obtaining sharp results in some cases we give a concrete construction of a prior, but the proofs below can be applied to general classes of high- or infinite-dimensional priors (commonly used in the literature [24, 50, 51, 54]), replacing the truncated Gaussian series in the next display.
Take the first eigenfunctions of the Neumann-Laplacian for eigenvalues , and for define a Gaussian random field\nwhere for some compact subset , the map is a non-negative cut-off function vanishing on and equal on some further compact subset of the interior of . As in [50, 54], the -dependent rescaling provides extra regularisation required in the proofs \u2013 it also allows us to remove the strong prior restrictions from [56] in the case .\nFor fixed , the of is a probability measure supported in the space where is the finite-dimensional linear span of the . As the models a -smooth Gaussian random field on that is supported in a strict subset of . The prior for the diffusivity (equipped with the trace Borel -algebra of the separable Banach space ) is then\nwhich equals on . Note that the \u2018base case\u2019 corresponds to and hence to the case where the diffusion in (2) is a standard reflected Brownian motion with generator . The construction can be adapted to any fixed replacing .\nThe numerical computation of the posterior measure (16) is possible via MCMC methods. For instance, since our priors are Gaussian, we can use the standard pCN proposal (see [16] or Section 1.2.4 in [54]) to set up a Markov chain that has as invariant distribution. Posterior functionals\ncan be approximated by ergodic averages; see Fig. 2 for an illustration with . The computation of each iterate of this chain requires the draw of a -dimensional Gaussian (from the prior) and the evaluation of the log-likelihood function\nIn light of the representation (33), the latter can be evaluated by standard numerical methods for elliptic PDEs that compute the first few eigen-pairs of the differential operator with Neumann boundary conditions.
Explicit error bounds for the approximation of the transition densities can be obtained from the exponential decay of the tail of the series in (33) via Corollary 1, and since is fixed in our setting. Moreover, taking limits in the pseudo-linearisation identity (42) below allows one to check the gradient stability condition from [59, 9], which is a key ingredient for computational guarantees for MCMC.\nThe above Bayesian methodology extends to more general diffusion models (1) by proceeding as in [56], Sec. 2.3.2. One employs a hierarchical prior construction that first specifies a prior for the diffusivity and then models the \u2018remaining\u2019 drift conditionally on , for instance by priors as in [28]. One can then employ MCMC samplers for hierarchical priors, and proceed similarly to [74] in the \u2018drift step\u2019. Alternatively one can simply plug in an empirical estimate for (e.g., via an estimate as after (30) in [56]), avoiding hierarchical methods. When , the case of drift vector fields in (1) that are not in gradient form is innately more challenging as one loses self-adjointness of the infinitesimal generator. Some ideas for how to deal with such non-reversible processes can be found in [55, 2], but for many applications, gradients of \u2018force\u2019 potentials provide natural non-parametric models with relevant physical interpretation [28, 33, 34].\nThe reflected diffusion model (2) \u2013 which corresponds to Neumann boundary conditions \u2013 is essential to obtain a Markov process that does not \u2018terminate\u2019 at a finite time (as would be the case for Dirichlet boundary conditions).
Processes that are periodic on a -dimensional cube, or that reflect along directions different from the inward normal vector at , can be accommodated as well (but, at least in the latter case, introduce further tedious technicalities)."
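The pCN proposal mentioned above can be sketched in a few lines. The log-likelihood below is a toy Gaussian placeholder, not the diffusion transition-density likelihood of the paper, and all tuning values (step size, chain length) are illustrative:

```python
import numpy as np

def pcn_chain(log_lik, theta0, n_iter, beta, rng):
    """preconditioned Crank-Nicolson MCMC for a standard Gaussian prior N(0, I):
    the proposal theta' = sqrt(1 - beta^2) theta + beta xi, xi ~ N(0, I), is
    reversible w.r.t. the prior, so the acceptance ratio involves only the
    likelihood."""
    theta = np.asarray(theta0, dtype=float)
    ll = log_lik(theta)
    chain, n_acc = [theta.copy()], 0
    for _ in range(n_iter):
        prop = np.sqrt(1.0 - beta ** 2) * theta + beta * rng.standard_normal(theta.shape)
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll, n_acc = prop, ll_prop, n_acc + 1
        chain.append(theta.copy())
    return np.array(chain), n_acc / n_iter

rng = np.random.default_rng(1)
# Toy placeholder likelihood (NOT the diffusion likelihood): y = theta + noise.
y = np.array([0.5, -0.2, 0.1])
log_lik = lambda th: -0.5 * np.sum((y - th) ** 2) / 0.1 ** 2
chain, acc = pcn_chain(log_lik, np.zeros(3), n_iter=5000, beta=0.1, rng=rng)
assert 0.0 < acc < 1.0
```

The design point is that the prior factor cancels exactly in the acceptance ratio, which is what makes pCN robust under refinement of the (here -dimensional) discretisation.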
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Posterior consistency theorems",
+ "text": "We now obtain mathematical guarantees for the inference provided by , following the programme of Bayesian Non-parametrics [24] in the context of non-linear inverse problems [54]."
+ },
+ {
+ "section_id": "2.4.1",
+ "parent_section_id": "2.4",
+ "section_name": "2.4.1 Posterior reconstruction of",
+ "text": "We show that the Bayesian approach attains the optimal convergence rate for inference on the transition operator at the \u2018observed\u2019 times .\nConsider discrete data at fixed observation distance , from the reflected diffusion model (2) on a bounded convex domain with smooth boundary, started at . Assume , satisfies and on . Let be the posterior distribution (16) resulting from the prior for from (17) with and the given . Then there exists depending on and such that\nInspection of our proofs shows that one obtains convergence rates also in norms as in (6). When the first non-zero eigenvalue of is simple, the previous theorem implies consistency of the PCA provided by . Since draws are self-adjoint Markov transition operators, we can extract their \u2018principal component\u2019, or second eigenfunction, . By the operator norm convergence of to , the simplicity of the eigenvalue eventually translates into simplicity of with probability approaching one, and a unique then exists (up to choice of sign), cf. Proposition 9. Using quantitative perturbation arguments (e.g., Proposition 4.2 in [29]) one obtains\nIn dimension , the top eigenfunction fully identifies with an explicit reconstruction formula [29, 56], but in multi-dimensions this approach is not feasible, also because is not simple in general, in which case the PCA for the eigenfunction will not be consistent."
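After discretisation, extracting the 'principal component' of a posterior draw as described above reduces to an eigendecomposition with a sign convention. A sketch on a hypothetical symmetric transition-type matrix (constructed here as a matrix exponential purely for illustration, not from actual posterior draws):

```python
import numpy as np

def principal_component(P):
    """Eigenvector for the second-largest eigenvalue of a symmetric
    transition-type matrix P, with a sign convention fixing orientation."""
    _, U = np.linalg.eigh(P)        # eigh returns eigenvalues in ascending order
    v = U[:, -2]                    # second-largest eigenvalue's eigenvector
    return v if v[np.argmax(np.abs(v))] > 0 else -v

# Hypothetical self-adjoint "transition matrix" P = exp(t L) for a symmetric
# negative semi-definite L (a toy discretisation, not the paper's operator).
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
L = -(B @ B.T)
lam, U = np.linalg.eigh(L)
P = U @ np.diag(np.exp(0.3 * lam)) @ U.T

v = principal_component(P)
target = U[:, -2]   # same eigenspace: exp is monotone on the spectrum
assert min(np.linalg.norm(v - target), np.linalg.norm(v + target)) < 1e-8
```

The sign fix matters because eigenvectors are only defined up to sign; without a convention, averaging principal components over posterior draws would be meaningless.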
+ },
+ {
+ "section_id": "2.4.2",
+ "parent_section_id": "2.4",
+ "section_name": "2.4.2 Consistency and convergence rates for the non-linear inverse problem",
+ "text": "We now state the main statistical result of this article.\nConsider the setting of Theorem 9. Then there exists a sequence such that as ,\nas well as, for any ,\nSpecifically we can take for some . Moreover, if in addition (10) holds for , then we can take .\nWhen we could obtain directly the convergence rate for operator norms from Theorem 9 and the argument sketched at the beginning of Remark 1. But for we are solving a genuine inverse problem. Note further that the -norms equivalently bound the norms of the difference between the transition densities from (33).\nIn order to obtain faster rates , the hypothesis (10) needs to hold only at the ground truth and not throughout the parameter space of prior diffusivities . Next to the one-dimensional case discussed in Remark 2, Theorem 8 describes an infinite-dimensional class of \u2019s for which such faster rates can indeed be attained also when .\nUsing uniform integrability type arguments as in [50, 54], a similar convergence rate can be proved for the posterior mean vector and the induced conductivity and transition operators , yielding Theorem 2. See Subsection 3.6.3."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Proofs",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Analytical background: reflected diffusions and their generators",
+ "text": ""
+ },
+ {
+ "section_id": "3.1.1",
+ "parent_section_id": "3.1",
+ "section_name": "3.1.1 Divergence form operators",
+ "text": "Let be a bounded convex domain in with smooth boundary and such that . Consider the divergence form elliptic operator from (3). The Sobolev space can be endowed either with the usual norm or with the equivalent norm , with equivalence constants depending only on . Moreover, the elements of satisfying zero Neumann boundary conditions (in the usual trace sense) are defined as\nwith the unit normal vector. By the divergence theorem (e.g., p.143 in [71]),\nso is self-adjoint for the -inner product on . This operator can be closed to give an operator on the domain that coincides with on ([21], Theorem 7.2.1), and which corresponds to the bi-linear symmetric (Dirichlet) form\nwhich in turn defines a Markov process arising from a semi-group with infinitesimal generator and as invariant probability measure. An application of It\u00f4\u2019s formula shows that this Markov process describes solutions of the SDE (2) with \u2018reflection of the process at the boundary\u2019 provided by the (inward) normal vector and the \u2018local time\u2019 process that is non-zero only when . Details can be found in [69], [7] (ch. 37, 38), [6] (Sec. I.12 and p.52) and also [5]."
+ },
+ {
+ "section_id": "3.1.2",
+ "parent_section_id": "3.1",
+ "section_name": "3.1.2 Spectral resolution of the generator",
+ "text": "We recall here some standard facts on the spectral theory of the generator with Neumann boundary conditions. The arguments follow closely the treatment of the standard Laplacian on p.403 in [71] (see also Ch. 7.2 in [21]), and extend straightforwardly to as long as .\nDenote by the operator mapping into defined before (23). By (23) the linear operator satisfies\nfrom which one deduces that the linear operator defines a bijection between and with operator norms depending only on . If we restrict its inverse to the Hilbert space then it defines a self-adjoint operator, which is also compact as it maps into , which embeds compactly into . By the spectral theorem there exist -orthonormal eigenfunctions and corresponding to eigenvalues such that\nWe denote by\nthe corresponding inverse operator acting on the Hilbert space\nfor which the form an orthonormal basis. Clearly .\nWe next record the following \u2018uniform in \u2019 spectral gap estimate: the first (nontrivial) eigenvalue has the variational characterisation (see Sec. 4.5 in [21])\nwhere we have used the Poincar\u00e9 inequality (Theorem 1 on p.292 in [23]): for and Poincar\u00e9 constant depending only on . For subsequent eigenvalues we know that they can have at most finite multiplicities (e.g., Theorem 4.2.2 in [21]), and in fact that they obey Weyl\u2019s law (e.g., using p.111 in [70]),\nThe preceding asymptotics hold initially for the standard Laplacian (), with the constants involved depending only on . By the variational characterisation of the \u2019s (Sec. 4.5 in [21]) and since\nholds for the quadratic form featuring in (25), the corresponding to conductivities differ by at most a fixed constant that depends only on .\nTaking the eigenpairs of one can define Hilbert spaces\nAny can be written as and hence by Parseval\u2019s identity. The following proposition (proved in Section 3.8) summarises some basic properties.\nLet be a bounded convex domain in with smooth boundary and let be s.t. . Then and\nIf we assume in addition that for some integer either A) or B) for some s.t. , then we have\nWe further have the embedding and also if is replaced by (modulo constants). Finally we have for any pair satisfying A) or B), with equivalent norms. All embedding/equivalence constants depend only on .\nUnder the hypotheses of Proposition 2B), the eigenfunctions corresponding to eigenvalues of satisfy, for some depending only on ,\nwhich whenever implies as well\nBy definition (27) and (26), the result is true for the -norm replacing the -norm, and since , Proposition 2 implies (29), and (30) then follows from the Sobolev imbedding.\n\u220e"
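Weyl's law is easy to check concretely for the Neumann Laplacian on the unit square, where the eigenvalues are pi^2(m^2 + n^2) with m, n >= 0 and the counting function should grow like (|Omega|/(4 pi)) lambda in d = 2. The grid size and thresholds below are chosen only so that the truncated spectrum is complete up to the tested level:

```python
import numpy as np

# Neumann-Laplacian eigenvalues on the unit square: pi^2 (m^2 + n^2), m, n >= 0.
M = 200
m, n = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
eigs = np.sort((np.pi ** 2) * (m ** 2 + n ** 2).ravel())
# All eigenvalues up to pi^2 (M - 1)^2 are complete in this truncated grid.

# Weyl's law in d = 2: N(lam) = #{j : lam_j <= lam} ~ (|Omega| / (4 pi)) lam.
lam = (np.pi ** 2) * 5000.0
N_lam = np.count_nonzero(eigs <= lam)
weyl = lam / (4.0 * np.pi)

assert abs(N_lam / weyl - 1.0) < 0.05
# Equivalently, lam_j grows like j^{2/d}, i.e. linearly in j when d = 2.
assert 0.5 < eigs[20000] / (4.0 * np.pi * 20000) < 2.0
```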
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Heat equation, transition operator, and a perturbation identity",
+ "text": "For fixed let us consider solutions in to the heat equation\nfor any initial condition satisfying . The unique solution of this PDE is given by\nwhich also lies in . We can add any fixed constant to both the initial condition and the solution , by extending the above series to include for . The symmetric non-negative (Prop. 4) fundamental solutions of the heat equation are then\nThese are precisely the kernels of the transition operator in (4) and also the transition probability densities of the Markov process arising from the Dirichlet form (23), cf., e.g., Sec. 1.14 in [5]."
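In the base case d = 1 with constant diffusivity one (reflected Brownian motion on (0,1)), the spectral series for the transition densities can be evaluated directly, with Neumann eigenpairs e_0 = 1, e_j(x) = sqrt(2) cos(pi j x) and lambda_j = pi^2 j^2. A sketch, with the truncation level chosen for illustration:

```python
import numpy as np

def p_t(t, x, y, J=200):
    """Spectral series for the transition density of reflected Brownian
    motion on (0, 1): p_t(x, y) = sum_j e^{-lambda_j t} e_j(x) e_j(y),
    with e_0 = 1, e_j(x) = sqrt(2) cos(pi j x), lambda_j = pi^2 j^2."""
    j = np.arange(1, J + 1)
    return 1.0 + 2.0 * np.sum(
        np.exp(-((np.pi * j) ** 2) * t) * np.cos(np.pi * j * x) * np.cos(np.pi * j * y)
    )

t, x = 0.05, 0.3
ys = np.linspace(0.0, 1.0, 2001)
vals = np.array([p_t(t, x, y) for y in ys])

h = ys[1] - ys[0]
integral = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

assert vals.min() > 0.0                  # positivity (lower heat-kernel bound)
assert abs(integral - 1.0) < 1e-5        # p_t(x, .) integrates to one
assert abs(p_t(t, 0.3, 0.7) - p_t(t, 0.7, 0.3)) < 1e-12   # symmetry
```

The exponential decay of the terms in j is what makes truncation errors easy to control, as used for the likelihood computations discussed earlier.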
+ },
+ {
+ "section_id": "3.2.1",
+ "parent_section_id": "3.2",
+ "section_name": "3.2.1 Heat kernel estimates",
+ "text": "By the bounds on eigenfunctions and eigenvalues from (26), (30), the series in (33) defining converge in , and by the Sobolev imbedding with , then also uniformly on .\nUnder the hypotheses of Proposition 2B), we have for any fixed\nwhere .\nUsing the representation (33) and Corollary 1 we obtain\n\u220e\nA further key fact is that the transition densities are bounded from below on a convex domain . See Section 3.8 for the proof.\nLet be a bounded convex domain with smooth boundary and suppose satisfies for some even integer . Then we have for every and some positive constant that\nUsing Proposition 6.3.4 in [5] and (25), (95) (or by estimating the tail of the series in (33) and integrating the result), one also obtains geometric ergodicity of the diffusion process,"
+ },
+ {
+ "section_id": "3.2.2",
+ "parent_section_id": "3.2",
+ "section_name": "3.2.2 Perturbation and pseudo-linearisation identity",
+ "text": "In this subsection we consider two conductivities whose -norms are bounded by a fixed constant and study the resulting difference of the action of the transition operators on the eigenfunctions of . We will use the factorisation of space and time variables in the identity\nwhich holds as well for the eigenblocks (with any finite sequence)\ncorresponding to the eigenvalue , that is, we have\nBy (31), (32), the functions\nsolve the inhomogeneous PDE\nwhere\nwith eigenvalues of . Standard semi-group arguments (Proposition 4.1.2 in [47]) imply that the solution of (39) can be represented by the \u2018variation of constants\u2019 formula\nFor the eigen-pairs of we thus arrive at\nfor coefficients\nWe can regard (42) as a spectral \u2018pseudo-linearisation\u2019 identity for , similar to analogous results employed to prove stability estimates in other inverse problems, e.g., [50]. It could also be the starting point to prove LAN-type expansions in our model as in [77]."
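The 'variation of constants' representation can be checked in the simplest scalar case, where the Duhamel integral has a closed form. A sketch with illustrative parameter values:

```python
import numpy as np

# Scalar check of the variation-of-constants (Duhamel) formula for
# u'(t) = lam * u(t) + f, u(0) = u0, with constant forcing f:
#   u(t) = e^{t lam} u0 + int_0^t e^{(t - s) lam} f ds
#        = e^{t lam} u0 + f (e^{t lam} - 1) / lam.
lam, u0, f, t = -2.5, 1.3, 0.7, 0.8

n = 100_000                           # midpoint-rule quadrature of the integral
s = (np.arange(n) + 0.5) * (t / n)
duhamel = np.exp(t * lam) * u0 + (t / n) * np.sum(np.exp((t - s) * lam)) * f
closed = np.exp(t * lam) * u0 + f * (np.exp(t * lam) - 1.0) / lam

assert abs(duhamel - closed) < 1e-8
```

In the paper's setting the role of lam is played by the eigenvalues of the generator and the forcing encodes the perturbation between the two conductivities, mode by mode.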
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Information distances and small ball probabilities",
+ "text": "For the diffusion process (2) with transition densities from (33), the Kullback-Leibler (KL) divergence in our discrete measurement model with observation distance is defined as\nwhere we regard the from (33) as joint probability densities on (as ).\nIn the following theorem denotes the HS norm for operators on the Hilbert space (or just ). Note further that implies , so the r.h.s. in (45) can be bounded by and then also by .\nLet satisfy the conditions of Proposition 2B) for some . Suppose outside of a compact subset . Then for any there exist positive constants depending on such that\nUsing Propositions 3, 4 (noting also by the Sobolev imbedding) and standard inequalities from information theory (as at the beginning of the proof of Lemma 14 in [56], or see Appendix B in [24]) one shows\nThe HS-norm of an operator on any Hilbert space can be represented as\nwhere the are any orthonormal basis of . In what follows we take the basis arising from the spectral decomposition of , and hence need to bound\nwhere the HS-norms can be taken over the Hilbert space as both operators have identical first eigenfunction .\nFor each summand we apply the representation (42) with , selecting the relevant -th eigenfunction if there are multiplicities. We then write shorthand\nfor from (40) with these choices. We can bound the coefficients (43) as\nso that by Parseval\u2019s identity, for , and writing for the remainder of the proof,\nReturning to (47) we are thus left with bounding the double sum\nBy the divergence theorem\nso by Parseval\u2019s identity and (27) (with norm there well-defined also for negative ), the r.h.s.
in (48) is bounded by\nIn the next step we use the basic sequence space duality relationship . Moreover, noting outside of , we take a suitable smooth cut-off function that equals one on and is compactly supported in . Then we apply the divergence theorem in conjunction with Proposition 2 to obtain\nwith spaces as after (92). For we have and then in view of Corollary 1 with . For and we use the Sobolev embedding and again Corollary 1 to bound . In both cases the r.h.s. in the last display is bounded by a constant multiple of for some constant . Inserting these bounds into the second summand in (49) and using (26), the series\nis convergent (for fixed). The same estimate holds for replaced by , summing the first term in (49) \u2013 completing the proof of the theorem.\n\u220e"
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "Proofs of stability estimates",
+ "text": ""
+ },
+ {
+ "section_id": "3.4.1",
+ "parent_section_id": "3.4",
+ "section_name": "3.4.1 Proof of Theorem 5",
+ "text": "Take such that on and (as is a compact subset of , such exists). By the results from Section 3.1.2, the inhomogeneous elliptic PDE (58) in Lemma 2 below has the unique solution\nIn particular, Proposition 2 implies that and that is bounded in . The same arguments apply to replacing . Now Lemma 2 implies\nfor a finite constant , where we have also used the standard interpolation for -norms (p.44 in [45]). We now estimate the right hand side in the last display. As we have for any that\nfor , using also (26), and similarly for . By the triangle inequality\nLet us further define \u2018truncated\u2019 transition operators\nwhich, just as in the display above (52) and in view of (26), satisfy the estimate\nand the same is true for replacing . The operators are self-adjoint on and by what precedes and (26), (25), the union of their spectra is contained in\nWe can employ a cut-off function and construct a smooth , compactly supported on , such that\nThen since on the last interval, we can write, using the notation of functional calculus,\nwhere we have used Lemma 3 in [42] for the self-adjoint operators on and the bound (using results in Sec. 4.3 in [26]). Combining all that precedes, we obtain the overall estimate\nwhere was arbitrary. Choosing such that\n(we can increase if necessary to ensure ) implies for some that\nAs the are uniformly bounded, we can absorb the second term into the first after adjusting constants, so the stability estimate is proved, and the injectivity assertion of the theorem follows directly from it."
+ },
+ {
+ "section_id": "3.4.2",
+ "parent_section_id": "3.4",
+ "section_name": "3.4.2 Proof of Theorem 6",
+ "text": "For eigenblocks from (37), Proposition 2 gives\nThen, using the representation (42) with choices and Proposition 2,\nwhere and . We can write\nfor some mean values in the interval arising from the mean value theorem applied to the exponential map. This remains true in the degenerate case where , as then .\nNow recalling the distribution of the eigenvalues from (26) we see that for with fixed, the last displayed exponential is bounded below by a fixed constant depending on , while for large values of , the last but one term in the last display is of order for fixed. Hence we have for all , and some ,\nCombining this estimate with (3.4.2) and Parseval\u2019s identity gives\nThe theorem then follows from Lemma 1 with , which satisfies (56) by hypothesis (10) and is supremum-norm bounded by (30)."
130
+ },
131
+ {
132
+ "section_id": "3.4.3",
133
+ "parent_section_id": "3.4",
134
+ "section_name": "3.4.3 Stability of a transport operator",
135
+ "text": "We now give a stability lemma for the operator\nfor appropriate choices of . It features regularly in stability estimates for elliptic PDEs, see Chapter 2 in [54 ###reference_b54###] for references.\nLet be a function such that and\nfor some compact subset of .\nFor as in Condition 1 ###reference_dition1### and any that vanishes on , the operator satisfies for a constant .\nThe divergence theorem applied to any vanishing at gives For with from (56 ###reference_###)\nso that by the Cauchy-Schwarz inequality\nfor . Now by (56 ###reference_###) and since on by hypothesis we have\nand fusing also (3.4.3 ###reference_0###) we deduce .\n\u220e\nLet be any compact subset of a bounded smooth domain and suppose that are two -diffusivities such that on . Suppose for some s.t. on , functions solve\nThen we have for some constant that\nLet us write . By (58 ###reference_###), we have on\nWe can upper bound the -norm of r.h.s. by\nTo lower bound the left hand side of (61 ###reference_###) we apply Lemma 1 ###reference_ma1### with to . The hypothesis on implies on , so that either or for . Since by a -regularity estimate (e.g., Thm 4.3.4 in [72 ###reference_b72###]) for solutions of (58 ###reference_###) with this implies (56 ###reference_###) and by Lemma 1 ###reference_ma1### the result.\n\u220e"
136
+ },
137
+ {
138
+ "section_id": "3.5",
139
+ "parent_section_id": "3",
140
+ "section_name": "Minimax estimation of the transition operator",
141
+ "text": ""
142
+ },
143
+ {
144
+ "section_id": "3.5.1",
145
+ "parent_section_id": "3.5",
146
+ "section_name": "3.5.1 Operator norm convergence",
147
+ "text": "In this subsection we construct explicit estimator for the transition operator and prove Theorem 3 ###reference_orem3###. While it is possible to take self-adjoint, this will not be required here.\nFor take the eigenfunctions of the Neumann Laplacian on (including ) and regard as a normed space equipped with the Euclidean norm via Parseval\u2019s identity for . Given the observations define a matrix by\nVia the injection of into we can regard as a bounded linear operator on described by the action\nSimilarly the transition operator induces a matrix via\nwhich equals the expectation under the law of started in stationarity . The latter matrix corresponds to the operator on arising from the composition where describes the projection onto \u2013 note that are not the eigen-spaces of unless . To obtain an estimate for the approximation error from , note first that by Proposition 2 ###reference_position2### and (27 ###reference_###), (26 ###reference_###), for any s.t. ,\nfor some since by hypothesis. Therefore, using again (27 ###reference_###), (26 ###reference_###) and Parseval\u2019s identity\nTo bound the operator norms on approximation spaces we use a standard covering argument in finite dimensional spaces (e.g., the proof of Lemma 1.1 in [14 ###reference_b14###]) to the effect that\nwhere is a discrete -net of unit vectors (i.e., ) covering the unit sphere of of cardinality at most for some , see, e.g., [26 ###reference_b26###], p.373. By a union bound and for with , , we obtain\nWe can apply the concentration inequality Proposition 6 ###reference_position6### below with an element of the Hilbert space from (69 ###reference_###) below. We have, using also Proposition 3 ###reference_position3###,\nas well as in view of the estimate\nwhere we have used (30 ###reference_###). In this way we obtain overall:\nLet and suppose arise from the diffusion (2 ###reference_###) started at on a bounded smooth convex domain with s.t. . Let be s.t. for some . 
Then for all we can choose such that\nIn particular for we can choose to prove Theorem 3 ###reference_orem3###. A bound on the -operator norms follows as well: Since the imbedding is continuous and since whenever , we have\nand as in (65 ###reference_###) and by Proposition 2 ###reference_position2### the approximation errors scale like\nIn the setting of Proposition 5 ###reference_position5### we also have"
148
+ },
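The construction in this subsection — projecting the transition operator onto a finite span of Neumann eigenfunctions and estimating the resulting matrix by ergodic averages over consecutive observations — can be sketched in a toy one-dimensional setting. The cosine basis on [0, 1], the function names and the truncation level `K` below are illustrative assumptions, not the paper's exact multi-dimensional definitions.

```python
import math

def neumann_basis(K, x):
    """First K Neumann-Laplacian eigenfunctions on [0, 1]:
    e_0 = 1 and e_k(x) = sqrt(2) * cos(k * pi * x), an orthonormal
    basis of L^2([0, 1]) satisfying Neumann boundary conditions."""
    return [1.0] + [math.sqrt(2.0) * math.cos(k * math.pi * x) for k in range(1, K)]

def transition_matrix_estimator(path, K):
    """Estimate the K x K matrix with entries <P e_l, e_k> of the transition
    operator by the ergodic average of e_k(X_i) * e_l(X_{i+1}) over
    consecutive observations of the sample path."""
    N = len(path) - 1
    M = [[0.0] * K for _ in range(K)]
    for i in range(N):
        Ei = neumann_basis(K, path[i])      # basis evaluated at X_i
        Ej = neumann_basis(K, path[i + 1])  # basis evaluated at X_{i+1}
        for k in range(K):
            for l in range(K):
                M[k][l] += Ei[k] * Ej[l] / N
    return M
```

By the ergodic theorem these averages converge to expectations under the stationary law, which is the sense in which the empirical matrix approximates the projected transition operator.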
149
+ {
150
+ "section_id": "3.5.2",
151
+ "parent_section_id": "3.5",
152
+ "section_name": "3.5.2 A concentration inequality for ergodic averages",
153
+ "text": "Consider the discrete Markov chain arising from sampling the diffusion (2 ###reference_###) started in stationarity . The transition operator of this chain is from (32 ###reference_###), with spectrum and the first spectral gap is bounded as\nin view of (25 ###reference_###) for some . We initially establish concentration bounds for additive functionals\nof bivariate Markov chains in arising from\nrespectively. By a union bound this will give concentration inequalities for ergodic averages along all indices , see (74 ###reference_###) below.\nThe transition operators of the new bivariate Markov chains have invariant measure on . If we define\nthen one shows\nby a basic application of Jensen\u2019s inequality (cf. Lemma 24 in [56 ###reference_b56###]), and by (68 ###reference_###). By the variational characterisation of eigenvalues and (68 ###reference_###) this implies that the first spectral gap of is also bounded as\nWe deduce from Theorem 3.1 in [60 ###reference_b60###] that for any we have the variance bound\nwhere we have also used (71 ###reference_###). Similarly, requiring in addition , Theorem 3.3 and eq. (3.21) in [60 ###reference_b60###] imply the concentration inequality.\nThe same inequality applies to the even indices , so that by a union bound we obtain:\nLet be s.t. , and let be sampled discretely at observation distance from the diffusion from (2 ###reference_###) with . Then for some constant and all we have"
154
+ },
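The device used above — splitting the observation sequence into two bivariate chains, one built from pairs starting at even indices and one from pairs starting at odd indices, before applying a union bound — can be sketched as follows; the functional `g` and the path are placeholders, not objects from the paper.

```python
def pair_averages(path, g):
    """Ergodic average of g(X_i, X_{i+1}) split into the two families of
    pairs (X_0, X_1), (X_2, X_3), ... and (X_1, X_2), (X_3, X_4), ...:
    each family is itself a Markov chain, so a concentration bound applies
    to each partial average separately and a union bound combines the two."""
    even = [g(path[i], path[i + 1]) for i in range(0, len(path) - 1, 2)]
    odd = [g(path[i], path[i + 1]) for i in range(1, len(path) - 1, 2)]
    mean = lambda v: sum(v) / len(v) if v else 0.0
    return mean(even), mean(odd)
```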
155
+ {
156
+ "section_id": "3.5.3",
157
+ "parent_section_id": "3.5",
158
+ "section_name": "3.5.3 Proof of the minimax lower bound Theorem 4",
159
+ "text": "Given the analytical estimates obtained so far, the proof follows ideas of the lower bound Theorem 10 of [58 ###reference_b58###] and we sketch here only the necessary modifications. Let us take the same set of functions from (4.17) in [58 ###reference_b58###] and consider only large enough in that construction such that all the wavelets featuring there are contained inside of the compact subset of the \u2018smoothed\u2019 -dimensional hypercube from (15 ###reference_###) for from Theorem 8 ###reference_orem8###B). In particular we can choose so large that for the from Theorem 8 ###reference_orem8###B). We apply Theorem 6.3.2 in [26 ###reference_b26###] (taking also note of (6.99) there to obtain an \u2018in probability version\u2019 of the lower bound) as in Step VII of the proof of Theorem 10 in [58 ###reference_b58###], noting that in our setting we can control the KL-divergences via Theorem 11 ###reference_orem11### and the imbedding . The result will thus follow if we can show that the transition operators induced by the \u2019s are appropriately separated for the -operator norms. Using the inequality (55 ###reference_###) we have\nwhere we note that on our \u2018smoothed\u2019 cylinder, the eigenfunctions are all simple thanks to Theorem 8 ###reference_orem8###B). To proceed we need to lower bound the -norms of the r.h.s. of (4.19) in [58 ###reference_b58###], with there replaced by our . As will be shown in the proof of Proposition 8 ###reference_position8###, the first eigenfunction of on has all partial derivatives equal to zero except with respect to one, say the first, variable, and that partial derivative cannot vanish on . In view of (90 ###reference_###), (91 ###reference_###) this implies that the corresponding eigenfunction on has a partial derivative for the first variable that is strictly positive while the other partial derivatives are bounded (in fact can be made arbitrarily close to zero). 
One can then easily adapt the steps V and VI in the proof of Theorem 10 in [58 ###reference_b58###] (with there equal to our ) to establish, for all large enough, the required bound"
160
+ },
161
+ {
162
+ "section_id": "3.6",
163
+ "parent_section_id": "3",
164
+ "section_name": "Bayesian contraction results",
165
+ "text": ""
166
+ },
167
+ {
168
+ "section_id": "3.6.1",
169
+ "parent_section_id": "3.6",
170
+ "section_name": "3.6.1 Results for general priors",
171
+ "text": "In this subsection we follow general ideas from Bayesian nonparametrics [24 ###reference_b24###] and specifically in our diffusion context adapt the results from [56 ###reference_b56###] to our multi-dimensional setting to obtain a contraction theorem for posteriors arising from general possibly -dependent priors . Recall the information distance from (44 ###reference_###) on parameter spaces .\nFor define\nThen for any probability measure on , any and from (71 ###reference_###),\nThe proof is the same as the one Lemma 25 in [56 ###reference_b56###], ignoring the term involving invariant measures there as in our case for all . The key variance estimate in that lemma can then be replaced by our (72 ###reference_###) with .\n\u220e\nLet be a sequence of priors on and suppose for , some sequence such that and constant we have\nSuppose further for a sequence of subsets and constant we have\nand that there exists tests and a sequence such that\nwhere is some distance function on .\nThen we have for that\nThe proof is the same as the one of Theorem 13 in [56 ###reference_b56###]. We can track the constants in this proof (similar as in Theorem 1.3.2 in [54 ###reference_b54###]) to further include the set in, and to obtain the explicit convergence rate bound on the r.h.s. of, (78 ###reference_###).\n\u220e"
172
+ },
173
+ {
174
+ "section_id": "3.6.2",
175
+ "parent_section_id": "3.6",
176
+ "section_name": "3.6.2 Proof of Theorems 9 and 10",
177
+ "text": "With these preparations we can now prove Theorem 9 ###reference_orem9### and a version of it with distance functions replaced by , relevant to prove Theorem 10 ###reference_orem10###. We will choose\nthroughout, for a large enough constant. We consider the prior from (17 ###reference_###) and use standard theory for Gaussian processes (e.g., Ch.2 in [26 ###reference_b26###]). In particular, recalling the cut-off function , we note that the reproducing kernel Hilbert space (RKHS) of the Gaussian process generating is given by , with RKHS norm\ni) Verification of (75 ###reference_###). Proposition 3 ###reference_position3### with and Proposition 4 ###reference_position4### imply the two sided estimate with constants that are uniform in . This applies as well to and so, by standard inequalities (e.g., Appendix B in [24 ###reference_b24###]),\nfor such , with constants depending on .\nLet us define which is zero outside of and lies in by the hypotheses on . This implies that by Proposition 2 ###reference_position2###. If is the -projection of onto , then and\nSince (Proposition 2 ###reference_position2###) implies that embeds continuously into , we can use (26 ###reference_###) and choose large enough s.t.\nfor any given . Now using Theorem 11 ###reference_orem11###, (80 ###reference_###) and for , with large enough,\nwhere we have used that the map is Lipschitz on bounded sets of for the -norm (cf. the argument on p.27 in [54 ###reference_b54###]). We apply Corollary 2.6.18 in [26 ###reference_b26###] with \u2018shift\u2019 vector and the Gaussian correlation inequality (in the form of Theorem B.1.2 in [54 ###reference_b54###]) to further lower bound the r.h.s. in the last display by\nusing also (79 ###reference_###), (80 ###reference_###) and for some . Next, since the RKHS of the base prior embeds continuously into (cf. (79 ###reference_###)), we obtain\nas in eq. (2.24) in [54 ###reference_b54###] with there. 
In concluding this step we now also construct the regularisation sets for (76 ###reference_###). If we define\nthen for every we can choose large enough so that , by an application of the Gaussian isoperimetric theorem [26 ###reference_b26###] as in step iii) in the proof of Theorem 2.2.2 in [54 ###reference_b54###] with . Now we have\n and the last two terms are bounded by for . For the first we can use Proposition 2 ###reference_position2### and the triangle inequality to obtain on\nwhere we have used (26 ###reference_###) in the estimate\nand the last term is bounded by a fixed constant . In conclusion this proves for all large enough so that (75 ###reference_###) follows for our choice of and all large enough. Since is Lipschitz on bounded subsets of , we have in fact proved the stronger result \u2013 to be used in the next step \u2013 that for some we have\nii) Construction of tests. We cannot rely on Hellinger testing theory as in [24 ###reference_b24###, 50 ###reference_b50###, 54 ###reference_b54###] because our data does not arise from an i.i.d. model. Instead (following ideas from [25 ###reference_b25###, 56 ###reference_b56###]) we use concentration inequalities, specifically Proposition 5 ###reference_position5###, to construct these tests. For the hypothesis consider the plug in test where is from (3.5.1 ###reference_4###) with choice . We verify (77 ###reference_###) with from (82 ###reference_###). By Proposition 5 ###reference_position5###, the type-one error is then controlled, for large enough, as\nand likewise, by the triangle inequality,\nwhenever . Now we can apply Theorem 12 ###reference_orem12### and deduce that for all we can choose and large enough such that\nThis proves Theorem 9 ###reference_orem9###. To proceed, note that the same arguments work for operator norms by appealing to Corollary 2 ###reference_ollary2### with the same choice of , resulting in the slower convergence rate replacing . 
Now to prove Theorem 10 ###reference_orem10### under hypothesis (10 ###reference_###), we can invoke the stability estimate Theorem 6 ###reference_orem6### and the set inclusion\nfor large enough such that . If (10 ###reference_###) does not hold we can still use the stability estimate (8 ###reference_###) from Theorem 5 ###reference_orem5### and obtain the slower rate for the posterior distribution. This completes the proof of the contraction rate bounds for in Theorem 10 ###reference_orem10###. The rates for the HS-norms follow in a similar way from (14 ###reference_###), (13 ###reference_###) instead of the previous stability estimates."
178
+ },
179
+ {
180
+ "section_id": "3.6.3",
181
+ "parent_section_id": "3.6",
182
+ "section_name": "3.6.3 Posterior mean convergence and proof of Theorem 2",
183
+ "text": "The above contraction results holds as well for the \u2018linear\u2019 parameter , as is -Lipschitz on -bounded sets of \u2019s bounded away from zero (and using that for in bounded in ). In turn we further deduce a convergence rate for the posterior mean vectors\nusing that we have exponential convergence to zero in (78 ###reference_###) for any if we just increase the constant , and by a uniform integrability argument as in Theorem 2.3.2 of [54 ###reference_b54###] (or see also the proof of Theorem 3.2 in [50 ###reference_b50###], to whom this argument is due). This then implies the same -rates for towards and and in particular implies the second limit in Theorem 2 ###reference_orem2###. An argument parallel to the one leading to (83 ###reference_###) further implies that and we can then use (45 ###reference_###) and the imbedding to obtain convergence to zero of the Hilbert-Schmidt norms (which bound norms) also at rate ."
184
+ },
185
+ {
186
+ "section_id": "3.7",
187
+ "parent_section_id": "3",
188
+ "section_name": "Neumann eigenfunctions on cylindrical domains",
189
+ "text": ""
190
+ },
191
+ {
192
+ "section_id": "3.7.1",
193
+ "parent_section_id": "3.7",
194
+ "section_name": "3.7.1 Proof of Proposition 1",
195
+ "text": "Let us decompose a point as , The restricted Neumann Laplacians have discrete non-positive spectrum on and , respectively, with eigenfunctions all orthogonal on constants on their respective domains. If we set , for eigenvalues then the eigenfunctions of on tensorise by a standard separation of variables argument (that is left to the reader).\nThe eigenfunctions of on for eigenvalues are\nTo proceed, recall that for a convex domain , the Poincar\u00e9 constant satisfies by a classical result of [61 ###reference_b61###]. For simple eigenvalues we then have:\nSuppose that the Poincar\u00e9 constant of satisfies Then the first non-zero eigenvalue of on is simple, equals and the rest of the spectrum is separated from by at least . The corresponding eigenfunction is smooth in the strict interior of and satisfies for all small enough\nBy the assumption and (25 ###reference_###) we have . The first eigenvalue of on is , hence and the first non-constant eigenfunction of on corresponds to and equals\nBy the hypotheses the next eigenvalue satisfies and so we have a \u2018two-sided\u2019 spectral gap around in the spectrum in the sense that\nBy the assumption on the first claim follows. Next for away from the boundary we have and so we have\nfor small w.l.o.g. (so that we can use for near zero).\n\u220e\nIf in the previous proof we only assume then the first eigenvalue of may co-incide with the one of and there may then be multiple eigenfunctions for . But the eigenfunction (86 ###reference_###) is still one permissible choice, and we can choose the weight in (9 ###reference_###) to choose that eigenfunction, so that Proposition 1 ###reference_position1### remains valid also in this case."
196
+ },
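The tensorisation of Neumann eigenpairs on a product domain — eigenfunctions multiply, eigenvalues add — can be illustrated on a rectangle, where the factors are one-dimensional cosines. This toy sketch is not the paper's cylindrical domain, but it exhibits the same separation-of-variables mechanism.

```python
import math

def neumann_eigenpairs_rectangle(a, b, kmax):
    """Neumann eigenpairs of the Laplacian on [0, a] x [0, b]: eigenfunctions
    tensorise as cos(j*pi*x/a) * cos(k*pi*y/b) and eigenvalues add,
    lambda_{jk} = (j*pi/a)^2 + (k*pi/b)^2, returned in increasing order."""
    pairs = []
    for j in range(kmax):
        for k in range(kmax):
            lam = (j * math.pi / a) ** 2 + (k * math.pi / b) ** 2
            fn = lambda x, y, j=j, k=k: (math.cos(j * math.pi * x / a)
                                         * math.cos(k * math.pi * y / b))
            pairs.append((lam, fn))
    return sorted(pairs, key=lambda p: p[0])
```

On a long thin rectangle the smallest non-zero eigenvalue comes from the longer side, mirroring the simple-eigenvalue discussion in the proof above.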
197
+ {
198
+ "section_id": "3.7.2",
199
+ "parent_section_id": "3.7",
200
+ "section_name": "3.7.2 Proof of Theorem 8, Step I: perturbation",
201
+ "text": "The remainder of this section is devoted to the proof of Theorem 8 ###reference_orem8###. It consists of combining Proposition 1 ###reference_position1### with perturbation theory for linear operators. The following result will be used repeatedly. For a proof see Sec.s IV.3.4-5 in Kato [39 ###reference_b39###] (or cf. also Proposition 4.2 in [29 ###reference_b29###]). The clusters of the eigenvalues converge also without simplicity of , see the discussion in [39 ###reference_b39###] or also in Sec 2.3 in [43 ###reference_b43###].\nLet be a bounded linear self-adjoint operator on a separable Hilbert space with discrete spectrum and simple eigenvalue such that for some . Let be another self-adjoint linear operator such that . Then has a simple eigenvalue and there are eigenvectors of for such that as ."
202
+ },
203
+ {
204
+ "section_id": "3.7.3",
205
+ "parent_section_id": "3.7",
206
+ "section_name": "3.7.3 Step II: rounding the corners",
207
+ "text": "Let us fix and agree to write for the sequence of domains from (15 ###reference_###), as well as for the limit set, in this subsection. Note that is the largest domain containing all the others and the perturbation argument below will be given on the Hilbert space , where the inclusions are to be understood by restriction to, and zero extension from, the domains . [Note a slight abuse of notation that is not the cylinder base from earlier.]\nConsider the linear operators on given by from after (24 ###reference_###) in Section 3.1.2 ###reference_.SSS2### with . We extend them to operators denoted by on by restriction of to and zero-extension of the resulting functions outside of . Likewise we define on .\nWe have as that\nFor any such that and writing , we have from Theorem 3.1.3.3 in [31 ###reference_b31###] (with there) that\nwhere is a numerical constant independent of . Following the argument given after (3.2.1.8) in [31 ###reference_b31###] one shows that weakly in and then by compactness also in the norm of and in fact of for the given . This convergence is uniform in : indeed, suppose does not converge to in uniformly in . Then there exists and a sequence such that for which\nThe sequence converges in the dual space to some along a subsequence, by compactness of the inclusion . As is self-adjoint on we deduce\nusing also that the restriction operator from to is continuous from to , and where the last supremum was bounded using (24 ###reference_###) (with ) and the Cauchy-Schwarz inequality, by ,\nsince has norm at most one as its eigenvalues satisfy for all . The same argument implies that in in . From what precedes we deduce that\nconverges to zero as , which contradicts (89 ###reference_###), and proves the lemma.\n\u220e\nJust as after (24 ###reference_###), the eigenvalues of the limiting operator are for eigenfunctions of extended by zero outside of . [Note that is an orthogonal sum.] 
By Proposition 8 ###reference_position8###, the eigenvalue is isolated and simple. Similarly, the eigenpairs of are with eigenfunctions extended by zero outside of , and from Proposition 9 ###reference_position9### we deduce that as in . Moreover in any strict interior subset of containing , the eigenfunctions have uniformly bounded Sobolev norms of any order (e.g., use [23 ###reference_b23###], p.334, Thm 2) and so by a standard compactness argument for Sobolev norms and the Sobolev imbedding , we obtain convergence of\nThus the gradient condition (85 ###reference_###) for is inherited by for all large enough depending on the lower bound in (85 ###reference_###). Also remains bounded on by a fixed constant in view of (90 ###reference_###), so we can verify (10 ###reference_###) for large enough and some . This completes the proof of Theorem 8 ###reference_orem8###A)."
208
+ },
209
+ {
210
+ "section_id": "3.7.4",
211
+ "parent_section_id": "3.7",
212
+ "section_name": "3.7.4 Step III: neighbourhood of",
213
+ "text": "We now extend the previous result to a neighbourhood of . As the domain is fixed in what follows, we just write for the bounded convex smooth domain from the previous subsection.\nRegarding as bounded linear operators on we have for some that\nFor denote by the solution to (58 ###reference_###). By Proposition 2 ###reference_position2### we have and so since is self-adjoint and using the divergence theorem,\nwhere we use as follows from the results in Section 3.1.2 ###reference_.SSS2###.\n\u220e\nBy the arguments after (90 ###reference_###), (24 ###reference_###), the operator has a simple eigenvalue with eigenfunction satisfying (10 ###reference_###). We apply the preceding lemma and Proposition 9 ###reference_position9### in the Hilbert space , which implies the convergence of the eigenpair of to as , in . Under the hypotheses on , Theorem 2 on p.334 in [23 ###reference_b23###] implies that the norms in a strict interior subset of are all uniformly bounded for . The standard interpolation inequality for Sobolev norms (p.44 in [45 ###reference_b45###]) implies for some , and (if necessary considering fractional Sobolev norms)\nas , where all Sobolev norms are over . Since embeds continuous into this implies convergence to zero of . We can then verify (10 ###reference_###) just as after (90 ###reference_###), for small enough, completing the proof of Theorem 8 ###reference_orem8###."
214
+ },
215
+ {
216
+ "section_id": "3.8",
217
+ "parent_section_id": "3",
218
+ "section_name": "Proofs of auxiliary results",
219
+ "text": ""
220
+ },
221
+ {
222
+ "section_id": "3.8.1",
223
+ "parent_section_id": "3.8",
224
+ "section_name": "3.8.1 Proof of Proposition 2",
225
+ "text": "We require a few preparatory remarks that will be used: For any the Sobolev imbedding gives The multiplier inequality\nwhere for and for , is also standard, and where we use that imbeds continuously into for in case B) of the proposition. We also recall the standard result from elliptic PDEs that is a continuous isomorphism between and (e.g, Theorem II.5.4 in [45 ###reference_b45###] or Theorem 4.3.3 in [72 ###reference_b72###]), specifically\nwith constants depending only on . [Here the -spaces on the boundary are naturally defined as in [45 ###reference_b45###], and we note that the result is also true when if we replace the boundary spaces simply by the values of at the endpoints of the interval .]\nNow any is the limit in and in of its partial sum . Moreover the lie in since the \u2019s do. We then have from (22 ###reference_###) and for constants in depending only on , the two-sided inequality\nTaking limits, these inequalities extend to all , in particular . The inclusion is also valid (p.474 in [71 ###reference_b71###], or see Exercise 38.1 in [7 ###reference_b7###]) but will be left to the reader. This proves the required assertions when .\nFor , using (93 ###reference_###), (94 ###reference_###), , we have with constants depending on ,\nand again taking limits the result extends to all , in particular . We see that any is the -limit of elements in satisfying Neumann boundary conditions. From this and Theorem I.9.4 in [45 ###reference_b45###] we deduce that Then for and we have and by the spectral representations of we deduce . The inclusion of the r.h.s. in (28 ###reference_###) into is also clear since for such we have from the divergence and Parseval\u2019s theorem\nso that combining what precedes, (28 ###reference_###) is proved. The desired norm equivalence for then also follows from the last estimates.\nThe claims for integer follow by induction. We assume the result has been proved for and . Then we have . 
We then see from (93 ###reference_###) that on , the norms are equivalent to the norms . In particular for ,\nusing also the induction hypothesis, the multiplier inequality, and the definition of . The preceding bound for in particular implies . In the other direction, by similar arguments,\nThe last assertions follow for from and for from (28 ###reference_###). The general case follows again by induction: indeed suppose the result holds for some . Just as when showing (28 ###reference_###), the space consists precisely of all satisfying Neumann boundary conditions and such that . This immediately implies as elements of are of the form for some so its normal derivatives of all orders vanish at , and by the induction hypothesis. Finally, since by the induction hypothesis, we have and so . The equivalence of norms then follows from the first part of the proposition."
226
+ },
227
+ {
228
+ "section_id": "3.8.2",
229
+ "parent_section_id": "3.8",
230
+ "section_name": "3.8.2 Proof of Proposition 4",
231
+ "text": "We will apply Theorem 3.1 in [17 ###reference_b17###] with semi-group acting on , where is the closure of from before (23 ###reference_###) on the domain . We note that any bounded convex domain satisfies the \u2018chain condition\u2019 employed in that reference. Further, the doubling condition (D) there is satisfied with scaling constant . The upper bound heat kernel estimate for required in (3.1) in Theorem 3.1 in [17 ###reference_b17###] is proved in Theorem 3.2.9 in [20 ###reference_b20###] for the value (noting that a bounded domain with smooth boundary satisfies the \u2018extension property\u2019 for Sobolev spaces required in [20 ###reference_b20###]). Finally\nwhere is the -fold application of . This verifies Condition (3.2) in [17 ###reference_b17###] (for the choice of relevant in the proof of Theorem 3.1 there). To prove (95 ###reference_###), the Sobolev imbedding and Proposition 2 ###reference_position2### imply that it suffices to bound , which for equals the graph norm by the argument given in the last paragraph of the proof of Proposition 2 ###reference_position2###. This completes the proof."
232
+ }
233
+ ],
234
+ "appendix": [],
235
+ "tables": {},
236
+ "image_paths": {
237
+ "1(a)": {
238
+ "figure_path": "2210.13008v3_figure_1(a).png",
239
+ "caption": "Figure 1: Left: a reflected diffusion path (Xt:0\u2264t\u2264T):subscript\ud835\udc4b\ud835\udc610\ud835\udc61\ud835\udc47(X_{t}:0\\leq t\\leq T)( italic_X start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT : 0 \u2264 italic_t \u2264 italic_T ) initialised at X0subscript\ud835\udc4b0X_{0}italic_X start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and ran until time T=5\ud835\udc475T=5italic_T = 5. Right: N=500\ud835\udc41500N=500italic_N = 500 discrete observations (Xi\u2062D)i=0Nsuperscriptsubscriptsubscript\ud835\udc4b\ud835\udc56\ud835\udc37\ud835\udc560\ud835\udc41(X_{iD})_{i=0}^{N}( italic_X start_POSTSUBSCRIPT italic_i italic_D end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT italic_i = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT at sampling frequency D=0.05\ud835\udc370.05D=0.05italic_D = 0.05 (T=25\ud835\udc4725T=25italic_T = 25). The diffusivity f\ud835\udc53fitalic_f is given in Fig. 2.",
240
+ "url": "http://arxiv.org/html/2210.13008v3/extracted/5372188/Image_ContPath.jpg"
241
+ },
242
+ "1(b)": {
243
+ "figure_path": "2210.13008v3_figure_1(b).png",
244
+ "caption": "Figure 1: Left: a reflected diffusion path (Xt:0\u2264t\u2264T):subscript\ud835\udc4b\ud835\udc610\ud835\udc61\ud835\udc47(X_{t}:0\\leq t\\leq T)( italic_X start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT : 0 \u2264 italic_t \u2264 italic_T ) initialised at X0subscript\ud835\udc4b0X_{0}italic_X start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and ran until time T=5\ud835\udc475T=5italic_T = 5. Right: N=500\ud835\udc41500N=500italic_N = 500 discrete observations (Xi\u2062D)i=0Nsuperscriptsubscriptsubscript\ud835\udc4b\ud835\udc56\ud835\udc37\ud835\udc560\ud835\udc41(X_{iD})_{i=0}^{N}( italic_X start_POSTSUBSCRIPT italic_i italic_D end_POSTSUBSCRIPT ) start_POSTSUBSCRIPT italic_i = 0 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT at sampling frequency D=0.05\ud835\udc370.05D=0.05italic_D = 0.05 (T=25\ud835\udc4725T=25italic_T = 25). The diffusivity f\ud835\udc53fitalic_f is given in Fig. 2.",
245
+ "url": "http://arxiv.org/html/2210.13008v3/extracted/5372188/Image_DiscrObs.jpg"
246
+ },
247
+ "2(a)": {
248
+ "figure_path": "2210.13008v3_figure_2(a).png",
249
+ "caption": "Figure 2: The posterior mean estimate f\u03b8\u00afsubscript\ud835\udc53\u00af\ud835\udf03f_{\\bar{\\theta}}italic_f start_POSTSUBSCRIPT over\u00af start_ARG italic_\u03b8 end_ARG end_POSTSUBSCRIPT with \u03b8\u00af=M\u22121\u2062\u2211m=1M\u03d1m\u00af\ud835\udf03superscript\ud835\udc401superscriptsubscript\ud835\udc5a1\ud835\udc40subscriptitalic-\u03d1\ud835\udc5a\\bar{\\theta}=M^{-1}\\sum_{m=1}^{M}\\vartheta_{m}over\u00af start_ARG italic_\u03b8 end_ARG = italic_M start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT \u2211 start_POSTSUBSCRIPT italic_m = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT italic_\u03d1 start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT after M=10000\ud835\udc4010000M=10000italic_M = 10000 pCN iterates, for sample sizes N=2500\ud835\udc412500N=2500italic_N = 2500 (left) and N=25000\ud835\udc4125000N=25000italic_N = 25000 (center), at sampling frequency D=0.05\ud835\udc370.05D=0.05italic_D = 0.05; the true field f0subscript\ud835\udc530f_{0}italic_f start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT (right).",
250
+ "url": "http://arxiv.org/html/2210.13008v3/x1.jpg"
251
+ },
252
+ "2(b)": {
253
+ "figure_path": "2210.13008v3_figure_2(b).png",
254
+ "caption": "Figure 2: The posterior mean estimate f\u03b8\u00afsubscript\ud835\udc53\u00af\ud835\udf03f_{\\bar{\\theta}}italic_f start_POSTSUBSCRIPT over\u00af start_ARG italic_\u03b8 end_ARG end_POSTSUBSCRIPT with \u03b8\u00af=M\u22121\u2062\u2211m=1M\u03d1m\u00af\ud835\udf03superscript\ud835\udc401superscriptsubscript\ud835\udc5a1\ud835\udc40subscriptitalic-\u03d1\ud835\udc5a\\bar{\\theta}=M^{-1}\\sum_{m=1}^{M}\\vartheta_{m}over\u00af start_ARG italic_\u03b8 end_ARG = italic_M start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT \u2211 start_POSTSUBSCRIPT italic_m = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT italic_\u03d1 start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT after M=10000\ud835\udc4010000M=10000italic_M = 10000 pCN iterates, for sample sizes N=2500\ud835\udc412500N=2500italic_N = 2500 (left) and N=25000\ud835\udc4125000N=25000italic_N = 25000 (center), at sampling frequency D=0.05\ud835\udc370.05D=0.05italic_D = 0.05; the true field f0subscript\ud835\udc530f_{0}italic_f start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT (right).",
+ "url": "http://arxiv.org/html/2210.13008v3/x2.jpg"
+ },
+ "2(c)": {
+ "figure_path": "2210.13008v3_figure_2(c).png",
+ "caption": "Figure 2: The posterior mean estimate f\u03b8\u00afsubscript\ud835\udc53\u00af\ud835\udf03f_{\\bar{\\theta}}italic_f start_POSTSUBSCRIPT over\u00af start_ARG italic_\u03b8 end_ARG end_POSTSUBSCRIPT with \u03b8\u00af=M\u22121\u2062\u2211m=1M\u03d1m\u00af\ud835\udf03superscript\ud835\udc401superscriptsubscript\ud835\udc5a1\ud835\udc40subscriptitalic-\u03d1\ud835\udc5a\\bar{\\theta}=M^{-1}\\sum_{m=1}^{M}\\vartheta_{m}over\u00af start_ARG italic_\u03b8 end_ARG = italic_M start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT \u2211 start_POSTSUBSCRIPT italic_m = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT italic_\u03d1 start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT after M=10000\ud835\udc4010000M=10000italic_M = 10000 pCN iterates, for sample sizes N=2500\ud835\udc412500N=2500italic_N = 2500 (left) and N=25000\ud835\udc4125000N=25000italic_N = 25000 (center), at sampling frequency D=0.05\ud835\udc370.05D=0.05italic_D = 0.05; the true field f0subscript\ud835\udc530f_{0}italic_f start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT (right).",
+ "url": "http://arxiv.org/html/2210.13008v3/x3.jpg"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2210.13008v3"
+ }
20240127/2212.00403v2.json ADDED
@@ -0,0 +1,159 @@
+ {
+ "title": "A method of moments estimator for interacting particle systems and their mean field limit",
+ "abstract": "We study the problem of learning unknown parameters in stochastic interacting particle systems with polynomial drift, interaction and diffusion functions from the path of one single particle in the system. Our estimator is obtained by solving a linear system which is constructed by imposing appropriate conditions on the moments of the invariant distribution of the mean field limit and on the quadratic variation of the process. Our approach is easy to implement as it only requires the approximation of the moments via the ergodic theorem and the solution of a low-dimensional linear system. Moreover, we prove that our estimator is asymptotically unbiased in the limits of infinite data and infinite number of particles (mean field limit). In addition, we present several numerical experiments that validate the theoretical analysis and show the effectiveness of our methodology to accurately infer parameters in systems of interacting particles.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "The problem of inference for stochastic differential equations (SDEs) has a long history [2 ###reference_b2###, 44 ###reference_b44###, 29 ###reference_b29###, 48 ###reference_b48###, 6 ###reference_b6###, 26 ###reference_b26###]. Numerous different techniques, such as maximum likelihood estimation and Bayesian approaches, have been developed and analyzed extensively, both in a parametric and non-parametric setting. Most of the work in the aforementioned references is focused on inference for linear, in the sense of McKean, SDEs, i.e., that the drift and diffusion coefficients do not depend on the law of the process. In recent years, several studies have been devoted to the problem of learning parameters in nonlinear SDEs, for which the drift and possibly also the diffusion coefficients depend on the law of the process [31 ###reference_b31###, 14 ###reference_b14###, 38 ###reference_b38###, 1 ###reference_b1###, 10 ###reference_b10###, 50 ###reference_b50###, 19 ###reference_b19###]. The interest in inference for McKean SDEs, and of the corresponding McKean\u2013Vlasov partial differential equations (PDEs), is due to the many applications in which such nonlinear models appear, for example systemic risk [17 ###reference_b17###], collective behaviour such as flocking and swarming [40 ###reference_b40###], pedestrian dynamics [22 ###reference_b22###], opinion formation [20 ###reference_b20###], and neuroscience [11 ###reference_b11###], to name but a few. McKean nonlinear SDEs appear in the mean field limit of systems of weakly interacting diffusions [41 ###reference_b41###, 12 ###reference_b12###]. The link between finite dimensional diffusion processes, describing the interacting particle system, and their mean field limit has been used in order to study the maximum likelihood estimator [27 ###reference_b27###, 7 ###reference_b7###].\nIn earlier work, we studied the problem of inference for McKean SDEs using two different approaches. 
In particular, in [47 ###reference_b47###] we used the stochastic gradient descent method, both in an online and offline setting, and in [43 ###reference_b43###] we employed the eigenfunction martingale estimator [28 ###reference_b28###] to learn parameters in both the confining and interaction potentials, as well as the diffusion coefficient, of the McKean SDE in one dimension. More precisely, we showed that, under the assumptions from [34 ###reference_b34###], the eigenfunction martingale estimator, obtained by solving the eigenvalue problem for the generator of the McKean SDE linearized around the (unique) stationary state, is asymptotically unbiased and normal in the limit as the number of particles and observations go to infinity. We emphasize the fact that, for our methodology to work, it is sufficient to observe only one particle. Furthermore, even though the theoretical analysis is valid under the assumption that the mean field dynamics has a unique stationary state, we demonstrated by means of numerical experiments that our method works also when multiple stationary states exist, i.e., when the mean field dynamics exhibit phase transitions [8 ###reference_b8###].\nWe notice that the methodology proposed and analyzed in [43 ###reference_b43###], even though it is elegant, requires observation of only one particle and is provably asymptotically unbiased and normal, it suffers from the drawback that it is computationally expensive. This is due to the fact that we need to solve repeatedly the eigenvalue problem for the generator of the linearized McKean SDE. This means, in particular, that it might be impractical to apply our method to problems where many parameters need to be learned from data, as well as in the multidimensional setting. In this paper, we propose a very simple, robust and computationally efficient methodology that can be applied to McKean SDEs with polynomial nonlinearities. 
In particular, we show that the classical method of moments can be employed to learn parameters in McKean SDEs with polynomial drift and diffusion coefficients. Similarly to the eigenfunction martingale estimator, for our method to work, it is sufficient to observe a single particle of the interacting particle system, whose mean field limit leads to the McKean SDE. For our theoretical analysis, we need to consider continuous, as opposed to discrete, observations, at least when the diffusion coefficient is not known. Furthermore, we need to assume that the McKean SDE has a unique invariant measure, even though numerical experiments suggest that this is not necessary in practice. Under these assumptions, we can show that our estimator is asymptotically unbiased when the number of particles and the time horizon tend to infinity, and provide a rate of convergence.\nThe method of moments is a very standard methodology in statistics [42 ###reference_b42###, Section 3.3]. It has also been used in the study of the (linear and nonlinear) Fokker\u2013Planck equation, as a numerical technique [49 ###reference_b49###], as well as a mode reduction method [51 ###reference_b51###]. Moreover, it was also used in [12 ###reference_b12###] for proving rigorously that the Desai\u2013Zwanzig model exhibits a phase transition. To our knowledge, this simple and straightforward approach has not yet been applied to the problem of inference for McKean SDEs. The empirical moments have been employed in [3 ###reference_b3###] to construct a kernel based estimator to approximate a polynomial interaction function in McKean SDEs. However, their approach is different from ours and they assume to observe all the particles only at the final time instead of the full path of a single particle. Moreover, they do not include a confining potential in their analysis, and they assume the diffusion to be constant, while we consider more general polynomial confining potentials and diffusion functions. 
We also mention the recent work [19 ###reference_b19###], in which the inference problem for McKean SDEs in one dimension with no confining potential, but with a polynomial interaction potential, was studied. Under the assumptions of stationarity and for continuous-time observations, an approximate likelihood function was constructed using the empirical moments of the invariant distribution. Even though this work is related to ours, our approach and the result obtained in this paper are different. First, we consider the case with a confining potential, and we estimate parameters in both the confining and the interaction potentials, as well as in the diffusion coefficient, then we use directly the empirical moments for the estimation of the parameters in the drift and diffusion coefficients, without having to construct an approximate likelihood. Moreover, we apply our methodology to problems where the invariant measure of the mean field dynamics is not unique (i.e., when the system exhibits phase transitions), and we also show that our approach applies also to problems with a non-gradient structure and with degenerate noise. Finally, when the diffusion coefficient is known, we demonstrate that it is sufficient to consider discrete observations, in order to estimate parameters in the drift.\nWe finally remark that our setting where the confining and interaction potentials and the diffusion function have a polynomial form is not a strong limitation of the scope of this work. In fact, e.g., phase transitions for long time behavior of interacting particles with linear interaction and quadratic diffusion has been studied in [23 ###reference_b23###]. Moreover, there are several McKean SDEs that appear in applications and that have polynomial drift coefficients, such as the Desai\u2013Zwanzig model [12 ###reference_b12###] and the interacting Fitzhugh\u2013Nagumo neurons model [11 ###reference_b11###]. 
In the numerical experiments we demonstrate the usefulness of the method of moments by applying it to both models."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Problem setting",
+ "text": "In this work we consider the following system of one-dimensional interacting particles over the time interval\nwhere is the number of particles, are standard independent one dimensional Brownian motions, and is the initial distribution of the particles, which is assumed to be independent of the Brownian motions . We remark that we assume chaotic initial conditions, meaning that all the particles are initially distributed according to the same measure. The functions and are the drift, interaction, and diffusion functions, respectively, which depend on some parameters and . In this article we consider polynomial functions of the form\nand we denote by the subvector of with the unknown components which we aim to estimate. Moreover, is the set of admissible parameters and is the exact value of the vector . Our goal is then to infer from the continuous path of the realization of one single particle of system (2.1 ###reference_###).\nConsidering polynomial functions is not a strong limitation of the scope of our work. In fact, this setting can be seen as a semiparameteric framework where we aim to estimate the whole drift, interaction, and diffusion functions, provided that they are sufficiently smooth and can therefore be approximated by polynomials.\nWe focus our attention on large systems, i.e., when the number of interacting particles is , and therefore it is reasonable to look at the mean field limit (see, e.g., [12 ###reference_b12###, 18 ###reference_b18###]), which provides a good approximation of the behavior of a single particle in the system. In particular, letting in (2.1 ###reference_###) we obtain the nonlinear, in the sense of McKean, SDE\nwhere is the density with respect to the Lebesgue measure of the law of , and satisfies the nonlinear Fokker\u2013Planck equation named McKean\u2013Vlasov equation\nWe notice that the SDE (2.3 ###reference_###) is said to be nonlinear in the sense that the drift function depends on the law of the process. 
Moreover, the mean field can have multiple invariant measures (see, e.g., [12 ###reference_b12###, 8 ###reference_b8###]), whose densities with respect to the Lebesgue measure solve the stationary Fokker\u2013Planck equation\nHowever, for the following convergence analysis we will work under the assumption that equation (2.5 ###reference_###) admits a unique solution. Nevertheless, we observe from the numerical experiments in Section 5.4 ###reference_### that this assumption does not restrict the range of applicability of the method proposed in this paper from a practical point of view, indeed we notice that technique presented here works even in presence of multiple invariant states. The main hypotheses needed in the analysis are then summarized below.\nThe set of admissible parameters is convex and compact. Moreover, for all it holds:\nfor all ,\nthe solution of (2.3 ###reference_###) is ergodic with unique invariant measure whose density solves equation (2.5 ###reference_###).\nAssumption 2.2 ###reference_theorem2### is satisfied if we consider, e.g., the setting of [34 ###reference_b34###]. In particular, the uniqueness of the invariant measure of the mean field limit is given in the case of additive noise, i.e., , and when the drift and interaction functions have the form and for some confining potential which is uniformly convex (i.e., there exists such that for all ) and some interaction potential which is even and convex. In this case the density of the invariant measure is the solution of\nwhere is the normalization constant\nMoreover, in [34 ###reference_b34###, Theorem 3.18], it is proved that there exists a constant independent of time such that\nand therefore the density converges exponentially fast to the invariant density .\nFor the clarity of the exposition we focus here on systems of interacting particles in one dimension. 
Nevertheless, the proposed method can be easily extended to the case of -dimensional particles with as we will see in the numerical experiment in Section 5.6 ###reference_###."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Method of moments",
+ "text": "We now present our approach for learning the unknown parameter from continuous observations of one single particle of the system (2.1 ###reference_###). We assume to know a realization of the -th particle of the solution of (2.1 ###reference_###) for some , and we aim to retrieve the exact value of the unknown parameter by means of the method of moments. Letting , we first multiply equation (2.5 ###reference_###) by for , integrate over and then by parts to obtain\nReplacing the definitions of and yields for the drift term\nand for the diffusion term\nwhere denotes the -th moment with respect to the invariant measure\nMoreover, due to the binomial theorem, for the interaction term we have\nCollecting equation (3.1 ###reference_###) together with the expansions (3.2 ###reference_###), (3.3 ###reference_###) and (3.5 ###reference_###) we obtain\nand recalling the definition of as the subvector of with the unknown components, we can write a linear system for the exact value of of the form\nwhere the matrix and the right-hand side depend on the remaining known components of and on the moments with . Hence, we obtained a set of constraints which have to be satisfied by the exact unknown. However, since we do not know the invariant measure and therefore its moments , we cannot construct the matrix and the right-hand side in order to solve the linear system and get the value of . Nevertheless, due to the ergodicity of the process, the exact moments can be approximated using the data by\nwhich leads to a first attempt for the definition of our estimator as the solution of the approximated linear system\nwhere the matrix and the right-hand side are obtained from and , respectively, by replacing the exact moments with their approximation . Let us now remark that it may happen for the right-hand side to be , if, e.g., all the parameters are unknown (). 
In this case it is necessary to exclude the trivial solution , because it is clearly not the one which we are looking for. Hence, we can then augment the system with an additional equation derived from the definition of the quadratic variation. In particular, let be the quadratic variation of the process and notice that\nwhich implies\nLetting be sufficiently small, then the quadratic variation can be approximated by the quantity\nwhere . Since we know a continuous trajectory , then can be chosen arbitrarily small, and hence we assume to be known exactly. We are now ready to define our estimator starting from the solution of the linear system\nwhich is obtained by adding equation (3.11 ###reference_###) to the system (3.9 ###reference_###) and where therefore and . We remark that the value has to be chosen such that the number of equations of the system (3.13 ###reference_###) is greater or equal than the number of parameters, i.e., . Then, if and the matrix has full rank, the system has a unique solution, otherwise we can compute the minimum-norm least squares solution through the Moore\u2013Penrose pseudoinverse. Moreover, since the parameter , the estimator is finally defined as the projection of the solution of the linear system onto the convex and compact space\nwhere, for a matrix , we denote by its Moore\u2013Penrose pseudoinverse. We remark that this definition is necessary in order to make the estimator well-defined but, as we will see later in the numerical experiments, it is not required in concrete applications in most of the cases. In fact, the least squares solution of system (3.13 ###reference_###) turns out to be always unique in practice. Moreover, for and large enough, it is also sufficiently close to the exact parameter, and therefore inside the set of admissible values , so that the projection is not needed. 
For completeness and since it will be useful in the following analysis, let us also introduce the linear system whose unique solution is the exact unknown parameter\nwhich is obtained by adding to the system (3.7 ###reference_###) the following limit equation for the quadratic variation of the mean field limit\nIn Algorithm 1 ###reference_### we summarize the main steps needed to construct the estimator .\nCompute the quadratic variation using equation (3.12 ###reference_###) with\nCompute the approximated moments for . \n3:\n\nConstruct the matrix and the right-hand side using\nLet be the projection onto of the minimum-norm least squares solution of the linear system .\nThe method of moments is outlined here in the context of continuous-time observations. Nevertheless, we remark that, as long as the diffusion term is known, the proposed methodology can be easily generalized to the case of discrete-time observations for which a numerical experiment is presented in Section 5.3 ###reference_###. In fact, the approximation based on the ergodic theorem of the exact moments with respect to the invariant measure of the mean field dynamics can be repeated analogously with discrete-time observations instead of continuous-time observations.\nIn this work we decided to consider the case where the path of one particle is observed, rather than of all the particles, since in concrete applications it is more likely to get measurements only from one single particle than from all the ensemble of particles. Nevertheless, we stress that if the path of more than one particle is observed, then we may follow two different approaches in order to use all the available information and improve the performances of our estimator. First, we could compute the method of moments estimator for each particle, and then take the average as final estimator. Otherwise, we could also estimate the moments by averaging the empirical moments computed for each particle, and finally solve the linear system. 
We believe that this second approach could potentially give better results, but the analysis of the method of moments estimator for multiple observable particles is out of the scope of this work."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "The mean field Ornstein\u2013Uhlenbeck process",
+ "text": "We now employ the method of moments to a simple example involving the mean field Ornstein\u2013Uhlenbeck process for which the exact moments can be computed analytically. We set , , with , , , so that we have\nwhere , , and therefore we obtain the interacting particle system\nwhose mean field limit reads\nWe then aim to estimate the two-dimensional parameter . We remark that the inference problem for this particular case has been investigated in many works such as [27 ###reference_b27###, 7 ###reference_b7###]. It can be shown (see, e.g., [12 ###reference_b12###, 15 ###reference_b15###, 21 ###reference_b21###]) that the process has a unique invariant measure\nand therefore the moments for are given by\nMoreover, equations (3.6 ###reference_###) and (3.16 ###reference_###) simplify to\nwhich together with (3.21 ###reference_###) and the fact that allows us to write explicitly the linear system (3.15 ###reference_###). In particular, we notice that only the even terms give useful equations and, taking with even, we get\nand\nWe can easily verify that the system is solved by and that the least squares solution is well posed as . Indeed, the matrix can be rewritten as\nwhich gives\nwhich in turn implies since .\nFinally, the other systems (3.7 ###reference_###), (3.9 ###reference_###) and (3.13 ###reference_###) are then obtained in a similar way.\nWe assumed the interaction coefficient to be known in order to guarantee the solvability of the problem. Otherwise it would only be possible to estimate the sum . In fact, the method of moments relies on the mean field limit (3.19 ###reference_###) at stationarity, i.e., when , which would depend only on the sum and not on the single parameters alone. Indeed, if we had assumed both and to be unknown, then we would have got as invariant measure, and the matrix would have been\nTherefore, we would have obtained , in contrast with the assumption, independently of the number of moments equations."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Convergence analysis",
+ "text": "In this section we study the asymptotic properties of the proposed estimator. In particular, we prove that it is asymptotically unbiased when both the final time and the number of particles tend to infinity, and we provide a rate of convergence. We first need to introduce an additional assumption, which guarantees that the uniform propagation of chaos property holds true for the system of interacting particles and that the variables and have bounded moments of any order for all and .\nThere exists a constant independent of and such that for all and\nwhere is one particle of the system (2.1 ###reference_###) and is the solution of the corresponding mean field limit (2.3 ###reference_###), where the Brownian motion is chosen to be .\nMoreover, it holds for all\nEven if a general theory for McKean SDEs with polynomial coefficients which guarantees Assumptions 2.2 ###reference_theorem2### and 4.1 ###reference_theorem1### does not exist, these hypotheses are satisfied in several frameworks in the literature. First, we can consider the setting of Example 2.3 ###reference_theorem3###, and consequently the mean field Ornstein\u2013Uhlenbeck process in Section 3.1 ###reference_###, where propagation of chaos and bounded moments are shown in [34 ###reference_b34###, Theorem 3.3] and [16 ###reference_b16###, Lemma 2.3.1]. In [12 ###reference_b12###] the special case, where the confining potential is bistable, the interaction potential is quadratic, and the diffusion is constant, is studied in full detail and Assumptions 2.2 ###reference_theorem2### and 4.1 ###reference_theorem1### are proven together with the analysis of the phase transition. 
Still maintaining the diffusion coefficient constant, another interesting class of models, which has been studied in several papers [35 ###reference_b35###, 9 ###reference_b9###, 32 ###reference_b32###, 33 ###reference_b33###, 3 ###reference_b3###], is obtained by setting the drift term equal to zero and considering an odd nondecreasing interaction term. This setting has been thoroughly analyzed in [4 ###reference_b4###, 5 ###reference_b5###]. The authors show the existence and uniqueness of the solution of the McKean SDE ([4 ###reference_b4###, Theorem 3.1]), and the existence and uniqueness of a stationary distribution for the mean field limit ([4 ###reference_b4###, Theorems 4.1 and 4.7]) with some particular cases explicitly considered in [4 ###reference_b4###, Propositions 4.13 and 4.14]. Moreover, they study the interacting particle system and the corresponding propagation of chaos result ([4 ###reference_b4###, Proposition 5.1 and Theorem 5.3]), and finally prove the convergence to the stationary distribution ([5 ###reference_b5###, Theorems 3.1 and 4.1]). Fewer papers consider the case of nonconstant diffusion term. We mention the work in [30 ###reference_b30###] where the propagation of chaos result is proved under the assumptions that the drift is Lipschitz with respect to the measure, the diffusion coefficient is nondegenerate and the ratio between the drift and the diffusion is bounded. Moreover, a more general setting, where also the diffusion coefficient can potentially depend on the law of the process, has been considered recently in [25 ###reference_b25###]. Here, propagation of chaos is shown assuming different nonglobally Lipschitz conditions for both the drift and the diffusion terms. We would be interested in further investigating existence and uniqueness of solution and invariant measure for McKean SDEs and the corresponding interacting particle system for more general drift and diffusion terms. 
We believe that this would be possible at least for processes where the drift and the interaction coefficients are gradients of convex functions and the diffusion is bounded below by a positive constant. We think that the starting point for an idea on how to prove these properties consists in restricting the whole space to a ball of radius , so that the theory in [34 ###reference_b34###, Section 5], which is developed for bounded and globally Lipschitz functions, can be applied. Then, the extension to the whole space could be obtained by letting and combining a strong dissipativity condition for the confining potential and the theory of Lyapunov functions in [36 ###reference_b36###] together with the localization lemma in [45 ###reference_b45###, Chapter 3]. However, these last results must first be extended to the case of mean field SDEs. We will return to this problem in future work.\nWe notice that whenever we write and for a real matrix we mean its spectral and Frobenius norm, respectively. Moreover, all the constants will be denoted by even if their value can change from line to line.\nThe next lemma is a technical result which shows the convergence of the approximated moments towards the exact moments of the invariant measure of the mean field limit, and will be crucial for the proof of the main theorem.\nLet Assumptions 2.2 ###reference_theorem2### and 4.1 ###reference_theorem1### hold and let . Then, for all and for all there exists a constant independent of and such that\nBy the Minkowsky inequality we first have\nand then we analyze the two terms in the right-hand side separately. 
Since the initial distribution is equal to the invariant distribution of the mean field limit, then equation (2.3 ###reference_###) becomes a standard It\u00f4 SDE and we can apply the H\u00f6lder inequality and the ergodic theorem in [37 ###reference_b37###, Section 4] to obtain\nWe then consider and applying H\u00f6lder\u2019s inequality we have\nwhich implies\nApplying H\u00f6lder\u2019s and Jensen\u2019s inequality we get\nand due to the boundedness of the moments of and and the uniform propagation of chaos property in Assumption 4.1 ###reference_theorem1### we obtain\nwhich together with bound (4.7 ###reference_###) yields\nFinally, decomposition (4.4 ###reference_###) and bounds (4.5 ###reference_###) and (4.10 ###reference_###) give the desired result.\n\u220e\nThe technical assumption in Lemma 4.3 ###reference_theorem3### that the particles in the system are initially distributed according to the invariant measure of the corresponding mean field limit is not a serious limitation of the validity of the lemma, as long as the system is ergodic and satisfies the uniform propagation of chaos property, which is equivalent to having exponentially fast convergence to equilibrium, both for the -particle system and for the mean field SDE [13 ###reference_b13###, 24 ###reference_b24###]. This can also be observed in the numerical experiments presented in the next section, where the choice of the initial condition does not affect the performance of the estimator.\nLet us now focus on the system for the exact unknown (3.15 ###reference_###) and the system (3.13 ###reference_###) obtained using the available data, and consider the corresponding least squares linear systems\nwhere and are defined as\nWe remark that if the matrix has full rank, then the linear system has a unique solution which coincides with the minimum-norm least squares solution of the system (3.13 ###reference_###). 
In fact, since it holds\nthen we have\nNotice also that if we write\nthen the approximated linear system can be seen as a perturbation of the exact one, and we can employ the a priori forward error stability analysis performed in [46 ###reference_b46###, Section 3.1.2]. Therefore, in the next lemma we quantify the size of the perturbations.\nUnder the same assumptions of Lemma 4.3 ###reference_theorem3###, there exists a constant independent of and such that\nwhere denotes either the spectral norm for a matrix or the Euclidean norm for a vector.\nBy the triangle inequality we first have\nwhere we also used the fact that and for any matrix , where denotes the Frobenius norm. Then, by H\u00f6lder\u2019s inequality with exponents and and by Jensen\u2019s inequality we have\nand it now remains to bound the two expectations in the right-hand side. Due to Jensen\u2019s inequality we have\nand by using that for we get\nWe now notice that the components of the matrices and consist of sum and products of the moments and their approximations , and therefore due to the boundedness of the moments and by Lemma 4.3 ###reference_theorem3### we obtain\nwhich together with equation (4.18 ###reference_###) implies point . For point we proceed analogously and by triangle inequality we have\nThen, by H\u00f6lder\u2019s inequality with exponents and we have\nwhere the first and last terms are bounded by (4.21 ###reference_###). For the remaining term we have\nwhere we used Lemma 4.3 ###reference_theorem3### and the fact that the components of the vectors and consist of sum and products of the moments and their approximations . We remark that the vectors and depend also on the quantities and given in (3.11 ###reference_###) and (3.16 ###reference_###), which in turn depend on the approximated and exact moments, respectively, and therefore Lemma 4.3 ###reference_theorem3### can still be employed. 
Finally, equation (4.23 ###reference_###) together with (4.21 ###reference_###) and (4.24 ###reference_###) yields point and concludes the proof.\n\u220e\nWe are now ready to state and prove the main result of this section, i.e., the asymptotic unbiasedness of our estimator and its rate of convergence with respect to the final time and the number of interacting particles of the system.\nLet the assumptions of Lemma 4.3 ###reference_theorem3### hold and let . Then, there exists a constant independent of and such that\nand therefore\nLet us first define the event as\nand notice that by Markov\u2019s inequality and Lemma 4.5 ###reference_theorem5### we have\nThen, by the law of total expectation we obtain\nwhich since , a compact set, and due to estimate (4.28 ###reference_###) implies\nIt now remains to study the first term on the right-hand side. Since is the projection of onto the convex set , we have\nwhich yields\nThen, by [46 ###reference_b46###, Theorem 3.1] we have\nwhere denotes the condition number of the matrix defined as . Then, employing the inequality , which holds for any positive random variable , and estimate (4.28 ###reference_###) we obtain\nwhich due to Lemma 4.5 ###reference_theorem5### implies\nFinally, the desired results follow from equations (4.30 ###reference_###) and (4.35 ###reference_###).\n\u220e\nFrom the proof of Theorem 4.6 ###reference_theorem6### we notice that the precision of our estimator is affected by the condition number of the matrix . Moreover, we remark that the inverse of is never required nor written in the proof of Theorem 4.6 ###reference_theorem6###, but it is hidden in the proof of [46 ###reference_b46###, Theorem 3.1], where it can be bounded under the condition specified by the set .\nThe assumption that the matrix is nonsingular is fundamental for the solvability of the inference problem. 
In particular, if the matrix is noninvertible, then the parameters cannot be uniquely identified through the method of moments, and thus our approach cannot be applied in this context. Our methodology, indeed, strongly relies on the McKean\u2013Vlasov SDE and therefore it is only possible to estimate the parameters which are also present in the mean field dynamics. A particular example of this fact is given in Remark 3.3 ###reference_theorem3###. However, this assumption is not verifiable for most problems since the exact moments with respect to the invariant distributions of the mean field dynamics are usually unknown. In order to check in practice whether the assumption is likely to be satisfied, we suggest computing the condition number of the matrix , which is defined as , to verify whether the linear system is well-conditioned. If this is not the case, then it is likely that not all the parameters can be identified, at least through the method of moments. Our assumption about the nonsingularity of the matrix seems to be related to the invertibility of the normalized Fisher information matrix in [14 ###reference_b14###], at least in the presence of constant diffusion. Under this condition, the authors provide a detailed analysis of the identifiability of the parameters through the maximum likelihood estimator, in particular when the log-likelihood function is quadratic in the parameters [14 ###reference_b14###, Proposition 16]. However, unlike us, they assume that all the particles are observed over a finite time horizon, rather than one particle in the long time limit, so they do not need to consider the McKean\u2013Vlasov SDE at stationarity. We also mention the work in [1 ###reference_b1###], which studies inference for McKean\u2013Vlasov SDEs in which both the drift and the diffusion coefficients can depend on the law of the process. 
The authors assume access to discrete observations of the particles in the interacting system and propose a minimal contrast estimator, which is asymptotically unbiased and normal. In order to ensure the identifiability of the parameters, they impose a different condition on some particular quantities which cannot be computed explicitly. Finding more straightforward conditions, which can be explicitly verified a priori, to guarantee the identifiability of the parameters through the method of moments is indeed a difficult but interesting problem, which we would like to explore in future work."
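The practical condition-number check suggested in the text can be sketched in a few lines. The 2x2 matrix below is a made-up stand-in for the moment matrix (the paper's actual entries involve the stationary moments), so all numbers are purely illustrative:

```python
import numpy as np

# Illustrative sketch (not the paper's code): given an assembled moment-based
# linear system A @ theta = b, inspect its condition number to judge whether
# the parameters are likely identifiable through the method of moments.

def condition_number(A: np.ndarray) -> float:
    """cond(A) = sigma_max / sigma_min, computed via the SVD."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(s[0] / s[-1])

# Toy 2x2 system standing in for the moment equations (values are made up).
A = np.array([[1.0, -0.5],
              [0.3,  2.0]])
b = np.array([0.2, 1.1])

kappa = condition_number(A)
# Minimum-norm least squares solution, as used for the estimator.
theta_hat = np.linalg.lstsq(A, b, rcond=None)[0]
```

A large `kappa` signals an ill-conditioned system, in which case some parameters are likely not identifiable, at least through the method of moments.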
34
+ },
35
+ {
36
+ "section_id": "5",
37
+ "parent_section_id": null,
38
+ "section_name": "Numerical experiments",
39
+ "text": "In this section we present several numerical experiments to corroborate our theoretical results and show the performance of our estimator in inferring unknown parameters in interacting particle systems with polynomial drift, interaction and diffusion functions. We first perform a sensitivity analysis with respect to the number of moments equations considered, we verify the rate of convergence predicted by Theorem 4.6 ###reference_theorem6###, and we compare our methodology with different approaches in the literature. Then, we test our estimator with different examples which may not fit into the theory, and we finally employ our approach for a system of interacting particles in two dimensions. We generate synthetic data employing the Euler\u2013Maruyama method with a fine time step to compute numerically the solution of the system of SDEs (2.1 ###reference_###). We remark that we always set for all as initial condition. We then randomly select an index and we assume that we observe only the sample path of the -th particle of the system, from which we construct the linear system whose solution is the proposed estimator ."
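The data-generation step just described can be illustrated with an Euler-Maruyama sketch for a mean-field Ornstein-Uhlenbeck particle system; the drift form, the parameter names and values, and the function name are assumptions for the example, not the paper's exact setup:

```python
import numpy as np

# Hedged sketch: simulate N interacting particles with a mean-field
# Ornstein-Uhlenbeck-type drift, starting all particles at 0 as in the text.
# dX_i = -(alpha * X_i + (X_i - Xbar)) dt + sqrt(2 * sigma) dW_i  (illustrative)

def simulate_ou_particles(alpha, sigma, N=50, T=10.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = round(T / dt)
    X = np.zeros(N)                      # X_0^i = 0 for every particle i
    path = np.empty((n_steps + 1, N))
    path[0] = X
    for n in range(n_steps):
        drift = -(alpha * X + (X - X.mean()))   # confinement + interaction
        X = X + drift * dt + np.sqrt(2.0 * sigma * dt) * rng.standard_normal(N)
        path[n + 1] = X
    return path

path = simulate_ou_particles(alpha=1.0, sigma=0.5)
```

In the experiments only one column of `path` (a single particle's trajectory) would be used to build the estimator.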
40
+ },
41
+ {
42
+ "section_id": "5.1",
43
+ "parent_section_id": "5",
44
+ "section_name": "Sensitivity analysis",
45
+ "text": "###figure_1### \n###figure_2### We consider the setting of Section 3.1 ###reference_###, i.e., the mean field Ornstein\u2013Uhlenbeck process, choosing as exact unknown parameters and , and we then aim to estimate . Hence, Assumption 2.2 ###reference_theorem2### is satisfied with unique invariant measure of the McKean\u2013Vlasov SDE given by , and Assumption 4.1 ###reference_theorem1### holds true due to [34 ###reference_b34###, Theorem 3.3] and [16 ###reference_b16###, Lemma 2.3.1]. In Figure 1 ###reference_### we perform a sensitivity analysis with respect to the number of moments equations employed in the construction of our estimator. We compute for all the particles in the system and for different values of , fixing the final time and the number of particles . As outlined in Section 3.1 ###reference_###, the linear system which has to be solved is where\nand\nMoreover, the corresponding exact vector and matrix for the true values and are given by\nand\nTherefore, we have\nwhich implies as , which is the additional assumption in Theorem 4.6 ###reference_theorem6###. On the left of Figure 1 ###reference_### we then plot the average error and we observe that the inference worsens as the number of moments equations increases. This may sound counterintuitive as one would expect an improvement of the estimation when the number of constraints is higher. However, as highlighted in Remark 4.7 ###reference_theorem7###, the precision of our estimator is dependent on the condition number of the matrix in the linear system, whose average is plotted on the right of Figure 1 ###reference_###. We notice indeed that the condition number increases with and, in particular, it has the same behavior as the estimation error. Therefore, we believe that the number of constraints should not be much bigger than the minimum number of equations needed for the linear system not to be underdetermined."
46
+ },
47
+ {
48
+ "section_id": "5.2",
49
+ "parent_section_id": "5",
50
+ "section_name": "Rate of convergence",
51
+ "text": "###figure_3### \n###figure_4### In this section we wish to validate numerically the theoretical result presented in Theorem 4.6 ###reference_theorem6###, i.e., the rate of convergence of the estimator with respect to the final time and the number of particles . We consider the framework of Example 2.3 ###reference_theorem3### and we set and , so that and . Assumptions 2.2 ###reference_theorem2### and 4.1 ###reference_theorem1### are guaranteed by [34 ###reference_b34###]. Moreover, we choose and and we aim to estimate the exact unknown parameter . The numerical results are illustrated in Figure 2 ###reference_###, where we use moments equations with the even moments\nOn the left, we fix the number of particles and we vary the final time with . We then plot the average error computed for all the particles in the system as a function of the final time. On the right of the same figure, we fix and we vary with . In order to have a fair comparison, we have to compute the average error using the same number of samples; therefore, we compute the estimator only for the first particle in the system, we repeat the same procedure times, and we then plot the error as a function of the number of interacting particles. In both cases we observe that the predicted rate of convergence is attained."
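The moment approximation underlying these experiments is a time average along the single observed path, justified by the ergodic theorem. A minimal sketch, using a synthetic stand-in trajectory rather than actual SDE output:

```python
import numpy as np

# Sketch of the ergodic approximation: the k-th stationary moment is
# estimated by the time average (1/T) * integral_0^T X_t^k dt, discretized
# here as a plain mean over a uniform grid of observations.

def empirical_moments(traj: np.ndarray, max_order: int) -> np.ndarray:
    """traj: 1-D array of samples X_{t_n} on a uniform grid in [0, T]."""
    return np.array([np.mean(traj ** k) for k in range(1, max_order + 1)])

rng = np.random.default_rng(1)
traj = rng.standard_normal(200_000)       # stand-in for one particle's path
m = empirical_moments(traj, max_order=4)  # ~ [0, 1, 0, 3] for N(0,1) samples
```

These empirical moments would then populate the matrix and right-hand side of the least squares system.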
52
+ },
53
+ {
54
+ "section_id": "5.3",
55
+ "parent_section_id": "5",
56
+ "section_name": "Comparison with other estimators in case of discrete observations",
57
+ "text": "###figure_5### \n###figure_6### In this section we compare our methodology with different approaches in the literature. In particular, we consider the maximum likelihood estimator (MLE) in [27 ###reference_b27###] and the eigenfunction estimator in [43 ###reference_b43###]. We consider the setting of Section 3.1 ###reference_###, we fix the diffusion coefficient and we aim to efficiently estimate the drift coefficient , whose exact value is chosen to be . Regarding the first estimator, in [27 ###reference_b27###] the author rigorously derives the MLE of the drift coefficient in the context of Section 3.1 ###reference_###, when the path of all the interacting particles in the system is observed. However, since in this work we assume that we know only the trajectory of a single particle and since for large values of all the particles are approximately independent and identically distributed, we replace the sample mean with the expectation\nwith respect to the invariant measure, i.e., = 0, and we ignore the sum over all the particles. On the other hand, the second estimator which we consider is suitable for parameter estimation when a sequence of discrete-time observations is given. Therefore, we verify how the three approaches perform for different values of the sampling rate, i.e., the distance between two consecutive observations. We remark that in order to construct our estimator using the method of moments and the MLE, all the integrals have to be discretized. Hence, letting be the sampling rate and , we approximate the moments as\nMoreover, the other two estimators are written explicitly in [43 ###reference_b43###, Section 3.2] and are given by\nIn Figure 3 ###reference_### we plot the estimated drift coefficient using only one particle of the system (left) and computing the average of the estimations obtained with all the particles (right) for different values of the sampling rate , with . 
We observe that the MLE is biased when the distance between two consecutive observations is not sufficiently small. On the other hand, our estimator and the eigenfunction estimator are able to infer the correct value of the drift coefficient independently of the sampling rate, and we notice that the eigenfunction estimator seems to give slightly better results. We remark however that, even if in this case the eigenfunction estimator has a closed-form expression, this approach is in general computationally much more expensive than the method of moments, as it requires the solution of the eigenvalue problem for the generator of the mean field limit."
58
+ },
59
+ {
60
+ "section_id": "5.4",
61
+ "parent_section_id": "5",
62
+ "section_name": "Bistable potential",
63
+ "text": "We still consider the setting of Example 2.3 ###reference_theorem3###, but we now analyze the bistable potential\nWe consider two cases where the bistable potential appears either in the confining potential or in the interaction potential. In particular, we first set and and then and , and in both cases we set . Our goal is to estimate the three-dimensional coefficient with exact parameters , and , employing the method of moments with equations and fixing as final time and as number of particles. We remark that in these experiments the hypothesis of uniqueness of the invariant measure given in Assumption 2.2 ###reference_theorem2### is not satisfied. We notice that in the first experiment, given this choice of the parameters, the mean field limit admits three different invariant distributions. Nevertheless, the numerical results presented in Figure 4 ###reference_###, where we plot the estimations computed for all the particles, suggest that our methodology works even in the presence of multiple stationary states. Indeed, the exact moments with respect to the right invariant measure are automatically estimated by the empirical moments, and it is not necessary to know a priori the stationary state. In particular, we observe that for the drift and the interaction components the majority of the values are concentrated around the exact unknown, and their average provides a reasonable approximation. On the other hand, this is not the case for the diffusion coefficient, for which the variance of the estimator is close to zero. However, we notice that the bias is almost negligible and therefore all the particles give a good approximation of the true value. This is caused by the fact that we are including equation (3.11 ###reference_###) for the quadratic variation, which holds true also for the interacting particles and not only for the mean field limit."
64
+ },
65
+ {
66
+ "section_id": "5.5",
67
+ "parent_section_id": "5",
68
+ "section_name": "Multiplicative noise",
69
+ "text": "###figure_7### We now focus on estimating a more complex diffusion term and we consider the functions\nwith exact unknown components and , and we write . We fix the final time and the number of particles . In Figure 5 ###reference_### we plot the estimation computed for each particle using moments equations to construct the estimator\nWe observe that the first and the second components are slightly overestimated and underestimated, respectively, but on average the estimators provide a reliable approximation of the correct unknowns."
70
+ },
71
+ {
72
+ "section_id": "5.6",
73
+ "parent_section_id": "5",
74
+ "section_name": "Interacting Fitzhugh\u2013Nagumo Neurons",
75
+ "text": "In this last numerical experiment we consider the system of noisy interacting Fitzhugh\u2013Nagumo SDEs which describe the evolution of the membrane potential of interacting neurons [11 ###reference_b11###, Section 4.2]\nwhere controls the interaction between neurons, is the diffusion coefficient and is a kinetic parameter related to the input current and synaptic conductance. The corresponding mean field limit then reads\nWe note that this mean field SDE is degenerate, since noise acts directly only on the equation for . Therefore, the analysis presented in the previous section needs to be modified to take into account the hypoelliptic nature of the dynamics. A rigorous analysis of the mean field Fitzhugh\u2013Nagumo model can be found in [39 ###reference_b39###]. Inference for hypoelliptic mean field SDEs is a very interesting problem that we will return to in future work. The unknown coefficient which we aim to infer is with exact values , and . Moreover, the number of neurons is , the final time is and we employ moments equations together with the equation given by the quadratic variation\nwhere stands for the empirical approximations of the moments with respect to the invariant distribution of the mean field dynamics. The numerical results are shown in Figure 6 ###reference_###, where we plot the estimations obtained with all the interacting neurons. We observe that our methodology is able to correctly infer the unknown parameters, in the sense that the majority of the obtained values are concentrated around the true unknowns. Regarding the diffusion coefficient, we can repeat the same considerations as in Section 5.4 ###reference_###, i.e., that even if the exact parameter is not included in the range of the estimations, the bias is however close to zero."
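A hedged sketch of simulating such a network follows: noise enters only the membrane-potential equation, matching the hypoelliptic structure noted in the text. The drift terms and parameter values are illustrative placeholders, since the exact equations are not reproduced here:

```python
import numpy as np

# Illustrative FitzHugh-Nagumo network: V is the membrane potential (noisy),
# w the recovery variable (deterministic), theta the coupling strength.
# Classic parameter choices (0.7, 0.8) stand in for the paper's values.

def simulate_fhn(theta=1.0, sigma=0.3, gamma=0.5, N=100, T=5.0, dt=1e-3, seed=2):
    rng = np.random.default_rng(seed)
    V = np.zeros(N)   # membrane potentials
    w = np.zeros(N)   # recovery variables
    for _ in range(round(T / dt)):
        coupling = theta * (V.mean() - V)            # mean-field interaction
        dV = (V - V**3 / 3.0 - w) + coupling
        dw = gamma * (V + 0.7 - 0.8 * w)             # no noise: degenerate SDE
        V = V + dV * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)
        w = w + dw * dt
    return V, w

V, w = simulate_fhn()
```

Only the `V` trajectories carry diffusion, which is why the analysis must account for hypoellipticity.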
76
+ },
77
+ {
78
+ "section_id": "6",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "In this work we considered the framework of large interacting particle systems in one dimension with polynomial confining and interaction potentials, as well as a polynomial diffusion function, with unknown coefficients. We proposed a novel estimator for inferring these parameters from continuous-time observations of one single particle in the system. Our approach consists in writing a set of linear equations for the unknown coefficients by multiplying the stationary Fokker\u2013Planck equation by monomials, and thus obtaining equations which depend on the moments of the invariant measure of the mean field limit. An approximation of the moments with respect to the invariant state can be obtained by means of the ergodic theorem and using the available path of data, yielding a linear system for the unknown parameters. Moreover, we considered an additional equation employing the definition of the quadratic variation of a stochastic process. We then defined our estimator to be the least squares solution of the final augmented linear system. Under the assumption of ergodicity and uniqueness of the invariant state of the limiting McKean\u2013Vlasov SDE, we proved that our estimator is asymptotically unbiased when the number of particles and the time horizon tend to infinity, and we also provided a rate of convergence with respect to these two quantities. We remark that this technique is easy to implement since we only need to approximate the moments of the invariant measure of the mean field limit and compute the least squares solution of a linear system. 
Nevertheless, the numerical experiments presented above demonstrate the accuracy of the obtained estimations even in the case of multiple stationary states for the mean field limit and in the multidimensional setting.\nThis work has a natural and interesting direction of research, namely the extension of our methodology to the nonparametric setting, i.e., when the functional forms of the drift, interaction and diffusion functions are not known. In particular, if these functions are sufficiently regular, it would be interesting to first approximate them by a truncated Taylor series and then infer the coefficients of the Taylor expansion employing our approach. In this case the theoretical analysis will be based on the study of three simultaneous limits because, in addition to the number of particles and the final time of observation, another quantity of interest will be the dimension of the truncated basis. We will return to this problem in future work."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {},
86
+ "image_paths": {
87
+ "1(a)": {
88
+ "figure_path": "2212.00403v2_figure_1(a).png",
89
+ "caption": "Figure 1: Sensitivity analysis for the mean field Ornstein\u2013Uhlenbeck process with respect to the number M\ud835\udc40Mitalic_M of moments equations. Left: error of the estimator \u03b8^T,NMsuperscriptsubscript^\ud835\udf03\ud835\udc47\ud835\udc41\ud835\udc40\\widehat{\\theta}_{T,N}^{M}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_T , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT. Right: condition number of the matrix \ud835\udc9c\ud835\udc9c\\mathcal{A}caligraphic_A.",
90
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/sensitivity_analysis_error.png"
91
+ },
92
+ "1(b)": {
93
+ "figure_path": "2212.00403v2_figure_1(b).png",
94
+ "caption": "Figure 1: Sensitivity analysis for the mean field Ornstein\u2013Uhlenbeck process with respect to the number M\ud835\udc40Mitalic_M of moments equations. Left: error of the estimator \u03b8^T,NMsuperscriptsubscript^\ud835\udf03\ud835\udc47\ud835\udc41\ud835\udc40\\widehat{\\theta}_{T,N}^{M}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_T , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT. Right: condition number of the matrix \ud835\udc9c\ud835\udc9c\\mathcal{A}caligraphic_A.",
95
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/sensitivity_analysis_K.png"
96
+ },
97
+ "2(a)": {
98
+ "figure_path": "2212.00403v2_figure_2(a).png",
99
+ "caption": "Figure 2: Rate of convergence of the estimator \u03b8^T,NMsuperscriptsubscript^\ud835\udf03\ud835\udc47\ud835\udc41\ud835\udc40\\widehat{\\theta}_{T,N}^{M}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_T , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT towards the exact value \u03b8*superscript\ud835\udf03\\theta^{*}italic_\u03b8 start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT with respect to the final time T\ud835\udc47Titalic_T (left) and the number of interacting particles N\ud835\udc41Nitalic_N (right).",
100
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/rate_convergence_T.png"
101
+ },
102
+ "2(b)": {
103
+ "figure_path": "2212.00403v2_figure_2(b).png",
104
+ "caption": "Figure 2: Rate of convergence of the estimator \u03b8^T,NMsuperscriptsubscript^\ud835\udf03\ud835\udc47\ud835\udc41\ud835\udc40\\widehat{\\theta}_{T,N}^{M}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_T , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT towards the exact value \u03b8*superscript\ud835\udf03\\theta^{*}italic_\u03b8 start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT with respect to the final time T\ud835\udc47Titalic_T (left) and the number of interacting particles N\ud835\udc41Nitalic_N (right).",
105
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/rate_convergence_N.png"
106
+ },
107
+ "3(a)": {
108
+ "figure_path": "2212.00403v2_figure_3(a).png",
109
+ "caption": "Figure 3: Comparison between the estimator \u03b8^T,NMsuperscriptsubscript^\ud835\udf03\ud835\udc47\ud835\udc41\ud835\udc40\\widehat{\\theta}_{T,N}^{M}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_T , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT with the MLE and the eigenfunction estimator in case of discrete-time observations for different values of the sampling rate \u0394\u0394\\Deltaroman_\u0394. Left: estimation obtained with one particle. Right: average of the estimations obtained with all the particles in the system.",
110
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/comparison_OU_1.png"
111
+ },
112
+ "3(b)": {
113
+ "figure_path": "2212.00403v2_figure_3(b).png",
114
+ "caption": "Figure 3: Comparison between the estimator \u03b8^T,NMsuperscriptsubscript^\ud835\udf03\ud835\udc47\ud835\udc41\ud835\udc40\\widehat{\\theta}_{T,N}^{M}over^ start_ARG italic_\u03b8 end_ARG start_POSTSUBSCRIPT italic_T , italic_N end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_M end_POSTSUPERSCRIPT with the MLE and the eigenfunction estimator in case of discrete-time observations for different values of the sampling rate \u0394\u0394\\Deltaroman_\u0394. Left: estimation obtained with one particle. Right: average of the estimations obtained with all the particles in the system.",
115
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/comparison_OU_all.png"
116
+ },
117
+ "5": {
118
+ "figure_path": "2212.00403v2_figure_5.png",
119
+ "caption": "Figure 5: Scatter plot for the inference of the two-dimensional diffusion coefficient for the case of simultaneous additive and multiplicative noise.",
120
+ "url": "http://arxiv.org/html/2212.00403v2/extracted/5371908/figures/multiplicative_noise.png"
121
+ }
122
+ },
123
+ "validation": true,
124
+ "references": [
125
+ {
126
+ "1": {
127
+ "title": "Preprint, 2022.",
128
+ "author": "F. Comte and V. Genon-Catalot, Nonparametric adaptive estimation for\ninteracting particle systems.",
129
+ "venue": null,
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "2": {
135
+ "title": "Fundamentals and applications.",
136
+ "author": "T. D. Frank, Nonlinear Fokker-Planck equations, Springer Series\nin Synergetics, Springer-Verlag, Berlin, 2005.",
137
+ "venue": null,
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "3": {
143
+ "title": "Preprint, 2022.",
144
+ "author": "V. Genon-Catalot and C. Lar\u00e9do, Inference for ergodic\nmckean-vlasov stochastic differential equations with polynomial\ninteractions.",
145
+ "venue": null,
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "4": {
151
+ "title": "A rigorous first course.",
152
+ "author": "V. M. Panaretos, Statistics for mathematicians, Compact Textbooks\nin Mathematics, Birkh\u00e4user/Springer, [Cham], 2016.",
153
+ "venue": null,
154
+ "url": null
155
+ }
156
+ }
157
+ ],
158
+ "url": "http://arxiv.org/html/2212.00403v2"
159
+ }
20240127/2302.11529v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2303.15198v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2304.01295v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2305.03939v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2305.07984v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2305.11467v4.json ADDED
@@ -0,0 +1,198 @@
1
+ {
2
+ "title": "Learning Sequence Descriptor based on Spatio-Temporal Attention for Visual Place Recognition",
3
+ "abstract": "Visual Place Recognition (VPR) aims to retrieve frames from a geotagged database that are located at the same place as the query frame. To improve the robustness of VPR in perceptually aliasing scenarios, sequence-based VPR methods have been proposed. These methods are based either on matching between frame sequences or on extracting sequence descriptors for direct retrieval. However, the former is usually based on the assumption of constant velocity, which rarely holds in practice, and is computationally expensive and sensitive to sequence length. Although the latter overcomes these problems, existing sequence descriptors are constructed by aggregating features of multiple frames only, without any interaction of temporal information, and thus cannot yield descriptors with spatio-temporal discrimination.\nIn this paper, we propose a sequence descriptor that effectively incorporates spatio-temporal information. Specifically, spatial attention within the same frame is utilized to learn spatial feature patterns, while attention in corresponding local regions of different frames is utilized to learn the persistence or change of features over time. We use a sliding window to control the temporal range of attention and use relative positional encoding to construct sequential relationships between different features. This allows our descriptors to capture the intrinsic dynamics in a sequence of frames.\nComprehensive experiments on challenging benchmark datasets show that the proposed approach outperforms recent state-of-the-art methods.\nThe code is available at https://github.com/tiev-tongji/Spatio-Temporal-SeqVPR.",
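The sliding-window attention idea sketched in the abstract can be illustrated with a toy NumPy example; the shapes, the frame-gap masking rule, and the absence of learned query/key/value projections are simplifications for illustration, not the paper's actual architecture:

```python
import numpy as np

# Toy spatio-temporal attention over patch tokens from several frames.
# A token may attend only to tokens whose frame index is within `window`,
# which restricts the temporal range of attention as described above.

def windowed_attention(tokens, frame_ids, window=2):
    """tokens: (n, d) patch features; frame_ids: (n,) frame index per token."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)       # scaled dot-product scores
    gap = np.abs(frame_ids[:, None] - frame_ids[None, :])
    scores = np.where(gap <= window, scores, -np.inf)   # temporal mask
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    return weights @ tokens

rng = np.random.default_rng(4)
tokens = rng.standard_normal((12, 8))             # 4 frames x 3 patches each
frame_ids = np.repeat(np.arange(4), 3)
out = windowed_attention(tokens, frame_ids, window=1)
```

With `window=1`, a patch in frame 0 can interact with patches in frames 0 and 1 only, mixing spatial (same-frame) and temporal (cross-frame) attention in one operation.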
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "Visual place recognition (VPR) aims to retrieve frames from a geotagged database that are located at the same place as the queried frame [1 ###reference_b1###].\nIt is typically used for loop detection in simultaneous localization and mapping (SLAM) as well as for visual relocalization.\nVarious approaches have been proposed to enhance the performance of VPR by learning improved single frame representation [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###].\nHowever, single frame-based VPR is vulnerable to drastic changes in viewpoint and appearance, so studies have delved into the utilization of sequence information to address this issue.\nOne category of sequence-based methods is based on sequence matching.\nThis approach involves comparing each frame of the query sequence with the database to create a matching matrix.\nThen, the diagonal values are aggregated to obtain a similarity score to determine the location of the query sequence.\nHowever, this method is mainly suitable for cases where the camera motion remains relatively stable [8 ###reference_b8###].\nOtherwise, incorrect matches may occur.\nAdditionally, the computational cost of sequence matching increases with the sequence length and map size [9 ###reference_b9###].\nTo overcome the aforementioned challenges, researchers have proposed utilizing descriptors to represent a sequence [10 ###reference_b10###].\nSequence descriptors offer better scalability for varying sequence lengths and greater robustness against perceptual aliasing.\nHowever, existing research only aggregates descriptors [11 ###reference_b11###] or local features of multiple frames [12 ###reference_b12###], neglecting the cross-frame temporal interactions, which makes sequence descriptors less discriminative.\nIn this paper, we propose an approach for exploring spatio-temporal interactions within frame sequences to extract sequence 
descriptors.\nSuch sequence descriptors take into account both the temporal correlation across multiple frames and the spatial structure distribution in a frame.\nBy employing the attention mechanism, we adaptively weight image patches to capture and combine discriminative features in the sequence.\nA sliding window is used to control the attentional range and reduce the computational burden.\nMoreover, relative positional encoding is employed to guide the sequence descriptors in learning spatio-temporal patterns rather than specific visual content.\nThis choice stems from the observation that, during camera motion, the visual content moves with the frame, while the relative positions of spatio-temporal patterns remain constant.\nThe contributions of this paper are threefold:\nWe introduce a spatio-temporal sequence descriptor that effectively captures the interaction of the spatial and temporal information simultaneously.\nWe investigate the impact of positional encoding on the spatial and temporal information interactions.\nOur approach delivers competitive results across multiple datasets, outperforming existing state-of-the-art methods based on sequence descriptors."
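Once a sequence descriptor is extracted, place recognition reduces to a nearest-neighbor search over the database descriptors. A toy sketch with random placeholder descriptors (in the paper they would come from the learned network):

```python
import numpy as np

# Descriptor-based retrieval sketch: cosine-similarity nearest neighbor
# between one query sequence descriptor and a database of descriptors.

def retrieve(query: np.ndarray, database: np.ndarray) -> int:
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return int(np.argmax(db @ q))        # index of the most similar sequence

rng = np.random.default_rng(0)
database = rng.standard_normal((1000, 256))            # placeholder descriptors
query = database[42] + 0.05 * rng.standard_normal(256) # noisy copy of entry 42
best = retrieve(query, database)
```

This single comparison per database entry is what makes sequence descriptors cheaper than frame-by-frame sequence matching.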
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II RELATED WORKS",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Sequence-based VPR",
21
+ "text": "There are mainly two avenues for utilizing sequence information in VPR: sequence matching and sequence descriptor extraction [1 ###reference_b1###].\nSequence matching involves two key steps.\nInitially, a similarity matrix is constructed by comparing the descriptors of each frame in the query sequence with the descriptors of all frames in the database.\nSubsequently, the most similar sequence in the database is determined by aggregating the individual similarity scores.\nSeqSLAM [8 ###reference_b8###] is a pioneering example of sequence matching.\nHowever, SeqSLAM can be computationally demanding, especially when handling large maps.\nAdditionally, it relies on the assumption of constant velocity, which can limit its applicability in scenarios with varying motion characteristics.\nTo address these challenges, several innovations have emerged.\nFast-SeqSLAM [9 ###reference_b9###] leverages an approximate nearest neighbor algorithm to reduce time complexity without degrading accuracy.\n[13 ###reference_b13###] proposes a local matching method based on an improved dynamic time warping algorithm, which relaxes the assumption of constant velocity and concurrently reduces time complexity.\n[14 ###reference_b14###] and [15 ###reference_b15###] use a cost matrix-based approach via dynamic programming to alleviate the issue of missing frames.\nThese methods have also been evaluated on sequences with strong seasonal changes, showing promising performance.\nHowever, these methods operate on the matching scores obtained from the underlying single frame descriptors.\nIn the second avenue, a descriptor is extracted to represent a sequence, followed by a direct sequence-to-sequence similarity search.\nThis not only reduces the matching cost but also incorporates temporal cues into the descriptor.\n[10 ###reference_b10###] first proposes the idea of fusing multiple individual descriptors to generate a sequence descriptor.\nSubsequently, SeqNet [11 
###reference_b11###] proposes using a 1-D convolution to learn frame-level features into a sequence descriptor.\nHowever, this approach is implemented based on pre-computed individual frame descriptors, which prevents the sequence descriptors from capturing the local features within each frame.\nSeqVLAD [12 ###reference_b12###] proposes a detailed taxonomy of techniques using sequence descriptors.\nIt analyzes various mechanisms for fusing individual frame information, and further investigates the feasibility of using the Transformer as the backbone.\nThe sequence descriptor is aggregated by NetVLAD [6 ###reference_b6###] directly from the local features of each frame in the sequence.\nHowever, it does not consider the temporal information interaction across frames, resulting in descriptors without spatio-temporal discrimination."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Spatio-temporal Attention Mechanism",
+ "text": "Spatio-temporal attention mechanisms have been applied in various tasks, including video retrieval, video classification, and more.\nIn the context of video action recognition, [16 ###reference_b16###] presents a general ConvNet architecture.\nIt leverages multiplicative interactions of spatio-temporal features to capture long-term dependencies among local features.\n[17 ###reference_b17###] proposes a spatio-temporal attention network to learn discriminative feature representations for actions.\nIn the video classification task, [18 ###reference_b18###] explores the efficacy of the spatio-temporal attention mechanism for feature learning directly from image patches.\nIn the realm of video action recognition and object detection,\n[19 ###reference_b19###] introduces a novel multi-scale vision transformer, which achieves state-of-the-art performance.\nThe spatio-temporal attention mechanism has also been extended to diverse tasks such as image captioning [20 ###reference_b20###] and person re-identification [21 ###reference_b21###].\nThese methods leverage both spatial and temporal information to selectively focus on relevant video regions or frames.\nInterestingly, despite the success of the spatio-temporal attention mechanism in various applications, it has not yet been integrated into sequence-based VPR."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III METHODOLOGY",
+ "text": "We begin by presenting the architecture, which encompasses spatio-temporal-based feature learning and aggregation.\nSubsequently, we introduce the loss function.\n###figure_1###"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Architecture",
+ "text": "The architecture of our model is illustrated in Figure 1 ###reference_###.\nIt takes a frame sequence as input, composed of image frames with dimensions .\nSince vision transformer (ViT) [22 ###reference_b22###] is computation-intensive and lacks the inductive biases inherent in convolutional neural networks (CNN) [23 ###reference_b23###], we utilize convolution layers to map each frame to a feature map , where , and represents the number of channels.\nSubsequently, the feature map of each frame is split into non-overlapping patches, where is determined as , given the patch size of .\nFollowing this, each patch is flattened and mapped to an embedding using a trainable linear projection as Equation (1 ###reference_###).\nwhere , , and which indicates the patch.\nThen, is employed as the input embedding for Spatial and Temporal Transformer Encoders."
+ },
+ {
+ "section_id": "3.1.1",
+ "parent_section_id": "3.1",
+ "section_name": "III-A1 Spatial Attention",
+ "text": "In each frame, the positions of the local features reflect the spatial distribution of the features, and this distribution remains relatively consistent within a frame under the same view.\nSimilar to ViT [22 ###reference_b22###] and as illustrated in Figure 2 ###reference_### (a), we incorporate position information into the patch embedding using a standard learnable absolute positional embedding denoted as , as follows111The superscript is omitted where it does not cause ambiguity.:\nWe employ an -layer transformer encoder for spatial fusion, which outputs spatial fusion embeddings .\nEach layer consists of a multi-head self-attention (MSA) module, a multi-layer perceptron (MLP), Layer-norm (LN) blocks and residual connections.\nIn the MSA module, linear projections , , are applied to query (), key () and value () according to Equation (3 ###reference_###), where , represents the head index and .\nSubsequently, the self-attention weights are computed through the dot-product of and , then the output is generated by multiplying scaled weights and in Equation (4 ###reference_###).\nFinally, the outputs from the heads are concatenated to form in Equation (5 ###reference_###), which serves as input to the Layer-norm and MLP components."
+ },
+ {
+ "section_id": "3.1.2",
+ "parent_section_id": "3.1",
+ "section_name": "III-A2 Temporal Attention",
+ "text": "We define a sliding window of size to control the temporal self-attention range in the sequence.\nWithin each layer of the temporal transformer attention, self-attention is performed within identical sliding windows of the same region across multiple images.\nThe window moves along the rows and columns, indicating that attention is performed between temporally adjacent regions, as illustrated in Figure 2 ###reference_### (d), rather than between two images.\nIn the temporal interaction, from Equation (1 ###reference_###) is taken as the input.\nWe redefine in Equation (3 ###reference_###) as , where is the frame index, is the patch index and is the sliding window size, and we generate , , respectively.\nCompared to absolute positions, we argue that relative positions provide a more accurate description of the consistency or variation of local features in a sequence over time.\nThis is because relative position information can capture the changing relationship between the positions of two patches across different frames, as illustrated in Figure 2 ###reference_### (d), whereas absolute position information merely considers the static relationship between one patch and all patches, as illustrated in Figure 2 ###reference_### (b).\nWe encode the relative position between two input embeddings and in , into a relative positional embedding , following [24 ###reference_b24###].\nThe representation of pairwise encoding is then embedded into the self-attention module222Here we only show the single-head self-attention, for multi-head self-attention, please refer to Equation (3 ###reference_###), Equation (4 ###reference_###) and Equation (5 ###reference_###).,\nwhere , and , and the changes in each layer of the temporal transformer encoder.\nWe further decompose the relative positional embedding into height, width and temporal axes following [19 ###reference_b19###], which can reduce the number of learnable parameters.\nWe adopt the -layer transformer encoder for temporal fusion, then the temporal fusion embeddings are generated."
+ },
+ {
+ "section_id": "3.1.3",
+ "parent_section_id": "3.1",
+ "section_name": "III-A3 Aggregation",
+ "text": "In our spatial and temporal attention blocks, the class token is removed.\nThis decision aligns with our strategy of utilizing attention for information interaction, rather than extracting frame descriptors.\nThe sequence descriptor is aggregated by NetVLAD [6 ###reference_b6###].\nGiven a set of D-dimensional embeddings from the spatial and temporal Transformers, we combine them based on the position of patches, i.e., .\nFollowing this, we perform aggregation on the resulting embeddings using NetVLAD,\nwhere is a single embedding, is the -th centroid which is a trainable parameter, and the is a soft-assignment defined as:\nwhere , are also trainable parameters.\n###figure_2###"
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Loss Function",
+ "text": "Similar to the training regime of NetVLAD, we use the max-margin triplet loss as below:\nwhere is the desired margin between the norm of the anchor and the best positive and that of and the hardest negatives in the descriptor space.\nThe is the number of hard negative samples corresponding to each anchor.\nWe train our model using a set of reference and query databases.\nFor each query, we consider it as an anchor, and its positives and negatives are generated from the reference database, which will be detailed in Section IV-B ###reference_###.\n###table_1###"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV EXPERIMENTS",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Datasets",
+ "text": "In our experiments, we use three datasets: MSLS [25 ###reference_b25###], NordLand [26 ###reference_b26###], Oxford RobotCar [27 ###reference_b27###], as summarized in Table I ###reference_###."
+ },
+ {
+ "section_id": "4.1.1",
+ "parent_section_id": "4.1",
+ "section_name": "IV-A1 Mapillary Street Level Sequences (MSLS)",
+ "text": "MSLS is a comprehensive dataset consisting of street-level image sequences, designed to support VPR studies.\nThese sequences are collected from various cities.\nWe use Melbourne for training and Amman, Boston, San Francisco and Copenhagen for testing."
+ },
+ {
+ "section_id": "4.1.2",
+ "parent_section_id": "4.1",
+ "section_name": "IV-A2 NordLand",
+ "text": "The Nordland dataset comprises a collection of images captured during rail journeys across four seasons, covering various weather and lighting conditions.\nWe use the Summer-Winter pair for training, and the Spring-Fall pair for testing."
+ },
+ {
+ "section_id": "4.1.3",
+ "parent_section_id": "4.1",
+ "section_name": "IV-A3 Oxford RobotCar",
+ "text": "The Oxford RobotCar dataset is a large-scale dataset for autonomous driving research.\nIt encompasses road scenes captured during different time periods.\nWe design two experimental sub-datasets: Oxford1 and Oxford2.\nFor Oxford1, we split the database (2015-03-17-11-08-44, day) and query (2014-12-16-18-44-24, night) into a train set and a test set. For Oxford2, we use a database (2014-12-16-09-14-09, day) and query (2014-12-17-18-18-43, night) for training and a database (2014-11-18-13-20-12, day) and query (2014-12-16-18-44-24, night) for testing.\nThese datasets are pre-processed to keep an approximately 2-meter frame separation based on the latitude and longitude of each frame location."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Implementation Details",
+ "text": "###table_2### The best and second-best results for each dataset are highlighted. The best overall results on each dataset are indicated in bold, while the second-best results are underlined.\n###table_3### The best and second-best results for each dataset are highlighted. The best overall results on each dataset are indicated in bold, while the second-best results are underlined.\nArchitecture. We implement our method using the PyTorch framework [28 ###reference_b28###] on an NVIDIA RTX A6000 card.\nIn the patch embedding process, the CNN comprises two convolutional layers.\nThe first layer maps 3 channels to 64 channels and the second layer maps 64 channels to 384 channels.\nWe set the convolution parameters as follows: , and .\nAfter each convolution operation, we apply the ReLU activation function followed by max pooling.\nThen we incorporate the spatial transformer encoder with layers and the temporal transformer encoder with layers.\nAdditionally, we use a multi-head in transformer with heads.\nIn the temporal transformer encoder, we set the size of sliding window and the .\nBoth the inputs and outputs of the transformers are embeddings with a dimensionality of .\nIn the NetVLAD module, we configure the number of clusters to be 64, yielding sequence descriptors with dimensions of 384 × 64 without the application of Principal Component Analysis (PCA) [29 ###reference_b29###].\nTo facilitate comparison with other methods, we perform dimensionality reduction using PCA, reducing the dimensionality to 4096.\nTraining. 
In the training phase, we initialize the model with pre-trained parameters from CCT [23 ###reference_b23###] and adopt the Adam optimizer [30 ###reference_b30###].\nAll images are resized to 384 × 384.\nThe learning rates for the spatial transformer encoder, temporal transformer encoder and NetVLAD are configured as , and respectively.\nWe set the , with each batch consisting of a query sequence, a best positive sequence and 5 hardest negative sequences ( as referenced in Equation (9 ###reference_###)).\nThe length of each sequence is set to .\nThe margin in triplet loss is specified as .\nThe mining method [6 ###reference_b6###] is used to select samples, i.e., we initially select samples based on GNSS labels between the query and the database, and then we select the best positive and the hardest negatives by cosine distance in the descriptor space.\nSince selecting negatives from the whole dataset is time- and space-consuming, we adopt partial mining [25 ###reference_b25###].\nThis involves randomly sampling a subset of negatives using GNSS label filtering and using a cache to store the descriptors of sub-negatives.\nThe cache is employed for selecting negatives and is refreshed after every 1000 iterations.\nWe implement early stopping by halting the training if the Recall@5 does not improve for 5 consecutive epochs.\nWe set the positive distance threshold to 10 meters and the negative distance threshold to 25 meters.\nEvaluation. In the evaluation phase, we use Recall@K as the performance metric.\nRecall@K is defined as the ratio of the number of correct queries retrieved to the total number of queries.\nA correct retrieval is defined as at least one of the top retrievals being within the given radius from the ground truth position of the query.\nWe use radii of 10 meters, 20 meters and 1 frame for the Oxford, MSLS and Nordland datasets, respectively."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C Results",
+ "text": ""
+ },
+ {
+ "section_id": "4.3.1",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C1 Comparison with the State-of-the-art Methods",
+ "text": "The chosen baseline methods include the state-of-the-art methods using sequence descriptors, i.e., SeqNet [11 ###reference_b11###] and SeqVLAD [12 ###reference_b12###].\nAdditionally, we also compare our method with NetVLAD [6 ###reference_b6###] and NetVLAD+SeqMatch [11 ###reference_b11###].\nTo ensure a fair comparison, all the experimental results are reproduced via our setting described in Section IV-A ###reference_### and Section IV-B ###reference_###.\nTable II ###reference_### and Table III ###reference_### show the results of our method compared to the baseline methods on MSLS, NordLand and Oxford RobotCar.\nIt is evident that NetVLAD performs worse than other methods, indicating that sequence VPR significantly outperforms the single-frame VPR.\nIn addition, the methods based on sequence descriptors outperform the method based on SeqMatch.\nCompared to SeqNet, our method outperforms it across all datasets. We observe a Recall@1 improvement of over 10% in most datasets, except for a 4% improvement in Amman.\nSeqNet generates a sequence descriptor through a weighted sum of frame descriptors, which are created by aggregating the local features of each frame. While local features can be discriminative for individual frames within a sequence, they may not exhibit the same level of discriminative power across all sequences.\nIn contrast, our method directly derives the sequence descriptors from the local features of all frames within a sequence. 
This approach ensures that our descriptor maintains its discriminative qualities across different sequences.\nAdditionally, compared with SeqVLAD, our method exhibits superior performance in most datasets, except Amman and Oxford1.\nNotably, SeqVLAD does not take into account the temporal correlation across multiple frames.\nAs shown in Figure 6 ###reference_### (a)(c), SeqVLAD is susceptible to dynamic objects, e.g., bicyclists, and is sensitive to illumination changes from day to night or variations in weather conditions.\nConversely, our proposed cross-frame temporal attention can effectively capture local region correspondences to learn patterns that persist over time.\nThis property renders our sequence descriptors more robust to illumination changes and local scene variations.\nFigure 3 ###reference_### provides further insight by illustrating the attention mechanisms of both our method and SeqVLAD for different regions within query sequences from Figure 6 ###reference_### (a)(c), substantiating the aforementioned conclusions.\nWhile our method\u2019s performance in Oxford1 is slightly lower than SeqVLAD\u2019s, there is a noteworthy Recall@1 improvement of over 2% in Oxford2.\nOxford1 has a smaller train set compared to Oxford2, but our model has a higher parameter count than SeqVLAD, making it more challenging to train effectively.\nOn the other hand, SeqVLAD tends to be more susceptible to overfitting.\nFinally, we delve into the impact of reducing the dimensionality of sequence descriptors.\nThe results reveal that when descriptors undergo dimensionality reduction via Principal Component Analysis (PCA) into a 4096-dimensional space, their performance remains on par with that of the full-sized descriptors.\n###figure_3### ###figure_4###"
+ },
+ {
+ "section_id": "4.3.2",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C2 Ablation Studies",
+ "text": "We conduct ablation studies on the four test cities of MSLS to analyze the effectiveness of the spatio-temporal attention with positional embedding and the sliding window setting.\nAs shown in Figure 4 ###reference_###, we compare the experimental results of spatio-temporal attention with positional embedding.\nIt can be clearly observed that descriptors extracted via spatial attention achieve better performance than those extracted via temporal attention.\nThis may indicate that the spatial structure plays a dominant role in the sequence descriptors.\nHowever, our sequence descriptors extracted via spatio-temporal attention achieve the best performance.\nThis suggests that fusing temporal information into the spatial structure can further improve the representation of the descriptors.\nFurthermore, the role of positional embedding is slight for spatial attention but crucial for temporal attention.\nFusing position information can greatly improve the performance of descriptors, and relative positional embedding is superior to absolute positional embedding.\n###figure_5### In addition, we explore how the hyperparameters of the sliding window affect the ability to capture the dynamics of local features in temporal attention.\nWe compare different sliding window settings, where , and are set for the size of window, and {2, 3}, {2, 3, 4} and {3, 4, 5} are set for the stride respectively.\nAs observed in Figure 5 ###reference_###, for a given stride, a larger sliding window performs better, but the performance decreases beyond a certain threshold.\nFinally, the optimal stride is half of the sliding window size."
+ },
+ {
+ "section_id": "4.3.3",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C3 Runtime Analysis and Memory Footprint",
+ "text": "###figure_6### In real-world VPR systems, it is crucial to take latency and scalability into account.\nTable IV ###reference_### provides insights into the computational time, GPU memory footprint and model size of the compared techniques in evaluation.\nSeqNet is able to extract sequence descriptors more swiftly and with lower GFLOPs due to the pre-extraction and storage of NetVLAD descriptors for each image offline. This eliminates the need to account for the time taken by NetVLAD.\nIn addition, the memory footprint and model parameters of SeqNet are influenced by both the descriptor dimension and sequence length, which are proportional to , where represents the descriptor dimension and represents the sequence length.\nIn contrast, our approach and SeqVLAD extract sequence descriptors directly from the original image sequences, with the entire process being executed online.\nAdditionally, our model considers the interaction among consecutive frames, making it more intricate than SeqVLAD.\nConsequently, it takes more time to extract a sequence descriptor."
+ },
+ {
+ "section_id": "4.3.4",
+ "parent_section_id": "4.3",
+ "section_name": "IV-C4 Qualitative Results",
+ "text": "In Figure 6 ###reference_###, we show our retrieval sequences compared with those from SeqNet and SeqVLAD on MSLS street view and highway, Oxford and NordLand.\nSequences marked with green and red borders indicate correct and incorrect retrievals, respectively.\nSequences marked with orange borders indicate that the retrieval sequence and the query have the same view, but their GNSS labels define that they are not the same \u201cplace\u201d.\nBased on the qualitative results, our method demonstrates the capability to handle changes in lighting conditions caused by day-night transitions and weather changes.\nIn addition, it is less susceptible to dynamic occlusions and partial scene changes, such as pedestrians and vehicles on the road, as well as changes due to road maintenance."
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "IV-D Limitations",
+ "text": "The complexity of our model results in a heightened reliance on the size and distribution of the train set, which may yield subpar results when the train set is small, as observed in Table III ###reference_### Oxford1.\nAdditionally, we further analyze the failure cases in Amman. We find that some query images and their ground truth exhibit a large discrepancy in field of view (FOV). Consequently, our approach, which incorporates temporal interaction, may introduce greater temporal consistency in sequence descriptors than descriptors without temporal information. This could be one of the factors contributing to the failure cases."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "CONCLUSION",
+ "text": "VPR holds immense potential for various applications.\nOur work aims to provide a new perspective on sequence-based VPR.\nInstead of aggregating multiple frames spatially, we introduce the fusion of features in the temporal dimension.\nWe use a spatio-temporal attention approach to generate a discriminative descriptor of sequences with improved accuracy compared to existing methods.\nAdditionally, our findings emphasize the significance of both spatial structure and temporal variation in sequence descriptors.\nWe anticipate that these insights will serve as a solid foundation for future research endeavors, enabling improved utilization of sequence information in VPR."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>datasets detail. This table specifies the number of images in the dataset used.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.3\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.3.1.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.1\" style=\"font-size:90%;\">dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.1.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.1.2.1\" style=\"font-size:90%;\">database / queries</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.2.1\" rowspan=\"5\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.3.2.1.1\" style=\"font-size:90%;\">MSLS</span><span class=\"ltx_text\" id=\"S3.T1.3.2.1.2\" style=\"font-size:90%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.2.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.2.2.1\" style=\"font-size:90%;\">Melbourne</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.2.3\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.2.3.1\" style=\"font-size:90%;\">101827 / 88118</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.3.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.3.1.1\" style=\"font-size:90%;\">Amman</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2\" 
style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.1\" style=\"font-size:90%;\">953 / 835</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.4.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.4.1.1\" style=\"font-size:90%;\">Boston</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.4.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.4.2.1\" style=\"font-size:90%;\">14024 / 6724</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.5.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.5.1.1\" style=\"font-size:90%;\">San Francisco</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.5.2.1\" style=\"font-size:90%;\">6315 / 4525</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.6.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.6.1.1\" style=\"font-size:90%;\">Copenhagen</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.6.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.6.2.1\" style=\"font-size:90%;\">12601 / 6595</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.7.1\" rowspan=\"2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.3.7.1.1\" style=\"font-size:90%;\">NordLand</span><span class=\"ltx_text\" id=\"S3.T1.3.7.1.2\" style=\"font-size:90%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S3.T1.3.7.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.7.2.1\" style=\"font-size:90%;\">train set</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.3\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.7.3.1\" style=\"font-size:90%;\">15000 / 15000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.8.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.8.1.1\" style=\"font-size:90%;\">test set</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.8.2.1\" style=\"font-size:90%;\">3000 / 3000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.9.1\" rowspan=\"2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.3.9.1.1\" style=\"font-size:90%;\">Oxford1</span><span class=\"ltx_text\" id=\"S3.T1.3.9.1.2\" style=\"font-size:90%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.9.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.9.2.1\" style=\"font-size:90%;\">train set</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.9.3\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.9.3.1\" style=\"font-size:90%;\">2401 / 2448</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.10.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.10.1.1\" style=\"font-size:90%;\">test set</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.10.2\" 
style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.10.2.1\" style=\"font-size:90%;\">1460 / 1474</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.3.11.1\" rowspan=\"2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\">\n<span class=\"ltx_text\" id=\"S3.T1.3.11.1.1\" style=\"font-size:90%;\">Oxford2</span><span class=\"ltx_text\" id=\"S3.T1.3.11.1.2\" style=\"font-size:90%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.11.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.11.2.1\" style=\"font-size:90%;\">train set</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.3\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.11.3.1\" style=\"font-size:90%;\">3619 / 3926</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.3.12.1\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.12.1.1\" style=\"font-size:90%;\">test set</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.3.12.2\" style=\"padding-top:1.35pt;padding-bottom:1.35pt;\"><span class=\"ltx_text\" id=\"S3.T1.3.12.2.1\" style=\"font-size:90%;\">3632 / 3921</span></td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE I: datasets detail. This table specifies the number of images in the dataset used."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Quantitative results on MSLS</figcaption><div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1\" rowspan=\"3\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.2\" rowspan=\"3\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1\">Dimension</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"12\" id=\"S4.T2.1.1.3\" style=\"padding:2pt 5.4pt;\">MSLS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T2.1.2.1\" style=\"padding:2pt 5.4pt;\">Amman</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T2.1.2.2\" style=\"padding:2pt 5.4pt;\">Boston</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T2.1.2.3\" style=\"padding:2pt 5.4pt;\">SF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T2.1.2.4\" style=\"padding:2pt 5.4pt;\">Cph</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.2\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.3\" style=\"padding:2pt 5.4pt;\">R@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.4\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.5\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.6\" style=\"padding:2pt 5.4pt;\">R@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.7\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.8\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.3.9\" style=\"padding:2pt 5.4pt;\">R@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.10\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.11\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.12\" style=\"padding:2pt 5.4pt;\">R@10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.1\" style=\"padding:2pt 5.4pt;\">NetVLAD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.3\" style=\"padding:2pt 5.4pt;\">0.189</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.4\" style=\"padding:2pt 5.4pt;\">0.251</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.5\" style=\"padding:2pt 5.4pt;\">0.277</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.6\" style=\"padding:2pt 5.4pt;\">0.179</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.7\" style=\"padding:2pt 5.4pt;\">0.238</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.8\" 
style=\"padding:2pt 5.4pt;\">0.267</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.9\" style=\"padding:2pt 5.4pt;\">0.289</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.10\" style=\"padding:2pt 5.4pt;\">0.398</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.4.11\" style=\"padding:2pt 5.4pt;\">0.455</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.12\" style=\"padding:2pt 5.4pt;\">0.405</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.13\" style=\"padding:2pt 5.4pt;\">0.534</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.14\" style=\"padding:2pt 5.4pt;\">0.594</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.1\" style=\"padding:2pt 5.4pt;\">NetVLAD+SeqMatch <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.3\" style=\"padding:2pt 5.4pt;\">0.246</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4\" style=\"padding:2pt 5.4pt;\">0.302</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.5\" style=\"padding:2pt 5.4pt;\">0.330</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.6\" style=\"padding:2pt 5.4pt;\">0.204</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.7\" style=\"padding:2pt 5.4pt;\">0.239</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.5.8\" style=\"padding:2pt 5.4pt;\">0.257</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.9\" style=\"padding:2pt 5.4pt;\">0.363</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.10\" style=\"padding:2pt 5.4pt;\">0.430</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S4.T2.1.5.11\" style=\"padding:2pt 5.4pt;\">0.460</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.12\" style=\"padding:2pt 5.4pt;\">0.504</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.13\" style=\"padding:2pt 5.4pt;\">0.612</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.14\" style=\"padding:2pt 5.4pt;\">0.657</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.1\" style=\"padding:2pt 5.4pt;\">SeqNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.3\" style=\"padding:2pt 5.4pt;\">0.269</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4\" style=\"padding:2pt 5.4pt;\">0.376</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.5\" style=\"padding:2pt 5.4pt;\">0.408</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.6\" style=\"padding:2pt 5.4pt;\">0.274</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.7\" style=\"padding:2pt 5.4pt;\">0.354</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.8\" style=\"padding:2pt 5.4pt;\">0.390</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.9\" style=\"padding:2pt 5.4pt;\">0.556</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.10\" style=\"padding:2pt 5.4pt;\">0.671</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.6.11\" style=\"padding:2pt 5.4pt;\">0.728</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.12\" style=\"padding:2pt 5.4pt;\">0.462</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.13\" style=\"padding:2pt 5.4pt;\">0.581</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.14\" style=\"padding:2pt 
5.4pt;\">0.637</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.1\" style=\"padding:2pt 5.4pt;\">SeqVLAD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib12\" title=\"\">12</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.2\" style=\"padding:2pt 5.4pt;\">24576</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.3\" style=\"padding:2pt 5.4pt;\">0.300</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.4\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.7.4.1\">0.448</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.5\" style=\"padding:2pt 5.4pt;\">\n<span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.7.5.1\">0.519</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6\" style=\"padding:2pt 5.4pt;\">0.466</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.7\" style=\"padding:2pt 5.4pt;\">0.628</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.8\" style=\"padding:2pt 5.4pt;\">0.678</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.9\" style=\"padding:2pt 5.4pt;\">0.661</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.10\" style=\"padding:2pt 5.4pt;\">0.826</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.7.11\" style=\"padding:2pt 5.4pt;\">\n<span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.7.11.1\">0.863</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.12\" style=\"padding:2pt 5.4pt;\">0.564</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.13\" style=\"padding:2pt 5.4pt;\">0.722</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.14\" style=\"padding:2pt 5.4pt;\">0.777</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.1\" 
style=\"padding:2pt 5.4pt;\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.2\" style=\"padding:2pt 5.4pt;\">24576</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.3\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.8.3.1\">0.303</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.4\" style=\"padding:2pt 5.4pt;\">0.423</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.5\" style=\"padding:2pt 5.4pt;\">0.511</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.6\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.6.1\">0.504</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.7.1\">0.645</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.8\" style=\"padding:2pt 5.4pt;\">\n<span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.8.8.1\">0.688</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.9\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.9.1\">0.680</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.10\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.10.1\">0.841</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.8.11\" style=\"padding:2pt 5.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.11.1\">0.864</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.12\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.12.1\">0.608</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.13\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.13.1\">0.765</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.14\" style=\"padding:2pt 5.4pt;\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.14.1\">0.801</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.1\" style=\"padding:2pt 5.4pt;\">SeqVLAD w/ PCA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.3\" style=\"padding:2pt 5.4pt;\">0.294</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.4\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.9.4.1\">0.442</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.5\" style=\"padding:2pt 5.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.5.1\">0.526</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.6\" style=\"padding:2pt 5.4pt;\">0.465</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.7\" style=\"padding:2pt 5.4pt;\">0.623</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.8\" style=\"padding:2pt 5.4pt;\">0.675</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.9\" style=\"padding:2pt 5.4pt;\">0.656</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.10\" style=\"padding:2pt 5.4pt;\">0.822</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.9.11\" style=\"padding:2pt 5.4pt;\">0.859</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.12\" style=\"padding:2pt 5.4pt;\">0.560</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.13\" style=\"padding:2pt 5.4pt;\">0.720</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.9.14\" style=\"padding:2pt 5.4pt;\">0.774</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_b 
ltx_border_r\" id=\"S4.T2.1.10.1\" style=\"padding:2pt 5.4pt;\">Ours w/ PCA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.10.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.3\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.10.3.1\">0.306</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.4\" style=\"padding:2pt 5.4pt;\">0.411</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.10.5\" style=\"padding:2pt 5.4pt;\">0.510</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.6\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.10.6.1\">0.502</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.7\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.10.7.1\">0.645</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.10.8\" style=\"padding:2pt 5.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.10.8.1\">0.691</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.9\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.10.9.1\">0.671</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.10\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.10.10.1\">0.839</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.10.11\" style=\"padding:2pt 5.4pt;\">0.860</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.12\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.10.12.1\">0.604</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.13\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text 
ltx_framed_underline\" id=\"S4.T2.1.10.13.1\">0.760</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.10.14\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.10.14.1\">0.801</span></td>\n</tr>\n</table>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S4.T2.2\">The best and second-best results for each dataset are highlighted. The best overall results on each dataset are indicated in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1\">bold</span>, while the second-best results are <span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.2.2\">underlined</span>.</p>\n</div>\n</div>\n</figure>",
152
+ "capture": "TABLE II: Quantitative results on MSLS"
153
+ },
154
+ "3": {
155
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Quantitative results on Nordland and Oxford RobotCar</figcaption><div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1\" rowspan=\"2\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.2\" rowspan=\"2\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.1.2.1\">Dimension</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T3.1.1.3\" style=\"padding:2pt 5.4pt;\">NordLand</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T3.1.1.4\" style=\"padding:2pt 5.4pt;\">Oxford1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S4.T3.1.1.5\" style=\"padding:2pt 5.4pt;\">Oxford2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.2\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.3\" style=\"padding:2pt 5.4pt;\">R@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.4\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.5\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.6\" style=\"padding:2pt 5.4pt;\">R@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T3.1.2.7\" style=\"padding:2pt 5.4pt;\">R@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.8\" style=\"padding:2pt 5.4pt;\">R@5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.9\" style=\"padding:2pt 5.4pt;\">R@10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.1\" style=\"padding:2pt 5.4pt;\">NetVLAD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.3\" style=\"padding:2pt 5.4pt;\">0.377</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.4\" style=\"padding:2pt 5.4pt;\">0.543</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.5\" style=\"padding:2pt 5.4pt;\">0.615</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.6\" style=\"padding:2pt 5.4pt;\">0.468</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.7\" style=\"padding:2pt 5.4pt;\">0.696</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.8\" style=\"padding:2pt 5.4pt;\">0.779</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.9\" style=\"padding:2pt 5.4pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.10\" style=\"padding:2pt 5.4pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.3.11\" style=\"padding:2pt 5.4pt;\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.4.1\" style=\"padding:2pt 5.4pt;\">NetVLAD+SeqMatch <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2305.11467v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.4.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3\" style=\"padding:2pt 5.4pt;\">0.610</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.4\" style=\"padding:2pt 5.4pt;\">0.705</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.4.5\" style=\"padding:2pt 5.4pt;\">0.746</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.6\" style=\"padding:2pt 5.4pt;\">0.672</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.7\" style=\"padding:2pt 5.4pt;\">0.784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.4.8\" style=\"padding:2pt 5.4pt;\">0.846</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.9\" style=\"padding:2pt 5.4pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.10\" style=\"padding:2pt 5.4pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.11\" style=\"padding:2pt 5.4pt;\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.5.1\" style=\"padding:2pt 5.4pt;\">SeqNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.5.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.3\" style=\"padding:2pt 5.4pt;\">0.797</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4\" style=\"padding:2pt 5.4pt;\">0.905</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.5.5\" style=\"padding:2pt 5.4pt;\">0.930</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.6\" style=\"padding:2pt 5.4pt;\">0.741</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.7\" style=\"padding:2pt 
5.4pt;\">0.875</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.5.8\" style=\"padding:2pt 5.4pt;\">0.933</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.9\" style=\"padding:2pt 5.4pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.10\" style=\"padding:2pt 5.4pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.11\" style=\"padding:2pt 5.4pt;\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.6.1\" style=\"padding:2pt 5.4pt;\">SeqVLAD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib12\" title=\"\">12</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.6.2\" style=\"padding:2pt 5.4pt;\">24576</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.3\" style=\"padding:2pt 5.4pt;\">0.964</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.4\" style=\"padding:2pt 5.4pt;\">0.992</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.6.5\" style=\"padding:2pt 5.4pt;\">0.993</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.6\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.1.6.6.1\">0.966</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.7\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.1.6.7.1\">0.982</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.6.8\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.1.6.8.1\">0.989</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.9\" style=\"padding:2pt 5.4pt;\">0.844</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.10\" style=\"padding:2pt 5.4pt;\">0.929</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.11\" style=\"padding:2pt 5.4pt;\">0.958</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T3.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.7.1\" style=\"padding:2pt 5.4pt;\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.7.2\" style=\"padding:2pt 5.4pt;\">24576</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.3\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.3.1\">0.971</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.4\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.4.1\">0.995</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.7.5\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.5.1\">0.995</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.6\" style=\"padding:2pt 5.4pt;\">0.958</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.7\" style=\"padding:2pt 5.4pt;\">0.978</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.7.8\" style=\"padding:2pt 5.4pt;\">0.988</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.9\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.9.1\">0.868</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.10\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.1.7.10.1\">0.944</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.7.11\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.1.7.11.1\">0.968</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.1\" style=\"padding:2pt 5.4pt;\">SeqVLAD w/ PCA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.3\" style=\"padding:2pt 5.4pt;\">0.963</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.4\" style=\"padding:2pt 5.4pt;\">0.991</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.5\" style=\"padding:2pt 5.4pt;\">0.994</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.6\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.6.1\">0.967</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.7\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.7.1\">0.982</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.8\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.8.1\">0.990</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.9\" style=\"padding:2pt 5.4pt;\">0.847</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.10\" style=\"padding:2pt 5.4pt;\">0.932</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.8.11\" style=\"padding:2pt 5.4pt;\">0.961</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.1.9.1\" style=\"padding:2pt 5.4pt;\">Ours w/ PCA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.1.9.2\" style=\"padding:2pt 5.4pt;\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.9.3\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.9.3.1\">0.971</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.9.4\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.9.4.1\">0.995</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.1.9.5\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.9.5.1\">0.995</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_b\" id=\"S4.T3.1.9.6\" style=\"padding:2pt 5.4pt;\">0.955</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.9.7\" style=\"padding:2pt 5.4pt;\">0.977</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.1.9.8\" style=\"padding:2pt 5.4pt;\">0.986</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.9.9\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.1.9.9.1\">0.866</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.9.10\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.9.10.1\">0.945</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.1.9.11\" style=\"padding:2pt 5.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.9.11.1\">0.969</span></td>\n</tr>\n</table>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S4.T3.2\">The best and second-best results for each dataset are highlighted. The best overall results on each dataset are indicated in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1\">bold</span>, while the second-best results are <span class=\"ltx_text ltx_framed_underline\" id=\"S4.T3.2.2\">underlined</span>.</p>\n</div>\n</div>\n</figure>",
156
+ "capture": "TABLE III: Quantitative results on Nordland and Oxford RobotCar"
157
+ },
158
+ "4": {
159
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Resource consumption and model size. The ms/fra indicates the time of extracting a frame descriptor, and ms/seq indicates the time of extracting a sequence descriptor.</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.2\" style=\"padding-top:2pt;padding-bottom:2pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.1.1.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.2.1.1.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">Extraction</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.2.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.2.1.2.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">latency</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.3\" style=\"padding-top:2pt;padding-bottom:2pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.1.1.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.3.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.3.1.1.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">GPU</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.3.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.3.1.2.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">Memory</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.1.4\" style=\"padding-top:2pt;padding-bottom:2pt;\">GFLOPs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.1.5\" style=\"padding-top:2pt;padding-bottom:2pt;\">Params</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T4.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">NetVLAD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.2\" style=\"padding-top:2pt;padding-bottom:2pt;\">8.8 ms/fra</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.3\" style=\"padding-top:2pt;padding-bottom:2pt;\">57.26 MB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.4\" style=\"padding-top:2pt;padding-bottom:2pt;\">45.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.2.5\" style=\"padding-top:2pt;padding-bottom:2pt;\">14.74 M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.3.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">SeqNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.11467v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.3.2\" style=\"padding-top:2pt;padding-bottom:2pt;\">6.2 ms/seq</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.3.3\" style=\"padding-top:2pt;padding-bottom:2pt;\">320.01 MB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.3.4\" style=\"padding-top:2pt;padding-bottom:2pt;\">0.84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.3.5\" style=\"padding-top:2pt;padding-bottom:2pt;\">83.89 M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.4.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">SeqVLAD <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2305.11467v4#bib.bib12\" title=\"\">12</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.4.2\" style=\"padding-top:2pt;padding-bottom:2pt;\">8.9 ms/seq</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.4.3\" style=\"padding-top:2pt;padding-bottom:2pt;\">59.88 MB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.1.4.4\" style=\"padding-top:2pt;padding-bottom:2pt;\">32.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.5\" style=\"padding-top:2pt;padding-bottom:2pt;\">7.15 M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.1.5.1\" style=\"padding-top:2pt;padding-bottom:2pt;\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.1.5.2\" style=\"padding-top:2pt;padding-bottom:2pt;\">18.7 ms/seq</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.1.5.3\" style=\"padding-top:2pt;padding-bottom:2pt;\">103.74 MB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.1.5.4\" style=\"padding-top:2pt;padding-bottom:2pt;\">63.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.1.5.5\" style=\"padding-top:2pt;padding-bottom:2pt;\">13.06 M</td>\n</tr>\n</table>\n</figure>",
+ "capture": "TABLE IV: Resource consumption and model size. The ms/fra indicates the time of extracting a frame descriptor, and ms/seq indicate the time of extracting a sequence descriptor."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2305.11467v4_figure_1.png",
+ "caption": "Figure 1: The architecture of our proposed method.\nGiven a continuous sequence of raw frames I1subscript\ud835\udc3c1I_{1}italic_I start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, I2subscript\ud835\udc3c2I_{2}italic_I start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, \u2026\u2026\\ldots\u2026, ILsubscript\ud835\udc3c\ud835\udc3fI_{L}italic_I start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT, we employ a Convolutional Neural Network (CNN) to map each frame to feature maps and then split these maps into patches.\nA Linear Projection is subsequently employed to map the patch features to embeddings {x1,\u2026,N1,x1,\u2026,N2,\u2026,x1,\u2026,NL}subscriptsuperscript\ud835\udc6511\u2026\ud835\udc41subscriptsuperscript\ud835\udc6521\u2026\ud835\udc41\u2026subscriptsuperscript\ud835\udc65\ud835\udc3f1\u2026\ud835\udc41\\{x^{1}_{1,\\ldots,N},x^{2}_{1,\\ldots,N},\\ldots,x^{L}_{1,\\ldots,N}\\}{ italic_x start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT , italic_x start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT , \u2026 , italic_x start_POSTSUPERSCRIPT italic_L end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT }.\nThese embeddings from individual frames are then passed through Spatial Transformer Encoders, applying self-attention for spatial information interaction.\nThis process yields a set of transformed embeddings {x1,\u2026,Ns\u20621,\u2026,x1,\u2026,Ns\u2062L}subscriptsuperscript\ud835\udc65\ud835\udc6011\u2026\ud835\udc41\u2026subscriptsuperscript\ud835\udc65\ud835\udc60\ud835\udc3f1\u2026\ud835\udc41\\{x^{s1}_{1,\\ldots,N},\\ldots,x^{sL}_{1,\\ldots,N}\\}{ italic_x start_POSTSUPERSCRIPT italic_s 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT , \u2026 , italic_x start_POSTSUPERSCRIPT italic_s italic_L end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT }.\nFurthermore, the 
embeddings across different frames within the sliding window are input into a Temporal Transformer Encoder to fuse temporal information, which generates {x1,\u2026,Nt\u20621,\u2026,x1,\u2026,Nt\u2062L}subscriptsuperscript\ud835\udc65\ud835\udc6111\u2026\ud835\udc41\u2026subscriptsuperscript\ud835\udc65\ud835\udc61\ud835\udc3f1\u2026\ud835\udc41\\{x^{t1}_{1,\\ldots,N},\\ldots,x^{tL}_{1,\\ldots,N}\\}{ italic_x start_POSTSUPERSCRIPT italic_t 1 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT , \u2026 , italic_x start_POSTSUPERSCRIPT italic_t italic_L end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 , \u2026 , italic_N end_POSTSUBSCRIPT }.\nFinally, the embeddings from two branches are combined, and the NetVLAD layer is employed to aggregate these embeddings to generate a sequence descriptor.",
+ "url": "http://arxiv.org/html/2305.11467v4/x1.png"
+ },
+ "2": {
+ "figure_path": "2305.11467v4_figure_2.png",
+ "caption": "Figure 2: The different positional embeddings and sliding windows.\nHere we show spatial absolute positional embedding (a) and relative positional embedding (c), as well as temporal absolute positional embedding (b) and relative positional embedding (d),\nwhere we use A\ud835\udc34Aitalic_A to represent the absolute positional embedding and R\ud835\udc45Ritalic_R for the relative positional embedding.\nDashed arrows indicate that information is passed between the two patches.\nSolid arrows indicate to fuse positional embeddings to the information passing.\nIt\u2019s important to note that absolute positional embeddings are independent of the inter-patch relationships.\nIn contrast, relative positional embeddings vary based on the position relationship between patches.",
+ "url": "http://arxiv.org/html/2305.11467v4/x2.png"
+ },
+ "3": {
+ "figure_path": "2305.11467v4_figure_3.png",
+ "caption": "Figure 3: Visualizations on attention.\nHere are the attentions of our method and SeqVLAD for different regions of the query sequences which is in Figure 6 (a), Figure 6 (c).\nRed portions indicate more focus, and blue portions indicate less focus.\nCompared to SeqVLAD, our method focuses less on dynamic objects and more on road elements.",
+ "url": "http://arxiv.org/html/2305.11467v4/x3.png"
+ },
+ "4": {
+ "figure_path": "2305.11467v4_figure_4.png",
+ "caption": "Figure 4: Ablation Studies for spatio-temporal effectiveness and positional embedding.\nWe show the comparison of Recall@N performances with only spatial or temporal module, and two kinds of positional embedding or w/o position information.\nIn addition, the positional embedding is trained from scratch without pre-trained parameters.",
+ "url": "http://arxiv.org/html/2305.11467v4/x4.png"
+ },
+ "5": {
+ "figure_path": "2305.11467v4_figure_5.png",
+ "caption": "Figure 5: Ablation Studies for sliding window settings.\nWe show the comparison of Recall@N performances with different sliding window settings, where m/s demonstrates the size and stride of the sliding window respectively.",
+ "url": "http://arxiv.org/html/2305.11467v4/x5.png"
+ },
+ "6": {
+ "figure_path": "2305.11467v4_figure_6.png",
+ "caption": "Figure 6: Qualitative results.\nIn these examples, the proposed method successfully retrieves the matching reference sequence in MSLS street (a) view and highway (b), Oxford (c) and NordLand (d), while SeqNet and SeqVLAD produce incorrect place matches.\nGreen and red indicate correct and incorrect retrievals, respectively. While orange indicates the same view but beyond a certain GNSS label threshold, which is also defined as incorrect retrievals.",
+ "url": "http://arxiv.org/html/2305.11467v4/x6.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2305.11467v4"
+ }
20240127/2306.06230v3.json ADDED
@@ -0,0 +1,157 @@
+ {
+ "title": "Design Frameworks for Hyper-Connected Social XRI Immersive Metaverse Environments Tri-council of Canada, Canada Research Chairs Program.",
+ "abstract": "The metaverse refers to the merger of technologies for providing a digital twin of the real world and the underlying connectivity and interactions for the many kinds of agents within. As this set of technology paradigms \u2014 involving artificial intelligence, mixed reality, the internet-of-things and others \u2014 gains in scale, maturity, and utility there are rapidly emerging design challenges and new research opportunities. In particular is the metaverse disconnect problem, the gap in task switching that inevitably occurs when a user engages with multiple virtual and physical environments simultaneously. Addressing this gap remains an open issue that affects the user experience and must be overcome to increase overall utility of the metaverse. This article presents design frameworks that consider how to address the metaverse as a hyper-connected meta-environment that connects and expands multiple user environments, modalities, contexts, and the many objects and relationships within them. This article contributes to i) a framing of the metaverse as a social XR-IoT (XRI) concept, ii) design Considerations for XRI metaverse experiences, iii) a design architecture for social multi-user XRI metaverse environments, and iv) descriptive exploration of social interaction scenarios within XRI multi-user metaverses. These contribute a new design framework for metaverse researchers and creators to consider the coming wave of interconnected and immersive smart environments.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "###figure_1### The metaverse can be described as \u201ca hypothetical synthetic environment linked to the physical world\u201d [1 ###reference_b1###] and, this concept has been growing in terms of the underlying technologies that support metaverse eco-systems. These technologies include extended reality, artificial intelligence, Internet of Things, cloud computing, blockchains and others, each of which has become mainstream productive application domains [1 ###reference_b1###]. The merger of these gives rise to the visions of the metaverse presented in early conceptualizations; toward a virtual space that is persistent and pervasive, portals between the real world and the virtual world in immersive and seamless ways. The metaverse concept has also been considered as the next-generation Internet, with many high-tech companies engaging in this area to build the infrastructure[2 ###reference_b2###] and gain access to new opportunities and use cases. As the paradigm of the metaverse and its underlying technologies grows in adoption, there are also new research questions and challenges that must be addressed to enable it to reach its full potential as a new interface and medium of communication. Further, in the context of multi-user metaverse environments, there are new human factors that emerge at the physical, psychological, social, organizational, and political levels where a human-tech approach [3 ###reference_b3###] is needed to ensure that shared immersive metaverse environments become reliable, safe, and effective spaces for social interactions.\nThe current metaverse, with characteristics related to \u201cperpetual, shared, concurrent, and 3D virtual spaces concatenating into a perceived virtual universe\u201d [1 ###reference_b1###] brings with it a naturally occurring gap between virtual and physical environments[4 ###reference_b4###]. 
To cope with this disconnect between the real and virtual worlds, richer connections are needed to create an immersive and hyper-connected spatial experience for users. As in the authors\u2019 previous work[4 ###reference_b4###], addressing the Metaverse disconnect problem requires approaches to hyper-connect the user, the virtual, and the physical environment by making hybrid virtual-physical objects using XR-IoT (XRI) design frameworks. To date, these have focused on single-user experiences; however, to address the complex practical nature of metaverse relationships, a focus is needed on bringing multiple users into the same hyper-connected experience.\nAt present, such a framework is not common to the metaverse platforms available, although there are works toward this direction, as shown in Table I ###reference_###. To advance the research in this direction, this work explores how to design a framework for multi-user shared hyper-connected extended metaverse immersive smart environments. This framework builds on the XRI and extended metaverse frameworks as stated in [4 ###reference_b4###], which focus on single-user scenarios, and extends these toward multi-user designs.\nThis article contributes to i) a framing of the metaverse as a social XR-IoT (XRI) concept, ii) design Considerations for XRI metaverse experiences, iii) a design architecture for social multi-user XRI metaverse environments, and iv) descriptive exploration of social interaction scenarios within XRI multi-user metaverses. This extends the XRI concept from single-user to multi-user scenarios, provides dimensions for designing such systems and their core components, and examines the complex relationships between users and other agents within XRI multi-user metaverses.\nThese contributions are presented as follows: Section I ###reference_### has provided a motivation toward multi-user XRI social metaverses. 
Section II ###reference_### presents existing social metaverse frameworks and their properties, showing the opportunity to merge these with XRI concepts as a social XRI metaverse. Section III ###reference_### highlights the design dimensions and underlying decisions required for creating XRI metaverse experiences. Section IV ###reference_### synthesizes these design dimensions into a new architecture for social multi-user XRI metaverse environments. Section V ###reference_### presents an exploration of the kinds of interaction scenarios that users within an XRI multi-user metaverse environment will experience, including agents, avatars, and environment objects. Section VI ###reference_### provides a closing discussion and Section VII ###reference_### presents a summary."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Toward a Social XRI Metaverse",
+ "text": "Table I ###reference_### presents the comparison of the various features of social metaverse with the traditional displays and VR platforms (see Figure 1 ###reference_###(a)), social metaverse with smartphones and HMD XR (see Figure 1 ###reference_###(b)), and XRI prototypes (see Figure 1 ###reference_###(c)).\nSocial metaverses designed for traditional displays and VR platforms (i.e., the most common form of metaverse design at present) typically focus on providing an immersive virtual experience for users to connect and interact with each other remotely. However, these platforms often do not have the capability of integrating physical space into the virtual experience, as they are not IoT-enabled. As a result, these existing approaches are limited to scanning the user environment and anchoring content. However, they are limited in terms of their ability also to obtain detailed information about objects and IoT edge devices that may be active in the user\u2019s environment.\nSocial metaverse with smartphones and HMD XR applications provide a mixed-reality experience with users remotely and virtually present via XR devices and displays. The XRI prototypes, as in Table I ###reference_###, are equipped with IoT capabilities, allowing them to interact with the physical space in a more meaningful way, [5 ###reference_b5###]. However, these prototypes often lack the ability to support multi-user experiences, which is a critical aspect of social VR and metaverse platforms. The lack of multi-user support in XRI prototypes can limit the level of interaction and collaboration between users in a shared virtual environment."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Related Work",
+ "text": ""
+ },
+ {
+ "section_id": "2.1.1",
+ "parent_section_id": "2.1",
+ "section_name": "II-A1 Online Social Environments and Social Virtual Reality",
+ "text": "Social virtual reality (VR) platforms provide shared connectedness, and immersive virtual environments where users can interact and socialize with each other[6 ###reference_b6###]. A social metaverse space is not only presented in VR platforms but can also involve traditional two-dimensional displays, such as Decentraland and Spatial (see Table I ###reference_###). Various events are hosted in the metaverse to enable the social values; for example, LNY metaverse111https://news.yahoo.com/interview-karen-x-cheng-her-220250345.html (accessed on 05-February-2023) created a virtual space and hosted an event in spatial to celebrate the Lunar New Year."
+ },
+ {
+ "section_id": "2.1.2",
+ "parent_section_id": "2.1",
+ "section_name": "II-A2 Social XR Metaverse",
+ "text": "The Social XR metaverse (see Figure 1 ###reference_###(b)) refers to multi-user mixed reality local experiences, i.e., with each user in the same room seeing the same virtual objects (see Microsoft Mesh in Table I ###reference_###), or remote experiences, i.e., with users present remotely as a virtual avatar, such as in Spatial-AR 222https://www.wired.com/story/spatial-vr-ar-collaborative-spaces/ (accessed on 07-February-2023). The Digital Labs of MLSE (Maple Leaf Sports and Entertainment) are working on Mixed Reality for viewing the experiences of NBA and NHL fans. It is an example of a multi-user watching the same virtual experience for sports and entertainment. Regarding handheld mobile AR multi-user experience, VTime XR mode could visualize remote users and interact with them synchronously. At the same time, Vtag could anchor a user\u2019s avatar on a location that could then be visualized by other people in AR asynchronously. Figure 1 ###reference_### highlights how these different configurations of the metaverse have the opportunity to be combined."
+ },
+ {
+ "section_id": "2.1.3",
+ "parent_section_id": "2.1",
+ "section_name": "II-A3 XRI Extended Metaverse",
+ "text": "The Concept of XRI has been addressed in [5 ###reference_b5###], which is the hybridization of XR and IoT that aims to enhance the connection between virtual and physical objects and the environment. As the metaverse is becoming mainstream in recent years, the concept of XRI is being applied to extend the metaverse to physical spaces through IoT with multiple prototypes, as in [4 ###reference_b4###][7 ###reference_b7###]. The extended-XRI body is introduced in [8 ###reference_b8###] with virtual body extension for users to enhance the human-in-the-loop hyper-connected metaverse environment."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Defining Characteristics of Social XRI Metaverse Environments",
+ "text": "Platforms\nSocial\nVR\nXR\nTraditional display\nIoT\nIoT Avatar\nLocal Environment\nRemote Environment\nBlockchain\nAvatarization\nAgency\nSynchronous\nAsynchronous\nHorizon Worlds333https://www.oculus.com/horizon-worlds/ (accessed on 05-February-2023)\\c@Horizon\n\u2713\n\u2713\n\u2717\n\u2717\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\nSpatial444https://www.spatial.io/ (accessed on 05-February-2023)\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\nDecentraland555https://decentraland.org/ (accessed on 05-February-2023)\n\u2713\n\u2717\n\u2717\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\nThe Sandbox666https://www.sandbox.game/ (accessed on 05-February-2023)\n\u2713\n\u2717\n\u2717\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\nSomnium Space777https://somniumspace.com/ (accessed on 05-February-2023)\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\nMicrosoft Mesh888https://www.microsoft.com/mesh (accessed on 05-February-2023)\n\u2713\n\u2717\n\u2713\n\u2713\n\u2717\n\u2717\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\nMLSE Raptors Demo999https://www.thestar.com/sports/raptors/2023/01/24/the-future-of-sports-mixed-reality-viewing-experiences-coming-for-nhl-nba-fans.html (accessed on 08-February-2023)\n\u2713\n\u2717\n\u2713\n\u2713\n\u2717\n\u2717\n\u2713\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\nvTime XR - AR Mode101010https://vtag.com/ (accessed on 06-February-2023)\n\u2713\n\u2717\n\u2713\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\nVTag111111https://vtag.com/ (accessed on 06-February-2023)\n\u2713\n\u2717\n\u2713\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2713\nNextech AR - ARway121212https://www.nextechar.com/arway (accessed on 
06-February-2023)\n\u2713\n\u2717\n\u2713\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2717\n\u2713\nIoT Avatar 2.0[9 ###reference_b9###]\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2713\n\u2713\n\u2717\n\u2717\n\u2717\n\u2713\n\u2713\n\u2717\nXRI Workstation[5 ###reference_b5###]\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\nXRI Metaverse Prototypes [4 ###reference_b4###][7 ###reference_b7###]\n\u2717\n\u2713\n\u2713\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\n\u2717\n\u2717\n\u2717\n\u2713\n\u2717\nXRI Body[8 ###reference_b8###]\n\u2717\n\u2717\n\u2713\n\u2717\n\u2717\n\u2713\n\u2713\n\u2717\n\u2717\n\u2713\n\u2717\n\u2713\n\u2717\nProposed Social XRI Metaverse\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\n\u2713\nOnline Social Environments, as seen in Table I ###reference_###, are growing more popular. These social virtual spaces and XRI spaces can be described according to multiple characteristics, and the list below highlights key criteria the metaverse system ideally would address:\nSocial - Can allow multiple users to interact with each other using different modalities (text, speech, avatars, gestures, images, videos, etc).\nVirtual Reality - Can allow users to experience virtual immersive environments.\nMixed Reality (XR) - Can allow users to experience virtual objects overlaid in the real-world environment.\nTraditional 2D Displays and Screens - Can allow users to experience virtual content without requiring an immersive head-mounted display.\nInternet-of-Things (IoT) - Can allow objects in the user\u2019s environment to sense and react to changes and also share information across online networks.\nIoT Avatar[9 ###reference_b9###] - Can allow objects in the user\u2019s environment to interact using virtual and mixed reality avatars.\nLocal Environment - Refers to the specific environment location where a user is physically present 
(local user, [10 ###reference_b10###]).\nRemote Environment - Refers to the specific environment location where a user is remote or telepresent (remote user [10 ###reference_b10###]).\nBlockchain - Online environment platform is integrated with or supported by blockchain technology [1 ###reference_b1###].\nAvatarization[11 ###reference_b11###] [8 ###reference_b8###] - Refers to the virtual representation and embodiment of a user, or an object.\nAgency[12 ###reference_b12###][13 ###reference_b13###] - Refers to the ability of users and digital agents to perceive the environment (local or remote; virtual or physical) and take actions within the environment (local or remote; virtual or physical).\nSynchronous - Refers to the continuous real-time interactions between users and other agents within the environment (local or remote; virtual or physical).\nAsynchronous - Refers to the non-continuous communication interactions between users and other agents within the environment (local or remote; virtual or physical)."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Design Considerations for an XRI Metaverse Experience",
+ "text": "In order to address the social design challenges of multi-user and multi-agent XR-IoT metaverse systems, this work first highlights the design components for XRI systems, and later extends these toward multi-user and multi-agent scenarios.\n###figure_2### ###figure_3### Figure 2 ###reference_### presents an XRI Agent Design Landscape, with details of the Agency design process, virtual embodiment design method, and XRI interaction with input and output methods, communication method, and interaction path. These design elements are extracted from the authors\u2019 previous research on IoT Avatars[9 ###reference_b9###], and Extended Metaverse frameworks[4 ###reference_b4###], building on theoretical design frameworks like [12 ###reference_b12###]; as a set of three design dimensions: Virtual Embodiment Method, XRI Interaction Method, and Agency. Together, systems that account for these designs would have the foundations for an XRI metaverse experience. These are described as follows:"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Virtual Embodiment",
+ "text": "For the Virtual Embodiment Method section in Figure 2 ###reference_###, it presents an approach toward a more holistic virtual embodiment design process that centres on multiple avatar embodiment and behavior dimensions; this seeks to create digital representations of human or non-human entities that can interact with the physical world. The method described in the paper focuses on the use of shapes to embody parameters that define the appearance and behavior of virtual objects. This includes both static and dynamic aspects of the embodiment, such as position, scale, rotation, colour, and texture, as well as animations that depict various forms of movement and express different emotions."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B XRI Interaction",
+ "text": "XRI interaction represents virtual-physical input and output information that users and agents in XRI environments send and receive in order to interact with virtual and physical spaces, via the communication method and the interaction path [8 ###reference_b8###]. For the input method, it includes IoT-Enabled sensors (Arduino), Webcam to capture context with computer vision models, wearable devices that could be attached to the human body, a microphone for sound and speech detection, and sensors from head-mounted display mixed reality devices and their controllers. With these input methods and devices, the system could capture environmental values such as brightness, the number of people in the space, moisture and sound, and the user input including gesture, brainwave, speech, heart rate, etc.\nFor the output method, it includes the IoT-Enabled actuator (with Arduino, etc.), smart lights such as Philips Hue, an HMD display, and a speaker. These provide the feedback of brightness and colour of light, display virtual embodiment, speech and sound, and haptic feedback.\nFor the communication method, the IoT framework is presented for virtual and physical communication, including MQTT131313https://mqtt.org/ (accessed on 06-February-2023), Socket.IO141414https://socket.io/ (accessed on 06-February-2023), and HTTP Request151515https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods (accessed on 06-February-2023), which are the common protocol for IoT interaction and are being used in the authors\u2019 previous projects. In addition, it also has internal logical connections such as virtual object communication within the game engine (Unity), and multiplayer game networking method with Photon161616https://www.photonengine.com/ (accessed on 06-February-2023).\nThe interaction path should also be considered, as in [8 ###reference_b8###], with physical-to-physical, physical-to-virtual, virtual-to-virtual, and virtual-to-physical."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Agency Design Method",
+ "text": "Agent design is related to how the intelligent agents[12 ###reference_b12###] interact with the hybrid environments, humans, and each other. Prometheus methodology is presented in [15 ###reference_b15###] for designing and developing intelligent agent systems with inputs (percepts), outputs (actions), and shared data sources. They also indicate that an agent descriptor is needed to show the functionalities, including the name, description, functionalities and who would interact with them."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV A Design Architecture for Social Multi-user XRI Metaverse Environments",
+ "text": "Figure 3 ###reference_### presents a social XRI metaverse framework, including components of two XRI environments with multi-user, IoT communication broker, and virtual environments and objects that are shared. Alongside the framework, the level of Body Avatarization [11 ###reference_b11###][8 ###reference_b8###], XRI Interaction[8 ###reference_b8###], Agency[12 ###reference_b12###][15 ###reference_b15###] and \u201cvirtuality continuum\u201d [14 ###reference_b14###] are the factors to be considered when designing the system. This research builds on the multi-user background and related work (in Section II-A ###reference_###), toward a new theoretical framework that extends and merges multiple physical and virtual environments (including IoT edge devices and IoT-enabled objects, mixed-reality 3D content) for single or multiple users (local or remote, virtual or physical).\n###figure_4### The XRI environments are the local spaces that include IoT-enabled devices to capture context and share through IoT communication broker or be controlled by IoT information. As indicated in the framework, the IoT edge devices could be communicated between the two XRI environments. The virtual environments and objects fit in the \u201cvirtuality continuum\u201d [14 ###reference_b14###], as mixed reality objects blended with physical objects in a fully immersive environment(s). IoT enables the \u201cshared\u201d values of the virtual content (from the shared hybrid environment objects) that are in communication with the physical environment and which can be accessed and controlled by both the virtual or physical environment elements, as presented in the framework. 
These hybrid objects could be accessed by the XRI environments virtually, and can be shared for multiple users to interact with, from one individual\u2019s local environment, to another individual\u2019s remote environment.\n###figure_5### In terms of the users, in one XRI environment, they could interact with each other through the physical-to-physical path (the normal way to interact in the real world), the physical-to-virtual path with virtual objects or partial avatarization body[11 ###reference_b11###][8 ###reference_b8###], the virtual-to-physical path with their partial avatarization to manipulate the physical objects and environments, and virtual-to-virtual path with partial avatarization body or full avatarization body in the full virtual environment or remote XRI environment to interact with virtual objects.\nIn terms of the level of agency[12 ###reference_b12###], this refers to the ability of intelligent actors to interact with users and each other, and also control the hybrid objects in the XRI environment. In this sense, conversational agents are common, as IoT interfaces, and in this case a conversational agent is considered as part of the agency considerations to provide chat with users, which means it could be aware of the speech of the users, understand the speech, and provide feedback (with the speech in the speaker or virtual embodiment). One of the recent popular examples is ChatGPT171717https://openai.com/blog/chatgpt/ (accessed on 07-February-2023) that could interact with users through text, and it could be potentially used in the Social XRI metaverse environment with the virtual embodiment."
+ },
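The broker-mediated sharing described in this framework can be sketched as a minimal in-process topic broker, a stand-in for a real MQTT-style IoT communication broker; the topic names and payload shapes below are illustrative assumptions, not part of the described system:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class IoTBroker:
    """Minimal topic-based publish/subscribe broker relaying shared
    state between two XRI environments (stand-in for an MQTT broker)."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, payload: dict) -> None:
        # Deliver the payload to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(payload)

# XRI Environment II publishes a sensor reading; Environment I mirrors it
# onto a shared hybrid object (the "shared" value the framework describes).
broker = IoTBroker()
shared_state = {}
broker.subscribe("xri/env2/plant/moisture",
                 lambda msg: shared_state.update(moisture=msg["value"]))
broker.publish("xri/env2/plant/moisture", {"value": 0.42})
print(shared_state)  # {'moisture': 0.42}
```

A deployed system would replace the direct callback delivery with a networked broker, but the topic-and-subscriber structure is the same.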
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Social Interaction Scenarios in XRI Multi-user Metaverse",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Social XRI Multi-user Metaverse Hybrid Interactions in Local Environments",
+ "text": "Figure 4 ###reference_### focuses on the design of scenarios for multi-user interactions in the same room within a hyper-connected metaverse environment. The interactions involve virtual-to-physical, virtual-to-virtual, physical-to-virtual, and physical-to-physical paths and use XR-IoT hybrid bodies and virtual extensions. In Figure 4 ###reference_###(a), an XR-IoT hybrid body interacts with a virtual bulb as a body extension, pressing it to turn a physical lamp on or off through an HTTP PUT request; this can be observed by others and is a virtual-to-physical path interaction. In Figure 4 ###reference_###(b), an XR-IoT hybrid body manipulates the scale of a virtual IoT plant avatar using its virtual bulb extension, which others can observe virtually; this is a virtual-to-virtual path interaction. In Figure 4 ###reference_###(c), a physical context changed by other users is captured by a webcam with computer vision and affects the virtual extension agent (the virtual bulb) on the XR-IoT hybrid body; this is a physical-to-virtual path interaction. In Figure 4 ###reference_###(d), a user controls the virtual bulb through conversation, which can be observed by others; this is considered a physical-to-physical path interaction."
+ },
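The virtual-to-physical path of scenario (a), pressing a virtual bulb to drive a physical lamp over HTTP PUT, could be sketched as below. The endpoint URL and JSON payload are hypothetical, and the transport is injectable so the sketch runs without a real lamp:

```python
import json
import urllib.request

def make_put_request(url: str, state: bool) -> urllib.request.Request:
    """Build the HTTP PUT that a virtual bulb press would send to a lamp."""
    body = json.dumps({"on": state}).encode("utf-8")
    return urllib.request.Request(url, data=body, method="PUT",
                                  headers={"Content-Type": "application/json"})

class VirtualBulb:
    """Virtual body extension that toggles a physical lamp when pressed."""

    def __init__(self, lamp_url: str, send=urllib.request.urlopen):
        self.lamp_url = lamp_url   # hypothetical IoT lamp endpoint
        self.state = False
        self._send = send          # injectable transport, for testing

    def press(self) -> bool:
        # Toggle the virtual bulb, then mirror the state to the lamp.
        self.state = not self.state
        self._send(make_put_request(self.lamp_url, self.state))
        return self.state

# Simulated press: capture the request instead of contacting a real lamp.
sent = []
bulb = VirtualBulb("http://lamp.local/api/state", send=sent.append)
bulb.press()
print(sent[0].get_method(), sent[0].data)  # PUT b'{"on": true}'
```

Injecting the transport keeps the virtual object logic separable from the IoT plumbing, which also matters when the same press must be observable by other users through the broker.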
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Social XRI Multi-user Metaverse Hybrid Interactions across Local and Remote Environments",
+ "text": "Figure 5 ###reference_### presents the highly connected multi-user metaverse scenarios with seamless integration of virtual and physical spaces. These spaces can be described by the Social-XRI Interaction Cube (bottom right), based on the physical-virtual, local-remote, and single-user\u2013multi-user dimensions, for each of the context situations in XRI Environments (a), (b), (c), (d), and (e).\nThe solid blue arrows represent direct interaction of a user with virtual or physical objects. For example, at solid blue arrow number 1, the user has a full avatar in an immersive metaverse environment to interact with the virtual rocket and planets (virtual-to-virtual interaction). The dashed blue arrows symbolize the communication of IoT information between objects, with bi-directional virtual-to-physical interaction; for example, dashed blue arrow number 1 indicates that the virtual IoT avatar is controlled by the physical sensors in the XRI Environment-II(b) Context.\nThe solid red arrows signify movement between physical spaces; at solid red arrow number 1, User 1 can move physically between the XRI Environment-II(b) Context and the XRI Environment-I(a) Context. The dashed red arrows represent virtual telepresence in a mixed-reality environment or an immersive metaverse (VR) environment. For user telepresence, as indicated at dashed red arrow number 2, User 2 in the XRI Environment-III(d) Context can have a virtual presence in the XRI Environment-I(c) Context, viewed by User 3 (physically present), or in the XRI Environment-I(e) Context, viewed by another virtual avatar (such as User 3).\nThe solid green arrows signify the use of traditional devices, such as a laptop with a screen, providing a more conventional means of interaction within the virtual environment. Green arrows number 2 and 3 show the most traditional case: a meeting through webcam and screen presence, as in Zoom181818https://zoom.us/ (accessed on 07-February-2023) and Teams191919https://www.microsoft.com/microsoft-teams/ (accessed on 07-February-2023). Green arrow number 1 shows screen devices used to access the three-dimensional metaverse environment (XRI Environment-I(e) Context), with the user present as an avatar controlled by mouse and keyboard, as in Decentraland and Spatial (see Table I ###reference_###)."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Social XRI Multi-user Metaverse Use-Case Scenarios across Local and Remote Environments",
+ "text": "In a social XRI metaverse research lab scenario, the lab\u2019s room can be considered a physical and hybrid XRI environment where researchers and students work in person or remotely. Users 1, 2, and 3 work physically in the lab, as indicated in the XRI Environment-I(a) Context, and they can experience the XRI environment with a virtual IoT Avatar, trees, butterflies, and a physical light that can be controlled through the virtual objects[9 ###reference_b9###][5 ###reference_b5###][4 ###reference_b4###][7 ###reference_b7###].\nThe IoT Avatar in the XRI Environment-I(a) Context is the virtual telepresence of the plant (red dashed arrow number 7) and the embodiment of its IoT information (blue dashed arrow number 1). User 1 can thus be aware of the status (emotion) [9 ###reference_b9###] of the physical plant in the XRI Environment-II(b) Context. If the plant is sad and needs to be watered, User 1 can physically move back to the XRI Environment-II(b) Context to water the plant and make it happy. While commuting to, or present in, the XRI Environment-II(b) Context, User 1 can still maintain a virtual telepresence as a full avatar in the XRI Environment-I(a) Context (red dashed arrow number 1) to communicate with Users 2 and 3. At the same time, User 1 appears in the XRI Environment-I(c) Context as an Extended-XRI body with a virtual light bulb extension; the virtual light bulb attached to the avatar\u2019s hand can be pressed (solid blue arrow number 3) to control the physical light (blue dashed arrow number 4). Such a scenario applies when users hold an immersive meeting and want to improve the lighting environment remotely.\nThe XRI Environment-I(e) Context is a completely virtual environment that users access through a VR headset or screen-based devices. Users 1 and 2 are virtually present in that space (red dashed arrow numbers 3 and 4), embodied as full avatars, where they can host meetings, play games, and so on. If User 2 switches back to the XRI Environment-II(b) Context, User 1, as an avatar in the XRI Environment-I(e) Context, can still connect with User 2 by interacting with the virtual objects in the shared virtual environment, i.e., the rocket and planet (solid blue arrow number 1), and controlling the physical lights (blue dashed arrow number 3), which can also be seen by User 4 in the XRI Environment-III(d) Context (solid blue arrow number 2) or via the virtual bulb extended body attached to User 2 (blue dashed arrow number 4)."
+ },
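The plant avatar's status (emotion) in this scenario could be derived from sensor readings along the following lines; the moisture thresholds and emotion labels are illustrative assumptions, not values from the cited systems:

```python
def plant_emotion(soil_moisture: float) -> str:
    """Map a normalized soil-moisture reading (0.0-1.0) to an IoT Avatar
    emotion. Thresholds are illustrative; a deployed system would
    calibrate them per plant and sensor."""
    if soil_moisture < 0.2:
        return "sad"      # needs watering, prompting User 1 to return
    if soil_moisture < 0.5:
        return "neutral"
    return "happy"

print(plant_emotion(0.1))  # sad
print(plant_emotion(0.8))  # happy
```

The emotion string would be published over the IoT broker so the avatar embodiment in the remote XRI environment stays in sync with the physical plant.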
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "Building the Social-XRI Metaverse: Technologies and Implementation Considerations",
+ "text": "The above social XRI metaverse scenarios (Figure 5 ###reference_###) can be realized with existing technologies (as in Figure 3 ###reference_###). These include approaches that provide avatar engagement, expression, and telepresence in digital twin metaverse environments, based on: environment and body sensors; webcams; conversational agent controllers; smart lights and edge devices; actuators; communication channels; networks; information brokers (e.g., via blockchains); cloud services; mixed reality content anchor systems; HMDs; wearable devices; trackers; feedback devices (e.g., haptics); traditional displays; and AI frameworks.\nThese technologies provide the core elements needed for social, multi-user, and multi-agent telepresence. For avatar design, the framework can incorporate existing avatar tools, such as the avatar creation system from Meta202020https://www.theverge.com/2021/4/23/22398060/oculus-new-avatars-editor-features-vr-virtual-reality-facebook-quest-rift (accessed on 15-April-2023) and Ready Player Me212121https://readyplayer.me/ (accessed on 15-April-2023). The emotion and expressiveness of users within these environments can be conveyed through the avatar via face-tracking sensors, such as those of current-generation HMD devices (e.g., Meta Quest Pro). Avatar movement is commonly driven by inverse kinematics (IK) from the positions of the two controllers and the HMD222222https://www.uploadvr.com/meta-quest-2-body-tracking-without-trackers/ (accessed on 15-April-2023), via on-body trackers like the HTC Vive Tracker232323https://www.vive.com/ca/accessory/tracker3/ (accessed on 15-April-2023), and/or via external vision-based trackers (e.g., webcam computer vision models such as PoseNet242424https://blog.tensorflow.org/2018/05/real-time-human-pose-estimation-in.html (accessed on 15-April-2023)) or depth camera tracking (such as Kinect252525https://learn.microsoft.com/en-us/azure/kinect-dk/body-joints (accessed on 15-April-2023)). Further, volumetric capture is a viable option for capturing a moving user in real time with a depth camera and producing volumetric video in the remote space, as in Holoportation262626https://www.microsoft.com/en-us/research/project/holoportation-3/ (accessed on 15-April-2023)."
+ },
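Controller-and-HMD avatar animation of the kind referenced above typically reduces, per limb, to an analytic two-bone IK solve. A minimal planar sketch (the bone lengths and target are arbitrary, and real avatar rigs solve in 3D with additional pole-vector constraints):

```python
import math

def two_bone_ik(l1: float, l2: float, tx: float, ty: float):
    """Analytic two-bone IK in 2D: return (shoulder, elbow) angles in
    radians so a chain of bone lengths l1, l2 rooted at the origin
    reaches target (tx, ty). The target is clamped to the reachable
    radius, as IK solvers do when a tracked hand moves out of range."""
    d = min(math.hypot(tx, ty), l1 + l2)  # clamp unreachable targets
    # Law of cosines for the interior elbow angle of the triangle.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle: aim at the target, offset by the triangle's
    # inner angle at the shoulder.
    cos_inner = (d**2 + l1**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

def end_effector(l1, l2, shoulder, elbow):
    """Forward kinematics: hand position for the given joint angles."""
    ex = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    ey = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return ex, ey

s, e = two_bone_ik(0.3, 0.25, 0.2, 0.3)
print(end_effector(0.3, 0.25, s, e))  # recovers approximately (0.2, 0.3)
```

Driving the shoulder and elbow from the tracked hand position each frame is the core of the controller-based avatar arm motion described above; full-body solvers add hips, knees, and heuristics for untracked joints.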
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Discussion",
+ "text": "The proposed framework presents a perspective on the future of the metaverse with social and XRI components for multi-user and multi-agent interactions and experiences. This approach can bring multiple benefits to the metaverse platform; however, it also faces challenges, as seen in [1 ###reference_b1###], across both the underlying platform technologies and the resulting ecosystem.\nIn terms of benefits, social XRI metaverse ecosystems may help with: Remote Work and Co-working \u2013 The metaverse\u2019s ability to integrate multiple user environments and modalities facilitates a digital workspace where real-time collaboration can mimic in-person interactions regardless of geographical constraints. Metaverse Connectedness \u2013 The proposed design frameworks may help reduce the inherent task-switching disruptions that occur when users engage within and across multiple virtual and physical environments simultaneously [4 ###reference_b4###], as they streamline both physical and virtual spaces into a more cohesive and holistic meta-space for user interaction.\nMulti-agent Interaction \u2013 The designs indicate the possible incorporation of both human users and non-human agent components, which can express themselves and engage across the virtual-physical and local-remote dimensions. This encourages design toward a new form of hyper-connected multi-agent telepresence, with potential for new forms of human-environment, human-agent, and human-human interaction.\nOn the other hand, in terms of challenges, these include: Integration Complexity \u2013 The creation of a seamless and immersive metaverse is a complex endeavor, requiring significant advances in areas such as artificial intelligence, mixed reality, and IoT technologies. These technologies each require specialized frameworks that developers need to merge into a single runtime framework; the frameworks shown in this work highlight several sides of this issue. Privacy and Data Security \u2013 As a digital twin of the real world, the metaverse will likely process substantial personal data, raising significant concerns about privacy, data security, and ownership of information. This is an open area of research that must be addressed from multiple sides of the social XRI problem. Latency \u2013 Ensuring real-time interactions in a hyper-connected meta-environment is a considerable challenge, especially when the systems involved are decentralized, potentially large-scale, and carry high volumes of graphical information, image data, and other artificial intelligence data and models. Such hyper-connectedness is required, but it brings a heavy data-management cost; as a result, high latency could disrupt the user experience, particularly in time-sensitive activities.\nAlthough the proposed framework does not currently address these challenges, future advances in computation and latency are expected to make these new forms of multi-user interaction within the social XRI metaverse achievable."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "VII Summary",
+ "text": "This work has explored how the metaverse concept of a digital twin overlaying the physical environment can be extended through XRI technologies, toward a social XRI metaverse.\nThe design considerations for this have been presented, first for XRI systems broadly, and then extended into a design architecture for multi-user social interactions. This offers a path toward frameworks for creating such metaverse environments. Further, key scenarios covering user-user, user-agent, and even agent-agent interactions across these systems have been identified as an area for exploring social-metaverse concepts.\nThe outcomes of this work set the stage for new prototype concepts and a testbed for examining the benefits and limitations of social interactions within such a hyper-connected hybrid virtual-physical shared environment. In particular, future research will evaluate and test the architecture and the experiences shown in this work, across the dimensions identified.\nLikewise, it is worth highlighting that the design challenge within the social XRI metaverse is complex, and a single architectural framework may not be sufficient to instantiate it. However, approaches involving prototyping and proof-of-concept development can be used to explore this concept and to consider the range of human factors involved [3 ###reference_b3###].\nIt remains for future metaverse researchers, developers, designers, and creators to bring this concept forward and to examine how to better accommodate shared social and multi-user real-world XRI metaverse experiences and interactions across the entirety of the reality-virtuality spectrum."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<table class=\"ltx_tabular ltx_minipage ltx_align_middle\" id=\"S2.T1.2\" style=\"width:433.6pt;\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.1.1\">Platforms</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.2.1\">Social</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.3.1\">VR</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.4.1\">XR</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.5.1\">Traditional display</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.6.1\">IoT</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.7.1\">IoT Avatar</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.8.1\">Local Environment</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.9\" 
style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.9.1\">Remote Environment</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.10.1\">Blockchain</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.11.1\">Avatarization</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.12.1\">Agency</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.13.1\">Synchronous</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.1.1.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.1.1.14.1\">Asynchronous</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.1.1\">Horizon Worlds<span class=\"ltx_note ltx_role_footnote\" id=\"footnote3\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_tag ltx_tag_note\">3</span>https://www.oculus.com/horizon-worlds/ (accessed on 05-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S2.T1.2.2.2.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.4.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.5.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.12\" 
style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.2.2.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.2.2.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.1.1\">Spatial<span class=\"ltx_note ltx_role_footnote\" id=\"footnote4\"><sup class=\"ltx_note_mark\">4</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">4</sup><span class=\"ltx_tag ltx_tag_note\">4</span>https://www.spatial.io/ (accessed on 05-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.4.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" 
id=\"S2.T1.2.3.3.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.3.3.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.3.3.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.4.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S2.T1.2.4.4.1.1\">Decentraland<span class=\"ltx_note ltx_role_footnote\" id=\"footnote5\"><sup class=\"ltx_note_mark\">5</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">5</sup><span class=\"ltx_tag ltx_tag_note\">5</span>https://decentraland.org/ (accessed on 05-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.4.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.9.1\">\u2713</p>\n</td>\n<td 
class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.10.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.4.4.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.4.4.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.1.1\">The Sandbox<span class=\"ltx_note ltx_role_footnote\" id=\"footnote6\"><sup class=\"ltx_note_mark\">6</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">6</sup><span class=\"ltx_tag ltx_tag_note\">6</span>https://www.sandbox.game/ (accessed on 05-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S2.T1.2.5.5.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.4.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.10.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.13\" 
style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.5.5.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.5.5.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.1.1\">Somnium Space<span class=\"ltx_note ltx_role_footnote\" id=\"footnote7\"><sup class=\"ltx_note_mark\">7</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">7</sup><span class=\"ltx_tag ltx_tag_note\">7</span>https://somniumspace.com/ (accessed on 05-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.4.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" 
id=\"S2.T1.2.6.6.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.10.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.6.6.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.6.6.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.7.7\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.1.1\">Microsoft Mesh<span class=\"ltx_note ltx_role_footnote\" id=\"footnote8\"><sup class=\"ltx_note_mark\">8</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup 
class=\"ltx_note_mark\">8</sup><span class=\"ltx_tag ltx_tag_note\">8</span>https://www.microsoft.com/mesh (accessed on 05-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S2.T1.2.7.7.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.7.7.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.7.7.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.1.1\">MLSE Raptors Demo<span class=\"ltx_note ltx_role_footnote\" id=\"footnote9\"><sup class=\"ltx_note_mark\">9</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">9</sup><span class=\"ltx_tag ltx_tag_note\">9</span>https://www.thestar.com/sports/raptors/2023/01/24/the-future-of-sports-mixed-reality-viewing-experiences-coming-for-nhl-nba-fans.html (accessed on 08-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td 
ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.9.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S2.T1.2.8.8.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.8.8.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.8.8.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.9.9\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.1.1\">vTime XR - AR Mode<span class=\"ltx_note ltx_role_footnote\" id=\"footnote10\"><sup class=\"ltx_note_mark\">10</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">10</sup><span class=\"ltx_tag ltx_tag_note\">10</span>https://vtag.com/ (accessed on 06-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.7\" style=\"width:17.1pt;\">\n<p 
class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.9.9.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.9.9.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.10.10\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.1.1\">VTag<span class=\"ltx_note ltx_role_footnote\" id=\"footnote11\"><sup class=\"ltx_note_mark\">11</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">11</sup><span class=\"ltx_tag 
ltx_tag_note\">11</span>https://vtag.com/ (accessed on 06-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td 
ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.10.10.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.10.10.14.1\">\u2713</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.11.11\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.1.1\">Nextech AR - ARway<span class=\"ltx_note ltx_role_footnote\" id=\"footnote12\"><sup class=\"ltx_note_mark\">12</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">12</sup><span class=\"ltx_tag ltx_tag_note\">12</span>https://www.nextechar.com/arway (accessed on 06-February-2023)</span></span></span></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S2.T1.2.11.11.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.8.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.13.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle 
ltx_border_r ltx_border_t\" id=\"S2.T1.2.11.11.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.11.11.14.1\">\u2713</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.12.12\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.1.1\">IoT Avatar 2.0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.06230v3#bib.bib9\" title=\"\">9 ###reference_b9###</a>]</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.2.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.5.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.6.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.7.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S2.T1.2.12.12.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.9.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.11.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.12.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.12.12.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.12.12.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.13.13\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.1.1\">XRI Workstation<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.06230v3#bib.bib5\" title=\"\">5 ###reference_b5###</a>]</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.2.1\">\u2717</p>\n</td>\n<td class=\"ltx_td 
ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.5.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.6.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.9.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.11.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.12\" style=\"width:19.1pt;\">\n<p 
class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.13.13.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.13.13.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.14.14\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.1.1\">XRI Metaverse Prototypes <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.06230v3#bib.bib4\" title=\"\">4 ###reference_b4###</a>]</cite><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.06230v3#bib.bib7\" title=\"\">7 ###reference_b7###</a>]</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.2.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.5.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle 
ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.6.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.7.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.9.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.11.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.14.14.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.14.14.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.15.15\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.1\" 
style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.1.1\">XRI Body<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2306.06230v3#bib.bib8\" title=\"\">8 ###reference_b8###</a>]</cite></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.2.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.3.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.5.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.6.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.7.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.9.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r 
ltx_border_t\" id=\"S2.T1.2.15.15.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.10.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.12.1\">\u2717</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S2.T1.2.15.15.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.15.15.14.1\">\u2717</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.16.16\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.1\" style=\"width:50.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.1.1\">Proposed Social XRI Metaverse</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.2\" style=\"width:14.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.2.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.3\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.3.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.4\" style=\"width:5.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.4.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify 
ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.5\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.5.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.6\" style=\"width:8.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.6.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.7\" style=\"width:17.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.7.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.8\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.8.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.9\" style=\"width:28.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.9.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.10\" style=\"width:31.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.10.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.11\" style=\"width:37.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.11.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.12\" style=\"width:19.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.12.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.13\" style=\"width:35.9pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.13.1\">\u2713</p>\n</td>\n<td class=\"ltx_td ltx_align_justify 
ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.2.16.16.14\" style=\"width:40.1pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S2.T1.2.16.16.14.1\">\u2713</p>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table I: </span>Example Social Metaverse Applications, Extended Metaverse XRI Prototypes, and Criteria for Social XRI experiences. Key: \u2713 meets criteria; \u2717 does not meet criteria.</figcaption>\n</figure>",
124
+ "capture": "Table I: Example Social Metaverse Applications, Extended Metaverse XRI Prototypes, and Criteria for Social XRI experiences. Key: \u2713 meets criteria; \u2717 does not meet criteria."
125
+ }
126
+ },
127
+ "image_paths": {
128
+ "1": {
129
+ "figure_path": "2306.06230v3_figure_1.png",
130
+ "caption": "Figure 1: Existing social metaverse platforms focus on (a) screen and VR experiences and (b) mobile mixed reality and head-mounted experiences. This can be combined with (c) the XRI concept toward a new social-XRI metaverse. Table I shows more detail and comparison of the systems shown above.",
131
+ "url": "http://arxiv.org/html/2306.06230v3/extracted/5372698/figures/SocialXRIBackground.png"
132
+ },
133
+ "2": {
134
+ "figure_path": "2306.06230v3_figure_2.png",
135
+ "caption": "Figure 2: Design elements for XRI Applications with a focus on virtual embodiment methods, XRI interaction sensing components, and agent system designs, as in [12] [8].",
136
+ "url": "http://arxiv.org/html/2306.06230v3/extracted/5372698/figures/DesignTheory.jpg"
137
+ },
138
+ "3": {
139
+ "figure_path": "2306.06230v3_figure_3.png",
140
+ "caption": "Figure 3: Proposed Social XRI Metaverse architecture for single and multiuser, local and remote[10], physical and virtual interactions. This involves frameworks for XRI interaction[8], level of agency[13][12], body avatarization[8], and level of virtuality across the reality-virtuality continuum[14]. It defines how users in one or more XRI environments can interact, including their IoT edge devices, sensors, and agents, and the communication methods between them that provide access to shared virtual environments and hybrid objects.",
141
+ "url": "http://arxiv.org/html/2306.06230v3/extracted/5372698/figures/MultipleUserFramework.png"
142
+ },
143
+ "4": {
144
+ "figure_path": "2306.06230v3_figure_4.png",
145
+ "caption": "Figure 4: Design scenarios for local multi-user XRI metaverse interaction wherein two users engage with hybrid virtual-physical IoT objects and XRI avatars, as in XRI Environment 1 (see Figure 3). Example interactions include: (a) Manipulating a virtual bulb to turn on/off a physical lamp. (b) Manipulating the scale of a virtual IoT plant avatar. (c) Using computer vision to gather context and interact with virtual agents through physical context changes. (d) Controlling the virtual bulb through conversation. These kinds of interaction are expected to become more common as the metaverse grows in scale and maturity.",
146
+ "url": "http://arxiv.org/html/2306.06230v3/extracted/5372698/figures/SameRoomScenario.png"
147
+ },
148
+ "5": {
149
+ "figure_path": "2306.06230v3_figure_5.png",
150
+ "caption": "Figure 5: Interactions in the social XRI metaverse are envisioned to scale across multiple environments (see XRI Environment I, II and III), connecting one or more users with diverse sets of IoT-enabled edge devices, agents, and XRI avatars, regardless of their physical or virtual locations, positions across the XRI environment, avatar embodiment, or the access displays used to engage within the metaverse (i.e., screens or HMD\u2019s). Example environment configurations are as described above for (a), (b), (c), (d), and (e), as well as the different kinds of interaction (user interaction, IoT interaction, physical presence, virtual telepresence, and traditional display interactions. This level of complex interactions across physical and virtual spaces (single and multiuser, local and remote[10], physical and virtual interactions) must be addressed by social XRI metaverse systems. The Social-XRI Interaction Cube (bottom right) shows multiple dimensions and highlights where the context situations (a),(b),(c),(d), and (e) fit within the dimensions.",
151
+ "url": "http://arxiv.org/html/2306.06230v3/extracted/5372698/figures/HyperScenario.png"
152
+ }
153
+ },
154
+ "validation": true,
155
+ "references": [],
156
+ "url": "http://arxiv.org/html/2306.06230v3"
157
+ }
20240127/2306.06397v4.json ADDED
@@ -0,0 +1,103 @@
1
+ {
2
+ "title": "Lower-depth programmable linear optical processors",
3
+ "abstract": "Programmable linear optical processors (LOPs) can have widespread applications in computing and information processing due to their capabilities to implement reconfigurable on-chip linear transformations. A conventional LOP that uses a mesh of Mach-Zehnder interferometers (MZIs) requires stages of phase shifters for matrices. However, it is beneficial to reduce the number of phase shifter stages to realize a more compact and lower-loss LOP, especially when long and lossy electro-optic phase shifters are used. In this work, we propose a novel structure for LOPs that can implement arbitrary matrices as long as they can be realized by previous MZI-based schemes. Through numerical analysis, we further show that the number of phase shifter stages in the proposed structure can be reduced to and for a large number of random dense matrices and sparse matrices, respectively. This work contributes to the realization of compact, low-loss, and energy-efficient programmable LOPs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Programmable linear optical processors (LOPs) capable of implementing reconfigurable on-chip linear transformations have attracted increasing attention in recent years due to their promising applications in computing and information processing [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. By programming the transfer matrix of optical processors, parallel tasks such as matrix multiplication can be directly performed in the optical domain, which may significantly reduce latency and lower energy consumption compared to their electronic counterparts [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###].\nSo far, to implement an arbitrary matrix on a LOP, the matrix first needs to be decomposed into the product of two unitary matrices and a diagonal matrix via singular value decomposition [10 ###reference_b10###], as illustrated in Fig. 1(a). The unitary matrices are realized using the universal multiport interferometer architectures which consist of a mesh of tunable Mach-Zehnder interferometers (MZIs) [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. Bell et al. have proposed a compact MZI-based structure that requires stages of phase shifters for realizing arbitrary unitary matrices [14 ###reference_b14###], as shown in Fig. 1(b). The diagonal matrix is realized using either a gain/absorber array or an MZI array. In practice, it is preferred to construct the entire LOP only using MZIs for easier fabrication and control. In this case, the structure of a LOP using the Bell structure is schematically shown in Fig. 1(c). Here, the last phase shifter stage in the section for and the first phase shifter stage in the section for have been absorbed into the MZI array for . 
Therefore, for a conventional LOP only using MZIs, in its most compact form, 2N+3 phase shifter stages are required.\nWhile thermo-optic (TO) phase shifters are widely used in existing devices [3 ###reference_b3###, 16 ###reference_b16###], a TO phase shifter typically consumes more than 1 mW power and is therefore not suitable for large-scale devices [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. To realize a high-speed and low-power LOP, electro-optic (EO) phase shifters are highly desirable, since a GHz modulation bandwidth and ultra-low power consumption during static operations can be easily achieved [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###]. However, these phase shifters tend to have non-negligible insertion loss or can be millimeters in length, severely hindering their applications in LOPs. Therefore, it would be highly beneficial to reduce the number of phase shifter stages for the realization of compact and low-loss LOPs using EO phase shifters.\nIn this work, we propose a novel structure for programmable LOPs based on the concept of multiplane light conversion (MPLC) [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###]. We show that using N input/output ports out of an N\u2032\u00d7N\u2032 universal multiport interferometer (N\u2032 = 2N), an arbitrary N\u00d7N matrix can be obtained as long as it can be realized by previous MZI-based schemes. Through numerical analysis, we further show that the number of phase shifter stages in the proposed structure can be reduced to N+2 and N+3 for a large number of random dense matrices and sparse matrices, respectively. This work contributes to the realization of compact, low-loss, and energy-efficient programmable LOPs.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Principle",
15
+ "text": "Our proposed structure is illustrated in Fig. 2 ###reference_###. We use input/output ports out of an universal multiport interferometer, which consists of cascaded phase shifter arrays and couplers. The complex-valued transfer matrix can be calculated by multiplying the transfer matrices of all phase shifter arrays and couplers. The couplers can be multimode interference (MMI) couplers [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###], or multiport directional couplers (MDCs) [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###]. As explained in the previous works [31 ###reference_b31###, 32 ###reference_b32###, 34 ###reference_b34###, 35 ###reference_b35###], for schemes based on MPLC, the coupler is not required to have a specific transfer matrix as long as it properly scrambles the transmitted light.\n###figure_2### For an complex matrix that can be realized in previous schemes only using MZIs, as shown in Fig. 1(c), it can be written in the form of\nwhere and are two unitary matrices and is a non-negative real diagonal matrix. Since we have assumed that is realized by an MZI array, it can be written in the form of\nwhere .\nWe now show that using input/output ports of a universal multiport interferometer, which can implement arbitrary unitary matrix, an arbitrary in the form of Eq. 1 ###reference_### can be realized provided that . We first construct two unitary matrices and from and , respectively:\nwhere and are arbitrary unitary matrices with rows/columns. We then assume and construct a matrix in the form of\nwhere is the imaginary unit and is an arbitrary unitary matrix with rows/columns. vanishes at the boundary case (). It is easy to verify that is a unitary matrix and can be written as\nwhere\nand is the transpose of .\nNow, we can see that the product of , , and is\nwhere is included as a submatrix. 
Since , , and are all unitary matrices, must also be a unitary matrix, which by definition can be realized by an N\u2032\u00d7N\u2032 universal multiport interferometer. Because is a submatrix of , we reach the conclusion that using N input/output ports of an N\u2032\u00d7N\u2032 universal multiport interferometer, an arbitrary N\u00d7N matrix in the form of Eq. 1 ###reference_### can be realized provided that N\u2032 \u2265 2N. On the contrary, if N\u2032 < 2N, a unitary can no longer be constructed and therefore cannot be realized. It also can be verified that selecting N ports from N\u2032 ports can be arbitrary and does not have to be consecutive as in the above case, since permuting rows/columns in does not affect its unitarity. Meanwhile, we note that other submatrices in can be arbitrary since , , and are all arbitrary. This suggests that there are redundant degrees of freedom in , and we may need fewer phase shifter stages to realize . Considering the necessary degrees of freedom, the stage number M should be no less than N."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Numerical analysis",
21
+ "text": "We use two different types of couplers: MMI couplers and MDCs, respectively, to investigate the necessary phase shifter stages in this scheme. For an ideal MMI coupler in which insertion loss and power imbalance are ignored, its transfer matrix is given in Ref. [36 ###reference_b36###]. For an MDC, we use the coupled mode theory to derive its transfer matrix [37 ###reference_b37###]. The details are provided in Appendix A. In real devices, the MMI coupler should be engineered to have a low insertion loss, which typically involves the use of waveguide tapers; the MDC should be designed to have a large coupling entropy [32 ###reference_b32###].\n###figure_3###"
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III.1 Dense matrices",
27
+ "text": "We consider the cases of various , and . For each , we generate 100 random as target matrices by multiplying randomly generated , where and are Haar-random unitary matrices and the diagonal elements of are randomly sampled from the uniform distribution . In this way, all generated are dense matrices with non-zero elements. For each target matrix, we construct several LOPs with different and , and then optimize all the phase shifts to obtain the target matrix, using a covariance matrix adaptation evolution strategy (CMA-ES) algorithm [38 ###reference_b38###]. An open-source python package developed for the CMA-ES optimization is used [39 ###reference_b39###]. For simplicity, the ports are chosen to be the middle ports of the ports. After optimization, the difference between the target matrix and the obtained matrix is evaluated using the normalized squared error (NSE):\nThe NSE is also used as the cost function during optimizations, and the stopping criterion is set as either or upon reaching a specific iteration number. For all optimizations, we set the initial phase shifts to and the update step size (parameter \u201csigma\u201d in the algorithm) to 2. In most cases, the algorithm converges successfully and returns the desired results.\n###figure_4### Figure 3(a) and 3(b) show the average NSEs when the proposed LOPs are used to implement one hundred random dense matrices (), assuming the use of MMI couplers and MDCs, respectively. The error bar represents the range between the maximum and minimum value among the 100 cases. The dash line indicates the NSE of . We can see that very similar results are obtained for the two different couplers. As predicted in the previous section, an arbitrary matrix cannot be realized if . Therefore, for and , the average NSE does not decrease significantly even when is increased. 
On the contrary, for N\u2032 = 2N, sufficiently small NSEs (below 10\u221212) are obtained for all the 100 cases when M \u2265 N+2, indicating that all the desired matrices are almost perfectly realized. Figure 4 further shows the cases of various N. The error band represents the range between the maximum and minimum value among the 100 cases. For all desired matrices, sufficiently small NSEs are obtained for both types when N\u2032 = 2N and M = N+2. For N\u2032 < 2N, although the NSE decreases slightly with increasing M, it cannot be sufficiently suppressed compared to the case of N\u2032 = 2N. There is no notable difference between the LOP using MMI couplers and MDCs in terms of NSE. Therefore, while a rigorous proof is still lacking, the numerical analysis shows that N+2 stages of phase shifters are sufficient in this scheme for a large number of random dense matrices with non-zero elements.\n###figure_5###"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III.2 Sparse matrices",
33
+ "text": "We then investigate the necessary stage number in this scheme for sparse matrices. Random dense matrices with non-zero elements are first generated following the same procedures described in Subsection A. Subsequently, sparse matrices with only one non-zero element are created by randomly setting elements to 0 in the dense matrices. We construct various LOPs to implement these sparse matrices. Figure 5 shows the average NSEs after optimizing the phase shifts. Each point again represents the average value of 100 random cases. It can be seen that all these sparse matrices are realized sufficiently well using stages. Although an extra stage is needed compared with the case of dense matrices, this increase is still acceptable from a practical standpoint. Additionally, we can note the differences in Fig. 5(a) and Fig. 5(b) for and , which are attributed to the differences in the coupling entropy between the two couplers."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "III.3 Hardware-induced computational errors",
39
+ "text": "We further compare the hardware-induced computational error in this scheme and that in the conventional MZI-based scheme shown in Fig. 1(c). Computational errors can arise from many factors, such as fabrication errors and finite phase control resolutions. Since it is beyond the scope of this paper to investigate the effects of different error sources, here we focus solely on the phase quantization error, which is caused by the finite phase control resolution in practical devices. In the results shown in Figs. 3-5, all phase shifts have been assumed to have sufficiently fine resolutions. Now, we assume that the resolution of phase control is 10 bits and calculate the induced errors in both schemes. We use the same 100 random matrices in Fig. 4 as the targets and calculate the ideal phase configurations for each scheme, respectively. The phase configurations for the MZI-based scheme are derived using the decomposition algorithm proposed in Ref. 14. Then, these ideal phase configurations are replaced by 10-bit approximations, and the new matrices are calculated. Figure 6 shows the average NSEs between the target and obtained matrices. For this scheme, the results of using MMI couplers and using MDCs are almost the same. The MZI-based scheme has larger average NSEs than this scheme, and the difference increases with increasing . This arises from the fact that an error occurring at an earlier stage affects all the following stages. The error accumulates along with the propagation of light in the circuit, and therefore, a deeper LOP has a larger error than a shallower one.\n###figure_6###"
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "IV Discussion",
45
+ "text": "It is natural to wonder if the proposed method can be applied into conventional schemes based on MZI meshes. Here, we show that although an arbitrary can be realized by increasing the port number to , the number of phase shifter stages can only be reduced by 1. We use and as the example, as illustrated in Fig. 7. Without loss of generality, we assume that Ports 1-3 are used as the input/output ports. The transfer matrix of this MZI mesh can be calculated by sequentially multiplying the matrices of all components:\nwhere corresponds to the MZI between the -th and -th ports on the -th stage [14 ###reference_b14###], corresponds to the phase shifter outside MZIs on the -th port, -th stage and takes the form of\nwhere is the phase shift. For an arbitrary unitary matrix, each MZI matrix and phase shifter is determined recursively using the decomposition algorithm proposed by Bell et al [14 ###reference_b14###]. Since this MZI mesh can realize arbitrary unitary matrices, according to the conclusion reached in Sec. II, an arbitrary matrix can also be realized if it can be written in the form of Eq. 1. Meanwhile, it is known that multiplying a matrix with only affects the associated elements in the -th and -th rows/columns. Therefore, , and on all stages are redundant as they do not affect the matrix elements of our interest. It then becomes obvious that the required stages of phase shifters are .\n###figure_7### The distinct difference in the required number of phase shifter stages between the MZI mesh and our proposed structure is attributed to the couplers. The MMI coupler splits the input light from any input waveguide equally into all output waveguides. The MDC does not split the input light equally, but scrambles it in a way such that the light is not localized in only a few waveguides [31 ###reference_b31###]. 
Therefore, the phase shifters following an N\u2032\u00d7N\u2032 coupler change the matrix elements globally since each phase shifter affects the light from all ports in the previous stage. By contrast, a phase shifter in the MZI mesh changes the matrix elements locally since it only affects the light from two ports in the previous stage. This leads to the result that a large number of phase shifters in Fig. 7 are redundant.\nAlthough the structure of the universal multiport interferometer is not new, the method of reducing the circuit depth by encoding a complex matrix as a block in a unitary matrix has not been proposed in previous works. While the number of ports has doubled in our scheme, the required number of phase shifter stages has been reduced from 2N+3 in the previous scheme to N+2 or N+3, which brings significant advantages in terms of insertion loss and footprint, especially when N is large. In addition, the doubling of ports may not be a serious issue since there is no thermal crosstalk between the EO phase shifters, allowing them to be placed very close to each other. For unused output ports, optical power monitors can be integrated to provide real-time monitoring signals."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "conclusion",
51
+ "text": "We have proposed a novel structure for programmable LOPs based on the concept of MPLC. We have shown that using input/output ports out of an universal multiport interferometer (), an arbitrary matrix can be obtained as long as it can realized by conventional schemes based on MZI meshes. While a rigorous proof is yet to be provided, our numerical analysis suggests that the number of phase shifter stages in the proposed structure can be significantly reduced to for a large number of dense matrices, and for a large number of sparse matrices. We have further demonstrated that the same level of reduction cannot be achieved for the conventional scheme. This work contributes to the realization of compact, low-loss, and energy-efficient programmable LOPs."
52
+ }
53
+ ],
54
+ "appendix": [
55
+ {
56
+ "section_id": "Appendix 1",
57
+ "parent_section_id": null,
58
+ "section_name": "Appendix A Transfer matrix of the multiport directional coupler (MDC)",
59
+ "text": "We consider an MDC consisting of parallel straight waveguides with the same width and spacing. in this section should not be confused with the in the main context. We assume that the perturbation is weak so that the propagation constant in each waveguide is approximately the same as that in a single waveguide, and only coupling from nearest waveguides needs to be considered. Under these assumptions, the amplitudes of light in the MDC can be described by the coupled equations [37 ###reference_b37###]:\nwhere is the distance along the propagation direction, is the amplitude in the -th waveguide (), is the propagation constant, is the coupling coefficient from the -th to -th waveguide. Since all the waveguides are assumed to have the same width and spacing, the coupling coefficient between each waveguide pair is the same and will hereafter be denoted as . The above equations can be written in the matrix form as:\nwhere\nSince is a real Hermitian matrix, we can find a unitary matrix to diagonalize it into a diagonal matrix :\nThen, by letting and substituting it into Eq. A2, we obtain:\nIt is easy to see the solution of the above equation is:\nwhere is the -th diagonal element of . It follows that\nThe above equations can be rewritten as:\nTherefore, for an MDC with a length of , the transfer matrix is given by\nIn this paper, we assume the use of silicon waveguides with core dimension of 500 220 \\unit^2. From numerical simulation, is found to be 9.91 \\unit/\\micro for transverse electric (TE) mode at 1550-\\unitnm wavelength. For an -port MDC, we assume and select an so that small deviations in and do not affect the overall performance of the LOP. Specifically, for , we choose to be 50, 60, 75, 85, 100, 120, 130, 140, 150, 160 \\unit\\micro, respectively. Regardless of the value of , two parameters: waveguide spacing and length, can be swept when designing an MDC. 
The design goal is to thoroughly scramble the light propagating through the MDC, which can be quantitatively described by the coupling entropy [32 ###reference_b32###]. There exists a wide design region where the coupling entropy is large and does not change abruptly. The design parameters should be chosen from this region so that the device is robust to fabrication errors.\nThe above method yields unitary transfer matrices that simplify our analysis in this paper. However, if the distance between adjacent waveguides is small enough that the assumption of weak perturbation no longer holds, numerical approaches should be used to obtain more precise results [40 ###reference_b40###]."
60
+ }
61
+ ],
62
+ "tables": {},
63
+ "image_paths": {
64
+ "1": {
65
+ "figure_path": "2306.06397v4_figure_1.png",
66
+ "caption": "Figure 1: (a) Conventional programmable LOPs based on singular value decomposition. The target matrix is first decomposed into the product of two unitary matrices and a diagonal matrix. (b) A compact universal multiport interferometer proposed by Bell et al. for implementing arbitrary unitary matrices [14]. N+2\ud835\udc412N+2italic_N + 2 stages of phase shifters are needed (N=4\ud835\udc414N=4italic_N = 4 in this figure). (c) The structure of a LOP using the Bell structure. The last phase shifter stage in the section for \ud835\udc15\ud835\udc15\\rm\\bf Vbold_V and the first phase shifter stage in the section for \ud835\udc14\ud835\udc14\\rm\\bf Ubold_U have been absorbed into the MZI array for \ud835\udeba\ud835\udeba\\rm\\bf\\Sigmabold_\u03a3. 2\u2062N+32\ud835\udc4132N+32 italic_N + 3 stages of phase shifters are needed (N=4\ud835\udc414N=4italic_N = 4 in this figure).",
67
+ "url": "http://arxiv.org/html/2306.06397v4/x1.png"
68
+ },
69
+ "2": {
70
+ "figure_path": "2306.06397v4_figure_2.png",
71
+ "caption": "Figure 2: The proposed LOP structure. While the whole device has N\u2032superscript\ud835\udc41\u2032N^{\\prime}italic_N start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT ports, only N\ud835\udc41Nitalic_N ports are used as input/output ports. For unused input and output ports, phase shifters are not needed and thus are omitted in the first and last stages. For the stages in between two N\u2032\u00d7N\u2032superscript\ud835\udc41\u2032superscript\ud835\udc41\u2032N^{\\prime}\\times N^{\\prime}italic_N start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT \u00d7 italic_N start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT couplers, all N\u2032superscript\ud835\udc41\u2032N^{\\prime}italic_N start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT phase shifters are used. The couplers can be multimode interference (MMI) couplers or multiport directional couplers. M\ud835\udc40Mitalic_M is the number of phase shifter arrays.",
72
+ "url": "http://arxiv.org/html/2306.06397v4/x2.png"
73
+ },
74
+ "3": {
75
+ "figure_path": "2306.06397v4_figure_3.png",
76
+ "caption": "Figure 3: Average NSEs when the proposed LOPs are used to implement 100 random dense matrices with non-zero elements. The error bar represents the range between the maximum and minimum values among the 100 cases. The dash line indicates the NSE of 10\u221212superscript101210^{-12}10 start_POSTSUPERSCRIPT - 12 end_POSTSUPERSCRIPT. (a) LOPs using MMI couplers. (b) LOPs using MDCs.",
77
+ "url": "http://arxiv.org/html/2306.06397v4/x3.png"
78
+ },
79
+ "4": {
80
+ "figure_path": "2306.06397v4_figure_4.png",
81
+ "caption": "Figure 4: Average NSEs for random dense matrices with non-zero elements. The error band represents the range between the maximum and minimum values among the 100 cases. (a) LOPs using MMI couplers. (b) LOPs using MDCs.",
82
+ "url": "http://arxiv.org/html/2306.06397v4/x4.png"
83
+ },
84
+ "5": {
85
+ "figure_path": "2306.06397v4_figure_5.png",
86
+ "caption": "Figure 5: Average NSEs for random sparce matrices with one non-zero element. The error band represents the range between the maximum and minimum values among the 100 cases. (a) LOPs using MMI couplers. (b) LOPs using MDCs.",
87
+ "url": "http://arxiv.org/html/2306.06397v4/x5.png"
88
+ },
89
+ "6": {
90
+ "figure_path": "2306.06397v4_figure_6.png",
91
+ "caption": "Figure 6: Average NSEs of this scheme (N\u2032=2\u2062N,M=N+2formulae-sequencesuperscript\ud835\udc41\u20322\ud835\udc41\ud835\udc40\ud835\udc412N^{\\prime}=2N,M=N+2italic_N start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT = 2 italic_N , italic_M = italic_N + 2) and the MZI-based scheme when all phase shifts are assumed to have 10-bit resolutions.",
92
+ "url": "http://arxiv.org/html/2306.06397v4/x6.png"
93
+ },
94
+ "7": {
95
+ "figure_path": "2306.06397v4_figure_7.png",
96
+ "caption": "Figure 7: An example of using a 6\u00d76666\\times 66 \u00d7 6 MZI-based universal multiport interferometer to implement 3\u00d73333\\times 33 \u00d7 3 matrices. All MZIs and phase shifters below the dash line are redundant since they have no effect on the matrix elements of interest. 2\u2062N+22\ud835\udc4122N+22 italic_N + 2 phase shifter stages are required (N=3\ud835\udc413N=3italic_N = 3 in this example).",
97
+ "url": "http://arxiv.org/html/2306.06397v4/x7.png"
98
+ }
99
+ },
100
+ "validation": true,
101
+ "references": [],
102
+ "url": "http://arxiv.org/html/2306.06397v4"
103
+ }
20240127/2308.10335v5.json ADDED
@@ -0,0 +1,457 @@
1
+ {
2
+ "title": "Can LLM Replace Stack Overflow? A Study on Robustness and Reliability of Large Language Model Code Generation",
3
+ "abstract": "Recently, large language models (LLMs) have shown an extraordinary ability to understand natural language and generate programming code. It has been a common practice for software engineers to consult LLMs when encountering coding questions. Although efforts have been made to avoid syntax errors and align the code with the intended semantics, the reliability, and robustness of the code generation from LLMs have not yet been thoroughly studied. The executable code is not equivalent to reliable and robust code, especially in the context of real-world software development. For example, the misuse of APIs in the generated code could lead to severe problems, such as resource leaks, program crashes, etc. Existing code evaluation benchmarks and datasets focus on crafting small tasks such as programming questions in coding interviews. However, this deviates from the problems developers typically consult LLMs about. To fill the missing piece, we propose a dataset RobustAPI for evaluating the reliability and robustness of code generated by LLMs. We collect 1208 coding questions from Stack Overflow on 18 representative Java APIs. We summarize the common misuse patterns of these APIs and evaluate them on current popular LLMs. The evaluation results show that even GPT-4 has 62% of the generated code that contains API misuses. It would cause unexpected consequences if the code is introduced into real-world software.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The new era of language modeling arrives when large language models (LLMs) are capable of generating customized code according to the user\u2019s needs (Ye et al. 2023 ###reference_b30###; OpenAI 2023a ###reference_b15###; Anil et al. 2023 ###reference_b1###). It is not surprising that more and more software engineers choose to query large language models for answers to coding questions, such as generating a code snippet using certain APIs or detecting bugs in a few lines of code. Large language models are able to provide more suitable and customized answers to a question than searching online programming forums such as Stack Overflow.\nSuch a fast pace conceals potential risks in the code generation of large language models. From the perspective of software engineering, the robustness and reliability of generated code have not yet been thoroughly studied, even though numerous efforts have been made to avoid syntax errors and improve semantic understanding in the generated code (Xu et al. 2022 ###reference_b28###; Chen et al. 2021 ###reference_b3###; Shen et al. 2023a ###reference_b23###; Luo et al. 2023 ###reference_b13###). Unlike in online programming forums, the generated code snippets are not reviewed by community peers and thus suffer from API misuse, such as missing boundary checks in file reading and variable indexing, missing file stream closing, failure to complete transactions, etc. Even if the code samples are executable or functionally correct, misuse can trigger serious risks in production, such as memory leaks, program crashes, garbage collection failures, etc., as shown in Figure 1 ###reference_###. To make things worse, the programmers asking these questions could be vulnerable to these risks if they are novices to the APIs and cannot spot the violations in the generated code snippets. 
Therefore, it is essential to contemplate code reliability when evaluating code generation by large language models.\n###figure_1### To evaluate the code generation of large language models, most existing benchmarks focus on the functional correctness of the execution result of the generated code, meaning the code is acceptable as long as it is functional for the user\u2019s purpose (Chen et al. 2021 ###reference_b3###; Yin et al. 2018 ###reference_b32###; Lu et al. 2021 ###reference_b12###). We argue that a correct execution result is important, but it is not the only concern in the software development scenario. What engineers really need is a reliable code sample without potential risks in the long run. Moreover, the domain of most current programming datasets is far from software engineering. The data source is mostly online coding challenge websites, such as Codeforces, Kattis, Leetcode, etc. (Hendrycks et al. 2021 ###reference_b7###; Austin et al. 2021 ###reference_b2###). Although remarkable progress has been made, we argue that they fail to substantially help software development in practical scenarios.\nTo this end, we propose RobustAPI, a comprehensive benchmark to evaluate the reliability and robustness of code generated by large language models, including a dataset of coding questions and an evaluator using the abstract syntax tree (AST) (Fischer, Lusiardi, and Von Gudenberg 2007 ###reference_b6###).\nIn the dataset, we aim to create an evaluation setting that is close to real software development. Thus we collect representative questions about Java from Stack Overflow. Java is one of the most popular programming languages and is widely used in software development because of its write once, run anywhere (WORA) feature (https://en.wikipedia.org/wiki/Java_(programming_language)). For each question, we provide a detailed description and the related Java API. 
We design templates to trigger large language models to generate the code snippet and the corresponding explanation.\nWe also provide an evaluator that analyzes the generated code snippets using the abstract syntax tree (AST) and compares them with the expected API usage patterns. Following Zhang et al. (2018 ###reference_b33###), we formalize the API usage patterns into structured call sequences, as shown in Figure 2 ###reference_###. The structured call sequences present how these APIs can be properly used to eliminate potential system risks. Any violation of such structured call sequences is considered API misuse from the perspective of software engineering.\nWe collect 1208 real questions from Stack Overflow involving 18 representative Java APIs. We run experiments on closed-source language models (GPT-3.5 and GPT-4 (OpenAI 2023a ###reference_b15###)) as well as open-source language models (Llama-2 (Touvron et al. 2023 ###reference_b26###), Vicuna-1.5 (Chiang et al. 2023 ###reference_b4###)). We use the default hyper-parameter settings of the models without extensive hyper-parameter tuning. We further design two experiment settings, zero-shot and one-shot, where none or one demonstration sample is provided in the prompt. We conduct a comprehensive analysis of the generated code and study the common API misuse cases of current large language models. We would like to bring up the important issue of API misuse in code generation by large language models, and provide a new dimension for evaluating large language models beyond the commonly used functional correctness. The main purpose of this benchmark is not to evaluate the functional correctness of the generated code; instead, we focus on reliability and robustness. We hope this work can facilitate future research on this topic and help create a more robust coding helper out of large language models as a further step toward real artificial general intelligence. 
We open-source our dataset and evaluator on GitHub (https://github.com/FloridSleeves/RobustAPI). We summarize our contributions as follows.\nWe propose a new benchmark, RobustAPI, to evaluate the reliability and robustness of code generation by large language models. This is an important but not yet well-studied perspective on code quality apart from functional correctness.\nWe provide a well-formalized evaluation framework including a dataset of Stack Overflow questions and an API usage checker using the AST. We report the performance of popular large language models, including GPT-3.5, GPT-4, Llama-2, and Vicuna-1.5.\nWe conduct a comprehensive analysis of the code generation performance of current large language models. We summarize the common API misuses for each model and point out promising directions for future research."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": "In this section, we describe RobustAPI, a comprehensive benchmark to thoroughly evaluate the reliability and robustness of LLM-generated code. We describe the process of data collection and prompt generation when constructing the dataset. Then we present the API misuse patterns evaluated in RobustAPI and discuss the potential consequences of violations. Finally, we introduce the static analysis method in RobustAPI for detecting API usage violations, which leverages the abstract syntax tree and achieves higher accuracy in detecting API misuse in LLM-generated code than rule-based methods such as keyword matching."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Data Collection",
27
+ "text": "To take advantage of existing research efforts in the software engineering field, we build RobustAPI based on the dataset from ExampleCheck (Zhang et al. 2018 ###reference_b33###) as our starting point. ExampleCheck is proposed to study frequent Java API misuse in online Q&A forums. We select 18 popular Java APIs from the dataset, as shown in Table 1 ###reference_###. These 18 APIs cover 6 domains including string processing, data structures, mobile development, crypto, I/O, and database operations. Then we crawl questions relevant to these APIs from Stack Overflow. We only select questions with online answers, and we keep the questions whose provided answer contains API misuse. In this way, we guarantee that the questions in RobustAPI are answerable and non-trivial, so we can use them to effectively evaluate the LLMs\u2019 ability to answer coding questions on which humans are prone to make mistakes. After filtering, we get 1208 questions in total. The distribution of questions for each domain is shown in Table 1 ###reference_###.\nAfter collecting the questions, we convert them into the JSON format with the following fields: {id, api, question, origin}. The id field contains the unique id we assign to each sample. The api field contains the API that we specifically instruct the large language models to use as a question hint. The question field contains the title and description of the Stack Overflow question. The origin field contains the original URL of this sample."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Prompt Generation",
33
+ "text": "In the prompt, we start with the task introduction and the required response format. Then we append the few-shot demonstrations for this API when conducting experiments in the few-shot settings. The demonstration examples satisfy our provided response format. Next, we append the question and the corresponding API hint for this question. This prompt simulates a user asking coding questions without providing any additional hints from the API documentation, which is a typical scenario when novice developers seek help from large language models. Due to the chat-completion nature of state-of-the-art LLMs, we wrap the question and answer with special tags to instruct LLMs to generate answers to the questions. The prompt template is adapted from (Patil et al. 2023 ###reference_b17###), which helps LLMs follow a specific generation template so that we can extract more compilable code snippets from the responses."
34
+ },
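As a concrete illustration of the assembly order described above (instruction, optional demonstration, question, API hint), a minimal prompt builder could look like the following sketch. The tag strings, class name, and method signature are ours for illustration; the actual template follows Patil et al. (2023) and is not reproduced here.

```java
public class PromptBuilder {
    // Assembles a prompt in the order described in the text: task
    // instruction, optional few-shot demonstration, then the question
    // with its API hint. Tag strings are illustrative placeholders,
    // not RobustAPI's actual template.
    public static String build(String instruction, String demonstration,
                               String question, String apiHint) {
        StringBuilder sb = new StringBuilder(instruction).append("\n\n");
        if (demonstration != null) {
            // Few-shot setting: prepend a question/answer pair.
            sb.append(demonstration).append("\n\n");
        }
        sb.append("### Question:\n").append(question).append("\n");
        sb.append("### API hint: ").append(apiHint).append("\n");
        sb.append("### Answer:\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("Answer the question with a Java code snippet.",
                null, "How do I append text to a file?", "PrintWriter.write"));
    }
}
```

In the zero-shot setting `demonstration` would be `null`; in the one-shot settings it would hold the irrelevant or relevant question/answer pair.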
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Demonstration Samples",
39
+ "text": "Demonstration samples have been proven helpful to LLMs in understanding natural language. To thoroughly analyze LLMs\u2019 ability in code generation, we design two few-shot settings, One-shot-irrelevant and One-shot-relevant.\nIn the one-shot-irrelevant setting, we provide LLMs with an example using an irrelevant API (e.g., Arrays.stream). We assume this demonstration example would eliminate the syntax errors in the generated code.\nIn the one-shot-relevant setting, we provide LLMs with an example using the same API as the given question. The provided example contains a pair of question and answer. The question in the demo example is not present in the testing dataset, and we manually revise the answer to ensure that there is no API misuse in it and that the semantics align well with the question."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "Java API Misuse",
45
+ "text": "###figure_2### When using the APIs provided by language libraries, developers need to follow the API usage rules so that they can take full advantage of the intended API behavior. Violating these rules and misusing the APIs could result in unexpected behaviors in production. A typical example is file operation. When opening and writing a file through RandomAccessFile, two usage rules need to be enforced: (1) Reading the file could throw exceptions. If the buffer limit is reached before the expected bytes are read, the API throws IndexOutOfBoundsException. Also, if the file is concurrently closed by another process, the API throws ClosedChannelException. To deal with these exceptions, a correct implementation should enclose the API inside try-catch blocks. (2) The file channel should be closed after usage. Otherwise, if this code snippet is inside a long-lasting program that runs concurrently in multiple instances, the file resources could be exhausted. Therefore, the code needs to invoke the close API after all file operations. The correct usage is shown as follows:\nCorrect API Usage: try { RandomAccessFile raf = new RandomAccessFile(\"/tmp/file.json\", \"r\"); byte[] buffer = new byte[1024 * 1024]; int bytesRead = raf.read(buffer, 0, buffer.length); raf.close(); } catch(Exception e) {...} In RobustAPI, we summarize 41 API usage rules for the 18 APIs, which are validated against the documentation of these APIs (Zhang et al. 2018 ###reference_b33###). These rules include: (1) The guard condition of an API, which should be checked before the API call. For example, check the result of File.exists() before File.createNewFile(). (2) The required call sequence of an API, whose calls should occur in a specific order. For example, call close() after File.write(). (3) The control structures of an API. For example, enclose SimpleDateFormat.parse() in a try-catch structure."
46
+ },
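Rule (2) above is exactly what Java's try-with-resources statement automates: the file is closed even when read() throws. A minimal sketch of the same RandomAccessFile pattern written that way (the class name, return convention, and buffer size are our illustrative choices):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class SafeRead {
    // Reads up to 1 MiB from the file and returns the byte count
    // (-1 on end-of-file or any I/O error). try-with-resources
    // guarantees close() even on an exception path, and the catch
    // block satisfies the exception-handling rule.
    public static int readHead(String path) {
        try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
            byte[] buffer = new byte[1024 * 1024];
            return raf.read(buffer, 0, buffer.length);
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        // A missing file is reported as -1 rather than an unhandled exception.
        System.out.println(readHead("/tmp/does-not-exist-12345"));
    }
}
```

This does not change what RobustAPI checks; it merely shows one idiomatic way a generated answer could satisfy both usage rules at once.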
47
+ {
48
+ "section_id": "3.5",
49
+ "parent_section_id": "3",
50
+ "section_name": "Detecting API Misuse",
51
+ "text": "Existing research on evaluating code generated by LLMs usually uses test cases, which falls short when testing the reliability and robustness of code.\nTo deal with this challenging problem, we use static analysis in RobustAPI, which has relatively mature solutions for detecting API misuse (Zhang et al. 2018 ###reference_b33###; Nguyen et al. 2014 ###reference_b14###; Wang et al. 2013 ###reference_b27###; Huang et al. 2023 ###reference_b8###). To evaluate API usage correctness in code, RobustAPI detects API misuses against the API usage rules by extracting call sequences and control structures from the source code, as shown in Figure 2 ###reference_###.\nThe code checker first checks whether the code snippet is a snippet of a method or a method of a class, so that it can enclose the code snippet and construct an abstract syntax tree (AST) from it. Then the checker traverses the AST to record all the method calls and control structures in order, which generates a call sequence. Next, the checker compares the call sequence against the API usage rules. It infers the instance type of each method call and uses the type and method as keys to retrieve the corresponding API usage rules. Finally, the checker computes the longest common subsequence between the call sequence and the API usage rules. If the call sequence does not match the expected API usage rules, the checker reports an API misuse."
52
+ },
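The final matching step can be sketched as a subsequence test: a call sequence satisfies a rule when the rule's calls all appear in order within it, which is equivalent to the longest common subsequence of the two being the rule itself. A simplified illustration, with our own class name and call tokens (the real checker works on typed AST nodes, not plain strings):

```java
import java.util.Arrays;
import java.util.List;

public class RuleMatcher {
    // Returns true iff `rule` is a subsequence of `calls` — i.e. the
    // longest common subsequence of the two sequences is `rule` itself.
    public static boolean satisfies(List<String> calls, List<String> rule) {
        int matched = 0;
        for (String token : calls) {
            if (matched < rule.size() && token.equals(rule.get(matched))) {
                matched++;
            }
        }
        return matched == rule.size();
    }

    public static void main(String[] args) {
        // Illustrative usage rule: read must sit inside try-catch and
        // be followed by close.
        List<String> rule = Arrays.asList("try", "RandomAccessFile.read",
                "RandomAccessFile.close", "catch");
        List<String> good = Arrays.asList("try", "RandomAccessFile.<init>",
                "RandomAccessFile.read", "RandomAccessFile.close", "catch");
        List<String> bad = Arrays.asList("RandomAccessFile.<init>",
                "RandomAccessFile.read");
        System.out.println(satisfies(good, rule)); // no misuse reported
        System.out.println(satisfies(bad, rule));  // misuse reported
    }
}
```

A greedy scan suffices here because subsequence membership, unlike general LCS computation, never benefits from skipping an available match.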
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiment",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Experiment Setup",
63
+ "text": "In the experiments, we evaluate RobustAPI on four LLMs:\nGPT-3.5 (OpenAI 2023a ###reference_b15###), GPT-4 (OpenAI 2023a ###reference_b15###), Llama-2 (Touvron et al. 2023 ###reference_b26###), and Vicuna-1.5 (Chiang et al. 2023 ###reference_b4###).\nWe use the default hyper-parameter settings of each model without further extensive hyper-parameter tuning. All experiment results are Pass@1 unless specified.\nFor all models, we evaluate three experiment settings:\nZero-shot: No example is provided in the prompt. The prompt only contains the instruction and the question.\nOne-shot-irrelevant: RobustAPI provides one example of an irrelevant task in the prompt.\nOne-shot-relevant: RobustAPI provides one example of the same API with correct usage in the prompt.\nThe demonstration examples are manually written and double-checked by the authors. They are then evaluated against the API usage checker to make sure they are aligned with the API usage rules."
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Evaluation Metrics",
69
+ "text": "To quantitatively evaluate the reliability of the generated code, we define the following values, and our metrics are computed based on them. Supposing that we have N questions in our dataset, we divide them into three groups.\nN_misuse: The number of cases where our API usage checker detects API usage violations.\nN_pass: The number of cases where our API usage checker does not detect any API usage violation.\nN_fail: The number of cases where the LLM fails to generate code or the generated code is not compilable.\nBased on these values, we define our metrics.\nAPI Misuse Rate = N_misuse / (N_misuse + N_pass): the proportion of misuse cases among the compilable code snippets. It reveals how reliable the generated code is after users filter out the non-compilable cases.\nCompilation Rate = (N_misuse + N_pass) / N: the proportion of compilable cases among all questions. It is necessary to consider the percentage of compilable cases in order to eliminate the influence of extreme situations, such as when only a few compilable code snippets are generated.\nOverall API Misuse Percentage = N_misuse / N: the proportion of misuse cases among all questions."
70
+ },
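Read alongside the definitions above, the three metrics reduce to simple ratios over the three groups. A sketch of the arithmetic (the variable names are ours; the paper's own symbols were lost in this extraction):

```java
public class Metrics {
    // nMisuse: compilable answers with detected API misuse
    // nPass:   compilable answers with no misuse detected
    // nFail:   answers with no code or non-compilable code

    // Misuse among compilable answers only.
    public static double misuseRate(int nMisuse, int nPass) {
        return (double) nMisuse / (nMisuse + nPass);
    }

    // Compilable answers among all questions.
    public static double compilationRate(int nMisuse, int nPass, int nFail) {
        return (double) (nMisuse + nPass) / (nMisuse + nPass + nFail);
    }

    // Misuse among all questions.
    public static double overallMisusePercentage(int nMisuse, int nPass, int nFail) {
        return (double) nMisuse / (nMisuse + nPass + nFail);
    }

    public static void main(String[] args) {
        // e.g. 62 misuses among 100 compilable answers -> 0.62 misuse rate
        System.out.println(misuseRate(62, 38));
    }
}
```

Note that a model that rarely emits compilable code can show a deceptively low misuse rate, which is why the compilation rate is reported alongside it.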
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Research Questions",
75
+ "text": "We conduct a series of experiments on state-of-the-art LLMs based on RobustAPI, which demonstrate the usability and effectiveness of RobustAPI. The experiments provide insights into LLMs\u2019 ability to answer real-world coding questions and into the robustness and reliability of these answers regarding API misuse problems. In the experiments, we try to answer the following questions:\nQ1: What are the API misuse rates of these LLMs in answering real-world coding questions?\nQ2: How do irrelevant shots affect the results?\nQ3: Can correct API usage examples reduce the misuse?\nQ4: Why does LLM-generated code fail the API usage check?"
76
+ },
77
+ {
78
+ "section_id": "4.4",
79
+ "parent_section_id": "4",
80
+ "section_name": "Misuse Rate",
81
+ "text": "###figure_3### First, we present the API misuse rate of each model based on RobustAPI on the left of Figure 3 ###reference_###. In this figure, the higher the API misuse rate, the worse the code reliability and robustness of the large language model. The API misuse rate is calculated by dividing the number of answers that compile and contain API misuses by the number of all answers that compile.\nFrom the evaluation results, all the evaluated models suffer from API misuse problems, even state-of-the-art commercial models like GPT-3.5 and GPT-4. In the zero-shot setting, Llama has the lowest API misuse rate. However, this is partially because most of Llama\u2019s answers do not include any code. A counter-intuitive finding is that GPT-4 actually has a higher API misuse rate than GPT-3.5, even though the coding ability of GPT-4 is claimed to be \u201c40% more advanced than its predecessor, GPT-3.5\u201d (OpenAI 2023b ###reference_b16###). We also evaluate a code-specialized large language model, DeepSeek-Coder (Piplani and Bamman 2018 ###reference_b20###), which is trained on a variety of programming languages including Java and surpasses many existing Code LLMs. We report the results of deepseek-coder-6.7b-base and deepseek-coder-6.7b-instruct. We observe that the code-specialized large language model can generate more compilable samples. However, its API misuse rate is not significantly better than that of the other models. This indicates that while the code generation ability of large language models has largely improved, the reliability and robustness of code in real-world production arise as an unnoticed issue, and the space for improvement on this problem is huge.\nThe execution time for static analysis is shown in Table 3 ###reference_###. 
The time differences are due to the different coding styles of each LLM; all runs finish within 7 minutes.\nAnswers to real-world coding questions from state-of-the-art large language models widely contain API misuse problems."
82
+ },
83
+ {
84
+ "section_id": "4.5",
85
+ "parent_section_id": "4",
86
+ "section_name": "One-Shot-Irrelevant Results",
87
+ "text": "In this experiment, RobustAPI gives a pair of question and answer as an example to show the model how to follow the template required by the instructions. The example contains no information about the API usage checked by RobustAPI. The result is shown in the middle of Figure 3 ###reference_###. For most models, the irrelevant shot does not significantly reduce the API misuse rate but, on the contrary, slightly increases it. One possible reason is that the irrelevant shot provided to the large language models actually encourages the models to give lengthy code solutions, which increases the chance of API misuse. The API misuse rate of Llama increases significantly after adding the irrelevant shot because it produces more valid answers that contain code snippets. Overall, adding an irrelevant shot triggers the large language models to generate more valid answers, which enables a better evaluation of code reliability and robustness.\nAmong all the answers containing compilable code, 57-70% of the LLM answers contain API misuse, which could lead to severe consequences in production.\nIrrelevant shot examples do not help decrease the API misuse rate but trigger more valid answers, which proves effective for benchmarking model performance."
88
+ },
89
+ {
90
+ "section_id": "4.6",
91
+ "parent_section_id": "4",
92
+ "section_name": "One-Shot-Relevant Results",
93
+ "text": "In this experiment, RobustAPI adds a manually written shot to the prompt, which performs a different task but uses the same API. This gives LLMs hints on how to use these APIs correctly. From the results, after adding the correct usage shot, the API misuse rates of GPT-3.5, GPT-4, and Vicuna drop significantly. This indicates an effective improvement under this experiment setting. As for Llama, the relevant shot does not improve the performance. This experiment shows that some LLMs can effectively \u2018learn\u2019 the correct API usage and follow it. However, since existing language models are trained on data from code repositories, if the training datasets contain a large number of API violations, the language models are prone to generating code with API misuses, which explains the high API misuse rates in the zero-shot and one-shot-irrelevant evaluations. We show Pass@k results of one-shot-relevant in Table 4 ###reference_###.\n###table_1### Some LLMs can learn from the correct usage example, which reduces the API misuse rate."
94
+ },
95
+ {
96
+ "section_id": "4.7",
97
+ "parent_section_id": "4",
98
+ "section_name": "Robustness Analysis",
99
+ "text": "We evaluate the benchmark on GPT-3.5 under different temperatures (Table 5 ###reference_###). From the results, changing the temperature does not significantly change the misuse rate or the compilation rate. To study the effect of different prompting methods, we study how the API misuse rate changes when we replace the one-shot examples with the API usage rules. We feed the symbolized rules to ChatGPT to obtain the rules in natural language. We add the usage rules as part of the prompts and evaluate GPT-3.5 with RobustAPI. The results are shown in Table 6 ###reference_###, which indicates that the API usage rules might not help reduce the API misuse rate compared to one-shot relevant examples.\nIncreasing the temperature or replacing one-shot examples with API rules does not affect the API misuse rate significantly."
100
+ },
101
+ {
102
+ "section_id": "4.8",
103
+ "parent_section_id": "4",
104
+ "section_name": "Error Analysis",
105
+ "text": "###figure_4### In this section, we discuss the answers from LLMs that cannot pass the API usage check in the RobustAPI evaluation. There are two categories of failure cases: cases that are not compilable, and cases that are compilable but contain API misuses, as shown in Figure 3 ###reference_###. We refer to the ability to be compiled successfully as compilability. The compilation failure rate is calculated by dividing the number of cases that cannot be compiled by the total number of cases in the benchmark. GPT-4 performs the best among all the models regarding compilability, with less than 10% of answers failing to compile across all experiment settings. Adding a few shots to the prompts helps reduce the compilation failure rate in the evaluation results for all models.\nAs for the API misuse rate, we dive deeper into the APIs that LLMs are prone to misuse. Figure 4 ###reference_### details the misuse rate of each API for each LLM. Among all APIs, the Android development API Activity.setContentView has the lowest misuse rate across all the models."
106
+ },
107
+ {
108
+ "section_id": "4.9",
109
+ "parent_section_id": "4",
110
+ "section_name": "Case Study: API Misuse in GPT-3.5",
111
+ "text": "Taking GPT-3.5 as an example, we show a typical sample where GPT-3.5 reacts differently under different experiment settings. This question asks the model to help write a string to a file using the API PrintWriter.write. Under the zero-shot and one-irrelevant-shot settings, the answers differ slightly but both misuse the API by not catching exceptions. After giving the model the correct API usage example, the model learns how to use the API and responds correctly.\nZero Shot: PrintWriter writer = new PrintWriter(\"f.txt\", true); writer.write(\"text to append\"); writer.close(); One Irrelevant Shot: String text = \"Hello, World!\"; PrintWriter writer = new PrintWriter(\"f.txt\", true); writer.write(text); writer.close(); One Relevant Shot: try {String text = \"Hello, World!\"; PrintWriter writer = new PrintWriter(\"f.txt\", true); writer.write(text); } catch (IOException e) {e.printStackTrace();}"
112
+ },
113
+ {
114
+ "section_id": "5",
115
+ "parent_section_id": null,
116
+ "section_name": "Discussion",
117
+ "text": ""
118
+ },
119
+ {
120
+ "section_id": "6",
121
+ "parent_section_id": null,
122
+ "section_name": "Conclusion",
123
+ "text": "In this paper, we propose the benchmark RobustAPI to study API misuse in code generated by LLMs. From the benchmark results on state-of-the-art models, we find that API misuse widely exists in large language models even when the code is executable and aligned with users\u2019 intentions. Under different experiment settings, we explore effective methods of benchmarking and reducing the API misuse rate of LLMs. To inspire and accelerate future research on this problem, we open-source the dataset and benchmark at https://github.com/FloridSleeves/RobustAPI ###reference_###."
124
+ },
125
+ {
126
+ "section_id": "7",
127
+ "parent_section_id": null,
128
+ "section_name": "Acknowledgments",
129
+ "text": "The authors sincerely appreciate the reviewers and chairs of the AAAI for their constructive and insightful comments. Their expertise and thorough reviews have significantly contributed to the enhancement of this paper."
130
+ }
131
+ ],
132
+ "appendix": [],
133
+ "tables": {
134
+ "1": {
135
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx3.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx3.T1.1\" style=\"width:438.0pt;height:596.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(86.8pt,-118.3pt) scale(1.65725625476685,1.65725625476685) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx3.T1.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx3.T1.1.1.1.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.1.1.1.1.1\">API</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx3.T1.1.1.1.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.1.1.1.2.1\">Domain</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx3.T1.1.1.1.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.1.1.1.3.1\">Conseq*</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx3.T1.1.1.1.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.1.1.1.4.1\">Github*</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T1.1.1.2.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">StringTokenizer.nextToken</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.2.2\" rowspan=\"3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.2.2.1\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.2.2.1.1\"></span> <span class=\"ltx_text\" id=\"Sx3.T1.1.1.2.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"Sx3.T1.1.1.2.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.2.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.2.2.1.2.1.1.1\" 
style=\"padding-left:1.1pt;padding-right:1.1pt;\">String</span></span>\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.2.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.2.2.1.2.1.2.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Process</span></span>\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.2.2.1.2.1.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.2.2.1.2.1.3.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(307)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"Sx3.T1.1.1.2.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.2.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.2.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">13.3K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.3.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">String.getBytes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.3.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.3.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">88.1K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.4.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">JsonElement.getAsString</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.4.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.4.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">4.4K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T1.1.1.5.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">List.get</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.5.2\" rowspan=\"3\" 
style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.5.2.1\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.5.2.1.1\"></span> <span class=\"ltx_text\" id=\"Sx3.T1.1.1.5.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"Sx3.T1.1.1.5.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.5.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.5.2.1.2.1.1.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Data</span></span>\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.5.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.5.2.1.2.1.2.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Structure</span></span>\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.5.2.1.2.1.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.5.2.1.2.1.3.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(404)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"Sx3.T1.1.1.5.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.5.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.5.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">2.7M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.6.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Map.get</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.6.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.6.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">2.4M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.7.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Iterator.next</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.7.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"Sx3.T1.1.1.7.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">918K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T1.1.1.8.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">ProgressDialog.dismiss</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.8.2\" rowspan=\"4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.8.2.1\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.8.2.1.1\"></span> <span class=\"ltx_text\" id=\"Sx3.T1.1.1.8.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"Sx3.T1.1.1.8.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.8.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.8.2.1.2.1.1.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Mobile</span></span>\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.8.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.8.2.1.2.1.2.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Develop</span></span>\n<span class=\"ltx_tr\" id=\"Sx3.T1.1.1.8.2.1.2.1.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.8.2.1.2.1.3.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(75)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"Sx3.T1.1.1.8.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.8.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.8.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">54K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.9.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">TypedArray.getString</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.9.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iv)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.9.3\" 
style=\"padding-left:1.1pt;padding-right:1.1pt;\">6.8K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.10.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">ApplicationInfo.loadIcon</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.10.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(v)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.10.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">3.6K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.11.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Activity.setContentView</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.11.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(v)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.11.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">4.6K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T1.1.1.12.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Cipher.init</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.12.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.12.2.1\">Crypto (10)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.12.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.12.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">66.3K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T1.1.1.13.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">RandomAccessFile.write</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.13.2\" rowspan=\"6\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text\" id=\"Sx3.T1.1.1.13.2.1\">I/O 
(390)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.13.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(i)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.13.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">129K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.14.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">BufferedReader.readLine</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.14.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iii)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.14.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">74.8K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.15.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">PrintWriter.write</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.15.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(i)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.15.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">1.1M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.16.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">File.mkdirs</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.16.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(ii)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.16.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">73.2K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.17.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">File.createNewFile</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.17.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(i)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.17.3\" 
style=\"padding-left:1.1pt;padding-right:1.1pt;\">176K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx3.T1.1.1.18.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">FileChannel.write</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.18.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(i)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.1.18.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">5.2K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx3.T1.1.1.19.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">SQLiteDatabase.query</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.19.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">Database (22)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.19.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">(iv)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.1.19.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">4K</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"Sx3.T1.1.1.20.1\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.1.1.20.1.1\">Total</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx3.T1.1.1.20.2\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">1208</td>\n<td class=\"ltx_td ltx_border_bb ltx_border_t\" id=\"Sx3.T1.1.1.20.3\" style=\"padding-left:1.1pt;padding-right:1.1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx3.T1.1.1.20.4\" style=\"padding-left:1.1pt;padding-right:1.1pt;\">7.8M</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>18 popular Java APIs in <span class=\"ltx_text ltx_font_smallcaps\" 
id=\"Sx3.T1.3.1\">RobustAPI</span>. They are easily misused by developers according to the existing literature of software engineering\u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Zhang et\u00a0al. <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2308.10335v5#bib.bib33\" title=\"\">2018</a>)</cite>. *Consequences: (i) data loss; (ii) file system corruption; (iii) program crash; (iv) resource leak; (v) user interface bug. *Github: occurrences of this API on Github.</figcaption>\n</figure>",
+ "capture": "Table 1: 18 popular Java APIs in RobustAPI. They are easily misused by developers according to the existing literature of software engineering\u00a0(Zhang et\u00a0al. 2018). *Consequences: (i) data loss; (ii) file system corruption; (iii) program crash; (iv) resource leak; (v) user interface bug. *Github: occurrences of this API on Github."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T2.13\" style=\"width:433.6pt;height:154.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-10.7pt,3.8pt) scale(0.953125344831322,0.953125344831322) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T2.13.13\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T2.13.13.14.1\" rowspan=\"3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.14.1.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"Sx4.T2.13.13.14.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.14.2.1\">Zero-shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"Sx4.T2.13.13.14.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.14.3.1\">One-shot-irrelevant</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"Sx4.T2.13.13.14.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.14.4.1\">One-shot-relevant</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.1.1\">Misuse</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.2.1\">Compilable</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.3.1\">Overall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.4.1\">Misuse</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.5\"><span class=\"ltx_text 
ltx_font_bold\" id=\"Sx4.T2.13.13.15.5.1\">Compilable</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.6.1\">Overall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.7\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.7.1\">Misuse</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.8\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.8.1\">Compilable</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.15.9\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.15.9.1\">Overall</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.9.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.1.1.1\">Rate </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.2.2.2.2.1\">Rate </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.3.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.3.3.3.3.1\">Misuse </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.4.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.4.4.4.4.1\">Rate </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.5.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.5.5.5.5.1\">Rate </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.6.6.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.6.6.6.6.1\">Misuse </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.7.7.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.7.7.7.7.1\">Rate </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.8.8.8.8\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.8.8.8.8.1\">Rate </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.9.9.9.9\"><span class=\"ltx_text 
ltx_font_bold\" id=\"Sx4.T2.9.9.9.9.1\">Misuse </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.13.13.16.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.16.1.1\">GPT 3.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.2\">62.97%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.3\">79.14%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.4\">49.83%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.5\">68.09%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.6\">91.06%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.7\">62.00%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.8\">38.56%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.9\">80.71%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.13.13.16.10\">31.13%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.13.13.17.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.17.1.1\">GPT 4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.2\">68.81%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.3\">90.23%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.4\">62.09%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.5\">70.38%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.6\">91.39%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.7\">64.32%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.8\">54.40%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.9\">90.40%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.17.10\">49.17%</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx4.T2.13.13.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.10.10.10.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.10.10.10.1.1\">Llama 2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.11.11.11.2\">7.34%\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.12.12.12.3\">9.02%\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.4\">0.66%\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.5\">61.36%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.6\">80.13%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.7\">49.17%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.8\">64.47%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.9\">72.93%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.13.10\">47.02%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.13.13.18.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.18.1.1\">Vicuna 1.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.2\">45.66%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.3\">37.17%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.4\">16.97%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.5\">57.85%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.6\">83.86%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.7\">48.51%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.8\">42.53%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.9\">64.24%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.18.10\">27.32%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.13.13.19.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.19.1.1\">ds-coder-6.7b-base</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"Sx4.T2.13.13.19.2\">41.55%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.3\">40.65%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.4\">16.89%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.5\">75.60%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.6\">95.90%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.7\">72.43%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.8\">64.12%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.9\">67.14%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.13.13.19.10\">43.05%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.13.13.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.13.13.20.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.13.13.20.1.1\">ds-coder-6.7b-instruct</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.2\">47.52%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.3\">50.00%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.4\">23.76%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.5\">59.04%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.6\">96.61%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.7\">57.04%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.8\">38.40%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.9\">86.01%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.13.13.20.10\">33.03%</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance of Each LLM on <span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.25.1\">RobustAPI</span>. 
: the <span class=\"ltx_text ltx_font_italic\" id=\"Sx4.T2.26.2\">lower</span> the <span class=\"ltx_text ltx_font_italic\" id=\"Sx4.T2.27.3\">better</span>. : the <span class=\"ltx_text ltx_font_italic\" id=\"Sx4.T2.28.4\">higher</span> the <span class=\"ltx_text ltx_font_italic\" id=\"Sx4.T2.29.5\">better</span>. Misuse Rate is the proportion of misuse cases among the compilable cases; Compilation Rate is the proportion of compilable cases among all questions; Overall Misuse is the proportion of misuse cases among all questions. Though Llama2 has a low misuse rate, its compilation rate is significantly lower than other models.</figcaption>\n</figure>",
+ "capture": "Table 2: Performance of Each LLM on RobustAPI. : the lower the better. : the higher the better. Misuse Rate is the proportion of misuse cases among the compilable cases; Compilation Rate is the proportion of compilable cases among all questions; Overall Misuse is the proportion of misuse cases among all questions. Though Llama2 has a low misuse rate, its compilation rate is significantly lower than other models."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T3.1\" style=\"width:433.6pt;height:68pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(102.1pt,-16.0pt) scale(1.88975135939019,1.88975135939019) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T3.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T3.1.1.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.1.1.1.1\">GPT 3.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T3.1.1.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.1.1.2.1\">GPT 4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T3.1.1.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.1.1.3.1\">Llama 2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T3.1.1.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.1.1.4.1\">Vicuna 1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T3.1.1.1.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.1.1.1.5.1\">DeepSeek-Coder</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.1.1.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">6m 31s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.1.1.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">6m 56s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.1.1.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">6m 36s</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.1.1.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">6m 19s</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx4.T3.1.1.2.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">6m 36s</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Execution Time of Static Analysis in <span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T3.3.1\">RobustAPI</span>.</figcaption>\n</figure>",
+ "capture": "Table 3: Execution Time of Static Analysis in RobustAPI."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"Sx4.T4.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T4.1.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.1.1\">Pass@k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T4.1.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.2.1\">Misuse Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T4.1.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.3.1\">Compilation Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T4.1.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.4.1\">Overall Misuse</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T4.1.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Pass@1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">39.06%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">76.08%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">29.72%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T4.1.3.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Pass@5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.3.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">21.98%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.3.3\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\">93.79%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.3.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">20.61%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T4.1.4.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">Pass@10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.4.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">16.51%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.4.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">96.27%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">15.89%</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Pass@k results of GPT 3.5 (T=1, one-relevant-shot).</figcaption>\n</figure>",
+ "capture": "Table 4: Pass@k results of GPT 3.5 (T=1, one-relevant-shot)."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T5.1\" style=\"width:433.6pt;height:118.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(85.4pt,-23.4pt) scale(1.64996240837895,1.64996240837895) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T5.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T5.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T5.1.1.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.1.1.1.1.1\">Temperature</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.1.1.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.1.1.1.2.1\">Misuse Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.1.1.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.1.1.1.3.1\">Compilation Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.1.1.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.1.1.1.4.1\">Overall Misuse</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T5.1.1.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">T = 0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.1.1.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">38.56%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.1.1.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">80.71%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.1.1.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">31.13%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.1.1.3\">\n<td class=\"ltx_td ltx_align_left\" 
id=\"Sx4.T5.1.1.3.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">T = 0.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.1.1.3.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">39.77%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.1.1.3.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">80.13%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.1.1.3.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">31.87%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T5.1.1.4.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">T = 1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.1.1.4.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">39.06%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.1.1.4.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">76.08%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.1.1.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">29.72%</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Results of GPT 3.5 with different temperature (Pass@1, one-relevant-shot).</figcaption>\n</figure>",
+ "capture": "Table 5: Results of GPT 3.5 with different temperature (Pass@1, one-relevant-shot)."
+ },
+ "6": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T6\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T6.1\" style=\"width:433.6pt;height:82.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(74.8pt,-14.2pt) scale(1.52712716809538,1.52712716809538) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T6.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T6.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T6.1.1.1.1\" style=\"padding-left:1.4pt;padding-right:1.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T6.1.1.1.1.1\">Prompt</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T6.1.1.1.2\" style=\"padding-left:1.4pt;padding-right:1.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T6.1.1.1.2.1\">Misuse Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T6.1.1.1.3\" style=\"padding-left:1.4pt;padding-right:1.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T6.1.1.1.3.1\">Compilation Rate</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T6.1.1.1.4\" style=\"padding-left:1.4pt;padding-right:1.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T6.1.1.1.4.1\">Overall Misuse</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T6.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T6.1.1.2.1\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">API Usage Rule</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T6.1.1.2.2\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">65.01%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T6.1.1.2.3\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">79.78%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T6.1.1.2.4\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">51.86%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T6.1.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" 
id=\"Sx4.T6.1.1.3.1\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">One-shot-relevant</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T6.1.1.3.2\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">38.56%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T6.1.1.3.3\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">80.71%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T6.1.1.3.4\" style=\"padding-left:1.4pt;padding-right:1.4pt;\">31.13%</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Results of GPT 3.5 with API usage rules (T=0, Pass@1).</figcaption>\n</figure>",
+ "capture": "Table 6: Results of GPT 3.5 with API usage rules (T=0, Pass@1)."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2308.10335v5_figure_1.png",
+ "caption": "Figure 1: The scenario where software engineers consult large language models for the answer to the programming\nquestions. The generated code snippet is not reliable and has potential risks in the software development.",
+ "url": "http://arxiv.org/html/2308.10335v5/x1.png"
+ },
+ "2": {
+ "figure_path": "2308.10335v5_figure_2.png",
+ "caption": "Figure 2: The workflow of Our API Checker. The API checker uses the static analysis method and analyzes the generated code with the abstract syntax tree (AST). The API misuse is detected when the AST call sequence and the API usage rule do not match.",
+ "url": "http://arxiv.org/html/2308.10335v5/x2.png"
+ },
+ "3": {
+ "figure_path": "2308.10335v5_figure_3.png",
+ "caption": "Figure 3: Result of Checking API Usage from LLMs. Red bars are the percentage of answers that contain API misuse, which is the lower, the better. The white bars in dot lines are the percentage of code answers that are not compilable.",
+ "url": "http://arxiv.org/html/2308.10335v5/x3.png"
+ },
+ "4": {
+ "figure_path": "2308.10335v5_figure_4.png",
+ "caption": "Figure 4: Misuse rate of each API by each LLM. The deeper the color, the higher the misuse rate. G3.5, G4, LMA, Vic are short for GPT3.5, GPT4, Llama2, Vicuna1.5.",
+ "url": "http://arxiv.org/html/2308.10335v5/x4.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Palm 2 technical report.",
+ "author": "Anil, R.; Dai, A. M.; Firat, O.; Johnson, M.; Lepikhin, D.; Passos, A.; Shakeri, S.; Taropa, E.; Bailey, P.; Chen, Z.; et al. 2023.",
+ "venue": "arXiv preprint arXiv:2305.10403.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Program synthesis with large language models.",
+ "author": "Austin, J.; Odena, A.; Nye, M.; Bosma, M.; Michalewski, H.; Dohan, D.; Jiang, E.; Cai, C.; Terry, M.; Le, Q.; et al. 2021.",
+ "venue": "arXiv preprint arXiv:2108.07732.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Evaluating large language models trained on code.",
+ "author": "Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. 2021.",
203
+ "venue": "arXiv preprint arXiv:2107.03374.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "4": {
209
+ "title": "Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality.",
210
+ "author": "Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J. E.; Stoica, I.; and Xing, E. P. 2023.",
211
+ "venue": null,
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "5": {
217
+ "title": "Stack overflow considered harmful? the impact of copy&paste on android application security.",
218
+ "author": "Fischer, F.; B\u00f6ttinger, K.; Xiao, H.; Stransky, C.; Acar, Y.; Backes, M.; and Fahl, S. 2017.",
219
+ "venue": "In 2017 IEEE Symposium on Security and Privacy (SP), 121\u2013136. IEEE.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "6": {
225
+ "title": "Abstract syntax trees-and their role in model driven software development.",
226
+ "author": "Fischer, G.; Lusiardi, J.; and Von Gudenberg, J. W. 2007.",
227
+ "venue": "In International Conference on Software Engineering Advances (ICSEA 2007), 38\u201338. IEEE.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "7": {
233
+ "title": "Measuring coding challenge competence with apps.",
234
+ "author": "Hendrycks, D.; Basart, S.; Kadavath, S.; Mazeika, M.; Arora, A.; Guo, E.; Burns, C.; Puranik, S.; He, H.; Song, D.; et al. 2021.",
235
+ "venue": "arXiv preprint arXiv:2105.09938.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "8": {
241
+ "title": "Protecting data integrity of web applications with database constraints inferred from application code.",
242
+ "author": "Huang, H.; Shen, B.; Zhong, L.; and Zhou, Y. 2023.",
243
+ "venue": "In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, 632\u2013645.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "9": {
249
+ "title": "Large Language Models and Simple, Stupid Bugs.",
250
+ "author": "Jesse, K.; Ahmed, T.; Devanbu, P. T.; and Morgan, E. 2023.",
251
+ "venue": "arXiv preprint arXiv:2303.11455.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "10": {
257
+ "title": "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?",
258
+ "author": "Jimenez, C. E.; Yang, J.; Wettig, A.; Yao, S.; Pei, K.; Press, O.; and Narasimhan, K. 2023.",
259
+ "venue": "arXiv preprint arXiv:2310.06770.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "11": {
265
+ "title": "Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation.",
266
+ "author": "Liu, J.; Xia, C. S.; Wang, Y.; and Zhang, L. 2023.",
267
+ "venue": "arXiv preprint arXiv:2305.01210.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "12": {
273
+ "title": "Codexglue: A machine learning benchmark dataset for code understanding and generation.",
274
+ "author": "Lu, S.; Guo, D.; Ren, S.; Huang, J.; Svyatkovskiy, A.; Blanco, A.; Clement, C.; Drain, D.; Jiang, D.; Tang, D.; et al. 2021.",
275
+ "venue": "arXiv preprint arXiv:2102.04664.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "13": {
281
+ "title": "WizardCoder: Empowering Code Large Language Models with Evol-Instruct.",
282
+ "author": "Luo, Z.; Xu, C.; Zhao, P.; Sun, Q.; Geng, X.; Hu, W.; Tao, C.; Ma, J.; Lin, Q.; and Jiang, D. 2023.",
283
+ "venue": "arXiv preprint arXiv:2306.08568.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "14": {
289
+ "title": "Mining preconditions of APIs in large-scale code corpus.",
290
+ "author": "Nguyen, H. A.; Dyer, R.; Nguyen, T. N.; and Rajan, H. 2014.",
291
+ "venue": "In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, 166\u2013177.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "15": {
297
+ "title": "GPT-4 Technical Report.",
298
+ "author": "OpenAI. 2023a.",
299
+ "venue": "ArXiv, abs/2303.08774.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "16": {
305
+ "title": "GPT-4 Technical Report.",
306
+ "author": "OpenAI. 2023b.",
307
+ "venue": "arXiv:2303.08774.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "17": {
313
+ "title": "Gorilla: Large language model connected with massive apis.",
314
+ "author": "Patil, S. G.; Zhang, T.; Wang, X.; and Gonzalez, J. E. 2023.",
315
+ "venue": "arXiv preprint arXiv:2305.15334.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "18": {
321
+ "title": "Asleep at the keyboard? assessing the security of github copilot\u2019s code contributions.",
322
+ "author": "Pearce, H.; Ahmad, B.; Tan, B.; Dolan-Gavitt, B.; and Karri, R. 2022.",
323
+ "venue": "In 2022 IEEE Symposium on Security and Privacy (SP), 754\u2013768. IEEE.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "19": {
329
+ "title": "Do users write more insecure code with AI assistants?",
330
+ "author": "Perry, N.; Srivastava, M.; Kumar, D.; and Boneh, D. 2022.",
331
+ "venue": "arXiv preprint arXiv:2211.03622.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "20": {
337
+ "title": "DeepSeek: Content based image search & retrieval.",
338
+ "author": "Piplani, T.; and Bamman, D. 2018.",
339
+ "venue": "arXiv preprint arXiv:1801.03406.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "21": {
345
+ "title": "Synchromesh: Reliable code generation from pre-trained language models.",
346
+ "author": "Poesia, G.; Polozov, O.; Le, V.; Tiwari, A.; Soares, G.; Meek, C.; and Gulwani, S. 2022.",
347
+ "venue": "arXiv preprint arXiv:2201.11227.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "22": {
353
+ "title": "Lost at c: A user study on the security implications of large language model code assistants.",
354
+ "author": "Sandoval, G.; Pearce, H.; Nys, T.; Karri, R.; Garg, S.; and Dolan-Gavitt, B. 2023.",
355
+ "venue": "arXiv preprint arXiv:2208.09727.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "23": {
361
+ "title": "PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback.",
362
+ "author": "Shen, B.; Zhang, J.; Chen, T.; Zan, D.; Geng, B.; Fu, A.; Zeng, M.; Yu, A.; Ji, J.; Zhao, J.; et al. 2023a.",
363
+ "venue": "arXiv preprint arXiv:2307.14936.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "24": {
369
+ "title": "In chatgpt we trust? measuring and characterizing the reliability of chatgpt.",
370
+ "author": "Shen, X.; Chen, Z.; Backes, M.; and Zhang, Y. 2023b.",
371
+ "venue": "arXiv preprint arXiv:2304.08979.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "25": {
377
+ "title": "An Empirical Study of Code Smells in Transformer-based Code Generation Techniques.",
378
+ "author": "Siddiq, M. L.; Majumder, S. H.; Mim, M. R.; Jajodia, S.; and Santos, J. C. 2022.",
379
+ "venue": "In 2022 IEEE 22nd International Working Conference on Source Code Analysis and Manipulation (SCAM), 71\u201382. IEEE.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "26": {
385
+ "title": "Llama 2: Open foundation and fine-tuned chat models.",
386
+ "author": "Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023.",
387
+ "venue": "arXiv preprint arXiv:2307.09288.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "27": {
393
+ "title": "Mining succinct and high-coverage API usage patterns from source code.",
394
+ "author": "Wang, J.; Dang, Y.; Zhang, H.; Chen, K.; Xie, T.; and Zhang, D. 2013.",
395
+ "venue": "In 2013 10th Working Conference on Mining Software Repositories (MSR), 319\u2013328. IEEE.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "28": {
401
+ "title": "A systematic evaluation of large language models of code.",
402
+ "author": "Xu, F. F.; Alon, U.; Neubig, G.; and Hellendoorn, V. J. 2022.",
403
+ "venue": "In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, 1\u201310.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "29": {
409
+ "title": "From query to usable code: an analysis of stack overflow code snippets.",
410
+ "author": "Yang, D.; Hussain, A.; and Lopes, C. V. 2016.",
411
+ "venue": "In Proceedings of the 13th International Conference on Mining Software Repositories, 391\u2013402.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "30": {
417
+ "title": "A comprehensive capability analysis of gpt-3 and gpt-3.5 series models.",
418
+ "author": "Ye, J.; Chen, X.; Xu, N.; Zu, C.; Shao, Z.; Liu, S.; Cui, Y.; Zhou, Z.; Gong, C.; Shen, Y.; et al. 2023.",
419
+ "venue": "arXiv preprint arXiv:2303.10420.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "31": {
425
+ "title": "Assessing the quality of GitHub copilot\u2019s code generation.",
426
+ "author": "Yetistiren, B.; Ozsoy, I.; and Tuzun, E. 2022.",
427
+ "venue": "In Proceedings of the 18th International Conference on Predictive Models and Data Analytics in Software Engineering, 62\u201371.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "32": {
433
+ "title": "Learning to mine aligned code and natural language pairs from stack overflow.",
434
+ "author": "Yin, P.; Deng, B.; Chen, E.; Vasilescu, B.; and Neubig, G. 2018.",
435
+ "venue": "In Proceedings of the 15th international conference on mining software repositories, 476\u2013486.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "33": {
441
+ "title": "Are code examples on an online Q&A forum reliable?: a study of API misuse on stack overflow.",
442
+ "author": "Zhang, T.; Upadhyaya, G.; Reinhardt, A.; Rajan, H.; and Kim, M. 2018.",
443
+ "venue": "In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), 886\u2013896. IEEE.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "34": {
449
+ "title": "API deprecation: a retrospective analysis and detection method for code examples on the web.",
450
+ "author": "Zhou, J.; and Walker, R. J. 2016.",
451
+ "venue": "In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, 266\u2013277.",
452
+ "url": null
453
+ }
454
+ }
455
+ ],
456
+ "url": "http://arxiv.org/html/2308.10335v5"
457
+ }
20240127/2308.12608v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2308.14993v2.json ADDED
@@ -0,0 +1,421 @@
1
+ {
2
+ "title": "On \ud835\udc58-Mer-Based and Maximum Likelihood Estimation Algorithms for Trace Reconstruction",
3
+ "abstract": "The goal of the trace reconstruction problem is to recover a string x given many independent traces of x, where a trace is a subsequence obtained from deleting bits of x independently with some given probability q. A recent result of Chase (STOC 2021) shows how x can be determined (in exponential time) from exp(\u00d5(n^{1/5})) traces. This is the state-of-the-art result on the sample complexity of trace reconstruction.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The trace reconstruction problem is an infamous question introduced by Batu, Kannan, Khanna and McGregor [BKKM04] in the context of computational biology. It asks to design algorithms that recover a string x given access to traces of x, obtained by deleting each bit independently with some given probability q. The best current upper and lower bounds are exponentially apart, namely exp(\u00d5(n^{1/5})) traces are sufficient for reconstruction [Cha21b] (improving upon the exp(O(n^{1/3})) of [NP17, DOS19]) and \u03a9(n^{3/2}) traces (up to polylogarithmic factors) are necessary [HL20, Cha21a].\nThe problem has been recently studied in several variants so far [BKKM04, KM05, VS08, HMPW08, MPV14, PZ17, NP17, DOS19, GM17, HPP18, HL20, HHP18, GM19, CGMR20, KMMP21, BLS20, CDL21b, Cha21b, CP21, NR21, SB21, GSZ22, Rub23] and it continues to elicit interest due to its deceptively simple formulation, as well as its motivating applications to DNA computing [YGM17].\nIn this paper, we focus on the worst-case formulation of the problem, which is equivalent from an information-theoretic point of view to the distinguishing variant. In this variant, the goal is to distinguish whether the received traces come from string x or from string y, for some known pair of distinct strings x and y."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Our Contributions",
15
+ "text": "Our first result shows that algorithms based on -mer statistics can reconstruct a source string using many traces. This follows from the following theorem.\nLet be two arbitrary distinct strings, and let be their -mer density maps, respectively. Assuming , it holds that\nBased on Theorem 1 ###reference_orem1###, the algorithm estimates within an accuracy of and outputs the that minimizes The cost of this -mer-based algorithm is .\nOur main result regarding -mer-based algorithms is the following theorem which shows the tightness of the bound in Theorem 1 ###reference_orem1###.\nFix any . Suppose stands for the -mer density map of . There exist distinct strings such that\nHence, Theorem 2 ###reference_orem2### implies that the cost of any -mer-based algorithm for worst-case trace reconstruction is .\nAs one might expect, for the -mers usually contain less information than -mers. To see this, observe that for a -mer , we have the following relation\nprovided that . The same also holds for . In fact, the strings and obtained via Theorem 2 ###reference_orem2### share a common prefix of length at least (or one could prepend a prefix anyway), so for any , and one does not need to worry about the case . Plugging into the definition of -mer density maps, we have\nBy induction, for any we have\nTherefore, the bound in Theorem 2 ###reference_orem2### indeed covers all -mers for .\nWe remark that the proof of Theorem 2 ###reference_orem2### further implies that the analysis technique of [Cha21b ###reference_bx12###] is essentially tight, in the sense that no better upper bound (up to factors in the exponent) can be obtained via his analysis. We include further details about this implication in Remark 3 ###reference_ark3###.\nWe next turn to analyzing the performance of the MLE algorithm in the setting of trace reconstruction. 
Our main result essentially shows that if there is an algorithm for trace reconstruction that uses traces and succeeds with probability then the MLE algorithm using traces succeeds with probability Hence, given that the current upper bounds for the worst-case reconstruction problem are exponential in , we may view the MLE as an optimal algorithm for trace reconstruction.\nSuppose is such that for any . Then we have\nWe remark that the loss of a factor of in Theorem 3 ###reference_orem3### is generally inevitable. Here is a simple example: let be the uniform distribution over , and for , let be the point distribution supported on . We have . However, .\nFor a string , let denote the trace distribution of . Theorem 3 ###reference_orem3### implies the following corollary, which implies that in some sense the Maximum Likelihood Estimation is a universal algorithm for trace reconstruction.\nSuppose traces are sufficient for worst-case trace reconstruction with a success rate . Then for any , Maximum Likelihood Estimation with traces solves worst-case trace reconstruction with success rate .\nCorollary 1.1 ###reference_corollary1### incurs a factor of to the sample complexity. While we currently do not know whether this blowup is necessary for trace reconstruction, the next result shows that it is inevitable for the more general \u201cmodel estimation\u201d problem.\nFor any integer , there is a set of distributions over a common domain of size , where , satisfying the following conditions.\nThere is a distinguisher which given one sample for an unknown , recovers with probability at least . In other words, for all ,\nfails to distinguish from other distributions with probability 1, even with samples. In other words,\nFinally, we remark that in the average-case setting is indeed optimal (with no factor of factor blowup in the number of traces). 
This is because maximizing the likelihood is equivalent to maximizing the posterior probability under the uniform prior distribution (which is optimal), as can be seen via the Bayes rule\nTherefore maximizing both sides with respect to yields the same result."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Overview of the techniques",
21
+ "text": "In recent development of the trace reconstruction problem, the connection to various real and complex polynomials has been a recurring and intriguing theme [HMPW08 ###reference_bx22###, NP17 ###reference_bx33###, PZ17 ###reference_bx35###, HPP18 ###reference_bx23###, DOS19 ###reference_bx14###, CDL21b ###reference_bx8###, CDRV21 ###reference_bx9###, Cha21b ###reference_bx12###, SB21 ###reference_bx37###, GSZ22 ###reference_bx19###, Rub23 ###reference_bx36###]. The starting point of these techniques is to design a set of statistics that can be easily estimated from the traces (e.g., mean traces), with the property that for different source strings the corresponding statistics are somewhat \u201cfar apart\u201d. To establish this property, one key idea is to associate each source string with a generating polynomial where the coefficients are exactly the statistics of . Due to the structure of the deletion channel, in many cases, this generating polynomial (under a change of variables) is identical to another polynomial that is much easier to get a handle on. For example, the coefficients of are usually 0/1, and they are easily determined from . To show that the statistics corresponding to and are far apart (say, in -distance), it is sufficient to show that is large for an appropriate choice of . This is the point where all sorts of analytical tools are ready to shine. For instance, the main technical result in [Cha21b ###reference_bx12###] is a complex analytical result that says that a certain family of polynomials cannot be uniformly small over a sub-arc of the complex unit circle, which has applications beyond the trace reconstruction problem.\nThis analytical view of trace reconstruction can lead to a tight analysis of certain algorithms/statistics. 
The best example would be mean-based algorithms, for which a tight bound of traces is known to be sufficient and necessary for worst-case trace reconstruction [NP17 ###reference_bx33###, DOS19 ###reference_bx14###]. The tightness of the sample complexity is exactly due to the tightness of a complex analytical result by Borwein and Erd\u00e9lyi [BE97 ###reference_bx2###]. Our lower bound for -mer-based algorithms is obtained in a similar fashion, via establishing a complex analytical result complementary to that of [Cha21b ###reference_bx12###] (See Lemma 3.1 ###reference_lemma1###).\nOn the other hand, our argument takes a different approach than that of [BE97 ###reference_bx2###]. At a high level, both results use a Pigeonhole argument to show the existence of two univariate polynomials which are uniformly close over a sub-arc of the complex unit circle. The difference lies in the objects playing the role of \u201cpigeons\u201d. [BE97 ###reference_bx2###]\u2019s argument can be viewed as two steps: (1) apply the Pigeonhole Principle to obtain two polynomials that have close evaluations over a discrete set of points in , and (2) use a continuity argument to extend the closeness to the entire sub-arc. Here the roles of pigeons and holes are played by evaluation vectors, and Cartesian products of small intervals. Our approach considers the coordinates of a related polynomial in the Chebyshev basis, which play the roles of pigeons in place of the evaluation vector. The properties of Chebyshev polynomials allow us to get rid of the continuity argument. Instead, we complete the proof by leveraging rather standard tools from complex analysis (e.g., Theorem 5 ###reference_orem5### and Theorem 6 ###reference_orem6###). 
We believe this approach has the advantage of being generalizable to multivariate polynomials over the product of sub-arcs via multivariate Chebyshev series (see, e.g., [Mas80 ###reference_bx30###, Tre17 ###reference_bx40###]), whereas the same generalization seems to be tricky for the continuity argument.\nFinally, the counting argument considers a special set of strings for which effectively only one -mer contains meaningful information about the initial string. Since previous arguments did not exploit structural properties of the strings, this is another technical novelty of our proof.\nMost of our results regarding Maximum Likelihood Estimation hold under the more general \u201cmodel estimation\u201d setting, where one is given a sample drawn from an unknown distribution and tries to recover . Our main observation is that if such a distinguisher works in worst-case, then the distributions in have large pairwise statistical distances. The maximization characterization of statistical distance, in conjunction with a union bound, implies that for a sample its likelihood is maximized by except with a small probability. The factor loss in the sample complexity is essentially due to the union bound, and we show that this loss is tight in general by constructing a set of distributions which attains equality in the union bound."
22
+ },
23
+ {
24
+ "section_id": "1.3",
25
+ "parent_section_id": "1",
26
+ "section_name": "Related work",
27
+ "text": "The trace reconstruction problem was first introduced and studied by Levenshtein [Lev01b ###reference_bx29###][Lev01a ###reference_bx28###]. The original question is that if a message is sent multiple times through the same channel with random insertion/deletion errors, then how to recover the message?\n[BKKM04 ###reference_bx4###] and [HMPW08 ###reference_bx22###] formalized the problem to the current version for which the channel only has random deletions.\nTheir central motivation is actually from computational biology, i.e. how to reconstruct the whole DNA sequence from multiple related subsequences.\n[CGMR20 ###reference_bx10###] and [BLS20 ###reference_bx5###] further extended the study to the \u201ccoded\u201d version.\nThat is, the string to reconstruct is not an arbitrary string but instead is a codeword from a code.\nA variant setting where the channel has memoryless replication insertions was studied by [CDRV21 ###reference_bx9###].\nThe average case version was studied in [HMPW08 ###reference_bx22###, PZ17 ###reference_bx35###, MPV14 ###reference_bx31###, HPP18 ###reference_bx23###].\nFor this case, the best known lower bound on the number of traces is [HL20 ###reference_bx21###, Cha21a ###reference_bx11###].\nBuilding on Chase\u2019s upper bound for the worst case, [Rub23 ###reference_bx36###] improved the sample complexity upper bound to in the average-case model.\n[CDL21b ###reference_bx8###] studied another variant of the problem which is called the smooth variant.\nIt is an intermediate model between the worst-case and the average-case models, where the initial string is an arbitrary string perturbed by replacing each coordinate by a\nuniformly random bit with some constant probability in .\n[CDL21b ###reference_bx8###] provided an efficient reconstruction algorithm for this case.\nOther variants studied include trace reconstruction from the multiset of substrings [GM17 ###reference_bx17###, GM19 ###reference_bx18###],\npopulation 
recovery variants [BCF19 ###reference_bx1###], matrix reconstruction and parameterized algorithms [KMMP21 ###reference_bx25###], circular trace reconstruction [NR21 ###reference_bx34###], reconstruction from -decks [KR97 ###reference_bx26###, Sco97 ###reference_bx38###, DS03 ###reference_bx16###, MPV14 ###reference_bx31###], and coded trace reconstruction[CGMR20 ###reference_bx10###, BLS20 ###reference_bx5###].\n[DRSR21 ###reference_bx15###] studied approximate trace reconstruction and showed efficient algorithms.\n[CDL21a ###reference_bx7###], [CDK21 ###reference_bx6###], and [CP21 ###reference_bx13###] further proved that if the source is a random string, then an approximate solution can be found with high probability using very few traces.\nNotice that approximate reconstructions imply distinguishers for pairs of\nstrings with large edit distances.\n[MPV14 ###reference_bx31###, SB21 ###reference_bx37###, GSZ22 ###reference_bx19###] study the complexity of the problem parameterized by the Hamming/edit distance between the strings. [GSZ22 ###reference_bx19###] also shows that the problem of exhibiting explicit strings that are hard to distinguish for mean-based algorithms is equivalent to the Prouhet-Tarry-Escott problem, a difficult problem in number theory."
28
+ },
29
+ {
30
+ "section_id": "1.4",
31
+ "parent_section_id": "1",
32
+ "section_name": "Organization",
33
+ "text": "In Section 2 we prove Theorem 1, in Section 3 we prove our main result Theorem 2, and in Section 4 we prove Theorem 3."
34
+ },
35
+ {
36
+ "section_id": "2",
37
+ "parent_section_id": null,
38
+ "section_name": "-mer-based algorithms: the upper bound",
39
+ "text": "We prove Theorem 1 ###reference_orem1### in this section.\nLet us start with a definition that is essential for the study of -mer-based algorithms.\nLet and . The -mer generating polynomial for string and -mer is the following degree- polynomial in :\nWe have the following identity\nThe expression on the last line, under a change of variable , is exactly the polynomial studied in [Cha21b ###reference_bx12###].\n[Cha21b ###reference_bx12###, Proposition 6.3] \nFor distinct , if for all , then there are and such that\nHere is a constant depending only on the deletion probability .\nWe will use Lemma 2.1 ###reference_lemma1### to show that the upper bound of [Cha21b ###reference_bx12###] can be achieved by -mer-based algorithms, rather than general algorithms based on -bit statistics. Our main lower bound on the number of traces implied by Theorem 2 ###reference_orem2### will follow by showing an upper bound on the LHS in the lemma above (see Lemma 3.1 ###reference_lemma1###).\nWe remark that the result of Chase is obtained by first considering a corresponding multivariate channel polynomial that encodes in its coefficients the -bit statistics of the traces.\nThe upper bound on the number of traces reduces to understanding the supremum of this polynomial over a certain region of the complex plane.\nThe crucial element of the proof is the reduction to the existence of and satisfying Lemma 2.1 ###reference_lemma1###, by appropriately making the remaining variables take value . We noticed that the resulting univariate polynomial is essentially the -mer generating polynomial defined in Definition 4 ###reference_inition4###, with an extra factor of .\nOur result in Lemma 3.1 ###reference_lemma1### implies that no tighter lower bound (up to polylogarithmic factors in the exponent) is possible for this univariate polynomial, showing that the analysis technique used in [Cha21b ###reference_bx12###] cannot give a better upper bound on worst-case trace complexity."
40
+ },
41
+ {
42
+ "section_id": "2.1",
43
+ "parent_section_id": "2",
44
+ "section_name": "An upper bound for -mer based algorithms",
45
+ "text": "The proof of Theorem 1 ###reference_orem1### mainly uses Lemma 2.1 ###reference_lemma1###. We will also make use of the following result.\n[BEK99 ###reference_bx3###, Theorem 5.1] \nThere are absolute constants and such that\nfor every analytic function on the open unit disk that satisfies for , and .\n\nThe proof deals with two cases.\nCase 1: for all .\nIn this case, and satisfy the premise of Lemma 2.1 ###reference_lemma1###. It follows that there exist , and where , satisfying the bound\nHere is a constant depending only on the deletion probability . Rewriting in terms of the -mer generating polynomials, we have\nIt is easy to see that . We also have the following upper bounds\nFrom here we can apply the triangle inequality and conclude that\nHere is a constant depending only on the deletion probability .\nCase 2: for some , i.e., .\nIn this case, we are going to take and show a much better bound\nwhere is a constant depending only on (hence certainly greater than ). Similar to what we did in case 1, applying the triangle inequality to Eq. 2 ###reference_### gives the theorem.\nTo prove Eq. 2 ###reference_###, we let\nso that . Under our choice of , the constant term of equals to 1, i.e., .\nIf , the closed disk contains the point 0. Therefore\nWe are left with the case . Since is a polynomial with coefficients absolutely bounded by 1, we can apply Lemma 2.2 ###reference_lemma2### with and obtain\nfor constants . Denoting , we have when . In particular, is inside the closed unit disk . Therefore\nTo conclude, we can take .\n\u220e"
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "A lower bound for -mer based algorithms: Proof of Theorem 2",
51
+ "text": "We prove Theorem 2 ###reference_orem2### in this section. The proof is based on the following lemma, which we will prove shortly.\nThere exists such that for any -mer , it holds that\nWe can extract by the contour integral (cf. [Lan13 ###reference_bx27###, \u00a74, Theorem 2.1])\nTherefore\nWe stress that the bound holds for any and -mer . Note that for any fixed , there are at most different -mers for which . Namely, if then . It follows that\n\u220e\nNext, we prove Lemma 3.1 ###reference_lemma1### assuming the following result, which is our main technical lemma.\nFix any . There exist distinct both starting with a run of 0s of length , such that for any -mer , it holds that\nLet be a parameter to be decided later. Denote . We have , so that the premise of Lemma 3.2 ###reference_lemma2### is satisfied. Therefore, there exist distinct both starting with a run of 0s of length , such that for any -mer , it holds that\nLet and . Since , by construction we have for all . Therefore, any -mer we have\nHere . When is large, we can upper bound the supremum as\nHere the first inequality is due to for some constant (depending on ) when . When is small, this is taken care of by Eq. 3 ###reference_###:\nFinally, the value of is determined by balancing the two cases. Namely, we let , or , which gives the bound for both cases. Here .\n\u220e\nIt remains to prove Lemma 3.2 ###reference_lemma2###, which we do after some helpful preliminaries from complex analysis."
52
+ },
53
+ {
54
+ "section_id": "3.1",
55
+ "parent_section_id": "3",
56
+ "section_name": "Some helpful results in complex analysis",
57
+ "text": "In this section, we introduce some results in complex analysis, which will be useful for proving Lemma 3.2 ###reference_lemma2###.\nLet denote the Chebyshev polynomial, i.e., a degree- polynomial such that . Clearly, for . If a function is analytic on , it has a converging Chebyshev expansion\nHere the \u2019s are the Chebyshev coefficients, and they can be extracted by the following integral\nwhere is replaced by for . This immediately implies a uniform upper bound on Chebyshev coefficients.\nFor all , .\nIn fact, if is analytically continuable to a larger region, much better bounds can be obtained. For that we need the notion of Bernstein ellipse.\nGiven , the boundary of the Bernstein Ellipse is defined as\nThe Bernstein Ellipse has the foci at with the major and minor semi-axes given by and , respectively. When , coincides with the interval on the real line. For our purpose, we will also be working with affine transformations of . More precisely, for we denote by (the interior of) the following ellipse\nThus, can be equivalently defined as\nBelow are some useful properties of .\nThe following statements hold.\nLet . Then .\ncontains a disk centered at 1 with radius .\nItem (1):\n\nWriting where , we have\nTherefore .\n\nItem (2):\nLet be such that . We have\nThis implies .\n\u220e\nThe following result shows an exponential convergence rate of the Chebyshev expansion.\nLet a function analytic on be analytically continuable to the open Bernstein Ellipse , where it satisfies for some . Then its Chebyshev coefficients satisfy\nThe Chebyshev coefficients of is given by\nwith replaced by for . Letting , one could write , , and hence\nDenote . Note that we can substitute for and obtain\nTherefore we arrived at the expression\nSince is analytic in the open Bernstein Ellipse , we can conclude that is analytic in the annulus . That means, for any we have by Cauchy\u2019s integral theorem (cf. 
[Lan13 ###reference_bx27###, \u00a73, Theorem 5.1]) that\nNow we have\nFinally, since the bound holds for any , it also holds for .\n\u220e\nWe will also make use of the following theorem.\nSuppose is analytic inside and on . For , let . Then\nSuppose where . Then\nLet . Let where . Since is analytic on and inside , is analytic inside the centered disk with radius . Applying the Hadamard Three Circles Theorem to gives\nWe note that coincides with the interval on the real line. For , Proposition 2 ###reference_position2### implies . Therefore\n\u220e"
58
+ },
59
+ {
60
+ "section_id": "3.2",
61
+ "parent_section_id": "3",
62
+ "section_name": "Proof of Lemma 3.2: A Counting Argument",
63
+ "text": "We prove Lemma 3.2 ###reference_lemma2### in this section.\nWe first prove a technical lemma lower bounding the number of binary strings in which all 1s are far away from each other.\nLet be the collection of all -bit strings with the property that any two 1\u2019s are separated by at least many 0\u2019s. Then .\nFor ease of notation we fix and denote . We observe that satisfies the following recurrence relation\nWe prove by induction that . The base case is trivial since when .\nNow suppose for . This gives, for , the following bound\nSince by the AM-GM inequality we have\nor equivalently , we obtain\nThis completes the inductive step, and hence for all .\n\u220e\nIn the following, we fix , and let . The proof will focus on binary strings in the set . We have .\nBelow we characterize some properties of -mer generating polynomials of strings in .\nLet be a set of strings defined as above. For , denote by the string with a single \u201c1\u201d located at index (indices begin with 0). The following properties hold.\nFor any -mer , is the zero polynomial.\nFor any and , .\nFor any and , .\nItem 1: By definition of , contains at most one \u201c1\u201d for any string . Therefore, if contains at least two \u201c1\u201ds, then for any ,\nThis means all the coefficients of is zero, and hence is the zero polynomial.\nItem 2: Since any two consecutive \u201c1\u201ds in are separated by at least \u201c0\u201ds, if and only if . We thus have\nWe have used the fact that for , .\nItem 3: We observe that . That implies\nNote the the right-hand-side is independent of . Therefore\nThe second last line is obtained by inductively applying Item 2.\n\u220e\nBelow we give the proof of Lemma 3.2 ###reference_lemma2###. We use the notations , and .\nLet be a string of length . In light of Lemma 3.4 ###reference_lemma4###, we only need to consider a fixed -mer , where . Define\nRecall that . 
Denote by the Chebyshev coefficients of\nwhere (equivalently, the coordinates of in the Chebyshev basis). In other words, we can write\nWe first argue that only the first few coefficients are significant. This can be done by applying Theorem 5 ###reference_orem5### to , say, with . To that end, we first upper bound for . Denoting , we have that when . By item (1) of Proposition 2 ###reference_position2###, we have . It follows that\nTherefore, we can apply Theorem 5 ###reference_orem5### to with , and get (for large enough )\nTo each string we associate a vector\nProposition 1 ###reference_position1### implies each entry of belongs to the interval . We now partition into smaller intervals , each of length , meaning that . The vector must fall into one of the sub-cubes of the form\nwhere is a mapping that uniquely identifies the sub-cube. It follows that the total number of such sub-cubes is\nfor large enough . By the Pigeonhole Principle, there must be two distinct strings such that fall into the same sub-cube. In other words, we have\nIt follows that\nApplying Corollary 3.1 ###reference_corollary1### to with gives (for large enough )\nLet be the sub-arc of the circle which lies completely inside the ellipse . Item (2) of Proposition 2 ###reference_position2### implies that the length of is at least . Therefore the Maximum Modulus Principle implies\nNow we have established the lemma for a fixed -mer . Since , Lemma 3.4 ###reference_lemma4### says that for any other -mer either both and are zero polynomials or and\nFinally, we note that both and start with a run of 0\u2019s of length .\n\u220e\nA much simpler proof for the slightly weaker bound is possible based on the complex analytical result of Borwein and Erdelyi [BE97 ###reference_bx2###, Theorem 3.3] (see also [DOS19 ###reference_bx14###, NP17 ###reference_bx33###]): there exist strings such that\nwhere , stands for the sub-arc . 
Now we observe that where is the string obtained by inserting many 0\u2019s before every bit of ( is defined similarly). Clearly, since any two 1\u2019s are separated by at least many 0\u2019s. Therefore, they enjoy the properties in Lemma 3.4 ###reference_lemma4###, and Lemma 3.1 ###reference_lemma1### follows with a weaker bound.111We thank an anonymous reviewer for pointing this observation out to us."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Optimality of the Maximum Likelihood Estimation",
69
+ "text": "For define , and let . By definition of the total variation distance, we have\nThe Union Bound thus implies . Moreover, by Definition 3 ###reference_inition3###, when it must hold that . Therefore\n\u220e\nThe Chernoff bound implies that if we repeat the purported reconstruction algorithm times and output the majority, it succeeds with probability at least .\nLet be such a (deterministic) reconstruction algorithm with traces described as above, which successfully outputs the source string with probability at least . Formally, for any source string , it holds that\nLet be exactly the collection of -tuples of strings on which outputs . We thus have\nwhere denotes the -fold product of with itself, capturing the distribution of . On the other hand, for distinct strings and we have (by definition, cannot output both and on the same input), and hence the bound\nThis implies\nWe stress that the above bound holds for any pair of distinct strings . Applying Theorem 3 ###reference_orem3### to gives\n\u220e\nThe distributions are defined as follows. Let , and so . The domain where is the collection of all subsets of of size exactly , and . We have\nWe first define to be the uniform distribution over , i.e., for any .\nFor each one of the remaining distributions, we identify it with a -subset . The precise definition of is as follows.\nIn other words, occurs with probability , conditioned on which is the point distribution supported on ; occurs with probability , conditioned on which is the uniform distribution over . Now we verify that satisfies the two conditions.\nFor Condition 1, consider a distinguisher which on sample , outputs if , and outputs 0 if . We have\nTo see Condition 2, let be samples. Since is supported on , the samples are all elements of , meaning that there is at least one containing all samples. Calculating the likelihoods gives\nTherefore, the output of the Maximum Likelihood Estimation on will never be .\n\u220e"
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Acknowledgements",
75
+ "text": "We are thankful to several anonymous reviewers for their valuable suggestions and comments."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {},
81
+ "validation": true,
82
+ "references": [
83
+ {
84
+ "1": {
85
+ "title": "Beyond trace reconstruction: Population recovery from the deletion\nchannel.",
86
+ "author": "Frank Ban, Xi Chen, Adam Freilich, Rocco A Servedio, and Sandip Sinha.",
87
+ "venue": "In 60th IEEE Annual Symposium on Foundations of Computer\nScience, FOCS 2019, pages 745\u2013768. IEEE, 2019.",
88
+ "url": null
89
+ }
90
+ },
91
+ {
92
+ "2": {
93
+ "title": "Littlewood-type problems on subarcs of the unit circle.",
94
+ "author": "Peter Borwein and Tam\u00e1s Erd\u00e9lyi.",
95
+ "venue": "Indiana University mathematics journal, pages 1323\u20131346, 1997.",
96
+ "url": null
97
+ }
98
+ },
99
+ {
100
+ "3": {
101
+ "title": "Littlewood-type problems on .",
102
+ "author": "Peter Borwein, Tam\u00e1s Erd\u00e9lyi, and G\u00e9za K\u00f3s.",
103
+ "venue": "Proceedings of the London Mathematical Society, 79(1):22\u201346,\n1999.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "4": {
109
+ "title": "Reconstructing strings from random traces.",
110
+ "author": "Tugkan Batu, Sampath Kannan, Sanjeev Khanna, and Andrew McGregor.",
111
+ "venue": "In J. Ian Munro, editor, Proceedings of the Fifteenth Annual\nACM-SIAM Symposium on Discrete Algorithms, SODA 2004, New Orleans,\nLouisiana, USA, January 11-14, 2004, pages 910\u2013918. SIAM, 2004.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "5": {
117
+ "title": "Coded trace reconstruction in a constant number of traces.",
118
+ "author": "Joshua Brakensiek, Ray Li, and Bruce Spang.",
119
+ "venue": "In 61st IEEE Annual Symposium on Foundations of Computer\nScience, FOCS 2020, pages 482\u2013493. IEEE, 2020.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "6": {
125
+ "title": "Approximate trace reconstruction via median string (in average-case).",
126
+ "author": "Diptarka Chakraborty, Debarati Das, and Robert Krauthgamer.",
127
+ "venue": "In 41st IARCS Annual Conference on Foundations of Software\nTechnology and Theoretical Computer Science, FSTTCS 2021, volume 213 of\nLIPIcs, pages 11:1\u201311:23, 2021.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "7": {
133
+ "title": "Near-optimal average-case approximate trace reconstruction from few\ntraces.",
134
+ "author": "Xi Chen, Anindya De, Chin Ho Lee, Rocco A Servedio, and Sandip Sinha.",
135
+ "venue": "arXiv preprint arXiv:2107.11530, 2021.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "8": {
141
+ "title": "Polynomial-time trace reconstruction in the smoothed complexity\nmodel.",
142
+ "author": "Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, and Sandip Sinha.",
143
+ "venue": "In Proceedings of the 2021 ACM-SIAM Symposium on Discrete\nAlgorithms, SODA 2021, pages 54\u201373. SIAM, 2021.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "9": {
149
+ "title": "Mean-based trace reconstruction over practically any\nreplication-insertion channel.",
150
+ "author": "Mahdi Cheraghchi, Joseph Downs, Jo\u00e3o L. Ribeiro, and Alexandra Veliche.",
151
+ "venue": "In IEEE International Symposium on Information Theory, ISIT\n2021, pages 2459\u20132464. IEEE, 2021.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "10": {
157
+ "title": "Coded trace reconstruction.",
158
+ "author": "Mahdi Cheraghchi, Ryan Gabrys, Olgica Milenkovic, and Jo\u00e3o Ribeiro.",
159
+ "venue": "IEEE Transactions on Information Theory, 66(10):6084\u20136103,\n2020.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "11": {
165
+ "title": "New lower bounds for trace reconstruction.",
166
+ "author": "Zachary Chase.",
167
+ "venue": "In Annales de l\u2019Institut Henri Poincar\u00e9, Probabilit\u00e9s et\nStatistiques, volume 57, pages 627\u2013643. Institut Henri Poincar\u00e9, 2021.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "12": {
173
+ "title": "Separating words and trace reconstruction.",
174
+ "author": "Zachary Chase.",
175
+ "venue": "In Proceedings of the 53rd Annual ACM SIGACT Symposium on\nTheory of Computing, STOC 2021, pages 21\u201331. ACM, 2021.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "13": {
181
+ "title": "Approximate trace reconstruction of random strings from a constant\nnumber of traces.",
182
+ "author": "Zachary Chase and Yuval Peres.",
183
+ "venue": "arXiv preprint arXiv:2107.06454, 2021.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "14": {
189
+ "title": "Optimal mean-based algorithms for trace reconstruction.",
190
+ "author": "Anindya De, Ryan O\u2019Donnell, and Rocco A Servedio.",
191
+ "venue": "The Annals of Applied Probability, 29(2):851\u2013874, 2019.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "15": {
197
+ "title": "Approximate trace reconstruction: Algorithms.",
198
+ "author": "Sami Davies, Mikl\u00f3s Z R\u00e1cz, Benjamin G Schiffer, and Cyrus Rashtchian.",
199
+ "venue": "In IEEE International Symposium on Information Theory, ISIT\n2021, pages 2525\u20132530. IEEE, 2021.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "16": {
205
+ "title": "Reconstruction from subsequences.",
206
+ "author": "Miroslav Dud\u0131k and Leonard J Schulman.",
207
+ "venue": "Journal of Combinatorial Theory, Series A, 103(2):337\u2013348,\n2003.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "17": {
213
+ "title": "The hybrid k-deck problem: Reconstructing sequences from short and\nlong traces.",
214
+ "author": "Ryan Gabrys and Olgica Milenkovic.",
215
+ "venue": "In IEEE International Symposium on Information Theory, ISIT\n2017, pages 1306\u20131310. IEEE, 2017.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "18": {
221
+ "title": "Unique reconstruction of coded strings from multiset substring\nspectra.",
222
+ "author": "Ryan Gabrys and Olgica Milenkovic.",
223
+ "venue": "IEEE Transactions on Information Theory, 65(12):7682\u20137696,\n2019.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "19": {
229
+ "title": "Limitations of mean-based algorithms for trace reconstruction at\nsmall edit distance.",
230
+ "author": "Elena Grigorescu, Madhu Sudan, and Minshen Zhu.",
231
+ "venue": "IEEE Trans. Inf. Theory, 68(10):6790\u20136801, 2022.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "20": {
237
+ "title": "Trace reconstruction with varying deletion probabilities.",
238
+ "author": "Lisa Hartung, Nina Holden, and Yuval Peres.",
239
+ "venue": "In Proceedings of the Fifteenth Workshop on Analytic\nAlgorithmics and Combinatorics, ANALCO 2018, pages 54\u201361. SIAM, 2018.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "21": {
245
+ "title": "Lower bounds for trace reconstruction.",
246
+ "author": "Nina Holden and Russell Lyons.",
247
+ "venue": "The Annals of Applied Probability, 30(2):503\u2013525, 2020.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "22": {
253
+ "title": "Trace reconstruction with constant deletion probability and related\nresults.",
254
+ "author": "Thomas Holenstein, Michael Mitzenmacher, Rina Panigrahy, and Udi Wieder.",
255
+ "venue": "In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on\nDiscrete Algorithms, SODA 2008, pages 389\u2013398. SIAM, 2008.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "23": {
261
+ "title": "Subpolynomial trace reconstruction for random strings and arbitrary\ndeletion probability.",
262
+ "author": "Nina Holden, Robin Pemantle, and Yuval Peres.",
263
+ "venue": "In S\u00e9bastien Bubeck, Vianney Perchet, and Philippe Rigollet,\neditors, Conference On Learning Theory, COLT 2018, Stockholm, Sweden,\n6-9 July 2018, volume 75 of Proceedings of Machine Learning Research,\npages 1799\u20131840. PMLR, 2018.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "24": {
269
+ "title": "More on reconstructing strings from random traces: insertions and\ndeletions.",
270
+ "author": "Sampath Kannan and Andrew McGregor.",
271
+ "venue": "In IEEE International Symposium on Information Theory, ISIT\n2005, pages 297\u2013301. IEEE, 2005.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "25": {
277
+ "title": "Trace reconstruction: Generalized and parameterized.",
278
+ "author": "Akshay Krishnamurthy, Arya Mazumdar, Andrew McGregor, and Soumyabrata Pal.",
279
+ "venue": "IEEE Transactions on Information Theory, 67(6):3233\u20133250,\n2021.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "26": {
285
+ "title": "On a reconstruction problem for sequences,.",
286
+ "author": "Ilia Krasikov and Yehuda Roditty.",
287
+ "venue": "J. Comb. Theory, Ser. A, 77(2):344\u2013348, 1997.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "27": {
293
+ "title": "Complex analysis, volume 103.",
294
+ "author": "Serge Lang.",
295
+ "venue": "Springer Science & Business Media, 2013.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "28": {
301
+ "title": "Efficient reconstruction of sequences.",
302
+ "author": "Vladimir I. Levenshtein.",
303
+ "venue": "IEEE Transactions on Information Theory, 47(1):2\u201322, 2001.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "29": {
309
+ "title": "Efficient reconstruction of sequences from their subsequences or\nsupersequences.",
310
+ "author": "Vladimir I. Levenshtein.",
311
+ "venue": "J. Comb. Theory, Ser. A, 93(2):310\u2013332, 2001.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "30": {
317
+ "title": "Near-best multivariate approximation by fourier series, chebyshev\nseries and chebyshev interpolation.",
318
+ "author": "John C Mason.",
319
+ "venue": "Journal of Approximation Theory, 28(4):349\u2013358, 1980.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "31": {
325
+ "title": "Trace reconstruction revisited.",
326
+ "author": "Andrew McGregor, Eric Price, and Sofya Vorotnikova.",
327
+ "venue": "In 22th Annual European Symposium on Algorithms, ESA 2014,\nvolume 8737 of Lecture Notes in Computer Science, pages 689\u2013700.\nSpringer, 2014.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "32": {
333
+ "title": "Substring density estimation from traces.",
334
+ "author": "Kayvon Mazooji and Ilan Shomorony.",
335
+ "venue": "CoRR, abs/2210.10917, 2022.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "33": {
341
+ "title": "Trace reconstruction with samples.",
342
+ "author": "Fedor Nazarov and Yuval Peres.",
343
+ "venue": "In Proceedings of the 49th Annual ACM SIGACT Symposium on\nTheory of Computing, STOC 2017, pages 1042\u20131046. ACM, 2017.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "34": {
349
+ "title": "Circular trace reconstruction.",
350
+ "author": "Shyam Narayanan and Michael Ren.",
351
+ "venue": "In 12th Innovations in Theoretical Computer Science Conference\n(ITCS 2021). Schloss Dagstuhl-Leibniz-Zentrum f\u00fcr Informatik, 2021.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "35": {
357
+ "title": "Average-case reconstruction for the deletion channel: Subpolynomially\nmany traces suffice.",
358
+ "author": "Yuval Peres and Alex Zhai.",
359
+ "venue": "In 58th IEEE Annual Symposium on Foundations of Computer\nScience, FOCS 2017, pages 228\u2013239. IEEE Computer Society, 2017.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "36": {
365
+ "title": "Average-case to (shifted) worst-case reduction for the trace\nreconstruction problem.",
366
+ "author": "Ittai Rubinstein.",
367
+ "venue": "In Kousha Etessami, Uriel Feige, and Gabriele Puppis, editors, 50th International Colloquium on Automata, Languages, and Programming,\nICALP 2023, July 10-14, 2023, Paderborn, Germany, volume 261 of LIPIcs, pages 102:1\u2013102:20. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr\nInformatik, 2023.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "37": {
373
+ "title": "Trace reconstruction with bounded edit distance.",
374
+ "author": "Jin Sima and Jehoshua Bruck.",
375
+ "venue": "In IEEE International Symposium on Information Theory, ISIT\n2021, pages 2519\u20132524. IEEE, 2021.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "38": {
381
+ "title": "Reconstructing sequences.",
382
+ "author": "Alex D Scott.",
383
+ "venue": "Discrete Mathematics, 175(1-3):231\u2013238, 1997.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "39": {
389
+ "title": "Approximation Theory and Approximation Practice.",
390
+ "author": "Lloyd N. Trefethen.",
391
+ "venue": "SIAM, 2012.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "40": {
397
+ "title": "Multivariate polynomial approximation in the hypercube.",
398
+ "author": "Lloyd Trefethen.",
399
+ "venue": "Proceedings of the American Mathematical Society,\n145(11):4837\u20134844, 2017.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "41": {
405
+ "title": "Improved string reconstruction over insertion-deletion channels.",
406
+ "author": "Krishnamurthy Viswanathan and Ram Swaminathan.",
407
+ "venue": "In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on\nDiscrete Algorithms, SODA 2008, pages 399\u2013408. SIAM, 2008.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "42": {
413
+ "title": "Portable and error-free dna-based data storage.",
414
+ "author": "S. M. Hossein Tabatabaei Yazdi, Ryan Gabrys, and Olgica Milenkovic.",
415
+ "venue": "Scientific Reports, 7:2045\u20132322, 2017.",
416
+ "url": null
417
+ }
418
+ }
419
+ ],
420
+ "url": "http://arxiv.org/html/2308.14993v2"
421
+ }
20240127/2309.16742v4.json ADDED
1
+ {
2
+ "title": "Supervised Learning Models for Early Detection of Albuminuria Risk in Type-2 Diabetes Mellitus Patients",
3
+ "abstract": "Diabetes, especially T2DM, continues to be a significant health problem. One of the major concerns associated with diabetes is the development of its complications. Diabetic nephropathy, one of the chronic complication of diabetes, adversely affects the kidneys, leading to kidney damage. Diagnosing diabetic nephropathy involves considering various criteria, one of which is the presence of a pathologically significant quantity of albumin in urine, known as albuminuria. Thus, early prediction of albuminuria in diabetic patients holds the potential for timely preventive measures. This study aimed to develop a supervised learning model to predict the risk of developing albuminuria in T2DM patients. The selected supervised learning algorithms included Na\u00efve Bayes, Support Vector Machine (SVM), decision tree, random forest, AdaBoost, XGBoost, and Multi-Layer Perceptron (MLP). Our private dataset, comprising 184 entries of diabetes complications risk factors, was used to train the algorithms. It consisted of 10 attributes as features and 1 attribute as the target (albuminuria). Upon conducting the experiments, the MLP demonstrated superior performance compared to the other algorithms. It achieved accuracy and f1-score values as high as 0.74 and 0.75, respectively, making it suitable for screening purposes in predicting albuminuria in T2DM. Nonetheless, further studies are warranted to enhance the model\u2019s performance.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Diabetes continues to be one of the most challenging noncommunicable diseases worldwide. It is a chronic metabolic disorder characterized by high blood sugar levels caused by problems in insulin production, sensitivity of cells\u2019 response to insulin, or both [1 ###reference_b1###]. There are four types of diabetes, namely type-1, type-2, gestational type, and other types. However, type-2 diabetes (T2DM) dominates all other diabetes types [2 ###reference_b2###], accounting for more than 90% of all diabetes cases. The high prevalence of T2DM is strongly associated with the unhealthy modern lifestyle, including unhealthy eating habits, smoking, obesity, and a lack of physical activity, as well as internal predisposition factors such as race and family history [3 ###reference_b3###].\nThe predominant challenge associated with diabetes stems from the array of complications that can arise when diabetes is not adequately controlled. Among these unwanted complications, one particularly notable issue is kidney complication, which falls under the category of microvascular complications, affecting the smaller blood vessels [4 ###reference_b4###, 5 ###reference_b5###]. This specific complication is commonly referred to as diabetic nephropathy and accounts for approximately 14.0% of diabetes-related complications [4 ###reference_b4###].\nDiabetic nephropathy is considered as a type of Chronic Kidney Disease (CKD). According to the Kidney Disease Improving Global Outcomes (KDIGO) 2012 guidelines, CKD is established when there are markers of kidney damage and/or a Glomerular Filtration Rate (GFR) 60 mL/min/1.73m2 that lasts for at least 3 months. 
The kidney damage markers for CKD include the presence of pathologically high quantities of urinary albumin excretion (albuminuria), the presence of urine sediment abnormalities, structural abnormalities detected by imaging, and a history of kidney transplantation [6 ###reference_b6###].\nAs mentioned in the preceding paragraph, the presence of albuminuria can be indicative of a kidney problem. Albumin in urine can signal an issue with the kidney filtration function. Albuminuria can be divided into two categories: microalbuminuria and macroalbuminuria. Microalbuminuria is diagnosed when the albumin-creatinine ratio is 30 mg/24h and 300 mg/24h, while macroalbuminuria is diagnosed when the albumin excretion is 300 mg/24h in a 24-hour urine collection sample [7 ###reference_b7###]. As albuminuria can serve as a signal of kidney problems, it becomes essential for diabetes patients to be aware of their risk of developing this condition.\nTherefore, the primary objective of this study is to develop a supervised learning model capable of predicting the risk of albuminuria development in diabetes patients, particularly those with T2DM. The primary contributions of this paper can be summarized as follows:\nDevelopment of a supervised model capable of predicting early albuminuria in patients with type 2 diabetes mellitus (T2DM).\nIdentification of the optimal supervised algorithm for early albuminuria detection in T2DM patients."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": "Recently, there has been a growing interest among researchers in using machine learning approaches to predict albuminuria. This interest arise from the urgency of developing early risk prediction tools for the disease, as it can lead to increased \u201dcosts\u201d if left undetected. To our knowledge, two studies conducted by Khitan et al. [8 ###reference_b8###] and Lin et al. [9 ###reference_b9###] have used machine learning approaches for predicting albuminuria.\nKhitan et al. [8 ###reference_b8###] in their study used machine learning approaches to predict the risk of albuminuria in person with diabetes. Their study incorporated 13 predictive factors, including measures such as subtotal lean mass, subtotal fat mass, diabetes duration, age, HbA1c levels, creatinine levels, triglyceride levels, total cholesterol levels, HDL cholesterol levels, maximum exercise capacity, systolic and diastolic blood pressure, and ankle brachial index. They conducted their study on 1330 subjects and used a variety of machine learning algorithms, including random forest, gradient boost, logistic regression, support vector machines, multilayer perceptron, and a stacking classifier. The results showed that the multilayer perceptron (MLP) exhibited the highest performance with an AUC (Area Under the Curve) value of 0.67. Furthermore, the model demonstrated a precision of 0.61, recall of 0.67, and an accuracy of 0.62, as determined from the confusion matrix presented in the paper.\nIn another study within this domain, Lin et al. [9 ###reference_b9###] aimed to predict microalbuminuria in the Chinese population using machine learning approaches. Their study involved 3,294 subjects ranging in age from 16 to 93 years. They used the \u201dglm\u201d package in the R software to construct their machine learning model. Their model achieved a specificity of 0.9 and an accuracy of 0.63, although the sensitivity was relatively low at 0.2. 
Despite these outcomes, the study\u2019s conclusions highlighted systolic and diastolic blood pressure, fasting blood glucose levels, triglyceride levels, gender, age, and smoking as potential predictors of microalbuminuria among the patient population."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Methodology",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Dataset",
27
+ "text": "###figure_1### ###table_1### In this study, we used our private dataset consisting of data on the risk of diabetes complications, which was collected from a primary healthcare facility in DKI Jakarta, Indonesia. The dataset comprises 184 records, each consisting of 10 features and 1 target variable (Table I ###reference_###) (Figure 1 ###reference_###). All records are sourced from patients with T2DM. The features are all numerical, whereas the target variable is categorical. Prior to analysis, all data were carefully examined and cleaned to remove any missing values or measurement errors. To ensure the security and privacy of the medical data, all information was anonymized.\ndurasi_dm: This attribute refers to the length of time since the patient\u2019s initial diabetes diagnosis. The duration is measured in years.\nbmi: This attribute refers to the patient\u2019s current Body Mass Index (BMI), which is measured in kg/m2.\nhdl: This attribute refers to the current level of High-Density Lipoprotein (HDL) in the bloodstream, measured in mg/dL using standard laboratory methods.\nldl: This attribute refers to the current level of Low-Density Lipoprotein (LDL) in the bloodstream, measured in mg/dL using standard laboratory methods.\ntg: This attribute refers to the current level of triglyceride in the bloodstream, measured in mg/dL using standard laboratory methods.\nkol_tot: This attribute refers to the current level of total cholesterol in the bloodstream, measured in mg/dL using standard laboratory methods.\ngdp: This attribute refers to the current level of fasting plasma glucose in the bloodstream, measured in mg/dL using standard laboratory methods.\nTDS: This attribute refers to the current systolic blood pressure measured in mmHg using an ambulatory blood pressure device.\na1c: This attribute refers to the current level of HbA1c, measured in % using standard laboratory methods.\ncr: This attribute refers to the current level of creatinine, measured in mg/dL using standard laboratory methods.\nkid_group: This attribute serves as the target label and describes the grouping of kidney disease. It is a categorical attribute comprising two categories: normal and albuminuria. The determination of the albuminuria label was based on the KDIGO 2012 criteria. However, instead of treating microalbuminuria and macroalbuminuria as separate categories, we classified them both under the umbrella term \u201calbuminuria\u201d.\nThe use of the aforementioned features was rationalized based on the complex nature of their interaction with kidney damage, as shown in Figure 2 ###reference_### [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###].\nFigure 2 ###reference_### illustrates the simplified mechanism of diabetic nephropathy, with obesity playing a central role. Elevated BMI increases the likelihood of obesity, which subsequently acts as a risk factor for developing diabetes and hypertension through a complex pathway. The intricate sequence involves the increase in plasma glucose and HbA1c in diabetes, leading to microvascular damage and subsequent kidney damage. The duration of diabetes increases the risk of such damage. On the other hand, chronic high blood pressure resulting from obesity can lead to hypertension, causing microvascular damage and putting the individual at risk of kidney damage. Additionally, kidney damage can, in turn, induce hypertension, creating an inner loop-like mechanism that worsens the condition. Furthermore, obesity also serves as a risk factor for lipid profile issues, such as an increase in LDL, TG, and cholesterol, and a decrease in HDL, posing a risk of dyslipidemia. Dyslipidemia, in turn, indirectly contributes to kidney damage.\n\n###figure_2###"
28
+ },
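The cleaning step described above (removing records with missing values before analysis) can be sketched as follows. This is a minimal illustration, not the authors' code: the attribute names mirror the dataset description, but the record values are invented stand-ins, not real patient data.

```python
# Minimal sketch of the cleaning step: drop any record with a missing value.
# Attribute names follow the dataset description; values are illustrative only.
records = [
    {"durasi_dm": 5.0, "bmi": 27.9, "hdl": 48.0, "ldl": 136.0, "tg": 185.0,
     "kol_tot": 207.0, "gdp": 158.0, "TDS": 141.0, "a1c": 8.2, "cr": 0.78,
     "kid_group": 1},
    {"durasi_dm": 2.0, "bmi": None, "hdl": 55.0, "ldl": 120.0, "tg": 150.0,
     "kol_tot": 190.0, "gdp": 110.0, "TDS": 130.0, "a1c": 6.1, "cr": 0.70,
     "kid_group": 0},  # missing BMI, so this record is removed
]

def clean(rows):
    """Keep only records where every attribute has a value."""
    return [r for r in rows if all(v is not None for v in r.values())]

cleaned = clean(records)
print(len(cleaned))  # the record with the missing BMI is dropped
```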
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Design of Experiment",
33
+ "text": "This study aimed to evaluate the performance of supervised learning algorithms in predicting the risk of developing albuminuria in patients with T2DM. We evaluated several supervised learning algorithms, comprising 6 machine learning algorithms and 1 deep learning algorithm. The machine learning algorithms used were Na\u00efve Bayes, Support Vector Machine (SVM), decision tree, random forest, AdaBoost, and XGBoost; of these, random forest, AdaBoost, and XGBoost are ensemble algorithms. The deep learning algorithm employed in this study is the Multi-Layer Perceptron (MLP). Deep learning was included alongside the machine learning algorithms to evaluate its potential performance given the limited size of the dataset. Table II ###reference_### presents the complete experimental design used in this study. We used the scikit-learn library [14 ###reference_b14###], version 1.0.2, as our primary machine learning and deep learning toolkit. Additionally, we employed the xgboost library [15 ###reference_b15###], version 1.6.2, which is specifically designed for implementing the XGBoost algorithm.\n###table_2### The dataset was split into training and test datasets using the train_test_split function provided by scikit-learn. Since the dataset size is relatively small, we opted for a train-test ratio of 0.75:0.25."
34
+ },
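The 0.75:0.25 split described above was performed with scikit-learn's train_test_split; a dependency-free sketch of the same idea is shown below, assuming the 184 records reported for the dataset. The function name and seed are illustrative, not from the paper.

```python
import random

def split_dataset(records, test_ratio=0.25, seed=42):
    """Shuffle and split records into train/test sets, mirroring the paper's
    0.75:0.25 ratio (the authors used scikit-learn's train_test_split)."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

records = list(range(184))  # stand-ins for the 184 patient records
train, test = split_dataset(records)
print(len(train), len(test))  # 138 46
```

With 184 records a 0.25 test ratio yields exactly the 138/46 split reported in the Result section.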
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-C Evaluation Strategy",
39
+ "text": "We used precision (1 ###reference_###), recall (2 ###reference_###), accuracy (3 ###reference_###), and f1-score (4 ###reference_###) as the evaluation metrics for our study. A competent model is expected to exhibit high values for precision, recall, accuracy, and f1-score."
40
+ },
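For reference, the four metrics above can be computed directly from the confusion-matrix counts. A minimal sketch for binary labels, treating albuminuria (label 1) as the positive class; the toy labels are illustrative:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, accuracy, and f1-score from binary labels
    (1 = albuminuria, treated as the positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, accuracy, f1

# Toy example: 4 of 6 predictions are correct
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))
```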
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-D Ethics Approval",
45
+ "text": "This study has been ethically approved by the Health Ethics Committee of Cipto Mangunkusumo Hospital, Faculty of Medicine Universitas Indonesia, Jakarta, Indonesia number KET-246/UN2.F1/ETIK/PPM.00.02/2022."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "IV Result",
51
+ "text": "###figure_3### \n###figure_4### Figure 3 ###reference_### shows the distribution of the train and test datasets used in our study. The test ratio is set at 0.25, resulting in 138 records for the train dataset and 46 records for the test dataset. As depicted in Figure 3 ###reference_###, the dataset exhibits a relatively uniform distribution. Although several data points appear to be potential outliers, we deliberately retained them to introduce bias into the learning algorithms and thus reduce the risk of overfitting.\nWe train each supervised learning algorithm with its defined parameters, as shown in Table II ###reference_###, using the training dataset, which results in trained models. These models are then tested using the test dataset, producing predicted labels. The performance of the models is evaluated by comparing these predicted labels with their corresponding true labels, which serve as the ground truth.\nThe model evaluation results are shown in Table III ###reference_###. As depicted in the table, the machine learning algorithms did not yield satisfactory results. Among the various machine learning algorithms experimented with, Na\u00efve Bayes outperformed the others. It demonstrated an accuracy of up to 0.65 when predicting the test dataset. In contrast, the remaining machine learning algorithms exhibited prediction accuracies only slightly above 0.5. Even the ensemble models failed to surpass Na\u00efve Bayes in predicting the test dataset. Moreover, none of these machine learning algorithms achieved an f1-score substantially higher than 0.5, suggesting that these models are insufficient for predicting the risk of albuminuria in T2DM patients.\n###table_3### Superior results were obtained from the deep learning algorithm, specifically the Multi-Layer Perceptron (MLP), which achieved an accuracy and f1-score of 0.74 and 0.71, respectively. 
This algorithm outperformed the machine learning algorithms, which only achieved accuracy and f1-scores of up to 0.65 and 0.55, respectively. Additionally, the MLP algorithm exhibited the highest precision and recall scores among the algorithms, scoring 0.68 and 0.75, respectively. These results outperformed those reported by Khitan et al. [8 ###reference_b8###] and Lin et al. [9 ###reference_b9###], indicating that the algorithm might be acceptable for predicting the risk of albuminuria among T2DM patients. However, further improvements are needed, particularly in terms of the dataset size and variety, to achieve better results.\nTo gain a better understanding of the model evaluation results, we conducted a visual error analysis on the prediction outcomes of the MLP model. To facilitate visualization, we used the Principal Component Analysis (PCA) method to reduce the features from 10 to 2 dimensions. Subsequently, we used square and triangle markers to represent the normal and albuminuria labels, respectively, while using red and green colors to indicate false and true predictions, respectively. The visualization of the model evaluation results can be observed in Figure 4 ###reference_###.\nAs shown in Figure 4 ###reference_###, the false predictions could have either a normal or albuminuria label. Interestingly, the visualization reveals that the falsely predicted labels are spread out but relatively close to the adjacent cluster that forms the true predictions. This indicates that the data characteristics of \u2018normal\u2019 and \u2018albuminuria\u2019 differ little at some points, which might make it difficult for the algorithm to draw separating boundaries between the labels, leading to false predictions and lower accuracy. 
This phenomenon may be explained by the nature of the patient data.\nFor several patients, the risk of developing albuminuria might not be strongly correlated with the features in the dataset. For example, a patient with uncontrolled diabetes, indicated by high blood glucose and a high lipid profile, may not develop albuminuria, while another patient with normal glucose levels and a normal lipid profile may develop it. This complexity arises because the risk of developing a disease in the human body may be influenced by multiple risk factors that are not apparent in the dataset. Therefore, one possible way to improve the model\u2019s accuracy is to increase the dataset size and variety, allowing the learning algorithms to better capture the complex patterns present in such data.\nDespite the complex nature of the patient data, the superior performance of the MLP algorithm might be attributable to its architecture. An MLP may comprise anywhere from a single hidden layer to many, and it uses the backpropagation algorithm to update its weights based on the training data [16 ###reference_b16###]. This enables the MLP to handle such complex data more readily than traditional machine learning algorithms."
52
+ },
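The dimensionality reduction used for the error visualization above (10 features projected onto 2 principal components before plotting) can be sketched with an eigendecomposition of the covariance matrix. The paper presumably used a standard PCA implementation; the random matrix here is only a stand-in for the 46-record, 10-feature test set.

```python
import numpy as np

def pca_2d(X):
    """Project feature vectors onto their top-2 principal components
    via covariance eigendecomposition (a stand-in for a library PCA)."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = np.cov(Xc, rowvar=False)          # 10x10 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
    return Xc @ top2                        # n x 2 projection for plotting

rng = np.random.default_rng(0)
X = rng.normal(size=(46, 10))  # stand-in for the 46-record test set
X2 = pca_2d(X)
print(X2.shape)  # (46, 2)
```

The two resulting columns are the coordinates plotted in the figure; the first component carries the most variance by construction.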
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion",
57
+ "text": "We have developed a supervised learning model to predict the risk of developing albuminuria in patients with T2DM. Among the various supervised learning models examined in this study, the MLP algorithm demonstrated superior performance in terms of precision, recall, accuracy, and f1-score, achieving values of 0.68, 0.75, 0.74, and 0.71, respectively. To further enhance the model\u2019s performance, we recommend augmenting the dataset with additional data to increase its size and diversity. We also propose further research into deep learning algorithms such as the MLP to handle the complexities inherent in patient data."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {
62
+ "1": {
63
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Summary Statistics</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.11\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.2.1\">Attribute</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.3.1\">Unit</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1\">Mean\u00b1Std</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.4.1\">Min</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.5.1\">Max</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2\"><span class=\"ltx_text\" id=\"S3.T1.2.2.2.1\">durasi_dm</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.3\"><span class=\"ltx_text\" id=\"S3.T1.2.2.3.1\">Year</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.1\"><span class=\"ltx_text\" id=\"S3.T1.2.2.1.1\">5.168\u00b15.012</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.4\"><span class=\"ltx_text\" id=\"S3.T1.2.2.4.1\">0.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.5\"><span class=\"ltx_text\" id=\"S3.T1.2.2.5.1\">38.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.2\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.1\">bmi</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3\"><span class=\"ltx_text\" id=\"S3.T1.3.3.3.1\">kg/m<sup class=\"ltx_sup\" id=\"S3.T1.3.3.3.1.1\">2</sup></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.1\"><span class=\"ltx_text\" id=\"S3.T1.3.3.1.1\">27.973\u00b14.517</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.4\"><span class=\"ltx_text\" id=\"S3.T1.3.3.4.1\">18.350</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.5\"><span class=\"ltx_text\" id=\"S3.T1.3.3.5.1\">44.870</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.2\"><span class=\"ltx_text\" id=\"S3.T1.4.4.2.1\">hdl</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.3\"><span class=\"ltx_text\" id=\"S3.T1.4.4.3.1\">mg/dL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.1\"><span class=\"ltx_text\" id=\"S3.T1.4.4.1.1\">48.065\u00b110.913</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4\"><span class=\"ltx_text\" id=\"S3.T1.4.4.4.1\">22.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.5\"><span class=\"ltx_text\" id=\"S3.T1.4.4.5.1\">88.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.2\"><span class=\"ltx_text\" id=\"S3.T1.5.5.2.1\">ldl</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.3\"><span class=\"ltx_text\" id=\"S3.T1.5.5.3.1\">mg/dL</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.1\"><span class=\"ltx_text\" id=\"S3.T1.5.5.1.1\">135.978\u00b138.522</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.4\"><span class=\"ltx_text\" id=\"S3.T1.5.5.4.1\">40.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.5\"><span class=\"ltx_text\" id=\"S3.T1.5.5.5.1\">307.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.2\"><span class=\"ltx_text\" id=\"S3.T1.6.6.2.1\">tg</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.3\"><span class=\"ltx_text\" id=\"S3.T1.6.6.3.1\">mg/dL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.1\"><span class=\"ltx_text\" id=\"S3.T1.6.6.1.1\">185.571\u00b1118.305</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.4\"><span class=\"ltx_text\" id=\"S3.T1.6.6.4.1\">47.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.5\"><span class=\"ltx_text\" id=\"S3.T1.6.6.5.1\">878.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.2\"><span class=\"ltx_text\" id=\"S3.T1.7.7.2.1\">kol_tot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.3\"><span class=\"ltx_text\" id=\"S3.T1.7.7.3.1\">mg/dL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.1\"><span class=\"ltx_text\" id=\"S3.T1.7.7.1.1\">207.554\u00b142.534</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.4\"><span class=\"ltx_text\" id=\"S3.T1.7.7.4.1\">105.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S3.T1.7.7.5\"><span class=\"ltx_text\" id=\"S3.T1.7.7.5.1\">347.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.2\"><span class=\"ltx_text\" id=\"S3.T1.8.8.2.1\">gdp</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.3\"><span class=\"ltx_text\" id=\"S3.T1.8.8.3.1\">mg/dL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.1\"><span class=\"ltx_text\" id=\"S3.T1.8.8.1.1\">158.076\u00b168.591</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.4\"><span class=\"ltx_text\" id=\"S3.T1.8.8.4.1\">74.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.5\"><span class=\"ltx_text\" id=\"S3.T1.8.8.5.1\">433.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.2\"><span class=\"ltx_text\" id=\"S3.T1.9.9.2.1\">TDS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.3\"><span class=\"ltx_text\" id=\"S3.T1.9.9.3.1\">mmHg</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.1\"><span class=\"ltx_text\" id=\"S3.T1.9.9.1.1\">141.609\u00b118.682</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.4\"><span class=\"ltx_text\" id=\"S3.T1.9.9.4.1\">96.000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.5\"><span class=\"ltx_text\" id=\"S3.T1.9.9.5.1\">201.000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.2\"><span class=\"ltx_text\" id=\"S3.T1.10.10.2.1\">a1c</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S3.T1.10.10.3\"><span class=\"ltx_text\" id=\"S3.T1.10.10.3.1\">%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.1\"><span class=\"ltx_text\" id=\"S3.T1.10.10.1.1\">8.204\u00b12.134</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.4\"><span class=\"ltx_text\" id=\"S3.T1.10.10.4.1\">5.300</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.10.10.5\"><span class=\"ltx_text\" id=\"S3.T1.10.10.5.1\">16.300</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.2\"><span class=\"ltx_text\" id=\"S3.T1.11.11.2.1\">cr</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.3\"><span class=\"ltx_text\" id=\"S3.T1.11.11.3.1\">mg/dL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.1\"><span class=\"ltx_text\" id=\"S3.T1.11.11.1.1\">0.782\u00b10.409</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.4\"><span class=\"ltx_text\" id=\"S3.T1.11.11.4.1\">0.300</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.11.5\"><span class=\"ltx_text\" id=\"S3.T1.11.11.5.1\">4.810</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.12.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.11.12.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.11.12.1.1.1\">Attribute</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.11.12.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.11.12.1.2.1\">Category</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.12.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.11.12.1.3.1\">Count</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.12.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.11.12.1.4.1\">Perc.</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.13.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.11.13.2.1\"><span class=\"ltx_text\" id=\"S3.T1.11.13.2.1.1\">kid_group</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T1.11.13.2.2\">Normal (0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.13.2.3\"><span class=\"ltx_text\" id=\"S3.T1.11.13.2.3.1\">92</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.11.13.2.4\"><span class=\"ltx_text\" id=\"S3.T1.11.13.2.4.1\">0.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.14.3\">\n<td class=\"ltx_td ltx_border_b ltx_border_l ltx_border_r\" id=\"S3.T1.11.14.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" colspan=\"2\" id=\"S3.T1.11.14.3.2\">Albuminuria (1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.11.14.3.3\"><span class=\"ltx_text\" id=\"S3.T1.11.14.3.3.1\">92</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.11.14.3.4\"><span class=\"ltx_text\" id=\"S3.T1.11.14.3.4.1\">0.50</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
64
+ "capture": "TABLE I: Summary Statistics"
65
+ },
66
+ "2": {
67
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Design of Experiment</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.1\">Algorithm</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T2.1.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.2.1\">Observable Factors</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.2.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.2.2.1.1\">Na\u00efve Bayes</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.2.2.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.2.3\"><span class=\"ltx_text\" id=\"S3.T2.1.2.2.3.1\">sklearn.naive_bayes.GaussianNB</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.3.3.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.3.3.2\"><span class=\"ltx_text\" id=\"S3.T2.1.3.3.2.1\">priors=None, var_smoothing=1e-09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.4.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.4.4.1.1\">SVM</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.4.2\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.4.4.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.4.3\"><span class=\"ltx_text\" id=\"S3.T2.1.4.4.3.1\">sklearn.svm.SVC</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.5.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.5.5.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.5.5.2\"><span class=\"ltx_text\" id=\"S3.T2.1.5.5.2.1\">C=1.0, kernel=\u2019rbf\u2019, max_iter=-1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.6.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.1.6.6.1.1\">Decision Tree</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.6.6.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.6.3\"><span class=\"ltx_text\" id=\"S3.T2.1.6.6.3.1\">sklearn.tree.DecisionTreeClassifier</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.7.7.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.7.7.2\"><span class=\"ltx_text\" id=\"S3.T2.1.7.7.2.1\">criterion=\u2019gini\u2019, max_depth=None</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.8.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S3.T2.1.8.8.1.1\">Random Forest</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.8.2\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T2.1.8.8.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.8.3\"><span class=\"ltx_text\" id=\"S3.T2.1.8.8.3.1\">sklearn.ensemble.RandomForestClassifier</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.9.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.9.9.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.9.9.2\"><span class=\"ltx_text\" id=\"S3.T2.1.9.9.2.1\">n_estimators=100, criterion=\u2019gini\u2019,</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.10.10\">\n<td class=\"ltx_td ltx_border_r\" id=\"S3.T2.1.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.10.10.2\"><span class=\"ltx_text\" id=\"S3.T2.1.10.10.2.1\">max_depth=None</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.11.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.11.11.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S3.T2.1.11.11.1.1\">AdaBoost</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.11.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.11.11.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.11.11.3\"><span class=\"ltx_text\" id=\"S3.T2.1.11.11.3.1\">sklearn.ensemble.AdaBoostClassifier</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.12.12.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.12.12.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.12.12.2\"><span class=\"ltx_text\" id=\"S3.T2.1.12.12.2.1\">n_estimators=50, learning_rate=1.0,</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.13.13\">\n<td class=\"ltx_td ltx_border_r\" 
id=\"S3.T2.1.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.13.13.2\"><span class=\"ltx_text\" id=\"S3.T2.1.13.13.2.1\">algorithm=\u2019SAMME.R\u2019</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.14.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.14.14.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S3.T2.1.14.14.1.1\">XGBoost</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.14.14.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.14.14.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.14.14.3\"><span class=\"ltx_text\" id=\"S3.T2.1.14.14.3.1\">xgboost.XGBClassifier</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.15.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.15.15.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.15.15.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.15.15.2\"><span class=\"ltx_text\" id=\"S3.T2.1.15.15.2.1\">n_estimators=2, max_depth=1,</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.16.16\">\n<td class=\"ltx_td ltx_border_r\" id=\"S3.T2.1.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.16.16.2\"><span class=\"ltx_text\" id=\"S3.T2.1.16.16.2.1\">learning_rate=1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.17.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.17.17.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S3.T2.1.17.17.1.1\">MLP</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.17.17.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.17.17.2.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.17.17.3\"><span class=\"ltx_text\" 
id=\"S3.T2.1.17.17.3.1\">sklearn.neural_network.MLPClassifier</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.18.18.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.18.18.1.1\">Params</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T2.1.18.18.2\"><span class=\"ltx_text\" id=\"S3.T2.1.18.18.2.1\">hidden_layer_sizes=(100,),</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.19.19\">\n<td class=\"ltx_td ltx_border_r\" id=\"S3.T2.1.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S3.T2.1.19.19.2\"><span class=\"ltx_text\" id=\"S3.T2.1.19.19.2.1\">learning_rate_init=3e-3,</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.20.20\">\n<td class=\"ltx_td ltx_border_b ltx_border_l ltx_border_r\" id=\"S3.T2.1.20.20.1\"></td>\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S3.T2.1.20.20.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S3.T2.1.20.20.3\"><span class=\"ltx_text\" id=\"S3.T2.1.20.20.3.1\">max_iter=200</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
68
+ "capture": "TABLE II: Design of Experiment"
69
+ },
70
+ "3": {
71
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Classification Report</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.1\">Algorithm</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.2.1\">Precision</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.3.1\">Recall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.4.1\">Accuracy</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.5.1\">F1-Score</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.1\"><span class=\"ltx_text\" id=\"S4.T3.1.2.2.1.1\">Na\u00efve Bayes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.2\"><span class=\"ltx_text\" id=\"S4.T3.1.2.2.2.1\">0.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.3\"><span class=\"ltx_text\" id=\"S4.T3.1.2.2.3.1\">0.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.4\"><span class=\"ltx_text\" id=\"S4.T3.1.2.2.4.1\">0.65</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.2.5\"><span class=\"ltx_text\" 
id=\"S4.T3.1.2.2.5.1\">0.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.1\"><span class=\"ltx_text\" id=\"S4.T3.1.3.3.1.1\">SVM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.2\"><span class=\"ltx_text\" id=\"S4.T3.1.3.3.2.1\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.3\"><span class=\"ltx_text\" id=\"S4.T3.1.3.3.3.1\">0.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.4\"><span class=\"ltx_text\" id=\"S4.T3.1.3.3.4.1\">0.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.3.3.5\"><span class=\"ltx_text\" id=\"S4.T3.1.3.3.5.1\">0.58</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.1\"><span class=\"ltx_text\" id=\"S4.T3.1.4.4.1.1\">Decision Tree</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.2\"><span class=\"ltx_text\" id=\"S4.T3.1.4.4.2.1\">0.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.3\"><span class=\"ltx_text\" id=\"S4.T3.1.4.4.3.1\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.4\"><span class=\"ltx_text\" id=\"S4.T3.1.4.4.4.1\">0.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.4.4.5\"><span class=\"ltx_text\" id=\"S4.T3.1.4.4.5.1\">0.48</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.1\"><span class=\"ltx_text\" id=\"S4.T3.1.5.5.1.1\">Random Forest</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.2\"><span 
class=\"ltx_text\" id=\"S4.T3.1.5.5.2.1\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.3\"><span class=\"ltx_text\" id=\"S4.T3.1.5.5.3.1\">0.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.4\"><span class=\"ltx_text\" id=\"S4.T3.1.5.5.4.1\">0.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.5.5.5\"><span class=\"ltx_text\" id=\"S4.T3.1.5.5.5.1\">0.55</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.1\"><span class=\"ltx_text\" id=\"S4.T3.1.6.6.1.1\">AdaBoost</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.2\"><span class=\"ltx_text\" id=\"S4.T3.1.6.6.2.1\">0.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.3\"><span class=\"ltx_text\" id=\"S4.T3.1.6.6.3.1\">0.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.4\"><span class=\"ltx_text\" id=\"S4.T3.1.6.6.4.1\">0.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.6.6.5\"><span class=\"ltx_text\" id=\"S4.T3.1.6.6.5.1\">0.42</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.1\"><span class=\"ltx_text\" id=\"S4.T3.1.7.7.1.1\">XGBoost</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.2\"><span class=\"ltx_text\" id=\"S4.T3.1.7.7.2.1\">0.46</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.3\"><span class=\"ltx_text\" id=\"S4.T3.1.7.7.3.1\">0.55</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.4\"><span class=\"ltx_text\" 
id=\"S4.T3.1.7.7.4.1\">0.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.7.7.5\"><span class=\"ltx_text\" id=\"S4.T3.1.7.7.5.1\">0.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.8.1.1\">MLP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.8.2.1\">0.68</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.8.3.1\">0.75</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.8.4.1\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.1.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.8.8.5.1\">0.71</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
72
+ "capture": "TABLE III: Classification Report"
73
+ }
74
+ },
75
+ "image_paths": {
76
+ "1": {
77
+ "figure_path": "2309.16742v4_figure_1.png",
78
+ "caption": "Figure 1: Dataset distribution",
79
+ "url": "http://arxiv.org/html/2309.16742v4/extracted/5372047/dataset_boxplot.png"
80
+ },
81
+ "2": {
82
+ "figure_path": "2309.16742v4_figure_2.png",
83
+ "caption": "Figure 2: Simplified diabetic nephropathy mechanism",
84
+ "url": "http://arxiv.org/html/2309.16742v4/extracted/5372047/fig_simplified_diabetic_nephropathy_mechanism.png"
85
+ },
86
+ "3": {
87
+ "figure_path": "2309.16742v4_figure_3.png",
88
+ "caption": "Figure 3: Distribution of the train-test dataset",
89
+ "url": "http://arxiv.org/html/2309.16742v4/extracted/5372047/dataset_swarmplot.png"
90
+ },
91
+ "4": {
92
+ "figure_path": "2309.16742v4_figure_4.png",
93
+ "caption": "Figure 4: Error analysis",
94
+ "url": "http://arxiv.org/html/2309.16742v4/extracted/5372047/error_analysis.png"
95
+ }
96
+ },
97
+ "validation": true,
98
+ "references": [],
99
+ "url": "http://arxiv.org/html/2309.16742v4"
100
+ }
20240127/2309.17194v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2310.18446v5.json ADDED
@@ -0,0 +1,462 @@
1
+ {
2
+ "title": "A Novel Skip Orthogonal List for Dynamic Optimal Transport Problem",
3
+ "abstract": "Optimal transport is a fundamental topic that has attracted a great amount of attention from the optimization community in the past decades. In this paper, we consider an interesting discrete dynamic optimal transport problem:\ncan we efficiently update the optimal transport plan when the weights or the locations of the data points change?\nThis problem is naturally motivated by several applications in machine learning. For example, we often need to compute the optimal transport cost between two different data sets; if some changes happen to a few data points, should we re-compute the high complexity cost function or update the cost by some efficient dynamic data structure? We are aware that several dynamic maximum flow algorithms have been proposed before, however, the research on dynamic minimum cost flow problem is still quite limited, to the best of our knowledge.\nWe propose a novel 2D Skip Orthogonal List together with some dynamic tree techniques.\nAlthough our algorithm is based on the conventional simplex method,\nit can efficiently find the variable to pivot within expected time,\nand complete each pivoting operation within expected time where is the set of all supply and demand nodes.\nSince dynamic modifications typically do not introduce significant changes,\nour algorithm requires only a few simplex iterations in practice.\nSo our algorithm is\nmore efficient than re-computing the optimal transport cost that needs at least one traversal over all variables,\nwhere denotes the number of edges in the network.\nOur experiments demonstrate that our algorithm significantly outperforms existing algorithms in the dynamic scenarios.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The discrete optimal transport (OT) problem involves finding the optimal transport plan \u201c\u201d that minimizes the total cost of transporting one weighted dataset to another , given a cost matrix \u201c\u201d [23 ###reference_b23###]. The datasets and respectively represent the supply and demand node sets, and the problem can be represented as a minimum cost flow problem by adding the edges between and to create a complete bipartite graph. The discrete optimal transport problem finds numerous applications in the areas such as image registration [15 ###reference_b15###], seismic tomography [19 ###reference_b19###], and machine learning [31 ###reference_b31###]. However, most of these applications only consider static scenario where the weights of the datasets and the cost matrix remain constant.\nYet, many real-world applications need to consider the dynamic scenarios:\nDataset Similarity. In data analysis, measuring the similarity between datasets is a crucial task, and optimal transport has emerged as a powerful tool for this purpose [2 ###reference_b2###]. Real-world datasets are often dynamic, with data points being replaced, weights adjusted, or new data points added over time. Therefore, it is necessary to take these dynamically changes into account.\nTime Series Analysis. Optimal transport can serve as a metric in time series analysis [7 ###reference_b7###]. The main intuition lies in the smooth transition of states between time points in a time series. The smoothness implies the potential to iteratively refine a new solution based on the previous one, circumventing the need for a complete recomputation.\nNeuroimage analysis [14 ###reference_b14###, 17 ###reference_b17###]. In the medical imaging applications, we may want to compute the change trend of a patient\u2019s organ (e.g., the MRI images of human brain over several months), and the differences are measured by the optimal transport cost. 
Since the changes are often local and small, we may hope to apply some efficient method to quickly update the cost over the period.\nDenote by and the sets of vertices and edges in the bipartite network, respectively.\nExisting methods, such as the Sinkhorn algorithm [9 ###reference_b9###] and the Network Simplex algorithm [21 ###reference_b21###], are not adequate\nto handle the dynamic scenarios. Upon any modification to the cost matrix, the Sinkhorn algorithm requires at least one Sinkhorn-Knopp iteration to regularize the entire solution matrix, while the Network Simplex algorithm needs to traverse all edges at least once. Consequently, the time complexities of these algorithms for the dynamic model are in general cases.\nOur algorithm takes a novel data structure that yields an time solution for handling evolving datasets, where is determined by the magnitude of the modification. In practice, is usually much less than the data size , and therefore our algorithm can save a large amount of runtime for the dynamic scenarios.\n111Demo library is available at https://github.com/xyxu2033/DynamicOptimalTransport ###reference_Transport###"
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related Works",
15
+ "text": "Exact Solutions.\nIn the past years, several linear programming based minimum cost flow algorithms have been proposed to address discrete optimal transport problems. The simplex method by Dantzig et al. [10 ###reference_b10###] can efficiently solve general linear programs. Despite its worst-case exponential time complexity, Spielman and Teng [28 ###reference_b28###] showed that its smoothed time complexity is polynomial. Cunningham [8 ###reference_b8###] adapted the simplex method for minimum cost flow problems. Further, Orlin [21 ###reference_b21###] enhanced the network simplex algorithm with cost scaling techniques and Tarjan [29 ###reference_b29###] improved its complexity to be . Recently, Van Den Brand et al. [32 ###reference_b32###] presented an algorithm based on the interior point method with a time complexity , and Chen et al. [6 ###reference_b6###] proposed a time algorithm through a specially designed data structure on the interior point method.\nApproximate Algorithms. For approximate optimal transport, Sherman [25 ###reference_b25###] proposed a approximation algorithm in time. Pele and Werman [22 ###reference_b22###] introduced the FastEMD algorithm that applies classic algorithms on a heuristic sketch of the input graph. Later, Cuturi [9 ###reference_b9###] used Sinkhorn-Knopp iterations to approximate the optimal transport problem by adding the smoothed entropic entry as the regularization term. Recently several optimizations on the Sinkhorn algorithm have been proposed, such as the Overrelaxation of Sinkhorn [30 ###reference_b30###] and the Screening Sinkhorn algorithm [1 ###reference_b1###].\nSearch Trees and Skip Lists. Our data structure also utilizes high-dimensional extensions of skip lists to maintain a 2-dimensional Euler Tour sequence. 
Existing high-dimensional data structures based on self-balanced binary search trees, such as -d tree [3 ###reference_b3###], are not suitable as they do not support cyclic ordered set maintenance. Skip lists [24 ###reference_b24###] as depicted in Figure 0(a) ###reference_sf1###, which are linked lists with additional layers of pointers for element skipping, are adapted in our context to form skip orthogonal lists.\nThis skipping technique is later generalized for sparse data in higher dimensions [20 ###reference_b20###, 12 ###reference_b12###], but range querying generally requires time where is the number of points in the high dimensional space. On the other hand, our data structure requires expected time when applied to simplex iterations."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Overview of Our Algorithm",
21
+ "text": "Our algorithm for the dynamic optimal transport problem employs two key strategies:\nFirst, the dynamic optimal transport operations are reduced to simplex iterations.\nOur technique, grounded on the Simplex method, operates by eliminating the smallest cycle in the graph. We assume that the modifications influence only a small portion of the result, requiring only a few simplex iterations. It is worth noting that existing algorithms like the Network Simplex Algorithm perform poorly under dynamic modifications as they require scanning all the edges at least once to ensure the correctness of the solution.\nSecond, an efficient data structure is proposed for performing each simplex iteration within expected linear time complexity.\nOur data structure, as shown in Figure 0(b) ###reference_sf2###, employs the Euler Tour Technique. We adapt skip lists to maintain the cyclic ordered set produced by the Euler Tour Technique and introduce an additional dimension to create a Skip Orthogonal List. This structure aids in maintaining the information about\nthe adjusted cost matrix, which is a matrix that requires specific range modifications and queries.\n###figure_1### ###figure_2### The rest of the paper is organized as follows. 
In Section 2 ###reference_###, we introduce several important definitions and notations that are used throughout this paper.\nIn Section 3 ###reference_### we present the data structure Skip Orthogonal List,\nwhere subsection 3.1 ###reference_### explains how the data structure is organized\nand subsection 3.2 ###reference_### uses cut operation as an example to demonstrate the updates on this data structure.\nIn Section 4 ###reference_### we elaborate on how to use our data structure to solve the dynamic optimal transport problem.\nSubsection 4.1 ###reference_### shows that the dynamic optimal transport model can be reduced to simplex iterations,\nand Subsection 4.2 ###reference_### shows how our data structure could be used to improve the performance of each simplex iteration."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Preliminaries",
27
+ "text": ""
28
+ },
29
+ {
30
+ "section_id": "2.1",
31
+ "parent_section_id": "2",
32
+ "section_name": "Optimal Transport",
33
+ "text": "Let and represent the source and target point sets, respectively; the corresponding discrete probability distributions are and , such that . The cost matrix is with each entry denoting the cost of transporting a unit of mass from the point to the point . The discrete optimal transport problem can be formulated as (1 ###reference_###).\nSince Problem (1 ###reference_###) is a standard network flow problem, it can be transformed to the following Problem (2 ###reference_###) by adding infinity-cost edges [23 ###reference_b23###]:\nWe add the constraint (3 ###reference_###), and also redefine the point weights as (4 ###reference_###). Note that in (2 ###reference_###), the input weight must always satisfy ; otherwise, the constraints cannot be satisfied.\nWe use to denote a given basic solution in the context of using the simplex method to solve the Optimal Transport problem.\nWe notice that the basic variables always form a spanning tree of the\ncomplete directed graph with self loops \n[8 ###reference_b8###].\nLet the dual variables be , satisfying the following constraint:\nWe then define the adjusted cost matrix as ,\nwhere represents the simplex multipliers for the linear program [21 ###reference_b21###]."
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "Euler Tour Technique",
39
+ "text": "The Euler Tour Technique is a method for representing a tree as a cyclic ordered set of linear length [29 ###reference_b29###].\nSpecifically, given a tree , we construct a directed graph with the same vertex set as follows:\nFor each vertex , add the self-loop to ;\nFor each undirected edge , add two directed edges and to .\nFollowing this definition, . Since is a tree, , and therefore .\nSince the difference of In-Degree and Out-Degree of each vertex in is 0, always contains an Euler Tour.\nGiven a tree , the Euler Tour representation is an arbitrary sequence of Euler Tour of represented by edges. That is, with circular order induced by the Euler Tour is an Euler Tour representation.\nFor the rest of the paper, denotes the Euler Tour representation of the tree in the context.\nThrough Definition 1 ###reference_inition1###,\nwe can reduce edge linking, edge cutting, sub-tree weight updating and sub-tree weight querying to a constant number of element insertions, element deletions, range weight modifications and range weight queries on a circular ordered set [29 ###reference_b29###].\nWe show in Section 4.1 ###reference_### that the dynamic optimal transport can be reduced to the 2D version of\nthese four operations."
40
+ },
41
+ {
42
+ "section_id": "2.3",
43
+ "parent_section_id": "2",
44
+ "section_name": "Orthogonal Lists and Skip Lists",
45
+ "text": "Skip Lists are the probabilistic data structures that extend a singly linked list with forward links at various levels, for improving the search, insertion, and deletion operations. Figure 0(a) ###reference_sf1### illustrates an example for skip list. Each level contains a circular linked list, where the list at a higher level is a subset of the list at a lower level and the bottom level contains all the elements. The nodes at the same level have the same color and are linked horizontally. The corresponding elements in adjacent lists are connected by vertical pointers. We apply this skipping technique to circular singly linked lists in our work.\nJust as most self-balanced binary search trees, Skip Lists support \u201clazy propagation\u201d techniques, allowing range modifications within time, where is the sequence length maintained by the tree [27 ###reference_b27###]. This technique is commonly used in dynamic trees for network problems [29 ###reference_b29###].\nA -dimensional Orthogonal List has orthogonal forward links (it is a standard linked list when ). Orthogonal lists, which can be singly, doubly, or circularly linked, can maintain the information mapped from the Cartesian product of ordered sets, such as sparse tensors [5 ###reference_b5###].\nFigure 3 ###reference_###\ndemonstrates an orthogonal list that maintains a matrix. Each node has 2 forward links,\ndenoted by row links (red) and column links (blue).\nThe row links connect the elements in each row into a circular linked list horizontally and the column links connect the elements in each column into a circular linked list vertically.\n###figure_3### ###figure_4###"
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Skip Orthogonal List",
51
+ "text": "In this section we introduce our novel data structure Skip Orthogonal List.\nIn Section 4 ###reference_###, this data structure is used for dynamically updating optimal transport.\nFormally, with the help of a Skip Orthogonal List,\nwe can maintain a forest\n\nwith at most two trees,\nand a matrix that supports the following operations. The first two and the last operations are for the case that contains only one tree; the other two operations are for the case that has two trees.\nCut.\nGiven an undirected edge , remove edge from the tree, and split it into two disjoint trees. Let the connected component containing form the vertex set , and the connected component containing form the vertex set .\nInsert.\nAdd a new node to that does not connect with any other node. Let the original nodes form the vertex set , and the new node itself form the vertex set .\nRange Update. Given , for each ,\nupdate as equation (6 ###reference_###).\nLink.\nGiven a pair , add the edge to the forest; connect two disjoint trees into a single tree, if and are disconnected.\nGlobal Minimum Query.\nReturn the minimum value of on the tree.\nFor the remainder of the section,\nwe construct a data structure with the expected space complexity, where each operation can be done with the expected time.\nSection 3.1 ###reference_### shows the overall structure of the data structure and how to query in this data structure. Section 3.2 ###reference_### illustrates the cut operation as an example based on this structure. For other operations (linking, insertion, and range updating),\nwe leave them to appendix H ###reference_###."
52
+ },
53
+ {
54
+ "section_id": "3.1",
55
+ "parent_section_id": "3",
56
+ "section_name": "The Overall Structure",
57
+ "text": "As shown in Figure 3 ###reference_###, a Skip Orthogonal List is a hierarchical\ncollection of Orthogonal lists,\nwhere each layer has fewer elements than\nthe one below it, and the elements are evenly\nspaced out.\nThe bottom layer has all the elements\nwhile the top layer has the least. Formally, it can be defined as Definition 2 ###reference_inition2###.\nGiven a parameter and a cyclic ordered set ,\na 2D Skip Orthogonal List over the set is an infinite collection of 2 Dimensional Circular Orthogonal Lists , where\nis a set of independent random variables.\nThe distribution is a geometric distribution with parameter\nFor each ,\nlet be an Orthogonal List whose key contains all the elements in\n,\nwhere is the cyclic sequence formed by\nNote that for any pair , we use to denote the corresponding element in ; with a slight abuse of notations, we also use \u201c at level \u201d to denote the corresponding node at the -th level in the Skip Orthogonal List. If level is not specified in the context, refers to the node at the bottom level.\nWe use this data structure to maintain several key information of .\nSince as discussed in Section 2.2 ###reference_###,\nsimilar to conventional 1D Skip Lists,\nwe know that the space complexity is with high probability\nin appendix G ###reference_###.\nNow we augment this data structure to store some additional information for range updating and global minimum query.\nBefore that, the concept \u201cdominate\u201d needs to be adapted to 2D case defined as Definition 3 ###reference_inition3###.\nFor any positive integer , in a Skip Orthogonal List over the cyclic ordered set ,\nsuppose and are 2 elements in .\nWe say the node\n\ndominates at level if and only if the following three conditions are all satisfied:\nand ;\nor\n;\nor\n.\nHere, for any element in the cyclic ordered set , we use \u201c\u201d to denote the successor of induced by the cyclic order.\nTo better understand Definition 3 ###reference_inition3###, we 
illustrate the examples in Figure 3(a) ###reference_sf1### and Figure 3(b) ###reference_sf2###. In each figure,\neach blue node dominates itself and all the yellow nodes,\nwhile the red nodes dominate every node in the orthogonal list.\n###figure_5### ###figure_6### We now augment the Skip Orthogonal List of\nDefinition 2 ###reference_inition2###.\nFor each node at orthogonal list ,\nbeside the two forward links and two backward links,\nwe add the following attributes:\ntag: maintains\nthe tag for lazy propagation\nfor all the nodes dominated by it;\nmin_value: maintains\nthe minimum value\namong all the nodes dominated by it. Note that when , it stores the original value of following lazy propagation technique.\nThat is, for any node in the data structure, after each modification and query, the data structure needs to assure\nmin_index: maintains the index corresponding to min_value attribute.\nchild: points to at the orthogonal list if , and it is invalid if ;\nparent: points to the node that dominates it if is not empty."
58
+ },
59
+ {
60
+ "section_id": "3.2",
61
+ "parent_section_id": "3",
62
+ "section_name": "The Update Operation: Cut",
63
+ "text": "In this subsection, we focus on the update operation \u201cCut\u201d for a 2D Skip Orthogonal List as an example.\nFigure 4(a) ###reference_sf1### and Figure 4(b) ###reference_sf2### illustrate the basic idea of the cutting process.\n###figure_7### ###figure_8### Taking an undirected edge that needs to be cut as the input, the algorithm can be roughly described as follows:\nFind the two rows and two columns representing the directed edges and in ,\ni.e. the transparent nodes in Figure 4(b) ###reference_sf2###\nand the shaded nodes in Figure 4(a) ###reference_sf1###.\nPush down the tag attribute of all the nodes alongside the rows and columns,\ni.e. the red nodes and transparent nodes in Figure 4(b) ###reference_sf2###.\nA node needs to be pushed down, if some changes happen to the nodes dominated by .\nCut the rows and columns, wrapping up the forward links and backward links of points alongside,\nas illustrated in Figure 4(a) ###reference_sf1###.\nThis operation cuts the original Skip Orthogonal List into four smaller lists.\nUpdate the min attribute of the remaining nodes whose tag attribute was pushed down in step 2, i.e. the red nodes in Figure 4(b) ###reference_sf2###.\nReturn the four smaller lists that were cut out in step 3.\nThe generalized\nlazy propagation\nto our 2D data structure ensures that only the nodes that are \u201cclose\u201d to the two rows and columns are modified, and consequently the updating time is guaranteed to be low.\nSpecifically,\nthe expected time complexity is for each call to the procedure Cut."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Our Dynamic Network Simplex Method",
69
+ "text": "The simplex method performs simplex iterations on some initial feasible basis until the optimal solution is obtained.\nThe simplex iterations are used for refining the current solution under the dynamic changes.\nIn each simplex iteration,\nsome variable with a negative simplex multiplier is selected for a call to the procedure Pivot, where\none common strategy is to pivot in the variable with\nthe smallest simplex multiplier.\nIn Section 4.1 ###reference_### we focus on defining the dynamic optimal transport operations and using simplex iterations\nto solve this problem,\nwhile in Section 4.2 ###reference_### we analyze the details in each simplex iteration.\nOur method is presented in the context of the conventional Network Simplex algorithm [8 ###reference_b8###, 21 ###reference_b21###]."
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Dynamic Optimal Transport Operations",
75
+ "text": "In an Optimal Transport problem, suppose the nodes in node set are located in some metric space , e.g., the Euclidean Space . The edge cost is usually defined as the (squared) distance between and in the space.\nLet denote the weight vector as defined in the equation (4 ###reference_###).\nA Dynamic Optimal Transport algorithm should support the following four types of update as well as online query:\nSpatial Position Modification. Select some supply or demand point and move to another point\n.\nThis update usually results in the modification on an entire row or column in the cost matrix .\nWeight Modification. Select a pair of supply or demand points with some positive number\n.\nThen update and .\nPoint Deletion. Delete\na point with (before performing deletion, its weight should be already modified to be via the above \u201cweight modification\u201d, due to the requirement of weight balance for OT).\nPoint Insertion. Select a point ;\nlet and insert into set (after the insertion, we can modify its weight from \nto a specified value via the \u201cweight modification\u201d).\nQuery. Answer the current Optimal Transport plan and the cost.\nThese updates do not change the overall weights in the supply and demand sets,\nand thus and a feasible transport\nplan always exists.\nTherefore we can reduce these updates to the operations on simplex basis, and we explain the ideas below:\nSpatial Position Modification. The original optimal solution is primal feasible but not primal optimal, i.e. not dual feasible.\nWe perform the primal simplex method based on the original optimal solution.\nWhen moving a point ,\nwe first update the cost matrix , the dual variable and the modified cost to meet the constraint (5 ###reference_###).\nAfter that, we repeatedly perform the simplex iterations\nas long as\nthe minimum value of the adjusted cost is negative.\nWeight Modification. 
The original optimal solution is dual feasible but not primal feasible.\nWe perform the dual simplex iterations based on the original optimal solution.\nSuppose we attempt to decrease and increase by . To implement this,\nwe send amount of flow from to in the residual network in a similar manner to the shortest path augmenting method [11 ###reference_b11###]. Specifically,\nwe send the flow through basic variables.\nIf some variable needs to be pivoted out before the required amount of flow is sent,\nwe pivot in the variable with the smallest adjusted cost, and repeat this process.\nPoint Deletion & Point Insertion. As the deleted/inserted point has weight (even if the weight is non-zero, we can first perform the \u201cweight modification\u201d to modify it to be zero),\ninserting or deleting the point does not influence\nour result.\nWe maintain a node pool keeping\nall the supply and demand nodes with 0 weight. Each\nPoint Insertion operation takes some point from\nthis pool and moves it to the correct spatial location (i.e., insert a new point),\nwhile each Point Deletion operation returns\na node to the pool.\nOur solution updates the optimal transport\nplan as soon as an update happens,\nso we can answer the query for the optimal transport plan and value online.\nIf the number of modified nodes is not large, intuitively the optimal transport plan\nshould not change much, and thus we only need to run a small number of simplex\niterations to obtain the OT solution.\nAssume we need to run simplex iterations, where we assume\n. Then the time complexity of our algorithm\nis with being the time of each simplex iteration."
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "The Details for Simplex Iteration",
81
+ "text": "As discussed in\nSection 4.1 ###reference_###,\nthe dynamic operations on OT can be effectively reduced\nto simplex iterations.\nIn this section, we review the operations used in\nthe conventional network simplex algorithm, and show how to use the data structure designed in Section 3 ###reference_### for maintaining .\nThe conventional network simplex method relies on the simplex method\nsimplified by some graph properties.\nA (network) simplex iteration contains the following steps:\nSelect Variable to Pivot in. Select the variables with the smallest adjusted cost to pivot in.\nDenote by the selected one to be pivoted in.\nUpdate Primal Solution.\nAdding the new variable to the current basis forms a cycle. We send the circular flow in the cycle, until some basic variable in the reverse direction runs out of flow, , through Graph Search (e.g. Depth First Search) or Link/Cut Tree [26 ###reference_b26###].\nDenote that node as , which is to be pivoted out.\nUpdate Dual Solution. Update the dual variables and modified cost to meet the constraint (5 ###reference_###),\nas the new basis, because will soon be queried in the next simplex iteration.\nThe selecting step performs a query on the data structure on for the minimum element,\nand the dual updating performs an update on the data structure.\nThough the primal updating step can be done within time [29 ###reference_b29###],\nthe conventional network simplex maintains through brute force. 
That is,\nthe conventional network simplex naively traverses all the adjusted costs and selects the minimum,\nand updates the adjusted costs one by one after the dual solution is updated.\nThis indicates that the time complexity of each simplex iteration is .\nOur goal is to reduce this complexity; in particular, we aim to maintain so that it can answer the global minimum query and perform updates when the primal basis changes.\nIn the simplex method, when we decide to pivot in the variable ,\nwe update the dual variables as in the following equation (7 ###reference_###),\nwhere is the set of nodes connected to and is the set of nodes connected to after the edge is cut;\n and are respectively the dual solutions before and after pivoting, where and are the entries corresponding to the node\n, for each .\nBased on the definition of the adjusted cost matrix ,\nthe update objective can be formulated as below:\nwhere is the adjusted cost matrix with respect to the old dual variables while corresponds to the new dual variables .\nWe present the details for updating the adjusted cost matrix in\nAlgorithm 1 ###reference_###.\nOur Skip Orthogonal List presented in Section 3 ###reference_### is capable of performing the operations cut, add and link in time.\nTherefore we have the following Theorem 4.1 ###reference_theorem1###.\nEach simplex iteration in the conventional network simplex can be completed within expected time."
82
+ },
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Experiments",
87
+ "text": "All the experimental results are obtained on a server equipped\nwith 512GB main memory of frequency 3200 MHz;\nthe data structures are implemented in C++20\nand compiled by G++ 13.1.0 on Ubuntu 22.04.3.\nThe data structures are compiled to shared objects by PyBind11 [16 ###reference_b16###] to be called by Python 3.11.5.\nOur code uses the Network Simplex library from [4 ###reference_b4###] to obtain an initial feasible flow.\nIn our experiment, we use the Network Simplex algorithm [21 ###reference_b21###] and Sinkhorn algorithm [9 ###reference_b9###] from the\nPython Optimal Transport (POT) library [13 ###reference_b13###].\nWe test our algorithm for both the Spatial Position Modification and Point Insertion scenarios as described in Section 4.1 ###reference_###.\nWe take running time to measure their performances.\nDatasets. We study the performance of our algorithm on both synthetic and real datasets.\nFor synthetic datasets,\nwe construct a mixture of two Gaussian distributions in ,\nwhere the points of the same Gaussian distribution share the same label.\nWe also use the popular real-world dataset MNIST [18 ###reference_b18###].\nWe partition the labels into two groups and compute the optimal transport between them.\nSetup. We set the Sinkhorn regularization parameter as and scale the median of the cost matrix to be 1.\nWe vary the the node size up to .\nFor each dataset, we test the static running time of POT on our machine and executed each dynamic operations 100 times to calculate the means of our algorithm.\nFor Spatial Position Modification,\nwe randomly choose some point and add a random Gaussian noise with a variance of 0.5 in each dimension to it.\nFor Point Insertion,\nwe randomly select a point in the dataset that is outside the current OT instance to insert.\nWe perform a query after these updates and compare with the static algorithms implemented in POT.\nResult and Analysis. 
We illustrate our experimental results in Figure 6 ###reference_###.\nIn the dynamic scenarios,\nour algorithm is about 1000 times faster than the static Network Simplex algorithm and 10 times faster than the Sinkhorn algorithm\nwhen the size reaches 40000,\nand shows stable performance in practice.\nAs the number of nodes grows larger, the advantage of our algorithm becomes more significant.\nThis indicates that our algorithm is fast in the case when the number of simplex iterations is not large, as discussed in\nSection 4.1 ###reference_###.\nAlso, the running time of our algorithm is slightly above a linear trend.\nThough each simplex iteration in our method is strictly linear in expectation theoretically,\nthis could be influenced by several practical factors, such as an increase in the number of simplex iterations, or a decrease in cache hit rate as the node size grows larger.\n###figure_9###"
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Conclusion and Future Work",
93
+ "text": "In this paper, we propose a dynamic data structure for the traditional network simplex method.\nWith the help of our data structure,\nthe time complexity of the whole pivoting process\nis in expectation.\nHowever, our algorithm lead to several performance issues in practice.\nFirst, as our algorithm stores the entire 2D Skip Orthogonal List data structure,\nit may take relatively high space complexity.\nSecond, as our algorithm is based on linked data structures,\nthe cache hit rate is not high.\nAn interesting future work for improving our implementation is to develop new algorithms and data structures with similar complexity but being more memory friendly."
94
+ },
95
+ {
96
+ "section_id": "7",
97
+ "parent_section_id": null,
98
+ "section_name": "Space Complexity of Skip Orthogonal List",
99
+ "text": "In an Optimal Transport problem on point set , the Skip Orthogonal List in our algorithm has a size of rows and columns. To show that the space complexity of our Skip Orthogonal List is quadratic with high probability, we only need to prove theorem G.1 ###reference_theorem1###.\nThe space complexity of a 2D Skip Orthogonal List with parameter is with high probability.\nDenote the number of nodes in the entire Skip Orthogonal List as .\nAssume there are levels in the Skip Orthogonal List. Let be the number of nodes on the -th level where , and when .\nIt is easy to see that\n;\nfor all , i.e., is monotonically decreasing;\nThere are nodes in the entire Skip Orthogonal List.\nIf , we call the -th level a big level; otherwise we call the -th level a small level.\nBy the monotonicity of , we can see that, there exists a threshold , such that if and only if -th level is a big level, i.e., if and only if .\nDenote , i.e., the number of nodes in big levels; denote , i.e., the number of nodes in small levels.\nTherefore . 
We now give their bounds respectively in Lemma 1 ###reference_ma1### and Lemma 2 ###reference_ma2###.\nFirst we prove that\nLet be the indicator variable such that if and only if the -th row and column of the -th level exists in the -th level.\nBy the construction of the Skip Orthogonal List, we can see that are independent Bernoulli variables with parameter .\nAlso, by definition, \nTherefore\nBy linearity of expectation\nBy Hoeffding\u2019s inequality\nTherefore,\nWhen for all ,\nTherefore, when ,\nthere must exist some in range such that , since are positive integers.\nBy the union bound, this happens with probability less than or equal to .\nIn order to bound , we separate \ninto parts:\nthe -th part counts the levels whose sizes are .\nDenote the number of levels in the -th part as .\nThus\nIf , then there are some consecutive levels in the Skip Orthogonal List that indeed have size .\nSince for all , (all Bernoulli variables indicating whether the row/column remains in the upper level turn true), and the events are independent, we can see that .\nTherefore, by the union bound, with probability at least we have for all and thus\nwhich is equivalent to the statement of Lemma 2 ###reference_ma2###.\nBy Lemmas 1 ###reference_ma1### and 2 ###reference_ma2### and the union bound,\nas is a constant, we can see that there exists some constant such that with probability at least , which finishes the proof of Theorem G.1 ###reference_theorem1###."
100
+ },
101
+ {
102
+ "section_id": "8",
103
+ "parent_section_id": null,
104
+ "section_name": "Operations on Skip Orthogonal List",
105
+ "text": "This section provides detailed algorithm for operations on our Skip Orthogonal List.\nWe use the notation in Figure 4(a) ###reference_sf1### and denote as the ancestor of node ."
106
+ },
107
+ {
108
+ "section_id": "8.1",
109
+ "parent_section_id": "8",
110
+ "section_name": "Cut",
111
+ "text": "Algorithm 2 ###reference_### describes the cut update to update the structure while updating some aggregate information\n(i.e. tag and min attributes).\nHere are some clarifications:\nThe time complexity of finding these rows and columns depends on the lookup table it depends on. It can never exceed .\nAs Figure 4(a) ###reference_sf1### demonstrates,\non the bottom level,\nprocedure CutLine removes 2 rows related to the edges to cut and links nodes alongside (changes the forward and backward pointers of nodes along side),\nwith a time complexity of ,\nwhile in the upper levels,\nprocedure CutLine removes all transparent nodes in Figure 4(b) ###reference_sf2### and changes the forward pointers and related backward pointers of the nodes alongside, i.e. red nodes in Figure 4(b) ###reference_sf2###.\nNote that when linking adjacent nodes, we need to update the parent attributes of nodes and accordingly, and nodes below it accordingly.\nHowever, every node is affected at most once during the entire CutLine procedure, and the number of nodes affected are sure to be affected by PullUpLines process.\nProcedure PushDownLines is capable of pushing down the tag attribute of all nodes alongside the rows and columns of the four nodes,\ni.e. 
the red nodes in 4(b) ###reference_sf2###.\nSimilarly to PushDownLines, procedure PullUpLines updates the min attribute of the red nodes in 4(b) ###reference_sf2###.\nIn procedure PushDown,\n is in the children set of if and only if is the parent of .\nLater we will use this concept to analyze the time complexity.\nProcedure PullUpLines is similar to procedure PushDownLines.\nInstead of calling procedure PushDown, which pushes down the tag attribute from higher levels to lower levels,\nPullUpLines calls procedure PullUp, which updates the min attribute from lower levels to higher levels.\nProcedure PushDownLines pushes down the row and column of the input node and the row and column next to the given node.\nFrom the analyses above,\nwe can now show Theorem H.1 ###reference_theorem1###.\nThe expected time complexity of procedure Cut is .\nFrom Algorithm 2 ###reference_###,\nthe procedures PushDownLines,\nCutLine,\nand PullUpLines\neach apply only a constant number of modifications and queries to the children of nodes alongside the 2 cut rows and columns\n, i.e., the red and transparent nodes and their children in Figure 4(b) ###reference_sf2###.\nWe first calculate the expected number of nodes removed in procedure Cut,\ni.e., the transparent nodes in Figure 4(b) ###reference_sf2###.\nThe removed nodes in each level are in the row , and column where and .\nBy the definition of node height,\nnodes , , and are in less than or equal to orthogonal lists.\nTherefore, the total number of removed points is less than or equal to .\nThus the expected number of removed nodes is less than or equal to .\nNext we calculate the children of nodes whose dominating set is affected after the Cut update takes place,\ni.e. 
red nodes and their children after the Cut update in Figure 4(b) ###reference_sf2###.\nSimilar to counting the transparent nodes,\nthe expected number of red nodes is also less than or equal to .\nNow we count the number of their children.\nRecall that in Figure 4(a) ###reference_sf1### we denote the 2 diagonal pieces of the big skip list as and while we denote the 2 non-diagonal pieces of the big skip list as and .\nWe count the number of red nodes and their children in the 4 pieces after procedure Cut respectively.\nDenote as the number of nodes in whose parent\u2019s children set is affected,\ni.e., whose parent is a red node as Figure 4(b) ###reference_sf2### demonstrates,\nwhen .\nThen is that of , and we know that:\nWith probability ,\nthe heights of all rows and columns are ,\nwhich indicates that no node in the current skip orthogonal list has a parent.\nFurthermore, as shown in Figure 6(a) ###reference_sf1###,\nthere is only one row and one column beside the row just cut\n(the red row and column),\nand therefore .\nFor ,\n and occurs with probability ,\nas shown in Figure 6(b) ###reference_sf2###,\nwhere the yellow nodes are the positions of the cut-affected nodes in the upper layer and the green nodes denote upper-layer nodes whose children set is not affected.\nTherefore,\n nodes of the current level are children of some cut-affected node.\n\n###figure_10### \n###figure_11### For ,\nwith probability ,\nthe upper level contains nodes,\ni.e., it contains nodes visited in procedure Cut\n(denote ).\nBy the linearity of expectation, the following holds:\nWe can prove by induction that .\nTherefore, the expected number of visited nodes in procedure Cut\n(i.e. 
red nodes and their children in Figure 4(b) ###reference_sf2###)\nin the 2 diagonal pieces and does not exceed .\nDenote as the number of nodes visited in the cut process in each of the 2 non-diagonal pieces and \n(their numbers are equal by symmetry)\nwhere and .\nSimilarly, we can show that\nWe are then able to prove by induction that ,\nwhich indicates that the number of cut-affected nodes in and is .\nSince ,\nthe entire Cut process will visit nodes.\nSince each copy of procedure Cut only performs a constant number of operations on these nodes,\nthe expected time complexity of procedure Cut is ."
112
+ },
113
+ {
114
+ "section_id": "8.2",
115
+ "parent_section_id": "8",
116
+ "section_name": "Link",
117
+ "text": "Procedure Link is very similar to procedure Cut.\nIt can be described as Algorithm 3 ###reference_###.\nHere, procedure LinkLine function is very similar to procedure CutLine.\nIt creates a series of new node according to their heights and to link 2 Skip Orthogonal List pieces together in one direction.\nAs procedure link behaves almost the same as procedure Cut,\nmaking constant number of modifications to nodes along the newly linked line,\nthe time complexity for each copy of procedure Link is also ."
118
+ },
119
+ {
120
+ "section_id": "8.3",
121
+ "parent_section_id": "8",
122
+ "section_name": "Insert",
123
+ "text": "For procedure Insert,\nwe only need to add one basic variable to the basis to make it feasible again.\nWithout loss of generality,\nAssume we need to add node to the demand node set.\nProcedure Insert could be described in Algorithm 4 ###reference_###.\nSince and is the newly added basic variable,\ndual variable satisfies Constraint 5 ###reference_###.\nFurther more,\nfor all ,\nwe have .\nTherefore ,\nwhich indicates that simplex multipliers related to the newly added variable is non-negative.\nIf the old basis is primal optimal,\nthe new basis will sure be primal optimal too.\nLine 3 in Algorithm 4 ###reference_### requires adding an entire row and column to the Skip Orthogonal List based on the existing basic variables .\ntogether with a copy of procedure Link to update .\nThe expected time complexity for these operations are all .\nTherefore the expected time complexity of each copy of Algorithm 4 ###reference_### is ."
124
+ },
125
+ {
126
+ "section_id": "8.4",
127
+ "parent_section_id": "8",
128
+ "section_name": "Range Update (Add) and Query",
129
+ "text": "The tag attribute of each node stores the value that should be added to each node dominated by it but haven\u2019t been propagated.\nThe min attribute stores the minimum value and the index of the minimum value of all nodes dominated by it.\nSimilar to the lazy propagation technique:\nFor procedure RangeUpdate to do the range update that adds all elements by val amount in the current Skip Orthogonal List piece, we add the tag attribute and min attribute of all nodes by val on the top layer.\nFor query min to do range query on the minimum value and indices of all elements in the Skip Orthogonal List, we visit every top layer node and find the node with the minimum min attribute.\nTherefore, to bound the running time of procedure RangeUpadte and query GlobalMinimum,\nwe only need to give bound to the nodes on the top level.\nTo achieve this, we show Lemma 4 ###reference_ma4### and 5 ###reference_ma5###.\nBut in order to prove them, we need to prove Lemma 3 ###reference_ma3### first.\nLet be a positive integer and be a real number in range .\nFor any independent and identically distributed random variables following geometric distribution with parameter ,\nthe expected number of maximum elements is less than , and the expected square of the number of the maximum elements is less than ,\ni.e.,\nLet be the number of maximum elements in subsequence of the first elements, i.e.,\nTherefore, Lemma 3 ###reference_ma3### is equivalent to and .\nWe now prove that and for all integer by induction.\nFirst, , because contains only 1 element.\nTherefore, and .\nNext, we prove that and are implied by and .\nWhen calculating from , there are 3 cases:\n. Suppose this happens with probability . In this case, .\n. Suppose this happens with probability . In this case, .\n. Suppose this happens with probability . In this case, .\nNotice that, by law of total probability,\nLet . 
Therefore, is bounded by the interval , and we can express , , .\nFirst we prove .\nBy the law of total expectation, we have\nBecause (inductive hypothesis),\n.\nSince , we get the following\nThis finishes the proof of .\nNext we prove that .\nSince ,\nby the law of total expectation, we have\nBecause ,\nwe have .\nSince , we get the following\nTherefore, we have finished the proof that and , and therefore . Thus Lemma 3 is proved.\nWith Lemma 3 ###reference_ma3###, we can now prove Lemma 4 ###reference_ma4###, which immediately leads to the expected time complexity upper bound of procedure RangeUpdate.\nIf a Skip Orthogonal List of size is broken into 4 pieces through procedure Cut,\nthen the expected size of the top level of the 2 non-diagonal pieces and is .\nSuppose the constant is the parameter of the Skip Orthogonal List.\nSuppose the 2 non-diagonal pieces have sizes and , respectively.\nBy the properties of procedure Cut,\n.\nFor the piece, denote the height variable corresponding to the rows as \nand the height variable corresponding to the columns as .\nTherefore, and are i.i.d. 
following the geometric distribution with parameter .\nThe height of the entire Skip Orthogonal List is .\nIf node , i.e., the node on the -th row and -th column, is on the top level, then , which is implied by , i.e., is the maximum element of or is the maximum element of .\nLet be the indicator variable that node is on the top level.\nLet be the indicator variable that is the maximum element of and be the indicator that is the maximum element of .\nBy linearity of expectation, the number of elements on the top level is ,\nthe number of maximum elements of array is and the number of maximum elements of array is .\nFrom the discussion above, by the union bound, we have\nAs equals the total number of maximum values in , by Lemma 3 ###reference_ma3###,\nwe have .\nSimilarly, .\nAs a result, the total number of variables on the top level is smaller than , which finishes the proof of Lemma 4 ###reference_ma4###.\nFurthermore, with Lemma 3 ###reference_ma3###, we can directly derive Lemma 5 ###reference_ma5###.\nThe expected size of the top level of a Skip Orthogonal List is .\nNow, with Lemmas 4 ###reference_ma4### and 5 ###reference_ma5###, the time complexities of procedure RangeUpdate and the Query follow directly."
130
+ },
131
+ {
132
+ "section_id": "9",
133
+ "parent_section_id": null,
134
+ "section_name": "Supplementary Experiments",
135
+ "text": "Similar to the previous experiments, the experimental results in this section are obtained from the same server with the same dataset pools,\ni.e. Synthetic dataset about Gaussian Distributions on and MNIST dataset [18 ###reference_b18###].\nOur algorithm is compared with Network Simplex Algorithm [21 ###reference_b21###] and Sinkhorn Algorithm [9 ###reference_b9###] by Python Optimal Transport Library [13 ###reference_b13###].\nFor test Weight Modification, we randomly select a pair of nodes in the system, and send flow of which amount is in the range of of their node weight difference, repeat for 100 times;\nfor test Point Deletion & Insertion, we first delete 100 nodes with 0 weight in the current data set to return it to the pool, and insert 100 new nodes in the system.\n###figure_12### Figure 8 ###reference_### demonstrates our experiment result,\nwhere the time ratio of our algorithm and their algorithm are plotted.\nFrom the figure, the asymptotic advantage of our algorithm is obvious.\nwhere is the running time of each dynamic operation of our algorithm, while and are the static running time of Network Simplex algorithm and Sinkhorn algorithm respectively. Our algorithm for handling point insertion only takes less than of time compared to static algorithms when reaches .\nAlso the Insert and Delete procedures perform much more stable than space position modification and weight modification procedures,\nregardless of which dataset we use.\nThis matches the theory that procedure Insert and Delete is strictly linear,\nwhile the performance of normal space position modification and weight modification procedures depends on the number of simplex iterations it experiences.\nThough procedure Insert and Delete must be combined with weight modification procedures to obtain the result,\nFigure 8 ###reference_### shows that procedure Insert and Delete are not a great overhead."
136
+ }
137
+ ],
138
+ "appendix": [],
139
+ "tables": {},
140
+ "image_paths": {
141
+ "1(a)": {
142
+ "figure_path": "2310.18446v5_figure_1(a).png",
143
+ "caption": "(a) An example for 1D Euler Tour Tree\nFigure 1: Overview of Euler Tour Tree with Skip Lists",
144
+ "url": "http://arxiv.org/html/2310.18446v5/x1.png"
145
+ },
146
+ "1(b)": {
147
+ "figure_path": "2310.18446v5_figure_1(b).png",
148
+ "caption": "(b) Our 2D Euler Tour Tree\nFigure 1: Overview of Euler Tour Tree with Skip Lists",
149
+ "url": "http://arxiv.org/html/2310.18446v5/x2.png"
150
+ },
151
+ "2(a)": {
152
+ "figure_path": "2310.18446v5_figure_2(a).png",
153
+ "caption": "Figure 2: An Orthogonal List Example.",
154
+ "url": "http://arxiv.org/html/2310.18446v5/x3.png"
155
+ },
156
+ "2(b)": {
157
+ "figure_path": "2310.18446v5_figure_2(b).png",
158
+ "caption": "Figure 2: An Orthogonal List Example.",
159
+ "url": "http://arxiv.org/html/2310.18446v5/x4.png"
160
+ },
161
+ "3(a)": {
162
+ "figure_path": "2310.18446v5_figure_3(a).png",
163
+ "caption": "(a) Domination in conventional 1D Skip List\nFigure 4: Illustrations of Domination in Skip Lists",
164
+ "url": "http://arxiv.org/html/2310.18446v5/x5.png"
165
+ },
166
+ "3(b)": {
167
+ "figure_path": "2310.18446v5_figure_3(b).png",
168
+ "caption": "(b) Domination in 2D Skip List\nFigure 4: Illustrations of Domination in Skip Lists",
169
+ "url": "http://arxiv.org/html/2310.18446v5/x6.png"
170
+ },
171
+ "4(a)": {
172
+ "figure_path": "2310.18446v5_figure_4(a).png",
173
+ "caption": "(a) Vertical View (some notations in the figure are defined in appendices)\nFigure 5: Illustrations of a \u201ccut\u201d operation",
174
+ "url": "http://arxiv.org/html/2310.18446v5/x7.png"
175
+ },
176
+ "4(b)": {
177
+ "figure_path": "2310.18446v5_figure_4(b).png",
178
+ "caption": "(b) 3D View\nFigure 5: Illustrations of a \u201ccut\u201d operation",
179
+ "url": "http://arxiv.org/html/2310.18446v5/x8.png"
180
+ },
181
+ "5": {
182
+ "figure_path": "2310.18446v5_figure_5.png",
183
+ "caption": "Figure 6: The ratio of the execution time of our dynamic algorithm to that of the static algorithms.",
184
+ "url": "http://arxiv.org/html/2310.18446v5/x9.png"
185
+ },
186
+ "6(a)": {
187
+ "figure_path": "2310.18446v5_figure_6(a).png",
188
+ "caption": "(a) Current Layer is at Top\nFigure 7: Illustration of Modified Nodes at Some Layer",
189
+ "url": "http://arxiv.org/html/2310.18446v5/x10.png"
190
+ },
191
+ "6(b)": {
192
+ "figure_path": "2310.18446v5_figure_6(b).png",
193
+ "caption": "(b) Current Layer is Not at Top\nFigure 7: Illustration of Modified Nodes at Some Layer",
194
+ "url": "http://arxiv.org/html/2310.18446v5/x11.png"
195
+ },
196
+ "7": {
197
+ "figure_path": "2310.18446v5_figure_7.png",
198
+ "caption": "Figure 8: Supplementary Experiments",
199
+ "url": "http://arxiv.org/html/2310.18446v5/x12.png"
200
+ }
201
+ },
202
+ "validation": true,
203
+ "references": [
204
+ {
205
+ "1": {
206
+ "title": "Screening sinkhorn algorithm for regularized optimal transport.",
207
+ "author": "Mokhtar Z Alaya, Maxime Berar, Gilles Gasso, and Alain Rakotomamonjy.",
208
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "2": {
214
+ "title": "Geometric dataset distances via optimal transport.",
215
+ "author": "David Alvarez-Melis and Nicolo Fusi.",
216
+ "venue": "In NeurIPS 2020. ACM, February 2020.",
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "3": {
222
+ "title": "Multidimensional binary search trees used for associative searching.",
223
+ "author": "Jon Louis Bentley.",
224
+ "venue": "Communications of the ACM, 18(9):509\u2013517, 1975.",
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "4": {
230
+ "title": "Displacement Interpolation Using Lagrangian Mass Transport.",
231
+ "author": "Nicolas Bonneel, Michiel van de Panne, Sylvain Paris, and Wolfgang Heidrich.",
232
+ "venue": "ACM Transactions on Graphics (SIGGRAPH ASIA 2011), 30(6), 2011.",
233
+ "url": null
234
+ }
235
+ },
236
+ {
237
+ "5": {
238
+ "title": "A dictionary of computer science.",
239
+ "author": "Andrew Butterfield, Gerard Ekembe Ngondi, and Anne Kerr.",
240
+ "venue": "Oxford University Press, 2016.",
241
+ "url": null
242
+ }
243
+ },
244
+ {
245
+ "6": {
246
+ "title": "Maximum flow and minimum-cost flow in almost-linear time.",
247
+ "author": "Li Chen, Rasmus Kyng, Yang P Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva.",
248
+ "venue": "In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 612\u2013623. IEEE, 2022.",
249
+ "url": null
250
+ }
251
+ },
252
+ {
253
+ "7": {
254
+ "title": "Dynamical wasserstein barycenters for time-series modeling.",
255
+ "author": "Kevin Cheng, Shuchin Aeron, Michael C Hughes, and Eric L Miller.",
256
+ "venue": "Advances in Neural Information Processing Systems, 34:27991\u201328003, 2021.",
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "8": {
262
+ "title": "A network simplex method.",
263
+ "author": "William H Cunningham.",
264
+ "venue": "Mathematical Programming, 11:105\u2013116, 1976.",
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "9": {
270
+ "title": "Sinkhorn distances: Lightspeed computation of optimal transport.",
271
+ "author": "Marco Cuturi.",
272
+ "venue": "Advances in neural information processing systems, 26, 2013.",
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "10": {
278
+ "title": "The generalized simplex method for minimizing a linear form under linear inequality restraints.",
279
+ "author": "George B Dantzig, Alex Orden, Philip Wolfe, et al.",
280
+ "venue": "Pacific Journal of Mathematics, 5(2):183\u2013195, 1955.",
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "11": {
286
+ "title": "Theoretical improvements in algorithmic efficiency for network flow problems.",
287
+ "author": "Jack Edmonds and Richard M Karp.",
288
+ "venue": "Journal of the ACM (JACM), 19(2):248\u2013264, 1972.",
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "12": {
294
+ "title": "The skip quadtree: a simple dynamic data structure for multidimensional data.",
295
+ "author": "David Eppstein, Michael T Goodrich, and Jonathan Z Sun.",
296
+ "venue": "In Proceedings of the twenty-first annual symposium on Computational geometry, pages 296\u2013305, 2005.",
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "13": {
302
+ "title": "Pot: Python optimal transport.",
303
+ "author": "R\u00e9mi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z Alaya, Aur\u00e9lie Boisbunon, Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, et al.",
304
+ "venue": "The Journal of Machine Learning Research, 22(1):3571\u20133578, 2021.",
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "14": {
310
+ "title": "Fast optimal transport averaging of neuroimaging data.",
311
+ "author": "Alexandre Gramfort, Gabriel Peyr\u00e9, and Marco Cuturi.",
312
+ "venue": "In Information Processing in Medical Imaging: 24th International Conference, IPMI 2015, Sabhal Mor Ostaig, Isle of Skye, UK, June 28-July 3, 2015, Proceedings 24, pages 261\u2013272. Springer, 2015.",
313
+ "url": null
314
+ }
315
+ },
316
+ {
317
+ "15": {
318
+ "title": "Optimal mass transport for registration and warping.",
319
+ "author": "Steven Haker, Lei Zhu, Allen Tannenbaum, and Sigurd Angenent.",
320
+ "venue": "International Journal of computer vision, 60:225\u2013240, 2004.",
321
+ "url": null
322
+ }
323
+ },
324
+ {
325
+ "16": {
326
+ "title": "pybind11 \u2013 seamless operability between c++11 and python.",
327
+ "author": "Wenzel Jakob, Jason Rhinelander, and Dean Moldovan.",
328
+ "venue": "ttps://github.com/pybind/pybind11, 2017.",
329
+ "url": null
330
+ }
331
+ },
332
+ {
333
+ "17": {
334
+ "title": "Group level meg/eeg source imaging via optimal transport: minimum wasserstein estimates.",
335
+ "author": "Hicham Janati, Thomas Bazeille, Bertrand Thirion, Marco Cuturi, and Alexandre Gramfort.",
336
+ "venue": "In Information Processing in Medical Imaging: 26th International Conference, IPMI 2019, Hong Kong, China, June 2\u20137, 2019, Proceedings 26, pages 743\u2013754. Springer, 2019.",
337
+ "url": null
338
+ }
339
+ },
340
+ {
341
+ "18": {
342
+ "title": "Mnist handwritten digit database.",
343
+ "author": "Yann LeCun, Corinna Cortes, and CJ Burges.",
344
+ "venue": "http://yann.lecun.com/exdb/mnist, 2010.",
345
+ "url": null
346
+ }
347
+ },
348
+ {
349
+ "19": {
350
+ "title": "Measuring the misfit between seismograms using an optimal transport distance: Application to full waveform inversion.",
351
+ "author": "Ludovic M\u00e9tivier, Romain Brossier, Quentin M\u00e9rigot, Edouard Oudet, and Jean Virieux.",
352
+ "venue": "Geophysical Supplements to the Monthly Notices of the Royal Astronomical Society, 205(1):345\u2013377, 2016.",
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "20": {
358
+ "title": "Skip list data structures for multidimensional data.",
359
+ "author": "Bradford G. Nickerson.",
360
+ "venue": "Technical report, University of Maryland at College Park, USA, 1994.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "21": {
366
+ "title": "A polynomial time primal network simplex algorithm for minimum cost flows.",
367
+ "author": "James B Orlin.",
368
+ "venue": "Mathematical Programming, 78:109\u2013129, 1997.",
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "22": {
374
+ "title": "Fast and robust earth mover\u2019s distances.",
375
+ "author": "Ofir Pele and Michael Werman.",
376
+ "venue": "In 2009 IEEE 12th international conference on computer vision, pages 460\u2013467. IEEE, 2009.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "23": {
382
+ "title": "Computational optimal transport: With applications to data science.",
383
+ "author": "Gabriel Peyr\u00e9, Marco Cuturi, et al.",
384
+ "venue": "Foundations and Trends\u00ae in Machine Learning, 11(5-6):355\u2013607, 2019.",
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "24": {
390
+ "title": "Skip lists: a probabilistic alternative to balanced trees.",
391
+ "author": "William Pugh.",
392
+ "venue": "Communications of the ACM, 33(6):668\u2013676, 1990.",
393
+ "url": null
394
+ }
395
+ },
396
+ {
397
+ "25": {
398
+ "title": "Generalized preconditioning and undirected minimum-cost flow.",
399
+ "author": "Jonah Sherman.",
400
+ "venue": "In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 772\u2013780. SIAM, 2017.",
401
+ "url": null
402
+ }
403
+ },
404
+ {
405
+ "26": {
406
+ "title": "A data structure for dynamic trees.",
407
+ "author": "Daniel D Sleator and Robert Endre Tarjan.",
408
+ "venue": "In Proceedings of the thirteenth annual ACM symposium on Theory of computing, pages 114\u2013122, 1981.",
409
+ "url": null
410
+ }
411
+ },
412
+ {
413
+ "27": {
414
+ "title": "Self-adjusting binary search trees.",
415
+ "author": "Daniel Dominic Sleator and Robert Endre Tarjan.",
416
+ "venue": "Journal of the ACM (JACM), 32(3):652\u2013686, 1985.",
417
+ "url": null
418
+ }
419
+ },
420
+ {
421
+ "28": {
422
+ "title": "Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time.",
423
+ "author": "Daniel A Spielman and Shang-Hua Teng.",
424
+ "venue": "Journal of the ACM (JACM), 51(3):385\u2013463, 2004.",
425
+ "url": null
426
+ }
427
+ },
428
+ {
429
+ "29": {
430
+ "title": "Dynamic trees as search trees via euler tours, applied to the network simplex algorithm.",
431
+ "author": "Robert E Tarjan.",
432
+ "venue": "Mathematical Programming, 78(2):169\u2013177, 1997.",
433
+ "url": null
434
+ }
435
+ },
436
+ {
437
+ "30": {
438
+ "title": "Overrelaxed sinkhorn\u2013knopp algorithm for regularized optimal transport.",
439
+ "author": "Alexis Thibault, L\u00e9na\u00efc Chizat, Charles Dossal, and Nicolas Papadakis.",
440
+ "venue": "Algorithms, 14(5):143, 2021.",
441
+ "url": null
442
+ }
443
+ },
444
+ {
445
+ "31": {
446
+ "title": "A survey on optimal transport for machine learning: Theory and applications.",
447
+ "author": "Luis Caicedo Torres, Luiz Manella Pereira, and M Hadi Amini.",
448
+ "venue": "arXiv preprint arXiv:2106.01963, 2021.",
449
+ "url": null
450
+ }
451
+ },
452
+ {
453
+ "32": {
454
+ "title": "Minimum cost flows, mdps, and 1-regression in nearly linear time for dense instances.",
455
+ "author": "Jan Van Den Brand, Yin Tat Lee, Yang P Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang.",
456
+ "venue": "In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 859\u2013869, 2021.",
457
+ "url": null
458
+ }
459
+ }
460
+ ],
461
+ "url": "http://arxiv.org/html/2310.18446v5"
462
+ }
20240127/2311.00604v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2311.02340v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2311.04892v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2312.10623v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.09720v2.json ADDED
@@ -0,0 +1,587 @@
1
+ {
2
+ "title": "GaussianBody: Clothed Human Reconstruction via 3d Gaussian Splatting",
3
+ "abstract": "In this work, we propose a novel clothed human reconstruction method called GaussianBody, based on 3D Gaussian Splatting.\nCompared with the costly neural radiance-based models, 3D Gaussian Splatting has recently demonstrated great performance in terms of training time and rendering quality.\nHowever, applying the static 3D Gaussian Splatting model to the dynamic human reconstruction problem is non-trivial due to complicated non-rigid deformations and rich cloth details.\nTo address these challenges, our method considers explicit pose-guided deformation to associate dynamic Gaussians across the canonical space and the observation space, introducing a physically-based prior with regularized transformations helps mitigate ambiguity between the two spaces.\nDuring the training process, we further propose a pose refinement strategy to update the pose regression for compensating the inaccurate initial estimation and a split-with-scale mechanism to enhance the density of regressed point clouds.\nThe experiments validate that our method can achieve state-of-the-art photorealistic novel-view rendering results with high-quality details for dynamic clothed human bodies, along with explicit geometry reconstruction.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Creating high-fidelity clothed human models holds significant applications in virtual reality, telepresence, and movie production. Traditional methods involve either complex capture systems or tedious manual work from 3D artists, making them time-consuming and expensive, thus limiting scalability for novice users. Recently, there has been a growing focus on automatically reconstructing clothed human models from single RGB images or monocular videos.\nMesh-based methods [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] are initially introduced to recover human body shapes by regressing on parametric models such as SCAPE [5 ###reference_b5###], SMPL [6 ###reference_b6###], SMPL-X [7 ###reference_b7###], and STAR [8 ###reference_b8###]. While they can achieve fast and robust reconstruction, the regressed polygon meshes struggle to capture variant geometric details and rich clothing features. The addition of vertex offsets becomes an enhancement solution [9 ###reference_b9###, 10 ###reference_b10###] in this context. However, its representation ability is still strictly constrained by mesh resolutions and generally fails in loose-cloth cases.\nTo overcome the limitations of explicit mesh models, implicit methods based on occupancy fields [11 ###reference_b11###, 12 ###reference_b12###], signed distance fields (SDF) [13 ###reference_b13###], and neural radiance fields (NeRFs) [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###] have been developed to learn the clothed human body using volume rendering techniques. These methods are capable of enhancing the reconstruction fidelity and rendering quality of 3D clothed humans, advancing the realistic modeling of geometry and appearance. 
Despite performance improvements, implicit models still face challenges due to the complex volume rendering process, resulting in long training times and hindering interactive rendering for real-time applications. Most importantly, native implicit approaches lack an efficient deformation scheme to handle complicated body movements in dynamic sequences [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###].\nTherefore, combining explicit geometry primitives with implicit models has become a trending idea in recent works. For instance, point-based NeRFs [22 ###reference_b22###, 23 ###reference_b23###] propose controlling volume-based representations with point cloud proxy. Unfortunately, estimating an accurate point cloud from multi-view images is practically challenging as well due to the intrinsic difficulties of the multi-view stereo (MVS) problem.\nIn this work, we address the mentioned issues by incorporating 3D Gaussian Splatting (3D-GS) [24 ###reference_b24###] into the dynamic clothed human reconstruction framework. 3D-GS establishes a differential rendering pipeline to facilitate scene modeling, notably reducing a significant amount of training time. It learns the explicit point-based model while rendering high-quality results with spherical harmonics (SH) representation. The application of 3D-GS to present 4D scenes has demonstrated superior results [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###], motivating our endeavor to integrate 3D-GS into human body reconstruction. However, learning dynamic clothed body reconstruction is more challenging than other use cases, primarily due to non-rigid deformations of body parts and the need to capture accurate details of the human body and clothing, especially for loose outfits like skirts.\nFirstly, we extended the 3D-GS representation to clothed human reconstruction by utilizing an articulated human model for guidance. 
Specifically, we use forward linear blend skinning (LBS) to deform the Gaussians from the canonical space to each observation space per frame. Secondly, we optimize a physically-based prior for the Gaussians in the observation space to mitigate the risk of overfitting Gaussian parameters. We transform the local rigidity loss [28 ###reference_b28###] to regularize over-rotation across the canonical and observation space. Finally, we propose a split-with-scale strategy to enhance point cloud density and a pose refinement approach to address the texture blurring issue.\nWe evaluate our proposed framework on monocular videos of dynamic clothed humans. By comparing it with baseline approaches and other works, our method achieves superior reconstruction quality in rendering details and geometry recovery, while requiring much less training time (approximately one hour) and almost real-time rendering speed. We also conduct ablation studies to validate the effectiveness of each component in our method.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "In this section, we briefly review the related literature with 3D clothed human reconstruction."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "3D Human Reconstruction",
21
+ "text": "Reconstructing 3D humans from images or videos is a challenging task. Recent works [29 ###reference_b29###, 10 ###reference_b10###, 9 ###reference_b9###] use template mesh models like SMPL [6 ###reference_b6###] to reconstruct 3D humans from monocular videos or single images. However, template mesh models have limitations in capturing intricate clothing details. To address these limitations, neural representations have been introduced [11 ###reference_b11###, 12 ###reference_b12###, 30 ###reference_b30###] for 3D human reconstruction. Implicit representations, like those used in PIFU [11 ###reference_b11###] and its variants, achieve impressive results in handling complex details such as hairstyle and clothing. Some methods, like ICON [13 ###reference_b13###] and ECON [31 ###reference_b31###], leverage SMPL as a prior to handle extreme poses. However, most of these methods are designed for static scenes and struggle with dynamic scenarios. Other methods [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###] use parametric models to handle dynamic scenes and obtain animatable 3D human models.\nRecent advancements involve using neural networks for representing dynamic human models. Extensions of NeRF [14 ###reference_b14###] into dynamic scenes [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###] and methods for animatable 3D human models in multi-view scenarios [21 ###reference_b21###, 38 ###reference_b38###, 39 ###reference_b39###, 19 ###reference_b19###, 20 ###reference_b20###, 18 ###reference_b18###] or monocular videos [40 ###reference_b40###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###] have shown promising results. Signal Distance Function (SDF) is also employed [41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###] to establish a differentiable rendering framework or use NeRF-based volume rendering to estimate the surface. 
Our method enhances both speed and robustness by incorporating 3D-GS [24 ###reference_b24###]."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Accelerating Neural Rendering",
27
+ "text": "Several methods [44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###] focus on accelerating rendering speed, primarily using explicit representations or baking methods. However, these approaches are tailored for static scenes. Some works [48 ###reference_b48###, 49 ###reference_b49###] aim to accelerate rendering in dynamic scenes, but they often require dense input images or additional geometry priors. InstantAvatar [17 ###reference_b17###], based on instant-NGP [50 ###reference_b50###], combines grid skip rendering and a quick deformation method [51 ###reference_b51###] but relies on accurate pose guidance for articulate weighting training. In contrast, 3D-GS [24 ###reference_b24###] offers fast convergence and easy integration into graphics rendering pipelines, providing a point cloud for explicit deformation. Our method extends 3D-GS for human reconstruction, achieving high-quality results and fast rendering."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "GaussianBody",
33
+ "text": "In this section, we first introduce the preliminary method 3D-GS [24 ###reference_b24###] in Section 3.1 ###reference_###. Next, we describe our framework pipeline for 3D-GS-based clothed body reconstruction (Section 3.2 ###reference_###). We then discuss the application of a physically-based prior to regularize the 3D Gaussians across the canonical and observation spaces (Section 3.3 ###reference_###). Finally, we introduce two strategies, split-with-scale and pose refinement, to enhance point cloud density and optimize the SMPL parameters, respectively (Section 3.4 ###reference_###)."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Preliminary",
39
+ "text": "3D-GS [24 ###reference_b24###] is an explicit 3D scene reconstruction method designed for multi-view images. The static model comprises a list of Gaussians with a point cloud at its center. Gaussians are defined by a covariance matrix and a center point , representing the mean value of the Gaussian:\nFor differentiable optimization, the covariance matrix can be decomposed into a scaling matrix and a rotation matrix :\nThe gradient flow computation during training is detailed in [24 ###reference_b24###]. To render the scene, the regressed Gaussians can be projected into camera space with the covariance matrix :\nHere, is the Jacobian of the affine approximation of the projective transformation, and is the world-to-camera matrix. To simplify the expression, the matrices and are preserved as rotation parameter and scaling parameter . After projecting the 3D Gaussians to 2D, the alpha-blending rendering based on point clouds bears a resemblance to the volumetric rendering equation of NeRF [14 ###reference_b14###] in terms of its formulation. During volume rendering, each Gaussian is defined by an opacity and spherical harmonics coefficients to represent the color. The volumetric rendering equation for each pixel contributed by Gaussians is given by:\nCollectively, the 3D Gaussians are denoted as ."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Framework",
45
+ "text": "In our framework, we decompose the dynamic clothed human modeling problem into the canonical space and the motion space. First, We define the template 3D Gaussians in the canonical space as . To learn the template 3D Gaussians, we employ pose-guidance deformation fields to transform them into the observation space and render the scene using differentiable rendering. The gradients of the pose transformation are recorded for each time and used for backward optimization in the canonical space.\nSpecifically, we utilize the parametric body model SMPL [6 ###reference_b6###] as pose guidance. The articulated SMPL model is defined with pose parameters and shape parameters . The transformation of each point is calculated with the skinning weight field and the target bone transformation . To mitigate computational costs, we adopt the approach from InstantAvatar [17 ###reference_b17###], which diffuses the skinning weight of the SMPL [6 ###reference_b6###] model vertex into a voxel grid. The weight of each point is then obtained through trilinear interpolation from the grid weighting, denoted as . The transformation of the canonical points to deform space via forward linear blend skinning is expressed as:\nWith the requirements of the 3D-GS[24 ###reference_b24###] initial setting, we initialize the point cloud with the template SMPL[6 ###reference_b6###] model vertex in the canonical pose(as shown in Figure.2 ###reference_###).\nFor each frame, we deform the position and rotation of the canonical Gaussians with the pose parameter of current frame and the global shape parameter to the observation space :\nwhere is the deformation function defined in Eq.5 ###reference_###.\nIn this way, we obtain the deformed Gaussians in the observation space. After differentiable rendering and image loss calculation, the gradients will be passed through the inverse of the deformation field and optimized for the canonical Gaussians.\n###figure_2###"
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Physically-Based Prior",
51
+ "text": "Although we define the canonical Gaussians and explicitly deform them to the observation space for differentiable rendering, the optimization is still an ill-posed problem because there could be multiple canonical positions mapped to the same observation position, leading to overfitting in the observation space and visual artifacts in the canonical space.\nIn the experiment, we also observed that this optimization approach might easily result in the novel view synthesis showcasing numerous Gaussians in incorrect rotations, consequently generating unexpected glitches.\nThus we follow [28 ###reference_b28###] to regularize the movement of 3D Gaussians by their local information. Particularly we employ three regularization losses to maintain the local geometry property of the deformed 3D Gaussians, including local-rigidity loss , local-rotation loss losses and a local-isometry loss .\nDifferent from [28 ###reference_b28###] that attempts to track the Gaussians frame by frame, we regularize the Gaussian transformation from the canonical space to the observation space.\nGiven the set of Gaussians with the k-nearest-neighbors of in canonical space (k=20), the isotropic weighting factor between the nearby Gaussians is calculated as:\nwhere is the distance between the Gasussians and Gasussians in canonical space, set that gives a standard deviation.\nSuch weight ensures that rigidity loss is enforced locally and still allows global non-rigid reconstruction.\nThe local rigidity loss is defined as:\nwhen the Gaussians transform from canonical space to observation space, the nearby Gaussians should move in a similar way that follows the rigid-body transform of the coordinate system of the Gaussians between two spaces.\nThe visual explanation is shown in Figure.6 ###reference_###.\nWhile the rigid loss ensures that Gaussians and Gaussians share the same rotation, the rotation loss could enhance convergence to explicitly enforce identical rotations among neighboring 
Gaussians in both spaces:\nwhere is the normalized Quaternion representation of each Gaussian\u2019s rotation, the demonstrates the rotation of the Gaussians with the deformation.\nWe use the same Gaussian pair sets and weighting function as before.\nFinally, we use a weaker constraint than to make two Gaussians in different space to be the same one, which only enforces the distances between their neighbors:\nafter adding the above objectives, our objective is :"
52
+ },
53
+ {
54
+ "section_id": "3.4",
55
+ "parent_section_id": "3",
56
+ "section_name": "Refinement Strategy",
57
+ "text": "Split-with-scale.\nBecause the monocular video input for 3D-GS [24 ###reference_b24###] lacks multi-view supervision, a portion of the reconstructed point cloud (3D Gaussians) may become excessively sparse, leading to oversized Gaussians or blurring artifacts. To address this, we propose a strategy to split large Gaussians using a scale threshold . If a Gaussian has a size larger than , we decompose it into two identical Gaussians, each with half the size.\nPose refinement.\nDespite the robust performance of 3D-GS [24 ###reference_b24###] in the presence of inaccurate SMPL parameters, there is a risk of generating a high-fidelity point cloud with inaccuracies. The inaccurate SMPL parameters may impact the model\u2019s alignment with the images, leading to blurred textures. To address this issue, we propose an optimization approach for the SMPL parameters. Specifically, we designate the SMPL pose parameters as the optimized parameter and refine them through the optimization process, guided by the defined losses.\n3D-GS[24 ###reference_b24###]\nNeuralBody[15 ###reference_b15###]\nAnim-NeRF[16 ###reference_b16###]\nInstantAvatar[17 ###reference_b17###]\nOurs\n###figure_3### ###figure_4###"
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experiment",
63
+ "text": "In this section, we evaluate our method on monocular training videos and compare it with the other baselines and state-of-the-art works. We also conduct ablation studies to verify the effectiveness of each component in our method."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Datasets and Baseline",
69
+ "text": "PeopleSnapshot. PeopleSnapshot [10 ###reference_b10###] dataset contains eight sequences of dynamic humans wearing different outfits.\nThe actors rotate in front of a fixed camera, maintaining an A-pose during the recording.\nWe train the model with the frames of the human rotating in the first two circles and test with the resting frames.\nThe dataset also provides inaccurate shape and pose parameters.\nSo we first process both the train dataset and test dataset to get the accurate pose parameters.\nNote that the coefficients of the Gaussians remain fixed.\nWe evaluate the novel view synthesis quality with frame size in with the quantitative metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS), and train our model and other baselines in for visual comparison.\nThe PeopleSnapshot dataset doesn\u2019t have the corresponding ground truth point cloud, we only provide qualitative results.\niPER. iPER [52 ###reference_b52###] dataset consists of two sets of monocular RGB videos depicting humans rotating in an A-pose or engaging in random motions before a static camera.\nIn the experiment, we leverage the A-pose series, adopting a similar setting to PeopleSnapshot.\nSubsequently, we visualize novel views and compare them with baselines to demonstrate the robustness of our method.\nBaselines. 
We compare our method with original 3D-GS [24 ###reference_b24###], Neural body[15 ###reference_b15###], Anim-NeRF [16 ###reference_b16###] and InstantAvatar [17 ###reference_b17###].\nNeural body[15 ###reference_b15###] adopts the SMPL vertexes as the set of latent code to record the local feature and reconstruct humans in NeRF.\nAnim-nerf [16 ###reference_b16###] uses the explicit pose-guidance deformation that deforms the query point in observation space to canonical space with inverse linear blend skinning.\nInstantAvatar [17 ###reference_b17###] builds the hash grid to restore the feature of NeRF and control the query points with Fast-SNARF [51 ###reference_b51###], which uses the root-finding way to find the corresponding points and transform it to the observation space to optimize the articulate weighting."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Implementation Details",
75
+ "text": "GaussianBody is implemented in PyTorch and optimized with the Adam [53 ###reference_b53###].\nWe optimize the full model in 30k steps following the learning rate setting of official implementation, while the learning rate of position is initial in and the learning rate of pose parameters is .\nWe set the hyper-parameters as , , .\nFor training the model, it takes about 1 hour on a single RTX 4090."
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Results",
81
+ "text": "Novel view synthesis. \nIn Table 1 ###reference_###, our method consistently outperforms other approaches in various metrics, highlighting its superior performance in capturing detailed reconstructions. This indicates that our method excels in reconstructing intricate cloth textures and human body details.\nFigure 5 ###reference_### visually compares the results of our method with others. 3D-GS [24 ###reference_b24###] struggles with dynamic scenes due to violations of multi-view consistency, resulting in partial and blurred reconstructions. Our method surpasses InstantAvatar in cloth texture details, such as sweater knit patterns and facial features. Even with inaccurate pose parameters on iPER [52 ###reference_b52###], our method demonstrates robust results. InstantAvatar\u2019s results, on the other hand, are less satisfactory, with inaccuracies in pose parameters leading to deformation artifacts.\nFigure 4 ###reference_### showcases realistic rendering results from different views, featuring individuals with diverse clothing and hairstyles. These results underscore the applicability and robustness of our method in real-world scenarios.\nIn Figure 4 ###reference_###, the qualitative results of our method are visually compelling. The generated point clouds exhibit sufficient details to accurately represent the human body and clothing. Examples include the organic wrinkles in the shirt, intricate facial details, and well-defined palms with distinctly separated fingers. This level of detail in the point cloud provides a solid foundation for handling non-rigid deformations of the human body more accurately."
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "Ablation Study",
87
+ "text": "To evaluate the impact of the physically-based prior, we conducted experiments by training models with and without the inclusion of both part-specific and holistic physically-based priors. Additionally, we visualized the model in the canonical space with the specified configurations.\nFigure 6 ###reference_### illustrates the results. In the absence of the physically-based prior, the model tends to produce numerous glitches, especially leading to blurred facial features. Specifically, the exclusion of the rigid loss contributes to facial blurring. On the other hand, without the rotational loss, the model generates fewer glitches, although artifacts may still be present. The absence of the isometric loss introduces artifacts stemming from unexpected transformations.\nOnly when incorporating all components of the physically-based prior, the appearance details are faithfully reconstructed without significant artifacts or blurring.\nWe utilize SMPL parameters for explicit deformation, but inaccurate SMPL estimation can lead to incorrect Gaussian parameters, resulting in blurred textures and artifacts. Therefore, we introduce pose refinement, aiming to generate more accurate pose parameters, as depicted in Figure 7 ###reference_###. This refinement helps mitigate issues related to blurred textures caused by misalignment in the observation space and avoids the need for the deformation MLP to fine-tune in each frame, as illustrated in Figure 9 ###reference_###.\nGiven the divergence in our input compared to the original Gaussian input, especially the absence of part perspectives, the optimization process tends to yield a relatively sparse point cloud. This sparsity affects the representation of certain details during pose changes. As shown in Figure 8 ###reference_###, we address this issue by enhancing point cloud density through a scaling-based splitting approach.\n###figure_5### ###figure_6### ###figure_7### ###figure_8###"
88
+ },
89
+ {
90
+ "section_id": "4.5",
91
+ "parent_section_id": "4",
92
+ "section_name": "Discussion on Gaussian Deformation",
93
+ "text": "We observe that the parameters after the deformation MLP network might become random, potentially misleading the optimization in the canonical space with differentiable rendering. To capture non-rigid deformation after the pose-guidance deformation, we adopt an approach inspired by SCARF [54 ###reference_b54###] and Deformable3dgs [25 ###reference_b25###] by introducing a deformation MLP network into the 3D-GS [24 ###reference_b24###] pipeline. We concatenate the Gaussians\u2019 point positions after the forward transform and their corresponding vertices into a fully connected neural network, aiming to obtain more accurate Gaussian parameters that capture the non-rigid deformation of the cloth in the observation space.\nHowever, we encounter challenges where the canonical Gaussians lose generalization for novel views and pose synthesis, as illustrated in Figure 9 ###reference_###. The deformation MLP tends to overfit the observation space and even influences the result of the rigid transformation. This issue needs further investigation and optimization to achieve a more balanced representation.\nThe explicit representation offers several advantages, including accelerated training, simplified interfacing, and effective deformation handling. However, challenges arise from the sparse Gaussians and imprecise deformation, affecting the representation of novel poses. The sparsity of Gaussians, combined with the absence of accurate deformation, makes it challenging to represent a continuous surface. Despite attempts to mitigate these issues by reducing the size of Gaussians and applying regularization through the physically-based prior, unexpected glitches persist in novel poses, as illustrated in Figure 9 ###reference_###. Overcoming these challenges to reconstruct a reasonable non-rigid transformation of the cloth surface remains an area for improvement."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "In this paper, we present GaussianBody, a novel method for reconstructing dynamic clothed human models from monocular videos using the 3D Gaussian Splatting representation. By incorporating explicit pose-guided deformation, we extend the 3D-GS representation to clothed human reconstruction. To mitigate over-rotation issues between the observation space and the canonical space, we employ a physically-based prior to regularize the canonical-space Gaussians. Additionally, we incorporate pose refinement and a split-with-scale strategy to enhance both the quality and robustness of the reconstruction. Our method achieves image-quality metrics comparable to the baseline and other methods, demonstrating competitive performance, relatively fast training, and the capability to train on higher-resolution images."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.2\" style=\"width:496.9pt;height:104.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-51.8pt,10.9pt) scale(0.827440531184666,0.827440531184666) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.2.1.1.1.1\" style=\"width:74.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S3.T1.2.1.1.1.2\">male-3-casual</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S3.T1.2.1.1.1.3\">male-4-casual</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S3.T1.2.1.1.1.4\">female-3-casual</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S3.T1.2.1.1.1.5\">female-4-casual</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T1.2.1.2.2.1\" style=\"width:74.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.2.1\">PSNR\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.3.1\">SSIM\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.4.1\">LPIPS\u2193</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.5.1\">PSNR\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_column\" id=\"S3.T1.2.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.6.1\">SSIM\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.7.1\">LPIPS\u2193</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.8.1\">PSNR\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.9.1\">SSIM\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.10.1\">LPIPS\u2193</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.11.1\">PSNR\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.12.1\">SSIM\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T1.2.1.2.2.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.2.2.13.1\">LPIPS\u2193</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.3.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.1.3.1.1\" style=\"width:74.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.3.1.1.1\">3D-GS<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09720v2#bib.bib24\" title=\"\">24 ###reference_b24###</a>]</cite></p>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.2\">26.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.3\">0.9393</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S3.T1.2.1.3.1.4\">0.082</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.5\">24.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.6\">0.9469</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.7\">0.088</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.8\">24.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.9\">0.9297</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.10\">0.093</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.11\">25.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.12\">0.9364</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.1.3.1.13\">0.075</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.4.2\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S3.T1.2.1.4.2.1\" style=\"width:74.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.4.2.1.1\">NeuralBody<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09720v2#bib.bib15\" title=\"\">15 ###reference_b15###</a>]</cite></p>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.2\">24.94</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.3\">0.9428</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.4\">0.033</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.5\">24.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.6\">0.9469</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.7\">0.042</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.8\">23.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.9\">0.9504</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.10\">0.035</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.11\">24.37</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S3.T1.2.1.4.2.12\">0.9451</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.4.2.13\">0.038</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.5.3\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S3.T1.2.1.5.3.1\" style=\"width:74.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.5.3.1.1\">Anim-NeRF<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.09720v2#bib.bib16\" title=\"\">16 ###reference_b16###</a>]</cite></p>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.2\">29.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.3\">0.9703</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.4\">0.017</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.5\">28.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.6\">0.9605</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.5.3.7.1\">0.027</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.8\">28.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.5.3.9.1\">0.9743</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.5.3.10.1\">0.022</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.11\">28.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.12\">0.9678</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.5.3.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.5.3.13.1\">0.017</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.6.4\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row\" id=\"S3.T1.2.1.6.4.1\" style=\"width:74.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.6.4.1.1\">InstantAvatar<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2401.09720v2#bib.bib17\" title=\"\">17 ###reference_b17###</a>]</cite></p>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.2\">29.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.3\">0.9719</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.6.4.4.1\">0.019</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.5\">28.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.6\">0.9647</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.7\">0.038</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.8\">28.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.9\">0.9723</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.10\">0.025</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.11\">29.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.6.4.12.1\">0.9713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.1.6.4.13\">0.020</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.7.5\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T1.2.1.7.5.1\" style=\"width:74.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.7.5.1.1\">Ours</p>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.7.5.2.1\">35.66</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.7.5.3.1\">0.9753</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.4\">0.021</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.7.5.5.1\">32.65</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S3.T1.2.1.7.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.7.5.6.1\">0.9769</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.7\">0.049</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.7.5.8.1\">33.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.9\">0.9701</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.10\">0.037</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.7.5.11.1\">31.43</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.12\">0.9630</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.1.7.5.13\">0.040</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.4.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.2\" style=\"font-size:90%;\">Quantitative comparison of novel view synthesis on PeopleSnapshot dataset.<span class=\"ltx_text ltx_font_medium\" id=\"S3.T1.5.2.1\">\nOur approach exhibits a significant advantage in metric comparisons, showing substantial improvements in PSNR and SSIM metrics due to its superior restoration of image details.</span></span></figcaption>\n</figure>",
106
+ "capture": "Table 1: Quantitative comparison of novel view synthesis on PeopleSnapshot dataset.\nOur approach exhibits a significant advantage in metric comparisons, showing substantial improvements in PSNR and SSIM metrics due to its superior restoration of image details."
107
+ }
108
+ },
109
+ "image_paths": {
110
+ "2": {
111
+ "figure_path": "2401.09720v2_figure_2.png",
112
+ "caption": "Figure 2: Overview of our pipeline.\nWe initialize the point cloud using SMPL vertices, deforming the position and rotation parameters of the Gaussians through SMPL forward linear blend skinning (LBS) to transform them into the observation space. The canonical model is then optimized, taking into account the physically-based priors $\\mathcal{L}_{rigid}$, $\\mathcal{L}_{rot}$, and $\\mathcal{L}_{iso}$. To address image blurriness, we optimize the pose parameters. The output includes both the point cloud and the appearance of the reconstructed human.",
113
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/pipeline1.jpg"
114
+ },
115
+ "3": {
116
+ "figure_path": "2401.09720v2_figure_3.png",
117
+ "caption": "Figure 3: Local-rigidity loss. As Gaussian $i$ rotates between the two spaces, the neighbouring Gaussians $j$ should move to follow the rigid transform in the coordinate system of Gaussian $i$.",
118
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/rigid.png"
119
+ },
120
+ "4": {
121
+ "figure_path": "2401.09720v2_figure_4.png",
122
+ "caption": "Figure 4: Results of novel view synthesis and point cloud on PeopleSnapshot [10] dataset. Our method effectively restores details on the human body, including intricate details in the hair and folds on the clothes. Moreover, the generated point cloud faithfully captures geometric details on the clothing, demonstrating a commendable separation between geometry and texture.",
123
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/novelview.jpg"
124
+ },
125
+ "5": {
126
+ "figure_path": "2401.09720v2_figure_5.png",
127
+ "caption": "Figure 5: Visual comparison of novel view synthesis across methods on PeopleSnapshot[10] (columns 1&2) and iPER[52] (columns 3&4).\n3D-GS[24] relies on multi-view consistency to recover the subject and fails to handle dynamic scenes.\nInstantAvatar[17] trades quality and robustness for speed, and can produce blurry results under inaccurate parameters.\nOur method achieves high-fidelity results, particularly in cloth texture and robustness.",
128
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/visual_compare2.jpg"
129
+ },
130
+ "6": {
131
+ "figure_path": "2401.09720v2_figure_6.png",
132
+ "caption": "Figure 6: Effect of the physically-based prior. This figure shows what degrades when each loss is removed. Each loss regularizes a different aspect of the canonical model: the rigid loss suppresses part of the spurious rotation, the rot loss mainly reduces glitches, and the iso loss reduces unexpected transformations.",
133
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/ablition.jpg"
134
+ },
135
+ "7": {
136
+ "figure_path": "2401.09720v2_figure_7.png",
137
+ "caption": "Figure 7: Effect of pose refinement. The semi-transparent overlay shows the human appearance produced by our method. The hand region demonstrates that our method can adjust the pose.",
138
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/refin1.jpg"
139
+ },
140
+ "8": {
141
+ "figure_path": "2401.09720v2_figure_8.png",
142
+ "caption": "Figure 8: Effect of splitting with the scale. Without our optimization, the point cloud is sparse; with it, the point cloud expresses more detail of the model.",
143
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/pointcloud.jpg"
144
+ },
145
+ "9": {
146
+ "figure_path": "2401.09720v2_figure_9.png",
147
+ "caption": "Figure 9: Failure cases in deformation. This figure shows several results from our deformation experiments.\nDue to conflicts between the deformation MLP and differentiable rendering, the model exhibits inaccuracies in both the canonical model and novel pose synthesis.\nEach Gaussian represents an elliptical region, leading to artifacts in the absence of precise deformation.",
148
+ "url": "http://arxiv.org/html/2401.09720v2/extracted/5372181/picNew/falurecase3.jpg"
149
+ }
150
+ },
151
+ "validation": true,
152
+ "references": [
153
+ {
154
+ "1": {
155
+ "title": "Learning to reconstruct 3d human pose and shape via model-fitting in the loop.",
156
+ "author": "Nikos Kolotouros, Georgios Pavlakos, Michael J Black, and Kostas Daniilidis.",
157
+ "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 2252\u20132261, 2019.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "2": {
163
+ "title": "Vibe: Video inference for human body pose and shape estimation.",
164
+ "author": "Muhammed Kocabas, Nikos Athanasiou, and Michael J Black.",
165
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5253\u20135263, 2020.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "3": {
171
+ "title": "Monocular, one-stage, regression of multiple 3d people.",
172
+ "author": "Yu Sun, Qian Bao, Wu Liu, Yili Fu, Michael J Black, and Tao Mei.",
173
+ "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 11179\u201311188, 2021.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "4": {
179
+ "title": "Collaborative regression of expressive bodies using moderation.",
180
+ "author": "Yao Feng, Vasileios Choutas, Timo Bolkart, Dimitrios Tzionas, and Michael J Black.",
181
+ "venue": "In 2021 International Conference on 3D Vision (3DV), pages 792\u2013804. IEEE, 2021.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "5": {
187
+ "title": "Scape: shape completion and animation of people.",
188
+ "author": "Dragomir Anguelov, Praveen Srinivasan, Daphne Koller, Sebastian Thrun, Jim Rodgers, and James Davis.",
189
+ "venue": "In ACM SIGGRAPH 2005 Papers, pages 408\u2013416. 2005.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "6": {
195
+ "title": "SMPL: A skinned multi-person linear model.",
196
+ "author": "Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black.",
197
+ "venue": "ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1\u2013248:16, October 2015.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "7": {
203
+ "title": "Expressive body capture: 3d hands, face, and body from a single image.",
204
+ "author": "Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black.",
205
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10975\u201310985, 2019.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "8": {
211
+ "title": "Star: Sparse trained articulated human body regressor.",
212
+ "author": "Ahmed AA Osman, Timo Bolkart, and Michael J Black.",
213
+ "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK, August 23\u201328, 2020, Proceedings, Part VI 16, pages 598\u2013613. Springer, 2020.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "9": {
219
+ "title": "Learning to dress 3d people in generative clothing.",
220
+ "author": "Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, and Michael J Black.",
221
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6469\u20136478, 2020.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "10": {
227
+ "title": "Video based reconstruction of 3d people models.",
228
+ "author": "Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, and Gerard Pons-Moll.",
229
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8387\u20138397, 2018.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "11": {
235
+ "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization.",
236
+ "author": "Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li.",
237
+ "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 2304\u20132314, 2019.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "12": {
243
+ "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization.",
244
+ "author": "Shunsuke Saito, Tomas Simon, Jason Saragih, and Hanbyul Joo.",
245
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 84\u201393, 2020.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "13": {
251
+ "title": "Icon: Implicit clothed humans obtained from normals.",
252
+ "author": "Yuliang Xiu, Jinlong Yang, Dimitrios Tzionas, and Michael J Black.",
253
+ "venue": "In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13286\u201313296. IEEE, 2022.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "14": {
259
+ "title": "Nerf: Representing scenes as neural radiance fields for view synthesis.",
260
+ "author": "Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng.",
261
+ "venue": "Communications of the ACM, 65(1):99\u2013106, 2021.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "15": {
267
+ "title": "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans.",
268
+ "author": "Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, and Xiaowei Zhou.",
269
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9054\u20139063, 2021.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "16": {
275
+ "title": "Animatable neural radiance fields from monocular rgb videos.",
276
+ "author": "Jianchuan Chen, Ying Zhang, Di Kang, Xuefei Zhe, Linchao Bao, Xu Jia, and Huchuan Lu.",
277
+ "venue": "arXiv preprint arXiv:2106.13629, 2021.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "17": {
283
+ "title": "Instantavatar: Learning avatars from monocular video in 60 seconds.",
284
+ "author": "Tianjian Jiang, Xu Chen, Jie Song, and Otmar Hilliges.",
285
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16922\u201316932, 2023.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "18": {
291
+ "title": "Humannerf: Free-viewpoint rendering of moving people from monocular video.",
292
+ "author": "Chung-Yi Weng, Brian Curless, Pratul P Srinivasan, Jonathan T Barron, and Ira Kemelmacher-Shlizerman.",
293
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern Recognition, pages 16210\u201316220, 2022.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "19": {
299
+ "title": "Tava: Template-free animatable volumetric actors.",
300
+ "author": "Ruilong Li, Julian Tanke, Minh Vo, Michael Zollh\u00f6fer, J\u00fcrgen Gall, Angjoo Kanazawa, and Christoph Lassner.",
301
+ "venue": "In European Conference on Computer Vision, pages 419\u2013436. Springer, 2022.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "20": {
307
+ "title": "Posevocab: Learning joint-structured pose embeddings for human avatar modeling.",
308
+ "author": "Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, and Yebin Liu.",
309
+ "venue": "In ACM SIGGRAPH Conference Proceedings, 2023.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "21": {
315
+ "title": "Humanrf: High-fidelity neural radiance fields for humans in motion.",
316
+ "author": "Mustafa I\u015f\u0131k, Martin R\u00fcnz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, and Matthias Nie\u00dfner.",
317
+ "venue": "ACM Transactions on Graphics (TOG), 42(4):1\u201312, 2023.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "22": {
323
+ "title": "Point-nerf: Point-based neural radiance fields.",
324
+ "author": "Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, and Ulrich Neumann.",
325
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5438\u20135448, 2022.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "23": {
331
+ "title": "Point-based radiance fields for controllable human motion synthesis, 2023.",
332
+ "author": "Haitao Yu, Deheng Zhang, Peiyuan Xie, and Tianyi Zhang.",
333
+ "venue": null,
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "24": {
339
+ "title": "3d gaussian splatting for real-time radiance field rendering.",
340
+ "author": "Bernhard Kerbl, Georgios Kopanas, Thomas Leimk\u00fchler, and George Drettakis.",
341
+ "venue": "ACM Transactions on Graphics (ToG), 42(4):1\u201314, 2023.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "25": {
347
+ "title": "Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction.",
348
+ "author": "Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin.",
349
+ "venue": "arXiv preprint arXiv:2309.13101, 2023.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "26": {
355
+ "title": "Real-time photorealistic dynamic scene representation and rendering with 4d gaussian splatting, 2023.",
356
+ "author": "Zeyu Yang, Hongye Yang, Zijie Pan, Xiatian Zhu, and Li Zhang.",
357
+ "venue": null,
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "27": {
363
+ "title": "4d gaussian splatting for real-time dynamic scene rendering.",
364
+ "author": "Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Wang Xinggang.",
365
+ "venue": "arXiv preprint arXiv:2310.08528, 2023.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "28": {
371
+ "title": "Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis.",
372
+ "author": "Jonathon Luiten, Georgios Kopanas, Bastian Leibe, and Deva Ramanan.",
373
+ "venue": "In 3DV, 2024.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "29": {
379
+ "title": "Detailed human avatars from monocular video.",
380
+ "author": "Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, and Gerard Pons-Moll.",
381
+ "venue": "In 2018 International Conference on 3D Vision (3DV), pages 98\u2013109. IEEE, 2018.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "30": {
387
+ "title": "High-fidelity 3d human digitization from single 2k resolution images.",
388
+ "author": "Sang-Hun Han, Min-Gyu Park, Ju Hong Yoon, Ju-Mi Kang, Young-Jae Park, and Hae-Gon Jeon.",
389
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12869\u201312879, 2023.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "31": {
395
+ "title": "ECON: Explicit Clothed humans Optimized via Normal integration.",
396
+ "author": "Yuliang Xiu, Jinlong Yang, Xu Cao, Dimitrios Tzionas, and Michael J. Black.",
397
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2023.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "32": {
403
+ "title": "Pamir: Parametric model-conditioned implicit representation for image-based human reconstruction.",
404
+ "author": "Zerong Zheng, Tao Yu, Yebin Liu, and Qionghai Dai.",
405
+ "venue": "IEEE transactions on pattern analysis and machine intelligence, 44(6):3170\u20133184, 2021.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "33": {
411
+ "title": "Arch: Animatable reconstruction of clothed humans.",
412
+ "author": "Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, and Tony Tung.",
413
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3093\u20133102, 2020.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "34": {
419
+ "title": "Arch++: Animation-ready clothed human reconstruction revisited.",
420
+ "author": "Tong He, Yuanlu Xu, Shunsuke Saito, Stefano Soatto, and Tony Tung.",
421
+ "venue": "In Proceedings of the IEEE/CVF international conference on computer vision, pages 11046\u201311056, 2021.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "35": {
427
+ "title": "D-nerf: Neural radiance fields for dynamic scenes.",
428
+ "author": "Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer.",
429
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318\u201310327, 2021.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "36": {
435
+ "title": "Nerfies: Deformable neural radiance fields.",
436
+ "author": "Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla.",
437
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5865\u20135874, 2021.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "37": {
443
+ "title": "Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields.",
444
+ "author": "Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M. Seitz.",
445
+ "venue": "ACM Trans. Graph., 40(6), dec 2021.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "38": {
451
+ "title": "Im4d: High-fidelity and real-time novel view synthesis for dynamic scenes.",
452
+ "author": "Haotong Lin, Sida Peng, Zhen Xu, Tao Xie, Xingyi He, Hujun Bao, and Xiaowei Zhou.",
453
+ "venue": "arXiv preprint arXiv:2310.08585, 2023.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "39": {
459
+ "title": "Animatable neural radiance fields for modeling dynamic human bodies.",
460
+ "author": "Sida Peng, Junting Dong, Qianqian Wang, Shangzhan Zhang, Qing Shuai, Xiaowei Zhou, and Hujun Bao.",
461
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14314\u201314323, 2021.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "40": {
467
+ "title": "Human performance modeling and rendering via neural animated mesh.",
468
+ "author": "Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, et al.",
469
+ "venue": "ACM Transactions on Graphics (TOG), 41(6):1\u201317, 2022.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "41": {
475
+ "title": "High-fidelity clothed avatar reconstruction from a single image.",
476
+ "author": "Tingting Liao, Xiaomei Zhang, Yuliang Xiu, Hongwei Yi, Xudong Liu, Guo-Jun Qi, Yong Zhang, Xuan Wang, Xiangyu Zhu, and Zhen Lei.",
477
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8662\u20138672, 2023.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "42": {
483
+ "title": "Selfrecon: Self reconstruction your digital avatar from monocular video.",
484
+ "author": "Boyi Jiang, Yang Hong, Hujun Bao, and Juyong Zhang.",
485
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5605\u20135615, 2022.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "43": {
491
+ "title": "Vid2avatar: 3d avatar reconstruction from videos in the wild via self-supervised scene decomposition.",
492
+ "author": "Chen Guo, Tianjian Jiang, Xu Chen, Jie Song, and Otmar Hilliges.",
493
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12858\u201312868, 2023.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "44": {
499
+ "title": "Baking neural radiance fields for real-time view synthesis.",
500
+ "author": "Peter Hedman, Pratul P Srinivasan, Ben Mildenhall, Jonathan T Barron, and Paul Debevec.",
501
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5875\u20135884, 2021.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "45": {
507
+ "title": "Plenoctrees for real-time rendering of neural radiance fields.",
508
+ "author": "Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa.",
509
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5752\u20135761, 2021.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "46": {
515
+ "title": "Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps.",
516
+ "author": "Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger.",
517
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14335\u201314345, 2021.",
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "47": {
523
+ "title": "Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures.",
524
+ "author": "Zhiqin Chen, Thomas Funkhouser, Peter Hedman, and Andrea Tagliasacchi.",
525
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16569\u201316578, 2023.",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "48": {
531
+ "title": "Representing volumetric videos as dynamic mlp maps.",
532
+ "author": "Sida Peng, Yunzhi Yan, Qing Shuai, Hujun Bao, and Xiaowei Zhou.",
533
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4252\u20134262, 2023.",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "49": {
539
+ "title": "Fourier plenoctrees for dynamic radiance field rendering in real-time.",
540
+ "author": "Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, Yingliang Zhang, Minye Wu, Jingyi Yu, and Lan Xu.",
541
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13524\u201313534, 2022.",
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "50": {
547
+ "title": "Instant neural graphics primitives with a multiresolution hash encoding.",
548
+ "author": "Thomas M\u00fcller, Alex Evans, Christoph Schied, and Alexander Keller.",
549
+ "venue": "ACM Transactions on Graphics (ToG), 41(4):1\u201315, 2022.",
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "51": {
555
+ "title": "Fast-snarf: A fast deformer for articulated neural fields.",
556
+ "author": "Xu Chen, Tianjian Jiang, Jie Song, Max Rietmann, Andreas Geiger, Michael J Black, and Otmar Hilliges.",
557
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023.",
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "52": {
563
+ "title": "Liquid warping gan: A unified framework for human motion imitation, appearance transfer and novel view synthesis.",
564
+ "author": "Wen Liu, Zhixin Piao, Jie Min, Wenhan Luo, Lin Ma, and Shenghua Gao.",
565
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5904\u20135913, 2019.",
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "53": {
571
+ "title": "Adam: A method for stochastic optimization.",
572
+ "author": "Diederik P. Kingma and Jimmy Ba.",
573
+ "venue": "Ithaca, NYarXiv.org, 2014.",
+ "url": null
+ }
+ },
+ {
+ "54": {
+ "title": "Capturing and animation of body and clothing from monocular video.",
+ "author": "Yao Feng, Jinlong Yang, Marc Pollefeys, Michael J Black, and Timo Bolkart.",
+ "venue": "In SIGGRAPH Asia 2022 Conference Papers, pages 1\u20139, 2022.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2401.09720v2"
+ }
20240127/2401.10124v2.json ADDED
@@ -0,0 +1,634 @@
+ {
+ "title": "Lower Ricci Curvature for Efficient Community Detection",
+ "abstract": "This study introduces the Lower Ricci Curvature (LRC), a novel, scalable, and scale-free discrete curvature designed to enhance community detection in networks. Addressing the computational challenges posed by existing curvature-based methods, LRC offers a streamlined approach with linear computational complexity, making it well-suited for large-scale network analysis. We further develop an LRC-based preprocessing method that effectively augments popular community detection algorithms. Through comprehensive simulations and applications on real-world datasets, including the NCAA football league network, the DBLP collaboration network, the Amazon product co-purchasing network, and the YouTube social network, we demonstrate the efficacy of our method in significantly improving the performance of various community detection algorithms.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "In the modern era, the ubiquity of networks in various domains, from biological pathways (Koutrouli et al., 2020 ###reference_b22###) and social networks (Ji et al., 2022 ###reference_b19###) to technological and cosmic webs (De Regt et al., 2018 ###reference_b7###), has fostered a significant interest in the study of complex systems. These networks, characterized by nodes representing entities and edges denoting interactions, provide a framework for understanding the intricate relationships and dynamics within these systems. Graph theory, applied to these network representations, has emerged as a vital tool for dissecting and interpreting the structural and functional intricacies of these interconnected systems (West et al., 2001 ###reference_b56###).\nCommunity detection is one of the most important aspects in the analysis of complex networks (Dey et al., 2022 ###reference_b8###). In these networks, communities represent subgroups of nodes (such as individuals, biological entities, or devices) that are more densely interconnected among themselves than with the rest of the network. The identification of these communities can yield invaluable insights into the structure and dynamics of the system being studied. For instance, in social networks, communities may represent groups of people with shared interests or connections, revealing patterns in social interactions and relationships (Bakhthemmat and Izadi, 2021 ###reference_b2###). In biological networks, such as those representing metabolic or protein-protein interaction pathways, community detection can help identify functional modules or clusters of interacting molecules, crucial for understanding biological processes and disease mechanisms (Tripathi et al., 2019 ###reference_b53###). 
Similarly, in technological networks, such as the internet or telecommunications networks, communities might consist of densely interconnected nodes or hubs that are critical for network functionality and resilience (Zhang et al., 2022 ###reference_b59###). By discerning these communities, we can gain a deeper understanding of not only the individual elements within the network, but also the overarching principles that govern their interactions and collective behavior.\nCommunity detection methods have evolved significantly to address the diverse and complex structures of modern networks. Hierarchical Clustering (Fortunato, 2010 ###reference_b13###; Hastie et al., 2009 ###reference_b16###), for instance, has been instrumental in identifying nested community structures by iteratively merging or dividing groups based on their similarity. The Girvan-Newman algorithm (Newman, 2004 ###reference_b29###, 2006 ###reference_b30###), notable for its edge-betweenness centrality approach, has contributed substantially to understanding the modularity within networks. Similarly, Label Propagation algorithms (Raghavan et al., 2007 ###reference_b39###), recognized for their simplicity and speed, have been effective in detecting community structures in large networks by allowing nodes to adopt the majority label of their neighbors. The Walktrap algorithm (Pons and Latapy, 2005 ###reference_b37###) has gained recognition for its approach of using random walks to identify communities, based on the idea that short random walks tend to stay within the same community. This method is particularly adept at capturing the local community structure in large networks. The Leiden algorithm (Traag et al., 2019 ###reference_b51###), an improvement over the well-known Louvain method (Blondel et al., 2008 ###reference_b4###), offers enhanced accuracy and resolution in detecting communities. 
It addresses some of the limitations of previous methods by refining the community boundaries and ensuring a more balanced distribution of community sizes. These methods, each with their unique approaches and strengths, have collectively advanced our understanding of network structures, contributing to fields ranging from sociological studies to biological network analysis.\nA recent and significant development in network analysis is the discovery of a correlation between discrete curvature and community detection (Sia et al., 2019 ###reference_b46###), which underscored the potential of using curvature-based methods to enhance our understanding and identification of communities within complex networks. Network curvature, particularly discrete curvature, has emerged as a powerful tool in the realm of graph theory and network analysis. The concept, rooted in geometric analysis, involves adapting the notion of Ricci curvature (Ricci and Levi-Civita, 1900 ###reference_b42###; Do Carmo and Flaherty Francis, 1992 ###reference_b9###), traditionally applied to smooth manifolds, to discrete networks. This adaptation has led to the development of various discrete curvature measures, each offering unique insights into network properties.\nOne of the key forms of discrete curvature is the Ollivier-Ricci curvature (ORC, Ollivier (2007 ###reference_b34###)), which has been instrumental in studying transport efficiency and robustness in networks. It provides a measure of how the network deviates from a geometrically flat structure, offering insights into network connectivity and resilience. Another significant variant is the Forman Ricci curvature (FRC, Forman (2003 ###reference_b12###)), adapted from Riemannian geometry, which has been applied to analyze the shape and topological features of networks, proving useful in understanding the underlying structure of complex systems. The Balanced Forman curvature (BFC, Topping et al. 
(2021 ###reference_b50###)), a refined version of the FRC, has been particularly effective in identifying bottleneck structures and critical connections within networks. This form of curvature is beneficial in applications where understanding the flow or distribution within a network is crucial. These curvature-based approaches have opened new avenues in network analysis, offering a geometric perspective to complement traditional topological and statistical methods.\n###figure_1### Although discrete curvature has found various applications in network analysis, such as understanding internet topology (Ni et al., 2015 ###reference_b33###), differentiating cancer networks (Sandhu et al., 2015 ###reference_b45###), and addressing oversquashing and oversmoothing problems in graph neural networks (Nguyen et al., 2023 ###reference_b32###), its specific use in community detection remains relatively underexplored. A notable exception is Sia et al. (2019 ###reference_b46###), who proposed using ORC for community detection. Figure 1 ###reference_### shows a toy network where two distinct communities are apparent. In this network, the edges connecting different communities tend to have lower ORC values, while those within a community exhibit higher ORC values. The proposed method involves iteratively removing the edge with the smallest ORC and recalculating all edge ORCs until the network becomes disconnected, with each connected component identified as a community. Similarly, Fesser et al. (2023 ###reference_b11###) proposed another iterative algorithm to remove edges with augmented FRC above a threshold, until no edge curvature exceeds that threshold. However, these approaches have several major drawbacks. First, the computational cost of calculating ORC and augmented FRC is high, scaled as for ORC and for augmented FRC, where represents the number of edges and the number of nodes. 
This makes it prohibitively expensive for large scale such as the DBLP co-authorship network (), the Amazon product co-purchasing network (), and the YouTube social network (, Yang and Leskovec (2012 ###reference_b58###)). Second, the iterative nature of the algorithm introduces significant extra time inefficiencies. In the worst-case scenario, up to iterations might be required. Third, the methods, despite their innovative approach, can sometimes be too restrictive and may underperform compared to popular methods such as the Leiden algorithm.\nTo effectively tackle the challenges in community detection within large-scale networks, our study introduces a novel curvature measure, the Lower Ricci Curvature (LRC). LRC is specifically designed for efficient computation, with a linear computational complexity of . This significantly reduces the computational burden compared to traditional curvature measures, making it highly suitable for large networks.\nIn addition to its computational efficiency, we provide some theoretical analysis of LRC, particularly its connection to the Cheeger constant, a well-established concept in graph theory (Mohar, 1989 ###reference_b27###), which helps in understanding how LRC relates to the division of a network into communities.\nBuilding on the theoretical foundation of LRC, we have developed a preprocessing algorithm that utilizes LRC to improve existing community detection methods. This algorithm is designed to be suitable for a wide range of applications, due to its adaptability to different network structures and sizes and its compatibility with various community detection methods. Our approach was rigorously tested through both simulation studies and real-world data analysis. 
We applied it to networks of diverse sizes, including both small-scale networks (NCAA football league network) and large-scale networks with mixed membership (the DBLP coauthorship network, the Amazon product co-purchasing network, and the YouTube social network). The results from these studies consistently demonstrate that our preprocessing step, based on the Lower Ricci Curvature, not only enhances the efficiency but also improves the accuracy of various established community detection methods.\nProofs and additional experimental details are provided in the Appendix."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Background",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Community detection",
+ "text": "Community detection in network analysis is essential for unraveling the intricate structures of networks. Communities are typically defined as subgroups of nodes with denser internal connections compared to their external connections (Radicchi et al., 2004 ###reference_b38###). Understanding these communities is vital to reveal the main structural characteristics of networks and to classify nodes based on their interrelations (Fortunato and Hric, 2016 ###reference_b14###).\nWhile the Introduction briefly mentions various community detection algorithms, this subsection aims to delve deeper into their specific functionalities and contributions. The Girvan-Newman algorithm leverages edge betweenness centrality and hierarchical clustering to identify community structures (Newman and Girvan, 2004 ###reference_b31###). The Leiden algorithm, evolving from the Louvain method, focuses on optimizing modularity, thereby enhancing the resolution and accuracy of detected communities (Blondel et al., 2008 ###reference_b4###; Traag et al., 2019 ###reference_b51###). Other notable methods include Label Propagation, which relies on the diffusion of information (Raghavan et al., 2007 ###reference_b39###), and Walktrap, which uses random walks to discern community structures (Pons and Latapy, 2005 ###reference_b37###).\nAdditionally, algorithms such as the Angel (Rossetti, 2020 ###reference_b43###), ego-based community detection (Ego, Leskovec and Mcauley (2012 ###reference_b23###)), K-clique (Palla et al., 2005 ###reference_b35###), Speaker-Listener Label Propagation Algorithm (SLPA, Xie et al. (2011 ###reference_b57###)) contribute diverse perspectives and techniques to community detection. 
Each of these methods brings unique strengths to the analysis of network structures, addressing different aspects and challenges in identifying community patterns.\nTo evaluate the performance of these algorithms, criteria such as the Adjusted Rand Index (ARI) and the Adjusted Mutual Information (AMI) are commonly used. ARI evaluates the agreement in node pair assignments between clustering results, offering a quantitative assessment of similarity (Rand, 1971 ###reference_b40###). AMI measures the similarity between different community detection results on the same dataset, providing insights into the amount of shared information (Vinh et al., 2009 ###reference_b55###). In our study, we utilize these criteria to gauge the improvements in community detection algorithms\u2019 performance when incorporating our newly proposed LRC-based preprocessing step."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Discrete curvatures",
+ "text": "Curvature, a fundamental concept in mathematics, describes how a curve deviates from a straight line or a surface from being flat (Boothby, 1986 ###reference_b5###). In Riemannian geometry, curvatures such as Ricci curvature, provide insights into the unique geometry properties of different spaces, including volume changes along geodesics (Do Carmo and Flaherty Francis, 1992 ###reference_b9###). Historically, generalizing this concept to discrete objects, such as networks, presented a significant challenge. A pivotal moment in this endeavor came with the work of Forman (Forman, 2003 ###reference_b12###), who innovatively adapted curvature concepts to the discrete realm. This milestone opened the door for further exploration and application of curvatures in discrete spaces, including networks.\nFor presentational simplicity, in this paper, we focus on an unweighted graph , where is a set of nodes and is a set of edges, but the framework can be generalized to a weighted graph in a straightforward manner. Let be an edge connecting node and node , we denote the degree of , i.e., the number of edges of node , by , the number of shared neighbors of , i.e., the number of triangles based on , by . Under this framework, the BFC is defined as follows:\nThe FRC of edge is defined as\nThe computational cost for calculating FRCs for all edges is . Following Forman\u2019s groundbreaking work, there has been a surge of studies exploring and applying what is now known as the Forman Ricci curvature, or FRC, to various network structures. For example, Sreejith et al. (2016 ###reference_b47###) extends FRC from undirected to directed networks, and Sreejith et al. (2016 ###reference_b48###) extends FRC to complex networks.\nHowever, its unbounded and scale-dependent nature, as well as its skewness toward negative values in various networks pose interpretational challenges (Sreejith et al., 2016 ###reference_b48###). 
To address these issues, an improved version known as the balanced Forman curvature (BFC, Topping et al. (2021 ###reference_b50###)) was proposed:\nThe BFC of edge is defined as\nwhere is the number of neighbors of node forming a 4-cycle based at the edge without diagonals inside, is the maximal number of 4-cycles based at edge traversing a common node (see (Topping et al., 2021 ###reference_b50###) for more details).\nBFC, with its bounded range , has been applied to identify the bottleneck structure in graphs, particularly for addressing the over-squashing phenomenon in graph neural networks. However, its computational complexity of , limits its application to large-scale networks, mainly due to computationally intensive terms and , which involves counting the number of squares based at nodes and under certain constraints (Topping et al., 2021 ###reference_b50###).\nSimultaneously, Ollivier made significant contributions by defining the Ricci curvature for networks through optimal transport and differential equations (Ollivier, 2007 ###reference_b34###):\nThe ORC of edge is defined as\nwhere is Wasserstein-1 distance, is a local probability measures at node i, defined as\nand is the length of a shortest path from node to node , also known as the graph distance.\nOllivier-Ricci curvature, or ORC, has sparked a wide array of follow-up research, further enriching the field of network analysis with these novel curvature-based insights. For example, Lin and Yau (2010 ###reference_b25###); Lin et al. (2011 ###reference_b24###); Erbar and Maas (2012 ###reference_b10###); Bauer et al. (2013 ###reference_b3###) have provided deep mathematical insights into the properties and implications of ORC in the context of graph theory and network geometry. Notably, Sia et al. (2019 ###reference_b46###) utilized ORC for community detection, iteratively removing the edge with the smallest ORC. 
However, its computational cost () poses significant challenges, especially for iterative algorithms."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Stochastic Block Model (SBM)",
+ "text": "To illustrate network curvatures in a simplified context, we consider the Stochastic Block Model (SBM) in this manuscript, a basic yet versatile model used in network analysis (Holland et al., 1983 ###reference_b17###). SBM is renowned for its ability to mimic community structures within networks. In this model, nodes are partitioned into distinct communities, and connections between nodes are probabilistically determined based on their community memberships.\nEach node is assigned a community label , indicating its community membership. The block matrix , a symmetric matrix, is a critical component of the SBM, dictating the probability of edge formation between nodes from communities. Specifically, represents the probability of an edge existing between nodes from community and community . In an SBM, the probability of an edge existing between any two nodes and follows a Bernoulli distribution, and is independent of other edges, as reflected in the adjacency matrix :\nWhile a basic SBM might appear too simplistic for complex real-world data, its extensions, such as hierarchical SBM and mixed membership SBM, offer more nuanced representations. These models cater to scenarios involving hierarchical community structures (Peixoto, 2014 ###reference_b36###) or nodes with memberships in multiple groups (Airoldi et al., 2008 ###reference_b1###)."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Lower Ricci Curvature (LRC)",
+ "text": "In this section, we introduce a novel discrete curvature, the Lower Ricci Curvature (LRC), notably for its high performance in community detection and low computational complexity of . We delve into the intuition behind defining LRC, and how these factors contribute to its computational efficiency and efficacy in community detection.\nThe LRC of edge is defined as:\nSeveral key observations about LRC can be made. Firstly, the computation of requires , similar to FRC. Secondly, LRC is always within the range of , aligning with the bounds of BFC. Third, LRC is consistently less than or equal to BFC, with the difference, denoted as , defined as\nIn fact, BFC is further upper bounded by ORC (Topping et al., 2021 ###reference_b50###), leading to the following proposition, which underlines why we term it lower Ricci curvature.\nFor any edge ,\nThis proposition motivates our first rationale for defining LRC. The computational bottleneck of BFC is the term , which requires time. However, this leads to two pertinent questions. First, is LRC effective in differentiating edges within and between communities? Second, does the term significantly contribute to community detection, or is there a notable difference in for edges within the same community versus those between different communities?\nTo further investigate these questions, we utilize an SBM-generated network as a toy example. Figure 2 ###reference_### presents a network with nodes, divided evenly into two communities. Edges within communities have a higher probability of , while edges between communities are set at a lower probability of . The edges are color-coded based on their LRCs: higher LRCs are marked in yellow, while lower LRCs are marked in purple. 
This visual representation helps highlight that edges bridging different communities tend to have smaller LRC values compared to those within the same community.\n###figure_2### This example not only serves as a visualization exercise but also supports the use of LRC in community detection. It demonstrates that LRC achieves computational efficiency by omitting the computationally expensive term without sacrificing its ability to detect community structures.\nThe direct link between LRC and ORC is less straightforward, which guides the second intuition behind the definition of LRC. The primary computational challenge in calculating ORC lies in the Wasserstein-1 distance, also known as the earth moving distance (Villani et al., 2009 ###reference_b54###). A natural approach is to approximate this distance or bound it from below, above, or both. To effectively bound ORC, it is necessary to bound the Wasserstein distance, which involves complex calculations of the total cost of certain candidate transports (see Jost and Liu (2014 ###reference_b20###) for more details). The bounds are established as follows:\nNotably, the lower bound is precisely the LRC, which connects back to Proposition 1 ###reference_position1###. While it is technically feasible to use the upper bound, the focus is on the lower bound, i.e., LRC, due to its practical performance and the positivity of the upper bound.\nAs a direct corollary of these inequalities, the following corollary establishes a link between the bound of LRC and the diameter of the network, defined as , where represents the graph distance. 
This is also related to the Cheeger constant (Chung, 1997 ###reference_b6###), a measure indicative of the presence of a community structure in the network.\nIf there exists such that for any , then\n.\n, where is first non-zero eigenvalue of the normalized graph Laplacian, also known as the spectral gap, and is the Cheeger constant.\nThe interpretation of this corollary is that a larger value of , suggests a graph structure more akin to a fully connected graph, hence a smaller diameter. Similarly, a larger correlates with a higher Cheeger constant, indicating a more interconnected network with less pronounced separability into distinct community structures.\nWe conclude this section with a comparative overview of the four curvatures: FRC, BFC, ORC, and LRC. The key to this comparison is their computational complexity and whether they are scale-free, i.e., independent of the network size characterized by and . Scale-free properties are particularly important in the network analysis, as they ensure the applicability and consistency of curvature measures across networks of different sizes. This quality is preferable as it allows for meaningful comparisons and generalizations across various network structures, from small-scale to large-scale networks, without being biased by their size.\nAmong these curvatures, LRC stands out for its linear computational complexity and scale-free property, making it a versatile and efficient choice for network analysis. This blend of computational efficiency and scale-free nature makes LRC an ideal candidate for analyzing networks in various contexts."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "LRC-based preprocessing",
+ "text": "As observed in Figure 2 ###reference_###(b), the presence of community structures in networks often results in a bimodal distribution of LRC values. This typically manifests itself as two distinct modes in the histogram of LRCs: a smaller mode corresponding to across-community edges and a larger mode representing within-community edges. This observation underpins our proposed preprocessing step for community detection: removing edges with small LRC values below a specific threshold. This approach aims to retain more within-community edges, thereby making the community structure more pronounced. The threshold is determined by fitting a Gaussian mixture model (GMM, Reynolds et al. (2009 ###reference_b41###)) to the histogram of LRCs, as outlined in the following algorithm.\nThe workflow of our proposed method is illustrated in Figure 3 ###reference_###, based on the example network previously discussed.\n###figure_3### This toy example illustrates how our preprocessing step is expected to enhance the performance of existing community detection methods by clarifying the underlying community structures.\nFollowing the description of the underlying community structures, it\u2019s crucial to highlight the efficiency and scalability of the LRC-based preprocessing method. The calculation of LRC itself requires time, and the subsequent edge removal step is a one-time, non-iterative process. This is in stark contrast to competitor methods that rely on iterative processes (Jost and Liu, 2014 ###reference_b20###; Sia et al., 2019 ###reference_b46###), which can significantly increase computational time, especially for large networks that are increasingly common in various domains.\nCrucially, this increase in efficiency does not compromise the accuracy of community detection. 
In the following sections, we present empirical evidence showing how our LRC-based preprocessing not only maintains but often enhances the effectiveness of popular community detection algorithms, even in complex network scenarios. This demonstrates the dual benefit of our approach: it streamlines computation while enriching the depth of network analysis."
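As a concrete illustration of the thresholding step, the sketch below fits a two-component one-dimensional Gaussian mixture to a list of edge-curvature values with a small hand-rolled EM loop and returns the equal-responsibility point between the two modes as the edge-removal threshold. It is a minimal stand-in for the GMM fit described above (in practice a library implementation such as scikit-learn's `GaussianMixture` would be used); the function names and the decile-based initialization are illustrative choices, not the paper's code.

```python
import math

def _pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_gmm2_threshold(curvatures, iters=200):
    """Fit a two-component 1D Gaussian mixture by EM and return the point
    between the two modes where the components' responsibilities are equal."""
    xs = sorted(curvatures)
    n = len(xs)
    mu = [xs[n // 10], xs[(9 * n) // 10]]            # crude decile initialization
    overall = sum(xs) / n
    var = [sum((x - overall) ** 2 for x in xs) / n + 1e-6] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of the upper component for each point
        r = []
        for x in xs:
            p0 = w[0] * _pdf(x, mu[0], var[0])
            p1 = w[1] * _pdf(x, mu[1], var[1])
            r.append(p1 / (p0 + p1))
        # M-step: update weights, means, and (floored) variances
        n1 = sum(r)
        n0 = n - n1
        mu = [sum((1 - ri) * x for ri, x in zip(r, xs)) / n0,
              sum(ri * x for ri, x in zip(r, xs)) / n1]
        var = [max(1e-6, sum((1 - ri) * (x - mu[0]) ** 2 for ri, x in zip(r, xs)) / n0),
               max(1e-6, sum(ri * (x - mu[1]) ** 2 for ri, x in zip(r, xs)) / n1)]
        w = [n0 / n, n1 / n]
    # threshold: grid point between the means where the weighted densities cross
    lo, hi = sorted(mu)
    grid = [lo + (hi - lo) * t / 1000 for t in range(1001)]
    return min(grid, key=lambda x: abs(w[0] * _pdf(x, mu[0], var[0])
                                       - w[1] * _pdf(x, mu[1], var[1])))
```

Edges whose curvature falls below the returned threshold would then be removed before running any community detection algorithm.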
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Simulation",
+ "text": "To assess the effectiveness of LRC in community detection, we conducted simulations using networks generated from SBM. For a pair of within-community edge probability and across-community edge probability (with ), we generated 100 graph replicates, each with nodes. We evaluated three distinct scores, motivated by our proposed preprocessing method, to compare the performance of LRC against three other existing curvatures. The results are visually represented through heat maps, with the x-axis representing , the y-axis representing , and the color indicating the score.\nProportion of Perfect Separation (PPS). The first score we considered is the Proportion of Perfect Separation. For each graph replicate, we calculated the minimum curvature value among within-community edges and the maximum curvature value among across-community edges. A situation where the minimum within-community curvature exceeds the maximum across-community curvature indicates perfect separation of within- and across-community edges. This implies that our preprocessing would remove all across-community edges while retaining all within-community edges, enabling effective community detection by any reasonable downstream method. Mathematically, PPS is the proportion of networks satisfying . The score ranges between 0 and 1, with higher values indicating better performance.\n###figure_4### The diagonal heat maps in Figure 4 ###reference_### depict the extent of separation between within-community and across-community curvature distributions. A redder hue indicates a higher degree of separation. These maps suggest that all four curvatures, including LRC, effectively differentiate community structures across a variety of pairs. The off-diagonal heat maps in the lower triangle compare the performance of each curvature with others (red for superior performance, blue for inferior), with a raw scale of . 
The upper triangle heat maps also compare curvature performances, but with normalized ranges to amplify differences. Overall, LRC shows PPS performance comparable to that of the other curvatures.\nAverage within-community Edge Removal Ratio (AER). The second score, AER, provides a softer evaluation compared to PPS. While PPS focuses on perfect separation, AER quantifies the extent to which within-community edges might be incorrectly removed when aiming to eliminate all across-community edges. This score is particularly insightful, as it accounts for the potential drawback of our preprocessing method in mistakenly removing valuable within-community connections.\nMathematically, AER is defined as the ratio of the number of within-community edges, whose LRC values are smaller than the maximum LRC value of across-community edges, to the total number of within-community edges. In formula terms, AER is given by the proportion\nA score of indicates ideal performance (no within-community edges are incorrectly removed), aligning with a PPS of . Conversely, an AER of implies the extreme scenario where all within-community edges are erroneously removed.\nFigure 5 ###reference_### presents the heat map of AER scores, organized similarly to the PPS heat map. The diagonal panels show the AER score for each curvature, while the off-diagonal panels compare the performance of different curvatures using the AER score. These heat maps provide insight into how effectively each curvature avoids the unintended removal of within-community edges, which is crucial for maintaining the integrity of the community structure during preprocessing.\n###figure_5### Average Overlapping Percentiles (AOP). The third score, AOP, is designed to quantify the extent of overlap between the curvature distributions of within-community and across-community edges in a more symmetric manner. 
This score captures the degree to which these two distributions intermingle, providing a nuanced view of the effectiveness of a curvature in distinguishing community structures.\nMathematically, AOP is calculated as follows: For each replicate graph, we determine the percentile of the minimum within-community LRC value within the distribution of across-community LRC values. We then calculate one minus the percentile of the maximum across-community LRC value within the distribution of within-community LRC values. The AOP score is the sum of these two quantities. Formally, it can be expressed as:\nwhere is the -th percentile of set A.\nIn the ideal scenario where there is no overlap, the first percentile would be 1 (indicating the minimum within-community LRC is at the highest end of the across-community distribution), and the second percentile would be 0 (indicating the maximum across-community LRC is at the lowest end of the within-community distribution), resulting in an AOP score of 2. Conversely, in the worst-case scenario where there is complete overlap, the AOP score becomes 0.\n###figure_6### Figure 6 ###reference_### displays the heat map visualization of AOP scores, arranged similarly to the previous scores. The diagonal panels show the AOP for each curvature, while the off-diagonal panels compare different curvatures using the AOP measure. This visualization aids in understanding the extent to which each curvature can differentiate community structures by evaluating the overlap of the curvature distributions.\nIn summarizing the evaluations conducted using PPS, AER, and AOP, we observe that the four curvatures \u2013 LRC, FRC, BFC, and ORC \u2013 exhibit comparable performance in community detection. 
None of the curvatures consistently outperforms the others across all metrics and all pairs of , indicating a balanced landscape of effectiveness.\nHowever, when considering computational efficiency, LRC and FRC emerge as the fastest, both offering complexity. The crucial difference is that FRC is not scale-free, but LRC boasts this advantageous property, making it particularly suitable for analyzing large-scale networks where scalability is key. This distinction positions LRC as an ideal candidate for our proposed preprocessing method. Nevertheless, it is important to note that if specific scenarios or requirements strongly favor other curvatures, our preprocessing approach remains adaptable and can be effectively applied in a broader context.\nMoving forward, the next section will focus on the application of LRC, leveraging its efficiency and scale-free nature, to real datasets. This will provide empirical insights into its practical utility and effectiveness in real-world network analysis scenarios."
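The per-replicate quantities behind PPS, AER, and AOP can be written down compactly. The sketch below follows the verbal definitions in the text (the original formulas are elided from this extraction); empirical percentiles are computed by simple counting, and the helper name is illustrative:

```python
def separation_scores(within, across):
    """Per-replicate scores for two lists of edge curvatures:
    perfect-separation indicator (aggregated into PPS over replicates),
    within-community edge removal ratio (AER), and overlap score (AOP)."""
    min_w, max_a = min(within), max(across)
    perfect = min_w > max_a                                  # contributes to PPS
    # AER: fraction of within-community edges falling below the largest
    # across-community curvature (these would be removed along with it)
    aer = sum(w < max_a for w in within) / len(within)
    # AOP: percentile of min(within) among across values, plus one minus
    # the percentile of max(across) among within values (ideal sum = 2)
    p1 = sum(a <= min_w for a in across) / len(across)
    p2 = 1 - sum(w <= max_a for w in within) / len(within)
    return perfect, aer, p1 + p2
```

With perfectly separated distributions the function returns `(True, 0.0, 2.0)`, matching the ideal values stated above.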
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Application",
+ "text": "In this section, we evaluate the impact of our proposed LRC preprocessing method on the performance of various community detection algorithms using four real-world datasets with known community structures. We begin our analysis with a smaller network, the NCAA Football League Network, to demonstrate the impact of our preprocessing method in a more controlled setting. To this end, we compare the Adjusted Rand Index (ARI) and Adjusted Mutual Information (AMI) scores before and after applying our preprocessing technique, utilizing four representative community detection models: Label Propagation, Leiden, Girvan-Newman, and Walktrap. These models were chosen for their effectiveness and widespread use in community detection, as noted in the existing literature (Fortunato and Hric, 2016 ###reference_b14###)."
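For reference, ARI can be computed directly from pair counts via the standard Hubert-Arabie formula; the sketch below is a generic implementation (not the paper's code), and assumes at least two clusters overall so the chance-adjusted denominator is non-zero:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Hubert-Arabie adjusted Rand index between two hard partitions."""
    n = len(labels_true)
    # contingency counts n_ij, row sums a_i, column sums b_j
    nij = Counter(zip(labels_true, labels_pred))
    a = Counter(labels_true)
    b = Counter(labels_pred)
    sum_ij = sum(comb(c, 2) for c in nij.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Identical partitions (up to relabeling) score exactly 1, and independent partitions score near 0, which is what makes ARI a convenient before/after comparison.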
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "NCAA Football League network",
+ "text": "This network represents the NCAA football Division I game schedule for the 2000 season (Girvan and Newman, 2002 ###reference_b15###). It consists of 115 nodes, representing college football teams, and 613 edges, corresponding to the regular season games played between these teams. The dataset identifies 12 ground-truth communities or conferences. Since teams tend to play more frequently within their own conference, this network clearly exhibits a community structure. Table 2 ###reference_### below illustrates the performance improvement of various community detection algorithms through the application of our preprocessing method.\nThe results clearly show an improvement in both ARI and AMI scores after applying our preprocessing method across all four community detection algorithms. Notably, the Label Propagation algorithm, which initially had the lowest ARI (0.75) and AMI (0.85), significantly improved to 0.89 and 0.93, respectively, after preprocessing. This enhancement elevates it to one of the top-performing algorithms in this context. In fact, after preprocessing, all algorithms exhibit very similar scores, suggesting that our method simplifies the community detection problem by making the community structures more distinct and evident. This homogenization of performance implies that, with effective preprocessing, the choice of community detection algorithm becomes less critical, as the clarified network structure facilitates more accurate community detection across different methods.\nImportantly, the inclusion of computational complexity in our analysis (as shown in the last row of the table) provides further insight into the selection of an optimal algorithm. Algorithms such as Label Propagation and Walktrap, with their lower computational complexities of and respectively, become attractive options. 
This highlights another significant advantage of our preprocessing method \u2013 it not only improves the accuracy of community detection, but also enhances overall efficiency by enabling the use of faster algorithms without compromising on performance.\nAfter evaluating the NCAA Football League Network, we extend our analysis to three larger-scale networks. These networks pose additional challenges, particularly in terms of computational efficiency. Moreover, they often exhibit mixed membership, where nodes can belong to multiple communities, diverging from the unique community structures seen in smaller and simpler networks like the NCAA dataset.\nGiven these differences, we shift our focus to algorithms better suited for these conditions. For larger networks, we use Angel, Ego, K-clique, and the Speaker-Listener Label Propagation Algorithm (SLPA). We chose these algorithms for their suitability in handling large-scale networks and their capability to address mixed-membership scenarios.\nFurthermore, the ARI and AMI scores are less effective for evaluating community detection in networks with mixed memberships. Therefore, we utilize the F1 score, a well-established metric in such scenarios, which combines the precision and recall of the detected communities to provide a balanced measure of a method\u2019s accuracy and is particularly useful in networks where a node can be part of multiple communities (Hollocou et al., 2018 ###reference_b18###)."
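Several conventions exist for F1 scores over overlapping communities; one common variant, shown here as a reasonable stand-in for the metric described above (not necessarily the exact formulation used in the paper), is the symmetric average best-match F1, with communities given as sets of node ids:

```python
def community_f1(detected, truth):
    """Symmetric average best-match F1 between two covers (lists of node sets)."""
    def f1(a, b):
        inter = len(a & b)
        if inter == 0:
            return 0.0
        precision = inter / len(b)   # fraction of b's nodes also in a
        recall = inter / len(a)      # fraction of a's nodes also in b
        return 2 * precision * recall / (precision + recall)

    def best_match_avg(sources, targets):
        # score each source community against its best-matching target
        return sum(max(f1(s, t) for t in targets) for s in sources) / len(sources)

    # average the two directions so neither cover is privileged
    return 0.5 * (best_match_avg(truth, detected) + best_match_avg(detected, truth))
```

Averaging both directions penalizes methods that either shatter a ground-truth community into fragments or merge several communities into one.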
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "DBLP collaboration network",
+ "text": "The DBLP computer science bibliography co-authorship network (Yang and Leskovec, 2012 ###reference_b58###) is another dataset we explored. Each node represents a researcher, and each edge signifies a collaborative paper. The network comprises 317,080 nodes and 950,059 edges. The results, as shown in the table below, demonstrate that the LRC preprocessing method aids in detecting community structures in more complex networks.\nBefore delving into the results presented in Table 3 ###reference_###, it is important to note that due to the complex nature of the community detection process in large-scale networks, the traditional Big O notation for computational complexity is not reported here. Instead, we focus on the actual runtime of each algorithm, providing a more practical measure of efficiency in real-world applications. The \u2018Time: before\u2019 represents the runtime (in seconds) of each community detection algorithm when applied directly to the raw network data, while \u2018Time: after\u2019 encompasses the total time, which includes calculating LRC, identifying the threshold, removing edges based on this threshold, and rerunning the same community detection algorithm on the processed network. This approach ensures a fair comparison, as it accounts for all steps involved in our preprocessing method.\nTable 3 reveals a notable improvement in the F1 scores for each algorithm after integrating our preprocessing method into the DBLP collaboration network. Furthermore, this improvement in performance does not come at the cost of reduced efficiency; in fact, the Angel algorithm demonstrates increased processing speed post-preprocessing, even with these additional preprocessing steps, highlighting the efficiency of our method in complex network environments."
+ },
+ {
+ "section_id": "6.3",
+ "parent_section_id": "6",
+ "section_name": "Amazon product co-purchasing network",
+ "text": "This dataset represents the co-purchasing patterns of products on Amazon (Yang and Leskovec, 2012 ###reference_b58###). The nodes symbolize products, and the edges indicate co-purchases by Amazon customers. The network includes 334,863 nodes and 925,872 edges. As with the previous datasets, the application of the LRC preprocessing method significantly enhanced the results of various community detection algorithms, as illustrated in the table below.\nIn line with the results on the DBLP collaboration network, our method improved the performance of each community detection algorithm. Notably, this consistency underscores the robustness of our preprocessing approach across different types of large-scale networks."
+ },
+ {
+ "section_id": "6.4",
+ "parent_section_id": "6",
+ "section_name": "YouTube social network",
+ "text": "This dataset represents the social network on YouTube (Mislove et al., 2007 ###reference_b26###). Nodes symbolize users, and edges indicate friendships, such as subscriptions, between YouTube users. The network includes 1,134,890 nodes and 2,987,624 edges. The results show that the implementation of the LRC preprocessing method greatly improved the results of multiple community detection algorithms in line with prior datasets.\nTable 5 ###reference_### showcases the effectiveness of our preprocessing method in the YouTube social network. While the performance boost is apparent across all algorithms, the Ego and SLPA algorithms stand out for their marked improvements in F1 scores. This result diverges slightly from other large networks, as K-clique is not the fastest method here. Nevertheless, our method consistently enhances the overall performance of community detection, particularly benefiting faster analysis methods.\nIn conclusion, across all three large networks analyzed \u2013 DBLP, Amazon, and YouTube \u2013 our LRC preprocessing method consistently enables at least one community detection algorithm to achieve the best or near-best performance scores, while maintaining impressive efficiency with runtimes under 200 seconds. For instance, the Angel algorithm for the DBLP network, K-clique for the Amazon network, and Ego for the YouTube network each emerged as top performers in their respective datasets. This is particularly noteworthy given the substantial size of these networks. Such results underscore the exceptional effectiveness and efficiency of our preprocessing approach in handling complex, large-scale network data, making it a highly valuable tool in the field of network analysis."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Discussion and future work",
+ "text": "In this work, we have focused on network curvature and its applications in community detection. Our key contribution is the proposal of the Lower Ricci Curvature (LRC), a scalable and scale-free discrete curvature designed specifically for network analysis. Alongside this, we have developed an LRC-based preprocessing method that has shown potential in enhancing the performance of established community detection methods. This assertion is backed by both simulations and real-world applications, including analyses of large-scale networks such as Amazon, DBLP, and YouTube. Moreover, the LRC framework is adaptable and can be straightforwardly extended to weighted networks. Looking forward, several promising directions for extending this research are evident.\nExtension to directed graphs: Extending LRC to directed graphs opens up numerous possibilities for analysis in various fields. Directed graphs are crucial in representing asymmetric relationships, such as citation networks in academia, where the directionality of citations plays a significant role (Newman, 2001 ###reference_b28###), or in web link structures where the direction of links implies a flow of information (Kleinberg et al., 1999 ###reference_b21###). Adapting LRC to account for the directionality in such networks can provide more nuanced insights into their structural and community dynamics.\nApplication to hypergraphs: Hypergraphs, which involve higher-order interactions beyond pairwise connections, present an exciting frontier. For instance, in collaborative environments like multi-author scientific publications (Taramasco et al., 2010 ###reference_b49###) or gene co-expression (Tran, 2012 ###reference_b52###), interactions are inherently multi-dimensional. 
Extending LRC to hypergraphs could yield a deeper understanding of these complex relational structures and the underlying community formations.\nDeeper theoretical investigation of LRC: There is ample scope for exploring the theoretical aspects of LRC. Investigating the asymptotic behavior of the mixing components in different network models, such as the SBM, could provide valuable theoretical insights. Additionally, a thorough analysis of the three scores (PPS, AER, and AOP) under various network models could deepen our understanding of LRC\u2019s effectiveness and limitations in community detection."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Proof of Proposition\u00a01 and Corollary\u00a01",
+ "text": "directly follows from the definition, as . The inequality is derived from Theorem 2 in Topping et al. (2021 ###reference_b50###). Corollary 1 ###reference_ollary1### is a direct consequence of Proposition 1 ###reference_position1### combined with Corollary 3 and Proposition 5 from Topping et al. (2021 ###reference_b50###)."
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Additional experimental details",
+ "text": "All algorithms implemented in this paper are from the Python package CDlib (Rossetti et al., 2019 ###reference_b44###). The hyperparameters are as follows:\nNCAA Football League network\nLabel Propagation: NA.\nLeiden: Initial membership = None, weights = None.\nGirvan-Newman: Level = 10.\nWalktrap: NA.\nDBLP collaboration network\nAngel: Threshold = 0.5, minimum community size = 3.\nEgo-networks: Level = 1.\nK-clique: .\nSLPA: .\nAmazon product co-purchasing network\nAngel: Threshold = 0.5, minimum community size = 3.\nEgo-networks: Level = 1.\nK-clique: .\nSLPA: .\nYouTube social network\nAngel: Threshold = 0.5, minimum community size = 3.\nEgo-networks: Level = 1.\nK-clique: .\nSLPA: .\nAll code can be found at https://github.com/parkyunjin/LowerRicciCurv.git ###reference_rv.git###\nThe four real datasets used in this paper can be downloaded from the following websites:\nNCAA Football League network: https://websites.umich.edu/~mejn/netdata/ ###reference_### under \u201cAmerican College football\u201d.\nDBLP collaboration network: https://snap.stanford.edu/data/com-DBLP.html ###reference_ml###\nAmazon product co-purchasing network: https://snap.stanford.edu/data/com-Amazon.html ###reference_html###\nYouTube social network: https://snap.stanford.edu/data/com-Youtube.html ###reference_.html###"
+ },
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.4.5.1.1\">Curvature</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.4.5.1.2\">Computational Complexity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.4.5.1.3\">Scale-Free</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.2\">FRC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1\">\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.3\">No</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2\">BFC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.3.1\">Yes</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.2\">ORC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.3.3.1\">Yes</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.2\">LRC</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.1\">\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.4.4.3.1\">Yes</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparison of four curvatures.</figcaption>\n</figure>",
+ "capture": "Table 1: Comparison of four curvatures."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T2.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.4.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T2.4.5.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"4\" id=\"S6.T2.4.5.1.2\">Algorithms</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T2.4.6.2.1\" style=\"padding-bottom:2.15277pt;\">Scores</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.6.2.2\" style=\"padding-bottom:2.15277pt;\">Label Propagation</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.6.2.3\" style=\"padding-bottom:2.15277pt;\">Leiden</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.6.2.4\" style=\"padding-bottom:2.15277pt;\">Girvan-Newman</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.6.2.5\" style=\"padding-bottom:2.15277pt;\">Walktrap</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.7.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T2.4.7.3.1\">ARI: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.4.7.3.2\">0.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.4.7.3.3\">0.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T2.4.7.3.4\">0.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T2.4.7.3.5\">0.82</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.8.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T2.4.8.4.1\">ARI: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.8.4.2\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.8.4.2.1\">0.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.8.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.8.4.3.1\">0.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.8.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.8.4.4.1\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.8.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.8.4.5.1\">0.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.9.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T2.4.9.5.1\">AMI: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.9.5.2\">0.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.9.5.3\">0.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.9.5.4\">0.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.9.5.5\">0.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.10.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T2.4.10.6.1\">AMI: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.10.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.10.6.2.1\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.10.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.10.6.3.1\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.4.10.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.10.6.4.1\">0.92</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T2.4.10.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.4.10.6.5.1\">0.93</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row 
ltx_border_b ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T2.4.4.5\">Complexity</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S6.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S6.T2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S6.T2.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_tt\" id=\"S6.T2.4.4.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Community detection algorithms evaluation for NCAA Football League network</figcaption>\n</figure>",
+ "capture": "Table 2: Community detection algorithms evaluation for NCAA Football League network"
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"4\" id=\"S6.T3.1.1.1.2\">Algorithms</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T3.1.2.2.1\" style=\"padding-bottom:2.15277pt;\">Scores</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.2.2.2\" style=\"padding-bottom:2.15277pt;\">Angel</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.2.2.3\" style=\"padding-bottom:2.15277pt;\">Ego</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.2.2.4\" style=\"padding-bottom:2.15277pt;\">K-clique</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T3.1.2.2.5\" style=\"padding-bottom:2.15277pt;\">SLPA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T3.1.3.3.1\">F1: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T3.1.3.3.2\">0.284</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T3.1.3.3.3\">0.317</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T3.1.3.3.4\">0.276</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T3.1.3.3.5\">0.211</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T3.1.4.4.1\">F1: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.4.4.2\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S6.T3.1.4.4.2.1\">0.452</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.4.4.3\">0.386</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.4.4.4\">0.420</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T3.1.4.4.5\">0.371</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T3.1.5.5.1\">Time: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T3.1.5.5.2\">260.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T3.1.5.5.3\">831.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T3.1.5.5.4\">40.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T3.1.5.5.5\">1024.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T3.1.6.6.1\">Time: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T3.1.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.6.6.2.1\">180.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T3.1.6.6.3\">184.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T3.1.6.6.4\">111.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_t\" id=\"S6.T3.1.6.6.5\">2349.36</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Community detection algorithms evaluation for DBLP network</figcaption>\n</figure>",
+ "capture": "Table 3: Community detection algorithms evaluation for DBLP network"
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"4\" id=\"S6.T4.1.1.1.2\">Algorithms</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T4.1.2.2.1\" style=\"padding-bottom:2.15277pt;\">Scores</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.2.2.2\" style=\"padding-bottom:2.15277pt;\">Angel</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.2.2.3\" style=\"padding-bottom:2.15277pt;\">Ego</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.2.2.4\" style=\"padding-bottom:2.15277pt;\">K-clique</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T4.1.2.2.5\" style=\"padding-bottom:2.15277pt;\">SLPA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T4.1.3.3.1\">F1: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T4.1.3.3.2\">0.368</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T4.1.3.3.3\">0.371</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T4.1.3.3.4\">0.387</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T4.1.3.3.5\">0.345</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T4.1.4.4.1\">F1: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.4.4.2\">0.463</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S6.T4.1.4.4.3\">0.444</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.4.4.4.1\">0.482</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T4.1.4.4.5\">0.483</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T4.1.5.5.1\">Time: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T4.1.5.5.2\">159.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T4.1.5.5.3\">1000.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T4.1.5.5.4\">42.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T4.1.5.5.5\">2380.61</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T4.1.6.6.1\">Time: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T4.1.6.6.2\">139.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T4.1.6.6.3\">629.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T4.1.6.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T4.1.6.6.4.1\">85.73</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_t\" id=\"S6.T4.1.6.6.5\">3911.19</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Community detection algorithms evaluation for the Amazon network</figcaption>\n</figure>",
119
+ "capture": "Table 4: Community detection algorithms evaluation for the Amazon network"
120
+ },
121
+ "5": {
122
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T5.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T5.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T5.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"4\" id=\"S6.T5.1.1.1.2\">Algorithms</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T5.1.2.2.1\" style=\"padding-bottom:2.15277pt;\">Scores</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.1.2.2.2\" style=\"padding-bottom:2.15277pt;\">Angel</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.1.2.2.3\" style=\"padding-bottom:2.15277pt;\">Ego</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.1.2.2.4\" style=\"padding-bottom:2.15277pt;\">K-clique</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T5.1.2.2.5\" style=\"padding-bottom:2.15277pt;\">SLPA</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T5.1.3.3.1\">F1: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T5.1.3.3.2\">0.063</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T5.1.3.3.3\">0.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T5.1.3.3.4\">0.066</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T5.1.3.3.5\">0.093</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T5.1.4.4.1\">F1: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.1.4.4.2\">0.282</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S6.T5.1.4.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T5.1.4.4.3.1\">0.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T5.1.4.4.4\">0.216</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T5.1.4.4.5\">0.429</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S6.T5.1.5.5.1\">Time: before</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T5.1.5.5.2\">972.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T5.1.5.5.3\">143.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S6.T5.1.5.5.4\">9029.98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S6.T5.1.5.5.5\">117.23</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T5.1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_ll ltx_border_r ltx_border_t\" id=\"S6.T5.1.6.6.1\">Time: after</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T5.1.6.6.2\">67.840</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T5.1.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T5.1.6.6.3.1\">129.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T5.1.6.6.4\">131.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_t\" id=\"S6.T5.1.6.6.5\">218.29</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Community detection algorithms evaluation for the YouTube network</figcaption>\n</figure>",
123
+ "capture": "Table 5: Community detection algorithms evaluation for the YouTube network"
124
+ }
125
+ },
126
+ "image_paths": {
127
+ "1": {
128
+ "figure_path": "2401.10124v2_figure_1.png",
129
+ "caption": "Figure 1: (a) A toy simulated network with two communities, with edges colored by ORC. (b) The histogram of ORC, suggesting its potential in community detection.",
130
+ "url": "http://arxiv.org/html/2401.10124v2/x1.png"
131
+ },
132
+ "2": {
133
+ "figure_path": "2401.10124v2_figure_2.png",
134
+ "caption": "Figure 2: (a) A SBM-generated network with K=2\ud835\udc3e2K=2italic_K = 2, Bk\u2062k=0.8subscript\ud835\udc35\ud835\udc58\ud835\udc580.8B_{kk}=0.8italic_B start_POSTSUBSCRIPT italic_k italic_k end_POSTSUBSCRIPT = 0.8, Bk\u2062l=0.05subscript\ud835\udc35\ud835\udc58\ud835\udc590.05B_{kl}=0.05italic_B start_POSTSUBSCRIPT italic_k italic_l end_POSTSUBSCRIPT = 0.05 for k\u2260l\ud835\udc58\ud835\udc59k\\neq litalic_k \u2260 italic_l, with edges colored by LRC. (b) The histogram of LRC, suggesting its potential in community detection. (c) The histogram of \u0394\u0394\\Deltaroman_\u0394 for within and across community edges, indicating that \u0394\u0394\\Deltaroman_\u0394 may not significantly contribute to community detection.",
135
+ "url": "http://arxiv.org/html/2401.10124v2/x2.png"
136
+ },
137
+ "3": {
138
+ "figure_path": "2401.10124v2_figure_3.png",
139
+ "caption": "Figure 3: (a) A SBM-generated network with K=2\ud835\udc3e2K=2italic_K = 2, Bk\u2062k=0.8subscript\ud835\udc35\ud835\udc58\ud835\udc580.8B_{kk}=0.8italic_B start_POSTSUBSCRIPT italic_k italic_k end_POSTSUBSCRIPT = 0.8, Bk\u2062l=0.05subscript\ud835\udc35\ud835\udc58\ud835\udc590.05B_{kl}=0.05italic_B start_POSTSUBSCRIPT italic_k italic_l end_POSTSUBSCRIPT = 0.05 for k\u2260l\ud835\udc58\ud835\udc59k\\neq litalic_k \u2260 italic_l, with edges colored by LRC. (b) The histogram of LRC, with each bar colored by LRC. (c) The threshold \u03b2\ud835\udefd\\betaitalic_\u03b2 (the dotted vertical line) estimated by GMM. GMM1 is the mixing component with a large mean \u03bc2subscript\ud835\udf072\\mu_{2}italic_\u03bc start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, GMM2 is the mixing component with a smaller mean \u03bc1subscript\ud835\udf071\\mu_{1}italic_\u03bc start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. (d) The processed network, exhibiting a more discernible community structure.",
140
+ "url": "http://arxiv.org/html/2401.10124v2/x3.png"
141
+ },
142
+ "4": {
143
+ "figure_path": "2401.10124v2_figure_4.png",
144
+ "caption": "Figure 4: Comparison heat map for PPS.",
145
+ "url": "http://arxiv.org/html/2401.10124v2/x4.png"
146
+ },
147
+ "5": {
148
+ "figure_path": "2401.10124v2_figure_5.png",
149
+ "caption": "Figure 5: Comparison heat map for AER",
150
+ "url": "http://arxiv.org/html/2401.10124v2/x5.png"
151
+ },
152
+ "6": {
153
+ "figure_path": "2401.10124v2_figure_6.png",
154
+ "caption": "Figure 6: Comparison heat map for AOP.",
155
+ "url": "http://arxiv.org/html/2401.10124v2/x6.png"
156
+ }
157
+ },
158
+ "validation": true,
159
+ "references": [
160
+ {
161
+ "1": {
162
+ "title": "Mixed membership stochastic block models.",
163
+ "author": "Airoldi, E. M., D. Blei, S. Fienberg, and E. Xing (2008).",
164
+ "venue": "Advances in neural information processing systems 21.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "2": {
170
+ "title": "Communities detection for advertising by futuristic greedy method with clustering approach.",
171
+ "author": "Bakhthemmat, A. and M. Izadi (2021).",
172
+ "venue": "Big Data 9(1), 22\u201340.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "3": {
178
+ "title": "Generalized Ricci curvature and the geometry of graphs.",
179
+ "author": "Bauer, F., B. Hua, J. Jost, and S. Liu (2013).",
180
+ "venue": "Actes des rencontres du CIRM 3(1), 69\u201378.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "4": {
186
+ "title": "Fast unfolding of communities in large networks.",
187
+ "author": "Blondel, V. D., J.-L. Guillaume, R. Lambiotte, and E. Lefebvre (2008).",
188
+ "venue": "Journal of statistical mechanics: theory and experiment 2008(10), P10008.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "5": {
194
+ "title": "An introduction to differentiable manifolds and Riemannian geometry.",
195
+ "author": "Boothby, W. M. (1986).",
196
+ "venue": "Academic press.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "6": {
202
+ "title": "Spectral graph theory, Volume 92.",
203
+ "author": "Chung, F. R. (1997).",
204
+ "venue": "American Mathematical Soc.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "7": {
210
+ "title": "Network analysis of the cosmos galaxy field.",
211
+ "author": "De Regt, R., S. Apunevych, C. Von Ferber, Y. Holovatch, and B. Novosyadlyj (2018).",
212
+ "venue": "Monthly Notices of the Royal Astronomical Society 477(4), 4738\u20134748.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "8": {
218
+ "title": "Community detection in complex networks: From statistical foundations to data science applications.",
219
+ "author": "Dey, A. K., Y. Tian, and Y. R. Gel (2022).",
220
+ "venue": "Wiley Interdisciplinary Reviews: Computational Statistics 14(2), e1566.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "9": {
226
+ "title": "Riemannian geometry, Volume 6.",
227
+ "author": "Do Carmo, M. P. and J. Flaherty Francis (1992).",
228
+ "venue": "Springer.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "10": {
234
+ "title": "Ricci curvature of finite Markov chains via convexity of the entropy.",
235
+ "author": "Erbar, M. and J. Maas (2012).",
236
+ "venue": "Archive for Rational Mechanics and Analysis 206, 997\u20131038.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "11": {
242
+ "title": "Augmentations of Forman\u2019s Ricci curvature and their applications in community detection.",
243
+ "author": "Fesser, L., S. S. d. H. Iv\u00e1\u00f1ez, K. Devriendt, M. Weber, and R. Lambiotte (2023).",
244
+ "venue": "arXiv preprint arXiv:2306.06474.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "12": {
250
+ "title": "Bochner\u2019s method for cell complexes and combinatorial Ricci curvature.",
251
+ "author": "Forman (2003).",
252
+ "venue": "Discrete & Computational Geometry 29, 323\u2013374.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "13": {
258
+ "title": "Community detection in graphs.",
259
+ "author": "Fortunato, S. (2010).",
260
+ "venue": "Physics reports 486(3-5), 75\u2013174.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "14": {
266
+ "title": "Community detection in networks: A user guide.",
267
+ "author": "Fortunato, S. and D. Hric (2016).",
268
+ "venue": "Physics reports 659, 1\u201344.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "15": {
274
+ "title": "Community structure in social and biological networks.",
275
+ "author": "Girvan, M. and M. E. Newman (2002).",
276
+ "venue": "Proceedings of the national academy of sciences 99(12), 7821\u20137826.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "16": {
282
+ "title": "The elements of statistical learning: data mining, inference, and prediction, Volume 2.",
283
+ "author": "Hastie, T., R. Tibshirani, J. H. Friedman, and J. H. Friedman (2009).",
284
+ "venue": "Springer.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "17": {
290
+ "title": "Stochastic block models: First steps.",
291
+ "author": "Holland, P. W., K. B. Laskey, and S. Leinhardt (1983).",
292
+ "venue": "Social networks 5(2), 109\u2013137.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "18": {
298
+ "title": "Multiple local community detection.",
299
+ "author": "Hollocou, A., T. Bonald, and M. Lelarge (2018).",
300
+ "venue": "ACM SIGMETRICS Performance Evaluation Review 45(3), 76\u201383.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "19": {
306
+ "title": "Co-citation and co-authorship networks of statisticians.",
307
+ "author": "Ji, P., J. Jin, Z. T. Ke, and W. Li (2022).",
308
+ "venue": "Journal of Business & Economic Statistics 40(2), 469\u2013485.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "20": {
314
+ "title": "Ollivier\u2019s Ricci curvature, local clustering and curvature-dimension inequalities on graphs.",
315
+ "author": "Jost, J. and S. Liu (2014).",
316
+ "venue": "Discrete & Computational Geometry 51(2), 300\u2013322.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "21": {
322
+ "title": "The web as a graph: Measurements, models, and methods.",
323
+ "author": "Kleinberg, J. M., R. Kumar, P. Raghavan, S. Rajagopalan, and A. S. Tomkins (1999).",
324
+ "venue": "In Computing and Combinatorics: 5th Annual International Conference, COCOON\u201999 Tokyo, Japan, July 26\u201328, 1999 Proceedings 5, pp. 1\u201317. Springer.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "22": {
330
+ "title": "A guide to conquer the biological network era using graph theory.",
331
+ "author": "Koutrouli, M., E. Karatzas, D. Paez-Espino, and G. A. Pavlopoulos (2020).",
332
+ "venue": "Frontiers in bioengineering and biotechnology 8, 34.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "23": {
338
+ "title": "Learning to discover social circles in ego networks.",
339
+ "author": "Leskovec, J. and J. Mcauley (2012).",
340
+ "venue": "Advances in neural information processing systems 25.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "24": {
346
+ "title": "Ricci curvature of graphs.",
347
+ "author": "Lin, Y., L. Lu, and S.-T. Yau (2011).",
348
+ "venue": "Tohoku Mathematical Journal, Second Series 63(4), 605\u2013627.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "25": {
354
+ "title": "Ricci curvature and eigenvalue estimate on locally finite graphs.",
355
+ "author": "Lin, Y. and S.-T. Yau (2010).",
356
+ "venue": "Mathematical research letters 17(2), 343\u2013356.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "26": {
362
+ "title": "Measurement and Analysis of Online Social Networks.",
363
+ "author": "Mislove, A., M. Marcon, K. P. Gummadi, P. Druschel, and B. Bhattacharjee (2007, October).",
364
+ "venue": "In Proceedings of the 5th ACM/Usenix Internet Measurement Conference (IMC\u201907), San Diego, CA.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "27": {
370
+ "title": "Isoperimetric numbers of graphs.",
371
+ "author": "Mohar, B. (1989).",
372
+ "venue": "Journal of combinatorial theory, Series B 47(3), 274\u2013291.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "28": {
378
+ "title": "The structure of scientific collaboration networks.",
379
+ "author": "Newman, M. E. (2001).",
380
+ "venue": "Proceedings of the national academy of sciences 98(2), 404\u2013409.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "29": {
386
+ "title": "Fast algorithm for detecting community structure in networks.",
387
+ "author": "Newman, M. E. (2004).",
388
+ "venue": "Physical review E 69(6), 066133.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "30": {
394
+ "title": "Modularity and community structure in networks.",
395
+ "author": "Newman, M. E. (2006).",
396
+ "venue": "Proceedings of the national academy of sciences 103(23), 8577\u20138582.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "31": {
402
+ "title": "Finding and evaluating community structure in networks.",
403
+ "author": "Newman, M. E. and M. Girvan (2004).",
404
+ "venue": "Physical review E 69(2), 026113.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "32": {
410
+ "title": "Revisiting over-smoothing and over-squashing using Ollivier-Ricci curvature.",
411
+ "author": "Nguyen, K., N. M. Hieu, V. D. Nguyen, N. Ho, S. Osher, and T. M. Nguyen (2023).",
412
+ "venue": "In International Conference on Machine Learning, pp. 25956\u201325979. PMLR.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "33": {
418
+ "title": "Ricci curvature of the internet topology.",
419
+ "author": "Ni, C.-C., Y.-Y. Lin, J. Gao, X. D. Gu, and E. Saucan (2015).",
420
+ "venue": "In 2015 IEEE conference on computer communications (INFOCOM), pp. 2758\u20132766. IEEE.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "34": {
426
+ "title": "Ricci curvature of metric spaces.",
427
+ "author": "Ollivier, Y. (2007).",
428
+ "venue": "Comptes Rendus Mathematique 345(11), 643\u2013646.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "35": {
434
+ "title": "Uncovering the overlapping community structure of complex networks in nature and society.",
435
+ "author": "Palla, G., I. Der\u00e9nyi, I. Farkas, and T. Vicsek (2005).",
436
+ "venue": "Nature 435(7043), 814\u2013818.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "36": {
442
+ "title": "Hierarchical block structures and high-resolution model selection in large networks.",
443
+ "author": "Peixoto, T. P. (2014).",
444
+ "venue": "Physical Review X 4(1), 011047.",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "37": {
450
+ "title": "Computing communities in large networks using random walks.",
451
+ "author": "Pons, P. and M. Latapy (2005).",
452
+ "venue": "In Computer and Information Sciences-ISCIS 2005: 20th International Symposium, Istanbul, Turkey, October 26-28, 2005. Proceedings 20, pp. 284\u2013293. Springer.",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "38": {
458
+ "title": "Defining and identifying communities in networks.",
459
+ "author": "Radicchi, F., C. Castellano, F. Cecconi, V. Loreto, and D. Parisi (2004).",
460
+ "venue": "Proceedings of the national academy of sciences 101(9), 2658\u20132663.",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "39": {
466
+ "title": "Near linear time algorithm to detect community structures in large-scale networks.",
467
+ "author": "Raghavan, U. N., R. Albert, and S. Kumara (2007).",
468
+ "venue": "Physical review E 76(3), 036106.",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "40": {
474
+ "title": "Objective criteria for the evaluation of clustering methods.",
475
+ "author": "Rand, W. M. (1971).",
476
+ "venue": "Journal of the American Statistical association 66(336), 846\u2013850.",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "41": {
482
+ "title": "Gaussian mixture models.",
483
+ "author": "Reynolds, D. A. et al. (2009).",
484
+ "venue": "Encyclopedia of biometrics 741(659-663).",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "42": {
490
+ "title": "M\u00e9thodes de calcul diff\u00e9rentiel absolu et leurs applications.",
491
+ "author": "Ricci, M. and T. Levi-Civita (1900).",
492
+ "venue": "Mathematische Annalen 54(1-2), 125\u2013201.",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "43": {
498
+ "title": "Angel: efficient, and effective, node-centric community discovery in static and dynamic networks.",
499
+ "author": "Rossetti, G. (2020).",
500
+ "venue": "Applied Network Science 5(1), 26.",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "44": {
506
+ "title": "CDlib: a Python library to extract, compare and evaluate communities from complex networks.",
507
+ "author": "Rossetti, G., L. Milli, and R. Cazabet (2019).",
508
+ "venue": "Applied Network Science 4(1), 1\u201326.",
509
+ "url": null
510
+ }
511
+ },
512
+ {
513
+ "45": {
514
+ "title": "Graph curvature for differentiating cancer networks.",
515
+ "author": "Sandhu, R., T. Georgiou, E. Reznik, L. Zhu, I. Kolesov, Y. Senbabaoglu, and A. Tannenbaum (2015).",
516
+ "venue": "Scientific reports 5(1), 12323.",
517
+ "url": null
518
+ }
519
+ },
520
+ {
521
+ "46": {
522
+ "title": "Ollivier-Ricci curvature-based method to community detection in complex networks.",
523
+ "author": "Sia, J., E. Jonckheere, and P. Bogdan (2019).",
524
+ "venue": "Scientific reports 9(1), 9800.",
525
+ "url": null
526
+ }
527
+ },
528
+ {
529
+ "47": {
530
+ "title": "Forman curvature for directed networks.",
531
+ "author": "Sreejith, R., J. Jost, E. Saucan, and A. Samal (2016).",
532
+ "venue": "arXiv preprint arXiv:1605.04662.",
533
+ "url": null
534
+ }
535
+ },
536
+ {
537
+ "48": {
538
+ "title": "Forman curvature for complex networks.",
539
+ "author": "Sreejith, R., K. Mohanraj, J. Jost, E. Saucan, and A. Samal (2016).",
540
+ "venue": "Journal of Statistical Mechanics: Theory and Experiment 2016(6), 063206.",
541
+ "url": null
542
+ }
543
+ },
544
+ {
545
+ "49": {
546
+ "title": "Academic team formation as evolving hypergraphs.",
547
+ "author": "Taramasco, C., J.-P. Cointet, and C. Roth (2010).",
548
+ "venue": "Scientometrics 85(3), 721\u2013740.",
549
+ "url": null
550
+ }
551
+ },
552
+ {
553
+ "50": {
554
+ "title": "Understanding over-squashing and bottlenecks on graphs via curvature.",
555
+ "author": "Topping, J., F. Di Giovanni, B. P. Chamberlain, X. Dong, and M. M. Bronstein (2021).",
556
+ "venue": "arXiv preprint arXiv:2111.14522.",
557
+ "url": null
558
+ }
559
+ },
560
+ {
561
+ "51": {
562
+ "title": "From Louvain to Leiden: guaranteeing well-connected communities.",
563
+ "author": "Traag, V. A., L. Waltman, and N. J. Van Eck (2019).",
564
+ "venue": "Scientific reports 9(1), 5233.",
565
+ "url": null
566
+ }
567
+ },
568
+ {
569
+ "52": {
570
+ "title": "Hypergraph and protein function prediction with gene expression data.",
571
+ "author": "Tran, L. (2012).",
572
+ "venue": "arXiv preprint arXiv:1212.0388.",
573
+ "url": null
574
+ }
575
+ },
576
+ {
577
+ "53": {
578
+ "title": "Adapting community detection algorithms for disease module identification in heterogeneous biological networks.",
579
+ "author": "Tripathi, B., S. Parthasarathy, H. Sinha, K. Raman, and B. Ravindran (2019).",
580
+ "venue": "Frontiers in genetics 10, 164.",
581
+ "url": null
582
+ }
583
+ },
584
+ {
585
+ "54": {
586
+ "title": "Optimal transport: old and new, Volume 338.",
587
+ "author": "Villani, C. et al. (2009).",
588
+ "venue": "Springer.",
589
+ "url": null
590
+ }
591
+ },
592
+ {
593
+ "55": {
594
+ "title": "Information theoretic measures for clusterings comparison: is a correction for chance necessary?",
595
+ "author": "Vinh, N. X., J. Epps, and J. Bailey (2009).",
596
+ "venue": "In Proceedings of the 26th annual international conference on machine learning, pp. 1073\u20131080.",
597
+ "url": null
598
+ }
599
+ },
600
+ {
601
+ "56": {
602
+ "title": "Introduction to graph theory, Volume 2.",
603
+ "author": "West, D. B. et al. (2001).",
604
+ "venue": "Prentice hall Upper Saddle River.",
605
+ "url": null
606
+ }
607
+ },
608
+ {
609
+ "57": {
610
+ "title": "Slpa: Uncovering overlapping communities in social networks via a speaker-listener interaction dynamic process.",
611
+ "author": "Xie, J., B. K. Szymanski, and X. Liu (2011).",
612
+ "venue": "In 2011 IEEE 11th International Conference on Data Mining Workshops, pp. 344\u2013349. IEEE.",
613
+ "url": null
614
+ }
615
+ },
616
+ {
617
+ "58": {
618
+ "title": "Defining and evaluating network communities based on ground-truth.",
619
+ "author": "Yang, J. and J. Leskovec (2012).",
620
+ "venue": "In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, pp. 1\u20138.",
621
+ "url": null
622
+ }
623
+ },
624
+ {
625
+ "59": {
626
+ "title": "Community detection based on similarities of communication behavior in ip networks.",
627
+ "author": "Zhang, S., Y. Zhang, M. Zhou, and L. Peng (2022).",
628
+ "venue": "Journal of Ambient Intelligence and Humanized Computing 13(3), 1451\u20131461.",
629
+ "url": null
630
+ }
631
+ }
632
+ ],
633
+ "url": "http://arxiv.org/html/2401.10124v2"
634
+ }
20240127/2401.11113v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.11723v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.13998v2.json ADDED
@@ -0,0 +1,403 @@
1
+ {
2
+ "title": "WAL-Net: Weakly supervised auxiliary task learning network for carotid plaques classification",
3
+ "abstract": "The classification of carotid artery ultrasound images is a crucial means for diagnosing carotid plaques, holding significant clinical relevance for predicting the risk of stroke. Recent research suggests that utilizing plaque segmentation as an auxiliary task for classification can enhance performance by leveraging the correlation between segmentation and classification tasks. However, this approach relies on obtaining a substantial amount of challenging-to-acquire segmentation annotations. This paper proposes a novel weakly supervised auxiliary task learning network model (WAL-Net) to explore the interdependence between carotid plaque classification and segmentation tasks. The plaque classification task is the primary task, while the plaque segmentation task serves as an auxiliary task, providing valuable information to enhance the performance of the primary task. Weakly supervised learning is adopted in the auxiliary task to eliminate the dependence on segmentation annotations entirely. Experiments and evaluations are conducted on a dataset comprising 1270 carotid plaque ultrasound images from Wuhan University Zhongnan Hospital. Results indicate that the proposed method achieved an approximately 1.3% improvement in carotid plaque classification accuracy compared to the baseline network. Specifically, the accuracy of mixed-echoic plaques classification increased by approximately 3.3%, demonstrating the effectiveness of our approach.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Ischemic stroke stands as a primary cause of disability and mortality among cardiovascular disease patients globally (Beaglehole and Bonita, 2008 ###reference_b2###). Atherosclerosis constitutes the predominant pathological process leading to the majority of ischemic strokes, with carotid artery plaques being a manifestation of atherosclerosis. The rupture and detachment of carotid artery plaques contribute to thrombosis or vascular stenosis, thereby precipitating ischemic stroke events. In the year 2020, approximately 21.1% of the global population aged 30 to 79 exhibited carotid artery plaques (Song et al., 2020 ###reference_b19###). Based on distinct ultrasound echo characteristics, carotid artery plaques can be categorized into three types: hypoechoic plaques, hyperechoic plaques, and mixed-echoic plaques (AbuRahma et al., 2002 ###reference_b1###) (Olender et al., 2022 ###reference_b15###). Among these plaques, hyperechoic plaques are relatively stable, while the remaining two types are prone to rupture, leading to ischemic stroke. Consequently, the study, identification, and treatment of carotid artery plaque categories are of paramount importance. Ultrasound examination at peripheral arteries and carotid arteries is currently the preferred non-invasive method for carotid artery assessment. Widely employed for screening and follow-up of carotid artery atherosclerotic lesions, ultrasound examination reveals the location and size of plaques, as well as the site and severity of luminal stenosis. However, this process demands high levels of concentration from medical professionals, leading to the potential for misdiagnosis and incurring significant time costs. 
The utilization of deep learning for auxiliary diagnosis emerges as a viable solution, addressing the aforementioned issues while also offering potential enhancements in diagnostic accuracy and efficiency.\nThe segmentation and classification tasks of carotid artery plaque ultrasound images are two key components in deep learning-based processing of carotid artery plaque ultrasound images. The purpose of the classification task is to diagnose the exact category of carotid artery plaques (e.g., hypoechoic plaques or hyperechoic plaques), while the segmentation task is employed to detect the precise location and shape of carotid artery plaques. In practice, there exists a certain degree of correlation between the segmentation and classification tasks of carotid artery plaques. For instance, the output of the segmentation task can be utilized to enhance the weight of lesion areas in the features for the classification task, thereby improving classification performance. Currently, there are two main approaches for leveraging segmentation tasks to enhance classification task performance: non-end-to-end methods and end-to-end methods. Non-end-to-end methods involve training two or more separate models to perform segmentation and classification tasks independently. The output of one model is then used to enhance the output of another model. In the field of medical image processing, various methods of this kind have been proposed. For example, Miao Wang et al. introduced a parallel polyp segmentation and classification method to explore the correlation between the two tasks (Wang et al., 2023b ###reference_b24###). This method utilizes the preliminary segmentation of samples as an additional channel input to the classification network, enhancing the classification performance. 
Amirreza Mahbod and colleagues investigated the impact of different approaches to handling segmentation tasks on classification tasks in the context of skin disease image processing (Mahbod et al., 2020 ###reference_b14###). In contrast, end-to-end methods design a unified network model capable of simultaneously executing different tasks through multi-task learning. These methods typically share parameters and loss functions among different tasks to learn common and useful information. For instance, He et al. proposed the Lesion Area Extraction (LAE) Module, which employs an expansive lesion area cropping strategy to filter background noise from classification features, thus improving classification performance (He et al., 2023 ###reference_b11###). Ou et al. introduced a multi-task network model that simultaneously performs carotid artery plaque classification and segmentation (Ou et al., 2022 ###reference_b16###). In this approach, the segmentation task identifies the pathological areas of carotid artery plaques, and then the weights of these areas in the features for the classification task are reinforced to improve classification performance. In summary, non-end-to-end methods may face the challenge of high training costs as they require training additional models for different tasks. On the other hand, end-to-end methods need to better exploit the correlation between classification and segmentation tasks to avoid the learned features being overly negatively influenced by different task objectives (Zhang and Yang, 2021 ###reference_b28###).\nWhile designing network models to perform multiple tasks can leverage segmentation tasks to enhance the effectiveness of classification tasks, obtaining segmentation labels for carotid artery plaque ultrasound images is challenging and requires a significant time investment from experts or medical professionals. 
In our previous work, we addressed this challenge by employing semi-supervised learning for the segmentation task of carotid artery plaque ultrasound images, aiming to reduce dependence on segmentation annotations (Fu et al., 2023 ###reference_b8###). This approach yielded promising results. However, considering our primary focus on the performance of the network model in the main task, namely carotid artery plaque classification, it is essential to treat the segmentation task as a purer auxiliary task. For example, weakly supervised learning can be explored for the segmentation task, allowing it to be performed even in the absence of segmentation annotations.\nBased on the considerations mentioned above, this paper proposes a novel end-to-end multi-task learning network model for carotid artery plaque classification, named the Weakly supervised Auxiliary Learning Network (WAL-Net). WAL-Net introduces an auxiliary task, namely weakly supervised segmentation, for the primary task of carotid artery plaque classification. The purpose of the auxiliary task is to identify the specific location and shape of the pathological regions of carotid artery plaques in ultrasound images. WAL-Net comprises a shared encoder, a decoder for segmentation, and a classification task head. The weakly supervised segmentation task is supervised by the information generated from the proposed Pseudo mask Generation Module (PGM), where localization information is obtained through attention methods and affinity is guided by a superpixel method. The shared encoder is responsible for extracting multidimensional features of carotid artery plaques. Additionally, WAL-Net incorporates a Region of Interest cropping Module (RCM), utilizing the output predictions from the segmentation decoder to obtain the location information of the lesion plaques. This information is then used to enhance the classification features in the encoder. 
The enhanced classification features are input into the classification task head to improve the performance of the classification task. The overall architecture of WAL-Net is designed to seamlessly integrate the weakly supervised segmentation task as a purer auxiliary task, with the shared encoder efficiently capturing relevant features for both segmentation and classification tasks.\nIn summary, our work entails the following contributions:\nWAL-Net seamlessly integrates supervised classification tasks with weakly supervised segmentation tasks into a unified multi-task learning model. In comparison to non-end-to-end methodologies, WAL-Net is proficient in concurrently executing classification and segmentation tasks.\nOur proposed Pseudo mask Generation Module (PGM) amalgamates attention methods with superpixel methods to generate pseudo-masks. Employing weakly supervised learning enhances segmentation outcomes, eliminating the need for reliance on segmentation annotations.\nOur proposed Region of Interest cropping Module (RCM) adaptively acquires Regions of Interest (ROI) from features of varying scales and subsequently enhances them. This explicit utilization of inter-task correlations in multi-task learning proves to be instrumental.\nExperimental results demonstrate that WAL-Net achieves superior performance in carotid artery plaque classification. Furthermore, an ablation study corroborates the efficacy of each module within the proposed framework.\nThe remaining structure of the manuscript is as follows: after introducing carotid artery plaques in the medical domain, Section 2 discusses recent investigations in this field. Subsequently, Section 3 delves into the materials and methods employed in the manuscript. Section 4 presents the results of the research findings, and finally, Section 5 provides a summary of the study."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "To date, numerous researchers have proposed various methods for the classification of carotid artery plaques. In conventional classification approaches, Ceylan et al. (Ceylan et al., 2007 ###reference_b3###) employed Principal Component Analysis (PCA) and a Complex-Valued Artificial Neural Network (CVANN) for the classification of carotid artery Doppler ultrasound signals. Tsiaparas et al. (Tsiaparas et al., 2012 ###reference_b22###) utilized Support Vector Machines (SVM) to assess the capability of three multiscale transformation methods in extracting features related to carotid artery atherosclerotic plaques. Chaudhry et al. (Chaudhry et al., 2013 ###reference_b4###) introduced a technique that employs SVM and intima-media thickness as a feature vector for the classification of carotid artery ultrasound images. However, these traditional machine learning methods predominantly rely on one or more handcrafted features, rendering them incapable of accurately and comprehensively describing the state of carotid artery plaques.\nAuxiliary task learning is a form of multi-task learning (MTL) (Ruder, 2017 ###reference_b17###) aimed at benefiting from multiple tasks by incorporating suitable auxiliary tasks. Auxiliary task learning exhibits superior generalization characteristics compared to single-task learning. For instance, Zhang et al. (Zhang et al., 2015 ###reference_b29###) proposed jointly learning facial fine-grained features and head pose features in a network model, treating the learning of head pose features as an auxiliary task to address the challenges of facial image feature learning under conditions such as image occlusion or pose variations. He et al. (He et al., 2023 ###reference_b11###) introduced a method for lesion segmentation and classification in skin disease images, along with edge segmentation, to explore correlations among multiple tasks. 
The edge segmentation task in skin disease images was utilized as an auxiliary task to leverage edge information and enhance the edge-related features in the image segmentation task. Liebel et al. (Liebel and K\u00f6rner, 2018 ###reference_b13###), in their study on auxiliary task learning, observed that auxiliary tasks should be chosen among tasks that are easy to learn and whose annotations are easy to obtain. Appropriately chosen, seemingly unrelated auxiliary tasks can significantly enhance the performance of the primary task.\nWeakly supervised segmentation, as an application within weakly supervised learning (Zhou, 2018 ###reference_b30###), aims to enhance training effectiveness in scenarios with limited supervisory information or to reduce dependence on such supervision. The weakly supervised segmentation adopted in this paper is based on image-level labels. Numerous studies in recent years have addressed weakly supervised segmentation using image-level labels; for instance, Yuliang Zou et al. (Zou et al., 2020 ###reference_b31###) utilized characteristics from semi-supervised learning, merging pseudo-labels generated from diverse sources and various data augmentations to improve the effectiveness of weakly supervised segmentation. However, since this paper necessitates leveraging weakly supervised auxiliary tasks to enhance the primary task\u2019s performance, most current non-end-to-end weakly supervised learning methods are unsuitable. Approaches based on Class Activation Maps (CAM) require an initial backward pass of the classification loss before they can be applied. For example, Sun et al. (Sun et al., 2021 ###reference_b20###) proposed improving CAM performance using erased images to obtain more accurate pseudo-masks, a method incompatible with end-to-end auxiliary task learning. Another approach involves employing attention mechanisms to guide weakly supervised segmentation. Kunpeng Li et al. 
(Li et al., 2018 ###reference_b12###), to address the inability to employ end-to-end learning in weakly supervised segmentation, proposed using attention maps as priors for feature localization and semantic segmentation tasks. This method extracts localization and segmentation information from attention maps without the need for additional segmentation labels. Sheng Yi et al. (Yi et al., 2022 ###reference_b26###) employed superpixels to guide semantic affinity between pixels, amalgamating superpixel and localization information. This method not only exploits the localization information provided by CAM but also considers appearance information derived from the superpixel method.\n###figure_1###"
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": "WAL-Net comprises a shared encoder and two distinct task heads. In WAL-Net, the weakly supervised segmentation task serves as an auxiliary task with the aim of enhancing the performance of the primary task, namely the classification task. As illustrated in Fig.1 ###reference_###, preprocessed samples of carotid artery ultrasound images are utilized as inputs, and a shared encoder is employed to extract features at different scales. Subsequently, deep and shallow features are fused in the segmentation decoder (utilizing the decoder method proposed by Deeplabv3+ (Chen et al., 2018 ###reference_b5###)) to obtain corresponding segmentation predictions. The role of segmentation predictions is to assist the execution of the classification task. After obtaining features at different scales, attention gates (Schlemper et al., 2018 ###reference_b18###) are employed by the shared encoder to generate attention maps at different scales, enhancing the features. Subsequently, the attention-enriched features and segmentation predictions are jointly fed into the RCM to obtain multi-dimensional features of the amplified region of interest. These features are then input into the classification task head to obtain the final classification prediction. In WAL-Net, the classification and segmentation tasks collaborate explicitly, leveraging certain characteristics of multi-task learning to improve overall performance. Each module will be detailed below."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "The weakly supervised segmentation task",
27
+ "text": "The purpose of the auxiliary task is to learn features that are beneficial for the primary task and provide certain enhancements to the primary task. In our case, we employ the weakly supervised segmentation task on carotid artery ultrasound images as the auxiliary task. The encoder of the segmentation network is shared with the encoder of the primary task. The decoder structure follows deeplabv3+ (Chen et al., 2018 ###reference_b5###), which combines shallow and deep features to improve the final segmentation prediction. We introduce the PGM to generate pseudo masks, which are used to supervise the segmentation predictions.\nPseudo mask Generation Module (PGM): The process of generating pseudo masks by the PGM is illustrated in Fig.2 ###reference_###. In WAL-Net, the input image is initially processed by Felzenszwalb\u2019s superpixel segmentation method (Felzenszwalb and Huttenlocher, 2004 ###reference_b7###), resulting in a superpixel map. The attention maps at different scales, obtained through attention mechanisms in the shared encoder, are then fused and regionally averaged, assigning weighted values to the superpixel map. After binarization, this produces a weighted segmentation map, serving as the supervision for the weakly supervised segmentation task, as depicted in Fig.2 ###reference_###. The attention method provides localization information for the pseudo mask, while the superpixel method guides the affinity. The module combines attention and superpixel methods to generate the pseudo mask.\n###figure_2### The fusion of attention maps at three different hierarchical levels is achieved through element-wise multiplication (as expressed in Eq.1 ###reference_###). This choice is made because the attention map for shallow features tends to have clearer contours, while the attention map for deep features provides more accurate positioning. Combining attention maps at different levels yields more complete and accurate positional information. 
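The fuse-average-binarize pipeline of the PGM described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not WAL-Net training code: the function name `generate_pseudo_mask`, the 0.5 threshold default, and the precomputed superpixel label map (which in WAL-Net would come from Felzenszwalb segmentation, e.g. `skimage.segmentation.felzenszwalb`) are assumptions made for clarity.

```python
import numpy as np

def generate_pseudo_mask(att_maps, superpixels, threshold=0.5):
    """Illustrative PGM sketch: fuse per-level attention maps, average
    within superpixel regions, and binarize into a pseudo mask.

    att_maps    -- list of HxW attention maps in [0, 1], one per level
    superpixels -- HxW integer superpixel label map
    """
    # Element-wise fusion of the per-level attention maps (Eq.1).
    fused = np.ones_like(att_maps[0], dtype=float)
    for a in att_maps:
        fused = fused * a
    # Average the fused map within each superpixel region (Eq.2).
    weighted = np.zeros_like(fused)
    for label in np.unique(superpixels):
        region = superpixels == label
        weighted[region] = fused[region].mean()
    # Binarize to obtain the pseudo mask that supervises segmentation.
    return (weighted >= threshold).astype(np.uint8)
```

The region-wise averaging is what injects appearance affinity: every pixel of a superpixel receives the same score, so the binarized mask follows superpixel boundaries rather than the blurry attention contours.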
After the fusion of attention maps at different levels, averaging is performed within the segmented regions of the superpixel map, resulting in a fused segmentation map, as indicated by Eq.2 ###reference_###.\nEq.1: A_i = A_i^1 ⊙ A_i^2 ⊙ A_i^3\nHere, i ∈ {1, ..., B}, where B represents the number of training samples in a batch and i indexes the i-th sample. A_i^1, A_i^2, and A_i^3 represent the attention maps at the three hierarchical levels. The fused attention map A_i is obtained by multiplying the attention maps from the three levels element-wise.\nEq.2: M_i(r_k) = (1/|r_k|) Σ_{p∈r_k} A_i(p)\nHere, S_i represents the superpixel map obtained through the Felzenszwalb method, with regions r_1, ..., r_K, where K represents the number of segmentation regions in S_i, r_k the k-th segmentation region, and |r_k| the number of pixels in r_k. The fused attention map A_i is averaged over each corresponding segmentation region of the superpixel map S_i, and the mean is assigned to that region. After this operation, a pseudo mask Y_i is obtained through binarization, which is then used to supervise the weakly supervised segmentation task. The cross-entropy loss function for this task is formulated as Eq.3 ###reference_###.\nEq.3: L_seg = -(1/N) Σ_{i=1..N} Σ_j [Y_{i,j} log(P_{i,j}) + (1−Y_{i,j}) log(1−P_{i,j})]\nHere, N represents the number of training samples in the dataset, Y_i represents the pseudo mask, and P_i represents the segmentation prediction made by the network for the i-th training sample. Y_{i,j} and P_{i,j} denote the j-th pixel value of the i-th sample in Y and P, respectively."
28
+ },
29
+ {
30
+ "section_id": "3.1.1",
31
+ "parent_section_id": "3.1",
32
+ "section_name": "3.1.1 The classification task",
33
+ "text": "WAL-Net focuses on the primary task of classifying carotid artery plaque ultrasound images, which involves categorizing plaques into three distinct types: hypoechoic plaques, hyperechoic plaques, and mixed-echoic plaques. The classification network employed by WAL-Net bears similarity to the architecture introduced in resnest (Zhang et al., 2022 ###reference_b27###), allowing it to extract profound features from carotid artery ultrasound images. Furthermore, WAL-Net incorporates Attention Gates (Schlemper et al., 2018 ###reference_b18###) into its encoder, as proposed by Jo Schlemper et al. This attention mechanism is well-suited for medical image analysis and provides detailed, visually interpretable attention maps. Recognizing the utility of segmentation predictions obtained during the auxiliary task for the primary task, we introduce the RCM to explicitly enhance the classification task using segmentation predictions.\nROI Cropping Module (RCM): Upon obtaining segmentation predictions for carotid artery plaque ultrasound images during the auxiliary task, both the segmentation predictions and features from different depths are fed into the RCM. First, the RCM binarizes the segmentation predictions (with the binarization threshold set to 0.5 in this paper: values greater than or equal to 0.5 are set to 1, while values less than 0.5 are set to 0). Next, the RCM obtains the bounding box of the lesion area in the segmentation predictions. The bounding box is expanded by a margin d in all directions (with d set to 1/7 of the matrix size in this paper) to retain a portion of the normal vascular wall or other background. Finally, the bounding box is resized uniformly and serves as input for the classification task head. In the classification task head, WAL-Net separately feeds features from different depths into fully connected layers, and the average of the results is taken as the final output. 
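The binarize-box-dilate-crop-resize steps of the RCM can be sketched as follows. This is a simplified single-channel sketch under stated assumptions: `roi_crop`, the `out_size` default, and the nearest-neighbour resize are illustrative stand-ins for whatever interpolation and tensor layout the real model uses; the 1/7 margin follows the paper.

```python
import numpy as np

def roi_crop(feature, seg_pred, out_size=56):
    """Illustrative RCM sketch: crop the feature map around the
    predicted lesion with a dilated bounding box, then resize."""
    h, w = seg_pred.shape
    # Binarize the segmentation prediction at 0.5 and locate foreground.
    ys, xs = np.nonzero(seg_pred >= 0.5)
    if ys.size == 0:
        # No lesion predicted: fall back to the full feature map.
        y0, y1, x0, x1 = 0, h, 0, w
    else:
        # Bounding box of the lesion, dilated by 1/7 of the map size
        # to keep some normal vessel wall / background context.
        my, mx = h // 7, w // 7
        y0, y1 = max(ys.min() - my, 0), min(ys.max() + 1 + my, h)
        x0, x1 = max(xs.min() - mx, 0), min(xs.max() + 1 + mx, w)
    crop = feature[y0:y1, x0:x1]
    # Nearest-neighbour resize to a fixed spatial size.
    rows = np.arange(out_size) * crop.shape[0] // out_size
    cols = np.arange(out_size) * crop.shape[1] // out_size
    return crop[np.ix_(rows, cols)]
```

In WAL-Net this operation is applied to features at several depths, and each resized crop feeds its own fully connected layer before the outputs are averaged.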
The loss function for the classification task is the cross-entropy loss, which calculates the difference between the predicted lesion classification and the actual lesion type. The loss function is expressed as Eq.4 ###reference_###.\nEq.4: L_cls = -(1/N) Σ_{i=1..N} Σ_{c=1..C} y_{i,c} log(p_{i,c})\nHere, N represents the number of training samples in the dataset, and C represents the number of categories for each sample (C = 3). When the category index c is equal to 1, 2, or 3, it represents the true carotid artery plaque category for that sample as hyperechoic plaque, hypoechoic plaque, or mixed-echoic plaque, respectively. For each input sample i, the classification task network outputs a classification prediction p_i, where p_{i,c} represents the probability of type c in the classification prediction, and y_{i,c} represents the value of the classification label.\nThe total loss of WAL-Net is the sum of the classification loss L_cls and the segmentation loss L_seg, as shown in Eq.5 ###reference_###.\nEq.5: L_total = L_cls + L_seg"
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Experiments and Results",
39
+ "text": "###figure_3### The data used for evaluating experiments in this paper is derived from a carotid artery ultrasound image dataset obtained from Zhongnan Hospital of Wuhan University. The study received approval from the Institutional Review Board (IRB) of the hospital. Each lesion image in the dataset is accompanied by a corresponding lesion category label. The dataset comprises a total of 1,270 carotid artery ultrasound images collected from 844 patients by ultrasound imaging experts. Among these images, there are 301 hyperechoic plaque images, 605 hypoechoic plaque images, and 364 mixed-echoic plaque images. For evaluation and experimentation purposes, the dataset is split into training, validation, and test sets in a 6:2:2 ratio. The code used in the experimental section of this paper has been uploaded to \u2019https://github.com/a610lab/WAL-Net\u2019."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Data Preprocessing",
45
+ "text": "Due to variations in the sizes of the regions of interest in each sample of carotid artery ultrasound images (e.g., the smallest sample size is 19x29, while the largest is 134x564), this study standardizes all samples to a uniform dimension (224x224 in the experiments). The following preprocessing steps are applied to generate input samples for the network model from the dataset: (1) Extract the region where the provided plaque is located in the carotid artery plaque ultrasound image, obtaining a rough region-of-interest image for each plaque as provided by the medical experts.\n(2) Resize the images obtained in step (1) to a consistent size of 224x224. The preprocessing steps are illustrated in Fig.3 ###reference_###."
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "Experimental Setup",
51
+ "text": "The experiments conducted in this study were implemented using the Python programming language and the PyTorch framework. The optimizer used in the experiments was the Adam optimizer with a learning rate of 0.0001. A batch size of 8 was selected for the experiments. All experimental results are the averages obtained after five random experiments on the same device.\nFive evaluation metrics were defined for the classification of carotid artery plaque ultrasound images, including accuracy, F1-score, kappa, precision, and recall."
52
+ },
53
+ {
54
+ "section_id": "4.3",
55
+ "parent_section_id": "4",
56
+ "section_name": "Comparison with Other Classification Methods",
57
+ "text": "In our dataset, WAL-Net was compared with several state-of-the-art classification methods, including Convnext-v2[23], DPN[24], Repvit[25], Sequencer[26], Rexnet[27], Res2net[28], and Resnest[22]. The experimental results for these comparative methods were obtained by running their respective source codes.\n###table_1###\nTable 1 presents the comparative experimental results for the classification networks. WAL-Net achieved the best performance. Specifically, compared to the second-best performer Resnest (which is also the backbone network used in WAL-Net), WAL-Net demonstrated an improvement of approximately 1.3% in the accuracy metric, highlighting the effectiveness of WAL-Net.\n###figure_4### ###figure_5### ###figure_6### ###figure_7### Our proposed WAL-Net demonstrates superior performance compared to the baseline network (Resnest[22]) on the dataset of carotid artery plaque ultrasound images. The ROC curves in Fig.4 ###reference_### illustrate the performance of WAL-Net and the baseline network in classifying images from different categories in the dataset. It can be observed from the figure that WAL-Net outperforms the baseline network in all categories, especially in the recognition of hypoechoic plaques and mixed-echoic plaques. 
The performance difference between the two is not significant in the identification of hyperechoic plaques, possibly because this category inherently has a high recognition accuracy, with the ROC curve area exceeding 0.98. Overall, WAL-Net exhibits a noticeable improvement in performance compared to the baseline network, demonstrating the superiority of our approach.\n###figure_8### ###figure_9### In carotid artery plaque ultrasound images, the convolutional network model faces varying levels of difficulty in learning between different categories. This discrepancy is partially due to the uneven distribution of samples among categories, differences in feature extraction difficulty for images from distinct categories, and other contributing factors. Fig.5 ###reference_### presents the confusion matrix of accuracy for different echo categories between WAL-Net and the baseline network. As observed in the figure, WAL-Net demonstrates a substantial improvement in recognition accuracy for hypoechoic plaques and mixed-echoic plaques, with an approximate increase of 3.3% in accuracy for mixed-echoic plaques. The improvement in accuracy is less pronounced for hyperechoic plaques. Overall, WAL-Net achieves increased prediction accuracy across all three categories compared to the baseline."
58
+ },
59
+ {
60
+ "section_id": "4.4",
61
+ "parent_section_id": "4",
62
+ "section_name": "Visualization of Weakly Supervised Segmentation Results",
63
+ "text": "In Fig.6 ###reference_###, we present visualization examples illustrating the pseudo-segmentation labels, segmentation predictions, and true segmentation labels obtained through our proposed weakly supervised segmentation method. As depicted in the figure, the segmentation predictions generated by our model effectively delineate plaque regions in ultrasound images, successfully suppressing noise originating from the vessel wall. This visualization highlights the model\u2019s ability to achieve accurate segmentation predictions without relying on true segmentation labels, showcasing its competitive performance.\n###figure_10###"
64
+ },
65
+ {
66
+ "section_id": "4.5",
67
+ "parent_section_id": "4",
68
+ "section_name": "Ablation Study",
69
+ "text": "In Table 2, we compare the experimental results of WAL-Net with different modules against the Resnest 50 baseline. The accuracy of the baseline Resnest 50 is 85.1%. When incorporating an attention mechanism into the backbone network, the accuracy improves to 85.5%. Furthermore, by adding the weakly supervised segmentation auxiliary task, WAL-Net achieves an accuracy of 86.4%. These results demonstrate the effectiveness of both the attention mechanism and the weakly supervised segmentation auxiliary task in enhancing the recognition of carotid artery plaque ultrasound images.\n###table_2###\nMethod | Accuracy | F1-score | Kappa | Precision | Recall\nResnest 50 (baseline) | 0.8513 (0.018) | 0.8473 (0.018) | 0.7641 (0.028) | 0.8570 (0.017) | 0.8437 (0.018)\n+ attention mechanism | 0.8554 (0.015) | 0.8498 (0.016) | 0.7718 (0.024) | 0.8552 (0.014) | 0.8501 (0.014)\n+ weakly supervised segmentation (WAL-Net) | 0.8644 (0.011) | 0.8597 (0.011) | 0.7856 (0.016) | 0.8671 (0.017) | 0.8574 (0.009)"
70
+ },
71
+ {
72
+ "section_id": "4.6",
73
+ "parent_section_id": "4",
74
+ "section_name": "Comparison with Different ROI Augmentation Methods",
75
+ "text": "In the work conducted by Amirreza Mahbod et al. (Mahbod et al., 2020 ###reference_b14###), various methods of leveraging segmentation for classification were experimentally explored, and a particularly effective non-end-to-end approach was identified. Extending this methodology to end-to-end networks, as demonstrated by He et al. (He et al., 2023 ###reference_b11###), involved similar operations on the high-level features of classification networks, yielding favorable outcomes. To validate the applicability of this approach to the specific context of carotid artery ultrasound image datasets, we conducted experiments employing different segmentation strategies. As shown in Table 3, \u2019bg rm\u2019 represents the removal of background values from features; \u2019bg rm&crop\u2019 represents eliminating background values and cropping the foreground to a fixed size; \u2019crop\u2019 signifies no removal of background values but direct cropping of the foreground to a fixed size; and \u2019rwm\u2019 refers to the method proposed by (Fu et al., 2023 ###reference_b8###), which multiplies the segmentation predictions with the high-level classification features, emphasizing foreground weights while diminishing background weights. \u2019dilated crop\u2019 represents cropping the foreground together with a portion of the surrounding background and then resizing to a fixed size, which is the method adopted in this paper. 
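The compared strategies can be contrasted as small tensor operations. This is an illustrative sketch, not the exact implementations from the cited works: the function names are hypothetical, and a 2-D single-channel feature map stands in for the real multi-channel features.

```python
import numpy as np

def bg_rm(feature, mask):
    # 'bg rm': zero out background activations with the binary mask.
    return feature * mask

def rwm(feature, seg_pred):
    # 'rwm': re-weight features with the soft segmentation prediction,
    # boosting foreground weights and suppressing background weights.
    return feature * seg_pred

def dilated_crop_box(mask, margin):
    # 'dilated crop' (ours): bounding box of the foreground, expanded
    # by `margin` pixels on every side, later cropped and resized.
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(ys.min() - margin, 0), min(ys.max() + 1 + margin, h),
            max(xs.min() - margin, 0), min(xs.max() + 1 + margin, w))
```

The key difference is that the first two strategies suppress background information entirely or softly, while the dilated crop deliberately preserves a ring of vessel-wall context around the lesion.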
The results indicate the superiority of our proposed method over the various strategies in the given context.\n###table_3###\nIn Table 3, our adopted segmentation-influencing classification method demonstrates superior performance, outperforming the second-best rwm method by approximately 1.8% in accuracy. This substantiates the effectiveness of our proposed approach."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "In this paper, we posit that judiciously harnessing the intrinsic correlations between different tasks in auxiliary task learning is crucial for improving the classification results of carotid plaque ultrasound images. Consequently, we introduce a novel Weakly supervised Auxiliary Learning Network (WAL-Net) comprising a shared encoder, a classification task head, and a weakly supervised segmentation decoder. In contrast to traditional classification approaches, WAL-Net incorporates an auxiliary task based on weakly supervised learning, namely the segmentation task. Exploiting the auxiliary task, we explicitly enhance the classification task using the RCM, thereby improving its performance. We also design a module (the PGM) to supervise the weakly supervised auxiliary task, combining an unsupervised superpixel method with attention mechanisms to generate pseudo-segmentation labels, thereby completely alleviating dependence on real segmentation labels while still achieving satisfactory segmentation results. Various experiments on the carotid ultrasound dataset demonstrate the effectiveness of our approach.\nIt is worth noting that optimizing the PGM to obtain better pseudo-segmentation labels without relying on real labels will make the auxiliary task more effective and precise. In future work, we plan to extend the PGM to enhance the performance of the auxiliary task. Furthermore, while our method is specifically applied to carotid plaque ultrasound images, WAL-Net and its individual sub-modules are generalizable and hold potential for application in other image classification domains."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Performance comparison of classification methods on the carotid artery ultrasound image dataset. The best and the 2nd best results are marked in <span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.14.1\">bold</span> and .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.12\">\n<tr class=\"ltx_tr\" id=\"S4.T1.7.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_t\" id=\"S4.T1.7.5.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.7.5.6.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T1.3.1.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.3.1.1.1.1.1\">Accuracy </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T1.4.2.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.4.2.2.1.1.1\">F1-score </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T1.5.3.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.5.3.3.1.1.1\">Kappa </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T1.6.4.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.6.4.4.1.1.1\">Precision </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T1.7.5.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T1.7.5.5.1.1.1\">Recall </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.11\">\n<td class=\"ltx_td ltx_align_left 
ltx_border_tt\" id=\"S4.T1.12.11.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">ConvNext-V2(2023) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Woo et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib25\" title=\"\">2023</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T1.12.11.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.11.2.1\">0.6149 (0.065)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T1.12.11.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.11.3.1\">0.6073 (0.162)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T1.12.11.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.11.4.1\">0.3884 (0.162)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T1.12.11.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.11.5.1\">0.6262 (0.194)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T1.12.11.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.11.6.1\">0.6001 (0.111)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.12.12.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">DPN(2020) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Chen et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib6\" title=\"\">2017</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.12.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.12.2.1\">0.7762 (0.021)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.12.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p 
class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.12.3.1\">0.7665 (0.023)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.12.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.12.4.1\">0.6378 (0.035)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.12.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.12.5.1\">0.7993 (0.027)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.12.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.12.6.1\">0.7536 (0.024)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.12.13.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">RepVit(2023) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Wang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib23\" title=\"\">2023a</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.13.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.13.2.1\">0.7042 (0.019)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.13.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.13.3.1\">0.6927 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.13.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.13.4.1\">0.5226 (0.027)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.13.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.13.5.1\">0.7205 (0.021)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.13.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.13.6.1\">0.6863 
(0.013)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.12.14.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">Sequencer (2022) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Tatsunami and Taki, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib21\" title=\"\">2022</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.14.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.14.2.1\">0.6728 (0.022)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.14.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.14.3.1\">0.6702 (0.029)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.14.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.14.4.1\">0.4786 (0.038)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.14.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.14.5.1\">0.7004 (0.022)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.14.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.14.6.1\">0.6561 (0.032)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.12.15.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">RexNet (2020) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Han et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib10\" title=\"\">2020</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.15.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.15.2.1\">0.6444 (0.020)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.15.3\" 
style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.15.3.1\">0.6370 (0.026)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.15.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.15.4.1\">0.4346 (0.036)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.15.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.15.5.1\">0.6498 (0.023)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.15.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.15.6.1\">0.6416 (0.029)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.12.16.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">Res2Net(2019) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Gao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib9\" title=\"\">2019</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.16.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.16.2.1\">0.8046 (0.018)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.16.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.16.3.1\">0.8002 (0.016)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.16.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.16.4.1\">0.6860 (0.029)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.16.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.16.5.1\">0.8245 (0.024)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.16.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" 
id=\"S4.T1.12.16.6.1\">0.7899 (0.017)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.12.10.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">ResNeSt(2022) \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Zhang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib27\" title=\"\">2022</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.8.6.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.8.6.1.1.1\"> (0.018)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.9.7.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.9.7.2.1.1\"> (0.018)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.10.8.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.10.8.3.1.1\"> (0.028)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.11.9.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.11.9.4.1.1\"> (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T1.12.10.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.10.5.1.1\"> (0.018)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_b\" id=\"S4.T1.12.17.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.17.1.1\">WAL-Net (Ours)</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T1.12.17.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.17.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.17.2.1.1\">0.8644</span> (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T1.12.17.3\" 
style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.17.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.17.3.1.1\">0.8597</span> (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T1.12.17.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.17.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.17.4.1.1\">0.7856</span> (0.016)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T1.12.17.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.17.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.17.5.1.1\">0.8671</span> (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T1.12.17.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T1.12.17.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.12.17.6.1.1\">0.8574</span> (0.009)</p>\n</td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 1: Performance comparison of classification methods on the carotid artery ultrasound image dataset. The best and the 2nd best results are marked in bold and ."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Ablation study for the RCM module and PGM module on the carotid artery ultrasound image dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.5\">\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_t\" id=\"S4.T2.5.5.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.5.6.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T2.1.1.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.1.1.1.1.1.1\">Accuracy </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T2.2.2.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.2.2.2.1.1.1\">F1-score </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T2.3.3.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.3.3.3.1.1.1\">Kappa </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T2.4.4.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.4.4.4.1.1.1\">Precision </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T2.5.5.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T2.5.5.5.1.1.1\">Recall </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.5.6.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">w/o RCM &amp; PGM</td>\n<td class=\"ltx_td 
ltx_align_justify ltx_border_tt\" id=\"S4.T2.5.6.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.6.2.1\">0.8513 (0.018)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T2.5.6.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.6.3.1\">0.8473 (0.018)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T2.5.6.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.6.4.1\">0.7641 (0.028)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T2.5.6.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.6.5.1\">0.8570 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T2.5.6.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.6.6.1\">0.8437 (0.018)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.5.7.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">w/o RCM</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.5.7.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.7.2.1\">0.8554 (0.015)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.5.7.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.7.3.1\">0.8498 (0.016)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.5.7.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.7.4.1\">0.7718 (0.024)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.5.7.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.7.5.1\">0.8552 (0.014)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T2.5.7.6\" 
style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.7.6.1\">0.8501 (0.014)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_b\" id=\"S4.T2.5.8.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">WAL-Net</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T2.5.8.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.8.2.1\">0.8644 (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T2.5.8.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.8.3.1\">0.8597 (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T2.5.8.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.8.4.1\">0.7856 (0.016)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T2.5.8.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.8.5.1\">0.8671 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T2.5.8.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T2.5.8.6.1\">0.8574 (0.009)</p>\n</td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 2: Ablation study for the RCM module and PGM module on the carotid artery ultrasound image dataset."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Performance comparison of different ROI augmentation methods for RCM module.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.5\">\n<tr class=\"ltx_tr\" id=\"S4.T3.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_t\" id=\"S4.T3.5.5.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.5.6.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T3.1.1.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T3.1.1.1.1.1.1\">Accuracy </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T3.2.2.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T3.2.2.2.1.1.1\">F1-score </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T3.3.3.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T3.3.3.3.1.1.1\">Kappa </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T3.4.4.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T3.4.4.4.1.1.1\">Precision </span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt ltx_border_t\" id=\"S4.T3.5.5.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S4.T3.5.5.5.1.1.1\">Recall </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T3.5.6.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">rwm \u00a0<cite class=\"ltx_cite ltx_citemacro_citep\">(Fu et\u00a0al., <a 
class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.13998v2#bib.bib8\" title=\"\">2023</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T3.5.6.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.6.2.1\">0.8483 (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T3.5.6.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.6.3.1\">0.8438 (0.012)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T3.5.6.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.6.4.1\">0.7593 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T3.5.6.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.6.5.1\">0.8526 (0.015)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"S4.T3.5.6.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.6.6.1\">0.8394 (0.013)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.5.7.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">bg rm</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.7.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.7.2.1\">0.8100 (0.020)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.7.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.7.3.1\">0.8025 (0.024)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.7.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.7.4.1\">0.6932 (0.036)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.7.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S4.T3.5.7.5.1\">0.8335 (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.7.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.7.6.1\">0.7884 (0.031)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.5.8.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">bg rm &amp; crop</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.8.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.8.2.1\">0.8352 (0.016)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.8.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.8.3.1\">0.8305 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.8.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.8.4.1\">0.7397 (0.025)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.8.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.8.5.1\">0.8358 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.8.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.8.6.1\">0.8297 (0.018)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.5.9.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">crop</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.9.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.9.2.1\">0.8452 (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.9.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.9.3.1\">0.8395 (0.018)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.9.4\" 
style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.9.4.1\">0.7542 (0.029)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.9.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.9.5.1\">0.8496 (0.015)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S4.T3.5.9.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.9.6.1\">0.8361 (0.021)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_b\" id=\"S4.T3.5.10.1\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.1.1\">dilated crop</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T3.5.10.2\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.10.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.2.1.1\">0.8644</span> (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T3.5.10.3\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.10.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.3.1.1\">0.8597</span> (0.011)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T3.5.10.4\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.10.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.4.1.1\">0.7856</span> (0.016)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_b\" id=\"S4.T3.5.10.5\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.10.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.5.1.1\">0.8671</span> (0.017)</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb 
ltx_border_b\" id=\"S4.T3.5.10.6\" style=\"padding-top:3.5pt;padding-bottom:3.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S4.T3.5.10.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.10.6.1.1\">0.8574</span> (0.009)</p>\n</td>\n</tr>\n</table>\n</figure>",
+ "capture": "Table 3: Performance comparison of different ROI augmentation methods for RCM module."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2401.13998v2_figure_1.png",
+ "caption": "Figure 1: Overview of the proposed WAL-Net. Bottom right: Structure of the ROI Cropping Module.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/network.png"
+ },
+ "2": {
+ "figure_path": "2401.13998v2_figure_2.png",
+ "caption": "Figure 2: Structure of the Pseudo mask Generation Module.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/pgm.png"
+ },
+ "3": {
+ "figure_path": "2401.13998v2_figure_3.png",
+ "caption": "Figure 3: The preprocessing pipeline for carotid artery plaque ultrasound images. (a): The original ultrasound images of the three types of plaques: hyperechoic plaque, hypoechoic plaque, and mixed-echoic plaque. (b): The regions of interest in the ultrasound images of the three types of plaques. (c): The ultrasound images of the three types of plaques after being uniformly resized.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/preprocess.png"
+ },
+ "4(a)": {
+ "figure_path": "2401.13998v2_figure_4(a).png",
+ "caption": "(a) hyperechoic ROC\nFigure 4: ROC curves for WAL-Net and baseline.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/ROC_curve_high.png"
+ },
+ "4(b)": {
+ "figure_path": "2401.13998v2_figure_4(b).png",
+ "caption": "(b) hypoechoic ROC\nFigure 4: ROC curves for WAL-Net and baseline.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/ROC_curve_low.png"
+ },
+ "4(c)": {
+ "figure_path": "2401.13998v2_figure_4(c).png",
+ "caption": "(c) mixed-echoic ROC\nFigure 4: ROC curves for WAL-Net and baseline.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/ROC_curve_mix.png"
+ },
+ "4(d)": {
+ "figure_path": "2401.13998v2_figure_4(d).png",
+ "caption": "(d) micro ROC\nFigure 4: ROC curves for WAL-Net and baseline.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/ROC_curve_micro.png"
+ },
+ "5(a)": {
+ "figure_path": "2401.13998v2_figure_5(a).png",
+ "caption": "(a) Baseline confusion matrix\nFigure 5: Confusion matrix for WAL-Net and baseline.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/confusion_matrix_2.png"
+ },
+ "5(b)": {
+ "figure_path": "2401.13998v2_figure_5(b).png",
+ "caption": "(b) WAL-Net confusion matrix\nFigure 5: Confusion matrix for WAL-Net and baseline.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/confusion_matrix_1.png"
+ },
+ "6": {
+ "figure_path": "2401.13998v2_figure_6.png",
+ "caption": "Figure 6: Visualization examples of the generated pseudo mask and the predicted results by proposed WAL-Net.",
+ "url": "http://arxiv.org/html/2401.13998v2/extracted/5370315/figures/visualization.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Carotid plaque ultrasonic heterogeneity and severity of stenosis.",
+ "author": "Ali F AbuRahma, John T Wulu Jr, and Brad Crotty.",
+ "venue": "Stroke, 33(7):1772\u20131775, 2002.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Global public health: a scorecard.",
+ "author": "Robert Beaglehole and Ruth Bonita.",
+ "venue": "The Lancet, 372(9654):1988\u20131996, 2008.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Classification of carotid artery Doppler signals in the early phase of atherosclerosis using complex-valued artificial neural network.",
+ "author": "Murat Ceylan, Rahime Ceylan, Fatma Dirgenali, Sad\u0131k Kara, and Y\u00fcksel \u00d6zbay.",
+ "venue": "Computers in Biology and Medicine, 37(1):28\u201336, 2007.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Automatic active contour-based segmentation and classification of carotid artery ultrasound images.",
+ "author": "Asmatullah Chaudhry, Mehdi Hassan, Asifullah Khan, and Jin Young Kim.",
+ "venue": "Journal of Digital Imaging, 26:1071\u20131081, 2013.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation.",
+ "author": "Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam.",
+ "venue": "In Proceedings of the European Conference on Computer Vision (ECCV), pages 801\u2013818, 2018.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Dual path networks.",
+ "author": "Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, and Jiashi Feng.",
+ "venue": "Advances in Neural Information Processing Systems, 30, 2017.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Efficient graph-based image segmentation.",
+ "author": "Pedro F Felzenszwalb and Daniel P Huttenlocher.",
+ "venue": "International Journal of Computer Vision, 59:167\u2013181, 2004.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "SAL-Net: Semi-supervised auxiliary learning network for carotid plaques classification.",
+ "author": "Lingchao Fu, Haitao Gan, Weiyan Gan, Zhi Yang, Ran Zhou, and Furong Wang.",
+ "venue": "In IEEE International Conference on Systems, Man, and Cybernetics, 2023.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Res2Net: A new multi-scale backbone architecture.",
+ "author": "Shang-Hua Gao, Ming-Ming Cheng, Kai Zhao, Xin-Yu Zhang, Ming-Hsuan Yang, and Philip Torr.",
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(2):652\u2013662, 2019.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "ReXNet: Diminishing representational bottleneck on convolutional neural network.",
+ "author": "Dongyoon Han, Sangdoo Yun, Byeongho Heo, and Y Yoo.",
+ "venue": "arXiv preprint arXiv:2007.00992, 6:1, 2020.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Joint segmentation and classification of skin lesions via a multi-task learning convolutional neural network.",
+ "author": "Xiaoyu He, Yong Wang, Shuang Zhao, and Xiang Chen.",
+ "venue": "Expert Systems with Applications, page 120174, 2023.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Tell me where to look: Guided attention inference network.",
+ "author": "Kunpeng Li, Ziyan Wu, Kuan-Chuan Peng, Jan Ernst, and Yun Fu.",
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9215\u20139223, 2018.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Auxiliary tasks in multi-task learning.",
+ "author": "Lukas Liebel and Marco K\u00f6rner.",
+ "venue": "arXiv preprint arXiv:1805.06334, 2018.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "The effects of skin lesion segmentation on the performance of dermatoscopic image classification.",
+ "author": "Amirreza Mahbod, Philipp Tschandl, Georg Langs, Rupert Ecker, and Isabella Ellinger.",
+ "venue": "Computer Methods and Programs in Biomedicine, 197:105725, 2020.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Impact and implications of mixed plaque class in automated characterization of complex atherosclerotic lesions.",
+ "author": "Max L Olender, Yanan Niu, David Marlevi, Elazer R Edelman, and Farhad R Nezami.",
+ "venue": "Computerized Medical Imaging and Graphics, 97:102051, 2022.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "An auxiliary learning network for carotid ultrasound image classification.",
+ "author": "Yanghan Ou, Haitao Gan, Ran Zhou, and Xiaoyue Fang.",
+ "venue": "In 2022 China Automation Congress (CAC), pages 3779\u20133783. IEEE, 2022.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "An overview of multi-task learning in deep neural networks.",
+ "author": "Sebastian Ruder.",
+ "venue": "arXiv preprint arXiv:1706.05098, 2017.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Attention-gated networks for improving ultrasound scan plane detection.",
+ "author": "Jo Schlemper, Ozan Oktay, Liang Chen, Jacqueline Matthew, Caroline Knight, Bernhard Kainz, Ben Glocker, and Daniel Rueckert.",
+ "venue": "arXiv preprint arXiv:1804.05338, 2018.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Global and regional prevalence, burden, and risk factors for carotid atherosclerosis: a systematic review, meta-analysis, and modelling study.",
+ "author": "Peige Song, Zhe Fang, Hanyu Wang, Yutong Cai, Kazem Rahimi, Yajie Zhu, F Gerald R Fowkes, Freya JI Fowkes, and Igor Rudan.",
+ "venue": "The Lancet Global Health, 8(5):e721\u2013e729, 2020.",
+ "url": null
+ }
+ },
+ {
306
+ "20": {
307
+ "title": "Ecs-net: Improving weakly supervised semantic segmentation by using\nconnections between class activation maps.",
308
+ "author": "Kunyang Sun, Haoqing Shi, Zhengming Zhang, and Yongming Huang.",
309
+ "venue": "In Proceedings of the IEEE/CVF international conference on\ncomputer vision, pages 7283\u20137292, 2021.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "21": {
315
+ "title": "Sequencer: Deep lstm for image classification.",
316
+ "author": "Yuki Tatsunami and Masato Taki.",
317
+ "venue": "Advances in Neural Information Processing Systems,\n35:38204\u201338217, 2022.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "22": {
323
+ "title": "Assessment of carotid atherosclerosis from b-mode ultrasound images\nusing directional multiscale texture features.",
324
+ "author": "NN Tsiaparas, S Golemati, I Andreadis, J Stoitsis, I Valavanis, and KS Nikita.",
325
+ "venue": "Measurement Science and Technology, 23(11):114004, 2012.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "23": {
331
+ "title": "Repvit: Revisiting mobile cnn from vit perspective.",
332
+ "author": "Ao Wang, Hui Chen, Zijia Lin, Hengjun Pu, and Guiguang Ding.",
333
+ "venue": "arXiv preprint arXiv:2307.09283, 2023a.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "24": {
339
+ "title": "An efficient multi-task synergetic network for polyp segmentation and\nclassification.",
340
+ "author": "Miao Wang, Xingwei An, Zhengcun Pei, Ning Li, Li Zhang, Gang Liu, and Dong\nMing.",
341
+ "venue": "IEEE Journal of Biomedical and Health Informatics,\n2023b.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "25": {
347
+ "title": "Convnext v2: Co-designing and scaling convnets with masked\nautoencoders.",
348
+ "author": "Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So\nKweon, and Saining Xie.",
349
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, pages 16133\u201316142, 2023.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "26": {
355
+ "title": "Weakly-supervised semantic segmentation with superpixel guided local\nand global consistency.",
356
+ "author": "Sheng Yi, Huimin Ma, Xiang Wang, Tianyu Hu, Xi Li, and Yu Wang.",
357
+ "venue": "Pattern Recognition, 124:108504, 2022.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "27": {
363
+ "title": "Resnest: Split-attention networks.",
364
+ "author": "Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue\nSun, Tong He, Jonas Mueller, R Manmatha, et al.",
365
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision\nand pattern recognition, pages 2736\u20132746, 2022.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "28": {
371
+ "title": "A survey on multi-task learning.",
372
+ "author": "Yu Zhang and Qiang Yang.",
373
+ "venue": "IEEE Transactions on Knowledge and Data Engineering,\n34(12):5586\u20135609, 2021.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "29": {
379
+ "title": "Learning deep representation for face alignment with auxiliary\nattributes.",
380
+ "author": "Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang.",
381
+ "venue": "IEEE transactions on pattern analysis and machine\nintelligence, 38(5):918\u2013930, 2015.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "30": {
387
+ "title": "A brief introduction to weakly supervised learning.",
388
+ "author": "Zhi-Hua Zhou.",
389
+ "venue": "National science review, 5(1):44\u201353,\n2018.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "31": {
395
+ "title": "Pseudoseg: Designing pseudo labels for semantic segmentation.",
396
+ "author": "Yuliang Zou, Zizhao Zhang, Han Zhang, Chun-Liang Li, Xiao Bian, Jia-Bin Huang,\nand Tomas Pfister.",
397
+ "venue": "arXiv preprint arXiv:2010.09713, 2020.",
398
+ "url": null
399
+ }
400
+ }
401
+ ],
402
+ "url": "http://arxiv.org/html/2401.13998v2"
403
+ }
20240127/2401.14132v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15254v1.json ADDED
@@ -0,0 +1,334 @@
+ {
+ "title": "1 Introduction",
+ "abstract": "We explore a novel methodology for constructing confidence regions for parameters of linear models, using predictions from any arbitrary predictor. Our framework requires minimal assumptions on the noise and can be extended to functions deviating from strict linearity up to some adjustable threshold, thereby accommodating a comprehensive and pragmatically relevant set of functions. The derived confidence regions can be cast as constraints within a Mixed Integer Linear Programming framework, enabling optimization of linear objectives. This representation enables robust optimization and the extraction of confidence intervals for specific parameter coordinates. Unlike previous methods, the confidence region can be empty, which can be used for hypothesis testing. Finally, we validate the empirical applicability of our method on synthetic data.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Estimating the parameters of an unknown linear system using noisy observations stands as a cornerstone challenge in various disciplines including signal processing, system identification, control theory, and statistics. Mostly, the current methods yield point estimates. To incorporate the inherent uncertainty associated with these estimated parameters, one can delineate confidence regions. Such regions ensure, with a high probability, that the true parameter resides within them. Confidence regions are crucial when robustness is a priority, offering direct utility in both uncertainty quantification and robust optimization.\nHistorically in statistics, confidence regions are predominantly derived using closed-form solutions, often predicated on the assumption of constant additive Gaussian noise (Draper and Smith, 1981 ###reference_b10###). Such an assumption curtails their practical utility in real-world scenarios, where heteroscedasticity might prevail and noise may manifest in intricate functional forms. More contemporary techniques promise confidence regions with finite sample coverage guarantees even under considerably relaxed noise assumptions. Nevertheless, these methods often limit themselves to membership testing (i.e., ascertaining if a particular parameter falls within the confidence region), without offering a compact representation (Campi and Weyer, 2005 ###reference_b2###; den Dekker et al., 2008 ###reference_b8###). This characteristic hinders their applicability to robust optimization and uncertainty quantification.\nIn this study, we introduce Residual Intervals Inversion (RII), a novel methodology for the construction of confidence regions pertaining to linear model parameters. Central to our approach is the harnessing of predictions from an arbitrary, ad hoc predictor. 
Such a predictor might be sourced from conventional tools like the Least Square (LS) estimator or more complex non-linear models.\nOur only assumption on the noise is that it must possess a median of zero across the entire input space. This is much weaker than the assumption of Gaussian noise made in statistical approaches, and even weaker than symmetric noise assumptions made in more recent research (Csaji et al., 2015 ###reference_b4###; Senov et al., 2014 ###reference_b15###; Campi and Weyer, 2005 ###reference_b2###). Additionally, our approach integrates an adjustable tolerance parameter that can relax this condition by bounding the noise\u2019s quantile deviation, thereby granting additional flexibility.\nThe confidence region is represented by a set of linear inequalities in the model parameter and binary variables controlling which inequalities are active. This formulation seamlessly permits its representation as constraints within a Mixed-Integer Linear Programming (MILP) problem. As a result, linear or quadratic objectives can be optimized over these confidence regions, enabling tasks such as computing confidence intervals for specific parameter coordinates.\nNotably, when the ad hoc predictor substantially outperforms any linear counterpart, the confidence regions we construct may be empty. This may occur either for non-linear ad-hoc predictors, or when it has access to different input variables. In contrast to previous works, our method thus exhibits the capacity to reject the null hypothesis, signaling that the data might not exhibit a linear relationship with the specified input. This capability paves the way for its use in hypothesis testing and feature selection.\nThe most salient properties of our method can be summarized as follows:\nCapability to use strong predictors (including non-linear) to obtain smaller confidence regions.\nThe noise is only assumed to have a median value of zero everywhere. 
This assumption can be flexibly relaxed by introducing user-specified tolerance level.\nThe possibility to optimize linear and quadratic objectives over the confidence region by solving a MILP or MIQP problem.\nThe confidence regions ensure finite-sample validity and for any user-determined target coverage."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": "Estimating the parameters of dynamical systems is a cornerstone challenge of system identification (Gevers, 2006 ###reference_b11###; s\u00f6derstr\u00f6m1989system; LJUNG20101; ljung1994modeling; ljung1999system). The LS estimator error is asymptotically normal under reasonable assumptions, which can be used to derive confidence regions. Recently, Jochmans (2022 ###reference_b14###) proposed robust estimators in heteroskedastic settings. However, these methods are only asymptotically valid and provide no guarantees in practical settings where available data is finite.\nOther methods have derived finite sample valid confidence regions. Wasserman et al. (2020 ###reference_b19###) relies on computing the likelihood which is typically difficult in the presence of unknown and non-standard noise. Daniels (1954 ###reference_b7###) is distribution-free but constructs unbounded confidence regions, which is impractical.\nOther contemporary works have focused on methods to construct finite-sample valid confidence regions with weak assumptions on the noise (Campi and Weyer, 2005 ###reference_b2###; Dalai et al., 2007 ###reference_b6###; den Dekker et al., 2008 ###reference_b8###), but only provide a method to infer whether a given parameter belongs in the confidence region (membership testing), without compact formulation, hence limiting downstream applications.\nPerhaps the closest work in the literature is SPS (Csaji et al., 2015 ###reference_b4###; Cs\u00e1ji et al., 2012 ###reference_b5###) which constructs finite sample valid confidence regions for any symmetric noise, in the form of an ellipsoid centered on the LS estimator. Similarly to RII, linear and quadratic objectives can thus be optimized over the confidence regions. We compare RII to SPS in section 6 ###reference_###.\nAmong other relevant works, Dobriban and Lin (2023 ###reference_b9###) derives joint confidence regions over the prediction and parameters, and Angelopoulos et al. 
(2023 ###reference_b1###) uses an ad hoc predictor and unlabeled data to infer confidence sets over various statistics, including linear parameters, but only guarantee asymptotic validity (non-asymptotic results are obtained under stronger assumption on the distribution e.g. bounded support)."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Problem Setting",
+ "text": "In this section we introduce the notations and assumptions that will be used throughout this work.\nFor , let denote the set .\nConsider the following linear regression system\nwhere is the target variable, is the ground truth parameter to be estimated, the input variable, and is the noise.\nWe consider a finite sample of size which consists of inputs\nnoise\nand targets"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Assumptions",
+ "text": "Since homoscedasticity is rarely verified in real world systems, we allow the noise to depend on (heteroscedasticity). Our first assumption is the independence of the noise, conditionally on :\nWe suppose that the noise is conditionally independent, given inputs\nWhen the noise does not depend on we recover the assumption of independent noise that is standard in the literature.\nWithout any further assumption on , any arbitrary function of would verify Equation 1 ###reference_### for any with\nFor our model to be informative, it is therefore necessary to adopt some restrictions on the noise.\nRecent works have departed from the usual assumption of normally distributed noise, to make confidence regions more applicable to realistic settings.\nWe introduce a tolerance parameter such that\nwhich controls how strict our assumption on the noise is.\nEven when , (2 ###reference_###) and (3 ###reference_###) are equivalent to having a noise of median , which is weaker than the assumption of symmetric distribution in (Csaji et al., 2015 ###reference_b4###; Campi and Weyer, 2005 ###reference_b2###).\nWe also define\nWe consider two versions of our second assumption, contingent on the independence of inputs.\nWhen the input data are independent and identically distributed, we suppose\nWe suppose that\nIntuitively, 2 ###reference_um2### and 3 ###reference_um3### ensure that is not too likely to be positive or too likely to be negative. 2 ###reference_um2### and 3 ###reference_um3### lead to the same guarantees, but when the inputs are iid, 2 ###reference_um2### is less restrictive. Unless specified otherwise, we assume that either 2 ###reference_um2### or 3 ###reference_um3### are verified.\nThe assumption is strongest for . As decreases, is allowed to deviate (in terms of quantile) from the median of , and the model becomes less restrictive. 
When , Equations (2 ###reference_###) and (3 ###reference_###) become vacuously true statements, and the model of Equation 1 ###reference_### describes any stochastic function of for any .\n2 ###reference_um2### and 3 ###reference_um3### are stable when multiplying the noise by any deterministic function of . This allows for instance the seamless integration of multiplicative noise, constant by part noise, etc."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Objectives",
+ "text": "###figure_1### Our goal is to build a confidence region over the unknown parameter that has a finite sample valid coverage\nfor some user-specified confidence level . While the whole output space is a trivial solution, we aim at finding smaller sets yielding more informative results for the applications listed in Section 5 ###reference_###."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Construction of Confidence Regions",
+ "text": "###figure_2### ###figure_3### Let and a subset of of size . For the sake of simplicity, let us consider the testing data\nLet us also denote\nthe predictions of made by any ad hoc predictor. The only constraint is that the predictor must not have seen the true targets , i.e.,\ncan typically be the predictions of the ordinary least squares model trained on the remaining samples\nAlternatively, can also be predictions induced by a non-linear model, a model of a different input variable , or a model trained on independent data.\nOur method, RII, proceeds in two steps:\nThe first step is to build intervals , which we call residual intervals, such that\nThen, we consider as confidence region the set of such that reasonably frequently in the test set, and represent it as the feasible set of an MILP."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Building Residual Intervals",
+ "text": "The first step consists in building reasonably sized intervals for each test point, so that belongs in the interval with a guaranteed probability.\nWe accomplish this by taking the interval between the true label and predicted value\nThe intuition is that if , then there is a probability at least that and thus that\nA similar reasoning holds when .\nLemma 1 ###reference_ma1### formalizes this intuition and shows that this interval has a guaranteed coverage of for . A formal proof is provided in the appendix.\nUnder 1 ###reference_um1### and (2 ###reference_um2### or 3 ###reference_um3###), for any test point with prediction , it holds"
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Building the Confidence Region",
+ "text": "Let . For , we define\nIntuitively, is the number of residual intervals containing across the test examples. Our confidence region is defined as the set of all such that is not abnormally low. Theorem 1 ###reference_orem1### uses Equation 7 ###reference_### to lower bound in probability. A complete proof is provided in the appendix.\nUnder 1 ###reference_um1### and (2 ###reference_um2### or 3 ###reference_um3###), for any , it holds\nwhere\nGiven , we define\nTheorem 1 ###reference_orem1### gives us the tool to finally define a confidence region with finite sample valid coverage guarantees, under our mild assumptions on the noise. From Equations (8 ###reference_###) and (9 ###reference_###), the following proposition immediately follows.\nFor a confidence level , let us define the confidence region as\nUnder the model specification of Section 3 ###reference_###, it holds\nThe probability is taken over only as our guarantee is valid with any that verifies Equation 5 ###reference_###, and thus in particular it is valid for any realization of , even if depends on it.\nFigure 1 ###reference_### shows the guaranteed coverage from Equation 8 ###reference_### as a function of , at fixed and for . For and any , any variable can be represented as using , which fits the vacuously true 2 ###reference_um2### with . Therefore we can not guarantee a positive coverage unless taking as confidence region. As increases, our model becomes increasingly restrictive, which allows the guaranteed coverage of the confidence region to increase, reaching its maximum when the noise is restricted to having a median of everywhere (). Increasing leads to smaller confidence regions (due to the constraints in Equation 10 ###reference_### becoming stronger), but also decreases the guaranteed coverage, following Equation 8 ###reference_###."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Representation as a MILP feasible set",
+ "text": "The expression in Equation 10 ###reference_### suffices for efficient membership testing (i.e., determining whether a given is in ) but does not directly yield, for instance, confidence intervals on specific parameter coordinates.\nWe now show how can be represented as the feasible set of an MILP, which allows optimization of linear objectives for reasonably small .\nFirst, let us note from Equation 10 ###reference_### that iff there are at least events that are true, and thus iff there exists such that\nWe can finally use the standard Big-M method (Hillier and Lieberman, 2001 ###reference_b12###; Wolsey and Nemhauser, 2014 ###reference_b20###) by picking a constant with a larger order of magnitude than , and we obtain:\nIf is bounded and is sufficiently large, the constraints in Equation 12 ###reference_### will only be active when , in which case they become equivalent to . It is possible to confirm that the solutions obtained with Equation 12 ###reference_### indeed have inactive constraints when (and increase if needed). With such a mechanism in place, the feasible set of Equation 12 ###reference_### is the same as Equation 11 ###reference_###.\nThus, iff (12 ###reference_###) is satisfied for some binary . To optimize an objective linear in over , we can simply introduce the binary slack variables and optimize over the set of constraints of Equation 12 ###reference_###, which indeed yields an MILP."
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "Boundedness of",
+ "text": "Given a set of test inputs with a linear span of dimension at most , one can find a direction orthogonal to that span, and displacing in that direction will not affect the corresponding inequalities in Equation 12 ###reference_###. Thus, if , then is necessarily empty or not bounded. Conversely, if any subset of test inputs has a span of dimension at least , then is guaranteed to be bounded, because the solution set will be bounded regardless of which constraints are active. Thus, for applications where a bounded confidence region is desirable or necessary, such as finding confidence intervals on the coordinates, should be larger than . Since is determined by Equation 9 ###reference_###, we can make sufficiently large by increasing the test size , at the cost of an increase in computation complexity."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Applications",
+ "text": "In this section, we show how the MILP formulation from Section 4 ###reference_### can be directly leveraged for several applications of interest."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Confidence Interval on Coordinates",
+ "text": "###figure_4### Perhaps one of the most straightforward applications is to deduce confidence intervals on the coordinates of . For instance, to compute a lower bound on the -th coordinate of over , we can solve the following MILP:\nFor any , all coordinates of are within the confidence intervals calculated by Equation 13 ###reference_### by construction. Thus with probability at least , , which implies all confidence intervals simultaneously contain the ground truth. As a result, every confidence interval contains the corresponding ground truth coordinate with probability at least .\nThese confidence intervals can then be used for interpretability and feature selection. For instance, assuming features have been normalized (or have a similar order of magnitude), a tight confidence interval around indicates that the corresponding feature likely has low relevance to the output, and may be pruned. Similarly, a very large confidence interval implies that changes in the corresponding coefficient can be compensated for by other coefficients, and thus that the feature is likely redundant."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Robust Optimization",
+ "text": "A typical application for confidence regions is robust optimization, in which we seek to find optimal solutions under uncertainty. Robust optimization has been successfully applied in operations research, signal processing, control theory, and other fields.\nOne of the most important paradigms in robust optimization is Wald\u2019s minimax model (Wald, 1939 ###reference_b17###, 1945 ###reference_b18###), which aims at finding the parameters with the best worst-case outcomes over a given uncertainty set for parameter . Given our confidence set for , and assuming (for simplicity) that , the feasible set of , does not depend on , we obtain the following optimization problem:\nwhere is a cost function to minimize. For instance, could represent the unknown parameters of a dynamical system, and the controller parameters.\nPrior works have explored robust optimization with mixed-integer constraints and convex objective functions (Siddiqui et al., 2015 ###reference_b16###). These methods can be directly applied with to perform robust optimization. Alternatively, since MILPs can be difficult to solve, can be relaxed to the covering orthotope induced by the confidence intervals of Section 5.1 ###reference_###, thereby removing the mixed-integer variables and simplifying the resolution."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Hypothesis Testing",
+ "text": "In previous works, confidence regions are typically built with the least square estimator at their center, and may never be empty. On the contrary, it is possible for to be empty, which is equivalent to rejecting the null hypothesis that the data is distributed according to the model described in Section 3 ###reference_###, with p-value . Indeed, if the null hypothesis is true with parameters , then there is probability at least that , and thus is empty with probability at most . Notably, is directly tied to the tolerance level , meaning that we can immediately infer p-values for different values of .\nIf the predictor is linear in , then its coefficients belong in by construction, and the null hypothesis can not be rejected. Accordingly, to reject the null hypothesis, must not be linear in . For instance, can be a predictor based on a non-linear model such as XGBoost (Chen and Guestrin, 2016 ###reference_b3###).\nAlternatively, could be linear in a different variable , such as a non-linear transformation of . This can provide a framework for feature selection. The null hypothesis is unlikely to be rejected in this setting, unless captures the distribution of better than any linear function of . This is a powerful result because it applies to all linear models of simultaneously, not just a specific estimator that might be overfitting, and thus indicates that may inherently lack expressivity."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": "###table_1### We now conduct experiments to evaluate the applications of RII on synthetic data. When applicable, we compare our results with the over-bound of SPS, which to the best of our knowledge is the only prior work constructing finite sample valid confidence regions under weak assumptions on the noise, in a compact form that facilitates applications. Other existing methods typically only allow membership testing, and explicitly delineating the confidence regions is generally intractable.\nWe evaluate RII using two types of ad hoc predictors: the ordinary least squares predictor trained on the training data, and the Huber predictor (Huber, 1964 ###reference_b13###), which is designed to be robust to outliers, trained on the same training data.\nIn all of our experiments, we set .\nFor reproducibility, the full settings of our experiments, when not specified in this section, are detailed in Appendix B ###reference_###."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "Traditionally, a confidence interval is created for a parameter in a statistical model like Equation 1 ###reference_### by first obtaining an estimate from the data. The next step involves examining the distribution of estimation errors to construct the interval by thresholding at some appropriate quantile level. However, a closed-form distribution is notoriously difficult to obtain using most methods. As such, the standard guarantees are obtained upon asymptotic normality, as is the case when using the maximum likelihood principle to obtain the estimator. In contrast, RII relies on inverting the confidence set constructed around the prediction , which enables us to bypass the majority of preceding assumptions. Consequently, we can operate using any predictors and under less stringent assumptions on the noise. RII is a flexible method, capable of leveraging any ad hoc predictor, of rejecting the linearity of the observed data, and of displaying promising robustness across noise distributions."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Proofs",
+ "text": "Under 1 ###reference_um1### and (2 ###reference_um2### or 3 ###reference_um3###), for any test point with prediction , it holds\nProof. Let and . If , then\nand thus .\nSimilarly, if , then\nand thus .\nTherefore, ,\nLet us first assume 1 ###reference_um1### and 2 ###reference_um2###.\nWe recall that from 2 ###reference_um2###,\nFinally,\nfrom Equation 16 ###reference_###, where represents the pdf of .\nLet us now assume 1 ###reference_um1### and 3 ###reference_um3###. We have\nfrom 3 ###reference_um3###, which concludes the proof.\nUnder 1 ###reference_um1### and (2 ###reference_um2### or 3 ###reference_um3###), for any , it holds\nProof. Let the events be defined as in Section 4 ###reference_###.\nUnder 1 ###reference_um1### and 2 ###reference_um2###, the are sampled independently and the are sampled independently given . Thus, (from Lemma 1 ###reference_ma1###).\nUnder 1 ###reference_um1### and 3 ###reference_um3###, we have\n(the equality comes from 1 ###reference_um1### and the last inequality from the proof of Lemma 1 ###reference_ma1### above).\nWhether 2 ###reference_um2### or 3 ###reference_um3### is verified, we obtain .\nIf , then it is easy to verify that for any , .\nLet us assume that up to some , , .\nWe now consider trials.\nThen for ,\nFor , at least of are true iff at least of are true and is false or at least of are true and is true. Thus,\nBy induction, we can thus conclude that for any and ,\nFor a confidence level , let us define the confidence region as\nUnder the model specification of Section 3 ###reference_###, it holds\nProof. The proof immediately follows from Equations (8 ###reference_###) and (9 ###reference_###). Indeed, we have"
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Experimental details",
+ "text": "For all RII experiments that require solving the MILP, we use a constant (for the relaxation) of . We find no occurrences where more than inequalities are active, which indicates there is no need to increase . Such occurrences would occur if , however, as the confidence region would then not be bounded.\nTo measure coverage in Table 1 ###reference_###, is fixed for Figure 3 ###reference_###, while it is resampled for each trial of Table 1 ###reference_### used to estimate the coverage. Each coordinate of is sampled independently from . The noise is sampled as described in Section 6 ###reference_###. We use training points. For SPS, we use and . We found that using and led to nearly identical performances, but at a higher cost. The coordinates of are sampled independently from the uniform distribution on . For RII, we use and , which leads to .\nFor Table 5 ###reference_### we employ three different values of , with the functional form introduced in Section 6 ###reference_###. We run trials for each function and measure the frequency at which the confidence region is empty with and for the rejection rate, which corresponds to .\nFor the coverage rate, we use and for the hard example, and for the med example, and with for the easy example. All these pairs correspond to values .\nFor , we generate predictions using ordinary least squares on ."
114
+ },
115
+ {
116
+ "section_id": "Appendix 3",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix C Computational cost",
119
+ "text": "MILP problems are NP-hard, so the worst-case computation cost of solving over our confidence region is exponential. However, modern solvers often manage to solve such problems much faster in average, while still obtaining certifiably optimal solutions.\nIn comparison, SPS doesn\u2019t have discrete variable, but possesses quadratic constraints, leading to a quadratically constrained linear program. Moreover, finding the radius of the ellipsoid of SPS requires solving convex semidefinite programs.\nRII can perform membership testing in very short time (it only requires to evaluate linear inequalities, no optimization is required, and similarly for SPS (when using the exact form and not the over-bound). We report in Table 6 ###reference_### the computation time required to instantiate the confidence region (e.g. compute the ellipsoid axes and radius or train the estimator) and the computation time required to optimize a single linear objective once instantiated, as the wall clock time on a personal laptop. We use PULP_CBC as a solver, which is under an eclipse v2.0 license.\nRII is substantially more costly for inferring a single linear objective, and the method is difficult to scale to large dimensions (ie approximately, a problem also shared by SPS). However, this computation cost remains accessible for problems of reasonable dimension, despite using discrete variables in this example. It may be possible to approximate the MILP used to optimize linear objectives with RII to reduce the computational burden and scale to larger dimensions, which we leave to future works."
120
+ }
121
+ ],
122
+ "tables": {
123
+ "1": {
124
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Coverage of the SPS over-bound and RII across different dimensions and types of noise, for .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.11\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.11.10.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T1.11.10.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S5.T1.11.10.1.2\">Additive Gaussian</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S5.T1.11.10.1.3\">Multiplicative Gaussian</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S5.T1.11.10.1.4\">Outliers</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.11.9\">\n<td class=\"ltx_td\" id=\"S5.T1.11.9.10\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.3.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.4.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.5.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.6.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.7.5.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.8.6.6\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.9.7.7\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.10.8.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.11.9.9\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.11.11.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.1\">SPS outer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.2\">96.8%</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.3\">100.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.5\">97.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.6\">100.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.7\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.8\">98.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.9\">100.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.11.11.2.10\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.11.12.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.1\">RII + LS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.2\">89.7%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.3\">91.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.4\">89.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.5\">91.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.6\">89.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.7\">90.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.8\">89.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.9\">91.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.11.12.3.10\">90.8%</td>\n</tr>\n</tbody>\n</table>\n</figure>",
125
+ "capture": "Table 1: Coverage of the SPS over-bound and RII across different dimensions and types of noise, for ."
126
+ },
127
+ "2": {
128
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Rejection rate (ie. frequency at which is empty) of RII with for three different functions that are not linear in , and coverage rate when is set to , the largest value of such that the function falls within our model. is fixed to .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S6.T2.30\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.18.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S6.T2.17.1.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.17.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.17.1.1.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.17.1.1.1.2.1\">Rejection rate</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.17.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.17.1.1.1.1.1\"></td>\n</tr>\n</table></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S6.T2.18.2.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.18.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.18.2.2.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.18.2.2.1.2.1\">Coverage rate</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.18.2.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.18.2.2.1.1.1\"></td>\n</tr>\n</table></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.24.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.19.3.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.19.3.1.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.19.3.1.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.19.3.1.1.2.1\">easy</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.19.3.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.19.3.1.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.20.4.2\">\n<table class=\"ltx_tabular 
ltx_align_middle\" id=\"S6.T2.20.4.2.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.20.4.2.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.4.2.1.2.1\">med</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.20.4.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.4.2.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.21.5.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.21.5.3.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.21.5.3.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.21.5.3.1.2.1\">hard</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.21.5.3.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.21.5.3.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.22.6.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.22.6.4.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.22.6.4.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.22.6.4.1.2.1\">easy</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.22.6.4.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.22.6.4.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.23.7.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.23.7.5.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.23.7.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.23.7.5.1.2.1\">med</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.23.7.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.23.7.5.1.1.1\"></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T2.24.8.6\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.24.8.6.1\">\n<tr class=\"ltx_tr\" id=\"S6.T2.24.8.6.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.24.8.6.1.2.1\">hard</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.24.8.6.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.24.8.6.1.1.1\"></td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S6.T2.30.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.25.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.26.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.27.11.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.28.12.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.29.13.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S6.T2.30.14.6\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
129
+ "capture": "Table 2: Rejection rate (ie. frequency at which is empty) of RII with for three different functions that are not linear in , and coverage rate when is set to , the largest value of such that the function falls within our model. is fixed to ."
130
+ },
131
+ "3": {
132
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Average width of the confidence intervals on the coordinates of obtained with SPS, RII with least square estimator, and RII with Huber estimator, for three different types of noise. is fixed to .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T3.7\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.7.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"S6.T3.7.1.1.1\" rowspan=\"2\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S6.T3.7.1.1.2\">noise</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.7.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.7.2.2.1\">Add. normal</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.7.2.2.2\">Mult. normal</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.7.2.2.3\">Outliers</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.7.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.7.3.3.1\">SPS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.7.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.7.3.3.2.1\">1.230</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.7.3.3.3\">1.903</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.7.3.3.4\">2.633</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.7.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.4.4.1\">RII + LS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.4.4.2\">2.861</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.4.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.7.4.4.3.1\">1.875</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.7.4.4.4\">2.486</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S6.T3.7.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.7.5.5.1\">RII + Huber</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.7.5.5.2\">2.766</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.7.5.5.3\">1.958</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.7.5.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.7.5.5.4.1\">0.363</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
133
+ "capture": "Table 3: Average width of the confidence intervals on the coordinates of obtained with SPS, RII with least square estimator, and RII with Huber estimator, for three different types of noise. is fixed to ."
134
+ },
135
+ "4": {
136
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T4.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T4.7.8.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A2.T4.7.8.1.1\">Scenario</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T4.7.8.1.2\">Parameters</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T4.7.8.1.3\">Resampling</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T4.7.8.1.4\">Rate Parameters</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T4.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A2.T4.2.2.3\"><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.15254v1#S5.F3\" title=\"Figure 3 \u2023 5.1 Confidence Interval on Coordinates \u2023 5 Applications\"><span class=\"ltx_text ltx_ref_tag\">Figure</span>\u00a0<span class=\"ltx_text ltx_ref_tag\">3</span></a></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T4.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T4.2.2.2\">Fixed \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T4.2.2.4\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A2.T4.7.7.6\"><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.15254v1#S5.T1\" title=\"Table 1 \u2023 5.1 Confidence Interval on Coordinates \u2023 5 Applications\"><span class=\"ltx_text ltx_ref_tag\">Table</span>\u00a0<span class=\"ltx_text ltx_ref_tag\">1</span></a></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T4.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T4.4.4.2\">Resampled \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"A2.T4.7.7.5\">\n, , \n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Experimental setup details for <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.15254v1#S5.F3\" title=\"Figure 3 \u2023 5.1 Confidence Interval on Coordinates \u2023 5 Applications\"><span class=\"ltx_text ltx_ref_tag\">Figure</span>\u00a0<span class=\"ltx_text ltx_ref_tag\">3</span></a> and <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2401.15254v1#S5.T1\" title=\"Table 1 \u2023 5.1 Confidence Interval on Coordinates \u2023 5 Applications\"><span class=\"ltx_text ltx_ref_tag\">Table</span>\u00a0<span class=\"ltx_text ltx_ref_tag\">1</span></a>.</figcaption>\n</figure>",
137
+ "capture": "Table 4: Experimental setup details for Figure\u00a03 and Table\u00a01."
138
+ },
139
+ "5": {
140
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T5.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T5.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T5.2.2.3\">Example</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T5.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T5.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T5.2.2.4\">Parameters for Rate</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T5.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T5.4.4.3\">\u201dEasy\u201d</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T5.4.4.4\">0.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T5.4.4.5\">0.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T5.4.4.2\">\n, \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.6.6.3\">\u201dMed\u201d</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.6.6.4\">0.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.6.6.5\">0.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.6.6.2\">\n, \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A2.T5.8.8.3\">\u201dHard\u201d</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T5.8.8.4\">0.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T5.8.8.5\">0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T5.8.8.2\">\n, \n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Values of and for different examples.</figcaption>\n</figure>",
141
+ "capture": "Table 5: Values of and for different examples."
142
+ },
143
+ "6": {
144
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Average wall clock time on a personal laptop to instantiate the confidence region and to optimize a single linear objective for respectively SPS and RII, at with and .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A3.T6.7\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T6.7.1.1\">\n<td class=\"ltx_td ltx_border_t\" id=\"A3.T6.7.1.1.1\"></td>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A3.T6.7.1.1.2\">Conf. Region Instantiation</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A3.T6.7.1.1.3\">Linear Obj. Optim</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.7.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T6.7.2.2.1\">SPS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T6.7.2.2.2\">0.0270s</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T6.7.2.2.3\">0.0013s</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.7.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"A3.T6.7.3.3.1\">RII</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"A3.T6.7.3.3.2\">0.0007s</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"A3.T6.7.3.3.3\">2.7972s</td>\n</tr>\n</tbody>\n</table>\n</figure>",
145
+ "capture": "Table 6: Average wall clock time on a personal laptop to instantiate the confidence region and to optimize a single linear objective for respectively SPS and RII, at with and ."
146
+ }
147
+ },
148
+ "image_paths": {
149
+ "1": {
150
+ "figure_path": "2401.15254v1_figure_1.png",
151
+ "caption": "Figure 1: Guaranteed coverage 1\u2212\u03b1=Snte\u2062(k,b)1\ud835\udefcsubscript\ud835\udc46subscript\ud835\udc5bte\ud835\udc58\ud835\udc4f1-\\alpha=S_{n_{\\rm te}}(k,b)1 - italic_\u03b1 = italic_S start_POSTSUBSCRIPT italic_n start_POSTSUBSCRIPT roman_te end_POSTSUBSCRIPT end_POSTSUBSCRIPT ( italic_k , italic_b ) from Equation 8 for nt\u2062e=30subscript\ud835\udc5b\ud835\udc61\ud835\udc5230n_{te}=30italic_n start_POSTSUBSCRIPT italic_t italic_e end_POSTSUBSCRIPT = 30 and k\u2208[4,8,12,16]\ud835\udc58481216k\\in[4,8,12,16]italic_k \u2208 [ 4 , 8 , 12 , 16 ].",
152
+ "url": "http://arxiv.org/html/2401.15254v1/x1.png"
153
+ },
154
+ "2(a)": {
155
+ "figure_path": "2401.15254v1_figure_2(a).png",
156
+ "caption": "(a) 1D example for a linear distribution with additive Gaussian noise, using the least square linear predictor on X\ud835\udc4bXitalic_X.\nFigure 2: Illustration of residual intervals on synthetic datasets with both linear and non-linear dependence between input X\ud835\udc4bXitalic_X and output Y\ud835\udc4cYitalic_Y.",
157
+ "url": "http://arxiv.org/html/2401.15254v1/x2.png"
158
+ },
159
+ "2(b)": {
160
+ "figure_path": "2401.15254v1_figure_2(b).png",
161
+ "caption": "(b) 1D example for a distribution linear in Z=(X,s\u2062i\u2062n\u2062(8\u2062\u03c0\u2062\u2016X\u20162))\ud835\udc4d\ud835\udc4b\ud835\udc60\ud835\udc56\ud835\udc5b8\ud835\udf0bsubscriptnorm\ud835\udc4b2Z=(X,sin(8\\pi||X||_{2}))italic_Z = ( italic_X , italic_s italic_i italic_n ( 8 italic_\u03c0 | | italic_X | | start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ) ) with additive Gaussian noise using the least square predictor on Z\ud835\udc4dZitalic_Z.\nFigure 2: Illustration of residual intervals on synthetic datasets with both linear and non-linear dependence between input X\ud835\udc4bXitalic_X and output Y\ud835\udc4cYitalic_Y.",
162
+ "url": "http://arxiv.org/html/2401.15254v1/x3.png"
163
+ },
164
+ "3": {
165
+ "figure_path": "2401.15254v1_figure_3.png",
166
+ "caption": "Figure 3: Illustration of the bounds covering the ground-truth parameter \u03b8\u22c6subscript\ud835\udf03\u22c6\\theta_{\\star}italic_\u03b8 start_POSTSUBSCRIPT \u22c6 end_POSTSUBSCRIPT under various configurations of noise, for \u03b1=0.1\ud835\udefc0.1\\alpha=0.1italic_\u03b1 = 0.1. Squares correspond to upper bounds while circles denote corresponding lower bounds.",
167
+ "url": "http://arxiv.org/html/2401.15254v1/x4.png"
168
+ }
169
+ },
170
+ "validation": true,
171
+ "references": [
172
+ {
173
+ "1": {
174
+ "title": "Prediction-powered inference.",
175
+ "author": "Anastasios N Angelopoulos, Stephen Bates, Clara Fannjiang, Michael I. Jordan,\nand Tijana Zrnic.",
176
+ "venue": "arXiv preprint arXiv:2301.09633, 2023.",
177
+ "url": null
178
+ }
179
+ },
180
+ {
181
+ "2": {
182
+ "title": "Guaranteed non-asymptotic confidence regions in system\nidentification.",
183
+ "author": "M.C. Campi and E. Weyer.",
184
+ "venue": "Automatica, 41:1751\u20131764, 2005.",
185
+ "url": null
186
+ }
187
+ },
188
+ {
189
+ "3": {
190
+ "title": "XGBoost.",
191
+ "author": "Tianqi Chen and Carlos Guestrin.",
192
+ "venue": "In Proceedings of the 22nd ACM SIGKDD International\nConference on Knowledge Discovery and Data Mining. ACM, 2016.",
193
+ "url": null
194
+ }
195
+ },
196
+ {
197
+ "4": {
198
+ "title": "Sign-perturbed sums: A new system identification approach for\nconstructing exact non-asymptotic confidence regions in linear regression\nmodels.",
199
+ "author": "Balazs Csanad Csaji, Marco Claudio Campi, and Erik Weyer.",
200
+ "venue": "IEEE Transactions on Signal Processing, 63, jan 2015.",
201
+ "url": null
202
+ }
203
+ },
204
+ {
205
+ "5": {
206
+ "title": "Non-asymptotic confidence regions for the least-squares estimate.",
207
+ "author": "Bal\u00e1zs Csan\u00e1d Cs\u00e1ji, Marco C. Campi, and Erik Weyer.",
208
+ "venue": "IFAC Proceedings Volumes, 45, 2012.",
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "6": {
214
+ "title": "Parameter identification for nonlinear systems: Guaranteed confidence\nregions through lscr.",
215
+ "author": "Marco Dalai, Erik Weyer, and Marco C. Campi.",
216
+ "venue": "Automatica, 43, 2007.",
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "7": {
222
+ "title": "A Distribution-Free Test for Regression Parameters.",
223
+ "author": "H. E. Daniels.",
224
+ "venue": "The Annals of Mathematical Statistics, 25, 1954.",
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "8": {
230
+ "title": "Finite sample confidence regions for parameters in prediction error\nidentification using output error models.",
231
+ "author": "Arnold J. den Dekker, Xavier Bombois, and Paul M.J. Van den Hof.",
232
+ "venue": "IFAC Proceedings Volumes, 41, 2008.",
233
+ "url": null
234
+ }
235
+ },
236
+ {
237
+ "9": {
238
+ "title": "Joint coverage regions: Simultaneous confidence and prediction sets,\n2023.",
239
+ "author": "Edgar Dobriban and Zhanran Lin.",
240
+ "venue": null,
241
+ "url": null
242
+ }
243
+ },
244
+ {
245
+ "10": {
246
+ "title": "Applied Regression Analysis.",
247
+ "author": "N.R. Draper and H. Smith.",
248
+ "venue": "Wiley, 1981.",
249
+ "url": null
250
+ }
251
+ },
252
+ {
253
+ "11": {
254
+ "title": "A personal view of the development of system identification: A\n30-year journey through an exciting field.",
255
+ "author": "M. Gevers.",
256
+ "venue": "IEEE Control Systems Magazine, 26, 2006.",
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "12": {
262
+ "title": "Introduction to Operations Research.",
263
+ "author": "F.S. Hillier and G.J. Lieberman.",
264
+ "venue": "McGraw-Hill, 2001.",
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "13": {
270
+ "title": "Robust Estimation of a Location Parameter.",
271
+ "author": "Peter J. Huber.",
272
+ "venue": "The Annals of Mathematical Statistics, 35, 1964.",
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "14": {
278
+ "title": "Heteroscedasticity-robust inference in linear regression models with\nmany covariates.",
279
+ "author": "Koen Jochmans.",
280
+ "venue": "Journal of the American Statistical Association, 117, 2022.",
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "15": {
286
+ "title": "Exact confidence regions for linear regression parameter under\nexternal arbitrary noise.",
287
+ "author": "Alexander Senov, K. Amelin, Natalia Amelina, and O. Granichin.",
288
+ "venue": "Proceedings of the American Control Conference, 2014.",
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "16": {
294
+ "title": "Solving mixed-integer robust optimization problems with interval\nuncertainty using benders decomposition.",
295
+ "author": "Sauleh Siddiqui, Steven Gabriel, and Shapour Azarm.",
296
+ "venue": "Journal of the Operational Research Society, 66, 2015.",
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "17": {
302
+ "title": "Contributions to the theory of statistical estimation and testing\nhypotheses.",
303
+ "author": "Abraham Wald.",
304
+ "venue": "The Annals of Mathematical Statistics, 10, 1939.",
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "18": {
310
+ "title": "Statistical decision functions which minimize the maximum risk.",
311
+ "author": "Abraham Wald.",
312
+ "venue": "Annals of Mathematics, 46, 1945.",
313
+ "url": null
314
+ }
315
+ },
316
+ {
317
+ "19": {
318
+ "title": "Universal inference.",
319
+ "author": "Larry Wasserman, Aaditya Ramdas, and Sivaraman Balakrishnan.",
320
+ "venue": "Proceedings of the National Academy of Sciences, 117:16880\u201316890, 2020.",
321
+ "url": null
322
+ }
323
+ },
324
+ {
325
+ "20": {
326
+ "title": "Integer and Combinatorial Optimization.",
327
+ "author": "L.A. Wolsey and G.L. Nemhauser.",
328
+ "venue": "Wiley, 2014.",
329
+ "url": null
330
+ }
331
+ }
332
+ ],
333
+ "url": "http://arxiv.org/html/2401.15254v1"
334
+ }
20240127/2401.15258v1.json ADDED
@@ -0,0 +1,276 @@
1
+ {
2
+ "title": "\\aldine Foundations of Substructural Dependent Type Theory",
3
+ "abstract": "Abstract: This paper presents preliminary work on a general system for integrating dependent types into substructural type systems such as linear logic and linear type theory. Prior work on this front has generally managed to deliver type systems possessing either syntax or semantics inclusive of certain practical applications, but has struggled to combine these all in one and the same system. Toward resolving this difficulty, I propose a novel categorical interpretation of substructural dependent types, analogous to the use of monoidal categories as models of linear and ordered logic, that encompasses a wide class of mathematical and computational examples. On this basis, I develop a general framework for substructural dependent type theories, and proceed to prove some essential metatheoretic properties thereof. As an application of this framework, I show how it can be used to construct a type theory that satisfactorily addresses the problem of effectively representing cut admissibility for linear sequent calculus in a logical framework.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. The Past, Present & Future of Substructural Dependent Type Theory",
9
+ "text": "Dependent type theory in the mould of Martin-L\u00f6f\u2019s intuitionistic type theory (martin-lof, Mar75 ###reference_b8###) promises to internalize mathematical reasoning into systems for constructing proof and program alike. Likewise, substructural type systems akin to Girard\u2019s linear logic (girard, Gir87 ###reference_b4###) seek to reflect the fundamental insight that truth is ephemeral, and to thereby capture notions of state, resources, etc. in the proofs/programs they afford. Yet these two typing disciplines, both alike in dignity, have proven resistant to a satisfactory union.\nMany authors seeking to combine substructural and dependent typing have followed the solution posed by Cervesato & Pfenning (cervesato-pfenning, CP02 ###reference_b2###), whereby contexts of would-be substructural dependent type theories are to be cleft in twain \u2013 on one side an intuitionistic context of variables upon which others may depend, and on the other a context of substructural variables amongst which there inheres no dependency. This approach is not without theoretical merit nor practical utility, but by its very nature, it falls short of fully capturing substructural reasoning about proofs/programs, since substructurality must end where dependency begins. As Jason Reed observes (reed, Ree09 ###reference_b12###), this has the practical consequence that one cannot give an effective representation of cut-admissibility for linear sequent calculus in a linear logical framework constructed in this way. Such a representation must reflect substructural constraints on terms into the types in which those terms appear \u2013 precisely what is disallowed by forfeiting dependency to the intuitionistic layer.\nOther solutions to the problem of combining substructural and dependent types have been proposed, e.g. forms of quantitative type theory (mcbride, McB16 ###reference_b10###) (atkey, Atk18 ###reference_b1###). 
However, the majority of these share with Cervesato & Pfenning\u2019s the confinement of dependent type formation to an essentially intuitionistic layer of the theory, which thereby runs afoul of the same example highlighted above by Reed.\nWhat is distinctly lacking from the above attempts at integrating substructural and dependent types is a solidly agreed-upon semantic basis in mathematics upon which to build the type-theoretic apparatus. To be sure, many of the above-mentioned dependent type theories have been given denotational semantics, some quite elegant, often as variations on the categorical semantics of dependent type theory, with some additional structure to make sense of the substructural component (e.g. (krishnaswami-pradic-benton, KPB15 ###reference_b7###)). Yet, as illustrated by such troublesome cases as Reed\u2019s, none of these semantics capture the full generality of substructural dependent types in the same way that, e.g., the usual categorical semantics for linear and ordered logic in monoidal categories do. Upon reflection, it should be clear why this would be the case \u2013 the semantics of linear logic were arrived at not by adding structure to the semantics of intuitionistic type theory, but rather by carefully removing such structure.\nI thus propose to develop a truly general substructural dependent type theory by generalizing the categorical semantics of dependent type theory to a novel class of structures, which I call left-fibred double categories (LFDCs), as a unifying framework for both dependent and substructural type theories. I establish metatheoretic desiderata, such as decidability, admissibility of substitution, etc., for the resulting type theory, and finally, to illustrate its general applicability, I turn it upon the example of representing cut-admissibility for linear sequent calculus, and show how the type theory succeeds at this task where others have previously failed."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Sketch of a Semantics",
+ "text": "I begin with a reflection upon and generalization of the categorical semantics of dependent types. For present purposes, I keep the exposition somewhat informal, with fully rigorous definitions and proofs to be conducted in future work."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Revisting the Semantics of Dependent Type Theory",
+ "text": "Toward defining a class of structures capable of modelling substructural dependent types, I first recall the ordinary categorical semantics of dependent types. Various definitions exist in the literature of categorical structures that model dependent types, e.g. categories with families, display map categories, natural models, etc. For present purposes, however, it will suffice for us to consider only the most general class of such models \u2013 comprehension categories.\nA comprehension category, viewed as a model of dependent type theory (c.f. Jacobs (jacobs, Jac93 ###reference_b6###)), comprises:\nA category of contexts, with objects representing contexts and morphisms viewed as substitutions.\nFor each context , a category of types, with objects interpreted as types dependent upon and morphisms as terms of type in context , additionally parameterized by .\nFor each , a functor representing the application of the substitution , such that the assignment of functors preserves identities and composites of substitutions up to coherent natural isomorphism.\nFor each , a context extension functor mapping each type to the context , to be thought of as extended with a fresh variable of type .\nFor each substitution , a natural transformation with components for all , which \u2013 roughly speaking \u2013 applies to turn under an extension of substituted over . We moreover require that the assignment of natural transformations preserves identities and composites modulo the natural isomorphisms described above in item 3.\nFor each context , a natural transformation with components for each , to be thought of as projecting out from an extension of , such that for each substitution and type , the following square is a pullback:\nA reader familiar with other categorical models of type theory, e.g. 
categories with families, may puzzle over the fact that our presentation involves interpreting terms as morphisms , parameterized by both the context and an additional type . In fact, this extra parameterization is crucial for generalizing the semantics to substructural dependent types and effectively separates the \u201ctype-level\u201d and \u201cterm-level\u201d aspects of contexts.\nIn a comprehension category, the projection maps induce \u201cterm-level weakening,\u201d whereby we may discard the extended part of a context. Substituting along these projection maps yields corresponding \u201ctype-level weakening\u201d functors , which play a key role in defining additional type-theoretic structure on comprehension categories, specifically:\nA comprehension category has dependent sums:\nif for each context and type , the weakening functor has a left adjoint , i.e. there is a natural bijection of homsets:\nand if the functors satisfy the Beck-Chevalley condition, which essentially states that for any along with and , there is a canonical isomorphism .\nA comprehension category with dependent sums as above moreover has strong sums if the canonical substitution\nis an isomorphism for all with and .\nFrom the first item in the above definition, one may deduce the usual rules for weak sum types \u2013 and if the comprehension category has strong sums, one may likewise deduce the usual rules for strong sum types. The second item \u2013 the Beck-Chevalley condition \u2013 is necessary for substitution to behave as expected when substituting into dependent sums, i.e. that substituting into a dependent sum yields a dependent sum. More generally, such compositionality of substitution ought to be required of any additional type-theoretic structure we impose upon a comprehension category.\nDependent products (i.e. 
dependent function types) in a comprehension category are similarly defined as right adjoints to type-level weakening that satisfy an analogous Beck-Chevalley condition. There is also a notion of unit types in a comprehension category:\nA comprehension category has unit types if, for each context , the category has a terminal object , and the substitution functors preserve terminal objects. These unit types are strong if the projection is an isomorphism for all contexts .\nIn all of the above cases, we establish type-theoretic structure in comprehension categories by way of certain universal properties. This is evident even in the definition of a comprehension category itself, where context extension gains a universal property through projection maps forming pullback squares with substitutions. This aligns with the intuitionistic nature of ordinary dependent type theory. By way of analogy, context extension in the intuitionistic theory of simple types is given the universal property of a product. One then obtains models for linear logic & linear type theory \u2013 i.e. (closed, symmetric) monoidal categories \u2013 by replacing the universal property of context extension with the structure of a monoidal product that is unital and associative up to coherent isomorphism.\nTo obtain models of substructural dependent type theory, we may try replacing the universal property of context extension in comprehension categories with appropriately unital and associative structure. However, a challenge arises in stating unitality and associativity for contexts, since context extension builds up contexts one element at a time. A primitive semantic notion of partial context or telescope (debruijn, deB91) is seemingly needed to break up contexts for stating associativity. In ordinary models of dependent type theory, the role of such partial contexts is effectively played by strong sums \u2013 the condition for strong sums given in Def. 2.2 can be seen as implicitly capturing a kind of associativity between dependent sums and context extension. This suggests that the proper semantic setting for substructural dependent type theory ought to be a generalization of comprehension categories with strong sums (and strong unit types) wherein the universal properties of both context extension and the dependent pair/unit types are discarded in favor of structure witnessing their mutual associativity and unitality. Such categorical structures, which I call left-fibred double categories (LFDCs), shall be my object of study for the remainder of this section."
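To ground the abstract data listed above, the following is a minimal set-theoretic sketch of a comprehension category (my own illustrative encoding, not a construction from the paper; all helper names are hypothetical). Contexts are finite sets, a type over a context is a set-valued family indexed by its elements, and extension, substitution, weakening, and dependent sums are computed pointwise:

```python
# Toy set-theoretic model of the comprehension-category data (illustrative
# sketch only; helper names are my own, not the paper's notation).

def extend(ctx, ty):
    """Context extension ctx.A: the set of pairs (g, a) with a in ty(g)."""
    return {(g, a) for g in ctx for a in ty(g)}

def substitute(ty, f):
    """Substitution along f : delta -> ctx acts on types by precomposition."""
    return lambda d: ty(f(d))

def weaken(ty_b):
    """Type-level weakening: view B over ctx as a type over ctx.A."""
    return lambda ga: ty_b(ga[0])

def dependent_sum(ty_a, ty_b):
    """Sigma A. B: collapse a type B over ctx.A into a type of pairs over ctx."""
    return lambda g: {(a, b) for a in ty_a(g) for b in ty_b((g, a))}

ctx = {0, 1}
A = lambda g: set(range(g + 1))      # A(0) = {0}, A(1) = {0, 1}
B = lambda ga: {ga[1] * 10}          # a type over the extended context ctx.A

ext = extend(ctx, A)                 # {(0, 0), (1, 0), (1, 1)}
sigma = dependent_sum(A, B)          # Sigma A. B as a type over ctx
A_swapped = substitute(A, lambda d: 1 - d)
```

In this toy model the strong-sums condition is visible as a bijection between the extensions `extend(ctx, dependent_sum(A, B))` and `extend(extend(ctx, A), B)`, mirroring the associativity reading of strong sums discussed above.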
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Left-Fibred Double Categories",
+ "text": "An LFDC consists of the same data as items 1-5 of Def. 2.1 ###reference_dfn1###, along with the following:\nFor each context and type , a dependent pair type functor .\nFor each substitution and each term , a natural transformation with components .\nFor each context , an unit type object .\nNatural isomorphisms with components:\nfor all\nfor all and\nfor all and\nfor all together with and\nfor all\n with and\nfor all with and and (Beck-Chevalley for pair types)\nfor all with (Beck-Chevalley for unit types)\nsubject to certain coherence laws.\nThe above definition, up to and including item 4(e), is equivalently described as a pseudomonad in a certain tricategory (specifically the tricategory whose objects are categories and whose 1-cells are spans of categories with one leg a fibration). This alternative definition makes clear that the above structure is a kind of double category, with the property that its domain projection functor is a fibration, whence the name left-fibred double category.\nI am aware of only one place in the category-theoretic literature where structures such as these have been studied before, which is in recent work by David Jaz Myers & Matteo Cappucci (cappucci-myers, MC22 ###reference_b9###), as part of the two authors\u2019 work on Categorical Systems Theory, wherein they refer to such structures as \u201cdependent actegories.\u201d Suffice it to say that, to my knowledge, the use of LFDCs as models of type theory is novel to this paper.\nOne additional piece of type-theoretic structure that has so-far been missing from the above exposition of LFDCs is some notion of empty context from which to build other contexts. To this end, we have the following: a unit context in an LFDC is a context such that the context extension functor is an equivalence of categories between and . We think of types as closed types, such that contexts and closed types may be treated interchangeably. 
Hence we shall hereafter restrict our attention mainly to LFDCs equipped with a choice of unit context.\nAny comprehension category with strong sums and strong unit types is inherently an LFDC, making all models of ordinary dependent type theory naturally LFDCs. Specifically, any intuitionistic dependent type theory with strong sum types and a unit type can be straightforwardly transformed into a comprehension category (the syntactic category of the type theory) with strong sums/unit types (hence an LFDC) by quotienting terms up to -equivalence of -normal forms, and defining substitution, dependent pair types, etc., as their syntactic counterparts.\nThe LFDC concept also encompasses categorical models of linear/ordered logic, specifically monoidal categories. In this context, a monoidal category with monoidal product and unit is interpreted both as the category of contexts and as the category of types for each context . The substitution functors are then simply the identity on , and and are interpreted as and , respectively, with unit types interpreted as .\nObserve that in an LFDC based on a monoidal category , a term only trivially depends upon , since substitution into is given by the identity, while substituting into the term-level context involves potentially non-trivial composition of morphisms in . LFDCs arising from monoidal categories thus exhibit non-trivial substructural properties, but trivial type dependency, whereas LFDCs arising from comprehension categories exhibit non-trivial type dependency, but trivial substructural properties. As an example of an LFDC with both non-trivial type-dependency and substructural behavior, we have the following:\nThe category of linearly ordered sets (with either strictly or weakly order-preserving maps as morphisms) is naturally regarded as an LFDC, by interpreting both context extension and dependent pair types as the lexicographic sum of linear orders. 
A type in context is defined as a family of linear orders indexed by the set . Given such a family of linear orders, we define the context extension as the lexicographic sum of the family, i.e. the set ordered such that iff either or and . We define the dependent pair type similarly as a family of lexicographic sums, indexed by the underlying set of its context.\nThis example straightforwardly exhibits both non-trivial type dependency and substructural properties \u2013 on the one hand, any family of linear orders indexed by the underlying set of another linear order counts as a dependent type in this setting, while on the other hand, the lexicographic product, which is an instance of the lexicographic sum over a constant family of linear orders, is notably non-symmetric, and so is distinct from the ordinary product of intuitionistic type theory.\nAlthough the above argument is intuitively clear, we do not yet possess the appropriate structure on LFDCs to phrase such arguments in the internal language of LFDCs themselves \u2013 e.g. we defined the lexicographic product of linear orders as the lexicographic sum over a constant family, i.e. a family not dependent upon its context. As it stands, the basic definition of LFDC given above poses no criterion for when a type may be considered independent from (part of) its context. If we examine the above-given examples, we find that they all admit such a notion of independence or type-level weakening. It thus seems appropriate to augment the definition of an LFDC with such a notion of type-level weakening, which is in fact critical to defining further type-theoretic structure on LFDCs."
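The linear-order example above is concrete enough to compute with. The following sketch (my own encoding, not from the paper) represents a linear order as a Python list of its elements in order and a dependent type as a function from elements to linear orders; the lexicographic sum then both extends contexts and forms dependent pairs, and the non-symmetry of the lexicographic product is visible directly:

```python
# Illustrative sketch of the linear-order LFDC (my own encoding): a linear
# order is a list in order; a dependent type over it maps elements to orders.

def lex_sum(base, family):
    """Lexicographic sum: context extension / dependent pair formation."""
    return [(g, a) for g in base for a in family(g)]

def lex_product(xs, ys):
    """Lexicographic product = lexicographic sum over a constant family."""
    return lex_sum(xs, lambda _: ys)

nums = [0, 1]
letters = ["a", "b"]

# A genuinely dependent family: the fiber over g grows with g.
dep = lex_sum(nums, lambda g: list(range(g + 1)))

# Non-symmetry: the two lexicographic products order the same pairs differently.
lp = lex_product(nums, letters)
pl = [(x, y) for (y, x) in lex_product(letters, nums)]
```

Here `lp` and `pl` contain the same pairs but in different orders, witnessing that the lexicographic product is not symmetric, unlike the Cartesian product of intuitionistic type theory.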
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "2.3. Type-Level Weakening",
+ "text": "An LFDC has type-level weakening if it is additionally equipped with:\nA functor for each context and type\nA functor for each context and types\nSubject to certain natural isomorphisms and coherence laws thereupon that ensure the compatibility of the above-given functors with each other and with the structure of the ambient LFDC. In particular, there are natural isomorphisms and with components:\n\n\nfor all with and and along with . These isomorphisms effectively witness that a weakened type does not depend upon the type added to its context by weakening. We also require natural isomorphisms with components\n\n\nfor all with and . These isomorphisms ensure that weakening behaves as we would expect with regard to the type formers available in the ambient LFDC, i.e. the weakening of a unit type is itself (up to isomorphism) a unit type, and likewise for dependent pair types. When positing additional type formers for an LFDC with type-level weakening, we shall therefore generally require these to come equipped with analogous natural isomorphisms to ensure their compatibility with the type-level weakening structure, along with the usual Beck-Chevalley isomorphisms.\nAnalogous to the definition of the lexicographic product as the lexicographic sum of a constant family of linear orders, we then have the following in any LFDC with type-level weakening:\nIn an LFDC with type-level weakening, each category of types for each context is naturally equipped with the structure of a monoidal category whose unit object is , where the monoidal product of is defined as the dependent pair type .\nGiven type-level weakening functors and the associated monoidal products on an LFDC, it becomes straightforward to define function types as suitable right adjoints to these. Since LFDCs distinguish type-level and term-level contexts, we should expect there to correspondingly be notions of both term-level and type-level function types in an LFDC. 
We thus have the following definitions:\nAn LFDC with type-level weakening has term-level function types if for each context with , the functors and have right adjoints and , respectively, with the associated natural bijections of homsets:\nWe additionally require the functors and to satisfy a Beck-Chevalley condition and compatibility with type-level weakening in the form of canonical isomorphisms\netc.\nAn LFDC with type-level weakening has type-level function types if for each context with , the weakening functor has a right adjoint , with corresponding natural bijection of homsets\nAs in the above definition, we also require the functors to be compatible with substitution and type-level weakening in that there are canonical isomorphisms\nand so on. In an LFDC with a unit context and both term-level and type-level function types, \u201copen\u201d terms are equivalent to closed terms of type (where is the inverse to context extension by ). Such LFDCs thus fully internalize their logic of term formation.\nMoving on to structural properties of LFDCs with type-level weakening, among the most important of these is exchange, which allows permuting independent variables in contexts. Having defined the monoidal structure on types arising from type-level weakening, we are now in a position to state this property for LFDCs in general:\nAn LFDC with type-level weakening has exchange if the monoidal structure defined on each category of types in Def. 2.7 additionally carries the structure of a symmetric monoidal category, i.e. a natural isomorphism with components\nfor all , subject to certain coherence conditions.\nBeyond exchange, there is also term-level weakening, which allows discarding variables in the term-level context. 
This property may be defined for any LFDC, with or without type-level weakening, and in fact equips the LFDC with a natural form of type-level weakening.\nAn LFDC has term-level weakening if for each context , the type is a terminal object in the category . For any context , write for the unique morphism from any type to .\nAny LFDC with term-level weakening is naturally equipped with type-level weakening, given by substitution along terminal morphisms. Specifically, given a context and types and , we may define\nand\nWe may likewise define the following projection maps:\nwhere is as defined above. The latter projection induces a family of natural transformations\nby mapping to the composite\nIn this sense, type-level weakening in such an LFDC is \u201chalfway\u201d to forming an adjunction with the dependent pair type. This motivates the following definition of contraction in an LFDC with type-level weakening as forming the other half of such an adjunction.\nAn LFDC with type-level weakening has contraction if there is a natural transformation of homsets:\nfor all contexts and types and , that is suitably compatible with the structure of the LFDC.\nBy a Yoneda-style argument, the above is equivalent to the existence of a natural family of morphisms\nfor all contexts with , suitably compatible with the LFDC structure. In this sense, contraction allows for the term-level use of type-level variables. 
This makes sense if we think of type-level contexts in LFDCs as consisting of variables that are treated as already having been used elsewhere.\nIt follows that in an LFDC with contraction, for each context , there is a natural transformation with components for all types , defined as follows:\nIf we think of as duplicating its input, then this suffices to show that the above-defined notion of contraction for LFDCs gives rise to the usual notion of contraction as copying of variables.\nWe may then define a Cartesian LFDC as one which has both term-level weakening and contraction in a compatible way, specifically:\nAn LFDC is Cartesian if 1) it has term-level weakening, and 2) the natural transformations of homsets defined in Def. 2.11 are all invertible, i.e. the type-level weakening functors are right adjoint to the dependent pair type functors .\nMoreover, any Cartesian LFDC is naturally equipped with exchange, since for any context and types , we may define a morphism as follows:\nwhere is as defined in Def. 2.12. One then checks that is an isomorphism satisfying all the properties given in Def. 2.10.\nMore generally, the monoidal product on each category of types in a Cartesian LFDC satisfies the universal property of a product. 
Even if the given LFDC is not Cartesian, it may have such products in its type-theoretic structure, without these necessarily coinciding with the monoidal product, as follows:\nAn LFDC has product types if:\nFor each , there is a functor together with natural transformations with components and , such that for all types with morphisms and , there exists a unique morphism which factors and through and , respectively.\nThe substitution functors are all finite-product-preserving, and if the LFDC has type-level weakening, then similarly the type-level weakening functors preserve finite products.\nWe think of the product of two types as offering a choice between an element of and an element of , while the monoidal product offers two such elements at once."
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "2.4. Strictness",
+ "text": "Concluding the discussion on LFDCs, I now shift my attention to their role as models of type theory in particular. While the class of LFDCs has so far been defined as to include various mathematical examples, this definition is in a sense still too weak. Specifically, we have so far required substitution and type-level weakening to satisfy their requisite identities only up to coherent isomorphism, when from the perspective of type theory, these should hold on the nose. For this purpose, I introduce the following definition:\nAn LFDC is strict if:\nThe assignment of substitution functors strictly preserves identities and composites of substitutions.\nThe Beck-Chevalley isomorphisms are identities, i.e. substitution strictly preserves unit and dependent pair types.\nFor LFDCs with type-level weakening, the natural isomorphisms associated with type-level weakening must also be strict. Similarly, substitution and type-level weakening in LFDCs with function types / products must strictly preserve the associated type formers.\nMost of the above-considered examples of LFDCs fail to be strict. However, one significant class of LFDCs manages to meet these requirements, which is the class of syntactic categories of dependent type theory as in Ex. 2.1 ###reference_theorem1###. Because (type-level) weakening in such models is given by inclusions of terms into extended contexts, and because substitution is essentially computed by applying the prescribed identities, it follows that these models are in fact strict."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. The Type Theory of LFDCs",
+ "text": "What is a type theory? Standardly, one might answer that a type theory is a definition of some syntax along with rules for forming judgments upon this syntax, and perhaps also for computing therewith. As an alternative, I wish to pose the view that a type theory is a system of translation between syntax and semantics. In the ordinary approach to type theory, the computational dynamics form an elementary semantics for the theory, defined e.g. in terms of equivalence classes of terms modulo -equivalence of normal forms, etc. Yet this may be generalized \u2013 provided we have a procedure for translating well typed terms of the theory into some semantic domain, we need not restrict ourselves to semantics that are so-dependent upon the syntax of the theory itself. Instead, we may define a type theory parameterized by a class of models in which to interpret it, and then search among this class for a model that yields good computational behavior. This allows for a separation of concerns between the design of the syntax of the type theory and the assurance of its computational adequacy. 111This perspective on type theories also generalizes the more modern approach of specifying the syntax and rules of a type theory within a logical framework using higher-order abstract syntax (HOAS). In that case, the semantic domain in which the type theory is interpreted is simply the ambient logical framework itself; although this offers significant advantages over the tradtional approach to the specification of type theories, it is not suitable for our present purposes of defining substructural dependent type theory, since any type theory specified via HOAS in an intuitionistic logical framework will allow for intuitionistic use of variables in type families \u2013 precisely what we wish to avoid in general. 
Hence I have been compelled to seek a suitable abstraction of this approach, which has led me to the above-described perspective on type theories, and the categorical semantics of substructural type theory sketched in the prior section.\nSuch separation of concerns is particularly useful in the design of dependent type theory. Typically in dependent type theory, checking whether a term belongs to a given type may require performing some amount of computation within the type. Thus, type checking of terms depends upon semantic information about the types. This suggests that, in a typing judgment such as , the type ought to be an object drawn from the semantics of the theory, while the term (the subject of the judgment) is a syntactic representation of an inhabitant of . The rules for checking such a judgment can then be designed in such a way as to implicitly compute the semantic denotation of . This is moreover in line with the discipline of bidirectional type-checking, wherein the judgments of a type theory are conceived not merely as predicates, but as procedures for computing certain information about terms/types. I hence adopt a bidirectional approach to the type theory I present herein, where every judgment computes the semantic denotation of its subject.\nBecause we have taken care in the previous section to identify the type-theoretic structure of LFDCs (with type-level weakening, function types, etc.), it is largely straightforward to give a type-theoretic syntax for this structure, as I now illustrate."
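The check/infer discipline described above can be made concrete with a toy checker. The sketch below is for a standard intuitionistic simply-typed fragment, purely to illustrate the two judgment modes and how annotation switches from inference to checking; it is emphatically not the paper's substructural system, and all syntax encodings here are my own:

```python
# Minimal bidirectional checker (illustrative only; an intuitionistic
# simply-typed fragment, not the substructural theory of this paper).

def infer(ctx, term):
    kind = term[0]
    if kind == "var":                       # variables synthesize their type
        return ctx[term[1]]
    if kind == "ann":                       # annotation: switch to checking mode
        _, body, ty = term
        check(ctx, body, ty)
        return ty
    if kind == "app":                       # application: infer the function,
        _, fn, arg = term                   # then check the argument
        fty = infer(ctx, fn)
        assert fty[0] == "->", "applying a non-function"
        check(ctx, arg, fty[1])
        return fty[2]
    raise TypeError(f"cannot infer {kind}")

def check(ctx, term, ty):
    if term[0] == "lam":                    # introduction forms are checked
        assert ty[0] == "->", "lambda needs a function type"
        check({**ctx, term[1]: ty[1]}, term[2], ty[2])
    else:                                   # embedding: infer, then compare
        assert infer(ctx, term) == ty, "type mismatch"

# Checking (x : base) |- ((\y. y) : base -> base) x  at type base:
ident = ("ann", ("lam", "y", ("var", "y")), ("->", ("base",), ("base",)))
ty = infer({"x": ("base",)}, ("app", ident, ("var", "x")))
```

Note how, as in the judgments of this paper, each mode has a clear information flow: inference takes a context and subject and yields a type, while checking takes the type as an additional input.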
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "3.1. Expressions, Denotations, Signatures & Judgments",
+ "text": "Assume a countably infinite stock of variables \u2013 which I shall write as \u2013 and countably infinite supplies of type family symbols and constant symbols \u2013 written and , respectively. We then have the following grammar of syntactic expressions:\nFix a strict LFDC with type-level weakening, function types, and products. For each syntactic sort of expressions defined above, we then define a corresponding class of denotations:\nContexts: the denotation of a context is a list of type annotations , which is well-formed if each such is a type in the context corresponding to the list of annotations preceding . We then have the following definition of the semantic context represented by\nI write (or when is understood to be the type-level context, and the term-level context), to mean that 1) is well-formed, and 2) the concatenation of onto the right-hand side of is also well-formed, and say that is a telescope for . We then have the following recursive definition of the semantic type represented by a telescope for a semantic context :\nTypes: given a semantic context , the denotation of a type wrt is defined as an object .\nIntroduction & Elimination Forms: given a semantic context and a telescope together with a semantic type , the denotation of an introduction/elimination form wrt and is defined as a morphism .\nIn order to capture structure of the LFDC in which the type theory is interpreted, we allow for the type theory to additionally be parameterized by a signature of constant terms and atomic type families. Such a signature is defined to be a set of bindings, where each binding has one of the following forms:\n, where is a closed semantic type, and is a type family dependent upon .\n, where is a closed semantic type, and is a closed term of type .\nWe conceive of judgments as procedures taking inputs (here written in blue) and yielding outputs (here written in red), with one input of each judgment designated the subject (written in violet). 
I follow McBride (mcbride, McB16) in writing inputs to a judgment to the left of the subject, and outputs to the right, using and for the judgments in which types are checked and inferred, respectively. The four main judgments of the type theory are then as follows:\nObserve that within these judgments all inputs/outputs other than the subject are semantic objects. Hence it is only ever the subject of a judgment which requires checking, and all dependence of type checking upon semantic information about types/contexts/etc. is suitably encoded in the structure of these judgments.\nFrom this point on, I will generally omit the sub/superscripts from and , as these are readily inferred from their occurrence in the above judgments. Similarly, I will omit sub/superscripts from natural transformations where these may be inferred.\nThe reader familiar with traditional expositions of type theory may notice a conspicuous lack of a judgment for equality among those given above. In fact, there is no need for such a judgment, because we have defined the above judgments to compute the semantic denotations of their subjects \u2013 we thus may simply consider types judgmentally equal whenever , introduction forms judgmentally equal whenever , etc.\nBefore proceeding with the rules of the type theory, I first note some useful constructions on semantic contexts / types / terms (see Appendix A for full definitions):\nFunctoriality of telescopes: given a telescope , there is an associated functor given by iterative formation of dependent pair types.\nContext substitution/weakening: given a telescope and substitution , there is a telescope given by substituting into all types in . Similarly, for a telescope and type , there is a telescope given by weakening all types in .\nReassociation: for each telescope there is an isomorphism . 
Similarly, for each telescope and type there is an isomorphism\nTerm substitution: given a term , there is a substitution , and given a term and a telescope , there is a substitution\nLifting: for each telescope and type , there are lifting functors and that lift types into extended contexts by iterative weakening, and a telescopic weakening functor\nProjection: in an LFDC with term-level weakening, for a telescope and a type , there is a projection"
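The telescope constructions above have a simple computational reading. In the sketch below (my own encoding, not the paper's definitions from Appendix A), a telescope is a list of types, its functorial action folds the list into a left-nested dependent pair, and reassociation is the evident isomorphism on nested-pair values:

```python
# Illustrative sketch of telescopes (my own encoding): a telescope is a list
# of types; its denotation is the left-nested dependent pair of its entries.

def telescope_type(entries):
    """Fold a telescope [A, B, C] into left-nested pairs: ((A, B), C)."""
    ty = entries[0]
    for e in entries[1:]:
        ty = ("pair", ty, e)
    return ty

def reassoc(v):
    """The reassociation isomorphism on values: ((a, b), c) -> (a, (b, c))."""
    (a, b), c = v
    return (a, (b, c))

def reassoc_inv(v):
    """Inverse direction: (a, (b, c)) -> ((a, b), c)."""
    a, (b, c) = v
    return ((a, b), c)
```

The round trip `reassoc_inv(reassoc(v)) == v` is the value-level shadow of the reassociation isomorphism being invertible.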
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "3.2. Inference Rules",
+ "text": "In the bidirectional treatment of judgments-as-programs, a rule defines a procedure for evaluating a judgment on a subset of its input domain. For a rule of the form\nwhere and are judgments, the algorithmic interpretation is bottom-to-top: to evaluate the conclusion , evaluate the premises in order. McBride (mcbride, McB16 ###reference_b10###) offers the following heuristic \u2013 a rule is a server for its conclusion and a client for its premises, accepting inputs to its conclusion, supplying inputs to its premises, receiving outputs of the premises, and forming the output of its conclusion accordingly. Because of McBride\u2019s discipline of writing all inputs to the left and all outputs to the right of the subject of each judgment, information thus flows clockwise around a rule, starting and ending at the 6 o\u2019clock position.\nJudgments are then evaluated by nondeterministically evaluating rules whose input patterns match the input to the judgment. If no rule applies, the judgment signals failure, which is propagated to any rule invoking it as a subroutine. A successful evaluation is given by a derivation tree built from the rules \u2013 hence induction on derivations can be applied as usual in the metatheory. To minimize nondeterminism, we shall seek to ensure that at most one rule applies to any syntactic form of the subject of its conclusion, so that judgment evaluation is syntax-directed."
+ },
+ {
+ "section_id": "3.2.1",
+ "parent_section_id": "3.2",
+ "section_name": "3.2.1. Context Formation",
+ "text": "Because we have defined the denotations of contexts in close connection to their syntactic representations, the rules for context formation are straightforward:"
+ },
+ {
+ "section_id": "3.2.2",
+ "parent_section_id": "3.2",
+ "section_name": "3.2.2. Annotation & Embedding",
+ "text": "We have the following rules for switching between type checking and inference. The rule for switching from inference to checking requires us to annotate the term to be checked with the type against which to check it:\nwhile the rule for switching from checking to inference instead requires that the type we are able to synthesize for a given term is equal to the one already provided for checking:"
+ },
+ {
+ "section_id": "3.2.3",
+ "parent_section_id": "3.2",
+ "section_name": "3.2.3. Atoms & Constants",
75
+ "text": "Fix a signature . We then have the following rule for instantiating atomic type families contained in :\nDue to type-level weakening, we may make use of some part of the context in forming the parameter for a type family , discarding while keeping at the type level. Note that once the part of the context used in constructing has been selected, the term-level typing rules enforce that must be used in accordance with the substructural constraints of the type theory. Similarly, we have the following rule for instantiating constants from :"
76
+ },
77
+ {
78
+ "section_id": "3.2.4",
79
+ "parent_section_id": "3.2",
80
+ "section_name": "3.2.4. Identity, Weakening & Contraction",
81
+ "text": "If the ambient LFDC has neither term-level weakening nor contraction, then we have only the following strict identity rule:\nIf, on the other hand, the ambient LFDC has term-level weakening, then we may instead make use of the following weak identity rule:\nSimilarly, if the ambient LFDC has contraction but not term-level weakening, then we may make use of the following in addition to the strict identity rule above:\nFinally, if the ambient LFDC is Cartesian, then we may instead use the following rule in addition to the weak identity rule:"
82
+ },
83
+ {
84
+ "section_id": "3.2.5",
85
+ "parent_section_id": "3.2",
86
+ "section_name": "3.2.5. Exchange",
87
+ "text": "Unlike term-level weakening and contraction, exchange cannot be handled merely in the typing rules for individual variables, since it necessarily concerns the use of multiple variables. Thus, the rules for exchange must allow permuting the variables in the term-level context of any typing judgment, i.e.:\nwhere\nThe rules for exchange are thus not syntax-directed, and so introduce an additional element of nondeterminism to type-checking, since type checking an expression may require trying all valid permutations of the context. For present purposes, this is a tolerable state of affairs, as there are only ever finitely many such permutations, and so this does not impact the decidability of type-checking. However, for practical use, further work will be necessary to find ways of cutting down on this nondeterminism. We can however omit these rules in Cartesian LFDCs, since by Def. 2.13 ###reference_dfn13### the exchange structure in a Cartesian LFDC already arises from term-level weakening and contraction."
88
+ },
89
+ {
90
+ "section_id": "3.2.6",
91
+ "parent_section_id": "3.2",
92
+ "section_name": "3.2.6. The Unit Type",
93
+ "text": "The formation rule for the unit type is straightforward:\nLikewise the introduction rule, except that this rule admits a variation when the ambient LFDC has term-level weakening:\nHowever, the elimination rules for the unit type are fairly complex. There are two main reasons for this: one is that the unit type is positive and so its elimination form is pattern-matching, which may be applied either at the term level or at the type level \u2013 hence there must be two distinct rules for such uses. Moreover, when performing such pattern-matching, we use some part of the ambient context in constructing a term of the unit type, but the part of the context occurring after may implicitly depend upon itself. To solve this issue, we require that a pattern matching expression represent these dependencies explicitly, by wrapping the remaining context in a type dependent upon the expression being matched over. We thus have the following term-level elimination rule:\nand the following type-level elimination rule:\nOn their own these rules are not quite sufficient, due to a quirk of the unit type: if the part of a context occurring after that used in constructing is empty, we cannot encode this via a variable of the unit type, since we would then generally need to eliminate this variable, bringing us round in a circle. We thus have the following additional rules to handle these exceptional cases at the term level:\nand at the type level:"
94
+ },
95
+ {
96
+ "section_id": "3.2.7",
97
+ "parent_section_id": "3.2",
98
+ "section_name": "3.2.7. Dependent Pair Types",
99
+ "text": "The formation rule for dependent pair types is straightforward:\nAs is the introduction rule:\nBecause the dependent pair type is positive, its elimination rules follow the same pattern as the unit type. We have a term-level rule:\nand a type-level rule:"
100
+ },
101
+ {
102
+ "section_id": "3.2.8",
103
+ "parent_section_id": "3.2",
104
+ "section_name": "3.2.8. Term-Level Function Types",
105
+ "text": "The rules for term-level functions are each essentially variations on the rules for function types in intuitionistic type theory. We have the formation rule for :\nand the following for :\nThe intro rule for follows the usual form of function abstraction, introducing a variable on the left of the term-level context:\nwhile the intro rule for introduces a variable on the right:\nThe elimination rule for thus forms its argument using part of the context occurring to the left of the part used in forming a function:\nwhile the elimination rule for forms the argument to using part of the context occurring to the right of the part used in forming :"
106
+ },
107
+ {
108
+ "section_id": "3.2.9",
109
+ "parent_section_id": "3.2",
110
+ "section_name": "3.2.9. Type-Level Function Types",
111
+ "text": "The rules for type-level functions are again a variation on the rules for function types in intuitionistic type theory, but this time the function types in question are dependent function types. We have the following formation rule:\nand the following introduction rule:\nand the corresponding elimination rule:\nwherein we may make use of any part of the type-level context in forming an input to , provided that the term-level context does not depend upon this part of the type-level context."
112
+ },
113
+ {
114
+ "section_id": "3.2.1",
115
+ "parent_section_id": "3.2",
116
+ "section_name": "3.2.10. Product Types",
117
+ "text": "The rules for product types are largely straightforward. We have the following type formation rule:\nand the following introduction rule:\nNote that the term-level context is used in checking both and , since offers a choice of which of to construct from the resources in . We then have the following elimination rules, which allow for making such a choice:\nand"
118
+ },
119
+ {
120
+ "section_id": "3.3",
121
+ "parent_section_id": "3",
122
+ "section_name": "3.3. Syntactic Completeness",
123
+ "text": "As a consequence of the construction of the rules given above, we automatically have a form of type soundness for the theory: every well typed syntactic expression corresponds to a well defined semantic object of the appropriate kind. Beyond such soundness, however, there are further desiderata we may have for such a theory, namely completeness and effectivity/decidability of the above-defined procedure for type-checking/computing denotations of expressions.\nAs to the completeness of this theory, by soundness we already have that the syntax of the theory may be interpreted in any strict LFDC, so it remains only to show that this syntax is closed under the constructions available in a (strict) LFDC, and therefore that the syntax itself forms such a (strict) LFDC. From this it will follow that a syntactic expression is well typed if and only if a corresponding semantic object of the appropriate kind exists in every strict LFDC.\nThe syntax of the theory already includes primitive constructs corresponding to all the semantic type-formers in an LFDC with type-level weakening, function types, and product types. Therefore all that remains is to prove the admissibility of syntactic constructions corresponding to the parts of such an LFDC not given by its type-formers, which are namely: type-level weakening, and substitution/composition.\nIf then\nIf then\nIf then\nInduction on derivations. \u220e\nGiven expressions and an expression not containing any variable bound in , respectively, write , and for the uniform substitution of for all free occurrences of the variable in , respectively. We then have the following:\nIf and then\nIf and , then\nIf and , then\nIf and , then\nIf and , then\nInduction on derivations.\n\u220e\nI leave to future work a full proof that the well typed syntactic fragment of this type theory, quotiented by judgmental equality as defined in Def. 
3.4 ###reference_dfn4###, forms a strict LFDC with type-level weakening, function types, products, and the appropriate structural properties. Suffice it to say, however, that the above two propositions form the backbone of such a proof, and moreover demonstrate at least morally that the syntax of this type theory completely captures the type-theoretic language of such LFDCs."
124
+ },
125
+ {
126
+ "section_id": "3.4",
127
+ "parent_section_id": "3",
128
+ "section_name": "3.4. Substructuralization & Decidability",
129
+ "text": "As to the decidability of the described type theory, in general, one should not anticipate decidability when interpreting the type theory in an arbitrary (strict) LFDC. One may hope, however, to isolate a computationally well-behaved subclass of (strict) LFDCs, ensuring decidability for the associated type theories. For this purpose, I define a notion of substructuralization that allows one to convert an intuitionistic dependent type theory into a substructural one.\nLet be an intuitionistic dependent type theory with judgmentally-distinct type formers such that\nsatisfies the rules of a unit type in intuitionistic type theory\nsatisfies the rules of a dependent pair type former in intuitionistic type theory\nsatisfies the rules of a dependent function type former in intuitionistic type theory\nand both satisfy the rules of function type formers in intuitionistic type theory\nsatisfies the rules of a product type former in intuitionistic type theory\nThen the substructuralization of is defined as the interpretation of the substructural dependent type theory defined above (with any combination of weakening, contraction, and exchange) in the strict LFDC with type-level weakening, function types, and products, given by the syntactic model of as defined in Ex. 2.4 ###reference_theorem4###.\nThe substructuralization of an intuitionistic dependent type theory essentially inherits its computational procedures from while imposing substructural constraints upon the typing rules for . The essential idea behind this is that the terms of substructural type theory denote the same sorts of data as those of intuitionistic type theory, i.e. functions, pairs, etc., whose computational behavior is already well understood and unchanged by the substructural rules. Hence we should be able to bootstrap ourselves up from an intuitionistic dependent type theory to a substructural dependent type theory, whilst preserving all the desirable computational properties thereof. 
To this effect, we have the following theorem:\nLet be an intuitionistic dependent type theory that is normalizing (i.e. every term of computes to a judgmentally-equal normal form), and that has Type-Canonicity in that:\nif is judgmentally equal to , then its normal form is ,\nif is judgmentally equal to , then its normal form is for some types in normal form,\netc.\nthen has the following properties:\nJudgmental equality of types is decidable. I.e. given semantic types and , we may check whether these are equal in the syntactic model of by reducing them both to normal form and comparing these normal forms for -equivalence.\nPattern-matching on types is decidable. E.g. to check whether a type is of the form , we reduce it to normal form and check whether this normal form has the form .\nIf an intuitionistic dependent type theory satisfies the conditions of the above theorem, then type checking for is decidable. Examining the rules for , we see that these require only the abilities to 1) pattern match on expressions (trivial), 2) pattern match on types using primitive type formers (follows from the above theorem), 3) pattern match for type-level weakening (can be done by checking that the weakened variable does not occur freely in a type), and 4) check types for equality (follows from the above theorem)."
130
+ },
131
+ {
132
+ "section_id": "3.5",
133
+ "parent_section_id": "3",
134
+ "section_name": "3.5. Application: Linear Logical Frameworks",
135
+ "text": "I come now to an example of the practical advantage of this theory over prior substructural dependent type theories, namely: a suitably substructuralized dependent type theory is particularly well-suited as a logical framework for the metatheory of linear logic.\nWe define the Logic of Left-Fibred Double Categories with Symmetry (LLFDC), as the substructuralization of intuitionistic dependent type theory with at least one universe, which includes the exchange rules but neither term-level weakening nor contraction as structural rules.\nWe may add atomic type families and axioms / constants to LLFDC by including corresponding variables of the appropriate types in the underlying intuitionistic dependent type theory and then adding these to the signature of its substructuralization. Hence for an atomic type family variable and a closed LLFDC-type , we write to mean , and similarly for a constant variable , we write to mean .\nI write and as abbreviations for and , respectively. Similarly, I write in place of . Additionally, I shall write for when does not occur free in , and instead of . I also generally write simply as , i.e. I treat embedding of elimination forms into introduction forms as an implicit operation, rather than an explicit one.\nI claim that LLFDC is an ideal setting in which to conduct the metatheory of ordinary (intuitionistic) linear logic, as I shall now demonstrate. I will show, in particular, that LLFDC is capable of representing cut admissibility for intuitionistic linear sequent calculus in a manner which avoids the problems with such representations in prior linear dependent type theories highlighted by Reed (reed, Ree09 ###reference_b12###).\nI follow the method of Cervesato & Pfenning (cervesato-pfenning, CP02 ###reference_b2###), as adapted by Reed (reed, Ree09 ###reference_b12###), in representing linear sequent calculus via HOAS, with suitable modifications for the specificities of LLFDC. 
We begin by postulating an atomic type of propositions\nwhich is then used to parameterize atomic type families of antecedents and consequents, respectively:\nthe idea being that a derivation in intuitionistic linear sequent calculus corresponds to a closed term of type\nWe then include constructors on propositions corresponding to the connectives of linear logic. For illustrative purposes, I concentrate on the linear implication , represented as follows:\nFor the sake of legibility, I write and as an abbreviation for . We then have the following constructors for derivations, corresponding to the left/right rules of each of implication in linear sequent calculus:\nAdditionally, we have the following constructors, corresponding to the Cut and Identity rules:\nOur goal, then, is to give a procedure for converting a derivation making use of into a cut-free derivation. For this purpose, we introduce a type family representing constructions of cut-free proofs:\nWe then have the following constructors for in cases where it is applied to a proof constructed from the identity or left/right rules:\nTo handle the case where is applied to a derivation whose outermost constructor is , we further introduce the following relation to capture single-step cut reduction:\nFollowing Reed (reed, Ree09 ###reference_b12###), we use the product type former to allow forming a derivation of type in the same context as the corresponding inputs to cut, which are themselves represented with the type , which must therefore split up the context accordingly. We then add the following:\nWe encode the cut reduction procedure via axioms of the form\nwhere are some universally-quantified parameters. Note that the construction of forces these parameters to be used linearly in each of . This is the key to the correct behavior of this representation of Cut Admissibility.\nFor instance, we have the following axiom for handling Principal Cuts (i.e. 
where a left rule meets a right rule):\nThe other axioms for various cases arising from Cut follow similarly. Note that the linearity constraints enforced upon the parameters to ensure that every derivation occurring as a parameter in the inputs to a cut must be used in the same quantity in the output of the cut reduction.\nIt follows that Reed\u2019s examples (reed, Ree09 ###reference_b12###) of spurious Cut Elimination rules that can be written in other linear logical frameworks do not apply to the above. In particular, Reed considers a case where, instead of the usual Right rule for , we instead had\nIn which case the corresponding cut reduction axiom for a principal cut would have to look something like\nbut this is ill-typed, because the variable gets used twice in a term-level position in . Similarly, if we instead had\nthen the corresponding cut reduction axiom for a principal cut would have to look something like\nbut this is again ill-typed because now the variable does not get used at the term level in the expression .\nHence Reed\u2019s problem of representing a cut admissibility relation so as to allow for only linear programs to be represented by this relation is solved in LLFDC. From here, one may apply the usual structural induction on complexity of propositions / proofs involved in cuts to show that cut reduction terminates, and hence for every derivation of type there is a corresponding proof of type ."
136
+ },
137
+ {
138
+ "section_id": "4",
139
+ "parent_section_id": null,
140
+ "section_name": "4. Conclusion & Outlook",
141
+ "text": "The foregoing, I hope, constitutes a first step toward the theory of LFDCs and their internal language, hence also toward a substructural dependent type theory at the right level of generality. Taking stock, we have seen that many of the constructs of ordinary dependent type theory (dependent pair/function types, etc.) can be given suitably-substructural analogues in this setting, and these enjoy many of the same metatheoretic desiderata, including type soundness and decidability. Moreover, these constructs are better-behaved than those of prior substructural dependent type theories, in that they do not suffer the same issues with representing substructural constraints in the formation of parameters to type families.\nOf course, much remains to be done in fleshing out this theory. On the syntactic side of things, we may hope to extend the catalogue of constructs available in substructural dependent type theory with other mainstays of type theory, e.g. universes, inductive types, coinductive types, etc. On the semantic side, a fully-rigorous treatment of the informal semantics sketched in this paper is in order, including full definitions of LFDCs and associated constructs, along with a proof that the syntactic models of this type theory are themselves strict LFDCs that are equivalent to those in which they are interpreted. Moreover, the type theory of LFDCs should be applicable in all LFDCs, not just strict ones, provided the following conjecture holds:\nEvery LFDC (with type-level weakening, function types, products, etc.) is equivalent to a strict one."
142
+ },
143
+ {
144
+ "section_id": "4.1",
145
+ "parent_section_id": "4",
146
+ "section_name": "4.1. Toward the type theory of monoidal topoi",
147
+ "text": "Beyond general development of the type theory of LFDCs, a specific application of this theory is toward constructing a type theory for working internally in monoidal topoi, i.e. topoi equipped with an additional (bi)closed monoidal structure. Such topoi arise naturally in the analysis of substructurally-typed programming languages \u2013 if the types of a language form a monoidal category , then the presheaf category on is a monoidal topos via Day Convolution, whose internal language is essentially the logic of -programs.\nA type theory for such monoidal topoi must therefore combine aspects of ordinary dependent type theory, arising from the topos in the usual way, with the substructural aspect present in the topos due to its monoidal structure. Viewing this situation through the lens of LFDCs reveals that such monoidal topoi in fact consist of not one but two LFDC structures, one Cartesian and given by the usual comprehension category structure on the topos, the other given by the monoidal structure on the topos as in Ex. 2.2 ###reference_theorem2###. What these two LFDC structures have in common is their shared category of contexts/closed types, i.e. the underlying topos. Generalizing this situation slightly, we may consider pairs of LFDCs whose categories of contexts/closed types are equivalent. The functors constituting such an equivalence give ways of going back and forth between the type theories of these LFDCs in restricted contexts, and in this sense function as modalities on these type theories.\nThis in turn suggests that the right way to type-theoretically capture such an equivalence is to make use of constructs from modal type theory (cf. (multimodal, Gra+20 ###reference_b5###)) in the type theory of LFDCs. Adapting such constructs from the usual categorical semantics of dependent type theory to the setting of LFDCs, and correspondingly from intuitionistic to substructural dependent type theory, remains to be done. 
But the above reasoning suggests that if it can be carried out, such a marriage of modal and substructural type theory may yield significant benefits for the analysis of substructural programs.\nGoing even further, we may consider type theories interpreted not only in monoidal topoi, but monoidal -topoi, i.e. models of Homotopy Type Theory (HoTT) equipped with a suitable notion of substructurality. In some ways, this is more natural from a computational point of view, since the internal language of 1-topoi is extensional Martin-L\u00f6f Type Theory, for which type checking is undecidable, while it is possible to give a type theory for HoTT possessing the normalization and canonicity properties (sterling-angiuli, SA21 ###reference_b14###).\nA closely related line of recent work is the form of linear dependent type theory devised by Mitchell Riley in his thesis (riley, Ril22 ###reference_b13###). This approach to linear dependent type theory makes use of ideas of bunched logic, and is based on a specific model of Homotopy Type Theory in parameterized spectra. Potential applications of this type theory in quantum certification have been further considered by e.g. Myers, Riley, Sati & Schreiber (quantum, Mye+23 ###reference_b11###). It remains to be seen whether and how this type theory is related to the form of substructural dependent type theory developed in this paper, i.e. whether one subsumes the other, etc. Such intertheoretic connections thus offer yet another avenue to be explored in developing the general theory of substructural dependent types."
148
+ }
149
+ ],
150
+ "appendix": [
151
+ {
152
+ "section_id": "Appendix 1",
153
+ "parent_section_id": null,
154
+ "section_name": "Appendix A Auxiliary Definitions",
155
+ "text": "The functor associated to a telescope is defined by recursion on as follows:\nSimilarly, the substitution of a morphism into a telescope is defined by recursion on :\nand likewise, the weakening of a telescope is defined by recursion on :\nwhere\nand\nThe type-level reassociation\nof a telescope is defined by recursion on :\nand likewise the term-level reassociation\nThe substitution of a term is defined as\nand is defined as\nThe lifting functors and for a telescope and type are defined by recursion on as follows:\nand the telescopic weakening functor\nis defined as\nThe projection map\nfor each telescope and type is defined by recursion on as follows:"
156
+ }
157
+ ],
158
+ "tables": {},
159
+ "image_paths": {},
160
+ "validation": true,
161
+ "references": [
162
+ {
163
+ "1": {
164
+ "title": "\u201cThe Syntax and Semantics of Quantitative Type Theory\u201d",
165
+ "author": "Robert Atkey",
166
+ "venue": "In LICS \u201918: 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, July 9\u201312, 2018, Oxford, United Kingdom, 2018",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "2": {
172
+ "title": "\u201cA Linear Logical Framework\u201d",
173
+ "author": "Iliano Cervesato and Frank Pfenning",
174
+ "venue": "In Information and Computation 179.1, 2002, pp. 19\u201375",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "3": {
180
+ "title": "\u201cTelescopic mappings in typed lambda calculus\u201d",
181
+ "author": "N.G. de Bruijn",
182
+ "venue": "In Information and Computation 91.2, 1991, pp. 189\u2013204",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "4": {
188
+ "title": "\u201cLinear logic\u201d",
189
+ "author": "Jean-Yves Girard",
190
+ "venue": "In Theoretical Computer Science 50.1, 1987, pp. 1\u2013101",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "5": {
196
+ "title": "\u201cMultimodal Dependent Type Theory\u201d, LICS \u201920",
197
+ "author": "Daniel Gratzer, G.A. Kavvos, Andreas Nuyts and Lars Birkedal",
198
+ "venue": "Saarbr\u00fccken, Germany: Association for Computing Machinery, 2020, pp. 492\u2013506",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "6": {
204
+ "title": "\u201cComprehension categories and the semantics of type dependency\u201d",
205
+ "author": "Bart Jacobs",
206
+ "venue": "In Theoretical Computer Science 107.2, 1993, pp. 169\u2013207",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "7": {
212
+ "title": "\u201cIntegrating Linear and Dependent Types\u201d",
213
+ "author": "Neelakantan R. Krishnaswami, Pierre Pradic and Nick Benton",
214
+ "venue": "In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL \u201915",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "8": {
220
+ "title": "\u201cAn Intuitionistic Theory of Types: Predicative Part\u201d",
221
+ "author": "Per Martin-L\u00f6f",
222
+ "venue": "In Logic Colloquium \u201973 80, Studies in Logic and the Foundations of Mathematics",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "9": {
228
+ "title": "\u201cThe Para Construction as a Distributive Law\u201d Talk given at Virtual Double Categories Workshop, 2022",
229
+ "author": "David Jaz Myers and Matteo Cappucci",
230
+ "venue": "URL: https://bryceclarke.github.io/virtual-double-categories-workshop/slides/david-jaz-myers.pdf",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "10": {
236
+ "title": "\u201cI Got Plenty o\u2019 Nuttin\u201d\u2019",
237
+ "author": "Conor McBride",
238
+ "venue": "In A List of Successes That Can Change the World: Essays Dedicated to Philip Wadler on the Occasion of His 60th Birthday",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "11": {
244
+ "title": "\u201cEffective Quantum Certification via Linear Homotopy Types\u201d, 2023",
245
+ "author": "David Jaz Myers, Mitchell Riley, Hisham Sati and Urs Schreiber",
246
+ "venue": "URL: https://ncatlab.org/schreiber/files/QPinLHOTT-ExtendedAbstract-230315.pdf",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "12": {
252
+ "title": "\u201cA Hybrid Logical Framework\u201d, 2009",
253
+ "author": "Jason Reed",
254
+ "venue": null,
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "13": {
260
+ "title": "\u201cA Bunched Homotopy Type Theory for Synthetic Stable Homotopy Theory\u201d, 2022",
261
+ "author": "Mitchell Riley",
262
+ "venue": "DOI: https://doi.org/10.14418/wes01.3.139",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "14": {
268
+ "title": "\u201cNormalization for Cubical Type Theory\u201d",
269
+ "author": "Jonathan Sterling and Carlo Angiuli",
270
+ "venue": "In 2021 36th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2021, pp. 1\u201315",
271
+ "url": null
272
+ }
273
+ }
274
+ ],
275
+ "url": "http://arxiv.org/html/2401.15258v1"
276
+ }
20240127/2401.15265v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15275v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15279v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15282v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15287v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15290v1.json ADDED
@@ -0,0 +1,71 @@
1
+ {
2
+ "title": "A template for PRIME AI Style Citation: Authors. Title. Pages\u2026. DOI:000000/11111.",
3
+ "abstract": "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Ut purus elit,\nvestibulum ut, placerat ac, adipiscing vitae, felis. Curabitur dictum\ngravida mauris. Nam arcu libero, nonummy eget, consectetuer id,\nvulputate a, magna. Donec vehicula augue eu neque. Pellentesque habitant\nmorbi tristique senectus et netus et malesuada fames ac turpis egestas.\nMauris ut leo. Cras viverra metus rhoncus sem. Nulla et lectus\nvestibulum urna fringilla ultrices. Phasellus eu tellus sit amet tortor\ngravida placerat. Integer sapien est, iaculis in, pretium quis, viverra\nac, nunc. Praesent eget sem vel leo ultrices bibendum. Aenean faucibus.\nMorbi dolor nulla, malesuada eu, pulvinar at, mollis ac, nulla.\nCurabitur auctor semper nulla. Donec varius orci eget risus. Duis nibh\nmi, congue eu, accumsan eleifend, sagittis quis, diam. Duis eget orci\nsit amet orci dignissim rutrum.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi.\nMorbi auctor lorem non justo. Nam lacus libero, pretium at, lobortis\nvitae, ultricies et, tellus. Donec aliquet, tortor sed accumsan\nbibendum, erat ligula aliquet magna, vitae ornare odio metus a mi. Morbi\nac orci et nisl hendrerit mollis. Suspendisse ut massa. Cras nec ante.\nPellentesque a nulla. Cum sociis natoque penatibus et magnis dis\nparturient montes, nascetur ridiculus mus. Aliquam tincidunt urna. Nulla\nullamcorper vestibulum turpis. Pellentesque cursus luctus mauris. Nulla malesuada porttitor diam. Donec felis erat, congue non, volutpat\nat, tincidunt tristique, libero. Vivamus viverra fermentum felis. Donec\nnonummy pellentesque ante. Phasellus adipiscing semper elit. Proin\nfermentum massa ac quam. Sed diam turpis, molestie vitae, placerat a,\nmolestie nec, leo. Maecenas lacinia. Nam ipsum ligula, eleifend at,\naccumsan nec, suscipit a, ipsum. Morbi blandit ligula feugiat magna.\nNunc eleifend consequat lorem. Sed lacinia nulla vitae enim.\nPellentesque tincidunt purus vel magna. Integer non enim. Praesent\neuismod nunc eu purus. Donec bibendum quam in tellus. Nullam cursus\npulvinar lectus. Donec et mi. Nam vulputate metus eu enim. Vestibulum\npellentesque felis eu massa."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Headings: first level",
15
+ "text": "Quisque ullamcorper placerat ipsum. Cras nibh. Morbi vel justo vitae\nlacus tincidunt ultrices. Lorem ipsum dolor sit amet, consectetuer\nadipiscing elit. In hac habitasse platea dictumst. Integer tempus\nconvallis augue. Etiam facilisis. Nunc elementum fermentum wisi. Aenean\nplacerat. Ut imperdiet, enim sed gravida sollicitudin, felis odio\nplacerat quam, ac pulvinar elit purus eget enim. Nunc vitae tortor.\nProin tempus nibh sit amet nisl. Vivamus quis tortor vitae risus porta\nvehicula. See Section 2 ###reference_###."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Headings: second level",
21
+ "text": "Fusce mauris. Vestibulum luctus nibh at lectus. Sed bibendum, nulla a\nfaucibus semper, leo velit ultricies tellus, ac venenatis arcu wisi vel\nnisl. Vestibulum diam. Aliquam pellentesque, augue quis sagittis\nposuere, turpis lacus congue quam, in hendrerit risus eros eget felis.\nMaecenas eget erat in sapien mattis porttitor. Vestibulum porttitor.\nNulla facilisi. Sed a turpis eu lacus commodo facilisis. Morbi\nfringilla, wisi in dignissim interdum, justo lectus sagittis dui, et\nvehicula libero dui cursus dui. Mauris tempor ligula sed lacus. Duis\ncursus enim ut augue. Cras ac magna. Cras nulla. Nulla egestas.\nCurabitur a leo. Quisque egestas wisi eget nunc. Nam feugiat lacus vel\nest. Curabitur consectetuer.\nSed commodo posuere pede. Mauris ut est. Ut quis purus. Sed ac odio. Sed\nvehicula hendrerit sem. Duis non odio. Morbi ut dui. Sed accumsan risus\neget odio. In hac habitasse platea dictumst. Pellentesque non elit.\nFusce sed justo eu urna porta tincidunt. Mauris felis odio, sollicitudin\nsed, volutpat a, ornare ac, erat. Morbi quis dolor. Donec pellentesque,\nerat ac sagittis semper, nunc dui lobortis purus, quis congue purus\nmetus ultricies tellus. Proin et quam. Class aptent taciti sociosqu ad\nlitora torquent per conubia nostra, per inceptos hymenaeos. Praesent\nsapien turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus."
+ },
+ {
+ "section_id": "2.1.1",
+ "parent_section_id": "2.1",
+ "section_name": "2.1.1 Headings: third level",
+ "text": "Suspendisse vel felis. Ut lorem lorem, interdum eu, tincidunt sit amet,\nlaoreet vitae, arcu. Aenean faucibus pede eu ante. Praesent enim elit,\nrutrum at, molestie non, nonummy vel, nisl. Ut lectus eros, malesuada\nsit amet, fermentum eu, sodales cursus, magna. Donec eu purus. Quisque\nvehicula, urna sed ultricies auctor, pede lorem egestas dui, et\nconvallis elit erat sed nulla. Donec luctus. Curabitur et nunc. Aliquam\ndolor odio, commodo pretium, ultricies non, pharetra in, velit. Integer\narcu est, nonummy in, fermentum faucibus, egestas vel, odio.\nSed commodo posuere pede. Mauris ut est. Ut quis purus. Sed ac odio. Sed\nvehicula hendrerit sem. Duis non odio. Morbi ut dui. Sed accumsan risus\neget odio. In hac habitasse platea dictumst. Pellentesque non elit.\nFusce sed justo eu urna porta tincidunt. Mauris felis odio, sollicitudin\nsed, volutpat a, ornare ac, erat. Morbi quis dolor. Donec pellentesque,\nerat ac sagittis semper, nunc dui lobortis purus, quis congue purus\nmetus ultricies tellus. Proin et quam. Class aptent taciti sociosqu ad\nlitora torquent per conubia nostra, per inceptos hymenaeos. Praesent\nsapien turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Examples of citations, figures, tables, references",
+ "text": "Pellentesque habitant morbi tristique senectus et netus et malesuada\nfames ac turpis egestas. Donec odio elit, dictum in, hendrerit sit amet,\negestas sed, leo. Praesent feugiat sapien aliquet odio. Integer vitae\njusto. Aliquam vestibulum fringilla lorem. Sed neque lectus,\nconsectetuer at, consectetuer sed, eleifend ac, lectus. Nulla facilisi.\nPellentesque eget lectus. Proin eu metus. Sed porttitor. In hac\nhabitasse platea dictumst. Suspendisse eu lectus. Ut mi mi, lacinia sit\namet, placerat et, mollis vitae, dui. Sed ante tellus, tristique ut,\niaculis eu, malesuada ac, dui. Mauris nibh leo, facilisis non,\nadipiscing quis, ultrices a, dui. [kour2014real, kour2014fast] and see [hadash2018estimate].\nThe documentation for natbib may be found at\nhttp://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf ###reference_ib/natbib/natnotes.pdf###\nOf note is the command \\citet, which produces citations\nappropriate for use in inline text. For example,\nproduces\nHasselmo, et al. (1995) investigated\u2026\nhttps://www.ctan.org/pkg/booktabs ###reference_www.ctan.org/pkg/booktabs###"
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Figures",
+ "text": "Suspendisse vitae elit. Aliquam arcu neque, ornare in, ullamcorper quis,\ncommodo eu, libero. Fusce sagittis erat at erat tristique mollis.\nMaecenas sapien libero, molestie et, lobortis in, sodales eget, dui.\nMorbi ultrices rutrum lorem. Nam elementum ullamcorper leo. Morbi dui.\nAliquam sagittis. Nunc placerat. Pellentesque tristique sodales est.\nMaecenas imperdiet lacinia velit. Cras non urna. Morbi eros pede,\nsuscipit ac, varius vel, egestas non, eros. Praesent malesuada, diam id\npretium elementum, eros sem dictum tortor, vel consectetuer odio sem sed\nwisi. See Figure 1 ###reference_###. Here is how you add footnotes. 111Sample of the first footnote.\nSed feugiat. Cum sociis natoque penatibus et magnis dis parturient\nmontes, nascetur ridiculus mus. Ut pellentesque augue sed urna.\nVestibulum diam eros, fringilla et, consectetuer eu, nonummy id, sapien.\nNullam at lectus. In sagittis ultrices mauris. Curabitur malesuada erat\nsit amet massa. Fusce blandit. Aliquam erat volutpat. Aliquam euismod.\nAenean vel lectus. Nunc imperdiet justo nec dolor."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Tables",
+ "text": "Etiam euismod. Fusce facilisis lacinia dui. Suspendisse potenti. In mi\nerat, cursus id, nonummy sed, ullamcorper eget, sapien. Praesent\npretium, magna in eleifend egestas, pede pede pretium lorem, quis\nconsectetuer tortor sapien facilisis magna. Mauris quis magna varius\nnulla scelerisque imperdiet. Aliquam non quam. Aliquam porttitor quam a\nlacus. Praesent vel arcu ut tortor cursus volutpat. In vitae pede quis\ndiam bibendum placerat. Fusce elementum convallis neque. Sed dolor orci,\nscelerisque ac, dapibus nec, ultricies ut, mi. Duis nec dui quis leo\nsagittis commodo. See awesome Table 1 ###reference_###."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Lists",
+ "text": "Lorem ipsum dolor sit amet\nconsectetur adipiscing elit.\nAliquam dignissim blandit est, in dictum tortor gravida eget. In ac rutrum magna."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "Your conclusion here"
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Sample table title</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S3.T1.4.5.1.1\">Part</th>\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.4.5.1.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.2\">Name</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.3\">Description</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column\" id=\"S3.T1.1.1.1\">Size (m)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.2.2.2\">Dendrite</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.2.2.3\">Input terminal</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.2.2.1\">\n100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.2\">Axon</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.3\">Output terminal</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.1\">\n10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.4.4.2\">Soma</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.4.4.3\">Cell body</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.4.4.1\">up to \n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: Sample table title"
+ }
+ },
+ "image_paths": {},
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2401.15290v1"
+ }
20240127/2401.15291v1.json ADDED
@@ -0,0 +1,66 @@
+ {
+ "title": "Improved Construction of Robust Gray Codes",
+ "abstract": "A robust Gray code, formally introduced by (Lolck and Pagh, SODA 2024), is a Gray code that additionally has the property that, given a noisy version of the encoding of an integer, it is possible to reconstruct an estimate of that integer that is close to it with high probability. That work presented a transformation that turns a binary code of a given rate into a robust Gray code whose rate is a constant fraction of the original, where the constant can be at most 1/4. We improve upon their construction by presenting a transformation from a (linear) binary code to a robust Gray code with similar robustness guarantees, but whose rate can approach half that of the base code.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "In [1 ###reference_b1###], Lolck and Pagh introduce the notion of a robust Gray code. Informally, a robust Gray code has an encoding map that maps integers to bitstrings, with the following desiderata.\nshould be a Gray code.111The paper [1 ###reference_b1###] also gives a more general definition, where the code should have low sensitivity, meaning that is small; however, both their code and our code is a Gray code, so we specialize to that case (in which the sensitivity is ). That is, for any , .\nshould be \u201cnoise robust.\u201d Informally, this means that we should be able to approximately recover an integer given a noisy version of . Slightly more formally, should have a decoding map , so that when , the estimate should be close to with high probability.\nshould have high rate. The rate of should be as close to as possible.\nshould have efficient algorithms. Both and should have running time polynomial (ideally, near-linear) in .\nRobust Gray codes have applications in differential privacy; see [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] for more details on the connection. It is worth mentioning that there exist non-binary codes based on the Chinese Remainder Theorem [5 ###reference_b5###, 6 ###reference_b6###] that have nontrivial sensitivity, but in our work, we focus on binary codes.\nOur Contributions. 
In this paper, we improve upon the construction of [1 ###reference_b1###] by giving a construction of a robust Gray code with the same robustness guarantees, but better rate.\nMore precisely, for , [1 ###reference_b1###] give a general recipe for turning a binary error-correcting code with rate into a robust Gray code with rate , and with the following robustness guarantee:\nwhere the probability is over the noise vector , and is the failure probability of the code on the binary symmetric channel with parameter .\nOur main result is a similar transformation that turns a (linear) binary code with good performance on the binary symmetric channel into a robust Gray code . We obtain a similar robustness guarantee as (1 ###reference_###) (see Theorem 1 ###reference_orem1### for the precise statement), but with better rate. Concretely, if the original code has rate , the rate of the robust Gray code from [1 ###reference_b1###] is proven to be , where the constant inside the approaches 1/4 when has sublinear distance; this comes from the fact that a codeword in their final construction involves four codewords from . In contrast, under the same conditions, our robust Gray code has rate approaching half the rate of the base code; this is because our construction involves only two codewords from . (See Observation 2 ###reference_ervation2### for the formal statement). 
Moreover, if the encoding and decoding algorithms for are efficient, then so are the encoding and decoding algorithms for our construction ; concretely, the overhead on top of the encoding and decoding algorithms for is (see Lemma 7 ###reference_ma7### for the formal statement).\nAs a result, when instantiated with, say, a constant-rate Reed-Muller code or a polar code (both of which have sublinear distance and good performance on the BSC() (see, e.g., [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###])), our construction gives efficient robust Gray codes with a rate about two times larger than that of previous work, approaching half the rate of the base code.\nMain Idea. The idea of our transformation is quite simple, and follows the same high-level structure as [1 ###reference_b1###]. We begin with our base code , and use it to construct an intermediate code (with an appropriate ordering). Then we add new codewords to to complete it to a Gray code. For example, if are two consecutive codewords in , then we will insert codewords in between them, iteratively flipping bits to move from to .\nThe main difference between our construction and that of previous work is how we build and order . First, we use a standard Gray code to construct an ordering of the codewords in . Then, we build as follows. Let be the \u2019th codeword in . Then the \u2019th codeword in is given by\nwhere is a short string that is all zeros if is even and all ones otherwise, and denotes concatenation. Then we form by interpolating as described above.\nOur decoding algorithm ends up being rather complicated, but the idea is simple. Suppose that for a codeword , we see a corrupted version , where is a noise vector. As described above, is made up of a prefix from and a suffix from , for some . Let be the index where \u201ccrosses over\u201d from to . 
Notice that, as this crossover point can only be in one place, at least one of the two codewords of appearing in will be complete, and equal to either or . Thus, if we could identify where the crossover point was, then we could use \u2019s decoder to decode whichever the complete -codeword was to identify ; and then use our knowledge of where is to complete the decoding. The simple observation behind our construction is that, because the strings (which are either all zeros or all ones) flip with the parity of , we can tell (approximately) where was! Indeed, these strings will be all zeros before and all ones after , or vice versa. Of course, some noise will be added, but provided that the length of the strings are long enough, we will still be able to approximately locate with high probability.\nHowever, there are several challenges to implementing this simple idea. For example, given and , how do we efficiently compute ? (Here is where the fact that we ordered carefully comes in; it\u2019s not trivial because the number of codewords of inserted between and depends on , so naively adding up the number of codewords of that come before and then adding would take exponential time.) Or, what happens when the crossover point is very close to the end of ? (Here, it might be the case that we mis-identify ; but we show that this does not matter, with high probability, because our final estimate will still be close to with high probability).\nIn the rest of the paper, we show how to deal with these and other challenges."
+ },
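The crossover idea described in the introduction above can be illustrated with a toy sketch. This is not the paper's decoder; the marker length `t`, noise rate `p`, and the three-marker layout are made-up values for illustration. Since interpolation flips bits left to right, marker blocks before the crossover already carry the new parity and blocks after it still carry the old one, so a majority vote per block coarsely brackets the crossover.

```python
import random

def majority(bits):
    # Majority vote on a list of bits (ties broken toward 1).
    return 1 if 2 * sum(bits) >= len(bits) else 0

random.seed(1)
t, p = 51, 0.1  # marker length and BSC flip probability (illustrative values)

# Old marker parity 0, new parity 1, crossover inside the middle block:
blocks = [[1] * t,                              # fully flipped already
          [1] * (t // 2) + [0] * (t - t // 2),  # mid-flip
          [0] * t]                              # still has old parity
noisy = [[b ^ (random.random() < p) for b in blk] for blk in blocks]
votes = [majority(blk) for blk in noisy]
# votes reads 1 ... 0: the crossover lies between the last block voting 1
# and the first block voting 0.
```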
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Preliminaries",
+ "text": "We begin by setting notation. Throughout, we work with linear codes over , so all arithmetic between codewords is modulo 2.\nFor , let denote the Hamming distance between and . We use to denote the Hamming weight of a vector .\nFor a code , the minimum distance of the code is given by .\nFor two strings and , we use to denote the concatenation of and . For a string and for , we use to denote the prefix of the string ending at (and including) index . Analogously, we use to denote the suffix of starting at (and including) index .\nFor an integer , we use to denote the set .\nFor , let be the majority function on bits. (In the case that is even and a string has an equal number of zeros and ones, is defined to be a randomized function that outputs or each with probability .) We use to denote the Bernoulli- distribution on , so if , then is with probability and with probability .\nNext we define Binary Reflected Codes, a classical Gray code ([12 ###reference_b12###]; see also, e.g., [13 ###reference_b13###]); we will use these to define our ordering on .\nLet be a positive integer. The Binary Reflected Code (BRC) is a map defined recursively as follows.\nFor , and .\nFor , for any ,\nIf , then\nIf , then\nIt is not hard to see that for any two successive integers and , the encoded values and differ in exactly one bit.\nWe will need one more building-block, the Unary code.\nThe Unary code is defined as the image of the encoding map given by\n\nThe decoding map is given by\nNext, we define the failure probability of a code .\nLet be a code with message length and encoding and decoding maps and respectively.\nThe probability of failure of is\nwhere the probability is over a noise vector with ."
+ },
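The recursive BRC of Definition 1 (whose equations are elided above) matches the standard closed form of the binary reflected code, g = x XOR (x >> 1). A minimal sketch checking the Gray property:

```python
def brc(x, k):
    """k-bit Binary Reflected Code of x, via the standard closed form
    x XOR (x >> 1); bits are listed most-significant first."""
    g = x ^ (x >> 1)
    return tuple((g >> (k - 1 - j)) & 1 for j in range(k))

def hamming(a, b):
    # Hamming distance between two equal-length bit tuples.
    return sum(u != v for u, v in zip(a, b))
```

For any two successive integers, the encodings differ in exactly one bit, which is the property Definition 4 later uses to order the base code.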
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Construction",
+ "text": "We recall the high-level overview of our construction from the introduction:\nTo construct we will start with a base code where , which we will order in a particular way (Definition 4 ###reference_inition4###). Then we construct an intermediate code by transforming the codewords of (Definition 5 ###reference_inition5###); the codewords of inherit an order of .\nFinally, we create final code by adding new codewords that \u201cinterpolate\u201d between the codewords of so that it satisfies the Gray code condition (Definition 6 ###reference_inition6###).\nWe discuss each of these steps in the subsequent subsections."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Base Code",
+ "text": "Given a base code , we define an ordering on the elements of as follows.\n[Ordering on ]\nLet be a linear code with block length and dimension . Let be a generator matrix for , and let denote the -th row of .\nGiven , define to be the unique integer so that .222As noted after Definition 1 ###reference_inition1###, and differ in only one bit, so is well-defined. Let . Then, for all , the -th codeword of is defined by\nOur next lemma establishes that indeed this ordering hits all of the codewords.\nLet be a linear code, and consider the ordering defined in Definition 4 ###reference_inition4###.\nFor every , there is a unique index such that .\nObserve that, by construction, we have\nSince is a bijection and is full rank, this implies that each codeword in is uniquely represented as some for \n\u220e"
+ },
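The ordering of Definition 4 amounts to encoding the Gray-code sequence of messages, so consecutive codewords differ by exactly one row of the generator matrix. A minimal sketch with a hypothetical [4,2] generator matrix (the matrix is made up for illustration, not taken from the paper):

```python
def gray(i, k):
    """k-bit binary reflected Gray code of i, MSB first."""
    g = i ^ (i >> 1)
    return tuple((g >> (k - 1 - j)) & 1 for j in range(k))

def encode(msg, G):
    """Encode a message with generator matrix G over GF(2)."""
    out = [0] * len(G[0])
    for bit, row in zip(msg, G):
        if bit:
            out = [a ^ b for a, b in zip(out, row)]
    return tuple(out)

# Hypothetical generator matrix of a small [4,2] linear code.
G = [(1, 0, 1, 1),
     (0, 1, 0, 1)]

# The i-th codeword in the ordering is encode(gray(i), G); consecutive
# codewords then differ by a single row of G.
order = [encode(gray(i, 2), G) for i in range(4)]
```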
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Intermediate Code",
+ "text": "Next, we describe how to generate our intermediate code .\nLet be a linear code of dimension . Let denote the ordering of codewords in as per Definition 4 ###reference_inition4###. Let . The intermediate code , along with its ordering, is defined as follows. For each , define by the equation\nThen, is the subset of defined by , where the code is ordered as ."
+ },
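Since the defining equation of Definition 5 is elided above, here is a hedged sketch of one plausible reading: each intermediate codeword interleaves two copies of the base codeword with three parity markers (all zeros for even indices, all ones for odd ones). The exact chunk layout is specified by the paper's equation; this particular layout is an assumption for illustration.

```python
def marker(i, t):
    # Marker block: all zeros if i is even, all ones if i is odd.
    return [i % 2] * t

def intermediate(c, i, t):
    """Assumed layout: marker | codeword | marker | codeword | marker.
    With marker length t small relative to len(c), the rate of the
    resulting code approaches half that of the base code."""
    u = marker(i, t)
    return u + list(c) + u + list(c) + u
```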
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Final Code",
+ "text": "To create our robust Gray code , given any two consecutive codewords in , we inject extra codewords between them to create , as follows.\nLet be a code defined as in Definition 5 ###reference_inition5###. For each , define , and let . For and , let be the -th index where codewords and differ.\nDefine the zero\u2019th codeword of as .\nFix . If for some , we define by .\nOn the other hand, if\n for some , then we define as\nFinally, define by along with the encoding map given by .\nWe will also define to be the vector of all indices in which and differ, in order, except for the last one.333The reason we don\u2019t include the last one is because once the last differing bit has been flipped, will lie in , not .\nIt will frequently be useful to be able to locate within a block . To that end, we introduce the following notation.\nLet . Let be such that . Then we will use the notation to denote . That is, is the index of in the block .\nNote that, in this notation, when , the last bit that has been flipped to arrive at in the ordering of (that is, the \u201ccrossover point\u201d alluded to in the introduction) is \nWe make a few useful observations about Definition 6 ###reference_inition6###. The first two follow immediately from the definition.\nis a Gray code. That is,\nFor any , we have that .\nSuppose that has rate and distance . Then the code constructed as in Definition 6 ###reference_inition6### has rate that approaches as .\nLet , and suppose that for some . Then\nwhere is the unary code of length . Above, denotes the restriction of the vector to the indices that appear in the vector .\nBy definition, contains the indices on which and differ, and also by definition, by the time we have reached , the first of these indices have been flipped from agreeing with to agreeing with . 
Thus, if we add and (mod 2), we will get on the first indices and on the rest.\n\u220e\nBefore we move on, we show Definition 6 ###reference_inition6### actually defines an injective map.\nLet be a code with encoding map as defined in Definition 6 ###reference_inition6###. Then is injective.\nAssume, for the sake of contradiction, that there are two distinct such that . Without loss of generality assume that . There are three scenarios possible.\nCase 1: Both and are in the interval . Then we claim that . The reason is that and ; but by definition of , . Thus, .\nCase 2: and .\nThen is an interpolation of and , and is an interpolation of and . Notice that and ,\nas the last index of the codewords and has not been flipped yet. However, as and have different parity, , which implies .\nCase 3: and where . In this scenario, .\nSuppose that neither nor are in .\nThen and leading to hence . The same holds if neither are in , repeating the argument with .\nThe final sub-case is that and or vice versa. If this occurs (suppose without loss of generality, it is the first one, not the \u201cvice versa\u201d case), then according to Lemma 4 ###reference_ma4###, , however for , . This implies that either or , which implies that , as desired.\n\u220e"
+ },
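The interpolation step of Definition 6 can be sketched as follows: flip the differing indices one at a time, in order, stopping before the last flip (which would land on the next codeword of the intermediate code).

```python
def interpolate(u, v):
    """Gray path from u toward v: flip differing indices left to right,
    one per step; u is included, v itself is excluded."""
    path = [tuple(u)]
    cur = list(u)
    diff = [i for i in range(len(u)) if u[i] != v[i]]
    for i in diff[:-1]:   # the final flip would yield v itself
        cur[i] = v[i]
        path.append(tuple(cur))
    return path

w0 = (0, 0, 0, 0, 0)
w1 = (1, 1, 0, 1, 0)
path = interpolate(w0, w1)
```

Appending w1 gives a sequence in which consecutive words differ in exactly one bit, which is the Gray-code property.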
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Decoding Algorithm and Analysis",
+ "text": "In this section, we define our decoding algorithm and analyze it.\nWe begin with some notation for the different parts of the codewords .\nFor a string , we use to denote the substring . With this notation, for any , define the following substrings:\n\n\n\n\n.\nNotice that if , then and are in locations corresponding to the codewords of that appear in codewords of , while , and are in locations corresponding to the and strings.\nBefore we formally state the algorithm (Algorithm 2 ###reference_### below), we prove a few lemmas to motivate its structure.\nOur first lemma formalizes the intuition in the introduction that at most one of the \u201cchunks\u201d in each codeword is broken up by the crossover point .\nFix . Suppose that is such that , so\n can be written as as above.\nThen at most one of the substrings in that is not equal to the corresponding substring in or .\nFirst, suppose that . Then in that case and all of the substrings in are equal to their corresponding substring. Otherwise, . In that case, .\nThis means that (the \u201ccrossover point\u201d for ) is defined, and indexes a position in , and in particular in one of the sub-strings in . Then other substrings strictly to the left of are equal to their corresponding substring in ; and the ones strictly to the right are equal to the corresponding substring in .\n\u220e\nUsing the language of Lemma 3 ###reference_ma3###, we say that a substring in that is equal to its corresponding substring in is a full chunk.\nThus, Lemma 3 ###reference_ma3### implies that there are at least four full chunks in any .\nNotice that it is possible that a substring is in but is not a full chunk.\nWe say that all full chunks are decoded correctly if, for full chunk of , when we run the corresponding decoder, we get the right answer. 
That is, if is a full chunk, then if we were to run on we would obtain , and similarly for ; and if is a full chunk, and we were to run on , we would obtain , and similarly for and .\nNext, we show that if the \u201ccrossover point\u201d does not point to one of chunks , or , then there are at least two of them that differ.\nLet be a code defined as in Definition 6 ###reference_inition6###. Fix any and let be such that . Suppose that ; that is, indexes a position in for some . Then .\nWithout loss of generality, suppose that .\nBy definition, we have\nIn particular, since the \u201ccut-off\u201d points to a position within , we have that both and are full chunks, and further agrees with , while agrees with . Since and have different parities, either and , or the other way around; in either case, they are different.\nThe same argument holds when .\n\u220e\nFinally, we break things up into three cases, which will be reflected in our algorithm. In each case, we can use the pattern of the chunks to get an estimate for or , and bound where the crossover point will be.\nLet and let be such that . Let where . Let be a received input. Then define for and for . Assume that all full chunks are decoded correctly by their corresponding decoder.\nThen the following hold.\nIf , then and .\nIf , then and .\nIf , then and .\nMoreover, if , then ; and otherwise they are equal to .\nWe address each case individually.\nIf then we claim that . Assume otherwise. If , then . This means that , and are full chunks. Given the assumption that all the full chunks are decoded correctly, but that contradicts our assumption for this case; so we conclude that . Thus, . But then then and are full chunks, and according to Lemma 4 ###reference_ma4###, . Again using the assumption of correct decoding of all full chunks, this implies that , which again contradicts our assumption for this case. This establishes the claim that .\nFinally, the fact that implies that , and is a full chunk. 
Using the assumption of correct decoding of all full chunks, we get that\nIf , then the conclusion follows by an argument nearly identical to case 1.\nIf , then we claim that . Assume otherwise, then and and they are full chunks. Now as and do not have the same parity, . As a result, if all full chunks are decoded correctly, we have that , which contradicts our assumption in this case. This proves our claim that .\nIf then ; if then ; and in either case both are full chunks. Using the assumption that all full chunks are decoded correctly, we see that , as desired.\n\u220e"
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Decoding Algorithm",
+ "text": "Before we state our main algorithm (Algorithm 2 ###reference_### below), we include a helper algorithm, compute-r (Algorithm 1 ###reference_###). This algorithm takes an index and returns . Note that this is not trivial to do efficiently: If we wanted to compute directly from the definition, that would require computing or storing for all and adding them up, which may take time . Instead, we do something much faster.\nThe Algorithm compute-r (Algorithm 1 ###reference_###) correctly computes .\nRecall that . Consider a fixed difference . This is precisely\nwhere is the unique index so that : indeed, by Definition 4 ###reference_inition4###, , and from that (4 ###reference_###) follows from the definition of (Definition 5 ###reference_inition5###). Thus, in order to compute\nit suffices to count how often each index shows up as some in that sum. This is precisely by the definition of .\n\u220e\nOur final algorithm is given in Algorithm 2 ###reference_###. It is organized into the three cases of Lemma 5 ###reference_ma5###. To help the reader, we have included comments saying what each estimate \u201cshould\u201d be. Here, \u201cshould\u201d is under the assumption that each full chunk is decoded correctly."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Analysis",
+ "text": "Next, we analyze the correctness and running time of Algorithm 2 ###reference_###.\nWe begin with the running time.\nLet be a constant rate code.\nSuppose that runs in time , and runs in time , and that .\nLet be the generator matrix of , with rows for . Suppose that can be computed in time . Then the running time of is\nand\nthe running time of is\nWe note that if is, say, a Reed-Muller code , then indeed, given , can be computed in time : if the binary expansion of has weight , then the corresponding row has weight .\nFor codes that may not have closed-form expressions for their generator matrices, we can pre-compute each (in total time ) and store them to allow for lookup time.444\nIf a lookup table is not desirable and the cannot otherwise be computed on the fly, then our algorithm still works, and runs in time at most\n\nwhere we recall that and .\nAs we are assuming that , finding the also takes time and is negligible. Among the other steps, the only non-trivial ones are running the encoding and decoding maps for (each of which happens times); running compute-r (which takes time if can be computed in time ); and running and , which can be done in time .\n\u220e\nNext, we move on to the analysis of the correctness and failure probability of Algorithm 2 ###reference_###. The final statement is Theorem 1 ###reference_orem1### below, but we will need several lemmas first. We begin by showing that, if all full chunks are decoded correctly, then is equal to on the portion of indices where the crossover point is guaranteed not to be.\nLet be the noisy input to , where and , and let . Assume that all full chunks are decoded correctly in Lines 3 ###reference_3### to 8 ###reference_8###.\nDefine as the set of indices that can be equal to depending on the pattern of according to Lemma 5 ###reference_ma5###.665That is, if or , then , and so on. Then .\nFirst notice that the indices in are indices corresponding to a subset . As then all chunks in are full chunks. 
Given that full chunks are decoded correctly, then we know that for , and for we have that . As a result the decoder fixes the values of these indices and only estimates the values of the rest of the bits in lines 18 ###reference_18###, 29 ###reference_29###, 38 ###reference_38###, and 39 ###reference_39###. Thus, .\n\u220e\nFor a let be such that and . Let be the noisy input for and be the estimate given by . Assuming that all full chunks are decoded correctly, then , Moreover, .\nWe first claim that either Case 1 or Case 2 of Lemma 5 ###reference_ma5### has occurred. Indeed,\nthe fact that implies that and are both full chunks, and our assumption that each full chunk is correctly decoded implies that while . As and have different parities, , which implies that we are in Case 1 or Case 2 of Lemma 5 ###reference_ma5###.\nNext, we establish that the estimate returned by the algorithm in Cases 1 or 2 satisfies .\nSuppose without loss of generality that we are in Case 1, so or (Case 2 is symmetric).\nWe first go through Case 1 of Algorithm 2 ###reference_###, which starts at Line 10 ###reference_10###. Since we are in Case 1 of Lemma 5 ###reference_ma5###, that lemma implies that and that .\nThus, in the first case in Algorithm 2 ###reference_###, under the assumption that all full chunks are correctly decoded, we have , , , , and .\nAt the end of this case, the final estimate is set to be\nBy the above, we have , so by Lemma 6 ###reference_ma6###, .\nNote also that\n.\nPlugging in to our expression for and subtracting from both sides, we have\nNow, recall that in Algorithm 2 ###reference_###, we have ,\nwhere is the set of appearing in so that .\nIt thus follows from the definition that\nfrom the definition of .\nPlugging this into (5 ###reference_###), we see that\n,\nwhich implies that\n,\nas desired.\nFinally, we argue that . 
Indeed, we may write\nBy Observation 3 ###reference_ervation3###, ; and by that observation along with the fact that , we also have \nThus,\nwhich finishes the proof of the lemma.\n\u220e\nFor , let be such that . Further let be the noisy input and be the estimate obtained from . Assuming that all full chunks are decoded correctly, then either\nand ; or\nand and .\nUnlike in Lemma 9 ###reference_ma9###, it is now the case that any of the three cases in Lemma 5 ###reference_ma5### could occur. We first consider Cases 1 and 2, so in line 7 ###reference_7###. The proof is quite similar to that of Lemma 9 ###reference_ma9###, so we just sketch it here.\nSuppose that we are in Case 1 of Lemma 5 ###reference_ma5### (Case 2 is symmetric). In Case 1, we claim that . Indeed, since all full chunks are decoded correctly, as we argued in the proof of Lemma 9 ###reference_ma9###, the values are computed correctly, meaning that they match the values that the comments in Algorithm 2 ###reference_### say they should be. In particular, as before we have\nThis establishes that , which proves the claim; the rest of this case follows identically to the proof of Lemma 9 ###reference_ma9###.\nNext we consider Case 3, so . In this case, Lemma 5 ###reference_ma5### (and the assumption that all full chunks are correctly decoded)\nsays that , which\nimplies that the value of computed in line 37 ###reference_37### is either or .\nAlgorithm 2 ###reference_### then computes two estimates and , which are meant to be an estimate of in the two cases the and ; eventually it picks whichever produces a codeword closer to the received word .\nThere are two main sub-cases. In the first, the algorithm guesses correctly, meaning that either and ; or that and .\nIn the other case, the algorithm guesses incorrectly, meaning that and , or and . We consider each of these in turn.\nFirst suppose that the algorithm guesses correctly. 
This case is again quite similar to that of Lemma 9 ###reference_ma9###, and we sketch the proof. Suppose that and ; the other way of \u201cguessing correctly\u201d leads to a similar argument.\nNow, we claim that . To see this, notice that in this case, we have\nin Line 41 ###reference_41###. Since , given our assumption that all full chunks are correctly decoded, it is not hard to see that . Lemma 6 ###reference_ma6### then implies that , so\nThis shows that . Once we have this, the rest of the argument follows as in Lemma 9 ###reference_ma9###.\nNow we move on to the second sub-case of Case 3, when the algorithm guesses incorrectly. Unlike the previous cases we have considered, this is different from Lemma 9 ###reference_ma9###, because may end up outside of . Without loss of generality, suppose that but that has been set to .\n(The other case is similar.)\nIn this sub-case, the following hold.\n\n\n\nWe begin with B.\nFirst, since , this implies that , so Lemma 5 ###reference_ma5### (Case 3) implies that . Then since , we have\nThis proves B.\nNext we prove C. This follows from the computation of in Algorithm 2 ###reference_###, along with the assumption that all full chunks are decoded correctly. Indeed, we have\nSince , we are in the case where\n, , , and the above implies that\nThe fact that proves the inequality in part C. The equality in part C follows since, by the definition of , we have\nFinally, we move on to A. The fact that follows immediately from C. The fact that follows from the fact that, by a computation similar to that above, we have\nwhich is less than as .\n\u220e\nGiven the claim, we can finish the proof of the lemma in this sub-case. First, we observe that ; indeed this follows directly from B and C in the claim.\nFinally, we show that . To see this, we first write\nWe claim that and differ on only the indices in . This follows from the fact that , which we saw in the proof of Claim 1 ###reference_im1### (part B). 
Next, we claim that and differ only on the indices in . Indeed, part C of Claim 1 ###reference_im1### implies that . Since (part A of Claim 1 ###reference_im1###), this means that is in the last chunk (the chunk) of , which proves the claim.\nThus, we have that\nas the two parts differ on disjoint sets. Moreover, since , we have , since and differ on all of the first bits, so and differ on all of the first bits. Similarly, . Putting everything together, we conclude that\nSince in this case (as , while , this proves the last component of the lemma.\n\u220e\nThe following lemma is included in [1 ###reference_b1###]. We include a short proof for completeness.\nLet be a linear code with message length and minimum distance . Further let and . Then\nLet be the maximum likelihood decoder for . That is, given , . If there are multiple codewords that attain the minimum above, then chooses randomly among them. Then\nFix a message , and let . Let be such that , where \nLet be the set on which and disagree.\nLet be a noise vector, and define , the restriction of to the positions indexed by . Observe that as in the lemma statement.\nSuppose that . Then\n\nOn the other hand, if , then with probability at least ,\nTogether, we conclude that\n\u220e\nLet be a vector in , for . Let be the repetition code of length , so that . Then for any ,\nwhere we recall that denotes the majority function.\nAbove, the randomness is over both the choice of and any randomness that uses to break ties.\nFix . Suppose that . Then either , or else and the random choice of was incorrect, which happens with probability . Thus,\nwhich by Lemma 11 ###reference_ma11### is at most .\n\u220e\nBefore we prove our main theorem establishing correctness of with high probability (Theorem 1 ###reference_orem1###), we need one more concentration bound. We use the following from [1 ###reference_b1###].\nLet . Let for . Then\nLet , and let .\nSuppose that all full chunks are decoded correctly. 
Then\nBy the analysis in the proof of Lemmas 9 ###reference_ma9### and 10 ###reference_ma10###, if all full chunks are decoded correctly, then all the quantities computed in Algorithm 2 ###reference_### before (in Cases 1 and 2), or before (in Case 3) are what they \u201cshould\u201d be. That is, in Cases 1 and 2, all of the quantities computed before Lines 18 ###reference_18### and 29 ###reference_29###, respectively are what the comments claim they should be. In Case 3, all of the comments computed before Line 39 ###reference_39###) are what the comments claim they should be.\nThus, any error in comes from the estimates of (in Cases 1 or 2) or and (in Case 3).\nWe first work out Case 1.\nFirst, we observe that it suffices to look only on the set defined as in Algorithm 2 ###reference_###. That is, it suffices to show that\nIndeed, and differ only on .\nNext, recall from Observation 3 ###reference_ervation3### that ; that is, restricted to the elements in , has ones followed by all zeros. Since in Case 1 (as shown in the proof of Lemma 9 ###reference_ma9###), is ones followed by zeros.\nThus,\nis a vector of ones followed by zeros, plus the noise from .\nTherefore,\nis the decoding of .\nFor notational convenience, in the following we will introduce the notation . With this notation (and the fact that returns the closest codeword in ), we conclude that\nWe claim that in fact the first of the terms in (8 ###reference_###) is equal to and the second is equal to , which will immediately prove the statement in Case 1.\nTo see the first part of the claim, first\nnotice that in Algorithm 2 ###reference_### (using the fact that all the estimates are what they \u201cshould\u201d be, as above), we have\n, so \nThen\nAbove, we used the fact that on , and differ precisely on the indices between and .\nThis proves the first part of the claim. 
For the next part, we have that\nwhich is the right hand side of (8 ###reference_###).\nThis finishes proving the claim, and the lemma in this case.\nCase 2 is similar to Case 1 and we omit the analysis.\nIn Case 3, we have two candidates, and . By an analysis similar to the above, at least one of the following statements holds:\nand\n\nor\nand\nwhere in both cases has i.i.d. coordinates.\nThus, if the first is the case, we have that\nand in the second case we have (again with an analysis similar to that above) that\nThus, by the definition of in this case (Line 45 ###reference_45###), we have\nas desired.\n\u220e\nFix .\nLet be a linear code. Let be defined as in Definition 6 ###reference_inition6### from . Let Then\nwhere is the block error probability of , and and are constants given by and .\nLet be the event that at least one of the full chunks in is decoded incorrectly.\nLet . Then\nLet be such that .\nWe will bound each of the two terms above individually.\nWe begin with . There are two scenarios, depending on whether is safely in the middle of the interval (that is, in the middle three chunks), or if it is near the edges (the outermost chunks). In either case, if all full chunks are decoded correctly, then . Indeed, in the first case this follows from\nLemma 9 ###reference_ma9###, while in the second case it follows from Lemma 10 ###reference_ma10###.\nThus, we see that in either case, the probability of returning a particular is\nAbove, the first line follows from Lemma 14 ###reference_ma14###.\nThe second line follows from Lemma 13 ###reference_ma13###, while the last line follows from the fact that as noted above.\nThus, for any integer that might be equal to , we have\nwhere the factor of two comes from the fact that might be either or .\nThus,\nNow we turn our attention to the second term, , the probability that at least one of the full chunks is decoded incorrectly.\nThe full chunks for are codewords in and are decoded using . 
Thus, the probability that either of these chunks (assuming they are full chunks) is decoded incorrectly is at most twice the failure probability .\nOn the other hand, the full chunks for are repetition codes of length . Lemma 12 ###reference_ma12### then implies that the probability that any one of these (assuming it is a full chunk) is decoded incorrectly is at most , so the probability that any of them fails is at most . Altogether, the probability that at least one full chunk is decoded incorrectly is at most\n\nSumming both terms up, we see that\nwhich completes the proof of the theorem.\n\u220e"
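The majority decoding of the repetition-code chunks referred to above can be sketched in plain Python (a toy illustration; `majority_decode` is our own name, and ties on even-length blocks default to 0):

```python
def majority_decode(bits):
    """Decode one received block of a length-n repetition code by
    majority vote: output 1 iff strictly more than half the bits are 1."""
    return 1 if 2 * sum(bits) > len(bits) else 0

# A single flipped bit in a length-3 repetition block is corrected.
assert majority_decode([1, 1, 0]) == 1
assert majority_decode([0, 1, 0, 0, 1]) == 0
```

With crossover probability p per bit, such a block is decoded incorrectly only when at least half the bits flip, which is the event the lemma's failure-probability bound controls.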
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {},
63
+ "validation": true,
64
+ "references": [],
65
+ "url": "http://arxiv.org/html/2401.15291v1"
66
+ }
20240127/2401.15293v1.json ADDED
@@ -0,0 +1,268 @@
1
+ {
2
+ "title": "SkipViT: Speeding Up Vision Transformers with a Token-Level Skip Connection",
3
+ "abstract": "Vision transformers are known to be more computationally and data-intensive than CNN models. These transformer models such as ViT, require all the input image tokens to learn the relationship among them. However, many of these tokens are not informative and may contain irrelevant information such as unrelated background or unimportant scenery. These tokens are overlooked by the multi-head self-attention (MHSA), resulting in many redundant and unnecessary computations in MHSA and the feed-forward network (FFN). In this work, we propose a method to optimize the amount of unnecessary interactions between unimportant tokens by separating and sending them through a different low-cost computational path. Our method does not add any parameters to the ViT model and aims to find the best trade-off between training throughput and achieving a 0% loss in the Top-1 accuracy of the final model. Our experimental results on training ViT-small from scratch show that SkipViT is capable of effectively dropping 55% of the tokens while gaining 13.23% training throughput and maintaining classification accuracy at the level of the baseline model on Huawei Ascend910A.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In recent years, Transformer architectures have not only advanced rapidly but have also begun to dominate a myriad of fields (Vaswani et al. 2023 ###reference_b21###). They have demonstrated state-of-the-art results in tasks such as computer vision (Dosovitskiy et al. 2020 ###reference_b8###; Hajimolahoseini, Kumar, and Gordon 2023 ###reference_b10###), natural language processing (Devlin et al. 2018 ###reference_b7###), sequence classification (Ataiefard and Hemmati 2023 ###reference_b3###), trading (Ataiefard et al. 2022 ###reference_b4###), etc. This class of deep models is constructed by stacking multiple transformer blocks, which employ the attention mechanism to extract the relation between input values, also called input tokens, and to generate output tokens by computing the weighted average of input tokens. To build powerful transformers, parallel attention modules, also called multi-head attentions, are used to capture different relations among tokens. Transformer-based models have achieved more accurate results compared to their competitors such as convolutional neural networks since they can scale up more effectively when dealing with large datasets. This substantial increase in the number of parameters resulted in several issues such as long training times.\n\n\nDuring recent years, several studies have been conducted to speed up the training process. At the layer level, a key approach is layer freezing, which omits updating frozen parameters and results in training acceleration. For instance, Low-Rank Adaptation (LoRA) and its variations (Hu et al. 2021 ###reference_b12###; Ahmed et al. 2023 ###reference_b1###; Hajimolahoseini et al. 2021 ###reference_b11###) freeze pre-trained weights and insert two lightweight low-rank matrices into each frozen layer to gain speedup during fine-tuning. (Dettmers et al. 
2023 ###reference_b6###) extended this approach, called Quantized Low-Rank Adaptation (QLoRA), by quantizing the weights to achieve more efficiency. The layer freezing approach is mainly applicable during fine-tuning, when a checkpoint of the pre-trained weights is available.\n\n\nAt the attention module level, a novel approach to reducing complexity is to share and merge attention matrices. (Shazeer 2019 ###reference_b18###) proposed Multi Query Attention (MQA), which uses only a single key-value head across all attention heads. Although MQA could achieve fast inference decoding, its harsh merging resulted in a lower-quality model. (Ainslie et al. 2023 ###reference_b2###) extended the idea by introducing GQA, which groups heads and shares a single key head and value head per subgroup. The MQA and GQA methods aim at faster inference of large language models. Recent research has suggested that optimizing the number of heads in a transformer architecture can further improve the performance of transformer models (Javadi et al. 2023 ###reference_b13###).\n\n\nAt the token level, several methods have been introduced to reduce the sequence length by detecting and dropping unimportant tokens. For example, (Rao et al. 2023 ###reference_b17###) proposed DynamicViT, which uses a trainable prediction module to progressively find and mask the less informative tokens. (Yao et al. 2022 ###reference_b22###) proposed Random-LTD, which randomly drops tokens in each transformer layer except the first and last layers. The random selection in this approach can cause loss of data from important tokens; therefore, they are returned after each layer. That work benefits from some customized implementations to achieve training speedup. (Liang et al. 2022 ###reference_b15###) presented EVIT, which uses an importance score-based metric, i.e., the largest attention scores from class tokens, to keep and drop tokens. 
EVIT computes the average attentiveness value across all heads, keeps the K tokens with the largest attention values, and fuses the inattentive tokens into a single token.\n\n\nWhile an attention-based metric can successfully identify informative tokens, based on our observation, replacing the discarded tokens with a single fused token containing their weighted average did not show a significant impact on preserving accuracy. In particular, we tried EVIT for training ViT-small without any teacher-student method on a Huawei Ascend device but could not gain training speedup while preserving accuracy. To overcome the computation cost of EVIT and achieve training speedup, we increased the drop ratio, which resulted in a dramatic drop in accuracy.\n\n\nIn this paper, we propose SkipViT, a fast training method for ViT that uses an attention score-based approach. This method reduces the number of tokens by dropping less important ones from an image and leverages a residual connection. This connection enables the ViT to reutilize the dropped tokens in later layers to compensate for the loss of image data. Finally, we discuss the efficiency of our method and the experimental results that led us to the final architecture of SkipViT.\n###table_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Method",
15
+ "text": "Our method builds on top of the same ViT architecture with 12 transformer layers and aims to improve its performance while keeping the same accuracy. First, we look into the multi-head attention mechanism and then describe how the proposed method is applied to the attention components."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Attention Score Overview",
21
+ "text": "Multi-Head Attention (MHA), a crucial component in Transformer models (Vaswani et al. 2017 ###reference_b20###), is designed to capture diverse aspects of the input data. Each MHA unit consists of multiple attention heads, denoted by , with each head focusing on learning different features. The token inputs to the attention layer are transformed into three distinct matrices: queries , keys , and values . These transformations are achieved through linear projections.\nThe attention mechanism in each head is computed as:\nwhere represents the dimensionality of the key (and query) vectors.\nHere, is the dot-product of queries and keys, and is the scaling factor used to avoid large values in the dot-product attention. In this paper, we refer to the resulting matrix of as the attention scores."
22
+ },
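The per-head score computation described in this section can be sketched in plain Python (a minimal single-head sketch; the function name and toy dimensions are ours, not from the paper):

```python
import math

def attention_scores(Q, K):
    """softmax(Q K^T / sqrt(d_k)) for one head; row i gives the
    attention distribution of query token i over all key tokens."""
    d_k = len(Q[0])
    out = []
    for q in Q:
        logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        m = max(logits)                       # stabilise the softmax
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

# 2 tokens, d_k = 2: every row of the score matrix sums to 1.
A = attention_scores([[1.0, 0.0], [0.0, 1.0]],
                     [[1.0, 0.0], [0.0, 1.0]])
assert all(abs(sum(row) - 1.0) < 1e-9 for row in A)
```

Each row being a probability distribution is what lets the paper read the [CLS] row as per-token contribution weights.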
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Identifying Important Patches",
27
+ "text": "In ViT models, the first step is tokenization: an input image is split into patches, each of which is transformed into a token embedding using a convolutional layer. In the final Transformer layer of ViT, the output corresponding to the [CLS] token is commonly utilized. For tasks such as object detection, this output is attached to a classification head, emphasizing its significance in the overall mechanism. This also means that we can employ the attention scores corresponding to this token to detect the important patches of an image (Liang et al. 2022 ###reference_b15###).\n\n\nThe attention scores form a matrix, where is the number of input tokens to the attention unit. Based on the attention Eq. 1 ###reference_###, each row of the attention score matrix contains the coefficients with which the other tokens contribute to forming the new token at the attention unit output.\n\n\nTherefore, the values in the first row of the attention scores indicate how much the other tokens contribute to forming the new [CLS] token before it is fed into the MLP layer of the ViT. Since ViT-small has 6 attention heads, we first average the attention scores across the head dimension. We then use these average values to determine which tokens contribute most significantly to determining the correct class for an image."
28
+ },
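The ranking step described in this section can be sketched as follows (a hedged illustration: `tokens_to_keep` and the toy matrices are ours; the paper's implementation details may differ):

```python
def tokens_to_keep(attn_heads, keep_ratio):
    """attn_heads: list of per-head score matrices; token 0 is [CLS].
    Rank the non-CLS tokens by the head-averaged attention that the
    [CLS] query (row 0) pays them, and keep the top keep_ratio fraction."""
    n_heads = len(attn_heads)
    n_tokens = len(attn_heads[0])
    # Average the [CLS] row across heads, skipping [CLS] itself.
    cls_row = [sum(h[0][j] for h in attn_heads) / n_heads
               for j in range(1, n_tokens)]
    n_keep = round(keep_ratio * len(cls_row))
    ranked = sorted(range(len(cls_row)), key=lambda j: cls_row[j],
                    reverse=True)
    return sorted(j + 1 for j in ranked[:n_keep])  # back to full indices

# Two heads, 4 tokens: token 2 gets the most [CLS] attention on average.
h1 = [[0.1, 0.2, 0.5, 0.2]] + [[0.25] * 4] * 3
h2 = [[0.1, 0.1, 0.6, 0.2]] + [[0.25] * 4] * 3
assert tokens_to_keep([h1, h2], 2 / 3) == [2, 3]
```

Returning sorted indices preserves the original token order of the kept subsequence, which matters when the dropped tokens are later restored to their positions.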
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Skip Connection For Tokens",
33
+ "text": "By dropping 45% of the tokens from the \\nth6 transformer layer of ViT-small and adding a single fused token, which incorporates a weighted average of the removed tokens, our model was unable to maintain baseline accuracy while gaining throughput, as shown in Table 1 ###reference_###.\n\n\nNon-essential patches in an image, like the background or the regions surrounding an object, often contain minimal information, so they can typically be excluded from some layers without significant impact. Completely disregarding them, however, can dramatically reduce the performance of the ViT, since even these areas can marginally guide the model and contribute to the final image classification.\n\nWe propose the use of a skip connection for the tokens that would otherwise be discarded. This approach selectively excludes these tokens from contributing to certain transformer layers within the model, while still incorporating them in the final layers. Returning the dropped tokens to their original positions among the other tokens reduces the impact of token dropping on the final classification accuracy of the model."
34
+ },
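The drop-then-restore mechanism described in this section can be sketched as a pair of helpers (our own names and a toy token list, not the paper's code; in practice the tokens would be embedding vectors):

```python
def split_tokens(tokens, keep_idx):
    """Partition the sequence into kept tokens (processed by the middle
    transformer layers) and skipped tokens (carried forward unchanged,
    remembered together with their original positions)."""
    keep = set(keep_idx)
    kept = [t for i, t in enumerate(tokens) if i in keep]
    skipped = [(i, t) for i, t in enumerate(tokens) if i not in keep]
    return kept, skipped

def merge_tokens(kept, skipped, keep_idx):
    """Return the skipped tokens to their original positions before the
    final transformer layers."""
    out = [None] * (len(kept) + len(skipped))
    for i, t in zip(keep_idx, kept):
        out[i] = t
    for i, t in skipped:
        out[i] = t
    return out

tokens = ["cls", "p1", "p2", "p3", "p4"]
kept, skipped = split_tokens(tokens, [0, 2, 4])
# ...the middle transformer layers would process `kept` only...
assert merge_tokens(kept, skipped, [0, 2, 4]) == tokens
```

Because the skipped tokens bypass the middle layers entirely, the attention and FFN cost in those layers scales with the kept subsequence only, which is where the throughput gain comes from.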
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Experiments",
39
+ "text": "We performed all of our experiments using ViT as our baseline architecture. For all of the experiments, we used the hyperparameters presented in Table 2 ###reference_### to train the models from scratch. We trained all of the models on the ImageNet1K dataset (Deng et al. 2009 ###reference_b5###) at resolution 224 using the Ascend version of AdamW (Loshchilov and Hutter 2017 ###reference_b16###). We then evaluated the models on the test set of 50,000 images for classification. The metric used to report accuracy is Top-1 (%), and samples per second (FPS) is used to report the training throughput of the models. All of the experiments in this paper are conducted using a cluster of Ascend 910A devices with 32GB of memory. We report our experimental results for the best token dropping strategy in Table 1 ###reference_###.\nThe results of our study indicate that the token fusion approach adopted from previous works (Liang et al. 2022 ###reference_b15###) is not sufficient for our original model and pre-training setup to maintain the final top-1 accuracy without using more advanced architectures such as (Touvron et al. 2021 ###reference_b19###), other data-efficient methods (Hajimolahoseini et al. 2023 ###reference_b9###; Li et al. 2021 ###reference_b14###), or increased compute."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Determining Optimal Layers and Ratios for Token Dropping",
45
+ "text": "We experimented with two strategies for dropping the tokens: a single-stage and a two-stage token dropping strategy (i.e., dropping in one or two layers), to find the best trade-off between training performance and the final accuracy of the ViT model. A summary of our experimental results is presented in Table 1 ###reference_###.\n\n\nWith both dropping methods we were able to obtain a relative speedup with little to no loss in the validation accuracy. With the single-layer token dropping method, our best configuration, dropping 55% of the tokens at layer 6 with a skip connection to layer 11, incurs only a 0.01% accuracy drop while gaining 13.23% throughput. Using the two-stage token dropping approach with a drop ratio of 30% for layers 4 and 7 and a skip connection to layer 11, our fastest model achieved 16.09% more FPS, reaching 69.4% classification accuracy, which outperforms the token fusion technique."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Finding The Optimal Skip Connection",
51
+ "text": "To prevent any degradation in the Top-1 accuracy of the ViT model, we reuse the dropped tokens in later layers. Based on our findings, there is a trade-off between the FPS improvement and the accuracy degradation depending on the transformer layer to which the tokens are returned. Table 1 ###reference_### indicates that delaying the skip connection by even 1 block can cause a substantial decrease in the accuracy metric. When dropping 30% of the tokens at layers 4 and 7, returning them to the sequence at the \\nth11 layer, compared to returning them at the \\nth10 layer, achieves 2.99% higher FPS while losing 0.33% accuracy."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Effect of Warm-up On Patch Detection Quality",
57
+ "text": "In Table 3 ###reference_###, the results indicate that for the same dropping ratio (30%) in the same layers (4 and 7), ViT reaches 2.55% higher accuracy when the first 15 epochs are used as a warm-up period before token dropping is applied. Based on these results, we can conclude that the warm-up epochs are an essential part of our token dropping strategy, helping the model select a more informative set of tokens to keep. Since we aim to train ViT models from scratch, the attention scores for tokens are initially derived using randomly initialized weights. As the model gradually learns the global relationships between different tokens after the first few epochs of training, the weights of the attention block become more effective at detecting important tokens."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Conclusions",
63
+ "text": "Training large Transformer models from scratch requires a huge amount of computation and time. In this paper, we propose SkipViT, an intuitive and stable framework to effectively reduce the amount of computation required to train ViT-based models. SkipViT takes advantage of the attention scores of the [CLS] token to separate the computation path of important tokens from that of less informative ones. Furthermore, our proposed framework achieves a significant speedup with no loss in the accuracy of the model by adding a skip connection from the dropping block to a later transformer block in ViT. Due to resource constraints, we were only able to apply our experiments to the small version of ViT using the ImageNet1K dataset. This method shows promising results in the current setup; however, it is limited by the size of the model and dataset and should be extended to larger versions of ViT. Additionally, a larger dataset could be used to train ViT using SkipViT to measure the scalability of this method. Another positive aspect of this approach is the reduced memory footprint of the model, which needs to be examined in future work."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"Sx1.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.1.1.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.1.1.1\">Dropping layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.1.1.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.1.2.1\">Drop ratio</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.1.1.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.1.3.1\">Skip connection</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.1.1.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.1.4.1\">Throughput (Speedup)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.1.1.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.1.1.5.1\">Acc. 
Top-1(%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.2.2.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">ViT-small</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.2.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\u2013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.2.2.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\u2013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.2.2.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,503</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx1.T1.1.2.2.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">70.17</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.3.3.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.3.3.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">45%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.3.3.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">fused token</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.3.3.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,963(+10.23%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.3.3.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">68.39(-1.78)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.4.4.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.4.4.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">30%,30%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.4.4.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">10</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"Sx1.T1.1.4.4.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">5,092(+13.1%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx1.T1.1.4.4.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">69.73(-0.44)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.5.5.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.5.5.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">30%,30%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.5.5.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.5.5.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.5.5.4.1\">5,227(+16.09%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.5.5.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">69.4(-0.77)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.6.6.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">6,8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.6.6.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">35%,35%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.6.6.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.6.6.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,711(+4.62%)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.6.6.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">70.53(+0.36)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.7.7.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">6,8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.7.7.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">35%,35%</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"Sx1.T1.1.7.7.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.7.7.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,838(+7.45%)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.7.7.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">70.41(+0.24)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.8.8.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.8.8.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">45%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.8.8.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.8.8.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">4,944(+9.8%)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.8.8.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx1.T1.1.8.8.5.1\">70.64(+0.47)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.9.9.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.9.9.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">50%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.9.9.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.9.9.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">5,021(+11.51%)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx1.T1.1.9.9.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">70.27(+0.1)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx1.T1.1.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx1.T1.1.10.10.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"Sx1.T1.1.10.10.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">55%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx1.T1.1.10.10.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx1.T1.1.10.10.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">5,098(+13.23%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx1.T1.1.10.10.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">70.16(-0.01)</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Performance comparison of various skip connection layers, one and two stage token dropping with different Token dropping ratios. Dropping layers shows in which transformer layer of ViT-small we discarded the tokens. Throughput is measured by the number of samples processed per second. The model using fused token does not use a skip connection and replaces the discarded token with a fused token. Metrics highlighted in bold represent the best results.</figcaption>\n</figure>",
+ "capture": "Table 1: Performance comparison of various skip connection layers and one- and two-stage token dropping with different token dropping ratios. Dropping layers shows in which transformer layer of ViT-small we discarded the tokens. Throughput is measured by the number of samples processed per second. The model using the fused token does not use a skip connection and replaces the discarded tokens with a fused token. Metrics highlighted in bold represent the best results."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"Sx3.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T2.1.1.1.1.1\">Parameter</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T2.1.1.1.2.1\">value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"Sx3.T2.1.2.1.1\">Batch size</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx3.T2.1.2.1.2\">288</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"Sx3.T2.1.3.2.1\">Epochs</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.3.2.2\">100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"Sx3.T2.1.4.3.1\">Weight Decay</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.3.2\">0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"Sx3.T2.1.5.4.1\">Learning Rate</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.4.2\">1e-3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"Sx3.T2.1.6.5.1\">Warmup LR</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.6.5.2\">1e-6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"Sx3.T2.1.7.6.1\">Mixup</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"Sx3.T2.1.7.6.2\">0.1</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Original Model Training Parameters</figcaption>\n</figure>",
+ "capture": "Table 2: Original Model Training Parameters"
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx3.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T3.1.1.1.1.1\">Dropping layers</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T3.1.1.1.2.1\">Drop ratio</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T3.1.1.1.3.1\">Warm-up epochs</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T3.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T3.1.1.1.4.1\">Acc. Top-1(%)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T3.1.2.1.1\">4,7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T3.1.2.1.2\">30%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T3.1.2.1.3\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T3.1.2.1.4\">67.18</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T3.1.3.2.1\">4,7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T3.1.3.2.2\">30%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T3.1.3.2.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T3.1.3.2.4\">69.73</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Results of training ViT with and without a warm-up ratio before applying 30% token dropping 
at layers 4 and 7 with skip connection to layer 10.</figcaption>\n</figure>",
+ "capture": "Table 3: Results of training ViT with and without a warm-up ratio before applying 30% token dropping at layers 4 and 7 with skip connection to layer 10."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2401.15293v1_figure_1.png",
+ "caption": "Figure 1: Overview of the SkipViT attention block where the unimportant image patches are dropped.",
+ "url": "http://arxiv.org/html/2401.15293v1/x1.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Speeding up resnet architecture with layers targeted low rank decomposition.",
+ "author": "Ahmed, W.; Hajimolahoseini, H.; Wen, A.; and Liu, Y. 2023.",
+ "venue": "arXiv preprint arXiv:2309.12412.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints.",
+ "author": "Ainslie, J.; Lee-Thorp, J.; de Jong, M.; Zemlyanskiy, Y.; Lebr\u00f3n, F.; and Sanghai, S. 2023.",
+ "venue": "arXiv preprint arXiv:2305.13245.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents.",
+ "author": "Ataiefard, F.; and Hemmati, H. 2023.",
+ "venue": "arXiv:2309.14615.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Deep State Inference: Toward Behavioral Model Inference of Black-Box Software Systems.",
+ "author": "Ataiefard, F.; Mashhadi, M. J.; Hemmati, H.; and Walkinshaw, N. 2022.",
+ "venue": "IEEE Transactions on Software Engineering, 48(12): 4857\u20134872.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "ImageNet: A large-scale hierarchical image database.",
+ "author": "Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; and Fei-Fei, L. 2009.",
+ "venue": "In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248\u2013255.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Qlora: Efficient finetuning of quantized llms.",
+ "author": "Dettmers, T.; Pagnoni, A.; Holtzman, A.; and Zettlemoyer, L. 2023.",
+ "venue": "arXiv preprint arXiv:2305.14314.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Bert: Pre-training of deep bidirectional transformers for language understanding.",
+ "author": "Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018.",
+ "venue": "arXiv preprint arXiv:1810.04805.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "An image is worth 16x16 words: Transformers for image recognition at scale.",
+ "author": "Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020.",
+ "venue": "arXiv preprint arXiv:2010.11929.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "SwiftLearn: A Data-Efficient Training Method of Deep Learning Models using Importance Sampling.",
+ "author": "Hajimolahoseini, H.; Awad, O. M.; Ahmed, W.; Wen, A.; Asani, S.; Hassanpour, M.; Javadi, F.; Ahmadi, M.; Ataiefard, F.; Liu, K.; et al. 2023.",
+ "venue": "arXiv preprint arXiv:2311.15134.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Methods, systems, and media for computer vision using 2d convolution of 4d video data tensors.",
+ "author": "Hajimolahoseini, H.; Kumar, K.; and Gordon, D. 2023.",
+ "venue": "US Patent App. 17/502,588.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Compressing Pre-trained Language Models using Progressive Low Rank Decomposition.",
+ "author": "Hajimolahoseini, H.; Rezagholizadeh, M.; Partovinia, V.; Tahaei, M.; Awad, O. M.; and Liu, Y. 2021.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Lora: Low-rank adaptation of large language models.",
+ "author": "Hu, E. J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; and Chen, W. 2021.",
+ "venue": "arXiv preprint arXiv:2106.09685.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values.",
+ "author": "Javadi, F.; Ahmed, W.; Hajimolahoseini, H.; Ataiefard, F.; Hassanpour, M.; Asani, S.; Wen, A.; Awad, O. M.; Liu, K.; and Liu, Y. 2023.",
+ "venue": "arXiv preprint arXiv:2311.03426.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "A short study on compressing decoder-based language models.",
+ "author": "Li, T.; Mesbahi, Y. E.; Kobyzev, I.; Rashid, A.; Mahmud, A.; Anchuri, N.; Hajimolahoseini, H.; Liu, Y.; and Rezagholizadeh, M. 2021.",
+ "venue": "arXiv preprint arXiv:2110.08460.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations.",
+ "author": "Liang, Y.; Ge, C.; Tong, Z.; Song, Y.; Wang, J.; and Xie, P. 2022.",
+ "venue": "arXiv:2202.07800.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Fixing Weight Decay Regularization in Adam.",
+ "author": "Loshchilov, I.; and Hutter, F. 2017.",
+ "venue": "CoRR, abs/1711.05101.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Dynamic spatial sparsification for efficient vision transformers and convolutional neural networks.",
+ "author": "Rao, Y.; Liu, Z.; Zhao, W.; Zhou, J.; and Lu, J. 2023.",
+ "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Fast Transformer Decoding: One Write-Head is All You Need.",
+ "author": "Shazeer, N. 2019.",
+ "venue": "arXiv:1911.02150.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Training data-efficient image transformers & distillation through attention.",
+ "author": "Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and J\u00e9gou, H. 2021.",
+ "venue": "In International conference on machine learning, 10347\u201310357. PMLR.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Attention is all you need.",
+ "author": "Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, \u0141.; and Polosukhin, I. 2017.",
+ "venue": "Advances in neural information processing systems, 30.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Attention Is All You Need.",
+ "author": "Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2023.",
+ "venue": "arXiv:1706.03762.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Random-LTD: Random and Layerwise Token Dropping Brings Efficient Training for Large-scale Transformers.",
+ "author": "Yao, Z.; Wu, X.; Li, C.; Holmes, C.; Zhang, M.; Li, C.; and He, Y. 2022.",
+ "venue": "arXiv:2211.11586.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2401.15293v1"
+ }
20240127/2401.15304v1.json ADDED
@@ -0,0 +1,313 @@
+ {
+ "title": "Adaptive Least Mean Squares Graph Neural Networks and Online Graph Signal Estimation",
+ "abstract": "The online prediction of multivariate signals, existing simultaneously in space and time, from noisy partial observations is a fundamental task in numerous applications.\nWe propose an efficient Neural Network architecture for the online estimation of time-varying graph signals named the Adaptive Least Mean Squares Graph Neural Networks (LMS-GNN).\nLMS-GNN aims to capture the time variation and bridge the cross-space-time interactions under the condition that signals are corrupted by noise and missing values.\nThe LMS-GNN is a combination of adaptive graph filters and Graph Neural Networks (GNN).\nAt each time step, the forward propagation of LMS-GNN is similar to adaptive graph filters where the output is based on the error between the observation and the prediction similar to GNN.\nThe filter coefficients are updated via backpropagation as in GNN.\nExperimenting on real-world temperature data reveals that our LMS-GNN achieves more accurate online predictions compared to graph-based methods like adaptive graph filters and graph convolutional neural networks.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "The online prediction of irregularly structured multi-variate signals across both spatial and temporal dimensions is vital in various real-life applications, including weather prediction [1 ###reference_b1###], brain connectivity analysis [2 ###reference_b2###, 3 ###reference_b3###], traffic flow monitoring [4 ###reference_b4###], and smart grid system management [5 ###reference_b5###].\nThe signals gathered in real life are often noisy and have missing values.\nWhen representing the irregularly structured multi-dimensionality in the time-varying signals using graphs, three challenges need to be addressed to bridge the gap between the online prediction of the time-varying signal and the spatial representation in the form of graph topology: reconstructing missing data, denoising noisy observations, and capturing the time-variation.\nIn the recent developments of Graph Signal Processing (GSP), researchers have leveraged the topological structures of graphs to obtain graph embedding in the spatial domain.\nFurthermore, the Graph Fourier Transform (GFT) was defined in GSP to conduct spectral analysis of the signals on graphs [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###].\nOne strategy to address the problems of the time-variation and represent space-time cross-dimensional interactions is to combine time series analysis techniques with GSP.\nFor example, GSP can be combined with the Vector Autoregressive model [10 ###reference_b10###], the Vector Autoregressive\u2013Moving-Average model [11 ###reference_b11###], and the GARCH model [12 ###reference_b12###].\nAnother approach that could be taken to process graph signals that have time dimension is to combine GSP with adaptive filtering.\nThe first of such proposals is the adaptive graph Least Mean Squares (GLMS) algorithm [13 ###reference_b13###], where the algorithm uses a fixed predefined bandlimited filter obtained from GFT to update based on an update term derived 
from a LMS problem [13 ###reference_b13###].\nThere are various extensions of the GLMS algorithm, including the Normalized GLMS (GNLMS) algorithm [14 ###reference_b14###], the graph (unnormalized and normalized) least means pth algorithm [15 ###reference_b15###, 16 ###reference_b16###], and the Graph-Sign algorithm [17 ###reference_b17###] to name a few.\nNotice that the adaptive GSP algorithms are not limited to the spectral domain but could also be conducted in the spatial domain.\nFor example, the bandlimited filters in the GLMS or the Graph-Sign algorithms can be approximated using a series of Chebyshev polynomials, resulting in the adaptive graph diffusion algorithms [18 ###reference_b18###, 19 ###reference_b19###].\nEven though the above-mentioned GSP approaches have shown successful results in the online estimation of graph signals, unlike classical adaptive filters that adaptively update the filter weights, the performance of adaptive graph filters relies on the design of a fixed predefined filter.\nAn accurate definition of graph filters requires good prior knowledge of the spectrum of the graph signal, which may not be obtainable in reality, demanding the necessity of methods that require no prior knowledge.\nGraph Neural Network (GNN) has extended the spatial and spectral GSP techniques to time-invariant machine learning tasks, including node classification, link prediction, and image classification [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###].\nDifferent from the GSP approaches, GNN methods such as Graph Convolutional Neural Networks and Graph Attention Networks learn the filter from the given data through backpropagation.\nAdditionally, the non-linear activations found in GNNs are capable of handling non-linear relationships in the signals, enabling GNNs to solve a broader range of tasks.\nThe discussed GNN techniques have shown success when the data is purely a fixed graph signal that has no time-varying features, which 
leads to the requirement for additional treatment when the data also contains time-varying features.\nThere are very few attempts at combining GSP algorithms or GNNs with other deep-learning algorithms to include the processing of time-varying graph signals.\nThe Spatio-Temporal GCN (STGCN) and its variants use a combination of GCN and gated CNN to process spatial features and temporal features [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###].\nHowever, the STGCN in [23 ###reference_b23###] assumes there is no missing data and assumes the signals are clean of noise.\nAdditionally, STGCN is an offline method where the complicated architecture is trained using an enormous amount of data before STGCN is deployed, and STGCN was not designed to make predictions when the number of observed time steps is extremely limited.\nTwo empirical approaches that utilize the GFT can be taken in the task of predicting time-varying graph signals.\nThe first approach is to transfer the graph signal to the spectral domain using the GFT, then process it by a sequence of Gated Recurrent Units (GRU) and transfer it back to the spatial domain in [26 ###reference_b26###].\nSimilarly, a second method named the Spectro-Temporal GCN uses GFT and graph filters to process graph signals while handling time dependencies using the classical Discrete Fourier Transform; the predictions are done by a series of Fully Connected (FC) layers after the GCN output [27 ###reference_b27###].\nThe major drawback of these two empirical approaches is that GCN or GFT only acts as data transformations, and the performance relies only on the DFT, GRUs, or FC layers that do not utilize the graph topology.\nIn addition, GRUs and FC layers have low interpretability.\nThus, we need a relatively simple and interpretable algorithm that could overcome the three challenges we discussed.\nIn this paper, we propose the Adaptive Least Mean Squares Graph Neural Networks (LMS-GNN) that conduct online 
estimations of time-varying graph signals under noisy observations with missing values.\nThe resulting LMS-GNN architecture is trained using the residuals of the estimation at each time step, which allows LMS-GNN to obtain a trained filter that later adaptively makes predictions in the opposite direction of the error.\nA recurrent setup will feed the filter trained by the NN components back into the adaptive graph filter components as a time-varying bandlimited filter.\nExperiment results on real-world temperature data have shown that our proposed LMS-GNN can accurately make online predictions of the future temperature compared with current GSP and GNN approaches.\nBelow is a summary of the contributions and advantages of our proposed LMS-GNN:\nThe LMS-GNN combines the advantages of GNNs and adaptive graph filters: rather than predefining a fixed filter using prior knowledge like in GSP methods, LMS-GNN uses a Neural Network structure to learn the filter from the given missing and noisy observations.\nInstead of aggregating only the signal as seen in most GNNs, the adaptive filter backbone of LMS-GNN enables it to capture the time-varying signal dynamics.\nThe combination of GNNs with adaptive graph filters makes the LMS-GNN simple yet efficient, offering high model interpretability and low computational complexity."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Background",
+ "text": "A graph can be represented by .\nThe node set represents nodes and the edge set represents whether the nodes are connected or not.\nGraph signals, which we denote as , are the functional values on each node.\nFor each graph there is an adjacency matrix and a degree matrix .\nThe adjacency matrix is a dimensional matrix with the entry equals to the edge weight if there is an edge between node and node , and equals to if there is no edge.\nThe degree matrix is a diagonal matrix where the entry represents the degree of the node.\nIn the context of an undirected graph, the degree is the sum of edge weights connected to the node.\nThe core of GSP and the GNN algorithms is the graph Laplacian matrix defined as .\nIn this paper, we will only be considering undirected graphs, so is a positive semi-definite matrix.\nThe GFT is defined based on the eigenvalue decomposition of the graph Laplacian matrix: .\nThe matrix represents the orthonormal eigenvector matrix and diag represents the diagonal matrix of all eigenvalues in increasing order.\nIn GSP, this increasing ordering of eigenvalue eigenvector pair can be interpreted as frequencies as an analogy to classic signal processing [7 ###reference_b7###].\nThe GFT is defined as , which transforms the original graph signal from spatial domain to spectral domain.\nAccordingly, the Inverse Graph Fourier transform (IGFT) transforms back to the spatial domain from the spectral domain.\nIn GSP and GNN algorithms, a filter can be applied to a graph signal using the graph convolution operation , where .\nIn GSP algorithms, the bandlimitedness of a graph signal in the spectral domain is defined by a frequency set with elements in the spectral domain.\nA bandlimiting filter is an idempotent and self-adjoint diagonal matrix defined as if and 0 if .\nIf a graph signal is bandlimited, then it has the property of .\nThe sampling operation on graph signal is performed with a diagonal sampling matrix according to the sampling set 
[13 ###reference_b13###].\nThe diagonal entries are equal to 1 if in the sampling set and 0 otherwise."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Algorithm Derivation",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Spectral Graph Neural Networks",
+ "text": "In GCN, when omitting the activation function, the graph convolution is an aggregation of graph features (signals):\nwhere is the graph signal to be processed, and are the trainable parameters.\nThe expression on the right side of the approximation is a spectral method [22 ###reference_b22###] and the expression on the left side is a spatial method [20 ###reference_b20###, 21 ###reference_b21###].\nNotice that self-aggregation can be achieved in (1 ###reference_###) for .\nAdding the activation function to the spectral convolution, the layer of spectral GCN with filters is\nIn the spectral GCN, the goal is to obtain a convolution that is formed by filters localized in the spectral domain [22 ###reference_b22###].\nIn other words, the objective of the GCN is to train the parameters so that it eventually will resemble the structure of the underlying frequency spectrum of the ground truth data .\nIn GSP terms, a layer GCN trains a sequence of filters that satisfies ."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Adaptive Filtering",
+ "text": "Conventionally in adaptive graph filters, is given as the noisy observation with missing values of the ground truth graph signal ;\n is assumed to be bandlimited or close to bandlimited [13 ###reference_b13###, 14 ###reference_b14###].\nMissing values in can be represented by a sampling mask , making .\nIn the forward propagation phase of LMS-GNN, we aim to minimize the error between our observation and estimation by solving the following -norm optimization problem [13 ###reference_b13###]\nwhere is the time step, is the current estimation and the matrix is to be trained from the graph signal.\nThe cost function in 3 ###reference_### is a convex optimization problem that aims to minimize the mean-squared error of the estimation.\nThe GSP convention of signal bandlimitedness allows us to exploit the property .\nThen, the solution of the cost function (3 ###reference_###) is simply calculating the (filtered) residual\nwhere the residual is the estimation error.\nLetting the linear model track the time-varying dynamics of the graph signal, with , will lead to a forward propagation based on adaptive graph filters:\nwhere parameter adjusts the magnitude of .\nIn the conventional GSP, a predefined bandlimited filter is used to process the graph signal [13 ###reference_b13###].\nEven though algorithms like the GLMS and GNLMS have a simple implementation, the prediction capability relies on a predefined filter .\nStraightforward calculation of the using the observation is inaccurate due to the presence of noise and missing data, making it more advantageous to learn a filter from both the data and the estimation.\nMoreover, is fixed throughout the operation of GLMS.\nWhen the data is time-varying, as the data evolves, the predefined filter at some earlier time step may not have the same frequencies as the current signal.\nThus, it would be advantageous if we could train the graph filter as we make predictions over time."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Least Mean Squares Graph Neural Networks",
+ "text": "Observing the term , we see that it bears similarity with the graph aggregation in (1 ###reference_###).\nA (non-linear) graph aggregation based on can be achieved by the difference between two single-layer spectral GCN in (2 ###reference_###) (omitting ):\nBy exploiting (1 ###reference_###), an additional term is brought inside the activation function of (6 ###reference_###): setting and , then agg.\nIf we set and also include a self aggregation on the estimated signal , a single layer of LMS-GNN can be formulated as\nIn (7 ###reference_###), is the layer estimate of , is the layer step size, is the trainable bias term following GCN convention in [20 ###reference_b20###], and is the nonlinear activation function.\nIn the forward propagation, for a -layer LMS-GNN to make online predictions, we treat the final output as the prediction of the signal at time : .\nThe other layers act as denoising layers.\nThis can be ensured by assigning a relatively larger at the final layer compared to other layers.\nThe graph adaptive filter backbone of LMS-GNN will update in the direction opposite to the error because the update strategy is defined based on a -norm optimization in (3 ###reference_###).\nIn the GNN aspect, the forward propagation of LMS-GNN is an aggregation of the opposite to the error , with the filters trained from .\nIn the backward propagation of the LMS-GNN, the loss is calculated again based on the estimation error .\nThe backward propagation of LMS-GNN calculates the gradient for updating the network weights and the bias term at each layer along the path that propagates.\nWe should point out that even though there is a negative sign within at , there is also another negative sign merged into the step-size for the adaptive filter formulation (5 ###reference_###), which means that the sign is correct when calculating the gradient using backpropagation.\nLMS-GNN can use an online update of the network parameters in the testing phase.\nTo achieve the 
online update of the neural network parameters, we use a similar update scheme as the classical adaptive filter where the loss is calculated using only the next step prediction and when the next step observation arrives by calculating the error .\nBackpropagation is applied to this loss if the weight parameters need to be updated in testing."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Experiment Setup",
+ "text": "The real-world dataset used in experiments is the time-varying graph recordings of hourly temperatures gathered from weather stations across the U.S. [28 ###reference_b28###], with each weather station represented as a graph node.\nThe graph structure is formed using 8-nearest-neighbor based on the latitude and longitude of the weather stations using the method seen in [14 ###reference_b14###].\nWe will be comparing the LMS-GNN with GLMS, GNLMS, GCN, and STGCN.\nThe dataset is split so the first 24 hourly measurements will be the training set used to train the network weights for GCN, STGCN, and LMS-GNN.\nA bandlimited filter for GNLMS and GLMS is also defined using the spectrum of the training set with parameter following the greedy approach that maximizes the spectral content as seen in [14 ###reference_b14###].\nThe experiments are conducted under Gaussian noise with zero mean and three different noise variance (VAR) scales: VAR and .\nDuring the testing phase, the noisy and missing graph signal is fed into the algorithms one single time instance at a time for the remaining time points.\nThe goal of all the algorithms is to predict the temperature given only the missing and noisy temperature observation .\nThe original setting of the STGCN assumed the input contained no missing data [23 ###reference_b23###], so we followed this setting for the STGCN only in our experiments.\nMissing data are modeled using a spatial sampling strategy shown in the GNLMS [14 ###reference_b14###].\nThe LMS-GNN has 3 layers that share the same and .\nThe first two layers of the LMS-GNN have and serve as denoising layers.\nThe final LMS-GNN layer has , aiming to reflect the time-varying nature of the graph signal.\nBoth the GCN and STGCN are 2 layers.\nAll of the neural networks use the Adam optimizer with the criterion being the L1 loss.\nThe activation function is the Parametric ReLU for all the layers except the last layer uses identity activation.\nThe step size of 
GNLMS and GLMS follows the exact setting seen in the original GNLMS literature [14 ###reference_b14###]."
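The 8-nearest-neighbor graph construction described in the setup can be sketched as follows. This is a minimal illustration with synthetic coordinates (the paper uses real U.S. station locations and the weighting scheme of [14]); `knn_adjacency` is a hypothetical helper name.

```python
import numpy as np

# Hypothetical station coordinates (lat, lon); the paper uses real U.S.
# weather-station locations, which are not reproduced here.
rng = np.random.default_rng(0)
coords = rng.uniform(low=[25.0, -125.0], high=[49.0, -67.0], size=(30, 2))

def knn_adjacency(points, k=8):
    """Build a k-nearest-neighbor adjacency matrix from coordinates.

    Each node is connected to its k closest nodes by Euclidean distance;
    the result is symmetrized so the graph is undirected.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)           # exclude self-loops
    idx = np.argsort(dist, axis=1)[:, :k]    # k nearest neighbors per node
    n = len(points)
    A = np.zeros((n, n))
    A[np.arange(n)[:, None], idx] = 1.0
    return np.maximum(A, A.T)                # symmetrize: undirected graph

A = knn_adjacency(coords, k=8)
```

For latitude/longitude pairs, replacing the Euclidean distance with a great-circle distance would be more faithful; the unweighted 0/1 adjacency here is only a simplification.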
52
+ },
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "IV-B Results and Discussion",
57
+ "text": "The performance of the algorithms is measured in both the spectral domain and the spatial domain.\nFor the spectral domain, we will calculate the mean absolute error (MAE), .\nIn the spatial domain, the performance is measured by the Mean Squared Error (MSE) at each time step by .\nIn the MSE and the MAE calculations, is the number of nodes and the subscript denotes the node.\nThe spatial domain MSE of all the tested algorithms for the noise with VAR is shown in Fig. 1 ###reference_### and the spectral domain MAE is shown in Fig. 2 ###reference_###.\nThe averaged MSE of the predicted signals across all time points is calculated as MSE and the averaged MAE across all T time points is MAE ; these results are summarized in Table I ###reference_###.\nFrom the MSE in Fig. 1 ###reference_###, we can see that our LMS-GNN has the lowest MSE at almost all the time points in the spatial domain.\nThe GCN performs the worst because it does not capture the time-varying changes.\nThe LMS-GNN has lower spectral MSE than GNLMS and GLMS because LMS-GNN trains a filter from the data that updates as the data changes, which captures the time-varying dynamics features more accurately.\nThe lower MAE results show that the LMS-GNN effectively trains a filter in the spectral domain that properly denoises the input and accurately restores the spectral domain features.\nAs for comparing LMS-GNN with STGCN, STGCN requires a significant amount of clean training data to be properly trained and STGCN requires feeding in a longer time span of the past data to capture the temporal changes; these requirements are usually more difficult to satisfy in reality.\nIn our experiment setting, the amount of training data is limited, the data is noisy, and algorithms are requested to make one-step predictions based on only the current step observations.\nUnder these circumstances, the LMS-GNN excels the most among all the tested algorithms.\nLMS-GNN effectively captures space-time change 
simultaneously using a minimalistic yet interpretable implementation, allowing it to make online predictions from a noisy and partially observed time-varying graph signal.\n###figure_1### ###figure_2###"
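The two evaluation metrics (per-time-step spatial MSE and spectral MAE via the graph Fourier transform) can be sketched as follows. This is a toy illustration with random data and a random graph Laplacian, not the paper's experiment; all variable names are assumptions.

```python
import numpy as np

# Toy sizes; the paper's experiment has N weather stations over T hours.
rng = np.random.default_rng(1)
N, T = 20, 10
x_true = rng.normal(size=(N, T))                  # ground-truth signals
x_hat = x_true + 0.1 * rng.normal(size=(N, T))    # predictions

# Graph Fourier basis: eigenvectors of a (here random, symmetric) Laplacian.
M = rng.normal(size=(N, N))
W = np.abs(M + M.T)
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W
_, U = np.linalg.eigh(L)                          # columns = GFT basis

mse_t = ((x_hat - x_true) ** 2).mean(axis=0)            # spatial MSE per t
mae_t = np.abs(U.T @ x_hat - U.T @ x_true).mean(axis=0)  # spectral MAE per t
avg_mse, avg_mae = mse_t.mean(), mae_t.mean()            # Table-I-style averages
```

The spatial MSE averaged over time equals the overall elementwise MSE, since each time step has the same number of nodes.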
58
+ },
59
+ {
60
+ "section_id": "5",
61
+ "parent_section_id": null,
62
+ "section_name": "Conclusion",
63
+ "text": "The LMS-GNN was proposed for the online estimation of graph signals with missing data and noise corruption.\nCombining the adaptive graph filters with GNNs allows the LMS-GNN to capture the online time variation and bridge the cross-space-time interactions.\nThe adaptive GSP backbone allows the LMS-GNN to update based on the estimation error at each time step while the GNN components of the LMS-GNN update the filter coefficients at each time step.\nThe LMS-GNN demonstrated accurate online prediction capabilities when compared against adaptive GSP algorithms and GCNs on real-world temperature data."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.3.2\" style=\"font-size:90%;\">Prediction spatial MSE and spectral MAE averaged over all the time points under different noise levels</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.1\">\n<td class=\"ltx_td\" id=\"S4.T1.4.1.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.4.1.1.2\">LMS-GNN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.4.1.1.3\">GLMS</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.4.1.1.4\">GNLMS</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.4.1.1.5\">GCN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.4.1.1.6\">STGCN</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"6\" id=\"S4.T1.4.2.2.1\">Spatial MSE</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3.1\">VAR = 0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.3.3.2.1\">0.555</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3.3\">2.112</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3.4\">1.470</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3.5\">7.559</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.3.3.6\">4.990</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T1.4.4.4.1\">VAR = 0.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.4.2.1\">0.616</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4.3\">2.116</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4.4\">1.506</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4.5\">7.523</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4.6\">5.709</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.5.5.1\">VAR = 1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.5.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.5.5.2.1\">0.680</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.5.5.3\">2.120</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.5.5.4\">1.553</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.5.5.5\">7.554</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.5.5.6\">5.298</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"6\" id=\"S4.T1.4.6.6.1\">Spectral MAE</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.7.7.1\">VAR = 0.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.7.7.2.1\">0.383</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.7.7.3\">0.946</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.7.7.4\">0.852</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.7.7.5\">2.008</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.7.7.6\">0.665</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.8.8\">\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T1.4.8.8.1\">VAR = 0.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.8.8.2.1\">0.398</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.8.8.3\">0.947</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.8.8.4\">0.862</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.8.8.5\">2.008</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.8.8.6\">0.747</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.9.9.1\">VAR = 1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.9.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.9.9.2.1\">0.413</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.9.9.3\">0.948</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.9.9.4\">0.875</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.9.9.5\">2.007</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.9.9.6\">0.660</td>\n</tr>\n</tbody>\n</table>\n</figure>",
70
+ "capture": "TABLE I: Prediction spatial MSE and spectral MAE averaged over all the time points under different noise levels"
71
+ }
72
+ },
73
+ "image_paths": {
74
+ "1": {
75
+ "figure_path": "2401.15304v1_figure_1.png",
76
+ "caption": "Figure 1: The MSE for the predictions from t=1\ud835\udc611t=1italic_t = 1 to 95959595, with the VAR = 0.1 for the zero-mean Gaussian noise. (t = [0, 24] is the training set, and t = [25, 95] is the testing set.)",
77
+ "url": "http://arxiv.org/html/2401.15304v1/x1.png"
78
+ },
79
+ "2": {
80
+ "figure_path": "2401.15304v1_figure_2.png",
81
+ "caption": "Figure 2: The MAE for the predictions from t=1\ud835\udc611t=1italic_t = 1 to 95959595, with the VAR = 0.1 for the zero-mean Gaussian noise. (t = [0, 24] is the training set, and t = [25, 95] is the testing set.)",
82
+ "url": "http://arxiv.org/html/2401.15304v1/x2.png"
83
+ }
84
+ },
85
+ "validation": true,
86
+ "references": [
87
+ {
88
+ "1": {
89
+ "title": "\u201cWeather forecasting using deep learning techniques,\u201d",
90
+ "author": "A. G. Salman, B. Kanigoro, and Y. Heryadi,",
91
+ "venue": "in ICACSIS, 2015, pp. 281\u2013285.",
92
+ "url": null
93
+ }
94
+ },
95
+ {
96
+ "2": {
97
+ "title": "\u201cNetwork modelling methods for fmri,\u201d",
98
+ "author": "S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and M. W. Woolrich,",
99
+ "venue": "NeuroImage, vol. 54, no. 2, pp. 875\u2013891, 2011.",
100
+ "url": null
101
+ }
102
+ },
103
+ {
104
+ "3": {
105
+ "title": "\u201cSequential monte carlo graph convolutional network for dynamic brain connectivity,\u201d",
106
+ "author": "F. Zhao and E. E. Kuruoglu,",
107
+ "venue": "arXiv, 2023.",
108
+ "url": null
109
+ }
110
+ },
111
+ {
112
+ "4": {
113
+ "title": "\u201cTraffic flow prediction with big data: A deep learning approach,\u201d",
114
+ "author": "Y. Lv, Y. Duan, W. Kang, Z. Li, and F. Wang,",
115
+ "venue": "IEEE Trans. Intell. Transp. Syst., vol. 16, no. 2, pp. 865\u2013873, 2015.",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "5": {
121
+ "title": "\u201cWhen weather matters: Iot-based electrical load forecasting for smart grid,\u201d",
122
+ "author": "L. Li, K. Ota, and M. Dong,",
123
+ "venue": "IEEE Commun. Mag., vol. 55, no. 10, pp. 46\u201351, 2017.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "6": {
129
+ "title": "\u201cThe emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,\u201d",
130
+ "author": "D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst,",
131
+ "venue": "IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83\u201398, 2013.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "7": {
137
+ "title": "\u201cGraph signal processing: Overview, challenges, and applications,\u201d",
138
+ "author": "A. Ortega, P. Frossard, J. Kova\u010devi\u0107, J. M. F. Moura, and P Vandergheynst,",
139
+ "venue": "Proc. IEEE, vol. 106, no. 5, pp. 808\u2013828, 2018.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "8": {
145
+ "title": "\u201cGraph signal processing for machine learning: A review and new perspectives,\u201d",
146
+ "author": "X. Dong, D. Thanou, L. Toni, M. Bronstein, and P. Frossard,",
147
+ "venue": "IEEE Signal processing magazine, vol. 37, no. 6, pp. 117\u2013127, 2020.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "9": {
153
+ "title": "\u201cBig data analysis with signal processing on graphs: Representation and processing of massive data sets with irregular structure,\u201d",
154
+ "author": "A. Sandryhaila and J. M.F. Moura,",
155
+ "venue": "IEEE Signal Process. Mag., vol. 31, no. 5, pp. 80 \u2013 90, 2014.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "10": {
161
+ "title": "\u201cSignal processing on graphs: Causal modeling of unstructured data,\u201d",
162
+ "author": "J. Mei and J. M. F. Moura,",
163
+ "venue": "IEEE Trans. Signal Process., vol. 65, no. 8, pp. 2077\u20132092, 2017.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "11": {
169
+ "title": "\u201cAutoregressive moving average graph filtering,\u201d",
170
+ "author": "E. Isufi, A. Loukas, A. Simonetto, and G. Leus,",
171
+ "venue": "IEEE Trans. Signal Process., vol. 65, no. 2, pp. 274\u2013288, 2017.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "12": {
177
+ "title": "\u201cMultivariate time series forecasting with garch models on graphs,\u201d",
178
+ "author": "J. Hong, Y. Yan, E. E. Kuruoglu, and W. K. Chan,",
179
+ "venue": "IEEE Trans. Signal Inf. Process. Netw., vol. 9, pp. 557\u2013568, 2023.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "13": {
185
+ "title": "\u201cAdaptive least mean squares estimation of graph signals,\u201d",
186
+ "author": "P. D. Lorenzo, S. Barbarossa, P. Banelli, and S. Sardellitti,",
187
+ "venue": "IEEE Trans. Signal Inf. Process. Netw., vol. 2, no. 4, pp. 555 \u2013 568, 2016.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "14": {
193
+ "title": "\u201cNormalized lms algorithm and data-selective strategies for adaptive graph signal estimation,\u201d",
194
+ "author": "M. J. M. Spelta and W. A. Martins,",
195
+ "venue": "Signal Processing, vol. 167, pp. 107326, 2020.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "15": {
201
+ "title": "\u201cAdaptive estimation and sparse sampling for graph signals in alpha-stable noise,\u201d",
202
+ "author": "N. Nguyen, K. Do\u011fan\u00e7ay, and W. Wang,",
203
+ "venue": "Digital Signal Processing, vol. 105, pp. 102782, 2020.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "16": {
209
+ "title": "\u201cGraph normalized-lmp algorithm for signal estimation under impulsive noise,\u201d",
210
+ "author": "Y. Yan, R. Adel, and E. E. Kuruoglu,",
211
+ "venue": "J. Signal Process. Syst., pp. 1\u201312, 2022.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "17": {
217
+ "title": "\u201cAdaptive sign algorithm for graph signal processing,\u201d",
218
+ "author": "Y. Yan, E. E. Kuruoglu, and M. A. Altinkaya,",
219
+ "venue": "Signal Processing, vol. 200, 2022.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "18": {
225
+ "title": "\u201cA graph diffusion lms strategy for adaptive graph signal processing,\u201d",
226
+ "author": "R. Nassif, C. Richard, J. Chen, and A. H. Sayed,",
227
+ "venue": "Asilomar, 2017.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "19": {
233
+ "title": "\u201cFast and robust wind speed prediction under impulsive noise via adaptive graph-sign diffusion,\u201d",
234
+ "author": "Y. Yan and E. E. Kuruoglu,",
235
+ "venue": "in IEEE CAI, 2023, pp. 302\u2013305.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "20": {
241
+ "title": "\u201cSemi-supervised classification with graph convolutional networks,\u201d",
242
+ "author": "T. Kipf and M. Welling,",
243
+ "venue": "ICLR, 2016.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "21": {
249
+ "title": "\u201cConvolutional neural networks on graphs with fast localized spectral filtering,\u201d",
250
+ "author": "M. Defferrard, X. Bresson, and P. Vandergheynst,",
251
+ "venue": "Advances in neural information processing systems, vol. 29, 2016.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "22": {
257
+ "title": "\u201cSpectral networks and locally connected networks on graphs,\u201d",
258
+ "author": "J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun,",
259
+ "venue": "ICLR, 2014.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "23": {
265
+ "title": "\u201cSpatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting,\u201d",
266
+ "author": "B. Yu, H. Yin, and Z. Zhu,",
267
+ "venue": "in IJCAI, 2018.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "24": {
273
+ "title": "\u201cSpatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting,\u201d",
274
+ "author": "C. Song, Y. Lin, S. Guo, and H. Wan,",
275
+ "venue": "in Proceedings of the AAAI conference on artificial intelligence, 2020, vol. 34, pp. 914\u2013921.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "25": {
281
+ "title": "\u201cSpatial-temporal fusion graph neural networks for traffic flow forecasting,\u201d",
282
+ "author": "M. Li and Z. Zhu,",
283
+ "venue": "in Proceedings of the AAAI conference on artificial intelligence, 2021, vol. 35, pp. 4189\u20134196.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "26": {
289
+ "title": "\u201cJoint forecasting and interpolation of time-varying graph signals using deep learning,\u201d",
290
+ "author": "G. Lewenfus, W. A. Martins, S. Chatzinotas, and B. Ottersten,",
291
+ "venue": "IEEE Trans. Signal Inf. Process. Netw., vol. 6, pp. 761\u2013773, 2020.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "27": {
297
+ "title": "\u201cSpectral temporal graph neural network for multivariate time-series forecasting,\u201d",
298
+ "author": "D. Cao et al.,",
299
+ "venue": "Advances in neural information processing systems, vol. 33, pp. 17766\u201317778, 2020.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "28": {
305
+ "title": "\u201cU.S. climate normals 2020: U.S. hourly climate normals (1991-2020),\u201d",
306
+ "author": "M. Palecki, I. Durre, S. Applequist, A Arguez, and J. Lawrimore,",
307
+ "venue": "NOAA National Centers for Environmental Information, 2020.",
308
+ "url": null
309
+ }
310
+ }
311
+ ],
312
+ "url": "http://arxiv.org/html/2401.15304v1"
313
+ }
20240127/2401.15307v1.json ADDED
 
20240127/2401.15308v1.json ADDED
@@ -0,0 +1,54 @@
 
1
+ {
2
+ "title": "Construction of Locally Repairable Array Codes with Optimal Repair Bandwidth under the Rack-Aware Storage Model",
3
+ "abstract": "In this paper, we discuss codes for distributed storage systems with hierarchical repair properties. Specifically, we devote attention to the repair problem of the rack-aware storage model with locality, aiming to enhance the system\u2019s ability to repair a small number of erasures within each rack by locality and efficiently handling a rack erasure with a small repair bandwidth. By employing the regenerating coding technique, we construct a family of array codes with -locality, where the nodes of each repair set are systematically organized into a rack. When the number of failures is less than , these failures can be repaired without counting the system bandwidth. In cases where the number of failures exceeds the locality, the failed nodes within a single rack can be recovered with optimal cross-rack bandwidth.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "With the development of the information technology and artificial intelligence, the question of how to store data has become increasingly crucial. Erasure coding has been introduced to address the challenges of large-scale storage in distributed systems, offering high fault-tolerance capabilities with significantly less redundancy compared to traditional replication methods, such as the well-known Maximum Distance Separable (MDS) codes, which provide optimal failure tolerance and support for minimal storage overhead. However, when node failures occur in this storage system, traditional erasure codes suffer from challenges related to excessive bandwidth requirements for recovering the failed nodes.\nTo minimize data transmission during the repair process, Dimakis et al. [5 ###reference_b5###] introduced regenerating codes, aiming for an optimal tradeoff between storage and bandwidth. This approach involves contacting more surviving nodes but downloading only partial content from each, leading to a significant improvement compared to traditional schemes that necessitate downloading full data from helper nodes, referring to [5 ###reference_b5###, 17 ###reference_b17###, 18 ###reference_b18###, 21 ###reference_b21###, 27 ###reference_b27###, 29 ###reference_b29###, 16 ###reference_b16###, 1 ###reference_b1###, 2 ###reference_b2###, 19 ###reference_b19###, 23 ###reference_b23###, 24 ###reference_b24###, 22 ###reference_b22###] as examples.\nSimultaneously, another strategy to enhance repair efficiency is the utilization of locally repairable codes, which decrease the number of nodes accessed by the repair schemes to simplify the repair process, referring to [26 ###reference_b26###, 25 ###reference_b25###, 13 ###reference_b13###, 11 ###reference_b11###, 8 ###reference_b8###, 7 ###reference_b7###, 9 ###reference_b9###, 4 ###reference_b4###, 15 ###reference_b15###, 20 ###reference_b20###, 6 ###reference_b6###, 3 ###reference_b3###] as examples. 
In general, the reduction in the number of connected nodes results in a substantial decrease in repair bandwidth.\nFor a locally repairable code with -locality, if the number of failed nodes is less than , these failures can be easily recovered by leveraging the MDS property within the repair set. However, if or more failures occur simultaneously in a repair set, the locality breaks down.\nRecently, Cai et al. [12 ###reference_b12###] combined locally repairable codes and regenerating codes to address erasure patterns where the number of failures exceeds the locality. They considered a practical scenario in which nodes in each repair set may be located in adjacent physical positions, forming what can be seen as a rack. Following the bandwidth assumption of the rack-aware model, the cut-set bound is established for the rack-aware system with locality.\nIn this paper, we consider locally repairable codes under the rack-aware storage model, where each local repair set is organized into a rack. Inspired by regenerating codes, we generalize the Tamo-Barg code into an array form, resulting in a family of locally repairable array codes. The proposed code accommodates any number of failures in a single rack and achieves the optimal repair bandwidth for erasures beyond the local repair property. Furthermore, in comparison to traditional rack-based codes, this construction can support a broader range of erasure patterns with small repair bandwidth."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Preliminaries",
15
+ "text": "First of all, we introduce some notation and repair models involved in this paper.\nFor any positive integers , denote by the set and the set . Let be a prime power and be a finite field of elements.\nDenote by the ring of polynomials over . We represent as an -length vector consisting of polynomials, where , for and .\nIn this paper, we consider the array form of codes with locality. Herein, we give the formal definition.\nFor , the -th code symbol of an linear array code has locality if there exists a subset (repair set) such that\nand ;\nthe minimum distance of the punctured code is at least .\nFurthermore, the code is said to be a locally repairable code with -locality if all the code symbols have -locality.\nWe employ Reed-Solomon (RS) codes as the building block of MDS array codes to design locally repairable array codes, whose definition is given below.\nLet be the set of distinct elements over . A Generalized Reed-Solomon code of length and dimension with evaluation points is defined as\nwhere .\nWhen , this code is called a Reed-Solomon code denoted as for short.\nLet be a linear code over of length . The dual code of code is defined as\nThe dual of an RS code is a GRS code. Precisely, , where , . Therefore, this RS code can also be defined by the parity-check equations\nSince the constant multiplier is well-defined by the evaluation points , we omit it for simplicity in the subsequent discussion."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A System Models",
21
+ "text": "We illustrate the connections and differences in storage models involved in this work. To begin with, we introduce two common storage models.\nAssume that a file is divided into blocks, encoded into blocks, and then placed in different storage nodes.\nHomogeneous storage model\nFor the research of distributed storage codes, most studies have concentrated on the homogeneous storage model, in which nodes are distributed uniformly in different locations. As shown in Fig. , if a failure occurs in the node , surviving nodes (referred to as helper nodes) send symbols to a new replacement node respectively to repair the failed symbol. The repair bandwidth counts as the data transmission from all the helper nodes to the replacement node, i.e., .\nRack-aware storage model\nIn this model, storage nodes are divided into groups of size and distributed to different racks (refer to Fig. ). Since nodes within the same rack have a significantly lower transmission cost than the inter-rack transmission, the intra-rack transmission is considered to be free. Assume that the node located in Rack fails, the nodes in helper racks first send data to a special relayer in each rack, then those relayers respectively transmit symbols to the failed rack, and the remaining nodes in the failed rack transmit symbols to the new replacement node. Since the storage model only counts the data transmission between the racks, the repair bandwidth is equal to .\nDue to the feature of the rack-aware system, each rack can be regarded as a cohesive unit for data transmission. Therefore, similar to distinct nodes in the homogeneous model, different racks in the rack-aware storage model can be considered homogeneous. In this paper, we focus on the so-called rack-aware system with locality [12 ###reference_b12###]. As presented in Fig. 
, suppose that each rack corresponds to a repair set with -locality, then the single failure in Rack can be recovered by downloading symbols from the internal surviving nodes. For the Rack where the number of failures exceeds the code locality, similar to previous discussion, helper racks transmit , symbols to Rack by their relayers respectively, and the remaining nodes in this rack transmit symbols to the new replacement nodes. The repair bandwidth only counts as the inter-rack transmission, which is equal to .\nSpecifically, we consider a locally repairable array code with -locality such that\nrepair sets forms a partition of .\nBy setting each repair set as a rack, we obtain a code with locality in each rack.\nFor locally repairable codes under the rack-aware storage model with locality, we have the following cut-set bound.\nLet be a locally repairable array code with -locality, where be the size of the local repair group. Denote by the number of failures in -th group. For any , and any subset with , the bandwidth satisfies"
22
+ },
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "III Repair Scheme for Rack-Aware Locally Repairable Codes",
27
+ "text": "In this section, inspired by [14 ###reference_b14###],\nwe employ good polynomials to construct locally repairable array codes under the rack-aware storage model, such that all nodes in each rack form a repair set. It is an extension of the well-known Tamo-Barg code [14 ###reference_b14###] to the array form. Combining with the technique of regenerating codes, we design a generic repair scheme to handle multiple-node failures of the proposed construction beyond the code locality.\nTo begin with, we recall some definitions and properties of good polynomials.\nA polynomial of degree is called a good polynomial if there exists a partition over of size with , such that remains a constant on each set . In other words, for any , where for any .\nLet be\npairwise coprime polynomials over of degree .\nFor any polynomials , there exists a unique polynomial of degree less than satisfying\nLet be a monic polynomial of degree and .\nSuppose that are distinct constants such that\nThen for any ,\nwhere and for"
28
+ },
29
+ {
30
+ "section_id": "3.1",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-A A Generic Construction",
33
+ "text": "Let be a good polynomial of degree . For any , the distinct elements\n over satisfy\n for , where for .\nConsider a vector , where\n and . Suppose that each component of satisfies\nwhere , and .\nDefine an array code\nwhere satisfies (3 ###reference_###) and\nThen, arrange these nodes corresponding to evaluation points into -th rack. We present it as an array\nwhich implies that each row\n\nare nodes in Rack and each node stores an -length vector.\nLet . For any , , ,\nif , then the code of (4 ###reference_###) is an locally repairable array code with -locality. Furthermore, the residue polynomials of\ncan be represented as\n\nFor any ,\nare codewords of an MDS array code, where .\nFor any , according to Lemma 2 ###reference_2###, we can claim that is of degree at most for , thus can be represented as\n\nFor any , , we have\n for and for .\nThus, the vector with \nforms a codeword of a RS with evaluation points , thereby each row in (5 ###reference_###) is a codeword of a MDS array code,\nwhich forms a repair set of locality according to Definition 1 ###reference_n1###.\nMoreover, by Lemma 2 ###reference_2###, for , we have\nwhere is a polynomial of degree less than .\nHence, \nis a codeword of an RS code for\nany , which implies that\n \nare codewords of an MDS array code. \u220e"
34
+ },
35
+ {
36
+ "section_id": "3.2",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-B Repair Mechanism",
39
+ "text": "In this subsection, we discuss the repair mechanism of the generic construction in Section III-A ###reference_###. As mentioned earlier, the array code exhibits -locality within each rack. This implies that if the number of failed nodes within racks is less than , the local repair group can successfully recover these failures solely from internal nodes. However, if the number of failures exceeds the locality, we employ the properties between the racks to handle the \u201cextra\u201d failures. We provide a detailed illustration of various erasure patterns and their corresponding repair mechanisms below.\nDenote by the set of racks contained failures, and let be the number of failed nodes in -th rack. Due to the locality, we classify the rack failure into two scenarios: one that can be recovered by the internal nodes within the same rack, and the other that requires the help of other surviving racks. Let be the number of racks containing failed nodes corresponding to these two scenarios respectively, i.e.,\nwhere , . Based on this, we have the following erasure patterns.\nCase 1: , : Since the nodes in each rack form a codeword of a RS code, for , the rack downloads all symbols from remaining nodes within itself to independently repair at most failures.\nCase 2: , : Let for . For , Rack computes coefficients of the residue polynomial . Then, by combining data from the surviving nodes in this rack, each component of with degree can be determined, thereby recovering all the failures.\nCase 3: : The hybrid erasure pattern can be decomposed into parallel repair processes corresponding to Case and .\nFor the racks contained the number of failures less than , i.e. Racks for , conduct the repair of Case . Otherwise, the repair is in accordance with Case .\nNotably, in Case , since the -locality of each rack, -th rack for suffices to compute its residue polynomial by connecting at least surviving nodes. 
These racks have no effect on the repair of racks containing more than failures. For the sake of simplicity, we omit the procedure of local repair and only discuss how to repair the racks that occur more than failures, i.e., the repair of Case .\nIt is clear that each component of the code in (4 ###reference_###) can be seen as a subcode of a rack-aware RS code given in [28 ###reference_b28###]. Motivated by this, we generalize its repair framework to adapt array codes that have locality in each rack.\nAssume that there are racks suffering failures indexed by the set .\nFor , is denoted as the number of failures in Rack .\nLet be the set of helper racks of size .\nThe following procedure shows the concrete repair of the rack-aware locally repairable code given in (4 ###reference_###).\nRepair procedure of codewords by (4 ###reference_###):\nFor , Rack computes from any storage nodes within the rack by Lagrange interpolation, thereby obtaining an array consisting of coefficients\nFor any ,\n\nforms a codeword of an MDS array code.\nLet and .\nDefine\nthen recover the symbols\nof the MDS array codeword .\nGiven and , let the coordinates of surviving nodes be\n. Then, for each and , by (8 ###reference_###), one can compute the polynomial\nwhose degree is at most , from the surviving nodes in rack . Then, the repair center can compute the remainder coefficients \nand thereby the entire polynomials\nwhich can figure out the failures\nAccording to the assumption of the rack-aware storage model, the repair bandwidth is only required to count the data transmission across the rack to repair the symbols of (8 ###reference_###) in Step 2, i.e., the amount of data transmitted from helper racks (repair sets) to the rack that contains the number of failures more than .\nFor a rack-aware locally repairable array code defined by (4 ###reference_###),\nassume that the racks \ncontain failures respectively. 
Setting , where and .\nFor , if the repair bandwidth \nis capable to recover the coefficients in (8 ###reference_###) from helper racks, the failures can be recovered with\nbandwidth\nAccording to Step of Repair procedure of codewords by (4 ###reference_###), the residue polynomial for can be computed from (8 ###reference_###) and the surviving nodes of Rack , which means that all of the nodes in failed racks can be determined. Since the assumption of the rack-aware model, only the cross-rack transmission is taken into account of the bandwidth. Thus the repair bandwidth is exactly \n\u220e"
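The intra-rack step above recovers an RS codeword (evaluations of a low-degree polynomial) by Lagrange interpolation from any sufficiently many surviving nodes. A minimal sketch over a toy prime field; the field size `P = 257`, the evaluation points, and the example polynomial are illustrative assumptions, not the paper's parameters:

```python
# Toy intra-rack repair: an RS codeword is (f(a_1), ..., f(a_n)) for a
# polynomial f of degree < k; any k surviving evaluations recover f, and
# hence any erased node, via Lagrange interpolation over GF(P).
P = 257  # illustrative prime field size

def lagrange_eval(points, x):
    """Interpolate the polynomial through `points` and evaluate it at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den, since P is prime
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def repair_node(surviving, erased_x, k):
    """Recover the erased evaluation from any k surviving (x, y) pairs."""
    return lagrange_eval(surviving[:k], erased_x)
```

For instance, with f(x) = 3 + 5x + 7x^2 (so k = 3), the node at x = 2 is recoverable from any three surviving evaluations.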
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "IV A Family of Bandwidth-Optimal Locally Repairable Array Codes",
45
+ "text": "In this section, we consider the scenario that a rack, or a single local repair set suffers failures exceeding its local repair capacity, i.e., . We present an explicit construction of locally repairable array codes with desirable repair properties based on the generic construction given in Section III-A ###reference_###.\nOur construction is partially inspired by the known MSR code proposed in literature [23 ###reference_b23###].\nThe resulting codes support the optimal repair of a local repair set (or rack) that contains failures .\nLet and , where is the number of helper racks. Suppose that and let be an element of multiplicative order . For an integer , denote as the -th coordinate of the -ary expansion of and\ndefine the set of evaluation points , where , for simplicity, we denote .\nLet and \nFor positive integer and , let\nFor and , let the polynomial\n.\nDefine an array code\nwhere\n\nThen, arrange the nodes corresponding to evaluation points into -th rack.\nThe repair procedure follows from Section III-B ###reference_###.\nSince Steps 1 and 3 are the same for different constructions, herein we only discuss Step 2 in detail.\nThe code given in Construction 1 ###reference_s1### is a rack-based locally repairable array code with optimal repair bandwidth for the repair of any number of failures in a single rack from any helper racks.\nClearly, is a good polynomial satisfying\n for , and . Denote . Since , by Theorem 2 ###reference_2###, is an locally repairable array code.\nLet the original codeword determined by \nwith for .\nThus, according to Theorem 2 ###reference_2###, for any , denote by the residual polynomials with ,\ni.e.,\n\nthen \nforms a codeword of an RS code with evaluation points for\nany .\nTherefore, one can deduce the following parity check equations:\nfor , and .\nConsider the repair of failures in -th rack. 
If , the failures can be recovered from the locality without counting into the repair bandwidth, thus we discuss the case that . By Step in Repair procedure of (4 ###reference_###), we have\n\nand our target is to repair coefficients\nwhere\nFor summing the equations (10 ###reference_###) on Then, for , we can obtain equations\nwhich define a GRS code of length and dimension , where\n denotes the -th coordinate of the -ary expansion of and . Hence, any helper racks suffice to determine the coefficients in (11 ###reference_###), thereby\nrepairing the residue polynomial with the help of the reminder surviving nodes in -th rack. As a consequence, the failures in rack can be recovered.\nFrom equations (12 ###reference_###), for each , one needs to download symbols from each helper rack. Thus, the total repair bandwidth is\nwhich meets the cut-set bound of Theorem 1 ###reference_1###.\n\u220e"
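Construction 1 indexes its evaluation points by the coordinates of a base-q expansion. A small helper illustrating that indexing; the radix `q` and digit count `m` are placeholder arguments, not the construction's actual parameters:

```python
def qary_expansion(a, q, m):
    """Return the m low-order digits (a_0, ..., a_{m-1}) of a in base q,
    so that a = sum_i a_i * q**i whenever a < q**m."""
    digits = []
    for _ in range(m):
        digits.append(a % q)
        a //= q
    return digits
```

For example, 11 in base 3 is 11 = 2 + 0·3 + 1·9, so `qary_expansion(11, 3, 3)` yields the coordinate list `[2, 0, 1]`.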
46
+ }
47
+ ],
48
+ "appendix": [],
49
+ "tables": {},
50
+ "image_paths": {},
51
+ "validation": true,
52
+ "references": [],
53
+ "url": "http://arxiv.org/html/2401.15308v1"
54
+ }
20240127/2401.15312v1.json ADDED
@@ -0,0 +1,192 @@
1
+ {
2
+ "title": "How We Refute Claims: Automatic Fact-Checking through Flaw Identification and Explanation",
3
+ "abstract": "Automated fact-checking is a crucial task in the governance of internet content.\nAlthough various studies utilize advanced models to tackle this issue, a significant gap persists in addressing complex real-world rumors and deceptive claims.\nTo address this challenge, this paper explores the novel task of flaw-oriented fact-checking, including aspect generation and flaw identification.\nWe also introduce RefuteClaim, a new framework designed specifically for this task.\nGiven the absence of an existing dataset, we present FlawCheck, a dataset created by extracting and transforming insights from expert reviews into relevant aspects and identified flaws. The experimental results underscore the efficacy of RefuteClaim, particularly in classifying and elucidating false claims.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction",
9
+ "text": "As the World Wide Web continues to expand, an overwhelming amount of information is flooding the internet.\nWith limited time available, individuals often struggle to comprehensively grasp content and may be adversely influenced by problematic claims.\nAccurately assessing the veracity of a claim demands considerable time and effort from fact-checking organizations and experts, who meticulously gather relevant evidence and scrutinize the facts and assumptions.\nHence, in the quest to efficiently identify a plethora of false statements on the internet, numerous studies have investigated automatic fact-checking.\nResearch on automatic fact-checking can be broadly categorized into veracity classification and justification generation, with a predominant focus on the former.\nAlthough veracity classification is essential for prompt assessment of claims, justification generation plays a more pivotal role in producing explainable results and facilitating comprehensive fact-checking.\nTo generate justifications in natural language, early studies (Shu et al., 2019 ###reference_b11###; Atanasova et al., 2020 ###reference_b2###) proposed directly extracting sentences from reliable sources such as news content.\nMore recent work (Chen et al., 2022 ###reference_b3###; Rani et al., 2023 ###reference_b10###; Pan et al., 2023 ###reference_b9###) explores the approach of question answering,\nwhich involves generating questions related to specific facts within the claims.\nBy answering these generated questions using supporting evidence, models predict and articulate the authenticity of the claims, fostering more robust fact-checking.\nDespite substantial advancements in the field, scant attention has been directed toward discerning the underlying causes that render a claim false.\nFor instance, in the question answering approach, the generated questions predominantly focus on straightforward facts such as who, when, where, and how.\nIn real-world scenarios, deceptive 
claims are often intricately crafted to mislead without falling into obvious pitfalls, strategically distorting a minor segment of the claim.\nThis is precisely why professional fact-checkers thoughtfully analyze claims from diverse perspectives, assessing not only the literal accuracy of the statement but also considering elements such as tone, context, and its consistency with established facts.\nThus, it becomes imperative to identify the specific flaws within a claim to comprehensively confirm and elucidate its veracity in an automatic fact-checking process.\nTo address the challenge at hand, we first employ aspect generation to determine the most crucial aspects associated with the claim, around which evaluation should be focused.\nBased on evidence, our model synthesizes explanations for these distinct aspects.\nThen, we carefully select seven critical flaws from an initial set of twenty-two false labels used by various fact-checking organizations to debunk claims.\nThese flaws are grouped into three distinct categories, each influencing the outcome of the fact-checking process in unique ways.\nThe first category includes three explicit flaws that can be identified by evaluating the claim\u2019s compatibility with the evidence: contradicting facts, exaggeration, and understatement.\nIn the second category, we introduce two more nuanced flaws: occasional faltering and insufficient support.\nFact-checkers working within this realm must employ critical reasoning to envisage potential scenarios where the claim may not hold true.\nA claim may seem convincing initially but fail to maintain its validity or soundness over time and across different scenarios.\nThe final category encapsulates the two most complex flaws: problematic assumptions and existence of alternative explanations.\nIdentifying these particular flaws is a more intricate process, as it requires fact-checkers to consider a wider context and often demands extensive background knowledge that may not be 
demonstrated in the evidence at hand.\nDetails will be elaborated in the following section.\nIn the absence of an existing dataset, we extend WatClaimCheck (Khan et al., 2022 ###reference_b8###), utilizing a large language model (LLM) to infer the aspects and flaws of claims based on review articles, thereby constructing the FlawCheck dataset (available at https://github.com/NYCU-NLP-Lab/FlawCheck.git ###reference_git###).\nSubsequently, we present RefuteClaim, a novel framework designed to guide the review process in creating comprehensive fact-checking articles that emulate the quality and depth of articles written by human experts.\nOur contributions are summarized as follows:\nWe introduce a novel flaw-checking task that entails the examination of the seven flaws, reflecting the complexities of real-world automatic fact-checking scenarios.\nWe present the FlawCheck dataset, which encompasses distinct aspects and explanations for the seven flaws associated with each claim.\nThis dataset encapsulates the expertise of human fact-checking professionals.\nWe propose RefuteClaim, a framework that integrates aspect generation and flaw identification into an automatic fact-checking pipeline.\nExperimental results show promising performance, both in classifying and elucidating false claims."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "2. Flaw-Oriented Fact-Checking",
15
+ "text": "In this study, we develop RefuteClaim, which incorporates aspect generation along with flaw identification and explanation for fact-checking.\nThe definitions are provided as follows."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "2.1. Aspect Generation",
21
+ "text": "Aspect generation aids in identifying and focusing on the key elements around which the evaluation of a claim revolves.\nThe goal is to determine the most crucial aspects associated with a claim to guide subsequent flaw identification and explanation.\nAspects represent the specific dimensions or attributes that are integral to evaluating the validity of a claim,\nfor example, a statement asserting that a particular politician engaged in corrupt practices during an election campaign.\nIn this context, aspects could include:\nLegal investigations: Evaluating whether there are ongoing or concluded legal investigations into the alleged corrupt practices.\nFinancial transactions: Scrutinizing financial transactions related to the campaign to identify any irregular activities.\nPolitical motivations: Investigating if there are political motivations behind the accusations, such as rivalry strategies.\nLegal precedents: Examining similar cases in the past to understand how they were adjudicated.\nThe relevant evidence associated with a claim may lead to various aspects.\nIn this work, our model explains up to four distinct aspects."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "2.2. Flaw Identification and Explanation",
27
+ "text": "The process of flaw identification and explanation involves examining statements or claims critically to identify inaccuracies or logical inconsistencies.\nHere we define and detail seven types of flaws that may be present in a claim:\nContradicting facts:\nWhen a claim is presented, it must align with established facts.\nA contradicting fact occurs when the claim directly opposes known and verified information.\nFor instance, the claim that \u201cthe Earth is flat\u201d contradicts the overwhelming scientific evidence that the Earth is an oblate spheroid.\nIt is crucial to cross-reference claims with reliable data sources to detect such flaws.\nExaggeration:\nExaggeration is a flaw where the truth is stretched beyond its actual proportions.\nThis can be done to make something appear more significant or severe than it really is.\nFor example, stating that \u201ceveryone hates the new policy\u201d is likely an exaggeration, as it is improbable that every single person has a negative view.\nUnderstatement:\nAn understatement minimizes the significance of something in a way that misrepresents the truth and downplays important facts.\nSaying \u201cclimate change is a minor issue\u201d is an understatement because it fails to convey the widespread consensus on the seriousness of climate change impacts.\nOccasional faltering:\nA claim falters at times when it is presented as universally true, but cannot be consistently sustained.\nFor example, the claim that \u201celectric cars are always better for the environment\u201d may falter in regions where electricity is primarily generated from coal, potentially making the environmental benefits less clear-cut.\nThis inconsistency reveals that the claim does not account for specific conditions where it may not hold true.\nInsufficient support:\nClaims should be backed up by evidence.\nSupport is insufficient when assertions are made without the necessary substantiation.\nFor instance, claiming that \u201ca particular 
diet causes weight loss in all individuals\u201d without citing scientific studies or statistical data is an unsupported statement.\nProblematic assumptions:\nThis type of flaw arises when a claim is based on assumptions that are not validated or are questionable.\nProblematic assumptions can lead to incorrect conclusions.\nAn example would be assuming that \u201cincreased internet usage directly causes poor social skills,\u201d which ignores potential factors like the type of internet usage or individual differences.\nExistence of alternative explanations:\nEven if a claim seems plausible, other explanations may account for the observed facts.\nA flaw exists when a claim does not consider or rule out such alternative explanations.\nFor example, the conclusion that \u201crising smartphone sales are due solely to improved technology\u201d ignores other possible factors such as marketing strategies or changes in consumer behavior.\nNote that a single claim may incorporate several distinct flaws, and the composition of aspects is indeed diverse."
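The seven flaws and their three-way grouping can be captured as a simple lookup structure; the category keys (`explicit`, `nuanced`, `complex`) are shorthand labels of ours for the three categories described in Section 1, not terminology from the paper:

```python
# Seven flaw types, grouped into the three categories described in the text.
FLAW_CATEGORIES = {
    "explicit": [  # identifiable by checking the claim against evidence
        "contradicting facts",
        "exaggeration",
        "understatement",
    ],
    "nuanced": [  # require reasoning about scenarios where the claim fails
        "occasional faltering",
        "insufficient support",
    ],
    "complex": [  # require wider context and background knowledge
        "problematic assumptions",
        "existence of alternative explanations",
    ],
}
ALL_FLAWS = [flaw for flaws in FLAW_CATEGORIES.values() for flaw in flaws]
```

A single claim may carry several of these flaws at once, so a checker over this taxonomy should return a subset of `ALL_FLAWS` rather than a single label.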
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "3. Dataset Construction",
33
+ "text": "Due to the demand for premise articles and complete review articles written by human experts, we selected WatClaimCheck (Khan et al., 2022 ###reference_b8###) as our data source, given its ample and varied collection from eight fact-checking websites.\nIn this study, we extend 33,721 claims and their metadata in WatClaimCheck to facilitate the proposed flaw-oriented fact-checking task, leading to the creation of FlawCheck.\nSince the original content in WatClaimCheck includes a significant amount of irrelevant web crawl data, we collected the web data again and cleaned it to ensure relatively clean review articles for the evaluation of justification generation.\nThen, we harness the capabilities of GPT-3.5-turbo to distill expert opinions from review articles and transform them into the various aspects and identify flaws.\nGiven an input claim and a review article , we initially generate four silver ground-truth aspects , where .\nThe aspects may represent coarse-grained facts referenced by the claim.\nFollowing the same process, taking and , we utilize GPT-3.5-turbo to transform the human expert argumentation to elaborate the presence of flaws .\nFurthermore, to facilitate the evaluation of veracity classification, we reassigned the labels in WatClaimCheck due to inaccuracies in their arrangement, for instance, classifying the label \u201cPants on Fire\u201d to \u201cPartially True/False.\u201d\nWe define four labels\u2014Incorrect, Partly false, Unproven, and Correct\u2014based on the original ratings obtained from each fact-checking website.\nFlawCheck\u2019s label distribution in the training and testing sets is as follows: \u201cTrue\u201d with 5,272 and 657, \u201cUnproven\u201d with 805 and 112, \u201cPartly false\u201d with 3,429 and 451, and \u201cFalse\u201d with 17,470 and 2,153 instances, respectively.\nThis imbalanced distribution of labels is due primarily to the nature of fact-checking websites."
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "4. Methodology",
39
+ "text": "In this section, we present the four integral components of the proposed RefuteClaim framework:\nevidence retriever, aspect generator, flaw checker, and justification generator."
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "5. Experiments",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "5.1. Baseline Models",
51
+ "text": "Simple Baseline:\nThe simple baseline represents the basic utilization of LLMs, a setting commonly adopted by the majority of related work, which solely considers and .\nThe model\ndoes not consider aspects or flaw explanations.\nWith Aspects:\nAnother baseline model takes into account to follow specified directions during justification generation.\nIn the absence of flaw explanations, we evaluate the efficacy of aspect generation.\nThis baseline model utilizes , , and to directly generate justifications."
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "5.2. Experimental Setup",
57
+ "text": "In our experiments we utilized Vicuna-7b-v1.5 (Zheng et al., 2023 ###reference_b12###) as the LLM.\nDuring LoRA fine-tuning, we set the rank to 8.\nTo evaluate the justification results, we adopted ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore as evaluation metrics.\nGiven the absence of an existing metric to assess critical qualities such as correctness and completeness in fact-checking, we leveraged Google Gemini Pro222We chose Gemini Pro for its free access, enabling thorough result evaluation. to score the generated justifications against the ground truth.\nGemini Pro assigns scores on a scale of 0 to 1, where a score of 0 indicates a lack of quality (e.g., correctness), and a score of 1 denotes its full embodiment.\nTo assess whether the generated justifications contribute to fact-checking, we also evaluated the results of veracity classification.\nHu et al. (2023 ###reference_b5###) demonstrate the potential in utilizing LLMs to generate fact-checking descriptions.\nNevertheless, such autoregressive models may demonstrate sub-optimal performance in veracity classification.\nIn addition, to produce strictly deterministic output, we trained a RoBERTa-large model (Conneau et al., 2020 ###reference_b4###) as our veracity classifier.\nWe utilized the review articles to train and employed input justifications generated by different models for veracity classification.\nDeveloping a proper classifier to connect with different justification generators is left as future work."
58
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "5.3. Experimental Results",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "5.3.1",
67
+ "parent_section_id": "5.3",
68
+ "section_name": "5.3.1. Justification Generation",
69
+ "text": "Table 1 ###reference_### presents the results using ROUGE scores and BERTScore as metrics.\nAs mentioned in Section 1 ###reference_###, the seven flaws are grouped into three categories.\nThe RefuteClaim variants 3F, 5F, and 7F denote the first category (three flaws), the first two categories (five flaws), and all three categories (seven flaws), respectively.\nThe results show that the RefuteClaim models achieve promising performance in ROUGE-1 and ROUGE-L for the \u201cFalse\u201d, \u201cPartly false\u201d, and \u201cTrue\u201d claims, indicating its effectiveness in generating justifications across different veracity levels.\nHowever, our models struggle with generating justifications for \u201cUnproven\u201d claims.\nMeanwhile, RefuteClaim models under different settings exhibit weaker BERTScore performance,\nperhaps because we defined seven flaw types, and some flaw elucidations generated by GPT-3.5-turbo require inference.\nConsequently, justifications generated based on the output from the flaw checker trained by silver labels could significantly diverge from the ground truth, i.e., review articles.\nNevertheless, comparison of the baseline models shows that integrating the aspect improves justification results in most cases due to the model\u2019s increased focus on several crucial points.\nWe evaluate the generated justification quality in Table 2 ###reference_###.\nRefuteClaim with seven flaws (RefuteClaim-7F) outperforms other methods, except for unproven claims.\nThe RefuteClaim results with three, five, and seven flaws suggest that a holistic consideration of all flaws, particularly the inclusion of \u201cproblematic assumptions\u201d and \u201cexistence of alternative explanations\u201d, benefits justification generation.\nFor unproven claims, the model must focus on more trivial details or find insufficiency in current evidence, whereas our defined flaws focus more on detecting significant errors in claims and introducing more inferences 
made by GPT-3.5-turbo based on reviews.\nIncorporating only aspects is a more fitting approach for unproven claims, as it faithfully represents most content written by experts."
70
+ },
71
+ {
72
+ "section_id": "5.3.2",
73
+ "parent_section_id": "5.3",
74
+ "section_name": "5.3.2. Veracity Classification",
75
+ "text": "Table 3 ###reference_### presents the results of veracity classification.\n\u201cGolden review\u201d denotes direct utilization of expert-written review articles to train the veracity classifier: this is challenging.\nAs shown in Table 3 ###reference_###, RefuteClaim-7F exhibits the highest macro F1 score, particularly excelling in rating \u201cFalse\u201d and \u201cPartly false\u201d claims compared to other methods.\nNevertheless, RefuteClaim-5F achieves superior performance in identifying unproven claims.\nThis aligns intuitively with its incorporation of relevant flaws, i.e., occasional faltering and insufficient support.\nMoreover, the RefuteClaim models exhibit diminished accuracy scores when evaluating claims as \u201cTrue\u201d in comparison to the baseline models.\nThis can be attributed to the nature of RefuteClaim models, which focus on highlighting flaws.\nSuch an emphasis increases the likelihood of employing negative statements, potentially confusing the classifier.\nFurthermore, our classifier exhibits notably poor performance in rating claims as \u201cPartly false\u201d and \u201cUnproven\u201d.\nEven when utilizing the golden review, accuracy does not improve.\nUpon closer examination of misclassified samples, we find that the classifier struggles to discern between \u201cPartly false\u201d and \u201cFalse\u201d, as well as between \u201cUnproven\u201d and \u201cPartly false\u201d.\nMere incorporation of the aspect or three-flaw modules is not enough to address this challenge.\nHowever, we believe the flaw identification and generation findings are valuable for the community."
76
+ },
77
+ {
78
+ "section_id": "6",
79
+ "parent_section_id": null,
80
+ "section_name": "6. Conclusion and Future Work",
81
+ "text": "This paper explores the process of forming a flaw-checking perspective to generate justifications in an effort to emulate the quality of fact-checking conducted by human professionals.\nWe introduce the novel task of flaw-oriented fact-checking and present FlawCheck, a dataset encompassing the critical aspects identified by experts and seven pivotal flaws that demand evaluation.\nA pilot framework, RefuteClaim, is proposed.\nThe experimental results highlight the effectiveness of RefuteClaim in elucidating and classifying false claims.\nAs a pioneering work in studying flaw-oriented fact-checking, further investigation is required to explore the optimal utilization of aspects and flaws."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S5.T1.1.1.1.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.2.1\" style=\"font-size:70%;\">ROUGE-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S5.T1.1.1.1.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.3.1\" style=\"font-size:70%;\">ROUGE-2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S5.T1.1.1.1.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.4.1\" style=\"font-size:70%;\">ROUGE-L</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S5.T1.1.1.1.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.5.1\" style=\"font-size:70%;\">BERTScore</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T1.1.2.2.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.1.1\" style=\"font-size:70%;\">Models</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.2.1\" style=\"font-size:70%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.3.1\" 
style=\"font-size:70%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.4.1\" style=\"font-size:70%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.2.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.5.1\" style=\"font-size:70%;\">True</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.6\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.6.1\" style=\"font-size:70%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.7\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.7.1\" style=\"font-size:70%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.8\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.8.1\" style=\"font-size:70%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.2.9\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.9.1\" style=\"font-size:70%;\">True</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.10\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.10.1\" style=\"font-size:70%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.11\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.11.1\" style=\"font-size:70%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.12\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.12.1\" 
style=\"font-size:70%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.2.13\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.13.1\" style=\"font-size:70%;\">True</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.14\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.14.1\" style=\"font-size:70%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.15\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.15.1\" style=\"font-size:70%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.16\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.16.1\" style=\"font-size:70%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.17\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.2.2.17.1\" style=\"font-size:70%;\">True</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.1.1\" style=\"font-size:70%;\">Baseline</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.2.1\" style=\"font-size:70%;\">0.3151</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.3.1\" style=\"font-size:70%;\">0.2709</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.4\" 
style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.3.4.1\" style=\"font-size:70%;\">0.3107</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.5.1\" style=\"font-size:70%;\">0.3091</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.6\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.6.1\" style=\"font-size:70%;\">0.1087</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.7\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.7.1\" style=\"font-size:70%;\">0.0887</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.8\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.3.8.1\" style=\"font-size:70%;\">0.1089</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.9\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.3.9.1\" style=\"font-size:70%;\">0.1117</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.10\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.10.1\" style=\"font-size:70%;\">0.1644</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.11\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.11.1\" style=\"font-size:70%;\">0.1355</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.12\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.3.12.1\" style=\"font-size:70%;\">0.1697</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S5.T1.1.3.3.13\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.13.1\" style=\"font-size:70%;\">0.1629</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.14\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.14.1\" style=\"font-size:70%;\">0.8236</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.15\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.15.1\" style=\"font-size:70%;\">0.8212</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.16\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.3.16.1\" style=\"font-size:70%;\">0.8266</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.17\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.17.1\" style=\"font-size:70%;\">0.8269</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T1.1.4.4.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.1.1\" style=\"font-size:70%;\">\u00a0\u00a0 w/ aspects</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.2.1\" style=\"font-size:70%;\">0.3176</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.3.1\" style=\"font-size:70%;\">0.2721</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.4.1\" style=\"font-size:70%;\">0.2935</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S5.T1.1.4.4.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.5.1\" style=\"font-size:70%;\">0.3020</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.6\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.4.6.1\" style=\"font-size:70%;\">0.1128</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.7\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.7.1\" style=\"font-size:70%;\">0.0932</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.8\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.8.1\" style=\"font-size:70%;\">0.0993</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.4.4.9\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.9.1\" style=\"font-size:70%;\">0.1116</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.10\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.10.1\" style=\"font-size:70%;\">0.1711</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.11\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.11.1\" style=\"font-size:70%;\">0.1392</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.12\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.12.1\" style=\"font-size:70%;\">0.1608</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.4.4.13\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.13.1\" style=\"font-size:70%;\">0.1623</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.14\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S5.T1.1.4.4.14.1\" style=\"font-size:70%;\">0.8268</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.15\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.4.15.1\" style=\"font-size:70%;\">0.8246</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.16\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.4.4.16.1\" style=\"font-size:70%;\">0.8264</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.17\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.4.17.1\" style=\"font-size:70%;\">0.8274</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T1.1.5.5.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.1.1\" style=\"font-size:70%;\">RefuteClaim-3F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.2.1\" style=\"font-size:70%;\">0.3119</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.3.1\" style=\"font-size:70%;\">0.2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.4.1\" style=\"font-size:70%;\">0.2836</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.5.5.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.5.1\" style=\"font-size:70%;\">0.3010</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.6\" 
style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.6.1\" style=\"font-size:70%;\">0.1033</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.7\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.7.1\" style=\"font-size:70%;\">0.0828</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.8\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.8.1\" style=\"font-size:70%;\">0.0873</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.5.5.9\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.9.1\" style=\"font-size:70%;\">0.1060</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.10\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.10.1\" style=\"font-size:70%;\">0.1683</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.11\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.11.1\" style=\"font-size:70%;\">0.1397</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.12\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.12.1\" style=\"font-size:70%;\">0.1577</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.5.5.13\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.13.1\" style=\"font-size:70%;\">0.1643</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.14\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.14.1\" style=\"font-size:70%;\">0.8215</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.15\" 
style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.15.1\" style=\"font-size:70%;\">0.8175</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.16\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.16.1\" style=\"font-size:70%;\">0.8167</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.5.5.17\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.5.5.17.1\" style=\"font-size:70%;\">0.8217</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T1.1.6.6.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.1.1\" style=\"font-size:70%;\">RefuteClaim-5F</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.2.1\" style=\"font-size:70%;\">0.3235</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.6.6.3.1\" style=\"font-size:70%;\">0.2828</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.4.1\" style=\"font-size:70%;\">0.2994</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.6.6.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.5.1\" style=\"font-size:70%;\">0.3055</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.6\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.6.1\" style=\"font-size:70%;\">0.1122</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.7\" 
style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.6.6.7.1\" style=\"font-size:70%;\">0.0948</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.8\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.8.1\" style=\"font-size:70%;\">0.1004</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.6.6.9\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.9.1\" style=\"font-size:70%;\">0.1046</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.10\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.10.1\" style=\"font-size:70%;\">0.1726</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.11\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.11.1\" style=\"font-size:70%;\">0.1432</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.12\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.12.1\" style=\"font-size:70%;\">0.1641</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.6.6.13\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.13.1\" style=\"font-size:70%;\">0.1605</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.14\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.14.1\" style=\"font-size:70%;\">0.8261</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.15\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.15.1\" style=\"font-size:70%;\">0.8214</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.16\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.16.1\" 
style=\"font-size:70%;\">0.8224</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.17\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.6.6.17.1\" style=\"font-size:70%;\">0.8243</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T1.1.7.7.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.1.1\" style=\"font-size:70%;\">RefuteClaim-7F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.7.2.1\" style=\"font-size:70%;\">0.3266</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.3.1\" style=\"font-size:70%;\">0.2788</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.4.1\" style=\"font-size:70%;\">0.2838</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T1.1.7.7.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.7.5.1\" style=\"font-size:70%;\">0.3106</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.6\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.6.1\" style=\"font-size:70%;\">0.1109</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.7\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.7.1\" style=\"font-size:70%;\">0.0868</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.8\" 
style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.8.1\" style=\"font-size:70%;\">0.0832</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T1.1.7.7.9\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.9.1\" style=\"font-size:70%;\">0.1091</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.10\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.7.10.1\" style=\"font-size:70%;\">0.1739</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.11\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.7.11.1\" style=\"font-size:70%;\">0.1433</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.12\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.12.1\" style=\"font-size:70%;\">0.1493</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T1.1.7.7.13\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.7.13.1\" style=\"font-size:70%;\">0.1682</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.14\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.14.1\" style=\"font-size:70%;\">0.8245</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.15\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.15.1\" style=\"font-size:70%;\">0.8183</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.7.7.16\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.16.1\" style=\"font-size:70%;\">0.8179</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb\" id=\"S5.T1.1.7.7.17\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.17.1\" style=\"font-size:70%;\">0.8243</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Justification generation</figcaption>\n</figure>",
+ "capture": "Table 1. Justification generation"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.1.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S5.T2.1.1.1.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.1.1.2.1\" style=\"font-size:70%;\">Correctness</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S5.T2.1.1.1.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.1.1.3.1\" style=\"font-size:70%;\">Completeness</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.2.2.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.1.1\" style=\"font-size:70%;\">Models</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.2.1\" style=\"font-size:70%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.3.1\" style=\"font-size:70%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.4\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.4.1\" style=\"font-size:70%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.2.5\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.5.1\" 
style=\"font-size:70%;\">True</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.6\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.6.1\" style=\"font-size:70%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.7\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.7.1\" style=\"font-size:70%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.8\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.8.1\" style=\"font-size:70%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.2.2.9\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.2.2.9.1\" style=\"font-size:70%;\">True</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.1.3.3.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.1.1\" style=\"font-size:70%;\">Baseline</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.2.1\" style=\"font-size:70%;\">0.4770</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.3.1\" style=\"font-size:70%;\">0.4700</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.4\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.4.1\" style=\"font-size:70%;\">0.4311</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.3.3.5\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span 
class=\"ltx_text\" id=\"S5.T2.1.3.3.5.1\" style=\"font-size:70%;\">0.4675</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.6\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.6.1\" style=\"font-size:70%;\">0.5165</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.7\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.7.1\" style=\"font-size:70%;\">0.4790</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.8\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.8.1\" style=\"font-size:70%;\">0.4644</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.3.3.9\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.3.3.9.1\" style=\"font-size:70%;\">0.5031</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.4.4.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.4.4.1.1\" style=\"font-size:70%;\">\u00a0\u00a0 w/ aspects</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.4.4.2.1\" style=\"font-size:70%;\">0.4400</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.4.4.3.1\" style=\"font-size:70%;\">0.4780</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.4\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.4.1\" style=\"font-size:70%;\">0.4580</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.4.4.5\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span 
class=\"ltx_text\" id=\"S5.T2.1.4.4.5.1\" style=\"font-size:70%;\">0.5069</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.6\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.4.4.6.1\" style=\"font-size:70%;\">0.4825</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.7\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.4.4.7.1\" style=\"font-size:70%;\">0.4827</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.8\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.4.8.1\" style=\"font-size:70%;\">0.4750</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.4.9\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.4.4.9.1\" style=\"font-size:70%;\">0.5394</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.1.5.5.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.1.1\" style=\"font-size:70%;\">RefuteClaim-3F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.2.1\" style=\"font-size:70%;\">0.4540</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.3.1\" style=\"font-size:70%;\">0.4288</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.4\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.4.1\" style=\"font-size:70%;\">0.3778</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.5.5.5\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span 
class=\"ltx_text\" id=\"S5.T2.1.5.5.5.1\" style=\"font-size:70%;\">0.4825</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.6\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.6.1\" style=\"font-size:70%;\">0.4870</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.7\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.7.1\" style=\"font-size:70%;\">0.4475</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.8\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.8.1\" style=\"font-size:70%;\">0.3889</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.5.5.9\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.5.5.9.1\" style=\"font-size:70%;\">0.5300</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.6.6.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.1.1\" style=\"font-size:70%;\">RefuteClaim-5F</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.2.1\" style=\"font-size:70%;\">0.4970</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.3.1\" style=\"font-size:70%;\">0.4970</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.4\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.4.1\" style=\"font-size:70%;\">0.4000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.6.6.5\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" 
id=\"S5.T2.1.6.6.5.1\" style=\"font-size:70%;\">0.5075</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.6\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.6.1\" style=\"font-size:70%;\">0.5170</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.7\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.7.1\" style=\"font-size:70%;\">0.5140</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.8\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.8.1\" style=\"font-size:70%;\">0.4000</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.6.9\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.6.6.9.1\" style=\"font-size:70%;\">0.5275</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T2.1.7.7.1\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.7.7.1.1\" style=\"font-size:70%;\">RefuteClaim-7F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.2\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.7.2.1\" style=\"font-size:70%;\">0.5088</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.3\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.7.3.1\" style=\"font-size:70%;\">0.5140</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.4\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.7.7.4.1\" style=\"font-size:70%;\">0.4356</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T2.1.7.7.5\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.7.5.1\" style=\"font-size:70%;\">0.5280</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.6\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.7.6.1\" style=\"font-size:70%;\">0.5381</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.7\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.7.7.1\" style=\"font-size:70%;\">0.5186</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.8\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text\" id=\"S5.T2.1.7.7.8.1\" style=\"font-size:70%;\">0.4611</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.7.7.9\" style=\"padding-left:1.5pt;padding-right:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.7.9.1\" style=\"font-size:70%;\">0.5450</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>Justification evaluation using Gemini Pro</figcaption>\n</figure>",
+ "capture": "Table 2. Justification evaluation using Gemini Pro"
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T3.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S5.T3.1.1.1.2\"><span class=\"ltx_text\" id=\"S5.T3.1.1.1.2.1\" style=\"font-size:80%;\">Accuracy</span></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T3.1.1.1.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.1.2.2.1\"><span class=\"ltx_text\" id=\"S5.T3.1.2.2.1.1\" style=\"font-size:80%;\">Data source</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.2.2\"><span class=\"ltx_text\" id=\"S5.T3.1.2.2.2.1\" style=\"font-size:80%;\">False</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.2.3\"><span class=\"ltx_text\" id=\"S5.T3.1.2.2.3.1\" style=\"font-size:80%;\">Partly false</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.2.4\"><span class=\"ltx_text\" id=\"S5.T3.1.2.2.4.1\" style=\"font-size:80%;\">Unproven</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.2.5\"><span class=\"ltx_text\" id=\"S5.T3.1.2.2.5.1\" style=\"font-size:80%;\">True</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.2.2.6\"><span class=\"ltx_text\" id=\"S5.T3.1.2.2.6.1\" style=\"font-size:80%;\">Macro F1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.3.3.1\"><span class=\"ltx_text\" id=\"S5.T3.1.3.3.1.1\" style=\"font-size:80%;\">Golden review</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.3.2\"><span class=\"ltx_text\" id=\"S5.T3.1.3.3.2.1\" style=\"font-size:80%;\">0.7902</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.3.3\"><span class=\"ltx_text\" id=\"S5.T3.1.3.3.3.1\" style=\"font-size:80%;\">0.2111</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.3.4\"><span class=\"ltx_text\" id=\"S5.T3.1.3.3.4.1\" style=\"font-size:80%;\">0.3182</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.3.5\"><span class=\"ltx_text\" id=\"S5.T3.1.3.3.5.1\" style=\"font-size:80%;\">0.7431</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.3.6\"><span class=\"ltx_text\" id=\"S5.T3.1.3.3.6.1\" style=\"font-size:80%;\">0.4993</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.4.4.1\"><span class=\"ltx_text\" id=\"S5.T3.1.4.4.1.1\" style=\"font-size:80%;\">Baseline</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.4.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.4.4.2.1\" style=\"font-size:80%;\">0.7439</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.4.4.3\"><span class=\"ltx_text\" id=\"S5.T3.1.4.4.3.1\" style=\"font-size:80%;\">0.0889</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.4.4.4\"><span class=\"ltx_text\" id=\"S5.T3.1.4.4.4.1\" style=\"font-size:80%;\">0.0682</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.4.4.5\"><span class=\"ltx_text\" id=\"S5.T3.1.4.4.5.1\" style=\"font-size:80%;\">0.6166</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.4.4.6\"><span class=\"ltx_text\" id=\"S5.T3.1.4.4.6.1\" style=\"font-size:80%;\">0.3621</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.1.5.5.1\"><span class=\"ltx_text\" id=\"S5.T3.1.5.5.1.1\" style=\"font-size:80%;\">\u00a0\u00a0 w/ aspects</span></th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.1.5.5.2\"><span class=\"ltx_text\" id=\"S5.T3.1.5.5.2.1\" style=\"font-size:80%;\">0.6780</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.5.3\"><span class=\"ltx_text\" id=\"S5.T3.1.5.5.3.1\" style=\"font-size:80%;\">0.0722</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.5.4\"><span class=\"ltx_text\" id=\"S5.T3.1.5.5.4.1\" style=\"font-size:80%;\">0.0682</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.5.5.5.1\" style=\"font-size:80%;\">0.6957</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.5.6\"><span class=\"ltx_text\" id=\"S5.T3.1.5.5.6.1\" style=\"font-size:80%;\">0.3492</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.6.6.1\"><span class=\"ltx_text\" id=\"S5.T3.1.6.6.1.1\" style=\"font-size:80%;\">RefuteClaim-3F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.6.2\"><span class=\"ltx_text\" id=\"S5.T3.1.6.6.2.1\" style=\"font-size:80%;\">0.7122</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.6.3\"><span class=\"ltx_text\" id=\"S5.T3.1.6.6.3.1\" style=\"font-size:80%;\">0.0778</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.6.4\"><span class=\"ltx_text\" id=\"S5.T3.1.6.6.4.1\" style=\"font-size:80%;\">0.0227</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.6.5\"><span class=\"ltx_text\" id=\"S5.T3.1.6.6.5.1\" style=\"font-size:80%;\">0.5771</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.6.6.6\"><span class=\"ltx_text\" id=\"S5.T3.1.6.6.6.1\" style=\"font-size:80%;\">0.3255</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T3.1.7.7.1\"><span class=\"ltx_text\" id=\"S5.T3.1.7.7.1.1\" 
style=\"font-size:80%;\">RefuteClaim-5F</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.7.2\"><span class=\"ltx_text\" id=\"S5.T3.1.7.7.2.1\" style=\"font-size:80%;\">0.7183</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.7.3\"><span class=\"ltx_text\" id=\"S5.T3.1.7.7.3.1\" style=\"font-size:80%;\">0.1278</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.7.7.4.1\" style=\"font-size:80%;\">0.1136</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.7.5\"><span class=\"ltx_text\" id=\"S5.T3.1.7.7.5.1\" style=\"font-size:80%;\">0.6087</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.7.7.6.1\" style=\"font-size:80%;\">0.3763</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T3.1.8.8.1\"><span class=\"ltx_text\" id=\"S5.T3.1.8.8.1.1\" style=\"font-size:80%;\">RefuteClaim-7F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.8.8.2.1\" style=\"font-size:80%;\">0.7439</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.8.8.3.1\" style=\"font-size:80%;\">0.1611</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.8.8.4\"><span class=\"ltx_text\" id=\"S5.T3.1.8.8.4.1\" style=\"font-size:80%;\">0.0455</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.8.8.5\"><span class=\"ltx_text\" id=\"S5.T3.1.8.8.5.1\" style=\"font-size:80%;\">0.5850</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.8.8.6\"><span class=\"ltx_text\" id=\"S5.T3.1.8.8.6.1\" style=\"font-size:80%;\">0.3733</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\" 
style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 3. </span>Veracity classification</figcaption>\n</figure>",
96
+ "capture": "Table 3. Veracity classification"
97
+ }
98
+ },
99
+ "image_paths": {},
100
+ "validation": true,
101
+ "references": [
102
+ {
103
+ "1": {
104
+ "title": "Generating Fact Checking Explanations. In ACL 2020.",
105
+ "author": "Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, and Isabelle Augenstein. 2020.",
106
+ "venue": "",
107
+ "url": null
108
+ }
109
+ },
110
+ {
111
+ "2": {
112
+ "title": "Generating Literal and Implied Subquestions to Fact-check Complex Claims. In EMNLP 2022.",
113
+ "author": "Jifan Chen, Aniruddh Sriram, Eunsol Choi, and Greg Durrett. 2022.",
114
+ "venue": "",
115
+ "url": null
116
+ }
117
+ },
118
+ {
119
+ "3": {
120
+ "title": "Unsupervised Cross-lingual Representation Learning at Scale. In ACL 2020.",
121
+ "author": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, \u00c9douard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020.",
122
+ "venue": "",
123
+ "url": null
124
+ }
125
+ },
126
+ {
127
+ "4": {
128
+ "title": "Bad actor, good advisor: Exploring the role of large language models in fake news detection.",
129
+ "author": "Beizhe Hu, Qiang Sheng, Juan Cao, Yuhui Shi, Yang Li, Danding Wang, and Peng Qi. 2023.",
130
+ "venue": "arXiv preprint arXiv:2309.12247 (2023).",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "5": {
136
+ "title": "LoRA: Low-Rank Adaptation of Large Language Models. In ICLR.",
137
+ "author": "Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021.",
138
+ "venue": "",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "6": {
144
+ "title": "Dense Passage Retrieval for Open-Domain Question Answering. In EMNLP 2020.",
145
+ "author": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020.",
146
+ "venue": "",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "7": {
152
+ "title": "WatClaimCheck: A new dataset for claim entailment and inference. In ACL 2022.",
153
+ "author": "Kashif Khan, Ruizhe Wang, and Pascal Poupart. 2022.",
154
+ "venue": "",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "8": {
160
+ "title": "Fact-Checking Complex Claims with Program-Guided Reasoning.",
161
+ "author": "Liangming Pan, Xiaobao Wu, Xinyuan Lu, Anh Tuan Luu, William Yang Wang, Min-Yen Kan, and Preslav Nakov. 2023.",
162
+ "venue": "arXiv preprint arXiv:2305.12744 (2023).",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "9": {
168
+ "title": "FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering.",
169
+ "author": "Anku Rani, SM Tonmoy, Dwip Dalal, Shreya Gautam, Megha Chakraborty, Aman Chadha, Amit Sheth, and Amitava Das. 2023.",
170
+ "venue": "arXiv preprint arXiv:2305.04329 (2023).",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "10": {
176
+ "title": "dEFEND: Explainable fake news detection. In SIGKDD 2019.",
177
+ "author": "Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019.",
178
+ "venue": "",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "11": {
184
+ "title": "Judging LLM-as-a-judge with MT-Bench and Chatbot Arena.",
185
+ "author": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023.",
186
+ "venue": "",
187
+ "url": null
188
+ }
189
+ }
190
+ ],
191
+ "url": "http://arxiv.org/html/2401.15312v1"
192
+ }
20240127/2401.15317v1.json ADDED
@@ -0,0 +1,117 @@
1
+ {
2
+ "title": "Floorplanning of VLSI by Mixed-Variable Optimization",
3
+ "abstract": "By formulating the floorplanning of VLSI as a mixed-variable optimization problem, this paper proposes to solve it by memetic algorithms, where the discrete orientation variables are addressed by the distribution evolutionary algorithm based on a population of probability model (DEA-PPM), and the continuous coordinate variables are optimized by the conjugate sub-gradient algorithm (CSA). Accordingly, the fixed-outline floorplanning algorithm based on CSA and DEA-PPM (FFA-CD) and the floorplanning algorithm with golden section strategy (FA-GSS) are proposed for the floorplanning problems with and without the fixed-outline constraint. Numerical experiments on GSRC test circuits show that the proposed algorithms are superior to some celebrated B*-tree based floorplanning algorithms, and are expected to be applied to large-scale floorplanning problems due to their low time complexity.\nKeywords: VLSI, floorplanning, distribution evolutionary algorithm, conjugate sub-gradient algorithm",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Floorplanning is a critical stage in the physical design of very large-scale integration (VLSI) circuits that determines the performance of VLSI chips to a large extent [1 ###reference_b1###]. It is a complex optimization problem with multiple objectives and constraints, which makes it challenging to develop high-performance algorithms for floorplanning of VLSI [2 ###reference_b2###].\nFloorplanning algorithms generally fall into two categories: the floorplanning algorithm based on combinatorial optimization model (FA-COM) and the floorplanning algorithm based on analytic optimization model (FA-AOM). Representing the relative positions of macros by combinatorial coding structures such as the B*-tree, the sequential pair, etc., one can formulate the floorplanning problem as a combinatorial optimization problem, which is then addressed by metaheuristics in the FA-COMs [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. The combinatorial codes representing relative positions of macros can be naturally decoded into compact floorplans complying with the non-overlapping constraints; however, the combinatorial explosion contributes to poor performance of FA-COMs on large-scale cases. 
Accordingly, the problem size could be reduced by clustering or partitioning strategies, which in turn makes it hard to converge to the global optimal results of the investigated large-scale floorplanning problems [7 ###reference_b7###, 8 ###reference_b8###].\nFA-AOMs address analytical floorplanning models by continuous optimization algorithms, which contributes to their lower time complexities on large-scale cases [9 ###reference_b9###, 10 ###reference_b10###].\nSince the optimization results of continuous optimization algorithms do not fulfill the non-overlapping constraints for most cases, a FA-AOM usually consists of the global floorplanning stage and the legalization stage, the first optimizing the overall evaluation index, and the second tuning the positions of macros to eliminate constraint violations of results. Li et al. [11 ###reference_b11###] proposed an analytic floorplanning algorithm for large-scale floorplanning cases, where the fixed-outline global floorplanning was implemented by optimizing the electrostatic field model of global placement. In the legalization stage, horizontal constraint graphs and vertical constraint graphs were constructed to eliminate overlap of floorplanning results. Huang et al. [12 ###reference_b12###] presented an improved electrostatics-based analytical method for fixed-outline floorplanning, which incorporates module rotation and sizing driven by wirelength.\nSince some of the evaluation indexes of global floorplanning are not smooth, additional smooth approximation to the optimization objective function could be incorporated to achieve fast convergence of gradient-based optimization algorithms. However, the approximation procedure not only introduces extra time complexity of the FA-AOM, but also leads to its local convergence to an optimal solution significantly different from that of the original non-smooth model. 
Accordingly, the conjugate sub-gradient algorithm [13 ###reference_b13###] is employed in this paper to deal with the continuous variables representing coordinates of modules. Meanwhile, we address the orientation of modules by discrete variables, and formulate the floorplanning problem as a mixed-variable optimization problem.\nThe rest of this paper is organized as follows. Section 2 ###reference_### introduces some preliminaries. Then, the proposed algorithms developed for floorplanning problems with and without fixed-outline constraints are presented in Sections 3 ###reference_### and 4 ###reference_###, respectively. Numerical experiments are performed in Section 5 ###reference_### to demonstrate the competitiveness of the proposed algorithms, and Section 6 ###reference_### concludes this paper."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Problem Statement",
21
+ "text": "Given a collection of rectangular modules and a set of edges (networks) , the VLSI floorplanning problem tries to minimize the total wirelength and the floorplan area by placing modules in appropriate positions. Let the center coordinates of module be , and its orientation be represented by . A floorplan of VLSI is represented by the combination of vectors , and , where , , .\nSubject to the constraint of placing non-overlapping modules within a fixed outline, the floorplanning problem is formulated as\nwhere is the total wirelength, is the sum of overlapping area, and is the sum of width beyond the fixed outline. By the Lagrange multiplier method, it can be transformed into an unconstrained optimization model\nwhere , , and are parameters to be confirmed. Here, the square root of is adopted to ensure that all indexes to be minimized are of the same dimension.\n: The total wirelength is here taken as the total sum of half-perimeter wirelength (HPWL)\n: The sum of overlapping area is computed by\nwhere and represent the overlapping lengths of modules and in the -axis and -axis directions, respectively. Denoting , we know\nwhere is confirmed by\nDenoting , we have\nwhere is confirmed by\n: For floorplanning problems with a fixed outline, the positions of modules must meet the following constraints:\nwhere and are the width and the height of the fixed outline, respectively. Let\nand are confirmed by (6 ###reference_###) and (8 ###reference_###), respectively.\nAccordingly, can be confirmed by\nwhich is smoothed by\nLet , we get for legalization of the global floorplanning result the optimization problem"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "The Conjugate Sub-gradient Algorithm for Optimization of the Coordinate",
27
+ "text": "Zhu et al. [13 ###reference_b13###] proposed solving the non-smooth continuous optimization model of the global placement by the conjugate sub-gradient algorithm (CSA). With an initial solution , the pseudo code of CSA is presented in Algorithm 1 ###reference_thm1###. Because the CSA is not necessarily a descent method, the step size has a significant influence on its convergence performance. The step size is determined by the norm of the conjugate directions together with the control parameter , which is updated as . As an initial study, we set in this paper. The termination-condition 1 is satisfied if is greater than a given budget or several consecutive iterations fail to get a better solution."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "The Distribution Evolutionary Algorithm for Optimization of the Orientation",
33
+ "text": "Besides the coordinate vectors and , the floorplan is also confirmed by the orientation vectors . The orientation of modules is confirmed by clockwise rotation, and we set if the rotation angle is , , . Then, optimization of the orientation vectors contributes to a combinatorial optimization problem.\nThe estimation of distribution algorithm (EDA) is a kind of metaheuristic that can address combinatorial optimization problems well, but its balance between global exploration and local exploitation is a challenging issue [14 ###reference_b14###]. Xu et al. [15 ###reference_b15###] proposed for the graph coloring problem a distribution evolutionary algorithm based on a population of probability model (DEA-PPM), where a novel probability model and the associated orthogonal search are introduced to achieve good convergence performance on large-scale combinatorial problems.\n\nThe core idea of DEA-PPM for floorplanning is to simulate the probability distribution of orientations by constructing a probability matrix\nwhere represents the probability that module satisfies\nThen, the random initialization of generates a distribution matrix\nThe implementation of DEA-PPM is based on the distribution population and the solution population , which are employed here for the probability distributions and instantiations of orientation, respectively. Global convergence of DEA-PPM is achieved by an orthogonal search on , and the local exploitation is implemented in both the distribution space and the solution space."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "The Fixed-outline Floorplanning Algorithm Based on CSA and DEA-PPM",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Framework",
45
+ "text": "In this paper, the fixed-outline floorplanning algorithm based on CSA and DEA-PPM (FFA-CD) is proposed to solve the problem of fixed-outline floorplanning, where the DEA-PPM is employed to optimize the orientations of the modules and the CSA is used to optimize the corresponding coordinates of the modules.\nThe framework of FFA-CD is presented in Algorithm 2 ###reference_thm2###. It starts with initialization of the distribution and solution populations and , where consists of orientation combinations of modules. Meanwhile, the corresponding populations and of module coordinates are initialized by Latin hypercube sampling [16 ###reference_b16###]. Combining the orientations and coordinates of modules, we get the best coordinate vectors and , as well as the corresponding orientation vector . Then, the while loop of DEA-PPM is implemented to update and , where the CSA is deployed in UpdateXY to get the best module coordinates."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Evolution of the Distribution Population",
51
+ "text": "In order to better explore the distribution space, DEA-PPM carries out orthogonal exploration for individuals in . Algorithm 3 ###reference_thm3### gives the flow of orthogonal exploration, which aims to change the worst individuals in by orthogonal transformations performed on columns of a distribution matrix. Here, is a random integer in and is a random integer in .\nIn Algorithm 4 ###reference_thm4###, the intermediate distribution population is further updated to get . Given , it is updated using and , two orientation combinations selected from and , respectively. Columns of are updated using either an exploitation strategy or a disturbance strategy presented as follows.\nTo update the column of , it is first renewed as\nwhere is the component of . Then, a local orthogonal transformation is performed as\nwhere , . is an orthogonal matrix given by\nIn order to prevent the distribution population from premature convergence, the disturbance strategy is performed as\nwhere ."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Optimization of the Floorplan with a Fixed Outline",
57
+ "text": "The floorplan is represented by the orientation vector and the coordinate vectors and . In FFA-CD, the evolution of orientation vectors is implemented by iteration of solution population , and the corresponding coordinate vectors are optimized by the function UpdateXY."
58
+ },
59
+ {
60
+ "section_id": "3.3.1",
61
+ "parent_section_id": "3.3",
62
+ "section_name": "3.3.1 Initialization of the module orientation",
63
+ "text": "According to the principle of DEA-PPM, the solution population is obtained by sampling the distribution population . To accelerate the convergence process, the sampling is performed with inheritance, as illustrated in Algorithm 5 ###reference_thm5###."
64
+ },
65
+ {
66
+ "section_id": "3.3.2",
67
+ "parent_section_id": "3.3",
68
+ "section_name": "3.3.2 Optimization of module position",
69
+ "text": "With the orientation of modules confirmed by the solution population, the position of the modules is optimized by Algorithm 6 ###reference_thm6###. For a combination of position vector , the global floorplanning is first implemented by optimizing ; then, the weights of the constraint items are increased to legalize the floorplan by lines 4-7, or the legalization process is implemented by lines 9-10. The legalization process based on constraint graphs [10 ###reference_b10###] is implemented by Graph(), which is presented in Algorithm 7 ###reference_thm7###. To prevent and from falling into inferior local solutions, the coordinates are reinitialized if no better solution is obtained for several consecutive iterations.\nThe legalization of Algorithm 7 ###reference_thm7### is implemented as follows. Let be the lower-left coordinate of block . is to the left of if it holds\nis below if\nDenote and as the left-module set and the lower-module set of module , respectively. Then, the - and -coordinates of module are updated by"
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "The Floorplanning Algorithm Based on the Golden Section Strategy",
75
+ "text": "When the analytical optimization method is applied to the floorplanning problem without a fixed outline, it is a challenging task to minimize the floorplan area. In this paper, we propose a floorplanning algorithm based on the golden section strategy (FA-GSS), where minimization of the floorplan area is achieved by consecutively narrowing the contour of the fixed outline.\nMinimization of the floorplan area is equivalent to minimizing the blank ratio\nwhere is the sum of areas of all modules. As presented in Algorithm 8 ###reference_thm8###, we use the golden section strategy to continuously reduce the area of the fixed contour. Given the initial white rates and , where the fixed-outline floorplanning is feasible for but infeasible for , we set\nIf a legal layout can be obtained for , then =; otherwise, we set =. Repeat the sectioning process until ."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Experimental Results and Analysis",
81
+ "text": "To verify the performance of the proposed algorithms, we conducted experiments on the well-known benchmark GSRC. For all test circuits, the I/O pads are fixed at the given coordinates, and the modules of all circuits are hard modules. All algorithms are implemented in C++, and run on Microsoft Windows 10 on a laptop equipped with an AMD Ryzen 7 5800H @ 3.2GHz and 16GB of system memory."
82
+ },
83
+ {
84
+ "section_id": "5.1",
85
+ "parent_section_id": "5",
86
+ "section_name": "Wirelength Optimization with Fixed-outline Constraints",
87
+ "text": "We first test the performance of FFA-CD on the fixed-outline cases. It is compared with the well-known open-source floorplanner Parquet-4.5 [17 ###reference_b17###], which represents the floorplan by the B*-tree and employs the simulated annealing algorithm to solve the combinatorial optimization model of floorplanning.\nAccording to the given aspect ratio , the width and height of the fixed contour are calculated as [18 ###reference_b18###]\nwhere is the summed area of all modules, and is the white rate defined in (20 ###reference_###).\nThe experiments set the white rate as , the aspect ratio R as 1, 1.5, 2, and the population size as 5. For each aspect ratio, every experiment was independently run 10 times, and the results are shown in Table 1 ###reference_###.\nNumerical results demonstrate that FFA-CD outperforms Parquet-4.5 on cases with more than 50 modules, but runs a bit slower on some of the small cases, which is attributed to the compact floorplans of Parquet-4.5. The combinatorial floorplan implemented by Parquet-4.5 could lead to smaller HPWL and shorter runtime, but its performance degrades significantly as the problem size increases.\nThe iteration mechanism based on CSA ensures that FFA-CD can explore the floorplan space more efficiently. At the same time, DEA-PPM is introduced to explore the rotation strategy, which increases the flexibility of the floorplan and greatly improves the success rate on small-scale problems.\nConsequently, the success rate of FFA-CD was better than or equal to that of Parquet-4.5 for all cases. Meanwhile, better results on wirelength and runtime are obtained for several different aspect ratios on the larger-scale cases (n50-n100)."
88
+ },
89
+ {
90
+ "section_id": "5.2",
91
+ "parent_section_id": "5",
92
+ "section_name": "Minimization of Wirelength and Area without Fixed-outline Constraints",
93
+ "text": "For floorplanning problems without fixed-outline constraints, FA-GSS is used to optimize the wirelength and area. The proposed FA-GSS is compared with Parquet-4.5 and the Hybrid Simulated Annealing Algorithm (HSA) [19 ###reference_b19###], where the population size is set as 5, and we get .\nDue to the different magnitudes of wirelength and area, the cost function to be minimized for the floorplanning problem without a fixed outline is taken as\nwhere and are the minimum values of and , respectively.\nThe results in Table 1 ###reference_### show that all examples obtain better wirelength and shorter runtime when the aspect ratio is 1. So, we take in FA-GSS for all test cases. For benchmarks in GSRC, the average , and runtime (CPU) of ten independent runs are collected in Table 2.\nThe experimental results show that FA-GSS outperforms both Parquet-4.5 and HSA except for the n30 case. Although FA-GSS runs a bit slower than Parquet-4.5 on the n30 case, FA-GSS has the smallest rate of increase in runtime as the module size increases. This means that FA-GSS is expected to achieve excellent results on larger circuits."
94
+ },
95
+ {
96
+ "section_id": "6",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusion",
99
+ "text": "In this paper, we formulate the floorplanning problem of VLSI as a mixed-variable optimization problem, where the discrete variables represent module orientations and the coordinates of modules are incorporated as continuous variables. Then, the DEA-PPM is introduced to get the module orientations, and the coordinate variables are optimized by the CSA. Experimental results show that the proposed FFA-CD and FA-GSS, respectively developed for floorplanning problems with and without fixed-outline constraints, can generally outperform the floorplanning algorithms designed based on the B*-tree and simulated annealing. Attributed to their low time complexity, the proposed algorithms are expected to address large-scale floorplanning problems effectively."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Performance comparison for the fixed-outline cases of GSRC test problems.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.1.1\">GSRC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.2.1\">R</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.1.1.1.3\">Parquet-4.5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.1.1.1.4\">FFA-CD</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.2.2.1\">SR(%)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.2.2.2\">HPWL</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.2.2.3\">CPU(s)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.2.2.4\">SR(%)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.2.2.5\">HPWL</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.2.2.6\">CPU(s)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.1.3.1.1.1\">n10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.T1.1.3.1.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.3\">60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.1.4.1\">55603</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.1.5.1\">0.04</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.6\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.7\">55774</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.1.8\">0.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.1\">1.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.2\">60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.2.3.1\">55824</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.2.4.1\">0.04</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.6\">56696</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.2.7\">0.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.5.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.3.1\">2.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.3.2\">80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.3.3\">58247</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.4.1\">0.04</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.3.5\">90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.5.3.6.1\">58236</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T1.1.5.3.7\">0.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.1.6.4.1.1\">n30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.3\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.4\">172173</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.6.4.5.1\">0.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.6\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.6.4.7.1\">160208</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.6.4.8\">0.41</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.1\">1.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.2\">90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.3\">173657</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.4\">0.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.5.6.1\">164237</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.5.7.1\">0.28</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.8.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.1\">2.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.3\">174568</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.4\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T1.1.8.6.4.1\">0.32</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.8.6.6.1\">166133</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.6.7\">0.54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.9.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.1.9.7.1.1\">n50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.3\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.4\">209343</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.5\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.6\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.9.7.7.1\">185793</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.9.7.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.9.7.8.1\">0.55</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.10.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.1\">1.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.3\">211591</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.4\">0.79</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.10.8.6.1\">189878</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.10.8.7.1\">0.41</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.11.9\">\n<td class=\"ltx_td 
ltx_align_center\" id=\"S5.T1.1.11.9.1\">2.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.11.9.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.11.9.3\">208311</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.11.9.4\">0.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.11.9.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.11.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.11.9.6.1\">195398</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.11.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.11.9.7.1\">0.71</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.12.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.1.12.10.1.1\">n100</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.3\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.4\">334719</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.5\">2.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.6\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.12.10.7.1\">293578</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.12.10.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.12.10.8.1\">0.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.13.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.13.11.1\">1.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.13.11.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.13.11.3\">340561</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.13.11.4\">2.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.13.11.5\">100</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S5.T1.1.13.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.13.11.6.1\">300079</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.13.11.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.13.11.7.1\">1.05</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.14.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.1\">2.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.3\">347708</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.4\">2.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.14.12.6.1\">308811</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.14.12.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.14.12.7.1\">1.02</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.15.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.1.15.13.1.1\">n200</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.3\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.4\">620097</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.5\">9.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.6\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.15.13.7.1\">521140</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.15.13.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.15.13.8.1\">2.38</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.16.14\">\n<td class=\"ltx_td 
ltx_align_center\" id=\"S5.T1.1.16.14.1\">1.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.16.14.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.16.14.3\">625069</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.16.14.4\">9.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.16.14.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.16.14.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.16.14.6.1\">529918</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.16.14.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.16.14.7.1\">2.53</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.17.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.1\">2.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.3\">649728</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.4\">9.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.17.15.6.1\">541565</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.17.15.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.17.15.7.1\">2.71</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.18.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T1.1.18.16.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.1.18.16.1.1\">n300</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.2\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.3\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.4\">768747</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.5\">19.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.6\">100</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.18.16.7.1\">588118</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.18.16.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.18.16.8.1\">3.73</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.19.17\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.1\">1.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.2\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.3\">787527</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.4\">19.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.5\">100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.19.17.6.1\">606548</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.19.17.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.19.17.7.1\">3.85</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.20.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.1\">2.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.2\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.3\">847588</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.4\">19.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.5\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.20.18.6.1\">626658</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.20.18.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.20.18.7.1\">4.21</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
106
+ "capture": "Table 1: Performance comparison for the fixed-outline cases of GSRC test problems."
107
+ },
108
+ "2": {
109
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance comparison for the GSRC test problems without fixed-outline constraints.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.3.4.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.4.1.1.1\">GSRC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.3.4.1.2\">Parquet-4.5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.3.4.1.3\">HAS</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.3.4.1.4\">FA-GSS</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.3.4\">CPU(s)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.3.5\">CPU(s)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.3.6\">CPU(s)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.5.1.1\"><span class=\"ltx_text\" id=\"S5.T2.3.5.1.1.1\">n10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.5.1.2\">1.0885</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.T2.3.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.5.1.3.1\">0.03</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.5.1.4\">1.0799</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.5.1.5\">0.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.5.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.5.1.6.1\">1.0688</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.5.1.7\">0.17</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.6.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.1\"><span class=\"ltx_text\" id=\"S5.T2.3.6.2.1.1\">n30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.2\">1.1040</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.6.2.3.1\">0.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.6.2.4.1\">1.0881</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.5\">0.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.6\">1.0959</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.6.2.7\">0.69</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.7.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.7.3.1\"><span class=\"ltx_text\" id=\"S5.T2.3.7.3.1.1\">n50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.7.3.2\">1.0871</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.7.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.7.3.3.1\">0.47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.7.3.4\">1.0797</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.7.3.5\">2.15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.T2.3.7.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.7.3.6.1\">1.0750</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.7.3.7\">1.29</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.8.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.1\"><span class=\"ltx_text\" id=\"S5.T2.3.8.4.1.1\">n100</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.2\">1.1034</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.8.4.3.1\">1.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.4\">1.1040</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.5\">7.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.8.4.6.1\">1.0648</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.8.4.7\">3.53</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.9.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.1\"><span class=\"ltx_text\" id=\"S5.T2.3.9.5.1.1\">n200</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.2\">1.1301</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.9.5.3.1\">6.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.4\">1.1628</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.5\">37.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.9.5.6.1\">1.0713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.9.5.7\">8.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.10.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.1\"><span 
class=\"ltx_text\" id=\"S5.T2.3.10.6.1.1\">n300</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.2\">1.1765</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.10.6.3.1\">12.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.4\">1.2054</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.5\">78.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.10.6.6.1\">1.0715</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.10.6.7\">15.13</td>\n</tr>\n</tbody>\n</table>\n</figure>",
110
+ "capture": "Table 2: Performance comparison for the GSRC test problems without fixed-outline constraints."
111
+ }
112
+ },
113
+ "image_paths": {},
114
+ "validation": true,
115
+ "references": [],
116
+ "url": "http://arxiv.org/html/2401.15317v1"
117
+ }
20240127/2401.15319v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240127/2401.15323v1.json ADDED
@@ -0,0 +1,290 @@
1
+ {
2
+ "title": "Music Auto-tagging with Robust Music Representation Learned via Domain Adversarial Training",
3
+ "abstract": "Music auto-tagging is crucial for enhancing music discovery and recommendation. Existing models in Music Information Retrieval (MIR) struggle with real-world noise such as environmental and speech sounds in multimedia content. This study proposes a method inspired by speech-related tasks to enhance music auto-tagging performance in noisy settings. The approach integrates Domain Adversarial Training (DAT) into the music domain, enabling robust music representations that withstand noise. Unlike previous research, this approach involves an additional pretraining phase for the domain classifier, to avoid performance degradation in the subsequent phase. Adding various synthesized noisy music data improves the model\u2019s generalization across different noise levels. The proposed architecture demonstrates enhanced performance in music auto-tagging by effectively utilizing unlabeled noisy music data. Additional experiments with supplementary unlabeled data further improves the model\u2019s performance, underscoring its robust generalization capabilities and broad applicability.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Music auto-tagging is the automated process of attaching relevant semantic labels such as genre, mood, or instrument to musical tracks, usually enabled by machine learning algorithms. This function is crucial for effective music information retrieval, personalization, and recommendation systems, predominantly in music-streaming platforms such as Spotify. These services rely heavily on clean, pure music tracks and utilize comprehensive metadata for each track to create a tailored and enriched user experience. This metadata, originating from clean musical sources, allows for precise alignment with individual user preferences.\n###figure_1### Furthermore, music auto-tagging is not only crucial for enhancing search capabilities and user accessibility in music streaming services, but also vital for catering to the specific needs of users who demand more personalized recommendations and detailed search options for music content in video-streaming platforms like YouTube [1 ###reference_b1###]. Through the use of meaningful semantic tags, users can more effectively search for specific genres, artists, and moods, thereby enhancing the overall user accessibility and discoverability of music contents. However, the challenge is compounded on video-streaming services where music tracks are frequently mixed with real-world noises like crowd sounds and applause. This complicates the task for existing auto-tagging algorithms, which are predominantly trained on clean music tracks. Given the limited diversity of current tags and the sheer volume of diverse music-related content uploaded daily, there is a compelling need to advance auto-tagging techniques. 
Such improvements will not only make searching more efficient but also significantly contribute to delivering a more personalized and enriched user experience across various platforms.\nAs existing models have difficulty maintaining consistent feature extraction from both clean and noisy versions of the same track, we propose creating a dataset that includes both clean and noisy versions of identical music tracks. To enhance the robustness of music representation, we employ the technique of Domain Adversarial Training (DAT) [2 ###reference_b2###], which was effective in improving the noise robustness of speech representations when performing downstream tasks such as Automatic Speech Recognition (ASR) or Speaker Identification (SID) [3 ###reference_b3###]. This method is designed to condition the feature extractor to be indifferent to whether a music track is clean or noisy, effectively diminishing the distinctions between the clean and noise domains. It accomplishes this by closely aligning the embeddings of identical music tracks across both clean and noisy versions. As a result, this strategy allows the model to perform downstream tasks on noisy input representations as effectively as it would on clean representations. This refined framework is designed to facilitate model training for the recognition of musical elements within noisy environments, thereby improving overall model performance across a diverse range of auditory conditions.\n\n###figure_2###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "RELATED WORKS",
15
+ "text": "In the realm of music auto-tagging, Convolutional Neural Network (CNN)-based models [4 ###reference_b4###] have demonstrated noteworthy performance, as evidenced by multiple studies [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. [5 ###reference_b5###] employed a hybrid model, integrating Recurrent Neural Networks (RNNs) with CNNs to more effectively capture temporal patterns in the data. Another study by [6 ###reference_b6###] deployed fully convolutional neural networks comprising multiple layers but without fully connected layers, thereby reducing the model\u2019s parameter count. While the majority of research in this area traditionally utilizes mel-spectrogram inputs, [7 ###reference_b7###] diverged by using sample-level input without the mel-spectrogram conversion. Adding another layer of complexity, [9 ###reference_b9###] incorporated a Squeeze-and-Excitation (SE) block into their sample-level input model, enhancing the extraction of representational features.\nNevertheless, given that music representations gain interpretability and significance when modeled sequentially, [10 ###reference_b10###] employed the Transformer architecture [11 ###reference_b11###] as the backbone model for their study. 
Further, in subsequent work, [12 ###reference_b12###] adopted a semi-supervised approach in conjunction with the Transformer, highlighting the insufficiency of available data specifically for music auto-tagging.\nFurthermore, [13 ###reference_b13###], [14 ###reference_b14###], and [15 ###reference_b15###] collectively advance the field of music AI: [13 ###reference_b13###] explores efficient pre-training strategies for audio understanding, [14 ###reference_b14###] introduces a novel self-supervised model (MERT) for nuanced music audio analysis, and [15 ###reference_b15###] establishes MARBLE, a comprehensive benchmark for evaluating music information retrieval systems.\nThe Domain Adversarial Training (DAT) [16 ###reference_b16###] approach has been effective in making speech representation more robust, as shown in previous work [3 ###reference_b3###]. In this setup, clean audio is used as the source domain, while different types of distorted audio make up the target domain. By applying the reversed gradient of the domain classifier\u2019s loss, the feature extractor can be tuned to lessen the difference caused by these distortions. This adjustment allows for better performance in downstream tasks using the label predictor. In this paper, we use the DAT approach from earlier research [3 ###reference_b3###], but with changes in domain settings and training steps, which we will discuss in subsequent sections."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "METHOD",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Architecture",
27
+ "text": "In the proposed architecture, the model is composed of three primary components: Feature Extractor, Domain Classifier, and Label Predictor.\nThe Feature Extractor (FE) is first trained to extract general music embedding from the input audio, then finetuned to blur the distinction between clean and noisy input. In this paper, we employ CLMR [17 ###reference_b17###], with SampleCNN [18 ###reference_b18###] serving as the encoder, whose backbone is SimCLR [19 ###reference_b19###]. For FE, we exclusively utilize the encoder component of CLMR. The output embedding of the Feature Extractor then subsequently serves as the input for both the Domain Classifier and the Label Predictor.\nThe Domain Classifier (DC) is tasked with determining the origin of the embedding\u2014whether it is derived from a clean or noisy audio source. The structure of the DC is based on the original DAT research [2 ###reference_b2###]. This module outputs a scalar that classifies whether the embedding originated from the clean source input or the noisy target input, which comprises simple fully-connected layers, accompanied by activation functions and batch normalization.\nThe Label Predictor (LP) focuses on the downstream task of music auto-tagging based on the provided embedding. Among the models in the proposed architecture, the LP stands out with its compact structure and minimal number of layers and parameters. This module takes the output from the FE as input and sequentially processes it through two fully-connected layers, with a ReLU activation function in between.\n\n###figure_3###"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Training Process",
33
+ "text": "Our proposed training methodology incorporates elements from previous work [3 ###reference_b3###], but introduces an additional pretraining step for the DC, resulting in a three-step process in total.\nThe initial stage involves pretraining the FE. This phase allows the FE to gain a general understanding of both music representations and real-world noises, employing a contrastive loss function [17 ###reference_b17###, 19 ###reference_b19###] for learning. In this step, both the encoder and the projector are trained, although only the encoder is used in subsequent steps. For an input audio , the FE extracts the embedding of the audio as the result of , which is a 512-dimensional vector.\nThe second stage focuses on the pretraining of the DC, with the parameters of the FE being frozen. This separate training step for DC deviates from the methods outlined in [2 ###reference_b2###] and [3 ###reference_b3###]. We opted for this separation of training steps after observing a decline in performance when the Domain Classifier (DC) was trained alongside other components. Given an input embedding vector , the DC performs binary classification to determine whether the input originates from clean or noisy musical audio.\nThe final stage is dedicated to the finetuning of the FE and the training of the LP. At this stage, the parameters of the DC are frozen, based on the assessment that it has achieved adequate binary classification performance. In contrast, LP is trained from scratch. For an input embedding vector , the LP outputs a 50-dimensional vector, which corresponds to the classification of the 50 multi-tags in the MTAT dataset [20 ###reference_b20###].\nThe combined loss of the LP and DC informs the fine-tuning of the FE, yielding the total loss function as described in the equation. Note that the gradient of the DC is negated which forces the FE to blur the distinction between clean and noisy domain. 
Also, the label-prediction loss is not applicable for the target domain, as tag labels for the target domain are assumed to be unavailable."
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "DATASET",
39
+ "text": "In our experiments, we used the MTG-Jamendo dataset [23 ###reference_b23###] for the pretraining of the FE and used the MagnaTagATune (MTAT) dataset [20 ###reference_b20###] for music auto-tagging tasks. Regarding the size of the full Jamendo dataset, we selected a subset comprising audio files with both genre and mood/theme tags. For the real-world noise dataset, Audioset [22 ###reference_b22###] is used after filtering to exclude any data containing music-related tags, such as those denoting musical notes, for example, \u2018bell\u2019 or \u2018ding\u2019. Additionally, the Musan dataset [21 ###reference_b21###] is used as an extra dataset which is provided in music, speech, and noise splits. We employed the music and noise splits for training and testing phases, respectively."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Data Configuration",
45
+ "text": "For the pretraining of the FE, we utilized the Jamendo and Audioset datasets. The audio samples from Jamendo are subjected to random augmentations, such as pitch shifting and the application of filters, following the methodology outlined by [17 ###reference_b17###]. Additionally, to improve the model\u2019s generalization capabilities with respect to noisy musical audio, we synthesized random samples from Audioset with the Jamendo audio samples.\nRegarding the subsequent steps in our training process, we employed the MTAT and Audioset datasets under diverse experimental configurations. In the baseline setting, we assumed that only clean audio samples with corresponding tags are accessible, which aligns with the existing frameworks for auto-tagging. For the oracle setup, we assumed the availability of tags for both clean and noisy audio samples, a condition that is not feasible in real-world scenarios. In the proposed experimental setting, we made use of clean audio samples with tags for the source domain, and synthesized noisy samples without tags for the target domain (proposed (a)). Lastly, to demonstrate the capacity for further model training and generalization with noisy musical samples, we also utilized additional noisy samples synthesized from the Musan music split and Audioset, excluding any associated tags (proposed (b)). This final setting serves to underscore the efficacy of the proposed architecture in accommodating real-world audio conditions.\nFor the validation and test phase, we did not split the MTAT dataset but fully and repeatedly used the validation and test dataset in five different conditions. First we used music audio samples without any noise added for clean source domain. From second to the last condition, we added noise but in different sound-to-noise ratio (SNR) conditions from -5 to 10.\n\n###figure_4### \n###figure_5###"
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "EXPERIMENT",
51
+ "text": "In the experiments, the Adam optimizer [24 ###reference_b24###] is used across all training phases. Specifically, the learning rate is set to 3e-4 for pretraining the FE, and 1e-4 for pretraining the DC, finetuning the FE, and training the LP. The length of each audio input is fixed to 59,049 samples, in alignment with previous work [17 ###reference_b17###]. Furthermore, a uniform sample rate of 22,050 Hz is applied across all training steps, with resampling where necessary.\nTo simulate various real-world noisy conditions, we synthesized noisy music samples by combining each music sample with one, two, or four different noise samples. During each data retrieval from the dataset, each music sample is normalized and randomly mixed with noise samples. The SNR for these mixtures is randomly selected from a predefined range of [-10, 10]. For the validation and test phases, target samples are synthesized at predefined SNRs as described in Figure 3.\nFor the evaluation metrics, we employed the area under the receiver operating characteristic curve (AUC) and average precision (AP). The test results for the source domain indicate that the baseline performance was either comparable to or slightly better than our proposed approach (Tables 1, 2). This suggests that incorporating additional noisy music samples could adversely affect the existing performance. However, as the number of noise samples increases, our proposed method demonstrates enhanced generalization and robustness (Table 3). Notably, the proposed (b) configuration consistently delivered the best performance, underscoring the idea that even a modest amount of extra, noisy, and unlabeled data can improve the model\u2019s performance. Although the performance differences between proposed (a) and proposed (b) may appear subtle, it is important to note that the quantity of additional data samples per epoch in proposed (b) is approximately 17 times smaller than that of the unlabeled data used in proposed (a).\nAdditionally, tests conducted on the Musan noise dataset corroborate the model\u2019s consistent performance across different noise conditions. These results further support the notion that the FE\u2019s embeddings are robust when exposed to a variety of noise types, thereby enhancing the model\u2019s overall generalization capabilities. Lastly, throughout all the evaluations, the model demonstrated enhanced performance even under challenging conditions involving various signal-to-noise ratios, affirming its robustness and utility for music auto-tagging in noisy scenarios."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "CONCLUSION",
57
+ "text": "In this work, we introduced a novel framework for improving music auto-tagging performance by leveraging unlabeled noisy music data. We employed Domain Adversarial Training (DAT) to enhance the robustness of feature extraction, making it capable of handling both clean and noisy audio inputs effectively. Our experimental setup included diverse datasets such as MTG-Jamendo, MagnaTagATune (MTAT), Audioset, and Musan, and incorporated various real-world noise conditions to simulate realistic scenarios.\nOur evaluations, using metrics such as AUC and AP, indicate promising results. While the incorporation of noisy data had a nuanced effect on performance, we found that as the variety of noise increased, the model\u2019s robustness improved, suggesting that our approach has strong generalization capabilities. In particular, the configuration using extra, unlabeled noisy data showed performance gains, even when the additional data volume was relatively small.\nAdditional tests on the Musan noise dataset corroborated the model\u2019s consistency and robustness across various noise conditions. These results affirm that the feature extractor\u2019s embeddings are resilient to noise, thereby extending the applicability and generalizability of our model for music auto-tagging in noisy environments."
58
+ },
59
+ {
60
+ "section_id": "7",
61
+ "parent_section_id": null,
62
+ "section_name": "Acknowledgement",
63
+ "text": "This work was partly supported by Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2022 (No.R2022020066, 1/2) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00219429, 1/2)."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {
68
+ "1": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T1.1\"><span class=\"ltx_text\" id=\"S4.T1.1.1\" style=\"font-size:90%;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_landscape\" height=\"187\" id=\"S4.T1.1.1.g1\" src=\"extracted/5251467/noise1,2_fixed2.png\" width=\"707\"/></span></p>\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.1.1\">Table 1</span>: </span>The test AUC and AP metrics for the baseline, oracle, and proposed configurations (a) and (b), evaluated with the inclusion of either 1 or 2 noises in the synthesized noisy music data.</figcaption>\n</figure>",
70
+ "capture": "Table 1: The test AUC and AP metrics for the baseline, oracle, and proposed configurations (a) and (b), evaluated with the inclusion of either 1 or 2 noises in the synthesized noisy music data."
71
+ },
72
+ "2": {
73
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<p class=\"ltx_p ltx_align_center\" id=\"S4.T2.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1\" style=\"font-size:90%;\"><img alt=\"[Uncaptioned image]\" class=\"ltx_graphics ltx_img_landscape\" height=\"186\" id=\"S4.T2.1.1.g1\" src=\"extracted/5251467/noise4_fixed2.png\" width=\"354\"/></span></p>\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.1\">Table 2</span>: </span>The test AUC and AP metrics with the inclusion of 4 noises.</figcaption>\n</figure>",
74
+ "capture": "Table 2: The test AUC and AP metrics with the inclusion of 4 noises."
75
+ }
76
+ },
77
+ "image_paths": {
78
+ "1": {
79
+ "figure_path": "2401.15323v1_figure_1.png",
80
+ "caption": "Fig. 1: Feature extraction from clean and noisy music tracks in robust music representation learning. The extractor aims to produce closely positioned embeddings for the same track, regardless of audio quality.",
81
+ "url": "http://arxiv.org/html/2401.15323v1/extracted/5251467/FE_fig1_revised.png"
82
+ },
83
+ "2": {
84
+ "figure_path": "2401.15323v1_figure_2.png",
85
+ "caption": "Fig. 2: The proposed architecture and training process. Overall structure is composed of Feature Extractor (FE, pink), Domain Classifier (DC, yellow), and Label Predictor (LP, green). The training process is set to 3 steps for 1) pretraining FE, 2) pretraining DC, and 3) finetuning FE and training LP. In contrast, for both the baseline and oracle configurations, only the FE and LP are utilized, leading to a simplified two-step training process.",
86
+ "url": "http://arxiv.org/html/2401.15323v1/extracted/5251467/DAT_process_final2.png"
87
+ },
88
+ "3": {
89
+ "figure_path": "2401.15323v1_figure_3.png",
90
+ "caption": "Fig. 3: Proposed dataset configuration: The music dataset (red) utilizes MTAT [20] and the music split from Musan [21]. The real-world noise dataset (yellow) incorporates Audioset [22] and the noise split from Musan. Synthesized samples combining music and noise are designated as target domain data. During training, source (src) and target (trg) domain samples do not overlap. Note that the Musan noise dataset is exclusively employed for creating the test set, while it is not used in the formation of the validation set.",
91
+ "url": "http://arxiv.org/html/2401.15323v1/extracted/5251467/dataset_final_6.png"
92
+ }
93
+ },
94
+ "validation": true,
95
+ "references": [
96
+ {
97
+ "1": {
98
+ "title": "\u201cTowards a new interface for music listening: A user experience study on youtube,\u201d",
99
+ "author": "Ahyeon Choi, Eunsik Shin, Haesun Joung, Joongseek Lee, and Kyogu Lee,",
100
+ "venue": "arXiv preprint arXiv:2307.14718, 2023.",
101
+ "url": null
102
+ }
103
+ },
104
+ {
105
+ "2": {
106
+ "title": "\u201cDomain-adversarial training of neural networks,\u201d",
107
+ "author": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor Lempitsky,",
108
+ "venue": "The journal of machine learning research, vol. 17, no. 1, pp. 2096\u20132030, 2016.",
109
+ "url": null
110
+ }
111
+ },
112
+ {
113
+ "3": {
114
+ "title": "\u201cImproving distortion robustness of self-supervised speech processing tasks with domain adaptation,\u201d",
115
+ "author": "Kuan Po Huang, Yu-Kuan Fu, Yu Zhang, and Hung-yi Lee,",
116
+ "venue": "arXiv preprint arXiv:2203.16104, 2022.",
117
+ "url": null
118
+ }
119
+ },
120
+ {
121
+ "4": {
122
+ "title": "\u201cImagenet classification with deep convolutional neural networks,\u201d",
123
+ "author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton,",
124
+ "venue": "Advances in neural information processing systems, vol. 25, 2012.",
125
+ "url": null
126
+ }
127
+ },
128
+ {
129
+ "5": {
130
+ "title": "\u201cConvolutional recurrent neural networks for music classification,\u201d",
131
+ "author": "Keunwoo Choi, Gy\u00f6rgy Fazekas, Mark Sandler, and Kyunghyun Cho,",
132
+ "venue": "in 2017 IEEE International conference on acoustics, speech and signal processing (ICASSP). IEEE, 2017, pp. 2392\u20132396.",
133
+ "url": null
134
+ }
135
+ },
136
+ {
137
+ "6": {
138
+ "title": "\u201cAutomatic tagging using deep convolutional neural networks,\u201d",
139
+ "author": "Keunwoo Choi, George Fazekas, and Mark Sandler,",
140
+ "venue": "arXiv preprint arXiv:1606.00298, 2016.",
141
+ "url": null
142
+ }
143
+ },
144
+ {
145
+ "7": {
146
+ "title": "\u201cSample-level deep convolutional neural networks for music auto-tagging using raw waveforms,\u201d",
147
+ "author": "Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, and Juhan Nam,",
148
+ "venue": "arXiv preprint arXiv:1703.01789, 2017.",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "8": {
154
+ "title": "\u201cEnd-to-end learning for music audio tagging at scale,\u201d",
155
+ "author": "Jordi Pons, Oriol Nieto, Matthew Prockup, Erik Schmidt, Andreas Ehmann, and Xavier Serra,",
156
+ "venue": "arXiv preprint arXiv:1711.02520, 2017.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "9": {
162
+ "title": "\u201cSample-level cnn architectures for music auto-tagging using raw waveforms,\u201d",
163
+ "author": "Taejun Kim, Jongpil Lee, and Juhan Nam,",
164
+ "venue": "in 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2018, pp. 366\u2013370.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "10": {
170
+ "title": "\u201cToward interpretable music tagging with self-attention,\u201d",
171
+ "author": "Minz Won, Sanghyuk Chun, and Xavier Serra,",
172
+ "venue": "arXiv preprint arXiv:1906.04972, 2019.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "11": {
178
+ "title": "\u201cAttention is all you need,\u201d",
179
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin,",
180
+ "venue": "Advances in neural information processing systems, vol. 30, 2017.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "12": {
186
+ "title": "\u201cSemi-supervised music tagging transformer,\u201d",
187
+ "author": "Minz Won, Keunwoo Choi, and Xavier Serra,",
188
+ "venue": "arXiv preprint arXiv:2111.13457, 2021.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "13": {
194
+ "title": "\u201cSupervised and unsupervised learning of audio representations for music understanding,\u201d",
195
+ "author": "Matthew C McCallum, Filip Korzeniowski, Sergio Oramas, Fabien Gouyon, and Andreas F Ehmann,",
196
+ "venue": "arXiv preprint arXiv:2210.03799, 2022.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "14": {
202
+ "title": "\u201cMert: Acoustic music understanding model with large-scale self-supervised training,\u201d",
203
+ "author": "Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin, Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, et al.,",
204
+ "venue": "arXiv preprint arXiv:2306.00107, 2023.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "15": {
210
+ "title": "\u201cMarble: Music audio representation benchmark for universal evaluation,\u201d",
211
+ "author": "Ruibin Yuan, Yinghao Ma, Yizhi Li, Ge Zhang, Xingran Chen, Hanzhi Yin, Le Zhuo, Yiqi Liu, Jiawen Huang, Zeyue Tian, et al.,",
212
+ "venue": "arXiv preprint arXiv:2306.10548, 2023.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "16": {
218
+ "title": "\u201cDomain-adversarial neural networks,\u201d",
219
+ "author": "Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, and Mario Marchand,",
220
+ "venue": "arXiv preprint arXiv:1412.4446, 2014.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "17": {
226
+ "title": "\u201cContrastive learning of musical representations,\u201d",
227
+ "author": "Janne Spijkervet and John Ashley Burgoyne,",
228
+ "venue": "arXiv preprint arXiv:2103.09410, 2021.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "18": {
234
+ "title": "\u201cSamplecnn: End-to-end deep convolutional neural networks using very small filters for music classification,\u201d",
235
+ "author": "Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, and Juhan Nam,",
236
+ "venue": "Applied Sciences, vol. 8, no. 1, pp. 150, 2018.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "19": {
242
+ "title": "\u201cA simple framework for contrastive learning of visual representations,\u201d",
243
+ "author": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton,",
244
+ "venue": "in International conference on machine learning. PMLR, 2020, pp. 1597\u20131607.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "20": {
250
+ "title": "\u201cEvaluation of algorithms using games: The case of music tagging.,\u201d",
251
+ "author": "Edith Law, Kris West, Michael I Mandel, Mert Bay, and J Stephen Downie,",
252
+ "venue": "in ISMIR. Citeseer, 2009, pp. 387\u2013392.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "21": {
258
+ "title": "\u201cMusan: A music, speech, and noise corpus,\u201d",
259
+ "author": "David Snyder, Guoguo Chen, and Daniel Povey,",
260
+ "venue": "arXiv preprint arXiv:1510.08484, 2015.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "22": {
266
+ "title": "\u201cAudio set: An ontology and human-labeled dataset for audio events,\u201d",
267
+ "author": "Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter,",
268
+ "venue": "in 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2017, pp. 776\u2013780.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "23": {
274
+ "title": "\u201cThe mtg-jamendo dataset for automatic music tagging,\u201d",
275
+ "author": "Dmitry Bogdanov, Minz Won, Philip Tovstogan, Alastair Porter, and Xavier Serra,",
276
+ "venue": "ICML, 2019.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "24": {
282
+ "title": "\u201cAdam: A method for stochastic optimization,\u201d",
283
+ "author": "Diederik P Kingma and Jimmy Ba,",
284
+ "venue": "arXiv preprint arXiv:1412.6980, 2014.",
285
+ "url": null
286
+ }
287
+ }
288
+ ],
289
+ "url": "http://arxiv.org/html/2401.15323v1"
290
+ }